It is argued that language use by humans is intentional and cooperative [Tomasello, 2008], which conflicts with the classic HCI model in which people use computers as tools and the job of the interface designer is to make the consequences of action clear and useful [Sharp et al., 2007].
One can design a voice interface to mimic links, buttons and pull-down menus [Balentine, 2007], but a really natural language interface would do something else.
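To make the contrast concrete, here is a minimal sketch of the menu-mimicking approach: a system-initiative voice dialogue in which the caller's words are matched against a fixed option list, exactly as a click selects a menu item. The prompts, options and function names are invented for illustration, not an actual toolkit API.

```python
# A hypothetical system-initiative voice menu: the machine holds the
# initiative and the caller's utterance is matched against fixed options,
# mimicking a GUI pull-down menu rather than open conversation.

MENU = {
    "one": "check balance",
    "two": "transfer funds",
    "three": "speak to an agent",
}

def prompt() -> str:
    return "Say one for balance, two for transfers, or three for an agent."

def handle(utterance: str) -> str:
    # Input is matched against the option list; anything else triggers
    # a reprompt, just as a stray click selects nothing in a menu.
    choice = MENU.get(utterance.strip().lower())
    if choice is None:
        return "Sorry, I didn't catch that. " + prompt()
    return f"Okay: {choice}."

print(prompt())
print(handle("two"))        # -> Okay: transfer funds.
print(handle("er, what?"))  # -> reprompt
```

Such a design is usable precisely because it does not pretend to converse: the user learns the option list and navigates it, as with any other widget.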
The argument [Wallis, "The Intentional Interface", 2013] is based on Daniel Dennett's observation that we humans take different stances when trying to understand the world around us.
HCI generally assumes the design stance (although touch-screen interfaces exploit our folk-physics understanding of "flicking" and "stretching"), but people produce utterances that presume the recipient is taking an intentional stance. Really natural language interfaces would not only handle this, but also produce the same kind of utterances. The three stances are as follows:
- The physical stance is where we understand causal relationships: folk physics enables us to predict the future behaviour of balls on a snooker table.
- The design stance enables us to deal with complex artefacts such as alarm clocks: we know what they are designed to do and learn how to use them.
- The intentional stance applies to more complex systems, in particular other people, which we assume will act in accordance with their beliefs to achieve goals. Seeing two children tugging at a teddy bear, we are extremely likely to assume they both want it, as the sketch below illustrates.
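The following is a hypothetical sketch of that kind of intentional-stance reasoning: we attribute a desire to an agent on the basis of observed behaviour, then predict its future actions as rational moves toward that goal. The Agent class and the attribution rule are invented for illustration.

```python
# A toy model of the intentional stance: attribute beliefs and desires
# to agents, then predict behaviour as rational action toward a goal.
# All names and rules here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    desires: set

def observe_tugging(a: Agent, b: Agent, obj: str) -> None:
    # Seeing two children tugging at a teddy bear, we attribute the
    # same desire to both: each wants the object.
    a.desires.add(obj)
    b.desires.add(obj)

def predict(agent: Agent, obj: str) -> str:
    # Under the intentional stance, an agent acts to satisfy its desires.
    if obj in agent.desires:
        return f"{agent.name} will keep pulling at the {obj}"
    return f"{agent.name} will lose interest in the {obj}"

alice, ben = Agent("Alice", set()), Agent("Ben", set())
observe_tugging(alice, ben, "teddy bear")
print(predict(alice, "teddy bear"))  # Alice will keep pulling at the teddy bear
```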
Deep Learning is considered by many to be the answer to everything. However, all machine learning techniques face the classic problem of distinguishing the accidental from the essential. Until training corpora include data on intent, these systems will be learning the wrong thing.
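As a hedged illustration of what "data on intent" might add, the sketch below shows the same surface form annotated with two different intentions. The utterances and labels are invented, but the point is that a learner trained on form alone cannot separate the essential (the speaker's goal) from the accidental (the words that happened to carry it).

```python
# The same surface form can serve different intentions: "can you open
# the window" may be a request for action or a genuine question about
# ability. The annotation scheme here is hypothetical.

surface_only = [
    "can you open the window",
    "can you open the window",
]  # indistinguishable to a learner that sees only the words

intent_annotated = [
    {"utterance": "can you open the window", "intent": "request(open, window)"},
    {"utterance": "can you open the window", "intent": "question(ability, open_window)"},
]

for example in intent_annotated:
    print(example["utterance"], "->", example["intent"])
```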