From a computer scientist's perspective, the proposed approach is fairly unexciting. Once the speech-recognition software has turned the sound into words, and once techniques from Natural Language Processing have been applied to extract the meaning of what was said, the system still needs to decide what to say back. If the user asked a question or issued a command, the response may seem obvious, but people have many other kinds of conversation. The Alexa Prize focuses on casual conversation, which is certainly challenging (if not particularly useful). Deciding what to say next turns out to look a lot like robotics: timely decisions in a dynamic environment with limited information.
We use an approach based on the well-established BDI (Belief-Desire-Intention) architecture for situated action.
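To make the idea concrete, the following is a minimal, purely illustrative sketch of a BDI-style decision loop for a dialog agent, not ProseCo's actual implementation: beliefs are revised from the parsed user utterance, deliberation commits the agent to intentions, and an intention is executed as the next utterance. All class and method names here are hypothetical.

```python
# Illustrative BDI-style dialog loop (hypothetical names, placeholder logic).
from dataclasses import dataclass, field

@dataclass
class DialogAgent:
    beliefs: dict = field(default_factory=dict)
    intentions: list = field(default_factory=list)

    def update_beliefs(self, dialog_act: str) -> None:
        # Belief revision: record what the user just did (placeholder).
        self.beliefs["last_user_act"] = dialog_act

    def deliberate(self) -> None:
        # Commit to intentions given the current beliefs (placeholder rule).
        if self.beliefs.get("last_user_act") == "greeting":
            self.intentions.append("return_greeting")

    def act(self) -> str:
        # Execute the first pending intention as the system's next utterance.
        if self.intentions:
            intention = self.intentions.pop(0)
            if intention == "return_greeting":
                return "Hello! What would you like to talk about?"
        return "Tell me more."

agent = DialogAgent()
agent.update_beliefs("greeting")
agent.deliberate()
print(agent.act())
```

The point of the separation is the same one BDI offers robotics: belief update, deliberation, and action can each be revised on their own timescale as the conversation unfolds.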
From a linguist's perspective, the proposed solution is equally well established. We use Conversation Analysis to "pull apart" human-human dialogs on the same topic, collected via Wizard-of-Oz style trials with real users.
ProseCo's unique contribution has been to synthesise a consistent approach from
these two seemingly incompatible disciplines.