Author: Tom Williams
Venue: Tufts University PhD Theses
Publication Year: 2017
Abstract: As intelligent agents become integrated into our society, it becomes increasingly important for them to be capable of engaging in natural, human-like human-agent interactions. A key aspect of such interactions is the ability to engage in pragmatically appropriate natural language dialogues. That is, intelligent agents must be able to understand and generate natural language expressions in a way that is sensitive to their current environmental context, social context, and dialogue state. This problem is especially difficult in the uncertain and open worlds common to typical human-robot interaction scenarios, in which a robot cannot be expected to have perfect or complete knowledge of its environment. What is more, many of the approaches that have been developed to facilitate human-robot dialogues are tailored to specific knowledge representation schemes or particular domains of information, which prevents them from being generally applicable across robot architectures or application domains. To address these concerns, I have developed a set of algorithms for understanding and generating natural language in uncertain and open worlds, and a set of general frameworks and architectural mechanisms that allow these algorithms to be agnostic to representational format and application domain whenever possible. The algorithms and architectural mechanisms presented in this dissertation represent an interdisciplinary approach to artificial intelligence, in which cognitive science is drawn upon to provide theoretical frameworks (e.g., Speech Act Theory, the Givenness Hierarchy) and cognitive models (e.g., the Incremental Algorithm), and in which computer science is drawn upon to provide computational frameworks (e.g., Multi-Agent Systems, Integrated Robot Architectures) and techniques (Dempster-Shafer Theory, logical inference, search). In this dissertation, I demonstrate how these algorithms and architectural mechanisms can be integrated into a single natural language processing pipeline within an integrated robot architecture. Furthermore, I show how this integrated system, when implemented on robot hardware, extends the state of the art in domains such as natural-language-enabled wheelchairs.
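To give a concrete sense of one of the cognitive models the abstract cites, the following is a minimal sketch of the classic Incremental Algorithm for referring expression generation (Dale and Reiter, 1995): attributes are considered in a fixed preference order, and an attribute-value pair is added to the description whenever it rules out at least one remaining distractor. The scene, attribute names, and knowledge-base format below are illustrative assumptions for this sketch, not the dissertation's actual representations or implementation.

```python
# Minimal sketch of the Incremental Algorithm (Dale & Reiter, 1995) for
# referring expression generation. Objects, attributes, and the knowledge-base
# format are hypothetical examples, not the dissertation's own representations.

def incremental_algorithm(target, distractors, preferred_attributes, kb):
    """Select attribute-value pairs that distinguish `target` from `distractors`.

    kb maps each object to a dict of attribute -> value;
    preferred_attributes lists attributes in order of preference.
    Returns a list of (attribute, value) pairs, or None if the available
    attributes cannot fully distinguish the target.
    """
    description = []
    remaining = set(distractors)

    for attribute in preferred_attributes:
        value = kb[target].get(attribute)
        if value is None:
            continue
        # Distractors that would be ruled out by mentioning this attribute.
        ruled_out = {d for d in remaining if kb[d].get(attribute) != value}
        if ruled_out:
            description.append((attribute, value))
            remaining -= ruled_out
        if not remaining:
            return description

    return description if not remaining else None


if __name__ == "__main__":
    # Hypothetical scene: three mugs on a table.
    kb = {
        "mug1": {"type": "mug", "color": "red", "size": "large"},
        "mug2": {"type": "mug", "color": "blue", "size": "large"},
        "mug3": {"type": "mug", "color": "red", "size": "small"},
    }
    # Prints [('color', 'red'), ('size', 'large')]: enough to single out mug1.
    # (The full Dale & Reiter algorithm also always includes the type
    # attribute so a head noun like "mug" can be realized.)
    print(incremental_algorithm("mug1", ["mug2", "mug3"],
                                ["type", "color", "size"], kb))
```

This closed-world sketch assumes every attribute value is known with certainty; the dissertation's contribution lies precisely in handling the uncertain and open worlds where such assumptions fail, for example by drawing on Dempster-Shafer Theory rather than crisp attribute lookups.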