Authors

Kevin Spevak and Zhao Han and Tom Williams and Neil Dantam

Venue

IEEE/RSJ International Conference on Intelligent Robots and Systems

Publication Year

2022

Abstract

Robots that use natural language in collaborative tasks must refer to objects in their environment. Recent work has shown the utility of the linguistic theory of the Givenness Hierarchy (GH) in generating appropriate referring forms. But before referring expression generation (REG), collaborative robots must determine the content and structure of a sequence of utterances, a task known as document planning in the natural language generation (NLG) community. This problem presents additional challenges for robots in situated contexts, where described objects change both physically and in the minds of their interlocutors. In this work, we consider how robots can “think ahead” about the objects they must refer to and how to refer to them, sequencing object references to form a coherent, easy-to-follow chain. Specifically, we leverage GH to enable robots to plan their utterances in a way that keeps objects at a high cognitive status, which enables the use of concise, anaphoric referring forms. We encode these linguistic insights as a mixed integer program (MIP) within a planning context, formulating constraints to concisely and efficiently capture GH-theoretic cognitive properties. We demonstrate that this GH-informed planner generates sequences of utterances with high inter-sentential coherence, which we argue should enable substantially more efficient and natural human-robot dialogue.
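
To make the MIP idea concrete, below is a minimal, hypothetical sketch (not the authors' formulation) of how utterance sequencing could be posed as a mixed integer program: binary variables assign utterances to positions, and auxiliary binaries reward orderings in which consecutive utterances mention the same object, a rough proxy for keeping that object at a high cognitive status so an anaphoric form ("it") can be used. The utterance and object names are illustrative, and the solver here is PuLP with its bundled CBC backend.

```
# Hypothetical sketch: order a fixed set of utterances so that adjacent
# utterances share referents, creating opportunities for anaphora.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

# Toy input: which objects each candidate utterance mentions (illustrative names).
mentions = {
    "u1": {"mug"},
    "u2": {"mug", "tray"},
    "u3": {"tray", "box"},
    "u4": {"box"},
}
utterances = list(mentions)
objects = sorted(set().union(*mentions.values()))
positions = range(len(utterances))

prob = LpProblem("gh_document_planning_sketch", LpMaximize)

# x[u][p] = 1 iff utterance u is placed at position p of the plan.
x = {u: {p: LpVariable(f"x_{u}_{p}", cat=LpBinary) for p in positions}
     for u in utterances}
# y[o][p] = 1 iff object o is mentioned at positions p-1 and p, i.e., it stays
# salient across the sentence boundary and could be referred to anaphorically.
y = {o: {p: LpVariable(f"y_{o}_{p}", cat=LpBinary) for p in positions if p > 0}
     for o in objects}

# Each utterance occupies exactly one position, and each position one utterance.
for u in utterances:
    prob += lpSum(x[u][p] for p in positions) == 1
for p in positions:
    prob += lpSum(x[u][p] for u in utterances) == 1

# y[o][p] may be 1 only if some utterance mentioning o sits at p and at p-1.
for o in objects:
    for p in y[o]:
        prob += y[o][p] <= lpSum(x[u][p] for u in utterances if o in mentions[u])
        prob += y[o][p] <= lpSum(x[u][p - 1] for u in utterances if o in mentions[u])

# Objective: maximize anaphora opportunities (inter-sentential coherence).
prob += lpSum(y[o][p] for o in y for p in y[o])

prob.solve()
plan = [next(u for u in utterances if value(x[u][p]) > 0.5) for p in positions]
print("utterance order:", plan, "| coherence score:", int(value(prob.objective)))
```

On this toy input the solver recovers the chain u1, u2, u3, u4, in which every adjacent pair shares a referent. The paper's actual formulation is richer: it encodes GH-theoretic cognitive statuses (in focus, activated, familiar, and so on) as constraints within a planning context rather than the single "mentioned in the previous utterance" proxy used here.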