Authors

Tom Williams, Matthew Bussing, Sebastian Cabrol, Ian Lau, Elizabeth Boyle, and Nhan Tran

Venue

11th International Conference on Virtual, Augmented, and Mixed Reality

Publication Year

2019

Abstract

Mixed reality technologies offer interactive robots many new ways to communicate their beliefs, desires, and intentions to human teammates. In previous work, we identified several categories of visualizations that, when displayed to users through mixed reality technologies, serve the same role as traditional deictic gestures (e.g., pointing). In this work, we experimentally investigate the potential utility of one of these categories, allocentric gestures, in which circles or arrows are rendered to enable human teammates to pick out the robot’s target referents. Specifically, through two human-subject experiments, we compare the objective and subjective performance of such gestures alone, language alone, and the combination of language and allocentric gesture. Our results suggest that allocentric gestures are more effective than language alone, but that to maintain high robot likability, allocentric gestures should be used to complement rather than replace complex referring expressions.