Authors

Nhan Tran

Venue

Colorado School of Mines MS Theses

Publication Year

2020

Abstract

Research has shown that the use of physical deictic gestures, such as pointing and presenting, by robots enables more effective and natural human-robot interaction. However, not all robots come equipped with gestural capabilities. Recent advances in augmented reality (AR) and mixed reality (MR) provide powerful new forms of deictic gesture for human-robot communication. My thesis focuses on allocentric mixed reality gestures, in which target referents are picked out in human teammates' fields of view using AR visualizations such as circles and arrows, especially when these gestures are paired with verbal referring expressions and deployed while teammates experience different types of mental workload. I also present a software architecture that enables these mixed reality gestural capabilities, along with the results of a human-subject experiment measuring users' objective performance and subjective responses. The results demonstrate the trade-offs between different types of mixed reality robotic communication under different levels of user workload. Specifically, the findings suggest that although users may not notice the differences, the type of workload a user is under and the communication style of the robot they interact with do in fact interact to determine task completion time. The data collected in my experiment represent a first step toward answering an overarching question: How can a robot select the most effective communication modality given information about its human teammate's level and type of mental workload?
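To make the closing question concrete, below is a minimal, purely hypothetical sketch (in Python) of workload-aware modality selection. The names WorkloadType, Modality, and choose_modality, and the specific mapping rules, are illustrative assumptions; they are not taken from the thesis or its software architecture, and they only stand in for the kind of trade-off the abstract describes (visual AR annotations competing with visual load, speech competing with auditory load).

    # Hypothetical illustration only (not the thesis's actual architecture):
    # a robot choosing a communication modality based on its estimate of a
    # teammate's current workload type.
    from enum import Enum, auto

    class WorkloadType(Enum):
        VISUAL = auto()      # e.g., teammate is visually monitoring another task
        AUDITORY = auto()    # e.g., teammate is listening to other speech
        LOW = auto()         # spare capacity on both channels

    class Modality(Enum):
        AR_ANNOTATION = auto()   # allocentric gesture: circle/arrow over the referent
        SPEECH_ONLY = auto()     # verbal referring expression alone
        COMBINED = auto()        # AR visualization paired with a referring expression

    def choose_modality(workload: WorkloadType) -> Modality:
        """Pick the channel least likely to compete with the user's current load."""
        if workload is WorkloadType.VISUAL:
            # Avoid adding visual clutter; rely on the verbal channel.
            return Modality.SPEECH_ONLY
        if workload is WorkloadType.AUDITORY:
            # Avoid competing speech; point with an AR circle or arrow instead.
            return Modality.AR_ANNOTATION
        # With spare capacity, a redundant multimodal reference is assumed clearest.
        return Modality.COMBINED

    if __name__ == "__main__":
        for w in WorkloadType:
            print(w.name, "->", choose_modality(w).name)

In a real system the workload estimate would itself be uncertain and the mapping would be learned or tuned from data such as that collected in the thesis experiment, rather than hard-coded as above.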