Authors

Tom Williams, Matthew Bussing, Sebastian Cabrol, and Ian Lau

Venue

2nd International Workshop on Virtual, Augmented, and Mixed Reality for HRI

Publication Year

2019
Abstract

Research has shown that robots that use physical deictic gestures, such as pointing, enable more effective and natural interaction. However, it is not yet clear whether these benefits hold for the new forms of deictic gesture that become available in mixed-reality environments. In previous work, we presented a human-subject study suggesting that these benefits may indeed carry over to allocentric mixed-reality gestures, in which target referents are picked out in users' fields of view using annotations such as circles and arrows, especially when those gestures are paired with complex referring expressions. In this paper, we provide additional evidence for this hypothesis through a second experiment that addresses potential confounds in our original experiment.