Authors: Lixiao Zhu and Tom Williams
Venue: International Conference on Social Robotics
Publication Year: 2020
The performance of human-robot teams depends on human-robot trust, which in turn depends on appropriate robot-to-human transparency. A key way for robots to build trust through transparency is by providing appropriate explanations for their actions. While most previous work on robot explanation generation has focused on robots' ability to provide post-hoc explanations upon request, in this paper we instead examine proactive explanations generated before actions are taken, and their effect on human-robot trust. Our results suggest a positive relationship between proactive explanations and human-robot trust, and raise fundamental new questions about the effects of proactive explanations on humans' mental models and the nature of human-robot trust.