Authors
Terran Mott and Tom Williams
Venue
Proceedings of the HRI Workshop on Scarecrows in Oz: Large Language Models in HRI
Publication Year
2024
Robots’ ability to act as social agents means they have the potential to engage in many aspects of humans’ lives. However, it also means that people will encounter situations in which they must judge a robot’s trustworthiness or fallibility. A key challenge in appraising a robot’s cognition, moral competence, or trustworthiness is that the same social behaviors may be generated by a variety of different computational processes, including cognitive architectures or neural networks. In this brief paper, we explore people’s assumptions about robot cognition as revealed in qualitative data from a user study on robot moral communication. These data show that participants made varied assumptions about how robots think and speak, even after viewing the same interactions. We reflect on the ramifications and potential risks of users making such assumptions inaccurately, and we affirm that roboticists can pursue transparent design that supports users in understanding how robots function and how they may fail.