Authors
Terran Mott, Aaron Fanganello, and Tom Williams
Venue
ACM/IEEE International Conference on Human-Robot Interaction
Publication Year
2024
For social robots to succeed in human environments, they must comprehend and follow human norms. In particular, robots must respond in effective yet appropriate ways when humans violate these norms, e.g., when humans give robots unethical commands. Previous work has shown that humans expect robots to be proportional in their norm-violation responses, but there is a wide range of approaches robots could use to tune the politeness of their utterances to achieve proportionality, and it is not obvious whether all such strategies are appropriate for robots to use. In this work, we present the results of a human-subjects study assessing the use of human-like, Face-Theoretic politeness strategies to achieve proportionality. Our results show that while people expect robots to modulate the politeness of their responses, they do not expect them to strictly mimic human linguistic behaviors. Specifically, linguistic politeness strategies that use direct, formal language are perceived as more effective and more appropriate than strategies that use indirect, informal language.