Authors

Alyssa Hanson, Nichole Starr, Cloe Emnett, Ruchen Wen, Bertram Malle, and Tom Williams

Venue

ACM/IEEE International Conference on Human-Robot Interaction

Publication Year

2024
Abstract

Due to their unique persuasive power, language-capable robots must be able to both act in line with human moral norms and clearly and appropriately communicate those norms. These requirements are complicated by the possibility that people may blame human and robot agents differently for violations of those norms. These complications raise particular challenges for robots giving moral advice to primary decision makers, as the robots and the deciders may be blamed differently for endorsing the same moral action. In this work, we thus explore how people morally evaluate both human and robot advisors for human and robot deciders. In Experiment 1 (n = 555), we examine human blame judgments of robot and human moral advisors and find clear evidence for an advice-as-decision hypothesis: advisors are blamed similarly to how they would be blamed for making the decisions they advised. In Experiment 2 (n = 1326), we examine people's blame judgments of a robot or human decider following the advice of a robot or human moral advisor. We replicate the results from Experiment 1 and also find clear evidence for a differential dismissal hypothesis, in which moral deciders are penalized for ignoring moral advice, especially when a robot decider ignores a human advisor's recommendation. Our results raise questions about people's perception of moral advising situations, especially when they involve robots, and they present challenges for the design of morally competent language-capable robots more generally.