Ruchen Wen and Boyoung Kim and Elizabeth Phillips and Qin Zhu and Tom Williams
Proceedings of the Companion of the 16th ACM/IEEE International Conference on Human-Robot Interaction (HRI LBRs)
Research has shown that robots are perceived as moral agents and hold significant persuasive power over humans. It is therefore crucial for robots to behave in accordance with human systems of morality and to use effective strategies for human-robot moral communication. In this work, we evaluate two moral communication strategies, a norm-based strategy grounded in deontological ethics and a role-based strategy grounded in role ethics, testing their effectiveness in encouraging compliance with norms grounded in role expectations. Our results suggest two major findings: (1) reflective exercises may increase the efficacy of role-based moral language, and (2) opportunities for moral practice following robots' use of moral language may facilitate role-centered moral cultivation.