Authors: Tom Williams
Venue: 2018 HRI Workshop on Longitudinal Human-Robot Teaming
Publication Year: 2018
Recent work on natural language generation algorithms for human-robot interaction has not considered the ethical implications of such algorithms. In this work, we present preliminary results suggesting that simply by asking for clarification, a robot may unintentionally communicate that it would be willing to perform an unethical action, even if its ethical programming would prevent it from actually doing so. By asking such a question, the robot may not only miscommunicate its own ethical programming, but also negatively influence the morality of its human teammates.