Ryan Blake Jackson and Ruchen Wen and Tom Williams
AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society
There is a significant body of research seeking to enable moral decision making and ensure ethical conduct in robots. One aspect of ethical conduct is rejecting unethical human commands. For social robots, which are expected to follow and maintain human moral and sociocultural norms, it is especially important not only to engage in ethical decision making, but also to properly communicate ethical reasoning. We thus argue that it is critical for robots to carefully phrase command rejections. Specifically, the degree of politeness-theoretic face threat in a command rejection should be proportional to the severity of the norm violation motivating that rejection. We present a human subjects experiment showing some of the consequences of miscalibrated responses, including perceptions of the robot as inappropriately polite, direct, or harsh, and reduced robot likeability. This experiment is intended to motivate and inform the design of algorithms that autonomously and tactfully tune the pragmatic aspects of command rejections.