Authors

Boyoung Kim, Ruchen Wen, Ewart de Visser, Chad Tossell, Qin Zhu, Tom Williams, and Beth Phillips

Venue

International Journal of Human-Computer Studies

Publication Year

2023

Abstract

A growing body of human–robot interaction literature explores whether and how social robots, by leveraging their physical presence and their capacity for verbal and nonverbal behavior, can influence people's moral behavior. In the current research, we examined the extent to which a social robot can effectively encourage people to act honestly by offering them moral advice. The robot either offered no advice at all or proactively offered moral advice before participants chose between acting honestly and cheating, and the underlying ethical framework of the advice was grounded in deontology (rule-focused), virtue ethics (identity-focused), or Confucian role ethics (role-focused). Across three studies (N = 1693), we did not find a robot's moral advice to be effective in deterring cheating. These null results held even when, before participants received the advice, we introduced the robot as being equipped with moral capacity in order to establish common expectations about it. The current work also led to an unexpected discovery of a psychological reactance effect associated with participants' perception of the robot's moral capacity: stronger perceptions of the robot's moral capacity were linked to a greater probability of cheating. These findings demonstrate how psychological reactance may impact human–robot interaction in moral domains and suggest potential strategies for framing a robot's moral messages to avoid such reactance.