Authors

Tom Williams, Ryan Blake Jackson, and Jane Lockshin

Venue

40th Annual Meeting of the Cognitive Science Society

Publication Year

2018

Abstract

One of the principal tenets of modern behavioral ethics is that human morality is dynamic and malleable. Recent work in technology ethics has highlighted the role technologies can play in this process. As such, it is the responsibility of technology designers to actively identify and address the potential negative consequences of such technological mediation. In this work, we examine the dialogue systems employed by current robotic agents, arguing that these systems can have deleterious effects both on the human moral ecosystem and on human perceptions of the robots, regardless of the robots' actual ethical competence. We present a preliminary Bayesian analysis of empirical data suggesting that the architectural status quo of clarification request generation systems may (1) cause robots to unintentionally miscommunicate their ethical intentions (our two tests for this yielded Bayes factors of 1319 and 1099) and (2) weaken humans' contextual application of moral norms (Bayes factor of 1069).
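For readers unfamiliar with the statistic, a Bayes factor quantifies the relative evidence the data provide for one hypothesis over another; values in the thousands, as reported above, indicate overwhelming support. The abstract does not specify the authors' model, but the following minimal Python sketch illustrates one common way such a quantity can be computed: a Bayes factor for a binomial response rate against a chance-level point null. The sample size and counts here are hypothetical placeholders, not the paper's data.

```python
from scipy.stats import betabinom, binom

# Hypothetical counts (illustrative only, not the paper's data):
# k participants out of n gave the response of interest.
n, k = 40, 32

# Marginal likelihood under H1: response rate p ~ Beta(1, 1) (uniform prior),
# under which the count k follows a beta-binomial distribution.
m1 = betabinom.pmf(k, n, 1, 1)

# Likelihood under H0: point null at chance, p = 0.5.
m0 = binom.pmf(k, n, 0.5)

# Bayes factor BF10: evidence for H1 relative to H0.
bf10 = m1 / m0
print(f"BF10 = {bf10:.1f}")
```

Under common rules of thumb such as Jeffreys' scale, Bayes factors above 100 are taken as decisive evidence, so the values the paper reports (1319, 1099, and 1069) would fall well past that threshold.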