Tom Williams and Cynthia Matuszek and Kristiina Jokinen and Raj Korpan and James Pustejovsky and Brian Scassellati


Communications of the ACM
Language is often viewed as a distinctly human capability, and one that is at the heart of most human-human interactions. To make human-robot interactions natural and humanlike, roboticists are increasingly developing language-capable robots. In socially assistive contexts, these include tutoring robots that speak with children to guide and encourage them through educational programming, assistive robots that engage in small talk to provide companionship for the elderly, and robots that recommend physical activities and healthy eating. In field contexts, these include robots for search and rescue and space exploration that accept verbal commands for navigation, exploration, and maintenance tasks, and may verbally ask questions or report on their success or failure.

This emerging trend requires computer scientists and roboticists to attend to new ethical concerns. Not only do language-capable robots share the risks presented by traditional robots (such as risks to physical safety and risks of exacerbating inequality) and the risks presented by natural language technologies like smart speakers (such as the encoding and perpetuation of hegemonically dominant white heteropatriarchal stereotypes, norms, and biases (cp. Noble, 2018) and climate risks (cp. Bender et al., 2021)), but they also present fundamentally new and accentuated risks that stem from the confluence of their communicative capability and embodiment. As such, while roboticists have a long history of working to address safety risks, and while computational linguists are increasingly working to address the bias encoded into language models, researchers who hope to work at the intersection of these fields must be aware of the new and accentuated risks -- and the responsibility to mitigate them -- that arise from that intersection.
In this article, we explore three examples of the ethical concerns unique to language-capable robots (influence, identity, and privacy), each of which requires consideration by researchers, practitioners, and the general public, and demands distinctive technical -- and social -- responses. We then use these examples to offer recommendations for roboticists as they design, develop, and deploy language-capable robot technologies.