Terran Mott


Colorado School of Mines PhD Dissertations


To be successful and acceptable, social robots must demonstrate social competence, navigate sensitive situations, and react to adverse events. Designing robot behaviors for these interactions is challenging because poor robot responses risk harming humans’ dignity and well-being. This dissertation explores how social robots can be designed to respond to adverse or sensitive social interactions effectively, appropriately, and in ways that minimize risk to users’ well-being. Chapter 2 begins by examining a setting in which social robots are already used in the wild for potentially sensitive interactions: teleoperated socially assistive robots in education, therapy, and telehealth for children. This work demonstrates the advantages of human oversight in this domain by identifying users’ existing strategies for mitigating the social and emotional risks of child-robot interaction. It then presents design recommendations summarizing how roboticists can develop tools that support users’ ability to prepare for and adapt to unforeseen situations. Chapters 3 and 4 evaluate interaction design for autonomous robots in adverse interactions involving norm violations, such as unethical commands or hate speech. Chapter 3 explores how people appraise these interactions and investigates why they may prefer a robot to intervene in, or abdicate from responding to, adverse events. Chapter 4 extends this work through an empirical evaluation of robots’ use of human-like linguistic politeness cues to address unethical commands. It presents a framework delineating how robots could use human-like cues to address adverse interactions effectively and appropriately while avoiding negative perceptions. This work also reemphasizes broader concerns about the extent to which robots should be able to perceive and react to such scenarios.
Overall, this dissertation makes empirical and design contributions to the field of human-robot interaction (HRI) that inform how social robots can preserve humans’ dignity and well-being in adverse interactions. It argues that these contexts require roboticists to account for factors beyond individual human-robot interactions, including the experiences of secondary stakeholders and bystanders, existing sociocultural norms of collaboration and conflict, and the potential for misuse of robots’ capabilities.