Authors

Ruchen Wen

Venue

Colorado School of Mines PhD Dissertations

Publication Year

2023
Abstract

Social robots must be able to interact naturally and fluidly with users who lack prior experience with robots. To address this challenge, it is essential to develop robots whose behavior is consistent with both the expectations of their interactants and the social conventions of the contexts in which interaction takes place. Moreover, language-capable robots hold unique persuasive power over their human interactants, which offers exciting opportunities to encourage pro-social behavior. These capabilities also come with risks, however, particularly the potential for robots to inadvertently harm human norm systems. It is therefore important not only to endow social robots with moral and social competence, but also to investigate the impact these robots have on humans, in order to facilitate the successful integration of robots into human society.

This dissertation focuses on two overarching research questions: (1) How can we leverage knowledge of both environmental and relational context to enable robots to understand and appropriately communicate about social and moral norms? (2) How do robots influence humans' moral and social norms through explicit and implicit design?

We start by examining the impact of human-robot interaction designs on human behavior, with a special focus on how these designs can influence human compliance with social norms in interactions with robots as well as with other humans. We then investigate how to structure human-robot interactions to better facilitate human-robot moral communication. Next, we present computational work on a role-sensitive, relational-normative model of robot cognition, which consists of a role-based norm system, role-sensitive mechanisms for using that norm system to reason and make decisions, and the ability to communicate about those decisions on role-based grounds. We then present empirical evidence for how the different forms of explanation enabled by our system practically impact observers' trust, confidence in their understanding, and perceptions of robot intelligence. Next, we show how existing moral norm learning techniques can be extended to support norm learning that is sensitive to sociocultural linguistic context; as part of this work, we demonstrate how norms at an appropriate level of context specificity can be selected automatically based on the strength of the available evidence. Finally, we present a simple mathematical model of proportionality that could explain how moral and social considerations should be balanced when generating responses to norm violations in multi-agent settings, and we use this model to open a discussion of the hidden complexity of modeling proportionality.