Sayanti Roy and Trey Smith and Brian Coltin and Tom Williams


ACM/IEEE International Conference on Human-Robot Interaction

Publication Year

Interactive intelligent systems are increasingly being deployed in safety-critical contexts like space exploration. For humans to safely and successfully complete collaborative tasks with robots in these contexts, they must maintain situational awareness of their task context without being cognitively overloaded -- regardless of whether they are co-located with robots or interacting with them from a distance of thousands or millions of miles. In this paper, we present a novel autonomy design strategy we term Performative Autonomy, in which robots behave as if they have a lower level of autonomy than they are truly capable of (e.g., asking for advice they do not believe they truly need) for the sole purpose of maintaining interactants' situational awareness. In our first experiment (n=264), we demonstrate that Performative Autonomy can increase situational awareness (SA) without unduly increasing workload, and that this holds across tasks with different baseline levels of mental workload. In our second experiment (n=318), we consider cases where robots do not believe they need advice but in fact have faulty perception or decision-making capabilities. In this experiment, we observed benefits of Performative Autonomy only for specific types of questions, and only when a secondary task imposed significant cognitive load; yet we observed a uniform benefit to task performance from asking these types of questions regardless of task-imposed mental workload. Our results from these two studies (total n=582) thus provide strong support for using this autonomy design strategy in future safety-critical missions as humanity explores the Moon, Mars, and beyond.