Terran Mott and Tom Williams


Companion Proceedings of the 18th ACM/IEEE International Conference on Human-Robot Interaction (HRI LBRs)


Social robots of the future will need to perceive, reason about, and respond appropriately to ethically sensitive situations. At the same time, policymakers and researchers alike are advocating for increased transparency and explainability in robotics: design principles that help users build accurate mental models and calibrate trust. In this short paper, we consider how Rube Goldberg machines might offer a strong analogy on which to build transparent user interfaces for the intricate but knowable inner workings of a cognitive architecture's moral reasoning. We present a discussion of these related concepts, a rationale for the suitability of this analogy, and early designs for an initial prototype visualization.