NeurIPS 2021 Lead Author Spotlight
Rohan Paleja, Robotics PhD student
We present two human-subject studies quantifying the benefits of deploying Explainable AI (xAI) techniques within a human-machine teaming scenario, finding that the benefits of xAI are not universal. We create a rich, interactive human-machine teaming scenario in Minecraft where a human and collaborative robot (i.e., a cobot) must work together to build a house. We show that xAI techniques providing an abstraction of the cobot’s behavior can support situational awareness (SA) and examine how different SA levels induced via a collaborative AI policy abstraction affect ad hoc human-machine teaming performance. Our work presents one of the first analyses looking at the impact of explainable AI in collaborative sequential decision-making settings. Our results demonstrate that researchers must deliberately design and deploy the right xAI techniques in the right scenario by carefully considering human-machine team composition and how the xAI method augments SA.
Q&A with Rohan Paleja
What motivated your work on this paper?
Given my prior work in interpretable machine learning, I was interested in identifying the utility of explainable AI (xAI) approaches when deployed to complex, human-machine teaming domains. Furthermore, I was hoping that deploying several xAI approaches within a human-machine teaming setting would reveal key drawbacks in the real-world practicability of current xAI approaches and inspire my future work in developing xAI for high-performance human-machine teaming.
If readers remember one takeaway from the paper, what should it be and why?
Full explainability, providing complete information about a collaborative robot’s policy, is preferred by users prior to task execution. However, during task execution, partial explainability, which provides a low-level abstraction of the collaborative robot’s policy, proves more beneficial.
We hope this takeaway can inform other researchers in their design of xAI approaches, modifying the design appropriately based on whether the approach is online (during task execution) or offline (before or after task execution).
Were there any “aha” moments or lessons that you’ll use to inform your future work?
When running human-subjects studies, assess/plot your data often. You may detect new and thought-provoking patterns that can further inform your experimental analysis and lead to interesting conclusions.
What are you most excited for at NeurIPS and what do you hope to take away from the experience?
I’m excited to present my research and hope to both meet other researchers in the field and discover interesting future directions.