Beyond Teleoperation: Towards Explainable and Trusted Autonomous Vehicles
Tuesday, October 06, 2020: 10:30 AM - 11:15 AM
Machine learning has made tremendous strides in the past decade, showing promise in many areas of active research as well as pragmatic application in a variety of industries, including finance. In particular, deep neural networks (DNNs) have been shown to perform extremely well in a variety of computer vision tasks such as classification and segmentation. Although DNN-based tools can currently assist operators of unmanned vehicles, these cutting-edge algorithms have not yet been accepted as trusted partners in critical tasks, such as control systems for autonomous vehicles. For ML systems to act as trusted partners when performing critical tasks, a comprehensive understanding of the system's competencies is necessary, as well as an understanding of how its inputs are mapped to outputs. However, it is challenging to accurately understand and predict an ML system's performance: even state-of-the-art ML algorithms can be sensitive to changes in their environment, performing poorly under subtle deviations from their training conditions, often with no awareness of their degraded performance. We present a framework that transforms autonomous systems from tools into trusted, collaborative partners, allowing human operators to gain insight into a system's competence in complex environments. Our approach to developing this framework first identifies and catalogs the experiences of ML systems via mathematically constrained, distilled representations of their internal processes. These representations surface human-intelligible features that are meaningful to both the ML system and the user. Second, our framework learns rich probabilistic causal models with a relational structure that can describe the dependencies that ultimately determine the ML system's task behaviors.
These models are powerful because they can still function when data is fragmented or missing, and they can answer informative counterfactual queries that help the user explore similar scenarios and assess the ML system's likely competency. Finally, our framework offers an intuitive interface that gives users rich, comprehensive measures of system performance, recommendations for when to adjust the ML system to perform better in the selected scenario, and what-if scenarios in which the selected ML system would perform better. This approach will yield an efficient ML system that is aware of its own competencies and can provide the information a human partner needs to make informed decisions about when the ML system can be trusted.
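The probabilistic models described above can be illustrated with a minimal sketch: a tiny hand-rolled discrete Bayesian network that predicts ML-system competence from an operating condition, and that still answers queries when some variables are unobserved (summed out). The variable names and probabilities here are illustrative assumptions for exposition only, not values from the talk.

```python
# Hypothetical sketch: a three-variable discrete Bayesian network
# (weather -> sensor_ok -> competent) queried by enumeration.
# All structure and numbers are illustrative assumptions.
from itertools import product

p_weather = {"clear": 0.8, "fog": 0.2}        # P(weather)
p_sensor_ok = {"clear": 0.95, "fog": 0.4}     # P(sensor_ok=True | weather)
p_competent = {True: 0.9, False: 0.3}         # P(competent=True | sensor_ok)

def joint(weather, sensor_ok, competent):
    """Joint probability of one full assignment."""
    p = p_weather[weather]
    p *= p_sensor_ok[weather] if sensor_ok else 1 - p_sensor_ok[weather]
    p *= p_competent[sensor_ok] if competent else 1 - p_competent[sensor_ok]
    return p

def p_competent_given(evidence):
    """P(competent=True | evidence) by enumeration.
    Unobserved variables (e.g. missing weather data) are summed out,
    so the query works even with fragmented evidence."""
    num = den = 0.0
    for w, s, c in product(p_weather, (True, False), (True, False)):
        assignment = {"weather": w, "sensor_ok": s, "competent": c}
        if any(assignment[k] != v for k, v in evidence.items()):
            continue  # inconsistent with the observed evidence
        p = joint(w, s, c)
        den += p
        if c:
            num += p
    return num / den

print(p_competent_given({"weather": "fog"}))  # degraded conditions
print(p_competent_given({}))                  # weather data missing entirely
```

Comparing the two queries shows the kind of what-if exploration the abstract describes: predicted competence drops under fog relative to the prior computed when weather data is missing, flagging a scenario where the operator should intervene.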
Analyst, Engineering/Technical, Research & Development