DOI: 10.1145/3610977.3634973
Research Article
Open Access

Improving Explainable Object-induced Model through Uncertainty for Automated Vehicles

Published: 11 March 2024

Abstract

The rapid evolution of automated vehicles (AVs) has the potential to provide safer, more efficient, and comfortable travel options. However, these systems face challenges regarding reliability in complex driving scenarios. Recent explainable AV architectures neglect crucial information related to inherent uncertainties while providing explanations for actions. To overcome such challenges, our study builds upon the "object-induced" model approach that prioritizes the role of objects in scenes for decision-making and integrates uncertainty assessment into the decision-making process using an evidential deep learning paradigm with a Beta prior. Additionally, we explore several advanced training strategies guided by uncertainty, including uncertainty-guided data reweighting and augmentation. Leveraging the BDD-OIA dataset, our findings underscore that the model, through these enhancements, not only offers a clearer comprehension of AV decisions and their underlying reasoning but also surpasses existing baselines across a broad range of scenarios.
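The abstract names an evidential deep learning head with a Beta prior as the uncertainty mechanism. As a minimal, hedged sketch (not the authors' implementation; the function name and the binary action framing are illustrative assumptions), the K = 2 special case of evidential deep learning (Sensoy et al. [21]) maps a network's non-negative evidence outputs to a Beta distribution whose total strength yields an explicit uncertainty mass:

```python
def beta_evidential_uncertainty(evidence_pos: float, evidence_neg: float):
    """For a binary action decision (e.g. 'move forward' vs. 'stop'),
    map non-negative evidence values to Beta parameters, an expected
    probability, per-class belief masses, and an uncertainty mass,
    following the K = 2 (Beta prior) special case of evidential
    deep learning. Illustrative sketch, not the paper's code."""
    K = 2
    alpha = evidence_pos + 1.0  # Beta alpha parameter (evidence + prior count)
    beta = evidence_neg + 1.0   # Beta beta parameter
    S = alpha + beta            # total strength of the distribution
    p_pos = alpha / S           # expected probability of taking the action
    belief = (evidence_pos / S, evidence_neg / S)  # subjective-logic belief masses
    u = K / S                   # uncertainty mass; belief masses + u sum to 1
    return p_pos, belief, u
```

With zero evidence the function returns u = 1 and p = 0.5 (total ignorance); strong evidence drives u toward 0. A per-sample u of this kind could, for instance, serve as a weight in uncertainty-guided data reweighting, though the paper's exact weighting and augmentation schemes are not reproduced here.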

References

[1]
Shahin Atakishiyev, Mohammad Salameh, Housam Babiker, and Randy Goebel. 2023. Explaining Autonomous Driving Actions with Visual Question Answering. In Proceedings of the 2023 IEEE International Conference on Intelligent Transportation Systems (IEEE ITSC-2023). https://doi.org/10.48550/arXiv.2307.10408 Accepted.
[2]
Hédi Ben-Younes, Éloi Zablocki, Patrick Pérez, and Matthieu Cord. 2022. Driving behavior explanation with multi-level fusion. Pattern Recognition 123 (2022), 108421. https://doi.org/10.1016/j.patcog.2021.108421
[3]
Mark Colley, Benjamin Eder, Jan Ole Rixen, and Enrico Rukzio. 2021. Effects of Semantic Segmentation Visualization on Trust, Situation Awareness, and Cognitive Load in Highly Automated Vehicles. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI '21). Association for Computing Machinery, New York, NY, USA, Article 155, 11 pages. https://doi.org/10.1145/3411764.3445351
[4]
Mark Colley, Max Rädler, Jonas Glimmann, and Enrico Rukzio. 2022. Effects of Scene Detection, Scene Prediction, and Maneuver Planning Visualizations on Trust, Situation Awareness, and Cognitive Load in Highly Automated Vehicles. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, 2, Article 49 (Jul 2022), 21 pages. https://doi.org/10.1145/3534609
[5]
Luca Cultrera, Lorenzo Seidenari, Federico Becattini, Pietro Pala, and Alberto Del Bimbo. 2020. Explaining Autonomous Driving by Learning End-to-End Visual Attention. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). 1389--1398. https://doi.org/10.1109/CVPRW50498.2020.00178
[6]
Na Du, Jacob Haspiel, Qiaoning Zhang, Dawn Tilbury, Anuj K. Pradhan, X. Jessie Yang, and Lionel P. Robert. 2019. Look who's talking now: Implications of AV's explanations on driver's trust, AV preference, anxiety and mental workload. Transportation Research Part C: Emerging Technologies 104 (2019), 428--442. https://doi.org/10.1016/j.trc.2019.05.025
[7]
E. Fersini, E. Messina, and F.A. Pozzi. 2014. Sentiment analysis: Bayesian Ensemble Learning. Decision Support Systems 68 (2014), 26--38. https://doi.org/10.1016/j.dss.2014.10.004
[8]
Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. In Proceedings of The 33rd International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 48), Maria Florina Balcan and Kilian Q. Weinberger (Eds.). PMLR, New York, New York, USA, 1050--1059. https://proceedings.mlr.press/v48/gal16.html
[9]
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 770--778. https://doi.org/10.1109/CVPR.2016.90
[10]
Ruihan Hu, Qijun Huang, Sheng Chang, Hao Wang, and Jin He. 2019. The MBPEP: a deep ensemble pruning algorithm providing high quality uncertainty prediction. Applied Intelligence 49 (2019), 2942--2955. https://doi.org/10.1007/s10489-019-01421-8
[11]
Eyke Hüllermeier and Willem Waegeman. 2021. Aleatoric and epistemic uncertainty in machine learning: an introduction to concepts and methods. Machine Learning 110 (2021), 457--506. https://doi.org/10.1007/s10994-021-05946-3
[12]
Heinrich Jiang, Been Kim, Melody Guan, and Maya Gupta. 2018. To Trust Or Not To Trust A Classifier. In Advances in Neural Information Processing Systems 31 (2018). https://proceedings.neurips.cc/paper/2018/file/7180cffd6a8e829dacfc2a31b3f72ece-Paper.pdf
[13]
Jinkyu Kim and John Canny. 2017. Interpretable Learning for Self-Driving Cars by Visualizing Causal Attention. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV). 2961--2969. https://doi.org/10.1109/ICCV.2017.320
[14]
Jinkyu Kim, Anna Rohrbach, Trevor Darrell, John Canny, and Zeynep Akata. 2018. Textual Explanations for Self-Driving Vehicles. In Computer Vision -- ECCV 2018. Springer International Publishing, Cham, 577--593. https://doi.org/10.1007/978-3-030-01216-8_35
[15]
J. Koo, J. Kwac, W. Ju, et al. 2015. Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance. International Journal of Interactive Design and Manufacturing (IJIDeM) 9 (2015), 269--275. https://doi.org/10.1007/s12008-014-0227-2
[16]
Matthew A. Kupinski, John W. Hoppin, Eric Clarkson, and Harrison H. Barrett. 2003. Ideal-observer computation in medical imaging with use of Markov-chain Monte Carlo techniques. Journal of the Optical Society of America A 20, 3 (Mar 2003), 430--438. https://doi.org/10.1364/JOSAA.20.000430
[17]
Daniel Omeiza, Konrad Kollnig, Helena Webb, Marina Jirotka, and Lars Kunze. 2021. Why Not Explain? Effects of Explanations on Human Perceptions of Autonomous Driving. In Proceedings of the 2021 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO). 194--199. https://doi.org/10.1109/ARSO51874.2021.9542835
[18]
Daniel Omeiza, Helena Webb, Marina Jirotka, and Lars Kunze. 2021. Towards Accountability: Providing Intelligible Explanations in Autonomous Driving. In Proceedings of the 2021 IEEE Intelligent Vehicles Symposium (IV). 231--237. https://doi.org/10.1109/IV48863.2021.9575917
[19]
Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2017. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 39, 6 (2017), 1137--1149. https://doi.org/10.1109/TPAMI.2016.2577031
[20]
Cynthia Rudin. 2019. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence 1 (2019), 206--215. https://doi.org/10.1038/s42256-019-0048-x
[21]
Murat Sensoy, Lance Kaplan, and Melih Kandemir. 2018. Evidential Deep Learning to Quantify Classification Uncertainty. In Advances in Neural Information Processing Systems 31 (2018). https://proceedings.neurips.cc/paper/2018/file/a981f2b708044d6fb4a71a1463242520-Paper.pdf
[22]
Yuan Shen, Shanduojiao Jiang, Yanlin Chen, and Katie Driggs Campbell. 2022. To Explain or Not to Explain: A Study on the Necessity of Explanations for Autonomous Vehicles. In Proceedings of the NeurIPS 2022 Progress and Challenges in Building Trustworthy Embodied AI Workshop (TEA 2022). https://doi.org/10.48550/arXiv.2006.11684 Best Paper Award.
[23]
Connor Shorten and Taghi M. Khoshgoftaar. 2019. A survey on Image Data Augmentation for Deep Learning. Journal of Big Data 6 (2019), 60. https://doi.org/10.1186/s40537-019-0197-0
[24]
Jakub Swiatkowski, Kevin Roth, Bastiaan Veeling, Linh Tran, Joshua Dillon, Jasper Snoek, Stephan Mandt, Tim Salimans, Rodolphe Jenatton, and Sebastian Nowozin. 2020. The k-tied Normal Distribution: A Compact Parameterization of Gaussian Mean Field Posteriors in Bayesian Neural Networks. In Proceedings of the 37th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 119), Hal Daumé III and Aarti Singh (Eds.). PMLR, 9289--9299. https://proceedings.mlr.press/v119/swiatkowski20a.html
[25]
Theodoros Tsiligkaridis. 2021. Information Aware max-norm Dirichlet networks for predictive uncertainty estimation. Neural Networks 135 (2021), 105--114. https://doi.org/10.1016/j.neunet.2020.12.011
[26]
Dequan Wang, Coline Devin, Qi-Zhi Cai, Fisher Yu, and Trevor Darrell. 2019. Deep Object-Centric Policies for Autonomous Driving. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA). 8853--8859. https://doi.org/10.1109/ICRA.2019.8794224
[27]
Huazhe Xu, Yang Gao, Fisher Yu, and Trevor Darrell. 2017. End-to-End Learning of Driving Models from Large-Scale Video Datasets. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 3530--3538. https://doi.org/10.1109/CVPR.2017.376
[28]
Yiran Xu, Xiaoyin Yang, Lihang Gong, Hsuan-Chu Lin, Tz-Ying Wu, Yunsheng Li, and Nuno Vasconcelos. 2020. Explainable Object-Induced Action Decision for Autonomous Vehicles. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 9520--9529. https://doi.org/10.1109/CVPR42600.2020.00954
[29]
Fisher Yu, Haofeng Chen, Xin Wang, Wenqi Xian, Yingying Chen, Fangchen Liu, Vashisht Madhavan, and Trevor Darrell. 2020. BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2633--2642. https://doi.org/10.1109/CVPR42600.2020.00271
[30]
Éloi Zablocki, Hédi Ben-Younes, Patrick Pérez, et al. 2022. Explainability of Deep Vision-Based Autonomous Driving Systems: Review and Challenges. International Journal of Computer Vision 130 (2022), 2425--2452. https://doi.org/10.1007/s11263-022-01657-x


Published In

HRI '24: Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction
March 2024
982 pages
ISBN:9798400703225
DOI:10.1145/3610977
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. autonomous vehicle
  2. explainable ai
  3. object-induced model
  4. uncertainty quantification

Qualifiers

  • Research-article

Conference

HRI '24

Acceptance Rates

Overall Acceptance Rate 268 of 1,124 submissions, 24%


Article Metrics

  • Total citations: 0
  • Total downloads: 179
  • Downloads (last 12 months): 179
  • Downloads (last 6 weeks): 48
Reflects downloads up to 04 Oct 2024
