Introspection of DNN-Based Perception Functions in Automated Driving Systems: State-of-the-Art and Open Research Challenges

Published: 22 September 2023 · IEEE Transactions on Intelligent Transportation Systems, Volume 25, Issue 2 (February 2024), IEEE Press

Abstract

Automated driving systems (ADSs) aim to improve the safety, efficiency, and comfort of future vehicles. To achieve this, an ADS uses sensors to collect raw data from its environment, which a perception subsystem then processes into semantic knowledge of the world around the vehicle. The perception systems of state-of-the-art ADSs often rely on deep neural networks for object detection and classification, thanks to their superior performance over classical computer vision techniques. However, deep neural network-based perception systems are susceptible to errors, e.g., failing to correctly detect other road users such as pedestrians. For a safety-critical system such as an ADS, these errors can result in accidents causing injury or even death to occupants and other road users. Introspection of perception systems in ADSs refers to detecting such perception errors so that system failures and accidents can be avoided. Such safety mechanisms are crucial for ensuring the trustworthiness of ADSs. Motivated by the growing importance of this subject in the field of autonomous and automated vehicles, this paper provides a comprehensive review of the techniques that have been proposed in the literature as potential solutions for the introspection of perception errors in ADSs. We classify these techniques according to their main focus, e.g., object detection, classification, or localisation problems. Furthermore, the paper discusses the pros and cons of existing methods while identifying research gaps and potential future research directions.
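To make the idea of introspection concrete, the Python sketch below illustrates its simplest form in this literature: flagging a detector's outputs as potentially erroneous when their confidence falls below a calibrated threshold, so a downstream monitor can trigger a fallback behaviour. This is not the paper's method; the Detection structure, the introspect function, and the threshold value are all illustrative assumptions.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Detection:
        label: str    # predicted class, e.g. "pedestrian"
        score: float  # detector confidence in [0, 1]
        box: tuple    # (x1, y1, x2, y2) bounding box

    def introspect(detections: List[Detection],
                   threshold: float = 0.5) -> List[Detection]:
        """Return detections whose confidence is too low to trust.

        A safety monitor could use these flags to trigger a fallback,
        e.g. slowing the vehicle or handing control back to the driver.
        """
        return [d for d in detections if d.score < threshold]

    # Example frame: one confident car, one uncertain pedestrian.
    frame = [
        Detection("car", 0.97, (100, 80, 220, 160)),
        Detection("pedestrian", 0.31, (300, 60, 340, 150)),
    ]
    for d in introspect(frame):
        print(f"Low-confidence {d.label} (score={d.score:.2f}): flag for fallback")

Raw confidence thresholding is only a baseline: modern networks are often miscalibrated and overconfident, which is why the surveyed literature also covers calibration, uncertainty estimation (e.g. ensembles, Monte Carlo dropout), and learned error predictors.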

