Abstract
Current automotive safety standards are cautious about utilizing deep neural networks in safety-critical scenarios due to concerns regarding robustness to noise, domain drift, and uncertainty quantification. In this paper, we propose a scenario where a neural network adjusts the automated driving style to reduce user stress. In this scenario, only certain actions are safety-critical, allowing for greater control over the model’s behavior. To demonstrate how safety can be addressed, we propose a mechanism based on robustness quantification and a fallback plan. This approach enables the model to minimize user stress in safe conditions while avoiding unsafe actions in uncertain scenarios. By exploring this use case, we hope to inspire discussions around identifying safety-critical scenarios and approaches where neural networks can be safely utilized. We also see this as a potential contribution to the development of new standards and best practices for the usage of AI in safety-critical scenarios. The work presented here is a result of the TEACHING project, a European research project on the safe, secure, and trustworthy usage of AI.
This research was supported by TEACHING, a project funded by the EU Horizon 2020 research and innovation programme under GA n. 871385.
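To make the proposed mechanism more concrete, the sketch below illustrates one possible way a robustness-gated fallback could look in code. It is a minimal, hypothetical example, not the paper's implementation: the names (Prediction, select_driving_style, DEFAULT_STYLE), the threshold value, and the idea of using a certified perturbation margin as the robustness score are all illustrative assumptions.

```python
# Hypothetical sketch of a robustness-gated fallback for the driving-style
# adaptation scenario described in the abstract. All names and values here
# are illustrative assumptions, not the paper's actual implementation.

from dataclasses import dataclass

DEFAULT_STYLE = "conservative"   # assumed safe fallback driving style
ROBUSTNESS_THRESHOLD = 0.2       # assumed minimum acceptable robustness margin


@dataclass
class Prediction:
    style: str     # driving style suggested by the neural network
    margin: float  # quantified robustness of the prediction, e.g. a
                   # certified perturbation bound around the current input


def select_driving_style(prediction: Prediction) -> str:
    """Apply the network's suggestion only when its robustness can be
    quantified as sufficient; otherwise fall back to the safe default."""
    if prediction.margin >= ROBUSTNESS_THRESHOLD:
        return prediction.style   # robust enough: adapt the style to reduce stress
    return DEFAULT_STYLE          # uncertain: take the safe fallback action


# Example: a low-margin prediction triggers the fallback.
print(select_driving_style(Prediction(style="sporty", margin=0.05)))  # -> "conservative"
```

In this reading, the neural network is free to optimize for user comfort whenever the robustness check passes, while the fallback plan guarantees a conservative behavior whenever the prediction cannot be trusted.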