Safety and Robustness for Deep Neural Networks: An Automotive Use Case

  • Conference paper
  • In: Computer Safety, Reliability, and Security. SAFECOMP 2023 Workshops (SAFECOMP 2023)

Abstract

Current automotive safety standards are cautious about using deep neural networks in safety-critical scenarios due to concerns regarding robustness to noise, domain drift, and uncertainty quantification. In this paper, we propose a scenario in which a neural network adjusts the automated driving style to reduce user stress. In this scenario, only certain actions are safety-critical, which allows for greater control over the model’s behavior. To demonstrate how safety can be addressed, we propose a mechanism based on robustness quantification and a fallback plan. This approach enables the model to minimize user stress in safe conditions while avoiding unsafe actions in uncertain scenarios. By exploring this use case, we hope to inspire discussion around identifying safety-critical scenarios and approaches in which neural networks can be used safely. We also see this as a potential contribution to the development of new standards and best practices for the use of AI in safety-critical scenarios. This work is a result of the TEACHING project, a European research project on the safe, secure, and trustworthy use of AI.
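The mechanism outlined in the abstract can be made concrete with a minimal Python sketch of a robustness-gated fallback. This is an illustration under assumptions of our own, not the paper's implementation: the perturbation-based robustness estimate, the agreement threshold, the noise scale, and every name below are hypothetical. The idea is that the network's stress-minimizing driving-style suggestion is used only when it is stable under input noise; otherwise the system degrades to a conservative default.

import numpy as np

# Illustrative sketch only (not the paper's implementation): gate a
# learned driving-style suggestion on a perturbation-based robustness
# estimate and fall back to a conservative default when unstable.
# All names, thresholds, and noise scales here are assumptions.

SAFE_FALLBACK_STYLE = "conservative"  # assumed always-safe driving style
AGREEMENT_THRESHOLD = 0.9             # assumed minimum stability ratio
NOISE_SIGMA = 0.05                    # assumed sensor-noise scale


def robustness_score(predict, x, n_samples=32, rng=None):
    """Fraction of Gaussian-perturbed copies of the input whose predicted
    driving style matches the prediction on the clean input."""
    rng = rng if rng is not None else np.random.default_rng(0)
    base = predict(x)
    agree = sum(
        predict(x + rng.normal(0.0, NOISE_SIGMA, size=x.shape)) == base
        for _ in range(n_samples)
    )
    return agree / n_samples


def select_driving_style(predict, x):
    """Use the network's suggestion only when it is locally robust;
    otherwise take the safe fallback action."""
    suggestion = predict(x)
    if robustness_score(predict, x) >= AGREEMENT_THRESHOLD:
        return suggestion
    return SAFE_FALLBACK_STYLE


if __name__ == "__main__":
    # Stub "network": thresholds a mean stress feature into two styles.
    def stub_predict(features):
        return "relaxed" if float(np.mean(features)) < 0.5 else "conservative"

    print(select_driving_style(stub_predict, np.array([0.2, 0.3, 0.25])))

Under these assumptions, the safety case rests on the gate rather than on the network itself: uncertain inputs can only ever map to the conservative fallback, never to an unvetted action.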

This research was supported by TEACHING, a project funded by the EU Horizon 2020 research and innovation programme under GA n. 871385.



Author information

Corresponding author

Correspondence to Antonio Carta.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Bacciu, D., Carta, A., Gallicchio, C., Schmittner, C. (2023). Safety and Robustness for Deep Neural Networks: An Automotive Use Case. In: Guiochet, J., Tonetta, S., Schoitsch, E., Roy, M., Bitsch, F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2023 Workshops. SAFECOMP 2023. Lecture Notes in Computer Science, vol 14182. Springer, Cham. https://doi.org/10.1007/978-3-031-40953-0_9


  • DOI: https://doi.org/10.1007/978-3-031-40953-0_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-40952-3

  • Online ISBN: 978-3-031-40953-0

  • eBook Packages: Computer Science, Computer Science (R0)
