Logically Sound Arguments for the Effectiveness of ML Safety Measures

  • Conference paper

Computer Safety, Reliability, and Security. SAFECOMP 2022 Workshops (SAFECOMP 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13415)


Abstract

We investigate the issues of achieving sufficient rigor in the arguments for the safety of machine learning functions. By considering the known weaknesses of DNN-based 2D bounding box detection algorithms, we sharpen the metric of imprecise pedestrian localization by associating it with the safety goal. The sharpening leads to introducing a conservative post-processor after the standard non-max-suppression as a counter-measure. We then propose a semi-formal assurance case for arguing the effectiveness of the post-processor, which is further translated into formal proof obligations for demonstrating the soundness of the arguments. Applying theorem proving not only discovers the need to introduce missing claims and mathematical concepts but also reveals the limitation of Dempster-Shafer’s rules used in semi-formal argumentation.
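The conservative post-processor itself is defined only in the extended version [2]; as an illustration of the general idea of hedging against imprecise pedestrian localization, the following is a hypothetical sketch (the `margin` parameter and the fixed-padding strategy are assumptions, not the paper's actual method): after non-max suppression, each surviving box is enlarged so that a pedestrian whose true extent is slightly mislocalized is still fully covered.

```python
# Hypothetical sketch of a conservative post-processor after NMS.
# Assumption: boxes are (x1, y1, x2, y2) tuples; each side is padded by
# `margin` times the box width/height so localization error is over-covered.

def conservative_post_process(boxes, margin=0.1):
    """Enlarge each (x1, y1, x2, y2) box by margin * size on every side."""
    enlarged = []
    for x1, y1, x2, y2 in boxes:
        w, h = x2 - x1, y2 - y1
        enlarged.append((x1 - margin * w, y1 - margin * h,
                         x2 + margin * w, y2 + margin * h))
    return enlarged

# A 20x40 pedestrian box padded by 10% of its width/height on each side.
print(conservative_post_process([(10.0, 20.0, 30.0, 60.0)]))
```

Such a post-processor is deliberately one-sided: it can only increase coverage of the true pedestrian extent, which is what makes its effectiveness amenable to a formal argument.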

This work is funded by the Bavarian Ministry for Economic Affairs, Regional Development and Energy as part of a project to support the thematic development of the Fraunhofer Institute for Cognitive Systems.
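The abstract's remark on the limitations of Dempster-Shafer's rules concerns how confidence in sub-claims of an assurance case is combined. For readers unfamiliar with the mechanics, here is a minimal, self-contained implementation of Dempster's rule of combination (the frame of discernment `{H, notH}` for "the measure is effective" is an illustrative assumption, not taken from the paper):

```python
# Minimal Dempster's rule of combination over mass functions represented as
# dicts mapping frozenset focal elements to masses. Masses on intersecting
# focal elements are multiplied and accumulated; mass falling on the empty
# intersection is the conflict K, and surviving mass is renormalized by 1 - K.
from itertools import product

def dempster_combine(m1, m2):
    """Return (combined mass function, conflict K) for two mass functions."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}, conflict

# Illustrative example: two pieces of evidence each support the hypothesis
# H ("safety measure effective") with some mass left on total ignorance.
H = frozenset({"H"})
THETA = frozenset({"H", "notH"})  # the whole frame (ignorance)
m1 = {H: 0.6, THETA: 0.4}
m2 = {H: 0.7, THETA: 0.3}
combined, K = dempster_combine(m1, m2)
print(combined, K)
```

Because the two sources do not contradict each other here, the conflict K is zero and the combined belief in H exceeds either input, which is exactly the aggregation behavior whose soundness the paper scrutinizes via theorem proving.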


Notes

  1. Due to space limits, we refer readers to the extended version [2] for details including the formal proof.

References

  1. ISO/PAS 21448:2019 Road vehicles - Safety of the intended functionality (2019). https://www.iso.org/standard/70939.html

  2. Cheng, C.-H., Schuster, T., Burton, S.: Logically sound arguments for the effectiveness of ML safety measures. arXiv preprint arXiv:2111.02649 (2021)

  3. Cyra, L., Gorski, J.: Support for argument structures review and assessment. Reliab. Eng. Syst. Saf. 96(1), 26–37 (2011)


  4. Dai, X., et al.: Dynamic head: unifying object detection heads with attentions. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7373–7382. IEEE (2021)


  5. Idmessaoud, Y., Dubois, D., Guiochet, J.: Belief functions for safety arguments confidence estimation: a comparative study. In: Davis, J., Tabia, K. (eds.) SUM 2020. LNCS (LNAI), vol. 12322, pp. 141–155. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58449-8_10


  6. Kelly, T., Weaver, R.: The goal structuring notation - a safety argument notation. In: Proceedings of the Dependable Systems and Networks Workshop on Assurance Cases, p. 6. Citeseer (2004)


  7. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)


  8. Lin, T.-Y., et al.: Microsoft COCO: common objects in context. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8693, pp. 740–755. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10602-1_48


  9. Owre, S., Rushby, J.M., Shankar, N.: PVS: a prototype verification system. In: Kapur, D. (ed.) CADE 1992. LNCS, vol. 607, pp. 748–752. Springer, Heidelberg (1992). https://doi.org/10.1007/3-540-55602-8_217


  10. Sentz, K., Ferson, S.: Combination of evidence in Dempster-Shafer theory, vol. 4015. Sandia National Laboratories (2002)


  11. Wang, R.: Confidence in safety argument-an assessment framework based on belief function theory. Ph.D. thesis, INSA de Toulouse (2018)


  12. Yuan, C., Wu, J., Liu, C., Yang, H.: A subjective logic-based approach for assessing confidence in assurance case. Int. J. Performability Eng. 13(6), 807 (2017)



Author information

Correspondence to Chih-Hong Cheng.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Cheng, C.-H., Schuster, T., Burton, S. (2022). Logically Sound Arguments for the Effectiveness of ML Safety Measures. In: Trapp, M., Schoitsch, E., Guiochet, J., Bitsch, F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2022 Workshops. SAFECOMP 2022. Lecture Notes in Computer Science, vol 13415. Springer, Cham. https://doi.org/10.1007/978-3-031-14862-0_25


  • DOI: https://doi.org/10.1007/978-3-031-14862-0_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-14861-3

  • Online ISBN: 978-3-031-14862-0

  • eBook Packages: Computer Science; Computer Science (R0)
