
Assurance Cases as Foundation Stone for Auditing AI-Enabled and Autonomous Systems: Workshop Results and Political Recommendations for Action from the ExamAI Project

  • Conference paper
  • In: HCI International 2022 – Late Breaking Papers: HCI for Today's Community and Economy (HCII 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13520)

Abstract

The European Machinery Directive and related harmonized standards acknowledge that software can be used to realize safety-relevant behavior of machinery, but they do not consider all kinds of software. In particular, software based on machine learning (ML) is not considered for the realization of safety-relevant behavior. This limits the introduction of suitable safety concepts for autonomous mobile robots and other autonomous machinery, which commonly depend on ML-based functions. We investigated this issue and the way safety standards define safety measures to be implemented against software faults. Functional safety standards use Safety Integrity Levels (SILs) to define which safety measures shall be implemented. They provide rules for determining the SIL and for selecting safety measures depending on the SIL. In this paper, we argue that this approach can hardly be adopted for ML and other kinds of Artificial Intelligence (AI). Instead of simple rules for determining an SIL and applying the related measures against faults, we propose the use of assurance cases to argue that the individually selected and applied measures are sufficient in the given case. To obtain an initial assessment of the feasibility and usefulness of our proposal, we presented and discussed it in a workshop with experts from industry, German statutory accident insurance companies, work safety and standardization commissions, and representatives from various national, European, and international working groups dealing with safety and AI. In this paper, we summarize the proposal and the workshop discussion. Moreover, we examine to what extent our proposal is in line with the proposed European AI Act and with current safety standardization initiatives addressing AI and autonomous systems.
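The assurance-case approach the abstract proposes can be illustrated as a goal tree in the style of the Goal Structuring Notation (GSN): claims are decomposed via argumentation strategies until every leaf claim is backed by evidence, and any claim left without evidence marks an open obligation. The following sketch is purely illustrative; the node kinds, the `undeveloped_goals` helper, and the obstacle-detection example are assumptions for illustration, not content from the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """A node in a simplified GSN-style assurance argument."""
    statement: str
    kind: str  # "goal", "strategy", or "evidence"
    children: List["Node"] = field(default_factory=list)

def undeveloped_goals(node: Node) -> List[Node]:
    """Collect claims that bottom out without supporting evidence."""
    if node.kind == "evidence":
        return []
    if not node.children:
        return [node]  # leaf claim with no evidence: still open
    out: List[Node] = []
    for child in node.children:
        out.extend(undeveloped_goals(child))
    return out

# Hypothetical argument fragment for an ML-based obstacle-detection function.
case = Node("Residual risk of ML obstacle detection is acceptable", "goal", [
    Node("Argue over identified ML failure modes", "strategy", [
        Node("Misdetection rate below target on operational data", "goal", [
            Node("Field test report FT-01", "evidence"),
        ]),
        Node("Robustness against distribution shift demonstrated", "goal"),
    ]),
])

for goal in undeveloped_goals(case):
    print("Undeveloped:", goal.statement)
```

In contrast to a rule-based SIL lookup, such a structure makes the case-specific selection of measures explicit and auditable: a reviewer can see exactly which claims are supported and which remain open.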



Acknowledgments

Parts of this work have been funded by the Observatory for Artificial Intelligence in Work and Society (KIO) of the Denkfabrik Digitale Arbeitsgesellschaft in the project "KI Testing & Auditing" (ExamAI) and by the project "LOPAAS" as part of the internal funding program "ICON" of the Fraunhofer Society. We would like to thank Sonnhild Namingha for the initial review of the paper.

Author information


Corresponding author

Correspondence to Rasmus Adler.


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper


Cite this paper

Adler, R., Klaes, M. (2022). Assurance Cases as Foundation Stone for Auditing AI-Enabled and Autonomous Systems: Workshop Results and Political Recommendations for Action from the ExamAI Project. In: Rauterberg, M., Fui-Hoon Nah, F., Siau, K., Krömker, H., Wei, J., Salvendy, G. (eds) HCI International 2022 – Late Breaking Papers: HCI for Today's Community and Economy. HCII 2022. Lecture Notes in Computer Science, vol 13520. Springer, Cham. https://doi.org/10.1007/978-3-031-18158-0_21

  • DOI: https://doi.org/10.1007/978-3-031-18158-0_21

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-18157-3

  • Online ISBN: 978-3-031-18158-0

  • eBook Packages: Computer Science (R0)
