
Continuous Development and Safety Assurance Pipeline for ML-Based Systems in the Railway Domain

  • Conference paper
  • In: Computer Safety, Reliability, and Security. SAFECOMP 2024 Workshops (SAFECOMP 2024)

Abstract

Automated Driving Systems (ADS) will operate in an open world. Since it is difficult to specify all possible situations in this open world a priori, and since the world will most likely change during the system’s lifecycle, such a system requires agile MLOps cycles, including testing and validation in the field. In this paper, we present how the safe MLOps process proposed by Zeller et al. [25] is realized in the safe.trAIn project using Git-centric methods. At each stage of the safe MLOps process, appropriate tooling support is provided to enable the continuous development and safety assurance of ML-based systems in the railway domain.
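As a minimal sketch of such a Git-centric pipeline (the stage names and checks below are illustrative assumptions, not the actual safe.trAIn tooling): every commit to the model or dataset repository triggers an ordered chain of pipeline stages, and a failing quality gate blocks the release of the trained model.

# Minimal sketch of a Git-centric safe MLOps cycle (illustrative only;
# stage names and checks are assumptions, not the safe.trAIn tooling).
# Each commit to the model/dataset repository would trigger this chain,
# and a failing gate blocks release of the trained model.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    name: str
    gate: Callable[[], bool]  # returns True if the stage's quality gate passes

def run_pipeline(stages: List[Stage]) -> bool:
    """Run stages in order; stop at the first failing gate."""
    for stage in stages:
        if not stage.gate():
            print(f"gate failed at '{stage.name}' -- release blocked")
            return False
        print(f"stage '{stage.name}' passed")
    print("all gates passed -- model may be released")
    return True

# Hypothetical stages mirroring a continuous development and assurance cycle.
pipeline = [
    Stage("data_validation", lambda: True),   # dataset quality checks
    Stage("training", lambda: True),          # versioned, reproducible training
    Stage("evaluation", lambda: True),        # performance and robustness tests
    Stage("safety_assurance", lambda: True),  # safety-case evidence gate
]

if __name__ == "__main__":
    run_pipeline(pipeline)

In practice, each gate would invoke the project's data-quality, test, and safety-argument tooling and record its results as evidence alongside the versioned artifacts.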


Notes

  1. https://safetrain-projekt.de/en.

  2. https://cloud.google.com/architecture/mlops-continuous-delivery-and-automation-pipelines-in-machine-learning.

  3. https://karpathy.medium.com/software-2-0-a64152b37c35.

  4. https://karpathy.medium.com/software-2-0-a64152b37c35.

References

  1. Antony, J., et al.: D-ace: Dataset assessment and characteristics evaluation. https://github.com/Dependable-Intelligent-Systems-Lab/Dataset-Characteristics

  2. Bloomfield, R., Bishop, P.: Safety and assurance cases: Past, present and possible future–an Adelard perspective. In: Making Systems Safer: Proceedings of the 18th Safety-Critical Systems Symposium, pp. 51–67 (2010)


  3. Borg, M., et al.: Ergo, SMIRK is safe: a safety case for a machine learning component in a pedestrian automatic emergency brake system. Software Qual. J. 31, 335–403 (2023)


  4. Cheng, C.H., et al.: Quantitative projection coverage for testing ML-enabled autonomous systems. In: Automated Technology for Verification and Analysis, pp. 126–142 (2018)


  5. Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), pp. 886–893 (2005)


  6. EN 50126-1:2018-10: Railway Applications – The Specification and Demonstration of Reliability, Availability, Maintainability and Safety (RAMS) - Part 1: Generic RAMS Process (2018)


  7. Feng, X., Jiang, Y., Yang, X., Du, M., Li, X.: Computer vision algorithms and hardware implementations: a survey. Integration 69, 309–320 (2019)


  8. Geerkens, S., Sieberichs, C., Braun, A., Waschulzik, T.: \(\text{QI}^2\): an interactive tool for data quality assurance. AI Ethics 4(1), 141–149 (2024)


  9. Hawkins, R., et al.: Guidance on the assurance of machine learning in autonomous systems (AMLAS) (2021). https://doi.org/10.48550/ARXIV.2102.01564

  10. IEC 61508-1:2010-04: Functional Safety Of Electrical/electronic/programmable Electronic Safety-related Systems – Part 1: General requirements (2010)


  11. ISO 21448:2022-06: Road Vehicles - Safety of the Intended Functionality (2022)


  12. ISO/IEC 23053:2021-06: Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML) (2022)


  13. ISO/IEC DIS 5338: Information technology – Artificial intelligence – AI system life cycle processes (2022)


  14. Kelly, T., Weaver, R.: The goal structuring notation–a safety argument notation. In: Proceedings of the Dependable Systems and Networks 2004 (2004)


  15. LeCun, Y., et al.: Deep learning. Nature 521(7553), 436–444 (2015)


  16. Mattioli, J., et al.: Empowering the trustworthiness of ML-based critical systems through engineering activities (2022). https://arxiv.org/abs/2209.15438

  17. McInnes, L., Healy, J., Melville, J.: UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction (2020). http://arxiv.org/abs/1802.03426

  18. Schneider, D., et al.: WAP: Digital dependability identities. In: 2015 IEEE 26th International Symposium on Software Reliability Engineering (ISSRE), pp. 324–329 (2015)


  19. Sculley, D., et al.: Hidden technical debt in machine learning systems. In: Advances in Neural Information Processing Systems, vol. 28. Curran Associates, Inc. (2015)


  20. Sieberichs, C., Geerkens, S., Braun, A., Waschulzik, T.: ECS: an interactive tool for data quality assurance. AI Ethics 4(1), 131–139 (2024)


  21. Thirugnana Sambandham, V., Kirchheim, K., Ortmeier, F.: Evaluating and increasing segmentation robustness in CARLA. In: International Conference on Computer Safety, Reliability, and Security, pp. 390–396. Springer (2023). https://doi.org/10.1007/978-3-031-40953-0_33

  22. UL 4600 Ed. 3-2023: Evaluation Of Autonomous Products (2023)


  23. VDE-AR-E 2842-61-2 Anwendungsregel:2021-06: Development and Trustworthiness of Autonomous/cognitive Systems (2021)


  24. Weiss, G., et al.: Approach for argumenting safety on basis of an operational design domain. In: 3rd International Conference on AI Engineering - Software Engineering for AI, pp. 184–193 (2024)


  25. Zeller, M., et al.: Towards a safe MLOps process for the continuous development and safety assurance of ML-based systems in the railway domain. AI Ethics 4(1), 123–130 (2024)


  26. Zendel, O., et al.: RailSem19: a dataset for semantic rail scene understanding. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1221–1229 (2019)



Acknowledgment

This research received funding from the Federal Ministry for Economic Affairs and Climate Action (BMWK) and the European Union under grant agreement 19I21039A.

Author information


Corresponding author

Correspondence to Marc Zeller.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Zeller, M. et al. (2024). Continuous Development and Safety Assurance Pipeline for ML-Based Systems in the Railway Domain. In: Ceccarelli, A., Trapp, M., Bondavalli, A., Schoitsch, E., Gallina, B., Bitsch, F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2024 Workshops. SAFECOMP 2024. Lecture Notes in Computer Science, vol 14989. Springer, Cham. https://doi.org/10.1007/978-3-031-68738-9_36


  • DOI: https://doi.org/10.1007/978-3-031-68738-9_36

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-68737-2

  • Online ISBN: 978-3-031-68738-9

  • eBook Packages: Computer Science, Computer Science (R0)
