Abstract
Automated Driving Systems (ADS) will operate in an open world. Since it is difficult to specify all possible situations of this open world a priori, and since the world will most likely change during the system’s lifecycle, ADS require agile MLOps cycles that include testing and validation in the field. In this paper, we present how the safe MLOps process proposed by Zeller et al. [25] is realized in the safe.trAIn project using Git-centric methods. Appropriate tool support is provided at each stage of the safe MLOps process to enable the continuous development and safety assurance of ML-based systems in the railway domain.
References
Antony, J., et al.: D-ACE: dataset assessment and characteristics evaluation. https://github.com/Dependable-Intelligent-Systems-Lab/Dataset-Characteristics
Bloomfield, R., Bishop, P.: Safety and assurance cases: Past, present and possible future–an Adelard perspective. In: Making Systems Safer: Proceedings of the 18th Safety-Critical Systems Symposium, pp. 51–67 (2010)
Borg, M., et al.: Ergo, SMIRK is safe: a safety case for a machine learning component in a pedestrian automatic emergency brake system. Software Qual. J. 31, 335–403 (2023)
Cheng, C.H., et al.: Quantitative projection coverage for testing ML-enabled autonomous systems. In: Automated Technology for Verification and Analysis, pp. 126–142 (2018)
Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), pp. 886–893 (2005)
EN 50126-1:2018-10: Railway Applications – The Specification and Demonstration of Reliability, Availability, Maintainability and Safety (RAMS) – Part 1: Generic RAMS Process (2018)
Feng, X., Jiang, Y., Yang, X., Du, M., Li, X.: Computer vision algorithms and hardware implementations: a survey. Integration 69, 309–320 (2019)
Geerkens, S., Sieberichs, C., Braun, A., Waschulzik, T.: QI²: an interactive tool for data quality assurance. AI Ethics 4(1), 141–149 (2024)
Hawkins, R., et al.: Guidance on the assurance of machine learning in autonomous systems (AMLAS) (2021). https://doi.org/10.48550/ARXIV.2102.01564
IEC 61508-1:2010-04: Functional Safety of Electrical/Electronic/Programmable Electronic Safety-Related Systems – Part 1: General Requirements (2010)
ISO 21448:2022-06: Road Vehicles – Safety of the Intended Functionality (2022)
ISO/IEC 23053:2022-06: Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML) (2022)
ISO/IEC DIS 5338: Information technology – Artificial intelligence – AI system life cycle processes (2022)
Kelly, T., Weaver, R.: The goal structuring notation–a safety argument notation. In: Proceedings of the Dependable Systems and Networks 2004 (2004)
LeCun, Y., et al.: Deep learning. Nature 521(7553), 436–444 (2015)
Mattioli, J., et al.: Empowering the trustworthiness of ML-based critical systems through engineering activities (2022). https://arxiv.org/abs/2209.15438
McInnes, L., Healy, J., Melville, J.: UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction (2020). http://arxiv.org/abs/1802.03426
Schneider, D., et al.: WAP: digital dependability identities. In: 2015 IEEE 26th International Symposium on Software Reliability Engineering (ISSRE), pp. 324–329 (2015)
Sculley, D., et al.: Hidden technical debt in machine learning systems. In: Advances in Neural Information Processing Systems, vol. 28. Curran Associates, Inc. (2015)
Sieberichs, C., Geerkens, S., Braun, A., Waschulzik, T.: ECS: an interactive tool for data quality assurance. AI Ethics 4(1), 131–139 (2024)
Thirugnana Sambandham, V., Kirchheim, K., Ortmeier, F.: Evaluating and increasing segmentation robustness in CARLA. In: International Conference on Computer Safety, Reliability, and Security, pp. 390–396. Springer (2023). https://doi.org/10.1007/978-3-031-40953-0_33
UL 4600 Ed. 3-2023: Evaluation of Autonomous Products (2023)
VDE-AR-E 2842-61-2 Anwendungsregel:2021-06: Development and Trustworthiness of Autonomous/Cognitive Systems (2021)
Weiss, G., et al.: Approach for arguing safety on the basis of an operational design domain. In: 3rd International Conference on AI Engineering – Software Engineering for AI, pp. 184–193 (2024)
Zeller, M., et al.: Towards a safe MLops process for the continuous development and safety assurance of ML-based systems in the railway domain. AI Ethics 4(1), 123–130 (2024)
Zendel, O., et al.: RailSem19: a dataset for semantic rail scene understanding. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1221–1229 (2019)
Acknowledgment
This research received funding from the Federal Ministry for Economic Affairs and Climate Action (BMWK) and the European Union under grant agreement 19I21039A.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Zeller, M. et al. (2024). Continuous Development and Safety Assurance Pipeline for ML-Based Systems in the Railway Domain. In: Ceccarelli, A., Trapp, M., Bondavalli, A., Schoitsch, E., Gallina, B., Bitsch, F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2024 Workshops. SAFECOMP 2024. Lecture Notes in Computer Science, vol 14989. Springer, Cham. https://doi.org/10.1007/978-3-031-68738-9_36
DOI: https://doi.org/10.1007/978-3-031-68738-9_36
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-68737-2
Online ISBN: 978-3-031-68738-9
eBook Packages: Computer Science, Computer Science (R0)