Abstract
More and more software-intensive systems include components that are data-driven in the sense that they use models based on artificial intelligence (AI) or machine learning (ML). Since the outcomes of such models cannot be assumed to always be correct, the related uncertainties must be understood and taken into account when decisions are made based on these outcomes. This applies, in particular, if such decisions affect the safety of the system. To date, however, hardly any AI-/ML-based model provides dependable estimates of the uncertainty remaining in its outcomes. To address this limitation, we present a framework for encapsulating existing models applied in data-driven components with an uncertainty wrapper that enriches the model outcome with a situation-aware and dependable uncertainty statement. The framework builds on existing work on the concept and mathematical foundation of uncertainty wrappers. Its application is illustrated using pedestrian detection as an example, a particularly safety-critical feature in the context of autonomous driving. The Brier score and its components are used to investigate how the key aspects of the framework (scoping, clustering, calibration, and confidence limits) influence the quality of the uncertainty estimates.
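For readers unfamiliar with the evaluation metric mentioned above, the following sketch illustrates how the Brier score and its classical Murphy decomposition into reliability, resolution, and uncertainty can be computed for binary outcomes such as pedestrian detection. It is a minimal illustration, not code from the paper; the function names, the NumPy-based implementation, and the equal-width binning of forecasts are assumptions made for this example.

import numpy as np

def brier_score(p, y):
    # p: predicted probabilities of the positive outcome, y: observed outcomes (0/1)
    p, y = np.asarray(p, dtype=float), np.asarray(y, dtype=float)
    return np.mean((p - y) ** 2)

def murphy_decomposition(p, y, n_bins=10):
    # Murphy (1973): Brier score = reliability - resolution + uncertainty.
    # The identity holds exactly for discrete forecast values; binning
    # continuous forecasts, as done here, yields an approximation.
    p, y = np.asarray(p, dtype=float), np.asarray(y, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bins = np.digitize(p, edges[1:-1])    # bin index 0..n_bins-1 per forecast
    o_bar = y.mean()                      # overall observed frequency
    rel, res = 0.0, 0.0
    for k in range(n_bins):
        mask = bins == k
        if not mask.any():
            continue
        n_k = mask.sum()
        rel += n_k * (p[mask].mean() - y[mask].mean()) ** 2
        res += n_k * (y[mask].mean() - o_bar) ** 2
    n = len(y)
    return rel / n, res / n, o_bar * (1.0 - o_bar)

In this reading, a well-calibrated uncertainty estimator has low reliability (its stated probabilities match observed error frequencies), while a situation-aware estimator has high resolution (it separates situations with different error frequencies); the uncertainty term depends only on the data.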
Acknowledgments
Parts of this work have been funded by the Ministry of Science, Education, and Culture of the German State of Rhineland-Palatinate in the context of the project MInD and by the Observatory for Artificial Intelligence in Work and Society (KIO) of the Denkfabrik Digitale Arbeitsgesellschaft in the project “KI Testing & Auditing”. We would especially like to thank Naveed Akram and Pascal Gerber for providing the dataset used to illustrate the framework application, and Jan Reich and Sonnhild Namingha for their initial review of the paper.