
Formally Compensating Performance Limitations for Imprecise 2D Object Detection

  • Conference paper
Computer Safety, Reliability, and Security (SAFECOMP 2022)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 13414))


Abstract

In this paper, we consider the imperfection within machine learning-based 2D object detection and its impact on safety. We address a particular sub-type of performance limitation: the misalignment of bounding-box predictions with the ground truth, i.e., the predicted bounding box cannot be perfectly aligned with the ground-truth box. We formally prove the minimum bounding-box enlargement factor required to cover the ground truth. We then demonstrate that this factor can be mathematically reduced, provided that the motion planner uses a fixed-length buffer in making its decisions. Finally, the difference between an empirically measured enlargement factor and our formally derived worst-case enlargement factor offers an interesting connection between quantitative evidence (demonstrated by statistics) and qualitative evidence (demonstrated by worst-case analysis) when arguing safety-relevant properties of machine learning functions.

T. Schuster and E. Seferis—Equal contribution.

This work is funded by the Bavarian Ministry for Economic Affairs, Regional Development and Energy as part of a project to support the thematic development of the Fraunhofer Institute for Cognitive Systems.


Notes

  1. Precisely, when the DNN performs poorly, i.e., when the intersection-over-union \(\alpha \) observed within the collected dataset is small, one needs to enlarge the bounding box more conservatively to ensure box coverage.

  2. Due to space limits, we refer readers to the extended version [16] for further details.

  3. Based on the analysis, the required expansion factor can be very large for low IoU values. For example, for \(\alpha = 0.4\), Eq. 9 gives an enlargement factor of \(k_{math} = 4\); a vehicle with a bounding box of length \(w = 5\,\mathrm {m}\) would thus be enlarged to \(w' = 20\,\mathrm {m}\), which is prohibitively large in practice. Hence, for meaningful practical applications, the implication of our result is the need for high IoU values within the collected dataset.
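The arithmetic in this example can be checked directly. A minimal sketch in Python, using only the values quoted in the note (the enlargement factor of 4 comes from Eq. 9, which is not reproduced here):

```python
# Worked example from the note above: for IoU alpha = 0.4, Eq. 9 of
# the paper yields a worst-case enlargement factor k_math = 4.
k_math = 4.0   # enlargement factor for alpha = 0.4 (value quoted from the note)
w = 5.0        # original bounding-box length in metres

w_enlarged = k_math * w  # enlarged box length
print(w_enlarged)        # 20.0, i.e. a 20 m box, prohibitively large in practice
```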

  4. https://github.com/DanielHfnr/Carla-Object-Detection-Dataset.

  5. https://carla.org/.

  6. If we assume that the occurrence of bounding-box non-alignment is a random variable, and that the measured mean and variance match the real ones, then Chebyshev's inequality gives that the probability of exceeding \(6\sigma _{W,data}\) is below 2.78%.
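The figure quoted in this note follows from the two-sided Chebyshev inequality, \(P(|X-\mu | \ge k\sigma ) \le 1/k^2\), evaluated at \(k = 6\). A minimal numerical check:

```python
# Chebyshev's inequality: for any random variable with finite mean and
# variance, P(|X - mu| >= k * sigma) <= 1 / k**2 for any k > 0.
k = 6.0
bound = 1.0 / k**2        # probability bound for a deviation beyond 6 sigma
print(f"{bound:.2%}")     # 2.78%, matching the figure in the note
```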

References

  1. Abrecht, S., Gauerhof, L., Gladisch, C., Groh, K., Heinzemann, C., Woehrle, M.: Testing deep learning-based visual perception for automated driving. ACM Trans. Cyber-Phys. Syst. 5(4), 1–28 (2021)


  2. Burton, S., Gauerhof, L., Heinzemann, C.: Making the case for safety of machine learning in highly automated driving. In: Tonetta, S., Schoitsch, E., Bitsch, F. (eds.) SAFECOMP 2017. LNCS, vol. 10489, pp. 5–16. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66284-8_1


  3. Cheng, C.-H., Huang, C.-H., Yasuoka, H.: Quantitative Projection Coverage for Testing ML-enabled Autonomous Systems. In: Lahiri, S.K., Wang, C. (eds.) ATVA 2018. LNCS, vol. 11138, pp. 126–142. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01090-4_8


  4. Cheng, C.H., Schuster, T., Burton, S.: Logically sound arguments for the effectiveness of ML safety measures. arXiv preprint arXiv:2111.02649 (2021)

  5. Houben, S., et al.: Inspect, understand, overcome: a survey of practical methods for AI safety. arXiv preprint arXiv:2104.14235 (2021)

  6. Huang, X., et al.: A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability. Comput. Sci. Rev. 37, 100270 (2020)


  7. Safety of the intended functionality - SOTIF (ISO/DIS 21448). Standard, International Organization for Standardization (2021)


  8. Jia, Y., Lawton, T., McDermid, J., Rojas, E., Habli, I.: A framework for assurance of medication safety using machine learning. arXiv preprint arXiv:2101.05620 (2021)

  9. Jocher, G., et al.: ultralytics/yolov5: v4.0 - nn.SiLU() activations, weights & biases logging, PyTorch hub integration. https://zenodo.org/record/4418161

  10. Koopman, P., Ferrell, U., Fratrik, F., Wagner, M.: A safety standard approach for fully autonomous vehicles. In: Romanovsky, A., Troubitsyna, E., Gashi, I., Schoitsch, E., Bitsch, F. (eds.) SAFECOMP 2019. LNCS, vol. 11699, pp. 326–332. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-26250-1_26

  11. Lin, T., et al.: Microsoft COCO: common objects in context. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8693, pp. 740–755. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10602-1_48

  12. Lyssenko, M., Gladisch, C., Heinzemann, C., Woehrle, M., Triebel, R.: From evaluation to verification: towards task-oriented relevance metrics for pedestrian detection in safety-critical domains. In: CVPR Workshop, pp. 38–45. IEEE (2021)


  13. Pei, K., Cao, Y., Yang, J., Jana, S.: DeepXplore: automated whitebox testing of deep learning systems. In: SOSP, pp. 1–18. ACM (2017)


  14. Pezzementi, Z., et al.: Putting image manipulations in context: robustness testing for safe perception. In: SSRR. pp. 1–8. IEEE (2018)


  15. Salay, R., Czarnecki, K., Kuwajima, H., Yasuoka, H., Nakae, T., Abdelzad, V., Huang, C., Kahn, M., Nguyen, V.D.: The missing link: Developing a safety case for perception components in automated driving. arXiv preprint arXiv:2108.13294 (2021)

  16. Schuster, T., Seferis, E., Burton, S., Cheng, C.H.: Unaligned but safe-formally compensating performance limitations for imprecise 2D object detection. arXiv preprint arXiv:2202.05123 (2022)

  17. Sun, Y., Huang, X., Kroening, D., Sharp, J., Hill, M., Ashmore, R.: Structural test coverage criteria for deep neural networks. ACM Trans. Embed. Comput. Syst. 18, 1–23 (2019)


  18. Volk, G., Gamerdinger, J., Bernuth, A.v., Bringmann, O.: A comprehensive safety metric to evaluate perception in autonomous systems. In: ITSC, pp. 1–8. IEEE (2020)


  19. Zhao, X., et al.: A Safety Framework for Critical Systems Utilising Deep Neural Networks. In: Casimiro, A., Ortmeier, F., Bitsch, F., Ferreira, P. (eds.) SAFECOMP 2020. LNCS, vol. 12234, pp. 244–259. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-54549-9_16


Corresponding author

Correspondence to Chih-Hong Cheng.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Schuster, T., Seferis, E., Burton, S., Cheng, CH. (2022). Formally Compensating Performance Limitations for Imprecise 2D Object Detection. In: Trapp, M., Saglietti, F., Spisländer, M., Bitsch, F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2022. Lecture Notes in Computer Science, vol 13414. Springer, Cham. https://doi.org/10.1007/978-3-031-14835-4_18


  • DOI: https://doi.org/10.1007/978-3-031-14835-4_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-14834-7

  • Online ISBN: 978-3-031-14835-4

  • eBook Packages: Computer Science, Computer Science (R0)
