
DeepEdit: Deep Editable Learning for Interactive Segmentation of 3D Medical Images

  • Conference paper
  • In: Data Augmentation, Labelling, and Imperfections (DALI 2022)

Abstract

Automatic segmentation of medical images is a key step in diagnostic and interventional tasks. However, achieving this requires large amounts of annotated volumes, and annotation can be a tedious and time-consuming task for expert annotators. In this paper, we introduce DeepEdit, a deep learning-based method for volumetric medical image annotation that allows automatic and semi-automatic segmentation as well as click-based refinement. DeepEdit combines two approaches in a single deep learning model: a non-interactive method (i.e. automatic segmentation using nnU-Net, UNET or UNETR) and an interactive segmentation method (i.e. DeepGrow). It allows easy integration of uncertainty-based ranking strategies (i.e. aleatoric and epistemic uncertainty computation) and active learning. We propose and implement a training scheme for DeepEdit that combines standard training with user interaction simulation. Once trained, DeepEdit allows clinicians to quickly segment their datasets by running the algorithm in automatic segmentation mode or by providing clicks via a user interface (i.e. 3D Slicer, OHIF). We show the value of DeepEdit through evaluation on the PROSTATEx dataset for prostate/prostatic lesion segmentation and the Multi-Atlas Labeling Beyond the Cranial Vault (BTCV) dataset for abdominal CT segmentation, using state-of-the-art network architectures as baselines for comparison. DeepEdit could reduce the time and effort of annotating 3D medical images compared with DeepGrow alone. Source code is available at https://github.com/Project-MONAI/MONAILabel.
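To make the combined automatic/interactive formulation concrete, the sketch below illustrates one common way such models are fed (as in DeepGrow-style methods): the image volume is stacked with two extra guidance channels that encode simulated foreground and background clicks, and zeroing those channels recovers the automatic segmentation mode. This is a minimal illustration under stated assumptions, not the reference implementation; the function names (make_guidance, simulate_clicks, build_input) and the Gaussian click encoding are hypothetical, and the actual code lives in the MONAI Label repository linked above.

import numpy as np

def make_guidance(shape, clicks, sigma=2.0):
    # Render a list of 3D click coordinates as a Gaussian heat-map channel.
    # (Hypothetical encoding; the paper's implementation may differ.)
    guidance = np.zeros(shape, dtype=np.float32)
    if not clicks:
        return guidance
    zz, yy, xx = np.indices(shape)
    for (z, y, x) in clicks:
        dist2 = (zz - z) ** 2 + (yy - y) ** 2 + (xx - x) ** 2
        guidance = np.maximum(guidance, np.exp(-dist2 / (2.0 * sigma ** 2)))
    return guidance

def build_input(image, fg_clicks, bg_clicks):
    # Stack image + foreground/background guidance into a 3-channel volume.
    # With empty click lists the guidance channels are all zeros, so the
    # model behaves like a non-interactive segmenter; with user clicks it
    # acts as an interactive refinement model.
    fg = make_guidance(image.shape, fg_clicks)
    bg = make_guidance(image.shape, bg_clicks)
    return np.stack([image, fg, bg], axis=0)

def simulate_clicks(label, num_clicks=3, rng=None):
    # Training-time user simulation: sample click locations from the
    # ground-truth label instead of a human annotator.
    rng = rng or np.random.default_rng()
    fg_voxels = np.argwhere(label > 0)
    bg_voxels = np.argwhere(label == 0)
    fg = [tuple(v) for v in fg_voxels[rng.choice(len(fg_voxels), num_clicks)]]
    bg = [tuple(v) for v in bg_voxels[rng.choice(len(bg_voxels), num_clicks)]]
    return fg, bg

# Example: a random 32^3 volume with a cubic "organ" label.
image = np.random.rand(32, 32, 32).astype(np.float32)
label = np.zeros_like(image)
label[10:20, 10:20, 10:20] = 1
fg, bg = simulate_clicks(label)
x = build_input(image, fg, bg)        # interactive sample, shape (3, 32, 32, 32)
x_auto = build_input(image, [], [])   # automatic sample: zeroed guidance
print(x.shape, x_auto.shape)

Mixing such interactive samples with zero-guidance samples during training is what lets a single network serve both the automatic and the click-based editing modes that the abstract describes.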




Author information

Correspondence to Andres Diaz-Pinto.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 10268 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Diaz-Pinto, A. et al. (2022). DeepEdit: Deep Editable Learning for Interactive Segmentation of 3D Medical Images. In: Nguyen, H.V., Huang, S.X., Xue, Y. (eds) Data Augmentation, Labelling, and Imperfections. DALI 2022. Lecture Notes in Computer Science, vol 13567. Springer, Cham. https://doi.org/10.1007/978-3-031-17027-0_2


  • DOI: https://doi.org/10.1007/978-3-031-17027-0_2


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-17026-3

  • Online ISBN: 978-3-031-17027-0

  • eBook Packages: Computer Science, Computer Science (R0)
