M-VAAL: Multimodal Variational Adversarial Active Learning for Downstream Medical Image Analysis Tasks

  • Conference paper

Medical Image Understanding and Analysis (MIUA 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14122)

Abstract

Acquiring properly annotated data is expensive in the medical field as it requires experts, time-consuming protocols, and rigorous validation. Active learning attempts to minimize the need for large numbers of annotated samples by actively sampling the most informative examples for annotation. These examples contribute significantly to improving the performance of supervised machine learning models, and thus, active learning can play an essential role in selecting the most appropriate information in deep learning-based diagnosis, clinical assessments, and treatment planning. Although some existing works have proposed methods for sampling the best examples for annotation in medical image analysis, they are not task-agnostic and do not use multimodal auxiliary information in the sampler, which has the potential to increase robustness. Therefore, in this work, we propose a Multimodal Variational Adversarial Active Learning (M-VAAL) method that uses auxiliary information from additional modalities to enhance the active sampling. We applied our method to two datasets: i) brain tumor segmentation and multi-label classification using the BraTS2018 dataset, and ii) chest X-ray image classification using the COVID-QU-Ex dataset. Our results show a promising direction toward data-efficient learning under limited annotations.
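The full method section is not reproduced here, so the sketch below only illustrates the general VAAL-style sampling idea the abstract alludes to (a variational autoencoder plus an adversarial discriminator that separates labeled from unlabeled data), extended so the encoder also consumes an auxiliary modality alongside the primary image. The network sizes, the two-channel input, and the `(index, image, auxiliary)` data-loader interface are illustrative assumptions, not the authors' M-VAAL implementation; the actual code is linked in the Notes below.

```python
# Minimal, illustrative sketch of a VAAL-style active sampler that also
# consumes an auxiliary modality (e.g. a second MR sequence). Architecture,
# tensor shapes, and the loader interface are assumptions for illustration,
# not the authors' exact M-VAAL implementation.
import torch
import torch.nn as nn


class MultimodalVAE(nn.Module):
    """Encodes a channel-wise concatenation of the primary and auxiliary modalities."""

    def __init__(self, in_ch=2, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.mu = nn.Linear(32, latent_dim)
        self.logvar = nn.Linear(32, latent_dim)
        self.dec = nn.Sequential(  # decoder sized for 64x64 inputs (assumption)
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, in_ch, 4, stride=2, padding=1),
        )

    def forward(self, x, aux):
        h = self.enc(torch.cat([x, aux], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar, z


class Discriminator(nn.Module):
    """Predicts whether a latent code comes from the labeled pool."""

    def __init__(self, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, z):
        return self.net(z)


def select_for_annotation(vae, disc, unlabeled_loader, budget, device="cpu"):
    """Pick the unlabeled samples the discriminator most confidently flags as unlabeled."""
    vae.eval(); disc.eval()
    scores, indices = [], []
    with torch.no_grad():
        for idx, x, aux in unlabeled_loader:  # assumed (index, image, auxiliary) batches
            _, mu, _, _ = vae(x.to(device), aux.to(device))
            # Low labeled-probability => "looks unlabeled" => most informative to annotate.
            scores.append(torch.sigmoid(disc(mu)).squeeze(1).cpu())
            indices.append(idx)
    scores, indices = torch.cat(scores), torch.cat(indices)
    picked = torch.argsort(scores)[:budget]
    return indices[picked].tolist()
```

In the full adversarial setup, the VAE would be trained with reconstruction and KL terms plus a loss that pushes unlabeled latents to fool the discriminator, while the discriminator learns to separate labeled from unlabeled codes; the selection step above then sends the samples the discriminator is least convinced are labeled to an annotator.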

Notes

  1. https://github.com/Bidur-Khanal/MVAAL-medical-images.

Acknowledgements.

Research reported in this publication was supported by the National Institute of General Medical Sciences Award No. R35GM128877 of the National Institutes of Health, and the Office of Advanced Cyber Infrastructure Award No. 1808530 of the National Science Foundation. BB and DS are supported by the Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) [203145Z/16/Z]; Engineering and Physical Sciences Research Council (EPSRC) [EP/P027938/1, EP/R004080/1, EP/P012841/1]; The Royal Academy of Engineering Chair in Emerging Technologies scheme; and the EndoMapper project by Horizon 2020 FET (GA 863146).

Author information

Corresponding author

Correspondence to Bidur Khanal.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Khanal, B., Bhattarai, B., Khanal, B., Stoyanov, D., Linte, C.A. (2024). M-VAAL: Multimodal Variational Adversarial Active Learning for Downstream Medical Image Analysis Tasks. In: Waiter, G., Lambrou, T., Leontidis, G., Oren, N., Morris, T., Gordon, S. (eds) Medical Image Understanding and Analysis. MIUA 2023. Lecture Notes in Computer Science, vol 14122. Springer, Cham. https://doi.org/10.1007/978-3-031-48593-0_4

  • DOI: https://doi.org/10.1007/978-3-031-48593-0_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-48592-3

  • Online ISBN: 978-3-031-48593-0

  • eBook Packages: Computer Science, Computer Science (R0)
