
Compositional Representation Learning for Brain Tumour Segmentation

  • Conference paper
  • In: Domain Adaptation and Representation Transfer (DART 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14293)

Abstract

For brain tumour segmentation, deep learning models can achieve human expert-level performance given a large amount of data and pixel-level annotations. However, obtaining pixel-level annotations for large amounts of data is expensive and not always feasible, and performance often degrades heavily in a low-annotation regime. To tackle this challenge, we adapt a mixed-supervision framework, vMFNet, to learn robust compositional representations using unsupervised learning and weak supervision alongside non-exhaustive pixel-level pathology labels. In particular, we use the BraTS dataset to simulate a collection of 2-point expert pathology annotations indicating the top and bottom slice of the tumour (or of the tumour sub-regions: peritumoural edema, GD-enhancing tumour, and the necrotic/non-enhancing tumour) in each MRI volume. From these annotations, we construct weak image-level labels that indicate the presence or absence of the tumour (or the tumour sub-regions) in each image. vMFNet then models the encoded image features with von Mises-Fisher (vMF) distributions via learnable, compositional vMF kernels that capture information about structures in the images. We show that good tumour segmentation performance can be achieved with a large amount of weakly labelled data and only a small amount of fully annotated data. Interestingly, anatomical structures emerge in the compositional representation even when supervision relates only to pathology (tumour).
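The 2-point annotation scheme described above can be sketched as follows. This is a hypothetical illustration under stated assumptions (function and variable names are ours, not the authors' code): a rater marks only the top and bottom slice containing tumour, and every slice between the two marks receives a positive image-level label.

```python
import numpy as np

def weak_labels_from_two_points(num_slices, top, bottom):
    """Derive per-slice presence/absence labels from a 2-point
    annotation marking the top and bottom tumour slices (inclusive)."""
    labels = np.zeros(num_slices, dtype=np.int64)
    labels[top:bottom + 1] = 1  # tumour assumed present between the two marks
    return labels

# e.g. a 10-slice volume with the tumour spanning slices 3..6
print(weak_labels_from_two_points(10, 3, 6))  # → [0 0 0 1 1 1 1 0 0 0]
```

The same construction would apply per sub-region (edema, enhancing tumour, necrotic core), yielding one weak label vector per structure and per volume.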


Notes

  1. The code for vMFNet is available at https://github.com/vios-s/vMFNet.
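As context for the note above: the abstract describes modelling encoded features with von Mises-Fisher distributions via learnable kernels. A minimal numpy sketch of the soft assignment step, assuming unit-normalised feature vectors, a fixed concentration κ shared across kernels, and kernel means already learned (all assumptions of this sketch, not details taken from the paper), might look like:

```python
import numpy as np

def vmf_responsibilities(features, kernels, kappa=20.0):
    """Soft-assign each feature vector to vMF kernels.

    features: (N, D) feature vectors, one per spatial location
    kernels:  (J, D) vMF mean directions
    The vMF density (up to its normalising constant) is exp(kappa * mu_j^T z_i),
    so responsibilities reduce to a softmax over kernels of scaled cosine scores.
    """
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    mu = kernels / np.linalg.norm(kernels, axis=1, keepdims=True)
    scores = kappa * z @ mu.T                      # (N, J) scaled cosine similarities
    scores -= scores.max(axis=1, keepdims=True)    # subtract row max for stability
    resp = np.exp(scores)
    return resp / resp.sum(axis=1, keepdims=True)  # each row sums to 1

resp = vmf_responsibilities(np.random.randn(4, 8), np.random.randn(3, 8))
print(resp.shape)  # → (4, 3)
```

The resulting per-location responsibility maps are what makes the representation compositional: each kernel tends to respond to one type of structure, which is consistent with the emergent anatomy the abstract reports.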


Acknowledgements

S.A. Tsaftaris acknowledges the support of Canon Medical and the Royal Academy of Engineering via the Research Chairs and Senior Research Fellowships scheme (grant RCSRF1819\8\25). Many thanks to Patrick Schrempf and Joseph Boyle for their helpful review comments.

Author information


Corresponding author

Correspondence to Xiao Liu.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Liu, X., Kascenas, A., Watson, H., Tsaftaris, S.A., O’Neil, A.Q. (2024). Compositional Representation Learning for Brain Tumour Segmentation. In: Koch, L., et al. Domain Adaptation and Representation Transfer. DART 2023. Lecture Notes in Computer Science, vol 14293. Springer, Cham. https://doi.org/10.1007/978-3-031-45857-6_5

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-45857-6_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-45856-9

  • Online ISBN: 978-3-031-45857-6

  • eBook Packages: Computer Science, Computer Science (R0)
