Abstract
Most neural networks for organ segmentation are trained to recognize the appearance of the organ without considering its location in the body. A medical expert, however, would also include the context around the organ in their reasoning. In this work, we propose reproducing this human behavior by enhancing the conventional multi-class segmentation pipeline with additional anatomical information. We apply this concept to a ventral organ segmentation model that receives a vertebrae label map as additional input, and to a vertebrae segmentation model enhanced by ventral organ information. In both cases, our proposed label dependency approach improved the performance of the baseline models: the Dice score (DS) of the ventral organ segmentation improved by more than 3.5% and the vertebrae identification rate by 1.8%.
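The abstract does not specify how the anatomical label map is injected into the segmentation model; the minimal sketch below shows one common way such a "label dependency" could be realized, assuming the vertebrae label map is one-hot encoded and concatenated channel-wise with the CT volume before a 3D segmentation backbone (e.g. a 3D U-Net). All names, channel counts, and the stand-in backbone are illustrative assumptions, not taken from the paper.

# Hypothetical sketch (not the authors' code): feed an auxiliary anatomical
# label map to a 3D segmentation backbone as extra input channels.
import torch
import torch.nn.functional as F
from torch import nn


class LabelDependentSegNet(nn.Module):
    """Wraps a 3D segmentation backbone so it also receives an anatomical prior."""

    def __init__(self, backbone: nn.Module, num_aux_classes: int):
        super().__init__()
        self.backbone = backbone          # expects (1 + num_aux_classes) input channels
        self.num_aux_classes = num_aux_classes

    def forward(self, ct: torch.Tensor, aux_labels: torch.Tensor) -> torch.Tensor:
        # ct:         (B, 1, D, H, W) intensity volume
        # aux_labels: (B, D, H, W) integer label map of the auxiliary anatomy
        one_hot = F.one_hot(aux_labels.long(), self.num_aux_classes)   # (B, D, H, W, K)
        one_hot = one_hot.permute(0, 4, 1, 2, 3).float()               # (B, K, D, H, W)
        x = torch.cat([ct, one_hot], dim=1)                            # channel-wise concat
        return self.backbone(x)


if __name__ == "__main__":
    K = 25  # e.g. 24 vertebrae + background; illustrative count only
    # Stand-in backbone; a 3D U-Net with (1 + K) input channels would go here instead.
    backbone = nn.Conv3d(in_channels=1 + K, out_channels=6, kernel_size=3, padding=1)
    model = LabelDependentSegNet(backbone, num_aux_classes=K)
    ct = torch.randn(1, 1, 32, 64, 64)
    aux = torch.randint(0, K, (1, 32, 64, 64))
    print(model(ct, aux).shape)  # torch.Size([1, 6, 32, 64, 64])

In this sketch only the backbone's input channel count changes; a symmetric variant, with the roles of the ventral organ labels and the vertebrae labels swapped, would correspond to the second experiment described in the abstract.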
Copyright information
© 2023 The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature
About this paper
Cite this paper
De Benetti, F., Frasch, R., Venegas, L.F.R., Shi, K., Navab, N., Wendler, T. (2023). Enhancing Medical Image Segmentation with Anatomy-aware Label Dependency. In: Deserno, T.M., Handels, H., Maier, A., Maier-Hein, K., Palm, C., Tolxdorff, T. (eds) Bildverarbeitung für die Medizin 2023. BVM 2023. Informatik aktuell. Springer Vieweg, Wiesbaden. https://doi.org/10.1007/978-3-658-41657-7_12
DOI: https://doi.org/10.1007/978-3-658-41657-7_12
Publisher Name: Springer Vieweg, Wiesbaden
Print ISBN: 978-3-658-41656-0
Online ISBN: 978-3-658-41657-7
eBook Packages: Computer Science and Engineering (German Language)