
Deep-learning-based detection and segmentation of organs at risk in nasopharyngeal carcinoma computed tomographic images for radiotherapy planning

  • Head and Neck
European Radiology

Abstract

Objective

Accurate detection and segmentation of organs at risk (OARs) in CT images is a key step in efficient radiation therapy planning for nasopharyngeal carcinoma (NPC). We developed a fully automated deep-learning-based method, termed the organs-at-risk detection and segmentation network (ODS net), for CT images and investigated its performance in automated detection and segmentation of OARs.

Methods

The ODS net consists of two convolutional neural networks (CNNs). The first CNN proposes organ bounding boxes along with confidence scores, and the second CNN uses the proposed bounding boxes to predict a segmentation mask for each organ. A total of 185 subjects were included in this study for statistical comparison. Sensitivity and specificity were computed to assess detection performance, and the Dice coefficient was used to quantify the overlap between automated and manual segmentations. Paired-samples t tests and analysis of variance were employed for statistical analysis.
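The Dice coefficient used above measures volumetric overlap between an automated mask and a manual reference mask. The sketch below is a generic NumPy implementation of that standard metric, not the authors' code; the array shapes and toy masks are illustrative assumptions:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy example: two overlapping 4x4 masks (4 and 6 foreground voxels)
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1
print(round(dice_coefficient(a, b), 3))  # 2*4/(4+6) → 0.8
```

In practice the metric is applied per organ to the full 3-D CT volume; a Dice value above 0.85, as reported here, indicates strong agreement with manual delineation.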

Results

The ODS net provided accurate detection, with a sensitivity of 0.997 to 1 for most organs and a specificity of 0.983 to 0.999. Furthermore, its segmentation results agreed strongly with manual segmentation, with a Dice coefficient above 0.85 for most organs. Averaged over all organs, the ODS net achieved a significantly higher Dice coefficient (0.861 ± 0.07) than a fully convolutional neural network (FCN) (0.8 ± 0.07; p = 0.0003). The Dice coefficients of each OAR did not differ significantly between patients of different T stages.
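The comparison reported above can be illustrated with a paired-samples t test on per-patient Dice scores. The data below are synthetic, generated only to mimic the reported magnitudes (means near 0.86 vs 0.80); they are not the study's measurements:

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
# Hypothetical per-patient Dice scores for two methods (illustrative only)
fcn = rng.normal(0.80, 0.03, size=30)            # baseline FCN
ods = fcn + rng.normal(0.06, 0.02, size=30)      # ODS net ~0.06 higher per patient

# Paired test: each patient is segmented by both methods, so the
# per-patient differences (not the pooled samples) drive the statistic.
t_stat, p_value = ttest_rel(ods, fcn)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
```

The paired design is the appropriate choice here because both methods are evaluated on the same 185 subjects, which removes between-patient variability from the comparison.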

Conclusion

The ODS net yielded accurate automated detection and segmentation of OARs in CT images and thereby may improve and facilitate radiotherapy planning for NPC.

Key Points

• A fully automated deep-learning method (ODS net) is developed to detect and segment OARs in clinical CT images.

• This deep-learning-based framework produces reliable detection and segmentation results and thus can be useful in delineating OARs in NPC radiotherapy planning.

• This deep-learning-based framework requires approximately 30 s to delineate a single image, which is suitable for clinical workflows.

Fig. 1 and Fig. 2 (available in the full article)


Abbreviations

CNN: Convolutional neural network

FCN: Fully convolutional neural network

GPU: Graphics processing unit

NPC: Nasopharyngeal carcinoma

OARs: Organs at risk

ODS net: Organs-at-risk detection and segmentation network


Acknowledgements

The authors thank the reviewers for their fruitful comments.

Funding

This study received funding from the National Natural Science Foundation of China (Grants No. 61671230 and No. 31271067), the Science and Technology Program of Guangdong Province (Grant No. 2017A020211012), the Guangdong Provincial Key Laboratory of Medical Image Processing (Grant No. 2014B030301042), and the Science and Technology Program of Guangzhou (Grant No. 201607010097).

Author information


Corresponding author

Correspondence to Yu Zhang.

Ethics declarations

Guarantor

The scientific guarantor of this publication is Yu Zhang.

Conflict of interest

The authors declare that they have no conflict of interest.

Statistics and biometry

No complex statistical methods were necessary for this paper.

Informed consent

Written informed consent was waived by the Institutional Review Board.

Ethical approval

Institutional Review Board approval was obtained.

Methodology

• retrospective

• experimental

• performed at one institution


About this article


Cite this article

Liang, S., Tang, F., Huang, X. et al. Deep-learning-based detection and segmentation of organs at risk in nasopharyngeal carcinoma computed tomographic images for radiotherapy planning. Eur Radiol 29, 1961–1967 (2019). https://doi.org/10.1007/s00330-018-5748-9
