Abstract
Radiologists routinely identify, measure, and classify clinically significant lesions for cancer staging and tumor burden assessment. Because these tasks are repetitive and cumbersome, often only the largest lesion is identified, leaving others of potential importance unmentioned. Automated deep learning-based methods for lesion detection have been proposed in the literature to ease this burden, many trained on the publicly available DeepLesion dataset (32,735 lesions, 32,120 CT slices, 10,594 studies, 4,427 patients, 8 body part labels). However, this dataset contains missing lesions and exhibits a severe class imbalance in its labels. In our work, we use a subset of the DeepLesion dataset (boxes + tags) to train a state-of-the-art VFNet model to detect and classify suspicious lesions in CT volumes. Next, we predict on a larger data subset (containing only bounding boxes) and identify new lesion candidates for a weakly-supervised self-training scheme. The self-training is run over multiple rounds to improve the model’s robustness against noise. We conducted two experiments, with static and variable thresholds during self-training, and show that sensitivity improves from 72.5% without self-training to 76.4% with it. We also provide a structured reporting guideline through a “Lesions” sub-section for entry into the “Findings” section of a radiology report. To our knowledge, we are the first to propose a weakly-supervised self-training approach for joint lesion detection and tagging that mines under-represented lesion classes in the DeepLesion dataset.
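To make the training scheme concrete, below is a minimal Python sketch of the weakly-supervised self-training loop as we describe it above. All names (Detection, train_detector, predict_boxes), the round count, and the threshold values are illustrative assumptions, not the authors' released implementation; the stub functions stand in for VFNet training and inference.

    from dataclasses import dataclass
    from typing import List
    import random

    @dataclass
    class Detection:
        box: tuple      # (x1, y1, x2, y2) pixel coordinates on a CT slice
        tag: str        # body-part label, e.g. "lung" or "liver"
        score: float    # detector confidence in [0, 1]

    def train_detector(annotations: List[Detection]) -> dict:
        # Stub standing in for training a VFNet detector on the annotations.
        return {"trained_on": len(annotations)}

    def predict_boxes(model: dict, volume) -> List[Detection]:
        # Stub standing in for running the trained detector over one CT volume.
        return [Detection((0, 0, 10, 10), "lung", random.random())]

    def self_train(labeled: List[Detection], weak_volumes: list,
                   rounds: int = 3, threshold: float = 0.9,
                   step: float = 0.05, variable: bool = True) -> dict:
        # Round 0: train only on the verified boxes + tags subset.
        model = train_detector(labeled)
        for _ in range(rounds):
            # Mine confident predictions on the weakly labeled volumes
            # as pseudo-annotations for the next round of training.
            mined = [d for vol in weak_volumes
                     for d in predict_boxes(model, vol)
                     if d.score >= threshold]
            model = train_detector(labeled + mined)
            if variable:
                # "Variable threshold" variant: relax the cutoff each round.
                threshold = max(0.5, threshold - step)
        return model

Under one plausible reading of the two experiments mentioned above, the static variant keeps the confidence cutoff fixed across rounds, while the variable variant lowers it as the detector becomes more reliable, admitting more mined candidates per round.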
Acknowledgements
This work was supported by the Intramural Research Program of the National Institutes of Health (NIH) Clinical Center.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Naga, V., Mathai, T.S., Paul, A., Summers, R.M. (2022). Universal Lesion Detection and Classification Using Limited Data and Weakly-Supervised Self-training. In: Zamzmi, G., Antani, S., Bagci, U., Linguraru, M.G., Rajaraman, S., Xue, Z. (eds) Medical Image Learning with Limited and Noisy Data. MILLanD 2022. Lecture Notes in Computer Science, vol 13559. Springer, Cham. https://doi.org/10.1007/978-3-031-16760-7_6
Print ISBN: 978-3-031-16759-1
Online ISBN: 978-3-031-16760-7