Paper
Body part and imaging modality classification for a general radiology cognitive assistant
15 March 2019
Chinyere Agunwa, Mehdi Moradi, Ken C. L. Wong, Tanveer Syeda-Mahmood
Abstract
Decision support systems built for radiologists need to cover a fairly wide range of image types, with the ability to route each image to the relevant algorithm. Furthermore, training such networks requires building large datasets with significant effort in image curation. In situations where the DICOM tag of an image is unavailable or unreliable, a classifier that can automatically detect the body part depicted in the image, as well as the imaging modality, is necessary. Previous work has shown the use of imaging and textual features to distinguish between imaging modalities. In this work, we present a model for the simultaneous classification of body part and imaging modality, which to our knowledge has not been done before, as part of the larger effort to create a cognitive assistant for radiologists. The classification network covers 10 classes and is built on a VGG architecture, using transfer learning to learn generic features. An accuracy of 94.8% is achieved.
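The paper does not include code; as a rough illustrative sketch of the approach described in the abstract (a 10-class classifier built on a pretrained VGG backbone via transfer learning), one possible setup is shown below. The head layer sizes, optimizer, learning rate, and input resolution are assumptions for illustration, not details taken from the paper.

```python
import tensorflow as tf

NUM_CLASSES = 10  # body-part / modality classes (assumed label set)

# Pretrained VGG16 backbone used as a generic feature extractor
# (transfer learning: ImageNet weights, convolutional base frozen).
base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# Small classification head on top of the frozen backbone.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(4096, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Training would then be a standard supervised fit on curated,
# labeled radiology images, e.g.:
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels), epochs=10)
```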
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Chinyere Agunwa, Mehdi Moradi, Ken C. L. Wong, and Tanveer Syeda-Mahmood "Body part and imaging modality classification for a general radiology cognitive assistant", Proc. SPIE 10949, Medical Imaging 2019: Image Processing, 1094910 (15 March 2019); https://doi.org/10.1117/12.2513074
KEYWORDS: X-ray computed tomography, Medical imaging, Image classification, Network architectures, Brain, Image quality, Magnetic resonance imaging