

ARTICLE IN PRESS

Original Investigation

Deep Learning-based Quantification of Abdominal Subcutaneous and Visceral Fat Volume on CT Images
Andrew T. Grainger, PhD, Arun Krishnaraj, MD, MPH, Michael H. Quinones, MD,
Nicholas J. Tustison, PhD, Samantha Epstein, MD, Daniela Fuller, BA, Aakash Jha, BA,
Kevin L. Allman, BA, Weibin Shi, MD, PhD

Rationale and Objectives: Develop a deep learning-based algorithm using the U-Net architecture to measure abdominal fat on computed
tomography (CT) images.
Materials and Methods: Sequential CT images spanning the abdominal region of seven subjects were manually segmented to calculate
subcutaneous fat (SAT) and visceral fat (VAT). The resulting segmentation maps of SAT and VAT were augmented using a template-based
data augmentation approach to create a large dataset for neural network training. Neural network performance was evaluated on both
sequential CT slices from three subjects and randomly selected CT images from the upper, central, and lower abdominal regions of 100
subjects.
Results: Both subcutaneous and abdominal cavity segmentation images created by the two methods were highly comparable, with an overall Dice similarity coefficient of 0.94. Pearson's correlation coefficients between the subcutaneous and visceral fat volumes quantified using the two methods were 0.99 and 0.99, and the overall percent residual squared errors were 5.5% and 8.5%, respectively. Manual segmentation of SAT and VAT on the 555 CT slices used for testing took approximately 46 hours, while automated segmentation took approximately 1 minute.
Conclusion: Our data demonstrate that deep learning methods utilizing a template-based data augmentation strategy can be employed to accurately and rapidly quantify total abdominal SAT and VAT with a small number of training images.

Key Words: Deep learning; artificial intelligence; visceral fat; obesity.


© 2020 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

Abbreviations: SAT = subcutaneous fat; VAT = visceral fat

INTRODUCTION

Obesity, defined as excessive fat accumulation in the body, is a growing global epidemic, with particular impact on the US population, which has experienced a marked increase in obesity levels over the last 50 years (1,2). Obesity is a public health problem due to its associated increased risk for a variety of chronic diseases including metabolic syndrome, type 2 diabetes, cardiovascular diseases, and cancer (3). Anthropometric measurements such as body mass index (BMI), waist circumference, and waist-to-hip ratio have historically been used to diagnose obesity. However, these indirect measurements do not account for weight from skeletal muscle, nor do they distinguish between differential fat distributions such as visceral and subcutaneous fat. Distribution of fat is a key variable when assessing obesity, as greater distribution of visceral fat has been linked to more deleterious cardiovascular outcomes than total body fat or BMI alone (4–6).

Computed tomography (CT) is an imaging modality that permits easy distinction between fat and other tissues and thus allows for accurate measurement of fat and non-fat tissue amounts in the body (6). Quantification of body fat volume using CT involves analysis of multiple slices across the region of interest, a laborious task if done manually, and hence not typically performed in routine CT interpretation.

Deep learning using convolutional neural networks has gained recent popularity in the literature for tackling problems in a multitude of areas, including image recognition, classification, and segmentation (7). Development of deep learning algorithms relies on large cohorts of training data to identify important features of targets for predictions in new data.

Acad Radiol 2020; &:1–7
From the Departments of Biochemistry & Molecular Genetics, Richmond, Virginia (A.T.G., W.S.); Radiology & Medical Imaging, School of Medicine, Virginia (A.K., M.H.Q., N.J.T., S.E., W.S.); School of Engineering and Applied Science, University of Virginia, 480 Ray C. Hunt Drive, Charlottesville, VA 22908 (D.F., A.J., K.L.A.). Received May 16, 2020; revised July 2, 2020; accepted July 6, 2020. Address correspondence to: W.S. e-mail: ws4v@virginia.edu
https://doi.org/10.1016/j.acra.2020.07.010

To simultaneously expedite the process of CT segmentation and quantification while reducing subjective influences from observers, several semi- and fully-automated algorithms have been developed for quantifying body fat on CT (8–15). However, nearly all of the previously published algorithms are dependent on expert knowledge for tuning the features of images or focus on a single or few slices within the abdominal scan.

ANTsRNet is a collection of deep learning network architectures ported to the R language and built on the Keras framework (16). We previously applied ANTsRNet to provide a comprehensive protocol for automatically segmenting total abdominal subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) using mouse Magnetic Resonance images (17). This was accomplished through the use of a novel technique designed by the Advanced Normalization Tools team utilizing template-based data augmentation to create a large training set using a small number of images. Here, we have applied the same techniques to test the hypothesis that deep learning using template-based data augmentation could accurately and rapidly quantify total abdominal SAT and VAT on human CT images. We have also evaluated the relationship of SAT and VAT volumes with BMI in a cohort.

METHODS

CT Images. 110 unique abdominal CT scans of 110 patients (half men, half women), aged 60 ± 16 years (range: 19–93 years), were randomly selected and retrieved through the Picture Archiving and Communication System (PACS) at the University of Virginia. No exclusion or inclusion criteria were used when selecting patients, as the goal was to develop an algorithm that could properly segment any abdominal CT image that it encountered. Scans used in this study were taken from 2008 to 2019. Patient characteristics can be seen in Table 1. The scan parameters varied among patients, with a tube current of 36–249 mA, slice thicknesses of 2.82 ± 1.75 mm (1.25–5 mm), and a tube voltage of 120 kV. Average BMI was 28.1 ± 0.8 kg/m2 (range: 17.2–52.9) for men and 28.8 ± 1.1 kg/m2 (range: 17.2–52.9) for women. This large variation in tube current, slice thickness, and BMI was chosen to ensure the algorithm could properly segment a diverse set of CT images. Additionally, a few studies with excessive artifacts were purposely included to test the ability to segment high-artifact images. All procedures were conducted in compliance with the Health Insurance Portability and Accountability Act and were included within an IRB-approved retrospective study protocol. CT images were de-identified to protect patient identity.

TABLE 1. Patient Characteristics

Factor          Number
Male            55
Female          55
Age             60 ± 16
BMI (Male)      28.1 ± 0.8
BMI (Female)    28.8 ± 1.1

Age and BMI are expressed as Mean ± SD.

Manual segmentation and quantification. No access to semi-automatic segmentation programs was available for our use; therefore, the areas corresponding to the subcutaneous fat and the abdominal cavity on each of the training CT images were manually segmented by five coauthors on this article using Image J (18). The coauthors who did the manual segmentation had no medical background, but their work was performed under the supervision of an experienced radiologist. Fat can be readily distinguished from non-fat tissues on CT images in density, shape, and location. CT images were adjusted through windowing on the PACS workstation to a gray scale at which fat was visually distinguishable from non-fat components (bone, air, background, soft, and watery tissue). A bone window was found to be optimal in distinguishing fat from non-fat tissues.

For quantification of fat, we developed an Image J-based strategy using thresholding around a static intensity window corresponding to fat on bone window-adjusted images. A flowchart explaining the steps needed to quantify total fat can be seen in Figure S1.

For segmentation of subcutaneous fat (SAT), we manually outlined the area between the skin and the abdominal muscles. Thresholding the image on specified values (82–97) and quantifying the fat area within this selection represents the SAT volume. Visceral fat (VAT) is defined as the fat within the abdominal cavity. Due to its irregular shape and extensive distribution in the abdominal cavity, VAT was difficult to manually segment. Therefore, the area corresponding to the abdominal cavity was outlined and the VAT volume was calculated through the quantification of fat within this selection. Nonfat area was determined through thresholding of the image to include all nonfat tissues (97–255).

Automatic Measurement. The creation of an automated method for measurement of abdominal fat volume consists of multiple steps, including training data preparation, template-based data augmentation, and fat quantification. We employed a strategy similar to the one previously used for segmenting and quantifying abdominal fat of mice on MR images (17). The complete flowchart of the process is shown in Figure S2.

Template-Based Data Augmentation and Training: The need for large training data sets is a known major limitation associated with development of deep learning algorithms (7). To achieve a training data set size that is sufficient for properly segmenting total and subcutaneous fat, we employed a template-based data augmentation strategy that we previously used for segmenting abdominal fat of mice on MR images (17). Multiple rounds of training were performed using an increasing number of patients until we were satisfied that the training weights could accurately segment the testing set. We aimed to include the smallest number of patients possible to


highlight the power of the template-based data augmentation strategy that was used.

Six hundred and thirteen CT images covering the entire abdominal region of seven individuals were selected for training. Of the seven subjects, two females and three males had a normal BMI and one female and one male met BMI criteria for obesity. Original CT images were adjusted at the PACS to bone windows and saved as ".tiff" files. While DICOM images work as well, ".tiff" files were chosen for training and subsequent validation due to their reduced file size, the de-identification of patient information, and similar ease of image saving. These images were converted to the Nifti (.nii.gz) format using the ANTs toolkit (https://github.com/ANTsX/ANTs). Each converted image was segmented into two contoured areas, one for SAT and one for abdominal muscle plus its encircled abdominal cavity, using the open-source segmentation tool ITK-SNAP and saved as a separate segmentation image.

Training was performed using a U-net-based model with the ANTsRNet and Keras packages for R using a Tensorflow backend, as was done previously (17).

Validation dataset: The accuracy of the deep learning-based algorithm in segmenting subcutaneous fat was validated with a group of images consisting of a combination of CT images from three full abdominal scans taken from separate subjects and randomly selected CT images from the upper, central, and lower abdominal regions of 100 subjects (555 total images). The images from three full scans were chosen to validate that the algorithm could properly segment images across multiple individual scans. The images from 100 subjects were included to validate that the algorithm could properly segment images from a diverse set of individuals. Manual measurement results were used as the ground truth for comparisons with the automated measurement results. CT images were prepared as described above and subsequently input into the trained U-net. Novel segmentation images generated from the training were evaluated for accuracy in quantification of SAT and VAT using a macro developed for the Fiji package for Image J (19). The steps for quantifying subcutaneous and visceral fat using the macro are depicted in Figure S3.

Computational Time. Manual segmentation of total and subcutaneous fat on the 555 images used in testing took approximately 46 hours, compared to approximately 1 minute using the deep learning-based algorithm for the same CT images.

Statistical Analysis. Comparisons were made between the automated and manual methods in quantification of visceral and subcutaneous fat volumes. The Dice metric was used to determine the similarity between a manually generated segmentation image and an automatically generated one. If two segmentations completely overlap, the Dice score is 1; it is 0 if there is no overlap. This Dice score was determined using the "Label Overlap Measures" function of the ANTs toolkit. The residual was determined from the difference between manually measured and automatically measured fat volume for each slice. In addition, Pearson's correlation analysis was done to determine correlations between the manually and automatically generated fat volumes and between fat volumes and BMI, as reported (17). These residual and Pearson's correlation analyses were performed in R.

RESULTS

U-net-Based Deep Learning for Subcutaneous Fat Selection. The U-net-based algorithm successfully generated the selections designating the subcutaneous fat region and the abdominal cavity across an entire abdominal scan, and these were highly consistent with the manually generated selections created for the same input images (Fig 1).

Comparison of Fat Volume Measured from Manual and Deep Learning-Based Methods. Initial validation was done on SAT volume measurements from the consecutive CT slices of three patients to determine whether the algorithm could successfully generate segmentation images across an entire abdominal scan. As shown in Figure 2a, SAT volumes measured on sequential slices were comparable between the two methods in all three patients. For each of these scans, the total SAT volumes were also comparable between the two methods (Fig 2b). The difference for scan 1 was 59,155 mm3 or 0.97% of the total volume, the difference for scan 2 was 692,520 mm3 or 5.4%, and the difference for scan 3 was 214,197 mm3 or 2.1% of the total volume. The average difference was 28,252 mm3 or 3.0% of the total volume.

Performance of the algorithm was further validated with CT images from an additional 100 subjects randomly selected from the University's PACS database. One image was randomly

Figure 1. The accuracy of deep learning in delineating areas containing subcutaneous (SAT) and visceral fat (VAT) on CT images at multiple levels. Representative images taken every 20 slices from one of the three full scans included in the validation dataset show consistency between the manual and automatic methods in segmenting SAT and VAT on CT images. The red area denotes SAT and the green area denotes VAT. Predicted segmentation: segmentation made by deep learning.
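The per-slice fat measurement behind both the manual and predicted selections is intensity thresholding: on the bone-window-adjusted 8-bit images, the Methods treat pixel values of 82–97 as fat. A minimal sketch of that counting step in Python/NumPy (the study itself used Image J macros; the toy array and pixel size below are invented for illustration):

```python
import numpy as np

# Intensity window for fat on the 8-bit, bone-window-adjusted images,
# taken from the Methods; the function and example region are illustrative.
FAT_LO, FAT_HI = 82, 97

def fat_area_mm2(region: np.ndarray, pixel_area_mm2: float) -> float:
    """Fat area inside an outlined region (2-D array of 8-bit gray values)."""
    fat_mask = (region >= FAT_LO) & (region <= FAT_HI)
    return float(fat_mask.sum()) * pixel_area_mm2

# Toy 3x3 "region": exactly three pixels fall inside the fat window.
region = np.array([[90, 120, 85],
                   [200, 95, 30],
                   [10, 150, 255]], dtype=np.uint8)

print(fat_area_mm2(region, pixel_area_mm2=0.49))  # 3 fat pixels x 0.49 mm^2
```

The same mask with the complementary window (97–255) would give the nonfat area used for normalization later in the paper.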


Figure 2. Comparison between the automated and manual segmentation methods in quantification of SAT volumes. (a) Comparison of SAT volumes measured from sequential CT images from upper (slice 1) to lower abdominal region of three individual subjects (black = subject 1, red = subject 2, green = subject 3; solid = manual, hollow = automated). Each symbol represents an individual slice. (b) Comparison between the manual and automated methods in measurements of total SAT volumes from three individual scans. (c) Bland-Altman plot for all images in the validation set (n = 555 images, 3 full scans + 3 images from each of 100 individuals). Each dot represents a single image. The orange solid line identifies the mean difference between fat volumes calculated using the manually generated or automatically generated segmentation images (m). The lower solid red line identifies the Bland-Altman lower limit of agreement (−2s), and the upper red solid line identifies the Bland-Altman upper limit of agreement (+2s). (Color version of figure is available online.)
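The Bland-Altman quantities in this figure reduce to the per-image difference between manual and automated volumes, its mean (m), and limits of agreement at m ± 2 standard deviations. A sketch under those definitions, with invented volumes (the paper's analyses were run in R):

```python
# Hypothetical per-image volumes (mm^3); only the formulas mirror the figure.
manual    = [1200.0, 1500.0, 900.0, 1100.0, 1300.0]
automated = [1180.0, 1540.0, 880.0, 1120.0, 1290.0]

# Per-image difference, mean difference, and sample SD of the differences.
diffs = [m - a for m, a in zip(manual, automated)]
n = len(diffs)
mean_diff = sum(diffs) / n
sd = (sum((d - mean_diff) ** 2 for d in diffs) / (n - 1)) ** 0.5

# Limits of agreement as drawn in the plot: m - 2s and m + 2s.
lower, upper = mean_diff - 2 * sd, mean_diff + 2 * sd
print(mean_diff, lower, upper)
```

Points outside [lower, upper] correspond to the "outside 2s from the mean" images counted in the Results.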

chosen from each of the upper, middle, and lower abdominal regions of an individual. For 12 CT slices, quantification of adipose tissue compartments was impossible with the automated method because the subcutaneous fat area was discontinuous or the muscle layer was incomplete. The image training did not include images in which the SAT was discontinuous, and therefore the algorithm assumes a continuous SAT layer. When a continuous SAT layer was not present, it artificially created one and over-represented the area in which the SAT was predicted to reside. A Bland-Altman plot combining the three full scans and the remaining images from the 100 individuals shows a high degree of accuracy for the predicted SAT volumes, with only 4.8% of the images being outside 2s from the mean (Fig 2c).

The volumes of VAT on sequential slices for the three full scans were also comparable between the two methods (Fig 3a). The difference between the total VAT volumes was also small (Fig 3b). The difference for scan 1 was 59,155 mm3 or 2.7%, the difference for scan 2 was 692,520 mm3 or 9.3%, and the difference for scan 3 was 214,197 mm3 or 5.3%. The average difference between fat volumes was 282,521 mm3, or 6.2%. A Bland-Altman plot combining these three scans with the additional 300 images from 100 individuals shows a high degree of accuracy for predicted VAT volumes, with 4.8% of the images being outside 2s from the mean (Fig 3c).

Pearson's correlation was performed and residual squared error was calculated on fat volume measurements for both SAT and VAT for additional validation (Table 2). There was a high degree of agreement between the SAT (R2 = 0.994, p = 2.49E-217) and VAT (R2 = 0.989, p = 8.85E-193) volumes quantified from predicted or manually generated segmentation images. The average residual squared error for SAT was 5.494% and for VAT was 8.510%, further confirming the algorithm's ability to accurately quantify fat from all images in an abdominal CT scan.

In addition to fat volume comparisons, Dice coefficient values were calculated for the generated segmentation images to measure the level of similarity in the images themselves. The average Dice coefficient value was 0.94, suggesting a high degree of similarity in selection shape and area (min Dice = 0.80; max Dice = 0.98) (Table 2).

Correlations between Abdominal Fat Volumes and Body Mass Index (BMI). Correlations of total, SAT, and VAT


Figure 3. Comparison between the automated and manual segmentation methods in quantification of VAT volumes. (a) Comparison of VAT volumes measured from sequential CT images from upper (slice 1) to lower abdominal region of three individual subjects (black = subject 1, red = subject 2, green = subject 3; solid = manual, hollow = automated). Each symbol represents an individual slice. (b) Comparison between the manual and automated methods in measurements of total VAT volumes from three individual scans. (c) Bland-Altman plot for all images in the validation set (n = 555 images, 3 full scans + 3 images from each of 100 individuals). Each dot represents a single image. The orange solid line identifies the mean difference between fat volumes calculated using the manually generated or automatically generated segmentation images (m). The lower solid red line identifies the Bland-Altman lower limit of agreement (−2s), and the upper red solid line identifies the Bland-Altman upper limit of agreement (+2s). (Color version of figure is available online.)
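The scan-level totals compared in panels (a) and (b) come from integrating per-slice fat areas over slice thickness, after which the two methods can be compared as a percent difference of the manual total. A sketch with invented slice areas (in the study, areas came from Image J and the trained U-net, and slice thickness varied by scan):

```python
def total_volume_mm3(areas_mm2, thickness_mm):
    # Sum of (slice area x slice thickness) over all slices of a scan.
    return sum(a * thickness_mm for a in areas_mm2)

# Hypothetical per-slice fat areas (mm^2) for one short scan.
manual_areas    = [2000.0, 2100.0, 2050.0, 1900.0]
predicted_areas = [1980.0, 2120.0, 2010.0, 1930.0]

v_manual = total_volume_mm3(manual_areas, thickness_mm=2.5)
v_pred   = total_volume_mm3(predicted_areas, thickness_mm=2.5)

# Percent difference relative to the manual total, as reported per scan.
pct_diff = abs(v_manual - v_pred) / v_manual * 100.0
print(v_manual, v_pred, round(pct_diff, 3))
```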

volumes with BMI were calculated using data from the above validation cohort. Total fat volumes for the abdominal region from the base of the lung to the pelvic brim (T12-L5) were measured using the automated method, where visceral fat is typically measured with CT (20,21). BMI was significantly correlated with total (R2 = 0.145; p < 0.001) and SAT volumes (R2 = 0.246; p < 0.001) (Figs 4a-b). There was no correlation between BMI and VAT volume (R2 = 0.0134; p = 0.144; Fig 4c). Because the amounts of abdominal fat vary between individuals, fat volume was normalized by non-fat mass for all subjects to account for the influence of abdominal dimensions on individual variations in abdominal fat. After normalization with nonfat mass, total fat showed an improved association with BMI and SAT showed a reduced association with BMI based on R2 and p values (Figs 4d-e). No correlation was found between VAT volume and BMI (Fig 4f).

DISCUSSION

Our study successfully applies a deep learning-based approach to the measurement of abdominal fat on CT scans that is both accurate and rapid. Volumes of visceral and subcutaneous fat measured with our algorithm have shown a high degree of consistency with those measured by the manual

TABLE 2. Validation of the Deep Learning-Based Algorithm for Accuracy in Quantitation of Subcutaneous (SAT) and Visceral Fat (VAT) on CT Images

                            Average Dice     SAT volume   SAT volume   VAT volume   VAT volume   Percent RSE   Percent RSE
                            value            R2           p-value      R2           p-value      SAT volume    VAT volume
All 555 validation images   0.944 ± 0.002    0.994        2.49E-217    0.989        8.85E-193    5.494         8.510

The Dice score (mean ± SE), correlation coefficients, and percent Residual Standard Error (RSE) are used for comparing the similarity of the values of subcutaneous (SAT) and visceral fat (VAT) volumes measured by the manual and deep learning methods on the same CT images.
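The Dice score summarized in Table 2 measures overlap between the manual and predicted masks: 2|A∩B| / (|A| + |B|), equal to 1 for identical masks and 0 for disjoint ones. The paper computed it with the ANTs "Label Overlap Measures" tool; this NumPy version of the same formula, on tiny made-up masks, is only an illustration:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy masks: |A| = 3, |B| = 3, overlap = 2 pixels.
manual    = np.array([[1, 1, 0], [1, 0, 0]])
predicted = np.array([[1, 1, 0], [0, 0, 1]])
print(dice(manual, predicted))  # 2*2 / (3+3) = 0.666...
```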


Figure 4. Correlations of fat volumes quantified from all slices in the central abdominal region with BMI in a validation cohort of 100 patients. Each point represents values of an individual subject. The correlation coefficient (R) and significance (P) are shown. (a, b, c) Total, subcutaneous, and visceral fat volumes are expressed in real values calculated using the automated method. (d, e, f) Total, subcutaneous, and visceral fat volumes are normalized by dividing by nonfat volume on the same CT slice.
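Panels (d–f) divide each fat volume by the nonfat volume of the same slices before correlating with BMI. A sketch of that normalization plus Pearson's r in pure Python, with invented values (the study ran these correlations in R):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-subject volumes (mm^3) and BMI values.
fat_vol    = [3000.0, 4500.0, 5200.0, 2800.0]
nonfat_vol = [9000.0, 9500.0, 8800.0, 10000.0]
bmi        = [22.0, 27.0, 31.0, 21.0]

# Normalization used in panels d-f: fat volume / nonfat volume per subject.
normalized = [f / nf for f, nf in zip(fat_vol, nonfat_vol)]
print(round(pearson_r(normalized, bmi), 3))
```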

method, the current gold standard. By combining full abdominal scans with a template-based data augmentation strategy, we have successfully developed a unique method to quantify total SAT and VAT while training on only a small number of images. The ability to measure total body fat and fat distribution is superior to common anthropometric measurements such as BMI, waist circumference, and waist-hip ratio, which cannot accurately discern adipose tissue distribution in the body. Analysis of 100 abdominal CT scans shows that BMI is moderately associated with subcutaneous fat volume but has no association with visceral fat volume.

A few deep learning-based methods to measure body fat on CT scans have been reported (12–14). Compared to the previous studies, our present study overcomes several limitations. First, all CT slices in the abdominal region were included, as compared to the one or few slices used in other studies. Fat distribution varies greatly in the abdominal region of obese subjects, so obesity and overall fat content may not be accurately reflected from one or few slices. Thus, our method of including all CT slices should result in improved precision and sensitivity for detecting change over time in abdominal fat.

Second, our study employed a template-based data augmentation strategy whereby sampled images were used to construct a representative template that was optimal in terms of both shape and intensity (22). This approach permits a substantial augmentation of the training dataset and overcomes the common limitation of deep learning approaches, which in most cases require large-scale training datasets. Despite fewer CT slices being used for the training, our algorithm has shown a comparable performance to previous deep learning-based methods for quantification of body fat using CT images (12–14).

Manually segmenting visceral fat on numerous CT slices is challenging due to its irregular shape and extensive distribution within the abdominal cavity. However, separating fat from nonfat elements (air, background, waterish tissue, and bone) is a more straightforward task. Thus, we chose to segment the abdominal cavity and use thresholding to isolate and quantify the VAT. The chance of overestimating visceral fat volume from other fat deposits such as bone marrow fat and inter-muscular fat was minimal because the lumbar vertebrae and pelvic bones are within the abdominal and pelvic wall while visceral fat is located within the abdominal cavity. Also, vertebrae and pelvic bones are cancellous bones containing red bone marrow that have higher Hounsfield units on CT than fat.

BMI is the most widely used measure of body adiposity in clinical practice. The association of BMI with abdominal fat


volumes directly measured by CT was tested in our cohort. Our results demonstrate that BMI is only moderately associated with total abdominal and subcutaneous fat and has no association with visceral fat. In studies of larger subjects, BMI has also been found to be more highly correlated with subcutaneous fat versus visceral fat (20,23,24). These results suggest that BMI is not a reliable marker of abdominal fat volumes and is a poor proxy of visceral fat.

In summary, we have demonstrated the accuracy of our deep learning-based algorithm for quantifying abdominal fat on CT scans. The algorithm has markedly expedited the process of measuring abdominal fat volume, allowing for potential routine reporting in the clinical setting. Moreover, we demonstrate the possibility of using a relatively small dataset to effectively train a neural network to segment body fat. This has important clinical implications, as machine learning can be readily applied to other regions or tissue types evaluated on medical imaging. Despite the aforementioned advances afforded by applying deep learning to this task, the biggest limitation is performance when an individual has a discontinuous subcutaneous fat layer. Another limitation is that we did not analyze the inter-observer variation due to the laborious nature of manual segmentation.

IRB STATEMENT

All procedures were conducted in compliance with the Health Insurance Portability and Accountability Act and were included within an IRB-approved retrospective study protocol (protocol #17041).

ACKNOWLEDGMENTS

Research reported in this publication was supported by the National Institute of Diabetes and Digestive and Kidney Diseases of the National Institutes of Health under Award Number R01DK116768 and the Commonwealth Health Research Board (CHRB) of Virginia. Andrew Grainger is a recipient of the Robert R. Wagner Fellowship from the University of Virginia School of Medicine. The work in this manuscript has not been published partly or in full. The authors declare no competing interest.

COMPLIANCE WITH ETHICAL STANDARDS

The scientific guarantor of this publication is Weibin Shi at the University of Virginia.

REFERENCES

1. Ogden C, Carroll M. Prevalence of Overweight, Obesity, and Extreme Obesity Among Adults: United States, Trends 1960–1962 Through 2007–2008. NCHS Data Brief. 201:1–6.
2. Hales C, Carroll M, Fryar C, et al. Prevalence of obesity among adults and youth: United States, 2015–2016. NCHS Data Brief 2017; 288.
3. Tremmel M, Gerdtham UG, Nilsson PM, et al. Economic burden of obesity: a systematic literature review. Int J Environ Res Public Health 2017; 14:435. doi:10.3390/ijerph14040435.
4. Gruzdeva O, Borodkina D, Uchasova E, et al. Localization of fat depots and cardiovascular risk. Lipids Health Dis 2018; 17(1):218. doi:10.1186/s12944-018-0856-8.
5. St-Pierre J, Lemieux I, Vohl MC, et al. Contribution of abdominal obesity and hypertriglyceridemia to impaired fasting glucose and coronary artery disease. Am J Cardiol 2002; 90:15–18.
6. Chan JM, Rimm EB, Colditz GA, et al. Obesity, fat distribution, and weight gain as risk factors for clinical diabetes in men. Diabetes Care 1994; 17:961–969.
7. McBee MP, Awan OA, Colucci AT, et al. Deep learning in radiology. Acad Radiol 2018; 25:1472–1480.
8. Seabolt LA, Welch EB, Silver HJ. Imaging methods for analyzing body composition in human obesity and cardiometabolic disease. Ann N Y Acad Sci 2015; 1353:41–59.
9. Positano V, Gastaldelli A, Sironi AM, et al. An accurate and robust method for unsupervised assessment of abdominal fat by MRI. J Magn Reson Imaging 2004; 20:684–689.
10. Demerath EW, Ritter KJ, Couch WA, et al. Validity of a new automated software program for visceral adipose tissue estimation. Int J Obes 2007; 31:285–291.
11. Kullberg J, Angelhed JE, Lonn L, et al. Whole-body T1 mapping improves the definition of adipose tissue: consequences for automated image analysis. J Magn Reson Imaging 2006; 24:394–401.
12. Weston AD, Korfiatis P, Kline TL, et al. Automated abdominal segmentation of CT scans for body composition analysis using deep learning. Radiology 2019; 290:669–679. doi:10.1148/radiol.2018181432.
13. Commandeur F, Goeller M, Betancur J, et al. Deep learning for quantification of epicardial and thoracic adipose tissue from non-contrast CT. IEEE Trans Med Imaging 2018; 37:1835–1846. doi:10.1109/TMI.2018.2804799.
14. Wang Y, Qiu Y, Thai T, et al. A two-step convolutional neural network based computer-aided detection scheme for automatically segmenting adipose tissue volume depicting on CT images. Comput Methods Programs Biomed 2017; 144:97–104. doi:10.1016/j.cmpb.2017.03.017.
15. Park HJ, Shin Y, Park J, et al. Development and validation of a deep learning system for segmentation of abdominal muscle and fat on computed tomography. Korean J Radiol 2020; 21:88–100. doi:10.3348/kjr.2019.0470.
16. Tustison NJ, Avants BB, Lin Z, et al. Convolutional neural networks with template-based data augmentation for functional lung image quantification. Acad Radiol 2019; 26:412–423. doi:10.1016/j.acra.2018.08.003.
17. Grainger AT, Tustison NJ, Qing K, et al. Deep learning-based quantification of abdominal fat on magnetic resonance images. PLoS One 2018; 13:e0204071.
18. NIH Image to ImageJ: 25 years of image analysis. Accessed March 19, 2020. https://www.ncbi.nlm.nih.gov/pubmed/22930834.
19. Schindelin J, Arganda-Carreras I, Frise E, et al. Fiji: an open-source platform for biological-image analysis. Nat Methods 2012; 9:676–682.
20. Snell-Bergeon JK, Hokanson JE, Kinney GL, et al. Measurement of abdominal fat by CT compared to waist circumference and BMI in explaining the presence of coronary calcium. Int J Obes Relat Metab Disord 2004; 28:1594–1599. doi:10.1038/sj.ijo.0802796.
21. Ryo M, Kishida K, Nakamura T, et al. Clinical significance of visceral adiposity assessed by computed tomography: a Japanese perspective. World J Radiol 2014; 6:409–416. doi:10.4329/wjr.v6.i7.409.
22. Avants BB, Yushkevich P, Pluta J, et al. The optimal template effect in hippocampus studies of diseased populations. NeuroImage 2010; 49:2457–2466. doi:10.1016/j.neuroimage.2009.09.062.
23. Camhi SM, Bray GA, Bouchard C, et al. The relationship of waist circumference and BMI to visceral, subcutaneous, and total body fat: sex and race differences. Obesity (Silver Spring) 2011; 19:402–408. doi:10.1038/oby.2010.248.
24. Nattenmueller J, Hoegenauer H, Boehm J, et al. CT-based compartmental quantification of adipose tissue versus body metrics in colorectal cancer patients. Eur Radiol 2016; 26:4131–4140. doi:10.1007/s00330-016-4231-8.

SUPPLEMENTARY MATERIALS

Supplementary material associated with this article can be found in the online version at doi:10.1016/j.acra.2020.07.010.
