Classification of COVID-19 Patients From Chest CT Images Using Multi-Objective Differential Evolution - Based Convolutional Neural Networks
https://doi.org/10.1007/s10096-020-03901-z
ORIGINAL ARTICLE
Abstract
Early classification of 2019 novel coronavirus disease (COVID-19) is essential for disease cure and control. Compared with
reverse-transcription polymerase chain reaction (RT-PCR), chest computed tomography (CT) imaging may be a significantly
more trustworthy, useful, and rapid technique to classify and evaluate COVID-19, specifically in the epidemic region. Almost all
hospitals have CT imaging machines; therefore, the chest CT images can be utilized for early classification of COVID-19
patients. However, the chest CT-based COVID-19 classification involves a radiology expert and considerable time, which is
valuable when COVID-19 infection is growing at a rapid rate. Therefore, an automated analysis of chest CT images is desirable to save the medical professionals' precious time. In this paper, a convolutional neural network (CNN) is used to classify COVID-19 patients as infected (+ve) or not (−ve). Additionally, the initial parameters of the CNN are tuned using multi-objective differential evolution (MODE). Extensive experiments are performed by considering the proposed and competitive machine learning techniques on chest CT images. Extensive analysis shows that the proposed model can classify the chest CT images with good accuracy.
is bilateral change in chest computed tomography (CT) images [8]. Therefore, chest CT has been used as an alternative tool to detect the infection caused by nCoV due to its high sensitivity [12]. The National Health Commission of China reported that chest CT can be utilized to detect the infection caused by nCoV [3]. A large amount of pathological information can be obtained from chest CT. Radiologists are required to analyze the chest CT images. Hence, there is a necessity to develop a deep learning-based prediction technique for the analysis of chest CT without the intervention of a radiologist.

The main objective of this paper is to classify COVID-19-infected patients from chest CT images. A novel deep learning model is designed by using multi-objective differential evolution (MODE) and convolutional neural networks (CNN) to classify human beings based upon whether they are affected by COVID-19 or not. A multi-objective fitness function is designed to classify COVID-19-infected patients by considering sensitivity and specificity. The hyperparameters of the CNN are optimized by using the MODE algorithm. The proposed model is trained by considering the chest CT images of COVID-19 patients. Comparisons between the proposed MODE-based CNN and competitive models such as convolutional neural networks (CNN), adaptive neuro-fuzzy inference systems (ANFIS), and artificial neural networks (ANN) are also drawn by considering the well-known classification metrics.

The remaining paper is organized as follows: the "Literature review" section discusses the existing literature in the field of COVID-19; the proposed classification model is discussed in the "Proposed model" section; performance analyses are discussed in the "Performance analysis" section; the "Conclusion" section concludes the paper.

Literature review

Recently, researchers have studied the imaging patterns on chest CT for detecting COVID-19 [13–22]. Fang et al. [14] studied the sensitivity of RT-PCR and chest CT for the detection of COVID-19. They analyzed the travel history and symptoms of 2 patients and found that the sensitivity of chest CT for the detection of COVID-19 is much higher than that of RT-PCR. Xie et al. [13] also reported that 3% of 167 patients had negative RT-PCR for COVID-19 detection; however, chest CT had better sensitivity for the detection of COVID-19 than RT-PCR. Bernheim et al. [23] studied the chest CT of 121 infected patients from four different centers in China and established the relationship between CT findings and symptom onset. They found that the severity of disease increased with time from the onset of symptoms and characterized the signs of the disease.

Recently, deep learning techniques have been widely used for the detection of acute pneumonia in chest CT images [23–26]. Li et al. [24] developed a deep learning model named COVNet to extract visual features from chest CT for the detection of COVID-19. They used the visual features to distinguish between community-acquired pneumonia and other non-pneumonia lung diseases. However, COVNet is unable to categorize the severity of this disease. Gozes et al. [25] developed an artificial intelligence-based CT analysis tool for the detection and quantification of COVID-19. The system extracted slices of opacities in the lungs automatically. The developed system achieved 98.2% sensitivity and 92.2% specificity. The output of the system provides a quantitative opacity measure and a 3D volume display of the opacities. The system is robust against pixel spacing and slice thickness [25]. Shan et al. [26] developed a deep learning-based system named VB-Net for automatic segmentation of the whole lung and infection sites using chest CT. Xu et al. [8] developed a prediction model to discriminate COVID-19 pneumonia from influenza-A viral pneumonia using deep learning techniques. A CNN model was used for prediction. The maximum accuracy obtained from the prediction model was 86.7%. Wang et al. [9] investigated the radiographic changes in CT images of infected patients. They developed a deep learning-based prediction model that utilizes a modified Inception transfer learning technique. Features are extracted from CT images for prior diagnosis. The accuracy of 89.5% obtained from this method is better than Xu's model [8], and it saves time for diagnosis. Narin et al. [11] proposed automatic deep convolutional neural network–based transfer models for the prediction of COVID-19 in chest X-ray images. They used the InceptionV3, Inception-ResNetV2, and ResNet50 models for better prediction. The ResNet50 pre-trained model produced an accuracy of 98%, which is higher than [8, 9]. Sethy et al. [10] developed a deep learning model for detecting COVID-19 from X-ray images. They extracted deep features and transferred them to a support vector machine for classification. The accuracy of 95.38% obtained from the proposed model is better than [8, 9].

From this extensive review, it has been found that chest CT images can be used for the early classification of COVID-19-infected patients [27]. Therefore, in this paper, computational models are used to classify COVID-19 patients from chest CT images.

Proposed model

This section discusses the proposed multi-objective differential evolution (MODE)–based convolutional neural networks (CNN) for the classification of COVID-19-infected patients from chest CT images. In this paper, initially, CNN, ANN, and ANFIS models are implemented to classify COVID-19-infected patients from chest CT images. These models provide good performance for the classification of COVID-19 patients.
Although CNN provides good results, it suffers from the hyperparameter tuning issue.

Convolutional neural networks

Chest CT image–based COVID-19 disease classification also involves repeated classification calculations and computations. To classify COVID-19-infected patients by using the CNN model, the following steps are used:
Fig. 1 Block diagram of the training process of the CNN-based COVID-19 classification model
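The individual steps (convolution, pooling, ReLU activation, and fully connected classification) are summarized in Fig. 1. As a rough illustration only, a compact Keras sketch of such a binary chest CT classifier is given below; the layer sizes, input resolution, and optimizer settings are assumptions for illustration, not the authors' exact configuration (the paper reports a MATLAB implementation).

```python
# Illustrative sketch of a small CNN for binary chest CT classification.
# Layer sizes and the 224x224 grayscale input are assumptions, not the paper's exact setup.
import tensorflow as tf

def build_ct_cnn(input_shape=(224, 224, 1)):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dropout(0.5),                    # reduces overfitting
        tf.keras.layers.Dense(1, activation="sigmoid"),  # COVID-19 (+) probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_ct_cnn()
model.summary()
```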
B. Classification

In this step, fully connected layers act as a classifier. They utilize the extracted features and evaluate the probability of an object being present in the image [33]. Usually, an activation function and a dropout layer are utilized to establish non-linearity and to minimize overfitting, respectively [34]. Figure 5 shows the fully connected layer used for the classification process.

Multi-objective fitness function

From the literature review, it has been found that CNN suffers from hyperparameter tuning issues. These hyperparameters are kernel size, kernel type, stride, padding, hidden layers, activation functions, learning rate, momentum, number of epochs, and batch size. Therefore, the tuning of these parameters is desirable. In this paper, a multi-objective fitness function is designed as:

$$f(t) = S_n + S_p \quad (1)$$

Here, Sn and Sp define the sensitivity and specificity parameters, respectively.

Sensitivity, i.e., the true positive rate, computes the ratio of actual positives that are correctly classified. The confusion matrix is utilized to evaluate the sensitivity (Sn), and it is mathematically evaluated as [35]:

$$S_n = \frac{T_p}{T_p + F_n} \quad (2)$$

Here, Tp and Fn define the true-positive and false-negative values, respectively. Sn lies within [0, 100]; Sn approaching 100 is desirable [36].

Specificity (Sp) computes the proportion of actual negatives that are correctly identified, and it can be estimated as [37]:

$$S_p = \frac{T_n}{T_n + F_p} \quad (3)$$

Here, Tn and Fp define the true-negative and false-positive values, respectively. Sp lies within [0, 100]; Sp approaching 100 is desirable [38].

Multi-objective differential evolution

The idea of differential evolution (DE) was coined by Storn and Price [39] in 1995. DE takes its inspiration from Darwin's theory of evolution and natural selection. Over time, many DE variants have been introduced [40–42]. The DE algorithm has proven its potency in various domains [41,
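As a worked illustration of the fitness function that MODE maximizes, the sketch below computes sensitivity, specificity, and the combined fitness of Eqs. (1)–(3) from a binary confusion matrix; the function and the example label arrays are hypothetical and only meant to make the arithmetic concrete.

```python
# Sketch: sensitivity, specificity, and the fitness f = Sn + Sp from a confusion matrix.
# y_true / y_pred are hypothetical label arrays (1 = COVID-19 (+), 0 = COVID-19 (-)).
import numpy as np

def fitness(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sn = 100.0 * tp / (tp + fn)   # Eq. (2), expressed on a [0, 100] scale
    sp = 100.0 * tn / (tn + fp)   # Eq. (3), expressed on a [0, 100] scale
    return sn + sp, sn, sp        # Eq. (1): f = Sn + Sp

f, sn, sp = fitness([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
print(f, sn, sp)   # 133.33..., 66.66..., 66.66...
```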
Fig. 3 Max pooling with one pooled feature
Fig. 4 Rectified linear unit (ReLU) activation function
Table 1 Initial parameters of MODE to tune CNN (columns: Parameter | Value/range)
A. Mutation operation
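In the classical DE of Storn and Price [39], the mutation operation builds a donor vector from three distinct, randomly chosen population members, $v_i = x_{r1} + F\,(x_{r2} - x_{r3})$, with scale factor F typically in [0.4, 1]. A minimal sketch of this standard DE/rand/1 operator is given below; it is an assumption about the flavor of mutation used and not the authors' exact MODE implementation.

```python
# Sketch of the classical DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3).
# 'population' rows are candidate hyperparameter vectors; F is the scale factor.
import numpy as np

def de_rand_1_mutation(population, i, F=0.5, rng=None):
    rng = rng or np.random.default_rng()
    # pick three distinct indices, all different from the target index i
    candidates = [j for j in range(len(population)) if j != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return population[r1] + F * (population[r2] - population[r3])

pop = np.random.default_rng(0).uniform(0.0, 1.0, size=(10, 4))  # 10 vectors, 4 parameters
donor = de_rand_1_mutation(pop, i=0)
print(donor)
```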
Performance analysis

The proposed COVID-19 classification model is implemented using MATLAB 2019a software with the deep learning toolbox. An Intel Core i7 machine with 16-GB RAM and a 4-GB graphics card is used. Twenty-fold cross-validation is used to prevent the overfitting problem.
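As one illustration of how such an evaluation protocol (20-fold cross-validation plus the range of train/test split ratios listed below) could be wired up, a scikit-learn sketch follows; the synthetic dataset and the placeholder classifier are assumptions for illustration, not the paper's MATLAB pipeline.

```python
# Sketch: evaluating a classifier with 20-fold CV and several train/test split ratios.
# X, y are placeholder features/labels; any scikit-learn classifier could be plugged in.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000)

# 20-fold cross-validation to limit overfitting in the reported scores
cv = StratifiedKFold(n_splits=20, shuffle=True, random_state=0)
print("20-fold CV accuracy: %.3f" % cross_val_score(clf, X, y, cv=cv).mean())

# training/testing ratios of 20:80 ... 90:10, as in the experiments
for train_frac in (0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=train_frac,
                                              stratify=y, random_state=0)
    print(train_frac, clf.fit(X_tr, y_tr).score(X_te, y_te))
```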
Fig. 7 Confusion matrix (i.e., error matrix) analysis of a ANN, b ANFIS, c CNN, and the proposed MODE-based CNN model
Various training and testing dataset ratios, such as 20:80%, 30:70%, 40:60%, 50:50%, 60:40%, 70:30%, 80:20%, and 90:10%, are considered for experimental purposes. The proposed model is compared with the CNN, ANFIS, and ANN models. Table 1 shows the initial parameters of MODE used to tune the CNN.

COVID-19 chest CT images dataset

The 2019 novel coronavirus (COVID-19) shows a number of unique characteristics. COVID-19 infection can be classified by considering the polymerase chain reaction. It is found that COVID-19-infected patients show some patterns on chest CT images that are not easily detectable by the human eye. COVID-19 patients present abnormalities in chest CT images, with most having bilateral involvement. Bilateral multiple lobular and subsegmental areas of consolidation constitute the typical findings in chest CT images of intensive care unit (ICU) patients on admission. In comparison, non-ICU patients show bilateral ground-glass opacity and subsegmental areas of consolidation in their chest CT images. In these patients, later chest CT images display bilateral ground-glass opacity with resolved consolidation [27]. Therefore, in this paper, a chest CT images dataset is used to classify COVID-19.
Fig. 9 Accuracy analysis of the proposed and competitive COVID-19 classification models
Fig. 10 F-measure analysis of the proposed and competitive COVID-19 classification models
Figure 6 shows chest CT images of different COVID-19-infected patients. Figure 6a shows an axial CT image of a mild-type patient (2 days from symptom onset to CT scan). It demonstrates thickening of the lung texture. Figure 6b shows an axial CT image of a common-type patient (6 days from symptom onset to CT scan). It illustrates multiple ground-glass opacities in both lungs. Figure 6c shows an axial CT image of a severe-type patient. It demonstrates extensive ground-glass opacities and pulmonary consolidation, with enlargement of bronchi and vessels. Figure 6d shows an axial CT image of a critical-type patient (9 days from symptom onset to CT scan). It illustrates extensive ground-glass opacities in several lobes, forming a "white lung."

Quantitative analysis

Figure 7 shows the confusion matrix analysis of the proposed and the competitive models for COVID-19 disease classification.
Fig. 11 Sensitivity analysis of the proposed and competitive COVID-19 classification models
Fig. 12 Specificity analysis of the proposed and competitive COVID-19 classification models
It is found that the proposed MODE-based CNN model outperforms the competitive models, as it has better and more consistent true-positive and true-negative values than the other models. It also shows that the proposed model has lower false-negative and false-positive values. Therefore, the proposed model can efficiently classify COVID-19 patients.

The receiver operating characteristic (ROC) is a performance measurement curve for classification problems over a number of threshold values. It is defined as a probability curve that describes the degree of separability between two classes, such as COVID-19 (+) and COVID-19 (−). It evaluates the performance of classification models in distinguishing between COVID-19 (+) and COVID-19 (−): the higher the ROC, the better the classification model is at classifying COVID-19 (+) cases as COVID-19 (+), and vice versa. Figure 8 shows the obtained ROC of the proposed and competitive classification models. It clearly shows that the proposed model achieves good results as compared with the competitive models.

Accuracy is computed by dividing the number of accurately classified samples by the total number of samples. It is a primary measure of the performance of classification problems. Figure 9 shows the accuracy analysis between the proposed and competitive classification models. It clearly shows that the proposed model achieves significantly higher accuracy compared with the competitive classification models. The proposed model outperforms the competitive models by 1.9789%.
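Accuracy, the ROC area, and the F-measure and Kappa statistics discussed next can all be obtained from predicted and true labels with standard routines; the following scikit-learn sketch uses hypothetical arrays purely to illustrate the calls.

```python
# Sketch: standard classification metrics from true labels, predicted labels, and scores.
from sklearn.metrics import (accuracy_score, f1_score, cohen_kappa_score,
                             roc_auc_score)

y_true  = [1, 0, 1, 1, 0, 0, 1, 0]                   # hypothetical ground truth
y_pred  = [1, 0, 1, 0, 0, 0, 1, 1]                   # hypothetical predicted classes
y_score = [0.9, 0.2, 0.8, 0.4, 0.1, 0.3, 0.7, 0.6]   # predicted COVID-19 (+) probabilities

print("accuracy :", accuracy_score(y_true, y_pred))
print("F-measure:", f1_score(y_true, y_pred))
print("kappa    :", cohen_kappa_score(y_true, y_pred))
print("ROC AUC  :", roc_auc_score(y_true, y_score))
```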
Fig. 13 Kappa statistics analysis of the proposed and competitive COVID-19 classification models
F-measure is a well-known measure that can provide significant details for classification problems, especially when the data contain imbalanced classes. It calculates the weighted harmonic mean of recall and precision. Figure 10 demonstrates the F-measure analysis between the proposed and competitive classification models. It reveals that the proposed model achieves a significantly higher F-measure compared with the competitive classification models. The proposed model outperforms the competitive models by 2.0928%.

Sensitivity computes the performance on the COVID-19 (+) cases only; thus, this test identifies every patient who is actually infected with COVID-19. Figure 11 shows the sensitivity analysis between the proposed and competitive classification models. It reveals that the proposed model achieves significantly higher sensitivity compared with the competitive classification models. The proposed model outperforms the competitive models by 1.8262%.

Specificity evaluates the performance on the COVID-19 (−) cases only; thus, this test identifies every patient who is not infected with COVID-19. Figure 12 depicts the specificity analysis between the proposed and competitive classification models. It reveals that the proposed model achieves significantly higher specificity compared with the competitive classification models. The proposed model outperforms the competitive models by 1.6827%.

Kappa statistics is a performance metric used to evaluate inter-rater reliability. It is also known as a reliability measure; it subtracts the expected (chance) agreement from the observed classification success. Figure 13 depicts the Kappa statistics analysis between the proposed and competitive classification models. It reveals that the proposed model achieves significantly higher Kappa statistics values compared with the competitive classification models. The proposed model outperforms the competitive models by 1.9276%.

Conclusion

In this paper, a COVID-19 disease classification model is proposed to classify infected patients from chest CT images. Initially, the chest CT dataset of COVID-19-infected patients is decomposed into training and testing groups. The training dataset is utilized for building the COVID-19 disease classification model. The proposed MODE-based CNN and the competitive classification models are applied on the training data. To prevent overfitting, 20-fold cross-validation is also utilized. Finally, comparisons are drawn between the competitive and proposed classification models by considering different fractions of the training and testing dataset. Extensive experimental results reveal that the proposed model outperforms the competitive models, i.e., ANN, ANFIS, and CNN, in terms of accuracy, F-measure, sensitivity, specificity, and Kappa statistics by 1.9789%, 2.0928%, 1.8262%, 1.6827%, and 1.9276%, respectively. Therefore, the proposed model is useful for real-time COVID-19 disease classification from chest CT images.

Compliance with ethical standards

Conflict of interest The authors declare that they have no conflict of interest.

Ethical approval This research work does not involve chemicals, procedures, or equipment that have any unusual hazards inherent in their use.

Informed consent Not required.

References

1. World Health Organization, Novel Coronavirus (2019-nCoV) Situation Report-11. 2020. https://www.who.int/docs/default-source/coronaviruse/situationreports/20200131-sitrep-11-ncov.pdf?sfvrsn=de7c0f7_4. Accessed 24 March 2020
2. World Health Organization, Novel Coronavirus (2019-nCoV) Situation Report-30. https://www.who.int/docs/default-source/coronaviruse/situationreports/20200219-sitrep-30-covid-19.pdf?sfvrsn=6e50645_2. Accessed 24 March 2020
3. Worldometer Coronavirus. https://www.worldmeters.info/coronavirus/countries-where-coronavirus-has-spread/. Accessed 01 Apr 2020
4. WHO-China joint mission on coronavirus disease 2019 report, 2020
5. Zhang Y (2020) The epidemiological characteristics of an outbreak of 2019 novel coronavirus (COVID-19)-China CCDC. Zhonghua liu xing bing xue za zhi = Zhonghua liuxingbingxue zazhi 41(2):145
6. Xie Z (2020) Pay attention to SARS-CoV-2 infection in children. Pediatr Invest 4(1):1–4
7. Ai T et al (2020) Correlation of chest CT and RT-PCR testing in coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases. Radiology. https://doi.org/10.1148/radiol.2020200642
8. Xu X, Jiang X, Ma C, Du P, Li X, Lv S, Yu L, Chen Y, Su J, Lang G, Li Y, Zhao H, Xu K, Ruan L, Wu W (2020) Deep learning system to screen coronavirus disease 2019 pneumonia. arXiv preprint arXiv:2002.09334, 1–29
9. Wang S, Kang B, Ma J, Zeng X, Xiao M, Guo J, Cai M, Yang J, Li Y, Meng X, Xu B (2020) A deep learning algorithm using CT images to screen for corona virus disease (COVID-19). medRxiv preprint. https://doi.org/10.1101/2020.02.14.20023028, 1–26
10. Sethy PK, Behera SK (2020) Detection of coronavirus disease (COVID-19) based on deep features. Preprints 2020, 2020030300. https://doi.org/10.20944/preprints202003.0300.v1
11. Narin A, Kaya C, Pamuk Z (2020) Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural network. arXiv preprint arXiv:2003.10849
12. Wang D, Hu B, Hu C et al (2020) Clinical characteristics of 138 hospitalized patients with 2019 novel coronavirus–infected pneumonia in Wuhan, China. JAMA 323(11):1061–1069
13. Xie X, Zhong Z, Zhao W, Zheng C, Wang F, Liu J (2020) Chest CT for typical 2019-nCoV pneumonia: relationship to negative RT-PCR testing. Radiology. https://doi.org/10.1148/radiol.2020200343
14. Fang Y, Zhang H, Xu Y, Xie J, Pang P, Ji W (2020) CT manifestations of two cases of 2019 novel coronavirus (2019-nCoV) pneumonia. Radiology 295(1):208–209
15. Song F, Shi N, Shan F et al (2020) Emerging coronavirus 2019-nCoV pneumonia. Radiology 295(1):210–217
16. Ng M, Lee E, Yang J et al (2020) Imaging profile of the COVID-19 infection: radiologic findings and literature review. Radiol Cardiothorac Imaging 2(1):e200034
17. Kong W, Agarwal P (2020) Chest imaging appearance of COVID-19 infection. Radiol Cardiothorac Imaging. https://doi.org/10.1148/ryct.2020200028
18. Kay F, Abbara S (2020) The many faces of COVID-19: spectrum of imaging manifestations. Radiol Cardiothorac Imaging. https://doi.org/10.1016/B978-0-12-814551-7.00038-6
19. Venugopal VK, Mahajan V, Rajan S, Agarwal VK, Rajan R, Syed S, Mahajan H (2020) A systematic meta-analysis of CT features of COVID-19: lessons from radiology. medRxiv. https://doi.org/10.1101/2020.04.04.20052241
20. Li X, Zeng X, Liu B, Yu Y (2020) COVID-19 infection presenting with CT halo sign. Radiol Cardiothorac Imaging. https://doi.org/10.1148/ryct.2020200026
21. Chung M, Bernheim A, Mei X et al (2020) CT imaging features of 2019 novel coronavirus (2019-nCoV). Radiology 295(1):202–207
22. Yijiu X China's Hubei reports jump in new cases of COVID-19 after diagnosis criteria revision. National Health Commission of the People's Republic of China website. www.en.nhc.gov/cn/2020-02/13/c_76515.htm. Accessed 24 March 2020
23. Bernheim A, Mei X, Huang M et al (2020) Chest CT findings in coronavirus disease-19 (COVID-19): relationship to duration of infection. https://doi.org/10.1148/radiol.2020200463
24. Li L et al (2020) Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT. Radiology. https://doi.org/10.1148/radiol.2020200905
25. Gozes O et al (2020) Rapid AI development cycle for the coronavirus (COVID-19) pandemic: initial results for automated detection & patient monitoring using deep learning CT image analysis. arXiv preprint arXiv:2003.05037
26. Shan F, Gao Y, Wang J, Shi W, Shi N, Han M, Xue Z, Shi Y (2020) Lung infection quantification of COVID-19 in CT images with deep learning. arXiv preprint arXiv:2003.04655, 1–19
27. Liu K-C, Xu P, Lv W-F, Qiu X-H, Yao J-L, Jin-Feng G (2020) CT manifestations of coronavirus disease-2019: a retrospective analysis of 73 cases by disease severity. Eur J Radiol 108941. https://doi.org/10.1016/j.ejrad.2020.108941
28. Moeskops P, Viergever MA, Mendrik AM, De Vries LS, Benders MJNL, Išgum I (2016) Automatic segmentation of MR brain images with a convolutional neural network. IEEE Trans Med Imaging 35(5):1252–1261
29. Gu J, Wang Z, Kuen J, Ma L, Shahroudy A, Shuai B, Liu T et al (2018) Recent advances in convolutional neural networks. Pattern Recogn 77:354–377
30. Matsugu M, Mori K, Mitari Y, Kaneda Y (2003) Subject independent facial expression recognition with robust face detection using a convolutional neural network. Neural Netw 16(5–6):555–559
31. Krizhevsky A, Sutskever I, Hinton GE (2012) Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems, pp 1097–1105
32. Jin KH, McCann MT, Froustey E, Unser M (2017) Deep convolutional neural network for inverse problems in imaging. IEEE Trans Image Process 26(9):4509–4522
33. Zbontar J, LeCun Y (2016) Stereo matching by training a convolutional neural network to compare image patches. J Mach Learn Res 17(1):2287–2318
34. Lawrence S, Giles CL, Tsoi AC, Back AD (1997) Face recognition: a convolutional neural-network approach. IEEE Trans Neural Netw 8(1):98–113
35. Pannu HS, Singh D, Malhi AK (2018) Improved particle swarm optimization based adaptive neuro-fuzzy inference system for benzene detection. CLEAN–Soil, Air, Water 46(5):1700162
36. Pannu HS, Singh D, Malhi AK (2019) Multi-objective particle swarm optimization-based adaptive neuro-fuzzy inference system for benzene monitoring. Neural Comput & Applic 31:2195–2205
37. Kaur M, Gianey HK, Singh D, Sabharwal M (2019) Multi-objective differential evolution based random forest for e-health applications. Mod Phys Lett B 33(05):1950022
38. Kaur M, Singh D, Sun K, Rawat U (2020) Color image encryption using non-dominated sorting genetic algorithm with local chaotic search based 5D chaotic map. Futur Gener Comput Syst 107:333–350
39. Storn R, Price K (1995) Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces (Tech. Rep.), Berkeley, CA. TR-95-012
40. Zhabitskaya E, Zhabitsky M (2012) Asynchronous differential evolution. In: Mathematical Modeling and Computational Science, pp 328–333
41. Zhang J, Sanderson AC (2009) JADE: adaptive differential evolution with optional external archive. IEEE Trans Evol Comput 13(5):945–958
42. Vaishali, Sharma TK (2016) Asynchronous differential evolution with convex mutation. In: Proceedings of Fifth International Conference on Soft Computing for Problem Solving. Springer, Singapore, pp 915–928
43. Ilonen J, Kamarainen JK, Lampinen J (2003) Differential evolution training algorithm for feed-forward neural networks. Neural Process Lett 17(1):93–105
44. Storn R (1996) On the usage of differential evolution for function optimization. In: Fuzzy Information Processing Society, 1996. NAFIPS. Biennial Conference of the North American, pp 519–523. IEEE
45. Hancer E, Xue B, Zhang M (2018) Differential evolution for filter feature selection based on information theory and feature ranking. Knowl-Based Syst 140:103–119
46. Kaur M, Kumar V, Li L (2019) Color image encryption approach based on memetic differential evolution. Neural Comput & Applic 31(11):7975–7987

Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.