Article

Diagnostic Approach for Accurate Diagnosis of COVID-19 Employing Deep Learning and Transfer Learning Techniques through Chest X-ray Images Clinical Data in E-Healthcare

1 School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
2 Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Alkharj 11942, Saudi Arabia
3 College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
* Authors to whom correspondence should be addressed.
Sensors 2021, 21(24), 8219; https://doi.org/10.3390/s21248219
Submission received: 30 October 2021 / Revised: 25 November 2021 / Accepted: 30 November 2021 / Published: 9 December 2021
(This article belongs to the Special Issue Big Data Analytics in Internet of Things Environment)

Abstract

COVID-19 is a transmissible disease that is a leading cause of death for a large number of people worldwide. The disease, caused by SARS-CoV-2, spreads very rapidly and quickly affects the human respiratory system. Early diagnosis is therefore essential for proper treatment, recovery, and control of its spread, and automatic diagnosis systems are urgently needed for COVID-19 detection. Methods based on artificial intelligence techniques are effective for diagnosing COVID-19 from chest X-ray images and can identify the disease correctly. Existing COVID-19 diagnosis methods, however, suffer from insufficient accuracy. To address this problem, we propose an efficient and accurate diagnosis model for COVID-19. In the proposed method, a two-dimensional Convolutional Neural Network (2DCNN) is designed to recognize COVID-19 from chest X-ray images. The weights of a pre-trained transfer learning (TL) ResNet-50 model are transferred to the 2DCNN model to enhance its training, and the model is fine-tuned with chest X-ray image data for the final multi-classification used to diagnose COVID-19. In addition, a data augmentation technique (rotation) is used to increase the data set size for effective training of the R2DCNNMC model. The experimental results demonstrate that the proposed R2DCNNMC model achieves high accuracy, obtaining 98.12% classification accuracy on the CRD data set and 99.45% classification accuracy on the CXI data set compared to baseline methods. The approach performs well and could be used for COVID-19 diagnosis in E-Healthcare systems.

Graphical Abstract

1. Introduction

COVID-19 is a transmissible illness caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [1]. COVID-19 spreads very quickly, and numerous people have suffered and died in this global pandemic. Efficient and accurate identification of COVID-19 is a major challenge for researchers and medical experts, and effective diagnostic technologies are essential for treatment and recovery at an early stage. Coronaviruses are a large family of viruses, and SARS-CoV-2 is a ribonucleic acid (RNA) virus that belongs to this family. COVID-19 can be diagnosed through different means, such as clinical symptoms (fever, cough, dyspnea, and pneumonia), epidemiological history, positive pathogenic testing, and positive chest X-ray and CT images. Two virus detection methods are commonly used: detection of the nucleic acids of the viral RNA or of the antibodies generated by the patient's immune system [1]. For the diagnosis of COVID-19, clinical imaging such as chest X-ray and computed tomography (CT) together with real-time polymerase chain reaction (RT-PCR) are suitable for accurate and efficient detection. Furthermore, chest CT scan images are employed to assess the severity of lung involvement in COVID-19-positive subjects and provide in-depth information for analyzing the pathogenesis of the disease [2].
Artificial intelligence (AI) techniques and their applications are widely used in different domains, particularly computer vision and imaging. Disease diagnosis that applies AI techniques to clinical image data has important applications. Medical image data such as X-rays and CT scans are commonly analyzed with AI techniques to diagnose diseases such as COVID-19. With AI, such diseases can be diagnosed effectively at an early stage, ensuring proper treatment and recovery of patients. AI-based computer-aided diagnosis (CAD) systems can diagnose diseases more accurately than medical professionals, because medical experts do not always interpret chest X-ray and CT scan images correctly when diagnosing the disease at an early stage [3,4,5,6,7,8].
To detect the disease, various non-invasive methods have been proposed employing different kinds of image data, such as X-rays [9,10,11], CT scans [12,13,14,15], and both X-rays and CT scans [16]. In these non-invasive techniques, Machine Learning (ML) and Deep Learning (DL) methods are mostly employed to diagnose disease. Diagnosing disease from image data using a convolutional neural network (CNN) has gained high popularity, and CNN classifiers are widely used for the classification and analysis of medical image data [17]. The CNN model can extract highly relevant features from the data for correct image classification [18]. The CNN model needs a large amount of input data for training; however, this problem can be tackled by incorporating data augmentation [19] and transfer learning techniques [20].
In the literature, researchers have proposed various methods for COVID-19 diagnosis using ML and DL approaches. In all these methods, X-ray and CT scan image data are fed to AI algorithms to diagnose COVID-19. Numerous AI-based CAD systems for COVID-19 diagnosis have been developed for quick and accurate detection, assisting E-healthcare systems around the world in handling this critical pandemic [21,22,23,24,25]. In these models, CNNs and other CNN architectures, transfer learning, and data augmentation have mostly been used to diagnose COVID-19. Because of the lack of sufficient data for model training, data augmentation techniques have been applied to X-ray and CT scan image data to increase the data size [1].
Song et al. [26] designed a system for the detection of COVID-19 that incorporates a detailed relation extraction neural network (DRE-Net) architecture named Deep Pneumonia. They trained the proposed model on a CT image data set consisting of 88 COVID-19 patients, 101 bacterial pneumonia patients, and 86 healthy subjects. The model achieved 86% accuracy and an area under the curve (AUC) of 95%. Wang et al. [23] proposed a COVID-19 diagnosis method employing deep learning algorithms and CT scan images. They extracted features from CT scan images and then used these features to classify COVID-19 images against viral pneumonia images. The data set used contained 1065 CT images, 70% viral pneumonia and 30% COVID-19, and the proposed method achieved 89.5% classification accuracy.
Xu et al. [24] proposed an integrated system based on CNN and ResNet models for COVID-19 diagnosis using CT scan image data and obtained a classification accuracy of 86.7%. Chowdhury et al. [27] proposed a COVID-19 diagnosis method employing deep learning techniques and chest X-ray image data that obtained a classification accuracy of 97.9%. Tawsifur et al. [28] proposed a method for COVID-19 detection using chest X-ray images; they employed deep learning techniques to diagnose COVID-19, and the method achieved 95.11% accuracy.
Loddo et al. [29] proposed a COVID-19 diagnosis method employing different CNN architectures for accurate detection of COVID-19. In developing the method, two CT scan image data sets, COVIDx CT-2A and COVID-CT, were used for evaluation. The method was assessed with different evaluation metrics, and in terms of accuracy VGG19 obtained 98.87% on the COVIDx CT-2A data set, the best among the CNN architectures considered.
Gunraj et al. [30] proposed an improved deep learning based diagnosis system (COVID-Net CT-2) for COVID-19 identification using clinical CT scan image data. Evaluated with different metrics, the method achieved 98.1% accuracy. Hu et al. [31] proposed a COVID-19 identification method based on a weakly supervised deep learning strategy and evaluated it using chest CT scan image data, achieving high predictive performance.
Khalifa et al. [32] proposed a COVID-19 diagnosis method using Generative Adversarial Networks (GAN) together with fine-tuned deep transfer learning, evaluated on chest X-ray image data. They used 10% of the data set for training and generated the remaining 90% of the training data with the proposed GAN model. Different transfer learning models, such as Resnet18, Squeeznet, GoogLeNet, and AlexNet, were used for the detection of pneumonia. Different performance evaluation metrics were computed, and in terms of accuracy the proposed method obtained 99%. Wang et al. [33] proposed a deep convolutional neural network for the diagnosis of COVID-19 using chest X-ray image data; their model achieved 93.3% accuracy.
In this paper, we propose the R2DCNNMC model for the diagnosis of COVID-19. In designing the method, we incorporate a deep learning two-dimensional Convolutional Neural Network (2DCNN) model to extract deep features from chest X-ray image data and use them for the final classification. In addition, transfer learning and data augmentation techniques are employed to improve the training process of the 2DCNN model. Furthermore, we use a hold-out cross-validation technique for hyperparameter tuning and selection of the best model. Performance evaluation metrics are computed for model evaluation, and the accuracy of the baseline methods is compared with that of the proposed R2DCNNMC model.
The innovations of this study are summarized as follows:
  • A deep learning-based R2DCNNMC model is proposed for the detection of COVID-19 employing chest X-ray image data.
  • Transfer learning and data augmentation techniques are used to improve the training process and hence the predictive performance of the 2DCNN model.
  • The performance of the proposed R2DCNNMC model is evaluated using various performance evaluation metrics.
  • The proposed R2DCNNMC model obtains higher performance than baseline models.
The remaining manuscript is arranged as follows: Section 2 describes the data sets used in this work and the proposed methodology. Experiments are carried out and discussed in Section 3. Conclusions and future work are reported in Section 4.

2. Materials and Method

2.1. Data Collection

In this study, two data sets are used for the evaluation of our model. The COVID-19-Radiography-Dataset (CRD) is a database of chest X-ray images of COVID-19 positive cases along with Normal, Viral Pneumonia, and Lung Opacity images. The data set includes 3616 COVID-19 positive, 10,192 Normal, 6012 Lung Opacity, and 1345 Viral Pneumonia images. The COVID-19-Radiography-Dataset is available on the Kaggle machine learning repository (https://www.kaggle.com/tawsifurrahman/COVID-19-Radiography-Database (accessed on 10 March 2021)). The second data set, the chest X-ray (COVID-19, Pneumonia) (CXI) data set, is obtained from the Kaggle repository (https://www.kaggle.com/prashant268/chest-X-ray-COVID-19-pneumonia (accessed on 10 March 2021)). It contains 6432 chest X-ray images belonging to three classes (COVID19, Normal, PNEUMONIA): 576 COVID19, 1583 Normal, and 4273 PNEUMONIA images, respectively.
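Both data sets are distributed as folders of images grouped by class. As a rough illustration (not the authors' code), the CXI images could be loaded with the Keras generator API as in the sketch below; the local path `data/CXI`, the 224 x 224 target size, and the folder layout are assumptions.

```python
# Hedged sketch: load class-labelled chest X-ray images from a local folder
# layout such as data/CXI/<class_name>/*.jpeg. The path, image size, and
# folder names are assumptions, not the exact Kaggle directory structure.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (224, 224)   # assumed input resolution
BATCH_SIZE = 100        # batch size reported in Section 3.1

loader = ImageDataGenerator(rescale=1.0 / 255.0)
cxi_batches = loader.flow_from_directory(
    "data/CXI",                # assumed local path to the extracted data set
    target_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    class_mode="categorical",  # three classes: COVID19, Normal, PNEUMONIA
)
print(cxi_batches.class_indices)
```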

2.2. Proposed Method Background

The background of the method is described in detail in the subsections below.

2.2.1. Convolutional Neural Network (CNN) Architecture for Multi-Classification

Recently, CNN models have produced significant results in different areas, such as NLP, image classification, and diagnosis systems [34]. In contrast to MLPs, a CNN reduces the number of neurons and parameters, which results in lower complexity and faster adaptation. The CNN model has significant applications in medical image classification [34]. Here, we discuss the fundamental structure of the CNN model. The CNN is a type of Feed-Forward Neural Network (FFNN) and a DL model. Convolution operations capture translation invariance, meaning that the filter is independent of position, which reduces the number of parameters. The CNN model has three kinds of layers: convolutional, pooling, and fully connected. These layers perform the functions of dimensionality reduction, feature extraction, and classification. During the convolution operation of the forward pass, the filter is slid over the input volume to compute the activation map: the point-wise products of the values are computed and summed to obtain the activation at that point. The sliding filter is implemented by convolution, and as a linear operator it can be expressed as a dot product for fast implementation. Let $x$ be the input and $w$ the kernel function; the convolution $(x * w)(a)$ over the variable $t$ can be expressed mathematically as in Equation (1).
$(x * w)(a) = \int x(t)\, w(a - t)\, dt \qquad (1)$
where $a \in \mathbb{R}^n$ for any $n \geq 1$. When the parameter $t$ is discrete, the convolution can be expressed as in Equation (2):
$(x * w)(a) = \sum_{t} x(t)\, w(a - t) \qquad (2)$
However, 2- or 3-dimensional convolutions are usually used in CNN models. In this work, we use a two-dimensional convolutional CNN model for our multi-classification problem. For a two-dimensional image $I$ as input and a two-dimensional kernel $K$, the convolution can be expressed mathematically as in Equation (3):
$(I * K)(i, j) = \sum_{m} \sum_{n} I(m, n)\, K(i - m, j - n) \qquad (3)$
Additionally, to introduce non-linearity, two activation functions are used: ReLU and Softmax. The ReLU activation function is expressed in Equation (4):
$\mathrm{ReLU}(x) = \max(0, x), \quad x \in \mathbb{R} \qquad (4)$
The gradient of $\mathrm{ReLU}(x)$ is 1 for $x > 0$ and 0 for $x < 0$. The convergence behavior of ReLU is better than that of sigmoid non-linearities. The second activation function is softmax, expressed mathematically in Equation (5). The softmax activation function is suitable when the output must cover more than two classes.
$\mathrm{Softmax}(x_i) = \frac{\exp(x_i)}{\sum_{j} \exp(x_j)} \qquad (5)$
The pooling layers of the CNN model output a summary statistic of their inputs and resize the output shape without losing essential information. There are different types of pooling; we use a max-pooling layer, which outputs the maximum value in each rectangular neighborhood of each point $(i, j)$ of every 2D input feature map. The last layer is a fully connected layer $FC$ with input size $n$ and output size $m$, described as follows. Its parameters are a weight matrix $W \in M_{m,n}$ with $m$ rows and $n$ columns, and a bias vector $b \in \mathbb{R}^m$. Given an input vector $x \in \mathbb{R}^n$, the output of the fully connected layer $FC$ with activation function $f$ is expressed mathematically in Equation (6) as:
$FC(x) := f(Wx + b) \in \mathbb{R}^m \qquad (6)$
In Equation (6), $Wx$ is the matrix-vector product, while the function $f$ is applied component-wise. Fully connected layers are used for classification and are generally attached at the top of the CNN model; for this, the CNN output is flattened and represented as a single vector. In our proposed 2DCNN model there are three 2D convolution layers, each followed by an activation layer and a max-pooling layer, with a fully connected layer at the end. Furthermore, we use the Stochastic Gradient Descent (SGD) optimization algorithm for model optimization. The structure of our CNN model is given in Table 1.
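As a concrete illustration of this stack, a minimal Keras sketch of the layer sequence listed in Table 1 (three Conv2D/ReLU/max-pooling blocks, a Flatten, a Dense(64) layer with dropout, and a 3-way softmax output) is given below; the 224 x 224 x 3 input shape and the loss function are assumptions, not stated in the paper.

```python
# Hedged sketch of the 2DCNN structure in Table 1. The input shape and loss
# are assumptions; layer sizes follow Table 1 row by row.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, Activation, MaxPooling2D,
                                     Flatten, Dense, Dropout)
from tensorflow.keras.optimizers import SGD

cnn_2d = Sequential([
    Conv2D(64, (7, 2), input_shape=(224, 224, 3)),   # Table 1, row 1
    Activation("relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(64, (3, 3)),                              # Table 1, row 4
    Activation("relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(64, (3, 3)),                              # Table 1, row 7
    Activation("relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(64),
    Activation("relu"),
    Dropout(0.5),
    Dense(3),                                        # 3 output classes
    Activation("softmax"),
])
cnn_2d.compile(optimizer=SGD(learning_rate=0.0001),  # SGD, LR from Table 2
               loss="categorical_crossentropy",
               metrics=["accuracy"])
cnn_2d.summary()
```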

2.2.2. Transfer Learning to Improve 2DCNN Model Predictive Performance

To improve the predictive capability of the 2DCNN model, we employ a transfer learning ResNet-50 model. Transfer learning (TL) techniques are widely used in image classification tasks [20], COVID-19 sub-type recognition [35], and medical image filtering [36]. In this study, we incorporate the pre-trained ResNet-50 CNN model to enhance the predictive performance of the proposed 2DCNN model. The ResNet-50 model is pre-trained on the ImageNet data set; the weights of the trained parameters are transferred to our 2DCNN model, which is then fine-tuned with the chest X-ray images for the final classification.
The structure of ResNet-50 has 5 stages, each with a convolution block and an identity block; each convolution block has 3 convolution layers, as does each identity block. ResNet-50 is a variant of the ResNet model with 48 convolution layers along with 1 max-pooling and 1 average-pooling layer. The ResNet-50 model has more than 74,917,380 trainable parameters. The architecture of ResNet-50 is given in Figure 1.
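A hedged sketch of this transfer-learning step (ImageNet-pre-trained ResNet-50 backbone with a new softmax head, fine-tuned end-to-end on chest X-ray images) might look as follows; the pooling/dropout head and the input shape are assumptions rather than the authors' exact configuration.

```python
# Hedged sketch: transfer the ImageNet-pre-trained ResNet-50 weights and
# fine-tune on chest X-ray images. The classification head is an assumption.
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import GlobalAveragePooling2D, Dropout, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import SGD

backbone = ResNet50(weights="imagenet",      # pre-trained on ImageNet
                    include_top=False,
                    input_shape=(224, 224, 3))
backbone.trainable = True                    # fine-tuned with X-ray images

x = GlobalAveragePooling2D()(backbone.output)
x = Dropout(0.5)(x)
outputs = Dense(3, activation="softmax")(x)  # 3 classes (e.g., the CXI set)

tl_model = Model(inputs=backbone.input, outputs=outputs)
tl_model.compile(optimizer=SGD(learning_rate=0.0001),
                 loss="categorical_crossentropy",
                 metrics=["accuracy"])
```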

2.2.3. Cross Validation Criteria

The hold-out cross-validation mechanism is used for model training and validation [5,8]. In this study, the chest X-ray image data sets were divided into 70% for training and 30% for testing of the model in all experiments.
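For illustration, the 70%/30% hold-out split could be reproduced with scikit-learn as in the sketch below; the array names and shapes are placeholders, not the actual data.

```python
# Hedged sketch of the 70%/30% hold-out split; `images` and `labels` are
# placeholder arrays standing in for the chest X-ray images and class labels.
import numpy as np
from sklearn.model_selection import train_test_split

images = np.random.rand(100, 224, 224, 3).astype("float32")  # dummy images
labels = np.random.randint(0, 3, size=100)                   # dummy labels

X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.30, stratify=labels, random_state=42)
print(X_train.shape, X_test.shape)   # 70 training / 30 test samples
```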

2.2.4. Model Assessment Criteria

In this work, the key assessment measures [7] used to evaluate the proposed method are expressed mathematically in Equations (7)–(12), respectively.
$\mathrm{Accuracy}\ (Ac) = \frac{TP + TN}{TP + TN + FP + FN} \times 100 \qquad (7)$
$\mathrm{Recall/Sensitivity}\ (Re/Sn) = \frac{TP}{TP + FN} \times 100 \qquad (8)$
$\mathrm{Specificity}\ (Sp) = \frac{TN}{TN + FP} \times 100 \qquad (9)$
$\mathrm{Precision}\ (Pr) = \frac{TP}{TP + FP} \times 100 \qquad (10)$
$F1\text{-}score\ (F1S) = \frac{2 \times Pr \times Re}{Pr + Re} \qquad (11)$
$\mathrm{Matthews\ correlation\ coefficient}\ (MCC) = \frac{T_1}{\sqrt{T_2 \times T_3 \times T_4 \times T_5}} \times 100 \qquad (12)$
where $T_1 = (TP \times TN - FP \times FN)$, $T_2 = (TP + FP)$, $T_3 = (TP + FN)$, $T_4 = (TN + FP)$, and $T_5 = (TN + FN)$.
AUC: The AUC summarizes the ROC curve of the model; a large AUC value indicates good predictive performance.
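A small helper that evaluates Equations (7)–(12) from the confusion-matrix counts (a sketch for illustration, not the authors' code) is shown below.

```python
# Hedged sketch: compute the assessment measures of Equations (7)-(12)
# from confusion-matrix counts, reported in percent as in the paper.
import math

def classification_metrics(tp, tn, fp, fn):
    accuracy    = (tp + tn) / (tp + tn + fp + fn) * 100
    recall      = tp / (tp + fn) * 100                 # sensitivity / recall
    specificity = tn / (tn + fp) * 100
    precision   = tp / (tp + fp) * 100
    f1_score    = 2 * precision * recall / (precision + recall)
    mcc = ((tp * tn - fp * fn)
           / math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) * 100)
    return {"Ac": accuracy, "Sn/Re": recall, "Sp": specificity,
            "Pr": precision, "F1S": f1_score, "MCC": mcc}

# Example with hypothetical counts:
print(classification_metrics(tp=90, tn=95, fp=5, fn=10))
```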

2.2.5. Proposed Integrated (ResNet-50+2DCNN) Multi Classification (R2DCNNMC) Model for COVID-19 Diagnosis

We designed the 2DCNN model for COVID-19 detection employing chest X-ray image data. To improve the predictive performance of the 2DCNN model, we use data augmentation and transfer learning (TL) techniques, specifically the pre-trained CNN architecture ResNet-50 [37]. The ImageNet data set is employed for pre-training ResNet-50, and the resulting weights (trained parameters) are transferred to initialize the training of our 2DCNN model. The chest X-ray data set is used to fine-tune the 2DCNN model and for its final multi-classification. Thus, an integrated (ResNet-50 + 2DCNN) multi-classification (R2DCNNMC) model for COVID-19 diagnosis is proposed.
A hold-out cross-validation (CV) mechanism is used in the proposed R2DCNNMC model, with 70% of the data used for training and 30% for testing. The integration of transfer learning greatly enhances the predictive performance of the 2DCNN model. The performance of the proposed R2DCNNMC model is evaluated using the evaluation metrics above. The pseudo code of the R2DCNNMC model is given in Algorithm 1, and a flow chart is shown in Figure 2.
Algorithm 1: Proposed R2DCNNMC model for COVID-19 diagnosis.

3. Experiments and Discussion

3.1. Experimental Setup

To implement our proposed R2DCNNMC model, we performed various experiments. Two chest X-ray data sets were used for model validation, and the hold-out cross-validation technique was used for model training and validation. Model assessment measures were computed for model evaluation. In addition, the Stochastic Gradient Descent (SGD) optimization algorithm was used for model optimization. The other parameters, learning rate α (SGD) = 0.0001, epochs = 120, batch size = 100, mini-batch size = 9, outer activation function = Softmax, and inner activation function = ReLU, were used in all experiments. The proposed R2DCNNMC model parameters are defined in Table 2. The hardware setup for all experiments was a laptop with an Intel Core i5 CPU, 64 GB RAM, and a GPU. Python v3.7 was used for simulations, and the proposed model was developed in the Keras framework v2.2.4 with TensorFlow v1.12 as the back end. All experiments were repeated many times to produce stable results.
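With the settings listed above, the training call would look roughly like the following sketch. The model and split arrays refer to the earlier sketches (`tl_model`, `X_train`, `y_train`, `X_test`, `y_test`), and the loss choice is an assumption.

```python
# Hedged sketch of the training configuration in Section 3.1 / Table 2:
# SGD with learning rate 0.0001, 120 epochs, batch size 100. The objects
# tl_model, X_train, y_train, X_test, y_test come from the earlier sketches.
from tensorflow.keras.utils import to_categorical

history = tl_model.fit(
    X_train, to_categorical(y_train, num_classes=3),
    validation_data=(X_test, to_categorical(y_test, num_classes=3)),
    epochs=120,       # epochs reported in Section 3.1
    batch_size=100,   # batch size reported in Section 3.1
)
print(max(history.history["val_accuracy"]))
```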

3.2. Results and Analysis

3.2.1. Pre-Processing of Data

Two data sets are used in this research for the evaluation of the proposed R2DCNNMC model. Before applying these data sets to the model, some pre-processing operations must be performed on both data sets so that the model is suitably trained for effective performance. The COVID-19-Radiography-Dataset (CRD) is a data set of chest X-ray images of COVID-19 positive cases along with Normal, Viral Pneumonia, and Lung Opacity images. This data set includes 3616 COVID-19 positive, 10,192 Normal, 6012 Lung Opacity, and 1345 Viral Pneumonia images, for a total of 21,165 images.
To increase the data set size for effective training of the 2DCNN model, we used a data augmentation technique that augments the original data set through a random transformation (rotation). All images were rotated by an angle of 45 degrees along the X-axis and the augmented images were added to the original data set, so the new data set contains 42,330 images in total. The same data augmentation technique was applied to the second data set, the chest X-ray (COVID-19, Pneumonia) (CXI) data set, which has 6432 chest X-ray images belonging to three classes (COVID19, Normal, PNEUMONIA): 576 COVID19, 1583 Normal, and 4273 PNEUMONIA images, respectively.
After data augmentation, the new CXI data set contains 12,864 images. The proposed model was trained on the original and augmented data sets, respectively, in all experiments. The hold-out cross-validation method was used for the training and validation process because the data sets are now large enough that it does not cause computational complexity problems, and the model fits well and produces high performance. Example images from the CRD and CXI data sets are shown in Figure 3 and Figure 4.
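A minimal sketch of this rotation-based augmentation (rotating every image by 45 degrees and appending the rotated copies, which doubles the data set) could be written as follows; the helper name and the placeholder arrays are illustrative, not the authors' code.

```python
# Hedged sketch: rotate every image by 45 degrees in the image plane and
# append the rotated copies, doubling the data set size as described above.
import numpy as np
from scipy.ndimage import rotate

def augment_with_rotation(images, labels, angle=45):
    rotated = np.stack([
        rotate(img, angle, axes=(0, 1), reshape=False, mode="nearest")
        for img in images
    ])
    return (np.concatenate([images, rotated], axis=0),
            np.concatenate([labels, labels], axis=0))

# Example with placeholder data: 10 RGB chest X-ray-sized images.
imgs = np.random.rand(10, 224, 224, 3).astype("float32")
lbls = np.zeros(10, dtype=int)
aug_imgs, aug_lbls = augment_with_rotation(imgs, lbls)
print(aug_imgs.shape)   # (20, 224, 224, 3)
```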

3.2.2. 2DCNN Model Performance Evaluation on Original and Augmented Data Sets

The predictive performance of the 2DCNN model was evaluated on the original and augmented versions of the two chest X-ray data sets. The 2DCNN model was trained on these data sets along with the other required hyperparameters. The SGD optimization algorithm with a Learning Rate (LR) of 0.0001 was used for model optimization. The number of epochs and the batch size were 100 and 120, respectively, for all experiments. The results are reported in Table 3.
Table 3 presents the performance of the 2DCNN model on the original and augmented COVID-19-Radiography (CRD) chest X-ray data sets. According to Table 3, the 2DCNN model on the original CRD data set achieved 95.20% Accuracy, 97.00% Specificity, 80.25% Sensitivity/Recall, 92.40% Precision, 93.00% MCC, 95.09% F1-score, and 96.00% AUC, respectively. On the augmented CRD data, the 2DCNN model obtained higher performance than on the original data set, achieving 96.00% Accuracy, 96.45% Specificity, 97.00% Sensitivity/Recall, 97.43% Precision, 96.33% MCC, 96.52% F1-score, and 97.23% AUC, respectively. The accuracy of the 2DCNN model thus increased from 95.20% to 96.00% when the model was trained on the augmented CRD data set. Similarly, the AUC value of the 2DCNN model increased from 96.00% to 97.23%. The other evaluation metric values also improved with the data augmentation process.
The performance of the 2DCNN model was also evaluated on the chest X-ray COVID-19 and Pneumonia (CXI) data set. Table 3 shows that the 2DCNN model obtained 97.02% Accuracy, 98.00% Specificity, 99.25% Sensitivity/Recall, 100.00% Precision, 99.26% MCC, 97.00% F1-score, and 99.00% AUC, respectively.
With the augmented CXI data set, the 2DCNN model achieved 97.65% Accuracy, 99.10% Specificity, 97.86% Sensitivity/Recall, 99.80% Precision, 99.87% MCC, 97.73% F1-score, and 99.23% AUC, respectively. Thanks to data augmentation, the training of the 2DCNN was performed effectively, which ultimately increased the model's predictive performance. With the data augmentation process the accuracy increased from 97.02% to 97.65%, which demonstrates that the predictive capability of the model increased with data augmentation. Similarly, the MCC value increased from 99.26% to 99.87%, and the AUC value improved from 99.00% to 99.23%.

3.2.3. ResNet-50 Model Performance Evaluation on Original and Augmented Data Sets

The performance of the ResNet-50 model was evaluated on the original and augmented versions of the two chest X-ray data sets. The ResNet-50 transfer learning CNN model was trained on both types of data sets along with the other required hyperparameters. The SGD optimization algorithm with an LR of 0.0001 was used for model optimization. The number of epochs and the batch size were 100 and 120, respectively, for all experiments. The different evaluation metrics computed to assess model performance are reported in Table 4.
Table 4 shows that the ResNet-50 model on the original CRD data set obtained 94.03% Accuracy, 96.32% Specificity, 83.25% Sensitivity/Recall, 97.10% Precision, 93.50% MCC, 95.09% F1-score, and 94.20% AUC, respectively. On the augmented CRD data set, the ResNet-50 model obtained higher performance than on the original data set, achieving 95.20% Accuracy, 97.00% Specificity, 99.00% Sensitivity/Recall, 88.21% Precision, 96.12% MCC, 97.34% F1-score, and 95.00% AUC, respectively.
The accuracy of the ResNet-50 model increased from 94.03% to 95.20% when the model was trained on the augmented CRD chest X-ray data set. Similarly, the AUC value of the ResNet-50 model increased from 94.20% to 95.00%. The other evaluation metric values also improved with the data augmentation process.
The performance of the ResNet-50 model was also evaluated on the chest X-ray COVID-19 and Pneumonia (CXI) data set. According to Table 4, the ResNet-50 model obtained 92.34% Accuracy, 94.46% Specificity, 97.67% Sensitivity/Recall, 93.00% Precision, 94.98% MCC, 93.00% F1-score, and 92.10% AUC, respectively.
With the augmented CXI data set, the ResNet-50 model achieved 94.87% Accuracy, 95.98% Specificity, 95.51% Sensitivity/Recall, 93.00% Precision, 95.24% MCC, 95.00% F1-score, and 93.19% AUC, respectively. Thanks to data augmentation, the training of ResNet-50 was performed effectively, which ultimately increased the model's predictive performance. With the data augmentation process the accuracy increased from 92.34% to 94.87%, which demonstrates that the predictive capability of the model increased with data augmentation.

3.2.4. R2DCNNMC Performance Evaluation on Original and Augmented Data Sets

The performance of the R2DCNNMC model was evaluated on the original and augmented versions of the two chest X-ray data sets. The R2DCNNMC model was trained on both types of data sets along with the other required hyperparameters. The SGD optimization algorithm with an LR of 0.0001 was used for model optimization. The number of epochs and the batch size were 100 and 120, respectively, for all experiments. For training and validation of the model, 70% and 30% of the data were used, respectively. To evaluate model performance, different assessment measures were computed and are reported in Table 5.
The performance of the R2DCNNMC model on the original and augmented COVID-19-Radiography (CRD) chest X-ray data sets is reported in Table 5. According to Table 5, the R2DCNNMC model on the original CRD data set achieved 97.66% Accuracy, 99.00% Specificity, 89.18% Sensitivity/Recall, 99.10% Precision, 99.30% MCC, 98.00% F1-score, and 97.03% AUC, respectively. On the augmented CRD data, the R2DCNNMC model obtained higher performance than on the original data set, achieving 98.12% Accuracy, 99.28% Specificity, 93.00% Sensitivity/Recall, 99.56% Precision, 99.70% MCC, 98.23% F1-score, and 98.60% AUC, respectively. The accuracy of the R2DCNNMC model increased from 97.66% to 98.12% when the model was trained on the augmented CRD data set. Similarly, the AUC value increased from 97.03% to 98.60%. The other evaluation metric values also improved with the data augmentation process.
The performance of the R2DCNNMC model was also evaluated on the chest X-ray COVID-19 and Pneumonia (CXI) data set. According to Table 5, the R2DCNNMC model obtained 98.17% Accuracy, 100.00% Specificity, 96.25% Sensitivity/Recall, 99.24% Precision, 99.70% MCC, 99.46% F1-score, and 99.23% AUC, respectively. With the augmented CXI data set, the R2DCNNMC model achieved 99.45% Accuracy, 99.63% Specificity, 96.99% Sensitivity/Recall, 100.00% Precision, 99.83% MCC, 99.78% F1-score, and 99.90% AUC, respectively. Thanks to data augmentation, the training of the R2DCNNMC model was performed effectively, which ultimately increased its predictive performance. With the data augmentation process the accuracy increased from 98.17% to 99.45%, which demonstrates that the predictive capability of the model increased with data augmentation. Similarly, the MCC value increased from 99.70% to 99.83%, and the AUC value improved from 99.23% to 99.90%.

3.2.5. Proposed R2DCNNMC Model Performance Comparison with Baseline Methods

In Table 6, we compare the accuracy of the proposed R2DCNNMC model with baseline methods. Table 6 shows that the proposed R2DCNNMC model achieved 98.12% accuracy on the CRD data set, which is higher than the baseline methods. Similarly, the proposed R2DCNNMC model achieved 99.45% accuracy on the CXI data set, which is higher than the baseline models. The excellent predictive performance of the proposed model demonstrates that it correctly detects COVID-19 and can easily be deployed in E-healthcare for COVID-19 diagnosis.

3.2.6. Discussion

COVID-19 is rapidly spreading, and many people are suffering and dying as a result of this global pandemic. Accurate and timely diagnosis is a significant medical challenge for effective COVID-19 control and treatment. Various techniques are used to control and diagnose this disease. Soft computing-based COVID-19 diagnosis methods are widely used, and numerous AI-based methods have been proposed by various researchers. However, these methods continue to suffer from a lack of accuracy in diagnosing COVID-19 patients.
The COVID-19 disease has a significant impact on the human respiratory system, and the lungs lose functionality quickly. Thus, using chest X-ray images to diagnose COVID-19 patients is an appropriate method that clinical professionals typically use. However, due to human error, medical doctors’ interpretation of chest X-ray images to diagnose COVID-19 is insufficiently accurate. As a result, AI-based interpretation methods for distinguishing between normal and COVID-19 patient chest X-ray images are more effective.
Deep-learning-based COVID-19 detection from chest X-ray images is significantly important for the accurate diagnosis of COVID-19. The CNN model has significant applications in medical image classification [34]. The CNN model extracts deep features from image data, and these features support the final classification.
To tackle the problem of accurate COVID-19 diagnosis, in this research study we have proposed a model for COVID-19 diagnosis employing CNN, data augmentation, and transfer learning techniques. The CNN model is used for deep feature extraction and classification, while data augmentation and transfer learning are used to improve its predictive capability. Two COVID-19 chest X-ray image data sets are used to validate the proposed model. These data sets are not sufficient on their own for effective training of the model, so we used the data augmentation [47] technique to increase their size, train the model effectively, and achieve excellent performance. The experimental results show that the proposed model obtained higher performance on both the original and augmented data sets than the baseline methods. The major findings of this study are as follows:
Firstly, the accuracy of the 2DCNN model increased from 95.20% to 96.00% when the model was trained on the augmented CRD data set. Similarly, the AUC value of the 2DCNN model increased from 96.00% to 97.23%. For the 2DCNN model with the augmented CXI data set, the accuracy improved from 97.02% to 97.65%, the MCC value increased from 99.26% to 99.87%, and the AUC value improved from 99.00% to 99.23%. These results demonstrate that the predictive capability of the model increased with data augmentation.
Secondly, when the transfer learning technique was incorporated into the 2DCNN model, the resulting model reached accuracies of 97.66% to 98.12% on the CRD data set and 98.17% to 99.45% on the CXI data set (original and augmented, respectively).
Thirdly, the proposed R2DCNNMC model obtained 98.12% classification accuracy on the CRD data set and 99.45% classification accuracy on the CXI data set, exceeding the baseline methods. Because of the higher predictive performance of the proposed model, we recommend it for accurate diagnosis of COVID-19 in E-healthcare.

4. Conclusions

Deep learning algorithms, particularly convolutional neural networks, are commonly used to analyze medical image data. The accurate diagnosis of COVID-19 is a critical issue, and a new, accurate diagnosis method is urgently needed to address it. Hence, to diagnose COVID-19 accurately, we have proposed the R2DCNNMC model, which is based on deep learning and transfer learning. In designing the proposed model, we used a 2DCNN model for deep feature extraction and classification of chest X-ray image data to recognize COVID-19. Two data sets were used to validate the proposed model. Furthermore, data augmentation techniques were used to increase the data set size for effective training, and cross-validation and model assessment measures were computed for model evaluation.
The experimental results demonstrate that the proposed R2DCNNMC diagnosis model obtained very high performance, with 98.12% classification accuracy on the CRD data set and 99.45% classification accuracy on the CXI data set, compared to baseline methods. We recommend the proposed method for effective COVID-19 identification in E-healthcare because of its high predictive performance. In the future, we will use advanced models of transfer learning, federated learning, and deep learning, as well as other types of data sets, to diagnose COVID-19.

Author Contributions

Conceptualization, A.U.H. and J.P.L.; methodology, A.U.H. and J.P.L.; software, A.U.H. and S.A.; validation, A.U.H. and J.P.L.; formal analysis, A.U.H. and J.P.L.; investigation, A.U.H. and S.K.; resources, A.U.H., J.P.L., S.A. and S.K.; data curation, A.U.H., M.A.A. and R.M.A.; writing—original draft preparation, A.U.H.; writing—review and editing, A.U.H., S.A., M.A.A. and S.K.; visualization, A.U.H.; supervision, J.P.L.; project administration, A.U.H.; funding acquisition, J.P.L. and S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 61370073), the National High Technology Research and Development Program of China (Grant No. 2007AA01Z423), the project of Science and Technology Department of Sichuan Province (Grant No. H04010601W00614016). The authors extend their appreciation to the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University for funding this work through Research Group no. RG-21-07-08.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data sets used in this study are available in public repositories.

Conflicts of Interest

The authors declare that they have no conflict of interest.

Abbreviations

The following section describes the mathematical notations and abbreviations used in this work.
X: Data set
X_train: Chest X-ray training data set
X_test: Chest X-ray test data set
Y: Predicted output class labels
X (ImageNet): ImageNet data set
b: Batch size
α: Learning rate
w: Transfer learning model parameters
θ: COVID-19 classification model parameters
E: Number of epochs
P_test: Performance metrics on the test data set
Ac: Accuracy
Sn: Sensitivity
Sp: Specificity
F1S: F1-Score
Re: Recall
AI: Artificial intelligence
Pr: Precision
CT: Computed tomography
AUC: Area under the curve
CNN: Convolutional neural network
DL: Deep learning
TL: Transfer learning
ResNet-50: Residual Network-50

References

  1. Zhou, L.; Li, Z.; Zhou, J.; Li, H.; Chen, Y.; Huang, Y.; Xie, D.; Zhao, L.; Fan, M.; Hashmi, S.; et al. A rapid, accurate and machine-agnostic segmentation and quantification method for CT-based COVID-19 diagnosis. IEEE Trans. Med. Imaging 2020, 39, 2638–2652. [Google Scholar] [CrossRef] [PubMed]
  2. Li, M.; Lei, P.; Zeng, B.; Li, Z.; Yu, P.; Fan, B.; Wang, C.; Li, Z.; Zhou, J.; Hu, S.; et al. Coronavirus disease (COVID-19): Spectrum of CT findings and temporal progression of the disease. Acad. Radiol. 2020, 27, 603–608. [Google Scholar] [CrossRef] [Green Version]
  3. Franquet, T. Imaging of pneumonia: Trends and algorithms. Eur. Respir. J. 2001, 18, 196–208. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Yasaka, K.; Abe, O. Deep learning and artificial intelligence in radiology: Current applications and future directions. PLoS Med. 2018, 15, e1002707. [Google Scholar] [CrossRef] [Green Version]
  5. Haq, A.U.; Li, J.P.; Saboor, A.; Khan, J.; Zhou, W.; Jiang, T.; Raji, M.F.; Wali, S. 3DCNN: Three-Layers Deep Convolutional Neural Network Architecture for Breast Cancer Detection using Clinical Image Data. In Proceedings of the 2020 17th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), Chengdu, China, 18–20 December 2020; pp. 83–88. [Google Scholar]
  6. Haq, A.U.; Li, J.; Memon, M.H.; Khan, J.; Ud Din, S. A novel integrated diagnosis method for breast cancer detection. J. Intell. Fuzzy Syst. 2020, 38, 2383–2398. [Google Scholar] [CrossRef]
  7. Haq, A.U.; Li, J.P.; Khan, J.; Memon, M.H.; Nazir, S.; Ahmad, S.; Khan, G.A.; Ali, A. Intelligent Machine Learning Approach for Effective Recognition of Diabetes in E-Healthcare Using Clinical Data. Sensors 2020, 20, 2649. [Google Scholar] [CrossRef]
  8. Haq, A.U.; Li, J.P.; Saboor, A.; Khan, J.; Wali, S.; Ahmad, S.; Ali, A.; Khan, G.A.; Zhou, W. Detection of Breast Cancer Through Clinical Data Using Supervised and Unsupervised Feature Selection Techniques. IEEE Access 2021, 9, 22090–22105. [Google Scholar] [CrossRef]
  9. Chhikara, P.; Singh, P.; Gupta, P.; Bhatia, T. Deep convolutional neural network with transfer learning for detecting pneumonia on chest X-rays. In Advances in Bioinformatics, Multimedia, and Electronics Circuits and Signals; Springer: Berlin/Heidelberg, Germany, 2020; pp. 155–168. [Google Scholar]
  10. Kermany, D.; Zhang, K.; Goldbaum, M. Large dataset of labeled optical coherence tomography (oct) and chest X-ray images. Mendeley Data 2018, 3. [Google Scholar] [CrossRef]
  11. Saraiva, A.A.; Ferreira, N.M.F.; de Sousa, L.L.; Costa, N.J.C.; Sousa, J.V.M.; Santos, D.; Valente, A.; Soares, S. Classification of Images of Childhood Pneumonia using Convolutional Neural Networks. In Bioimaging; SCITEPRESS—Science and Technology Publications, Lda: Rome, Italy, 2019; pp. 112–119. [Google Scholar]
  12. Godet, C.; Elsendoorn, A.; Roblot, F. Benefit of CT scanning for assessing pulmonary disease in the immunodepressed patient. Diagn. Interv. Imaging 2012, 93, 425–430. [Google Scholar] [CrossRef] [Green Version]
  13. Garin, N.; Marti, C.; Scheffler, M.; Stirnemann, J.; Prendki, V. Computed tomography scan contribution to the diagnosis of community-acquired pneumonia. Curr. Opin. Pulm. Med. 2019, 25, 242. [Google Scholar] [CrossRef] [PubMed]
  14. Walsh, S.L.; Calandriello, L.; Silva, M.; Sverzellati, N. Deep learning for classifying fibrotic lung disease on high-resolution computed tomography: A case-cohort study. Lancet Respir. Med. 2018, 6, 837–845. [Google Scholar] [CrossRef]
  15. Garin, N.; Marti, C.; Carballo, S.; Darbellay Farhoumand, P.; Montet, X.; Roux, X.; Scheffler, M.; Serratrice, C.; Serratrice, J.; Claessens, Y.E.; et al. Rational use of CT-scan for the diagnosis of pneumonia: Comparative accuracy of different strategies. J. Clin. Med. 2019, 8, 514. [Google Scholar] [CrossRef] [Green Version]
  16. Bhandary, A.; Prabhu, G.A.; Rajinikanth, V.; Thanaraj, K.P.; Satapathy, S.C.; Robbins, D.E.; Shasky, C.; Zhang, Y.D.; Tavares, J.M.R.; Raja, N.S.M. Deep-learning framework to detect lung abnormality–A study with chest X-ray and lung CT scan images. Pattern Recognit. Lett. 2020, 129, 271–278. [Google Scholar] [CrossRef]
  17. Sultan, H.H.; Salem, N.M.; Al-Atabany, W. Multi-classification of brain tumor images using deep neural network. IEEE Access 2019, 7, 69215–69225. [Google Scholar] [CrossRef]
  18. Pereira, S.; Meier, R.; Alves, V.; Reyes, M.; Silva, C.A. Automatic brain tumor grading from MRI data using convolutional neural networks and quality assessment. In Understanding and Interpreting Machine Learning in Medical Image Computing Applications; Springer: Berlin/Heidelberg, Germany, 2018; pp. 106–114. [Google Scholar]
  19. Goodfellow, I.; Bengio, Y.; Courville, A.; Bengio, Y. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; Volume 1. [Google Scholar]
  20. Schwarz, M.; Schulz, H.; Behnke, S. RGB-D object recognition and pose estimation based on pre-trained convolutional neural network features. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 1329–1335. [Google Scholar]
  21. Bai, H.X.; Hsieh, B.; Xiong, Z.; Halsey, K.; Choi, J.W.; Tran, T.M.L.; Pan, I.; Shi, L.B.; Wang, D.C.; Mei, J.; et al. Performance of radiologists in differentiating COVID-19 from non-COVID-19 viral pneumonia at chest CT. Radiology 2020, 296, E46–E54. [Google Scholar] [CrossRef] [PubMed]
  22. Narin, A.; Kaya, C.; Pamuk, Z. Automatic detection of coronavirus disease (covid-19) using X-ray images and deep convolutional neural networks. Pattern Anal. Appl. 2021, 24, 1207–1220. [Google Scholar] [CrossRef]
  23. Wang, S.; Kang, B.; Ma, J.; Zeng, X.; Xiao, M.; Guo, J.; Cai, M.; Yang, J.; Li, Y.; Meng, X.; et al. A deep learning algorithm using CT images to screen for Corona Virus Disease (COVID-19). medRxiv 2020, 14, v5. [Google Scholar] [CrossRef] [PubMed]
  24. Xu, X.; Jiang, X.; Ma, C.; Du, P.; Li, X.; Lv, S.; Yu, L.; Ni, Q.; Chen, Y.; Su, J.; et al. A deep learning system to screen novel coronavirus disease 2019 pneumonia. Engineering 2020, 6, 1122–1129. [Google Scholar] [CrossRef]
  25. Shan, F.; Gao, Y.; Wang, J.; Shi, W.; Shi, N.; Han, M.; Xue, Z.; Shen, D.; Shi, Y. Lung infection quantification of COVID-19 in CT images with deep learning. arXiv 2020, arXiv:2003.04655. [Google Scholar]
  26. Song, Y.; Zheng, S.; Li, L.; Zhang, X.; Zhang, X.; Huang, Z.; Chen, J.; Wang, R.; Zhao, H.; Zha, Y.; et al. Deep learning enables accurate diagnosis of novel coronavirus (COVID-19) with CT images. IEEE/ACM Trans. Comput. Biol. Bioinform. 2021, 18, 2775–2780. [Google Scholar] [CrossRef] [PubMed]
  27. Chowdhury, M.E.; Rahman, T.; Khandakar, A.; Mazhar, R.; Kadir, M.A.; Mahbub, Z.B.; Islam, K.R.; Khan, M.S.; Iqbal, A.; Al Emadi, N.; et al. Can AI help in screening viral and COVID-19 pneumonia? IEEE Access 2020, 8, 132665–132676. [Google Scholar] [CrossRef]
  28. Tawsifur, R.; Khandakar, A.; Qiblawey, Y.; Tahir, A.; Kiranyaz, S.; Abul Kashem, S.B.; Islam, M.T.; Al Maadeed, S.; Zughaier, S.M.; Khan, M.S.; et al. Exploring the effect of image enhancement techniques on COVID-19 detection using chest X-ray images. Comput. Biol. Med. 2021, 132, 104319. [Google Scholar]
  29. Loddo, A.; Pili, F.; Di Ruberto, C. Deep Learning for COVID-19 Diagnosis from CT Images. Appl. Sci. 2021, 11, 8227. [Google Scholar] [CrossRef]
  30. Gunraj, H.; Sabri, A.; Koff, D.; Wong, A. COVID-Net CT-2: Enhanced Deep Neural Networks for Detection of COVID-19 from Chest CT Images Through Bigger, More Diverse Learning. arXiv 2021, arXiv:2101.07433. [Google Scholar]
  31. Hu, S.; Gao, Y.; Niu, Z.; Jiang, Y.; Li, L.; Xiao, X.; Wang, M.; Fang, E.F.; Menpes-Smith, W.; Xia, J.; et al. Weakly supervised deep learning for covid-19 infection detection and classification from ct images. IEEE Access 2020, 8, 118869–118883. [Google Scholar] [CrossRef]
  32. Khalifa, N.E.M.; Taha, M.H.N.; Hassanien, A.E.; Elghamrawy, S. Detection of coronavirus (COVID-19) associated pneumonia based on generative adversarial networks and a fine-tuned deep transfer learning model using chest X-ray dataset. arXiv 2020, arXiv:2004.01184. [Google Scholar]
  33. Wang, L.; Lin, Z.Q.; Wong, A. Covid-net: A tailored deep convolutional neural network design for detection of covid-19 cases from chest X-ray images. Sci. Rep. 2020, 10, 1–12. [Google Scholar]
  34. Cai, J.; Lu, L.; Xie, Y.; Xing, F.; Yang, L. Improving deep pancreas segmentation in CT and MRI images via recurrent neural contextual learning and direct loss function. arXiv 2017, arXiv:1707.04912. [Google Scholar]
  35. Ibrahim, D.M.; Elshennawy, N.M.; Sarhan, A.M. Deep-chest: Multi-classification deep learning model for diagnosing COVID-19, pneumonia, and lung cancer chest diseases. Comput. Biol. Med. 2021, 132, 104348. [Google Scholar] [CrossRef]
  36. Bickel, S. ECML-PKDD Discovery Challenge 2006 Overview. Available online: https://www.cs.waikato.ac.nz/ml/publications/2006/discovery_challenge_proceedings2006.pdf#page=5 (accessed on 24 November 2021).
  37. Ray, S. Disease classification within dermascopic images using features extracted by resnet50 and classification through deep forest. arXiv 2018, arXiv:1807.05711. [Google Scholar]
  38. Hassantabar, S.; Ahmadi, M.; Sharifi, A. Diagnosis and detection of infected tissue of COVID-19 patients based on lung X-ray image using convolutional neural network approaches. Chaos Solitons Fractals 2020, 140, 110170. [Google Scholar] [CrossRef]
  39. Zhang, J.; Xie, Y.; Pang, G.; Liao, Z.; Verjans, J.; Li, W.; Sun, Z.; He, J.; Li, Y.; Shen, C.; et al. Viral pneumonia screening on chest X-ray images using confidence-aware anomaly detection. arXiv 2020, arXiv:2003.12338. [Google Scholar]
  40. Hall, L.O.; Paul, R.; Goldgof, D.B.; Goldgof, G.M. Finding covid-19 from chest X-rays using deep learning on a small dataset. arXiv 2020, arXiv:2004.02060. [Google Scholar]
  41. Hammoudi, K.; Benhabiles, H.; Melkemi, M.; Dornaika, F.; Arganda-Carreras, I.; Collard, D.; Scherpereel, A. Deep learning on chest X-ray images to detect and evaluate pneumonia cases at the era of covid-19. J. Med. Syst. 2021, 45, 1–10. [Google Scholar] [CrossRef]
  42. Farooq, M.; Hafeez, A. Covid-resnet: A deep learning framework for screening of covid19 from radiographs. arXiv 2020, arXiv:2003.14395. [Google Scholar]
  43. Albahli, S. Efficient GAN-based Chest Radiographs (CXR) augmentation to diagnose coronavirus disease pneumonia. Int. J. Med. Sci. 2020, 17, 1439. [Google Scholar] [CrossRef] [PubMed]
  44. Hemdan, E.E.D.; Shouman, M.A.; Karar, M.E. Covidx-net: A framework of deep learning classifiers to diagnose covid-19 in X-ray images. arXiv 2020, arXiv:2003.11055. [Google Scholar]
  45. Perumal, V.; Narayanan, V.; Rajasekar, S.J.S. Detection of COVID-19 using CXR and CT images using transfer learning and Haralick features. Appl. Intell. 2021, 51, 341–358. [Google Scholar] [CrossRef]
  46. Li, L.; Qin, L.; Xu, Z.; Yin, Y.; Wang, X.; Kong, B.; Bai, J.; Lu, Y.; Fang, Z.; Song, Q.; et al. Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT. Radiology 2020, 296, E65–E71. [Google Scholar] [CrossRef]
  47. Wong, S.; Gatt, A.; Stamatescu, V.; McDonnell, M.D. Understanding Data Augmentation for Classification: When to Warp? In Proceedings of the 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Gold Coast, QLD, Australia, 30 November–2 December 2016; pp. 1–6. [Google Scholar]
Figure 1. ResNet-50 architecture.
Figure 2. Proposed R2DCNNMC model for COVID-19 diagnosis.
Figure 3. Types of chest X-ray images in CRD data set.
Figure 4. Types of chest X-ray images in CXI data set.
Table 1. 2DCNN model structure for multi-classification.
Number | Layer (Name)
1 | Conv2D (64, (7, 2))
2 | Activation ('ReLU')
3 | MaxPool2D (pool-size = (2, 2))
4 | Conv2D (64, (3, 3))
5 | Activation ('ReLU')
6 | MaxPool2D (pool-size = (2, 2))
7 | Conv2D (64, (3, 3))
8 | Activation ('ReLU')
9 | MaxPool2D (pool-size = (2, 2))
10 | Flatten ()
11 | Dense (64)
12 | Activation ('ReLU')
13 | Dropout (0.5)
14 | Dense (3)
15 | Activation ('Softmax')
Table 2. R2DCNNMC model parameters.
Parameter | Value
Optimizer | SGD
Learning rate α | 0.0001
Number of epochs | 100
Batch size | 100
Mini-batch size | 9
Training data set | 70%
Validation data set | 30%
Table 3. 2DCNN model performance evaluation on original and augmented CRD and CXI data sets.
Data Set | Optimizer | LR | Ac (%) | Sp (%) | Sn/Re (%) | Pr (%) | MCC (%) | F1S (%) | AUC (%)
CRD original | SGD | 0.0001 | 95.20 | 97.00 | 80.25 | 92.40 | 93.00 | 95.09 | 96.00
CRD augmented | - | - | 96.00 | 96.45 | 97.00 | 97.43 | 96.33 | 96.52 | 97.23
CXI original | - | - | 97.02 | 98.00 | 99.25 | 100.00 | 99.26 | 97.00 | 99.00
CXI augmented | - | - | 97.65 | 99.10 | 97.86 | 99.80 | 99.87 | 97.73 | 99.23
Table 4. ResNet-50 model performance evaluation on original and augmented CRD and CXI data sets.
Data Set | Optimizer | LR | Ac (%) | Sp (%) | Sn/Re (%) | Pr (%) | MCC (%) | F1S (%) | AUC (%)
CRD original | SGD | 0.0001 | 94.03 | 96.32 | 83.25 | 97.10 | 93.50 | 95.00 | 94.20
CRD augmented | - | - | 95.20 | 97.00 | 99.00 | 88.21 | 96.12 | 97.34 | 95.00
CXI original | - | - | 92.34 | 94.46 | 97.67 | 98.00 | 94.98 | 93.00 | 92.10
CXI augmented | - | - | 94.87 | 95.98 | 95.51 | 93.00 | 95.24 | 95.00 | 93.19
Table 5. R2DCNNMC model performance evaluation on original and augmented CRD and CXI data sets.
Data Set | Optimizer | LR | Ac (%) | Sp (%) | Sn/Re (%) | Pr (%) | MCC (%) | F1S (%) | AUC (%)
CRD original | SGD | 0.0001 | 97.66 | 99.00 | 89.18 | 99.10 | 99.30 | 98.00 | 97.03
CRD augmented | - | - | 98.12 | 99.28 | 93.00 | 99.56 | 99.70 | 98.23 | 98.60
CXI original | - | - | 98.17 | 100.00 | 96.25 | 99.24 | 99.70 | 99.46 | 99.23
CXI augmented | - | - | 99.45 | 99.63 | 96.99 | 100.00 | 99.83 | 99.78 | 99.90
Table 6. R2DCNNMC model accuracy comparison with baseline methods.
Method | Accuracy (%) | Reference
ResNet + SVM | 95.38 | [38]
GAN + Resnet18 | 99 | [32]
VGG-16 + CNN | 91.24 | [39]
TLRV1 | 94.4 | [40]
DTL | 95.72 | [41]
ResNet-50 | 96.23 | [42]
COVID-Net-TM | 92.4 | [33]
DRE-Net | 86 | [43]
COVIDx-Net | 90 | [44]
TM | 93.3 | [33]
TL | 93 | [45]
COVID-Net CT-2 | 98.1 | [30]
DarkCovidNet | 90.8 | [32]
ResNet50 | 90 | [46]
VGG19 + CNN | 98.05 | [35]
VGG-19 | 98.87 | [29]
Proposed method (R2DCNNMC), CRD data set | 98.12 | 2021
Proposed method (R2DCNNMC), CXI data set | 99.45 | 2021
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
