Abstract
Accurate COVID-19 detection is one of the most challenging research problems in today's healthcare industry in the effort to combat the coronavirus pandemic. Because of its low infection miss rate and high sensitivity, chest computed tomography (CT) imaging has been recommended as a viable technique for COVID-19 diagnosis in a number of recent clinical investigations. This article presents an Internet of Medical Things (IoMT)-based platform for improving and speeding up COVID-19 identification. Clinical devices are connected to network resources in the suggested IoMT platform using cloud computing. The method enables patients and healthcare experts to work together in real time to diagnose and treat COVID-19, potentially saving time and effort for both patients and physicians. In this paper, we introduce a technique for classifying chest CT scan images into COVID, pneumonia, and normal classes that uses a Sugeno fuzzy integral ensemble across three transfer learning models, namely SqueezeNet, DenseNet-201, and MobileNetV2. The suggested fuzzy ensemble techniques outperform each individual transfer learning methodology as well as trainable ensemble strategies in terms of accuracy. The suggested MobileNetV2 fused with Sugeno fuzzy integral ensemble model has a 99.15% accuracy rate. In the present research, this framework was utilized to identify COVID-19, but it may also be implemented and used for medical imaging analyses of other disorders.
Introduction
In several countries, IoMT has been used in parallel with other techniques to restrict the spread of COVID-19, enhance the safety of front-line staff, lower the impact of the illness on human lives, and reduce fatality rates. Specifically, the extensive adoption of IoMT in healthcare facilities may assist in gathering huge volumes of medical and healthcare data, which can be utilized by medical practitioners to detect and identify disorders and subsequently propose suitable therapies. Patients may report their health status to an IoMT ecosystem through the Internet on a regular basis, and the data is shared with nearby clinics and municipal health agencies [1]. Hospitals can provide eHealth therapies based on the client's health status, and the authorities can provide equipment and allocate quarantine venues such as theaters and hotels. Users may monitor their illness severity and get appropriate medical care with the help of an IoMT platform's deployment. It reduces national health expenditures, relieves strain on medical equipment, and provides a complete database that allows the government to prevent disease spread, allocate resources, and implement rapid regulation [2].
This research presents an IoMT-based platform to allow speedy and safe detection of COVID-19. The potential of IoMT includes more accurate diagnosis, fewer errors, and reduced costs of treatment. Paired with smartphone apps, the technology enables users to communicate their healthcare information to physicians to better monitor diseases and track and avoid chronic illnesses [3]. This paper describes a COVID-19 diagnostic system based on IoMT that uses several transfer learning methods. First, patient data is collected via IoT devices and then sent to the cloud, where image augmentation and preprocessing are performed.
COVID-19 has spread over the globe, and the identification of COVID-19 using CT scans of the lungs has been clinically confirmed. Due to its capacity to reveal lung structures, radiological imaging using computed tomography (CT) has emerged as a possible alternative approach to diagnosis. According to relevant research [4], CT scans of the lungs may play a role in the early diagnosis of COVID-19. CT image interpretation and diagnosis is a very complex procedure that requires physicians' professional expertise and experience; manual reading is labor-intensive and time-consuming. Because the RT-PCR test has poor sensitivity and a high false-negative rate [5], many COVID-19-positive individuals are mistakenly identified as negative. Several deep learning-based techniques for automating the identification of COVID-19 infection from lung CT scan images have been developed recently. However, the majority of them depend on a single model's forecast for the ultimate judgment, which may or may not be correct. In this article, we employ the Sugeno fuzzy integral technique, which combines the strengths of multiple transfer learning models before making a final judgment. Using lung CT scan images, we train several transfer learning models, namely SqueezeNet, DenseNet-201, and MobileNetV2. These trained models are then combined to form a powerful ensemble classifier, which provides the final prediction.
The article is organized as follows. Section 2 provides a literature review, Sect. 3 describes the materials and methods, and Sect. 4 ("Result Analysis") contains the experimental data and analyses. The conclusion and future work are presented in the concluding Sect. 5.
Literature Review
In the majority of investigations, CT scans were used as the imaging modality for COVID-19 diagnosis. In this sense, this section reviews a number of studies on different transfer learning algorithms for detecting coronavirus using CT imaging. InceptionNet was used by Wang et al. [6] to detect abnormalities linked with COVID-19 in lung CT scan images. InceptionNet was tested on 1065 CT images and found 325 COVID-19-positive patients with an accuracy of 85.20%. Zhao et al. [7] developed a dataset called COVID-CT consisting of 349 COVID-19-positive and 397 negative CT scans available for academic usage. Using 3D CT scans, Zheng et al. [8] proposed a weakly supervised DL technique for diagnosing COVID-19 patients. They used a pre-trained UNet technique to segment 3D lung images and achieved a 95.9% accuracy rate. In CT scan imaging, Xu et al. [9] used 3D CNN models to distinguish coronavirus infection from influenza. The authors employed the ResNet CNN model and achieved an accuracy of 86.70%. Chen et al. [10] used the UNet design to find coronavirus pneumonia. The authors trained their model on 106 examples and achieved a classification accuracy of 98.85%. COVID-19 detection neural network (COVNet) was developed by Li et al. [11] to recover features from chest CT scans for detection of coronavirus infection in patients, with a 95% accuracy. Angelov et al. [12] achieved 94.96% accuracy on a SARS-CoV-2 CT scan dataset using VGG16, but only for models trained on unaugmented imagery. Shah et al. [13] employed the VGG-19 model and attained an accuracy of 94.52%. Perumal et al. [14] suggested AlexNet paired with an SVM model to identify COVID-19 using chest CT scan images with an accuracy of 96.69%. Further, several studies [15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36, 46] employed multiple transfer learning models to identify COVID-19 patients.
According to a study of related literature, few researchers have worked on fuzzy ensemble techniques linked with transfer learning models. The fundamental benefit of ensemble learning is that it evaluates and integrates the choices of multiple models rather than depending on a single classifier [37]. An ensemble will be successful only if the individual classifiers exhibit diversity while producing their predictions. Merging deep transfer learning models with fuzzy ensemble techniques may boost the accuracy and robustness of a detection system. The originality and key contributions of our work are as follows.
1. The current study leverages the ensemble learning approach for classifier fusion.

2. We have used the SqueezeNet, DenseNet-201, and MobileNetV2 transfer learning models as foundation models. To aggregate the base models' predictions and to provide superior outcomes, we employed trainable ensemble and Sugeno fuzzy integral ensemble techniques.

3. In this work, the Sugeno fuzzy integral was used to assemble the aforesaid classifiers to overcome the drawbacks of using the simple fusion technique.
Fuzzy integrals [38] are efficient aggregators that use the level of unpredictability in evaluation scores as extra information for classifier fusion. A fuzzy integral may be considered a generalization of aggregation procedures over a collection of confidence scores, with some weighting assigned to each source of data, referred to as fuzzy measures. Unlike the fundamental fusion techniques in the literature, which use fixed pre-determined weights, the Sugeno fuzzy integral uses the confidence of the base learners' predictions to assign adaptive weights to each input to the model.
Materials and Methods
Proposed Framework
Figure 1 depicts the suggested IoMT-based framework for COVID-19 classification from CT images. Smart IoT sensors are used to send CT scan images; any of these sensors may be integrated into the patient's environment, and they can also communicate with other IoT devices. Short-range networking equipment such as Bluetooth and Zigbee constitutes the local area network (LAN), which communicates the signals obtained from the intelligent IoT sensors and devices to the next layer, called the hosting layer. Various intelligent devices, such as portable multimedia devices or computers that can store and transfer signals, constitute the hosting layer. The intelligent devices are linked to a wide area network (WAN), which sends data from the devices to the cloud. The WAN layer sends information to the cloud in real time via dedicated networks such as cellular 4G or 5G. Patient information is authenticated and sent to the transfer learning cognitive module via the cloud manager.
The hosting layer supports smart devices such as tablets, mobiles, human digital assistants, and laptops. These gadgets have specific programs that compute the signals received and save data locally. These compact processing tools allow users to acquire generic and provisional health assessments. The data is sent to the cloud processing unit over the WAN layer. A cloud manager and a transfer learning cognitive module make up the cloud layer. The cloud manager is in charge of data flow and ensures that all authentication mechanisms are in place to ensure that all intelligent city actors' identities are verified. After patient verification, the transfer learning cognitive module analyzes the data and evaluates the patient's condition. It makes informed judgments based on CT scan images to detect COVID-19. Finally, medical specialists will analyze the data and keep track of the patients. Potential care may be assessed if the patient requires emergency treatment.
The cognitive system checks the patient's state and sends the CT scan image to the cloud, where the transfer learning cognitive module may evaluate it. Using chest CT scan images, the suggested automated framework achieves excellent classification sensitivity and accuracy while also being a considerably quicker technique. The suggested technique may be utilized with fresh test images being sent through the model to provide ensemble predictions. The complementary nature of the information obtained by the multiple transfer learning models is demonstrated by examining the statistical dissimilarities between the decision values of each transfer learning model. The transfer learning classifiers are combined using the trainable ensemble and the Sugeno fuzzy integral. To extract features and classify them accurately, the CT scan images in this study must be compatible with the pre-trained transfer learning models. As a result, the first step is to resize the images to the required input dimensions, which is the standard practice for neural networks. COVID-19 is detected by the transfer learning module, which then returns the three-class classification results. The cognitive system anticipates future tasks based on these findings, which are shared with health specialists in the form of medical reports for a full examination. In the case of an emergency, the cognitive gadget generates alerts and messages, and a smart ambulance can quickly identify and reach the patient. The smart traffic technology also enables emergency vehicles to get to their destination in the quickest possible time by using the shortest route. The cognitive module, consisting of transfer learning models fused with the fuzzy ensemble technique, is depicted in Fig. 2. A 224 × 224 × 3 input image was used, as well as a dense layer with 4096 nodes and a softmax layer with three output nodes.
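As a rough illustration of the classification head just described (a 224 × 224 × 3 input, a pre-trained backbone, a 4096-node dense layer, and a three-way softmax), the sketch below builds such a model in Keras. The choice of MobileNetV2 as the example backbone and the global-average-pooling step are assumptions; the authors' exact layer arrangement may differ.

```python
# Hedged sketch: pre-trained backbone + 4096-node dense layer + 3-way softmax
# (COVID, pneumonia, normal). Pooling choice and backbone are assumptions.
import tensorflow as tf

def build_classifier(backbone: tf.keras.Model, num_classes: int = 3) -> tf.keras.Model:
    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = backbone(inputs, training=False)                  # frozen feature extractor
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dense(4096, activation="relu")(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

backbone = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
backbone.trainable = False
model = build_classifier(backbone)
```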
Transfer Learning Models
SqueezeNet SqueezeNet is a CNN that uses design tactics to minimize the number of parameters, particularly via the use of fire modules, which squeeze parameters using 1 × 1 convolutions. It is an 18-layer deep CNN. With 50 times fewer parameters, it can attain an accuracy equivalent to AlexNet [39]. SqueezeNet's fire module is the critical component for efficiently reducing the number of parameters. SqueezeNet starts with a single convolution layer, followed by eight fire modules and a final convolution layer. The number of filters per fire module increases progressively from the beginning to the end of the network. After layers conv1, fire4, fire8, and conv10, SqueezeNet executes max-pooling with a stride of 2; these pooling locations are placed rather late (Fig. 3).
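A minimal sketch of a fire module is shown below, assuming the standard structure of a 1 × 1 squeeze convolution followed by parallel 1 × 1 and 3 × 3 expand convolutions whose outputs are concatenated; the filter counts are illustrative, not taken from the paper.

```python
# Sketch of a SqueezeNet fire module: squeeze (1x1), then parallel
# 1x1 and 3x3 expand convolutions, concatenated along channels.
import tensorflow as tf

def fire_module(x, squeeze_filters: int, expand_filters: int):
    s = tf.keras.layers.Conv2D(squeeze_filters, 1, activation="relu", padding="same")(x)
    e1 = tf.keras.layers.Conv2D(expand_filters, 1, activation="relu", padding="same")(s)
    e3 = tf.keras.layers.Conv2D(expand_filters, 3, activation="relu", padding="same")(s)
    return tf.keras.layers.Concatenate()([e1, e3])

inputs = tf.keras.Input(shape=(56, 56, 96))
outputs = fire_module(inputs, squeeze_filters=16, expand_filters=64)
print(tf.keras.Model(inputs, outputs).output_shape)  # (None, 56, 56, 128)
```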
DenseNet-201 The DenseNet-201 is a 201-layer CNN. DenseNet-201 utilizes a condensed network, making it simple to train and highly parameter-efficient [40]. This is due to the possibility of feature reuse by multiple layers, which increases variation in the input to subsequent layers and improves performance. DenseNet-201 has performed well on a variety of datasets, including ImageNet and CIFAR-100. In DenseNet-201, the network links each layer to every subsequent layer in a feed-forward fashion. Each subsequent layer uses the feature maps from the previous layers as input, so each layer receives cumulative knowledge from all the layers before it. The DenseNet has the benefit of reducing the vanishing-gradient problem since it incorporates feature maps from all previous layers. It improves feature propagation while reducing the number of parameters (Fig. 4).
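The connectivity pattern can be sketched as follows: each new layer's output is concatenated with all earlier feature maps before being passed on. The growth rate and block depth below are illustrative assumptions, not DenseNet-201's actual configuration.

```python
# Sketch of DenseNet-style dense connectivity: every layer's output is
# concatenated with all previous feature maps (feature reuse).
import tensorflow as tf

def dense_block(x, num_layers: int = 4, growth_rate: int = 32):
    for _ in range(num_layers):
        y = tf.keras.layers.BatchNormalization()(x)
        y = tf.keras.layers.Activation("relu")(y)
        y = tf.keras.layers.Conv2D(growth_rate, 3, padding="same")(y)
        x = tf.keras.layers.Concatenate()([x, y])   # reuse all previous feature maps
    return x

inputs = tf.keras.Input(shape=(28, 28, 64))
print(tf.keras.Model(inputs, dense_block(inputs)).output_shape)  # (None, 28, 28, 192)
```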
MobileNetV2 One of the most lightweight network architectures is MobileNetV2. The inverted residual with linear bottleneck layer module in MobileNetV2 greatly minimizes the amount of memory required for processing. MobileNetV2 expands on MobileNetV1's concepts by using depth-wise separable convolution as an efficient building element. The MobileNetV2 design [41] has two kinds of blocks: a residual block with a stride of 1 and a downsizing block with a stride of 2. Both kinds of blocks feature three layers, with the first layer consisting of a 1 × 1 convolution with ReLU6, the second layer containing a depth-wise convolution, and the third layer containing another 1 × 1 convolution without non-linearity. The input, operator, and output of each layer are depicted in Fig. 5.
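The three-layer block described above can be sketched as follows, assuming the standard inverted residual structure with a linear 1 × 1 projection; the channel counts and expansion factor are illustrative.

```python
# Sketch of MobileNetV2's inverted residual block with a linear bottleneck:
# 1x1 expansion with ReLU6, depthwise 3x3 convolution with ReLU6, then a
# 1x1 projection without non-linearity. Skip connection only when stride == 1
# and input/output channels match.
import tensorflow as tf

def inverted_residual(x, out_channels: int, stride: int = 1, expansion: int = 6):
    in_channels = x.shape[-1]
    y = tf.keras.layers.Conv2D(expansion * in_channels, 1, padding="same")(x)
    y = tf.keras.layers.ReLU(6.0)(y)
    y = tf.keras.layers.DepthwiseConv2D(3, strides=stride, padding="same")(y)
    y = tf.keras.layers.ReLU(6.0)(y)
    y = tf.keras.layers.Conv2D(out_channels, 1, padding="same")(y)   # linear bottleneck
    if stride == 1 and in_channels == out_channels:
        y = tf.keras.layers.Add()([x, y])
    return y

inputs = tf.keras.Input(shape=(56, 56, 24))
print(tf.keras.Model(inputs, inverted_residual(inputs, 24)).output_shape)
```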
Data Set
The China National Center for Bioinformation provided the chest CT imaging dataset that we utilized in our investigation. The images are labeled into three groups. The collection contains CT images of 999 COVID-19 patients, 1468 pneumonia patients, and 1687 healthy individuals. From the whole dataset, we utilized an equal number of pneumonia, COVID-19-positive, and normal CT scans, with 650 images in each class. Each CT image provided to the system is 224 × 224 × 3 pixels in size. The deep learning models were developed using TensorFlow and Keras. The data was split into a 70% training and 30% test ratio, with the same groups being used for all models. Selected layers of the pre-trained models were utilized to extract features from the training images. The pre-trained models were able to classify lung CT images based on the class labels given to the training dataset. In this paper, we have used a balanced dataset; the learning curve is shown in Fig. 6.
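A hedged sketch of this data preparation is given below: images resized to 224 × 224 and split 70/30 into training and test sets with a fixed seed so the same partition is reused for every model. The directory layout and folder names are assumptions for illustration only.

```python
# Hypothetical data loading: ct_dataset/ is assumed to contain covid/,
# pneumonia/, and normal/ subfolders with 650 images each.
import tensorflow as tf

data_dir = "ct_dataset"   # assumed directory layout
train_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir, validation_split=0.3, subset="training", seed=42,
    image_size=(224, 224), batch_size=32)
test_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir, validation_split=0.3, subset="validation", seed=42,
    image_size=(224, 224), batch_size=32)
```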
Experimental Environment
The suggested architecture was developed in Python 3.6 with PyCharm in a Windows 10 environment, employing several AI and image-processing packages to improve training efficiency and speed. NumPy, SciPy, OpenCV, and fastai libraries were used in our testing setup, which was accelerated by a GPU with 8 GB of dedicated memory running the Keras deep learning framework backend. We utilized 25 epochs and a learning rate of 0.0001 for training, both small enough to prevent overfitting the transfer learning models. We utilized the SGD optimizer for compilation and, after extracting features from the pre-trained models, employed dense layers with 4096 neurons each as part of the classifier, using the Rectified Linear Unit (ReLU) as the activation function. The accuracy curves produced by the three models after training are presented in Fig. 7.
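The reported settings translate into a compile-and-fit step roughly like the one below. It reuses the `model`, `train_ds`, and `test_ds` names from the earlier sketches and is an assumption about the training loop, not the authors' exact script.

```python
# Illustrative training configuration matching the stated hyper-parameters:
# SGD optimizer, learning rate 1e-4, 25 epochs. `model`, `train_ds`, and
# `test_ds` come from the sketches above (assumed names).
import tensorflow as tf

model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy",  # integer class labels
              metrics=["accuracy"])
history = model.fit(train_ds, validation_data=test_ds, epochs=25)
```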
Ensemble Technologies
Ensemble learning is a method of combining the most important characteristics of two or more base learners. Because ensembling minimizes the variation in the prediction errors, such a framework performs better than its component models.
Trainable ensemble In this scenario, we utilized a separate classifier to combine the scores of all the base models. All of the base models' class scores were flattened sample-wise into a single feature vector. We then chose the classifier that provided the best results when the flattened training and test score vectors were evaluated against the corresponding labels; here, we used a multilayer perceptron (MLP). Nodes are the primary components of a neural network (NN), and these nodes are typically grouped together as layers. In an NN, data is passed from one layer to the next. The flow of information in a feed-forward neural network (FNN) [42] is fixed in one direction, from the input layer to the output layer. Assume the preceding layer has \(m\) nodes, with each node \(i\) forwarding the value \({x}_{i}\) to a specific node \(j\) in the current layer. The output \(z\) of node \(j\) will then be

$$z=\phi \left(\sum_{i=1}^{m}{w}_{i}{x}_{i}\right),$$

where \({w}_{i}\) is the weight allocated to the route from node \(i\) to node \(j\), and \(\phi \) is the activation function of the current layer. The basic goal of FNN training is to learn the weights \({w}_{i}\).
In any MLP [43] design, the FNN must include at least one hidden layer between the input and output layers. We used a basic MLP architecture with just one hidden layer of 16 units in our research. The MLP took a total of \(k\cdot l\) score features as inputs (for \(k\) classes and \(l\) base models) and returned \(k\) score values for all classes, allowing the ensemble model to utilize the deep learning network as an aggregator, as sketched below.
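The sketch below illustrates this trainable fusion: the per-sample class scores of the three base models (k = 3 classes, l = 3 models) are flattened into a nine-dimensional feature vector and fed to an MLP with a single 16-unit hidden layer. The use of scikit-learn and the placeholder scores are assumptions for brevity; the original work may differ in framework and hyper-parameters.

```python
# Trainable score-fusion sketch: flatten base-model class scores and fit
# an MLP with one 16-unit hidden layer as the aggregator.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_score_fusion(scores_per_model, labels):
    # scores_per_model: list of (n_samples, n_classes) softmax outputs
    X = np.concatenate(scores_per_model, axis=1)   # shape (n_samples, k * l)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
    clf.fit(X, labels)
    return clf

# Example with random placeholder scores for 10 samples and 3 base models
rng = np.random.default_rng(0)
scores = [rng.dirichlet(np.ones(3), size=10) for _ in range(3)]
fusion_clf = train_score_fusion(scores, rng.integers(0, 3, size=10))
```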
Sugeno fuzzy integral Tahani and Keller [44] introduced the Sugeno fuzzy-λ measure. Assume a collection of information sources (here, the base classifiers) \(D=\left\{{d}_{1},{d}_{2},{d}_{3},\dots ,{d}_{M}\right\}\), where \(M\) is the number of information sources (in this example, M = 3). The Sugeno-λ measure is a function \({f}_{\uplambda }: {2}^{D}\to [0, 1]\) that meets the following criteria:

1. \({f}_{\uplambda }\left(D\right)=1.\)

2. If \({e}_{i}\cap {e}_{j}=\varnothing \), then

$${f}_{\uplambda }\left({e}_{i}\cup {e}_{j}\right)={f}_{\uplambda }\left({e}_{i}\right)+{f}_{\uplambda }\left({e}_{j}\right)+\uplambda {f}_{\uplambda }\left({e}_{i}\right){f}_{\uplambda }\left({e}_{j}\right),$$ (1)

where Eq. (1) holds if and only if \(\uplambda >-1\).

As a result, λ is the real root (greater than −1) of Eq. (2):

$$\uplambda +1=\prod_{i=1}^{M}\left(1+\uplambda \, {f}_{\uplambda }\left(\left\{{d}_{i}\right\}\right)\right).$$ (2)

The Sugeno integral [45] is defined as follows. If \(\left(Y,\uppsi \right)\) is a measurable (Borel) space and \(f: Y\to [0, 1]\) is a ψ-measurable function, Eq. (3) gives the Sugeno integral of \(f\) with respect to the fuzzy measure ψ:

$$\int f\left(y\right)\mathrm{d\psi }=\underset{1\le i\le n}{\mathrm{max}}\left(\mathrm{min}\left(f\left({y}_{i}\right),\uppsi \left({Z}_{i}\right)\right)\right),$$ (3)

where \(\psi \left({Z}_{i}\right)=\psi \left(\left\{{y}_{i}, {y}_{i+1}, {y}_{i+2}, \dots , {y}_{n}\right\}\right)\) and the values \(\left\{f({y}_{1}), f({y}_{2}),\dots ,f({y}_{n})\right\}\) are ordered such that \(f({y}_{1})\le f({y}_{2})\le \dots \le f({y}_{n})\). The detailed Sugeno integral algorithm is mentioned below.
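The original algorithm listing is not reproduced here; as a stand-in, the following is a minimal sketch of Sugeno-integral score fusion under the assumption of equal fuzzy densities for the three base classifiers. The root-finding for Eq. (2) and the final normalization are illustrative choices, not necessarily the authors' implementation.

```python
# Minimal sketch of Sugeno-integral fusion for one sample's class scores.
# Densities, the Eq. (2) root-finding bracket, and the normalization are
# assumptions for illustration.
import numpy as np
from scipy.optimize import brentq

def solve_lambda(densities, tol=1e-9):
    """Real root lam > -1 of Eq. (2): lam + 1 = prod(1 + lam * g_i)."""
    s = densities.sum()
    if abs(s - 1.0) < tol:
        return 0.0                                    # densities already sum to 1
    f = lambda lam: np.prod(1.0 + lam * densities) - (lam + 1.0)
    bracket = (tol, 1e6) if s < 1.0 else (-1.0 + tol, -tol)
    return brentq(f, *bracket)

def sugeno_fuse(scores, densities):
    """scores: (n_models, n_classes) softmax outputs; densities: fuzzy densities g_i."""
    densities = np.asarray(densities, dtype=float)
    lam = solve_lambda(densities)
    fused = np.zeros(scores.shape[1])
    for c in range(scores.shape[1]):
        order = np.argsort(scores[:, c])              # ascending f(y_1) <= ... <= f(y_n)
        g, vals = 0.0, []
        for i in order[::-1]:                         # build measures of nested sets Z_i
            g = densities[i] + g + lam * densities[i] * g
            vals.append(min(scores[i, c], g))         # min(f(y_i), psi(Z_i))
        fused[c] = max(vals)                          # Sugeno integral, Eq. (3)
    return fused / fused.sum()                        # normalize to a distribution

scores = np.array([[0.70, 0.20, 0.10],                # e.g., SqueezeNet
                   [0.60, 0.30, 0.10],                # e.g., DenseNet-201
                   [0.80, 0.10, 0.10]])               # e.g., MobileNetV2
print(sugeno_fuse(scores, densities=np.array([1/3, 1/3, 1/3])))
```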
Result Analysis
The fuzzy logic-based ensemble works particularly well because, when reaching an ultimate decision on an image's class, the confidence of each classifier's prediction is taken into consideration for every sample when allocating weights to the predictions. Table 1 shows the results of an ensemble formed using three transfer learning models, demonstrating that the Sugeno integral greatly outperforms the others; the trainable ensemble approach is also very effective. Each model's accuracy, F1-score, sensitivity, and specificity were assessed, with the results shown in Table 1. Figures 8, 9, and 10 present the confusion matrices of SqueezeNet, DenseNet-201, and MobileNetV2 with the fuzzy ensemble approach. Table 2 compares the results of several transfer learning methodologies to the suggested CT imaging methodology.
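For reference, the per-class metrics reported in Table 1 can be derived from a 3 × 3 confusion matrix as sketched below; the matrix values used here are placeholders rather than the paper's actual results.

```python
# Derive accuracy, sensitivity (recall), specificity, and F1-score per class
# from a 3x3 confusion matrix (rows = true class, columns = predicted class).
import numpy as np

def per_class_metrics(cm):
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    tn = cm.sum() - tp - fp - fn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = tp.sum() / cm.sum()
    return accuracy, sensitivity, specificity, f1

# Placeholder confusion matrix for the classes COVID, pneumonia, normal
print(per_class_metrics([[190, 3, 2], [4, 188, 3], [1, 2, 192]]))
```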
Conclusion
For COVID-19 patient recognition using CT scans, this research provides an IoMT-based technique combined with fuzzy ensemble and transfer learning models. Early identification of pneumonia is critical for deciding the best course of therapy and preventing the illness from posing a life-threatening hazard to the patient. This work created an IoMT-based system that combines deep transfer learning models with a fuzzy ensemble technique to categorize CT images into three classes: normal, COVID-19 positive, and pneumonia. The suggested approach assembles the characteristics of three pre-trained models using deep transfer learning methods and the Sugeno integral. The accuracy rates of the recommended MobileNetV2 fused with the trainable ensemble and the Sugeno fuzzy ensemble model are 98.80% and 99.15%, respectively. In the future, this research might be expanded to enable the identification of a range of lung infections using CT scan images. The suggested model may also be modified to include image fusion methods in future studies. The implementation of IoMT may alleviate the constraints placed on healthcare systems, yet IoMT also has security and privacy flaws. Blockchain technologies, on the other hand, have the potential to improve the privacy and security of IoMT systems.
Data availability statement
None.
References
Awotunde, J.B., Jimoh, R.G., Matiluko, O.E., Gbadamosi, B., Ajamu, G.J.: Artificial intelligence and an edge-IoMT-based system for combating COVID-19 pandemic. In: Intelligent Interactive Multimedia Systems for e-Healthcare Applications 2022, pp. 191–214. Springer, Singapore. https://doi.org/10.1007/978-981-16-6542-4_11 (2022)
Yang, T., Gentile, M., Shen, C.F., Cheng, C.M.: Combining point-of-care diagnostics and internet of medical things (IoMT) to combat the COVID-19 pandemic. Diagnostics 10(4), 224 (2020). https://doi.org/10.3390/diagnostics10040224
Aman, A.H., Hassan, W.H., Sameen, S., Attarbashi, Z.S., Alizadeh, M., Latiff, L.A.: IoMT amid COVID-19 pandemic: application, architecture, technology, and security. J. Netw. Comput. Appl. 15(174), 102886 (2021). https://doi.org/10.1016/j.jnca.2020.102886
Fu, F., Lou, J., Xi, D., Bai, Y., Ma, G., Zhao, B., Liu, D., Bao, G., Lei, Z., Wang, M.: Chest computed tomography findings of coronavirus disease 2019 (COVID-19) pneumonia. Eur. Radiol. 30, 5489–5498 (2020). https://doi.org/10.1007/s00330-020-06920-8
Tahamtan, A., Ardebili, A.: Real-time RT-PCR in COVID-19 detection: issues affecting the results. Expert Rev. Mol. Diagn. 20(5), 453–454 (2020). https://doi.org/10.1080/14737159.2020.1757437
Wang, S., Zha, Y., Li, W., Wu, Q., Li, X., Niu, M., Wang, M., Qiu, X., Li, H., Yu, H., Gong, W.: A fully automatic deep learning system for COVID-19 diagnostic and prognostic analysis. Eur. Respir. J. (2020). https://doi.org/10.1183/13993003.00775-2020
Zhao, J., Zhang, Y., He, X., Xie, P.: Covid-CT-dataset: a CT scan dataset about covid-19, p. 490. arXiv:2003.13865. 2020. Accessed 20 Nov 2021
Zheng, C., Deng, X., Fu, Q., Zhou, Q., Feng, J., Ma, H., Liu, W., Wang, X.: Deep learning-based detection for COVID-19 from chest CT using weak label. MedRxiv (2020). https://doi.org/10.1101/2020.03.12.20027185
Xu, X., Jiang, X., Ma, C., Du, P., Li, X., Lv, S., Yu, L., Ni, Q., Chen, Y., Su, J., Lang, G.: A deep learning system to screen novel coronavirus disease 2019 pneumonia. Engineering 6(10), 1122–1129 (2020). https://doi.org/10.1016/j.eng.2020.04.010
Chen, J., Wu, L., Zhang, J., Zhang, L., Gong, D., Zhao, Y., Chen, Q., Huang, S., Yang, M., Yang, X., Hu, S.: Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography. Sci. Rep. 10(1), 1–1 (2020). https://doi.org/10.1038/s41598-020-76282-0
Li, L., Qin, L., Xu, Z., Yin, Y., Wang, X., Kong, B., Bai, J., Lu, Y., Fang, Z., Song, Q., Cao, K.: Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT. Radiology (2020). https://doi.org/10.1148/radiol.2020200905
Angelov, P., Almeida, S.E.: SARS-CoV-2 CT-scan dataset: a large dataset of real patients CT scans for SARS-CoV-2 identification. MedRxiv (2020). https://doi.org/10.1101/2020.04.24.20078584
Shah, V., Keniya, R., Shridharani, A., Punjabi, M., Shah, J., Mehendale, N.: Diagnosis of COVID-19 using CT scan images and deep learning techniques. Emerg. Radiol. 28(3), 497–505 (2021). https://doi.org/10.1007/s10140-020-01886-y
Perumal, V., Narayanan, V., Rajasekar, S.J.: Prediction of COVID-19 with computed tomography images using hybrid learning techniques. Dis. Markers 22, 2021 (2021). https://doi.org/10.1155/2021/5522729
Halder, A., Datta, B.: COVID-19 detection from lung CT-scan images using transfer learning approach. Mach. Learn. Sci. Technol. (2021). https://doi.org/10.1088/2632-2153/abf22c
Santa Cruz, J.F.: An ensemble approach for multi-stage transfer learning models for COVID-19 detection from chest CT scans. Intell. Based Med. 1(5), 100027 (2021). https://doi.org/10.1016/j.ibmed.2021.100027
Polsinelli, M., Cinque, L., Placidi, G.: A light CNN for detecting COVID-19 from CT scans of the chest. Pattern Recognit. Lett. 1(140), 95–100 (2020). https://doi.org/10.1016/j.patrec.2020.10.001
Yu, Z., Li, X., Sun, H., Wang, J., Zhao, T., Chen, H., Ma, Y., Zhu, S., Xie, Z.: Rapid identification of COVID-19 severity in CT scans through classification of deep features. Biomed. Eng. Online 19(1), 1–3 (2020). https://doi.org/10.1186/s12938-020-00807-x
Yan, T., Wong, P.K., Ren, H., Wang, H., Wang, J., Li, Y.: Automatic distinction between covid-19 and common pneumonia using multi-scale convolutional neural network on chest ct scans. Chaos Solitons Fractals 1(140), 110153 (2020). https://doi.org/10.1016/j.chaos.2020.110153
Krishnaswamy Rangarajan, A., Ramachandran, H.K.: A fused lightweight CNN model for the diagnosis of COVID-19 using CT scan images. Automatika 63(1), 171–184 (2022). https://doi.org/10.1080/00051144.2021.2014037
Alquzi, S., Alhichri, H., Bazi, Y.: Detection of COVID-19 using EfficientNet-B3 CNN and chest computed tomography images. In: International Conference on Innovative Computing and Communications 2022, pp. 365–373. Springer, Singapore. https://doi.org/10.1007/978-981-16-2594-7_30 (2022)
Biswas, S., Chatterjee, S., Majee, A., Sen, S., Schwenker, F., Sarkar, R.: Prediction of covid-19 from chest ct images using an ensemble of deep learning models. Appl. Sci. 11(15), 7004 (2021). https://doi.org/10.3390/app11157004
Kundu, R., Singh, P.K., Mirjalili, S., Sarkar, R.: COVID-19 detection from lung CT-Scans using a fuzzy integral-based CNN ensemble. Comput. Biol. Med. 1(138), 104895 (2021). https://doi.org/10.1016/j.compbiomed.2021.104895
Banerjee, A., Bhattacharya, R., Bhateja, V., Singh, P.K., Sarkar, R.: COFE-Net: an ensemble strategy for computer-aided detection for COVID-19. Measurement 1(187), 110289 (2022). https://doi.org/10.1016/j.measurement.2021.110289
Aversano, L., Bernardi, M.L., Cimitile, M., Pecori, R.: Deep neural networks ensemble to detect COVID-19 from CT scans. Pattern Recognit. 1(120), 108135 (2021). https://doi.org/10.1016/j.patcog.2021.108135
Alshazly, H., Linse, C., Barth, E., Martinetz, T.: Explainable covid-19 detection using chest ct scans and deep learning. Sensors 21(2), 455 (2021). https://doi.org/10.3390/s21020455
Serte, S., Demirel, H.: Deep learning for diagnosis of COVID-19 using 3D CT scans. Comput. Biol. Med. 1(132), 104306 (2021). https://doi.org/10.1016/j.compbiomed.2021.104306
He, X., Wang, S., Shi, S., Chu, X., Tang, J., Liu, X., Yan, C., Zhang, J., Ding, G.: Benchmarking deep learning models and automated model design for covid-19 detection with chest CT scans. medRxiv. (2020). https://doi.org/10.1101/2020.06.08.20125963
Saha, P., Mukherjee, D., Singh, P.K., Ahmadian, A., Ferrara, M., Sarkar, R.: GraphCovidNet: a graph neural network based model for detecting COVID-19 from CT scans and X-rays of chest. Sci. Rep. (2021). https://doi.org/10.1038/s41598-021-87523-1
Kundu, R., Basak, H., Singh, P.K., Ahmadian, A., Ferrara, M., Sarkar, R.: Fuzzy rank-based fusion of CNN models using Gompertz function for screening COVID-19 CT-scans. Sci. Rep. 11(1), 1–2 (2021). https://doi.org/10.1038/s41598-021-93658-y
Basu, A., Sheikh, K.H., Cuevas, E., Sarkar, R.: COVID-19 detection from CT scans using a two-stage framework. Expert Syst. Appl. 1, 116377 (2022). https://doi.org/10.1016/j.eswa.2021.116377
Shaik, N.S., Cherukuri, T.K.: Transfer learning based novel ensemble classifier for COVID-19 detection from chest CT-scans. Comput. Biol. Med. 1(141), 105127 (2022). https://doi.org/10.1016/j.compbiomed.2021.105127
Pavlov, V.A., Shariaty, F., Orooji, M., Velichko, E.N.: Application of deep learning techniques for detection of COVID-19 using lung CT scans: model development and validation. In: International Youth Conference on Electronics, Telecommunications and Information Technologies 2022, pp. 85–96. Springer, Cham. https://doi.org/10.1007/978-3-030-81119-8_9 (2022)
Gaur, P., Malaviya, V., Gupta, A., Bhatia, G., Pachori, R.B., Sharma, D.: COVID-19 disease identification from chest CT images using empirical wavelet transformation and transfer learning. Biomed. Signal Process. Control 1(71), 103076 (2022). https://doi.org/10.1016/j.bspc.2021.103076
Kanwal, S., Khan, F., Alamri, S., Dashtipur, K., Gogate, M.: COVID-opt-aiNet: a clinical decision support system for COVID-19 detection. Int. J. Imaging Syst. Technol. (2022). https://doi.org/10.1002/ima.22695
Singh, V.K., Kolekar, M.H.: Deep learning empowered COVID-19 diagnosis using chest CT scan images for collaborative edge-cloud computing platform. Multim. Tools Appl. 81(1), 3 (2022). https://doi.org/10.1007/s11042-021-11158-7
Dietterich, T.G.: Ensemble methods in machine learning. In: International Workshop on Multiple Classifier Systems, pp. 1–15. Springer, Berlin. https://doi.org/10.1007/3-540-45014-9_1 (2000)
Grabisch, M., Murofushi, T., Sugeno, M.: Fuzzy Measures and Integrals. Theory and Applications. Studies in Fuzziness. Physica Verlag (2000)
Iandola, F.N., Han, S., Moskewicz, M.W., Ashraf, K., Dally, W.J., Keutzer, K.: SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and < 0.5 MB model size. arXiv:1602.07360 (2016)
Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708. https://doi.org/10.1109/CVPR.2017.243 (2017)
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.C.: Mobilenetv2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4510–4520 (2018)
Bebis, G., Georgiopoulos, M.: Feed-forward neural networks. IEEE Potentials 13(4), 27–31 (1994). https://doi.org/10.1109/45.329294
Wilson, E., Tufts, D.W.: Multilayer perceptron design algorithm. In: Proceedings of IEEE Workshop on Neural Networks for Signal Processing, pp. 61–68. IEEE. https://doi.org/10.1109/NNSP.1994.366063 (1994)
Tahani, H., Keller, J.M.: Information fusion in computer vision using the fuzzy integral. IEEE Trans. Syst. Man Cybern. 20(3), 733–741 (1990). https://doi.org/10.1109/21.57289
Sugeno, M.: Fuzzy measures and fuzzy integrals—a survey. In: Readings in Fuzzy Sets for Intelligent Systems, pp. 251–257. Morgan Kaufmann. https://doi.org/10.1016/B978-1-4832-1450-4.50027-4 (1993)
El Gannour, O., Hamida, S., Cherradi, B., Al-Sarem, M., Raihani, A., Saeed, F., Hadwan, M.: Concatenation of pre-trained convolutional neural networks for enhanced COVID-19 screening using transfer learning technique. Electronics 11(1), 103 (2021). https://doi.org/10.3390/electronics11010103
Funding
None.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Informed consent
None.