An Automatic Dermatology Detection System Based On Deep Learning and Computer Vision
ABSTRACT Automatic medical diagnosis has gained significant attention among researchers. Differentiating between dermatological diseases is pivotal in clinical decision-making, as it provides prognostic and predictive information and guides treatment strategies. This paper proposes a dermatology
detection system based on deep learning (DL) and object recognition. The proposed model consists of three
phases: Data preprocessing, data augmentation, and classification with localization. In the data preprocessing
phase, we apply various operations such as color transformation, resizing, normalization, and labeling to
prepare the input image for enrollment in our DL models. The data augmentation phase is carried out on the
input images using the convolutional generative adversarial network algorithm. In the third phase, YOLO-V5
is used to classify and localize objects. The dataset is carefully collected with the assistance of medical
specialists to ensure its accuracy. The proposed models are evaluated and compared using various metrics.
Our empirical results demonstrate that the proposed model outperforms state-of-the-art models in terms of
accuracy. Our proposed methodology offers significant improvements in detecting vitiligo and melanoma
compared to recent techniques.
of medical image analysis. Because dermatologists have varying experience levels, there are also discrepancies among observers. AI decision support and computer-aided diagnosis systems have been applied to tasks such as disease diagnosis [17], thoracic illness diagnosis [18], [19], and skin lesion detection [20]. Skin diseases have a noticeable impact on a patient's psychological health and can lead to a loss of confidence and even depression. Dermatologists must have adequate knowledge of AI concepts because dermatological conditions, with their abundant clinical and dermatoscopic images and data, have the potential to be the next big thing in the application of AI in medical fields. Skin diseases are among the most common diseases that affect people all over the world. Dermatological diseases refer to conditions such as melanoma, basal cell carcinoma, squamous cell carcinoma, and intraepithelial carcinoma. ML is a powerful tool that can address real-world issues by analyzing data, and it has been a widely discussed topic within the field of AI for some time [7], [21]. Some skin diseases can even be fatal. A variety of skin diseases may develop in the human body as a result of causes such as excessive sun exposure, water loss, sebum production, or hereditary inheritance.

Pigmented skin disease, in which melanocytes and melanin are produced abnormally, is quite prevalent. Some pigmented skin diseases, such as freckles and perioral streaks, can be extremely bothersome in people's daily lives, even if they are not life-threatening. Visual judgment based on expertise and pathological diagnosis is the primary approach to diagnosing skin diseases in the clinic [22], [23], [24]. Skin condition diagnosis is challenging: skin disease can be identified visually using a variety of cues, including body size distribution, color, scaling, and lesion pattern, and when the individual components are analyzed separately, the identification process can become complex [25]. As a result, it is crucial to detect skin diseases early and prevent them from spreading. DL showcases its powerful knowledge discovery capabilities to boost diagnostic performance and clinical workflow effectiveness in organ and structure segmentation applications and in image quality enhancement. Therefore, a thorough analysis of DL applications in medical images is valuable and will help develop knowledge-based systems in the medical industry. Multiple medical image analysis applications rely on image registration, and with the advent of DL, algorithmic performance for many computer vision applications, including image registration, has improved significantly in recent years. Medical image registration methods using DL have exploded in popularity over the past several years; therefore, an in-depth study of the latest algorithms in this area is pertinent and required [26]. Detecting skin diseases and their various growth stages in complex and changing surroundings cannot be done using conventional approaches.

The accuracy and real-time performance of DL approaches are likewise subject to trade-offs. To address these issues, this work adopts the ''You Only Look Once'' algorithm, specifically YOLO-V5, for real-time prediction of skin diseases. The YOLO algorithm was first introduced in 2015 and offered a new approach to object detection by treating it as a regression problem that could be solved using a single neural network. This has resulted in significant advancements in the field of object detection in recent years, incorporating many of the most groundbreaking ideas from computer vision research. The contributions of this paper can be summarized in the following points.
• Prepare a dermatoscopic dataset that is carefully collected and revised by medical specialists.
• Increase the number of images using the Convolutional Generative Adversarial Network (CGAN) data augmentation technique.
• Deploy DL-based object recognition models to detect the disease and separate it from the background.
• Evaluate the proposed models using standard evaluation metrics.
• Contrast the proposed models with the current state-of-the-art.

The structure of this paper comprises several sections. Related work is presented in Section II, while Section III outlines the proposed solution's components. The model's results are discussed in further detail in Section IV, and finally, Section V includes the conclusion and future work.

II. RELATED WORK
Dermatological disease detection is critical in disease prevention and diagnosis. An effective automatic skin disease detection system requires a reliable feature extraction mechanism, and this process is critical when using diagnostic systems to detect diseases. Several methods for extracting and analyzing various features from skin lesion images have been proposed in the last decade. However, results are not always consistent among observers because of readers' varying experience levels. Several researchers have attempted to recognize skin diseases automatically to address this issue, and several methods for identifying and classifying skin diseases have been developed and tested. The following subsections discuss researchers' achievements in advancing the state of the art using ML and DL methods.

A. DERMATOLOGY DETECTION AND MACHINE LEARNING TECHNIQUES
Over the past few years, ML algorithms have become increasingly popular as a computational tool for clinical diagnosis, particularly in classifying pigmented skin lesions. Dalila et al. [27] provided an automated system that used four types of features to describe malignant lesions, including texture, relative color, and geometrical qualities, from which the pertinent ones are chosen, along with an ant-colony-based segmentation method.
They utilized an artificial neural network (ANN) and K-Nearest Neighbor as malignant lesion classifiers. They tested the proposed segmentation algorithm by extracting and comparing the most relevant features that describe melanomas. Their automated system examined 172 dermoscopic images, 88 malignant melanomas and 84 benign lesions. The final results demonstrated that a better classification was achieved and outperformed the manual one. For the K-Nearest Neighbor classifier, the recorded accuracy was 85.22% on the tested images against 87.50% for manual masks, while the neural network classifier correctly classified 93.60% of tested images against manual masks with an accuracy of 86.60%.

Adjed et al. [28] proposed a feature extraction technique composed of two phases. The first was based on wavelet and curvelet transforms for extracting structural features, and the other phase depended on local binary patterns for extracting textural features. A Support Vector Machine (SVM) was utilized to categorize the collected features. A dermoscopy database of 200 images was used, of which 160 were non-melanoma and 40 were melanoma. The accuracy rate of the validated results was 86.07%, with a specificity of 93.25% and a sensitivity of 78.93%.

Tajeddin et al. [29] proposed a melanoma lesion classification approach for dermoscopic images. They used contour propagation to start lesion segmentation. Lesions were mapped to log-polar space using Daugman's transformation based on the surrounding area to extract features. They used two approaches to test the effectiveness of the new characteristics: a linear SVM and a RUSBoost classifier to discriminate between melanoma and nevus. The proposed approach was applied to 120 images with a 10-fold cross-validation framework using only four characteristics with the linear SVM classifier; final results of accuracy, sensitivity, and specificity of 99.2%, 97.5%, and 100%, respectively, were recorded, while the second classification system, which included eight optimally selected features in addition to the RUSBoost classifier, was tested on 200 dermoscopic images, with sensitivity, specificity, and accuracy of 95%.

Ahammed et al. [30] introduced a skin disease diagnosis model that automatically segments affected lesions. They applied three ML classifiers: Decision Tree (DT), SVM, and K-Nearest Neighbor (k-NN). Two datasets, namely ISIC2019 and HAM10000, were used to evaluate the proposed model. The accuracy achieved for the ISIC2019 dataset was 94%, 95%, and 93% with the KNN, SVM, and DT classifiers, respectively. On the other hand, for the HAM10000 dataset, the accuracy achieved was 95%, 97%, and 95% using the KNN, SVM, and DT classifiers, respectively.

Priyadharshini et al. [31] proposed a classification model that utilized Principal Component Analysis (PCA) for feature selection, Fuzzy C-Means (FCM) for skin image segmentation, and ELM-TLBO, a combination of an Extreme Learning Machine (ELM) and Teaching-Learning-Based Optimization (TLBO), for classification. The model achieved a classification accuracy of 93.18% in detecting melanoma skin cancer, which is significantly better than the accuracy achieved by SVM, a Convolutional Neural Network (CNN), and Logistic Regression, which are 78.11%, 83.25%, and 78.21%, respectively.

B. DERMATOLOGY DETECTION AND DEEP LEARNING
DL models outperform ML models as they automatically define and select problem features, and more accuracy is attained if they are trained with more data. González-Díaz [32] introduced a CNN-based skin lesion CAD system called DermaKNet. The author added a modulation block to the outputs of the convolutional res5c layer before building the proposed CNN on top of ResNet50. AVG and Polar AVG, two pooling layers, were developed simultaneously. The final stage of the CNN used three fully connected layers. The asymmetry block comes before the third fully connected layer because of the numerous ways melanoma grows; using the asymmetry block, various melanoma development mechanisms were discovered.

Amin et al. [33] introduced a model composed of three phases. In the first phase, images were resized, and only the luminance (L) channel was selected. The biorthogonal 2-D wavelet transform was applied in the second phase, and the Otsu algorithm was utilized for skin lesion segmentation. Finally, PCA and the pre-trained AlexNet and VGG16 networks were utilized for deep feature extraction.

Pezhman Pour et al. [34] proposed a model built from scratch based on CNNs for skin lesion and dermoscopic feature segmentation. They increased the input depth to the convolutional layers using the CIELAB color space together with the RGB color channels of the original dataset images, instead of using traditional augmentation or transfer learning from pre-trained models. The proposed model was applied to two datasets provided by The International Skin Imaging Collaboration (ISIC), ISIC 2016 and ISIC 2017. The Jaccard index shows a 2% increase, and accuracy improved by 7% in comparison to the results of ISIC 2017. The model also showed a 1% increase in the Jaccard index and a 6% improvement in sensitivity for the ISIC 2016 challenge.

Kassem et al. [35] utilized GoogleNet and pre-trained models to develop a DL algorithm that can accurately categorize various types of skin lesions from the ISIC 2019 dataset. The model successfully classified eight different classes of skin lesions with high classification accuracy, sensitivity, specificity, and precision, measured at 94.92%, 79.8%, 97%, and 80.36%, respectively.

Srinivasu et al. [36] proposed a model for classifying skin illnesses based on MobileNet V2 and Long Short-Term Memory (LSTM). A grey-level co-occurrence matrix was utilized to evaluate the progression of pathological growth. The effectiveness of the proposed model was then tested using the HAM10000 dataset, and the final outcome showed an accuracy rate of 85%.

In a recent study by Zhou et al. [37], a DL model was proposed for skin disease classification. The model combines preprocessing, data augmentation, and residual networks.
Dermatologists manually annotated images, after which background information was masked with unique colors. Sample-balanced training and testing data were generated for augmentation. Finally, the DL networks were trained to compare the performance of classifiers for different background information. The model was evaluated using a dataset with seven types of skin diseases from the Department of Dermatology, Xiangya Hospital. The results showed that the classifier trained on the green background outperformed the other backgrounds, with a precision of 81.98% and an F1-score of 82.41%.

III. PROPOSED APPROACH
The proposed model consists of three phases: data preprocessing, data augmentation, and classification with localization. In the data preprocessing phase, we apply various operations such as color transformation, resizing, normalization, and labeling to prepare the input image for enrollment in our DL model. This phase also includes splitting the dataset into training, validation, and test subsets; for validation, we deployed the k-fold technique with k = 10.
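To make the preprocessing phase concrete, the following minimal sketch (our own illustration, not the authors' released code) applies the operations described above, namely color-space transformation, resizing, and normalization, to a single image with OpenCV; the 640 by 640 target size is an assumption chosen only to match a common YOLO-V5 input resolution.

import cv2
import numpy as np

def preprocess_image(path, size=(640, 640)):
    """Color transformation, resizing, and normalization for one input image."""
    img = cv2.imread(path)                      # image loaded by OpenCV in BGR order
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # color transformation to RGB
    img = cv2.resize(img, size)                 # resize to the network input size
    img = img.astype(np.float32) / 255.0        # normalize pixel values to [0, 1]
    return img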
classify and localize objects. The proposed DL model is based
on cascaded YOLO models which deploys three YOLO-V5
models to obtain the optimal performance. The main idea is to
obtain the best weights from a YOLO model and feed it into
the next one. This approach ensures efficient performance
for both classification and localization. Figure 1 depicts the
proposed model. FIGURE 2. Proposed data augmentation method.
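The cascade idea, training one YOLO-V5 stage and then initializing the next stage from its best checkpoint, can be sketched as below. This is a hypothetical illustration, not the authors' training script: it assumes the Ultralytics YOLOv5 repository's train.py command-line interface and its convention of saving the best weights to runs/train/<name>/weights/best.pt, and the stage names, dataset configuration file, and epoch count are placeholders.

import subprocess

# Sketch of the cascaded training loop: each YOLO-V5 stage starts from the
# best weights produced by the previous stage (assumes yolov5/train.py).
weights = "yolov5s.pt"                      # initial pre-trained checkpoint (assumption)
for stage in ("stage1", "stage2", "stage3"):
    subprocess.run([
        "python", "train.py",
        "--img", "640",
        "--data", "dermatology.yaml",       # hypothetical dataset config file
        "--epochs", "200",
        "--weights", weights,               # feed the previous best weights forward
        "--name", stage,
    ], check=True)
    weights = f"runs/train/{stage}/weights/best.pt"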
A. CONVOLUTIONAL GENERATIVE ADVERSARIAL NETWORK DATA AUGMENTATION
This study proposes a data augmentation method based on a generative adversarial network, which is applied to the captured skin images, as shown in Figure 2. This algorithm consists of two stages. The first is the generator stage, in which the generator network generates samples. The second is the discriminator stage, which evaluates the generated images. The deployed CGAN algorithm generates multiple input images to make them compatible with the nature of DL models. However, CGAN data augmentation is used to identify dermatological cases from the obtained images. The input images are transformed into feature maps by the generator network, and the discriminator uses a classification layer to distinguish between genuine and produced images based on these maps. On the other hand, when compared to other studies, our approach limits the application of GANs to the data augmentation stage, eliminating the need for classification, since the objective is to produce images rather than make a decision. The CGAN generator consists of five convolutional transposition (Conv2D Transpose) layers; the Conv1, Conv2, Conv3, Conv4, and Conv5 layers have 8, 4, 2, 1, and 1, respectively. The input images are initially fed into a de-noising layer that is fully connected, with a size of 8864. Then, they undergo a sequence of Conv2D Transpose layers and batch normalization (BN) layers to generate a feature map of the input images.

FIGURE 2. Proposed data augmentation method.
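A minimal PyTorch sketch of a generator in this style is given below. It is our own illustration under stated assumptions: five ConvTranspose2d blocks with batch normalization preceded by a fully connected input layer, with channel sizes, strides, and the output resolution chosen purely for illustration, since the exact meaning of the ''8, 4, 2, 1, 1'' values and of the 8864-unit layer is not fully specified in the text.

import torch
import torch.nn as nn

class Generator(nn.Module):
    """Illustrative DCGAN-style generator: one fully connected layer followed by
    five ConvTranspose2d blocks with BatchNorm (all sizes are assumptions)."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 4 * 4)   # dense input layer (assumed size)
        def block(c_in, c_out):
            return nn.Sequential(
                nn.ConvTranspose2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
            )
        self.deconv = nn.Sequential(
            block(256, 128), block(128, 64), block(64, 32), block(32, 16),  # Conv1-Conv4
            nn.ConvTranspose2d(16, 3, kernel_size=4, stride=2, padding=1),  # Conv5 -> RGB
            nn.Tanh(),
        )
    def forward(self, z):
        x = self.fc(z).view(-1, 256, 4, 4)   # reshape dense output to a feature map
        return self.deconv(x)                # upsample 4x4 -> 128x128 synthetic image

# usage: fake_batch = Generator()(torch.randn(8, 100))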
B. YOLO-5 ALGORITHM
In 2016, Joseph Redmon and his coworkers introduced the You Only Look Once (YOLO) approach [38], which used a single neural network to perform all of the processing necessary for recognizing an object. It reframes object detection as a single regression problem, going directly from picture pixels to box coordinates and class probabilities. This integrated model simultaneously predicts multiple bounding boxes and the class probabilities for the objects covered by those boxes. The YOLO algorithm showed speed and accuracy in detecting and calculating object coordinates compared to the top methods at its release. As previously stated, an advanced YOLO-V5 detector is utilized in the proposed solution to resolve the problem in an automated manner. YOLO-V5 was created by combining ideas from YOLO-V1 through YOLO-V4, and it excelled on the Pascal VOC (visual object classes) and Microsoft COCO (common objects in context) official object detection datasets. As shown in Figure 3, YOLO-V5's network architecture consists of three essential elements: the backbone, the neck, and the head [39]. The first part of the process is called the backbone, which is responsible for collecting important image features. YOLO-V5 integrates cross-stage partial networks into Darknet, resulting in a new backbone called CSPDarknet. Compared to YOLO-v3's Darknet53, CSPDarknet is much more efficient.
A. DESCRIPTIONS OF DATASETS
The datasets applied in the current research are called Melanoma and Vitiligo [41]. The melanoma dataset contains all the PLCO research data accessible for melanoma cancer incidence and death analyses. Vitiligo is a skin disease that causes blotches of skin color to fade. The datasets (melanoma and vitiligo) used and their characteristics are the focus of this work. To structure the model, 70% of each dataset was selected as training data; the validation and testing data consisted of 15% each. The classification process was run across 200 epochs, and the final results were obtained by averaging all the outcomes.

Table 1 shows the dataset description. In the validation stage, there are a total of 387 images, with 325 belonging to the Melanoma dataset and 62 to the Vitiligo dataset. The training stage encompasses 1797 images, comprising 1,508 melanoma images and 287 vitiligo images. In the test stage, there are 387 images, with 325 categorized as melanoma and 62 as vitiligo. Consequently, the grand total across all stages amounts to 2569 images. Specifically, there are 2158 melanoma images distributed across the stages (325 in validation, 1508 in training, and 325 in test) and 411 vitiligo images (62 in validation, 287 in training, and 62 in test). The dataset is available at https://github.com/Mohamed-Elredeny/An-Automatic-Dermatology-Detection-System-Based-on-Deep-Learning-and-Computer-Vision.git
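As a concrete illustration of the 70%/15%/15% split and the 10-fold validation mentioned earlier, the short sketch below uses scikit-learn on a list of image paths and labels; it is a sketch of the procedure rather than the authors' exact script, and the stratification and random seed are our own assumptions.

from sklearn.model_selection import train_test_split, KFold

def split_dataset(image_paths, labels, seed=42):
    # 70% training, then split the remaining 30% evenly into validation and test
    x_train, x_tmp, y_train, y_tmp = train_test_split(
        image_paths, labels, test_size=0.30, stratify=labels, random_state=seed)
    x_val, x_test, y_val, y_test = train_test_split(
        x_tmp, y_tmp, test_size=0.50, stratify=y_tmp, random_state=seed)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)

# 10-fold validation over the training portion
kfold = KFold(n_splits=10, shuffle=True, random_state=42)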
B. WORKING ENVIRONMENT
The simulation results were generated using powerful hardware, specifically an Intel Core i7 CPU, 64 GB of RAM, and an NVIDIA GTX 1050i GPU. Additionally, Python and PyTorch were utilized as programming tools to carry out the necessary programming tasks. Table 2 presents the recommended hyperparameters for the proposed models, while other standard parameter options, including the loss function and maximum number of epochs, are also available. The chosen optimizer for this task is Adam, and the loss function that was applied is shown in Table 2.
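As a small example of how the named optimizer could be configured in PyTorch, the snippet below builds an Adam optimizer for a model; the learning rate is a placeholder assumption, since the actual hyperparameter values belong to Table 2 and are not reproduced here.

import torch

def build_optimizer(model, lr=1e-3):
    # Adam optimizer as named in the text; lr is a placeholder, not the paper's value
    return torch.optim.Adam(model.parameters(), lr=lr)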
C. EVALUATION MEASURES
In order to increase the statistical significance of the experimental results, the proposed model's performance is evaluated using the standard metrics listed below:
1) Accuracy: the ratio of correctly predicted samples, i.e., true positives (TP) and true negatives (TN), to the total number of samples; it is computed using Eq. (1).
2) Precision: the ratio indicating how many of the samples predicted as positive are actually positive, evaluated using Eq. (2), where FP (false positive) is the number of samples that are predicted as positive but are actually negative.
3) Recall: measures the correctly predicted positive samples out of all actual positive samples, as formulated in Eq. (3), where FN (false negative) refers to the number of samples that are predicted as negative but are actually positive.
Additionally, ROC-AUC, which stands for the ''Area Under the Curve'' of the Receiver Operating Characteristic, is used to represent the model's performance graphically as the plot of the true positive rate (recall) against the false positive rate at different threshold settings [42].

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (1)

Precision = TP / (TP + FP)    (2)

Recall = TP / (TP + FN)    (3)
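The three metrics in Eqs. (1)-(3) can be computed directly from the confusion-matrix counts, as in the short sketch below (our own helper, written only to mirror the definitions above).

def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, and recall from confusion-matrix counts (Eqs. 1-3)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall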
D. RESULT DISCUSSION
This paper proposes a DL method for skin disease detection. This study identifies two types of skin diseases (melanoma and vitiligo) and proposes different models of the YOLO-V5 technique. The objective of the proposed method is to distinguish the infected area from the background skin. The proposed methods have been trained, validated, and tested using the evaluation metrics. Figures 4, 5 and 6 show the training curves for the proposed techniques, which include the values of box, objectness, classification, precision, and recall during the training phase. It can be observed that the performance increases along the training process. Furthermore, the proposed techniques are tested and evaluated using the evaluation metrics. Figure 7 shows the proposed techniques' confusion matrix, which comprises the normalized values of the detected skin diseases. It can be observed that they achieved a detection accuracy of 90% and 97% for melanoma and vitiligo skin diseases, respectively. Moreover, the figure shows examples of the resulting images with a detection contour to visualize the ability of the proposed methods to perform in real-life applications. For IoU thresholds between 0.5 (50%) and 0.95 (95%), we used several measures to assess the model's performance, including precision, recall, and mAP (mean average precision). The graphs of the metric curves as training advances are shown in Figure 8. The proposed YOLO-V5 model 1 had a validation precision score of 0.887, a recall score of 0.927, and a mAP of 0.877 over the melanoma and vitiligo classes. The results demonstrate the accuracy of our method in correctly predicting the signs performed in various environments. In addition, the proposed YOLO-V5 model 2 had a validation precision score of 0.916, a recall score of 0.887, and a mAP of 0.892 over the melanoma and vitiligo classes. With these results, it is clear that our approach is highly effective in accurately predicting signs performed in various environments. Notably, our YOLO-V5 model 3 achieved a validation precision score of 0.877, a recall score of 0.99, and a mAP of 0.935 over the melanoma and vitiligo classes. These scores demonstrate the reliability and accuracy of our approach, further supporting its potential value in real-world applications. To ensure accurate performance in different scenarios, it is important to evaluate certain parameters. One such parameter is accuracy, which measures how often a model predicts the correct outcome and can be calculated by dividing the classifier's correct predictions by the total number of predictions. The precision-recall curve demonstrates how precision and recall are related at various threshold values. A greater area beneath the curve suggests both high precision and high recall, where high precision implies a low false positive rate and high recall implies a low false negative rate.
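For completeness, the intersection-over-union (IoU) values referenced above (0.5 to 0.95) compare a predicted box with a ground-truth box, and mAP@0.5:0.95 averages the average precision over that range of IoU thresholds. A minimal sketch of the IoU computation for axis-aligned boxes in (x1, y1, x2, y2) format follows; it is illustrative only and not taken from the authors' code.

def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)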
E. COMPARISON WITH THE STATE-OF-THE-ART MODELS
Table 3 compares the proposed models with recent models, using various datasets, for detecting melanoma and vitiligo. The proposed YOLO-V5 model ranked first in overall performance accuracy. It is important to note that the choice of YOLO-V5 depends on various factors, including the DL architecture and the integrated cross-stage partial network: the CSPDarknet backbone built from Darknet improves the proposed model's speed and accuracy.

V. CONCLUSION AND FUTURE WORK
Distinguishing between dermatological diseases is key in clinical decision-making, as it provides prognostic and predictive information and treatment strategies. This paper proposes a dermatology detection system based on DL and object recognition. The suggested approach contains three stages: data preprocessing, data augmentation, and classification with localization. In the first stage, different procedures, such as color transformation, resizing, normalization, and labeling, were applied to prepare the input image for enrollment in our DL models. The data augmentation stage is implemented on the input images using the convolutional generative adversarial network algorithm. In the third stage, YOLO-V5 is used to classify and localize objects. The dataset was carefully collected with the assistance of medical specialists to ensure its accuracy. The proposed models were assessed and compared using various metrics. Our empirical results demonstrated that the suggested model surpasses state-of-the-art methods in accuracy, and the suggested method offered considerable advancements in detecting vitiligo and melanoma compared to recent approaches. Overall, the proposed model makes substantial progress in the early detection of dermatological diseases through image data analysis, potentially impacting the medical field positively. With further refinement, validation, and interpretability enhancements, it could become a valuable tool in supporting healthcare professionals in dermatology disease classification and in advancing medical research.

REFERENCES
[1] X. Tian, H. Tang, L. Cheng, Z. Liao, Y. Li, J. He, P. Ren, M. You, and Z. Pang, ''Evaluation system framework of artificial intelligence applications in medical diagnosis and treatment,'' Proc. Comput. Sci., vol. 214, pp. 495–502, Jan. 2022.
[2] J. Futoma, M. Simons, T. Panch, F. Doshi-Velez, and L. A. Celi, ''The myth of generalisability in clinical research and machine learning in health care,'' Lancet Digit. Health, vol. 2, no. 9, pp. e489–e492, Sep. 2020.
[3] N. A. Mahoto, A. Shaikh, A. Sulaiman, M. S. A. Reshan, A. Rajab, and K. Rajab, ''A machine learning based data modeling for medical diagnosis,'' Biomed. Signal Process. Control, vol. 81, Mar. 2023, Art. no. 104481.
[4] C. Felmingham et al., ''Improving skin cancer management with artificial intelligence: A pre-post intervention trial of an artificial intelligence system used as a diagnostic aid for skin cancer management in a real-world specialist dermatology setting,'' J. Amer. Acad. Dermatol., vol. 88, no. 5, pp. 1138–1142, May 2023.
[5] S. Patel, J. V. Wang, K. Motaparthi, and J. B. Lee, ''Artificial intelligence in dermatology for the clinician,'' Clinics Dermatol., vol. 39, no. 4, pp. 667–672, Jul. 2021.
[6] A. Voulodimos, N. Doulamis, A. Doulamis, and E. Protopapadakis, ''Deep learning for computer vision: A brief review,'' Comput. Intell. Neurosci., vol. 2018, pp. 1–13, Feb. 2018.
[7] C. Shen, D. Nguyen, Z. Zhou, S. B. Jiang, B. Dong, and X. Jia, ''An introduction to deep learning in medical physics: Advantages, potential, and challenges,'' Phys. Med. Biol., vol. 65, no. 5, Mar. 2020, Art. no. 05TR01.
[8] R. Kop and F. Carroll, ''Cloud computing and creativity: Learning on a massive open online course,'' Eur. J. Open, Distance E-Learn., vol. 14, no. 2, pp. 1–10, Dec. 2011.
[9] Md. I. Iqbal, Md. S. H. Mukta, A. R. Hasan, and S. Islam, ''A dynamic weighted tabular method for convolutional neural networks,'' IEEE Access, vol. 10, pp. 134183–134198, 2022.
[10] G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. Van Der Laak, B. Van Ginneken, and C. I. Sánchez, ''A survey on deep learning in medical image analysis,'' Med. Image Anal., vol. 42, pp. 60–88, Dec. 2017.
[11] A. I. Khan, S. M. K. Quadri, S. Banday, and J. Latief Shah, ''Deep diagnosis: A real-time apple leaf disease detection system based on deep learning,'' Comput. Electron. Agricult., vol. 198, Jul. 2022, Art. no. 107093.
[12] L. Zhang et al., ''Study design of deep learning based automatic detection of cerebrovascular diseases on medical imaging: A position paper from Chinese association of radiologists,'' Intell. Med., vol. 2, no. 4, pp. 221–229, Nov. 2022.
[13] T. B. Chandra, K. Verma, B. K. Singh, D. Jain, and S. S. Netam, ''Automatic detection of tuberculosis related abnormalities in chest X-ray images using hierarchical feature extraction scheme,'' Expert Syst. Appl., vol. 158, Nov. 2020, Art. no. 113514.
[14] T. Zhou, Q. Cheng, H. Lu, Q. Li, X. Zhang, and S. Qiu, ''Deep learning methods for medical image fusion: A review,'' Comput. Biol. Med., vol. 160, Jun. 2023, Art. no. 106959.
[15] F. Yousaf, S. Iqbal, N. Fatima, T. Kousar, and M. S. M. Rahim, ''Multi-class disease detection using deep learning and human brain medical imaging,'' Biomed. Signal Process. Control, vol. 85, Aug. 2023, Art. no. 104875.
[16] G. Ramkumar, J. Seetha, R. Priyadarshini, M. Gopila, and G. Saranya, ''IoT-based patient monitoring system for predicting heart disease using deep learning,'' Measurement, vol. 218, Aug. 2023, Art. no. 113235.
[17] J.-H. Han, ''Artificial intelligence in eye disease: Recent developments, applications, and surveys,'' Diagnostics, vol. 12, no. 8, p. 1927, Aug. 2022.
[18] W. Zhang, J. Zhong, S. Yang, Z. Gao, J. Hu, Y. Chen, and Z. Yi, ''Automated identification and grading system of diabetic retinopathy using deep neural networks,'' Knowl.-Based Syst., vol. 175, pp. 12–25, Jul. 2019.
[19] Q. Alajmi and A. Sadiq, ''What should be done to achieve greater use of cloud computing by higher education institutions,'' in Proc. IEEE 7th Annu. Inf. Technol., Electron. Mobile Commun. Conf. (IEMCON), Oct. 2016, pp. 1–5.
[20] T. Y. Tan, L. Zhang, and C. P. Lim, ''Adaptive melanoma diagnosis using evolving clustering, ensemble and deep neural networks,'' Knowl.-Based Syst., vol. 187, Jan. 2020, Art. no. 104807.
[21] K. P. Smith and J. E. Kirby, ''Image analysis and artificial intelligence in infectious disease diagnostics,'' Clin. Microbiol. Infection, vol. 26, no. 10, pp. 1318–1323, Oct. 2020.
[22] K. Sugimoto, Y. Kon, S. Lee, and Y. Okada, ''Detection and localization of myocardial infarction based on a convolutional autoencoder,'' Knowl.-Based Syst., vol. 178, pp. 123–131, Aug. 2019.
[23] Y. Tian and S. Fu, ''A descriptive framework for the field of deep learning applications in medical images,'' Knowl.-Based Syst., vol. 210, Dec. 2020, Art. no. 106445.
[24] K. Gao, Q. Zhang, and H. Wang, ''Convolutional neural networks towards diagnosis of dermatosis,'' J. Phys., Conf. Ser., vol. 1237, no. 3, Jun. 2019, Art. no. 032057.
[25] A. Tavanaei, M. Ghodrati, S. R. Kheradpisheh, T. Masquelier, and A. Maida, ''Deep learning in spiking neural networks,'' Neural Netw., vol. 111, pp. 47–63, Mar. 2019.
[26] X. Chen, A. Diaz-Pinto, N. Ravikumar, and A. Frangi, ''Deep learning in medical image registration,'' Prog. Biomed. Eng., vol. 3, Dec. 2020, Art. no. 012003.
[27] F. Dalila, A. Zohra, K. Reda, and C. Hocine, ''Segmentation and classification of melanoma and benign skin lesions,'' Optik, vol. 140, pp. 749–761, Jul. 2017.
[28] F. Adjed, S. J. Safdar Gardezi, F. Ababsa, I. Faye, and S. C. Dass, ''Fusion of structural and textural features for melanoma recognition,'' IET Comput. Vis., vol. 12, no. 2, pp. 185–195, Nov. 2017.
[29] N. Z. Tajeddin and B. M. Asl, ''Melanoma recognition in dermoscopy images using lesion's peripheral region information,'' Comput. Methods Programs Biomed., vol. 163, pp. 143–153, Sep. 2018.
[30] M. Ahammed, M. A. Mamun, and M. S. Uddin, ''A machine learning approach for skin disease detection and classification using image segmentation,'' Healthcare Anal., vol. 2, Nov. 2022, Art. no. 100122.
[31] N. Priyadharshini, B. Hemalatha, and C. Sureshkumar, ''A novel hybrid extreme learning machine and teaching–learning-based optimization algorithm for skin cancer detection,'' Healthcare Anal., vol. 3, Nov. 2023, Art. no. 100161.
[32] I. González-Díaz, ''DermaKNet: Incorporating the knowledge of dermatologists to convolutional neural networks for skin lesion diagnosis,'' IEEE J. Biomed. Health Informat., vol. 23, no. 2, pp. 547–559, Mar. 2019.
[33] J. Amin, A. Sharif, N. Gul, M. A. Anjum, M. W. Nisar, F. Azam, and S. A. C. Bukhari, ''Integrated design of deep features fusion for localization and classification of skin cancer,'' Pattern Recognit. Lett., vol. 131, pp. 63–70, Mar. 2020.
[34] M. Pezhman Pour and H. Seker, ''Transform domain representation-driven convolutional neural networks for skin lesion segmentation,'' Expert Syst. Appl., vol. 144, Apr. 2020, Art. no. 113129.
[35] M. A. Kassem, K. M. Hosny, and M. M. Fouad, ''Skin lesions classification into eight classes for ISIC 2019 using deep convolutional neural network and transfer learning,'' IEEE Access, vol. 8, pp. 114822–114832, 2020.
[36] P. N. Srinivasu, J. G. Sivasai, M. F. Ijaz, A. K. Bhoi, W. Kim, and J. J. Kang, ''Classification of skin disease using deep learning neural networks with MobileNet v2 and LSTM,'' Sensors, vol. 21, no. 8, p. 2852, Apr. 2021.
[37] J. Zhou, Z. Wu, Z. Jiang, K. Huang, K. Guo, and S. Zhao, ''Background selection schema on deep learning-based classification of dermatological disease,'' Comput. Biol. Med., vol. 149, Oct. 2022, Art. no. 105966.
[38] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, ''You only look once: Unified, real-time object detection,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2016, pp. 779–788.
[39] Y. Fang, X. Guo, K. Chen, Z. Zhou, and Q. Ye, ''Accurate and automated detection of surface knots on sawn timbers using YOLO-V5 model,'' BioResources, vol. 16, no. 3, pp. 5390–5406, Jun. 2021.
[40] X. Hu, D. Li, B. Luo, and L. Li, ''Weathering characteristics of wood-plastic composites compatibilized with ethylene vinyl acetate,'' BioResources, vol. 15, no. 2, pp. 3930–3944, Apr. 2020.
[41] ISIC. [Online]. Available: https://www.isic-archive.com/
[42] D. M. W. Powers, ''Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation,'' 2020, arXiv:2010.16061.
[43] M. S. Khan, K. N. Alam, A. R. Dhruba, H. Zunair, and N. Mohammed, ''Knowledge distillation approach towards melanoma detection,'' Comput. Biol. Med., vol. 146, Jul. 2022, Art. no. 105581.
[44] Q. Liu, H. Kawashima, and A. Rezaei Sofla, ''An optimal method for melanoma detection from dermoscopy images using reinforcement learning and support vector machine optimized by enhanced fish migration optimization algorithm,'' Heliyon, vol. 9, no. 10, Oct. 2023, Art. no. e21118.
[45] P. M. M. Pereira, L. A. Thomaz, L. M. N. Tavora, P. A. A. Assuncao, R. Fonseca-Pinto, R. P. Paiva, and S. M. M. Faria, ''Multiple instance learning using 3D features for melanoma detection,'' IEEE Access, vol. 10, pp. 76296–76309, 2022.
[46] R. Kaur, H. GholamHosseini, R. Sinha, and M. Lindén, ''Melanoma classification using a novel deep convolutional neural network with dermoscopic images,'' Sensors, vol. 22, no. 3, p. 1134, Feb. 2022.
[47] T. Khatibi, N. Rezaei, L. Ataei Fashtami, and M. Totonchi, ''Proposing a novel unsupervised stack ensemble of deep and conventional image segmentation (SEDCIS) method for localizing vitiligo lesions in skin images,'' Skin Res. Technol., vol. 27, no. 2, pp. 126–137, Mar. 2021.
[48] L. Guo, Y. Yang, H. Ding, H. Zheng, H. Yang, J. Xie, Y. Li, T. Lin, and Y. Ge, ''A deep learning-based hybrid artificial intelligence model for the detection and severity assessment of vitiligo lesions,'' Ann. Transl. Med., vol. 10, no. 10, p. 590, May 2022.
[49] S. Singh, K. R. Ramkumar, and S. Singh, ''Significance of machine learning algorithms to predict the growth and trend of COVID-19 pandemic,'' ECS Trans., vol. 107, no. 1, pp. 5449–5457, Apr. 2022.

SHAYMAA E. SOROUR received the Ph.D. degree in computer science and education from the Department of Advanced Information Technology, Faculty of Information Science and Electrical Engineering, Kyushu University, Japan, in 2016. She is currently an Assistant Professor in computer science with the Management Information Systems Department, College of Business Administration, King Faisal University, Saudi Arabia. She is also an Associate Professor in computer science and education and a Computer Teacher with the Department of Educational Technology, Faculty of Specific Education, Kafrelsheikh University, Egypt. Her specialization is computer science, artificial intelligence, and machine learning algorithms. She received the Best Paper Award at the Fifth IIAI International Congress on Advanced Applied Informatics, in July 2016.

AMR ABO HANY received the B.Sc. degree from the Faculty of Computers and Information, Zagazig University, Egypt, in 2007, and the M.Sc. and Ph.D. degrees from the Faculty of Computers and Information, Helwan University, Egypt, in 2014 and 2018, respectively. He is currently an Associate Professor in information systems with the Faculty of Computers and Information, Kafrelsheikh University, Egypt. He has more than 31 scientific research articles on the topic of information systems published in prestigious international journals. His current research interests include optimization, machine learning, and the Internet of Things.

MOHAMED S. ELREDENY is currently a Teaching Assistant with Alamein International University. His research interest includes enhancing object detection methodologies.

AHMED SEDIK received the B.Sc. and M.Sc. degrees in engineering from the Faculty of Engineering, Tanta University, Egypt, in 2012 and 2018, respectively, and the Ph.D. degree from Minia University, Egypt, in 2020. He is currently an Assistant Professor with the Faculty of Artificial Intelligence, Kafrelsheikh University.

REDA M. HUSSIEN (Member, IEEE) received the bachelor's degree in electrical engineering from Menofia University, Egypt, in 1999, the master's degree in information systems from the Faculty of Computers and Information, Menofia University, in 2007, and the Ph.D. degree in information systems from Menofia University, in 2011. He is currently an Assistant Professor with the Information Systems Department, Faculty of Computers and Information, Kafrelsheikh University, Egypt.