Search Results (1,779)

Search Parameters:
Keywords = medical image processing

27 pages, 764 KiB  
Review
The Medical Basis for the Photoluminescence of Indocyanine Green
by Wiktoria Mytych, Dorota Bartusik-Aebisher and David Aebisher
Molecules 2025, 30(4), 888; https://doi.org/10.3390/molecules30040888 - 14 Feb 2025
Abstract
Indocyanine green (ICG), a near-infrared (NIR) fluorescent dye with unique photoluminescent properties, is a helpful tool in many medical applications. ICG produces fluorescence when excited by NIR light, enabling accurate tissue visualization and real-time imaging. This study investigates the fundamental processes behind ICG’s photoluminescence as well as its present and possible applications in treatments and medical diagnostics. Fluorescence-guided surgery (FGS) has been transformed by ICG’s capacity to visualize tumors, highlight blood flow, and facilitate lymphatic mapping, all of which have improved surgical accuracy and patient outcomes. Furthermore, the fluorescence of the dye is being studied for new therapeutic approaches, such as photothermal therapy, in which NIR light can activate ICG to target and destroy cancer cells. We discuss the benefits and drawbacks of ICG’s photoluminescent qualities in therapeutic contexts, as well as current studies focused on improving its effectiveness, safety, and adaptability. The ongoing advancement of ICG-based imaging methods and therapies enables more precise disease detection, real-time monitoring, and tailored therapy options across a variety of medical specialties. The main part of our work draws on the latest reports, so the clinical articles we cite date back to 2020; for the theoretical background, the oldest article we cite is from 1995.
(This article belongs to the Special Issue Chemiluminescence and Photoluminescence of Advanced Compounds)
13 pages, 3045 KiB  
Article
Burn Wound Dynamics Measured with Hyperspectral Imaging
by Thomas Wild, Jörg Marotz, Ahmed Aljowder and Frank Siemers
Eur. Burn J. 2025, 6(1), 7; https://doi.org/10.3390/ebj6010007 - 13 Feb 2025
Viewed by 182
Abstract
Introduction: Hyperspectral imaging (HSI) combined with augmented model-based data processing enables the measurement of the depth-resolved perfusion of burn wounds. With these methods, the fundamental problem of wound dynamics (wound conversion or progression) in the first 4 days can be analyzed and evaluated parametrically. Material and Methods: From a cohort of 59 patients with burn injuries requiring medical intervention, 281 homogeneous wound segments were selected and subjected to clinical classification based on the duration of healing. The classification was retrospectively assigned to each segment during the period from day 0 to day 2 post-burn. The perfusion parameters were presented in two parameter spaces describing the upper and deeper perfusion. Results: The investigation of value distributions within the parameter spaces for four distinct categories of damage, from superficial dermal to full-thickness burns, during the initial four days reveals the inherent variability and distinct patterns associated with wound progression, depending on the severity of damage. The analysis highlights the challenges of estimating burn degree during this early stage and elucidates the significance of deeper tissue perfusion in the classification process, which cannot be discerned through visual inspection. Conclusions: The feasibility of early classification on day 0 or 1 was assessed, and the findings indicate a restricted level of reliability, particularly on day 0, primarily due to the substantial variability observed in wound characteristics and inherent dynamics.

15 pages, 7826 KiB  
Article
Tongue Image Segmentation and Constitution Identification with Deep Learning
by Chien-Ho Lin, Sien-Hung Yang and Jiann-Der Lee
Electronics 2025, 14(4), 733; https://doi.org/10.3390/electronics14040733 - 13 Feb 2025
Viewed by 240
Abstract
Traditional Chinese medicine (TCM) gathers patient information through inspection, olfaction, inquiry, and palpation, analyzing and interpreting the data to make a diagnosis and offer appropriate treatment. Traditionally, the interpretation of this information relies heavily on the physician’s personal knowledge and experience, so diagnostic outcomes can vary with the physician’s clinical experience and subjective judgment. This study applies AI methods to localized tongue assessment, developing automatic tongue body segmentation with the deep learning network “U-Net” through a series of optimization processes applied to tongue surface images. Furthermore, “ResNet34” is used to identify “cold”, “neutral”, and “hot” constitutions, creating a system that enhances the consistency and reliability of tongue-related diagnostic results. The final results demonstrate that the AI interpretation accuracy of this system reaches the diagnostic level of junior TCM practitioners (those who have passed the TCM practitioner assessment with ≤5 years of experience). The framework and findings of this study can serve as (1) a foundational step for the future integration of pulse information and electronic medical records, (2) a tool for personalized preventive medicine, and (3) a training resource for TCM students learning to diagnose tongue constitutions such as “cold”, “neutral”, and “hot”.
(This article belongs to the Special Issue Deep Learning for Computer Vision, 2nd Edition)
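For readers who want the shape of this two-stage pipeline in code, the following minimal sketch pairs a U-Net segmenter with a ResNet34 constitution classifier. The library choice (segmentation_models_pytorch), the masking step, and the threshold are illustrative assumptions, not the authors' implementation.

```python
# Sketch: segment the tongue body, then classify the masked image into
# "cold" / "neutral" / "hot". Assumes segmentation_models_pytorch is installed.
import torch
import torch.nn as nn
import segmentation_models_pytorch as smp
from torchvision.models import resnet34

# Stage 1: binary tongue-body segmentation (U-Net with a ResNet encoder).
segmenter = smp.Unet(encoder_name="resnet34", encoder_weights=None,
                     in_channels=3, classes=1)

# Stage 2: 3-class constitution classifier.
classifier = resnet34(weights=None)
classifier.fc = nn.Linear(classifier.fc.in_features, 3)  # cold / neutral / hot

def predict_constitution(image: torch.Tensor) -> torch.Tensor:
    """image: (B, 3, H, W) float tensor; returns (B, 3) class logits."""
    with torch.no_grad():
        mask = torch.sigmoid(segmenter(image)) > 0.5  # (B, 1, H, W)
        tongue_only = image * mask                    # zero out the background
        return classifier(tongue_only)
```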

12 pages, 798 KiB  
Technical Note
Adapting Classification Neural Network Architectures for Medical Image Segmentation Using Explainable AI
by Arturs Nikulins, Edgars Edelmers, Kaspars Sudars and Inese Polaka
J. Imaging 2025, 11(2), 55; https://doi.org/10.3390/jimaging11020055 - 13 Feb 2025
Viewed by 216
Abstract
Segmentation neural networks are widely used in medical imaging to identify anomalies that may impact patient health. Despite their effectiveness, these networks face significant challenges, including the need for extensive annotated patient data, time-consuming manual segmentation processes, and restricted data access due to privacy concerns. Classification neural networks, like segmentation networks, capture the parameters essential for identifying objects during training. This paper leverages this characteristic, combined with explainable artificial intelligence (XAI) techniques, to address the challenges of segmentation. By adapting classification neural networks for segmentation tasks, the proposed approach reduces dependency on manual segmentation. To demonstrate this concept, the Medical Segmentation Decathlon ‘Brain Tumours’ dataset was utilised. A ResNet classification neural network was trained, and XAI tools were applied to generate segmentation-like outputs. Our findings reveal that GuidedBackprop is among the most efficient and effective methods, producing heatmaps that closely resemble segmentation masks by accurately highlighting the entirety of the target object.
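The classification-to-segmentation idea is concrete enough to sketch. Below, Captum's GuidedBackprop turns a trained classifier's gradients into a segmentation-like heatmap; the stand-in ResNet18, the random input, and the 95th-percentile threshold are assumptions for illustration.

```python
# Sketch: segmentation-like masks from a classifier via Guided Backpropagation.
import torch
from torchvision.models import resnet18
from captum.attr import GuidedBackprop

model = resnet18(weights=None)  # stand-in for the trained tumour classifier
model.eval()

gbp = GuidedBackprop(model)
image = torch.randn(1, 3, 224, 224, requires_grad=True)  # one input slice

# Attribution w.r.t. the predicted class; output shape matches the input.
target = model(image).argmax(dim=1)
heatmap = gbp.attribute(image, target=target)

# Collapse channels and threshold to get a binary, mask-like output.
saliency = heatmap.abs().sum(dim=1, keepdim=True)
mask = saliency > saliency.flatten(1).quantile(0.95)
```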

50 pages, 3331 KiB  
Review
Artificial Intelligence in Ophthalmology: Advantages and Limits
by Hariton-Nicolae Costin, Monica Fira and Liviu Goraș
Appl. Sci. 2025, 15(4), 1913; https://doi.org/10.3390/app15041913 - 12 Feb 2025
Viewed by 466
Abstract
In recent years, artificial intelligence has begun to play a salient role in various medical fields, including ophthalmology. This extensive review is addressed to ophthalmologists and aims to capture the current landscape and future potential of AI applications for eye health. From automated retinal screening processes and machine learning models predicting the progression of ocular conditions to AI-driven decision support systems in clinical settings, this paper provides a comprehensive overview of the clinical implications of AI in ophthalmology. The development of AI has opened new horizons for ophthalmology, offering innovative solutions to improve the accuracy and efficiency of ocular disease diagnosis and management. The importance of this paper lies in its potential to strengthen collaboration between researchers, ophthalmologists, and AI specialists, leading to transformative findings in the early identification and treatment of eye diseases. By combining AI potential with cutting-edge imaging methods, novel biomarkers, and data-driven approaches, ophthalmologists can make more informed decisions and provide personalized treatment for their patients. Furthermore, this paper emphasizes the translation of basic research outcomes into clinical applications. We hope this comprehensive review will serve as a significant resource for ophthalmologists, researchers, data scientists, healthcare professionals, and managers in the healthcare system who are interested in the application of artificial intelligence in eye health.
(This article belongs to the Special Issue Recent Progress and Challenges of Digital Health and Bioengineering)

26 pages, 2026 KiB  
Systematic Review
Deep Learning in Thoracic Oncology: Meta-Analytical Insights into Lung Nodule Early-Detection Technologies
by Ting-Wei Wang, Chih-Keng Wang, Jia-Sheng Hong, Heng-Sheng Chao, Yuh-Min Chen and Yu-Te Wu
Cancers 2025, 17(4), 621; https://doi.org/10.3390/cancers17040621 - 12 Feb 2025
Viewed by 264
Abstract
Background/Objectives: Detecting lung nodules on computed tomography (CT) images is critical for diagnosing thoracic cancers. Deep learning models, particularly convolutional neural networks (CNNs), show promise in automating this process. This systematic review and meta-analysis aim to evaluate the diagnostic accuracy of these models, focusing on lesion-wise sensitivity as the primary metric. Methods: A comprehensive literature search was conducted, identifying 48 studies published up to 7 November 2023. The pooled diagnostic performance was assessed using a random-effects model, with lesion-wise sensitivity as the key outcome. Factors influencing model performance, including participant demographics, dataset privacy, and data splitting methods, were analyzed. Methodological rigor was maintained through the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) and Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tools. Trial Registration: This review is registered with PROSPERO under CRD42023479887. Results: The meta-analysis revealed a pooled sensitivity of 79% (95% CI: 72–86%) for independent datasets and 85% (95% CI: 83–88%) across all datasets. Variability in performance was associated with dataset characteristics and study methodologies. Conclusions: While deep learning models demonstrate significant potential in lung nodule detection, the findings highlight the need for more diverse datasets, standardized evaluation protocols, and interventional studies to enhance generalizability and clinical applicability. Further research is necessary to validate these models across broader patient populations.
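As a worked illustration of the random-effects pooling such a meta-analysis relies on, the sketch below implements DerSimonian-Laird pooling in numpy. The three input studies are invented numbers, not data from this review.

```python
# Sketch: DerSimonian-Laird random-effects pooling of per-study sensitivities.
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool study effects y_i with within-study variances v_i."""
    w = 1.0 / variances                        # fixed-effect weights
    y_fe = np.sum(w * effects) / np.sum(w)     # fixed-effect mean
    q = np.sum(w * (effects - y_fe) ** 2)      # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)              # between-study variance
    w_re = 1.0 / (variances + tau2)            # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study sensitivities (a logit scale would be more typical;
# raw proportions keep the illustration simple).
sens = np.array([0.81, 0.78, 0.88])
var = np.array([0.002, 0.004, 0.001])
print(dersimonian_laird(sens, var))
```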

21 pages, 3621 KiB  
Article
SAVE: Self-Attention on Visual Embedding for Zero-Shot Generic Object Counting
by Ahmed Zgaren, Wassim Bouachir and Nizar Bouguila
J. Imaging 2025, 11(2), 52; https://doi.org/10.3390/jimaging11020052 - 10 Feb 2025
Viewed by 326
Abstract
Zero-shot counting is a subcategory of Generic Visual Object Counting, which aims to count objects from an arbitrary class in a given image. While few-shot counting relies on delivering exemplars to the model to count similar class objects, zero-shot counting automates the operation for faster processing. This paper proposes a fully automated zero-shot method that outperforms both zero-shot and few-shot methods. By exploiting feature maps from a pre-trained detection-based backbone, we introduce a new Visual Embedding Module designed to generate semantic embeddings within object contextual information. These embeddings are then fed to a Self-Attention Matching Module to generate an encoded representation for the counting head. Our proposed method outperforms recent zero-shot approaches, achieving the best Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) results of 8.89 and 35.83, respectively, on the FSC147 dataset. Additionally, our method demonstrates competitive performance compared to few-shot methods, advancing the capabilities of visual object counting in industrial applications such as tree counting and wildlife counting, and in medical applications such as blood cell counting.
(This article belongs to the Special Issue Recent Trends in Computer Vision with Neural Networks)
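To make the described architecture concrete, here is a rough sketch of a self-attention matching block feeding a counting head. The dimensions, the residual/norm arrangement, and the density-sum head are assumptions, not the authors' exact SAVE modules.

```python
# Sketch: backbone embeddings -> self-attention matching -> count regression.
import torch
import torch.nn as nn

class SelfAttentionCounter(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                  nn.Linear(dim, 1))

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        """tokens: (B, N, dim) embeddings of N spatial locations."""
        attended, _ = self.attn(tokens, tokens, tokens)  # match similar objects
        tokens = self.norm(tokens + attended)            # residual + norm
        density = self.head(tokens).squeeze(-1)          # per-token density
        return density.sum(dim=1)                        # predicted count

counts = SelfAttentionCounter()(torch.randn(2, 196, 256))  # e.g. a 14x14 grid
```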

20 pages, 1045 KiB  
Review
Emerging Applications of Machine Learning in 3D Printing
by Izabela Rojek, Dariusz Mikołajewski, Marcin Kempiński, Krzysztof Galas and Adrianna Piszcz
Appl. Sci. 2025, 15(4), 1781; https://doi.org/10.3390/app15041781 - 10 Feb 2025
Viewed by 398
Abstract
Three-dimensional (3D) printing techniques already enable the precise deposition of many materials, making them a promising approach for materials, mechanical, and biomedical engineering. Recent advances in 3D printing enable scientists and engineers to create models with precisely controlled and complex microarchitecture, shapes, and surface finishes, including multi-material printing. The incorporation of artificial intelligence (AI) at various stages of 3D printing has made it possible to reconstruct objects from images (including, for example, medical images), select and optimize materials and the printing process, and monitor the lifecycle of products. New opportunities arise from the ability of machine learning (ML) to analyze complex data sets and learn from previous (historical) experience and predictions to dynamically optimize and individualize products and processes. This includes the synergistic capabilities of 3D printing and ML for the development of personalized products.
(This article belongs to the Special Issue Feature Review Papers in Additive Manufacturing Technologies)

19 pages, 2119 KiB  
Article
A Pixel Shift Estimation Approach Using Spectral Information
by Georgia Koukiou
Electronics 2025, 14(4), 664; https://doi.org/10.3390/electronics14040664 - 8 Feb 2025
Viewed by 175
Abstract
This research paper presents a robust image registration algorithm tailored for the accurate estimation of image displacements. Image registration is a fundamental task in computer vision and image processing, with applications ranging from medical imaging to motion tracking in surveillance systems. The algorithm’s efficacy is explored through a series of experiments conducted on image pairs, both in scenarios without noise and those affected by additive noise. The algorithm’s core methodology involves a combination of techniques, including Fourier transforms, phase correlation, and subpixel estimation. By leveraging these techniques, the algorithm can simultaneously compute both the integer and subpixel components of image displacement. This capability is particularly valuable in scenarios demanding precise alignment and motion analysis. In the experiments, the algorithm’s performance is assessed using the Mean Estimation Error (MEE), which quantifies the accuracy of displacement estimation. The results reveal that the algorithm consistently achieves high precision and accuracy, even in the presence of uniform white noise with a mean of 25 and a standard deviation of 15. This robustness to noise underscores its suitability for real-world applications where images are often affected by various sources of interference. The comparative analysis between noise-free and noisy scenarios demonstrates the algorithm’s resilience to adverse conditions, making it a versatile tool for image registration tasks in practical environments. Its potential applications encompass computer vision, medical imaging, security and surveillance, and high-precision image processing. The algorithm’s robustness to noise and subpixel accuracy make it an asset for a wide range of applications, promising enhanced capabilities in image alignment and motion analysis.
(This article belongs to the Special Issue Modern Computer Vision and Image Analysis)
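The core technique named here, phase correlation with subpixel refinement, is compact enough to sketch directly in numpy. The parabolic peak interpolation below is one common subpixel choice, assumed rather than taken from the paper.

```python
# Sketch: integer + subpixel shift estimation via phase correlation.
import numpy as np

def phase_correlate(a: np.ndarray, b: np.ndarray):
    """Estimate the (dy, dx) shift between two same-sized grayscale images."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                  # normalized cross-power spectrum
    corr = np.fft.ifft2(F).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)

    # Integer shift (wrap large indices to negative shifts).
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

    # Subpixel refinement: 1D parabola through the peak and its neighbours.
    sub = []
    for axis, p in enumerate(peak):
        prev_idx = list(peak); prev_idx[axis] = (p - 1) % corr.shape[axis]
        next_idx = list(peak); next_idx[axis] = (p + 1) % corr.shape[axis]
        c0, c1, c2 = corr[tuple(prev_idx)], corr[peak], corr[tuple(next_idx)]
        denom = c0 - 2 * c1 + c2
        sub.append(0.5 * (c0 - c2) / denom if denom != 0 else 0.0)
    return tuple(s + d for s, d in zip(shift, sub))

shift = phase_correlate(np.random.rand(64, 64), np.random.rand(64, 64))
```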

22 pages, 20326 KiB  
Article
GATransformer: A Graph Attention Network-Based Transformer Model to Generate Explainable Attentions for Brain Tumor Detection
by Sara Tehsin, Inzamam Mashood Nasir and Robertas Damaševičius
Algorithms 2025, 18(2), 89; https://doi.org/10.3390/a18020089 - 6 Feb 2025
Viewed by 405
Abstract
Brain tumors profoundly affect human health owing to their intricacy and the difficulties associated with early identification and treatment. Precise diagnosis is essential for effective intervention; nevertheless, the resemblance among tumor forms often complicates the identification of brain tumor types, particularly in the early stages. The latest deep learning systems offer very high classification accuracy but lack the explainability that would help patients understand the prediction process. The proposed GATransformer, a graph attention network (GAT)-based Transformer, combines an attention mechanism, a GAT, and a Transformer to identify and preserve key neural network channels. The channel attention module extracts deeper properties from weight-channel connections to improve model representation. Integrating these elements reduces model size and enhances computational efficiency while preserving adequate model performance. The proposed model is assessed on two publicly accessible datasets, FigShare and Kaggle, and cross-validated on the BraTS2019 and BraTS2020 datasets, demonstrating high accuracy and explainability. Notably, GATransformer generates interpretable attention maps, visually highlighting tumor regions to aid clinical understanding in medical imaging.
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (3rd Edition))
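A minimal graph-attention building block of the kind such a model stacks can be sketched with torch_geometric. The random node features and edges below stand in for the paper's actual channel-graph construction; the per-edge attention weights are what get visualized as explainable attention.

```python
# Sketch: one GAT layer with attention weights exposed for visualization.
import torch
from torch_geometric.nn import GATConv

conv = GATConv(in_channels=64, out_channels=64, heads=4, concat=False)

x = torch.randn(10, 64)                     # 10 nodes (e.g. feature channels)
edge_index = torch.randint(0, 10, (2, 40))  # 40 directed edges (random here)

# return_attention_weights exposes per-edge attentions for interpretation.
out, (edges, alpha) = conv(x, edge_index, return_attention_weights=True)
```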

18 pages, 4325 KiB  
Article
Hybrid U-Net Model with Visual Transformers for Enhanced Multi-Organ Medical Image Segmentation
by Pengsong Jiang, Wufeng Liu, Feihu Wang and Renjie Wei
Information 2025, 16(2), 111; https://doi.org/10.3390/info16020111 - 6 Feb 2025
Viewed by 482
Abstract
Medical image segmentation is an essential process that facilitates the precise extraction and localization of diseased areas from medical images. It can provide clear and quantifiable information to support clinicians in making final decisions. However, because CNNs lack explicit modeling of global relationships, they cannot fully exploit the long-range dependencies among image locations. In this paper, we propose a novel model that extracts local and global semantic features from images by combining a CNN and a visual transformer in the encoder. It is important to note that the self-attention mechanism treats a 2D image as a 1D sequence of patches, which can disrupt the image’s inherent 2D spatial structure. We therefore built the transformer structure around visual attention and large-kernel attention, and added a residual convolutional attention module (RCAM) and multi-scale fusion convolution (MFC) to the decoder. These help the model capture crucial features and fine details, improving the detail and accuracy of segmentation. On the synapse multi-organ segmentation (Synapse) and automated cardiac diagnostic challenge (ACDC) datasets, our model outperformed previous models, demonstrating that it is more precise and robust in multi-organ medical image segmentation.
(This article belongs to the Section Artificial Intelligence)
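As a sketch of what a residual convolutional attention module might look like, the block below combines a convolution, squeeze-and-excitation style channel attention, and a residual connection. The paper's actual RCAM may differ, so treat this as an assumption-laden illustration.

```python
# Sketch: residual convolutional attention module (one plausible design).
import torch
import torch.nn as nn

class RCAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.attn = nn.Sequential(              # channel attention weights
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.conv(x)
        return x + y * self.attn(y)             # attention-scaled residual

out = RCAM(64)(torch.randn(1, 64, 32, 32))
```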

17 pages, 3802 KiB  
Article
Automated Fungal Identification with Deep Learning on Time-Lapse Images
by Marjan Mansourvar, Karol Rafal Charylo, Rasmus John Normand Frandsen, Steen Smidth Brewer and Jakob Blæsbjerg Hoof
Information 2025, 16(2), 109; https://doi.org/10.3390/info16020109 - 5 Feb 2025
Viewed by 427
Abstract
The identification of species within filamentous fungi is crucial in fields such as agriculture, environmental monitoring, and medical mycology. Traditional morphology-based identification methods require little advanced equipment but depend heavily on manual observation and expertise. This approach may struggle to differentiate between species within a genus because of their visual similarity, making the process time-consuming and subjective. In this study, we present an AI-based fungal species recognition model that applies deep learning to time-lapse images. The training dataset, derived from fungal strains in the IBT Culture Collection, comprised 26,451 high-resolution images representing 110 species from 35 genera, divided into training and validation subsets. We implemented three advanced deep learning architectures—ResNet50, DenseNet-121, and Vision Transformer (ViT)—to assess their effectiveness in accurately classifying fungal species. By using images from early growth stages (days 2–3.5) for training and testing and later stages (days 4–7) for validation, our approach shortens the fungal identification process by 2–3 days, significantly reducing the associated workload and costs. Among the models, the Vision Transformer achieved the highest accuracy, 92.6%, demonstrating the effectiveness of our method. This work contributes to the automation of fungal identification, providing a reliable and efficient solution for monitoring fungal growth and diversity over time, useful for culture collections and other institutions that handle a large number of new isolates in their daily work.
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
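The day-based evaluation split and the ViT fine-tuning are straightforward to sketch. The file-naming convention, the capture_day helper, and the pretrained weights below are hypothetical stand-ins, not the authors' code.

```python
# Sketch: fine-tune a ViT for 110 species; split frames by growth day.
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

NUM_SPECIES = 110

model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_SPECIES)

def capture_day(path: str) -> float:
    """Hypothetical helper: parse the growth day from a time-lapse filename,
    e.g. 'strain42_day2.5.jpg' -> 2.5."""
    stem = path.rsplit("_day", 1)[1]
    return float(stem.rsplit(".", 1)[0])

def split(paths):
    """Early frames (days 2-3.5) for training/testing, later for validation."""
    train = [p for p in paths if 2.0 <= capture_day(p) <= 3.5]
    val = [p for p in paths if capture_day(p) >= 4.0]
    return train, val
```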

27 pages, 4940 KiB  
Article
Alzheimer’s Prediction Methods with Harris Hawks Optimization (HHO) and Deep Learning-Based Approach Using an MLP-LSTM Hybrid Network
by Raheleh Ghadami and Javad Rahebi
Diagnostics 2025, 15(3), 377; https://doi.org/10.3390/diagnostics15030377 - 5 Feb 2025
Viewed by 397
Abstract
Background/Objective: Alzheimer’s disease is a progressive brain syndrome causing cognitive decline and, ultimately, death. Early diagnosis is essential for timely medical intervention, with MRI serving as a primary diagnostic tool. Machine learning (ML) and deep learning (DL) methods are increasingly used to analyze these images, but accurately distinguishing between healthy and diseased states remains a challenge. This study addresses these limitations by developing an integrated approach combining swarm intelligence with ML and DL techniques for Alzheimer’s disease classification. Method: The proposed methodology involves sourcing Alzheimer’s disease-related MRI images and extracting features using convolutional neural networks (CNNs) and the Gray Level Co-occurrence Matrix (GLCM). The Harris Hawks Optimization (HHO) algorithm is applied to select the most significant features. The selected features are used to train a multi-layer perceptron (MLP) neural network and are further processed by a long short-term memory (LSTM) network to classify scans as diseased or healthy. The Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset is used for assessment. Results: The proposed method achieved a classification accuracy of 97.59%, sensitivity of 97.41%, and precision of 97.25%, outperforming other models, including VGG16, GLCM, and ResNet-50, in diagnosing Alzheimer’s disease. Conclusions: The results demonstrate the efficacy of the proposed approach in enhancing Alzheimer’s disease diagnosis through improved feature extraction and selection techniques. These findings highlight the potential of integrating advanced ML and DL methods to improve diagnostic tools in medical imaging applications.
(This article belongs to the Special Issue Artificial Intelligence in Alzheimer’s Disease Diagnosis)
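The GLCM texture-feature step is standard enough to sketch with scikit-image. The distances, angles, and chosen properties below are illustrative assumptions; in the paper these features are combined with CNN features before HHO selection.

```python
# Sketch: GLCM texture features from a grayscale MRI slice.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_slice: np.ndarray) -> np.ndarray:
    """gray_slice: 2D uint8 slice; returns a small texture-feature vector."""
    glcm = graycomatrix(
        gray_slice,
        distances=[1, 2],
        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
        levels=256,
        symmetric=True,
        normed=True,
    )
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

features = glcm_features(np.random.randint(0, 256, (128, 128), dtype=np.uint8))
```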

19 pages, 3581 KiB  
Article
Multi-Classification of Skin Lesion Images Including Mpox Disease Using Transformer-Based Deep Learning Architectures
by Seyfettin Vuran, Murat Ucan, Mehmet Akin and Mehmet Kaya
Diagnostics 2025, 15(3), 374; https://doi.org/10.3390/diagnostics15030374 - 5 Feb 2025
Viewed by 461
Abstract
Background/Objectives: As reported by the World Health Organization, Mpox (monkeypox) is an important disease present in 110 countries, mostly in South Asia and Africa. The number of Mpox cases has increased rapidly, and the medical world is worried about the emergence of a new pandemic. Detection of Mpox by traditional methods (using test kits) is a costly and slow process, so there is a need for deep-learning-based autonomous methods that can diagnose Mpox from skin images with high success rates. Methods: In this work, we propose a fast and reliable multi-class autonomous disease diagnosis model using transformer-based deep learning architectures and skin lesion images, including Mpox. We also investigate the effects of self-supervised learning, self-distillation, and shifted-window techniques on classification success when multi-class skin lesion images are trained with transformer-based deep learning architectures. The Mpox Skin Lesion Dataset, Version 2.0, publicly released in 2024, was used for training, validation, and testing. Results: The SwinTransformer architecture we propose achieved roughly 8% higher classification accuracy than its closest competitor in the literature. The ViT, MAE, DINO, and SwinTransformer architectures achieved 93.10%, 84.60%, 90.40%, and 93.71% classification accuracy, respectively. Conclusions: The results show that Mpox and other skin lesions can be diagnosed from images with high success, supporting doctors in decision-making. The study also provides results that can guide the choice of transformer-based architectures and techniques in other medical fields where the number of available images is low.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
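Fine-tuning a Swin Transformer, the shifted-window architecture the paper found best, can be sketched in a few lines with torchvision; the class count and pretrained weights below are assumptions for illustration.

```python
# Sketch: adapt a pretrained Swin-T to multi-class skin lesion classification.
import torch.nn as nn
from torchvision.models import swin_t, Swin_T_Weights

NUM_CLASSES = 6  # hypothetical number of lesion classes, including Mpox

model = swin_t(weights=Swin_T_Weights.IMAGENET1K_V1)
model.head = nn.Linear(model.head.in_features, NUM_CLASSES)
```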

54 pages, 1125 KiB  
Systematic Review
Investigation into Application of AI and Telemedicine in Rural Communities: A Systematic Literature Review
by Kinalyne Perez, Daniela Wisniewski, Arzu Ari, Kim Lee, Cristian Lieneck and Zo Ramamonjiarivelo
Healthcare 2025, 13(3), 324; https://doi.org/10.3390/healthcare13030324 - 4 Feb 2025
Viewed by 1110
Abstract
Recent advances in artificial intelligence (AI) and telemedicine are transforming healthcare delivery, particularly in rural and underserved communities. Background/Objectives: The purpose of this systematic review is to explore the use of AI-driven diagnostic tools and telemedicine platforms to identify underlying themes (constructs) in the literature across multiple research studies. Method: The research team conducted an extensive review of studies and articles across multiple research databases, aiming to identify consistent themes and patterns in the literature. Results: Five underlying constructs were identified regarding the utilization of AI and telemedicine for patient diagnosis in rural communities: (1) challenges and benefits of AI and telemedicine in rural communities; (2) integration of telemedicine and AI in diagnosis and patient monitoring; (3) future considerations of AI and telemedicine in rural communities; (4) application of AI for accurate and early diagnosis of diseases through various digital tools; and (5) insights into future directions and potential innovations in AI and telemedicine geared towards enhancing healthcare delivery in rural communities. Conclusions: While AI technologies offer enhanced diagnostic capabilities by processing vast datasets of medical records, imaging, and patient histories, leading to earlier and more accurate diagnoses, telemedicine acts as a bridge between patients in remote areas and specialized healthcare providers, offering timely access to consultations, follow-up care, and chronic disease management. The integration of AI with telemedicine therefore allows for real-time decision support, improving clinical outcomes by providing data-driven insights during virtual consultations. However, challenges remain, including ensuring equitable access to these technologies, addressing digital literacy gaps, and managing the ethical implications of AI-driven decisions. Despite these hurdles, AI and telemedicine hold significant promise for reducing healthcare disparities and advancing the quality of care in rural settings, potentially leading to improved long-term health outcomes for underserved populations.
