Search Results (973)

Search Parameters:
Keywords = medical image segmentation

19 pages, 661 KiB  
Review
Next-Gen Medical Imaging: U-Net Evolution and the Rise of Transformers
by Chen Zhang, Xiangyao Deng and Sai Ho Ling
Sensors 2024, 24(14), 4668; https://doi.org/10.3390/s24144668 - 18 Jul 2024
Viewed by 116
Abstract
The advancement of medical imaging has profoundly impacted our understanding of the human body and various diseases, driving the continuous refinement of related technologies over many years. Despite these advancements, several challenges persist in the development of medical imaging, including data shortages and images characterized by low contrast, high noise levels, and limited resolution. The U-Net architecture has evolved significantly to address these challenges, becoming a staple in medical imaging thanks to its effective performance and numerous updated versions. However, the emergence of Transformer-based models marks a new era in deep learning for medical imaging. These models and their variants promise substantial progress, necessitating a comparative analysis to comprehend recent advancements. This review begins by exploring the fundamental U-Net architecture and its variants, then examines the limitations encountered during its evolution. It next introduces the Transformer-based self-attention mechanism and investigates how modern models incorporate positional information. The review emphasizes the revolutionary potential of Transformer-based techniques, discusses their limitations, and outlines potential avenues for future research. Full article
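
As a concrete reminder of the mechanism this review centers on, here is a minimal sketch of Transformer-style self-attention over flattened image tokens with a learnable positional embedding. The token count, embedding width, and positional scheme are illustrative assumptions, not taken from any surveyed model.

```python
# Minimal sketch of Transformer self-attention over flattened image tokens,
# with a learnable positional embedding. Shapes are illustrative assumptions.
import torch
import torch.nn as nn

class PatchSelfAttention(nn.Module):
    def __init__(self, dim: int, n_heads: int = 8, n_patches: int = 196):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))  # positional information
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_patches, dim) -- flattened image patches / feature tokens
        x = x + self.pos                 # inject positional information
        h = self.norm(x)
        out, _ = self.attn(h, h, h)      # global token-to-token attention
        return x + out                   # residual connection

tokens = torch.randn(2, 196, 64)         # e.g. a 14x14 feature map with 64 channels
print(PatchSelfAttention(64)(tokens).shape)  # torch.Size([2, 196, 64])
```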

24 pages, 758 KiB  
Article
Advanced Convolutional Neural Networks for Precise White Blood Cell Subtype Classification in Medical Diagnostics
by Athanasios Kanavos, Orestis Papadimitriou, Khalil Al-Hussaeni, Manolis Maragoudakis and Ioannis Karamitsos
Electronics 2024, 13(14), 2818; https://doi.org/10.3390/electronics13142818 - 18 Jul 2024
Viewed by 201
Abstract
White blood cell (WBC) classification is pivotal in medical image analysis, playing a critical role in the precise diagnosis and monitoring of diseases. This paper presents a novel convolutional neural network (CNN) architecture designed specifically for the classification of WBC images. Our model, trained on an extensive dataset, automates the extraction of discriminative features essential for accurate subtype identification. We conducted comprehensive experiments on a publicly available image dataset to validate the efficacy of our methodology. Comparative analysis with state-of-the-art methods shows that our approach significantly outperforms existing models in accurately categorizing WBCs into their respective subtypes. An in-depth analysis of the features learned by the CNN reveals key insights into the morphological traits—such as shape, size, and texture—that contribute to its classification accuracy. Importantly, the model demonstrates robust generalization capabilities, suggesting its high potential for real-world clinical implementation. Our findings indicate that the proposed CNN architecture can substantially enhance the precision and efficiency of WBC subtype identification, offering significant improvements in medical diagnostics and patient care. Full article
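
The abstract describes a bespoke CNN without specifying its layers, so the following is only a generic sketch of a small CNN classifier of the kind used for WBC subtyping; the layer widths and four-class head are assumptions.

```python
# Generic small CNN classifier sketch for WBC subtype images.
# Layer sizes and the 4-class output are assumptions, not the authors' network.
import torch
import torch.nn as nn

class SmallWBCNet(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),      # global pooling -> (B, 128, 1, 1)
        )
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

logits = SmallWBCNet()(torch.randn(8, 3, 128, 128))
print(logits.shape)  # torch.Size([8, 4])
```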

5 pages, 1690 KiB  
Case Report
Postoperative Intestinal Intussusception in Polytraumatized Adult Patient: A Case Report
by Claudia Viviana Jaimes González, María José Pereira Velásquez, Juan Pablo Unigarro Villota and Adriana Patricia Mora Lozada
Complications 2024, 1(2), 32-36; https://doi.org/10.3390/complications1020006 - 17 Jul 2024
Viewed by 147
Abstract
Background: Intestinal intussusception is defined as the invagination of one segment of the intestine into the lumen of an adjacent intestinal segment, resulting in mechanical intestinal obstruction of multifactorial origin with a high risk of morbidity and mortality. It is a rare pathology in adults with a nonspecific clinical presentation. We present the case of a 26-year-old male patient who was admitted postoperatively after multiple extra-institutional surgical interventions for polytrauma secondary to a work-related accident involving high-impact trauma from a solids mixer. He was referred to our institution for suspected vascular trauma of the right femoral artery. During his hospital stay, he developed intolerance to oral intake associated with pain, abdominal distension, and persistent emetic episodes despite medical management. Consequently, an abdominal CT scan with double contrast was requested, revealing intestinal intussusception secondary to intestinal adhesions, which required further surgical management with a favorable outcome. Discussion: Intussusception in the adult population is rare and is primarily caused by an identifiable structural lesion. It is one of the most challenging pathologies in terms of diagnosis and management due to its nonspecific presentation. When postoperative symptoms indicating intestinal obstruction appear, computed tomography is considered the imaging modality of choice for diagnosing intussusception in adults. Conclusions: The development of postoperative peritoneal adhesions is a common cause of intestinal obstruction that can lead to complications such as intestinal intussusception, requiring additional interventions. It is therefore vital to identify their presence early to reduce morbidity and mortality. Full article

41 pages, 33915 KiB  
Article
Four Transformer-Based Deep Learning Classifiers Embedded with an Attention U-Net-Based Lung Segmenter and Layer-Wise Relevance Propagation-Based Heatmaps for COVID-19 X-ray Scans
by Siddharth Gupta, Arun K. Dubey, Rajesh Singh, Mannudeep K. Kalra, Ajith Abraham, Vandana Kumari, John R. Laird, Mustafa Al-Maini, Neha Gupta, Inder Singh, Klaudija Viskovic, Luca Saba and Jasjit S. Suri
Diagnostics 2024, 14(14), 1534; https://doi.org/10.3390/diagnostics14141534 - 16 Jul 2024
Viewed by 359
Abstract
Background: Diagnosing lung diseases accurately is crucial for proper treatment. Convolutional neural networks (CNNs) have advanced medical image processing, but challenges remain in their accurate explainability and reliability. This study combines U-Net with attention and Vision Transformers (ViTs) to enhance lung disease segmentation and classification. We hypothesize that Attention U-Net will enhance segmentation accuracy and that ViTs will improve classification performance. The explainability methodologies will shed light on model decision-making processes, aiding in clinical acceptance. Methodology: A comparative approach was used to evaluate deep learning models for segmenting and classifying lung illnesses using chest X-rays. The Attention U-Net model is used for segmentation, and architectures consisting of four CNNs and four ViTs were investigated for classification. Methods like Gradient-weighted Class Activation Mapping plus plus (Grad-CAM++) and Layer-wise Relevance Propagation (LRP) provide explainability by identifying crucial areas influencing model decisions. Results: The results support the conclusion that ViTs are outstanding in identifying lung disorders. Attention U-Net obtained a Dice Coefficient of 98.54% and a Jaccard Index of 97.12%. ViTs outperformed CNNs in classification tasks by 9.26%, reaching an accuracy of 98.52% with MobileViT. An 8.3% increase in accuracy was seen while moving from raw data classification to segmented image classification. Techniques like Grad-CAM++ and LRP provided insights into the decision-making processes of the models. Conclusions: This study highlights the benefits of integrating Attention U-Net and ViTs for analyzing lung diseases, demonstrating their importance in clinical settings. Emphasizing explainability clarifies deep learning processes, enhancing confidence in AI solutions and perhaps enhancing clinical acceptance for improved healthcare results. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Image Analysis—2nd Edition)
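
The Dice coefficient and Jaccard index quoted above follow their standard definitions; for reference, a direct NumPy computation on binary masks:

```python
# Standard Dice coefficient and Jaccard index for binary segmentation masks.
import numpy as np

def dice_and_jaccard(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum() + eps)
    jaccard = inter / (union + eps)
    return dice, jaccard

pred = np.zeros((64, 64), int); pred[8:40, 8:40] = 1    # toy predicted mask
gt = np.zeros((64, 64), int); gt[12:44, 12:44] = 1      # toy ground-truth mask
print(dice_and_jaccard(pred, gt))
```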

16 pages, 32240 KiB  
Article
A Novel Tongue Coating Segmentation Method Based on Improved TransUNet
by Jiaze Wu, Zijian Li, Yiheng Cai, Hao Liang, Long Zhou, Ming Chen and Jing Guan
Sensors 2024, 24(14), 4455; https://doi.org/10.3390/s24144455 - 10 Jul 2024
Viewed by 263
Abstract
Background: As an important part of the tongue, the tongue coating is closely associated with different disorders and has major diagnostic value. This study aims to construct a neural network model that can perform complex tongue coating segmentation, addressing the issue of tongue coating segmentation in the automation of intelligent tongue diagnosis. Method: This work proposes an improved TransUNet to segment the tongue coating. We introduced a transformer as a self-attention mechanism to capture the semantic information in the high-level features of the encoder. At the same time, the subtraction feature pyramid (SFP) and visual regional enhancer (VRE) were constructed to minimize the redundant information transmitted by skip connections and to improve the spatial detail information in the low-level features of the encoder. Results: Comparative and ablation experiments indicate that our model achieves an accuracy of 96.36%, a precision of 96.26%, a Dice coefficient of 96.76%, a recall of 97.43%, and an IoU of 93.81%. Among the models compared, ours achieves the best segmentation results. Conclusion: The improved TransUNet proposed here can achieve precise segmentation of complex tongue images. This provides an effective technique for automatically extracting the tongue coating from images, contributing to the automation and accuracy of tongue diagnosis. Full article
(This article belongs to the Section Sensing and Imaging)
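
The paper's subtraction feature pyramid is not specified here; one plausible reading is that adjacent encoder levels are differenced so that skip connections carry less redundant information. The sketch below illustrates that reading only and is not the authors' code.

```python
# Illustrative reading of a "subtraction" feature operation between adjacent
# encoder levels, under the assumption described in the lead-in above.
import torch
import torch.nn.functional as F

def subtraction_feature(low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
    # low:  fine feature map, e.g. (B, C, 64, 64)
    # high: coarser feature map from one level deeper, e.g. (B, C, 32, 32)
    high_up = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                            align_corners=False)
    return torch.abs(low - high_up)   # keep what the coarser level lacks

low, high = torch.randn(1, 64, 64, 64), torch.randn(1, 64, 32, 32)
print(subtraction_feature(low, high).shape)  # torch.Size([1, 64, 64, 64])
```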

7 pages, 6460 KiB  
Interesting Images
Using High-Resolution Vessel Wall Magnetic Resonance Images in a Patient of Intracranial Artery Dissection Related Acute Infarction
by Chia-Yu Lin, Hung-Chieh Chen and Yu-Hsuan Wu
Diagnostics 2024, 14(14), 1463; https://doi.org/10.3390/diagnostics14141463 - 9 Jul 2024
Viewed by 351
Abstract
Acute ischemic stroke in young adults typically carries significant implications for morbidity, mortality, and long-term disability. In this study, we describe the case of a 34-year-old male with no prior medical history who presented with symptoms of right-sided weakness and slurred speech, suggesting an acute ischemic stroke. Initial CT angiography revealed an occlusion in the left M2 segment middle cerebral artery (MCA). The occlusion was successfully recanalized through emergent endovascular thrombectomy, which also identified a dissection as the cause of the stroke. Follow-up assessments at 3 days and three months, which included advanced vessel wall MRI, highlighted the critical role of intracranial artery dissection in strokes among young adults and provided essential images for ongoing evaluation. Full article
(This article belongs to the Special Issue Cerebrovascular Lesions: Diagnosis and Management)

24 pages, 2167 KiB  
Article
Utilizing Deep Feature Fusion for Automatic Leukemia Classification: An Internet of Medical Things-Enabled Deep Learning Framework
by Md Manowarul Islam, Habibur Rahman Rifat, Md. Shamim Bin Shahid, Arnisha Akhter and Md Ashraf Uddin
Sensors 2024, 24(13), 4420; https://doi.org/10.3390/s24134420 - 8 Jul 2024
Viewed by 421
Abstract
Acute lymphoblastic leukemia, commonly referred to as ALL, is a type of cancer that can affect both the blood and the bone marrow. The process of diagnosis is a difficult one since it often calls for specialist testing, such as blood tests, bone marrow aspiration, and biopsy, all of which are highly time-consuming and expensive. It is essential to obtain an early diagnosis of ALL in order to start therapy in a timely and suitable manner. In recent medical diagnostics, substantial progress has been achieved through the integration of artificial intelligence (AI) and Internet of Things (IoT) devices. Our proposal introduces a new AI-based Internet of Medical Things (IoMT) framework designed to automatically identify leukemia from peripheral blood smear (PBS) images. In this study, we present a novel deep learning-based fusion model to detect ALL types of leukemia. The system seamlessly delivers the diagnostic reports to the centralized database, inclusive of patient-specific devices. After collecting blood samples from the hospital, the PBS images are transmitted to the cloud server through a WiFi-enabled microscopic device. In the cloud server, a new fusion model that is capable of classifying ALL from PBS images is configured. The fusion model is trained using a dataset including 6512 original and segmented images from 89 individuals. Two input channels are used for the purpose of feature extraction in the fusion model. These channels include both the original and the segmented images. VGG16 is responsible for extracting features from the original images, whereas DenseNet-121 is responsible for extracting features from the segmented images. The two output features are merged together, and dense layers are used for the categorization of leukemia. The fusion model that has been suggested obtains an accuracy of 99.89%, a precision of 99.80%, and a recall of 99.72%, which places it in an excellent position for the categorization of leukemia. The proposed model outperformed several state-of-the-art Convolutional Neural Network (CNN) models in terms of performance. Consequently, this proposed model has the potential to save lives and effort. For a more comprehensive simulation of the entire methodology, a web application (Beta Version) has been developed in this study. This application is designed to determine the presence or absence of leukemia in individuals. The findings of this study hold significant potential for application in biomedical research, particularly in enhancing the accuracy of computer-aided leukemia detection. Full article
(This article belongs to the Special Issue Securing E-health Data across IoMT and Wearable Sensor Networks)
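
The abstract does spell out the fusion design: VGG16 features from the original image, DenseNet-121 features from the segmented image, concatenated and classified by dense layers. A hedged sketch of that topology follows; the head sizes and class count are our assumptions.

```python
# Two-branch fusion sketch: VGG16 on the original image, DenseNet-121 on the
# segmented image, features concatenated and classified with dense layers.
# Head sizes and n_classes are assumptions, not the published configuration.
import torch
import torch.nn as nn
from torchvision import models

class FusionALLNet(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.vgg = models.vgg16(weights=None).features           # original-image branch
        self.dense = models.densenet121(weights=None).features   # segmented-image branch
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Sequential(nn.Linear(512 + 1024, 256), nn.ReLU(),
                                  nn.Linear(256, n_classes))

    def forward(self, original, segmented):
        f1 = self.pool(self.vgg(original)).flatten(1)     # (B, 512)
        f2 = self.pool(self.dense(segmented)).flatten(1)  # (B, 1024)
        return self.head(torch.cat([f1, f2], dim=1))      # merged features

x = torch.randn(2, 3, 224, 224)
print(FusionALLNet()(x, x).shape)  # torch.Size([2, 4])
```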

29 pages, 13960 KiB  
Article
Few-Shot Image Segmentation Using Generating Mask with Meta-Learning Classifier Weight Transformer Network
by Jian-Hong Wang, Phuong Thi Le, Fong-Ci Jhou, Ming-Hsiang Su, Kuo-Chen Li, Shih-Lun Chen, Tuan Pham, Ji-Long He, Chien-Yao Wang, Jia-Ching Wang and Pao-Chi Chang
Electronics 2024, 13(13), 2634; https://doi.org/10.3390/electronics13132634 - 4 Jul 2024
Viewed by 423
Abstract
With the rapid advancement of modern hardware technology, breakthroughs have been made in many areas of artificial intelligence research, leading to the direction of machine replacement or assistance in various fields. However, most artificial intelligence or deep learning techniques require large amounts of training data and are typically applicable to a single task objective. Acquiring such large training datasets can be particularly challenging, especially in domains like medical imaging. In the field of image processing, few-shot image segmentation is an area of active research. Recent studies have employed deep learning and meta-learning approaches to enable models to segment objects in images with only a small amount of training data, allowing them to quickly adapt to new task objectives. This paper proposes a network architecture for meta-learning few-shot image segmentation, utilizing a meta-learning classification weight transfer network to generate masks for few-shot image segmentation. The architecture leverages pre-trained classification weight transfers to generate informative prior masks and employs pre-trained feature extraction architecture for feature extraction of query and support images. Furthermore, it utilizes a Feature Enrichment Module to adaptively propagate information from finer features to coarser features in a top-down manner for query image feature extraction. Finally, a classification module is employed for query image segmentation prediction. Experimental results demonstrate that compared to the baseline using the mean Intersection over Union (mIOU) as the evaluation metric, the accuracy increases by 1.7% in the one-shot experiment and by 2.6% in the five-shot experiment. Thus, compared to the baseline, the proposed architecture with meta-learning classification weight transfer network for mask generation exhibits superior performance in few-shot image segmentation. Full article
(This article belongs to the Special Issue Intelligent Big Data Analysis for High-Dimensional Internet of Things)
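
The classifier weight transformer itself is not reproduced here; as a loose illustration of the "informative prior mask" idea, the sketch below scores query features against a support-foreground prototype by cosine similarity. This mirrors the spirit of prior-mask generation only, not the paper's mechanism.

```python
# Loose illustration of a few-shot prior mask: cosine similarity between query
# features and a masked-average support prototype. Not the paper's method.
import torch
import torch.nn.functional as F

def prior_mask(query_feat, support_feat, support_mask):
    # query_feat / support_feat: (B, C, H, W); support_mask: (B, 1, H, W) in {0, 1}
    proto = (support_feat * support_mask).sum((2, 3)) / (
        support_mask.sum((2, 3)) + 1e-8)                   # (B, C) foreground prototype
    sim = F.cosine_similarity(query_feat, proto[..., None, None], dim=1)
    return sim.unsqueeze(1)                                # (B, 1, H, W) prior

q, s = torch.randn(1, 256, 32, 32), torch.randn(1, 256, 32, 32)
m = (torch.rand(1, 1, 32, 32) > 0.5).float()
print(prior_mask(q, s, m).shape)  # torch.Size([1, 1, 32, 32])
```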

19 pages, 6553 KiB  
Article
An Automatic Method for Elbow Joint Recognition, Segmentation and Reconstruction
by Ying Cui, Shangwei Ji, Yejun Zha, Xinhua Zhou, Yichuan Zhang and Tianfeng Zhou
Sensors 2024, 24(13), 4330; https://doi.org/10.3390/s24134330 - 3 Jul 2024
Viewed by 374
Abstract
Elbow computerized tomography (CT) scans have been widely applied for describing elbow morphology. To enhance the objectivity and efficiency of clinical diagnosis, an automatic method to recognize, segment, and reconstruct elbow joint bones is proposed in this study. The method involves three steps: initially, the humerus, ulna, and radius are automatically recognized based on the anatomical features of the elbow joint, and the prompt boxes are generated. Subsequently, elbow MedSAM is obtained through transfer learning, which accurately segments the CT images by integrating the prompt boxes. After that, hole-filling and object reclassification steps are executed to refine the mask. Finally, three-dimensional (3D) reconstruction is conducted seamlessly using the marching cube algorithm. To validate the reliability and accuracy of the method, the images were compared to the masks labeled by senior surgeons. Quantitative evaluation of segmentation results revealed median intersection over union (IoU) values of 0.963, 0.959, and 0.950 for the humerus, ulna, and radius, respectively. Additionally, the reconstructed surface errors were measured at 1.127, 1.523, and 2.062 mm, respectively. Consequently, the automatic elbow reconstruction method demonstrates promising capabilities in clinical diagnosis, preoperative planning, and intraoperative navigation for elbow joint diseases. Full article
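
The final step the abstract names, the marching cubes algorithm, is available directly in scikit-image; in the sketch below a synthetic sphere stands in for a segmented bone mask, and the voxel spacing is an assumed placeholder.

```python
# Surface reconstruction with marching cubes, as in the paper's final step.
# The synthetic sphere is a stand-in for a binary bone mask from segmentation.
import numpy as np
from skimage import measure

z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = (x**2 + y**2 + z**2 < 24**2).astype(np.float32)  # toy binary mask

# Extract a triangle mesh at the 0.5 iso-surface; spacing is assumed voxel size in mm.
verts, faces, normals, values = measure.marching_cubes(
    volume, level=0.5, spacing=(1.0, 1.0, 1.0))
print(verts.shape, faces.shape)  # (N, 3) vertices and (M, 3) triangle faces
```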

15 pages, 3271 KiB  
Article
A 2.5D Self-Training Strategy for Carotid Artery Segmentation in T1-Weighted Brain Magnetic Resonance Images
by Adriel Silva de Araújo, Márcio Sarroglia Pinho, Ana Maria Marques da Silva, Luis Felipe Fiorentini and Jefferson Becker
J. Imaging 2024, 10(7), 161; https://doi.org/10.3390/jimaging10070161 - 3 Jul 2024
Viewed by 454
Abstract
Precise annotations for large medical image datasets can be time-consuming. Additionally, when dealing with volumetric regions of interest, it is typical to apply segmentation techniques on 2D slices, compromising important information for accurately segmenting 3D structures. This study presents a deep learning pipeline that simultaneously tackles both challenges. Firstly, to streamline the annotation process, we employ a semi-automatic segmentation approach using bounding boxes as masks, which is less time-consuming than pixel-level delineation. Subsequently, recursive self-training is utilized to enhance annotation quality. Finally, a 2.5D segmentation technique is adopted, wherein a slice of a volumetric image is segmented using a pseudo-RGB image. The pipeline was applied to segment the carotid artery tree in T1-weighted brain magnetic resonance images. Utilizing 42 volumetric non-contrast T1-weighted brain scans from four datasets, we delineated bounding boxes around the carotid arteries in the axial slices. Pseudo-RGB images were generated from these slices, and recursive segmentation was conducted using a Res-Unet-based neural network architecture. The model’s performance was tested on a separate dataset, with ground truth annotations provided by a radiologist. After recursive training, we achieved an Intersection over Union (IoU) score of (0.68 ± 0.08) on the unseen dataset, demonstrating commendable qualitative results. Full article
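
The 2.5D construction can be illustrated by stacking a slice with its neighbors into a pseudo-RGB image; using the immediate neighbors is our assumption, since the abstract does not state the slice offsets.

```python
# Pseudo-RGB construction for 2.5D segmentation: a slice and its two axial
# neighbors stacked as three channels. Neighbor offsets are an assumption.
import numpy as np

def pseudo_rgb(volume: np.ndarray, i: int) -> np.ndarray:
    # volume: (n_slices, H, W); returns (H, W, 3) for slice i with its neighbors
    lo, hi = max(i - 1, 0), min(i + 1, volume.shape[0] - 1)
    return np.stack([volume[lo], volume[i], volume[hi]], axis=-1)

t1 = np.random.rand(160, 256, 256).astype(np.float32)  # toy T1-weighted volume
print(pseudo_rgb(t1, 80).shape)  # (256, 256, 3)
```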

15 pages, 1734 KiB  
Article
Hybrid Ensemble Deep Learning Model for Advancing Ischemic Brain Stroke Detection and Classification in Clinical Application
by Radwan Qasrawi, Ibrahem Qdaih, Omar Daraghmeh, Suliman Thwib, Stephanny Vicuna Polo, Siham Atari and Diala Abu Al-Halawa
J. Imaging 2024, 10(7), 160; https://doi.org/10.3390/jimaging10070160 - 2 Jul 2024
Viewed by 614
Abstract
Ischemic brain strokes are severe medical conditions that occur due to blockages in the brain’s blood flow, often caused by blood clots or artery blockages. Early detection is crucial for effective treatment. This study aims to improve the detection and classification of ischemic brain strokes in clinical settings by introducing a new approach that integrates the stroke precision enhancement, ensemble deep learning, and intelligent lesion detection and segmentation models. The proposed hybrid model was trained and tested using a dataset of 10,000 computed tomography scans. A 25-fold cross-validation technique was employed, while the model’s performance was evaluated using accuracy, precision, recall, and F1 score. The findings indicate significant improvements in accuracy for different stages of stroke images when enhanced using the SPEM model with contrast-limited adaptive histogram equalization set to 4. Specifically, accuracy showed significant improvement (from 0.876 to 0.933) for hyper-acute stroke images; from 0.881 to 0.948 for acute stroke images, from 0.927 to 0.974 for sub-acute stroke images, and from 0.928 to 0.982 for chronic stroke images. Thus, the study shows significant promise for the detection and classification of ischemic brain strokes. Further research is needed to validate its performance on larger datasets and enhance its integration into clinical settings. Full article
(This article belongs to the Section AI in Imaging)
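
The contrast step named above, contrast-limited adaptive histogram equalization with a clip limit of 4, maps directly onto OpenCV; the tile grid size below is an assumption.

```python
# CLAHE with clip limit 4, matching the enhancement setting reported above.
# The tile grid size is an assumption; the random array is a toy CT slice.
import cv2
import numpy as np

ct_slice = (np.random.rand(512, 512) * 255).astype(np.uint8)
clahe = cv2.createCLAHE(clipLimit=4.0, tileGridSize=(8, 8))
enhanced = clahe.apply(ct_slice)   # contrast-limited adaptive histogram equalization
print(enhanced.shape, enhanced.dtype)  # (512, 512) uint8
```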

18 pages, 5484 KiB  
Article
ELA-Net: An Efficient Lightweight Attention Network for Skin Lesion Segmentation
by Tianyu Nie, Yishi Zhao and Shihong Yao
Sensors 2024, 24(13), 4302; https://doi.org/10.3390/s24134302 - 2 Jul 2024
Viewed by 453
Abstract
In clinical settings limited by equipment, attaining lightweight skin lesion segmentation is pivotal, as it facilitates the integration of the model into diverse medical devices, thereby enhancing operational efficiency. However, a lightweight design may suffer accuracy degradation, especially when dealing with complex images such as skin lesion images with irregular regions, blurred boundaries, and oversized boundaries. To address these challenges, we propose an efficient lightweight attention network (ELANet) for the skin lesion segmentation task. In ELANet, the two different attention mechanisms of the bilateral residual module (BRM) provide complementary information, enhancing sensitivity to features in the spatial and channel dimensions, respectively; multiple BRMs are then stacked for efficient feature extraction from the input. In addition, the network acquires global information and improves segmentation accuracy by passing feature maps of different scales through multi-scale attention fusion (MAF) operations. Finally, we evaluate the performance of ELANet on three publicly available datasets, ISIC2016, ISIC2017, and ISIC2018. The experimental results show that our algorithm achieves mIoU scores of 89.87%, 81.85%, and 82.87% on the three datasets with a parameter count of only 0.459 M, an excellent balance between accuracy and model size that is superior to many existing segmentation methods. Full article
(This article belongs to the Special Issue Deep Learning Technology and Image Sensing: 2nd Edition)
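
The BRM itself is not detailed here; the CBAM-style sketch below shows the complementary channel-plus-spatial attention pairing the abstract describes, as an illustration rather than ELA-Net code.

```python
# CBAM-style pairing of channel attention and spatial attention, illustrating
# the complementary mechanisms described above. Not the authors' BRM.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, ch: int, r: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(ch, ch // r), nn.ReLU(),
                                 nn.Linear(ch // r, ch), nn.Sigmoid())
        self.spatial = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        w_c = self.mlp(x.mean((2, 3)))[:, :, None, None]   # per-channel weights
        x = x * w_c
        s = torch.cat([x.mean(1, keepdim=True),
                       x.max(1, keepdim=True).values], dim=1)
        return x * self.spatial(s)                          # per-pixel weights

x = torch.randn(2, 64, 56, 56)
print(ChannelSpatialAttention(64)(x).shape)  # torch.Size([2, 64, 56, 56])
```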

19 pages, 2826 KiB  
Article
Automated Left Ventricle Segmentation in Echocardiography Using YOLO: A Deep Learning Approach for Enhanced Cardiac Function Assessment
by Madankumar Balasubramani, Chih-Wei Sung, Mu-Yang Hsieh, Edward Pei-Chuan Huang, Jiann-Shing Shieh and Maysam F. Abbod
Electronics 2024, 13(13), 2587; https://doi.org/10.3390/electronics13132587 - 1 Jul 2024
Viewed by 489
Abstract
Accurate segmentation of the left ventricle (LV) in echocardiogram (Echo) images is essential for cardiovascular analysis. Conventional techniques are labor-intensive and exhibit inter-observer variability. Deep learning has emerged as a powerful tool for automated medical image segmentation, offering advantages in speed and potentially superior accuracy. This study explores the efficacy of a YOLO (You Only Look Once) segmentation model for automated LV segmentation in Echo images. YOLO, a cutting-edge object detection model, achieves an exceptional speed-accuracy balance through its well-designed architecture. It utilizes efficient dilated convolutional layers and bottleneck blocks for feature extraction while incorporating innovations like path aggregation and spatial attention mechanisms. These attributes make YOLO a compelling candidate for adaptation to LV segmentation in Echo images. We posit that by fine-tuning a pre-trained YOLO-based model on a well-annotated Echo image dataset, we can leverage the model's strengths in real-time processing and precise object localization to achieve robust LV segmentation. The proposed approach entails fine-tuning a pre-trained YOLO model on a rigorously labeled Echo image dataset. Model performance was evaluated using established metrics: mean Average Precision (mAP) at an Intersection over Union (IoU) threshold of 50% (mAP50), reaching 98.31%, and across a range of IoU thresholds from 50% to 95% (mAP50:95), reaching 75.27%. Successful implementation of YOLO for LV segmentation has the potential to significantly expedite and standardize Echo image analysis. This advancement could translate into improved clinical decision-making and enhanced patient care. Full article
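
A fine-tuning workflow of the kind described might look like the following with the Ultralytics package; the paper does not state which implementation was used, and the dataset YAML name is hypothetical.

```python
# Hedged sketch of fine-tuning a pre-trained YOLO segmentation model with the
# Ultralytics package (an assumption -- the paper's toolchain is not stated).
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")      # pre-trained segmentation weights
model.train(data="echo_lv.yaml",    # hypothetical LV-annotated Echo dataset config
            epochs=100, imgsz=640)
metrics = model.val()               # reports mAP50 and mAP50-95, among others
```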

14 pages, 2268 KiB  
Article
A Retinal Vessel Segmentation Method Based on the Sharpness-Aware Minimization Model
by Iqra Mariam, Xiaorong Xue and Kaleb Gadson
Sensors 2024, 24(13), 4267; https://doi.org/10.3390/s24134267 - 30 Jun 2024
Viewed by 532
Abstract
Retinal vessel segmentation is crucial for diagnosing and monitoring various eye diseases such as diabetic retinopathy, glaucoma, and hypertension. In this study, we examine how sharpness-aware minimization (SAM) can improve RF-UNet’s generalization performance. RF-UNet is a novel model for retinal vessel segmentation. We focused our experiments on the digital retinal images for vessel extraction (DRIVE) dataset, which is a benchmark for retinal vessel segmentation, and our test results show that adding SAM to the training procedure leads to notable improvements. Compared to the non-SAM model (training loss of 0.45709 and validation loss of 0.40266), the SAM-trained RF-UNet model achieved a significant reduction in both training loss (0.094225) and validation loss (0.08053). Furthermore, compared to the non-SAM model (training accuracy of 0.90169 and validation accuracy of 0.93999), the SAM-trained model demonstrated higher training accuracy (0.96225) and validation accuracy (0.96821). Additionally, the model performed better in terms of sensitivity, specificity, AUC, and F1 score, indicating improved generalization to unseen data. Our results corroborate the notion that SAM facilitates the learning of flatter minima, thereby improving generalization, and are consistent with other research highlighting the advantages of advanced optimization methods. With wider implications for other medical imaging tasks, these results imply that SAM can successfully reduce overfitting and enhance the robustness of retinal vessel segmentation models. Prospective research avenues encompass verifying the model on vaster and more diverse datasets and investigating its practical implementation in real-world clinical situations. Full article
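
The SAM procedure the authors add to training is a two-pass update: ascend by rho along the normalized gradient, take the gradient at the perturbed weights, then apply it at the restored weights. A minimal single-parameter sketch, with rho as an assumed hyperparameter:

```python
# Minimal sharpness-aware minimization (SAM) step: perturb weights toward the
# sharpest nearby point, then descend using the gradient taken there.
import torch

def sam_step(params, loss_fn, opt, rho=0.05):
    loss_fn().backward()                       # pass 1: gradient at current weights
    grads = [p.grad for p in params if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    eps = []
    with torch.no_grad():
        for p in params:
            if p.grad is None:
                eps.append(None); continue
            e = rho * p.grad / (grad_norm + 1e-12)  # ascent direction
            p.add_(e); eps.append(e)
    opt.zero_grad()
    loss_fn().backward()                       # pass 2: gradient at perturbed weights
    with torch.no_grad():
        for p, e in zip(params, eps):
            if e is not None:
                p.sub_(e)                      # restore the original weights
    opt.step()                                 # descend with the perturbed gradient
    opt.zero_grad()

w = torch.nn.Parameter(torch.tensor([2.0]))
opt = torch.optim.SGD([w], lr=0.1)
sam_step([w], lambda: (w ** 2).sum(), opt)
print(w)  # updated toward the minimum of w^2
```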

11 pages, 2094 KiB  
Article
Synthetic Genitourinary Image Synthesis via Generative Adversarial Networks: Enhancing Artificial Intelligence Diagnostic Precision
by Derek J. Van Booven, Cheng-Bang Chen, Sheetal Malpani, Yasamin Mirzabeigi, Maral Mohammadi, Yujie Wang, Oleksander N. Kryvenko, Sanoj Punnen and Himanshu Arora
J. Pers. Med. 2024, 14(7), 703; https://doi.org/10.3390/jpm14070703 - 30 Jun 2024
Viewed by 504
Abstract
Introduction: In the realm of computational pathology, the scarcity and restricted diversity of genitourinary (GU) tissue datasets pose significant challenges for training robust diagnostic models. This study explores the potential of Generative Adversarial Networks (GANs) to mitigate these limitations by generating high-quality synthetic images of rare or underrepresented GU tissues. We hypothesized that augmenting the training data of computational pathology models with these GAN-generated images, validated through pathologist evaluation and quantitative similarity measures, would significantly enhance model performance in tasks such as tissue classification, segmentation, and disease detection. Methods: To test this hypothesis, we employed a GAN model to produce synthetic images of eight different GU tissues. The quality of these images was rigorously assessed using a Relative Inception Score (RIS) of 1.27 ± 0.15 and a Fréchet Inception Distance (FID) that stabilized at 120, metrics that reflect the visual and statistical fidelity of the generated images to real histopathological images. Additionally, the synthetic images received an 80% approval rating from board-certified pathologists, further validating their realism and diagnostic utility. We used an alternative Spatial Heterogeneous Recurrence Quantification Analysis (SHRQA) to assess the quality of prostate tissue. This allowed us to make a comparison between original and synthetic data in the context of features, which were further validated by the pathologist’s evaluation. Future work will focus on implementing a deep learning model to evaluate the performance of the augmented datasets in tasks such as tissue classification, segmentation, and disease detection. This will provide a more comprehensive understanding of the utility of GAN-generated synthetic images in enhancing computational pathology workflows. Results: This study not only confirms the feasibility of using GANs for data augmentation in medical image analysis but also highlights the critical role of synthetic data in addressing the challenges of dataset scarcity and imbalance. Conclusions: Future work will focus on refining the generative models to produce even more diverse and complex tissue representations, potentially transforming the landscape of medical diagnostics with AI-driven solutions. Full article
(This article belongs to the Special Issue State-of-the-Art Research on the Imaging in Personalized Medicine)
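
The Fréchet Inception Distance the authors track can be reproduced with the torchmetrics implementation; the random uint8 batches below are placeholders for real and GAN-generated tissue tiles, and real use needs far more images for a stable estimate.

```python
# FID computation with torchmetrics. The random batches are placeholders for
# real and GAN-generated histopathology tiles; many images are needed in practice.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)
real = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)
fake = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)
fid.update(real, real=True)
fid.update(fake, real=False)
print(float(fid.compute()))   # lower is better; the study reports ~120
```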