Search Results (4,707)

Search Parameters:
Keywords = supervised learning

22 pages, 12904 KiB  
Article
Intelligent Classification and Segmentation of Sandstone Thin Section Image Using a Semi-Supervised Framework and GL-SLIC
by Yubo Han and Ye Liu
Minerals 2024, 14(8), 799; https://doi.org/10.3390/min14080799 - 5 Aug 2024
Abstract
This study presents the development and validation of a robust semi-supervised learning framework specifically designed for the automated segmentation and classification of sandstone thin section images from the Yanchang Formation in the Ordos Basin. Traditional geological image analysis methods encounter significant challenges due to the labor-intensive and error-prone nature of manual labeling, compounded by the diversity and complexity of rock thin sections. Our approach addresses these challenges by integrating the GL-SLIC algorithm, which combines Gabor filters and Local Binary Patterns for effective superpixel segmentation, laying the groundwork for advanced component identification. The primary innovation of this research is the semi-supervised learning model that utilizes a limited set of manually labeled samples to generate high-confidence pseudo labels, thereby significantly expanding the training dataset. This methodology effectively tackles the critical challenge of insufficient labeled data in geological image analysis, enhancing the model’s generalization capability from minimal initial input. Our framework improves segmentation accuracy by closely aligning superpixels with the intricate boundaries of mineral grains and pores. Additionally, it achieves substantial improvements in classification accuracy across various rock types, reaching up to 96.3% in testing scenarios. This semi-supervised approach represents a significant advancement in computational geology, providing a scalable and efficient solution for detailed petrographic analysis. It not only enhances the accuracy and efficiency of geological interpretations but also supports broader hydrocarbon exploration efforts. Full article
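The core semi-supervised step this abstract describes — training on a few labeled samples, then promoting high-confidence predictions on unlabeled data into the training set — can be sketched as follows. The nearest-centroid classifier and margin-based confidence score are illustrative stand-ins, not the authors' model:

```python
import math

def centroid_fit(X, y):
    # Compute one centroid per class from the labeled samples.
    groups = {}
    for xi, yi in zip(X, y):
        groups.setdefault(yi, []).append(xi)
    return {c: tuple(sum(v) / len(pts) for v in zip(*pts)) for c, pts in groups.items()}

def predict_with_confidence(cents, x):
    # Confidence = relative margin between the nearest and second-nearest centroid.
    d = sorted((math.dist(c, x), lab) for lab, c in cents.items())
    (d1, lab), d2 = d[0], d[1][0]
    return lab, (d2 - d1) / (d2 + 1e-9)

def pseudo_label_round(X_lab, y_lab, X_unlab, threshold=0.5):
    # One self-training round: pseudo-label the unlabeled points the model is
    # confident about and fold them into the labeled set.
    cents = centroid_fit(X_lab, y_lab)
    new_X, new_y, rest = list(X_lab), list(y_lab), []
    for x in X_unlab:
        lab, conf = predict_with_confidence(cents, x)
        if conf >= threshold:
            new_X.append(x)
            new_y.append(lab)
        else:
            rest.append(x)
    return new_X, new_y, rest
```

A point near a class centroid is absorbed with its pseudo-label; an ambiguous point (equidistant from both centroids) stays in the unlabeled pool for a later round.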

20 pages, 4246 KiB  
Article
Bidirectional Efficient Attention Parallel Network for Segmentation of 3D Medical Imaging
by Dongsheng Wang, Tiezhen Xv, Jiehui Liu, Jianshen Li, Lijie Yang and Jinxi Guo
Electronics 2024, 13(15), 3086; https://doi.org/10.3390/electronics13153086 - 4 Aug 2024
Abstract
Currently, although semi-supervised image segmentation has achieved significant success in many aspects, further improvement in segmentation accuracy is necessary for practical applications. Additionally, there are fewer networks specifically designed for segmenting 3D images compared to those for 2D images, and their performance is notably inferior. To enhance the efficiency of network training, various attention mechanisms have been integrated into network models. However, these networks have not effectively extracted all the useful spatial or channel information. Particularly for 3D medical images, which contain rich spatial and channel information with tightly interconnected relationships between them, there remains a wealth of spatial and channel-specific information waiting to be explored and utilized. This paper proposes a bidirectional and efficient attention parallel network (BEAP-Net). Specifically, we introduce two modules: Supreme Channel Attention (SCA) and Parallel Spatial Attention (PSA). These modules aim to extract more spatial and channel-specific feature information and effectively utilize it. We combine the principles of consistency training and entropy regularization to enable mutual learning among sub-models. We evaluate the proposed BEAP-Net on two public 3D medical datasets, LA and Pancreas. The network outperforms the current state of the art in eight algorithms and is better suited for 3D medical images. It achieves the new best semi-supervised segmentation performance on the LA database. Ablation studies further validate the effectiveness of each component of the proposed model. Moreover, the SCA and PSA modules proposed can be seamlessly integrated into other 3D medical image segmentation networks to yield significant performance gains. Full article
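The entropy-regularization side of the training objective mentioned here can be sketched in a few lines: it penalizes high-entropy (uncertain) output distributions on unlabeled data, pushing the model toward confident predictions. The probabilities and weight below are illustrative, not values from the paper:

```python
import math

def entropy(probs):
    # Shannon entropy of one predicted distribution (natural log).
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_regularizer(batch_probs, weight=0.1):
    # Mean entropy over a batch of predicted distributions, scaled by a
    # loss weight; added to the supervised loss during training.
    return weight * sum(entropy(p) for p in batch_probs) / len(batch_probs)
```

A perfectly confident prediction contributes zero; a uniform one contributes the maximum, so minimizing this term sharpens predictions on unlabeled voxels.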
(This article belongs to the Section Computer Science & Engineering)

19 pages, 6697 KiB  
Article
SSL-LRN: A Lightweight Semi-Supervised-Learning-Based Approach for UWA Modulation Recognition
by Chaojin Ding, Wei Su, Zehong Xu, Daqing Gao and En Cheng
J. Mar. Sci. Eng. 2024, 12(8), 1317; https://doi.org/10.3390/jmse12081317 - 4 Aug 2024
Abstract
Due to the lack of sufficient valid labeled data and severe channel fading, the recognition of various underwater acoustic (UWA) communication modulation types still faces significant challenges. In this paper, we propose a lightweight UWA communication type recognition network based on semi-supervised learning, named the SSL-LRN. In the SSL-LRN, a mean teacher–student mechanism is developed to improve learning performance by averaging the weights of multiple models, thereby improving recognition accuracy for insufficiently labeled data. The SSL-LRN employs techniques such as quantization and small convolutional kernels to reduce floating-point operations (FLOPs), enabling its deployment on underwater mobile nodes. To mitigate the performance loss caused by quantization, the SSL-LRN adopts a channel expansion module to optimize the neuron distribution. It also employs an attention mechanism to enhance the recognition robustness for frequency-selective-fading channels. Pool and lake experiments demonstrate that the framework effectively recognizes most modulation types, achieving a more than 5% increase in recognition accuracy at a 0 dB signal-to-noise ratio (SNRs) while reducing FLOPs by 84.9% compared with baseline algorithms. Even with only 10% labeled data, the performance of the SSL-LRN approaches that of the fully supervised LRN algorithm. Full article
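The mean teacher–student mechanism referred to above maintains the teacher as an exponential moving average (EMA) of the student's weights, which averages over many noisy optimization steps. A minimal sketch, with weights as plain lists and an illustrative `alpha`:

```python
def ema_update(teacher, student, alpha=0.99):
    # Mean-teacher update: each teacher weight is an exponential moving
    # average of the corresponding student weight across training steps.
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher, student)]

teacher = [0.0, 0.0]
for step in range(3):
    student = [1.0, 2.0]  # stand-in for student weights after one SGD step
    teacher = ema_update(teacher, student, alpha=0.5)
```

With a constant student, the teacher converges geometrically toward it; in practice `alpha` is close to 1 so the teacher changes slowly and supplies stable targets.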
(This article belongs to the Section Ocean Engineering)

25 pages, 4246 KiB  
Article
A Self-Training-Based System for Die Defect Classification
by Ping-Hung Wu, Siou-Zih Lin, Yuan-Teng Chang, Yu-Wei Lai and Ssu-Han Chen
Mathematics 2024, 12(15), 2415; https://doi.org/10.3390/math12152415 - 2 Aug 2024
Abstract
With increasing wafer sizes and diversifying die patterns, automated optical inspection (AOI) is progressively replacing traditional visual inspection (VI) for wafer defect detection. Yet, the defect classification efficacy of current AOI systems in our case company is not optimal. This limitation is due to the algorithms’ reliance on expertly designed features, reducing adaptability across various product models. Additionally, the limited time available for operators to annotate defect samples restricts learning potential. Our study introduces a novel hybrid self-training algorithm, leveraging semi-supervised learning that integrates pseudo-labeling, noisy student, curriculum labeling, and the Taguchi method. This approach enables classifiers to autonomously integrate information from unlabeled data, bypassing the need for feature extraction, even with scarcely labeled data. Our experiments on a small-scale set show that with 25% and 50% labeled data, the method achieves over 92% accuracy. Remarkably, with only 10% labeled data, our hybrid method surpasses the supervised DenseNet classifier by over 20%, achieving more than 82% accuracy. On a large-scale set, the hybrid method consistently outperforms other approaches, achieving up to 88.75%, 86.31%, and 83.61% accuracy with 50%, 25%, and 10% labeled data. Further experiments confirm our method’s consistent superiority, highlighting its potential for high classification accuracy in limited-data scenarios. Full article
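One ingredient of the hybrid described here, curriculum labeling, admits pseudo-labels gradually: each round accepts a larger fraction of the unlabeled pool, ranked by model confidence. A sketch under that assumption (the linear schedule and ranking rule are illustrative, not the authors'):

```python
def curriculum_select(scores, round_idx, total_rounds):
    # Curriculum labeling schedule: in round r of R, accept the top
    # (r + 1) / R fraction of unlabeled samples by confidence score,
    # so easy (confident) samples enter training before hard ones.
    frac = (round_idx + 1) / total_rounds
    k = max(1, int(len(scores) * frac))
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:k])
```

Early rounds admit only the most confidently pseudo-labeled dies; the final round admits the whole pool.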
(This article belongs to the Section Mathematics and Computer Science)

17 pages, 786 KiB  
Article
A Parallel Approach to Enhance the Performance of Supervised Machine Learning Realized in a Multicore Environment
by Ashutosh Ghimire and Fathi Amsaad
Mach. Learn. Knowl. Extr. 2024, 6(3), 1840-1856; https://doi.org/10.3390/make6030090 - 2 Aug 2024
Abstract
Machine learning models play a critical role in applications such as image recognition, natural language processing, and medical diagnosis, where accuracy and efficiency are paramount. As datasets grow in complexity, so too do the computational demands of classification techniques. Previous research has achieved high accuracy but required significant computational time. This paper proposes a parallel architecture for Ensemble Machine Learning Models, harnessing multicore CPUs to expedite performance. The primary objective is to enhance machine learning efficiency without compromising accuracy through parallel computing. This study focuses on benchmark ensemble models including Random Forest, XGBoost, ADABoost, and K Nearest Neighbors. These models are applied to tasks such as wine quality classification and fraud detection in credit card transactions. The results demonstrate that, compared to single-core processing, machine learning tasks run 1.7 times and 3.8 times faster for small and large datasets on quad-core CPUs, respectively. Full article
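The speedups reported above rest on the fact that ensemble members train independently, so they can be fitted concurrently. A minimal sketch using a thread pool as a portable stand-in for multicore processes; `train_stub` is a placeholder, not a real learner:

```python
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

def train_stub(seed, data):
    # Placeholder for fitting one ensemble member; a real learner would use
    # `seed` for bootstrap sampling. This "model" predicts the majority class.
    majority = Counter(y for _, y in data).most_common(1)[0][0]
    return lambda x: majority

def fit_ensemble_parallel(data, n_models=4, n_workers=4):
    # Members have no shared state during fitting, so training parallelizes
    # trivially across workers (one core per member, up to n_workers).
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(lambda s: train_stub(s, data), range(n_models)))

def predict_vote(models, x):
    # Majority vote over the ensemble's predictions.
    return Counter(m(x) for m in models).most_common(1)[0][0]
```

With CPU-bound learners, a process pool (or a library's own `n_jobs`-style option) would replace the thread pool to sidestep the GIL; the structure is the same.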
(This article belongs to the Section Learning)

31 pages, 4601 KiB  
Article
Evaluating the Role of Data Enrichment Approaches towards Rare Event Analysis in Manufacturing
by Chathurangi Shyalika, Ruwan Wickramarachchi, Fadi El Kalach, Ramy Harik and Amit Sheth
Sensors 2024, 24(15), 5009; https://doi.org/10.3390/s24155009 - 2 Aug 2024
Abstract
Rare events are occurrences that take place with a significantly lower frequency than more common, regular events. These events can be categorized into distinct categories, from frequently rare to extremely rare, based on factors like the distribution of data and significant differences in rarity levels. In manufacturing domains, predicting such events is particularly important, as they lead to unplanned downtime, a shortening of equipment lifespans, and high energy consumption. Usually, the rarity of events is inversely correlated with the maturity of a manufacturing industry. Typically, the rarity of events affects the multivariate data generated within a manufacturing process to be highly imbalanced, which leads to bias in predictive models. This paper evaluates the role of data enrichment techniques combined with supervised machine learning techniques for rare event detection and prediction. We use time series data augmentation and sampling to address the data scarcity, maintaining its patterns, and imputation techniques to handle null values. Evaluating 15 learning models, we find that data enrichment improves the F1 measure by up to 48% in rare event detection and prediction. Our empirical and ablation experiments provide novel insights, and we also investigate model interpretability. Full article
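One common form of the pattern-preserving time-series augmentation described above is jittering: replicating each rare-event window with small Gaussian noise so its temporal pattern survives while the class balance improves. All names and parameters below are illustrative:

```python
import random

def jitter(series, sigma=0.05, seed=0):
    # Jittering: add small Gaussian noise per time step, yielding a new
    # plausible series that keeps the original shape.
    rng = random.Random(seed)
    return [v + rng.gauss(0.0, sigma) for v in series]

def oversample_rare(windows, labels, rare_label, factor=3, sigma=0.05):
    # Replicate each rare-event window `factor` times with fresh jitter so
    # the classifier sees a less imbalanced training set.
    out_w, out_y = list(windows), list(labels)
    for i, (w, y) in enumerate(zip(windows, labels)):
        if y == rare_label:
            for k in range(factor):
                out_w.append(jitter(w, sigma, seed=i * 1000 + k))
                out_y.append(rare_label)
    return out_w, out_y
```

Only the rare class is replicated, which directly attacks the imbalance that biases predictive models.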

17 pages, 1421 KiB  
Technical Note
Angle Estimation Using Learning-Based Doppler Deconvolution in Beamspace with Forward-Looking Radar
by Wenjie Li, Xinhao Xu, Yihao Xu, Yuchen Luan, Haibo Tang, Longyong Chen, Fubo Zhang, Jie Liu and Junming Yu
Remote Sens. 2024, 16(15), 2840; https://doi.org/10.3390/rs16152840 - 2 Aug 2024
Abstract
The measurement of the target azimuth angle using forward-looking radar (FLR) is widely applied in unmanned systems, such as obstacle avoidance and tracking applications. This paper proposes a semi-supervised support vector regression (SVR) method to solve the problem of small sample learning of the target angle with FLR. This method utilizes function approximation to solve the problem of estimating the target angle. First, SVR is used to construct the function mapping relationship between the echo and the target angle in beamspace. Next, by adding manifold constraints to the loss function, supervised learning is extended to semi-supervised learning, aiming to improve the small sample adaptation ability. This framework supports updating the angle estimating function with continuously increasing unlabeled samples during the FLR scanning process. The numerical simulation results show that the new technology has better performance than model-based methods and fully supervised methods, especially under limited conditions such as signal-to-noise ratio and number of training samples. Full article

14 pages, 1193 KiB  
Article
Use of Machine Learning to Improve Additive Manufacturing Processes
by Izabela Rojek, Jakub Kopowski, Jakub Lewandowski and Dariusz Mikołajewski
Appl. Sci. 2024, 14(15), 6730; https://doi.org/10.3390/app14156730 - 1 Aug 2024
Abstract
Rapidly developing artificial intelligence (AI) can help machines and devices to perceive, analyze, and even make inferences in a similar way to human reasoning. The aim of this article is to present applications of AI methods, including machine learning (ML), in the design and supervision of processes used in the field of additive manufacturing techniques. This approach will allow specific tasks to be solved as if they were performed by a human expert in the field. The application of AI in the development of additive manufacturing technologies makes it possible to be assisted by the knowledge of experienced operators in the design and supervision of processes acquired automatically. This reduces the risk of human error and simplifies and automates the production of products and parts. AI in 3D technology creates a wide range of possibilities for generating 3D objects and enables a machine equipped with a vision system, used in ML processes, to analyze data similar to human thought processes. Incremental printing using such a printer allows the production of objects of ever-increasing quality from several materials simultaneously. The process itself is also precise and fast. An accuracy of 97.56% means that the model is precise and makes very few errors. The 3D printing system with artificial intelligence allows the device to adapt to, for example, different material properties, as the printer examines the 3D-printed surface and automatically adjusts the printing. AI/ML-based solutions similar to ours, once learning sets are modified or extended, are easily adaptable to other technologies, materials, or multi-material 3D printing. They also allow the creation of dedicated, ML solutions that adapt to the specifics of a production line, including as self-learning solutions as production progresses. Full article
(This article belongs to the Section Additive Manufacturing Technologies)

13 pages, 1061 KiB  
Article
Swin-Fake: A Consistency Learning Transformer-Based Deepfake Video Detector
by Liang Yu Gong, Xue Jun Li and Peter Han Joo Chong
Electronics 2024, 13(15), 3045; https://doi.org/10.3390/electronics13153045 - 1 Aug 2024
Abstract
Deepfake has become an emerging technology affecting cyber-security with its illegal applications in recent years. Most deepfake detectors utilize CNN-based models such as the Xception Network to distinguish real or fake media; however, their performance on cross-datasets is not ideal because they suffer from over-fitting in the current stage. Therefore, this paper proposed a spatial consistency learning method to relieve this issue in three aspects. Firstly, we increased the selections of data augmentation methods to 5, which is more than our previous study’s data augmentation methods. Specifically, we captured several equal video frames of one video and randomly selected five different data augmentations to obtain different data views to enrich the input variety. Secondly, we chose Swin Transformer as the feature extractor instead of a CNN-based backbone, which means that our approach did not utilize it for downstream tasks, and could encode these data using an end-to-end Swin Transformer, aiming to learn the correlation between different image patches. Finally, this was combined with consistency learning in our study, and consistency learning was able to determine more data relationships than supervised classification. We explored the consistency of video frames’ features by calculating their cosine distance and applied traditional cross-entropy loss to regulate this classification loss. Extensive in-dataset and cross-dataset experiments demonstrated that Swin-Fake could produce relatively good results on some open-source deepfake datasets, including FaceForensics++, DFDC, Celeb-DF and FaceShifter. By comparing our model with several benchmark models, our approach shows relatively strong robustness in detecting deepfake media. Full article
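The frame-consistency signal described here — cosine distance between features of frames drawn from the same video — can be sketched as follows. The feature vectors are stand-ins for Swin Transformer embeddings, and averaging over consecutive pairs is one simple choice of pairing:

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def consistency_loss(frame_feats):
    # Penalize frames of one video whose features drift apart: mean
    # (1 - cosine similarity) over consecutive frame pairs. Real faces
    # should yield consistent features; manipulated ones often do not.
    dists = [1.0 - cosine(u, v) for u, v in zip(frame_feats, frame_feats[1:])]
    return sum(dists) / len(dists)
```

In the paper's setup this term is combined with a standard cross-entropy classification loss.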
(This article belongs to the Special Issue Neural Networks and Deep Learning in Computer Vision)

25 pages, 1652 KiB  
Article
Toward Safer Roads: Predicting the Severity of Traffic Accidents in Montreal Using Machine Learning
by Bappa Muktar and Vincent Fono
Electronics 2024, 13(15), 3036; https://doi.org/10.3390/electronics13153036 - 1 Aug 2024
Abstract
Traffic accidents are among the most common causes of death worldwide. According to statistics from the World Health Organization (WHO), 50 million people are involved in traffic accidents every year. Canada, particularly Montreal, is not immune to this problem. Data from the Société de l’Assurance Automobile du Québec (SAAQ) show that there were 392 deaths on Québec roads in 2022, 38 of them related to the city of Montreal. This value represents an increase of 29.3% for the city of Montreal compared with the average for the years 2017 to 2021. In this context, it is important to take concrete measures to improve traffic safety in the city of Montreal. In this article, we present a web-based solution based on machine learning that predicts the severity of traffic accidents in Montreal. This solution uses a dataset of traffic accidents that occurred in Montreal between 2012 and 2021. By predicting the severity of accidents, our approach aims to identify key factors that influence whether an accident is serious or not. Understanding these factors can help authorities implement targeted interventions to prevent severe accidents and allocate resources more effectively during emergency responses. Classification algorithms such as eXtreme Gradient Boosting (XGBoost), Categorical Boosting (CatBoost), Random Forest (RF), and Gradient Boosting (GB) were used to develop the prediction model. Performance metrics such as precision, recall, F1 score, and accuracy were used to evaluate the prediction model. The performance analysis shows an excellent accuracy of 96% for the prediction model based on the XGBoost classifier. The other models (CatBoost, RF, GB) achieved 95%, 93%, and 89% accuracy, respectively. The prediction model based on the XGBoost classifier was deployed using a client–server web application managed by Swagger-UI, Angular, and the Flask Python framework. 
This study makes significant contributions to the field by employing an ensemble of supervised machine learning algorithms, achieving a high prediction accuracy, and developing a real-time prediction web application. This application enables quicker and more effective responses from emergency services, potentially reducing the impact of severe accidents and improving overall traffic safety. Full article
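The evaluation metrics named above (precision, recall, F1 score, accuracy) all reduce to counts of true/false positives and negatives in the binary severe/not-severe case. A self-contained sketch of how they are computed:

```python
def binary_metrics(y_true, y_pred):
    # Precision, recall, F1 and accuracy for a binary "severe accident" label
    # (1 = severe, 0 = not severe).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / len(y_true)
    return {"precision": precision, "recall": recall, "f1": f1, "accuracy": accuracy}
```

Because severe accidents are the minority class, precision and recall on the positive class are more informative than raw accuracy alone.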
(This article belongs to the Special Issue Advances in Artificial Intelligence Engineering)

29 pages, 9748 KiB  
Article
Hybrid Machine Learning for Automated Road Safety Inspection of Auckland Harbour Bridge
by Munish Rathee, Boris Bačić and Maryam Doborjeh
Electronics 2024, 13(15), 3030; https://doi.org/10.3390/electronics13153030 - 1 Aug 2024
Abstract
The Auckland Harbour Bridge (AHB) utilises a movable concrete barrier (MCB) to regulate the uneven bidirectional flow of daily traffic. In addition to the risk of human error during regular visual inspections, staff members inspecting the MCB work in diverse weather and light conditions, exerting themselves in ergonomically unhealthy inspection postures with the added weight of protection gear to mitigate risks, e.g., flying debris. To augment visual inspections of an MCB using computer vision technology, this study introduces a hybrid deep learning solution that combines kernel manipulation with custom transfer learning strategies. The video data recordings were captured in diverse light and weather conditions (under the safety supervision of industry experts) involving a high-speed (120 fps) camera system attached to an MCB transfer vehicle. Before identifying a safety hazard, e.g., the unsafe position of a pin connecting two 750 kg concrete segments of the MCB, a multi-stage preprocessing of the spatiotemporal region of interest (ROI) involves a rolling window before identifying the video frames containing diagnostic information. This study utilises the ResNet-50 architecture, enhanced with 3D convolutions, within the STENet framework to capture and analyse spatiotemporal data, facilitating real-time surveillance of the Auckland Harbour Bridge (AHB). Considering the sparse nature of safety anomalies, the initial peer-reviewed binary classification results (82.6%) for safe and unsafe (intervention-required) scenarios were improved to 93.6% by incorporating synthetic data, expert feedback, and retraining the model. This adaptation allowed for the optimised detection of false positives and false negatives. In the future, we aim to extend anomaly detection methods to various infrastructure inspections, enhancing urban resilience, transport efficiency and safety. Full article
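The rolling-window step in the preprocessing pipeline can be sketched as follows: score each video frame, average scores over a sliding window, and keep the windows that clear a threshold as candidates containing diagnostic information. The per-frame scores and threshold are illustrative, not the paper's values:

```python
def rolling_mean(xs, k):
    # Mean of each length-k sliding window over the per-frame scores.
    return [sum(xs[i:i + k]) / k for i in range(len(xs) - k + 1)]

def diagnostic_windows(scores, k=3, threshold=0.5):
    # Indices of windows whose average score (e.g. a detector's confidence
    # that the barrier pin region is visible) clears the threshold.
    return [i for i, m in enumerate(rolling_mean(scores, k)) if m >= threshold]
```

At 120 fps, this kind of filtering keeps only the short spans worth passing to the heavier 3D-convolutional classifier.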
(This article belongs to the Special Issue Image Processing Based on Convolution Neural Network)

15 pages, 1214 KiB  
Article
A Self-Supervised Few-Shot Semantic Segmentation Method Based on Multi-Task Learning and Dense Attention Computation
by Kai Yi, Weihang Wang and Yi Zhang
Sensors 2024, 24(15), 4975; https://doi.org/10.3390/s24154975 - 31 Jul 2024
Abstract
Nowadays, autonomous driving technology has become widely prevalent. The intelligent vehicles have been equipped with various sensors (e.g., vision sensors, LiDAR, depth cameras etc.). Among them, the vision systems with tailored semantic segmentation and perception algorithms play critical roles in scene understanding. However, the traditional supervised semantic segmentation needs a large number of pixel-level manual annotations to complete model training. Although few-shot methods reduce the annotation work to some extent, they are still labor intensive. In this paper, a self-supervised few-shot semantic segmentation method based on Multi-task Learning and Dense Attention Computation (dubbed MLDAC) is proposed. The salient part of an image is split into two parts; one of them serves as the support mask for few-shot segmentation, while cross-entropy losses are calculated between the other part and the entire region with the predicted results separately as multi-task learning so as to improve the model’s generalization ability. Swin Transformer is used as our backbone to extract feature maps at different scales. These feature maps are then input to multiple levels of dense attention computation blocks to enhance pixel-level correspondence. The final prediction results are obtained through inter-scale mixing and feature skip connection. The experimental results indicate that MLDAC obtains 55.1% and 26.8% one-shot mIoU self-supervised few-shot segmentation on the PASCAL-5i and COCO-20i datasets, respectively. In addition, it achieves 78.1% on the FSS-1000 few-shot dataset, proving its efficacy. Full article
(This article belongs to the Section Sensing and Imaging)

22 pages, 2752 KiB  
Article
A Noisy Sample Selection Framework Based on a Mixup Loss and Recalibration Strategy
by Qian Zhang, De Yu, Xinru Zhou, Hanmeng Gong, Zheng Li, Yiming Liu and Ruirui Shao
Mathematics 2024, 12(15), 2389; https://doi.org/10.3390/math12152389 - 31 Jul 2024
Abstract
Deep neural networks (DNNs) have achieved breakthrough progress in various fields, largely owing to the support of large-scale datasets with manually annotated labels. However, obtaining such datasets is costly and time-consuming, making high-quality annotation a challenging task. In this work, we propose an improved noisy sample selection method, termed “sample selection framework”, based on a mixup loss and recalibration strategy (SMR). This framework enhances the robustness and generalization abilities of models. First, we introduce a robust mixup loss function to pre-train two models with identical structures separately. This approach avoids additional hyperparameter adjustments and reduces the need for prior knowledge of noise types. Additionally, we use a Gaussian Mixture Model (GMM) to divide the entire training set into labeled and unlabeled subsets, followed by robust training using semi-supervised learning (SSL) techniques. Furthermore, we propose a recalibration strategy based on cross-entropy (CE) loss to prevent the models from converging to local optima during the SSL process, thus further improving performance. Ablation experiments on CIFAR-10 with 50% symmetric noise and 40% asymmetric noise demonstrate that the two modules introduced in this paper improve the accuracy of the baseline (i.e., DivideMix) by 1.5% and 0.5%, respectively. Moreover, the experimental results on multiple benchmark datasets demonstrate that our proposed method effectively mitigates the impact of noisy labels and significantly enhances the performance of DNNs on noisy datasets. For instance, on the WebVision dataset, our method improves the top-1 accuracy by 0.7% and 2.4% compared to the baseline method. Full article
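The GMM-based split described above (a DivideMix-style step) clusters per-sample losses into a small-loss group, treated as clean and kept labeled, and a large-loss group, treated as noisy and moved to the unlabeled set for SSL. As a dependency-free stand-in for the two-component GMM, this sketch uses 1-D two-means on the losses:

```python
def split_by_loss(losses, iters=20):
    # Partition samples by training loss: the small-loss cluster is
    # "probably clean", the large-loss cluster "probably noisy".
    # (The paper fits a two-component GMM; 1-D two-means is a simple proxy.)
    lo, hi = min(losses), max(losses)
    for _ in range(iters):
        mid = (lo + hi) / 2
        small = [l for l in losses if l <= mid]
        large = [l for l in losses if l > mid]
        if not small or not large:
            break
        lo = sum(small) / len(small)   # centroid of the small-loss cluster
        hi = sum(large) / len(large)   # centroid of the large-loss cluster
    mid = (lo + hi) / 2
    clean = [i for i, l in enumerate(losses) if l <= mid]
    noisy = [i for i, l in enumerate(losses) if l > mid]
    return clean, noisy
```

The clean indices keep their labels; the noisy indices feed the semi-supervised branch as unlabeled data.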
(This article belongs to the Special Issue Machine Learning Methods and Mathematical Modeling with Applications)

15 pages, 5293 KiB  
Article
LiverColor: An Artificial Intelligence Platform for Liver Graft Assessment
by Gemma Piella, Nicolau Farré, Daniel Esono, Miguel Ángel Cordobés, Javier Vázquez-Corral, Itxarone Bilbao and Concepción Gómez-Gavara
Diagnostics 2024, 14(15), 1654; https://doi.org/10.3390/diagnostics14151654 - 31 Jul 2024
Abstract
Hepatic steatosis, characterized by excess fat in the liver, is the main reason for discarding livers intended for transplantation due to its association with increased postoperative complications. The current gold standard for evaluating hepatic steatosis is liver biopsy, which, despite its accuracy, is invasive, costly, slow, and not always feasible during liver procurement. Consequently, surgeons often rely on subjective visual assessments based on the liver’s colour and texture, which are prone to errors and heavily dependent on the surgeon’s experience. The aim of this study was to develop and validate a simple, rapid, and accurate method for detecting steatosis in donor livers to improve the decision-making process during liver procurement. We developed LiverColor, a co-designed software platform that integrates image analysis and machine learning to classify a liver graft as valid or non-valid according to its steatosis level. We used an in-house dataset of 192 cases to develop and validate the classification models. Colour and texture features were extracted from liver photographs, and graft classification was performed using supervised machine learning techniques (random forests and support vector machines). The performance of the algorithm was compared against biopsy results and surgeons’ classifications. Usability was also assessed in simulated and real clinical settings using the Mobile Health App Usability Questionnaire. The predictive models demonstrated an area under the receiver operating characteristic curve of 0.82, with an accuracy of 85%, significantly surpassing the accuracy of surgeons’ visual inspections. Experienced surgeons rated the platform positively, appreciating not only the hepatic steatosis assessment but also the dashboarding functionality for summarising and displaying procurement-related data. The results indicate that image analysis coupled with machine learning can effectively and safely identify valid livers during procurement. LiverColor has the potential to enhance the accuracy and efficiency of liver assessments, reducing reliance on subjective visual inspections and improving transplantation outcomes. Full article
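The colour-feature extraction plus random-forest classification described above can be sketched roughly as follows on synthetic data. The features (per-channel mean and standard deviation) and the colours of the fake "liver photographs" are hypothetical stand-ins, not the authors' actual LiverColor descriptors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

def colour_features(image):
    """Per-channel mean and std of an RGB image: a 6-dim colour descriptor."""
    return np.concatenate([image.mean(axis=(0, 1)), image.std(axis=(0, 1))])

def fake_liver(steatotic):
    """Synthetic 32x32 'photo'; steatotic grafts rendered paler/yellower."""
    base = np.array([190, 150, 90]) if steatotic else np.array([150, 60, 50])
    return np.clip(base + rng.normal(0, 15, (32, 32, 3)), 0, 255)

# 100 non-steatotic (label 0) and 100 steatotic (label 1) synthetic images
y = np.array([0] * 100 + [1] * 100)
X = np.array([colour_features(fake_liver(s)) for s in y])
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                      random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```

A real pipeline would add texture descriptors and calibrate the valid/non-valid threshold against biopsy ground truth, but the core classify-from-colour idea is this simple.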
22 pages, 3232 KiB  
Article
An Unsupervised Error Detection Methodology for Detecting Mislabels in Healthcare Analytics
by Pei-Yuan Zhou, Faith Lum, Tony Jiecao Wang, Anubhav Bhatti, Surajsinh Parmar, Chen Dan and Andrew K. C. Wong
Bioengineering 2024, 11(8), 770; https://doi.org/10.3390/bioengineering11080770 - 31 Jul 2024
Abstract
Medical datasets may be imbalanced and contain errors due to subjective test results and clinical variability. The poor quality of the original data affects classification accuracy and reliability. Hence, detecting abnormal samples in a dataset can help clinicians make better decisions. In this study, we propose an unsupervised error detection method using patterns discovered by the Pattern Discovery and Disentanglement (PDD) model developed in our earlier work. Applied to a large dataset, the eICU Collaborative Research Database, for sepsis risk assessment, the proposed algorithm can effectively discover statistically significant association patterns, generate an interpretable knowledge base, cluster samples in an unsupervised manner, and detect abnormal samples in the dataset. As the experimental results show, our method outperformed K-Means by 38% on the full dataset and 47% on the reduced dataset for unsupervised clustering. Multiple supervised classifiers improved accuracy by an average of 4% after abnormal samples were removed by the proposed error detection approach. The proposed algorithm therefore provides a robust and practical solution for unsupervised clustering and error detection in healthcare data. Full article
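The "detect abnormal samples, then retrain" idea can be illustrated with a toy version: cluster the data, flag samples whose label disagrees with their cluster's majority label, and compare classifier accuracy before and after removing them. K-Means stands in here purely for illustration; the paper's PDD model is pattern-based, not centroid-based.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Two well-separated Gaussian classes, with 10% of labels flipped (mislabels)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
flip = rng.choice(200, 20, replace=False)
y_noisy = y.copy()
y_noisy[flip] ^= 1

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
# Flag samples whose label disagrees with their cluster's majority label
keep = np.ones(200, dtype=bool)
for c in range(2):
    idx = clusters == c
    majority = np.bincount(y_noisy[idx]).argmax()
    keep[idx & (y_noisy != majority)] = False

acc_noisy = cross_val_score(LogisticRegression(), X, y_noisy, cv=5).mean()
acc_clean = cross_val_score(LogisticRegression(), X[keep], y_noisy[keep],
                            cv=5).mean()
print(f"accuracy with mislabels: {acc_noisy:.2f}, after removal: {acc_clean:.2f}")
```

On this toy data the removed samples are almost exactly the flipped ones, so downstream accuracy recovers; real clinical data is far less separable, which is where a richer pattern model earns its keep.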