Search Results (363)

Search Parameters:
Keywords = explainable artificial intelligence (xAI)

40 pages, 1517 KiB  
Review
Data-Driven Decision Support for Smart and Efficient Building Energy Retrofits: A Review
by Amjad Baset and Muhyiddine Jradi
Appl. Syst. Innov. 2025, 8(1), 5; https://doi.org/10.3390/asi8010005 - 27 Dec 2024
Viewed by 354
Abstract
This review explores the novel integration of data-driven approaches, including artificial intelligence (AI) and machine learning (ML), in advancing building energy retrofits. This study uniquely emphasizes the emerging role of explainable AI (XAI) in addressing transparency and interpretability challenges, fostering the broader adoption of data-driven solutions among stakeholders. A critical contribution of this review is its in-depth analysis of innovative applications of AI techniques to handle incomplete data, optimize energy performance, and predict retrofit outcomes with enhanced accuracy. Furthermore, the review identifies previously underexplored areas, such as scaling data-driven methods to diverse building typologies and incorporating future climate scenarios in retrofit planning. Future research directions include improving data availability and quality, developing scalable urban simulation tools, advancing modeling techniques to include life-cycle impacts, and creating practical decision-support systems that integrate economic and environmental metrics, paving the way for efficient and sustainable retrofitting solutions. Full article

21 pages, 473 KiB  
Article
Feature Selection in Cancer Classification: Utilizing Explainable Artificial Intelligence to Uncover Influential Genes in Machine Learning Models
by Matheus Dalmolin, Karolayne S. Azevedo, Luísa C. de Souza, Caroline B. de Farias, Martina Lichtenfels and Marcelo A. C. Fernandes
AI 2025, 6(1), 2; https://doi.org/10.3390/ai6010002 - 27 Dec 2024
Viewed by 307
Abstract
This study investigates the use of machine learning (ML) models combined with explainable artificial intelligence (XAI) techniques to identify the most influential genes in the classification of five recurrent cancer types in women: breast cancer (BRCA), lung adenocarcinoma (LUAD), thyroid cancer (THCA), ovarian cancer (OV), and colon adenocarcinoma (COAD). Gene expression data from RNA-seq, extracted from The Cancer Genome Atlas (TCGA), were used to train ML models, including decision trees (DTs), random forest (RF), and XGBoost (XGB), which achieved accuracies of 98.69%, 99.82%, and 99.37%, respectively. However, the challenges in this analysis included the high dimensionality of the dataset and the lack of transparency in the ML models. To mitigate these challenges, the SHAP (Shapley Additive Explanations) method was applied to generate a list of features, aiming to understand which characteristics influenced the models’ decision-making processes and, consequently, the prediction results for the five tumor types. The SHAP analysis identified 119, 80, and 10 genes for the RF, XGB, and DT models, respectively, totaling 209 genes, resulting in 172 unique genes. The new list, representing 0.8% of the original input features, is coherent and fully explainable, increasing confidence in the applied models. Additionally, the results suggest that the SHAP method can be effectively used as a feature selector in gene expression data. This approach not only enhances model transparency but also maintains high classification performance, highlighting its potential in identifying biologically relevant features that may serve as biomarkers for cancer diagnostics and treatment planning. Full article
(This article belongs to the Section Medical & Healthcare AI)
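
To make the SHAP-based feature-selection idea concrete, here is a minimal sketch using a random forest and shap's TreeExplainer; the synthetic data, feature counts, and settings are placeholder assumptions and do not reproduce the TCGA expression matrix or the authors' exact pipeline.

```python
# Minimal sketch of SHAP-driven feature selection, with synthetic data standing
# in for the TCGA RNA-seq expression matrix described in the abstract.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=500, n_informative=30,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
# Older shap releases return a list of per-class arrays; newer ones a single
# (samples, features, classes) array. Normalise to (classes, samples, features).
sv = np.stack(shap_values) if isinstance(shap_values, list) else np.moveaxis(shap_values, -1, 0)

# Rank features (genes) by mean absolute SHAP value and keep the top candidates.
importance = np.abs(sv).mean(axis=(0, 1))
top_features = np.argsort(importance)[::-1][:20]
print("Top candidate feature indices:", top_features)
```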

31 pages, 6140 KiB  
Article
Towards Transparent Diabetes Prediction: Combining AutoML and Explainable AI for Improved Clinical Insights
by Raza Hasan, Vishal Dattana, Salman Mahmood and Saqib Hussain
Information 2025, 16(1), 7; https://doi.org/10.3390/info16010007 - 26 Dec 2024
Viewed by 272
Abstract
Diabetes is a global health challenge that requires early detection for effective management. This study integrates Automated Machine Learning (AutoML) with Explainable Artificial Intelligence (XAI) to improve diabetes risk prediction and enhance model interpretability for healthcare professionals. Using the Pima Indian Diabetes dataset, we developed an ensemble model with 85.01% accuracy leveraging AutoGluon’s AutoML framework. To address the “black-box” nature of machine learning, we applied XAI techniques, including SHapley Additive exPlanations (SHAP), Local Interpretable Model-Agnostic Explanations (LIME), Integrated Gradients (IG), Attention Mechanism (AM), and Counterfactual Analysis (CA), providing both global and patient-specific insights into critical risk factors such as glucose and BMI. These methods enable transparent and actionable predictions, supporting clinical decision-making. An interactive Streamlit application was developed to allow clinicians to explore feature importance and test hypothetical scenarios. Cross-validation confirmed the model’s robust performance across diverse datasets. This study demonstrates the integration of AutoML with XAI as a pathway to achieving accurate, interpretable models that foster transparency and trust while supporting actionable clinical decisions. Full article
(This article belongs to the Special Issue Medical Data Visualization)
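
As a rough illustration of the AutoML step described above, the sketch below fits an AutoGluon TabularPredictor on a diabetes table; the file path and the "Outcome" label column are assumptions about the Pima dataset layout, not the authors' exact configuration.

```python
# Minimal AutoGluon sketch; "diabetes.csv" and the "Outcome" column are assumed.
import pandas as pd
from autogluon.tabular import TabularPredictor
from sklearn.model_selection import train_test_split

df = pd.read_csv("diabetes.csv")          # hypothetical local copy of the Pima dataset
train_df, test_df = train_test_split(df, test_size=0.2,
                                     stratify=df["Outcome"], random_state=42)

# AutoGluon searches over candidate models and builds an ensemble automatically.
predictor = TabularPredictor(label="Outcome", eval_metric="accuracy").fit(train_df)

print(predictor.leaderboard(test_df))     # per-model scores on the held-out split
print(predictor.evaluate(test_df))        # e.g. {'accuracy': ...}
```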

35 pages, 409 KiB  
Review
Fault Detection and Diagnosis in Industry 4.0: A Review on Challenges and Opportunities
by Denis Leite, Emmanuel Andrade, Diego Rativa and Alexandre M. A. Maciel
Sensors 2025, 25(1), 60; https://doi.org/10.3390/s25010060 - 25 Dec 2024
Viewed by 217
Abstract
Integrating Machine Learning (ML) into industrial settings has become a cornerstone of Industry 4.0, aiming to enhance production system reliability and efficiency through Real-Time Fault Detection and Diagnosis (RT-FDD). This paper conducts a comprehensive literature review of ML-based RT-FDD. Out of 805 documents, 29 studies were identified as noteworthy for presenting innovative methods that address the complexities and challenges associated with fault detection. While ML-based RT-FDD offers various benefits, including improved fault prediction accuracy, it faces challenges in data quality, model interpretability, and integration complexity. This review identifies a gap in industrial implementation outcomes that opens new research opportunities. Future Fault Detection and Diagnosis (FDD) research may prioritize standardized datasets to ensure reproducibility and facilitate comparative evaluations. Furthermore, there is a pressing need to refine techniques for handling unbalanced datasets and to improve feature extraction for time-series data. Implementing Explainable Artificial Intelligence (XAI) tailored to industrial fault detection is imperative for enhancing interpretability and trustworthiness. Subsequent studies must emphasize comprehensive comparative evaluations, reducing reliance on specialized expertise, documenting real-world outcomes, addressing data challenges, and bolstering real-time capabilities and integration. By addressing these avenues, the field can propel the advancement of ML-based RT-FDD methodologies, ensuring their effectiveness and relevance in industrial contexts. Full article

18 pages, 5635 KiB  
Article
Toward Robust Lung Cancer Diagnosis: Integrating Multiple CT Datasets, Curriculum Learning, and Explainable AI
by Amira Bouamrane, Makhlouf Derdour, Akram Bennour, Taiseer Abdalla Elfadil Eisa, Abdel-Hamid M. Emara, Mohammed Al-Sarem and Neesrin Ali Kurdi
Diagnostics 2025, 15(1), 1; https://doi.org/10.3390/diagnostics15010001 - 24 Dec 2024
Viewed by 330
Abstract
Background and Objectives: Computer-aided diagnostic systems have achieved remarkable success in the medical field, particularly in diagnosing malignant tumors, and have done so at a rapid pace. However, the generalizability of the results remains a challenge for researchers and decreases the credibility of these models, which represents a point of criticism by physicians and specialists, especially given the sensitivity of the field. This study proposes a novel model based on deep learning to enhance lung cancer diagnosis quality, understandability, and generalizability. Methods: The proposed approach uses five computed tomography (CT) datasets to assess diversity and heterogeneity. Moreover, the mixup augmentation technique was adopted to encourage reliance on salient characteristics by combining features and labels of CT scans across datasets, reducing their biases and subjectivity and thus improving the model’s generalization ability and robustness. Curriculum learning was used to train the model, starting with simpler sets before quickly progressing to more complicated ones. Results: The proposed approach achieved promising results, with an accuracy of 99.38%; precision, specificity, and area under the curve (AUC) of 100%; sensitivity of 98.76%; and F1-score of 99.37%. Additionally, it achieved a 0% false positive rate and only a 1.23% false negative rate. An external dataset was used to further validate the proposed method’s effectiveness. The proposed approach achieved optimal results of 100% in all metrics, with 0% false positive and false negative rates. Finally, explainable artificial intelligence (XAI) using Gradient-weighted Class Activation Mapping (Grad-CAM) was employed to better understand the model. Conclusions: This research proposes a robust and interpretable model for lung cancer diagnostics with improved generalizability and validity. Incorporating mixup and curriculum training supported by several datasets underlines its promise for employment as a diagnostic device in the medical industry. Full article
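
The mixup step mentioned in the abstract can be sketched framework-agnostically as below; the image shapes and one-hot label encoding are placeholders, and the paper's curriculum schedule and CNN are not reproduced.

```python
# Minimal mixup sketch on placeholder arrays (not the authors' CT datasets).
import numpy as np

def mixup(images, labels, alpha=0.4, rng=np.random.default_rng(0)):
    """Blend random pairs of images and their one-hot labels with a Beta-distributed weight."""
    lam = rng.beta(alpha, alpha)
    idx = rng.permutation(len(images))
    return lam * images + (1 - lam) * images[idx], lam * labels + (1 - lam) * labels[idx]

# Example: a batch of 8 single-channel 128x128 "CT slices" with 2 classes.
images = np.random.rand(8, 128, 128, 1)
labels = np.eye(2)[np.random.randint(0, 2, size=8)]   # one-hot labels
mixed_images, mixed_labels = mixup(images, labels)
print(mixed_images.shape, mixed_labels.shape)
```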

28 pages, 4062 KiB  
Article
Forecasting River Water Temperature Using Explainable Artificial Intelligence and Hybrid Machine Learning: Case Studies in Menindee Region in Australia
by Leyde Briceno Medina, Klaus Joehnk, Ravinesh C. Deo, Mumtaz Ali, Salvin S. Prasad and Nathan Downs
Water 2024, 16(24), 3720; https://doi.org/10.3390/w16243720 - 23 Dec 2024
Viewed by 413
Abstract
Water temperature (WT) is a crucial factor indicating the quality of water in the river system. Given the significant variability in water quality, it is vital to devise more precise methods to forecast temperature in river systems and assess the water quality. This study designs and evaluates a new explainable artificial intelligence and hybrid machine-learning framework tailored for hourly and daily surface WT predictions for case studies in the Menindee region, focusing on the Weir 32 site. The proposed hybrid framework was designed by coupling a nonstationary signal processing method of Multivariate Variational Mode Decomposition (MVMD) with a bidirectional long short-term memory network (BiLSTM). The study has also employed a combination of in situ measurements with gridded and simulation datasets in the testing phase to rigorously assess the predictive performance of the newly designed MVMD-BiLSTM alongside other benchmarked models. In accordance with the outcomes of the statistical score metrics and visual infographics of the predicted and observed WT, the objective model displayed superior predictive performance against other benchmarked models. For instance, the MVMD-BiLSTM model captured the lowest Root Mean Square Percentage Error (RMSPE) values of 9.70% and 6.34% for the hourly and daily forecasts, respectively, at Weir 32. Further application of this proposed model reproduced the overall dynamics of the daily WT in Burtundy (RMSPE = 7.88% and Mean Absolute Percentage Error (MAPE) = 5.78%) and Pooncarie (RMSPE = 8.39% and MAPE = 5.89%), confirming that the gridded data effectively capture the overall WT dynamics at these locations. The overall explainable artificial intelligence (xAI) results, based on Local Interpretable Model-Agnostic Explanations (LIME), indicate that air temperature (AT) was the most significant contributor towards predicting WT. The superior capabilities of the proposed MVMD-BiLSTM model through this case study consolidate its potential in forecasting WT. Full article
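
A minimal sketch of the BiLSTM forecasting component is given below, assuming the MVMD modes have already been computed and stacked as input channels; the shapes, hyperparameters, and random data are placeholders rather than the authors' setup.

```python
# Minimal BiLSTM regression sketch (Keras); MVMD decomposition assumed done upstream.
import numpy as np
import tensorflow as tf

n_steps, n_modes = 24, 6                                     # 24 lagged hours, 6 assumed modes
X = np.random.rand(500, n_steps, n_modes).astype("float32")  # placeholder decomposed inputs
y = np.random.rand(500, 1).astype("float32")                 # placeholder water-temperature targets

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_steps, n_modes)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                                # next-step surface water temperature
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:3], verbose=0))
```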

26 pages, 21880 KiB  
Article
Explainable AI-Based Skin Cancer Detection Using CNN, Particle Swarm Optimization and Machine Learning
by Syed Adil Hussain Shah, Syed Taimoor Hussain Shah, Roa’a Khaled, Andrea Buccoliero, Syed Baqir Hussain Shah, Angelo Di Terlizzi, Giacomo Di Benedetto and Marco Agostino Deriu
J. Imaging 2024, 10(12), 332; https://doi.org/10.3390/jimaging10120332 - 22 Dec 2024
Viewed by 390
Abstract
Skin cancer is among the most prevalent cancers globally, emphasizing the need for early detection and accurate diagnosis to improve outcomes. Traditional diagnostic methods, based on visual examination, are subjective, time-intensive, and require specialized expertise. Current artificial intelligence (AI) approaches for skin cancer detection face challenges such as computational inefficiency, lack of interpretability, and reliance on standalone CNN architectures. To address these limitations, this study proposes a comprehensive pipeline combining transfer learning, feature selection, and machine-learning algorithms to improve detection accuracy. Multiple pretrained CNN models were evaluated, with Xception emerging as the optimal choice for its balance of computational efficiency and performance. An ablation study further validated the effectiveness of freezing task-specific layers within the Xception architecture. Feature dimensionality was optimized using Particle Swarm Optimization, reducing dimensions from 1024 to 508, significantly enhancing computational efficiency. Machine-learning classifiers, including Subspace KNN and Medium Gaussian SVM, further improved classification accuracy. Evaluated on the ISIC 2018 and HAM10000 datasets, the proposed pipeline achieved impressive accuracies of 98.5% and 86.1%, respectively. Moreover, Explainable-AI (XAI) techniques, such as Grad-CAM, LIME, and Occlusion Sensitivity, enhanced interpretability. This approach provides a robust, efficient, and interpretable solution for automated skin cancer diagnosis in clinical applications. Full article
(This article belongs to the Special Issue Deep Learning in Image Analysis: Progress and Challenges)
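
The transfer-learning plus classical-classifier idea can be sketched as below: a frozen ImageNet-pretrained Xception backbone produces embeddings on which an SVM is trained. The PSO feature selection and the authors' exact layer-freezing scheme are omitted, and the images and labels are placeholders.

```python
# Minimal Xception-features + SVM sketch on placeholder images (not ISIC/HAM10000).
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

backbone = tf.keras.applications.Xception(weights="imagenet", include_top=False, pooling="avg")
backbone.trainable = False                                   # freeze the pretrained layers

images = np.random.uniform(0, 255, size=(16, 299, 299, 3)).astype("float32")  # placeholder lesions
labels = np.random.randint(0, 2, size=16)                                     # placeholder labels

features = backbone.predict(tf.keras.applications.xception.preprocess_input(images), verbose=0)
clf = SVC(kernel="rbf").fit(features, labels)                # rough analogue of a Gaussian SVM
print(features.shape, clf.score(features, labels))
```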

33 pages, 2332 KiB  
Review
Explainable Machine Learning in Critical Decision Systems: Ensuring Safe Application and Correctness
by Julius Wiggerthale and Christoph Reich
AI 2024, 5(4), 2864-2896; https://doi.org/10.3390/ai5040138 - 11 Dec 2024
Viewed by 598
Abstract
Machine learning (ML) is increasingly used to support or automate decision processes in critical decision systems such as self-driving cars or systems for medical diagnosis. These systems involve decisions in which human lives are at stake, which should therefore be well founded and highly reliable. This need for reliability contrasts with the black-box nature of many ML models, making it difficult to ensure that they always behave as intended. In the face of the high stakes involved, the resulting uncertainty is a significant challenge. Explainable artificial intelligence (XAI) addresses the issue by making black-box models more interpretable, often to increase user trust. However, many current XAI applications focus more on transparency and usability than on enhancing the safety of ML applications. In this work, we therefore conduct a systematic literature review to examine how XAI can be leveraged to increase the safety of ML applications in critical decision systems. We strive to find out for what purposes XAI is currently used in critical decision systems, which XAI techniques are most common in such systems, and how XAI can be harnessed to increase the safety of ML applications in them. Using the SPAR-4-SLR protocol, we are able to answer these questions and provide a foundational resource for researchers and practitioners seeking to mitigate risks of ML applications. Essentially, we identify promising XAI approaches that go beyond increasing trust to actively ensuring the correctness of decisions. Our findings yield a three-layered framework to enhance the safety of ML in critical decision systems by means of XAI. The approach consists of three layers: Reliability, Validation, and Verification. Furthermore, we point out gaps in research and propose future directions of XAI research for enhancing the safety of ML applications in critical decision systems. Full article

19 pages, 12083 KiB  
Article
An XAI Approach to Melanoma Diagnosis: Explaining the Output of Convolutional Neural Networks with Feature Injection
by Flavia Grignaffini, Enrico De Santis, Fabrizio Frezza and Antonello Rizzi
Information 2024, 15(12), 783; https://doi.org/10.3390/info15120783 - 5 Dec 2024
Viewed by 524
Abstract
Computer-aided diagnosis (CAD) systems, which combine medical image processing with artificial intelligence (AI) to support experts in diagnosing various diseases, emerged from the need to solve some of the problems associated with medical diagnosis, such as long timelines and operator-related variability. The most explored medical application is cancer detection, for which several CAD systems have been proposed. Among them, deep neural network (DNN)-based systems for skin cancer diagnosis have demonstrated comparable or superior performance to that of experienced dermatologists. However, the lack of transparency in the decision-making process of such approaches makes them “black boxes” and, therefore, not directly incorporable into clinical practice. Explaining and interpreting the reasons behind DNNs’ decisions is the aim of emerging explainable AI (XAI) techniques. XAI has been successfully applied to DNNs for skin lesion image classification but never when additional information is incorporated during network training. This field is still unexplored; thus, in this paper, we aim to provide a method to explain, qualitatively and quantitatively, a convolutional neural network model with feature injection for melanoma diagnosis. The gradient-weighted class activation mapping and layer-wise relevance propagation methods were used to generate heat maps, highlighting the image regions and pixels that contributed most to the final prediction. The Shapley additive explanations method, in turn, was used to perform a feature importance analysis on the additional handcrafted information. To successfully integrate DNNs into the clinical and diagnostic workflow, it is necessary to ensure their maximum reliability and transparency in whatever variant they are used. Full article
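
Of the explanation methods named above, Grad-CAM is the most self-contained to sketch. The snippet below computes it manually with forward/backward hooks on a stock ResNet-18 that stands in for the paper's feature-injected CNN; the target layer, input, and model are assumptions, not the authors' implementation.

```python
# Minimal manual Grad-CAM sketch (PyTorch); ResNet-18 stands in for the paper's model.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()
activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["value"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

layer = model.layer4[-1]                         # last convolutional block (assumed target layer)
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)                  # placeholder dermoscopic image
logits = model(x)
logits[0, logits.argmax()].backward()            # gradient of the top-scoring class

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)    # global-average-pool the gradients
cam = F.relu((weights * activations["value"]).sum(dim=1))      # weighted sum of activation maps
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear", align_corners=False)
print(cam.shape)                                 # (1, 1, 224, 224) heat map over the input
```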

17 pages, 5357 KiB  
Article
Integrating Explanations into CNNs by Adopting Spiking Attention Block for Skin Cancer Detection
by Inzamam Mashood Nasir, Sara Tehsin, Robertas Damaševičius and Rytis Maskeliūnas
Algorithms 2024, 17(12), 557; https://doi.org/10.3390/a17120557 - 5 Dec 2024
Viewed by 553
Abstract
Lately, there has been a substantial rise in the number of individuals diagnosed with skin cancer, making it the most widespread form of cancer worldwide. To date, several machine learning methods that utilize skin scans have been directly employed for skin cancer classification, showing encouraging outcomes in terms of enhancing diagnostic precision. In this paper, a multimodal Explainable Artificial Intelligence (XAI) approach is presented that offers explanations which (1) address a gap in interpretation by identifying specific dermoscopic features, (2) enable dermatologists to comprehend them during melanoma diagnosis, and (3) allow an evaluation of the interaction between clinicians and XAI. The specific goal of this article is to create an XAI system that closely aligns with the perspective of dermatologists when it comes to diagnosing melanoma. By building upon previous research on explainability in dermatology, this work introduces a novel soft attention mechanism, called Convolutional Spiking Attention Module (CSAM), to deep neural architectures, which focuses on enhancing critical elements and reducing noise-inducing features. Two instances of the proposed CSAM were placed inside the proposed Spiking Attention Block (SAB). The InceptionResNetV2, DenseNet201, and Xception architectures with and without the proposed SAB mechanism were compared for skin lesion classification. Pretrained networks with SAB outperform state-of-the-art methods on the HAM10000 dataset. The proposed method used the ISIC-2019 dataset for the cross-dataset validation process. The proposed model attends to cancer-relevant pixels without using an external explainer, which demonstrates the importance of the SAB module. Full article
(This article belongs to the Special Issue Supervised and Unsupervised Classification Algorithms (2nd Edition))

21 pages, 5660 KiB  
Article
EWAIS: An Ensemble Learning and Explainable AI Approach for Water Quality Classification Toward IoT-Enabled Systems
by Nermeen Gamal Rezk, Samah Alshathri, Amged Sayed and Ezz El-Din Hemdan
Processes 2024, 12(12), 2771; https://doi.org/10.3390/pr12122771 - 5 Dec 2024
Viewed by 602
Abstract
In the context of smart cities with advanced Internet of Things (IoT) systems, ensuring the sustainability and safety of freshwater resources is pivotal for public health and urban resilience. This study introduces EWAIS (Ensemble Learning and Explainable AI System), a novel framework designed for the smart monitoring and assessment of water quality. Leveraging the strengths of Ensemble Learning models and Explainable Artificial Intelligence (XAI), EWAIS not only enhances the prediction accuracy of water quality but also provides transparent insights into the factors influencing these predictions. EWAIS integrates multiple Ensemble Learning models—Extra Trees Classifier (ETC), K-Nearest Neighbors (KNN), AdaBoost Classifier, decision tree (DT), Stacked Ensemble, and Voting Ensemble Learning (VEL)—to classify water as drinkable or non-drinkable. The system incorporates advanced techniques for handling missing data and statistical analysis, ensuring robust performance even in complex urban datasets. To address the opacity of traditional Machine Learning models, EWAIS employs XAI methods such as SHAP and LIME, generating intuitive visual explanations like force plots, summary plots, dependency plots, and decision plots. The system achieves high predictive performance, with the VEL model reaching an accuracy of 0.89 and an F1-Score of 0.85, alongside precision and recall scores of 0.85 and 0.86, respectively. These results demonstrate the proposed framework’s capability to deliver both accurate water quality predictions and actionable insights for decision-makers. By providing a transparent and interpretable monitoring system, EWAIS supports informed water management strategies, contributing to the sustainability and well-being of urban populations. This framework has been validated using controlled datasets, with IoT implementation suggested to enhance water quality monitoring in smart city environments. Full article
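
A rough sketch of the voting-ensemble idea is shown below with scikit-learn; the synthetic features stand in for the water-quality attributes, and the authors' stacking variant, preprocessing, and tuning are not reproduced.

```python
# Minimal soft-voting ensemble sketch for potability-style classification.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, ExtraTreesClassifier, VotingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=9, random_state=0)  # 9 assumed water-quality features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("etc", ExtraTreesClassifier(n_estimators=200, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=7)),
        ("ada", AdaBoostClassifier(random_state=0)),
        ("dt", DecisionTreeClassifier(random_state=0)),
    ],
    voting="soft",          # average predicted probabilities across the base models
)
ensemble.fit(X_tr, y_tr)
print(classification_report(y_te, ensemble.predict(X_te)))
```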

53 pages, 1985 KiB  
Review
An Overview of the Empirical Evaluation of Explainable AI (XAI): A Comprehensive Guideline for User-Centered Evaluation in XAI
by Sidra Naveed, Gunnar Stevens and Dean Robin-Kern
Appl. Sci. 2024, 14(23), 11288; https://doi.org/10.3390/app142311288 - 3 Dec 2024
Viewed by 1294
Abstract
Recent advances in technology have propelled Artificial Intelligence (AI) into a crucial role in everyday life, enhancing human performance through sophisticated models and algorithms. However, the focus on predictive accuracy has often resulted in opaque black-box models that lack transparency in decision-making. To address this issue, significant efforts have been made to develop explainable AI (XAI) systems that make outcomes comprehensible to users. Various approaches, including new concepts, models, and user interfaces, aim to improve explainability, build user trust, enhance satisfaction, and increase task performance. Evaluation research has emerged to define and measure the quality of these explanations, differentiating between formal evaluation methods and empirical approaches that utilize techniques from psychology and human–computer interaction. Despite the importance of empirical studies, evaluations remain underutilized, with literature reviews indicating a lack of rigorous evaluations from the user perspective. This review aims to guide researchers and practitioners in conducting effective empirical user-centered evaluations by analyzing several studies; categorizing their objectives, scope, and evaluation metrics; and offering an orientation map for research design and metric measurement. Full article

30 pages, 4591 KiB  
Article
Machine Learning Classification of Pediatric Health Status Based on Cardiorespiratory Signals with Causal and Information Domain Features Applied—An Exploratory Study
by Maciej Rosoł, Jakub S. Gąsior, Kacper Korzeniewski, Jonasz Łaba, Robert Makuch, Bożena Werner and Marcel Młyńczak
J. Clin. Med. 2024, 13(23), 7353; https://doi.org/10.3390/jcm13237353 - 2 Dec 2024
Viewed by 503
Abstract
Background/Objectives: This study aimed to evaluate the accuracy of machine learning (ML) techniques in classifying pediatric individuals—cardiological patients, healthy participants, and athletes—based on cardiorespiratory features from short-term static measurements. It also examined the impact of cardiorespiratory coupling (CRC)-related features (from causal and information domains) on the modeling accuracy to identify a preferred cardiorespiratory feature set that could be further explored for specialized tasks, such as monitoring training progress or diagnosing health conditions. Methods: We utilized six self-prepared datasets that comprised various subsets of cardiorespiratory parameters and applied several ML algorithms to classify subjects into three distinct groups. This research also leveraged explainable artificial intelligence (XAI) techniques to interpret model decisions and investigate feature importance. Results: The highest accuracy, over 89%, was obtained using the dataset that included most important demographic, cardiac, respiratory, and interrelated (causal and information) domain features. The dataset that comprised the most influential features but without demographic data yielded the second best accuracy, equal to 85%. Incorporation of the causal and information domain features significantly improved the classification accuracy. The use of XAI tools further highlighted the importance of these features with respect to each individual group. Conclusions: The integration of ML algorithms with a broad spectrum of cardiorespiratory features provided satisfactory efficiency in classifying pediatric individuals into groups according to their actual health status. This study underscored the potential of ML and XAI in advancing the analysis of cardiorespiratory signals and emphasized the importance of CRC-related features. The established set of features that appeared optimal for the classification of pediatric patients should be further explored for their potential in assessing individual progress through training or rehabilitation. Full article

21 pages, 4809 KiB  
Article
Cardioish: Lead-Based Feature Extraction for ECG Signals
by Turker Tuncer, Abdul Hafeez Baig, Emrah Aydemir, Tarik Kivrak, Ilknur Tuncer, Gulay Tasci and Sengul Dogan
Diagnostics 2024, 14(23), 2712; https://doi.org/10.3390/diagnostics14232712 - 30 Nov 2024
Viewed by 479
Abstract
Background: Electrocardiography (ECG) signals are commonly used to detect cardiac disorders, with 12-lead ECGs being the standard method for acquiring these signals. The primary objective of this research is to propose a new feature engineering model that achieves both high classification accuracy and explainable results using ECG signals. To this end, a symbolic language, named Cardioish, has been introduced. Methods: In this research, two publicly available datasets were used: (i) a mental disorder classification dataset and (ii) a myocardial infarction (MI) dataset. These datasets contain ECG beats and include 4 and 11 classes, respectively. To obtain explainable results from these ECG signal datasets, a new explainable feature engineering (XFE) model has been proposed. The Cardioish-based XFE model consists of four main phases: (i) lead transformation and transition table feature extraction, (ii) iterative neighborhood component analysis (INCA) for feature selection, (iii) classification, and (iv) explainable results generation using the recommended Cardioish. In the feature extraction phase, the lead transformer converts ECG signals into lead indexes. To extract features from the transformed signals, a transition table-based feature extractor is applied, resulting in 144 features (12 × 12) from each ECG signal. In the feature selection phase, INCA is used to select the most informative features from the 144 generated, which are then classified using the k-nearest neighbors (kNN) classifier. The final phase is the explainable artificial intelligence (XAI) phase. In this phase, Cardioish symbols are created, forming a Cardioish sentence. By analyzing the extracted sentence, XAI results are obtained. Additionally, these results can be integrated into connectome theory for applications in cardiology. Results: The presented Cardioish-based XFE model achieved over 99% classification accuracy on both datasets. Moreover, the XAI results related to these disorders have been presented in this research. Conclusions: The recommended Cardioish-based XFE model achieved high classification performance for both datasets and provided explainable results. In this regard, our proposal paves a new way for ECG classification and interpretation. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Diagnostics and Analysis 2024)
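
The transition-table feature extraction described in the abstract can be sketched as below: a sequence of lead indexes (0-11) is turned into a flattened 12 × 12 transition-count table, i.e., 144 features. The lead-transformation step and the Cardioish symbol set themselves are not reproduced, and the input sequence is a placeholder.

```python
# Minimal sketch of 12x12 transition-table features from a lead-index sequence.
import numpy as np

def transition_features(lead_indexes, n_leads=12):
    """Count transitions between consecutive lead indexes and flatten to 144 features."""
    table = np.zeros((n_leads, n_leads))
    for a, b in zip(lead_indexes[:-1], lead_indexes[1:]):
        table[a, b] += 1
    return table.flatten()

rng = np.random.default_rng(0)
demo_sequence = rng.integers(0, 12, size=500)     # placeholder lead-index sequence
features = transition_features(demo_sequence)
print(features.shape)                             # (144,)
```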

29 pages, 3568 KiB  
Systematic Review
eXplainable Artificial Intelligence in Process Engineering: Promises, Facts, and Current Limitations
by Luigi Piero Di Bonito, Lelio Campanile, Francesco Di Natale, Michele Mastroianni and Mauro Iacono
Appl. Syst. Innov. 2024, 7(6), 121; https://doi.org/10.3390/asi7060121 - 30 Nov 2024
Viewed by 1603
Abstract
Artificial Intelligence (AI) has been swiftly incorporated into industry to become a part of both customer services and manufacturing operations. To effectively address the ethical issues now being examined by governments, AI models must be explainable in order to be used in both scientific and societal contexts. The current state of eXplainable artificial intelligence (XAI) in process engineering is examined in this study through a systematic literature review (SLR), with particular attention paid to the technology’s effect, degree of adoption, and potential to improve process and product quality. Due to restricted access to sizable, reliable datasets, XAI research in process engineering is still primarily exploratory or propositional, despite noteworthy applicability in well-known case studies. According to our research, XAI is increasingly positioned as a tool for decision support, with a focus on robustness and dependability in process optimization, maintenance, and quality assurance. This study, however, emphasizes that the use of XAI in process engineering is still in its early stages, and there is significant potential for methodological development and wider use across technical domains. Full article
(This article belongs to the Section Artificial Intelligence)