
Search Results (1,011)

Search Parameters:
Keywords = data-driven and learning-based approaches

15 pages, 849 KiB  
Article
Designing Channel Attention Fully Convolutional Networks with Neural Architecture Search for Customer Socio-Demographic Information Identification Using Smart Meter Data
by Zhirui Luo, Qingqing Li, Ruobin Qi and Jun Zheng
AI 2025, 6(1), 9; https://doi.org/10.3390/ai6010009 - 10 Jan 2025
Viewed by 292
Abstract
Background: Accurately identifying the socio-demographic information of customers is crucial for utilities. It enables them to efficiently deliver personalized energy services and manage distribution networks. In recent years, machine learning-based data-driven methods have gained popularity compared to traditional survey-based approaches, owing to their time and cost efficiency, as well as the availability of a large amount of high-frequency smart meter data. Methods: In this paper, we propose a new method that harnesses the power of neural architecture search to automatically design deep neural network architectures tailored for identifying various socio-demographic information of customers using smart meter data. We designed a search space based on a novel channel attention fully convolutional network architecture. Furthermore, we developed a search algorithm based on Bayesian optimization to effectively explore the space and identify high-performing architectures. Results: The performance of the proposed method was evaluated and compared with a set of machine learning and deep learning baseline methods using a smart meter dataset widely used in this research area. Our results show that the deep neural network architectures designed automatically by our proposed method significantly outperform all baseline methods in addressing the socio-demographic questions investigated in our study. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
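As an aside for readers unfamiliar with this design space, the sketch below shows a squeeze-and-excitation-style channel-attention block and a candidate fully convolutional network assembled from sampled hyperparameters. The block form and the searched parameters (depth, width, kernel size) are assumptions for illustration, not the paper's exact search space.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention over 1D load profiles."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                       # x: (batch, channels, length)
        w = self.fc(x.mean(dim=-1))             # squeeze: global average pool
        return x * w.unsqueeze(-1)              # excite: reweight channels

def sample_fcn(num_blocks, channels, kernel, num_classes, in_channels=1):
    """Assemble one candidate architecture from sampled hyperparameters."""
    layers, c_in = [], in_channels
    for _ in range(num_blocks):
        layers += [nn.Conv1d(c_in, channels, kernel, padding=kernel // 2),
                   nn.BatchNorm1d(channels), nn.ReLU(),
                   ChannelAttention(channels)]
        c_in = channels
    layers += [nn.AdaptiveAvgPool1d(1), nn.Flatten(),
               nn.Linear(channels, num_classes)]
    return nn.Sequential(*layers)

# A Bayesian-optimization loop would score candidates like this one on
# validation accuracy and propose the next (num_blocks, channels, kernel).
model = sample_fcn(num_blocks=3, channels=64, kernel=5, num_classes=2)
print(model(torch.randn(8, 1, 336)).shape)      # a week of half-hourly readings
```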

17 pages, 1240 KiB  
Technical Note
MAL-Net: Model-Adaptive Learned Network for Slow-Time Ambiguity Function Shaping
by Jun Wang, Xiangqing Xiao, Jinfeng Hu, Ziwei Zhao, Kai Zhong and Chaohai Li
Remote Sens. 2025, 17(1), 173; https://doi.org/10.3390/rs17010173 - 6 Jan 2025
Viewed by 265
Abstract
Designing waveforms with a Constant Modulus Constraint (CMC) to achieve desirable Slow-Time Ambiguity Function (STAF) characteristics is significantly important in radar technology. The problem is NP-hard due to its non-convex quartic objective function and CMC constraint. Existing methods typically involve model-based approaches with relaxation and data-driven Deep Neural Network (DNN) methods, which face the challenge of data limitation. We observe that the Complex Circle Manifold (CCM) naturally satisfies the CMC. By projecting onto the CCM, the problem is transformed into an unconstrained minimization problem that can be tackled using the CCM gradient descent model. Furthermore, we observe that the gradient descent model over the CCM can be unfolded as a Deep Learning (DL) network. Therefore, by leveraging the powerful learning ability of DL and the CCM gradient descent model, we propose a Model-Adaptive Learned Network (MAL-Net) method without relaxation. Initially, we reformulate the problem as an Unconstrained Quartic Problem (UQP) on the CCM. Then, the MAL-Net is developed to learn the step sizes of all layers adaptively. This is accomplished by unrolling the CCM gradient descent model as the network layers. Our simulation results demonstrate that the proposed MAL-Net achieves superior STAF performance compared to existing methods. Full article
(This article belongs to the Special Issue Advances in Remote Sensing, Radar Techniques, and Their Applications)
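To make the unrolling idea concrete, here is a toy sketch that parameterizes the waveform as s = exp(iφ), so every iterate satisfies the CMC by construction, and unrolls gradient steps with one learnable step size per layer. The sidelobe objective is a stand-in for the paper's STAF criterion, not its actual formulation.

```python
import torch
import torch.nn as nn

class UnrolledCCM(nn.Module):
    """CCM gradient descent unrolled into layers with learnable step sizes."""
    def __init__(self, num_layers: int = 10):
        super().__init__()
        self.steps = nn.Parameter(torch.full((num_layers,), 0.05))

    def forward(self, phi, objective):
        # Parameterizing s = exp(i*phi) keeps |s_n| = 1 (the CMC) by construction.
        phi = phi.clone().requires_grad_(True)
        for mu in self.steps:
            s = torch.exp(1j * phi)
            (g,) = torch.autograd.grad(objective(s), phi, create_graph=True)
            phi = phi - mu * g                  # one unrolled gradient layer
        return torch.exp(1j * phi)

def sidelobe_energy(s):
    # Stand-in quartic objective: total autocorrelation sidelobe energy.
    n = s.numel()
    r = torch.stack([(s[:n - k].conj() * s[k:]).sum() for k in range(1, n)])
    return (r.abs() ** 4).sum()

waveform = UnrolledCCM()(torch.randn(64), sidelobe_energy)
print(waveform.abs().max())                     # all moduli remain exactly 1
```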

26 pages, 3308 KiB  
Article
Adaptive Cloud-Based Big Data Analytics Model for Sustainable Supply Chain Management
by Nenad Stefanovic, Milos Radenkovic, Zorica Bogdanovic, Jelena Plasic and Andrijana Gaborovic
Sustainability 2025, 17(1), 354; https://doi.org/10.3390/su17010354 - 6 Jan 2025
Viewed by 556
Abstract
Due to an uncertain business climate, fierce competition, environmental challenges, regulatory requirements, and the need for responsible business operations, organizations are forced to implement sustainable supply chains. This necessitates the use of proper data analytics methods and tools to monitor economic, environmental, and social performance, as well as to manage and optimize supply chain operations. This paper discusses issues, challenges, and state-of-the-art approaches in supply chain analytics and gives a systematic literature review of big data developments associated with supply chain management (SCM). Even though big data technologies promise many benefits and advantages, their prospective applications in sustainable SCM are still not achieved to a full extent. This necessitates work on several segments, such as research and the design of new models, architectures, services, and tools for big data analytics. The goal of the paper is to introduce a methodology covering the whole Business Intelligence (BI) lifecycle and a unified model for advanced supply chain big data analytics (BDA). The model is multi-layered, cloud-based, and adaptive in terms of specific big data scenarios. It comprises business process modeling, data ingestion, storage, processing, machine learning, and end-user intelligence and visualization. It enables the creation of next-generation BDA systems that improve supply chain performance and enable sustainable SCM. The proposed supply chain BDA methodology and model have been successfully applied in practice for supplier quality management. A solution based on a real-world dataset and an illustrative supply chain case are presented and discussed. The results demonstrate the effectiveness and applicability of the big data model for intelligent, insight-driven decision making and sustainable supply chain management. Full article
(This article belongs to the Special Issue Sustainable Enterprise Operation and Supply Chain Management)
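For illustration only, a schematic sketch of the layered flow the model describes, from ingestion through processing to analytics; the stage names and toy supplier-quality records are assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class AnalyticsPipeline:
    """Layered supply chain BDA flow; stages run in registration order."""
    stages: list = field(default_factory=list)

    def stage(self, name: str):
        def register(fn: Callable[[Any], Any]):
            self.stages.append((name, fn))
            return fn
        return register

    def run(self, data: Any) -> Any:
        for name, fn in self.stages:
            data = fn(data)                     # each layer feeds the next
        return data

pipeline = AnalyticsPipeline()

@pipeline.stage("ingest")
def ingest(source):
    # Hypothetical supplier-quality records pulled from an ERP source.
    return [{"supplier": "A", "defects": 3}, {"supplier": "B", "defects": 9}]

@pipeline.stage("process")
def score(records):
    # Derive a simple quality score per supplier for downstream dashboards.
    return {r["supplier"]: 1.0 / (1 + r["defects"]) for r in records}

print(pipeline.run("erp://supplier-quality"))
```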

16 pages, 447 KiB  
Article
How Self-Regulated Learning Is Affected by Feedback Based on Large Language Models: Data-Driven Sustainable Development in Computer Programming Learning
by Di Sun, Pengfei Xu, Jing Zhang, Ruqi Liu and Jun Zhang
Electronics 2025, 14(1), 194; https://doi.org/10.3390/electronics14010194 - 5 Jan 2025
Viewed by 643
Abstract
Self-regulated learning (SRL) is a sustainable development skill that involves learners actively monitoring and adjusting their learning processes, which is essential for lifelong learning. Learning feedback plays a crucial role in SRL by aiding in self-observation and self-judgment. In this context, large language models (LLMs), with their ability to use human language and continuously interact with learners, not only provide personalized feedback but also offer a data-driven approach to sustainable development in education. By leveraging real-time data, LLMs have the potential to deliver more effective and interactive feedback that enhances both individual learning experiences and scalable, long-term educational strategies. Therefore, this study utilized a quasi-experimental design to examine the effects of LLM-based feedback on learners’ SRL, aiming to explore how this data-driven application could support learners’ sustainable development in computer programming learning. The findings indicate that LLM-based feedback significantly improves learners’ SRL by providing tailored, interactive support that enhances motivation and metacognitive strategies. Additionally, learners receiving LLM-based feedback demonstrated better academic performance, suggesting that these models can effectively support learners’ sustainable development in computer programming learning. However, the study acknowledges limitations, including the short experimental period and the initial unfamiliarity with LLM tools, which may have influenced the results. Future research should focus on refining LLM integration, exploring the impact of different feedback types, and extending the application of these tools to other educational contexts. Full article
(This article belongs to the Special Issue Advances in Data-Driven Artificial Intelligence)
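A minimal sketch of how LLM-based feedback for self-regulated learning could be generated; the prompt wording, model name, and OpenAI client are assumptions standing in for whatever stack the study used.

```python
from openai import OpenAI  # any chat-style LLM client would do

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SRL_PROMPT = (
    "You are a programming tutor. Give feedback that supports self-regulated "
    "learning: comment on the learner's goal, strategy, and self-monitoring, "
    "and end with one reflective question. Do not hand over a full solution."
)

def srl_feedback(code: str, task: str, model: str = "gpt-4o-mini") -> str:
    """Return SRL-oriented feedback for a submission (prompt wording assumed)."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SRL_PROMPT},
            {"role": "user", "content": f"Task: {task}\n\nSubmission:\n{code}"},
        ],
    )
    return response.choices[0].message.content

print(srl_feedback("def mean(xs): return sum(xs)/len(xs)", "Compute a safe mean"))
```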

30 pages, 6901 KiB  
Article
EPRNG: Effective Pseudo-Random Number Generator on the Internet of Vehicles Using Deep Convolution Generative Adversarial Network
by Chenyang Fei, Xiaomei Zhang, Dayu Wang, Haomin Hu, Rong Huang and Zejie Wang
Information 2025, 16(1), 21; https://doi.org/10.3390/info16010021 - 3 Jan 2025
Viewed by 493
Abstract
With the increasing connectivity and automation on the Internet of Vehicles, safety, security, and privacy have become stringent challenges. In the last decade, several cryptography-based protocols have been proposed as intuitive solutions to protect vehicles from information leakage and intrusions. Before encryption keys can be generated, a random number generator (RNG) serves as an important component in cybersecurity. Several deep learning-based RNGs have been deployed to train on initial values and generate pseudo-random numbers. However, interference from real, unpredictable driving environments renders such systems unreliable due to their low-randomness outputs. Furthermore, dynamics in the training process make these methods subject to training instability and mode collapse through overfitting. In this paper, we propose an Effective Pseudo-Random Number Generator (EPRNG) that exploits a deep convolution generative adversarial network (DCGAN)-based approach, using our processed vehicle datasets and an entropy-driven stopping method during training, for the generation of pseudo-random numbers. Our pipeline starts from the vehicle data source, stitching images and adding noise to enhance their entropy before feeding them into our network. In addition, we design an entropy-driven stopping method that enables model training to stop at the optimal epoch, preventing overfitting. The evaluation results indicate that our entropy-driven stopping method can effectively generate pseudo-random numbers in a DCGAN. Our numerical experiments on well-known test suites (NIST, ENT) demonstrate the effectiveness of the developed approach in high-quality random number generation for the IoV. Furthermore, the PRNGs are successfully applied to image encryption, with encryption performance metrics close to ideal values. Full article
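The entropy-driven stopping idea can be sketched as follows: track the Shannon entropy of bytes sampled from the generator after each epoch and halt once it plateaus near the 8 bits-per-byte ideal. The threshold and patience values here are illustrative, not the paper's settings.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 is ideal for random bytes)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def should_stop(entropy_history, target=7.99, patience=3):
    """Stop once entropy has stayed at or above the target for `patience`
    consecutive epochs, rather than training to a fixed epoch count."""
    recent = entropy_history[-patience:]
    return len(recent) == patience and all(e >= target for e in recent)

# Each epoch: sample bytes from the generator, record their entropy, and stop
# at the first epoch where the criterion holds (values are illustrative).
history = [7.21, 7.85, 7.991, 7.993, 7.995]
print(should_stop(history))   # True
```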

30 pages, 388 KiB  
Review
Advanced Machine Learning and Deep Learning Approaches for Estimating the Remaining Life of EV Batteries—A Review
by Daniel H. de la Iglesia, Carlos Chinchilla Corbacho, Jorge Zakour Dib, Vidal Alonso-Secades and Alfonso J. López Rivero
Batteries 2025, 11(1), 17; https://doi.org/10.3390/batteries11010017 - 3 Jan 2025
Viewed by 497
Abstract
This systematic review presents a critical analysis of advanced machine learning (ML) and deep learning (DL) approaches for predicting the remaining useful life (RUL) of electric vehicle (EV) batteries. Conducted in accordance with PRISMA guidelines and using a novel adaptation of the Downs and Black (D&B) scale, this study evaluates 89 research papers and provides insights into the evolving landscape of RUL estimation. Our analysis reveals an evolving landscape of methodological approaches, with different techniques showing distinct capabilities in capturing complex degradation patterns in EV batteries. While recent years have seen increased adoption of DL methods, the effectiveness of different approaches varies significantly based on application context and data characteristics. However, we also uncover critical challenges, including a lack of standardized evaluation metrics, prevalent overfitting problems, and limited dataset sizes, that hinder the field’s progress. To address these, we propose a comprehensive set of evaluation metrics and emphasize the need for larger and more diverse datasets. The review introduces an innovative clustering approach that provides a nuanced understanding of research trends and methodological gaps. In addition, we discuss the ethical implications of DL in RUL estimation, addressing concerns about privacy and algorithmic bias. By synthesizing current knowledge, identifying key research directions, and suggesting methodological improvements, this review serves as a central guide for researchers and practitioners in the rapidly evolving field of EV battery management. It not only contributes to the advancement of RUL estimation techniques but also sets a new standard for conducting systematic reviews in technology-driven fields, paving the way for more sustainable and efficient EV technologies. Full article
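In the spirit of the standardized evaluation the review calls for, a small sketch of commonly reported RUL metrics: RMSE, MAE, and the PHM/C-MAPSS-style asymmetric score that penalizes late predictions more heavily. These are representative choices, not the review's proposed metric set.

```python
import numpy as np

def rul_metrics(y_true, y_pred):
    """Common RUL evaluation metrics; the asymmetric score follows the
    PHM/C-MAPSS convention of penalizing late predictions more heavily."""
    d = np.asarray(y_pred, float) - np.asarray(y_true, float)  # positive = late
    return {
        "rmse": float(np.sqrt(np.mean(d ** 2))),
        "mae": float(np.mean(np.abs(d))),
        "phm_score": float(np.sum(np.where(d < 0,
                                           np.exp(-d / 13) - 1,
                                           np.exp(d / 10) - 1))),
    }

print(rul_metrics([100, 80, 60], [95, 88, 60]))
```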

26 pages, 12514 KiB  
Article
Reconstruction and Prediction of Chaotic Time Series with Missing Data: Leveraging Dynamical Correlations Between Variables
by Jingchan Lv, Hongcun Mao, Yu Wang and Zhihai Yao
Mathematics 2025, 13(1), 152; https://doi.org/10.3390/math13010152 - 3 Jan 2025
Viewed by 431
Abstract
Although data-driven machine learning methods have been successfully applied to predict complex nonlinear dynamics, forecasting future evolution based on incomplete past information remains a significant challenge. This paper proposes a novel data-driven approach that leverages the dynamical relationships among variables. By integrating Non-Stationary Transformers with LightGBM, we construct a robust model where LightGBM builds a fitting function to capture and simulate the complex coupling relationships among variables in dynamically evolving chaotic systems. This approach enables the reconstruction of missing data, restoring sequence completeness and overcoming the limitations of existing chaotic time series prediction methods in handling missing data. We validate the proposed method by predicting the future evolution of variables with missing data in both dissipative and conservative chaotic systems. Experimental results demonstrate that the model maintains stability and effectiveness even with increasing missing rates, particularly in the range of 30% to 50%, where prediction errors remain relatively low. Furthermore, the feature importance extracted by the model aligns closely with the underlying dynamic characteristics of the chaotic system, enhancing the method’s interpretability and reliability. This research offers a practical and theoretically sound solution to the challenges of predicting chaotic systems with incomplete datasets. Full article
(This article belongs to the Special Issue Statistical Analysis and Data Science for Complex Data)
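A minimal sketch of the LightGBM half of the approach: learn the coupling z = f(x, y) from the observed portion of a chaotic trajectory, then reconstruct the missing values. The Lorenz system and the 40% missing rate are stand-ins for the paper's setups.

```python
import numpy as np
import lightgbm as lgb

def lorenz(n=5000, dt=0.01, s=10.0, r=28.0, b=8 / 3):
    """Euler-integrated Lorenz trajectory as a stand-in chaotic system."""
    xyz = np.empty((n, 3))
    xyz[0] = (1.0, 1.0, 1.0)
    for i in range(n - 1):
        x, y, z = xyz[i]
        xyz[i + 1] = xyz[i] + dt * np.array(
            [s * (y - x), x * (r - z) - y, x * y - b * z])
    return xyz

data = lorenz()
mask = np.random.rand(len(data)) < 0.4          # 40% of z is missing

# Exploit the coupling among variables: learn z = f(x, y) from observed rows,
# then reconstruct the missing z values to restore sequence completeness.
model = lgb.LGBMRegressor(n_estimators=300)
model.fit(data[~mask, :2], data[~mask, 2])
data[mask, 2] = model.predict(data[mask, :2])
```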

24 pages, 5314 KiB  
Article
A Methodological Framework for Business Decisions with Explainable AI and the Analytic Hierarchical Process
by Gabriel Marín Díaz, Raquel Gómez Medina and José Alberto Aijón Jiménez
Processes 2025, 13(1), 102; https://doi.org/10.3390/pr13010102 - 3 Jan 2025
Viewed by 503
Abstract
In today’s data-driven business landscape, effective and transparent decision making is essential to maintaining a competitive advantage, especially in customer service in B2B environments. This study presents a methodological framework that integrates Explainable Artificial Intelligence (XAI), C-means clustering, and the Analytic Hierarchical Process (AHP) to improve strategic decision making in business environments. The framework addresses the need to obtain interpretable information from machine learning predictions and to prioritize key factors for decision making. C-means clustering enables flexible customer segmentation based on interaction patterns, while XAI provides transparency into model outputs, allowing support teams to understand the factors influencing each recommendation. The AHP is then applied to prioritize criteria within each customer segment, aligning support actions with organizational goals. Tested with real customer interaction data, this integrated approach proved effective in accurately segmenting customers, predicting support needs, and optimizing resource allocation. The combined use of XAI and the AHP ensures that business decisions are data-driven, interpretable, and aligned with the company’s strategic objectives, making this framework relevant for companies seeking to improve their customer service in complex B2B contexts. Future research will explore the application of the proposed model in different business processes. Full article
(This article belongs to the Section Advanced Digital and Other Processes)
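As an illustration of the AHP step, the sketch below derives priority weights from a pairwise comparison matrix via its principal eigenvector and checks Saaty's consistency ratio; the three criteria and the judgments are hypothetical.

```python
import numpy as np

SAATY_RI = {3: 0.58, 4: 0.90, 5: 1.12}   # random indices for small matrices

def ahp_weights(pairwise: np.ndarray):
    """Priority weights from the principal eigenvector, plus Saaty's
    consistency ratio (CR < 0.1 is conventionally acceptable)."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = pairwise.shape[0]
    ci = (vals[k].real - n) / (n - 1)     # consistency index
    return w, ci / SAATY_RI[n]

# Hypothetical support criteria: response time vs. resolution quality vs. cost.
P = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])
weights, cr = ahp_weights(P)
print(weights.round(3), f"CR={cr:.3f}")
```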

24 pages, 3468 KiB  
Article
Adaptive Real-Time Translation Assistance Through Eye-Tracking
by Dimosthenis Minas, Eleanna Theodosiou, Konstantinos Roumpas and Michalis Xenos
AI 2025, 6(1), 5; https://doi.org/10.3390/ai6010005 - 2 Jan 2025
Viewed by 666
Abstract
This study introduces the Eye-tracking Translation Software (ETS), a system that leverages eye-tracking data and real-time translation to enhance reading flow for non-native language users in complex, technical texts. By measuring fixation duration to detect moments of cognitive load, ETS selectively provides translations, maintaining reading flow and engagement without undermining language learning. The key technological components include a desktop eye-tracker integrated with a custom Python-based application. Through a user-centered design, ETS dynamically adapts to individual reading needs, reducing cognitive strain by offering word-level translations when needed. A study involving 53 participants assessed ETS’s impact on reading speed, fixation duration, and user experience, with findings indicating improved comprehension and reading efficiency. Results demonstrated that gaze-based adaptations significantly improved participants’ reading experience and reduced cognitive load. Participants rated ETS’s usability positively and expressed preferences for customization, such as pop-up placement and sentence-level translations. Future work will integrate AI-driven adaptations, allowing the system to adjust based on user proficiency and reading behavior. The study contributes to the growing evidence of eye-tracking’s potential in educational and professional applications, offering a flexible, personalized approach to reading assistance that balances language exposure with real-time support. Full article
(This article belongs to the Special Issue Machine Learning for HCI: Cases, Trends and Challenges)
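The core trigger logic can be sketched in a few lines: a translation pops up only when gaze dwells on one word past a cognitive-load threshold. The threshold value and the word-level dictionary below are placeholders, not the study's parameters.

```python
FIXATION_THRESHOLD_MS = 600                # illustrative cognitive-load threshold
TRANSLATIONS = {"ambiguity": "ασάφεια"}    # placeholder word-level dictionary

class FixationMonitor:
    """Trigger a translation when gaze dwells on one word long enough."""
    def __init__(self):
        self.word, self.since = None, None

    def on_gaze(self, word: str, now_ms: float):
        if word != self.word:
            self.word, self.since = word, now_ms   # gaze moved to a new word
            return None
        if now_ms - self.since >= FIXATION_THRESHOLD_MS:
            self.since = now_ms                    # re-arm after firing
            return TRANSLATIONS.get(word)
        return None

monitor = FixationMonitor()
for t in (0, 200, 450, 700):                       # simulated gaze samples (ms)
    hint = monitor.on_gaze("ambiguity", t)
    if hint:
        print(f"show pop-up: {hint}")
```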

22 pages, 5922 KiB  
Article
Predictive Modeling and Experimental Analysis of Cyclic Shear Behavior in Sand–Fly Ash Mixtures
by Özgür Yıldız and Ali Fırat Çabalar
Appl. Sci. 2025, 15(1), 353; https://doi.org/10.3390/app15010353 - 2 Jan 2025
Viewed by 324
Abstract
This study presents a comprehensive investigation into the cyclic shear behavior of sand–fly ash mixtures through experimental and data-driven modeling approaches. Cyclic direct shear tests were conducted on mixtures containing fly ash at 0%, 2.5%, 5%, 10%, 15%, and 20% by weight to examine the influence of fly ash content on shear behavior under cyclic loading conditions. The tests were carried out under a constant stress of 100 kPa to simulate field-relevant stress conditions. Results revealed that fly ash initially reduces shear strength at lower additive contents, but shear strength then increases, reaching a maximum at 20% fly ash content. The findings highlight the trade-offs in mechanical behavior associated with varying fly ash proportions. To enhance the understanding of cyclic shear behavior, a Nonlinear Autoregressive Model with External Input (NARX) was employed. Using data from the loading cycles as input, the NARX model was trained to predict the final shear response under cyclic conditions. The model demonstrated exceptional predictive performance, achieving a coefficient of determination (R2) of 0.99, showcasing its robustness in forecasting cyclic shear performance based on mixture composition. The insights derived from this research underscore the potential of incorporating fly ash in sand mixtures for soil stabilization in geotechnical engineering. Furthermore, the integration of advanced machine learning techniques such as NARX models offers a powerful tool for predicting the behavior of soil mixtures, facilitating more effective and data-driven decision-making in geotechnical applications. Ultimately, this study not only advances the understanding of cyclic shear behavior in fly ash–sand mixtures but also provides a framework for employing data-driven methodologies to address complex geotechnical challenges. Full article
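For readers unfamiliar with NARX models, a compact sketch: the regressors are lagged exogenous inputs and past outputs, and a small network maps them to the next response. The synthetic cyclic-shear signal, lag orders, and network size are assumptions, not the study's data or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def narx_features(u, y, nu=3, ny=3):
    """Stack lagged exogenous inputs u and past outputs y (NARX regressors)."""
    start = max(nu, ny)
    X = [np.r_[u[t - nu:t], y[t - ny:t]] for t in range(start, len(y))]
    return np.array(X), y[start:]

# Synthetic stand-in: cyclic shear displacement (input) and stress (output).
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)                                    # imposed cyclic displacement
y = 0.8 * np.tanh(2 * u) + 0.05 * np.random.randn(t.size)

X, target = narx_features(u, y)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, target)
print("R^2 =", model.score(X, target))
```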

15 pages, 1209 KiB  
Article
Development and Validation of a Machine Learning Model for the Prediction of Bloodstream Infections in Patients with Hematological Malignancies and Febrile Neutropenia
by Antonio Gallardo-Pizarro, Christian Teijón-Lumbreras, Patricia Monzo-Gallo, Tommaso Francesco Aiello, Mariana Chumbita, Olivier Peyrony, Emmanuelle Gras, Cristina Pitart, Josep Mensa, Jordi Esteve, Alex Soriano and Carolina Garcia-Vidal
Antibiotics 2025, 14(1), 13; https://doi.org/10.3390/antibiotics14010013 - 28 Dec 2024
Viewed by 1026
Abstract
Background/Objectives: The rise of multidrug-resistant (MDR) infections demands personalized antibiotic strategies for febrile neutropenia (FN) in hematological malignancies. This study investigates machine learning (ML) for identifying patient profiles with increased susceptibility to bloodstream infections (BSI) during FN onset, aiming to tailor treatment approaches. Methods: From January 2020 to June 2022, we used the unsupervised ML algorithm KAMILA to analyze data from hospitalized hematological malignancy patients. Eleven features categorized clinical phenotypes and determined BSI and multidrug-resistant Gram-negative bacilli (MDR-GNB) prevalences at FN onset. Model performance was evaluated with a validation cohort from July 2022 to March 2023. Results: Among 462 FN episodes analyzed in the development cohort, 116 (25.1%) had BSIs. KAMILA’s stratification identified three risk clusters: Cluster 1 (low risk), Cluster 2 (intermediate risk), and Cluster 3 (high risk). Cluster 2 (28.4% of episodes) and Cluster 3 (43.7%) exhibited higher BSI rates of 26.7% and 37.6% and GNB BSI rates of 13.4% and 19.3%, respectively. Cluster 3 had a higher incidence of MDR-GNB BSIs, accounting for 75% of all MDR-GNB BSIs. Cluster 1 (27.9% of episodes) showed a lower BSI risk (<1%) with no GNB infections. Validation cohort results were similar: Cluster 3 had a BSI rate of 38.1%, including 78% of all MDR-GNB BSIs, while Cluster 1 had no GNB-related BSIs. Conclusions: Unsupervised ML-based risk stratification enhances evidence-driven decision-making for empiric antibiotic therapies at FN onset, crucial in an era of rising multi-drug resistance. Full article
(This article belongs to the Special Issue Nosocomial Infections and Complications in ICU Settings)
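A sketch of the downstream stratification step, computing per-cluster BSI and MDR-GNB prevalence with pandas. The cluster labels would come from KAMILA (an R implementation for mixed continuous and categorical features), and the episode records here are invented for illustration.

```python
import pandas as pd

# Illustrative episode-level data; the cluster column stands in for the
# assignments an unsupervised algorithm such as KAMILA would produce.
episodes = pd.DataFrame({
    "cluster": [1, 1, 2, 2, 2, 3, 3, 3, 3],
    "bsi":     [0, 0, 1, 0, 1, 1, 1, 0, 1],
    "mdr_gnb": [0, 0, 0, 0, 1, 1, 0, 0, 1],
})

# Per-cluster prevalence table of the kind that informs empiric antibiotics.
risk = episodes.groupby("cluster").agg(
    episodes=("bsi", "size"),
    bsi_rate=("bsi", "mean"),
    mdr_gnb_rate=("mdr_gnb", "mean"),
)
print(risk)
```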

16 pages, 2354 KiB  
Article
Porter 6: Protein Secondary Structure Prediction by Leveraging Pre-Trained Language Models (PLMs)
by Wafa Alanazi, Di Meng and Gianluca Pollastri
Int. J. Mol. Sci. 2025, 26(1), 130; https://doi.org/10.3390/ijms26010130 - 27 Dec 2024
Viewed by 421
Abstract
Accurate protein secondary structure prediction (PSSP) is crucial for understanding protein function, which is foundational to advancements in drug development, disease treatment, and biotechnology. Researchers gain critical insights into protein folding and function within cells by predicting protein secondary structures. The advent of deep learning models, capable of processing complex sequence data and identifying meaningful patterns, offers substantial potential to enhance the accuracy and efficiency of protein structure predictions. In particular, recent breakthroughs in deep learning—driven by the integration of natural language processing (NLP) algorithms—have significantly advanced the field of protein research. Inspired by the remarkable success of NLP techniques, this study harnesses the power of pre-trained language models (PLMs) to advance PSSP. We conduct a comprehensive evaluation of various deep learning models trained on distinct sequence embeddings, including one-hot encoding and PLM-based approaches such as ProtTrans and ESM-2, to develop a cutting-edge prediction system optimized for accuracy and computational efficiency. Our proposed model, Porter 6, is an ensemble of CBRNN-based predictors, leveraging the protein language model ESM-2 as input features. Porter 6 achieves outstanding performance on large-scale, independent test sets. On a 2022 test set, the model attains an impressive 86.60% accuracy in three-state (Q3) and 76.43% in eight-state (Q8) classifications. When tested on a more recent 2024 test set, Porter 6 maintains robust performance, achieving 84.56% in Q3 and 74.18% in Q8 classifications. This represents a significant 3% improvement over its predecessor, outperforming or matching state-of-the-art approaches in the field. Full article
(This article belongs to the Special Issue Advanced Research in Biomolecular Design for Medical Applications)
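A minimal sketch of the feature-extraction step: per-residue embeddings from a public ESM-2 checkpoint via Hugging Face transformers, of the kind a downstream secondary-structure classifier would consume. The checkpoint size and library choice are assumptions, not necessarily Porter 6's setup.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# A small public ESM-2 checkpoint; larger variants trade speed for accuracy.
name = "facebook/esm2_t12_35M_UR50D"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state   # (1, length + 2, dim)

# Drop the BOS/EOS tokens so features align one-to-one with residues.
features = hidden[0, 1:-1]
print(features.shape)        # (len(sequence), embedding_dim) -> classifier input
```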

22 pages, 11697 KiB  
Article
Generalizable Solar Irradiance Prediction for Battery Operation Optimization in IoT-Based Microgrid Environments
by Ray Colucci and Imad Mahgoub
J. Sens. Actuator Netw. 2025, 14(1), 3; https://doi.org/10.3390/jsan14010003 - 27 Dec 2024
Viewed by 456
Abstract
The reliance on fossil fuels as a primary global energy source has significantly impacted the environment, contributing to pollution and climate change. A shift towards renewable energy sources, particularly solar power, is underway, though these sources face challenges due to their inherent intermittency. Battery energy storage systems (BESS) play a crucial role in mitigating this intermittency, ensuring a reliable power supply when solar generation is insufficient. The objective of this paper is to accurately predict the solar irradiance for battery operation optimization in microgrids. Using satellite data from weather sensors, we trained machine learning models to enhance solar irradiance predictions. We evaluated five popular machine learning algorithms and applied ensemble methods, achieving a substantial improvement in predictive accuracy. Our model outperforms previous works using the same dataset and has been validated to generalize across diverse geographical locations in Florida. This work demonstrates the potential of AI-assisted data-driven approaches to support sustainable energy management in solar-powered IoT-based microgrids. Full article
(This article belongs to the Special Issue AI-Assisted Machine-Environment Interaction)
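A hedged sketch of the ensembling step with scikit-learn: several regressors combined by a VotingRegressor over placeholder weather features. The feature set, model choices, and synthetic data are assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              VotingRegressor)
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Placeholder weather-sensor features: cloud cover, humidity, temperature, hour.
rng = np.random.default_rng(0)
X = rng.random((2000, 4))
y = 1000 * (1 - X[:, 0]) * np.sin(np.pi * X[:, 3]) + 20 * rng.standard_normal(2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Averaging diverse base learners typically cuts variance vs. any single one.
ensemble = VotingRegressor([
    ("gbr", GradientBoostingRegressor()),
    ("rf", RandomForestRegressor(n_estimators=200)),
    ("ridge", Ridge()),
]).fit(X_tr, y_tr)

print("held-out R^2:", ensemble.score(X_te, y_te))
```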

24 pages, 6981 KiB  
Article
Machine-Learning-Driven Optimization of Cold Spray Process Parameters: Robust Inverse Analysis for Higher Deposition Efficiency
by Abderrachid Hamrani, Aditya Medarametla, Denny John and Arvind Agarwal
Coatings 2025, 15(1), 12; https://doi.org/10.3390/coatings15010012 - 26 Dec 2024
Viewed by 575
Abstract
Cold spray technology has become essential for industries requiring efficient material deposition, yet achieving optimal deposition efficiency (DE) presents challenges due to complex interactions among process parameters. This study developed a two-stage machine learning (ML) framework incorporating Bayesian optimization to address these challenges. In the first stage, a classification model predicted the occurrence of deposition, while the second stage used a regression model to forecast DE values given deposition presence. The approach was validated on Aluminum 6061 data, demonstrating its capability to accurately predict DE and identify optimal process parameters for target efficiencies. Model interpretability was enhanced with SHAP analysis, which identified gas temperature and gas type as primary factors affecting DE. Scenario-based inverse analysis further validated the framework by comparing model-predicted parameters to literature data, revealing high accuracy in replicating real-world conditions. Notably, substituting hydrogen as the gas carrier reduced the required gas temperature and pressure for high DE values, suggesting economic and operational benefits over helium and nitrogen. This study demonstrates the effectiveness of AI-driven solutions in optimizing cold spray processes, contributing to more efficient and practical approaches in material deposition. Full article
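The two-stage structure can be sketched as a classifier gating a regressor: deposition efficiency is predicted only for parameter sets where deposition is expected, and is zero otherwise. The toy process parameters and response surface below are assumptions, not the study's data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

# Placeholder process parameters: gas temperature, pressure, gas type (encoded).
rng = np.random.default_rng(1)
X = rng.random((1500, 3))
deposited = (X[:, 0] + 0.3 * X[:, 1] > 0.6).astype(int)       # toy physics
de = np.where(deposited, 100 * X[:, 0] * (0.5 + 0.5 * X[:, 1]), 0.0)

# Stage 1: does any deposition occur at all?
clf = GradientBoostingClassifier().fit(X, deposited)
# Stage 2: regress DE only on parameter sets where deposition happened.
reg = GradientBoostingRegressor().fit(X[deposited == 1], de[deposited == 1])

def predict_de(params):
    """Two-stage prediction: zero unless the classifier expects deposition."""
    params = np.atleast_2d(params)
    mask = clf.predict(params) == 1
    out = np.zeros(len(params))
    out[mask] = reg.predict(params[mask])
    return out

print(predict_de([[0.9, 0.8, 0.2], [0.1, 0.2, 0.9]]))
```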

17 pages, 7587 KiB  
Article
SGSAFormer: Spike Gated Self-Attention Transformer and Temporal Attention
by Shouwei Gao, Yu Qin, Ruixin Zhu, Zirui Zhao, Hao Zhou and Zihao Zhu
Electronics 2025, 14(1), 43; https://doi.org/10.3390/electronics14010043 - 26 Dec 2024
Viewed by 437
Abstract
Spiking neural networks (SNNs), a neural network model structure inspired by the human brain, have emerged as a more energy-efficient deep learning paradigm due to their unique spike-based transmission and event-driven characteristics. Combining SNNs with the Transformer model significantly enhances SNNs’ performance while maintaining good energy efficiency. The gating mechanism, which dynamically adjusts input data and controls information flow, plays an important role in artificial neural networks (ANNs). Here, we introduce this gating mechanism into SNNs and propose a novel spike Transformer model, called SGSAFormer, based on the Spikformer network architecture. We introduce the Spike Gated Linear Unit (SGLU) module to improve the Multi-Layer Perceptron (MLP) module in SNNs by adding a gating mechanism that enhances the model’s expressive power. We also incorporate Spike Gated Self-Attention (SGSA) to strengthen the network’s attention mechanism, improving its ability to capture temporal information and dynamic processing. Additionally, we propose a Temporal Attention (TA) module, which selects new filters for the input data along the temporal dimension and can substantially reduce energy consumption with only a slight decrease in accuracy. To validate the effectiveness of our approach, we conducted extensive experiments on several neuromorphic datasets. Our model outperforms other state-of-the-art models. Full article
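A sketch of the gating idea in spiking form: a GLU-style block whose gate branch emits binary spikes through a surrogate-gradient Heaviside function. The rectangular surrogate and the exact block layout are common choices assumed for illustration, not necessarily the paper's design.

```python
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient (common choice)."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() < 0.5).float()

class SGLU(nn.Module):
    """GLU-style MLP block where the gate branch emits binary spikes,
    sketching the Spike Gated Linear Unit idea (exact form assumed)."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.value = nn.Linear(dim, hidden)
        self.gate = nn.Linear(dim, hidden)
        self.out = nn.Linear(hidden, dim)

    def forward(self, x):
        spikes = SpikeFn.apply(self.gate(x))     # binary gate controls flow
        return self.out(self.value(x) * spikes)

block = SGLU(dim=64, hidden=128)
print(block(torch.randn(2, 16, 64)).shape)       # (batch, time, dim)
```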
