Next Issue: Volume 17, October
Previous Issue: Volume 17, August

Algorithms, Volume 17, Issue 9 (September 2024) – 49 articles

Cover Story (view full-size image): The emergence of deep learning has sparked notable strides in the quality of synthetic media. Yet, as photorealism reaches new heights, the line between generated and authentic images blurs, raising concerns about the dissemination of counterfeit or manipulated content online. Consequently, there is a pressing need to develop automated tools capable of effectively distinguishing synthetic images, especially those portraying faces, which are among the most commonly encountered cases. This article presents a novel approach to synthetic face discrimination, leveraging deep learning-based image compression and predominantly utilizing the quality metrics of an image to determine its authenticity. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
35 pages, 5643 KiB  
Article
MRSO: Balancing Exploration and Exploitation through Modified Rat Swarm Optimization for Global Optimization
by Hemin Sardar Abdulla, Azad A. Ameen, Sarwar Ibrahim Saeed, Ismail Asaad Mohammed and Tarik A. Rashid
Algorithms 2024, 17(9), 423; https://doi.org/10.3390/a17090423 - 23 Sep 2024
Viewed by 548
Abstract
The rapid advancement of intelligent technology has led to the development of optimization algorithms that leverage natural behaviors to address complex issues. Among these, the Rat Swarm Optimizer (RSO), inspired by rats’ social and behavioral characteristics, has demonstrated potential in various domains, although its convergence precision and exploration capabilities are limited. To address these shortcomings, this study introduces the Modified Rat Swarm Optimizer (MRSO), designed to enhance the balance between exploration and exploitation. The MRSO incorporates unique modifications to improve search efficiency and robustness, making it suitable for challenging engineering problems such as Welded Beam, Pressure Vessel, and Gear Train Design. Extensive testing with classical benchmark functions shows that the MRSO significantly improves performance, avoiding local optima and achieving higher accuracy in six out of nine multimodal functions and in all seven fixed-dimension multimodal functions. In the CEC 2019 benchmarks, the MRSO outperforms the standard RSO in six out of ten functions, demonstrating superior global search capabilities. When applied to engineering design problems, the MRSO consistently delivers better average results than the RSO, proving its effectiveness. Additionally, we compared our approach with eight recent and well-known algorithms using both classical and CEC-2019 benchmarks. The MRSO outperformed each of these algorithms, achieving superior results in six out of 23 classical benchmark functions and in four out of ten CEC-2019 benchmark functions. These results further demonstrate the MRSO’s significant contributions as a reliable and efficient tool for optimization tasks in engineering applications. Full article

35 pages, 13085 KiB  
Article
Cubic q-Bézier Triangular Patch for Scattered Data Interpolation and Its Algorithm
by Owen Tamin and Samsul Ariffin Abdul Karim
Algorithms 2024, 17(9), 422; https://doi.org/10.3390/a17090422 - 23 Sep 2024
Viewed by 298
Abstract
This paper presents an approach to scattered data interpolation using q-Bézier triangular patches via an efficient algorithm. While existing studies have formed q-Bézier triangular patches through convex combination, their application to scattered data interpolation has not been previously explored. Therefore, this study aims to extend the use of q-Bézier triangular patches to scattered data interpolation by achieving C1 continuity throughout the data points. We test the proposed scheme using both established data points and real-life engineering problems. We compared the performance of the proposed interpolation scheme with a well-known existing scheme by varying the q parameter. The comparison was based on visualization and error analysis. Numerical and graphical results were generated using MATLAB. The findings indicate that the proposed scheme outperforms the existing scheme, demonstrating a higher coefficient of determination (R2), smaller root mean square error (RMSE), and faster central processing unit (CPU) time. These results highlight the potential of the proposed q-Bézier triangular patches scheme for more accurate and reliable scattered data interpolation via the proposed algorithm. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)
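
For readers unfamiliar with triangular Bézier patches, the sketch below evaluates a classical cubic Bézier triangular patch from its ten control values using the bivariate Bernstein basis in barycentric coordinates. It is a minimal illustration only: the paper's q-Bézier basis, its convex-combination construction, and the C1 scattered-data scheme are not reproduced, and the control values are hypothetical.

```python
import numpy as np
from math import factorial

def cubic_bezier_triangle(control, u, v, w):
    """Evaluate a degree-3 Bezier triangular patch at barycentric coordinates
    (u, v, w) with u + v + w = 1.  `control` maps multi-indices (i, j, k),
    i + j + k = 3, to control-point values."""
    value = 0.0
    for (i, j, k), p in control.items():
        bernstein = factorial(3) / (factorial(i) * factorial(j) * factorial(k))
        value += bernstein * (u ** i) * (v ** j) * (w ** k) * p
    return value

# Hypothetical scalar control net over one triangle (10 control values for degree 3).
control = {(3, 0, 0): 1.0, (0, 3, 0): 2.0, (0, 0, 3): 0.5,
           (2, 1, 0): 1.2, (2, 0, 1): 0.9, (1, 2, 0): 1.6,
           (0, 2, 1): 1.8, (1, 0, 2): 0.7, (0, 1, 2): 1.1, (1, 1, 1): 1.3}

print(cubic_bezier_triangle(control, 1/3, 1/3, 1/3))  # patch value at the centroid
```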

11 pages, 297 KiB  
Article
Connected and Autonomous Vehicle Scheduling Problems: Some Models and Algorithms
by Evgeny R. Gafarov and Frank Werner
Algorithms 2024, 17(9), 421; https://doi.org/10.3390/a17090421 - 21 Sep 2024
Viewed by 368
Abstract
In this paper, we consider some problems that arise in connected and autonomous vehicle (CAV) systems. Their simplified variants can be formulated as scheduling problems. Therefore, scheduling solution algorithms can be used as a part of solution algorithms for real-world problems. For four variants of such problems, mathematical models and solution algorithms are presented. In particular, three polynomial algorithms and a branch and bound algorithm are developed. These CAV scheduling problems are considered in the literature for the first time. More complicated NP-hard scheduling problems related to CAVs can be considered in the future. Full article
(This article belongs to the Special Issue 2024 and 2025 Selected Papers from Algorithms Editorial Board Members)

23 pages, 5381 KiB  
Article
An Algorithm to Find the Shortest Path through Obstacles of Arbitrary Shapes and Positions in 2D
by Gilles Labonté
Algorithms 2024, 17(9), 420; https://doi.org/10.3390/a17090420 - 20 Sep 2024
Viewed by 594
Abstract
An algorithm is described to find the shortest route through a field of obstacles of arbitrary shapes and positions. It has the appreciable advantage of not having to find mathematical formulas to represent the obstacles: it works directly with a digital image of the terrain and is implemented solely with standard graphical functions. Key to this algorithm is the definition of digraphs, the edges of which are built with obstacle bitangents and border-enveloping convex arcs that incorporate the fundamental features of shortest paths. These graphs have a remarkably lower cardinality than those previously proposed to solve this problem; their edges are concatenations of sequences of what are individual edges and nodes in formerly defined graphs. Furthermore, a thorough analysis of the topology of the terrain yields a procedure to eliminate the edges that have no possibility of being part of the shortest path. The A* graph optimization algorithm is adapted to deal with this type of graph. A new, quite general theorem is proved that applies to all graphs in which the triangle inequality holds and allows one of the normal steps of the A* algorithm to be discarded. The effectiveness of the algorithm is demonstrated by calculating the shortest path for real, complex terrains with areas between 25 km² and 900 km². In all cases, the required calculation time is less than 0.6 s on a Core i7-10750H CPU @ 2.60 GHz laptop computer. Full article
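
The paper adapts A* to sparse digraphs built from obstacle bitangents. The authors' graph construction and their triangle-inequality theorem are not reproduced here; the sketch below is only a generic A* search over an explicit weighted graph, with straight-line distance as an admissible heuristic, on a small hypothetical node set.

```python
import heapq
import math

def a_star(graph, coords, start, goal):
    """Generic A* over a weighted graph.  `graph[u]` is a list of (v, cost) edges;
    `coords` gives 2D positions, so straight-line distance is an admissible
    heuristic for shortest Euclidean paths."""
    h = lambda n: math.dist(coords[n], coords[goal])
    open_set = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return g, path
        for nxt, cost in graph.get(node, []):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_set, (ng + h(nxt), ng, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical bitangent-style graph: nodes are obstacle tangent points plus start and goal.
coords = {"S": (0, 0), "A": (2, 1), "B": (2, -1), "G": (4, 0)}
graph = {"S": [("A", 2.3), ("B", 2.3)], "A": [("G", 2.3)], "B": [("G", 2.3)]}
print(a_star(graph, coords, "S", "G"))
```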

20 pages, 7057 KiB  
Article
Weather Condition Clustering for Improvement of Photovoltaic Power Plant Generation Forecasting Accuracy
by Kristina I. Haljasmaa, Andrey M. Bramm, Pavel V. Matrenin and Stanislav A. Eroshenko
Algorithms 2024, 17(9), 419; https://doi.org/10.3390/a17090419 - 20 Sep 2024
Viewed by 444
Abstract
Together with the growing interest in renewable energy sources within the strategies of various countries, the number of solar power plants keeps growing. However, managing optimal power generation for solar power plants has its own challenges. First comes the problem of work interruption and reduction in power generation. As the system must be fault-tolerant, the relevance and significance of short-term forecasting of solar power generation become crucial. Within the framework of this research, the applicability of different methods for short-term forecasting is explained. The main goal of the research is to show how to make the forecast more accurate and overcome the above-mentioned challenges using open-source data as features. A data clustering algorithm based on KMeans is proposed to train unique models for specific groups of data samples to improve the generation forecast accuracy. Based on practical calculations, machine learning models based on the Random Forest algorithm are selected, which have proven to be more efficient in predicting the generation of solar power plants. The proposed algorithm was successfully tested in practice, with an achieved accuracy of nearly 90%. Full article
(This article belongs to the Special Issue Algorithms for Time Series Forecasting and Classification)
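
A minimal sketch of the clustering-plus-forecasting idea described above: weather samples are grouped with KMeans and a separate Random Forest regressor is trained per cluster, so each prediction is routed to the model of its weather regime. The feature set, cluster count, and synthetic data are assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical open-source weather features: [irradiance, cloud cover, temperature].
X = rng.random((1000, 3))
y = 5.0 * X[:, 0] * (1 - 0.7 * X[:, 1]) + 0.1 * rng.standard_normal(1000)  # toy PV output

# 1) Cluster the weather conditions, 2) train one forest per cluster.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
models = {}
for c in range(3):
    mask = kmeans.labels_ == c
    models[c] = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[mask], y[mask])

# Prediction routes each new sample to the model of its weather cluster.
X_new = rng.random((5, 3))
labels = kmeans.predict(X_new)
y_pred = np.array([models[c].predict(x.reshape(1, -1))[0] for c, x in zip(labels, X_new)])
print(y_pred)
```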

40 pages, 11208 KiB  
Article
Mapping the Frontier: A Bibliometric Analysis of Artificial Intelligence Applications in Local and Regional Studies
by Camelia Delcea, Ionuț Nica, Ștefan Ionescu, Bianca Cibu and Horațiu Țibrea
Algorithms 2024, 17(9), 418; https://doi.org/10.3390/a17090418 - 20 Sep 2024
Viewed by 854
Abstract
This study aims to provide a comprehensive bibliometric analysis covering the common areas between artificial intelligence (AI) applications and research focused on local or regional contexts. The analysis covers the period between the year 2002 and the year 2023, utilizing data sourced from the Web of Science database. Employing the Bibliometrix package within RStudio and VOSviewer software, the study identifies a significant increase in AI-related publications, with an annual growth rate of 22.67%. Notably, key journals such as Remote Sensing, PLOS ONE, and Sustainability rank among the top contributing sources. From the perspective of prominent contributing affiliations, institutions like Duy Tan University, Ton Duc Thang University, and the Chinese Academy of Sciences emerge as leading contributors, with Vietnam, Portugal, and China being the countries with the highest citation counts. Furthermore, a word cloud analysis is able to highlight the recurring keywords, including “model”, “classification”, “prediction”, “logistic regression”, “innovation”, “performance”, “random forest”, “impact”, “machine learning”, “artificial intelligence”, and “deep learning”. The co-occurrence network analysis reveals five clusters, amongst them being “artificial neural network”, “regional development”, “climate change”, “regional economy”, “management”, “technology”, “risk”, and “fuzzy inference system”. Our findings support the fact that AI is increasingly employed to address complex regional challenges, such as resource management and urban planning. AI applications, including machine learning algorithms and neural networks, have become essential for optimizing processes and decision-making at the local level. The study concludes with the fact that while AI holds vast potential for transforming local and regional research, ongoing international collaboration and the development of adaptable AI models are essential for maximizing the benefits of these technologies. Such efforts will ensure the effective implementation of AI in diverse contexts, thereby supporting sustainable regional development. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation (2nd Edition))

63 pages, 2957 KiB  
Article
Hybrid Four Vector Intelligent Metaheuristic with Differential Evolution for Structural Single-Objective Engineering Optimization
by Hussam N. Fakhouri, Ahmad Sami Al-Shamayleh, Abdelraouf Ishtaiwi, Sharif Naser Makhadmeh, Sandi N. Fakhouri and Faten Hamad
Algorithms 2024, 17(9), 417; https://doi.org/10.3390/a17090417 - 20 Sep 2024
Viewed by 472
Abstract
Complex and nonlinear optimization challenges pose significant difficulties for traditional optimizers, which often struggle to consistently locate the global optimum within intricate problem spaces. To address these challenges, the development of hybrid methodologies is essential for solving complex, real-world, and engineering design problems. This paper introduces FVIMDE, a novel hybrid optimization algorithm that synergizes the Four Vector Intelligent Metaheuristic (FVIM) with Differential Evolution (DE). The FVIMDE algorithm is rigorously tested and evaluated across two well-known benchmark suites (i.e., CEC2017, CEC2022) and an additional set of 50 challenging benchmark functions. Comprehensive statistical analyses, including mean, standard deviation, and the Wilcoxon rank-sum test, are conducted to assess its performance. Moreover, FVIMDE is benchmarked against state-of-the-art optimizers, revealing its superior adaptability and robustness. The algorithm is also applied to solve five structural engineering challenges. The results highlight FVIMDE’s ability to outperform existing techniques across a diverse range of optimization problems, confirming its potential as a powerful tool for complex optimization tasks. Full article
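
FVIMDE itself is not public here, so the sketch below shows only the Differential Evolution half of the hybrid: a standard DE/rand/1/bin loop minimizing a stand-in sphere objective. The FVIM operators and the way the two methods are combined in the paper are not reproduced.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, iters=200, seed=0):
    """Plain DE/rand/1/bin; the FVIM hybridization from the paper is not reproduced."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)          # differential mutation
            cross = rng.random(dim) < CR                        # binomial crossover mask
            cross[rng.integers(dim)] = True                     # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            if (ft := f(trial)) < fit[i]:                       # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()

best_x, best_f = differential_evolution(lambda x: np.sum(x ** 2), bounds=[(-5, 5)] * 10)
print(best_f)
```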

47 pages, 2834 KiB  
Review
Advancements in Optimization: Critical Analysis of Evolutionary, Swarm, and Behavior-Based Algorithms
by Noor A. Rashed, Yossra H. Ali and Tarik A. Rashid
Algorithms 2024, 17(9), 416; https://doi.org/10.3390/a17090416 - 19 Sep 2024
Viewed by 526
Abstract
The research work on optimization has witnessed significant growth in the past few years, particularly within multi- and single-objective optimization algorithm areas. This study provides a comprehensive overview and critical evaluation of a wide range of optimization algorithms, from conventional methods to innovative metaheuristic techniques. The methods used for analysis include bibliometric analysis, keyword analysis, and content analysis, focusing on studies from the period 2000–2023. Databases such as IEEE Xplore, SpringerLink, and ScienceDirect were extensively utilized. Our analysis reveals that while traditional algorithms like evolutionary optimization (EO) and particle swarm optimization (PSO) remain popular, newer methods like the fitness-dependent optimizer (FDO) and learner performance-based behavior (LPBB) are gaining traction due to their adaptability and efficiency. The main conclusion emphasizes the importance of algorithmic diversity, benchmarking standards, and performance evaluation metrics, highlighting future research paths including the exploration of hybrid algorithms, use of domain-specific knowledge, and addressing scalability issues in multi-objective optimization. Full article
(This article belongs to the Special Issue Scheduling: Algorithms and Real-World Applications)

26 pages, 7481 KiB  
Article
Meshfree Variational-Physics-Informed Neural Networks (MF-VPINN): An Adaptive Training Strategy
by Stefano Berrone and Moreno Pintore
Algorithms 2024, 17(9), 415; https://doi.org/10.3390/a17090415 - 19 Sep 2024
Viewed by 501
Abstract
In this paper, we introduce a Meshfree Variational-Physics-Informed Neural Network. It is a Variational-Physics-Informed Neural Network that does not require the generation of the triangulation of the entire domain and that can be trained with an adaptive set of test functions. In order to generate the test space, we exploit an a posteriori error indicator and add test functions only where the error is higher. Four training strategies are proposed and compared. Numerical results show that the accuracy is higher than that of a Variational-Physics-Informed Neural Network trained with the same number of test functions but defined on a quasi-uniform mesh. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)

22 pages, 2570 KiB  
Article
Vulnerability Analysis of a Multilayer Logistics Network against Cascading Failure
by Tongyu Wu, Minjie Li and Shuangjiao Lin
Algorithms 2024, 17(9), 414; https://doi.org/10.3390/a17090414 - 19 Sep 2024
Viewed by 406
Abstract
One of the most challenging issues in contemporary complex network research is to understand the structure and vulnerability of multilayer networks, even though cascading failures in single networks have been widely studied in recent years. The goal of this work is to compare the similarities and differences between four single layers and to understand the implications of interdependencies among cities for the overall vulnerability of a multilayer global logistics network. In this paper, a global logistics network model, formulated as a multilayer network subject to cascading failures, is proposed and analyzed under different disruption scenarios. Two types of attack strategies, a highest-load attack and a lowest-load attack, are used to evaluate the vulnerability of the global logistics network and to further analyze the changes in its topological properties. For the multilayer network, the vulnerability of the single layers is compared as well. The results suggest that, compared with a single-layer global logistics network, a multilayer network has a higher vulnerability. In addition, the heterogeneity of networks plays an important role in the vulnerability of a multilayer network against targeted attacks. Protecting the most important nodes is critical to safeguarding against the potential "vulnerability" in the development of the global logistics network. The three-step response strategy of "Prewarning–Response–Postrepair" is the main pathway to improving the adjustment ability and adaptability of logistics hub cities in response to external shocks. These findings supplement and extend previous attack results on nodes and can thus help us better explain the vulnerability of different networks and provide insight into more tolerant, realistic complex system designs. Full article
(This article belongs to the Topic Complex Networks and Social Networks)
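
The cascading-failure mechanism can be illustrated with a standard load/capacity model on a single layer: node load is approximated by betweenness centrality, capacity is proportional to the initial load, and removing the highest- (or lowest-) load node triggers a cascade of overload failures. This Motter-Lai-style sketch with networkx is an assumption about the general model family; the paper's multilayer coupling and its specific load definition are not reproduced.

```python
import networkx as nx

def cascade_after_attack(G, alpha=0.2, attack="highest"):
    """Load = betweenness centrality, capacity = (1 + alpha) * initial load.
    Remove one node (highest- or lowest-load attack) and propagate overloads."""
    G = G.copy()
    load = nx.betweenness_centrality(G)
    capacity = {n: (1 + alpha) * load[n] for n in G}
    target = max(load, key=load.get) if attack == "highest" else min(load, key=load.get)
    G.remove_node(target)
    changed = True
    while changed:
        changed = False
        load = nx.betweenness_centrality(G)                 # loads redistribute after failures
        overloaded = [n for n in G if load[n] > capacity[n]]
        if overloaded:
            G.remove_nodes_from(overloaded)
            changed = True
    return G.number_of_nodes()

G0 = nx.barabasi_albert_graph(200, 2, seed=1)               # stand-in for one logistics layer
print("surviving nodes:", cascade_after_attack(G0, attack="highest"))
```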

16 pages, 2152 KiB  
Article
Research on Applying Deep Learning to Visual–Motor Integration Assessment Systems in Pediatric Rehabilitation Medicine
by Yu-Ting Tsai, Jin-Shyan Lee and Chien-Yu Huang
Algorithms 2024, 17(9), 413; https://doi.org/10.3390/a17090413 - 18 Sep 2024
Viewed by 487
Abstract
In pediatric rehabilitation medicine, manual assessment methods for visual–motor integration result in inconsistent scoring standards. To address these issues, incorporating artificial intelligence (AI) technology is a feasible approach that can reduce time and improve accuracy. Existing research on visual–motor integration scoring has proposed a framework based on convolutional neural networks (CNNs) for the Beery–Buktenica developmental test of visual–motor integration. However, as the number of training questions increases, the accuracy of this framework significantly decreases. This paper proposes a new architecture to reduce the number of features, channels, and overall model complexity. The architecture optimizes the input features by concatenating question numbers with answer features and selecting appropriate channel ratios, and it optimizes the output vector by designing the task as a multi-class classification. This paper also proposes a model named improved DenseNet. After experimentation, DenseNet201 was identified as the most suitable pre-trained model for this task and was used as the backbone architecture for improved DenseNet. Additionally, new fully connected layers were added for feature extraction and classification, allowing for specialized feature learning. The architecture can provide reasons for unscored results based on prediction results and decoding rules, offering directions for children's training. The final experimental results show that the proposed new architecture improves scoring accuracy by 12.8% for 6 question graphics and by 20.14% for 12 question graphics compared to the most relevant literature. The accuracy of the proposed new architecture surpasses the model frameworks of the most relevant literature, demonstrating the effectiveness of this approach in improving scoring accuracy and stability. Full article
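
A hedged sketch of the kind of architecture described above: a DenseNet201 backbone whose pooled image features are concatenated with an embedding of the question number before new fully connected layers produce multi-class scores. Layer sizes, the embedding dimension, and the class count are assumptions; the authors' channel-ratio selection and decoding rules are not reproduced.

```python
import torch
import torch.nn as nn
from torchvision import models

class ImprovedDenseNetSketch(nn.Module):
    """Hedged re-creation of the idea: DenseNet201 backbone, question-number
    embedding concatenated with pooled image features, new FC layers on top."""
    def __init__(self, num_questions=12, num_classes=4, emb_dim=16):
        super().__init__()
        backbone = models.densenet201(weights=None)        # pretrained weights optional
        self.features = backbone.features                  # convolutional trunk (1920 channels out)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.question_emb = nn.Embedding(num_questions, emb_dim)
        self.head = nn.Sequential(                          # new fully connected layers (sizes assumed)
            nn.Linear(1920 + emb_dim, 256), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(256, num_classes),
        )

    def forward(self, image, question_id):
        x = torch.relu(self.features(image))                # (B, 1920, H', W')
        x = self.pool(x).flatten(1)                         # (B, 1920) image features
        q = self.question_emb(question_id)                  # (B, emb_dim) question features
        return self.head(torch.cat([x, q], dim=1))          # multi-class scores

model = ImprovedDenseNetSketch()
scores = model(torch.randn(2, 3, 224, 224), torch.tensor([0, 5]))
print(scores.shape)   # torch.Size([2, 4])
```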

24 pages, 912 KiB  
Article
Compatibility Model between Encapsulant Compounds and Antioxidants by the Implementation of Machine Learning
by Juliana Quintana-Rojas, Rafael Amaya-Gómez and Nicolas Ratkovich
Algorithms 2024, 17(9), 412; https://doi.org/10.3390/a17090412 - 17 Sep 2024
Viewed by 433
Abstract
The compatibility between antioxidant compounds (ACs) and wall materials (WMs) is one of the most crucial aspects of the encapsulation process, as the encapsulated compounds’ stability depends on the affinity between the compounds, which is influenced by their chemical properties. A compatibility model between the encapsulant and antioxidant chemicals was built using machine learning (ML) to discover optimal matches without costly and time-consuming trial-and-error experiments. The attributes of the required antioxidant and wall-material components were collected, and two datasets were constructed. A tying process was then performed to connect both datasets and identify significant relationships between the parameters of ACs and WMs that define the compatibility or incompatibility of the compounds, which was also necessary to enrich the dataset by incorporating decoys. Next, a simple statistical analysis was conducted to examine the indicated correlations between variables, and a Principal Component Analysis (PCA) was performed to reduce the dimensionality of the dataset without sacrificing essential information. The K-nearest neighbor (KNN) algorithm was designed and used to handle the classification of combination compatibility, integrating ML into the model. In this way, the model accuracy was 0.92, with a sensitivity of 0.84 and a specificity of 1. These results indicate that the KNN model performs well, exhibiting high accuracy and correctly classifying positive and negative combinations, as evidenced by the sensitivity and specificity scores. Full article
(This article belongs to the Special Issue Algorithm Engineering in Bioinformatics)
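
The modeling pipeline named in the abstract (standardization, PCA for dimensionality reduction, then a KNN classifier evaluated by accuracy, sensitivity, and specificity) can be assembled in a few lines with scikit-learn. The descriptors and labels below are synthetic placeholders, not the AC/WM dataset from the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
# Hypothetical descriptors of antioxidant/wall-material pairs (polarity, logP, MW ratio, ...).
X = rng.standard_normal((400, 12))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)        # 1 = compatible, 0 = incompatible (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
model = make_pipeline(StandardScaler(), PCA(n_components=5), KNeighborsClassifier(n_neighbors=5))
model.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
print("accuracy:", (tp + tn) / len(y_te))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```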

17 pages, 655 KiB  
Article
A Machine Learning Framework for Condition-Based Maintenance of Marine Diesel Engines: A Case Study
by Francesco Maione, Paolo Lino, Guido Maione and Giuseppe Giannino
Algorithms 2024, 17(9), 411; https://doi.org/10.3390/a17090411 - 14 Sep 2024
Viewed by 765
Abstract
The development of artificial intelligence-based tools is having a big impact on industry. In this context, the maintenance operations of important assets and industrial resources are changing, both from a theoretical and a practical perspective. Namely, conventional maintenance reacts to faults and breakdowns as they occur or schedules the necessary inspections of systems and their parts at fixed times by using statistics on component failures, but this can be improved by predictive maintenance based on the component’s real health status, as monitored by appropriate sensors. In this way, maintenance time and costs are saved. Improvements can be achieved even in the marine industry, in which complex ship propulsion systems are produced for operation in many different scenarios. In more detail, data-driven models, through machine learning (ML) algorithms, generate the expected values of monitored variables for comparison with real measurements on the asset, for a diagnosis based on the difference between expectations and observations. The first step towards realization of predictive maintenance is choosing the ML algorithm. This selection is often not the consequence of an in-depth analysis of the different algorithms available in the literature. For that reason, the authors propose here a framework to support an initial implementation stage of predictive maintenance based on a benchmarking of the most suitable ML algorithms. The comparison is tested to predict failures of the oil circuit in a diesel marine engine as a case study. The algorithms are compared by considering not only the mean squared error between the algorithm predictions and the data, but also the response time, which is a crucial variable for maintenance. The results clearly indicate that the framework supports predictive maintenance well and that prediction error and running time are appropriate variables for choosing the most suitable ML algorithm for prediction. Moreover, the proposed framework can be used to test different algorithms, on the basis of more performance indexes, and to apply predictive maintenance to other engine components. Full article
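
The benchmarking idea, comparing candidate ML algorithms by prediction error and by the time they need, reduces to a small loop such as the one below. The regressors, the synthetic oil-circuit data, and the metrics shown (MSE plus training and prediction time) are placeholders for whatever set of algorithms and signals a given engine installation would use.

```python
import time
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.random((2000, 6))                 # stand-in sensor readings (pressures, temperatures, ...)
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.05 * rng.standard_normal(2000)   # stand-in oil-circuit target
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

candidates = {
    "ridge": Ridge(alpha=1.0),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=1),
    "gradient_boosting": GradientBoostingRegressor(random_state=1),
}
for name, model in candidates.items():
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)
    t1 = time.perf_counter()
    mse = mean_squared_error(y_te, model.predict(X_te))
    t2 = time.perf_counter()
    print(f"{name:18s}  MSE={mse:.5f}  train={t1 - t0:.2f}s  predict={t2 - t1:.3f}s")
```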

17 pages, 397 KiB  
Article
Algorithm for Option Number Selection in Stochastic Paired Comparison Models
by László Gyarmati, Csaba Mihálykó and Éva Orbán-Mihálykó
Algorithms 2024, 17(9), 410; https://doi.org/10.3390/a17090410 - 14 Sep 2024
Viewed by 380
Abstract
In this paper, paired comparison models with a stochastic background are investigated and compared from the perspective of the option numbers allowed. As two-option and three-option models are the ones most frequently used, we mainly focus on the relationships between two-option and four-option models and three-option and five-option models, and then we turn to the general s- and (s+2)-option models. We compare them from both theoretical and practical perspectives; the latter are based on computer simulations. We examine when it is possible, mandatory, or advisable to convert four-, five-, and (s+2)-option models into two-, three-, and s-option models, respectively. The problem also exists in reverse: when is it advisable to use four-, five-, and (s+2)-option models instead of two-, three-, and s-option models? As a result of these investigations, we set up an algorithm to perform the decision process. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)

17 pages, 3327 KiB  
Article
Explainable Machine Learning Model to Accurately Predict Protein-Binding Peptides
by Sayed Mehedi Azim, Aravind Balasubramanyam, Sheikh Rabiul Islam, Jinglin Fu and Iman Dehzangi
Algorithms 2024, 17(9), 409; https://doi.org/10.3390/a17090409 - 12 Sep 2024
Viewed by 524
Abstract
Enzymes play key roles in the biological functions of living organisms, serving as catalysts for, and regulators of, biochemical reaction pathways. Recent studies suggest that peptides are promising molecules for modulating enzyme function due to their large chemical diversity and well-established methods for library synthesis. Experimental approaches to identify protein-binding peptides are time-consuming and costly. Hence, there is a demand to develop a fast and accurate computational approach to tackle this problem. Another challenge in developing a computational approach is the lack of a large and reliable dataset. In this study, we develop a new machine learning approach called PepBind-SVM to predict protein-binding peptides. To build this model, we extract different sequential and physicochemical features from peptides and use a Support Vector Machine (SVM) as the classification technique. We train this model on the dataset that we also introduce in this study. PepBind-SVM achieves 92.1% prediction accuracy, outperforming other classifiers at predicting protein-binding peptides. Full article
(This article belongs to the Special Issue Machine Learning for Pattern Recognition (2nd Edition))
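
A minimal sketch in the spirit of PepBind-SVM: peptides are turned into a simple sequential feature (amino-acid composition) and classified with an SVM. The toy peptides, labels, and single feature type are illustrative assumptions; the authors' full sequential and physicochemical feature set and their dataset are not reproduced.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(peptide):
    """Amino-acid composition: fraction of each of the 20 residues (one simple sequential feature)."""
    counts = np.array([peptide.count(a) for a in AMINO_ACIDS], dtype=float)
    return counts / max(len(peptide), 1)

# Hypothetical toy data: peptides labelled 1 = protein-binding, 0 = non-binding.
peptides = ["ACDKKLW", "GGGSGGS", "KLWWFYR", "SSGASGA", "RRKWLYF", "GSGSGSG"]
labels = np.array([1, 0, 1, 0, 1, 0])
X = np.vstack([composition(p) for p in peptides])

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print(cross_val_score(clf, X, labels, cv=3))   # toy cross-validated accuracy estimate
```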

18 pages, 302 KiB  
Review
Using Wearable Technology to Detect, Monitor, and Predict Major Depressive Disorder—A Scoping Review and Introductory Text for Clinical Professionals
by Quinty Walschots, Milan Zarchev, Maurits Unkel and Astrid Kamperman
Algorithms 2024, 17(9), 408; https://doi.org/10.3390/a17090408 - 12 Sep 2024
Viewed by 555
Abstract
The rising popularity of wearable devices allows for extensive and unobtrusive collection of personal health data for extended periods of time. Recent studies have used machine learning to create predictive algorithms to assess symptoms of major depressive disorder (MDD) based on these data. This review evaluates the clinical relevance of these models. Studies were selected to represent the range of methodologies and applications of wearables for MDD algorithms, with a focus on wrist-worn devices. The reviewed studies demonstrated that wearable-based algorithms were able to predict symptoms of MDD with considerable accuracy. These models may be used in the clinic to complement the monitoring of treatments or to facilitate early intervention in high-risk populations. In a preventative context, they could prompt users to seek help for earlier intervention and better clinical outcomes. However, the lack of standardized methodologies and variation in which performance metrics are reported complicates direct comparisons between studies. Issues with reproducibility, overfitting, small sample sizes, and limited population demographics also limit the generalizability of findings. As such, wearable-based algorithms show considerable promise for predicting and monitoring MDD, but there is significant room for improvement before this promise can be fulfilled. Full article
20 pages, 4893 KiB  
Article
Interactive 3D Vase Design Based on Gradient Boosting Decision Trees
by Dongming Wang, Xing Xu, Xuewen Xia and Heming Jia
Algorithms 2024, 17(9), 407; https://doi.org/10.3390/a17090407 - 11 Sep 2024
Viewed by 522
Abstract
Traditionally, ceramic design began with sketches on rough paper and later evolved into using CAD software for more complex designs and simulations. With technological advancements, optimization algorithms have gradually been introduced into ceramic design to enhance design efficiency and creative diversity. The use of Interactive Genetic Algorithms (IGAs) for ceramic design is a new approach, but an IGA requires a significant amount of user evaluation, which can result in user fatigue. To overcome this problem, this paper introduces the LightGBM algorithm and the CatBoost algorithm to improve the IGA because they have excellent predictive capabilities that can assist users in evaluations. The algorithms are also applied to a vase design platform for validation. First, bicubic Bézier surfaces are used for modeling, and the genetic encoding of the vase is designed with appropriate evolutionary operators selected. Second, user data from the online platform are collected to train and optimize the LightGBM and CatBoost algorithms. Finally, LightGBM and CatBoost are combined with an IGA and applied to the vase design platform to verify their effectiveness. Compared to traditional IGAs and to surrogates based on KD trees, Random Forest, and XGBoost, the IGA improved with LightGBM and CatBoost performs better overall, requiring fewer evaluations and less time. Its R² is higher than that of the other proxy models, reaching 0.816 and 0.839, respectively. The improved method proposed in this paper can effectively alleviate user fatigue and enhance the user experience in product design participation. Full article
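
The role of LightGBM (or CatBoost) as a surrogate for user evaluations can be sketched as follows: a regressor is fitted on previously rated genotypes and then pre-ranks new offspring so that only a shortlist is shown to the user. The genotype encoding and rating model below are hypothetical.

```python
import numpy as np
from lightgbm import LGBMRegressor

rng = np.random.default_rng(0)
# Hypothetical vase genotypes: control-point parameters of a bicubic Bezier surface.
genotypes = rng.random((300, 16))
user_scores = 10 * genotypes[:, :4].mean(axis=1) + rng.normal(0, 0.5, 300)   # toy user ratings

surrogate = LGBMRegressor(n_estimators=300, learning_rate=0.05)
surrogate.fit(genotypes, user_scores)

# Inside the IGA loop, the surrogate pre-ranks offspring so the user only rates the best few.
offspring = rng.random((20, 16))
predicted = surrogate.predict(offspring)
shortlist = offspring[np.argsort(predicted)[-5:]]      # top-5 candidates shown to the user
print(predicted.round(2))
```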

20 pages, 1836 KiB  
Article
Advanced Detection of Abnormal ECG Patterns Using an Optimized LADTree Model with Enhanced Predictive Feature: Potential Application in CKD
by Muhammad Binsawad and Bilal Khan
Algorithms 2024, 17(9), 406; https://doi.org/10.3390/a17090406 - 11 Sep 2024
Viewed by 580
Abstract
Detecting abnormal ECG patterns is a crucial area of study aimed at enhancing diagnostic accuracy and enabling early identification of Chronic Kidney Disease (CKD)-related abnormalities. This study compares a unique strategy for abnormal ECG patterns using the LADTree model to standard machine learning (ML) models. The study design includes data collection from the MIT-BIH Arrhythmia dataset, preprocessing to address missing values, and feature selection using the CfsSubsetEval method with Best First Search, Harmony Search, and Particle Swarm Optimization Search approaches. The performance assessment consists of two scenarios: percentage splitting and K-fold cross-validation, with several evaluation measures such as Kappa statistic (KS), Best First Search, recall, precision-recall curve (PRC) area, receiver operating characteristic (ROC) area, and accuracy. In scenario 1, LADTree outperforms other ML models in terms of mean absolute error (MAE), KS, recall, ROC area, and PRC. Notably, the Naïve Bayes (NB) model has the lowest MAE, but the Support Vector Machine (SVM) performs poorly. In scenario 2, NB has the lowest MAE but the highest KS, recall, ROC area, and PRC area, closely followed by LADTree. Overall, the findings indicate that the LADTree model, when optimized for ECG signal data, delivers promising results in detecting abnormal ECG patterns potentially related to CKD. This study advances predictive modeling tools for identifying abnormal ECG patterns, which could enhance early detection and management of CKD, potentially leading to improved patient outcomes and healthcare practices. Full article

28 pages, 6717 KiB  
Article
A Segmentation-Based Automated Corneal Ulcer Grading System for Ocular Staining Images Using Deep Learning and Hough Circle Transform
by Dulyawat Manawongsakul and Karn Patanukhom
Algorithms 2024, 17(9), 405; https://doi.org/10.3390/a17090405 - 10 Sep 2024
Viewed by 458
Abstract
Corneal ulcer is a prevalent ocular condition that requires ophthalmologists to diagnose, assess, and monitor symptoms. During examination, ophthalmologists must identify the corneal ulcer area and evaluate its severity by manually comparing ocular staining images with severity indices. However, manual assessment is time-consuming and may provide inconsistent results. Variations can occur with repeated evaluations of the same images or with grading among different evaluators. To address this problem, we propose an automated corneal ulcer grading system for ocular staining images based on deep learning techniques and the Hough Circle Transform. The algorithm is structured into two components for cornea segmentation and corneal ulcer segmentation. Initially, we apply a deep learning method combined with the Hough Circle Transform to segment cornea areas. Subsequently, we develop the corneal ulcer segmentation model using deep learning methods. In this phase, the predicted cornea areas are utilized as masks for training the corneal ulcer segmentation models during the learning phase. Finally, this algorithm uses the results from these two components to determine two outputs: (1) the percentage of the ulcerated area on the cornea, and (2) the severity degree of the corneal ulcer based on the Type–Grade (TG) grading standard. These methodologies aim to enhance diagnostic efficiency across two key aspects: (1) ensuring consistency by delivering uniform and dependable results, and (2) enhancing robustness by effectively handling variations in eye size. In this research, our proposed method is evaluated using the SUSTech-SYSU public dataset, achieving an Intersection over Union of 89.23% for cornea segmentation and 82.94% for corneal ulcer segmentation, along with a Mean Absolute Error of 2.51% for determining the percentage of the ulcerated area on the cornea and an Accuracy of 86.15% for severity grading. Full article
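
The Hough Circle Transform step of the pipeline can be illustrated with OpenCV: detect the dominant circular region in a grayscale staining image and turn it into a binary cornea mask. The thresholds and radius bounds are illustrative assumptions, and the deep-learning segmentation stages of the paper are not shown.

```python
import cv2
import numpy as np

def cornea_circle_mask(image_bgr):
    """Locate the dominant circular (cornea-like) region with the Hough Circle
    Transform and return it as a binary mask."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 7)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=gray.shape[0],         # expect a single cornea per image
                               param1=100, param2=40,         # illustrative edge/accumulator thresholds
                               minRadius=gray.shape[0] // 6,
                               maxRadius=gray.shape[0] // 2)
    mask = np.zeros(gray.shape, dtype=np.uint8)
    if circles is not None:
        x, y, radius = np.round(circles[0, 0]).astype(int)
        cv2.circle(mask, (x, y), radius, 255, thickness=-1)
    return mask

# Usage with a synthetic stand-in image (replace with an ocular staining photograph).
img = np.full((480, 480, 3), 30, np.uint8)
cv2.circle(img, (240, 240), 150, (180, 180, 180), -1)
mask = cornea_circle_mask(img)
print(mask.shape, int(mask.max()))   # max is 255 if a circle was detected
```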

25 pages, 9129 KiB  
Article
A Comparative Study of Maze Generation Algorithms in a Game-Based Mobile Learning Application for Learning Basic Programming Concepts
by Mia Čarapina, Ognjen Staničić, Ivica Dodig and Davor Cafuta
Algorithms 2024, 17(9), 404; https://doi.org/10.3390/a17090404 - 10 Sep 2024
Viewed by 527
Abstract
This study evaluates several maze generation algorithms applied to generate mazes in a game-based Android mobile application designed to support children in learning basic programming concepts and computational thinking. Each algorithm is assessed for its ability to generate solvable and educationally effective mazes, varying in complexity and size. Key findings indicate that Wilson’s and Aldous–Broder algorithms were identified as the most time inefficient. In comparison, Sidewinder and Binary Tree algorithms perform best for smaller mazes due to their straightforward traversal methods. The Hunt-and-Kill and Recursive backtracker algorithms maintain higher ratios of longest paths, making them suitable for the more complex maze generation required for advanced game levels. Additionally, the study explores various maze-solving algorithms, highlighting the efficiency of the recursive algorithm for simpler mazes and the reliability of Dijkstra’s algorithm across diverse maze structures. This research underscores the importance of selecting appropriate maze generation and solving algorithms to balance generation speed, path complexity, and navigational characteristics. While the study demonstrates the practical applicability of these algorithms in a mobile educational application, it also identifies limitations and suggests directions for future research. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
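
Of the generators compared, the Binary Tree algorithm is the simplest to state: every cell opens either its north or its east wall (when both neighbours exist), which always yields a perfect maze. A minimal grid implementation is sketched below; rendering and the mobile-app integration are omitted.

```python
import random

def binary_tree_maze(rows, cols, seed=0):
    """Binary Tree maze generation: every cell opens either its north or its
    east wall (chosen at random) whenever that neighbour exists."""
    random.seed(seed)
    passages = [[set() for _ in range(cols)] for _ in range(rows)]   # open directions per cell
    for r in range(rows):
        for c in range(cols):
            choices = []
            if r > 0:
                choices.append("N")          # a north neighbour exists
            if c < cols - 1:
                choices.append("E")          # an east neighbour exists
            if not choices:
                continue                     # the corner cell with neither neighbour
            d = random.choice(choices)
            if d == "N":
                passages[r][c].add("N"); passages[r - 1][c].add("S")
            else:
                passages[r][c].add("E"); passages[r][c + 1].add("W")
    return passages

maze = binary_tree_maze(5, 5)
print(maze[0])   # open directions of the cells in the first row
```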

16 pages, 3080 KiB  
Article
Load Frequency Optimal Active Disturbance Rejection Control of Hybrid Power System
by Kuansheng Zou, Yue Wang, Baowei Liu and Zhaojun Zhang
Algorithms 2024, 17(9), 403; https://doi.org/10.3390/a17090403 - 9 Sep 2024
Viewed by 433
Abstract
The widespread adoption of the power grid has led to increased attention to load frequency control (LFC) in power systems. The LFC strategy of multi-source hybrid power systems, including hydroelectric generators, Wind Turbine Generators (WTGs), and Photovoltaic Generators (PVGs), with thermal generators is more challenging. Existing methods for LFC tasks struggle to achieve satisfactory outcomes in hybrid power systems. In this paper, a novel method for the multi-source hybrid power system LFC task using an optimal active disturbance rejection control (ADRC) strategy is proposed, which is based on the combination of an improved linear quadratic regulator (LQR) and the ADRC controller. Firstly, an established model of a hybrid power system is presented, which incorporates multiple regions and multiple sources. Secondly, utilizing the state space representation, a novel control strategy is developed by integrating the improved LQR and ADRC. Finally, a series of comparative simulation experiments was conducted using the Simulink model. Compared with the LQR with ESO, the maximum relative error of the maximum peaks of frequency deviation and tie-line exchanged power of the hybrid power system is reduced by 96% and 83%, respectively, by using the proposed strategy. The experimental results demonstrate that the strategy proposed in this paper exhibits a substantial enhancement in control performance. Full article
(This article belongs to the Topic Recent Trends in Nonlinear, Chaotic and Complex Systems)
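
Only the LQR ingredient of the proposed LQR-plus-ADRC strategy is sketched here: the continuous-time gain is obtained from the algebraic Riccati equation with SciPy. The two-state model and weights are illustrative stand-ins, not the multi-area hybrid power system of the paper, and the ADRC/ESO part is not reproduced.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR: solve the algebraic Riccati equation and return K = R^-1 B^T P."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Hypothetical two-state frequency/tie-line model (illustrative numbers only).
A = np.array([[-0.05, 6.0],
              [-0.10, 0.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.diag([10.0, 1.0])       # weight frequency deviation more heavily
R = np.array([[1.0]])

K = lqr_gain(A, B, Q, R)
print("state-feedback gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```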

28 pages, 6310 KiB  
Article
Integrating Eye Movement, Finger Pressure, and Foot Pressure Information to Build an Intelligent Driving Fatigue Detection System
by Jong-Chen Chen and Yin-Zhen Chen
Algorithms 2024, 17(9), 402; https://doi.org/10.3390/a17090402 - 8 Sep 2024
Viewed by 483
Abstract
Fatigued driving is a problem that every driver will face, and traffic accidents caused by drowsy driving often occur involuntarily. If there is a fatigue detection and warning system, it is generally believed that the occurrence of some incidents can be reduced. However, everyone’s driving habits and methods may differ, so it is not easy to establish a suitable general detection system. If a customized intelligent fatigue detection system can be established, it may reduce such accidents and contribute to a safer driving environment. Thus, on the one hand, this research integrates the information obtained from three different sensing devices (eye movement, finger pressure, and plantar pressure), chosen for their ability to provide comprehensive and reliable data on a driver’s physical and mental state. On the other hand, it uses an autonomous learning architecture to integrate these three data types to build a customized fatigued driving detection system. This study used a system that simulated a car driving environment and then invited subjects to conduct tests on fixed driving routes. First, we demonstrated that the system established in this study could be used to learn and classify different driving clips. Then, we showed that it was possible to judge whether the driver was fatigued through a series of driving behaviors, such as lane drifting, sudden braking, and irregular acceleration, rather than a single momentary behavior. Finally, we tested the hypothesized situation in which drivers were experiencing three cases of different distractions. The results show that the entire system can establish a personal driving system through autonomous learning behavior and further detect whether fatigued driving abnormalities occur. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (2nd Edition))

18 pages, 817 KiB  
Article
Performance of Linear and Spiral Hashing Algorithms
by Arockia David Roy Kulandai and Thomas Schwarz
Algorithms 2024, 17(9), 401; https://doi.org/10.3390/a17090401 - 7 Sep 2024
Viewed by 570
Abstract
Linear Hashing is an important algorithm for many key-value stores in main memory. Spiral Storage was invented to overcome the poor fringe behavior of Linear Hashing, but after an influential study by Larson, it seems to have been discarded. Since almost 50 years have passed, we repeat Larson’s comparison with in-memory implementations of both to see whether his verdict still stands. Our study shows that Spiral Storage has slightly better lookup performance but slightly poorer insert performance. However, Spiral Hashing has more predictable performance for inserts and, in particular, better fringe behavior. Full article
(This article belongs to the Special Issue Surveys in Algorithm Analysis and Complexity Theory, Part II)
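
For readers who have not seen Linear Hashing, the sketch below implements the textbook address rule (hash modulo n·2^level, re-hashed at the next level for buckets already split this round) and round-robin splitting. It is a simplified in-memory illustration with trivial overflow handling, not the implementation benchmarked in the paper, and Spiral Storage/Hashing is not shown.

```python
class LinearHashTable:
    """Minimal Linear Hashing sketch: two-level address rule plus round-robin
    bucket splitting; overflow handling is kept deliberately trivial."""
    def __init__(self, initial_buckets=4, max_bucket_size=4):
        self.level = 0
        self.n0 = initial_buckets              # buckets at level 0
        self.split = 0                         # next bucket to split
        self.max_bucket_size = max_bucket_size
        self.buckets = [[] for _ in range(initial_buckets)]

    def _address(self, key):
        h = hash(key)
        addr = h % (self.n0 * (2 ** self.level))
        if addr < self.split:                  # bucket already split this round: next-level hash
            addr = h % (self.n0 * (2 ** (self.level + 1)))
        return addr

    def insert(self, key, value):
        addr = self._address(key)
        self.buckets[addr].append((key, value))
        if len(self.buckets[addr]) > self.max_bucket_size:
            self._split_next()

    def _split_next(self):
        self.buckets.append([])                # the image bucket of the one being split
        old = self.buckets[self.split]
        self.buckets[self.split] = []
        self.split += 1
        if self.split == self.n0 * (2 ** self.level):   # round finished: start a new level
            self.level += 1
            self.split = 0
        for k, v in old:                       # re-distribute the split bucket's records
            self.buckets[self._address(k)].append((k, v))

    def lookup(self, key):
        return [v for k, v in self.buckets[self._address(key)] if k == key]

table = LinearHashTable()
for i in range(50):
    table.insert(f"key{i}", i)
print(table.lookup("key7"), "buckets:", len(table.buckets))
```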

23 pages, 1626 KiB  
Article
Is Reinforcement Learning Good at American Option Valuation?
by Peyman Kor, Reidar B. Bratvold and Aojie Hong
Algorithms 2024, 17(9), 400; https://doi.org/10.3390/a17090400 - 7 Sep 2024
Viewed by 450
Abstract
This paper investigates algorithms for identifying the optimal policy for pricing American Options. The American Option pricing is reformulated as a Sequential Decision-Making problem with two binary actions (Exercise or Continue), transforming it into an optimal stopping time problem. Both the least square Monte Carlo simulation method (LSM) and Reinforcement Learning (RL)-based methods were utilized to find the optimal policy and, hence, the fair value of the American Put Option. Both Classical Geometric Brownian Motion (GBM) and calibrated Stochastic Volatility models served as the underlying uncertain assets. The novelty of this work lies in two aspects: (1) Applying LSM- and RL-based methods to determine option prices, with a specific focus on analyzing the dynamics of “Decisions” made by each method and comparing the final decisions chosen by the LSM and RL methods. (2) Assessing how the RL method updates “Decisions” at each batch, revealing the evolution of the decisions during the learning process to achieve the optimal policy. Full article
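
The LSM baseline referenced above is the Longstaff-Schwartz algorithm; a compact version for an American put under GBM is sketched below, regressing discounted continuation values on a quadratic polynomial of the spot price for in-the-money paths. The parameters are the classic textbook test case, and the RL-based method of the paper is not reproduced.

```python
import numpy as np

def lsm_american_put(S0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0, steps=50, paths=50_000, seed=0):
    """Longstaff-Schwartz least-squares Monte Carlo for an American put under GBM:
    regress discounted continuation values on a quadratic basis of the spot price
    and exercise whenever the intrinsic value exceeds the estimated continuation."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    z = rng.standard_normal((paths, steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
    cashflow = np.maximum(K - S[:, -1], 0.0)              # value if held to maturity
    for t in range(steps - 2, -1, -1):
        cashflow *= np.exp(-r * dt)                       # discount one step back
        itm = K - S[:, t] > 0                             # regress on in-the-money paths only
        if itm.any():
            basis = np.vander(S[itm, t], 3)               # columns [S^2, S, 1]
            coef = np.linalg.lstsq(basis, cashflow[itm], rcond=None)[0]
            exercise = (K - S[itm, t]) > basis @ coef
            cashflow[itm] = np.where(exercise, K - S[itm, t], cashflow[itm])
    return np.exp(-r * dt) * cashflow.mean()

print(round(lsm_american_put(), 3))   # close to the ~4.48 reported for this classic test case
```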

31 pages, 3360 KiB  
Article
IMM Filtering Algorithms for a Highly Maneuvering Fighter Aircraft: An Overview
by M. N. Radhika, Mahendra Mallick and Xiaoqing Tian
Algorithms 2024, 17(9), 399; https://doi.org/10.3390/a17090399 - 6 Sep 2024
Viewed by 433
Abstract
The trajectory estimation of a highly maneuvering target is a challenging problem and has practical applications. The interacting multiple model (IMM) filter is a well-established filtering algorithm for the trajectory estimation of maneuvering targets. In this study, we present an overview of IMM filtering algorithms for tracking a highly maneuverable fighter aircraft using an air moving target indicator (AMTI) radar on another aircraft. This problem is a nonlinear filtering problem due to nonlinearities in the dynamic and measurement models. We first describe single-model nonlinear filtering algorithms: the extended Kalman filter (EKF), unscented Kalman filter (UKF), and cubature Kalman filter (CKF). Then, we summarize the IMM-based EKF (IMM-EKF), IMM-based UKF (IMM-UKF), and IMM-based CKF (IMM-CKF). In order to compare the state estimation accuracies of the IMM-based filters, we present a derivation of the posterior Cramér-Rao lower bound (PCRLB). We consider fighter aircraft traveling with accelerations of 3g, 4g, 5g, and 6g and present numerical results for state estimation accuracy and computational cost under various operating conditions. Our results show that under normal operating conditions, the three IMM-based filters have nearly the same accuracy. This is due to the accuracy of the measurements of the AMTI radar and the high data rate. Full article
(This article belongs to the Collection Feature Papers in Algorithms for Multidisciplinary Applications)

43 pages, 3605 KiB  
Review
In-Depth Insights into the Application of Recurrent Neural Networks (RNNs) in Traffic Prediction: A Comprehensive Review
by Yuxin He, Ping Huang, Weihang Hong, Qin Luo, Lishuai Li and Kwok-Leung Tsui
Algorithms 2024, 17(9), 398; https://doi.org/10.3390/a17090398 - 6 Sep 2024
Viewed by 519
Abstract
Traffic prediction is crucial for transportation management and user convenience. With the rapid development of deep learning techniques, numerous models have emerged for traffic prediction. Recurrent Neural Networks (RNNs) are extensively utilized as representative predictive models in this domain. This paper comprehensively reviews RNN applications in traffic prediction, focusing on their significance and challenges. The review begins by discussing the evolution of traffic prediction methods and summarizing state-of-the-art techniques. It then delves into the unique characteristics of traffic data, outlines common forms of input representations in traffic prediction, and generalizes an abstract description of traffic prediction problems. Then, the paper systematically categorizes models based on RNN structures designed for traffic prediction. Moreover, it provides a comprehensive overview of seven sub-categories of applications of deep learning models based on RNN in traffic prediction. Finally, the review compares RNNs with other state-of-the-art methods and highlights the challenges RNNs face in traffic prediction. This review is expected to offer significant reference value for comprehensively understanding the various applications of RNNs and common state-of-the-art models in traffic prediction. By discussing the strengths and weaknesses of these models and proposing strategies to address the challenges faced by RNNs, it aims to provide scholars with insights for designing better traffic prediction models. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

26 pages, 8282 KiB  
Article
Advanced Fault Detection in Power Transformers Using Improved Wavelet Analysis and LSTM Networks Considering Current Transformer Saturation and Uncertainties
by Qusay Alhamd, Mohsen Saniei, Seyyed Ghodratollah Seifossadat and Elaheh Mashhour
Algorithms 2024, 17(9), 397; https://doi.org/10.3390/a17090397 - 6 Sep 2024
Viewed by 463
Abstract
Power transformers are vital and costly components in power systems, essential for ensuring a reliable and uninterrupted supply of electrical energy. Their protection is crucial for improving reliability, maintaining network stability, and minimizing operational costs. Previous studies have introduced differential protection schemes with harmonic restraint to detect internal transformer faults. However, these schemes often struggle with computational inaccuracies in fault detection due to neglecting current transformer (CT) saturation and associated uncertainties. CT saturation during internal faults can produce even harmonics, disrupting relay operations. Additionally, CT saturation during transformer energization can introduce a DC component, leading to incorrect relay activation. This paper introduces a novel feature extracted through advanced wavelet transform analysis of differential current. This feature, combined with differential current amplitude and bias current, is used to train a deep learning system based on long short-term memory (LSTM) networks. By accounting for existing uncertainties, this system accurately identifies internal transformer faults under various CT saturation and measurement uncertainty conditions. Test and validation results demonstrate the proposed method’s effectiveness and superiority in detecting internal faults in power transformers, even in the presence of CT saturation, outperforming other recent techniques. Full article
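As an illustration only, the sketch below computes wavelet-energy features from a simulated differential-current window of the general kind that could feed an LSTM classifier; the wavelet family, decomposition level, and sampling rate are assumptions on our part and not the authors' actual feature design.

import numpy as np
import pywt

def wavelet_energy_features(i_diff, wavelet="db4", level=3):
    # Decompose the differential current and summarize each detail band by
    # its normalized energy.
    coeffs = pywt.wavedec(i_diff, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs[1:]])   # detail bands only
    return energies / (energies.sum() + 1e-12)

fs = 10_000                                   # assumed sampling rate in Hz
t = np.arange(0, 0.1, 1 / fs)                 # five 50 Hz cycles
i_diff = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.randn(t.size)
features = wavelet_energy_features(i_diff)    # e.g., stacked per cycle as an LSTM input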

19 pages, 1342 KiB  
Article
AASA: A Priori Adaptive Splitting Algorithm for the Split Delivery Vehicle Routing Problem
by Nariman Torkzaban, Anousheh Gholami, John S. Baras and Bruce L. Golden
Algorithms 2024, 17(9), 396; https://doi.org/10.3390/a17090396 - 6 Sep 2024
Viewed by 339
Abstract
The split delivery vehicle routing problem (SDVRP) is a relaxed variant of the capacitated vehicle routing problem (CVRP) in which the restriction that each customer is visited precisely once is removed. Compared with the CVRP, the SDVRP allows a reduction in the total cost of the routes traveled by vehicles. Exact methods for solving the SDVRP are computationally expensive. Moreover, the complexity and difficult implementation of the state-of-the-art heuristic approaches hinder their application in real-life scenarios of the SDVRP. In this paper, we propose an easily understandable and effective approach to solve the SDVRP based on an a priori adaptive splitting algorithm (AASA) that improves on the existing state-of-the-art a priori split strategy in terms of both solution accuracy and time complexity. In this approach, the demand of the customers is split into smaller demand values using a splitting rule in advance. Consequently, the original SDVRP instance is converted to a CVRP instance, which is solved using an existing CVRP solver. While the a priori splitting rule proposed in the literature is fixed for all customers regardless of their demand and location, we suggest an adaptive splitting rule that takes into account each customer's distance to the depot and demand value. Our experiments show that the AASA can generate solutions comparable to the state of the art, but much faster. Full article
(This article belongs to the Special Issue Heuristic Optimization Algorithms for Logistics)
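To make the a priori splitting idea concrete, the sketch below converts an SDVRP instance into a CVRP instance by splitting each customer's demand into smaller virtual customers; the specific rule (finer splits for distant, high-demand customers) is a hypothetical stand-in in the spirit of AASA, not the paper's rule.

import math

def split_customers(customers, depot, capacity):
    # customers: list of (x, y, demand). Returns "virtual" customers whose
    # demands are small enough that the instance can be handed to a CVRP solver.
    virtual = []
    for x, y, demand in customers:
        dist = math.hypot(x - depot[0], y - depot[1])
        # Assumed adaptive rule: distant or high-demand customers get finer splits.
        pieces = max(1, math.ceil(demand / capacity * (1 + dist / 100)))
        share = demand / pieces
        virtual.extend((x, y, share) for _ in range(pieces))
    return virtual

cvrp_customers = split_customers([(10, 5, 70), (40, 40, 120)], depot=(0, 0), capacity=100)
print(cvrp_customers)   # the split instance is then solved by any existing CVRP solver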

22 pages, 444 KiB  
Article
Information Criteria for Signal Extraction Using Singular Spectrum Analysis: White and Red Noise
by Nina Golyandina and Nikita Zvonarev
Algorithms 2024, 17(9), 395; https://doi.org/10.3390/a17090395 - 5 Sep 2024
Viewed by 473
Abstract
In singular spectrum analysis, which is applied to signal extraction, it is of critical importance to select the number of components correctly in order to accurately estimate the signal. In the case of a low-rank signal, there is a challenge in estimating the signal rank, which is equivalent to selecting the model order. Information criteria are commonly employed to address these issues. However, singular spectrum analysis is not aimed at the exact low-rank approximation of the signal, which is what makes it an adaptive, fast, and flexible approach; as a result, conventional information criteria are not directly applicable in this context. The paper examines both subspace-based criteria and information criteria, proposing modifications suited to the Hankel structure of the trajectory matrices employed in singular spectrum analysis. These modifications are initially developed for white noise, and a version for red noise is also proposed. In the numerical comparisons, a number of scenarios are considered, including the case of signals that are only approximated by low-rank signals, which is the scenario most similar to real-world time series. The criteria are compared with each other and with the optimal rank choice that minimizes the signal estimation error. The results of numerical experiments demonstrate that for low-rank signals and noise levels within a region of stable rank detection, the proposed modifications yield accurate estimates of the optimal rank for both white and red noise. The method that takes into account the Hankel structure of the trajectory matrices appears to be the superior approach in many instances. Reasonable model orders are obtained for real-world time series. It is recommended that a variance-stabilizing transformation be applied before estimating the rank. Full article
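For context, the sketch below shows the setting studied in the paper: a series is embedded into a Hankel trajectory matrix, its singular values are computed, and a rank is selected from the spectrum. The simple energy-threshold rule used here is a generic placeholder, not one of the proposed information criteria.

import numpy as np

def trajectory_matrix(series, window):
    # Hankel (trajectory) matrix whose columns are lagged windows of the series.
    n = len(series) - window + 1
    return np.column_stack([series[i:i + window] for i in range(n)])

def pick_rank(series, window, energy=0.95):
    # Placeholder rule: smallest rank capturing a given share of squared
    # singular-value energy.
    X = trajectory_matrix(series, window)
    s = np.linalg.svd(X, compute_uv=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(cum, energy) + 1)

t = np.arange(200)
series = np.sin(2 * np.pi * t / 20) + 0.1 * np.random.randn(t.size)  # rank-2 signal plus noise
print(pick_rank(series, window=60))   # typically close to 2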

28 pages, 1152 KiB  
Article
Combining Parallel Stochastic Methods and Mixed Termination Rules in Optimization
by Vasileios Charilogis, Ioannis G. Tsoulos and Anna Maria Gianni
Algorithms 2024, 17(9), 394; https://doi.org/10.3390/a17090394 - 5 Sep 2024
Viewed by 431
Abstract
Parallel optimization enables faster and more efficient problem-solving by reducing computational resource consumption and time. By simultaneously combining multiple methods, such as evolutionary algorithms and swarm-based optimization, effective exploration of the search space and discovery of optimal solutions in shorter time frames are realized. In this study, a combination of termination criteria is proposed, utilizing three different criteria to end the algorithmic process: the difference between the optimal values in successive iterations, the mean value of the cost function in each iteration, and the so-called “DoubleBox” criterion, which is based on the relative variance of the best value of the objective function over a specific number of iterations. The problem is addressed through the parallel execution of three different optimization methods (PSO, Differential Evolution, and Multistart). Each method operates independently on a separate computational unit with the goal of faster discovery of the optimal solution and more efficient use of computational resources. The optimal solution identified in each iteration is transferred to the other computational units. The proposed enhancements were tested on a series of well-known optimization problems from the relevant literature, demonstrating significant improvements in convergence speed and solution quality compared to traditional approaches. Full article
(This article belongs to the Special Issue Hybrid Intelligent Algorithms)
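As a rough illustration of this kind of stopping rule, the sketch below stops a run once the relative variance of the best objective value over the last k iterations falls below a tolerance; the exact formula and parameter names are our own simplification, not the paper's DoubleBox implementation.

import statistics

def doublebox_like_stop(best_history, k=20, tol=1e-6):
    # best_history: best objective value recorded at the end of each iteration.
    if len(best_history) < k:
        return False
    window = best_history[-k:]
    mean = statistics.fmean(window)
    var = statistics.pvariance(window)
    return var / (mean * mean + 1e-12) < tol   # relative-variance test

# Example: a run whose best value has stagnated triggers the stop.
history = [10.0, 5.0, 2.0] + [1.2345] * 30
print(doublebox_like_stop(history))            # True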
