Search Results (112)

Search Parameters:
Keywords = log-ratio algorithm

14 pages, 1917 KiB  
Article
Measurement of Intratumor Heterogeneity and Its Changing Pattern to Predict Response and Recurrence Risk After Neoadjuvant Chemotherapy in Breast Cancer
by Mingxi Zhu, Qiong Wu, Xiaochuan Geng, Huaying Xie, Yan Wang, Ziping Wu, Yanping Lin, Liheng Zhou, Shuguang Xu, Yumei Ye, Wenjin Yin, Jia Hua, Jingsong Lu and Yaohui Wang
Curr. Oncol. 2025, 32(2), 93; https://doi.org/10.3390/curroncol32020093 - 7 Feb 2025
Viewed by 526
Abstract
The heterogeneity of breast tumors may reflect biological complexity and provide clues for predicting treatment sensitivity. This study aimed to construct a model based on tumor heterogeneity in magnetic resonance imaging (MRI) for predicting the pathological complete response (pCR) to neoadjuvant chemotherapy (NAC). This retrospective study involved 217 patients with biopsy-confirmed invasive breast cancer who underwent MRI before and after NAC. Patients were randomly divided into the training cohort and the validation cohort at a 1:1 ratio. MR images were processed by algorithms to quantify the heterogeneity of tumors. Models incorporating heterogeneity and clinical characteristics were constructed to predict pCR. The patterns of heterogeneity variation during NAC were classified into four categories: the heterogeneity high-keep group (H_keep group), the heterogeneity low-keep group (L_keep group), the heterogeneity rising group, and the heterogeneity decrease group. The average heterogeneity in patients achieving pCR was significantly lower than in those who did not (p = 0.029). Lower heterogeneity was independently associated with pCR (OR, 0.401 [95%CI: 0.21, 0.76]; p = 0.007). The model combining heterogeneity and clinical characteristics demonstrated improved specificity (True Negative Rate 0.857 vs. 0.698) and accuracy (Accuracy 0.828 vs. 0.753) compared to the clinical model. Survival outcomes were best for the L_keep group and worst for the rising group (Log-rank p = 0.031). Patients with increased heterogeneity exhibited a higher risk of recurrence around two years post-surgery, particularly within the non-pCR population. The quantified heterogeneity of breast cancer in MRI offers a non-invasive method for predicting pCR to NAC and evaluating the implementation of precision medicine. Full article
(This article belongs to the Section Breast Cancer)
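The study's headline statistic, an odds ratio below 1 linking lower heterogeneity to pCR, can be illustrated with a minimal logistic-regression sketch. Everything below is synthetic and invented for illustration (cohort size, coefficients, the single pooled clinical covariate); it is not the authors' model or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cohort: lower tumor heterogeneity -> higher probability of pCR.
n = 400
heterogeneity = rng.normal(0.0, 1.0, n)   # standardized heterogeneity score
clinical = rng.normal(0.0, 1.0, n)        # one pooled clinical covariate
logit = -0.9 * heterogeneity + 0.5 * clinical - 0.2
pcr = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Fit logistic regression by plain gradient ascent on the log-likelihood.
X = np.column_stack([np.ones(n), heterogeneity, clinical])
beta = np.zeros(3)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.01 * X.T @ (pcr - p) / n

odds_ratio = np.exp(beta[1])  # OR per 1-SD increase in heterogeneity
print(round(odds_ratio, 3))   # < 1: higher heterogeneity, lower odds of pCR
```

The fitted odds ratio recovers the negative association built into the synthetic data, mirroring the direction of the paper's reported OR of 0.401.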

18 pages, 5126 KiB  
Article
Critical Filling Height of Embankment over Soft Soil: A Three-Dimensional Upper-Bound Limit Analysis
by Xijun Liu, Bokai Song, Zhuanqin Sun and Wenxiu Jiao
Buildings 2025, 15(3), 395; https://doi.org/10.3390/buildings15030395 - 26 Jan 2025
Viewed by 490
Abstract
This paper investigates the critical filling height of embankments over soft soil using three-dimensional (3D) upper-bound limit analysis based on a rotational log-spiral failure mechanism. Soft soils are characterized by low shear strength and high compressibility, making the accurate determination of critical filling height essential for evaluating embankment stability. Unlike conventional two-dimensional (2D) analyses, the proposed 3D method captures the true failure mechanism of embankments, providing more realistic and reliable results. The upper-bound analysis equations are derived using the principle of virtual work and solved efficiently through the genetic algorithm (GA), which avoids the limitations of traditional loop and random searching algorithms. The proposed solution is validated by comparing it with existing studies on slope stability and demonstrates higher accuracy and computational efficiency. Parametric studies are conducted to evaluate the influence of the depth–height ratio (the ratio of soft soil depth to embankment height) on the failure width of the embankment, the critical failure surface, and the critical filling height. Results show that the critical failure surface is tangential to the bottom of the soft soil layer and the critical filling height increases as the depth–height ratio decreases. The findings provide a set of critical filling heights calculated under various soft soil depths, strength parameters, and embankment geometries, offering practical guidance for embankment design. Full article
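The genetic algorithm's role in this kind of upper-bound analysis, searching failure-mechanism parameters to optimize the bound, can be sketched as follows. The objective below is a stand-in bowl function rather than the paper's virtual-work expression, and all GA settings (population size, generations, mutation scale) are invented:

```python
import random

random.seed(1)

# Stand-in objective: in the paper this would be the upper-bound critical
# height as a function of the log-spiral mechanism parameters; here it is
# a simple bowl with a known minimum, purely to show the GA machinery.
def upper_bound(params):
    x, y = params
    return (x - 1.2) ** 2 + (y + 0.4) ** 2 + 3.0

def genetic_minimize(obj, bounds, pop=40, gens=60, mut=0.2):
    population = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=obj)
        parents = population[: pop // 2]          # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]   # crossover
            child = [c + random.gauss(0, mut) for c in child]  # mutation
            children.append(child)
        population = parents + children
    return min(population, key=obj)

best = genetic_minimize(upper_bound, [(-5, 5), (-5, 5)])
print(best, upper_bound(best))
```

Because the parent half of each generation is carried over unchanged, the best candidate never worsens, which is the property that makes the GA a reliable replacement for loop or random searching over the mechanism parameters.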

23 pages, 1263 KiB  
Article
Turning Points in the Core–Periphery Displacement of Systemic Risk in the Eurozone: Constrained Weighted Compositional Clustering
by Anna Maria Fiori and Germà Coenders
Risks 2025, 13(2), 21; https://doi.org/10.3390/risks13020021 - 24 Jan 2025
Viewed by 521
Abstract
Investigating how systemic risk originates and spreads across the financial system poses an inherently compositional question, i.e., a question concerning the joint distribution of relative risk share across several interdependent contributors. To address this question, we propose a weighted compositional clustering approach aimed at tackling the trajectories and turning points of systemic risk in the Eurozone, from both a chronological and a geographical perspective. The cluster profiles emerging from our analysis indicate a progressive shift from Northern Europe towards the Euro-Mediterranean region in the coordinate center of systemic risk compositions. This shift matures as the outcome of complex interactions between core and peripheral EU countries that compositional methods have the merit of capturing and unifying in a self-contained multivariate framework. Full article
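The core idea, clustering compositions of relative risk shares in log-ratio coordinates rather than on the raw simplex, can be sketched with a centered log-ratio transform and a minimal k-means. The "risk shares" below are invented Dirichlet draws, not the paper's risk measures, and plain k-means stands in for the constrained weighted clustering:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy systemic-risk shares for six "countries" under two regimes
# (each row sums to 1); the numbers are purely illustrative.
north = rng.dirichlet([8, 8, 2, 2, 1, 1], size=20)   # risk concentrated in the core
south = rng.dirichlet([1, 1, 2, 2, 8, 8], size=20)   # risk shifted to the periphery
shares = np.vstack([north, south])

# Centered log-ratio transform: compositions live on a simplex, so
# cluster in log-ratio coordinates rather than on the raw shares.
clr = np.log(shares) - np.log(shares).mean(axis=1, keepdims=True)

# Minimal k-means (k = 2), seeded with one point from each regime.
centers = clr[[0, -1]].copy()
for _ in range(20):
    labels = np.argmin(((clr[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([clr[labels == k].mean(axis=0) for k in (0, 1)])

print(labels[:20], labels[20:])   # the two regimes separate into two clusters
```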

19 pages, 2833 KiB  
Article
Enhanced Lung Cancer Survival Prediction Using Semi-Supervised Pseudo-Labeling and Learning from Diverse PET/CT Datasets
by Mohammad R. Salmanpour, Arman Gorji, Amin Mousavi, Ali Fathi Jouzdani, Nima Sanati, Mehdi Maghsudi, Bonnie Leung, Cheryl Ho, Ren Yuan and Arman Rahmim
Cancers 2025, 17(2), 285; https://doi.org/10.3390/cancers17020285 - 17 Jan 2025
Viewed by 847
Abstract
Objective: This study explores a semi-supervised learning (SSL), pseudo-labeled strategy using diverse datasets such as head and neck cancer (HNCa) to enhance lung cancer (LCa) survival outcome predictions, analyzing handcrafted and deep radiomic features (HRF/DRF) from PET/CT scans with hybrid machine learning systems (HMLSs). Methods: We collected 199 LCa patients with both PET and CT images, obtained from TCIA and our local database, alongside 408 HNCa PET/CT images from TCIA. We extracted 215 HRFs and 1024 DRFs by PySERA and a 3D autoencoder, respectively, within the ViSERA 1.0.0 software, from segmented primary tumors. The supervised strategy (SL) employed an HMLS–PCA connected with six classifiers on both HRFs and DRFs. The SSL strategy expanded the datasets by adding 408 pseudo-labeled HNCa cases (labeled by the Random Forest algorithm) to 199 LCa cases, using the same HMLS techniques. Furthermore, principal component analysis (PCA) linked with four survival prediction algorithms was utilized in the survival hazard ratio analysis. Results: The SSL strategy outperformed the SL method (p << 0.001), achieving an average accuracy of 0.85 ± 0.05 with DRFs from PET and PCA + Multi-Layer Perceptron (MLP), compared to 0.69 ± 0.06 for the SL strategy using DRFs from CT and PCA + Light Gradient Boosting (LGB). Additionally, PCA linked with Component-wise Gradient Boosting Survival Analysis on both HRFs and DRFs, as extracted from CT, had an average C-index of 0.80, with a log rank p-value << 0.001, confirmed by external testing. Conclusions: Shifting from HRFs and SL to DRFs and SSL strategies, particularly in contexts with limited data points, can achieve significantly higher predictive performance even when using CT or PET alone. Full article
(This article belongs to the Special Issue PET/CT in Cancers Outcomes Prediction)
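The pseudo-labeling loop described above (train on the small labeled cohort, label the auxiliary pool, retrain on the union) can be sketched in a few lines. All data here are synthetic 2-D blobs, and a nearest-centroid classifier stands in for the paper's Random Forest:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in: 2-D "radiomic features" for a small labeled cohort (the
# LCa role) and a larger unlabeled auxiliary cohort (the HNCa role).
def make_blobs(n, shift):
    X = np.vstack([rng.normal(0, 1, (n, 2)), rng.normal(shift, 1, (n, 2))])
    y = np.r_[np.zeros(n, int), np.ones(n, int)]
    return X, y

X_lab, y_lab = make_blobs(20, 3.0)        # scarce labeled data
X_aux, _ = make_blobs(200, 3.0)           # auxiliary pool (labels hidden)
X_test, y_test = make_blobs(100, 3.0)

def fit_centroids(X, y):
    return np.array([X[y == k].mean(axis=0) for k in (0, 1)])

def predict(cent, X):
    return np.argmin(((X[:, None] - cent[None]) ** 2).sum(-1), axis=1)

# 1) supervised baseline, 2) pseudo-label the pool, 3) retrain on the union.
cent_sl = fit_centroids(X_lab, y_lab)
pseudo = predict(cent_sl, X_aux)
cent_ssl = fit_centroids(np.vstack([X_lab, X_aux]), np.r_[y_lab, pseudo])

acc_sl = (predict(cent_sl, X_test) == y_test).mean()
acc_ssl = (predict(cent_ssl, X_test) == y_test).mean()
print(acc_sl, acc_ssl)
```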

15 pages, 2220 KiB  
Article
Enhancing Treatment Decisions for Advanced Non-Small Cell Lung Cancer with Epidermal Growth Factor Receptor Mutations: A Reinforcement Learning Approach
by Hakan Şat Bozcuk, Leyla Sert, Muhammet Ali Kaplan, Ali Murat Tatlı, Mustafa Karaca, Harun Muğlu, Ahmet Bilici, Bilge Şah Kılıçtaş, Mehmet Artaç, Pınar Erel, Perran Fulden Yumuk, Burak Bilgin, Mehmet Ali Nahit Şendur, Saadettin Kılıçkap, Hakan Taban, Sevinç Ballı, Ahmet Demirkazık, Fatma Akdağ, İlhan Hacıbekiroğlu, Halil Göksel Güzel, Murat Koçer, Pınar Gürsoy, Bahadır Köylü, Fatih Selçukbiricik, Gökhan Karakaya and Mustafa Serkan Alemdar
Cancers 2025, 17(2), 233; https://doi.org/10.3390/cancers17020233 - 13 Jan 2025
Viewed by 870
Abstract
Background: Although higher-generation TKIs are associated with improved progression-free survival in advanced NSCLC patients with EGFR mutations, the optimal selection of TKI treatment remains uncertain. To address this gap, we developed a web application powered by a reinforcement learning (RL) algorithm to assist in guiding initial TKI treatment decisions. Methods: Clinical and mutational data from advanced NSCLC patients were retrospectively collected from 14 medical centers. Only patients with complete data and sufficient follow-up were included. Multiple supervised machine learning models were tested, with the Extra Trees Classifier (ETC) identified as the most effective for predicting progression-free survival. Feature importance scores were calculated by the ETC, and features were then integrated into a Deep Q-Network (DQN) RL algorithm. The RL model was designed to select optimal TKI generation and a treatment line for each patient and was embedded into an open-source web application for experimental clinical use. Results: In total, 318 cases of EGFR-mutant advanced NSCLC were analyzed, with a median patient age of 63. A total of 52.2% of patients were female, and 83.3% had ECOG scores of 0 or 1. The top three most influential features identified were neutrophil-to-lymphocyte ratio (log-transformed), age (log-transformed), and the treatment line of TKI administration, as tested by the ETC algorithm, with an area under curve (AUC) value of 0.73, whereas the DQN RL algorithm achieved a higher AUC value of 0.80, assigning distinct Q-values across four TKI treatment categories. This supports the decision-making process in the web-based ‘EGFR Mutant NSCLC Treatment Advisory System’, where clinicians can input patient-specific data to receive tailored recommendations. 
Conclusions: The RL-based web application shows promise in assisting TKI treatment selection for EGFR-mutant advanced NSCLC patients, underscoring the potential for reinforcement learning to enhance decision-making in oncology care. Full article
(This article belongs to the Section Methods and Technologies Development)
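The idea of assigning distinct Q-values to treatment categories can be illustrated with a tabular, one-step stand-in for the paper's DQN: epsilon-greedy action selection over four "TKI category" arms in two coarse patient states. The states, reward probabilities, and all hyperparameters below are invented for illustration:

```python
import random

random.seed(4)

# state -> per-action probability of "response" (purely illustrative)
REWARD_P = {
    "low_NLR":  [0.70, 0.55, 0.40, 0.30],
    "high_NLR": [0.30, 0.40, 0.55, 0.70],
}
STATES, ACTIONS = list(REWARD_P), range(4)
Q = {s: [0.0] * 4 for s in STATES}   # Q-value table (the DQN's role)
N = {s: [0] * 4 for s in STATES}     # visit counts for sample averaging

eps = 0.2  # epsilon-greedy exploration rate
for _ in range(20000):
    s = random.choice(STATES)
    if random.random() < eps:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda i: Q[s][i])
    r = 1.0 if random.random() < REWARD_P[s][a] else 0.0
    N[s][a] += 1
    Q[s][a] += (r - Q[s][a]) / N[s][a]   # incremental sample average

recommend = {s: max(ACTIONS, key=lambda i: Q[s][i]) for s in STATES}
print(recommend)
```

After training, the table recommends a different action per state, which is the behavior the web advisory system exposes to clinicians (there via a learned network rather than a table).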

24 pages, 5153 KiB  
Article
Enhanced Hyperspectral Forest Soil Organic Matter Prediction Using a Black-Winged Kite Algorithm-Optimized Convolutional Neural Network and Support Vector Machine
by Yun Deng, Lifan Xiao and Yuanyuan Shi
Appl. Sci. 2025, 15(2), 503; https://doi.org/10.3390/app15020503 - 7 Jan 2025
Viewed by 761
Abstract
Soil Organic Matter (SOM) is crucial for soil fertility, and effective detection methods are of great significance for the development of agriculture and forestry. This study uses 206 hyperspectral soil samples from the state-owned Yachang and Huangmian Forest Farms in Guangxi, using the SPXY algorithm to partition the dataset in a 4:1 ratio, to provide an effective spectral data preprocessing method and a novel SOM content prediction model for the study area and similar regions. Three denoising methods (no denoising, Savitzky–Golay filter denoising, and discrete wavelet transform denoising) were combined with nine mathematical transformations (original spectral reflectance (R), first-order differential (1DR), second-order differential (2DR), MSC, SNV, logR, (logR)′, 1/R, and (1/R)′) to form 27 combinations. Through Pearson heatmap analysis and modeling accuracy comparison, the SG-1DR preprocessing combination was found to effectively highlight spectral data features. A CNN-SVM model based on the Black-winged Kite Algorithm (BKA) is proposed. This model leverages the powerful parameter tuning capabilities of BKA, uses CNN for feature extraction, and uses SVM for classification and regression, further improving the accuracy of SOM prediction. The model results are RMSE = 3.042, R2 = 0.93, MAE = 4.601, MARE = 0.1, MBE = 0.89, and PRIQ = 1.436. Full article
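The winning SG-1DR preprocessing (Savitzky-Golay smoothing followed by a first derivative) can be sketched directly from the local polynomial fit that defines the filter. The window length and polynomial order below are illustrative choices, not values taken from the paper, and a noisy sine wave stands in for a reflectance spectrum:

```python
import numpy as np

# Minimal Savitzky-Golay first derivative: fit a quadratic in each
# sliding window and read off its slope at the window center.
def savgol_deriv(y, window=31, order=2):
    half = window // 2
    t = np.arange(-half, half + 1)
    A = np.vander(t, order + 1, increasing=True)   # columns [1, t, t^2]
    deriv_weights = np.linalg.pinv(A)[1]           # slope term of the local fit
    pad = np.pad(y, half, mode="edge")
    return np.array([deriv_weights @ pad[i:i + window] for i in range(len(y))])

# Sanity check on a smooth "spectrum": the derivative of sin is cos.
x = np.linspace(0, 2 * np.pi, 500)
dx = x[1] - x[0]
noisy = np.sin(x) + np.random.default_rng(5).normal(0, 0.02, x.size)
d1 = savgol_deriv(noisy) / dx                      # per-sample slope -> per-x slope
err = np.abs(d1[20:-20] - np.cos(x[20:-20])).max()  # ignore edge effects
print(err)
```

The point of the derivative step is that it suppresses slowly varying baseline offsets while the polynomial smoothing keeps the noise amplification of plain differencing in check.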

18 pages, 3386 KiB  
Article
Adaptive Filtering for Channel Estimation in RIS-Assisted mmWave Systems
by Shuying Shao, Tiejun Lv and Pingmu Huang
Sensors 2025, 25(2), 297; https://doi.org/10.3390/s25020297 - 7 Jan 2025
Viewed by 529
Abstract
The advent of millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) systems, coupled with reconfigurable intelligent surfaces (RISs), presents a significant opportunity for advancing wireless communication technologies. This integration enhances data transmission rates and broadens coverage areas, but challenges in channel estimation (CE) remain due to the limitations of the signal processing capabilities of RIS. To address this, we propose an adaptive channel estimation framework comprising two algorithms: log-sum normalized least mean squares (Log-Sum NLMS) and hybrid normalized least mean squares-normalized least mean fourth (Hybrid NLMS-NLMF). These algorithms leverage the sparse nature of mmWave channels to improve estimation accuracy. The Log-Sum NLMS algorithm incorporates a log-sum penalty in its cost function for faster convergence, while the Hybrid NLMS-NLMF employs a mixed error function for better performance across varying signal-to-noise ratio (SNR) conditions. Our analysis also reveals that both algorithms have lower computational complexity compared to existing methods. Extensive simulations validate our findings, with results illustrating the performance of the proposed algorithms under different parameters, demonstrating significant improvements in channel estimation accuracy and convergence speed over established methods, including NLMS, sparse exponential forgetting window least mean square (SEFWLMS), and sparse hybrid adaptive filtering algorithms (SHAFA). Full article
(This article belongs to the Section Communications)
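The base NLMS recursion that both proposed algorithms build on can be sketched as a system-identification loop recovering a sparse "channel" from input/output pairs. The channel taps, step size, and noise level below are invented; the paper's Log-Sum NLMS adds a log-sum sparsity penalty to this same update, which is omitted here for brevity:

```python
import numpy as np

rng = np.random.default_rng(6)

# Sparse channel, as in mmWave settings: only two nonzero taps.
h_true = np.zeros(16)
h_true[[2, 9]] = [1.0, -0.5]

n_samples, mu, delta = 4000, 0.5, 1e-8
x = rng.normal(0, 1, n_samples)
h_hat = np.zeros_like(h_true)
for k in range(16, n_samples):
    u = x[k - 16:k][::-1]                   # regressor, most recent sample first
    d = h_true @ u + 0.01 * rng.normal()    # desired output with slight noise
    e = d - h_hat @ u                       # a-priori error
    h_hat += mu * e * u / (u @ u + delta)   # normalized LMS update

print(np.abs(h_hat - h_true).max())
```

Normalizing by the regressor energy is what keeps the step size well behaved across varying input power, which is why NLMS (rather than plain LMS) is the natural baseline for the SNR-varying conditions studied in the paper.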

13 pages, 4102 KiB  
Article
Dynamic Stability for Seismic-Excited Earth Retaining Structures Following a Nonlinear Criterion
by Jingshu Xu, Jiahui Deng, Zemian Wang, Linghao Qi and Yundi Wang
Buildings 2024, 14(12), 4086; https://doi.org/10.3390/buildings14124086 - 23 Dec 2024
Viewed by 499
Abstract
Based on the upper bound limit analysis, the multi-log spiral failure mechanism for earth retaining structures under horizontal seismic loads was constructed, which could introduce the nonlinear strength criterion into stability analysis without any linearization technique. By calculating various external work rates and the internal energy dissipation, the energy balance equation was established, and the active earth pressure formula required for the retaining structure to be in a critical stable state was derived. With the application of a genetic algorithm and particle swarm optimization, the optimal upper bound solutions of active earth pressure coefficients were obtained. The validity of the research results was verified through comparative analysis. This paper provided diagrams of the active earth pressure coefficients required for earth retaining structures to maintain a critical stability state under different parameters. The influences of seismic load, slope inclination angle, soil strength tension cutoff (TC), and the δ/ϕ ratio were investigated. By investigating the design charts, the active earth pressures applicable to practical engineering can be obtained, which provide a theoretical basis for the preliminary design of retaining structures in earthquake-prone areas. Full article
(This article belongs to the Special Issue Dynamic Response of Civil Engineering Structures under Seismic Loads)

15 pages, 33665 KiB  
Article
Non-Invasive Monitoring of Cerebral Edema Using Ultrasonic Echo Signal Features and Machine Learning
by Shuang Yang, Yuanbo Yang and Yufeng Zhou
Brain Sci. 2024, 14(12), 1175; https://doi.org/10.3390/brainsci14121175 - 23 Nov 2024
Viewed by 873
Abstract
Objectives: Cerebral edema, a prevalent consequence of brain injury, is associated with significant mortality and disability. Timely diagnosis and monitoring are crucial for patient prognosis. There is a pressing clinical demand for a real-time, non-invasive cerebral edema monitoring method. Ultrasound methods are prime candidates for such investigations due to their non-invasive nature. Methods: Acute cerebral edema was introduced in rats by permanently occluding the left middle cerebral artery (MCA). Ultrasonic echo signals were collected at nine time points over a 24 h period to extract features from both the time and frequency domains. Concurrently, histomorphological changes were examined. We utilized support vector machine (SVM), logistic regression (LogR), decision tree (DT), and random forest (RF) algorithms for classifying cerebral edema types, and SVM, RF, linear regression (LR), and feedforward neural network (FNNs) for predicting the cerebral infarction volume ratio. Results: The integration of 16 ultrasonic features associated with cerebral edema development with the RF model enabled effective classification of cerebral edema types, with a high accuracy rate of 97.9%. Additionally, it provided an accurate prediction of the cerebral infarction volume ratio, with an R2 value of 0.8814. Conclusions: Our proposed strategy classifies cerebral edema and predicts the cerebral infarction volume ratio with satisfactory precision. The fusion of ultrasound echo features with machine learning presents a promising non-invasive approach for the monitoring of cerebral edema. Full article
(This article belongs to the Section Neural Engineering, Neuroergonomics and Neurorobotics)

22 pages, 4480 KiB  
Article
Comparing Two Geostatistical Simulation Algorithms for Modelling the Spatial Uncertainty of Texture in Forest Soils
by Gabriele Buttafuoco
Land 2024, 13(11), 1835; https://doi.org/10.3390/land13111835 - 5 Nov 2024
Viewed by 842
Abstract
Uncertainty assessment is an essential part of modeling and mapping the spatial variability of key soil properties, such as texture. The study aimed to compare sequential Gaussian simulation (SGS) and turning bands simulation (TBS) for assessing the uncertainty in unknown values of the textural fractions accounting for their compositional nature. The study area was a forest catchment (1.39 km2) with soils classified as Typic Xerumbrepts and Ultic Haploxeralf. Samples were collected at 135 locations (0.20 m depth) according to a design developed using a spatial simulated annealing algorithm. Isometric log-ratio (ilr) was used to transform the three textural fractions into a two-dimensional real vector of coordinates ilr.1 and ilr.2, then 100 realizations were simulated using SGS and TBS. The realizations obtained by SGS and TBS showed a strong similarity in reproducing the distribution of ilr.1 and ilr.2 with minimal differences in average conditional variances of all grid nodes. The variograms of ilr.1 and ilr.2 coordinates were better reproduced by the realizations obtained by TBS. Similar results in reproducing the texture data statistics by both algorithms of simulation were obtained. The maps of expected values and standard deviations of the three soil textural fractions obtained by SGS and TBS showed no notable visual differences or visual artifacts. The realizations obtained by SGS and TBS showed a strong similarity in reproducing the distribution of isometric log-ratio coordinates (ilr.1 and ilr.2). Overall, their variograms and data were better reproduced by the realizations obtained by TBS. Full article
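The isometric log-ratio step used before simulation can be made concrete for a three-part texture composition (sand, silt, clay). The sketch below uses one standard balance basis for the two ilr coordinates; the specific fractions are illustrative, and the round trip back to the simplex is checked at the end:

```python
import numpy as np

# ilr coordinates for a 3-part composition, using one standard balance
# basis; the simulation then operates on (z1, z2) instead of raw fractions.
def ilr(c):
    c = np.asarray(c, float)
    z1 = np.sqrt(1 / 2) * np.log(c[..., 0] / c[..., 1])
    z2 = np.sqrt(2 / 3) * np.log(np.sqrt(c[..., 0] * c[..., 1]) / c[..., 2])
    return np.stack([z1, z2], axis=-1)

def ilr_inv(z):
    z1, z2 = z[..., 0], z[..., 1]
    # invert the two balances back to centered log-parts, then close to sum 1
    l1 = z1 / np.sqrt(2) + z2 / np.sqrt(6)
    l2 = -z1 / np.sqrt(2) + z2 / np.sqrt(6)
    l3 = -2 * z2 / np.sqrt(6)
    parts = np.exp(np.stack([l1, l2, l3], axis=-1))
    return parts / parts.sum(axis=-1, keepdims=True)

texture = np.array([0.45, 0.35, 0.20])   # sand, silt, clay fractions
z = ilr(texture)
back = ilr_inv(z)
print(z, back)
```

Simulating in ilr coordinates and back-transforming guarantees that every realization is a valid composition (positive fractions summing to 1), which is the "compositional nature" the abstract refers to.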

17 pages, 15627 KiB  
Article
Enhanced Carbon/Oxygen Ratio Logging Interpretation Methods and Applications in Offshore Oilfields
by Wei Zhou, Yaoting Lin, Gang Gao and Peng Wang
Processes 2024, 12(10), 2301; https://doi.org/10.3390/pr12102301 - 21 Oct 2024
Viewed by 782
Abstract
As the development of most domestic and international oilfields progresses, many fields have entered a mature phase characterized by high water cut and high recovery, with water cut levels often exceeding 90%. Carbon/oxygen ratio logging has proven to be an indispensable tool for distinguishing oil layers from water layers in complex environments, especially where salinity is low, unknown, or highly variable. This logging method has become one of the most effective techniques for determining residual oil saturation in cased wells, providing critical insights into the oil–water interface. In this study, we evaluate two key interpretation models for carbon/oxygen ratio logging: the fan chart method and the ratio chart method. We optimize the interpretation parameters in the ratio chart model using an improved genetic algorithm, which significantly enhances interpretation precision. The optimized parameters enable a more seamless integration of logging results with reservoir and conventional logging data, reducing the influence of lithological variations and physical property differences on the measurements. This research establishes a robust theoretical foundation for enhancing the interpretation accuracy of carbon/oxygen ratio logging, which is crucial for effectively identifying water-flooded layers. These advancements provide vital technical support for monitoring oil–water dynamics, optimizing reservoir management, and improving production efficiency in oilfield development. Full article
(This article belongs to the Section Energy Systems)

18 pages, 3731 KiB  
Article
Algorithm Design for an Online Berth Allocation Problem
by Cong Chen, Fanxin Wang, Jiayin Pan, Lang Xu and Hongming Gao
J. Mar. Sci. Eng. 2024, 12(10), 1722; https://doi.org/10.3390/jmse12101722 - 30 Sep 2024
Viewed by 731
Abstract
In this paper, we investigate an online berth allocation problem, where vessels arrive one by one and their information is revealed upon arrival. Our objective is to design online algorithms to minimize the maximum load of all berths (makespan). We first demonstrate that the widely used Greedy algorithm has a very poor theoretical guarantee; specifically, the competitive ratio of the Greedy algorithm for this problem is lower bounded by Ω(logm/loglogm), which increases with the number of berths m. On account of this, we borrow an idea from algorithms for the online strip packing problem and provide a comprehensive theoretical analysis of the Revised Shelf (RS) algorithm as applied to our berth allocation problem. We prove that the competitive ratio of RS for our problem is 5, improving on the original competitive ratio of 6.66 for the online strip packing problem. Through numerical studies, we examine the RS algorithm and Greedy algorithm in an average case. The numerical simulation of competitive ratios reveals distinct advantages for different algorithms depending on job size. For smaller job sizes, the Greedy algorithm emerges as the most efficient, while for medium-sized jobs, the RS algorithm proves to be the most effective. Full article
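The Greedy baseline analyzed in the paper is simply least-loaded assignment: each arriving vessel goes to the berth with the smallest current load. The sketch below treats vessels as scalar handling loads (a simplification of the paper's setting, where the shelf-based RS algorithm also manages berth space):

```python
# Online Greedy for makespan: assign each arriving job to the currently
# least-loaded berth. This is the rule shown to be Omega(log m / log log m)
# competitive in the paper; RS improves on it with a shelf construction.
def greedy_assign(jobs, m):
    loads = [0.0] * m
    assignment = []
    for job in jobs:                     # jobs are revealed one by one
        b = min(range(m), key=loads.__getitem__)
        loads[b] += job
        assignment.append(b)
    return loads, assignment

loads, assignment = greedy_assign([3, 1, 4, 1, 5, 9, 2, 6], m=3)
print(max(loads), loads)   # makespan and per-berth loads
```

On this small instance Greedy reaches a makespan of 12 against a total load of 31 across 3 berths; the paper's lower bound shows that on adversarial sequences this simple rule degrades as the number of berths grows.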

12 pages, 2659 KiB  
Communication
Predicting Sodium-Ion Battery Performance through Surface Chemistry Analysis and Textural Properties of Functionalized Hard Carbons Using AI
by Walter M. Warren-Vega, Ana I. Zárate-Guzmán, Francisco Carrasco-Marín, Guadalupe Ramos-Sánchez and Luis A. Romero-Cano
Materials 2024, 17(17), 4193; https://doi.org/10.3390/ma17174193 - 24 Aug 2024
Viewed by 1414
Abstract
Traditionally, the performance of sodium-ion batteries has been predicted based on a single characteristic of the electrodes and its relationship to specific capacity increase. However, recent studies have shown that this hypothesis is incorrect because their performance depends on multiple physical and chemical variables. Due to the above, the present communication shows machine learning as an innovative strategy to predict the performance of functionalized hard carbon anodes prepared from grapefruit peels. In this sense, a three-layer feed-forward Artificial Neural Network (ANN) was designed. The inputs used to feed the ANN were the physicochemical characteristics of the materials, which consisted of mercury intrusion porosimetry data (SHg and average pore), elemental analysis (C, H, N, S), ID/IG ratio obtained from RAMAN studies, and X-ray photoemission spectroscopy data of the C1s, N1s, and O1s regions. In addition, two more inputs were added: the cycle number and the applied C-rate. The ANN architecture consisted of a first hidden layer with a sigmoid transfer function and a second layer with a log-sigmoid transfer function. Finally, a sigmoid transfer function was used in the output layer. Each layer had 10 neurons. The training algorithm used was Bayesian regularization. The results show that the proposed ANN correctly predicts (R2 > 0.99) the performance of all materials. The proposed strategy provides critical insights into the variables that must be controlled during material synthesis to optimize the process and accelerate progress in developing tailored materials. Full article

12 pages, 2569 KiB  
Article
Improved Non-Negative Matrix Factorization-Based Noise Reduction of Leakage Acoustic Signals
by Yongsheng Yu, Yongwen Hu, Yingming Wang and Zhuoran Cai
Sensors 2024, 24(16), 5146; https://doi.org/10.3390/s24165146 - 9 Aug 2024
Viewed by 965
Abstract
The detection of gas leaks using acoustic signals is often compromised by environmental noise, which significantly impacts the accuracy of subsequent leak identification. Current noise reduction algorithms based on non-negative matrix factorization (NMF) typically utilize the Euclidean distance as their objective function, which can exacerbate noise anomalies. Moreover, these algorithms predominantly rely on simple techniques like Wiener filtering to estimate the amplitude spectrum of pure signals. This approach, however, falls short in accurately estimating the amplitude spectrum of non-stationary signals. Consequently, this paper proposes an improved non-negative matrix factorization (INMF) noise reduction algorithm that enhances the traditional NMF by refining both the objective function and the amplitude spectrum estimation process for reconstructed signals. The improved algorithm replaces the conventional Euclidean distance with the Kullback–Leibler (KL) divergence and incorporates noise and sparse constraint terms into the objective function to mitigate the adverse effects of signal amplification. Unlike traditional methods such as Wiener filtering, the proposed algorithm employs an adaptive Minimum Mean-Square Error-Log Spectral Amplitude (MMSE-LSA) method to estimate the amplitude spectrum of non-stationary signals adaptively across varying signal-to-noise ratios. Comparative experiments demonstrate that the INMF algorithm significantly outperforms existing methods in denoising leakage acoustic signals. Full article
(This article belongs to the Section Physical Sensors)
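The baseline the INMF method modifies, multiplicative-update NMF under the Kullback-Leibler divergence (the classic Lee-Seung form), can be sketched in a few lines. In the paper's setting V would be a magnitude spectrogram of the leak signal; here it is a synthetic exactly-rank-4 nonnegative matrix, and the noise/sparsity constraint terms of INMF are not included:

```python
import numpy as np

rng = np.random.default_rng(7)

# Multiplicative updates for KL-divergence NMF: V ~ W @ H with W, H >= 0.
def nmf_kl(V, rank, iters=300, eps=1e-10):
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    for _ in range(iters):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0, keepdims=True).T + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1, keepdims=True).T + eps)
    return W, H

V = rng.random((30, 4)) @ rng.random((4, 40))   # nonnegative, exactly rank 4
W, H = nmf_kl(V, rank=4)
rel_err = np.abs(V - W @ H).mean() / V.mean()
print(rel_err)
```

The multiplicative form keeps W and H nonnegative by construction, which is why NMF variants are natural for magnitude spectra; the INMF contribution is to change this objective (KL plus noise and sparsity terms) and to replace Wiener filtering with MMSE-LSA when reconstructing the amplitude spectrum.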

9 pages, 236 KiB  
Article
Algorithms for Densest Subgraphs of Vertex-Weighted Graphs
by Zhongling Liu, Wenbin Chen, Fufang Li, Ke Qi and Jianxiong Wang
Mathematics 2024, 12(14), 2206; https://doi.org/10.3390/math12142206 - 14 Jul 2024
Viewed by 815
Abstract
Finding the densest subgraph has tremendous potential in computer vision and social network research, among other domains. In computer vision, it can demonstrate essential structures, and in social network research, it aids in identifying closely associated communities. The densest subgraph problem is finding a subgraph with maximum mean density. However, most densest subgraph-finding algorithms are based on edge-weighted graphs, where edge weights can only represent a single value dimension, whereas practical applications involve multiple dimensions. To resolve the challenge, we propose two algorithms for the densest subgraph problem in a vertex-weighted graph. First, we present an exact algorithm that builds upon Goldberg’s original algorithm. Through theoretical exploration and analysis, we rigorously verify our proposed algorithm’s correctness and confirm that it runs in polynomial time, with a temporal complexity of O(n(n + m)log²n). Our approach can be applied to identify closely related subgroups demonstrating the maximum average density in real-life situations. Additionally, we present an approximation algorithm with a guaranteed approximation ratio of 2. In conclusion, our contributions enrich theoretical foundations for addressing the densest subgraph problem. Full article
(This article belongs to the Special Issue Mathematical and Computing Sciences for Artificial Intelligence)
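For intuition on the 2-approximation, the classic Charikar-style greedy peeling for the unweighted density |E|/|V| is sketched below: repeatedly remove a minimum-degree vertex and keep the densest intermediate subgraph. This is the standard edge-density version, shown here for illustration; the paper adapts these ideas to vertex-weighted density:

```python
from collections import defaultdict

# Greedy peeling: a 2-approximation for the densest subgraph under
# average-degree density |E| / |V|.
def densest_subgraph(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive = set(adj)
    m = len(edges)
    best_density, best_set = 0.0, set(alive)
    while alive:
        density = m / len(alive)
        if density > best_density:
            best_density, best_set = density, set(alive)
        u = min(alive, key=lambda x: len(adj[x]))   # peel a minimum-degree vertex
        for w in adj[u]:
            adj[w].discard(u)
        m -= len(adj[u])
        adj[u].clear()
        alive.discard(u)
    return best_density, best_set

# A 4-clique with a pendant path: the clique (6 edges / 4 vertices) wins.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
density, nodes = densest_subgraph(edges)
print(density, sorted(nodes))
```

The peeling order certifies the approximation: at the moment the optimal subgraph is first touched, its minimum degree bounds its density, giving the factor-2 guarantee.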