
Search Results (12,529)

Search Parameters:
Keywords = benchmark

37 pages, 1360 KiB  
Article
Efficient Algorithms for Range Mode Queries in the Big Data Era
by Christos Karras, Leonidas Theodorakopoulos, Aristeidis Karras and George A. Krimpas
Information 2024, 15(8), 450; https://doi.org/10.3390/info15080450 - 30 Jul 2024
Abstract
The mode is a fundamental descriptive statistic in data analysis, signifying the most frequent element within a dataset. The range mode query (RMQ) problem expands upon this concept by preprocessing an array A containing n natural numbers, allowing the swift determination of the mode within any subarray A[a..b] and thus optimizing the computation of the mode over a multitude of range queries. The efficiency of this process is of considerable importance in data analytics and retrieval across diverse platforms, including but not limited to online shopping experiences and financial auditing systems. This study is dedicated to exploring and benchmarking different algorithms and data structures designed to tackle the RMQ problem. The goal is not only to address the theoretical aspects of RMQ but also to provide practical solutions that can be applied in real-world scenarios, such as improving an online shopping platform’s understanding of customer preferences and enhancing the efficiency and effectiveness of data retrieval in large datasets.
(This article belongs to the Special Issue Multidimensional Data Structures and Big Data Management)
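The RMQ problem described above can be illustrated with a naive, preprocessing-free baseline. This is a hypothetical sketch, not one of the paper's optimized structures: each query scans its subarray, which is exactly the per-query cost that preprocessing-based solutions are designed to reduce.

```python
from collections import Counter

def range_mode(A, a, b):
    """Naive range mode query: mode of A[a..b] (inclusive) in O(b - a) time.
    Preprocessing-based RMQ structures answer the same query much faster."""
    counts = Counter(A[a:b + 1])
    # most_common(1) returns the highest-count element; ties break arbitrarily.
    return counts.most_common(1)[0][0]

A = [3, 1, 3, 2, 2, 3, 1]
mode_all = range_mode(A, 0, 6)   # 3 occurs three times in the whole array
mode_sub = range_mode(A, 1, 4)   # within [1, 3, 2, 2], 2 occurs twice
```

Answering q queries this way costs O(qn) in the worst case, which is the motivation for the preprocessed data structures the study benchmarks.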

19 pages, 927 KiB  
Article
Role of the Biogenic Carbon Physicochemical Properties in the Manufacturing and Industrial Transferability of Mill Scale-Based Self-Reducing Briquettes
by Gianluca Dall’Osto, Davide Mombelli, Sara Scolari and Carlo Mapelli
Metals 2024, 14(8), 882; https://doi.org/10.3390/met14080882 - 30 Jul 2024
Abstract
The recovery of iron contained in mill scale rather than iron ore can be considered a promising valorization pathway for this waste, especially if carried out through reduction using biogenic carbon sources. Nevertheless, the physicochemical properties of the latter may hinder the industrial transferability of such a pathway. In this work, the mechanical and metallurgical behavior of self-reducing briquettes composed of mill scale and four biogenic carbons (with increasing ratios of fixed carbon to volatile matter and ash) was studied. Each sample achieved mechanical performance above the benchmarks established for application in metallurgical furnaces, although the presence of alkali compounds in the ash negatively affected the water resistance of the briquettes. In terms of metallurgical performance, although agglomeration successfully exploited reduction by volatiles from 750 °C, full iron recovery and slag separation required a fixed carbon content higher than 6.93% and a heat treatment temperature of 1400 °C. Finally, the presence of Ca, Al, and Si compounds in the ash was essential for the creation of a slag compatible with steelmaking processes and capable of retaining both phosphorus and sulfur, hence protecting the recovered iron.
(This article belongs to the Special Issue Electric Arc Furnace and Converter Steelmaking)
22 pages, 5585 KiB  
Article
Oceanic Mesoscale Eddy Fitting Using Legendre Polynomial Surface Fitting Model Based on Along-Track Sea Level Anomaly Data
by Chunzheng Kong, Yibo Zhang, Jie Shi and Xianqing Lv
Remote Sens. 2024, 16(15), 2799; https://doi.org/10.3390/rs16152799 - 30 Jul 2024
Abstract
Exploring the spatial distribution of sea surface height involves two primary methodologies: utilizing gridded reanalysis data after secondary processing, or directly fitting along-track data. While processing gridded reanalysis data may entail information loss, existing direct fitting methods have limitations. Therefore, there is a pressing need for novel direct fitting approaches to enhance efficiency and accuracy in sea surface height fitting. This study demonstrates the viability of Legendre polynomial surface fitting, benchmarked against bicubic quasi-uniform B-spline surface fitting, a well-established direct fitting method. Although bicubic quasi-uniform B-spline surface fitting exhibits slightly superior accuracy under identical order combinations, Legendre polynomial surface fitting offers a simpler structure and enhanced controllability. However, significantly expanding the spatial scope of the fit often degrades fitting efficacy. To address this, the current research achieves precise fitting of sea surface height across expansive spatial ranges through a regional stitching methodology.
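The core technique named above, least-squares fitting of a tensor-product Legendre polynomial surface to scattered samples, can be sketched with NumPy's Legendre utilities. The surface, point count, and degrees below are illustrative assumptions standing in for real along-track sea level anomaly data, not the study's configuration.

```python
import numpy as np
from numpy.polynomial.legendre import legvander2d, legval2d

rng = np.random.default_rng(0)
# Scattered sample points on [-1, 1]^2 standing in for along-track data.
x, y = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
z = np.sin(np.pi * x) * np.cos(np.pi * y)       # synthetic "sea surface height"

deg = (6, 6)                                    # Legendre orders in x and y
V = legvander2d(x, y, deg)                      # design matrix, one row per point
coef, *_ = np.linalg.lstsq(V, z, rcond=None)    # least-squares coefficients
c = coef.reshape(deg[0] + 1, deg[1] + 1)

z_hat = legval2d(x, y, c)                       # evaluate the fitted surface
rmse = float(np.sqrt(np.mean((z_hat - z) ** 2)))
```

Raising `deg` tightens the fit but, as the abstract notes for large spatial domains, a single high-order surface becomes harder to control, which motivates the regional stitching approach.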
14 pages, 1620 KiB  
Article
Modelling Student Retention in Tutorial Classes with Uncertainty—A Bayesian Approach to Predicting Attendance-Based Retention
by Eli Nimy and Moeketsi Mosia
Educ. Sci. 2024, 14(8), 830; https://doi.org/10.3390/educsci14080830 - 30 Jul 2024
Abstract
A Bayesian additive regression tree (BART) is a recent statistical method that blends ensemble learning with nonparametric regression. BART is constructed using a Bayesian approach, which provides the benefit of model-based prediction uncertainty, enhancing the reliability of predictions. This study proposes a BART model with a binomial likelihood to predict the percentage of students retained in tutorial classes using attendance data sourced from a South African university database. The data consist of tutorial dates and encoded (anonymized) student numbers, which play a crucial role in deriving retention variables such as cohort age, active students, and retention rates. The proposed model is evaluated and benchmarked against the random forest regressor (RFR). The BART model achieved, on average, 20% higher predictive performance than the RFR across six error metrics, with an R-squared score of 0.9414. Furthermore, the study demonstrates the utility of the highest density interval (HDI) provided by the BART model, which can help in determining best- and worst-case scenarios for student retention rate estimates. The significance of this study extends to multiple stakeholders within the educational sector: educational institutions, administrators, and policymakers can gain insights into how future tutorship programme retention rates can be predicted using predictive models. Furthermore, the foresight provided by predicted retention rates can aid strategic resource allocation, facilitating more informed planning and budgeting for tutorship programmes.
(This article belongs to the Special Issue Higher Education Research: Challenges and Practices)
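Fitting the BART model itself requires a specialized library, but the highest density interval highlighted in the abstract can be computed from any vector of posterior samples. A minimal sketch follows; the 95% level and the synthetic normal samples of a retention rate are illustrative assumptions.

```python
import numpy as np

def hdi(samples, cred=0.95):
    """Narrowest interval containing a `cred` fraction of the samples."""
    s = np.sort(np.asarray(samples))
    n = len(s)
    k = int(np.ceil(cred * n))            # points the interval must cover
    widths = s[k - 1:] - s[: n - k + 1]   # width of every length-k window
    i = int(np.argmin(widths))            # the narrowest window wins
    return float(s[i]), float(s[i + k - 1])

# Hypothetical posterior samples of a retention rate centred near 0.8.
rng = np.random.default_rng(1)
lo, hi = hdi(rng.normal(0.8, 0.05, 10_000))
```

Unlike an equal-tailed interval, the HDI is the shortest interval at a given credibility level, which is what makes it a natural "best-case / worst-case" summary for a retention rate estimate.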
30 pages, 7189 KiB  
Article
Performance Assessment of an Integrated Low-Approach Low-Temperature Open Cooling Tower with Radiant Cooling and Displacement Ventilation for Space Conditioning in Temperate Climates
by Mehdi Nasrabadi and Donal Finn
Energies 2024, 17(15), 3763; https://doi.org/10.3390/en17153763 - 30 Jul 2024
Abstract
Cooling towers, by producing chilled water and by integration with radiant and displacement cooling systems, offer a possible alternative method for space conditioning of office buildings in temperate climates. The present study examines the operational feasibility of a cooling tower in conjunction with a radiant and displacement ventilation cooling system for office conditioning in four temperate climates: cool and semi-humid (Birmingham, UK), cool and dry (Helsinki, FI), warm and humid (Paris, FR) and warm and dry (Prague, CZ). The system is capable of producing chilled water between 14 and 20 °C, with low approach tower temperatures (1–3 K). A mathematical model of the cooling tower system was developed and integrated with an office building energy simulation model. Using the integrated simulation model, the assessment was carried out based on ASHRAE design day specifications, as well as a complete cooling-season analysis. Moreover, the performance of the system was benchmarked against a variable-air-volume air conditioning system: as the chiller COP (coefficient of performance) varies from 2.75 to 6.5, energy savings range from 62% to 37% in Paris, 56% to 30% in Prague, 52% to 28% in Helsinki and 45% to 13% in Birmingham.
(This article belongs to the Section G: Energy and Buildings)
25 pages, 1985 KiB  
Article
Prediction of Sea Level Using Double Data Decomposition and Hybrid Deep Learning Model for Northern Territory, Australia
by Nawin Raj, Jaishukh Murali, Lila Singh-Peterson and Nathan Downs
Mathematics 2024, 12(15), 2376; https://doi.org/10.3390/math12152376 - 30 Jul 2024
Abstract
Sea level rise (SLR) attributed to the melting of ice caps and thermal expansion of seawater is of great global significance to vast populations of people residing along the world’s coastlines. The extent of SLR’s impact on physical coastal areas is determined by multiple factors such as geographical location, coastal structure, wetland vegetation and related oceanic changes. For coastal communities at risk of inundation and coastal erosion due to SLR, the modelling and projection of future sea levels can provide the information necessary to prepare and adapt to gradual sea level rise over several years. In the following study, a new model for predicting future sea levels is presented, which focusses on two tide gauge locations (Darwin and Milner Bay) in the Northern Territory (NT), Australia. Historical data from the Australian Bureau of Meteorology (BOM) from 1990 to 2022 are used for data training and prediction using artificial intelligence models and computation of a mean sea level (MSL) linear projection. The study employs a new double data decomposition approach using Multivariate Variational Mode Decomposition (MVMD) and Successive Variational Mode Decomposition (SVMD) with the dimensionality reduction technique of Principal Component Analysis (PCA) for data modelling using four artificial intelligence models (Support Vector Regression (SVR), Adaptive Boosting Regressor (AdaBoost), Multilayer Perceptron (MLP), and Convolutional Neural Network–Bidirectional Gated Recurrent Unit (CNN-BiGRU)). It proposes a deep learning hybrid CNN-BiGRU model for sea level prediction, benchmarked against SVR, AdaBoost, and MLP. The MVMD-SVMD-CNN-BiGRU hybrid models achieved the highest performance values of 0.9979 (d), 0.996 (NS), 0.9409 (L) and 0.998 (d), 0.9959 (NS), 0.9413 (L) for Milner Bay and Darwin, respectively. They also attained the lowest error values of 0.1016 (RMSE), 0.0782 (MABE), 2.3699 (RRMSE), and 2.4123 (MAPE) for Darwin and 0.0248 (RMSE), 0.0189 (MABE), 1.9901 (RRMSE), and 1.7486 (MAPE) for Milner Bay. The MSL trend analysis showed a rise of 6.1 ± 1.1 mm and 5.6 ± 1.5 mm for Darwin and Milner Bay, respectively, from 1990 to 2022.
(This article belongs to the Special Issue Advanced Computational Intelligence)
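The error metrics quoted above can be reproduced for any prediction vector. Exact definitions vary between papers, in particular how RRMSE is normalized, so the following uses one common convention rather than necessarily the study's exact formulas.

```python
import numpy as np

def error_metrics(y, y_hat):
    """RMSE, MABE (mean absolute error), and percentage RRMSE / MAPE."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    err = y - y_hat
    rmse = np.sqrt(np.mean(err ** 2))
    mabe = np.mean(np.abs(err))
    rrmse = 100 * rmse / np.mean(y)         # RMSE relative to the mean level, %
    mape = 100 * np.mean(np.abs(err / y))   # mean absolute percentage error, %
    return rmse, mabe, rrmse, mape

# Toy check with an obvious error pattern (values are illustrative only).
rmse, mabe, rrmse, mape = error_metrics([2.0, 4.0, 6.0], [2.0, 5.0, 6.0])
```

Because RRMSE and MAPE are scale-free, they allow the Darwin and Milner Bay gauges to be compared directly even though their absolute sea level variability differs.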
25 pages, 3246 KiB  
Article
Weighted Attribute-Based Proxy Re-Encryption Scheme with Distributed Multi-Authority Attributes
by Wenlong Yi, Chuang Wang, Sergey Kuzmin, Igor Gerasimov and Xiangping Cheng
Sensors 2024, 24(15), 4939; https://doi.org/10.3390/s24154939 - 30 Jul 2024
Abstract
Existing attribute-based proxy re-encryption schemes suffer from issues such as complex access policies, large ciphertext storage space consumption, and excessive authority concentrated in the authorization center, leading to weak security and controllability of data sharing in cloud storage. This study proposes a Weighted Attribute Authority Multi-Authority Proxy Re-Encryption (WAMA-PRE) scheme that introduces attribute weights to elevate the expression of access policies from binary to multi-valued, simplifying policies and reducing ciphertext storage space. Simultaneously, the multiple attribute authorities and the authorization center construct a joint key, reducing reliance on a single authorization center. The proposed distributed attribute authority network enhances the anti-attack capability of cloud storage. Experimental results show that introducing attribute weights can reduce ciphertext storage space by 50%, proxy re-encryption saves 63% of the time required by repeated encryption, and the joint key construction time is only 1% of that of the benchmark scheme. Security analysis proves that WAMA-PRE achieves CPA security under the decisional q-parallel BDHE assumption in the random oracle model. This study provides an effective solution for secure data sharing in cloud storage.
(This article belongs to the Section Internet of Things)
19 pages, 3656 KiB  
Article
Integrating Feature Selection with Machine Learning for Accurate Reservoir Landslide Displacement Prediction
by Qi Ge, Jingyong Wang, Cheng Liu, Xiaohong Wang, Yiyan Deng and Jin Li
Water 2024, 16(15), 2152; https://doi.org/10.3390/w16152152 - 30 Jul 2024
Abstract
Accurate prediction of reservoir landslide displacements is crucial for early warning and hazard prevention. Current machine learning (ML) paradigms for predicting landslide displacement demonstrate superior performance, while often relying on various feature engineering techniques, such as decomposition into different temporal lags and feature selection. This study investigates the impact of various feature selection techniques on the performance of ML algorithms for landslide displacement prediction. The Shuping and Baishuihe landslides in China’s Three Gorges Reservoir Area are used to comprehensively benchmark four prevalent ML algorithms: the static models backpropagation neural network (BPNN) and support vector machine (SVM), and the dynamic models long short-term memory (LSTM) and gated recurrent unit (GRU). Each ML model is evaluated under three feature engineering strategies: raw multivariate time series, feature selection via the maximal information coefficient–partial autocorrelation function (MIC-PACF), and feature selection via grey relational analysis–PACF (GRA-PACF). The results demonstrate that appropriate feature selection methods can significantly improve the performance of static ML models. In contrast, dynamic models effectively leverage their inherent capability to capture temporal dynamics within raw multivariate time series, showing only marginal gains from extensive feature engineering compared to using no feature selection. The optimal feature selection approach varies with the ML model and the specific landslide, highlighting the importance of case-specific assessment. The findings offer guidance on integrating feature selection techniques with different machine learning models to maximize the robustness and generalizability of data-driven landslide displacement prediction frameworks.
(This article belongs to the Special Issue Rainfall-Induced Landslides and Natural Geohazards)

31 pages, 4735 KiB  
Article
Advanced State of Charge Estimation Using Deep Neural Network, Gated Recurrent Unit, and Long Short-Term Memory Models for Lithium-Ion Batteries under Aging and Temperature Conditions
by Saad El Fallah, Jaouad Kharbach, Jonas Vanagas, Živilė Vilkelytė, Sonata Tolvaišienė, Saulius Gudžius, Artūras Kalvaitis, Oumayma Lehmam, Rachid Masrour, Zakia Hammouch, Abdellah Rezzouk and Mohammed Ouazzani Jamil
Appl. Sci. 2024, 14(15), 6648; https://doi.org/10.3390/app14156648 - 30 Jul 2024
Abstract
Accurate estimation of the state of charge (SoC) of lithium-ion batteries is crucial for battery management systems, particularly in electric vehicle (EV) applications where real-time monitoring ensures safe and robust operation. This study introduces three advanced algorithms to estimate the SoC: deep neural network (DNN), gated recurrent unit (GRU), and long short-term memory (LSTM). The DNN, GRU, and LSTM models are trained and validated using laboratory data from a lithium-ion 18650 battery and simulation data from Matlab/Simulink for a LiCoO2 battery cell. These models are designed to account for varying temperatures during charge/discharge cycles and the effects of battery aging due to cycling. This paper is the first to estimate the SoC with a deep neural network using a variable current profile that provides the SoC curve during both the charge and discharge phases. The DNN model is implemented in Matlab/Simulink, featuring customizable activation functions, multiple hidden layers, and a variable number of neurons per layer, thus providing flexibility and robustness in the SoC estimation. This approach uniquely integrates temperature and aging effects into the input features, setting it apart from existing methodologies that typically focus only on voltage, current, and temperature. The performance of the DNN model is benchmarked against the GRU and LSTM models, demonstrating superior accuracy with a maximum error of less than 2.5%. This study highlights the effectiveness of the DNN algorithm in providing a reliable SoC estimation under diverse operating conditions, showcasing its potential for enhancing battery management in EV applications.

38 pages, 6204 KiB  
Article
The OX Optimizer: A Novel Optimization Algorithm and Its Application in Enhancing Support Vector Machine Performance for Attack Detection
by Ahmad K. Al Hwaitat and Hussam N. Fakhouri
Symmetry 2024, 16(8), 966; https://doi.org/10.3390/sym16080966 - 30 Jul 2024
Abstract
In this paper, we introduce a novel optimization algorithm called the OX optimizer, inspired by oxen, animals characterized by their great strength. The OX optimizer is designed to address the challenges posed by complex, high-dimensional optimization problems. Its design embodies a fundamental symmetry between global and local search processes, ensuring a balanced and effective exploration of the solution space and highlighting the algorithm’s innovative contribution to the field of optimization. The OX optimizer has been evaluated on the CEC2022 and CEC2017 IEEE competition benchmark functions. The results demonstrate its superior performance in terms of convergence speed and solution quality compared to existing state-of-the-art algorithms. The algorithm’s robustness and adaptability to various problem landscapes highlight its potential as a powerful tool for solving diverse optimization tasks. Detailed analysis of convergence curves, search history distributions, and sensitivity heatmaps further supports these findings. Furthermore, the OX optimizer has been applied to optimize support vector machines (SVMs), emphasizing parameter selection and feature optimization. We tested it on the NSL-KDD dataset to evaluate its efficacy in an intrusion detection system. The results demonstrate that the OX optimizer significantly enhances SVM performance, facilitating effective exploration of the parameter space.
(This article belongs to the Section Computer)

21 pages, 6530 KiB  
Article
MambaSR: Arbitrary-Scale Super-Resolution Integrating Mamba with Fast Fourier Convolution Blocks
by Jin Yan, Zongren Chen, Zhiyuan Pei, Xiaoping Lu and Hua Zheng
Mathematics 2024, 12(15), 2370; https://doi.org/10.3390/math12152370 - 30 Jul 2024
Abstract
Traditional single image super-resolution (SISR) methods, which focus on integer-scale super-resolution, often require separate training for each scale factor, leading to increased computational resource consumption. In this paper, we propose MambaSR, a novel arbitrary-scale super-resolution approach integrating Mamba with Fast Fourier Convolution Blocks. MambaSR leverages the strengths of the Mamba state-space model to extract long-range dependencies. In addition, Fast Fourier Convolution Blocks are proposed to capture global information in the frequency domain. The experimental results demonstrate that MambaSR achieves superior performance compared to competing methods across various benchmark datasets. Specifically, on the Urban100 dataset, MambaSR outperforms MetaSR by 0.93 dB in PSNR and 0.0203 in SSIM, and on the Manga109 dataset, it achieves an average PSNR improvement of 1.00 dB and an SSIM improvement of 0.0093. These results highlight the efficacy of MambaSR in enhancing image quality for arbitrary-scale super-resolution.
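For reference, PSNR is the decibel-scaled quantity in such comparisons (SSIM is a dimensionless index in [0, 1]). A standard PSNR computation, with a toy image pair as an illustrative assumption, is:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    # Identical images have zero MSE, hence infinite PSNR.
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

ref = np.zeros((8, 8))
noisy = ref + 10.0          # every pixel off by 10 -> MSE = 100
value = psnr(ref, noisy)    # 10 * log10(255^2 / 100), roughly 28.1 dB
```

Because PSNR is logarithmic, a 1 dB gain corresponds to roughly a 21% reduction in mean squared error, which is why sub-1 dB improvements are still considered meaningful.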

16 pages, 677 KiB  
Article
Arabic Lexical Substitution: AraLexSubD Dataset and AraLexSub Pipeline
by Eman Naser-Karajah and Nabil Arman
Data 2024, 9(8), 98; https://doi.org/10.3390/data9080098 - 30 Jul 2024
Abstract
Lexical substitution aims to generate a list of equivalent substitutions (i.e., synonyms) for a target word or phrase in a sentence while preserving the sentence’s meaning, in order to improve writing, enhance language understanding, improve natural language processing models, and handle ambiguity. This task has recently attracted much attention in many languages. Despite the richness of Arabic vocabulary, limited research has been performed on the lexical substitution task due to the lack of annotated data. To bridge this gap, we present AraLexSubD, the first Arabic lexical substitution benchmark dataset, for benchmarking lexical substitution pipelines. AraLexSubD was built manually by eight native Arabic speakers and linguists (six linguist annotators, a doctor, and an economist), who annotated 630 sentences. AraLexSubD covers three domains: general, finance, and medical. It encompasses 2476 substitution candidates ranked according to their semantic relatedness. We also present the first Arabic lexical substitution pipeline, AraLexSub, which uses the AraBERT pre-trained language model. The pipeline consists of several modules: substitute generation, substitute filtering, and candidate ranking. The filtering step shows its effectiveness by achieving an increase of 1.6 in the F1 score on the entire AraLexSubD dataset. Additionally, an error analysis of the experiment is reported. To our knowledge, this is the first study on Arabic lexical substitution.

21 pages, 762 KiB  
Systematic Review
Applications of Smart and Self-Sensing Materials for Structural Health Monitoring in Civil Engineering: A Systematic Review
by Ana Raina Carneiro Vasconcelos, Ryan Araújo de Matos, Mariana Vella Silveira and Esequiel Mesquita
Buildings 2024, 14(8), 2345; https://doi.org/10.3390/buildings14082345 - 29 Jul 2024
Abstract
Civil infrastructures are constantly exposed to environmental effects that can contribute to deterioration. Early detection of damage is crucial to prevent catastrophic failures. Structural Health Monitoring (SHM) systems are essential for ensuring the safety and reliability of structures by continuously monitoring and recording data to identify damage-induced changes. In this context, self-sensing composites, formed by incorporating conductive nanomaterials into a matrix, offer intrinsic sensing capabilities through piezoresistivity and various conduction mechanisms. This paper reviews how SHM with self-sensing materials can be applied to civil infrastructure, highlighting important research articles in the field. The results demonstrate the growing worldwide dissemination of self-sensing materials in civil engineering. Their use in core infrastructure components enhances functionality, safety, and transportation efficiency. Among the nanomaterials added in small proportions to produce self-sensing materials, carbon nanotubes are the most cited and, consequently, the most studied, followed by carbon fiber and steel fiber. The review identifies knowledge gaps, benchmarks technologies, and outlines directions for future research on self-sensing materials.
(This article belongs to the Section Building Materials, and Repair & Renovation)
21 pages, 1214 KiB  
Article
Simulation of China’s Carbon Peak Path Based on Random Forest and Sparrow Search Algorithm–Long Short-Term Memory
by Zhoumu Yang, Xiaoying Wu, Yinan Song and Jiao Pan
Atmosphere 2024, 15(8), 907; https://doi.org/10.3390/atmos15080907 - 29 Jul 2024
Abstract
How to decouple economic growth from carbon dioxide emissions and achieve a low-carbon transformation of the Chinese economy has become an urgent problem. Firstly, the Tapio index is used to identify China’s carbon peak status; then, the Technology Choice Index (TCI) and economic complexity are introduced into a comprehensive factor analysis framework for carbon dioxide emissions. Key influencing factors are identified using random forest and ridge regression. On this basis, a novel sparrow search algorithm–long short-term memory (SSA-LSTM) model, which achieves higher prediction accuracy than models in past studies, is constructed to predict the dynamic evolution trend of carbon dioxide emissions, and, in combination with scenario analysis, the path towards the carbon peak is simulated. The following conclusions are obtained: the benchmark scenario peaks in 2031 at 12.346 billion tons; the low-carbon scenario peaks in 2030 at 11.962 billion tons; and the extensive scenario peaks in 2037 at 13.291 billion tons. Across the six scenarios, energy intensity emerges as the key factor in reducing the peak. These research results provide theoretical support for decision-makers in formulating emission reduction policies and adjusting the carbon peak path.
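The Tapio index used above is the elasticity of emissions with respect to economic output. A minimal computation follows; the example figures are hypothetical, and the 0–0.8 "weak decoupling" band follows Tapio's usual classification (strong decoupling is a negative elasticity under GDP growth).

```python
def tapio_index(co2_prev, co2_curr, gdp_prev, gdp_curr):
    """Tapio decoupling elasticity: % change in CO2 per % change in GDP."""
    d_co2 = (co2_curr - co2_prev) / co2_prev   # relative emissions change
    d_gdp = (gdp_curr - gdp_prev) / gdp_prev   # relative GDP change
    return d_co2 / d_gdp

# Emissions up 2% while GDP grows 6%: elasticity 1/3, i.e. weak decoupling.
e = tapio_index(100.0, 102.0, 1000.0, 1060.0)
```

Tracking this elasticity year by year is what lets the study classify whether China is approaching a peak: an elasticity drifting toward zero or below signals emissions flattening while the economy still grows.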
17 pages, 2648 KiB  
Article
Multi-Feature, Cross Attention-Induced Transformer Network for Hyperspectral and LiDAR Data Classification
by Zirui Li, Runbang Liu, Le Sun and Yuhui Zheng
Remote Sens. 2024, 16(15), 2775; https://doi.org/10.3390/rs16152775 - 29 Jul 2024
Abstract
Transformers have shown remarkable success in modeling sequential data and capturing intricate patterns over long distances. Their self-attention mechanism allows for efficient parallel processing and scalability, making them well-suited for the high-dimensional data in hyperspectral and LiDAR imagery. However, further research is needed on how to more deeply integrate the features of the two modalities in attention mechanisms. In this paper, we propose a novel Multi-Feature Cross Attention-Induced Transformer Network (MCAITN) designed to enhance the classification accuracy of hyperspectral and LiDAR data. The MCAITN integrates the strengths of both data modalities by leveraging a cross-attention mechanism that effectively captures the complementary information between hyperspectral and LiDAR features. By utilizing a transformer-based architecture, the network is capable of learning complex spatial-spectral relationships and long-range dependencies. The cross-attention module facilitates the fusion of multi-source data, improving the network’s ability to discriminate between different land cover types. Extensive experiments conducted on benchmark datasets demonstrate that the MCAITN outperforms state-of-the-art methods in terms of classification accuracy and robustness.
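At its core, the cross-attention fusion described above is scaled dot-product attention in which queries come from one modality and keys/values from the other. A NumPy sketch with hypothetical token shapes (not the MCAITN architecture itself, which adds learned projections and transformer layers):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(Q, K, V):
    """Queries from one modality attend to keys/values from the other."""
    weights = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))  # (n_q, n_kv) weights
    return weights @ V                                 # fused, one row per query

rng = np.random.default_rng(0)
hsi_tokens = rng.normal(size=(16, 32))    # hypothetical hyperspectral tokens
lidar_tokens = rng.normal(size=(9, 32))   # hypothetical LiDAR tokens
fused = cross_attention(hsi_tokens, lidar_tokens, lidar_tokens)
```

Each fused row is a convex combination of LiDAR features weighted by their affinity to one hyperspectral token, which is how complementary elevation information gets injected into the spectral representation.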
