Search Results (68)

Search Parameters:
Keywords = MLPNN

27 pages, 6151 KiB  
Article
Radial Basis Function (RBF) and Multilayer Perceptron (MLP) Comparative Analysis on Building Renovation Cost Estimation: The Case of Greece
by Vasso E. Papadimitriou, Georgios N. Aretoulis and Jason Papathanasiou
Algorithms 2024, 17(9), 390; https://doi.org/10.3390/a17090390 - 2 Sep 2024
Viewed by 534
Abstract
Renovation of buildings has become a major area of development for the construction industry. In the building construction sector, generating a precise and trustworthy cost estimate before building begins is the greatest challenge. Emphasizing the value of using ANN models to forecast the total cost of a building renovation project is the ultimate objective. As a result, building firms may be able to avoid financial losses, provided the discrepancy between projected and actual costs of remodeling works in progress remains small. To address the gap in the research, Greek contractors specializing in building renovations provided a sizable dataset of real project cost data. To build cost prediction ANNs, the collected data had to be organized, assessed, and appropriately encoded. The network was developed, trained, and tested using IBM SPSS Statistics software (version 28.0.0.0). The dependent variable is the final cost. The independent variables are initial cost, estimated completion time, actual completion time, delay time, initial and final demolition-drainage costs, cost of expenses, initial and final plumbing costs, initial and final heating costs, initial and final electrical costs, initial and final masonry costs, initial and final construction costs of plasterboard construction, initial and final cost of bathrooms, initial and final cost of flooring, initial and final cost of frames, initial and final cost of doors, initial and final cost of paint, and initial and final cost of kitchen construction. The first procedure that was employed was the radial basis function (RBF). The efficiency of the RBFNN model was evaluated and analyzed during training and testing, with up to 6% sum of squares error and nearly 0% relative error in the training sample, which accounted for roughly 70% of the total sample. The second procedure implemented was the multi-layer perceptron (MLP).
The efficiency of the MLPNN model was assessed and examined during training and testing; the training sample, which made up around 70% of the overall sample, had a relative error of 0–7% and a sum of squares error ranging from 1% to 5%, confirming specifically the efficacy of RBFNN in calculating the overall cost of renovations. Full article
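As a rough illustration of the two procedures compared above (a generic scikit-learn sketch on synthetic data, not the authors' SPSS setup), KernelRidge with an RBF kernel can stand in for an RBF network alongside an MLP regressor:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 5))          # hypothetical stand-ins for cost predictors
y = X @ np.array([3.0, 1.5, 0.5, 2.0, 1.0]) + 0.05 * rng.normal(size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-style model (kernel ridge with an RBF kernel) vs. an MLP
rbf = KernelRidge(kernel="rbf", alpha=0.1).fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)

mse_rbf = mean_squared_error(y_te, rbf.predict(X_te))
mse_mlp = mean_squared_error(y_te, mlp.predict(X_te))
```

A real replication would use the 30-odd cost variables listed in the abstract and a roughly 70/30 train/test split, as the authors did.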

24 pages, 14032 KiB  
Article
Lake Surface Temperature Predictions under Different Climate Scenarios with Machine Learning Methods: A Case Study of Qinghai Lake and Hulun Lake, China
by Zhenghao Li, Zhijie Zhang, Shengqing Xiong, Wanchang Zhang and Rui Li
Remote Sens. 2024, 16(17), 3220; https://doi.org/10.3390/rs16173220 - 30 Aug 2024
Viewed by 370
Abstract
Accurate prediction of lake surface water temperature (LSWT) is essential for understanding the impacts of climate change on aquatic ecosystems and for guiding environmental management strategies. Predictions of LSWT for two prominent lakes in northern China, Qinghai Lake and Hulun Lake, under various future climate scenarios, were conducted in the present study. Utilizing historical hydrometeorological data and MODIS satellite observations (MOD11A2), we employed three advanced machine learning models—Random Forest (RF), XGBoost, and Multilayer Perceptron Neural Network (MLPNN)—to predict monthly average LSWT across three future climate scenarios (ssp119, ssp245, ssp585) from CMIP6 projections. Through the comparison of training and validation results of the three models across both lake regions, the RF model demonstrated the highest accuracy, with a mean MAE of 0.348 °C and an RMSE of 0.611 °C, making it the most optimal and suitable model for this purpose. With this model, the predicted LSWT for both lakes reveals a significant warming trend in the future, particularly under the high-emission scenario (ssp585). The rate of increase is most pronounced under ssp585, with Hulun Lake showing a rise of 0.55 °C per decade (R2 = 0.72) and Qinghai Lake 0.32 °C per decade (R2 = 0.85), surpassing trends observed under ssp119 and ssp245. These results underscore the vulnerability of lake ecosystems to future climate change and provide essential insights for proactive climate adaptation and environmental management. Full article
(This article belongs to the Section Remote Sensing and Geo-Spatial Science)
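For a sense of how such a model comparison is scored with MAE and RMSE, here is a minimal scikit-learn sketch (Random Forest and MLP on synthetic temperature-like data; the features and coefficients are invented, and XGBoost is omitted to keep the example dependency-free):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(1)
# Hypothetical monthly predictors: air temperature, radiation, wind speed
X = rng.normal(size=(240, 3))
lswt = 10 + 2.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.3, size=240)
X_tr, X_te, y_tr, y_te = X[:180], X[180:], lswt[:180], lswt[180:]

scores = {}
for name, model in [
    ("RF", RandomForestRegressor(n_estimators=200, random_state=1)),
    ("MLP", MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=1)),
]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    scores[name] = (mean_absolute_error(y_te, pred),
                    mean_squared_error(y_te, pred) ** 0.5)  # (MAE, RMSE)
```

The winning model would then be driven with CMIP6 scenario forcings, as the study does, to project future LSWT.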

17 pages, 3229 KiB  
Article
Short-Term Forecasting of Photovoltaic Power Using Multilayer Perceptron Neural Network, Convolutional Neural Network, and k-Nearest Neighbors’ Algorithms
by Kelachukwu Iheanetu and KeChrist Obileke
Optics 2024, 5(2), 293-309; https://doi.org/10.3390/opt5020021 - 18 Jun 2024
Viewed by 557
Abstract
Governments and energy providers all over the world are moving towards the use of renewable energy sources. Solar photovoltaic (PV) energy is one of the providers’ favourite options because it is comparatively cheaper, clean, available, abundant, and comparatively maintenance-free. Although the PV energy source has many benefits, its output power is dependent on continuously changing weather and environmental factors, so there is a need to forecast the PV output power. Many techniques have been employed to predict the PV output power. This work focuses on the short-term forecast horizon of PV output power. Multilayer perceptron (MLP), convolutional neural network (CNN), and k-nearest neighbour (kNN) algorithms have been used singly or in a hybrid (with other algorithms) to forecast solar PV power or global solar irradiance with success. The performances of these three algorithms have been compared with other algorithms singly or in a hybrid (with other methods) but not with one another. This study aims to compare the predictive performance of a number of neural network algorithms in solar PV energy yield forecasting under different weather conditions and showcase their robustness in making predictions in this regard. The performance of MLPNN, CNN, and kNN is compared using solar PV (hourly) data for Grahamstown, Eastern Cape, South Africa. The choice of location is part of the study parameters to provide insight into renewable energy power integration in specific areas in South Africa that may be prone to extreme weather conditions. Our data contain few missing values and data spikes. The kNN algorithm was found to have an RMSE of 4.95% and an MAE of 2.74% at its worst performance, and an RMSE of 1.49% and an MAE of 0.85% at its best performance. It outperformed the others by a good margin, and kNN could serve as a fast, easy, and accurate tool for forecasting solar PV output power.
Considering the performance of the kNN algorithm across the different seasons, this study shows that kNN is a reliable and robust algorithm for forecasting solar PV output power. Full article
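The kNN forecaster at the heart of this result is simple to reproduce in outline; the clear-sky PV curve below is synthetic and purely illustrative:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)
hour = rng.uniform(6, 18, size=500)                 # daylight hours
irradiance = np.sin(np.pi * (hour - 6) / 12)        # simple clear-sky shape
pv_kw = 100 * irradiance + rng.normal(scale=2.0, size=500)  # noisy PV output

X = np.column_stack([hour, irradiance])
knn = KNeighborsRegressor(n_neighbors=5).fit(X[:400], pv_kw[:400])
mae_kw = mean_absolute_error(pv_kw[400:], knn.predict(X[400:]))
```

A real short-term forecast would use lagged irradiance, ambient temperature, and cloud-cover features rather than the two toy inputs here.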

24 pages, 20381 KiB  
Article
Application of Artificial Neural Networks for Prediction of Received Signal Strength Indication and Signal-to-Noise Ratio in Amazonian Wooded Environments
by Brenda S. de S. Barbosa, Hugo A. O. Cruz, Alex S. Macedo, Caio M. M. Cardoso, Filipe C. Fernandes, Leslye E. C. Eras, Jasmine P. L. de Araújo, Gervásio P. S. Calvacante and Fabrício J. B. Barros
Sensors 2024, 24(8), 2542; https://doi.org/10.3390/s24082542 - 16 Apr 2024
Viewed by 1137
Abstract
The presence of green areas in urbanized cities is crucial to reduce the negative impacts of urbanization. However, these areas can influence the signal quality of IoT devices that use wireless communication, such as LoRa technology. Vegetation attenuates electromagnetic waves, interfering with the data transmission between IoT devices, resulting in the need for signal propagation modeling, which considers the effect of vegetation on its propagation. In this context, this research was conducted at the Federal University of Pará, using measurements in a wooded environment composed of the Pau-Mulato species, typical of the Amazon. Two machine learning-based propagation models, GRNN and MLPNN, were developed to consider the effect of Amazonian trees on propagation, analyzing different factors, such as the transmitter’s height relative to the trunk, the beginning of foliage, and the middle of the tree canopy, as well as the LoRa spreading factor (SF) 12, and the co-polarization of the transmitter and receiver antennas. The proposed models demonstrated higher accuracy, achieving values of root mean square error (RMSE) of 3.86 dB and standard deviation (SD) of 3.8614 dB, respectively, compared to existing empirical models like CI, FI, Early ITU-R, COST235, Weissberger, and FITU-R. The significance of this study lies in its potential to boost wireless communications in wooded environments. Furthermore, this research contributes to enhancing more efficient and robust LoRa networks for applications in agriculture, environmental monitoring, and smart urban infrastructure. Full article
(This article belongs to the Special Issue LoRa Communication Technology for IoT Applications)

22 pages, 2747 KiB  
Article
Utilizing Hybrid Machine Learning Techniques and Gridded Precipitation Data for Advanced Discharge Simulation in Under-Monitored River Basins
by Reza Morovati and Ozgur Kisi
Hydrology 2024, 11(4), 48; https://doi.org/10.3390/hydrology11040048 - 4 Apr 2024
Cited by 1 | Viewed by 1851
Abstract
This study addresses the challenge of utilizing incomplete long-term discharge data when using gridded precipitation datasets and data-driven modeling in Iran’s Karkheh basin. The Multilayer Perceptron Neural Network (MLPNN), a rainfall-runoff (R-R) model, was applied, leveraging precipitation data from the Asian Precipitation—Highly Resolved Observational Data Integration Toward Evaluation (APHRODITE), Global Precipitation Climatology Center (GPCC), and Climatic Research Unit (CRU). The MLPNN was trained using the Levenberg–Marquardt algorithm and optimized with the Non-dominated Sorting Genetic Algorithm-II (NSGA-II). Input data were pre-processed through principal component analysis (PCA) and singular value decomposition (SVD). This study explored two scenarios: Scenario 1 (S1) used in situ data for calibration and gridded dataset data for testing, while Scenario 2 (S2) involved separate calibrations and tests for each dataset. The findings reveal that APHRODITE outperformed in S1, with all datasets showing improved results in S2. The best results were achieved with hybrid applications of the S2-PCA-NSGA-II for APHRODITE and S2-SVD-NSGA-II for GPCC and CRU. This study concludes that gridded precipitation datasets, when properly calibrated, significantly enhance runoff simulation accuracy, highlighting the importance of bias correction in rainfall-runoff modeling. It is important to emphasize that this modeling approach may not be suitable in situations where a catchment is undergoing significant changes, whether due to development interventions or the impacts of anthropogenic climate change. This limitation highlights the need for dynamic modeling approaches that can adapt to changing catchment conditions. Full article
(This article belongs to the Special Issue The 10th Anniversary of Hydrology: Inaugurating a New Research Decade)
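The PCA-then-MLP preprocessing chain described above can be sketched as a scikit-learn pipeline; the precipitation and runoff series are synthetic stand-ins, and the NSGA-II optimization step is omitted:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
# Hypothetical gridded precipitation: six noisy copies of one basin signal
base = rng.gamma(2.0, 5.0, size=(365, 1))
precip = base + rng.normal(scale=1.0, size=(365, 6))
runoff = 0.4 * base[:, 0] + rng.normal(scale=0.5, size=365)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=2),                 # compress correlated grid cells
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=3),
)
model.fit(precip[:300], runoff[:300])
r2 = model.score(precip[300:], runoff[300:])   # R² on held-out days
```

The pipeline mirrors the paper's idea that dimensionality reduction over correlated gridded inputs precedes the rainfall-runoff network; SVD could be swapped in via TruncatedSVD.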

40 pages, 6023 KiB  
Article
Mechanical Framework for Geopolymer Gels Construction: An Optimized LSTM Technique to Predict Compressive Strength of Fly Ash-Based Geopolymer Gels Concrete
by Xuyang Shi, Shuzhao Chen, Qiang Wang, Yijun Lu, Shisong Ren and Jiandong Huang
Gels 2024, 10(2), 148; https://doi.org/10.3390/gels10020148 - 16 Feb 2024
Cited by 8 | Viewed by 1610
Abstract
As an environmentally responsible alternative to conventional concrete, geopolymer concrete recycles previously used resources to prepare the cementitious component of the product. The challenging issue with employing geopolymer concrete in the building business is the absence of a standard mix design. According to the chemical composition of its components, this work proposes a thorough system or framework for estimating the compressive strength of fly ash-based geopolymer concrete (FAGC). It could be possible to construct a system for predicting the compressive strength of FAGC using soft computing methods, thereby avoiding the requirement for time-consuming and expensive experimental tests. A complete database of 162 compressive strength datasets was gathered from research papers published between 2000 and 2020 and prepared to develop the proposed models. To address the relationships between inputs and output variables, long short-term memory (LSTM) networks were deployed. The modeling process incorporated 17 variables that affect the CSFAG: percentage of SiO2, percentage of Na2O, percentage of CaO, percentage of Al2O3, percentage of Fe2O3, fly ash (FA), coarse aggregate (CAgg), fine aggregate (FAgg), sodium hydroxide solution (SH), sodium silicate solution (SS), extra water (EW), superplasticizer (SP), SH concentration, percentage of SiO2 in SS, percentage of Na2O in SS, curing time, and curing temperature. The proposed model was compared against several soft computing methods: multi-layer perceptron neural network (MLPNN), Bayesian regularized neural network (BRNN), generalized feed-forward neural network (GFFNN), support vector regression (SVR), decision tree (DT), random forest (RF), and LSTM.
Three main innovations of this study are using the LSTM model for predicting FAGC, optimizing the LSTM model with a new evolutionary algorithm called the marine predators algorithm (MPA), and considering six new inputs in the modeling process: aggregate to total mass ratio, fine aggregate to total aggregate mass ratio, FA SiO2:Al2O3 molar ratio, FA SiO2:Fe2O3 molar ratio, AA Na2O:SiO2 molar ratio, and the sum of SiO2, Al2O3, and Fe2O3 percent in FA. The performance of LSTM-MPA was evaluated against other artificial intelligence models. The R2 and RMSE values of the compared models were as follows: MLPNN (R2 = 0.896, RMSE = 3.745), BRNN (R2 = 0.931, RMSE = 2.785), GFFNN (R2 = 0.926, RMSE = 2.926), SVR-L (R2 = 0.921, RMSE = 3.017), SVR-P (R2 = 0.920, RMSE = 3.291), SVR-S (R2 = 0.934, RMSE = 2.823), SVR-RBF (R2 = 0.916, RMSE = 3.114), DT (R2 = 0.934, RMSE = 2.711), RF (R2 = 0.938, RMSE = 2.892), LSTM (R2 = 0.9725, RMSE = 1.7816), LSTM-MPA (R2 = 0.9940, RMSE = 0.8332), and LSTM-PSO (R2 = 0.9804, RMSE = 1.5221). Therefore, the proposed LSTM-MPA model can be employed as a reliable and accurate model for predicting CSFAG. Notably, the results demonstrated the significance and influence of fly ash and sodium silicate solution chemical compositions on the compressive strength of FAGC. These variables could adequately present variations in the best mix designs discovered in earlier investigations. The suggested approach may also save time and money by accurately estimating the compressive strength of FAGC with low calcium content. Full article
(This article belongs to the Special Issue Gel Formation and Processing Technologies for Material Applications)

28 pages, 5694 KiB  
Article
A Multi-Output Regression Model for Energy Consumption Prediction Based on Optimized Multi-Kernel Learning: A Case Study of Tin Smelting Process
by Zhenglang Wang, Zao Feng, Zhaojun Ma and Jubo Peng
Processes 2024, 12(1), 32; https://doi.org/10.3390/pr12010032 - 22 Dec 2023
Cited by 2 | Viewed by 1495
Abstract
Energy consumption forecasting plays an important role in energy management, conservation, and optimization in manufacturing companies. Aiming at the tin smelting process with multiple types of energy consumption and a strong coupling with energy consumption, the traditional prediction model cannot be applied to the multi-output problem. Moreover, the data collection frequency of different processes is inconsistent, resulting in few effective data samples and strong nonlinearity. In this paper, we propose a multi-kernel multi-output support vector regression model optimized based on a differential evolutionary algorithm for the prediction of multiple types of energy consumption in tin smelting. Redundant feature variables are eliminated using the distance correlation coefficient method, multi-kernel learning is introduced to improve the multi-output support vector regression model, and a differential evolutionary algorithm is used to optimize the model hyperparameters. The validity and superiority of the model was verified using the energy consumption data of a non-ferrous metal producer in Southwest China. The experimental results show that the proposed model outperformed multi-output Gaussian process regression (MGPR) and a multi-layer perceptron neural network (MLPNN) in terms of measurement capability. Finally, this paper uses a grey correlation analysis model to discuss the influencing factors on the integrated energy consumption of the tin smelting process and gives corresponding energy-saving suggestions. Full article
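scikit-learn has no joint multi-kernel multi-output SVR, but the shape of the problem can be sketched by fitting one RBF-kernel SVR per energy-consumption target; this is a simplification of the paper's model, and the data and coefficients below are invented:

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(4)
X = rng.uniform(size=(200, 4))            # hypothetical process variables
Y = np.column_stack([                      # e.g. electricity and fuel consumption
    X @ np.array([2.0, 1.0, 0.0, 0.5]),
    X @ np.array([0.5, 0.0, 3.0, 1.0]),
]) + rng.normal(scale=0.05, size=(200, 2))

model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0)).fit(X[:150], Y[:150])
mae = mean_absolute_error(Y[150:], model.predict(X[150:]))
```

The paper's model additionally couples the outputs, combines several kernels, and tunes hyperparameters by differential evolution; the per-target fit here only illustrates the regression setup.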

14 pages, 3125 KiB  
Article
Effect of Textural Properties on the Degradation of Bisphenol from Industrial Wastewater Effluent in a Photocatalytic Reactor: A Modeling Approach
by May Ali Alsaffar, Mohamed Abdel Rahman Abdel Ghany, Alyaa K. Mageed, Adnan A. AbdulRazak, Jamal Manee Ali, Khalid A. Sukkar and Bamidele Victor Ayodele
Appl. Sci. 2023, 13(15), 8966; https://doi.org/10.3390/app13158966 - 4 Aug 2023
Cited by 4 | Viewed by 1325
Abstract
Conventional treatment methods such as chlorination and ozonation have been proven not to be effective in eliminating and degrading contaminants such as Bisphenol A (BPA) from wastewater. Hence, the degradation of BPA using a photocatalytic reactor has received a lot of attention recently. In this study, a model-based approach using a multilayer perceptron neural network (MLPNN) coupled with back-propagation, as well as support vector machine regression coupled with a cubic kernel function (CSVMR) and Gaussian process regression (EQGPR) coupled with an exponential quadratic kernel function, were employed to model the relationship between the textural properties, such as pore volume (Vp), pore diameter (Vd), crystallite size, and specific surface area (SBET), of erbium- and iron-modified TiO2 photocatalysts and their performance in degrading BPA. Parametric analysis revealed that effective degradation of the Bisphenol up to 90% could be achieved using photocatalysts having textural properties of 150 m2/g, 8 nm, 7 nm, and 0.36 cm3/g for SBET, crystallite size, particle diameter, and pore volume, respectively. Fifteen architectures of the MLPNN models were tested to determine the best in terms of predictability of BPA degradation. The performance of each of the MLPNN models was measured using the coefficient of determination (R2) and root mean squared error (RMSE). The MLPNN architecture comprising an input layer with 4 neurons, a hidden layer with 14 neurons, and an output layer with 3 neurons displayed the best performance, with R2 of 0.902 and 0.996 for training and testing. The 4-14-3 MLPNN robustly predicted the BPA degradation with an R2 of 0.921 and RMSE of 4.02, which is an indication that a nonlinear relationship exists between the textural properties of the modified TiO2 and the degradation of the BPA. The CSVMR did not show impressive performance, as indicated by its R2 of 0.397. Therefore, appropriately modifying the textural properties of the TiO2 will significantly influence the BPA degradability. Full article
(This article belongs to the Special Issue Advances in Waste Treatment and Material Recycling)
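The 4-14-3 topology reported above maps directly onto a multi-output MLP; the sketch below wires four textural inputs to three outputs through 14 hidden neurons, with synthetic data in place of the photocatalyst measurements:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
# Four textural inputs: SBET, crystallite size, particle diameter, pore volume
X = rng.uniform([100, 5, 4, 0.2], [200, 12, 10, 0.5], size=(150, 4))
# Three invented outputs, linearly tied to the inputs for illustration only
Y = np.column_stack([
    0.5 * X[:, 0] - 2.0 * X[:, 1],
    0.1 * X[:, 0] + 10.0 * X[:, 3],
    5.0 * X[:, 2],
]) + rng.normal(scale=0.5, size=(150, 3))

Xs = StandardScaler().fit_transform(X)
Ys = StandardScaler().fit_transform(Y)
mlp = MLPRegressor(hidden_layer_sizes=(14,),   # the 4-14-3 hidden layer
                   max_iter=5000, random_state=5).fit(Xs[:120], Ys[:120])
r2_4_14_3 = mlp.score(Xs[120:], Ys[120:])      # average R² over the 3 outputs
```

Standardizing both inputs and outputs, as done here, is a common practical step the abstract does not spell out.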

21 pages, 13835 KiB  
Article
Research on a Hybrid Intelligent Method for Natural Gas Energy Metering
by Jingya Dong, Bin Song, Fei He, Yingying Xu, Qiang Wang, Wanjun Li and Peng Zhang
Sensors 2023, 23(14), 6528; https://doi.org/10.3390/s23146528 - 19 Jul 2023
Viewed by 1106
Abstract
In this paper, a Comprehensive Diagram Method (CDM) for a Multi-Layer Perceptron Neural Network (MLPNN) is proposed to realize natural gas energy metering using temperature, pressure, and the speed of sound from an ultrasonic flowmeter. Training and testing of the MLPNN model were performed on the basis of 1003 real data points describing the compression factors (Z-factors) and calorific values of the three main components of natural gas in Sichuan province, China. Moreover, 20 days of real tests were conducted to verify the measurements’ accuracy and the adaptability of the new intelligent method. Based on the mean relative errors and root mean square errors for the learning and test sets calculated from the actual data, the best-quality MLP 3-5-1 network for the metering of Z-factors and the new CDM method for the metering of calorific values were experimentally selected. The Bayesian regularized MLPNN (BR-MLPNN) 3-5-1 network showed that the Z-factors of natural gas have a maximum relative error of −0.44%, and the new CDM method revealed calorific values with a maximum relative error of 1.90%. In addition, three local tests revealed that the maximum relative error of the daily cumulative amount of natural gas energy was 2.39%. Full article
(This article belongs to the Section Chemical Sensors)

30 pages, 4653 KiB  
Article
Classification of the Central Effects of Transcutaneous Electroacupuncture Stimulation (TEAS) at Different Frequencies: A Deep Learning Approach Using Wavelet Packet Decomposition with an Entropy Estimator
by Çağlar Uyulan, David Mayor, Tony Steffert, Tim Watson and Duncan Banks
Appl. Sci. 2023, 13(4), 2703; https://doi.org/10.3390/app13042703 - 20 Feb 2023
Cited by 3 | Viewed by 3066
Abstract
The field of signal processing using machine and deep learning algorithms has undergone significant growth in the last few years, with a wide scope of practical applications for electroencephalography (EEG). Transcutaneous electroacupuncture stimulation (TEAS) is a well-established variant of the traditional method of acupuncture that is also receiving increasing research attention. This paper presents the results of using deep learning algorithms on EEG data to investigate the effects on the brain of different frequencies of TEAS when applied to the hands in 66 participants, before, during and immediately after 20 min of stimulation. Wavelet packet decomposition (WPD) and a hybrid Convolutional Neural Network Long Short-Term Memory (CNN-LSTM) model were used to examine the central effects of this peripheral stimulation. The classification results were analysed using confusion matrices, with kappa as a metric. Contrary to expectation, the greatest differences in EEG from baseline occurred during TEAS at 80 pulses per second (pps) or in the ‘sham’ (160 pps, zero amplitude), while the smallest differences occurred during 2.5 or 10 pps stimulation (mean kappa 0.414). The mean and CV for kappa were considerably higher for the CNN-LSTM than for the Multilayer Perceptron Neural Network (MLP-NN) model. As far as we are aware, from the published literature, no prior artificial intelligence (AI) research appears to have been conducted into the effects on EEG of different frequencies of electroacupuncture-type stimulation (whether EA or TEAS). This ground-breaking study thus offers a significant contribution to the literature. However, as with all (unsupervised) DL methods, a particular challenge is that the results are not easy to interpret, due to the complexity of the algorithms and the lack of a clear understanding of the underlying mechanisms. 
There is therefore scope for further research that explores the effects of the frequency of TEAS on EEG using AI methods, with the most obvious place to start being a hybrid CNN-LSTM model. This would allow for better extraction of information to understand the central effects of peripheral stimulation. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
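Kappa, the agreement metric used throughout this study, can be computed from any classifier's predictions with scikit-learn; the labels below are invented stand-ins for the stimulation-condition classes, not EEG-derived results:

```python
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Invented ground-truth stimulation conditions vs. classifier predictions
y_true = [0, 0, 1, 1, 2, 2, 0, 1, 2, 2, 1, 0]
y_pred = [0, 0, 1, 2, 2, 2, 0, 1, 2, 1, 1, 0]

kappa = cohen_kappa_score(y_true, y_pred)   # chance-corrected agreement
cm = confusion_matrix(y_true, y_pred)       # the matrix the paper analyses
```

Here 10 of 12 labels agree (observed agreement 0.833) against a chance agreement of 1/3, giving kappa = 0.75; the paper aggregates such values across participants and frequencies.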

20 pages, 3760 KiB  
Article
Performance Comparison of Machine Learning Disruption Predictors at JET
by Enrico Aymerich, Barbara Cannas, Fabio Pisano, Giuliana Sias, Carlo Sozzi, Chris Stuart, Pedro Carvalho, Alessandra Fanni and the JET Contributors
Appl. Sci. 2023, 13(3), 2006; https://doi.org/10.3390/app13032006 - 3 Feb 2023
Cited by 8 | Viewed by 2222
Abstract
Reliable disruption prediction (DP) and disruption mitigation systems are considered unavoidable during international thermonuclear experimental reactor (ITER) operations and in view of next-generation fusion reactors such as the DEMOnstration Power Plant (DEMO) and China Fusion Engineering Test Reactor (CFETR). In the last two decades, a great number of DP systems have been developed using data-driven methods. The performance of the DP models has improved over the years, thanks both to more appropriate choices of diagnostics and input features and to the availability of increasingly powerful data-driven modelling techniques. However, a direct comparison among the proposals has not yet been conducted. Such a comparison is mandatory, at least for the same device, to learn lessons from all these efforts and finally choose the best set of diagnostic signals and the best modelling approach. A first effort towards this goal is made in this paper, where different DP models are compared using the same performance indices and the same device. In particular, the performance of a conventional Multilayer Perceptron Neural Network (MLP-NN) model is compared with those of two more sophisticated models, based on Generative Topographic Mapping (GTM) and Convolutional Neural Networks (CNN), on the same real-time diagnostic signals from several experiments at the JET tokamak. The most common performance indices have been used to compare the different DP models, and the results are discussed in depth. The comparison confirms the soundness of all the investigated machine learning approaches and the chosen diagnostics, enables us to highlight the pros and cons of each model, and helps to consciously choose the approach that best matches the plasma protection needs. Full article

18 pages, 10621 KiB  
Article
Comparisons of Convolutional Neural Network and Other Machine Learning Methods in Landslide Susceptibility Assessment: A Case Study in Pingwu
by Ziyu Jiang, Ming Wang and Kai Liu
Remote Sens. 2023, 15(3), 798; https://doi.org/10.3390/rs15030798 - 31 Jan 2023
Cited by 24 | Viewed by 2892
Abstract
Landslides are natural disasters that seriously affect human life and social development. In this study, the characteristics and effectiveness of convolutional neural network (CNN) and conventional machine learning (ML) methods in landslide susceptibility assessment (LSA) are compared. The six ML methods used in this study are Adaboost, multilayer perceptron neural network (MLP-NN), random forest (RF), naive Bayes, decision tree (DT), and gradient boosting decision tree (GBDT). First, the basic knowledge and structures of the CNN and ML methods, and the steps of the LSA, are introduced. Then, 11 conditioning factors in three categories in the Hongxi River Basin, Pingwu County, Mianyang City, Sichuan Province are chosen to build the training, validation, and test samples. The CNN and ML models are constructed based on these samples. For comparison, indicator methods, statistical methods, and landslide susceptibility maps (LSMs) are used. The results show that the CNN obtains the highest accuracy (86.41%) and the highest AUC (0.9249) in the LSA. The statistical methods represented by the mean and variance of TP and TN perform more firmly on the possibility of landslide occurrence. Furthermore, the LSMs show that all models can successfully identify most of the landslide points, but for areas with a low frequency of landslides, some models are insufficient. The CNN model demonstrates better results in the recognition of the landslides’ cluster region, which is also related to the convolution operation that takes the surrounding environment information into account. The higher accuracy and more concentrative possibility of CNN in LSA is of great significance for disaster prevention and mitigation, which can help the efficient use of human and material resources. Although CNN performs better than other methods, there are still some limitations; the identification of low-cluster landslide areas could be enhanced by improving the CNN model. Full article
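Accuracy and AUC, the headline metrics in this comparison, are computed as follows in scikit-learn (a random forest on synthetic binary susceptibility data; the conditioning factors are purely illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(6)
# Hypothetical conditioning factors, e.g. slope, rainfall, lithology score
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=500) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=6).fit(X[:400], y[:400])
acc = accuracy_score(y[400:], clf.predict(X[400:]))
auc = roc_auc_score(y[400:], clf.predict_proba(X[400:])[:, 1])
```

AUC uses the predicted landslide probabilities rather than hard labels, which is why it complements plain accuracy when susceptible cells are rare.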
27 pages, 4379 KiB  
Article
Ensemble Model for Diagnostic Classification of Alzheimer’s Disease Based on Brain Anatomical Magnetic Resonance Imaging
by Yusera Farooq Khan, Baijnath Kaushik, Chiranji Lal Chowdhary and Gautam Srivastava
Diagnostics 2022, 12(12), 3193; https://doi.org/10.3390/diagnostics12123193 - 16 Dec 2022
Cited by 23 | Viewed by 2666
Abstract
Alzheimer’s is one of the fastest-growing diseases among people worldwide, leading to brain atrophy. Neuroimaging reveals extensive information about the brain’s anatomy and enables the identification of diagnostic features. Artificial intelligence (AI) in neuroimaging has the potential to significantly enhance the treatment process for Alzheimer’s disease (AD). The objective of this study is two-fold: (1) to compare existing Machine Learning (ML) algorithms for the classification of AD, and (2) to propose an effective ensemble-based model and perform its comparative analysis. In this study, data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), an online repository, are utilized for experimentation, consisting of 2125 neuroimages of Alzheimer’s disease (n = 975), mild cognitive impairment (n = 538) and cognitive normal (n = 612). For classification, the framework incorporates a Decision Tree (DT), Random Forest (RF), Naïve Bayes (NB), and K-Nearest Neighbor (K-NN), followed by variations of the Support Vector Machine (SVM), such as SVM (RBF kernel), SVM (polynomial kernel), and SVM (sigmoid kernel), as well as Gradient Boost (GB), Extreme Gradient Boosting (XGB) and a Multi-layer Perceptron Neural Network (MLP-NN). Afterwards, an ensemble-based generic kernel is presented, in which a master-slave architecture combines the base models to attain better performance. The proposed model is an ensemble of Extreme Gradient Boosting, Decision Tree and SVM with a polynomial kernel (XGB + DT + SVM). Finally, the proposed method is evaluated with cross-validation and statistical techniques alongside the other ML models. The presented ensemble model (XGB + DT + SVM) outperformed existing state-of-the-art algorithms with an accuracy of 89.77%. The efficiency of all the models was optimized using grid-based tuning, and the results obtained after this process showed significant improvement. 
With optimized parameters, XGB + DT + SVM outperformed all other models with an efficiency of 95.75%. The proposed ensemble-based learning approach clearly yields the best results compared to the other ML models. This experimental comparative analysis improved understanding of the above methods and enhanced their scope and significance in the early detection of Alzheimer’s disease. Full article
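At heart, the XGB + DT + SVM combination described above is an ensemble vote over three base classifiers. The following is a minimal hard-voting sketch in plain Python; the per-model predictions are hypothetical, and the study's actual master-slave kernel may combine models differently:

```python
from collections import Counter

def majority_vote(model_predictions):
    """Combine per-model label predictions (one list per base model)
    by a simple majority vote over each sample."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*model_predictions)]

# Hypothetical per-sample labels from the three base models:
# AD = Alzheimer's disease, MCI = mild cognitive impairment, CN = cognitive normal.
xgb_preds = ["AD", "MCI", "CN", "AD"]
dt_preds  = ["AD", "MCI", "MCI", "CN"]
svm_preds = ["AD", "CN", "CN", "AD"]

ensemble = majority_vote([xgb_preds, dt_preds, svm_preds])
```

In a tie, `Counter.most_common` returns the first-seen label; a production ensemble would typically break ties using model confidences instead.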
14 pages, 4695 KiB  
Article
Data-Driven Approach to Modeling Biohydrogen Production from Biodiesel Production Waste: Effect of Activation Functions on Model Configurations
by SK Safdar Hossain, Bamidele Victor Ayodele, Zaid Abdulhamid Alhulaybi and Muhammad Mudassir Ahmad Alwi
Appl. Sci. 2022, 12(24), 12914; https://doi.org/10.3390/app122412914 - 15 Dec 2022
Viewed by 1496
Abstract
Biodiesel production often generates a significant amount of waste glycerol. Through various technological processes, waste glycerol can be sustainably utilized for the production of value-added products such as hydrogen. One such process for waste glycerol conversion is the bioprocess, in which thermophilic microorganisms are utilized. However, due to the complex mechanism of the bioprocess, it is uncertain how the various input parameters are interrelated with biohydrogen production. In this study, a data-driven machine-learning approach is employed to model the prediction of biohydrogen from waste glycerol. Twelve configurations consisting of the multilayer perceptron neural network (MLPNN) and the radial basis function neural network (RBFNN) were investigated. The effect of using different combinations of activation functions, such as hyperbolic tangent, identity, and sigmoid, on the models’ performance was investigated, as was the effect of two optimization algorithms, scaled conjugate gradient and gradient descent. The performance analysis revealed that the manner in which the activation functions are combined in the hidden and output layers significantly influences the performance of the various models, as does the choice of optimization algorithm. The MLPNN models displayed better predictive performance than the RBFNN models. The RBFNN model with softmax as the hidden layer activation function and identity as the output layer activation function had the weakest predictive performance, as indicated by an R2 of 0.403 and an RMSE of 301.55, whereas the MLPNN configuration with the hyperbolic tangent as the hidden layer activation function and the sigmoid as the output layer activation function yielded the best performance, as indicated by an R2 of 0.978 and an RMSE of 9.91. 
The gradient descent optimization algorithm was observed to improve the models’ performance. All the input variables significantly influence the predicted biohydrogen; however, waste glycerol has the most significant effect. Full article
(This article belongs to the Section Environmental Sciences)
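The best-performing configuration above (hyperbolic tangent in the hidden layer, sigmoid in the output layer) and the R2/RMSE metrics used to rank the models can be sketched in NumPy. The weights here are zero-valued placeholders, not a trained model:

```python
import numpy as np

def mlp_forward(X, W1, b1, W2, b2):
    """One hidden layer with hyperbolic tangent activation,
    followed by an output layer with sigmoid activation."""
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def rmse(y, y_hat):
    """Root-mean-square error."""
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

def r2(y, y_hat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# With all-zero weights, tanh(0) = 0 and sigmoid(0) = 0.5 at every output.
out = mlp_forward(np.zeros((1, 4)), np.zeros((4, 8)), np.zeros(8),
                  np.zeros((8, 1)), np.zeros(1))
```

A real run would fit `W1`, `b1`, `W2`, `b2` with an optimizer such as gradient descent, then report R2 and RMSE on held-out data.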
32 pages, 22806 KiB  
Article
Spatial Prediction of Current and Future Flood Susceptibility: Examining the Implications of Changing Climates on Flood Susceptibility Using Machine Learning Models
by Navid Mahdizadeh Gharakhanlou and Liliana Perez
Entropy 2022, 24(11), 1630; https://doi.org/10.3390/e24111630 - 10 Nov 2022
Cited by 9 | Viewed by 3052
Abstract
The main aim of this study was to predict current and future flood susceptibility under three climate change scenarios, RCP2.6 (optimistic), RCP4.5 (business as usual), and RCP8.5 (pessimistic), employing four machine learning models: Gradient Boosting Machine (GBM), Random Forest (RF), Multilayer Perceptron Neural Network (MLP-NN), and Naïve Bayes (NB). The study was conducted for two watersheds in Canada, namely the Lower Nicola River, BC, and the Loup, QC. Three statistical metrics were used to validate the models: the Receiver Operating Characteristic curve, the Figure of Merit, and the F1-score. Findings indicated that the RF model had the highest accuracy in providing the flood susceptibility maps (FSMs). Moreover, the provided FSMs indicated that flooding is more likely to occur in the Lower Nicola River watershed than in the Loup watershed. Under the RCP4.5 scenario, the area percentages of the flood susceptibility classes in the Loup watershed in 2050 and 2080 changed by the following amounts relative to 2020 and 2050, respectively: Very Low = −1.68%, Low = −5.82%, Moderate = +6.19%, High = +0.71%, and Very High = +0.6%; and Very Low = −1.61%, Low = +2.98%, Moderate = −3.49%, High = +1.29%, and Very High = +0.83%. Likewise, in the Lower Nicola River watershed, the changes between 2020 and 2050 and between 2050 and 2080 were: Very Low = −0.38%, Low = −0.81%, Moderate = −0.95%, High = +1.72%, and Very High = +0.42%; and Very Low = −1.31%, Low = −1.35%, Moderate = −1.81%, High = +2.37%, and Very High = +2.1%, respectively. The impact of climate change on future flood-prone places revealed that the regions designated as highly and very highly susceptible to flooding grow in the forecasts for both watersheds. The main contribution of this study lies in the novel insights it provides concerning the flood susceptibility of watersheds in British Columbia and Quebec over time and under various climate change scenarios. 
Full article
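Of the three validation metrics named above, the F1-score is simple to state exactly: the harmonic mean of precision and recall for the positive class. A plain-Python sketch with hypothetical flood (1) / no-flood (0) labels, not data from the study:

```python
def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall for the positive (flood) class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0  # no true positives: both precision and recall are zero
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical labels over six map cells: 1 = flooded, 0 = not flooded.
score = f1_score([1, 1, 0, 1, 0, 0], [1, 0, 0, 1, 1, 0])
```

Because it ignores true negatives, the F1-score is better suited than raw accuracy to susceptibility maps where flooded cells are rare.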