Search Results (381)

Search Parameters:
Keywords = Bayesian forecasting

31 pages, 1840 KiB  
Review
Review of Methods and Models for Potato Yield Prediction
by Magdalena Piekutowska and Gniewko Niedbała
Agriculture 2025, 15(4), 367; https://doi.org/10.3390/agriculture15040367 - 9 Feb 2025
Viewed by 293
Abstract
This article provides a comprehensive overview of the development and application of statistical methods, process-based models, machine learning, and deep learning techniques in potato yield forecasting. It emphasizes the importance of integrating diverse data sources, including meteorological, phenotypic, and remote sensing data. Advances in computer technology have enabled the creation of more sophisticated models, such as mixed, geostatistical, and Bayesian models. Special attention is given to deep learning techniques, particularly convolutional neural networks, which significantly enhance forecast accuracy by analyzing complex data patterns. The article also discusses the effectiveness of other algorithms, such as Random Forest and Support Vector Machines, in capturing nonlinear relationships affecting yields. According to standards adopted in agricultural research, the Mean Absolute Percentage Error (MAPE) of yield predictions should generally not exceed 15%. Contemporary research indicates that, with advanced and accurate algorithms, this error can fall below 10%, significantly increasing the efficiency of yield forecasting. Key challenges in the field include climatic variability and difficulties in obtaining accurate data on soil properties and agronomic practices. Despite these challenges, technological advancements present new opportunities for more accurate forecasting. Future research should focus on leveraging Internet of Things (IoT) technology for real-time data collection and analyzing the impact of biological variables on yield. An interdisciplinary approach, integrating insights from ecology and meteorology, is recommended to develop innovative predictive models. The exploration of machine learning methods has the potential to advance knowledge in potato yield forecasting and support sustainable agricultural practices.
(This article belongs to the Section Digital Agriculture)
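Where the review's 15% MAPE threshold is mentioned, the metric itself is simple to reproduce; a minimal Python sketch with invented yield values (not data from the review):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Hypothetical potato yields (t/ha) vs. model predictions
actual    = [42.0, 38.5, 51.2, 46.8]
predicted = [40.1, 40.2, 48.9, 45.0]
print(f"MAPE = {mape(actual, predicted):.1f}%")  # well under the 15% threshold
```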
22 pages, 3949 KiB  
Article
Hidden Markov Neural Networks
by Lorenzo Rimella and Nick Whiteley
Entropy 2025, 27(2), 168; https://doi.org/10.3390/e27020168 - 5 Feb 2025
Viewed by 282
Abstract
We define an evolving-in-time Bayesian neural network called a Hidden Markov Neural Network, which addresses the crucial challenge in time-series forecasting and continual learning: striking a balance between adapting to new data and appropriately forgetting outdated information. This is achieved by modelling the weights of a neural network as the hidden states of a Hidden Markov model, with the observed process defined by the available data. A filtering algorithm is employed to learn a variational approximation of the evolving-in-time posterior distribution over the weights. By leveraging a sequential variant of Bayes by Backprop, enriched with a stronger regularization technique called variational DropConnect, Hidden Markov Neural Networks achieve robust regularization and scalable inference. Experiments on MNIST, dynamic classification tasks, and next-frame forecasting in videos demonstrate that Hidden Markov Neural Networks provide strong predictive performance while enabling effective uncertainty quantification.
(This article belongs to the Special Issue Advances in Probabilistic Machine Learning)
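As rough intuition for the filtering idea (a toy linear-Gaussian stand-in, not the paper's variational Bayes by Backprop scheme), a single weight modelled as a Gaussian random walk admits a closed-form Kalman-style update; all values below are illustrative:

```python
import numpy as np

# Toy analogue of weight filtering: w_t = w_{t-1} + process noise,
# observed through noisy data. Prior mean/variance are arbitrary.
mu, var = 0.0, 1.0          # current belief about the weight
q, r = 0.01, 0.25           # process noise (forgetting) and observation noise

for y in np.random.default_rng(0).normal(0.8, 0.5, size=50):
    var += q                         # predict: diffuse the belief (forget a little)
    k = var / (var + r)              # Kalman gain
    mu += k * (y - mu)               # update toward the new evidence
    var *= (1 - k)
print(f"posterior weight estimate: {mu:.3f} ± {np.sqrt(var):.3f}")
```

The process-noise term plays the role of principled forgetting: larger q discounts old data faster, which is the adapt-versus-forget balance the abstract describes.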
20 pages, 2521 KiB  
Article
A Bayesian Network Framework to Predict Compressive Strength of Recycled Aggregate Concrete
by Tien-Dung Nguyen, Rachid Cherif, Pierre-Yves Mahieux and Emilio Bastidas-Arteaga
J. Compos. Sci. 2025, 9(2), 72; https://doi.org/10.3390/jcs9020072 - 5 Feb 2025
Viewed by 388
Abstract
In recent years, the use of recycled aggregate concrete (RAC) has become a major concern when promoting sustainable development in construction. However, the design of concrete mixes and the prediction of their compressive strength become difficult due to the heterogeneity of recycled aggregates (RA). Artificial intelligence (AI) approaches for the prediction of RAC compressive strength (fc) need a sizable database to be able to generalize. Additionally, not all AI methods can update input values in the model to improve the performance of the algorithms or to identify some model parameters. To overcome these challenges, this study proposes a new method based on Bayesian Networks (BNs) to predict the fc of RAC, as well as to identify some parameters of the RAC formulation to achieve a given fc target. The BN approach utilizes the available data from three input variables: water-to-cement ratio, aggregate-to-cement ratio, and RA replacement ratio, to calculate the prior and posterior probability of fc. The outcomes demonstrate how BNs may be used to forecast both forward (the fc of RAC) and backward (the parameters of the concrete formulation).
(This article belongs to the Special Issue Novel Cement and Concrete Materials)
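The prior-to-posterior step such a network performs can be sketched with plain Bayes' rule; the fc categories and probability tables below are invented for illustration and are not taken from the paper:

```python
import numpy as np

# Discretized fc classes and a made-up prior P(fc)
fc_classes = ["low", "medium", "high"]
prior = np.array([0.3, 0.5, 0.2])

# Made-up likelihood P(observed mix features | fc class), e.g. for a
# low water-to-cement ratio and moderate RA replacement ratio
likelihood = np.array([0.1, 0.4, 0.8])

posterior = prior * likelihood
posterior /= posterior.sum()          # Bayes' rule, normalized
for c, p in zip(fc_classes, posterior):
    print(f"P(fc={c} | evidence) = {p:.2f}")
```

Read forward, such tables predict fc from mix features; read backward, the same joint distribution supports inferring formulation parameters from a target fc, which is the two-way use the abstract highlights.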
27 pages, 5841 KiB  
Article
Frictional Pressure Loss Prediction in Symmetrical Pipes During Drilling Using Soft Computing Algorithms
by Okorie Ekwe Agwu, Sia Chee Wee and Moses Gideon Akpabio
Symmetry 2025, 17(2), 228; https://doi.org/10.3390/sym17020228 - 5 Feb 2025
Viewed by 428
Abstract
One of the significant challenges during wellbore drilling is accurately predicting frictional pressure losses in symmetrical drill pipes. In this work, a Bayesian regularized neural network (BRANN) and multivariate adaptive regression splines (MARS) are employed to develop accurate and interpretable models for predicting frictional pressure losses during drilling. The models are developed using frictional pressure loss data collected through experimentation. The model inputs include mud flow rate, mud density, pipe diameter (inside and outside diameters), and viscometer dial readings, while pressure loss is the output. Statistical comparisons between the model predictions and the actual values demonstrate the models’ ability to reasonably forecast frictional pressure losses in wells. The performance of the models, as measured by error metrics, is as follows: BRANN (0.999, 0.076, 16.76, and 11.67) and MARS (0.998, 0.0989, 21.32, and 16.499) with respect to the coefficient of determination, average absolute percentage error, root mean square error, and mean absolute error, respectively. Additionally, a parametric importance study reveals that, among the input variables, internal and external pipe diameters are the top predictors, with a relevancy factor of −0.784 each, followed by the mud flow rate, with a relevancy factor of 0.553. The trend analysis further confirms the physical validity of the proposed models. The explicit nature of the models, together with their physical validation through trend analysis and interpretability via a sensitivity analysis, adds to the novelty of this study. The precise and robust estimations provided by the models make them valuable virtual tools for the development of drilling hydraulics simulators for frictional pressure loss estimations in the field.
(This article belongs to the Section Engineering and Materials)
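The four reported error metrics are standard and easy to reproduce; a minimal sketch with placeholder values (not the paper's data):

```python
import numpy as np

def metrics(y, yhat):
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    r2   = 1 - ss_res / ss_tot                       # coefficient of determination
    aape = 100 * np.mean(np.abs((y - yhat) / y))     # average absolute percentage error
    rmse = np.sqrt(np.mean((y - yhat) ** 2))         # root mean square error
    mae  = np.mean(np.abs(y - yhat))                 # mean absolute error
    return r2, aape, rmse, mae

y_true = [510.0, 640.0, 822.0, 975.0]   # placeholder pressure losses (psi)
y_pred = [498.0, 655.0, 810.0, 990.0]
print(metrics(y_true, y_pred))
```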
19 pages, 3350 KiB  
Article
Dissolved Oxygen Modeling by a Bayesian-Optimized Explainable Artificial Intelligence Approach
by Qiulin Li, Jinchao He, Dewei Mu, Hao Liu and Shicheng Li
Appl. Sci. 2025, 15(3), 1471; https://doi.org/10.3390/app15031471 - 31 Jan 2025
Viewed by 614
Abstract
Dissolved oxygen (DO) is a vital water quality index influencing biological processes in aquatic environments. Accurate modeling of DO levels is crucial for maintaining ecosystem health and managing freshwater resources. To this end, the present study contributes a Bayesian-optimized explainable machine learning (ML) model to reveal DO dynamics and predict DO concentrations. Three ML models, support vector regression (SVR), regression tree (RT), and boosting ensemble, coupled with Bayesian optimization (BO), are employed to estimate DO levels in the Mississippi River. It is concluded that the BO-SVR model outperforms others, achieving a coefficient of determination (CD) of 0.97 and minimal error metrics (root mean square error = 0.395 mg/L, mean absolute error = 0.303 mg/L). Shapley Additive Explanation (SHAP) analysis identifies temperature, discharge, and gage height as the most dominant factors affecting DO levels. Sensitivity analysis confirms the robustness of the models under varying input conditions. With perturbations from 5% to 30%, the temperature sensitivity ranges from 1.0% to 6.1%, discharge from 0.9% to 5.2%, and gage height from 0.8% to 5.0%. Although the models experience reduced accuracy with extended prediction horizons, they still achieve satisfactory results (CD > 0.75) for forecasting periods of up to 30 days. The established models also exhibit higher accuracy than many prior approaches. This study highlights the potential of BO-optimized explainable ML models for reliable DO forecasting, offering valuable insights for water resource management.
(This article belongs to the Section Environmental Sciences)
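The BO-plus-SVR pattern can be sketched with a general-purpose optimizer; the example below uses Optuna's TPE sampler (a Bayesian-style method, not necessarily the study's exact BO routine) on synthetic data, and the search ranges are assumptions:

```python
import numpy as np
import optuna
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))   # stand-ins for temperature, discharge, gage height
y = 8 - 0.5 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.3, 200)  # synthetic DO (mg/L)

def objective(trial):
    model = SVR(
        C=trial.suggest_float("C", 1e-2, 1e3, log=True),
        gamma=trial.suggest_float("gamma", 1e-4, 1e1, log=True),
        epsilon=trial.suggest_float("epsilon", 1e-3, 1.0, log=True),
    )
    return cross_val_score(model, X, y, cv=5, scoring="r2").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print(study.best_params, study.best_value)
```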
16 pages, 9655 KiB  
Article
Salmon Consumption Behavior Prediction Based on Bayesian Optimization and Explainable Artificial Intelligence
by Zhan Wu, Sina Cha, Chunxiao Wang, Tinghong Qu and Zongfeng Zou
Foods 2025, 14(3), 429; https://doi.org/10.3390/foods14030429 - 28 Jan 2025
Viewed by 617
Abstract
Predicting seafood consumption behavior is essential for fishing companies to adjust their production plans and marketing strategies. To achieve accurate predictions, this paper introduces a model for forecasting seafood consumption behavior based on an interpretable machine learning algorithm. Additionally, the Shapley Additive exPlanation (SHAP) model and the Accumulated Local Effects (ALE) plot were integrated to provide a detailed analysis of the factors influencing Shanghai residents’ intentions to purchase salmon. In this study, we constructed nine regression prediction models, including ANN, Decision Tree, GBDT, Random Forest, AdaBoost, XGBoost, LightGBM, CatBoost, and NGBoost, to predict the consumers’ intentions to purchase salmon and to compare their predictive performance. In addition, a Bayesian optimization algorithm is used to optimize the hyperparameters of the best regression prediction model to improve its prediction accuracy. Finally, the SHAP model was used to analyze the key factors and interactions affecting the consumers’ willingness to purchase salmon, and the Accumulated Local Effects plot was used to show the specific prediction patterns of different influences on salmon consumption. The results of the study show that salmon farming safety and ease of cooking have significant nonlinear effects on salmon consumption; the BO-CatBoost nonlinear regression prediction model demonstrates superior performance compared to the benchmark model, with the test set exhibiting RMSE, MSE, MAE, R2 and TIC values of 0.155, 0.024, 0.097, 0.902, and 0.313, respectively. This study can provide technical support for suppliers in the salmon value chain and help their decision-making to adjust their corporate production plans and marketing activities.
(This article belongs to the Topic Consumer Behaviour and Healthy Food Consumption)
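The SHAP workflow for a tree ensemble is compact; this sketch substitutes scikit-learn gradient boosting and random data for the study's BO-CatBoost model and survey features:

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.uniform(1, 5, size=(300, 4))   # stand-in survey features (Likert-style)
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] ** 2 + rng.normal(0, 0.2, 300)

model = GradientBoostingRegressor().fit(X, y)
explainer = shap.TreeExplainer(model)      # fast exact SHAP values for tree models
shap_values = explainer.shap_values(X)
print(np.abs(shap_values).mean(axis=0))    # global importance per feature
```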
21 pages, 1162 KiB  
Article
Forecasting Stock Market Indices Using Integration of Encoder, Decoder, and Attention Mechanism
by Tien Thanh Thach
Entropy 2025, 27(1), 82; https://doi.org/10.3390/e27010082 - 17 Jan 2025
Viewed by 654
Abstract
Accurate forecasting of stock market indices is crucial for investors, financial analysts, and policymakers. The integration of encoder and decoder architectures, coupled with an attention mechanism, has emerged as a powerful approach to enhance prediction accuracy. This paper presents a novel framework that leverages these components to capture complex temporal dependencies and patterns within stock price data. The encoder effectively transforms an input sequence into a dense representation, which the decoder then uses to reconstruct future values. The attention mechanism provides an additional layer of sophistication, allowing the model to selectively focus on relevant parts of the input sequence for making predictions. Furthermore, Bayesian optimization is employed to fine-tune hyperparameters, further improving forecast precision. Our results demonstrate a significant improvement in forecast precision over traditional recurrent neural networks. This indicates the potential of our integrated approach to effectively handle the complex patterns and dependencies in stock price data.
(This article belongs to the Collection Advances in Applied Statistical Mechanics)
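The attention step described here has the standard closed form softmax(QKᵀ/√d)V; a minimal NumPy rendering with arbitrary shapes:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention over a sequence."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)          # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                      # weighted mix of encoder states

rng = np.random.default_rng(0)
enc = rng.normal(size=(30, 16))   # 30 encoded time steps, dimension 16
q   = rng.normal(size=(1, 16))    # decoder query for the next value
print(attention(q, enc, enc).shape)   # (1, 16): context vector for the prediction
```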
26 pages, 850 KiB  
Article
Forecasting Half-Hourly Electricity Prices Using a Mixed-Frequency Structural VAR Framework
by Gaurav Kapoor, Nuttanan Wichitaksorn, Mengheng Li and Wenjun Zhang
Econometrics 2025, 13(1), 2; https://doi.org/10.3390/econometrics13010002 - 8 Jan 2025
Viewed by 482
Abstract
Electricity price forecasting has been a topic of significant interest since the deregulation of electricity markets worldwide. The New Zealand electricity market is run primarily on renewable fuels, and so weather metrics have a significant impact on electricity price and volatility. In this paper, we employ a mixed-frequency vector autoregression (MF-VAR) framework in which we propose a VAR specification of the reverse unrestricted mixed-data sampling (RU-MIDAS) model, called RU-MIDAS-VAR, to provide point forecasts of half-hourly electricity prices using several weather variables and electricity demand. A key focus of this study is the use of variational Bayes as an estimation technique and its comparison with other well-known Bayesian estimation methods. We separate forecasts for peak and off-peak periods in a day since we are primarily concerned with forecasts for peak periods. Our forecasts, which include peak and off-peak data, show that weather variables and demand as regressors can replicate some key characteristics of electricity prices. We also find the MF-VAR and RU-MIDAS-VAR models achieve similar forecast results. Using the LASSO, adaptive LASSO, and random subspace regression as dimension-reduction and variable selection methods helps to improve forecasts: random subspace methods perform well for large parameter sets, while the LASSO significantly improves our forecasting results in all scenarios.
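As an illustration of the LASSO-based variable selection the authors find most effective, the sketch below uses synthetic regressors standing in for weather and demand lags (not the paper's data):

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 20))        # many candidate weather/demand regressors
beta = np.zeros(20)
beta[[0, 3, 7]] = [1.5, -0.8, 0.6]    # only three actually matter
y = X @ beta + rng.normal(0, 0.5, 500)   # synthetic half-hourly price

lasso = LassoCV(cv=5).fit(X, y)       # penalty strength chosen by cross-validation
kept = np.flatnonzero(lasso.coef_)
print("selected regressors:", kept)   # ideally {0, 3, 7}
```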
15 pages, 2505 KiB  
Article
Short-Term Load Forecasting in Power Systems Based on the Prophet–BO–XGBoost Model
by Shuang Zeng, Chang Liu, Heng Zhang, Baoqun Zhang and Yutong Zhao
Energies 2025, 18(2), 227; https://doi.org/10.3390/en18020227 - 7 Jan 2025
Viewed by 475
Abstract
To tackle the challenges of limited accuracy and poor generalization in short-term load forecasting under complex nonlinear conditions, this study introduces a Prophet–BO–XGBoost-based forecasting framework. This approach employs the XGBoost model to interpret the nonlinear relationships between features and loads and integrates the Prophet model for label prediction from a time-series viewpoint. Given that hyperparameters substantially impact XGBoost’s performance, this study leverages Bayesian optimization (BO) to refine these parameters. Using a Gaussian process-based surrogate model and an expected-improvement acquisition function, this framework optimizes hyperparameter settings to enhance model adaptability and precision. Through a regional case study, this method demonstrated improved predictive accuracy and operational efficiency, highlighting its advantages in both runtime and performance.
(This article belongs to the Special Issue New Progress in Electricity Demand Forecasting)
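The expected-improvement acquisition named here has a closed form under a Gaussian-process surrogate; a minimal sketch using the minimization convention and made-up posterior values:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best, xi=0.01):
    """EI for minimization: expected amount by which each candidate beats `best`."""
    sigma = np.maximum(sigma, 1e-12)
    z = (best - mu - xi) / sigma
    return (best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# Made-up GP posterior over three candidate hyperparameter settings
mu    = np.array([0.42, 0.35, 0.50])   # predicted validation loss
sigma = np.array([0.05, 0.15, 0.02])   # posterior uncertainty
print(expected_improvement(mu, sigma, best=0.40))   # evaluate the argmax next
```

EI trades off exploitation (low predicted loss) against exploration (high uncertainty), which is why the second, more uncertain candidate can win despite a similar mean.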
22 pages, 4854 KiB  
Article
Predicting Earthquake Casualties and Emergency Supplies Needs Based on PCA-BO-SVM
by Fuyu Wang, Huiying Xu, Huifen Ye, Yan Li and Yibo Wang
Systems 2025, 13(1), 24; https://doi.org/10.3390/systems13010024 - 2 Jan 2025
Viewed by 524
Abstract
The prediction of casualties in earthquake disasters is a prerequisite for determining the quantity of emergency supplies needed and serves as the foundational work for the timely distribution of resources. In order to address challenges such as the large computational workload, tedious training process, and multiple influencing factors associated with predicting earthquake casualties, this study proposes a Support Vector Machine (SVM) model utilizing Principal Component Analysis (PCA) and Bayesian Optimization (BO). The original data are first subjected to dimensionality reduction using PCA, with the principal components whose cumulative contribution exceeds 80% selected as input variables for the SVM model, while earthquake casualties are designated as the output variable. Subsequently, the optimal hyperparameters for the SVM model are obtained using the Bayesian Optimization algorithm. This approach results in the development of an earthquake casualty prediction model based on PCA-BO-SVM. Experimental results indicate that, compared to the GA-SVM model, the BO-SVM model, and the PCA-GA-SVM model, the PCA-BO-SVM model reduces average error rates by 12.86%, 9.01%, and 2%, respectively, and improves average accuracy by 10.1%, 7.05%, and 0.325% and operational efficiency by 25.5%, 18.4%, and 19.2%, respectively. These findings demonstrate that the proposed PCA-BO-SVM model can effectively and scientifically predict earthquake casualties, showcasing strong generalization capabilities and high predictive accuracy.
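scikit-learn expresses the PCA-then-SVM pipeline directly; passing a float to PCA keeps just enough components to reach that explained-variance share. The data and SVM settings below are synthetic assumptions (the paper tunes the hyperparameters with BO):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 12))          # stand-in earthquake indicators
y = X[:, 0] * 50 + X[:, 1] * 20 + rng.normal(0, 5, 150)   # synthetic casualty counts

model = make_pipeline(
    PCA(n_components=0.80),             # keep components covering >80% of variance
    SVR(C=10.0, gamma="scale"),         # BO would tune these in the paper
)
model.fit(X, y)
print("components kept:", model.named_steps["pca"].n_components_)
```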
24 pages, 6981 KiB  
Article
Machine-Learning-Driven Optimization of Cold Spray Process Parameters: Robust Inverse Analysis for Higher Deposition Efficiency
by Abderrachid Hamrani, Aditya Medarametla, Denny John and Arvind Agarwal
Coatings 2025, 15(1), 12; https://doi.org/10.3390/coatings15010012 - 26 Dec 2024
Viewed by 903
Abstract
Cold spray technology has become essential for industries requiring efficient material deposition, yet achieving optimal deposition efficiency (DE) presents challenges due to complex interactions among process parameters. This study developed a two-stage machine learning (ML) framework incorporating Bayesian optimization to address these challenges. In the first stage, a classification model predicted the occurrence of deposition, while the second stage used a regression model to forecast DE values given deposition presence. The approach was validated on Aluminum 6061 data, demonstrating its capability to accurately predict DE and identify optimal process parameters for target efficiencies. Model interpretability was enhanced with SHAP analysis, which identified gas temperature and gas type as primary factors affecting DE. Scenario-based inverse analysis further validated the framework by comparing model-predicted parameters to literature data, revealing high accuracy in replicating real-world conditions. Notably, substituting hydrogen as the gas carrier reduced the required gas temperature and pressure for high DE values, suggesting economic and operational benefits over helium and nitrogen. This study demonstrates the effectiveness of AI-driven solutions in optimizing cold spray processes, contributing to more efficient and practical approaches in material deposition.
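The two-stage structure (classify whether deposition occurs, then regress DE when it does) can be sketched generically; the data and random-forest choice below are stand-ins, not the study's models or its Aluminum 6061 dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(4)
X = rng.uniform(size=(400, 4))                     # stand-in process parameters
deposits = (X[:, 0] + X[:, 1] > 0.8).astype(int)   # stage 1 target: deposition yes/no
de = np.where(deposits, 40 + 50 * X[:, 0], 0.0)    # stage 2 target: DE (%) if deposited

clf = RandomForestClassifier().fit(X, deposits)
reg = RandomForestRegressor().fit(X[deposits == 1], de[deposits == 1])

X_new = rng.uniform(size=(5, 4))
pred = np.where(clf.predict(X_new) == 1, reg.predict(X_new), 0.0)
print(pred)   # predicted DE, zero where no deposition is expected
```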
26 pages, 1651 KiB  
Article
Long-Term Effectiveness and Safety of Proactive Therapeutic Drug Monitoring of Infliximab in Paediatric Inflammatory Bowel Disease: A Real-World Study
by Susana Clemente Bautista, Óscar Segarra Cantón, Núria Padullés-Zamora, Sonia García García, Marina Álvarez Beltrán, María Larrosa García, Maria Josep Cabañas Poy, Maria Teresa Sanz-Martínez, Ana Vázquez, Maria Queralt Gorgas Torner and Marta Miarons
Pharmaceutics 2024, 16(12), 1577; https://doi.org/10.3390/pharmaceutics16121577 - 10 Dec 2024
Viewed by 906
Abstract
Background: This study evaluated the long-term effectiveness and safety of a multidisciplinary early proactive therapeutic drug monitoring (TDM) program combined with Bayesian forecasting for infliximab (IFX) dose adjustment in a real-world dataset of paediatric patients with inflammatory bowel disease (IBD). Methods: A descriptive, ambispective, single-centre study of paediatric patients with IBD who underwent IFX serum concentration measurements between September 2015 and September 2023. The patients received reactive TDM before September 2019 (n = 17) and proactive TDM thereafter (n = 21). We analysed clinical, biological, and endoscopic remission; treatment failure; hospitalisations; emergency visits; and adverse drug reactions. The IFX doses were adjusted to maintain trough concentrations ≥ 5 µg/mL, with specific targets for proactive TDM. Results: Of the 38 patients, 21 had Crohn’s disease (CD), 16 ulcerative colitis (UC), and 1 undetermined IBD. The mean (standard deviation) IFX trough concentrations were 6.83 (5.66) µg/mL (reactive) and 12.38 (9.24) µg/mL (proactive) (p = 0.08). No statistically significant differences between groups were found in remission rates or treatment failure. The proactive group had fewer hospitalisations (14.29% vs. 23.53%; p = 0.47) and shorter median hospitalisation days (6 vs. 19; p = 0.50), although the differences were not statistically significant. The number of patients with adverse reactions (infusion-related reactions and infections) was higher in the proactive group (38.10% vs. 23.53%; p = 0.34), but the difference was not statistically significant. Conclusions: Proactive TDM showed no significant differences in treatment outcomes compared to reactive TDM. However, the results in both the reactive and proactive TDM groups were not worse than those reported in other studies. Further studies with larger samples are needed to optimize the treatment strategies for paediatric IBD patients.
(This article belongs to the Special Issue Role of Pharmacokinetics in Drug Development and Evaluation)
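In TDM, Bayesian forecasting amounts to updating a pharmacokinetic prior with a measured trough; the grid-based illustration below is deliberately schematic (invented prior, noise, and concentration model, not the program's population-PK model):

```python
import numpy as np
from scipy.stats import norm

# Grid over a patient's clearance (L/h); made-up log-normal population prior
cl = np.linspace(0.05, 1.0, 400)
prior = norm.pdf(np.log(cl), loc=np.log(0.3), scale=0.4)

# Schematic model: predicted trough falls as clearance rises; observed 4 µg/mL
predicted_trough = 20.0 * np.exp(-8.0 * cl)
likelihood = norm.pdf(4.0, loc=predicted_trough, scale=1.0)

posterior = prior * likelihood
posterior /= np.trapz(posterior, cl)          # normalize over the grid
cl_hat = cl[np.argmax(posterior)]
print(f"posterior-mode clearance: {cl_hat:.2f} L/h -> individualized dose follows")
```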
28 pages, 3873 KiB  
Article
Bayesian Inference for Long Memory Stochastic Volatility Models
by Pedro Chaim and Márcio Poletti Laurini
Econometrics 2024, 12(4), 35; https://doi.org/10.3390/econometrics12040035 - 27 Nov 2024
Viewed by 798
Abstract
We explore the application of integrated nested Laplace approximations for the Bayesian estimation of stochastic volatility models characterized by long memory. The logarithmic variance persistence in these models is represented by a Fractional Gaussian Noise process, which we approximate as a linear combination of independent first-order autoregressive processes, lending itself to a Gaussian Markov Random Field representation. Our results from Monte Carlo experiments indicate that this approach exhibits small sample properties akin to those of Markov Chain Monte Carlo estimators. Additionally, it offers the advantages of reduced computational complexity and the mitigation of posterior convergence issues. We employ this methodology to estimate volatility dependency patterns for both the S&P 500 index and major cryptocurrencies. We thoroughly assess the in-sample fit and extend our analysis to the construction of out-of-sample forecasts. Furthermore, we propose multi-factor extensions and apply this method to estimate volatility measurements from high-frequency data, underscoring its exceptional computational efficiency. Our simulation results demonstrate that the INLA methodology achieves comparable accuracy to traditional MCMC methods for estimating latent parameters and volatilities in LMSV models. The proposed model extensions show strong in-sample fit and out-of-sample forecast performance, highlighting the versatility of the INLA approach. This method is particularly advantageous in high-frequency contexts, where the computational demands of traditional posterior simulations are often prohibitive.
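The key approximation, long memory assembled from independent AR(1) components, is easy to see in simulation; the coefficients below are arbitrary rather than the paper's fitted weights:

```python
import numpy as np

rng = np.random.default_rng(5)
T = 5000
# AR(1) components with autocorrelations ranging from fast to very slow
phis = np.array([0.5, 0.9, 0.99, 0.999])
weights = np.array([0.4, 0.3, 0.2, 0.1])

components = np.zeros((len(phis), T))
for t in range(1, T):
    components[:, t] = phis * components[:, t - 1] + rng.normal(size=len(phis))

log_var = weights @ components        # mixture mimics fractional Gaussian noise
acf = [np.corrcoef(log_var[:-k], log_var[k:])[0, 1] for k in (1, 10, 100, 1000)]
print(np.round(acf, 3))               # autocorrelation stays high at long lags
```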
18 pages, 2007 KiB  
Article
Single Well Production Prediction Model of Gas Reservoir Based on CNN-BILSTM-AM
by Daihong Gu, Rongchen Zheng, Peng Cheng, Shuaiqi Zhou, Gongjie Yan, Haitao Liu, Kexin Yang, Jianguo Wang, Yuan Zhu and Mingwei Liao
Energies 2024, 17(22), 5674; https://doi.org/10.3390/en17225674 - 13 Nov 2024
Cited by 1 | Viewed by 658
Abstract
In the prediction of single-well production in gas reservoirs, the traditional empirical formula of gas reservoirs generally shows poor accuracy. In the process of machine learning training and prediction, the problems of small data volume and dirty data are often encountered. In order to overcome the above problems, a single-well production prediction model of gas reservoirs based on CNN-BILSTM-AM is proposed. The model is built from long short-term memory neural networks, convolutional neural networks, and attention modules. The input of the model includes the production of the previous period and its influencing factors. At the same time, the fitted production and error value of the traditional gas reservoir empirical formula are introduced to predict future production data. The loss function is used to evaluate the deviation between the predicted data and the real data, and the Bayesian hyperparameter optimization algorithm is used to optimize the model structure and comprehensively improve the generalization ability of the model. Three single wells in the Daniudi D28 well area were selected as the database, and the CNN-BILSTM-AM model was used to predict single-well production. The results show that, compared with the prediction results of the convolutional neural network (CNN) model, long short-term memory neural network (LSTM) model and bidirectional long short-term memory neural network (BILSTM) model, the error of the CNN-BILSTM-AM model on the test set of three experimental wells is reduced by 6.2425%, 4.9522% and 3.0750% on average. This shows that, by coupling the traditional gas reservoir empirical formula, the CNN-BILSTM-AM model meets the high-precision requirements for single-well production prediction in gas reservoirs, which is of great significance for guiding the efficient development of oil fields and ensuring the safety of China’s energy strategy.
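One plausible Keras rendering of a CNN-BiLSTM-attention stack; the layer sizes, input shapes, and pooling head are guesses, and the empirical-formula inputs the paper couples in are omitted:

```python
import tensorflow as tf
from tensorflow.keras import layers

n_steps, n_feats = 12, 6          # 12 past periods, 6 features (assumed shapes)
inp = layers.Input(shape=(n_steps, n_feats))
x = layers.Conv1D(32, kernel_size=3, padding="causal", activation="relu")(inp)
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
x = layers.Attention()([x, x])    # self-attention over the BiLSTM states
x = layers.GlobalAveragePooling1D()(x)
out = layers.Dense(1)(x)          # next-period production

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")
model.summary()
```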
21 pages, 1933 KiB  
Article
Intelligent Financial Forecasting with Granger Causality and Correlation Analysis Using Bayesian Optimization and Long Short-Term Memory
by Julius Olaniyan, Deborah Olaniyan, Ibidun Christiana Obagbuwa, Bukohwo Michael Esiefarienrhe, Ayodele A. Adebiyi and Olorunfemi Paul Bernard
Electronics 2024, 13(22), 4408; https://doi.org/10.3390/electronics13224408 - 11 Nov 2024
Cited by 1 | Viewed by 1299
Abstract
Financial forecasting plays a critical role in decision-making across various economic sectors, aiming to predict market dynamics and economic indicators through the analysis of historical data. This study addresses the challenges posed by traditional forecasting methods, which often struggle to capture the complexities of financial data, leading to suboptimal predictions. To overcome these limitations, this research proposes a hybrid forecasting model that integrates Bayesian optimization with Long Short-Term Memory (LSTM) networks. The primary objective is to enhance the accuracy of market trend and asset price predictions while improving the robustness of forecasts for economic indicators, which are essential for strategic positioning, risk management, and policy formulation. The methodology involves leveraging the strengths of both Bayesian optimization and LSTM networks, allowing for more effective pattern recognition and forecasting in volatile market conditions. Key contributions of this work include the development of a novel hybrid framework that demonstrates superior performance with significantly reduced forecasting errors compared to traditional methods. Experimental results highlight the model’s potential to support informed decision-making amidst market uncertainty, ultimately contributing to improved market efficiency and stability.
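The Granger-causality screening named in the title is a one-call test in statsmodels; the series below are synthetic, constructed so that x leads y:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(6)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + rng.normal(0, 0.5)

# Column order matters: the test asks whether the SECOND column
# Granger-causes the FIRST. Small p-values at lag 1 indicate x helps forecast y.
data = pd.DataFrame({"y": y, "x": x})
res = grangercausalitytests(data[["y", "x"]], maxlag=2)
```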