
Search Results (669)

Search Parameters:
Keywords = noise variance

21 pages, 17021 KiB  
Article
Impact of Particular Stages of the Manufacturing Process on the Reliability of Flexible Printed Circuits
by Andrzej Kiernich, Jerzy Kalenik, Wojciech Stęplewski, Marek Kościelski and Aneta Chołaj
Sensors 2025, 25(1), 140; https://doi.org/10.3390/s25010140 - 29 Dec 2024
Abstract
The purpose of the experiment was to indicate which element of the production process of flexible printed circuit boards is optimal in terms of the reliability of the final products. Following the Taguchi method, five factors with two levels each were chosen for the subsequent analysis: the number of conductive layers, the thickness of the laminate layer, the type of laminate, the diameter of the plated holes, and the current density in the galvanic bath. The reliability of the PCBs in the produced variants was verified using the Interconnect Stress Test environmental test. The qualitatively best variant of the board construction was identified using the signal-to-noise ratio and the analysis-of-variance method for each factor. The factors determined to be the most important in terms of reliability were the number of conductive layers and the current density in the galvanic bath. The optimal variant of the board construction was two conductive layers on a polyimide laminate, where the laminate layer was 100 μm thick, the hole diameter was 0.4 mm, and the current density in the galvanic bath was 2 A/dm². The planned experiment therefore indicated the factors needed to obtain a high-quality product with a low failure rate.
(This article belongs to the Special Issue RFID-Enabled Sensor Design and Applications)
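The Taguchi signal-to-noise analysis described above can be sketched in a few lines. The sketch below uses entirely synthetic cycles-to-failure numbers (not the paper's data) and the standard larger-the-better and smaller-the-better S/N forms; which form applies depends on the quality characteristic being optimized.

```python
import math

def sn_larger_the_better(ys):
    """Taguchi larger-the-better S/N ratio: -10*log10(mean(1/y^2))."""
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

def sn_smaller_the_better(ys):
    """Taguchi smaller-the-better S/N ratio: -10*log10(mean(y^2))."""
    return -10 * math.log10(sum(y ** 2 for y in ys) / len(ys))

# Synthetic cycles-to-failure for two levels of one factor (illustrative only)
level_1 = [850.0, 910.0, 880.0]
level_2 = [1450.0, 1390.0, 1500.0]

sn1 = sn_larger_the_better(level_1)
sn2 = sn_larger_the_better(level_2)
# The level with the higher S/N ratio is preferred; the size of the gap
# indicates how strongly the factor influences reliability.
print(f"S/N level 1: {sn1:.2f} dB, level 2: {sn2:.2f} dB")
```

Repeating this level-by-level comparison for each factor of the orthogonal array yields the factor ranking that ANOVA then quantifies.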

14 pages, 2836 KiB  
Article
Causal Effects of Air Pollution, Noise, and Shift Work on Unstable Angina and Myocardial Infarction: A Mendelian Randomization Study
by Qiye Ma, Lin Chen, Hao Xu and Yiru Weng
Toxics 2025, 13(1), 21; https://doi.org/10.3390/toxics13010021 - 28 Dec 2024
Abstract
Cardiovascular disease continues to be a major contributor to global morbidity and mortality, with environmental and occupational factors such as air pollution, noise, and shift work increasingly recognized as potential contributors. Using a two-sample Mendelian randomization (MR) approach, this study investigates the causal relationships of these risk factors with the risks of unstable angina (UA) and myocardial infarction (MI). Leveraging single nucleotide polymorphisms (SNPs) as genetic instruments, a comprehensive MR study was used to assess the causal influence of four major air pollutants (PM2.5, PM10, NO2, and NOx), noise, and shift work on unstable angina and myocardial infarction. Summary statistics were derived from large genome-wide association studies (GWASs) from the UK Biobank and the FinnGen consortium (Helsinki, Finland), with replication using an independent GWAS data source for myocardial infarction. The inverse-variance weighted (IVW) approach demonstrated a significant positive correlation between shift work and the increased risk of both unstable angina (OR with 95% CI: 1.62 [1.12–2.33], p = 0.010) and myocardial infarction (OR with 95% CI: 1.46 [1.00–2.14], p = 0.052). MR-PRESSO analysis identified outliers, and after correction, the association between shift work and myocardial infarction strengthened (OR with 95% CI: 1.58 [1.11–2.27], p = 0.017). No notable causal associations were identified for air pollution or noise with either outcome. The replication of the myocardial infarction findings using independent data supported a possible causal link between shift work and myocardial infarction (OR with 95% CI: 1.41 [1.08–1.84], p = 0.012). These results provide novel evidence supporting shift work as a likely causal risk factor for unstable angina and myocardial infarction, underscoring the need for targeted public health strategies to mitigate its cardiovascular impact. However, further investigation is necessary to elucidate the role of air pollution and noise in cardiovascular outcomes.
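The inverse-variance weighted (IVW) estimator mentioned above combines per-SNP Wald ratios, weighting each by its inverse squared standard error. Below is a minimal fixed-effect sketch; all SNP-level summary statistics are entirely synthetic, not from the study.

```python
import math

def ivw_estimate(beta_x, beta_y, se_y):
    """Fixed-effect inverse-variance weighted MR estimate from per-SNP
    exposure effects (beta_x), outcome effects (beta_y), and outcome
    standard errors (se_y)."""
    ratios  = [by / bx for bx, by in zip(beta_x, beta_y)]       # Wald ratios
    ses     = [sy / abs(bx) for bx, sy in zip(beta_x, se_y)]    # first-order SEs
    weights = [1 / s ** 2 for s in ses]
    beta    = sum(w * r for w, r in zip(weights, ratios)) / sum(weights)
    se      = math.sqrt(1 / sum(weights))
    return beta, se

# Entirely synthetic SNP-level summary statistics (illustrative only)
beta_x = [0.12, 0.08, 0.15, 0.10]
beta_y = [0.048, 0.030, 0.062, 0.041]
se_y   = [0.010, 0.012, 0.011, 0.009]

beta, se = ivw_estimate(beta_x, beta_y, se_y)
odds_ratio = math.exp(beta)  # OR interpretation when beta_y are log-odds
print(f"IVW beta = {beta:.3f} (SE {se:.3f}), OR = {odds_ratio:.2f}")
```

Outlier-robust variants such as MR-PRESSO rerun this estimate after removing SNPs whose residuals are implausibly large.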

20 pages, 10271 KiB  
Article
High-Frequency Workpiece Image Recognition Model Based on Hybrid Attention Mechanism
by Jiaqi Deng, Chenglong Sun, Xin Liu, Gang Du, Liangzhong Jiang and Xu Yang
Appl. Sci. 2025, 15(1), 94; https://doi.org/10.3390/app15010094 - 26 Dec 2024
Abstract
High-frequency workpieces are specialized items characterized by complex internal textures and minimal variance in properties. Under intricate lighting conditions, existing mainstream image recognition models struggle with low precision when applied to the identification of high-frequency workpiece images. This paper introduces a high-frequency workpiece image recognition model based on a hybrid attention mechanism, HAEN. Initially, the high-frequency workpiece dataset is enhanced through geometric transformations, random noise, and random lighting adjustments to augment the model’s generalization capabilities. Subsequently, lightweight convolution, including one-dimensional and dilated convolutions, is employed to enhance convolutional attention and reduce the model’s parameter count, extracting original image features with robustness to strong lighting and mitigating the impact of lighting conditions on recognition outcomes. Finally, lightweight re-estimation attention modules are integrated at various model levels to reassess spatial information in feature maps and enhance the model’s representation of depth channel features. Experimental results demonstrate that the proposed model effectively extracts features from high-frequency workpiece images under complex lighting, outperforming existing models in image classification tasks with a precision of 97.23%.
(This article belongs to the Special Issue Advances in Image Recognition and Processing Technologies)

16 pages, 4154 KiB  
Article
Optimisation of Hot-Chamber Die-Casting Process of AM60 Alloy Using Taguchi Method
by Tomasz Rzychoń and Andrzej Kiełbus
Materials 2024, 17(24), 6256; https://doi.org/10.3390/ma17246256 - 21 Dec 2024
Abstract
This paper presents the effect of hot-chamber HPDC (high-pressure die casting) process parameters on the porosity, mechanical properties, and microstructure of AM60 magnesium alloy. To reduce costs, a Taguchi design-of-experiments method was used to optimise the HPDC process. Six parameters set at two levels were selected for optimisation: piston speed in the first phase, piston speed in the second phase, molten metal temperature, piston travel, mould temperature, and die-casting pressure (the pressure under the piston). Signal-to-noise (S/N) ratios were used to quantify the observed variations. The significance of the influence of the HPDC parameters was assessed using statistical analysis of variance (ANOVA). The results showed that the die-casting pressure had the most significant influence on the porosity of the AM60 alloy. Moreover, the piston speeds in the first and second phases and the die-casting pressure had the most important effects on tensile strength. It is well known that porosity determines the mechanical properties of die castings; however, in the AM60 alloy, changes in the HPDC parameters also contribute to microstructural changes, mainly through the formation of Externally Solidified Crystals.
(This article belongs to the Special Issue Achievements in Foundry Materials and Technologies)
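The ANOVA-based ranking of factors in a Taguchi design comes down to comparing between-level sums of squares. A simplified sketch with synthetic porosity data follows; the percent contributions here are normalized over the listed factors only, ignoring the error term, and all numbers are illustrative stand-ins, not the paper's measurements.

```python
def sum_of_squares(groups):
    """Between-level sum of squares for one factor (one-way layout)."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    return sum(len(g) * ((sum(g) / len(g)) - grand) ** 2 for g in groups)

# Synthetic porosity measurements (%) grouped by factor level
pressure     = [[2.1, 2.3, 2.0, 2.2], [1.1, 1.0, 1.2, 1.3]]  # low vs. high pressure
piston_speed = [[1.8, 1.4, 1.6, 1.9], [1.5, 1.7, 1.3, 1.6]]  # first-phase speed

ss = {"die-casting pressure": sum_of_squares(pressure),
      "piston speed": sum_of_squares(piston_speed)}
total = sum(ss.values())
for name, s in ss.items():
    print(f"{name}: {100 * s / total:.1f}% contribution")
```

A full Taguchi ANOVA would also partition an error sum of squares and test each factor's significance with an F-ratio.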

16 pages, 2882 KiB  
Communication
Mathematical Mechanism of Gini Index Used for Multiple-Impulse Phenomenon Characterization
by Guofeng Jin, Anbo Ming and Wei Zhang
Aerospace 2024, 11(12), 1034; https://doi.org/10.3390/aerospace11121034 - 18 Dec 2024
Abstract
The Gini index (GI) is widely used for measuring the sparsity of signals and has been proven to be effective in the extraction of fault features. A fault-induced vibration, which exhibits the obvious phenomenon of multiple impulses, is a kind of sparse signal, and the GI has been widely used in the diagnosis of rotating machine faults. However, why the GI can be used to evaluate the sparsity or impulsiveness of a signal has not been revealed directly. In this study, the mathematical mechanism by which the GI represents the multiple-impulse phenomenon is investigated in depth, based on the theoretical derivation of the GI for several typical signals. The theoretical results show that the GI increases with the number of impulses in the signal when the signal is corrupted by relatively low levels of white noise. The bigger the difference between the amplitude of the impulses and the variance of the noise, the bigger the value of the GI. In other words, the signal-to-noise ratio has a great influence on the value of the GI. Nevertheless, the GI remains a powerful tool for characterizing the impulsive intensity of the multiple-impulse phenomenon. Both simulation and experimental data analysis are introduced to show the application of the GI in practice. It is shown that the fault diagnosis method based on the maximization of the GI is more powerful than that based on kurtosis in terms of the extraction of fault features of rolling element bearings (REBs).
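One common closed form of the Gini index (the Hurley–Rickard definition; the paper may use an equivalent variant) makes the sparsity interpretation concrete: a constant signal scores 0, while a single impulse scores close to 1.

```python
def gini_index(signal):
    """Gini index of sparsity (Hurley-Rickard form): 0 for a constant
    signal, approaching 1 as the energy concentrates in few samples."""
    x = sorted(abs(v) for v in signal)      # magnitudes, ascending
    n = len(x)
    l1 = sum(x)                             # L1 norm
    return 1 - 2 * sum((xk / l1) * ((n - k + 0.5) / n)
                       for k, xk in enumerate(x, start=1))

flat      = [1.0] * 64              # no impulses: GI = 0
one_spike = [0.0] * 63 + [10.0]     # a single impulse: GI = 1 - 1/64
print(gini_index(flat), gini_index(one_spike))
```

This is why GI maximization can serve the same role as kurtosis maximization when searching for the filter band that best exposes bearing fault impulses.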

22 pages, 4938 KiB  
Article
Numerical Simulation of Uplift Behavior of a Rock-Socketed Pier Anchored by Inclined Anchors in Rock Masses
by Yuan Peng, Qijun Shu, Huayu Zhang, Hao Huang, Yiqing Zhang and Zengzhen Qian
Buildings 2024, 14(12), 3987; https://doi.org/10.3390/buildings14123987 - 16 Dec 2024
Abstract
The rock-socketed pier anchored by inclined anchors (RPIA) is a new type of foundation developed by combining a rock-socketed pier with inclined anchors. Current research on RPIA is relatively limited, and the impact of design parameters on its bearing performance remains unclear. To investigate the uplift-bearing performance of RPIA, a finite-element model that considers the nonlinear properties of materials and multidirectional interactions was developed and verified. Based on this model, numerical simulations were performed on twenty-five RPIAs designed using the L25 orthogonal array proposed by the Taguchi method, and the uplift load–displacement curve for each RPIA was obtained. Based on the interpretation of the elastic limit, uplift resistance, initial stiffness, and ductility index for each simulated RPIA, the sensitivity of each factor was examined by analyzing the signal-to-noise ratio and variance. The results indicated that rock strength and pier diameter were the main factors determining the uplift performance of the RPIAs, while the angle of the inclined anchors was the most influential factor affecting the ductility of the RPIA. The primary role of the inclined anchors is to reduce the extraction of the pier after failure of the side resistance between the pier and the rock mass, thus significantly enhancing the ductility of the uplift-loaded RPIA. Adding reinforcement around the connection joints between the pier and the anchors may prevent concrete failure and allow the inclined anchors to fully perform their role.

28 pages, 7454 KiB  
Article
Equations to Predict Carbon Monoxide Emissions from Amazon Rainforest Fires
by Sarah M. Gallup, Bonne Ford, Stijn Naus, John L. Gallup and Jeffrey R. Pierce
Fire 2024, 7(12), 477; https://doi.org/10.3390/fire7120477 - 15 Dec 2024
Abstract
Earth system models (ESMs), which can simulate the complex feedbacks between climate and fires, struggle to predict fires well for tropical rainforests. This study provides equations that predict historic carbon monoxide emissions from Amazon rainforest fires for 2003–2018, which could be implemented within ESMs’ current structures. We also include equations to convert the predicted emissions to burned area. Regressions of varying mathematical forms are fitted to one or both of two fire CO emission inventories. Equation accuracy is scored on r2, bias of the mean prediction, and ratio of explained variances. We find that one equation is best for studying smoke consequences that scale approximately linearly with emissions, or for a fully coupled ESM with online meteorology. Compared to the deforestation fire equation in the Community Land Model ver. 4.5, this equation’s linear-scale accuracies are higher for both emissions and burned area. A second equation, more accurate when evaluated on a log scale, may better support studies of certain health or cloud-process consequences of fires. The most accurate recommended equation requires that meteorology be known before emissions are calculated. For all three equations, both deforestation rates and meteorological variables are key groups of predictors. The predictions nevertheless fail to reproduce most of the variation in emissions: the highest linear r2 values for monthly and annual predictions are 0.30 and 0.41, respectively. The impossibility of simultaneously matching both emission inventories limits the achievable fit. One key cause of the remaining unexplained variability appears to be noise inherent in pan-tropical data, especially meteorology.
(This article belongs to the Section Fire Science Models, Remote Sensing, and Data)
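The three accuracy scores named above (r2, bias of the mean prediction, and ratio of explained variances) can be computed directly. The exact definitions below (Pearson r2, predicted-minus-observed mean bias, predicted-to-observed variance ratio) are reasonable assumptions rather than the paper's specification, and the emission values are synthetic.

```python
def scores(y_true, y_pred):
    """Score a prediction equation: r2, bias of the mean prediction,
    and the ratio of predicted to observed variance."""
    n = len(y_true)
    mt, mp = sum(y_true) / n, sum(y_pred) / n
    var_t = sum((y - mt) ** 2 for y in y_true) / n
    var_p = sum((y - mp) ** 2 for y in y_pred) / n
    cov = sum((yt - mt) * (yp - mp) for yt, yp in zip(y_true, y_pred)) / n
    return {"r2": cov ** 2 / (var_t * var_p),
            "mean_bias": mp - mt,
            "variance_ratio": var_p / var_t}

# Synthetic monthly CO emissions in arbitrary units (not the study's data)
observed  = [3.0, 5.5, 2.0, 8.0, 4.5]
predicted = [3.5, 5.0, 2.5, 7.0, 4.0]
print(scores(observed, predicted))
```

Scoring on both linear and log scales, as the study does, simply means applying the same functions to the raw and log-transformed series.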

20 pages, 825 KiB  
Article
Stochastic H∞ Filtering of the Attitude Quaternion
by Daniel Choukroun, Lotan Cooper and Nadav Berman
Sensors 2024, 24(24), 7971; https://doi.org/10.3390/s24247971 - 13 Dec 2024
Abstract
Several stochastic H∞ filters for estimating the attitude of a rigid body from line-of-sight measurements and rate gyro readings are developed. The measurements are corrupted by white noise with unknown variances. Our approach consists of estimating the quaternion while attenuating the transmission gain from the unknown variances and initial errors to the current estimation error. The time-varying H∞ gain is computed by solving algebraic and differential linear matrix inequalities for a given transmission threshold, which is iteratively lowered until feasibility fails. Thanks to the bilinear structure of the quaternion state-space model, the algorithm parameters are independent of the state. The case of a gyro drift is addressed, too. Extensive Monte Carlo simulations show that the proposed stochastic H∞ quaternion filters perform well for a wide range of noise variances. The actual attenuation, which improves with the noise variance and is worst in the noise-free case, is better than the guaranteed attenuation by one order of magnitude. The proposed stochastic H∞ filter produces smaller biases than nonlinear Kalman or unscented filters and similar standard deviations at large noise levels. An essential advantage of this H∞ filter is that the gains are independent of the quaternion, which makes it insensitive to modeling errors. This desired feature is illustrated by comparing its performance against that of unmatched nonlinear optimal filters. When provided with too-high or too-low noise variances, the multiplicative Kalman filter and the unscented quaternion filter are outperformed by the H∞ filter, which essentially delivers identical error magnitudes.
(This article belongs to the Section Physical Sensors)

21 pages, 3654 KiB  
Article
Improving Performance of ADRC Control Systems Affected by Measurement Noise Using Kalman Filter-Tuned Extended State Observer
by Jacek Michalski, Mikołaj Mrotek, Dariusz Pazderski, Piotr Kozierski and Marek Retinger
Electronics 2024, 13(24), 4916; https://doi.org/10.3390/electronics13244916 - 12 Dec 2024
Abstract
This paper presents a novel tuning method for the extended state observer (ESO), which is applied in the active disturbance rejection control (ADRC) algorithm operating in a stochastic environment. Instead of the traditional pole placement (PP) method, the selection of ESO gains based on the noise variances of the Kalman filter (KF) is proposed. A simple parametrization of ESO gains for a particular control process, based on the observer bandwidth, is also introduced. Root-locus and frequency analyses are conducted for the KF-based observer and presented with regard to the proposed tuning method. The presented results come from experiments carried out on a real ball balancing table (BBT) plant for various measurement noise levels. The ability of the estimation algorithm to reject measurement noise was investigated to ensure effective control and minimize the control signal energy. Based on the conducted experiments, one can conclude that the presented tuning method provides better results than the traditional PP algorithm in a stochastic environment, in terms of both control quality and the reduction in measurement noise.
(This article belongs to the Collection Predictive and Learning Control in Engineering Applications)
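To make the comparison concrete: the pole-placement baseline ties all ESO gains to a single observer bandwidth, whereas the KF-based approach derives gains from noise variances. The sketch below computes both for an assumed discretized double-integrator-plus-disturbance model; all matrices, variances, and the bandwidth value are illustrative choices, not the paper's tuning.

```python
import numpy as np

def eso_gains_pp(omega):
    """Third-order ESO gains with all observer poles placed at -omega
    (the classic bandwidth/pole-placement parametrization)."""
    return np.array([3 * omega, 3 * omega ** 2, omega ** 3])

def kf_steady_gain(A, C, Q, R, iters=2000):
    """Steady-state Kalman gain via the discrete Riccati recursion; in a
    KF-tuned ESO, the gains follow from the noise variances Q and R."""
    P = np.eye(A.shape[0])
    for _ in range(iters):
        P = A @ P @ A.T + Q                    # time update
        S = C @ P @ C.T + R                    # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)         # Kalman gain
        P = (np.eye(A.shape[0]) - K @ C) @ P   # measurement update
    return K

# Assumed plant: discretized double integrator plus an extended disturbance state
dt = 0.01
A = np.array([[1.0, dt, 0.5 * dt ** 2],
              [0.0, 1.0, dt],
              [0.0, 0.0, 1.0]])
C = np.array([[1.0, 0.0, 0.0]])
Q = np.diag([1e-6, 1e-4, 1e-2])  # process/disturbance noise variances (tuning knobs)
R = np.array([[1e-3]])           # measurement noise variance

print("pole-placement gains:", eso_gains_pp(20.0))
print("KF-derived gain:", kf_steady_gain(A, C, Q, R).ravel())
```

Raising R relative to Q lowers the KF gain, which is exactly the measurement-noise-rejection trade-off the paper investigates.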

18 pages, 1765 KiB  
Article
Variance Resonance in Weakly Coupled Harmonic Oscillators Driven by Thermal Gradients
by Tarcisio Boffi and Paolo De Gregorio
Entropy 2024, 26(12), 1087; https://doi.org/10.3390/e26121087 - 12 Dec 2024
Abstract
We study two harmonic oscillators with high quality factors, driven by equilibrium and off-equilibrium thermal noise, the latter mimicked by establishing a temperature gradient. The two oscillators are coupled via a third reciprocal harmonic interaction. We focus on the case of weak coupling between the two oscillators and show the emergence of a “spike” in the displacement variance of the colder oscillator when the respective elastic constants approach each other. Away from the peak, the displacement variance of each oscillator only reflects the value of the local temperature. We name this phenomenon the variance resonance, or alternatively covariance resonance, in the sense that it comes about as one element of the covariance matrix describing both oscillators. In fact, all of the elements of the covariance matrix show some distinctive behavior. The oscillator at the lower temperature therefore oscillates as if driven by a higher temperature, resonating with the other one. Conversely, the variance of the hotter oscillator develops a deep dent, or depression, around the same region. We could not reproduce this behavior if either the coupling constant is not small compared to those of the two oscillators, or if the quality factors are not large enough; in such instances the system tends to resemble one in equilibrium at the average temperature, regardless of the relative strengths of the elastic constants of the two oscillators. Our results could have various applications, for example in precision measurement systems where not all parts of the apparatus are at the same temperature.
(This article belongs to the Section Statistical Physics)

21 pages, 1601 KiB  
Article
Method of Mobile Speed Measurement Using Semi-Supervised Masked Auxiliary Classifier Generative Adversarial Networks
by Eunchul Yoon and Sun-Yong Kim
Electronics 2024, 13(24), 4896; https://doi.org/10.3390/electronics13244896 - 12 Dec 2024
Abstract
We propose a semi-supervised masked auxiliary classifier generative adversarial network (SM-ACGAN) that has good classification performance in situations where labeled training data are limited. To develop SM-ACGAN, we combine the strengths of SSGAN (semi-supervised GAN), ACGAN-SG (auxiliary classifier GAN based on spectral normalization and gradient penalty), and MaskedGAN. Additionally, we devise a novel masking technique that performs masking adaptively depending on the real/fake ratio of the input data and a novel regularization technique that stabilizes the generator training depending on the maximum ratio of the average power of the generated fake data to the average power of the noise latent variables. Finally, we devise a rule of selecting an appropriate quantity of unlabeled data and labeled fake data generated by the generator for effective data augmentation. Through simulations, we demonstrate that SM-ACGAN has lower root mean square error (RMSE) values and lower variance, demonstrating superior mobile speed measurement performance on Rician channels compared to ACGAN-SG, MaskedGAN, SSGAN, a CNN (convolutional neural network), and a DNN (deep neural network).

17 pages, 1633 KiB  
Article
Stochastic Models for Ontogenetic Growth
by Chau Hoang, Tuan Anh Phan and Jianjun Paul Tian
Axioms 2024, 13(12), 861; https://doi.org/10.3390/axioms13120861 - 9 Dec 2024
Abstract
Based on allometric theory and scaling laws, numerous mathematical models have been proposed to study the ontogenetic growth patterns of animals. Although deterministic models have provided valuable insight into growth dynamics, animal growth often deviates from strict deterministic patterns due to stochastic factors such as genetic variation and environmental fluctuations. In this study, we extend a general model for ontogenetic growth proposed by West et al. to stochastic models by incorporating stochasticity using white noise. Based on fitting the data variance for the stochasticity, we propose two stochastic models for ontogenetic growth, one for determinate growth and one for indeterminate growth. To develop a universal stochastic process for ontogenetic growth across diverse species, we approximate the stochastic trajectories of the two models, apply a random time change, and obtain a geometric Brownian motion with a multiplier of an exponential time factor. We conduct detailed mathematical and numerical analyses of our stochastic models. The models not only predict average growth well but also capture variations in growth within species. This stochastic framework may be extended to studies of other growth phenomena.
(This article belongs to the Special Issue Advances in Mathematical Modeling and Related Topics)
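The geometric Brownian motion at the heart of the reduction can be simulated with its exact log-normal discretization. This is a generic GBM sketch under assumed parameters, not the paper's fitted growth model; setting sigma to zero recovers the deterministic exponential trajectory.

```python
import math
import random

def gbm_path(x0, mu, sigma, dt, n, rng):
    """Sample path of geometric Brownian motion dX = mu*X dt + sigma*X dW,
    stepped with the exact log-normal increment."""
    x, path = x0, [x0]
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        x *= math.exp((mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
        path.append(x)
    return path

rng = random.Random(42)
noisy  = gbm_path(1.0, 0.3, 0.2, 0.01, 500, rng)   # stochastic growth trajectory
smooth = gbm_path(1.0, 0.3, 0.0, 0.01, 500, rng)   # sigma = 0: pure exp(mu*t)
print(f"t = 5: noisy {noisy[-1]:.3f}, deterministic {smooth[-1]:.3f}")
```

Because the increments are multiplicative, the spread of many such paths grows with the mean, which is the qualitative pattern of within-species growth variation the models aim to capture.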

14 pages, 4346 KiB  
Article
Robust Sparse Bayesian Learning Source Localization in an Uncertain Shallow-Water Waveguide
by Bing Zhang, Rui Jin, Longyu Jiang, Lei Yang and Tao Zhang
Electronics 2024, 13(23), 4789; https://doi.org/10.3390/electronics13234789 - 4 Dec 2024
Abstract
Conventional matched-field processing (MFP) for acoustic source localization is sensitive to environmental mismatches because it is based on a wave propagation model and environmental information that is uncertain in reality. In this paper, a mode-predictable sparse Bayesian learning (MPR-SBL) method is proposed to increase robustness in the presence of environmental uncertainty. The estimator maximizes the marginalized probability density function (PDF) of the data received at the sensors, utilizing the Bayesian rule and two hyperparameters (the source powers and the noise variance). The replica vectors in the estimator are reconstructed using the predictable modes obtained from the normal-mode decomposition of the acoustic pressure. The performance of this approach is evaluated and compared with the Bartlett processor and the original sparse Bayesian learning, both in simulation and on the SWellEx-96 Event S5 dataset. The results illustrate that the proposed MPR-SBL method exhibits better performance in the two-source scenario, especially for the weaker source.
(This article belongs to the Special Issue Research on Cooperative Control of Multi-agent Unmanned Systems)

23 pages, 9869 KiB  
Article
Machining Eco-Friendly Jute Fiber-Reinforced Epoxy Composites Using Specially Produced Cryo-Treated and Untreated Cutting Tools
by Mehmet Şükrü Adin and Hamit Adin
Polymers 2024, 16(23), 3329; https://doi.org/10.3390/polym16233329 - 27 Nov 2024
Abstract
In recent years, consumers have become increasingly interested in natural, biodegradable and eco-friendly composites. Eco-friendly composites manufactured using natural reinforcing filling materials stand out with properties such as cost effectiveness and easy accessibility. For these reasons, in this research, a composite workpiece was specially manufactured using eco-friendly jute fibers. Two cost-effective cutting tools were specially produced to ensure high-quality machining of this composite workpiece. One of these specially manufactured cutting tools was subjected to DC&T (deep cryogenic treatment and tempering) processes to improve its performance. At the end of the research, when the lowest and highest Fd (delamination factor) values obtained with DC&T-T1 and T1 cutting tools were compared, it was observed that 5.49% and 6.23% better results were obtained with the DC&T-T1 cutting tool, respectively. From the analysis of the S/N (signal-to-noise) ratios obtained using Fd values, it was found that the most appropriate machining parameters for the composite workpiece used in this investigation were the DC&T-T1 cutting tool, a 2000 rev/min spindle speed and a 100 mm/min feed rate. Through ANOVAs (analyses of variance), it was discovered that the most significant parameter having an impact on the Fd values was the spindle speed, with a rate of 53.01%. Considering the lowest and highest Ra (average surface roughness) values obtained using DC&T-T1 and T1 cutting tools, it was seen that 19.42% and 16.91% better results were obtained using the DC&T-T1 cutting tool, respectively. In the S/N ratio analysis results obtained using Ra values, it was revealed that the most appropriate machining parameters for the composite workpiece used in this investigation were the DC&T-T1 cutting tool, a 2000 rev/min spindle speed and a 100 mm/min feed rate. In the ANOVAs, it was revealed that the most significant parameter having an effect on the Ra values was the feed rate at 37.86%.

28 pages, 3873 KiB  
Article
Bayesian Inference for Long Memory Stochastic Volatility Models
by Pedro Chaim and Márcio Poletti Laurini
Econometrics 2024, 12(4), 35; https://doi.org/10.3390/econometrics12040035 - 27 Nov 2024
Abstract
We explore the application of integrated nested Laplace approximations for the Bayesian estimation of stochastic volatility models characterized by long memory. The logarithmic variance persistence in these models is represented by a Fractional Gaussian Noise process, which we approximate as a linear combination of independent first-order autoregressive processes, lending itself to a Gaussian Markov Random Field representation. Our results from Monte Carlo experiments indicate that this approach exhibits small-sample properties akin to those of Markov Chain Monte Carlo estimators. Additionally, it offers the advantages of reduced computational complexity and the mitigation of posterior convergence issues. We employ this methodology to estimate volatility dependency patterns for both the S&P 500 index and major cryptocurrencies. We thoroughly assess the in-sample fit and extend our analysis to the construction of out-of-sample forecasts. Furthermore, we propose multi-factor extensions and apply this method to estimate volatility measurements from high-frequency data, underscoring its exceptional computational efficiency. Our simulation results demonstrate that the INLA methodology achieves comparable accuracy to traditional MCMC methods for estimating latent parameters and volatilities in LMSV models. The proposed model extensions show strong in-sample fit and out-of-sample forecast performance, highlighting the versatility of the INLA approach. This method is particularly advantageous in high-frequency contexts, where the computational demands of traditional posterior simulations are often prohibitive.
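The fractional Gaussian noise autocovariance, and its approximation by a small mixture of AR(1) autocovariances, can be sketched as a least-squares fit. The Hurst exponent, AR(1) coefficients, and lag range below are illustrative choices, not the paper's settings.

```python
import numpy as np

def fgn_autocov(k, hurst):
    """Autocovariance of unit-variance fractional Gaussian noise at lag k."""
    h2 = 2.0 * hurst
    return 0.5 * (abs(k + 1) ** h2 - 2 * abs(k) ** h2 + abs(k - 1) ** h2)

# Fit gamma(k) ~ sum_i w_i * phi_i**k using a few AR(1) autocovariances --
# the same idea, in spirit, as representing fGn by independent AR(1) terms.
H = 0.8                             # assumed Hurst exponent (long memory)
lags = np.arange(0, 50)
target = np.array([fgn_autocov(int(k), H) for k in lags])
phis = np.array([0.99, 0.9, 0.5])   # assumed AR(1) coefficients
basis = np.stack([phi ** lags for phi in phis], axis=1)
weights, *_ = np.linalg.lstsq(basis, target, rcond=None)
approx = basis @ weights
print("weights:", weights, "| max abs fit error:", np.max(np.abs(approx - target)))
```

Each AR(1) component is Markovian, so the mixture admits the sparse Gaussian Markov Random Field representation that makes INLA tractable.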
