Search Results (1,701)

Search Parameters:
Keywords = ensemble technique

22 pages, 3249 KiB  
Article
LSTM-Autoencoder Based Detection of Time-Series Noise Signals for Water Supply and Sewer Pipe Leakages
by Yungyeong Shin, Kwang Yoon Na, Si Eun Kim, Eun Ji Kyung, Hyun Gyu Choi and Jongpil Jeong
Water 2024, 16(18), 2631; https://doi.org/10.3390/w16182631 - 16 Sep 2024
Abstract
The efficient management of urban water distribution networks is crucial for public health and urban development. One of the major challenges is the quick and accurate detection of leaks, which can lead to water loss, infrastructure damage, and environmental hazards. Many existing leak detection methods are ineffective, especially in complex and aging pipeline networks. If these limitations are not overcome, it can result in a chain of infrastructure failures, exacerbating damage, increasing repair costs, and causing water shortages and public health risks. The leak issue is further complicated by increasing urban water demand, climate change, and population growth. Therefore, there is an urgent need for intelligent systems that can overcome the limitations of traditional methodologies and leverage sophisticated data analysis and machine learning technologies. In this study, we propose a reliable and advanced method for detecting leaks in water pipes using a framework based on Long Short-Term Memory (LSTM) networks combined with autoencoders. The framework is designed to manage the temporal dimension of time-series data and is enhanced with ensemble learning techniques, making it sensitive to subtle signals indicating leaks while robustly dealing with noise signals. Through the integration of signal processing and pattern recognition, the machine learning-based model addresses the leak detection problem, providing an intelligent system that enhances environmental protection and resource management. The proposed approach greatly enhances the accuracy and precision of leak detection, making essential contributions in the field and offering promising prospects for the future of sustainable water management strategies. Full article
(This article belongs to the Special Issue Prediction and Assessment of Hydrological Processes)
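The core detection idea in this abstract — flag windows whose reconstruction error under the autoencoder exceeds a noise-derived threshold — can be sketched as follows. This is a minimal NumPy illustration only: the LSTM autoencoder itself is stubbed out (here the "reconstruction" is supplied directly), and the 3-sigma threshold is an assumed choice, not necessarily the paper's.

```python
import numpy as np

def anomaly_flags(signal, reconstruction, k=3.0):
    """Flag samples whose reconstruction error exceeds mean + k * std."""
    err = np.abs(signal - reconstruction)
    threshold = err.mean() + k * err.std()
    return err > threshold

# Toy sensor trace: background noise plus one injected leak-like burst
# that the (hypothetical) autoencoder fails to reconstruct.
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.1, size=200)
reconstruction = signal.copy()   # stand-in for the autoencoder output
signal[120] += 5.0               # leak-like noise burst
flags = anomaly_flags(signal, reconstruction)
```

In the paper's framework the reconstruction would come from the trained LSTM autoencoder, and an ensemble of such detectors would vote on each window.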

18 pages, 1556 KiB  
Article
Bayesian Optimized Machine Learning Model for Automated Eye Disease Classification from Fundus Images
by Tasnim Bill Zannah, Md. Abdulla-Hil-Kafi, Md. Alif Sheakh, Md. Zahid Hasan, Taslima Ferdaus Shuva, Touhid Bhuiyan, Md. Tanvir Rahman, Risala Tasin Khan, M. Shamim Kaiser and Md Whaiduzzaman
Computation 2024, 12(9), 190; https://doi.org/10.3390/computation12090190 - 16 Sep 2024
Abstract
Eye diseases are defined as disorders or diseases that damage the tissue and related parts of the eyes. They appear in various types and can be either minor and short-lived or severe enough to cause permanent blindness. Cataracts, glaucoma, and diabetic retinopathy are all eye illnesses that can cause vision loss if not discovered and treated early on. Automated classification of these diseases from fundus images can empower quicker diagnoses and interventions. Our research aims to create a robust model, BayeSVM500, for eye disease classification to enhance medical technology and improve patient outcomes. In this study, we develop models to classify images accurately. We start by preprocessing fundus images using contrast enhancement, normalization, and resizing. We then leverage several state-of-the-art deep convolutional neural network pre-trained models, including VGG16, VGG19, ResNet50, EfficientNet, and DenseNet, to extract deep features. To reduce feature dimensionality, we employ techniques such as principal component analysis, feature agglomeration, correlation analysis, variance thresholding, and feature importance rankings. Using these refined features, we train various traditional machine learning models as well as ensemble methods. Our best model, named BayeSVM500, is a Support Vector Machine classifier trained on EfficientNet features reduced to 500 dimensions via PCA, achieving 93.65 ± 1.05% accuracy. Bayesian hyperparameter optimization further improved performance to 95.33 ± 0.60%. Through comprehensive feature engineering and model optimization, we demonstrate highly accurate eye disease classification from fundus images, comparable to or superior to previous benchmarks. Full article
(This article belongs to the Special Issue Deep Learning Applications in Medical Imaging)
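The BayeSVM500 pipeline shape (deep features, then PCA reduction, then an SVM) can be sketched with scikit-learn. Everything here is illustrative: the features are synthetic stand-ins for EfficientNet embeddings, and the component count is scaled down from the paper's 500 to fit the toy data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic 64-dim features standing in for EfficientNet embeddings of
# fundus images; 4 classes (e.g. cataract/glaucoma/retinopathy/normal).
rng = np.random.default_rng(42)
centers = rng.normal(size=(4, 64)) * 3.0
X = np.repeat(centers, 50, axis=0) + rng.normal(size=(200, 64))
y = np.repeat(np.arange(4), 50)

# The real model reduces to 500 PCA components; 16 suffices for this toy set.
model = make_pipeline(StandardScaler(), PCA(n_components=16), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=5)
```

The paper's Bayesian hyperparameter optimization would then tune the SVM's `C` and `gamma` over this same pipeline.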

29 pages, 3538 KiB  
Article
FBLearn: Decentralized Platform for Federated Learning on Blockchain
by Daniel Djolev, Milena Lazarova and Ognyan Nakov
Electronics 2024, 13(18), 3672; https://doi.org/10.3390/electronics13183672 - 16 Sep 2024
Abstract
In recent years, rapid technological advancements have propelled blockchain and artificial intelligence (AI) into prominent roles within the digital industry, each having unique applications. Blockchain, recognized for its secure and transparent data storage, and AI, a powerful tool for data analysis and decision making, exhibit common features that render them complementary. At the same time, machine learning has become a robust and influential technology, adopted by many companies to address non-trivial technical problems. This adoption is fueled by the vast amounts of data generated and utilized in daily operations. An intriguing intersection of blockchain and AI occurs in the realm of federated learning, a distributed approach allowing multiple parties to collaboratively train a shared model without centralizing data. This paper presents a decentralized platform FBLearn for the implementation of federated learning in blockchain, which enables us to harness the benefits of federated learning without the necessity of exchanging sensitive customer or product data, thereby fostering trustless collaboration. As the decentralized blockchain network is introduced in the distributed model training to replace the centralized server, global model aggregation approaches have to be utilized. This paper investigates several techniques for model aggregation based on the local model average and ensemble using either local or globally distributed validation data for model evaluation. The suggested aggregation approaches are experimentally evaluated based on two use cases of the FBLearn platform: credit risk scoring using a random forest classifier and credit card fraud detection using a logistic regression. The experimental results confirm that the suggested adaptive weight calculation and ensemble techniques based on the quality of local training data enhance the robustness of the global model. 
The performance evaluation metrics and ROC curves prove that the aggregation strategies successfully isolate the influence of the low-quality models on the final model. The proposed system’s ability to outperform models created with separate datasets underscores its potential to enhance collaborative efforts and to improve the accuracy of the final global model compared to each of the local models. Integrating blockchain and federated learning presents a forward-looking approach to data collaboration while addressing privacy concerns. Full article
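The aggregation step described here — averaging local models with adaptive weights derived from local data quality — can be sketched in NumPy. The abstract does not give the exact weighting formula, so weights proportional to each party's validation score are an assumption for illustration.

```python
import numpy as np

def aggregate(local_params, val_scores):
    """Weighted average of local model parameters; weights are
    proportional to each party's validation quality (an assumed
    scheme — the paper's adaptive weighting may differ)."""
    w = np.asarray(val_scores, dtype=float)
    w = w / w.sum()
    stacked = np.stack(local_params)           # (n_parties, n_params)
    return (w[:, None] * stacked).sum(axis=0)

# Three parties: two good local models agree, one low-quality model
# (e.g. trained on noisy data) is down-weighted in the global model.
local_params = [np.array([1.0, 2.0]), np.array([1.2, 1.8]),
                np.array([9.0, -5.0])]
global_params = aggregate(local_params, val_scores=[0.9, 0.9, 0.1])
```

In FBLearn this aggregation would run on-chain in place of a central server, with validation performed on either local or globally distributed data.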

17 pages, 3315 KiB  
Article
Application of the Gradient-Boosting with Regression Trees to Predict the Coefficient of Friction on Drawbead in Sheet Metal Forming
by Sherwan Mohammed Najm, Tomasz Trzepieciński, Salah Eddine Laouini, Marek Kowalik, Romuald Fejkiel and Rafał Kowalik
Materials 2024, 17(18), 4540; https://doi.org/10.3390/ma17184540 - 15 Sep 2024
Abstract
Correct design of the sheet metal forming process requires knowledge of the friction phenomenon occurring in various areas of the drawpiece. Additionally, the friction at the drawbead is decisive to ensure that the sheet flows in the desired direction. This article presents the results of experimental tests enabling the determination of the coefficient of friction at the drawbead using a specially designed tribometer. The test material was a DC04 carbon steel sheet. The tests were carried out for different orientations of the samples in relation to the sheet rolling direction, different drawbead heights, different lubrication conditions, and different average roughnesses of the countersamples. According to the aim of this work, the Features Importance analysis, conducted using the Gradient-Boosted Regression Trees algorithm, was used to find the influence of several parameter features on the coefficient of friction. The advantage of gradient-boosted decision trees is their ability to analyze complex relationships in the data and protect against overfitting. Another advantage is that there is no need for prior data processing. To the best of the authors' knowledge, the effectiveness of gradient-boosted decision trees in analyzing the friction occurring in the drawbead in sheet metal forming has not been previously studied. To improve the accuracy of the model, five MinLeafs were applied to the regression tree, together with 500 ensembles utilized for learning the previously learned nodes, noting that the MinLeaf indicates the minimum number of leaf node observations. The least-squares-boosting technique, often known as LSBoost, is used to train a group of regression trees. Features Importance analysis has shown that the friction conditions (dry friction or lubricated conditions) had the most significant influence on the coefficient of friction, at 56.98%, followed by the drawbead height, at 23.41%, and the sample width, at 11.95%. 
The average surface roughness of rollers and sample orientation have the smallest impact on the value of the coefficient of friction at 6.09% and 1.57%, respectively. The dispersion and deviation observed for the testing dataset from the experimental data indicate the model’s ability to predict the values of the coefficient of friction at a coefficient of determination of R2 = 0.972 and a mean-squared error of MSE = 0.000048. It was qualitatively found that in order to ensure the optimal (the lowest) coefficient of friction, it is necessary to control the friction conditions (use of lubricant) and the drawbead height. Full article
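The feature-importance workflow described above can be sketched with scikit-learn's gradient boosting, which is an analogue of MATLAB's LSBoost (the paper's actual toolchain). The data below are synthetic stand-ins constructed so that lubrication dominates, mirroring the reported ranking; the coefficients are hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 300
# Hypothetical stand-in features: [lubrication flag, drawbead height (mm),
# sample width (mm), counter-sample roughness (um)] — not the paper's data.
X = np.column_stack([
    rng.integers(0, 2, n).astype(float),
    rng.uniform(6, 18, n),
    rng.uniform(10, 20, n),
    rng.uniform(0.3, 1.6, n),
])
# Friction dominated by lubrication, then drawbead height, as reported.
y = (0.25 - 0.08 * X[:, 0] + 0.004 * X[:, 1] + 0.001 * X[:, 2]
     + rng.normal(0.0, 0.003, n))

# 500 estimators and min_samples_leaf=5 echo the paper's
# "500 ensembles, MinLeaf = 5" setting.
model = GradientBoostingRegressor(n_estimators=500, min_samples_leaf=5,
                                  random_state=0)
model.fit(X, y)
importances = model.feature_importances_
```

`feature_importances_` sums to one across features, which is how percentage influences like 56.98% for lubrication can be read off.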

16 pages, 2933 KiB  
Article
A Two-Level Machine Learning Prediction Approach for RAC Compressive Strength
by Fei Qi and Hangyu Li
Buildings 2024, 14(9), 2885; https://doi.org/10.3390/buildings14092885 - 12 Sep 2024
Abstract
Through the use of recycled aggregates, the construction industry can mitigate its environmental impact. A key consideration for concrete structural engineers when designing and constructing concrete structures is compressive strength. This study aims to accurately forecast the compressive strength of recycled aggregate concrete (RAC) using machine learning techniques. We propose a simplified approach that incorporates a two-layer stacked ensemble learning model to predict RAC compressive strength. In this framework, the first layer consists of ensemble models acting as base learners, while the second layer utilizes a random forest (RF) model as the meta-learner. A comparative analysis with four other ensemble learning models demonstrates the superior performance of the proposed stacked model in effectively integrating predictions from the base learners, resulting in enhanced model accuracy. The model achieves a low mean absolute error (MAE) of 2.599 MPa, a root mean squared error (RMSE) of 3.645 MPa, and a high R-squared (R2) value of 0.964. Additionally, a Shapley (SHAP) additive explanation analysis reveals the influence and interrelationships of various input factors on the compressive strength of RAC, aiding design and construction professionals in optimizing raw material content during the RAC design and production process. Full article
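The two-layer architecture described here — ensemble base learners in the first layer and a random forest meta-learner in the second — maps directly onto scikit-learn's stacking API. This is a shape-only sketch on synthetic data; the base-learner choices and hyperparameters are assumptions, not the paper's configuration.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import (ExtraTreesRegressor, GradientBoostingRegressor,
                              RandomForestRegressor, StackingRegressor)
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a RAC mix-design table (cement content, w/c ratio,
# recycled-aggregate fraction, ...); the study's dataset is not reproduced.
X, y = make_regression(n_samples=400, n_features=8, n_informative=6,
                       noise=5.0, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Layer 1: ensemble base learners; Layer 2: random forest meta-learner.
stack = StackingRegressor(
    estimators=[("gbr", GradientBoostingRegressor(random_state=1)),
                ("etr", ExtraTreesRegressor(random_state=1))],
    final_estimator=RandomForestRegressor(random_state=1),
)
stack.fit(X_tr, y_tr)
r2 = r2_score(y_te, stack.predict(X_te))
```

`StackingRegressor` trains the meta-learner on cross-validated base-learner predictions, which is what lets the second layer "integrate predictions from the base learners" without leaking training labels.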

17 pages, 4838 KiB  
Article
Improved Detection of Multi-Class Bad Traffic Signs Using Ensemble and Test Time Augmentation Based on Yolov5 Models
by Ibrahim Yahaya Garta, Shao-Kuo Tai and Rung-Ching Chen
Appl. Sci. 2024, 14(18), 8200; https://doi.org/10.3390/app14188200 - 12 Sep 2024
Abstract
Various factors such as natural disasters, vandalism, weather, and environmental conditions can affect the physical state of traffic signs. The proposed model aims to improve detection of traffic signs affected by partial occlusion as a result of overgrown vegetation, displaced signs (those knocked down or bent), perforated signs (those damaged with holes), faded signs (color degradation), rusted signs (corroded surface), and defaced signs (graffiti or other vandalism). This research aims to improve the detection of bad traffic signs using three approaches. In the first approach, Spatial Pyramid Pooling-Fast (SPPF) and C3TR modules are introduced to the architecture of Yolov5 models. SPPF helps provide a multi-scale representation of the input feature map by pooling at different scales, which is useful in improving the quality of feature maps and detecting bad traffic signs of various sizes and perspectives. The C3TR module uses convolutional layers to enhance local feature extraction and transformers to boost understanding of the global context. Secondly, we use predictions of Yolov5 as base models to implement a mean ensemble to improve performance. Thirdly, test time augmentation (TTA) is applied at test time by using scaling and flipping to improve accuracy. Some signs are generated using stable diffusion techniques to augment certain classes. We test the proposed models on the CCTSDB2021, TT100K, GTSDB, and GTSRD datasets to ensure generalization and use k-fold cross-validation to further evaluate the performance of the models. The proposed models outperform other state-of-the-art models in comparison. Full article
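The TTA mechanics — run the model on augmented views, map each prediction back to the original frame, then average — can be shown with a minimal NumPy sketch. The "model" here is a hypothetical stub producing a per-pixel score map; a real Yolov5 ensemble would instead de-augment and average box confidences.

```python
import numpy as np

def tta_predict(predict_fn, image):
    """Average predictions over test-time augmentations (identity and
    horizontal flip), mapping flipped outputs back before averaging."""
    preds = [
        predict_fn(image),
        np.fliplr(predict_fn(np.fliplr(image))),  # undo the flip
    ]
    return np.mean(preds, axis=0)

# Hypothetical stub detector returning a normalized score map.
def stub_model(img):
    return img / img.max()

img = np.arange(12.0).reshape(3, 4) + 1.0
heat = tta_predict(stub_model, img)
```

Scaling augmentations work the same way, with a resize back to the original resolution taking the place of the un-flip.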

17 pages, 2368 KiB  
Article
Maritime Object Detection by Exploiting Electro-Optical and Near-Infrared Sensors Using Ensemble Learning
by Muhammad Furqan Javed, Muhammad Osama Imam, Muhammad Adnan, Iqbal Murtza and Jin-Young Kim
Electronics 2024, 13(18), 3615; https://doi.org/10.3390/electronics13183615 - 11 Sep 2024
Abstract
Object detection in maritime environments is a challenging problem because of the continuously changing background and moving objects resulting in shearing, occlusion, noise, etc. Unfortunately, this problem is of critical importance since such failure may result in significant loss of human lives and economic loss. The available object detection methods rely on radar and sonar sensors. Even with the advances in electro-optical sensors, their employment in maritime object detection is rarely considered. The proposed research aims to employ both electro-optical and near-infrared sensors for effective maritime object detection. To this end, dedicated deep learning detection models (ResNet-50, ResNet-101, and SSD MobileNet) are trained on electro-optical and near-infrared (NIR) sensor datasets. Then, dedicated ensemble classifications are constructed on each collection of base learners from electro-optical and near-infrared spaces. After this, decisions about object detection from these spaces are combined using logical-disjunction-based final ensemble classification. This strategy is utilized to reduce false negatives effectively. To evaluate the performance of the proposed methodology, the publicly available standard Singapore Maritime Dataset is used and the results show that the proposed methodology outperforms the contemporary maritime object detection techniques with a significantly improved mean average precision. Full article
(This article belongs to the Special Issue Applied Machine Learning in Intelligent Systems)
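The logical-disjunction fusion described in this abstract is a one-liner once each modality's ensemble has produced a per-frame decision: keep a detection if either the electro-optical or the NIR branch fires, which lowers false negatives at the cost of some false positives. A minimal sketch with hypothetical per-frame votes:

```python
import numpy as np

# Per-frame object decisions (True = object detected) from the
# electro-optical and NIR ensembles; values are illustrative only.
eo_votes = np.array([1, 0, 0, 1, 0], dtype=bool)
nir_votes = np.array([0, 0, 1, 1, 0], dtype=bool)

# Logical-disjunction ensemble: detect if either modality detects.
fused = eo_votes | nir_votes
```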

23 pages, 3337 KiB  
Article
Attention-Driven Transfer Learning Model for Improved IoT Intrusion Detection
by Salma Abdelhamid, Islam Hegazy, Mostafa Aref and Mohamed Roushdy
Big Data Cogn. Comput. 2024, 8(9), 116; https://doi.org/10.3390/bdcc8090116 - 9 Sep 2024
Abstract
The proliferation of Internet of Things (IoT) devices has become inevitable in contemporary life, significantly affecting myriad applications. Nevertheless, the pervasive use of heterogeneous IoT gadgets introduces vulnerabilities to malicious cyber-attacks, resulting in data breaches that jeopardize the network’s integrity and resilience. This study proposes an Intrusion Detection System (IDS) for IoT environments that leverages Transfer Learning (TL) and the Convolutional Block Attention Module (CBAM). We extensively evaluate four prominent pre-trained models, each integrated with an independent CBAM at the uppermost layer. Our methodology is validated using the BoT-IoT dataset, which undergoes preprocessing to rectify the imbalanced data distribution, eliminate redundancy, and reduce dimensionality. Subsequently, the tabular dataset is transformed into RGB images to enhance the interpretation of complex patterns. Our evaluation results demonstrate that integrating TL models with the CBAM significantly improves classification accuracy and reduces false-positive rates. Additionally, to further enhance the system performance, we employ an Ensemble Learning (EL) technique to aggregate predictions from the two best-performing models. The final findings prove that our TL-CBAM-EL model achieves superior performance, attaining an accuracy of 99.93% as well as high recall, precision, and F1-score. Henceforth, the proposed IDS is a robust and efficient solution for securing IoT networks. Full article
(This article belongs to the Special Issue Advances in Intelligent Defense Systems for the Internet of Things)

29 pages, 3830 KiB  
Review
Utilizing Molecular Dynamics Simulations, Machine Learning, Cryo-EM, and NMR Spectroscopy to Predict and Validate Protein Dynamics
by Ahrum Son, Woojin Kim, Jongham Park, Wonseok Lee, Yerim Lee, Seongyun Choi and Hyunsoo Kim
Int. J. Mol. Sci. 2024, 25(17), 9725; https://doi.org/10.3390/ijms25179725 - 8 Sep 2024
Abstract
Protein dynamics play a crucial role in biological function, encompassing motions ranging from atomic vibrations to large-scale conformational changes. Recent advancements in experimental techniques, computational methods, and artificial intelligence have revolutionized our understanding of protein dynamics. Nuclear magnetic resonance spectroscopy provides atomic-resolution insights, while molecular dynamics simulations offer detailed trajectories of protein motions. Computational methods applied to X-ray crystallography and cryo-electron microscopy (cryo-EM) have enabled the exploration of protein dynamics, capturing conformational ensembles that were previously unattainable. The integration of machine learning, exemplified by AlphaFold2, has accelerated structure prediction and dynamics analysis. These approaches have revealed the importance of protein dynamics in allosteric regulation, enzyme catalysis, and intrinsically disordered proteins. The shift towards ensemble representations of protein structures and the application of single-molecule techniques have further enhanced our ability to capture the dynamic nature of proteins. Understanding protein dynamics is essential for elucidating biological mechanisms, designing drugs, and developing novel biocatalysts, marking a significant paradigm shift in structural biology and drug discovery. Full article
(This article belongs to the Special Issue Advanced Research on Protein Structure and Protein Dynamics)

30 pages, 5045 KiB  
Review
A Review of Research on Building Energy Consumption Prediction Models Based on Artificial Neural Networks
by Qing Yin, Chunmiao Han, Ailin Li, Xiao Liu and Ying Liu
Sustainability 2024, 16(17), 7805; https://doi.org/10.3390/su16177805 - 7 Sep 2024
Abstract
Building energy consumption prediction models are powerful tools for optimizing energy management. Among various methods, artificial neural networks (ANNs) have become increasingly popular. This paper reviews studies since 2015 on using ANNs to predict building energy use and demand, focusing on the characteristics of different ANN structures and their applications across building phases—design, operation, and retrofitting. It also provides guidance on selecting the most appropriate ANN structures for each phase. Finally, this paper explores future developments in ANN-based predictions, including improving data processing techniques for greater accuracy, refining parameterization to better capture building features, optimizing algorithms for faster computation, and integrating ANNs with other machine learning methods, such as ensemble learning and hybrid models, to enhance predictive performance. Full article

16 pages, 840 KiB  
Article
Sentiment Informed Sentence BERT-Ensemble Algorithm for Depression Detection
by Bayode Ogunleye, Hemlata Sharma and Olamilekan Shobayo
Big Data Cogn. Comput. 2024, 8(9), 112; https://doi.org/10.3390/bdcc8090112 - 5 Sep 2024
Abstract
The World Health Organisation (WHO) revealed approximately 280 million people in the world suffer from depression. Yet, existing studies on early-stage depression detection using machine learning (ML) techniques are limited. Prior studies have applied a single stand-alone algorithm, which is unable to deal with data complexities, prone to overfitting, and limited in generalization. To this end, our paper examined the performance of several ML algorithms for early-stage depression detection using two benchmark social media datasets (D1 and D2). More specifically, we incorporated sentiment indicators to improve our model performance. Our experimental results showed that sentence bidirectional encoder representations from transformers (SBERT) numerical vectors fitted into the stacking ensemble model achieved comparable F1 scores of 69% in the dataset (D1) and 76% in the dataset (D2). Our findings suggest that utilizing sentiment indicators as an additional feature for depression detection yields an improved model performance, and thus, we recommend the development of a depressive term corpus for future work. Full article

17 pages, 2194 KiB  
Article
A Multidimensional Framework Incorporating 2D U-Net and 3D Attention U-Net for the Segmentation of Organs from 3D Fluorodeoxyglucose-Positron Emission Tomography Images
by Andreas Vezakis, Ioannis Vezakis, Theodoros P. Vagenas, Ioannis Kakkos and George K. Matsopoulos
Electronics 2024, 13(17), 3526; https://doi.org/10.3390/electronics13173526 - 5 Sep 2024
Abstract
Accurate analysis of Fluorodeoxyglucose (FDG)-Positron Emission Tomography (PET) images is crucial for the diagnosis, treatment assessment, and monitoring of patients suffering from various cancer types. FDG-PET images provide valuable insights by revealing regions where FDG, a glucose analog, accumulates within the body. While regions of high FDG uptake include suspicious tumor lesions, FDG also accumulates in non-tumor-specific regions and organs. Identifying these regions is crucial for excluding them from certain measurements, or calculating useful parameters, for example, the mean standardized uptake value (SUV) to assess the metabolic activity of the liver. Manual organ delineation from FDG-PET by clinicians demands significant effort and time, which is often not feasible in real clinical workflows with high patient loads. For this reason, this study focuses on automatically identifying key organs with high FDG uptake, namely the brain, left cardiac ventricle, kidneys, liver, and bladder. To this end, an ensemble approach is adopted, where a three-dimensional Attention U-Net (3D AU-Net) is employed for robust three-dimensional analysis, while a two-dimensional U-Net (2D U-Net) is utilized for analysis in the coronal plane. The 3D AU-Net demonstrates highly detailed organ segmentations, but also includes many false positive regions. In contrast, 2D U-Net achieves higher reliability with minimal false positive regions, but lacks the 3D details. Experiments conducted on a subset of the public AutoPET dataset with 60 PET scans demonstrate that the proposed ensemble model achieves high accuracy in segmenting the required organs, surpassing current state-of-the-art techniques, and supporting the potential utilization of the proposed methodology in accelerating and enhancing the clinical workflow of cancer patients. Full article
(This article belongs to the Special Issue Artificial Intelligence in Image Processing and Computer Vision)
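The abstract contrasts a detailed-but-noisy 3D AU-Net with a reliable-but-coarse 2D U-Net. One natural fusion rule (an assumption for illustration — the paper's exact combination is not stated in the abstract) is to keep the detailed 3D segmentation only where the conservative 2D model agrees, suppressing the 3D model's false positives:

```python
import numpy as np

def fuse_masks(mask_3d, mask_2d_confirm):
    """Keep detailed 3D voxels only inside regions the conservative 2D
    model also marks (logical AND suppresses 3D false positives)."""
    return mask_3d & mask_2d_confirm

# Single coronal slice for illustration (real masks are 3D volumes).
detail = np.zeros((4, 4), dtype=bool)
detail[1:3, 1:3] = True        # detailed organ region from the 3D AU-Net
detail[0, 3] = True            # spurious 3D false positive
confirm = np.ones((4, 4), dtype=bool)
confirm[0, 3] = False          # 2D U-Net rejects the spurious voxel
fused = fuse_masks(detail, confirm)
```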

16 pages, 6525 KiB  
Article
Recurrent and Concurrent Prediction of Longitudinal Progression of Stargardt Atrophy and Geographic Atrophy towards Comparative Performance on Optical Coherence Tomography as on Fundus Autofluorescence
by Zubin Mishra, Ziyuan Chris Wang, Emily Xu, Sophia Xu, Iyad Majid, SriniVas R. Sadda and Zhihong Jewel Hu
Appl. Sci. 2024, 14(17), 7773; https://doi.org/10.3390/app14177773 - 3 Sep 2024
Abstract
Stargardt atrophy and geographic atrophy (GA) represent pivotal endpoints in FDA-approved clinical trials. Predicting atrophy progression is crucial for evaluating drug efficacy. Fundus autofluorescence (FAF), the standard 2D imaging modality in these trials, has limitations in patient comfort. In contrast, spectral-domain optical coherence tomography (SD-OCT), a 3D imaging modality, is more patient-friendly but suffers from lower image quality. This study has two primary objectives: (1) to develop efficient predictive modeling for generating future FAF images and predicting future Stargardt atrophic (as well as GA) regions, and (2) to develop efficient predictive modeling with advanced 3D OCT features at the ellipsoid zone (EZ) to assess whether future en face EZ maps and future Stargardt atrophic regions can be predicted as well on OCT as on FAF. To achieve these goals, we propose two deep neural networks (termed ReConNet and ReConNet-Ensemble): the first integrates recurrent learning units (long short-term memory, LSTM) with a convolutional neural network (CNN) encoder-decoder architecture, and the second adds concurrent learning units integrated through ensemble/multiple recurrent learning channels. The ReConNet, which incorporates LSTM connections with the CNN, addresses the first goal on longitudinal FAF. The ReConNet-Ensemble, which incorporates multiple recurrent learning channels based on enhanced EZ en face maps to capture higher-order inherent OCT EZ features, addresses the second goal on longitudinal OCT. Using FAF images at months 0, 6, and 12 to predict atrophy at month 18, the ReConNet achieved mean (±standard deviation, SD) and median Dice coefficients of 0.895 (±0.086) and 0.922 for Stargardt atrophy and 0.864 (±0.113) and 0.893 for GA. Using SD-OCT images at months 0 and 6 to predict atrophy at month 12, the ReConNet-Ensemble achieved mean and median Dice coefficients of 0.882 (±0.101) and 0.906 for Stargardt atrophy. The prediction performance on OCT images was comparable to that on FAF. These results underscore the potential of SD-OCT for efficient and practical assessment of atrophy progression in clinical trials and retina clinics, complementing or surpassing the widely used FAF imaging technique.
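The Dice coefficient used to score the predicted atrophy masks above is a standard overlap measure between a predicted and a ground-truth binary segmentation. A minimal NumPy sketch (the toy 4×4 masks are illustrative, not data from the study):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(2.0 * np.logical_and(pred, truth).sum() / denom)

# Toy atrophy masks (hypothetical, for illustration only)
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(dice_coefficient(pred, truth))  # 2*3/(4+3) ≈ 0.857
```

A Dice value of 1.0 means the predicted month-18 atrophy region matches the ground truth exactly, so the reported medians above 0.89 indicate close agreement.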
21 pages, 3639 KiB  
Article
AHEAD: A Novel Technique Combining Anti-Adversarial Hierarchical Ensemble Learning with Multi-Layer Multi-Anomaly Detection for Blockchain Systems
by Muhammad Kamran, Muhammad Maaz Rehan, Wasif Nisar and Muhammad Waqas Rehan
Big Data Cogn. Comput. 2024, 8(9), 103; https://doi.org/10.3390/bdcc8090103 - 2 Sep 2024
Abstract
Blockchain technology has impacted various sectors, transforming them through its decentralized, immutable, transparent, and traceable attributes and its smart contracts (automatically executing digital agreements). Due to the adoption of blockchain technology in versatile applications, millions of transactions take place globally. These transactions are not immune to adversarial attacks, which include data tampering, double spending, data corruption, Sybil attacks, eclipse attacks, DDoS attacks, P2P network partitioning, delay attacks, selfish mining, bribery, fake transactions, fake wallets or phishing, false advertising, malicious smart contracts, and initial coin offering scams. These adversarial attacks result in operational, financial, and reputational losses. Although numerous studies have proposed different blockchain anomaly detection mechanisms, challenges persist: detecting anomalies in just a single layer instead of multiple layers, targeting a single anomaly instead of multiple, failing to counter adversarial machine learning attacks (for example, poisoning, evasion, and model extraction attacks), and inadequate handling of complex transactional data. The proposed AHEAD model solves the above problems by providing the following: (i) a data aggregation transformation to detect transactional and user anomalies at the data and network layers of the blockchain, respectively; (ii) a Three-Layer Hierarchical Ensemble Learning Model (HELM) incorporating stratified random sampling to add resilience against adversarial attacks; and (iii) an advanced preprocessing technique with hybrid feature selection to handle complex transactional data. The performance analysis of the proposed AHEAD model shows that it achieves high anti-adversarial resistance and detects multiple anomalies at the data and network layers. A comparison with other state-of-the-art models shows that AHEAD achieves 98.85% accuracy for anomaly detection at the data and network layers (targeting transaction and user anomalies) and 95.97% accuracy against adversarial machine learning attacks, surpassing the other models.
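Two ingredients named in the abstract, stratified random sampling across ensemble members and a majority-vote combination of their verdicts, can be sketched in plain Python. This is not the AHEAD implementation; the record format and split count are hypothetical, and the sketch only illustrates why a poisoned minority of training subsets cannot flip the ensemble's decision:

```python
import random
from collections import Counter, defaultdict

def stratified_samples(records, label_key, n_splits, seed=0):
    """Split records into n_splits subsets while preserving the per-class
    label proportions in each subset (stratified random sampling)."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for r in records:
        by_label[r[label_key]].append(r)
    splits = [[] for _ in range(n_splits)]
    for group in by_label.values():
        rng.shuffle(group)
        for i, r in enumerate(group):
            splits[i % n_splits].append(r)  # round-robin keeps ratios
    return splits

def majority_vote(predictions):
    """Combine per-model verdicts: the ensemble flags an anomaly only when
    most base models agree, blunting a poisoned or evaded minority."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical transaction records: 6 normal, 3 anomalous
data = ([{"id": i, "label": "normal"} for i in range(6)]
        + [{"id": i, "label": "anomaly"} for i in range(6, 9)])
splits = stratified_samples(data, "label", 3)
for s in splits:
    print(Counter(r["label"] for r in s))  # 2 normal + 1 anomaly each
print(majority_vote(["anomaly", "normal", "anomaly"]))  # anomaly
```

Each base model of the hierarchy would then be trained on one stratified subset, so every model sees the same class balance and a single tampered subset changes only one vote.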
14 pages, 2945 KiB  
Article
Low-Cost CO2 NDIR Sensors: Performance Evaluation and Calibration Using Machine Learning Techniques
by Ravish Dubey, Arina Telles, James Nikkel, Chang Cao, Jonathan Gewirtzman, Peter A. Raymond and Xuhui Lee
Sensors 2024, 24(17), 5675; https://doi.org/10.3390/s24175675 - 31 Aug 2024
Abstract
This study comprehensively evaluates low-cost CO2 sensors from different price tiers, assessing their performance against a reference-grade instrument and exploring the possibility of calibration using different machine learning techniques. Three sensors (the Sunrise AB by Senseair, the K30 CO2 by Senseair, and the GMP 343 by Vaisala) were tested alongside a reference instrument (a Los Gatos precision greenhouse gas analyzer). The results revealed differences in sensor performance, with the higher-cost Vaisala sensor exhibiting superior accuracy. Despite their lower price, the Sunrise sensors still demonstrated reasonable accuracy, whereas the K30 sensor measurements displayed higher variability and noise. Machine learning models, including linear regression, gradient boosting regression, and random forest regression, were employed for sensor calibration. In general, linear regression models performed best for extrapolating data, whereas decision-tree-based models were generally more useful for handling non-linear datasets. Notably, a stacked ensemble model combining these techniques outperformed the individual models and improved sensor accuracy by approximately 65%. Overall, this study helps fill the gap in intercomparing CO2 sensors across different price categories and underscores the potential of machine learning for enhancing sensor accuracy, particularly in low-cost sensor applications.
(This article belongs to the Section Environmental Sensing)
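The stacked-ensemble idea behind the calibration result can be illustrated with a minimal NumPy sketch: base models are fit to raw sensor readings against the reference instrument, and a meta-learner regresses the reference values on the stacked base predictions. The synthetic data, the quadratic base model (standing in for the study's tree-based learners), and the single train/test split are all assumptions for illustration, not the study's setup:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic calibration data: raw low-cost sensor CO2 readings (ppm) vs. a
# reference analyzer, with a mild non-linear bias plus measurement noise.
raw = rng.uniform(400, 800, size=200)
ref = raw + 0.0004 * (raw - 600) ** 2 + rng.normal(0, 2, size=200)
train, test = raw[:150], raw[150:]
y_train, y_test = ref[:150], ref[150:]

def fit_linear(x, y):
    """Base model 1: ordinary least squares on [1, x]."""
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda z: coef[0] + coef[1] * z

def fit_quadratic(x, y):
    """Base model 2: least squares on [1, x, x^2], a stand-in here for
    the non-linear (tree-based) learners used in the study."""
    A = np.column_stack([np.ones_like(x), x, x ** 2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda z: coef[0] + coef[1] * z + coef[2] * z ** 2

base = [fit_linear(train, y_train), fit_quadratic(train, y_train)]

# Meta-learner: linear regression on the stacked base-model predictions.
stack_train = np.column_stack([m(train) for m in base] + [np.ones_like(train)])
meta, *_ = np.linalg.lstsq(stack_train, y_train, rcond=None)

stack_test = np.column_stack([m(test) for m in base] + [np.ones_like(test)])
pred = stack_test @ meta
rmse = np.sqrt(np.mean((pred - y_test) ** 2))
print(f"stacked-ensemble RMSE: {rmse:.2f} ppm")
```

In practice the meta-learner should be fit on out-of-fold base predictions (as scikit-learn's `StackingRegressor` does) to avoid leaking the base models' training fit into the meta-model.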