Search Results (14,856)

Search Parameters:
Keywords = machine learning algorithms

18 pages, 6338 KiB  
Article
State of Health Estimation of Lithium-Ion Batteries Using Fusion Health Indicator by PSO-ELM Model
by Jun Chen, Yan Liu, Jun Yong, Cheng Yang, Liqin Yan and Yanping Zheng
Batteries 2024, 10(11), 380; https://doi.org/10.3390/batteries10110380 (registering DOI) - 28 Oct 2024
Abstract
The accurate estimation of the State of Health (SOH) of lithium-ion batteries is essential for ensuring their safe and reliable operation, as direct measurement is not feasible. This paper presents a novel SOH estimation method that integrates Particle Swarm Optimization (PSO) with an Extreme Learning Machine (ELM) to improve prediction accuracy. Health Indicators (HIs) are first extracted from the battery’s charging curve, and correlation analysis is conducted on seven indirect HIs using Pearson and Spearman coefficients. To reduce dimensionality and eliminate redundancy, Principal Component Analysis (PCA) is applied, with the principal component contributing over 94% used as a fusion HI to represent battery capacity degradation. PSO is then employed to optimize the weights (ε) between the input and hidden layers, as well as the hidden layer bias (u) in the ELM, treating these parameters as particles in the PSO framework. This optimization enhances the ELM’s performance, addressing instability issues in the standard algorithm. The proposed PSO-ELM model demonstrates superior accuracy in SOH prediction compared with ELM and other methods. Experimental results show that the mean absolute error (MAE) is 0.0034, the mean absolute percentage error (MAPE) is 0.467%, and the root mean square error (RMSE) is 0.0043, providing a valuable reference for battery safety and reliability assessments. Full article
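As a rough illustration of the PSO-ELM idea described in this abstract: an ELM's hidden layer is random, its output weights have a closed-form least-squares solution, and PSO searches over the input weights (ε) and biases (u). The sketch below uses invented one-dimensional health-indicator data and hyperparameters, assuming only NumPy; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fused health indicator (HI) vs. SOH, standing in for real
# charging-curve features; the linear-plus-noise relation is invented.
X = rng.uniform(0.0, 1.0, (80, 1))
y = 1.0 - 0.25 * X[:, 0] + 0.02 * rng.standard_normal(80)

N_HIDDEN = 10

def elm_rmse(params):
    """Fitness of one PSO particle: build an ELM from the particle's input
    weights (epsilon) and biases (u), solve the output weights in closed
    form, and return the training RMSE."""
    W = params[:N_HIDDEN].reshape(1, N_HIDDEN)
    b = params[N_HIDDEN:]
    H = np.tanh(X @ W + b)              # hidden-layer response
    beta = np.linalg.pinv(H) @ y        # least-squares output weights
    return float(np.sqrt(np.mean((H @ beta - y) ** 2)))

# Minimal PSO: each particle is a flattened (epsilon, u) pair.
n_particles, dim = 20, 2 * N_HIDDEN
pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([elm_rmse(p) for p in pos])
initial_best = float(pbest_f.min())

for _ in range(30):
    g = pbest[np.argmin(pbest_f)]       # global best so far
    r1, r2 = rng.uniform(size=(2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    f = np.array([elm_rmse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]

final_best = float(pbest_f.min())       # monotonically no worse than start
```

Because the global best is only ever replaced by a better particle, the optimized RMSE can never exceed the initial best, which is the stabilizing effect over the standard ELM that the abstract refers to.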

26 pages, 792 KiB  
Review
Deep Learning in Finance: A Survey of Applications and Techniques
by Ebikella Mienye, Nobert Jere, George Obaido, Ibomoiye Domor Mienye and Kehinde Aruleba
AI 2024, 5(4), 2066-2091; https://doi.org/10.3390/ai5040101 (registering DOI) - 28 Oct 2024
Abstract
Machine learning (ML) has transformed the financial industry by enabling advanced applications such as credit scoring, fraud detection, and market forecasting. At the core of this transformation is deep learning (DL), a subset of ML that is robust in processing and analyzing complex and large datasets. This paper provides a comprehensive overview of key deep learning models, including Convolutional Neural Networks (CNNs), Long Short-Term Memory networks (LSTMs), Deep Belief Networks (DBNs), Transformers, Generative Adversarial Networks (GANs), and Deep Reinforcement Learning (Deep RL). Beyond summarizing their mathematical foundations and learning processes, this study offers new insights into how these models are applied in real-world financial contexts, highlighting their specific advantages and limitations in tasks such as algorithmic trading, risk management, and portfolio optimization. It also examines recent advances and emerging trends in the financial industry alongside critical challenges such as data quality, model interpretability, and computational complexity. These insights can guide future research directions toward developing more efficient, robust, and explainable financial models that address the evolving needs of the financial sector. Full article
(This article belongs to the Special Issue AI in Finance: Leveraging AI to Transform Financial Services)
26 pages, 2953 KiB  
Article
Development of a Flexible Information Security Risk Model Using Machine Learning Methods and Ontologies
by Alibek Barlybayev, Altynbek Sharipbay, Gulmira Shakhmetova and Ainur Zhumadillayeva
Appl. Sci. 2024, 14(21), 9858; https://doi.org/10.3390/app14219858 - 28 Oct 2024
Abstract
This paper presents a significant advancement in information security risk assessment by introducing a flexible and comprehensive model. The research integrates established standards, expert knowledge, machine learning, and ontological modeling to create a multifaceted approach for understanding and managing information security risks. The combination of standards and expert insights forms a robust foundation, ensuring a holistic grasp of the intricate risk landscape. The use of cluster analysis, specifically the application of k-means to information security standards, extends the data-driven approach, uncovering patterns not discernible through traditional methods. The integration of machine learning algorithms in the creation of an information security risk dendrogram demonstrates effective computational techniques for enhanced risk discovery. The introduction of a heat map as a visualization tool adds innovation, facilitating an intuitive understanding of risk interconnections and prioritization for decision makers. Additionally, a thesaurus optimizes risk descriptions, ensuring comprehensiveness and relevance despite evolving terminologies in the dynamic field of information security. The development of an ontological model for structured risk classification is a significant stride forward, offering an effective means of categorizing information security risks based on ontological relationships. These collective innovations enhance the understanding and management of information security risks, paving the way for more effective approaches in the ever-evolving technological landscape. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
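The clustering-plus-dendrogram step this abstract describes can be sketched in a few lines: vectorize risk descriptions, group them with k-means, and build the hierarchical (dendrogram) structure with Ward linkage. The risk descriptions below are invented stand-ins for clauses mined from standards, and the sketch assumes scikit-learn and SciPy; it is not the authors' pipeline.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented risk descriptions standing in for clauses of security standards.
risks = [
    "unauthorized access to user accounts",
    "weak password policy and credential reuse",
    "malware infection through email attachment",
    "ransomware encrypting file servers",
    "data leakage via misconfigured cloud storage",
    "backup media lost during transport",
]

X = TfidfVectorizer().fit_transform(risks).toarray()

# k-means groups similar risk descriptions into clusters ...
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# ... while agglomerative (Ward) linkage yields the dendrogram structure,
# which scipy can also cut into a flat clustering for comparison.
Z = linkage(X, method="ward")
flat = fcluster(Z, t=3, criterion="maxclust")
```

The linkage matrix `Z` is exactly what a dendrogram plot (or a clustered heat map of risk similarities, as in the paper) is drawn from.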

17 pages, 15920 KiB  
Article
Research on Impact Prediction Model for Corn Ears by Integrating Motion Features Using Machine Learning Algorithms
by Chenlong Fan, Wenjin Wang, Tao Cui, Ying Liu and Mengmeng Qiao
Processes 2024, 12(11), 2362; https://doi.org/10.3390/pr12112362 - 28 Oct 2024
Abstract
The mechanical damage of corn kernels during harvest leads to mildew during kernel storage, seriously affecting food safety and quality. Impact force is the primary source of mechanical damage in the corn threshing process, and its accurate detection is of great significance for corn threshing with low damage. A method for detecting the impact force on corn ears was proposed in this manuscript. Based on the momentum theorem, the main factors influencing impact force (weight, falling height, and space attitude) were determined and used as experimental factors. The bench test was carried out with the impact force on the corn ear as the output variable. During the experiment, piezoelectric sensors were used to collect the impact force of corn ears under different motion states. Then, impact force detection models were constructed using four machine learning algorithms: multiple linear regression, ridge regression, random forest (RF), and support vector regression. The results showed that the RF algorithm was the most suitable for constructing prediction models of the average and maximum impact force when corn ears fall: the SD, RMSE, and r were 0.9526, 1.2685, and 0.9855 for the average impact force, and 3.8389, 3.6071, and 0.8510 for the maximum impact force, respectively. Secondly, the weight characteristic had the most significant influence on the detection of the impact force on the ear. Therefore, this method can be used as an accurate, objective, and efficient online detection method for impact force. Full article
(This article belongs to the Special Issue Modeling, Simulation, Control, and Optimization of Processes)
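The model comparison in this abstract maps directly onto a standard scikit-learn workflow: fit the same four regressor families to the bench-test factors and compare cross-validated scores. The data and force law below are invented for illustration (the real targets come from piezoelectric sensors); only the choice of the four algorithm families follows the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Synthetic bench-test factors: ear weight (g), falling height (m), and
# space attitude (deg); the force law below is invented for illustration.
n = 200
weight = rng.uniform(150, 350, n)
height = rng.uniform(0.2, 1.0, n)
attitude = rng.uniform(0, 90, n)
force = 0.04 * weight * np.sqrt(height) * (1 + 0.002 * attitude) \
        + rng.normal(0, 0.3, n)
X = np.column_stack([weight, height, attitude])

# The four algorithm families named in the abstract.
models = {
    "MLR": LinearRegression(),
    "Ridge": Ridge(alpha=1.0),
    "RF": RandomForestRegressor(n_estimators=100, random_state=0),
    "SVR": make_pipeline(StandardScaler(), SVR(C=10.0)),
}
scores = {name: cross_val_score(m, X, force, cv=5, scoring="r2").mean()
          for name, m in models.items()}
```

Scaling inside the SVR pipeline matters here because the weight feature spans a much larger numeric range than the others; the tree-based RF model is insensitive to that.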

29 pages, 7565 KiB  
Article
Leveraging Explainable Artificial Intelligence (XAI) for Expert Interpretability in Predicting Rapid Kidney Enlargement Risks in Autosomal Dominant Polycystic Kidney Disease (ADPKD)
by Latifa Dwiyanti, Hidetaka Nambo and Nur Hamid
AI 2024, 5(4), 2037-2065; https://doi.org/10.3390/ai5040100 - 28 Oct 2024
Abstract
Autosomal dominant polycystic kidney disease (ADPKD) is the predominant hereditary factor leading to end-stage renal disease (ESRD) worldwide, affecting individuals across all races with a prevalence of 1 in 400 to 1 in 1000. The disease presents significant challenges in management, particularly given the limited options for slowing cyst progression and the restriction of tolvaptan to high-risk patients due to potential liver injury. However, determining high-risk status typically requires magnetic resonance imaging (MRI) to calculate total kidney volume (TKV), a time-consuming process demanding specialized expertise. Motivated by these challenges, this study proposes alternative methods for high-risk categorization that do not rely on TKV data. Utilizing historical patient data, we aim to predict rapid kidney enlargement in ADPKD patients to support clinical decision-making. We applied seven machine learning algorithms—Random Forest, Logistic Regression, Support Vector Machine (SVM), Light Gradient Boosting Machine (LightGBM), Gradient Boosting Tree, XGBoost, and Deep Neural Network (DNN)—to data from the Polycystic Kidney Disease Outcomes Consortium (PKDOC) database. The XGBoost model, combined with the Synthetic Minority Oversampling Technique (SMOTE), yielded the best performance. We also leveraged explainable artificial intelligence (XAI) techniques, specifically Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP), to visualize and clarify the model’s predictions. Furthermore, we generated text summaries to enhance interpretability. To evaluate the effectiveness of our approach, we proposed new metrics to assess explainability and conducted a survey with 27 doctors to compare models with and without XAI techniques. The results indicated that incorporating XAI and textual summaries significantly improved expert explainability and increased confidence in the model’s ability to support treatment decisions for ADPKD patients. Full article
(This article belongs to the Special Issue Interpretable and Explainable AI Applications)
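The SMOTE-plus-boosting combination this abstract reports can be illustrated compactly. The sketch below re-implements the core SMOTE idea (interpolating between minority neighbours) in a few lines and uses scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost, on invented imbalanced tabular data; it is not the authors' model, and real use would reach for the imbalanced-learn and xgboost packages.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)

# Invented imbalanced data: few "rapid enlargement" positives.
X_neg = rng.normal(0.0, 1.0, (270, 5))
X_pos = rng.normal(1.0, 1.0, (30, 5))
X = np.vstack([X_neg, X_pos])
y = np.array([0] * 270 + [1] * 30)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

def smote(X_min, n_new, k=5, rng=rng):
    """Minimal SMOTE: each synthetic point lies between a random minority
    sample and one of its k nearest minority neighbours."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    idx = rng.integers(0, len(X_min), n_new)
    neigh = nn.kneighbors(X_min[idx], return_distance=False)[:, 1:]
    chosen = neigh[np.arange(n_new), rng.integers(0, k, n_new)]
    gap = rng.uniform(0.0, 1.0, (n_new, 1))
    return X_min[idx] + gap * (X_min[chosen] - X_min[idx])

# Oversample the training minority class to parity, then fit the booster.
X_min = X_tr[y_tr == 1]
n_new = int((y_tr == 0).sum() - (y_tr == 1).sum())
X_bal = np.vstack([X_tr, smote(X_min, n_new)])
y_bal = np.concatenate([y_tr, np.ones(n_new, dtype=int)])

clf = GradientBoostingClassifier(random_state=0).fit(X_bal, y_bal)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

Note that oversampling is applied only to the training split; resampling before the split would leak synthetic copies of test-set neighbours into training.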

14 pages, 253 KiB  
Review
Novel Approaches for the Early Detection of Glaucoma Using Artificial Intelligence
by Marco Zeppieri, Lorenzo Gardini, Carola Culiersi, Luigi Fontana, Mutali Musa, Fabiana D’Esposito, Pier Luigi Surico, Caterina Gagliano and Francesco Saverio Sorrentino
Life 2024, 14(11), 1386; https://doi.org/10.3390/life14111386 - 28 Oct 2024
Abstract
Background: If left untreated, glaucoma—the second most common cause of blindness worldwide—causes irreversible visual loss due to a gradual neurodegeneration of the retinal ganglion cells. Conventional techniques for identifying glaucoma, like optical coherence tomography (OCT) and visual field exams, are frequently laborious and dependent on subjective interpretation. Through the fast and accurate analysis of massive amounts of imaging data, artificial intelligence (AI), in particular machine learning (ML) and deep learning (DL), has emerged as a promising method to improve the early detection and management of glaucoma. Aims: The purpose of this study is to examine the current uses of AI in the early diagnosis, treatment, and detection of glaucoma while highlighting the advantages and drawbacks of different AI models and algorithms. In addition, it aims to determine how AI technologies might transform glaucoma treatment and suggest future lines of inquiry for this area of study. Methods: A thorough search of databases, including Web of Science, PubMed, and Scopus, was carried out to find pertinent papers released until August 2024. The inclusion criteria were limited to research published in English in peer-reviewed publications that used AI, ML, or DL to diagnose or treat glaucoma in human subjects. Articles were chosen and vetted according to their quality, contribution to the field, and relevancy. Results: Convolutional neural networks (CNNs) and other deep learning algorithms are among the AI models included in this paper that have been shown to have excellent sensitivity and specificity in identifying glaucomatous alterations in fundus photos, OCT scans, and visual field tests. By automating standard screening procedures, these models have demonstrated promise in distinguishing between glaucomatous and healthy eyes, forecasting the course of the disease, and possibly lessening the workload of physicians. Nonetheless, several significant obstacles remain, such as the requirement for diverse training datasets, external validation, decision-making transparency, and handling moral and legal issues. Conclusions: Artificial intelligence (AI) holds great promise for improving the diagnosis and treatment of glaucoma by facilitating prompt and precise interpretation of imaging data and assisting in clinical decision making. To guarantee wider accessibility and better patient results, future research should create strong generalizable AI models validated in various populations, address ethical and legal matters, and incorporate AI into clinical practice. Full article
(This article belongs to the Special Issue Cornea and Anterior Eye Diseases: 2nd Edition)
20 pages, 14643 KiB  
Article
Exploring IRGs as a Biomarker of Pulmonary Hypertension Using Multiple Machine Learning Algorithms
by Jiashu Yang, Siyu Chen, Ke Chen, Junyi Wu and Hui Yuan
Diagnostics 2024, 14(21), 2398; https://doi.org/10.3390/diagnostics14212398 - 28 Oct 2024
Abstract
Background: Pulmonary arterial hypertension (PAH) is a severe disease with poor prognosis and high mortality, lacking simple and sensitive diagnostic biomarkers in clinical practice. This study aims to identify novel diagnostic biomarkers for PAH using genomics research. Methods: We conducted a comprehensive analysis of a large transcriptome dataset, including PAH and inflammatory response genes (IRGs), integrated with 113 machine learning models to assess diagnostic potential. We developed a clinical diagnostic model based on hub genes, evaluating their effectiveness through calibration curves, clinical decision curves, and ROC curves. An animal model of PAH was also established to validate hub gene expression patterns. Results: Among the 113 machine learning algorithms, the Lasso + LDA model achieved the highest AUC of 0.741. Differential expression profiles of hub genes CTGF, DDR2, FGFR2, MYH10, and YAP1 were observed between the PAH and normal control groups. A diagnostic model utilizing these hub genes was developed, showing high accuracy with an AUC of 0.87. MYH10 demonstrated the most favorable diagnostic performance with an AUC of 0.8. Animal experiments confirmed the differential expression of CTGF, DDR2, FGFR2, MYH10, and YAP1 between the PAH and control groups (p < 0.05). Conclusions: We successfully established a diagnostic model for PAH using IRGs, demonstrating excellent diagnostic performance. CTGF, DDR2, FGFR2, MYH10, and YAP1 may serve as novel molecular diagnostic markers for PAH. Full article
(This article belongs to the Section Clinical Laboratory Medicine)
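A "Lasso + LDA" combination of the kind this abstract reports can be expressed as a scikit-learn pipeline: an L1-penalized regression picks a sparse gene subset, and linear discriminant analysis classifies on it, scored by ROC AUC. The expression matrix, signal strength, and alpha below are all invented for illustration; the real study draws on 113 such algorithm combinations.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Invented expression matrix: 120 samples x 200 genes, 5 informative.
X = rng.normal(0.0, 1.0, (120, 200))
y = rng.integers(0, 2, 120)
X[y == 1, :5] += 1.2          # hub-gene-like signal in the first 5 columns

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# "Lasso + LDA": L1 regression selects a sparse gene subset, LDA classifies.
model = make_pipeline(
    StandardScaler(),
    SelectFromModel(Lasso(alpha=0.05)),
    LinearDiscriminantAnalysis(),
)
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
n_selected = int(model.named_steps["selectfrommodel"].get_support().sum())
```

Because the Lasso step zeroes out most coefficients, LDA only ever sees the handful of genes that survive selection, which is what keeps the downstream model small and interpretable.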

19 pages, 2164 KiB  
Article
Enhancing IoT Security Using GA-HDLAD: A Hybrid Deep Learning Approach for Anomaly Detection
by Ibrahim Mutambik
Appl. Sci. 2024, 14(21), 9848; https://doi.org/10.3390/app14219848 - 28 Oct 2024
Abstract
The adoption and use of the Internet of Things (IoT) have increased rapidly over recent years, and cyber threats in IoT devices have also become more common. Thus, the development of a system that can effectively identify malicious attacks and reduce security threats in IoT devices has become a topic of great importance. One of the most serious threats comes from botnets, which commonly attack IoT devices by interrupting the networks required for the devices to run. There are a number of methods that can be used to improve security by identifying unknown patterns in IoT networks, including deep learning and machine learning approaches. In this study, an algorithm named the genetic algorithm with hybrid deep learning-based anomaly detection (GA-HDLAD) is developed, with the aim of improving security by identifying botnets within the IoT environment. The GA-HDLAD technique addresses the problem of high dimensionality by using a genetic algorithm during feature selection. Hybrid deep learning is used to detect botnets; the approach is a combination of recurrent neural networks (RNNs), feature extraction techniques (FETs), and attention concepts. Botnet attacks commonly involve complex patterns that the hybrid deep learning (HDL) method can detect. Moreover, the use of FETs in the model ensures that features can be effectively extracted from spatial data, while temporal dependencies are captured by RNNs. Simulated annealing (SA) is utilized to select the hyperparameters necessary for the HDL approach. In this study, the GA-HDLAD system is experimentally assessed using a benchmark botnet dataset, and the findings reveal that the system provides superior results in comparison to existing detection methods. Full article
(This article belongs to the Special Issue Advances in Internet of Things (IoT) Technologies and Cybersecurity)
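The genetic-algorithm feature-selection step this abstract describes (used in GA-HDLAD to cut dimensionality before the deep detector) can be sketched with a tiny GA: chromosomes are boolean feature masks, fitness is cross-validated accuracy, and selection/crossover/mutation evolve the masks. Everything below (data, GA settings, the logistic-regression fitness model) is invented for illustration and is far simpler than the paper's hybrid deep learning pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Synthetic traffic-style table: 40 features, only the first 6 informative.
X = rng.normal(0.0, 1.0, (300, 40))
y = (X[:, :6].sum(axis=1) + rng.normal(0.0, 1.0, 300) > 0).astype(int)

def fitness(mask):
    """Cross-validated accuracy of a cheap classifier on selected features."""
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=200)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

# Tiny GA: tournament selection, uniform crossover, bit-flip mutation.
pop = rng.random((16, 40)) < 0.5
best_mask, best_fit = None, -1.0
for _ in range(10):
    scores = np.array([fitness(m) for m in pop])
    i = int(np.argmax(scores))
    if scores[i] > best_fit:                     # keep an elitist record
        best_fit, best_mask = float(scores[i]), pop[i].copy()
    a, b = rng.integers(0, 16, (2, 16))          # tournament pairs
    parents = pop[np.where(scores[a] > scores[b], a, b)]
    partners = parents[rng.permutation(16)]      # crossover partner
    cross = rng.random((16, 40)) < 0.5
    pop = np.where(cross, parents, partners) ^ (rng.random((16, 40)) < 0.02)
```

The same loop works unchanged with any fitness function, so in a GA-HDLAD-style system the logistic-regression surrogate would be replaced by the hybrid deep model's validation score.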

18 pages, 6587 KiB  
Article
Predicting the Wear Amount of Tire Tread Using 1D-CNN
by Hyunjae Park, Junyeong Seo, Kangjun Kim and Taewung Kim
Sensors 2024, 24(21), 6901; https://doi.org/10.3390/s24216901 - 28 Oct 2024
Abstract
Since excessively worn tires pose a significant risk to vehicle safety, it is crucial to monitor tire wear regularly. This study aimed to verify the efficient tire wear prediction algorithm proposed in a previous modeling study, which minimizes the required input data, and use driving test data to validate the method. First, driving tests were conducted with tires at various wear levels to measure internal accelerations. The acceleration signals were then screened using empirical functions to exclude atypical data before proceeding with the machine learning process. Finally, a tire wear prediction algorithm based on a 1D-CNN with bottleneck features was developed and evaluated. The developed algorithm showed an RMSE of 5.2% (or 0.42 mm) using only the acceleration signals. When tire pressure and vertical load were included, the prediction error was reduced by 11.5%, resulting in an RMSE of 4.6%. These findings suggest that the 1D-CNN approach is an efficient method for predicting tire wear states, requiring minimal input data. Additionally, it supports the potential usefulness of the intelligent tire technology framework proposed in the modeling study. Full article
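The core operation a 1D-CNN applies to an acceleration trace is a strided one-dimensional convolution followed by a nonlinearity and pooling. The NumPy sketch below shows just that single layer on an invented signal, with a random filter in place of a learned one; a real model like the paper's would stack several such layers (e.g. in PyTorch) and add the bottleneck features.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented tire inner-liner acceleration trace (one wheel revolution).
signal = rng.normal(0.0, 1.0, 256)

def conv1d(x, kernel, stride=1):
    """Valid 1-D convolution, the core operation of a 1D-CNN layer."""
    k = len(kernel)
    out_len = (len(x) - k) // stride + 1
    return np.array([np.dot(x[i * stride:i * stride + k], kernel)
                     for i in range(out_len)])

kernel = rng.normal(0.0, 1.0, 9)                     # one filter (random here)
feature = np.maximum(conv1d(signal, kernel), 0.0)    # ReLU activation
pooled = feature[: len(feature) // 2 * 2].reshape(-1, 2).max(axis=1)  # max-pool
```

With a 256-sample input and a width-9 filter, the valid convolution yields 248 activations, and width-2 max pooling halves that to 124, which is the kind of progressive compression that eventually feeds the regression head.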

14 pages, 1729 KiB  
Article
Jade Identification Using Ultraviolet Spectroscopy Based on the SpectraViT Model Incorporating CNN and Transformer
by Xiongjun Li, Jilin Cai and Jin Feng
Appl. Sci. 2024, 14(21), 9839; https://doi.org/10.3390/app14219839 - 28 Oct 2024
Abstract
Jade is a highly valuable and diverse gemstone, and its spectral characteristics can be used to identify its quality and type. We propose a jade ultraviolet (UV) spectrum recognition model based on deep learning, called SpectraViT, aiming to improve the accuracy and efficiency of jade identification. The algorithm combines residual modules to extract local features with transformers to capture global dependencies of jade’s UV spectrum, and finally classifies the samples using fully connected layers. Experiments were conducted on a UV spectrum dataset containing four types of jade (natural diamond, cultivated diamond (CVD/HPHT), and moissanite). The results show that the algorithm can effectively identify different types of jade, achieving an accuracy of 99.24%, surpassing traditional algorithms based on Support Vector Machines (SVM) and Partial Least Squares Discriminant Analysis (PLS_DA), as well as other deep learning methods. This paper also provides a reference solution for other spectral analysis problems. Full article

18 pages, 15290 KiB  
Article
Machine Learning-Based Local Knowledge Approach to Mapping Urban Slums in Bandung City, Indonesia
by Galdita Aruba Chulafak, Muhammad Rokhis Khomarudin, Orbita Roswintiarti, Hamid Mehmood, Gatot Nugroho, Udhi Catur Nugroho, Mohammad Ardha, Kusumaning Ayu Dyah Sukowati, I Kadek Yoga Dwi Putra and Silvan Anggia Bayu Setia Permana
Urban Sci. 2024, 8(4), 189; https://doi.org/10.3390/urbansci8040189 - 28 Oct 2024
Abstract
Rapid urban population growth in Bandung City has led to the development of slums due to inadequate housing facilities and urban planning. However, it remains unclear how these slums are distributed and evolve spatially and temporally. Therefore, it is necessary to map their distribution and trends effectively. This study aimed to classify slum areas in Bandung City using a machine learning-based local knowledge approach; this classification exercise contributes towards Sustainable Development Goal 11 related to sustainable cities and communities. The methods included settlement and commercial/industrial classification from 2021 SPOT-6 satellite data by the Random Forest classifier. A knowledge-based classifier was used to derive slum and non-slum settlements from the settlement and commercial/industrial classification, as well as railway, river, and road buffering. Our findings indicate that these methods achieved an overall accuracy of 82%. The producer’s accuracy for slum areas was 70%, while the associated user’s accuracy was 92%. Meanwhile, the Kappa coefficient was 0.63. These findings suggest that local knowledge can be a potent addition to machine learning algorithms. Full article

19 pages, 2076 KiB  
Article
Discovery of Plasma Lipids as Potential Biomarkers Distinguishing Breast Cancer Patients from Healthy Controls
by Desmond Li, Kerry Heffernan, Forrest C. Koch, David A. Peake, Dana Pascovici, Mark David, Cheka Kehelpannala, G. Bruce Mann, David Speakman, John Hurrell, Simon Preston, Fatemeh Vafaee and Amani Batarseh
Int. J. Mol. Sci. 2024, 25(21), 11559; https://doi.org/10.3390/ijms252111559 - 28 Oct 2024
Abstract
The development of a sensitive and specific blood test for the early detection of breast cancer is crucial to improve screening and patient outcomes. Existing methods, such as mammography, have limitations, necessitating the exploration of alternative approaches, including circulating factors. Using 598 prospectively collected blood samples, a multivariate plasma-derived lipid biomarker signature was developed that can distinguish healthy control individuals from those with breast cancer. Liquid chromatography with high-resolution and tandem mass spectrometry (LC-MS/MS) was employed to identify lipids for both extracellular vesicle-derived and plasma-derived signatures. For each dataset, we identified a signature of 20 lipids using a robust, statistically rigorous feature selection algorithm based on random forest feature importance applied to cross-validated training samples. Using an ensemble of machine learning models, the plasma 20-lipid signature generated an area under the curve (AUC) of 0.95, sensitivity of 0.91, and specificity of 0.79. The results from this study indicate that lipids extracted from plasma can be used as target analytes in the development of assays to detect the presence of early-stage breast cancer. Full article
(This article belongs to the Special Issue Precision Medicine in Oncology 2.0)
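The signature-building step this abstract describes (ranking lipids by random-forest importance and keeping a 20-feature panel) can be sketched as below. The lipid matrix and class signal are invented; note that the study applies the importance ranking within cross-validated training samples to avoid selection leakage, a nesting the short sketch omits for brevity.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Invented lipidomics matrix: 598 samples x 400 lipid features, with a
# synthetic signal in the first ten columns for the cancer class.
X = rng.normal(0.0, 1.0, (598, 400))
y = rng.integers(0, 2, 598)
X[y == 1, :10] += 0.8

# Rank lipids by random-forest importance ...
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top20 = np.argsort(rf.feature_importances_)[::-1][:20]

# ... and score a classifier restricted to the 20-lipid signature.
auc = cross_val_score(
    RandomForestClassifier(n_estimators=200, random_state=0),
    X[:, top20], y, cv=5, scoring="roc_auc",
).mean()
```

In a production version the ranking, the panel size, and the final ensemble would all be tuned inside the cross-validation folds rather than on the full dataset.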

34 pages, 1254 KiB  
Article
Hyperspectral Imaging Aiding Artificial Intelligence: A Reliable Approach for Food Qualification and Safety
by Mehrad Nikzadfar, Mahdi Rashvand, Hongwei Zhang, Alex Shenfield, Francesco Genovese, Giuseppe Altieri, Attilio Matera, Iolanda Tornese, Sabina Laveglia, Giuliana Paterna, Carmela Lovallo, Orkhan Mammadov, Burcu Aykanat and Giovanni Carlo Di Renzo
Appl. Sci. 2024, 14(21), 9821; https://doi.org/10.3390/app14219821 - 27 Oct 2024
Abstract
Hyperspectral imaging (HSI) is a non-destructive quality assessment method that provides both spatial and spectral information. In food quality and safety, HSI can detect contaminants, adulterants, and quality attributes, such as moisture, ripeness, and microbial spoilage, by analyzing the spectral signatures of food components across a wide range of wavelengths with speed and accuracy. However, analyzing HSI data can be complicated and time consuming and requires special expertise. Artificial intelligence (AI) has shown immense promise in HSI-based assessment of food quality because it is powerful at coping with irrelevant information, extracting key features, and building calibration models. This review surveys various machine learning (ML) approaches applied to HSI for the quality and safety control of foods. It covers the basic concepts of HSI, advanced preprocessing methods, strategies for wavelength selection, and machine learning methods. Applying AI to HSI increases the speed with which food safety and quality can be inspected, by automating contaminant detection, classification, and the prediction of food quality attributes, enabling real-time decisions and reducing human error in food inspection. This paper outlines the benefits, challenges, and potential improvements of these approaches while assessing the validity and practical usability of HSI technologies in developing reliable calibration models for food quality and safety monitoring. The review concludes that HSI integrated with state-of-the-art AI techniques has good potential to significantly improve the assessment of food quality and safety, and that the various ML algorithms each have strengths and contexts in which they are best applied. Full article
(This article belongs to the Section Food Science and Technology)

18 pages, 8730 KiB  
Article
A Novel Non-Contact Multi-User Online Indoor Positioning Strategy Based on Channel State Information
by Yixin Zhuang, Yue Tian and Wenda Li
Sensors 2024, 24(21), 6896; https://doi.org/10.3390/s24216896 - 27 Oct 2024
Abstract
The IEEE 802.11bf-based wireless fidelity (WiFi) indoor positioning system has gained significant attention recently. It is important to recognize that multi-user online positioning occurs in real wireless environments. This paper proposes an indoor positioning sensing strategy that includes an optimized preprocessing process and a new machine learning (ML) method called NKCK. The NKCK method can be broken down into three components: neighborhood component analysis (NCA) for dimensionality reduction, K-means clustering, and K-nearest neighbor (KNN) classification with cross-validation (CV). The KNN algorithm is particularly suitable for our dataset since it effectively classifies data based on proximity, relying on the spatial relationships between points. Experimental results indicate that the NKCK method outperforms traditional methods, achieving reductions in error rates of 82.4% compared to naive Bayes (NB), 85.0% compared to random forest (RF), 72.1% compared to support vector machine (SVM), 64.7% compared to multilayer perceptron (MLP), 50.0% compared to density-based spatial clustering of applications with noise (DBSCAN)-based methods, 42.0% compared to linear discriminant analysis (LDA)-based channel state information (CSI) amplitude fingerprinting, and 33.0% compared to principal component analysis (PCA)-based approaches. Due to the sensitivity of CSI, our multi-user online positioning system faces challenges in detecting dynamic human activities, such as human tracking, which requires further investigation in the future. Full article
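The NKCK chain named in this abstract maps onto standard scikit-learn components: neighborhood component analysis (NCA) for supervised dimensionality reduction, K-nearest neighbors for classification under cross-validation, and K-means over the embedding. The CSI fingerprints below are invented, and the exact way K-means interacts with the other two stages in the paper is assumed here (used to group fingerprints in the reduced space); this is a sketch, not the authors' system.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)

# Invented CSI amplitude fingerprints: 4 positions x 60 samples x 30 subcarriers.
centers = rng.normal(0.0, 2.0, (4, 30))
X = np.vstack([c + rng.normal(0.0, 0.6, (60, 30)) for c in centers])
y = np.repeat(np.arange(4), 60)

# NCA learns a projection tuned for nearest-neighbour classification,
# and KNN + cross-validation provide the "K...CV" part of NKCK.
pipe = make_pipeline(
    NeighborhoodComponentsAnalysis(n_components=5, random_state=0),
    KNeighborsClassifier(n_neighbors=5),
)
acc = cross_val_score(pipe, X, y, cv=5).mean()

# K-means on the NCA embedding recovers the position clusters.
emb = NeighborhoodComponentsAnalysis(n_components=5, random_state=0).fit_transform(X, y)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(emb)
```

KNN suits this task for the reason the abstract gives: fingerprints from the same position sit close together, so classification by proximity is natural once NCA has shaped the metric.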
16 pages, 3470 KiB  
Article
YOLOv8-Based Estimation of Estrus in Sows Through Reproductive Organ Swelling Analysis Using a Single Camera
by Iyad Almadani, Mohammed Abuhussein and Aaron L. Robinson
Digital 2024, 4(4), 898-913; https://doi.org/10.3390/digital4040044 - 27 Oct 2024
Abstract
Accurate and efficient estrus detection in sows is crucial in modern agricultural practices to ensure optimal reproductive health and successful breeding outcomes. A non-contact method using computer vision to detect a change in a sow’s vulva size holds great promise for automating and enhancing this critical process. However, achieving precise and reliable results depends heavily on maintaining a consistent camera distance during image capture. Variations in camera distance can lead to erroneous estrus estimations, potentially resulting in missed breeding opportunities or false positives. To address this challenge, we propose a robust six-step methodology, accompanied by three stages of evaluation. First, we carefully annotated masks around the vulva to ensure an accurate pixel perimeter calculation of its shape. Next, we meticulously identified keypoints on the sow’s vulva, which enabled precise tracking and analysis of its features. We then harnessed the power of machine learning to train our model using annotated images, which facilitated keypoint detection and segmentation with the state-of-the-art YOLOv8 algorithm. By identifying the keypoints, we performed precise calculations of the Euclidean distances: first, between each labium (horizontal distance), and second, between the clitoris and the perineum (vertical distance). Additionally, by segmenting the vulva’s size, we gained valuable insights into its shape, which helped with performing precise perimeter measurements. Equally important was our effort to calibrate the camera using monocular depth estimation. This calibration helped establish a functional relationship between the measurements on the image (such as the distances between the labia and from the clitoris to the perineum, and the vulva perimeter) and the depth distance to the camera, which enabled accurate adjustments and calibration for our analysis. 
Lastly, we present a classification method for distinguishing between estrus and non-estrus states in subjects based on the pixel width, pixel length, and perimeter measurements. The method calculated the Euclidean distances between a new data point and reference points from two datasets: “estrus data” and “not estrus data”. Using custom distance functions, we computed the distances for each measurement dimension and aggregated them to determine the overall similarity. The classification process involved identifying the three nearest neighbors of the datasets and employing a majority voting mechanism to assign a label. A new data point was classified as “estrus” if the majority of the nearest neighbors were labeled as estrus; otherwise, it was classified as “non-estrus”. This method provided a robust approach for automated classification, which aided in more accurate and efficient detection of the estrus states. To validate our approach, we propose three evaluation stages. In the first stage, we calculated the Mean Squared Error (MSE) between the ground truth keypoints of the labia distance and the distance between the predicted keypoints, and we performed the same calculation for the distance between the clitoris and perineum. Then, we provided a quantitative analysis and performance comparison, including a comparison between our previous U-Net model and our new YOLOv8 segmentation model. This comparison focused on each model’s performance in terms of accuracy and speed, which highlighted the advantages of our new approach. Lastly, we evaluated the estrus–not-estrus classification model by defining the confusion matrix. By using this comprehensive approach, we significantly enhanced the accuracy of estrus detection in sows while effectively mitigating human errors and resource wastage. 
The automation and optimization of this critical process hold the potential to revolutionize estrus detection in agriculture, which will contribute to improved reproductive health management and elevate breeding outcomes to new heights. Through extensive evaluation and experimentation, our research aimed to demonstrate the transformative capabilities of computer vision techniques, paving the way for more advanced and efficient practices in the agricultural domain. Full article
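The classification step described above (Euclidean distances from a new measurement to reference "estrus" and "not estrus" points, then majority voting among the three nearest neighbors) can be sketched as follows. The reference measurements are invented for illustration; the paper's actual values and distance functions are not reproduced here.

```python
import math
from collections import Counter

def euclidean(p, q):
    """Distance over the (pixel width, pixel length, perimeter) dimensions."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def classify(point, estrus_data, not_estrus_data, k=3):
    """Label a new point by majority vote among its k nearest reference points."""
    labeled = [(euclidean(point, ref), "estrus") for ref in estrus_data]
    labeled += [(euclidean(point, ref), "not estrus") for ref in not_estrus_data]
    nearest = sorted(labeled)[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical reference measurements: (pixel width, pixel length, perimeter).
estrus_refs = [(52, 60, 190), (55, 63, 200), (50, 58, 185)]
not_estrus_refs = [(35, 42, 130), (33, 40, 125), (36, 44, 135)]

print(classify((53, 61, 192), estrus_refs, not_estrus_refs))  # -> estrus
```

With the camera-distance calibration described in the abstract normalizing the pixel measurements, this nearest-neighbor vote gives a simple, interpretable decision rule for the estrus/non-estrus boundary.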