

Search Results (4,014)

Search Parameters:
Keywords = YOLO-V3

21 pages, 4811 KiB  
Article
YOLO-AMM: A Real-Time Classroom Behavior Detection Algorithm Based on Multi-Dimensional Feature Optimization
by Yi Cao, Qian Cao, Chengshan Qian and Deji Chen
Sensors 2025, 25(4), 1142; https://doi.org/10.3390/s25041142 - 13 Feb 2025
Abstract
Classroom behavior detection is a key task in constructing intelligent educational environments. However, existing models are still deficient in detail-feature capture, multi-layer feature correlation, and multi-scale target adaptability, making it challenging to achieve high-precision real-time detection in complex scenes. This paper proposes an improved classroom behavior detection algorithm, YOLO-AMM, to solve these problems. Firstly, we constructed the Adaptive Efficient Feature Fusion (AEFF) module to enhance the fusion of semantic information between different features and improve the model’s ability to capture detailed features. Then, we designed a Multi-dimensional Feature Flow Network (MFFN), which fuses multi-dimensional features and enhances the correlation information between features through a multi-scale feature aggregation module and a contextual information diffusion mechanism. Finally, we proposed a Multi-Scale Perception and Fusion Detection Head (MSPF-Head), which significantly improves the head's adaptability to targets of different scales by introducing multi-scale feature perception, feature interaction, and fusion mechanisms. The experimental results showed that, compared with the YOLOv8n model, YOLO-AMM improved mAP0.5 and mAP0.5-0.95 by 3.1% and 4.0%, respectively, significantly improving detection accuracy. Meanwhile, YOLO-AMM increased the detection speed by 12.9 frames per second to 169.1 frames per second, which meets the requirement for real-time detection of classroom behavior.
(This article belongs to the Special Issue Sensor-Based Behavioral Biometrics)
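The mAP0.5 and mAP0.5-0.95 figures quoted in abstracts like this one are built on the Intersection over Union (IoU) overlap between predicted and ground-truth boxes. A minimal sketch of that computation, using toy boxes in an assumed `(x1, y1, x2, y2)` corner format:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A prediction counts as a true positive for mAP0.5 when IoU >= 0.5;
# mAP0.5-0.95 averages AP over thresholds 0.5, 0.55, ..., 0.95.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # half-overlapping boxes, IoU ~ 0.333
```

Half-overlapping equal boxes score 1/3, not 1/2, because the union grows as the overlap shrinks; this is why the stricter mAP0.5-0.95 metric is consistently lower than mAP0.5.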

16 pages, 2980 KiB  
Article
RF-YOLOv7: A Model for the Detection of Poor-Quality Grapes in Natural Environments
by Changyong Li, Shunchun Zhang and Zhijie Ma
Agriculture 2025, 15(4), 387; https://doi.org/10.3390/agriculture15040387 - 12 Feb 2025
Abstract
This study addresses the challenges of detecting inferior fruits in table grapes in natural environments, focusing on subtle appearance differences, occlusions, and fruit overlaps. We propose an enhanced green grape fruit disease detection model named RF-YOLOv7. The model is trained on a dataset comprising images of small fruits, sunburn, excess grapes, fruit fractures, and poor-quality grape bunches. RF-YOLOv7 builds upon the YOLOv7 architecture by integrating four Contextual Transformer (CoT) modules to improve target-detection accuracy, employing the Wise-IoU (WIoU) loss function to enhance generalization and overall performance, and introducing the Bi-Former attention mechanism for dynamic query-aware sparsity. The experimental results demonstrate that RF-YOLOv7 achieves a detection accuracy of 83.5%, a recall rate of 76.4%, a mean average precision (mAP) of 80.1%, and a detection time of 58.8 ms. Compared to the original YOLOv7, RF-YOLOv7 exhibits a 3.5% increase in mAP, with only an 8.3 ms increase in detection time. This study lays a solid foundation for the development of automatic detection equipment for intelligent grape pruning.
(This article belongs to the Section Digital Agriculture)
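The abstract reports precision (83.5%) and recall (76.4%) separately; their harmonic mean, the F1 score, is a common single-number summary of the trade-off. The F1 value below is computed here for illustration and is not reported in the abstract:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Using the figures quoted above (P = 83.5%, R = 76.4%).
print(round(f1(0.835, 0.764), 3))  # ~ 0.798
```

The harmonic mean punishes imbalance: a model with 95% precision but 50% recall scores a lower F1 than one with 75% on both.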

26 pages, 13643 KiB  
Article
An Approach to Multiclass Industrial Heat Source Detection Using Optical Remote Sensing Images
by Yi Zeng, Ruilin Liao, Caihong Ma, Dacheng Wang and Yongze Lv
Energies 2025, 18(4), 865; https://doi.org/10.3390/en18040865 - 12 Feb 2025
Abstract
Industrial heat sources (IHSs) are major contributors to energy consumption and environmental pollution, making their accurate detection crucial for supporting industrial restructuring and emission reduction strategies. However, existing models either focus on single-class detection under complex backgrounds or handle multiclass tasks for simple targets, leaving a gap in effective multiclass detection for complex scenarios. To address this, we propose a novel multiclass IHS detection model based on the YOLOv8-FC framework, underpinned by a multiclass IHS training dataset constructed, for the first time, from optical remote sensing images and point-of-interest (POI) data. This dataset incorporates five categories: cement plants, coke plants, coal mining areas, oil and gas refineries, and steel plants. The proposed YOLOv8-FC model integrates the FasterNet backbone and a Coordinate Attention (CA) module, significantly enhancing feature extraction, detection precision, and operational speed. Experimental results demonstrate the model’s robust performance, achieving a precision rate of 92.3% and a recall rate of 95.6% in detecting IHS objects across diverse backgrounds. When applied in the Beijing–Tianjin–Hebei (BTH) region, YOLOv8-FC successfully identified 429 IHS objects, with detailed category-specific results providing valuable insights into industrial distribution. These results show that the proposed multiclass IHS detection model with the novel YOLOv8-FC approach can effectively and simultaneously detect IHS categories under complex backgrounds. The IHS datasets derived from the BTH region can support regional industrial restructuring and optimization schemes.
(This article belongs to the Section J: Thermal Management)

21 pages, 7597 KiB  
Article
A Novel Neural Network Model Based on Real Mountain Road Data for Driver Fatigue Detection
by Dabing Peng, Junfeng Cai, Lu Zheng, Minghong Li, Ling Nie and Zuojin Li
Biomimetics 2025, 10(2), 104; https://doi.org/10.3390/biomimetics10020104 - 12 Feb 2025
Abstract
Mountainous roads are severely affected by environmental factors such as insufficient lighting and shadows from tree branches, which complicate the detection of drivers’ facial features and the determination of fatigue states. An improved method for recognizing driver fatigue states on mountainous roads using the YOLOv5 neural network is proposed. Initially, modules from Deformable Convolutional Networks (DCNs) are integrated into the feature extraction stage of the YOLOv5 framework to improve the model’s flexibility in recognizing facial characteristics and handling postural changes. Subsequently, a Triplet Attention (TA) mechanism is embedded within the YOLOv5 network to bolster image noise suppression and improve the network’s robustness in recognition. Finally, the Wing loss function is introduced into the YOLOv5 model to heighten the sensitivity to micro-features and enhance the network’s capability to capture details. Experimental results demonstrate that the modified YOLOv5 neural network achieves an average accuracy rate of 85% in recognizing driver fatigue states.
(This article belongs to the Special Issue Bio-Inspired Robotics and Applications)
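The Wing loss mentioned above originates in facial-landmark regression: it behaves logarithmically for small residuals (amplifying their gradients) and linearly for large ones. A sketch using the default parameters from the original Wing-loss formulation (w = 10, eps = 2, which are not values taken from this abstract):

```python
import math

def wing_loss(x, w=10.0, eps=2.0):
    """Wing loss for a single regression residual x.

    Log-shaped near zero (emphasizes small and medium errors), linear for
    large errors. w and eps are the defaults from the original Wing-loss
    paper, not values reported in the abstract above.
    """
    c = w - w * math.log(1.0 + w / eps)  # constant making the pieces meet at |x| = w
    ax = abs(x)
    if ax < w:
        return w * math.log(1.0 + ax / eps)
    return ax - c

print(wing_loss(0.0))  # 0.0 for a perfect prediction
```

The constant `c` is chosen so the two pieces join continuously at |x| = w, avoiding a gradient jump at the transition.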

22 pages, 3331 KiB  
Article
FPGA Accelerated Deep Learning for Industrial and Engineering Applications: Optimal Design Under Resource Constraints
by Yanyi Liu, Hang Du, Yin Wu and Tianli Mo
Electronics 2025, 14(4), 703; https://doi.org/10.3390/electronics14040703 - 12 Feb 2025
Abstract
In response to the need for deploying the YOLOv4-Tiny model on resource-constrained Field-Programmable Gate Array (FPGA) platforms for rapid inference, this study proposes a general optimization acceleration strategy and method aimed at achieving fast inference for object detection networks. This approach centers on the synergistic effect of several key strategies: a refined resource management strategy that dynamically adjusts FPGA hardware resource allocation based on the network architecture; a dynamic dual-buffering strategy that maximizes the parallelism of data computation and transmission; an interface access latency pre-configuration strategy that effectively improves data throughput; and quantization operations for dynamic bit width tuning of model parameters and cached variables. Experimental results on the ZYNQ7020 platform demonstrate that this accelerator operates at a frequency of 200 MHz, achieving an average computing performance of 36.97 Giga Operations Per Second (GOPS) with an energy efficiency of 8.82 Giga Operations Per Second per Watt (GOPS/W). Testing with a metal surface defect dataset maintains an accuracy of approximately 90% per image, while reducing the inference delay per frame to 185 ms, representing a 52.2% improvement in inference speed. Compared to other FPGA accelerator designs, the accelerator design strategies and methods proposed in this study showcase significant enhancements in average computing performance, energy efficiency, and inference latency.
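The throughput and efficiency figures above imply a power budget, and the latency improvement implies a baseline delay. A back-of-envelope check, reading the 52.2% figure as a reduction in per-frame delay (one possible interpretation; the derived values below are not stated in the abstract):

```python
# Derived, not reported: implied power draw and pre-optimization latency.
gops = 36.97          # average computing performance, GOPS
gops_per_watt = 8.82  # energy efficiency, GOPS/W

power_w = gops / gops_per_watt                # implied power draw, ~ 4.2 W
latency_ms = 185.0                            # optimized per-frame delay
improvement = 0.522                           # stated 52.2% improvement
baseline_ms = latency_ms / (1 - improvement)  # implied baseline delay, ~ 387 ms

print(round(power_w, 2), round(baseline_ms, 1))
```

If the 52.2% instead refers to a speed (throughput) increase, the implied baseline would be 185 x 1.522, roughly 282 ms; the abstract's wording supports either reading.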

33 pages, 6997 KiB  
Article
CFR-YOLO: A Novel Cow Face Detection Network Based on YOLOv7 Improvement
by Guohong Gao, Yuxin Ma, Jianping Wang, Zhiyu Li, Yan Wang and Haofan Bai
Sensors 2025, 25(4), 1084; https://doi.org/10.3390/s25041084 - 11 Feb 2025
Abstract
With the rapid development of machine learning and deep learning technology, cow face detection technology has achieved remarkable results. Traditional contact cattle identification methods are costly; are easy to lose and tamper with; and can lead to a series of security problems, such as untimely disease prevention and control, incorrect traceability of cattle products, and fraudulent insurance claims. To solve these problems, this study explores the application of cattle face detection technology in individual cattle detection to improve detection accuracy, an approach that is particularly important in smart animal husbandry and animal behavior analysis. In this paper, we propose a novel cow face detection network based on YOLOv7 improvement, named CFR-YOLO. First, a method for extracting the features of a cow’s face (including the nose, eye corners, and mouth corners) is constructed. Then, we calculate the frame center of gravity and frame size from these feature points to design the CFR-YOLO cow face detection network model. To optimize the performance of the model, the FReLU activation function is used instead of the original SiLU activation function, and the CBS module is replaced by the CBF module. The RFB module is introduced in the backbone network, and the CBAM convolutional attention module is introduced in the head layer. The performance of CFR-YOLO is compared with other mainstream deep learning models (including YOLOv7, YOLOv5, YOLOv4, and SSD) on a self-built cow face dataset. Experiments indicate that the CFR-YOLO model achieves a precision of 98.46%, a recall of 97.21%, and a mean average precision (mAP) of 96.27%, proving its excellent performance in the field of cow face detection. In addition, comparative analyses with the other four methods show that CFR-YOLO exhibits faster convergence while ensuring the same detection accuracy, and higher detection accuracy at the same convergence speed. These results will help to further develop cattle identification techniques.
(This article belongs to the Section Smart Agriculture)

18 pages, 5745 KiB  
Article
Automated Disassembly of Waste Printed Circuit Boards: The Role of Edge Computing and IoT
by Muhammad Mohsin, Stefano Rovetta, Francesco Masulli and Alberto Cabri
Computers 2025, 14(2), 62; https://doi.org/10.3390/computers14020062 - 11 Feb 2025
Abstract
The ever-growing volume of global electronic waste (e-waste) poses significant environmental and health challenges. Printed circuit boards (PCBs), which form the core of most electronic devices, contain valuable metals as well as hazardous materials. The efficient disassembly and recycling of e-waste is critical for both economic and environmental sustainability. Traditional manual disassembly methods are time-consuming, labor-intensive, and often hazardous. The integration of edge computing and the Internet of Things (IoT) provides a novel approach to automating the disassembly process, potentially transforming the way e-waste is managed. Automated disassembly of waste PCBs (WPCBs) involves the use of advanced technologies, specifically edge computing and the IoT, to streamline the recycling process. This strategy aims to improve the efficiency and sustainability of e-waste management by leveraging real-time data analytics and intelligent decision-making at the edge of the network. This paper explores the application of edge computing and the IoT in the automated disassembly of WPCBs, discussing the technological framework, benefits, challenges, and future prospects. The experimental results show that the YOLOv10 model achieves 99.9% average precision (AP), enabling accurate real-time detection of electronic components, which greatly facilitates the automated disassembly process.
(This article belongs to the Special Issue Intelligent Edge: When AI Meets Edge Computing)

17 pages, 3243 KiB  
Article
An Improved YOLOv5s-Based Algorithm for Unsafe Behavior Detection of Construction Workers in Construction Scenarios
by Yongqiang Liu, Pengxiang Wang and Haomin Li
Appl. Sci. 2025, 15(4), 1853; https://doi.org/10.3390/app15041853 - 11 Feb 2025
Abstract
Currently, the identification of unsafe behaviors among construction workers predominantly relies on manual methods, which are time-consuming, labor-intensive, and inefficient. To enhance identification accuracy and ensure real-time performance, this paper proposes an enhanced YOLOv5s framework with three strategic improvements: (1) adoption of the Focal-EIoU loss function to resolve sample imbalance and localization inaccuracies in complex scenarios; (2) integration of the Coordinate Attention (CA) mechanism, which enhances spatial perception through channel-direction feature encoding, outperforming conventional SE blocks in positional sensitivity; and (3) development of a dedicated small-target detection layer to capture critical fine-grained features. Based on the improved model, a method for identifying unsafe behaviors of construction workers is proposed. Validated through a sluice renovation project in Jiangsu Province, the optimized model demonstrates a 3.6% higher recall (reducing missed detections) and a 2.2% mAP improvement over the baseline, while maintaining a 42 FPS processing speed. The model effectively identifies unsafe behaviors at water conservancy construction sites, accurately detecting relevant unsafe actions while meeting real-time performance requirements.

19 pages, 6474 KiB  
Article
Improved Lightweight YOLOv8 Model for Rice Disease Detection in Multi-Scale Scenarios
by Jinfeng Wang, Siyuan Ma, Zhentao Wang, Xinhua Ma, Chunhe Yang, Guoqing Chen and Yijia Wang
Agronomy 2025, 15(2), 445; https://doi.org/10.3390/agronomy15020445 - 11 Feb 2025
Abstract
In response to the challenges of detecting rice pests and diseases at different scales and the difficulties of deploying and running models on embedded devices with limited computational resources, this study proposes a multi-scale rice pest and disease recognition model (RGC-YOLO). Based on the YOLOv8n network, which includes an SPPF layer, the model introduces a structural reparameterization module (RepGhost) to achieve implicit feature reuse through reparameterization. GhostConv layers replace some standard convolutions, reducing the model’s computational cost and improving inference speed. A Hybrid Attention Module (CBAM) is incorporated into the backbone network to enhance the model’s ability to extract important features. The RGC-YOLO model is evaluated for accuracy and inference time on a multi-scale rice pest and disease dataset, including bacterial blight, rice blast, brown spot, and rice planthopper. Experimental results show that RGC-YOLO achieves a precision (P) of 86.2%, a recall (R) of 90.8%, and a mean average precision at Intersection over Union 0.5 (mAP50) of 93.2%. In terms of model size, the parameters are reduced by 33.2%, and GFLOPs decrease by 29.27% compared to the base YOLOv8n model. Finally, the RGC-YOLO model is deployed on an embedded Jetson Nano device, where the inference time per image is reduced by 21.3% compared to the base YOLOv8n model, reaching 170 milliseconds. This study develops a multi-scale rice pest and disease recognition model that is successfully deployed on embedded field devices, achieving high-accuracy real-time monitoring and providing a valuable reference for intelligent equipment in unmanned farms.
(This article belongs to the Section Pest and Disease Management)

24 pages, 13033 KiB  
Article
Detection of Parabolic Antennas in Satellite Inverse Synthetic Aperture Radar Images Using Component Prior and Improved-YOLOv8 Network in Terahertz Regime
by Liuxiao Yang, Hongqiang Wang, Yang Zeng, Wei Liu, Ruijun Wang and Bin Deng
Remote Sens. 2025, 17(4), 604; https://doi.org/10.3390/rs17040604 - 10 Feb 2025
Abstract
Inverse Synthetic Aperture Radar (ISAR) images of space targets and their key components are very important. However, ISAR imaging suffers from numerous drawbacks, including a low Signal-to-Noise Ratio (SNR), blurred edges, significant variations in scattering intensity, and limited data availability, all of which constrain recognition capabilities. The terahertz (THz) regime has shown excellent capacity for space detection, revealing the details of target structures. However, in ISAR images, as the observation aperture moves, the imaging features of extended structures (ESs) undergo significant changes, posing challenges to subsequent recognition performance. In this paper, a parabolic antenna is taken as the research object. An innovative approach for identifying this component is proposed by effectively exploiting the Component Prior and Imaging Characteristics (CPICs). To tackle the challenges associated with component identification in satellite ISAR imagery, this study employs the Improved-YOLOv8 model, which combines the YOLOv8 algorithm, an adaptive detection head known as the Dynamic head (Dyhead) that utilizes an attention mechanism, and a regression box loss function called Wise Intersection over Union (WIoU), which addresses the issue of varying sample difficulty. After being trained on the simulated dataset, the model demonstrated a considerable enhancement in detection accuracy over the five base models, reaching an mAP50 of 0.935 and an mAP50-95 of 0.520. Compared with YOLOv8n, it improved by 0.192 and 0.076 in mAP50 and mAP50-95, respectively. Ultimately, the effectiveness of the suggested method is confirmed through comprehensive simulations and anechoic chamber tests.
(This article belongs to the Special Issue Advanced Spaceborne SAR Processing Techniques for Target Detection)

19 pages, 11003 KiB  
Article
FDD-YOLO: A Novel Detection Model for Detecting Surface Defects in Wood
by Bo Wang, Rijun Wang, Yesheng Chen, Chunhui Yang, Xianglong Teng and Peng Sun
Forests 2025, 16(2), 308; https://doi.org/10.3390/f16020308 - 10 Feb 2025
Abstract
Wood surface defect detection is a critical step in wood processing and manufacturing. To address the performance degradation caused by small targets and multi-scale features in wood surface defect detection, a novel deep learning model specifically designed for this task, FDD-YOLO, is proposed in this study. In the feature extraction stage, the C2f module and the funnel attention (FA) mechanism are integrated into the design of the C2f-FA module to enhance the model’s ability to extract features of wood surface defects of various sizes. Additionally, the Dual Spatial Pyramid Pooling-Fast (DSPPF) module is developed, and the Context Self-attention Module (CSAM) is introduced to address the limitations of traditional max pooling methods, which often overlook global contextual information when extracting local features, thereby improving the detection of small-scale wood defects. In the feature fusion stage, a Dual Cross-scale Weighted Feature-fusion (DCWF) module is proposed to fuse shallow, deep, and cross-scale features through a weighted summation approach, effectively addressing the challenge of scale variation in wood surface defects. Experimental results demonstrate that the proposed FDD-YOLO model significantly improves detection performance, increasing the mAP of the baseline model YOLOv8 from 78% to 82.3%, a substantial enhancement of 4.3 percentage points. Furthermore, FDD-YOLO outperforms other mainstream defect detection models in terms of detection accuracy. The proposed model demonstrates significant potential for industrial applications by improving detection accuracy, enhancing production efficiency, and reducing material waste, thereby advancing quality control in wood processing and manufacturing.
(This article belongs to the Section Wood Science and Forest Products)

26 pages, 12201 KiB  
Article
MPG-YOLO: Enoki Mushroom Precision Grasping with Segmentation and Pulse Mapping
by Limin Xie, Jun Jing, Haoyu Wu, Qinguan Kang, Yiwei Zhao and Dapeng Ye
Agronomy 2025, 15(2), 432; https://doi.org/10.3390/agronomy15020432 - 10 Feb 2025
Abstract
The flatness of the cut surface in enoki mushrooms (Flammulina filiformis Z.W. Ge, X.B. Liu & Zhu L. Yang) is a key factor in quality classification. However, conventional automatic cutting equipment struggles with deformation issues due to its inability to adjust the grasping force based on individual mushroom sizes. To address this, we propose an improved method that integrates visual feedback to dynamically adjust the execution end, enhancing cut precision. Our approach enhances YOLOv8n-seg with Star Net, SPPECAN (a reconstructed SPPF with efficient channel attention), and C2fDStar (C2f with Star Net and deformable convolution) to improve feature extraction while reducing computational complexity and feature loss. Additionally, we introduce a mask ownership judgment and merging optimization algorithm to correct positional offsets, internal disconnections, and boundary instabilities in grasping area predictions. Based on this, we optimize grasping parameters using an improved centroid-based region width measurement and establish a region width-to-PWM mapping model for the precise conversion from visual data to gripper control. Experiments in real-world settings demonstrate the effectiveness of our method, achieving a mean average precision (mAP50:95) of 0.743 for grasping area segmentation, a 4.5% improvement over YOLOv8, with an average detection speed of 10.3 ms and a target width measurement error of only 0.14%. The proposed mapping relationship enables adaptive end-effector control, resulting in a 96% grasping success rate and a 98% qualified cutting surface rate. These results confirm the feasibility of our approach and provide a strong technical foundation for the intelligent automation of enoki mushroom cutting systems.
(This article belongs to the Section Soil and Plant Nutrition)
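The "region width-to-PWM mapping model" above converts a measured mushroom-bundle width into a gripper command. One simple form such a mapping could take is a clamped linear interpolation; all calibration endpoints below are hypothetical placeholders, not values from the paper (which fits its own model experimentally):

```python
def width_to_pwm(width_mm, w_min=8.0, w_max=40.0, duty_min=0.2, duty_max=0.9):
    """Map a measured region width to a gripper PWM duty cycle.

    Clamped linear interpolation between two calibration points. Every
    numeric endpoint here is a hypothetical placeholder for illustration.
    """
    width_mm = min(max(width_mm, w_min), w_max)  # clamp to the calibrated range
    t = (width_mm - w_min) / (w_max - w_min)     # normalized position in range
    return duty_min + t * (duty_max - duty_min)

print(round(width_to_pwm(24.0), 3))  # midpoint width maps to the midpoint duty, ~ 0.55
```

Clamping keeps an out-of-range width measurement (for example, from a bad segmentation mask) from commanding a duty cycle the gripper cannot safely apply.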

23 pages, 5243 KiB  
Article
GS-YOLO: A Lightweight Identification Model for Precision Parts
by Haojie Zhu, Lei Dong, Hanpeng Ren, Hongchao Zhuang and Hu Li
Symmetry 2025, 17(2), 268; https://doi.org/10.3390/sym17020268 - 10 Feb 2025
Abstract
With the development of aerospace technology, the variety and complexity of spacecraft components have increased. Traditional manual and machine learning-based detection methods struggle to accurately and quickly identify these parts, while deep learning-based object detection networks require significant computational resources and high-end hardware. This study introduces Ghost SCYLLA Intersection over Union You Only Look Once (GS-YOLO), an improved image recognition model derived from YOLOv5s, which integrates the global attention mechanism (GAM) with the Ghost module. The lightweight Ghost module substitutes the original convolutional layers, producing half of the features via convolution and the other half by symmetric linear operations. This minimizes the computing burden and model parameters by efficiently generating the redundant feature layers. A more lightweight SimSPPF structure is created to supplant the original Spatial Pyramid Pooling-Fast (SPPF), enhancing the network speed. The GAM is included in the bottleneck architecture, improving feature extraction via channel-space interaction. The experimental results on the custom-made precision component dataset show that GS-YOLO achieves an accuracy of 96.5% with a model size of 10.8 MB. Compared to YOLOv5s, GS-YOLO improves accuracy by 1%, reduces parameters by 23%, and decreases computational requirements by 40.6%. Despite the model’s light weight, its detection accuracy is improved.
(This article belongs to the Section Computer)
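The Ghost-module idea described above (half the output channels from convolution, the rest from cheap linear operations on those channels) can be illustrated with a toy example. A plain scale-and-shift stands in for the cheap operation; real Ghost modules learn depthwise-convolution transforms instead:

```python
def ghost_features(primary, scale=0.5, shift=0.1):
    """Toy illustration of Ghost-style feature generation.

    `primary` plays the role of channels produced by a (simulated)
    convolution; each cheap "ghost" channel is a per-element linear
    transform of a primary channel. The scale/shift values are arbitrary
    stand-ins for the learned cheap operations in a real Ghost module.
    """
    cheap = [[scale * v + shift for v in ch] for ch in primary]
    return primary + cheap  # channel concat: twice the channels, roughly half the conv cost

primary = [[1.0, 2.0], [3.0, 4.0]]  # two "convolution" channels
out = ghost_features(primary)
print(len(out))  # 4 output channels generated from 2 convolved ones
```

The saving comes from the convolution only having to produce half the output channels; the linear "ghost" transforms are far cheaper than convolving for the other half.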

18 pages, 5569 KiB  
Article
Supervised Hyperspectral Band Selection Using Texture Features for Classification of Citrus Leaf Diseases with YOLOv8
by Quentin Frederick, Thomas Burks, Jonathan Adam Watson, Pappu Kumar Yadav, Jianwei Qin, Moon Kim and Megan M. Dewdney
Sensors 2025, 25(4), 1034; https://doi.org/10.3390/s25041034 - 9 Feb 2025
Abstract
Citrus greening disease (HLB) and citrus canker cause financial losses in Florida citrus groves via smaller fruits, blemishes, premature fruit drop, and/or eventual tree death. Management of these two diseases requires early detection and distinction from other leaf defects and infections. Automated leaf inspection with hyperspectral imagery (HSI) is tested in this study. Citrus leaves bearing visible symptoms of HLB, canker, scab, melanose, greasy spot, zinc deficiency, and a control class were collected, and images were taken with a line-scan HSI camera. YOLOv8 was trained to classify multispectral images from this image dataset, created by selecting bands with a novel variance-based method. The ‘small’ network using an intensity-based band combination yielded an overall weighted F1 score of 0.8959, classifying HLB and canker with F1 scores of 0.788 and 0.941, respectively. The network size appeared to exert greater influence on performance than the HSI bands selected. These findings suggest that YOLOv8 relies more heavily on intensity differences than on the texture properties of citrus leaves and is less sensitive to the choice of wavelengths than traditional machine vision classifiers.
(This article belongs to the Section Intelligent Sensors)
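The "overall weighted F1 score" quoted above is a support-weighted average of per-class F1 scores, so large classes count for more than rare ones. A sketch combining the two quoted class scores with made-up class supports (the abstract does not report per-class sample counts):

```python
def weighted_f1(per_class):
    """Support-weighted average of per-class F1 scores.

    per_class: list of (f1, support) pairs. The supports used in the
    example call are hypothetical; the abstract reports only F1 values.
    """
    total = sum(n for _, n in per_class)
    return sum(f * n for f, n in per_class) / total

# Combining the quoted HLB (0.788) and canker (0.941) F1 scores with
# invented supports of 120 and 100 leaves.
print(round(weighted_f1([(0.788, 120), (0.941, 100)]), 3))  # ~ 0.858
```

With equal supports this reduces to the plain macro average; skewed supports pull the overall score toward the dominant class, which is why the reported 0.8959 can sit above the HLB class score.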

29 pages, 16077 KiB  
Article
Traffic Sign Detection and Quality Assessment Using YOLOv8 in Daytime and Nighttime Conditions
by Ziyad N. Aldoski and Csaba Koren
Sensors 2025, 25(4), 1027; https://doi.org/10.3390/s25041027 - 9 Feb 2025
Abstract
Traffic safety remains a pressing global concern, with traffic signs playing a vital role in regulating and guiding drivers. However, environmental factors like lighting and weather often compromise their visibility, impacting human drivers and autonomous vehicle (AV) systems. This study addresses critical traffic sign detection (TSD) and classification (TSC) gaps by leveraging the YOLOv8 algorithm to evaluate detection accuracy and sign quality under diverse lighting conditions. The model achieved robust performance metrics across day and night scenarios using the novel ZND dataset, comprising 16,500 labeled images sourced from the GTSRB, GitHub repositories, and the authors’ own real-world photographs. Complementary retroreflectivity assessments using handheld retroreflectometers revealed correlations between the material properties of the signs and their detection performance, emphasizing the importance of retroreflective quality, especially under night-time conditions. Additionally, video analysis highlighted the influence of sharpness, brightness, and contrast on detection rates. Human evaluations further provided insights into subjective perceptions of visibility and their relationship with algorithmic detection, underscoring areas for potential improvement. The findings emphasize the need for varied assessment methods, advanced algorithms, enhanced sign materials, and regular maintenance to improve detection reliability and road safety. This research bridges the theoretical and practical aspects of TSD, offering recommendations that could advance AV systems and inform future traffic sign design and evaluation standards.
(This article belongs to the Special Issue Intelligent Traffic Safety and Security)
