Search Results (203)

Search Parameters:
Keywords = vehicle make classification

34 pages, 2190 KiB  
Review
Security of Smart Grid: Cybersecurity Issues, Potential Cyberattacks, Major Incidents, and Future Directions
by Mohammad Ahmed Alomari, Mohammed Nasser Al-Andoli, Mukhtar Ghaleb, Reema Thabit, Gamal Alkawsi, Jamil Abedalrahim Jamil Alsayaydeh and AbdulGuddoos S. A. Gaid
Energies 2025, 18(1), 141; https://doi.org/10.3390/en18010141 - 1 Jan 2025
Viewed by 807
Abstract
Despite the fact that countless IoT applications are arising frequently in various fields, such as green cities, net-zero decarbonization, healthcare systems, and smart vehicles, the smart grid is considered the most critical cyber–physical IoT application. With emerging technologies supporting the much-anticipated smart energy systems, particularly the smart grid, these smart systems will continue to profoundly transform our way of life and the environment. Energy systems have improved over the past ten years in terms of intelligence, efficiency, decentralization, and ICT usage. On the other hand, cyber threats and attacks against these systems have greatly expanded as a result of the enormous spread of sensors and smart IoT devices inside the energy sector as well as traditional power grids. In order to detect and mitigate these vulnerabilities while increasing the security of energy systems and power grids, a thorough investigation and in-depth research are highly required. This study offers a comprehensive overview of state-of-the-art smart grid cybersecurity research. In this work, we primarily concentrate on examining the numerous threats and cyberattacks that have recently invaded the developing smart energy systems in general and smart grids in particular. This study begins by introducing smart grid architecture, its key components, and its security issues. Then, we present the spectrum of cyberattacks against energy systems while highlighting the most significant research studies that have been documented in the literature. The categorization of smart grid cyberattacks, while taking into account key information security characteristics, can help make it possible to provide organized and effective solutions for the present and potential attacks in smart grid applications. This cyberattack classification is covered thoroughly in this paper.
This study also discusses historical incidents against energy systems, which depict how harsh and disastrous these attacks can be if not detected and mitigated. Finally, we provide a summary of the latest emerging future research trends and open research issues. Full article
(This article belongs to the Section A: Sustainable Energy)
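As an illustrative aside, the abstract's idea of categorizing cyberattacks by the information-security characteristic they primarily violate can be sketched in a few lines. The attack names and their assignments below are hypothetical examples, not the paper's taxonomy.

```python
# Illustrative sketch: grouping smart-grid cyberattacks by the
# confidentiality/integrity/availability property they primarily violate.
# The attack list and assignments are invented examples, not the paper's.
ATTACK_PROPERTY = {
    "eavesdropping": "confidentiality",
    "traffic_analysis": "confidentiality",
    "false_data_injection": "integrity",
    "replay": "integrity",
    "ddos": "availability",
    "jamming": "availability",
}

def group_by_property(attacks):
    """Return {property: sorted attack names} for the given attacks."""
    groups = {}
    for a in attacks:
        groups.setdefault(ATTACK_PROPERTY[a], []).append(a)
    return {p: sorted(v) for p, v in groups.items()}
```

Such a mapping is the minimal data structure behind the kind of organized attack classification the survey argues for.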

18 pages, 5057 KiB  
Article
Road Traffic Gesture Autonomous Integrity Monitoring Using Fuzzy Logic
by Kwame Owusu Ampadu and Michael Huebner
Sensors 2025, 25(1), 152; https://doi.org/10.3390/s25010152 - 30 Dec 2024
Viewed by 301
Abstract
Occasionally, four cars arrive at the four legs of an unsignalized intersection at the same time or almost at the same time. If each lane has a stop sign, all four cars are required to stop. In such instances, gestures are used to communicate approval for one vehicle to leave. Nevertheless, the autonomous vehicle lacks the ability to participate in gestural exchanges. A sophisticated in-vehicle traffic light system has therefore been developed to monitor and facilitate communication among autonomous vehicles and classic car drivers. The fuzzy logic-based system was implemented and evaluated on a self-organizing network comprising eight ESP32 microcontrollers, all operating under the same program. A single GPS sensor connects to each microcontroller that also manages three light-emitting diodes. The ESPNow broadcast feature is used. The system requires no internet service and no large-scale or long-term storage, such as the driving cloud platform, making it backward-compatible with classical vehicles. Simulations were conducted based on the order and arrival direction of vehicles at three junctions. Results have shown that autonomous vehicles at four-legged intersections can now communicate with human drivers at a much lower cost with precise position classification and lane dispersion under 30 s. Full article
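The fuzzy right-of-way idea can be sketched with a single toy membership function. The paper's system fuses richer inputs (GPS position, lane, broadcast order over ESPNow); here only arrival time is scored, and the 10 s fade-out horizon is an assumption for illustration.

```python
def earliness(t, horizon=10.0):
    """Fuzzy 'arrived early' membership: 1.0 at t = 0 s, fading
    linearly to 0.0 at `horizon` seconds (illustrative horizon)."""
    return max(0.0, 1.0 - t / horizon)

def right_of_way(arrival_times):
    """Grant departure approval to the vehicle with the highest
    earliness score. arrival_times: {vehicle_id: seconds since the
    first car reached the intersection}."""
    return max(arrival_times, key=lambda v: earliness(arrival_times[v]))
```

In effect this replaces the human hand-wave with a deterministic, broadcastable score that every node computes identically.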

19 pages, 3120 KiB  
Article
Optimized Fault Classification in Electric Vehicle Drive Motors Using Advanced Machine Learning and Data Transformation Techniques
by S. Thirunavukkarasu, K. Karthick, S. K. Aruna, R. Manikandan and Mejdl Safran
Processes 2024, 12(12), 2648; https://doi.org/10.3390/pr12122648 - 24 Nov 2024
Viewed by 1049
Abstract
The increasing use of electric vehicles has made fault diagnosis in electric drive motors, particularly in variable speed drives (VSDs) using three-phase induction motors, a critical area of research. This article presents a fault classification model based on machine learning (ML) algorithms to identify various faults under six operating conditions: normal operating mode (NOM), phase-to-phase fault (PTPF), phase-to-ground fault (PTGF), overloading fault (OLF), over-voltage fault (OVF), and under-voltage fault (UVF). A dataset simulating real-world operating conditions, consisting of 39,034 instances and nine key motor features, was analyzed. Comprehensive data preprocessing steps, including missing value removal, duplicate detection, and data transformation, were applied to enhance the dataset’s suitability for ML models. Yeo–Johnson and Hyperbolic Sine transformations were used to reduce skewness and improve the normality of the features. Multiple ML algorithms, including CatBoost, Random Forest (RF) Classifier, AdaBoost, and quadratic discriminant analysis (QDA), were trained and evaluated using Bayesian optimization with cross-validation. The CatBoost model achieved the best performance, with an accuracy of 94.1%, making it the most suitable model for fault classification in electric vehicle drive motors. Full article
(This article belongs to the Section Energy Systems)
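The Yeo–Johnson transformation named in the abstract has a simple closed form. A minimal pure-Python sketch of the per-value transform (in practice the parameter λ is fitted by maximum likelihood over the whole feature, which is omitted here):

```python
import math

def yeo_johnson(y, lam):
    """Yeo-Johnson power transform of one value y with parameter lam.
    Reduces skewness and, unlike Box-Cox, is defined for negative inputs."""
    if y >= 0:
        if lam != 0:
            return ((y + 1.0) ** lam - 1.0) / lam
        return math.log(y + 1.0)
    if lam != 2:
        return -(((-y + 1.0) ** (2.0 - lam)) - 1.0) / (2.0 - lam)
    return -math.log(-y + 1.0)
```

With λ = 1 the transform is the identity, which is why fitted λ values near 1 indicate an already near-symmetric feature.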

24 pages, 6941 KiB  
Article
Discriminating Seagrasses from Green Macroalgae in European Intertidal Areas Using High-Resolution Multispectral Drone Imagery
by Simon Oiry, Bede Ffinian Rowe Davies, Ana I. Sousa, Philippe Rosa, Maria Laura Zoffoli, Guillaume Brunier, Pierre Gernez and Laurent Barillé
Remote Sens. 2024, 16(23), 4383; https://doi.org/10.3390/rs16234383 - 23 Nov 2024
Viewed by 902
Abstract
Coastal areas support seagrass meadows, which offer crucial ecosystem services, including erosion control and carbon sequestration. However, these areas are increasingly impacted by human activities, leading to habitat fragmentation and seagrass decline. In situ surveys, traditionally performed to monitor these ecosystems, face limitations on temporal and spatial coverage, particularly in intertidal zones, prompting the addition of satellite data within monitoring programs. Yet, satellite remote sensing can be limited by too coarse spatial and/or spectral resolutions, making it difficult to discriminate seagrass from other macrophytes in highly heterogeneous meadows. Drone (unmanned aerial vehicle—UAV) images at a very high spatial resolution offer a promising solution to address challenges related to spatial heterogeneity and the intrapixel mixture. This study focuses on using drone acquisitions with a ten spectral band sensor similar to that onboard Sentinel-2 for mapping intertidal macrophytes at low tide (i.e., during a period of emersion) and effectively discriminating between seagrass and green macroalgae. Nine drone flights were conducted at two different altitudes (12 m and 120 m) across heterogeneous intertidal European habitats in France and Portugal, providing multispectral reflectance observation at very high spatial resolution (8 mm and 80 mm, respectively). Taking advantage of their extremely high spatial resolution, the low altitude flights were used to train a Neural Network classifier to discriminate five taxonomic classes of intertidal vegetation: Magnoliopsida (Seagrass), Chlorophyceae (Green macroalgae), Phaeophyceae (Brown algae), Rhodophyceae (Red macroalgae), and benthic Bacillariophyceae (Benthic diatoms), and validated using concomitant field measurements. Classification of drone imagery resulted in an overall accuracy of 94% across all sites and images, covering a total area of 467,000 m2. 
The model exhibited an accuracy of 96.4% in identifying seagrass. In particular, seagrass and green algae can be discriminated. The very high spatial resolution of the drone data made it possible to assess the influence of spatial resolution on the classification outputs, showing a limited loss in seagrass detection up to about 10 m. Altogether, our findings suggest that the MultiSpectral Instrument (MSI) onboard Sentinel-2 offers a relevant trade-off between its spatial and spectral resolution, thus offering promising perspectives for satellite remote sensing of intertidal biodiversity over larger scales. Full article
(This article belongs to the Section Ecological Remote Sensing)
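A minimal stand-in for the pixel-wise vegetation classifier described above is a nearest-centroid rule over per-class mean spectra. The paper trains a neural network on ten bands with field-validated labels; the two-band reflectance centroids below are invented purely for illustration.

```python
def nearest_centroid(pixel, centroids):
    """Classify a multispectral pixel (list of band reflectances) by
    squared Euclidean distance to per-class mean spectra.
    A toy stand-in for the paper's neural-network classifier."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: dist2(pixel, centroids[c]))
```

The same interface scales to the ten Sentinel-2-like bands: each centroid just becomes a ten-element spectrum.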

19 pages, 53371 KiB  
Article
Efficient UAV-Based Automatic Classification of Cassava Fields Using K-Means and Spectral Trend Analysis
by Apinya Boonrang, Pantip Piyatadsananon and Tanakorn Sritarapipat
AgriEngineering 2024, 6(4), 4406-4424; https://doi.org/10.3390/agriengineering6040250 - 22 Nov 2024
Viewed by 509
Abstract
High-resolution images captured by Unmanned Aerial Vehicles (UAVs) play a vital role in precision agriculture, particularly in evaluating crop health and detecting weeds. However, the detailed pixel information in these images makes classification a time-consuming and resource-intensive process. Despite these challenges, UAV imagery is increasingly utilized for various agricultural classification tasks. This study introduces an automatic classification method designed to streamline the process, specifically targeting cassava plants, weeds, and soil classification. The approach combines K-means unsupervised classification with spectral trend-based labeling, significantly reducing the need for manual intervention. The method ensures reliable and accurate classification results by leveraging color indices derived from RGB data and applying mean-shift filtering parameters. Key findings reveal that the combination of the blue (B) channel, Visible Atmospherically Resistant Index (VARI), and color index (CI) with filtering parameters, including a spatial radius (sp) = 5 and a color radius (sr) = 10, effectively differentiates soil from vegetation. Notably, using the green (G) channel, excess red (ExR), and excess green (ExG) with filtering parameters (sp = 10, sr = 20) successfully distinguishes cassava from weeds. The classification maps generated by this method achieved high kappa coefficients of 0.96, with accuracy levels comparable to supervised methods like Random Forest classification. This technique offers significant reductions in processing time compared to traditional methods and does not require training data, making it adaptable to different cassava fields captured by various UAV-mounted optical sensors. Ultimately, the proposed classification process minimizes manual intervention by incorporating efficient pre-processing steps into the classification workflow, making it a valuable tool for precision agriculture. Full article
(This article belongs to the Special Issue Computer Vision for Agriculture and Smart Farming)
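The color indices named in the abstract (ExG, ExR, VARI) have standard RGB formulas; a small sketch, with the caveat that the paper's mean-shift filtering parameters and spectral-trend labeling are not reproduced here:

```python
def color_indices(r, g, b):
    """Common RGB vegetation indices for separating soil from vegetation.
    ExG/ExR use chromatic (sum-normalised) coordinates; VARI uses the
    raw channel values."""
    s = r + g + b
    rn, gn, bn = r / s, g / s, b / s
    exg = 2.0 * gn - rn - bn              # excess green
    exr = 1.4 * rn - gn                   # excess red
    denom = g + r - b
    vari = (g - r) / denom if denom != 0 else 0.0
    return {"ExG": exg, "ExR": exr, "VARI": vari}
```

Green vegetation pixels push ExG and VARI positive and ExR negative, which is what lets unsupervised clusters be labeled automatically from their index trends.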

16 pages, 1799 KiB  
Article
Optimizing Fire Scene Analysis: Hybrid Convolutional Neural Network Model Leveraging Multiscale Feature and Attention Mechanisms
by Shakhnoza Muksimova, Sabina Umirzakova, Mirjamol Abdullaev and Young-Im Cho
Fire 2024, 7(11), 422; https://doi.org/10.3390/fire7110422 - 20 Nov 2024
Viewed by 701
Abstract
The rapid and accurate detection of fire scenes in various environments is crucial for effective disaster management and mitigation. Fire scene classification is a critical aspect of modern fire detection systems that directly affects public safety and property preservation. This research introduced a novel hybrid deep learning model designed to enhance the accuracy and efficiency of fire scene classification across diverse environments. The proposed model integrates advanced convolutional neural networks with multiscale feature extraction, attention mechanisms, and ensemble learning to achieve superior performance in real-time fire detection. By leveraging the strengths of pre-trained networks such as ResNet50, VGG16, and EfficientNet-B3, the model captures detailed features at multiple scales, ensuring robust detection capabilities. Including spatial and channel attention mechanisms further refines the focus on critical areas within the input images, reducing false positives and improving detection precision. Extensive experiments on a comprehensive dataset encompassing wildfires, building fires, vehicle fires, and non-fire scenes demonstrate that the proposed framework outperforms existing cutting-edge techniques. The model also exhibited reduced computational complexity and enhanced inference speed, making it suitable for deployment in real-time applications on various hardware platforms. This study sets a new benchmark for fire detection and offers a powerful tool for early warning systems and emergency response initiatives. Full article

19 pages, 4245 KiB  
Article
Lightweight UAV Small Target Detection and Perception Based on Improved YOLOv8-E
by Yongjuan Zhao, Lijin Wang, Guannan Lei, Chaozhe Guo and Qiang Ma
Drones 2024, 8(11), 681; https://doi.org/10.3390/drones8110681 - 19 Nov 2024
Viewed by 914
Abstract
Traditional unmanned aerial vehicle (UAV) detection methods struggle with multi-scale variations during flight, complex backgrounds, and low accuracy, whereas existing deep learning detection methods have high accuracy but high dependence on equipment, making it difficult to detect small UAV targets efficiently. To address the above challenges, this paper proposes an improved lightweight high-precision model, YOLOv8-E (Enhanced YOLOv8), for the fast and accurate detection and identification of small UAVs in complex environments. First, a Sobel filter is introduced to enhance the C2f module to form the C2f-ESCFFM (Edge-Sensitive Cross-Stage Feature Fusion Module) module, which achieves higher computational efficiency and feature representation capacity while preserving detection accuracy as much as possible by fusing the SobelConv branch for edge extraction and the convolution branch to extract spatial information. Second, the neck network is based on the HSFPN (High-level Screening-feature Pyramid Network) architecture, and the CAA (Context Anchor Attention) mechanism is introduced to enhance the semantic parsing of low-level features to form a new CAHS-FPN (Context-Augmented Hierarchical Scale Feature Pyramid Network) network, enabling the fusion of deep and shallow features. This improves the feature representation capability of the model, allowing it to detect targets of different sizes efficiently. Finally, the optimized detail-enhanced convolution (DEConv) technique is introduced into the head network, forming the LSCOD (Lightweight Shared Convolutional Object Detector Head) module, enhancing the generalization ability of the model by integrating a priori information and adopting the strategy of shared convolution. This ensures that the model enhances its localization and classification performance without increasing parameters or computational costs, thus effectively improving the detection performance of small UAV targets. 
The experimental results show that, compared with the baseline model, the YOLOv8-E model achieved an mAP@0.5 (mean average precision at IoU = 0.5) improvement of 6.3%, reaching 98.4%, whereas the model parameter scale was reduced by more than 50%. Overall, YOLOv8-E significantly reduces the demand for computational resources while ensuring high-precision detection. Full article
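For readers unfamiliar with the mAP@0.5 metric quoted above: a detection is matched to ground truth via intersection-over-union (IoU), with 0.5 as the acceptance threshold. A minimal IoU function for axis-aligned boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    At mAP@0.5, a detection counts as a true positive when its IoU with a
    same-class ground-truth box is at least 0.5."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```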

33 pages, 16970 KiB  
Article
Ontological Airspace-Situation Awareness for Decision System Support
by Carlos C. Insaurralde and Erik Blasch
Aerospace 2024, 11(11), 942; https://doi.org/10.3390/aerospace11110942 - 15 Nov 2024
Viewed by 788
Abstract
Air Traffic Management (ATM) has become complicated mainly due to the increase and variety of input information from Communication, Navigation, and Surveillance (CNS) systems as well as the proliferation of Unmanned Aerial Vehicles (UAVs) requiring Unmanned Aerial System Traffic Management (UTM). In response to the UTM challenge, a decision support system (DSS) has been developed to help ATM personnel and aircraft pilots cope with their heavy workloads and challenging airspace situations. The DSS provides airspace situational awareness (ASA) driven by knowledge representation and reasoning from an Avionics Analytics Ontology (AAO), which is an Artificial Intelligence (AI) database that augments humans' mental processes by means of implementing AI cognition. Ontologies for avionics have also been of interest to the Federal Aviation Administration (FAA) Next Generation Air Transportation System (NextGen) and the Single European Sky ATM Research (SESAR) project, but they have yet to be adopted by practitioners and industry. This paper presents a decision-making computer tool to support ATM personnel and aviators in deciding on airspace situations. It details the AAO and the analytical AI foundations that support such an ontology. An application example and experimental test results from a UAV AAO (U-AAO) framework prototype are also presented. The AAO-based DSS can provide ASA from outdoor park-testing trials based on downscaled application scenarios that replicate takeoffs, where drones play the role of different aircraft, i.e., where a drone represents an airplane that takes off and other drones represent UAVs flying around during the airplane's takeoff.
The resulting ASA is the output of an AI cognitive process, the inputs of which are the aircraft localization based on Automatic Dependent Surveillance–Broadcast (ADS-B) and the classification of airplanes and UAVs (both represented by drones), the proximity between aircraft, and the knowledge of potential hazards from airspace situations involving the aircraft. The ASA outcomes are shown to augment the human ability to make decisions. Full article
(This article belongs to the Collection Avionic Systems)
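The ADS-B-based proximity input described above reduces, at its simplest, to a great-circle distance check between broadcast positions. A sketch with an illustrative 500 m alert threshold (the threshold is an assumption, not the paper's value):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two ADS-B positions
    given in degrees (altitude separation ignored in this sketch)."""
    r = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def proximity_hazard(distance_m, alert_m=500.0):
    """Flag a potential airspace hazard when two aircraft are closer
    than alert_m. The 500 m default is illustrative only."""
    return distance_m < alert_m
```

A fuller version of this check, combined with aircraft classification, is what feeds the ontology's hazard reasoning.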

25 pages, 4366 KiB  
Article
Hybrid AI-Powered Real-Time Distributed Denial of Service Detection and Traffic Monitoring for Software-Defined-Based Vehicular Ad Hoc Networks: A New Paradigm for Securing Intelligent Transportation Networks
by Onur Polat, Saadin Oyucu, Muammer Türkoğlu, Hüseyin Polat, Ahmet Aksoz and Fahri Yardımcı
Appl. Sci. 2024, 14(22), 10501; https://doi.org/10.3390/app142210501 - 14 Nov 2024
Viewed by 1002
Abstract
Vehicular Ad Hoc Networks (VANETs) are wireless networks that improve traffic efficiency, safety, and comfort for smart vehicle users. However, with the rise of smart and electric vehicles, traditional VANETs struggle with issues like scalability, management, energy efficiency, and dynamic pricing. Software Defined Networking (SDN) can help address these challenges by centralizing network control. The integration of SDN with VANETs, forming Software Defined-based VANETs (SD-VANETs), shows promise for intelligent transportation, particularly with autonomous vehicles. Nevertheless, SD-VANETs are susceptible to cyberattacks, especially Distributed Denial of Service (DDoS) attacks, making cybersecurity a crucial consideration for their future development. This study proposes a security system that incorporates a hybrid artificial intelligence model to detect DDoS attacks targeting the SDN controller in SD-VANET architecture. The proposed system is designed to operate as a module within the SDN controller, enabling the detection of DDoS attacks. The proposed attack detection methodology involves the collection of network traffic data, data processing, and the classification of these data. This methodology is based on a hybrid artificial intelligence model that combines a one-dimensional Convolutional Neural Network (1D-CNN) and Decision Tree models. According to experimental results, the proposed attack detection system identified that approximately 90% of the traffic in the SD-VANET network under DDoS attack consisted of malicious DDoS traffic flows. These results demonstrate that the proposed security system provides a promising solution for detecting DDoS attacks targeting the SD-VANET architecture. Full article
(This article belongs to the Special Issue Emerging Technologies in Network Security and Cryptography)

31 pages, 2257 KiB  
Article
Evaluation of Cluster Algorithms for Radar-Based Object Recognition in Autonomous and Assisted Driving
by Daniel Carvalho de Ramos, Lucas Reksua Ferreira, Max Mauro Dias Santos, Evandro Leonardo Silva Teixeira, Leopoldo Rideki Yoshioka, João Francisco Justo and Asad Waqar Malik
Sensors 2024, 24(22), 7219; https://doi.org/10.3390/s24227219 - 12 Nov 2024
Viewed by 1051
Abstract
Perception systems for assisted driving and autonomy enable the identification and classification of objects through a concentration of sensors installed in vehicles, including Radio Detection and Ranging (RADAR), camera, Light Detection and Ranging (LIDAR), ultrasound, and HD maps. These sensors ensure a reliable and robust navigation system. Radar, in particular, operates with electromagnetic waves and remains effective under a variety of weather conditions. It uses point cloud technology to map the objects ahead of the vehicle, making it easy to group these points to associate them with real-world objects. Numerous clustering algorithms have been developed and can be integrated into radar systems to identify, investigate, and track objects. In this study, we evaluate several clustering algorithms to determine their suitability for application in automotive radar systems. Our analysis covered a variety of current methods, the mathematical process of these methods, and presented a comparison table between these algorithms, including Hierarchical Clustering, Affinity Propagation, Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH), Density-Based Spatial Clustering of Applications with Noise (DBSCAN), Mini-Batch K-Means, K-Means, Mean Shift, OPTICS, Spectral Clustering, and Gaussian Mixture. We have found that K-Means, Mean Shift, and DBSCAN are particularly suitable for these applications, based on performance indicators that assess suitability and efficiency. However, DBSCAN shows better performance compared to the others. Furthermore, our findings highlight that the choice of radar significantly impacts the effectiveness of these object recognition methods. Full article
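DBSCAN, which the study found best suited to radar point clouds, is compact enough to sketch directly. A minimal O(n²) version over 2-D points (a production radar stack would use an indexed neighbour search and carefully tuned eps/min_pts):

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN over 2-D points (list of (x, y) tuples).
    Returns one label per point: 0, 1, ... for clusters, -1 for noise."""
    def neighbours(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps ** 2]

    labels = [None] * len(points)      # None = unvisited
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1             # noise for now; may become a border point
            continue
        labels[i] = cluster            # i is a core point: start a new cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster    # noise reached from a core point: border
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nj = neighbours(j)
            if len(nj) >= min_pts:     # j is also core: expand through it
                queue.extend(nj)
        cluster += 1
    return labels
```

Each resulting cluster is then a candidate real-world object (car, guardrail, pedestrian) for the tracker downstream.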

15 pages, 2771 KiB  
Article
Vehicle Lane Changing Game Model Based on Improved SVM Algorithm
by Jian Wang, Hongxiang Wang, Mingzhe Fei and Gang Zhou
World Electr. Veh. J. 2024, 15(11), 505; https://doi.org/10.3390/wevj15110505 - 4 Nov 2024
Viewed by 813
Abstract
In order to improve the autonomous lane-changing performance of unmanned vehicles, this paper aims to solve the problem of inaccurate decision classification in traditional support vector machine (SVM) algorithms applied to the lane-changing decision-making stage of intelligent driving vehicles. By using game theory-related theories and combining the improved support vector machine (SSA-SVM) method, a vehicle autonomous lane-changing strategy based on game theory is established. The optimized SVM method has certain advantages for vehicle lane-changing decision-making with a small sample size in actual production processes. The lane-changing decision judgment accuracy rate of the SSA-SVM algorithm model can reach 93.6% compared with the SVM algorithm model without algorithm optimization; the SSA-SVM algorithm model has obvious advantages in decision performance and running speed. Therefore, the proposed new algorithm can effectively solve the problem of the objective consideration of the payoff function in conventional decision game theory. Full article
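The game-theoretic side of the lane-change decision can be sketched as an expected-payoff comparison. The actions, payoff values, and yield probability below are illustrative assumptions, not the paper's calibrated payoff function (estimating that function objectively is what the SSA-SVM classifier supports):

```python
def lane_change_decision(payoff, p_yield):
    """Expected-payoff lane-change choice for the ego vehicle.
    payoff[(ego_action, other_action)] -> ego utility, with ego actions
    'change'/'keep' and opponent actions 'yield'/'hold' (all illustrative).
    p_yield is the estimated probability the other driver yields."""
    def expected(action):
        return (p_yield * payoff[(action, "yield")]
                + (1.0 - p_yield) * payoff[(action, "hold")])
    return max(("change", "keep"), key=expected)
```

With a confident yield estimate the ego vehicle changes lanes; with a low one it keeps its lane, mirroring the strategy-selection step of the abstract.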

16 pages, 1563 KiB  
Article
Tree Species Classification from UAV Canopy Images with Deep Learning Models
by Yunmei Huang, Botong Ou, Kexin Meng, Baijian Yang, Joshua Carpenter, Jinha Jung and Songlin Fei
Remote Sens. 2024, 16(20), 3836; https://doi.org/10.3390/rs16203836 - 15 Oct 2024
Viewed by 1354
Abstract
Forests play a critical role in the provision of ecosystem services, and understanding their compositions, especially tree species, is essential for effective ecosystem management and conservation. However, identifying tree species is challenging and time-consuming. Recently, unmanned aerial vehicles (UAVs) equipped with various sensors have emerged as a promising technology for species identification due to their relatively low cost and high spatial and temporal resolutions. Moreover, the advancement of various deep learning models makes remote sensing-based species identification more of a reality. However, three questions remain to be answered: first, which of the state-of-the-art models performs best for this task; second, which is the optimal season for tree species classification in a temperate forest; and third, whether a model trained in one season can be effectively transferred to another season. To address these questions, we focus on tree species classification by using five state-of-the-art deep learning models on UAV-based RGB images, and we explored the model transferability between seasons. Utilizing UAV images taken in the summer and fall, we captured 8799 crown images of eight species. We trained five models using summer and fall images and compared their performance on the same dataset. All models achieved high performance in species classification, with the best performance on summer images, with an average F1-score of 0.96. For the fall images, Vision Transformer (ViT), EfficientNetB0, and YOLOv5 achieved F1-scores greater than 0.9, outperforming both ResNet18 and DenseNet. On average, across the two seasons, ViT achieved the best accuracy. This study demonstrates the capability of deep learning models in forest inventory, particularly for tree species classification. While the choice of certain models may not significantly affect performance when using summer images, the advanced models prove to be a better choice for fall images.
Given the limited transferability from one season to another, further research is required to overcome the challenge associated with transferability across seasons. Full article
(This article belongs to the Special Issue LiDAR Remote Sensing for Forest Mapping)

21 pages, 5748 KiB  
Article
Automated Audible Truck-Mounted Attenuator Alerts: Vision System Development and Evaluation
by Neema Jakisa Owor, Yaw Adu-Gyamfi, Linlin Zhang and Carlos Sun
AI 2024, 5(4), 1816-1836; https://doi.org/10.3390/ai5040090 - 8 Oct 2024
Viewed by 1043
Abstract
Background: The rise in work zone crashes due to distracted and aggressive driving calls for improved safety measures. While Truck-Mounted Attenuators (TMAs) have helped reduce crash severity, the increasing number of crashes involving TMAs shows the need for improved warning systems. Methods: This study proposes an AI-enabled vision system to automatically alert drivers on collision courses with TMAs, addressing the limitations of manual alert systems. The system uses multi-task learning (MTL) to detect and classify vehicles, estimate distance zones (danger, warning, and safe), and perform lane and road segmentation. MTL improves efficiency and accuracy, making it ideal for devices with limited resources. Using a Generalized Efficient Layer Aggregation Network (GELAN) backbone, the system enhances stability and performance. Additionally, an alert module triggers alarms based on speed, acceleration, and time to collision. Results: The model achieves a recall of 90.5% and an mAP of 0.792 for vehicle detection, an mIoU of 0.948 for road segmentation, 81.5% accuracy for lane segmentation, and 83.8% accuracy for distance classification. Conclusions: The results show the system accurately detects vehicles, classifies distances, and provides real-time alerts, reducing TMA collision risks and enhancing work zone safety. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Image Processing and Computer Vision)
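The alert logic described above combines distance, speed, and time to collision (TTC). The sketch below illustrates one way such a module could work; the constant-acceleration motion model and the danger/warning thresholds are assumptions for illustration, not the paper's implementation:

```python
import math

def time_to_collision(gap_m, closing_speed, closing_accel=0.0):
    """Seconds until the gap to the TMA closes under constant acceleration;
    returns None if the approaching vehicle never reaches it."""
    if closing_accel == 0.0:
        return gap_m / closing_speed if closing_speed > 0 else None
    # Solve 0.5*a*t^2 + v*t - gap = 0 for the smallest positive root.
    disc = closing_speed ** 2 + 2.0 * closing_accel * gap_m
    if disc < 0:
        return None  # the vehicle decelerates to a stop before impact
    t = (-closing_speed + math.sqrt(disc)) / closing_accel
    return t if t > 0 else None

def alert_level(ttc, danger_s=2.0, warning_s=4.0):
    """Map TTC onto the danger/warning/safe zones (thresholds are assumed)."""
    if ttc is None:
        return "safe"
    if ttc <= danger_s:
        return "danger"
    if ttc <= warning_s:
        return "warning"
    return "safe"

# A vehicle 30 m behind the TMA, closing at 20 m/s, reaches it in 1.5 s
print(alert_level(time_to_collision(30.0, 20.0)))  # danger
```

Returning None when the discriminant is negative covers the case where a braking vehicle stops short of the attenuator, so no alarm is raised.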

21 pages, 29624 KiB  
Article
Object Detection and Classification Framework for Analysis of Video Data Acquired from Indian Roads
by Aayushi Padia, Aryan T. N., Sharan Thummagunti, Vivaan Sharma, Manjunath K. Vanahalli, Prabhu Prasad B. M., Girish G. N., Yong-Guk Kim and Pavan Kumar B. N.
Sensors 2024, 24(19), 6319; https://doi.org/10.3390/s24196319 - 29 Sep 2024
Viewed by 1277
Abstract
Object detection and classification in autonomous vehicles are crucial for ensuring safe and efficient navigation through complex environments. This paper addresses the need for robust detection and classification algorithms tailored specifically for Indian roads, which present unique challenges such as diverse traffic patterns, erratic driving behaviors, and varied weather conditions. Despite significant progress in object detection and classification for autonomous vehicles, existing methods often struggle to generalize effectively to the conditions encountered on Indian roads. This paper proposes a novel approach utilizing the YOLOv8 deep learning model, designed to be lightweight, scalable, and efficient for real-time implementation using onboard cameras. Experimental evaluations were conducted using real-life scenarios encompassing diverse weather and traffic conditions. Videos captured in various environments were utilized to assess the model’s performance, with particular emphasis on its accuracy and precision across 35 distinct object classes. The experiments demonstrate a precision of 0.65 for the detection of multiple classes, indicating the model’s efficacy in handling a wide range of objects. Moreover, real-time testing revealed an average accuracy exceeding 70% across all scenarios, with a peak accuracy of 95% achieved in optimal conditions. The parameters considered in the evaluation process encompassed not only traditional metrics but also factors pertinent to Indian road conditions, such as low lighting, occlusions, and unpredictable traffic patterns. The proposed method exhibits superiority over existing approaches by offering a balanced trade-off between model complexity and performance. By leveraging the YOLOv8 architecture, this solution achieved high accuracy while minimizing computational resources, making it well suited for deployment in autonomous vehicles operating on Indian roads. Full article
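Detection metrics of the kind reported above are conventionally scored by intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal sketch, assuming corner-format boxes (x1, y1, x2, y2) and a greedy matching scheme; none of this is taken from the paper's code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def precision_at_iou(predictions, ground_truths, threshold=0.5):
    """Fraction of predicted boxes that match an unmatched ground truth at IoU >= threshold."""
    if not predictions:
        return 0.0
    matched = set()
    true_positives = 0
    for pred in predictions:
        for i, gt in enumerate(ground_truths):
            if i not in matched and iou(pred, gt) >= threshold:
                matched.add(i)
                true_positives += 1
                break
    return true_positives / len(predictions)
```

Each ground-truth box is consumed at most once, so duplicate detections of the same object count as false positives rather than inflating precision.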

13 pages, 13020 KiB  
Article
Classification of Unmanned Aerial Vehicles Based on Acoustic Signals Obtained in External Environmental Conditions
by Marzena Mięsikowska
Sensors 2024, 24(17), 5663; https://doi.org/10.3390/s24175663 - 30 Aug 2024
Viewed by 768
Abstract
Detecting unmanned aerial vehicles (UAVs) and classifying them on the basis of acoustic signals recorded in their presence is a valuable source of information that can inform decisions; for example, it can support the autonomy of drones and their decision-making systems, enabling them to cooperate in a swarm. The aim of this study was to classify acoustic signals recorded in the presence of 17 drones while each hovered individually at a height of 8 m above the recording equipment. The signals were obtained for one drone at a time in external environmental conditions. Mel-frequency cepstral coefficients (MFCCs) were evaluated from the recorded signals, and a discriminant analysis was performed based on 12 MFCCs, with the drone model as the grouping factor. The resulting classification score was 98.8%. This means that, on the basis of acoustic signals recorded in the presence of a drone, it is possible not only to detect the object but also to classify its model. Full article
(This article belongs to the Special Issue New Methods and Applications for UAVs)
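The discriminant analysis above assigns each 12-coefficient MFCC vector to one of the drone models. A simplified nearest-class-centroid stand-in (not the paper's exact discriminant functions, and with hypothetical centroid values) illustrates the idea:

```python
import math

def nearest_centroid(mfcc_vec, centroids):
    """Return the drone-model label whose mean MFCC vector is closest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(mfcc_vec, centroids[label]))

# Hypothetical 12-dimensional mean MFCC vectors for two of the 17 drone models
centroids = {
    "model_a": [0.0] * 12,
    "model_b": [3.0] * 12,
}
sample = [2.8] * 12  # a feature vector extracted from one recording
print(nearest_centroid(sample, centroids))  # model_b
```

Linear discriminant analysis additionally accounts for the pooled within-class covariance of the MFCCs, which nearest-centroid matching ignores, but the decision structure is the same: score a feature vector against each drone model and pick the best.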
