
Search Results (4,469)

Search Parameters:
Keywords = aerial image

20 pages, 3776 KiB  
Article
Rapid Prediction and Inversion of Pond Aquaculture Water Quality Based on Hyperspectral Imaging by Unmanned Aerial Vehicles
by Qiliang Ma, Shuimiao Li, Hengnian Qi, Xiaoming Yang and Mei Liu
Water 2025, 17(4), 517; https://doi.org/10.3390/w17040517 - 11 Feb 2025
Abstract
Water quality in aquaculture has a direct impact on the growth and development of the aquatic organisms being cultivated. Rapid, accurate and comprehensive monitoring of water quality in aquaculture ponds is therefore crucial for managing aquaculture water environments. Traditional water quality monitoring often relies on manual sampling, which is not only time-consuming but also reflects only small areas of a water body. In this study, unmanned aerial vehicles (UAVs) equipped with hyperspectral cameras were used to acquire remote sensing images of experimental aquaculture ponds. Concurrently, we manually collected water samples to analyze critical water quality parameters, including total nitrogen (TN), ammonia nitrogen (NH4+-N), total phosphorus (TP), and chemical oxygen demand (COD). Regression models were developed to assess the accuracy of predicting these parameters based on five preprocessing techniques for hyperspectral image data (L2 norm, Savitzky–Golay, first derivative, wavelet transform, and standard normal variate), two spectral feature selection methods (successive projections algorithm and competitive adaptive reweighted sampling), and three machine learning algorithms (extreme learning machine, support vector regression, and eXtreme gradient boosting). Additionally, a deep learning model incorporating the full spectrum was constructed for comparative analysis. Ultimately, according to the coefficient of determination (R2), the optimal prediction model was selected for each water quality parameter, with R2 values of 0.756, 0.603, 0.94, and 0.858, respectively. These optimal models were then used to visualize the spatial concentration distribution of each water quality parameter within the aquaculture district, and the plausibility of the model predictions was evaluated against the manually measured data. The results show that UAV hyperspectral technology can rapidly invert the spatial distribution of water quality in aquaculture ponds, enabling rapid and accurate assessment of aquaculture water quality and providing an effective method for monitoring aquaculture water environments. Full article
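Of the five preprocessing techniques listed, the standard normal variate is the simplest to illustrate: each spectrum is centred and scaled by its own mean and standard deviation. A minimal pure-Python sketch (not the authors' pipeline; the toy five-band spectrum is invented):

```python
# Standard normal variate (SNV): centre and scale each spectrum by its
# own mean and sample standard deviation, a common hyperspectral
# preprocessing step to suppress scatter effects.
import math

def snv(spectrum):
    n = len(spectrum)
    mean = sum(spectrum) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in spectrum) / (n - 1))
    return [(v - mean) / std for v in spectrum]

# Toy 5-band reflectance spectrum (illustrative values only)
corrected = snv([0.12, 0.15, 0.22, 0.35, 0.41])
```

A regressor such as SVR or XGBoost would then be fit on the corrected spectra, or on bands chosen by SPA/CARS.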
15 pages, 1607 KiB  
Article
Diagnosis of Winter Wheat Nitrogen Status Using Unmanned Aerial Vehicle-Based Hyperspectral Remote Sensing
by Liyang Huangfu, Jundang Jiao, Zhichao Chen, Lixiao Guo, Weidong Lou and Zheng Zhang
Appl. Sci. 2025, 15(4), 1869; https://doi.org/10.3390/app15041869 - 11 Feb 2025
Abstract
The nitrogen nutrition index (NNI) is a key agronomic indicator used to assess the nitrogen nutrition status of crops. Using remote sensing to invert it is crucial for accurately diagnosing and managing crop nitrogen nutrition during critical periods. This study utilizes the UHD185 airborne hyperspectral imager and the ASD FieldSpec 3 portable spectrometer to acquire hyperspectral remote sensing data and agronomic parameters of the winter wheat canopy during the jointing and flowering stages. The objective is to estimate the NNI of winter wheat through a winter wheat nitrogen gradient experiment conducted in Leling, Shandong Province. The ASD spectral reflectance data of the winter wheat canopy were selected as the reference standard and compared with the UHD185 hyperspectral data obtained from an unmanned aerial vehicle (UAV). The comparison focused on the trends in the spectral curves and the spectral correlation between the two datasets. The findings indicated a strong agreement between the UHD185 hyperspectral data and the ASD spectral data in the range of 450–830 nm. A spectral index was developed to estimate the nitrogen nutrition index utilizing the bands within this range. The linear model based on the first-order derivative ratio spectral index (RSI) (FD666, FD826) demonstrated the highest accuracy in estimating the nitrogen nutrition index in winter wheat. The model yielded R2 values of 0.85 and 0.75, respectively, and can be represented by the equation y = −2.0655x + 0.156. The results serve as a benchmark for future use of UHD185 hyperspectral data in estimating agronomic characteristics of winter wheat. Full article
(This article belongs to the Special Issue State-of-the-Art Agricultural Science and Technology in China)
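The abstract quotes a concrete model, y = −2.0655x + 0.156, where x is the ratio spectral index built from first-derivative reflectance at 666 nm and 826 nm. A hedged sketch of that computation; the derivative step, the sampling grid, and the FD666/FD826 ratio order are assumptions, not the authors' exact setup:

```python
def central_diff(values, step):
    """First derivative of a uniformly sampled spectrum by central differences."""
    return [(values[i + 1] - values[i - 1]) / (2 * step)
            for i in range(1, len(values) - 1)]

def nni_from_rsi(fd666, fd826):
    """Linear NNI model quoted in the abstract: y = -2.0655x + 0.156,
    where x is assumed to be RSI = FD666 / FD826."""
    rsi = fd666 / fd826
    return -2.0655 * rsi + 0.156

# Derivative of a toy curve sampled every 1 nm, then the NNI model
# applied to invented first-derivative values.
deriv = central_diff([0.0, 1.0, 4.0, 9.0], step=1.0)
nni = nni_from_rsi(-0.1, 0.2)
```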
22 pages, 5683 KiB  
Article
Co-Registration of Multi-Modal UAS Pushbroom Imaging Spectroscopy and RGB Imagery Using Optical Flow
by Ryan S. Haynes, Arko Lucieer, Darren Turner and Emiliano Cimoli
Drones 2025, 9(2), 132; https://doi.org/10.3390/drones9020132 - 11 Feb 2025
Abstract
Remote sensing from unoccupied aerial systems (UASs) has witnessed exponential growth. The increasing use of imaging spectroscopy sensors and RGB cameras on UAS platforms demands accurate, cross-comparable multi-sensor data. Inherent errors during image capture or processing can introduce spatial offsets, diminishing spatial accuracy and hindering cross-comparison and change detection analysis. To address this, we demonstrate the use of an optical flow algorithm, eFOLKI, for co-registering imagery from two pushbroom imaging spectroscopy sensors (VNIR and NIR/SWIR) to an RGB orthomosaic. Our study focuses on two ecologically diverse vegetative sites in Tasmania, Australia. Both sites are structurally complex, posing challenging datasets for co-registration algorithms with initial georectification spatial errors of up to 9 m planimetrically. The optical flow co-registration significantly improved the spatial accuracy of the imaging spectroscopy relative to the RGB orthomosaic. After co-registration, spatial alignment errors were greatly improved, with RMSE and MAE values of less than 13 cm for the higher-spatial-resolution dataset and less than 33 cm for the lower resolution dataset, corresponding to only 2–4 pixels in both cases. These results demonstrate the efficacy of optical flow co-registration in reducing spatial discrepancies between multi-sensor UAS datasets, enhancing accuracy and alignment to enable robust environmental monitoring. Full article
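The reported alignment quality (RMSE and MAE under 13–33 cm, i.e. 2–4 pixels) is the standard check-point residual computation. A sketch with invented offsets and GSD:

```python
import math

def alignment_errors(offsets_cm, gsd_cm):
    """RMSE and MAE of co-registration check-point offsets, plus RMSE
    expressed in pixels given the ground sampling distance (GSD).
    Offsets and GSD below are illustrative, not the paper's data."""
    n = len(offsets_cm)
    rmse = math.sqrt(sum(o * o for o in offsets_cm) / n)
    mae = sum(abs(o) for o in offsets_cm) / n
    return rmse, mae, rmse / gsd_cm

rmse, mae, rmse_px = alignment_errors([10.0, -12.0, 8.0, -14.0], gsd_cm=4.0)
```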
25 pages, 7982 KiB  
Article
Aerial Imagery Redefined: Next-Generation Approach to Object Classification
by Eran Dahan, Itzhak Aviv and Tzvi Diskin
Information 2025, 16(2), 134; https://doi.org/10.3390/info16020134 (registering DOI) - 11 Feb 2025
Abstract
Identifying and classifying objects in aerial images are two significant and complex problems in computer vision. Fine-grained classification of objects in overhead images has become widespread in various real-world applications due to recent advancements in high-resolution satellite and airborne imaging systems. The task is challenging, particularly in low-resource settings, because its fine-grained nature makes differences between classes minor while differences within each class are significant. We introduce Classification of Objects for Fine-Grained Analysis (COFGA), a recently developed dataset for accurately categorizing objects in high-resolution aerial images. The COFGA dataset comprises 2104 images and 14,256 annotated objects across 37 distinct labels, and offers superior spatial information compared to other publicly available datasets. The MAFAT Challenge is a competition that uses COFGA to improve fine-grained classification methods. The baseline model achieved a mAP of 0.6, whereas the best-performing model achieved 0.6271 by utilizing state-of-the-art ensemble techniques and specific preprocessing techniques. We offer solutions to the difficulties of analyzing aerial images, particularly when annotated data are scarce and classes are imbalanced. The findings provide valuable insights into the detailed categorization of objects and have practical applications in urban planning, environmental assessment, and agricultural management. We discuss the constraints and potential future work, specifically emphasizing the potential to integrate supplementary modalities and contextual information into aerial imagery analysis. Full article
(This article belongs to the Special Issue Online Registration and Anomaly Detection of Cyber Security Events)
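The mAP scores quoted (0.6 baseline vs. 0.6271) are means over per-class average precision. A minimal AP computation for one class in the interpolation-free form; a sketch only, since the MAFAT Challenge's exact evaluation protocol may differ:

```python
def average_precision(scored, n_positive):
    """AP for one class: 'scored' is a list of (confidence, is_true_positive)
    detections; AP sums precision at each true positive's rank and divides
    by the number of ground-truth positives."""
    scored = sorted(scored, key=lambda p: -p[0])
    tp, ap = 0, 0.0
    for rank, (_, is_tp) in enumerate(scored, start=1):
        if is_tp:
            tp += 1
            ap += tp / rank
    return ap / n_positive

# Three toy detections, two ground-truth positives
ap = average_precision([(0.9, True), (0.8, False), (0.7, True)], n_positive=2)
```

mAP would then be the mean of `average_precision` over the 37 labels.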

17 pages, 2815 KiB  
Article
Unmanned Aerial Vehicle-Based Hyperspectral Imaging and Soil Texture Mapping with Robust AI Algorithms
by Pablo Flores Peña, Mohammad Sadeq Ale Isaac, Daniela Gîfu, Eleftheria Maria Pechlivani and Ahmed Refaat Ragab
Drones 2025, 9(2), 129; https://doi.org/10.3390/drones9020129 - 11 Feb 2025
Abstract
This paper explores the integration of UAV-based hyperspectral imaging and advanced AI algorithms for soil texture mapping and stress detection in agricultural settings. The primary focus lies on leveraging multi-modal sensor data, including hyperspectral imaging, thermal imaging, and gamma-ray spectroscopy, to enable precise monitoring of abiotic and biotic stressors in crops. An innovative algorithm combining vegetation indices, path planning, and machine learning methods is introduced to enhance the efficiency of data collection and analysis. Experimental results demonstrate significant improvements in accuracy and operational efficiency, paving the way for real-time, data-driven decision-making in precision agriculture. Full article
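The paper's path-planning component is not detailed in the abstract; a generic boustrophedon ("lawnmower") coverage pattern, the usual baseline for UAV field scans, can be sketched as follows (field size and swath width are invented):

```python
def lawnmower_waypoints(width_m, height_m, swath_m):
    """Boustrophedon ('lawnmower') waypoints covering a rectangular field:
    parallel passes one swath apart, alternating direction. A generic
    coverage-path sketch, not the paper's specific algorithm."""
    waypoints, x, left_to_right = [], swath_m / 2, True
    while x < width_m:
        ys = (0.0, height_m) if left_to_right else (height_m, 0.0)
        waypoints += [(x, ys[0]), (x, ys[1])]
        left_to_right = not left_to_right
        x += swath_m
    return waypoints

# 100 m x 50 m field, 20 m swath -> 5 passes, 10 waypoints
path = lawnmower_waypoints(100.0, 50.0, swath_m=20.0)
```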

22 pages, 5596 KiB  
Article
URAdv: A Novel Framework for Generating Ultra-Robust Adversarial Patches Against UAV Object Detection
by Hailong Xi, Le Ru, Jiwei Tian, Bo Lu, Shiguang Hu, Wenfei Wang and Xiaohui Luan
Mathematics 2025, 13(4), 591; https://doi.org/10.3390/math13040591 (registering DOI) - 11 Feb 2025
Abstract
In recent years, deep learning has been extensively deployed on unmanned aerial vehicles (UAVs), particularly for object detection. As the cornerstone of UAV-based object detection, deep neural networks are susceptible to adversarial attacks, with adversarial patches being a relatively straightforward method to implement. However, current research on adversarial patches, especially those targeting UAV object detection, is limited. This scarcity is notable given the complex and dynamically changing environment inherent in UAV image acquisition, which necessitates the development of more robust adversarial patches to achieve effective attacks. To address the challenge of adversarial attacks in UAV high-altitude reconnaissance, this paper presents a robust adversarial patch generation framework. Firstly, the dataset is reconstructed by considering various environmental factors that UAVs may encounter during image collection, and the influences of reflections and shadows during photography are integrated into patch training. Additionally, a nested optimization method is employed to enhance the continuity of attacks across different altitudes. Experimental results demonstrate that the adversarial patches generated by the proposed method exhibit greater robustness in complex environments and have better transferability among similar models. Full article
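The patch-training idea of folding reflections and shadows into the data can be illustrated by a single augmentation step. The ranges and the multiplicative/additive model below are illustrative assumptions, not the authors' formulation:

```python
import random

def augment_patch(patch, rng):
    """One environmental augmentation in the spirit of the paper's
    shadow/reflection modelling (illustrative only): scale pixels by a
    random shadow factor, add a random glare offset, clamp to [0, 1]."""
    shade = rng.uniform(0.5, 1.0)   # shadow darkens the whole patch
    glare = rng.uniform(0.0, 0.2)   # reflection brightens it
    return [[min(1.0, max(0.0, p * shade + glare)) for p in row]
            for row in patch]

rng = random.Random(0)  # seeded for reproducibility
out = augment_patch([[0.2, 0.9], [0.5, 1.0]], rng)
```

During patch optimization, such transforms would be sampled per training step so the adversarial pattern stays effective under varied lighting.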

24 pages, 13965 KiB  
Article
Estimating Canopy Chlorophyll Content of Potato Using Machine Learning and Remote Sensing
by Xiaofei Yang, Hao Zhou, Qiao Li, Xueliang Fu and Honghui Li
Agriculture 2025, 15(4), 375; https://doi.org/10.3390/agriculture15040375 - 11 Feb 2025
Abstract
Potato is a major food crop in China. Its development and nutritional state can be inferred from the chlorophyll content of its canopy. However, existing studies applying feature extraction and optimization algorithms to determine the canopy SPAD (Soil–Plant Analytical Development) values of potatoes at various fertility stages are inadequate and not very reliable. Using the Pearson feature selection algorithm and the Competitive Adaptive Reweighted Sampling (CARS) method, the vegetation indices (VIs) with the highest correlation were selected as training features based on multispectral orthophoto images from an unmanned aerial vehicle (UAV) and measured SPAD values. At various potato fertility stages, Random Forest (RF), Support Vector Regression (SVR), and Extreme Gradient Boosting (XGBoost) inversion models were constructed. The models' parameters were then optimized using the Grey Wolf Optimizer (GWO) and Sparrow Search Algorithm (SSA). The findings demonstrated a high correlation between the selected VIs and SPAD values; the optimization algorithms enhanced the models' prediction accuracy; and adding the fertility-stage feature considerably increased the accuracy over the full fertility period compared with single fertility stages. The models with the highest inversion accuracy were the CARS-SSA-RF, CARS-SSA-XGBoost, and Pearson-SSA-XGBoost models. For the single-fertility and full-fertility phases, the optimal coefficients of determination (R2s) were 0.60, 0.66, and 0.87, the root-mean-square errors (RMSEs) were 2.63, 3.23, and 2.39, and the mean absolute errors (MAEs) were 2.00, 2.75, and 1.99. Full article
(This article belongs to the Section Digital Agriculture)
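The Pearson-based selection step can be sketched as ranking candidate vegetation indices by absolute correlation with measured SPAD values. Index names and numbers below are invented, and CARS is a more involved sampling procedure not shown here:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def select_vis(vi_table, spad, top_k=2):
    """Rank candidate VIs by |r| with measured SPAD and keep the top k."""
    ranked = sorted(vi_table,
                    key=lambda name: -abs(pearson_r(vi_table[name], spad)))
    return ranked[:top_k]

# Invented per-plot VI values vs. SPAD readings
best = select_vis({"ndvi": [1.1, 2.0, 2.9, 4.2],
                   "noise": [1.0, -1.0, 1.0, -1.0]},
                  [20.5, 25.0, 30.2, 35.1], top_k=1)
```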

37 pages, 7440 KiB  
Review
Practical Guidelines for Performing UAV Mapping Flights with Snapshot Sensors
by Wouter H. Maes
Remote Sens. 2025, 17(4), 606; https://doi.org/10.3390/rs17040606 (registering DOI) - 10 Feb 2025
Abstract
Uncrewed aerial vehicles (UAVs) have transformed remote sensing, offering unparalleled flexibility and spatial resolution across diverse applications. Many of these applications rely on mapping flights using snapshot imaging sensors to create 3D models of an area or to generate orthomosaics from RGB, multispectral, hyperspectral, or thermal cameras. Based on a literature review, this paper provides comprehensive guidelines and best practices for executing such mapping flights, addressing critical aspects of flight preparation and flight execution. Key flight preparation considerations include sensor selection, flight height and GSD, flight speed, overlap settings, flight pattern, direction, and viewing angle; flight execution considerations include on-site preparations (GCPs, camera settings, sensor calibration, and reference targets) as well as on-site conditions (weather and time of flight). In all these steps, high-resolution and high-quality data acquisition needs to be balanced against feasibility constraints such as flight time, data volume, and post-flight processing time. For reflectance and thermal measurements, BRDF effects also influence the correct settings. The formulated guidelines are based on literature consensus. However, the paper also identifies knowledge gaps in mapping flight settings, particularly in viewing angle patterns, flight direction, and thermal imaging in general. The guidelines aim to advance the harmonization of UAV mapping practices, promoting reproducibility and enhanced data quality across diverse applications. Full article
(This article belongs to the Section Remote Sensing Image Processing)
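The height/GSD relationship the review discusses follows from the pinhole model (GSD = pixel pitch × height / focal length), and exposure spacing follows from footprint × (1 − overlap). A sketch with round illustrative numbers; check your own sensor's datasheet:

```python
def flight_height_m(gsd_cm, focal_mm, pixel_um):
    """Flight height needed for a target GSD:
    H = GSD * focal_length / pixel_pitch (all converted to metres)."""
    return (gsd_cm / 100.0) * (focal_mm / 1000.0) / (pixel_um * 1e-6)

def photo_spacing_m(gsd_cm, image_pixels, overlap):
    """Distance between exposures along track for a given forward overlap:
    spacing = along-track footprint * (1 - overlap)."""
    footprint = (gsd_cm / 100.0) * image_pixels
    return footprint * (1.0 - overlap)

# Illustrative camera: 10 mm focal length, 5 um pixel pitch, 4000 px rows
height = flight_height_m(gsd_cm=5, focal_mm=10, pixel_um=5)
spacing = photo_spacing_m(gsd_cm=5, image_pixels=4000, overlap=0.8)
```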

30 pages, 16247 KiB  
Article
A Scale-Invariant Looming Detector for UAV Return Missions in Power Line Scenarios
by Jiannan Zhao, Qidong Zhao, Chenggen Wu, Zhiteng Li and Feng Shuang
Biomimetics 2025, 10(2), 99; https://doi.org/10.3390/biomimetics10020099 (registering DOI) - 10 Feb 2025
Abstract
Unmanned aerial vehicles (UAVs) offer an efficient solution for power grid maintenance, but collision avoidance during return flights is challenged by crossing power lines, especially for small drones with limited computational resources. Conventional visual systems struggle to detect thin, intricate power lines, which are often overlooked or misinterpreted. While deep learning methods have improved static power line detection in images, they still struggle with dynamic scenarios where collision risks are not detected in real time. Inspired by the hypothesis that the Lobula Giant Movement Detector (LGMD) distinguishes sparse and incoherent motion in the background by detecting continuous and clustered motion contours of the looming object, we propose a Scale-Invariant Looming Detector (SILD). SILD detects motion by preprocessing video frames, enhances motion regions using attention masks, and simulates biological arousal to recognize looming threats while suppressing noise. It also predicts impending collisions during high-speed flight and overcomes the limitations of motion vision to ensure consistent sensitivity to looming objects at different scales. We compare SILD with existing static power line detection techniques, including the Hough transform and D-LinkNet with a dilated convolution-based encoder–decoder architecture. Our results show that SILD strikes an effective balance between detection accuracy and real-time processing efficiency. It is well suited for UAV-based power line detection, where high precision and low-latency performance are essential. Furthermore, we evaluated the performance of the model under various conditions and successfully deployed it on a UAV-embedded board for collision avoidance testing at power lines. This approach provides a novel perspective for UAV obstacle avoidance in power line scenarios. Full article
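The LGMD intuition, that a looming object produces a growing region of clustered motion, can be caricatured with thresholded frame differencing. This is a toy of the excitation stage only, not the SILD model:

```python
def looming_signal(frames, threshold=0.1):
    """Per-step count of pixels whose absolute frame-to-frame change
    exceeds a threshold. For an expanding (looming) object the motion
    area grows over time; receding or sparse motion does not."""
    signal = []
    for prev, cur in zip(frames, frames[1:]):
        diff = sum(1 for p, c in zip(prev, cur) if abs(c - p) > threshold)
        signal.append(diff)
    return signal

# A bright object expanding across a 10-pixel strip
frames = [
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 0, 0, 0, 0],
    [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
]
signal = looming_signal(frames)  # motion area grows as the object expands
```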

31 pages, 18303 KiB  
Article
A Novel Approach for Maize Straw Type Recognition Based on UAV Imagery Integrating Height, Shape, and Spectral Information
by Xin Liu, Huili Gong, Lin Guo, Xiaohe Gu and Jingping Zhou
Drones 2025, 9(2), 125; https://doi.org/10.3390/drones9020125 - 9 Feb 2025
Abstract
Accurately determining the distribution and quantity of maize straw types is of great significance for evaluating the effectiveness of conservation tillage, precisely estimating straw resources, and predicting the risk of straw burning. The widespread adoption of conservation tillage technology has greatly increased the diversity and complexity of maize straw coverage in fields after harvest. To improve the precision and effectiveness of remote sensing recognition for maize straw types, a novel method was proposed. This method utilized unmanned aerial vehicle (UAV) multispectral imagery, integrated the Stacking Enhanced Straw Index (SESI) introduced in this study, and combined height, shape, and spectral characteristics to improve recognition accuracy. Using the original five-band multispectral imagery, a new nine-band image of the study area was constructed by integrating the calculated SESI, Canopy Height Model (CHM), Product Near-Infrared Straw Index (PNISI), and Normalized Difference Vegetation Index (NDVI) through band combination. An object-oriented classification method, utilizing a “two-step segmentation with multiple algorithms” strategy, was employed to integrate height, shape, and spectral features, enabling rapid and accurate mapping of maize straw types. The results showed that height information obtained from the CHM and spectral information derived from SESI were essential for accurately classifying maize straw types. Compared to traditional methods that relied solely on spectral information for recognition of maize straw types, the proposed approach achieved a significant improvement in overall classification accuracy, increasing it by 8.95% to reach 95.46%, with a kappa coefficient of 0.94. The remote sensing recognition methods and findings for maize straw types presented in this study can offer valuable information and technical support to agricultural departments, environmental protection agencies, and related enterprises. Full article
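The Canopy Height Model used as the height feature is conventionally the per-cell difference between a digital surface model and a digital terrain model, floored at zero. A sketch with invented elevations, since the paper's exact CHM derivation is not given in the abstract:

```python
def canopy_height_model(dsm, dtm):
    """CHM = DSM - DTM per grid cell, clamped at zero so bare ground
    or DTM noise never yields negative heights."""
    return [[max(0.0, s - t) for s, t in zip(srow, trow)]
            for srow, trow in zip(dsm, dtm)]

# 2x2 toy grids: surface elevations vs. bare-earth elevations (metres)
chm = canopy_height_model([[102.0, 101.5], [100.2, 100.0]],
                          [[100.0, 100.0], [100.0, 100.1]])
```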

23 pages, 3547 KiB  
Article
Classification of Garden Chrysanthemum Flowering Period Using Digital Imagery from Unmanned Aerial Vehicle (UAV)
by Jiuyuan Zhang, Jingshan Lu, Qimo Qi, Mingxiu Sun, Gangjun Zheng, Qiuyan Zhang, Fadi Chen, Sumei Chen, Fei Zhang, Weimin Fang and Zhiyong Guan
Agronomy 2025, 15(2), 421; https://doi.org/10.3390/agronomy15020421 - 7 Feb 2025
Abstract
Monitoring the flowering period is essential for evaluating garden chrysanthemum cultivars and their landscaping use. However, traditional field observation methods are labor-intensive. This study proposes a classification method based on color information from canopy digital images. In this study, an unmanned aerial vehicle (UAV) with a red-green-blue (RGB) sensor was utilized to capture orthophotos of garden chrysanthemums. A mask region-convolutional neural network (Mask R-CNN) was employed to remove field backgrounds and categorize growth stages into vegetative, bud, and flowering periods. Images were then converted to the hue-saturation-value (HSV) color space to calculate eight color indices: R_ratio, Y_ratio, G_ratio, Pink_ratio, Purple_ratio, W_ratio, D_ratio, and Fsum_ratio, representing various color proportions. A color ratio decision tree and random forest model were developed to further subdivide the flowering period into initial, peak, and late periods. The results showed that the random forest model performed better with F1-scores of 0.9040 and 0.8697 on two validation datasets, requiring less manual involvement. This method provides a rapid and detailed assessment of flowering periods, aiding in the evaluation of new chrysanthemum cultivars. Full article
(This article belongs to the Special Issue New Trends in Agricultural UAV Application—2nd Edition)
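The colour-proportion indices (R_ratio, Y_ratio, and so on) can be illustrated with Python's stdlib colorsys: classify each pixel by hue band and report the proportion. The yellow hue range and the saturation/value cut-offs below are invented thresholds, not the authors':

```python
import colorsys

def yellow_ratio(pixels):
    """Share of pixels whose HSV hue falls in an assumed yellow band of
    45-70 degrees, analogous to the abstract's Y_ratio feature."""
    def is_yellow(rgb):
        h, s, v = colorsys.rgb_to_hsv(*rgb)  # h is in [0, 1)
        return 45 / 360 <= h <= 70 / 360 and s > 0.2 and v > 0.2
    return sum(1 for p in pixels if is_yellow(p)) / len(pixels)

# Four toy canopy pixels as (r, g, b) in [0, 1]: two yellowish,
# one green, one blue
ratio = yellow_ratio([(0.9, 0.8, 0.1), (0.1, 0.6, 0.2),
                      (0.95, 0.85, 0.15), (0.2, 0.2, 0.8)])
```

A decision tree or random forest would then consume such ratios per plant to assign the flowering period.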

20 pages, 946 KiB  
Article
PSNet: A Universal Algorithm for Multispectral Remote Sensing Image Segmentation
by Yifan Zheng, Zhong Chen, Tong Zheng, Chang Tian and Weiyu Dong
Remote Sens. 2025, 17(4), 563; https://doi.org/10.3390/rs17040563 - 7 Feb 2025
Abstract
Semantic segmentation, a fundamental task in remote sensing, plays a crucial role in urban planning, land monitoring, and road vehicle detection. However, compared to conventional images, multispectral remote sensing images present significant challenges due to large-scale variations, multiple bands, and complex details. These challenges manifest in three major issues: low cross-scale object segmentation accuracy, confusion between band information, and difficulties in balancing local and global information. Recognizing that traditional remote sensing indices, such as the Normalized Difference Vegetation Index and the water body index, reveal unique semantic information in specific bands, this paper proposes a feature-decoupling-based pseudo-Siamese semantic segmentation architecture. To evaluate the effectiveness and robustness of the proposed algorithm, comparative experiments were conducted on the Suichang Spatial Remote Sensing Dataset and the Potsdam-S Aerial Remote Sensing Dataset. The results demonstrate that the proposed algorithm outperforms all comparison methods, with average accuracies of 80.719% and 77.856% on the Suichang and Potsdam datasets, respectively. Full article
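The two traditional indices the paper cites as motivation have standard closed forms: NDVI for vegetation and McFeeters' NDWI for water bodies. A minimal sketch, with reflectance values invented:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    High for vegetation, low or negative for water and bare surfaces."""
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """McFeeters' water index: (Green - NIR) / (Green + NIR).
    Positive values typically indicate open water."""
    return (green - nir) / (green + nir)

veg = ndvi(nir=0.5, red=0.1)
water = ndwi(green=0.3, nir=0.1)
```

Band-specific indices like these motivate decoupling band groups in the pseudo-Siamese design, since each index is informative only in particular bands.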

35 pages, 13743 KiB  
Article
Integration of UAV Multispectral Remote Sensing and Random Forest for Full-Growth Stage Monitoring of Wheat Dynamics
by Donghui Zhang, Hao Qi, Xiaorui Guo, Haifang Sun, Jianan Min, Si Li, Liang Hou and Liangjie Lv
Agriculture 2025, 15(3), 353; https://doi.org/10.3390/agriculture15030353 - 6 Feb 2025
Abstract
Wheat is a key staple crop globally, essential for food security and sustainable agricultural development. The results of this study highlight how innovative monitoring techniques, such as UAV-based multispectral imaging, can significantly improve agricultural practices by providing precise, real-time data on crop growth. This study utilized unmanned aerial vehicle (UAV)-based remote sensing technology at the wheat experimental field of the Hebei Academy of Agriculture and Forestry Sciences to capture the dynamic growth characteristics of wheat using multispectral data, aiming to explore efficient and precise monitoring and management strategies for wheat. A UAV equipped with multispectral sensors was employed to collect high-resolution imagery at five critical growth stages of wheat: tillering, jointing, booting, flowering, and ripening. The data covered four key spectral bands: green (560 nm), red (650 nm), red-edge (730 nm), and near-infrared (840 nm). Combined with ground-truth measurements, such as chlorophyll content and plant height, 21 vegetation indices were analyzed for their nonlinear relationships with wheat growth parameters. Statistical analyses, including Pearson’s correlation and stepwise regression, were used to identify the most effective indices for monitoring wheat growth. The Normalized Difference Red-Edge Index (NDRE) and the Triangular Vegetation Index (TVI) were selected based on their superior performance in predicting wheat growth parameters, as demonstrated by their high correlation coefficients and predictive accuracy. A random forest model was developed to comprehensively evaluate the application potential of multispectral data in wheat growth monitoring. The results demonstrated that the NDRE and TVI indices were the most effective indices for monitoring wheat growth. 
The random forest model exhibited superior predictive accuracy, with a mean squared error (MSE) significantly lower than that of traditional regression models, particularly during the flowering and ripening stages, where the prediction error for plant height was less than 1.01 cm. Furthermore, dynamic analyses of UAV imagery effectively identified abnormal field areas, such as regions experiencing water stress or disease, providing a scientific basis for precision agricultural interventions. This study highlights the potential of UAV-based remote sensing technology in monitoring wheat growth, addressing the research gap in systematic full-cycle analysis of wheat. It also offers a novel technological pathway for optimizing agricultural resource management and improving crop yields. These findings are expected to advance intelligent agricultural production and accelerate the implementation of precision agriculture. Full article
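The two selected indices have standard definitions: NDRE from the study's 840 nm and 730 nm bands, and TVI in Broge and Leblanc's common form. These are stated here as the usual formulas; the paper may use variants:

```python
def ndre(nir_840, rededge_730):
    """Normalized Difference Red-Edge index from the two bands the
    study lists (840 nm NIR, 730 nm red-edge): (NIR - RE) / (NIR + RE)."""
    return (nir_840 - rededge_730) / (nir_840 + rededge_730)

def tvi(nir, green, red):
    """Triangular Vegetation Index in its common form:
    0.5 * (120 * (NIR - Green) - 200 * (Red - Green))."""
    return 0.5 * (120.0 * (nir - green) - 200.0 * (red - green))

# Invented canopy reflectances
a = ndre(nir_840=0.6, rededge_730=0.2)
b = tvi(nir=0.5, green=0.1, red=0.08)
```

Per-plot values of such indices, together with the growth-stage label, would form the feature matrix for the random forest.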

26 pages, 13026 KiB  
Article
Unified Spatial-Frequency Modeling and Alignment for Multi-Scale Small Object Detection
by Jing Liu, Ying Wang, Yanyan Cao, Chaoping Guo, Peijun Shi and Pan Li
Symmetry 2025, 17(2), 242; https://doi.org/10.3390/sym17020242 - 6 Feb 2025
Abstract
Small object detection in aerial imagery remains challenging due to sparse feature representation, limited spatial resolution, and complex background interference. Current deep learning approaches enhance detection performance through multi-scale feature fusion, leveraging convolutional operations to expand the receptive field or self-attention mechanisms for global context modeling. However, these methods rely primarily on spatial-domain features, while self-attention introduces high computational costs, and conventional fusion strategies (e.g., concatenation or addition) often result in weak feature correlation or boundary misalignment. To address these challenges, we propose a unified spatial-frequency modeling and multi-scale alignment fusion framework, termed USF-DETR, for small object detection. The framework comprises three key modules: the Spatial-Frequency Interaction Backbone (SFIB), the Dual Alignment and Balance Fusion FPN (DABF-FPN), and the Efficient Attention-AIFI (EA-AIFI). The SFIB integrates the Scharr operator for spatial edge and detail extraction and FFT/IFFT for capturing frequency-domain patterns, achieving a balanced fusion of global semantics and local details. The DABF-FPN employs bidirectional geometric alignment and adaptive attention to enhance the saliency of target regions, suppress background noise, and reduce feature asymmetry across scales. The EA-AIFI streamlines the Transformer attention mechanism by removing key-value interactions and encoding query relationships via linear projections, significantly boosting inference speed and contextual modeling. Experiments on the VisDrone and TinyPerson datasets demonstrate the effectiveness of USF-DETR, achieving improvements of 2.3% and 1.4% mAP over the baselines, respectively, while balancing accuracy and computational efficiency. The framework outperforms state-of-the-art methods in small object detection.
(This article belongs to the Special Issue Symmetry and Asymmetry Study in Object Detection)
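The abstract names only the two ingredients of the SFIB, a Scharr operator for the spatial branch and an FFT/IFFT pass for the frequency branch, without specifying how they are fused. A toy NumPy sketch of that idea follows; the kernel constants are the standard Scharr coefficients, while the low-pass cutoff and the simple averaging fusion are illustrative assumptions, not the paper's design:

```python
import numpy as np

# Standard Scharr kernels for horizontal/vertical gradients (spatial branch).
SCHARR_X = np.array([[ 3, 0,  -3],
                     [10, 0, -10],
                     [ 3, 0,  -3]], dtype=float)
SCHARR_Y = SCHARR_X.T

def conv2d(img, kernel):
    """Naive 'same' 2-D convolution with zero padding."""
    kh, kw = kernel.shape
    pad = np.pad(img, ((kh // 2,), (kw // 2,)), mode="constant")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out

def spatial_frequency_features(img, cutoff=0.25):
    """Toy spatial-frequency fusion: Scharr edge magnitude averaged with a
    low-pass FFT/IFFT reconstruction of the image."""
    img = np.asarray(img, dtype=float)
    gx = conv2d(img, SCHARR_X)
    gy = conv2d(img, SCHARR_Y)
    edges = np.hypot(gx, gy)                      # spatial branch
    spec = np.fft.fftshift(np.fft.fft2(img))      # frequency branch
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    mask = r <= cutoff * min(h, w)                # keep low frequencies only
    low = np.real(np.fft.ifft2(np.fft.ifftshift(spec * mask)))
    return 0.5 * (edges / (np.abs(edges).max() + 1e-12) + low)
```

In the actual backbone these branches would operate on learned feature maps inside a network, not raw grayscale images; the sketch only shows how edge detail and frequency content can be extracted and combined.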
20 pages, 6195 KiB  
Article
Transform Dual-Branch Attention Net: Efficient Semantic Segmentation of Ultra-High-Resolution Remote Sensing Images
by Bingyun Du, Lianlei Shan, Xiaoyu Shao, Dongyou Zhang, Xinrui Wang and Jiaxi Wu
Remote Sens. 2025, 17(3), 540; https://doi.org/10.3390/rs17030540 - 5 Feb 2025
Abstract
With the advancement of remote sensing technology, the acquisition of ultra-high-resolution remote sensing imagery has become a reality, opening up new possibilities for detailed research and applications of Earth’s surface. These ultra-high-resolution images, with spatial resolutions at the meter or sub-meter level and pixel counts exceeding 4 million, contain rich geometric and attribute details of surface objects. Their use significantly improves the accuracy of surface feature analysis. However, this also increases the computational resource demands of deep learning-driven semantic segmentation tasks. Therefore, we propose the Transform Dual-Branch Attention Net (TDBAN), which effectively integrates global and local information through a dual-branch design, enhancing image segmentation performance and reducing memory consumption. TDBAN leverages a cross-collaborative module (CCM) based on the Transform mechanism and a data-related learnable fusion module (DRLF) to achieve adaptive content processing. Experimental results show that TDBAN achieves mean intersection over union (mIoU) of 73.6% and 72.7% on the DeepGlobe and Inria Aerial datasets, respectively, and surpasses existing models in memory efficiency, highlighting its superiority in handling ultra-high-resolution remote sensing images. This study not only advances the development of ultra-high-resolution remote sensing image segmentation technology, but also lays a solid foundation for further research in this field.
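The reported mIoU figures (73.6% and 72.7%) are per-class intersection-over-union scores averaged over classes. A small sketch of the standard metric, assuming integer label maps; the datasets' class lists and any ignore-index handling are omitted:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes, skipping classes
    absent from both the prediction and the ground truth."""
    pred = np.asarray(pred).ravel()
    target = np.asarray(target).ravel()
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (target == c))
        union = np.sum((pred == c) | (target == c))
        if union > 0:                  # class present somewhere
            ious.append(inter / union)
    return float(np.mean(ious))
```

For ultra-high-resolution imagery the per-class counts are typically accumulated tile by tile before the final division, so the metric matches a whole-image evaluation despite patched inference.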