Search Results (2,915)

Search Parameters:
Keywords = multispectral imaging

20 pages, 17838 KiB  
Article
Estimation of Tree Vitality Reduced by Pine Needle Disease Using Multispectral Drone Images
by Langning Huo, Iryna Matsiakh, Jonas Bohlin and Michelle Cleary
Remote Sens. 2025, 17(2), 271; https://doi.org/10.3390/rs17020271 - 14 Jan 2025
Viewed by 127
Abstract
Multispectral imagery from unmanned aerial vehicles (UAVs) can provide high-resolution data to map tree mortality caused by pests or diseases. Although many studies have investigated UAV-imagery-based methods to detect trees under acute stress followed by tree mortality, few have tested the feasibility and accuracy of detecting trees under chronic stress. This study aims to develop methods and test how well UAV-based multispectral imagery can detect pine needle disease long before tree mortality. Multispectral images were acquired four times through the growing season in an area with pine trees infected by needle pathogens. Vegetation indices (VIs) were used to quantify the decline in vitality, which was verified by tree needle retention (%) estimated from the ground. Results showed that several VIs had strong correlations with the needle retention level and were used to identify severely defoliated trees (<75% needle retention) with 0.71 overall classification accuracy, while the accuracy of detecting slightly defoliated trees (>75% needle retention) was very low. The results from one study area also implied that more defoliation was observed from the UAV (top view) than from the ground (bottom view). We conclude that UAV-based multispectral imagery can efficiently identify trees severely defoliated by needle-cast pathogens, thus assisting forest health monitoring.

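Several of the vegetation indices (VIs) that recur throughout these results are simple per-pixel band arithmetic. As a generic illustration only (not the pipeline of any study above), NDVI can be computed from red and near-infrared reflectance with NumPy:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index, computed per pixel.

    eps guards against division by zero over dark pixels.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Toy reflectance values: healthy canopy reflects strongly in NIR
# and absorbs in red, so NDVI decreases as vitality declines.
nir = np.array([0.50, 0.40, 0.20])
red = np.array([0.05, 0.10, 0.15])
print(ndvi(nir, red))
```

Other indices mentioned in these abstracts (GNDVI, VARI, NDCI, etc.) follow the same pattern with different band combinations.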
25 pages, 5204 KiB  
Article
Comparative Evaluation of AI-Based Multi-Spectral Imaging and PCR-Based Assays for Early Detection of Botrytis cinerea Infection on Pepper Plants
by Dimitrios Kapetas, Eleni Kalogeropoulou, Panagiotis Christakakis, Christos Klaridopoulos and Eleftheria Maria Pechlivani
Agriculture 2025, 15(2), 164; https://doi.org/10.3390/agriculture15020164 - 13 Jan 2025
Viewed by 362
Abstract
Pepper production is a critical component of the global agricultural economy, with exports reaching a remarkable $6.9B in 2023. This underscores the crop’s importance as a major economic driver of export revenue for producing nations. Botrytis cinerea, the causative agent of gray mold, significantly impacts crops like fruits and vegetables, including peppers. Early detection of this pathogen is crucial for reducing fungicide reliance and preventing economic losses. Traditionally, visual inspection has been a primary method for detection; however, symptoms often appear after the pathogen has begun to spread. This study employs the Deep Learning algorithm YOLO for single-class segmentation on plant images to extract spatial details of pepper leaves. The dataset included hyperspectral images at discrete wavelengths (460 nm, 540 nm, 640 nm, 775 nm, and 875 nm), images of derived vegetation indices (CVI, GNDVI, NDVI, NPCI, and PSRI), and RGB images. At an Intersection over Union threshold of 0.5, the Mean Average Precision (mAP50) achieved by the leaf-segmentation solution YOLOv11-Small was 86.4%. The extracted leaf segments were processed by multiple Transformer models, each yielding a descriptor. These descriptors were combined in ensemble and classified into three distinct classes using a K-nearest neighbor, a Long Short-Term Memory (LSTM), and a ResNet solution. The Transformer models that comprised the best ensemble classifier were as follows: the Swin-L (P:4 × 4–W:12 × 12), the ViT-L (P:16 × 16), the VOLO (D:5), and the XCIT-L (L:24–P:16 × 16), with the LSTM-based classification solution on the RGB, CVI, GNDVI, NDVI, and PSRI image sets. The classifier achieved an overall accuracy of 87.42% with an F1-Score of 81.13%. The per-class F1-Scores for the three classes were 85.25%, 66.67%, and 78.26%, respectively. Moreover, for B. cinerea detection during the initial as well as quiescent stages of infection prior to symptom development, qPCR-based methods (RT-qPCR) were used for quantification of in planta fungal biomass and integrated with the findings from the AI approach to offer a comprehensive strategy. The study demonstrates early and accurate detection of B. cinerea on pepper plants by combining segmentation techniques with Transformer model descriptors, ensembled for classification. This approach marks a significant step forward in the detection and management of crop diseases, highlighting the potential to integrate such methods into in situ systems like mobile apps or robots.
(This article belongs to the Section Digital Agriculture)

21 pages, 8384 KiB  
Article
Multi-Temporal Image Fusion-Based Shallow-Water Bathymetry Inversion Method Using Active and Passive Satellite Remote Sensing Data
by Jie Li, Zhipeng Dong, Lubin Chen, Qiuhua Tang, Jiaoyu Hao and Yujie Zhang
Remote Sens. 2025, 17(2), 265; https://doi.org/10.3390/rs17020265 - 13 Jan 2025
Viewed by 236
Abstract
In the active–passive fusion-based bathymetry inversion method using single-temporal images, image data often suffer from errors due to inadequate atmospheric correction and interference from neighboring land and water pixels. This results in the generation of noise, making high-quality data difficult to obtain. To address this problem, this paper introduces a multi-temporal image fusion method. First, a median filter is applied to separate land and water pixels, eliminating the influence of adjacent land and water pixels. Next, multiple images captured at different times are fused to remove noise caused by water surface fluctuations and surface vessels. Finally, ICESat-2 laser altimeter data are fused with multi-temporal Sentinel-2 satellite data to construct a machine learning framework for coastal bathymetry. The bathymetric control points are extracted from ICESat-2 ATL03 products rather than from field measurements. A backpropagation (BP) neural network model is then used to incorporate the initial multispectral information of Sentinel-2 data at each bathymetric point and its surrounding area during the training process. Bathymetric maps of the study areas are generated based on the trained model. In the three study areas selected in the South China Sea (SCS), the validation is performed by comparing with the measurement data obtained using shipborne single-beam or multi-beam and airborne laser bathymetry systems. The root mean square errors (RMSEs) of the model using the band information after image fusion and median filter processing are better than 1.82 m, and the mean absolute errors (MAEs) are better than 1.63 m. The results show that the proposed method achieves good performance and can be applied for shallow-water terrain inversion.

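The RMSE and MAE figures quoted in this and several other abstracts are standard error metrics computed against reference measurements. A minimal sketch, using hypothetical depth values rather than any data from the study:

```python
import numpy as np

def rmse(pred, obs):
    """Root mean square error between predictions and observations."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def mae(pred, obs):
    """Mean absolute error between predictions and observations."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.mean(np.abs(pred - obs)))

# Hypothetical inverted depths vs. multibeam ground truth, in metres.
depth_pred = [5.2, 7.9, 10.4, 3.1]
depth_obs = [5.0, 8.3, 9.8, 3.5]
print(rmse(depth_pred, depth_obs), mae(depth_pred, depth_obs))
```

RMSE penalizes large individual errors more heavily than MAE, which is why papers often report both.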
18 pages, 7292 KiB  
Article
Concurrent Viewing of H&E and Multiplex Immunohistochemistry in Clinical Specimens
by Larry E. Morrison, Tania M. Larrinaga, Brian D. Kelly, Mark R. Lefever, Rachel C. Beck and Daniel R. Bauer
Diagnostics 2025, 15(2), 164; https://doi.org/10.3390/diagnostics15020164 - 13 Jan 2025
Viewed by 231
Abstract
Background/Objectives: Performing hematoxylin and eosin (H&E) staining and immunohistochemistry (IHC) on the same specimen slide provides advantages that include specimen conservation and the ability to combine the H&E context with biomarker expression at the individual cell level. We previously used invisible deposited chromogens and dual-camera imaging, including monochrome and color cameras, to implement simultaneous H&E and IHC. Using this approach, conventional H&E staining could be simultaneously viewed in color on a computer monitor alongside a monochrome video of the invisible IHC staining, while manually scanning the specimen. Methods: We have now simplified the microscope system to a single camera and increased the IHC multiplexing to four biomarkers using translational assays. The color camera used in this approach also enabled multispectral imaging, similar to monochrome cameras. Results: Application is made to several clinically relevant specimens, including breast cancer (HER2, ER, and PR), prostate cancer (PSMA, P504S, basal cell, and CD8), Hodgkin’s lymphoma (CD15 and CD30), and melanoma (LAG3). Additionally, invisible chromogenic IHC was combined with conventional DAB IHC to present a multiplex IHC assay with unobscured DAB staining, suitable for visual interrogation. Conclusions: Simultaneous staining and detection, as described here, provides the pathologist a means to evaluate complex multiplexed assays, while seated at the microscope, with the added multispectral imaging capability to support digital pathology and artificial intelligence workflows of the future.
(This article belongs to the Special Issue New Promising Diagnostic Signatures in Histopathological Diagnosis)

36 pages, 13780 KiB  
Article
Combining a Standardized Growth Class Assessment, UAV Sensor Data, GIS Processing, and Machine Learning Classification to Derive a Correlation with the Vigour and Canopy Volume of Grapevines
by Ronald P. Dillner, Maria A. Wimmer, Matthias Porten, Thomas Udelhoven and Rebecca Retzlaff
Sensors 2025, 25(2), 431; https://doi.org/10.3390/s25020431 - 13 Jan 2025
Viewed by 286
Abstract
Assessing vines’ vigour is essential for vineyard management and automatization of viticulture machines, including shaking adjustments of berry harvesters during grape harvest or leaf pruning applications. To address these problems, based on a standardized growth class assessment, labeled ground truth data of precisely located grapevines were predicted with specifically selected Machine Learning (ML) classifiers (Random Forest Classifier (RFC), Support Vector Machines (SVM)), utilizing multispectral UAV (Unmanned Aerial Vehicle) sensor data. The input features for ML model training comprise spectral, structural, and texture feature types generated from multispectral orthomosaics (spectral features), Digital Terrain and Surface Models (DTM/DSM; structural features), and Gray-Level Co-occurrence Matrix (GLCM) calculations (texture features). The specific features were selected based on extensive literature research, especially in the fields of precision agriculture and viticulture. To integrate only vine-canopy-exclusive features into the ML classifications, the different feature types were extracted and spatially aggregated (zonal statistics) based on a vine row mask around each single grapevine position, created with a combined pixel- and object-based image segmentation technique. The extracted canopy features were progressively grouped into seven input feature groups for model training. Model overall performance metrics were optimized with grid-search-based hyperparameter tuning and repeated k-fold cross-validation. Finally, ML-based growth class prediction results were extensively discussed and evaluated for overall (accuracy, f1-weighted) and growth-class-specific classification metrics (accuracy, user and producer accuracy).
(This article belongs to the Special Issue Remote Sensing for Crop Growth Monitoring)

26 pages, 10085 KiB  
Article
Improvement of Citrus Yield Prediction Using UAV Multispectral Images and the CPSO Algorithm
by Wenhao Xu, Xiaogang Liu, Jianhua Dong, Jiaqiao Tan, Xulei Wang, Xinle Wang and Lifeng Wu
Agronomy 2025, 15(1), 171; https://doi.org/10.3390/agronomy15010171 - 12 Jan 2025
Viewed by 256
Abstract
Achieving timely and non-destructive assessments of crop yields is a key challenge in the agricultural field, as it is important for optimizing field management measures and improving crop productivity. To accurately and quickly predict citrus yield, this study obtained multispectral images of citrus at fruit maturity through an unmanned aerial vehicle (UAV) and extracted multispectral vegetation indices (VIs) and texture features (T) from the images as feature variables. Extreme gradient boosting (XGB), random forest (RF), support vector machine (SVM), Gaussian process regression (GPR), and multiple stepwise regression (MSR) models were used to construct citrus fruit number and quality prediction models. The results show that, for fruit number prediction, the XGB model performed best under the combined input of VIs and T, with an R2 = 0.792 and an RMSE = 462 fruits. However, for fruit quality prediction, the RF model performed best when only the VIs were used, with an R2 = 0.787 and an RMSE = 20.0 kg. Although the model accuracy was acceptable, the number of input feature variables used was large. To further improve the model prediction performance, we explored a method that utilizes a hybrid coding particle swarm optimization algorithm (CPSO) coupled with the XGB and SVM models. The coupled models showed a significant improvement in predicting the number and quality of citrus fruits, especially the model coupling CPSO with XGB (CPSO-XGB). The CPSO-XGB model had fewer input features and higher accuracy, with an R2 > 0.85. Finally, the Shapley additive explanations (SHAP) method was used to reveal the importance of the normalized difference chlorophyll index (NDCI) and the red band mean feature (MEA_R) when constructing the prediction model. The results of this study provide an application reference and a theoretical basis for research on UAV remote sensing in relation to citrus yield.
(This article belongs to the Special Issue Advances in Data, Models, and Their Applications in Agriculture)

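The feature-selection idea behind CPSO, reduced to its core, couples a binary particle swarm with a wrapped regression model: each particle is a feature mask, scored by the wrapped model's error plus a cost per selected feature. The sketch below is a simplified stand-in only: a sigmoid-velocity binary PSO wrapped around ordinary least squares on synthetic data, not the authors' CPSO-XGB implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def subset_rmse(X, y, mask):
    """Training RMSE of ordinary least squares on the selected columns."""
    if not mask.any():
        return np.inf
    A = np.c_[np.ones(len(X)), X[:, mask]]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sqrt(np.mean((A @ coef - y) ** 2)))

def fitness(X, y, mask, penalty=0.02):
    """RMSE plus a small cost per feature, so leaner subsets win ties."""
    return subset_rmse(X, y, mask) + penalty * mask.sum()

def binary_pso(X, y, n_particles=20, iters=40, w=0.7, c1=1.5, c2=1.5):
    """Minimal sigmoid-velocity binary PSO over boolean feature masks."""
    d = X.shape[1]
    vel = rng.normal(0.0, 1.0, (n_particles, d))
    pos = rng.random((n_particles, d)) < 0.5
    pbest = pos.copy()
    pbest_f = np.array([fitness(X, y, p) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, d))
        r2 = rng.random((n_particles, d))
        # Velocity pulls each particle toward its personal and global bests.
        vel = (w * vel
               + c1 * r1 * (pbest.astype(float) - pos.astype(float))
               + c2 * r2 * (gbest.astype(float) - pos.astype(float)))
        # Sigmoid of the velocity gives the probability of selecting a bit.
        pos = rng.random((n_particles, d)) < 1.0 / (1.0 + np.exp(-vel))
        f = np.array([fitness(X, y, p) for p in pos])
        improved = f < pbest_f
        pbest[improved] = pos[improved]
        pbest_f[improved] = f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest

# Synthetic yield-like target: only features 0 and 3 carry signal.
X = rng.normal(size=(120, 8))
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(0.0, 0.1, 120)
mask = binary_pso(X, y)
print(mask, subset_rmse(X, y, mask))
```

The per-feature penalty is what drives the "fewer input features, higher accuracy" trade-off the abstract describes; without it, a least-squares wrapper would always prefer the full feature set.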
22 pages, 33216 KiB  
Article
Characterizing Sparse Spectral Diversity Within a Homogenous Background: Hydrocarbon Production Infrastructure in Arctic Tundra near Prudhoe Bay, Alaska
by Daniel Sousa, Latha Baskaran, Kimberley Miner and Elizabeth Josephine Bushnell
Remote Sens. 2025, 17(2), 244; https://doi.org/10.3390/rs17020244 - 11 Jan 2025
Viewed by 525
Abstract
We explore a new approach for the parsimonious, generalizable, efficient, and potentially automatable characterization of spectral diversity of sparse targets in spectroscopic imagery. The approach focuses on pixels which are not well modeled by linear subpixel mixing of the Substrate, Vegetation, and Dark (S, V, and D) endmember spectra which dominate spectral variance for most of Earth’s land surface. We illustrate the approach using AVIRIS-3 imagery of anthropogenic surfaces (primarily hydrocarbon extraction infrastructure) embedded in a background of Arctic tundra near Prudhoe Bay, Alaska. Computational experiments further explore sensitivity to spatial and spectral resolution. Analysis involves two stages: first, computing the mixture residual of a generalized linear spectral mixture model; and second, nonlinear dimensionality reduction via manifold learning. Anthropogenic targets and lakeshore sediments are successfully isolated from the Arctic tundra background. Dependence on spatial resolution is observed, with substantial degradation of manifold topology as images are blurred from the 5 m native ground sampling distance to the simulated 30 m ground projected instantaneous field of view of a hypothetical spaceborne sensor. Degrading spectral resolution to mimic the Sentinel-2A MultiSpectral Imager (MSI) also results in a loss of information, but the effect is less severe than that of spatial blurring. These results inform spectroscopic characterization of sparse targets using spectroscopic images of varying spatial and spectral resolution.

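The first analysis stage described above, the mixture residual, fits each pixel as a linear combination of the endmember spectra and keeps whatever the model cannot explain; pixels with large residuals are the spectrally "sparse" targets. A minimal sketch with invented six-band endmembers (real AVIRIS-3 spectra have hundreds of channels, and the paper's generalized mixture model is more elaborate):

```python
import numpy as np

# Hypothetical endmember spectra as columns: Substrate, Vegetation, Dark,
# sampled at six bands. These values are illustrative, not from the study.
E = np.array([
    [0.30, 0.05, 0.02],
    [0.32, 0.08, 0.02],
    [0.35, 0.06, 0.02],
    [0.38, 0.40, 0.02],
    [0.40, 0.45, 0.02],
    [0.42, 0.43, 0.02],
])

def mixture_residual(pixel, E):
    """Least-squares subpixel mixing fit; return the residual spectrum."""
    fractions, *_ = np.linalg.lstsq(E, pixel, rcond=None)
    return pixel - E @ fractions

# A pure S/V/D mixture leaves essentially zero residual ...
mixed = E @ np.array([0.5, 0.4, 0.1])
# ... while a pixel with a spectral feature outside the S/V/D span does not.
anomalous = mixed + np.array([0.0, 0.0, 0.2, 0.0, 0.0, 0.0])
print(np.linalg.norm(mixture_residual(mixed, E)))      # ~0
print(np.linalg.norm(mixture_residual(anomalous, E)))  # clearly nonzero
```

In the paper, this residual spectrum (not just its norm) is then fed to manifold learning to organize the anomalous pixels.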
21 pages, 4440 KiB  
Article
Automatic Grape Cluster Detection Combining YOLO Model and Remote Sensing Imagery
by Ana María Codes-Alcaraz, Nicola Furnitto, Giuseppe Sottosanti, Sabina Failla, Herminia Puerto, Carmen Rocamora-Osorio, Pedro Freire-García and Juan Miguel Ramírez-Cuesta
Remote Sens. 2025, 17(2), 243; https://doi.org/10.3390/rs17020243 - 11 Jan 2025
Viewed by 307
Abstract
Precision agriculture has recently experienced significant advancements through the use of technologies such as unmanned aerial vehicles (UAVs) and satellite imagery, enabling more efficient and precise agricultural management. Yield estimation from these technologies is essential for optimizing resource allocation, improving harvest logistics, and supporting decision-making for sustainable vineyard management. This study aimed to evaluate grape cluster numbers estimated by using YOLOv7x in combination with images obtained by UAVs from a vineyard. Additionally, the capability of several vegetation indices calculated from Sentinel-2 and PlanetScope satellites to estimate grape clusters was evaluated. The results showed that the application of the YOLOv7x model to RGB images acquired from UAVs was able to accurately predict grape cluster numbers (R2 = 0.64; RMSE = 0.78 clusters vine−1). On the contrary, vegetation indices derived from Sentinel-2 and PlanetScope satellites were not able to predict grape cluster numbers (R2 lower than 0.23), probably because these indices are more related to vegetation vigor, which is not always related to yield parameters (e.g., cluster number). This study suggests that the combination of high-resolution UAV images, multispectral satellite images, and advanced detection models like YOLOv7x can significantly improve the accuracy of vineyard management, resulting in more efficient and sustainable agriculture.
(This article belongs to the Special Issue Cropland and Yield Mapping with Multi-source Remote Sensing)

17 pages, 7144 KiB  
Article
Fine-Grained Building Classification in Rural Areas Based on GF-7 Data
by Mingbo Liu, Ping Wang, Peng Han, Longfei Liu and Baotian Li
Sensors 2025, 25(2), 392; https://doi.org/10.3390/s25020392 - 10 Jan 2025
Viewed by 233
Abstract
Building type information is widely used in various fields, such as disaster management, urbanization studies, and population modelling. Few studies have been conducted on fine-grained building classification in rural areas using China’s Gaofen-7 (GF-7) high-resolution stereo mapping satellite data. In this study, we employed a two-stage method combining supervised classification and unsupervised clustering to classify buildings in the rural area of Pingquan, northern China, based on building footprints, building heights, and multispectral information extracted from GF-7 data. In the supervised classification stage, we compared different classification models, including Extreme Gradient Boosting (XGBoost) and Random Forest classifiers. The best-performing XGBoost model achieved an overall roof type classification accuracy of 88.89%. Additionally, we proposed a template-based building height correction method for pitched roof buildings, which combined geometric features of the building footprint, street view photos, and height information extracted from the GF-7 stereo image. This method reduced the RMSE of the pitched roof building heights from 2.28 m to 1.20 m. In the cluster analysis stage, buildings with different roof types were further classified in the color and shape feature spaces and combined with the building height information to produce fine-grained building type codes. The results of the roof type classification and fine-grained building classification reveal the physical and geometric characteristics of buildings and the spatial distribution of different building types in the study area. The building classification method proposed in this study has broad application prospects for disaster management in rural areas.

24 pages, 8166 KiB  
Article
UAV Remote Sensing Technology for Wheat Growth Monitoring in Precision Agriculture: Comparison of Data Quality and Growth Parameter Inversion
by Jikai Liu, Weiqiang Wang, Jun Li, Ghulam Mustafa, Xiangxiang Su, Ying Nian, Qiang Ma, Fengxian Zhen, Wenhui Wang and Xinwei Li
Agronomy 2025, 15(1), 159; https://doi.org/10.3390/agronomy15010159 - 10 Jan 2025
Viewed by 331
Abstract
The quality of the image data and the potential to invert crop growth parameters are essential for effectively using unmanned aerial vehicle (UAV)-based sensor systems in precision agriculture (PA). However, the existing research falls short in providing a comprehensive examination of sensor data quality and the inversion potential of crop growth parameters, and there is still ambiguity regarding how the quality of data affects the inversion potential. Therefore, this study explored the application potential of RGB and multispectral (MS) images acquired from three lightweight UAV platforms in the realm of PA: the DJI Mavic 2 Pro (M2P), Phantom 4 Multispectral (P4M), and Mavic 3 Multispectral (M3M). The reliability of pixel-scale data quality was evaluated based on image quality assessment metrics, and three winter wheat growth parameters, above-ground biomass (AGB), plant nitrogen content (PNC), and soil and plant analysis development (SPAD), were inverted using machine learning models based on multi-source image features at the plot scale. The results indicated that the RGB image quality from the M3M outperformed that of the M2P, while the MS image quality was marginally superior to that of the P4M. Nevertheless, these advantages in pixel-scale data quality did not improve inversion accuracy for crop parameters at the plot scale. Spectral features (SFs) derived from the P4M-based MS sensor demonstrated significant advantages in AGB inversion (R2 = 0.86, rRMSE = 27.47%), while SFs derived from the M2P-based RGB camera exhibited the best performance in SPAD inversion (R2 = 0.60, rRMSE = 7.67%). Additionally, combining spectral and textural features derived from the P4M-based MS sensor yielded the highest accuracy in PNC inversion (R2 = 0.82, rRMSE = 14.62%). This study clarified the data quality of three prevalent UAV-mounted sensor systems in PA and their influence on parameter inversion potential, offering guidance for selecting appropriate sensors and monitoring key crop growth parameters.
(This article belongs to the Section Agricultural Biosystem and Biological Engineering)

7 pages, 1868 KiB  
Communication
A Sentinel-2-Based System to Detect and Monitor Oil Spills: Demonstration on 2024 Tobago Accident
by Emilio D’Ugo, Ashish Kallikkattilkuruvila, Roberto Giuseppetti, Alejandro Carvajal, Abdou Mbacke Diouf, Matteo Tucci, Federico Aulenta, Alessandro Ursi, Patrizia Sacco, Deodato Tapete, Giovanni Laneve and Fabio Magurano
Remote Sens. 2025, 17(2), 230; https://doi.org/10.3390/rs17020230 - 10 Jan 2025
Viewed by 307
Abstract
In this paper, we analyze the serious environmental accident caused by a massive oil spill on 7 February 2024, off the island of Tobago, using two separate algorithms, namely, the established visible near-red index (VNRI) algorithm and the novel visible reflectance ratio index (IVI), both applied to Sentinel-2 satellite images. These algorithms were originally designed to monitor oil spills in inland waters. In this paper, where the IVI is presented for the first time, its effectiveness in the open sea is also showcased, allowing the identification and subsequent monitoring over time of the oily masses that threaten the coral reef of the island. The analysis suggests that, given sufficient cloud-free conditions, high-temporal-revisit multispectral optical satellites could support the timely detection and tracking of oil masses during environmental incidents near natural sanctuaries.

19 pages, 21678 KiB  
Article
Combining UAV-Based Multispectral and Thermal Images to Diagnosing Dryness Under Different Crop Areas on the Loess Plateau
by Juan Zhang, Yuan Qi, Qian Li, Jinlong Zhang, Rui Yang, Hongwei Wang and Xiangfeng Li
Agriculture 2025, 15(2), 126; https://doi.org/10.3390/agriculture15020126 - 8 Jan 2025
Viewed by 298
Abstract
Dryness is a critical limiting factor for achieving high agricultural productivity on China’s Loess Plateau (LP). High-precision, field-scale dryness monitoring is essential for the implementation of precision agriculture. However, obtaining dryness information with adequate spatial and temporal resolution remains a significant challenge. Unmanned aerial vehicle (UAV) systems can capture high-resolution remote sensing images on demand, but the effectiveness of UAV-based dryness indices in mapping the high-resolution spatial heterogeneity of dryness across different crop areas at the agricultural field scale on the LP has yet to be fully explored. Here, we conducted UAV–ground synchronized experiments on three typical croplands in the eastern Gansu province of the LP. Multispectral and thermal infrared sensors mounted on the UAV were used to collect high-resolution multispectral and thermal images. The temperature vegetation dryness index (TVDI) and the temperature–vegetation–soil moisture dryness index (TVMDI) were calculated based on UAV imagery. A total of 14 vegetation indices (VIs) were employed to construct various VI-based TVDIs, and the optimal VI was selected. Correlation analysis and Gradient Structure Similarity (GSSIM) were applied to evaluate the suitability and spatial differences between the TVDI and TVMDI for dryness monitoring. The results indicate that TVDIs constructed using the normalized difference vegetation index (NDVI) and the visible atmospherically resistant index (VARI) were more consistent with the characteristics of crop responses to dryness stress. Furthermore, the TVDI demonstrated higher sensitivity in dryness monitoring compared with the TVMDI, making it more suitable for assessing dryness variations in rain-fed agriculture in arid regions.
(This article belongs to the Section Digital Agriculture)

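The TVDI used above locates each pixel between a "wet" and a "dry" edge in NDVI–surface-temperature space: 0 means the pixel sits on the wet (unstressed) edge, 1 on the dry edge. A minimal sketch with assumed edge coefficients, not values fitted from the study's imagery:

```python
import numpy as np

def tvdi(ts, ndvi, dry_edge, wet_edge):
    """Temperature Vegetation Dryness Index per pixel.

    dry_edge and wet_edge are (intercept, slope) pairs of the lines
    Ts_max = a + b * NDVI and Ts_min = a' + b' * NDVI. In practice these
    are fitted to the NDVI-Ts scatter of the scene; here they are given.
    """
    a, b = dry_edge
    ap, bp = wet_edge
    ts_max = a + b * np.asarray(ndvi, float)
    ts_min = ap + bp * np.asarray(ndvi, float)
    return (np.asarray(ts, float) - ts_min) / (ts_max - ts_min)

# Toy scene: denser canopy (higher NDVI) runs cooler, lowering dryness.
ndvi = np.array([0.2, 0.5, 0.8])
ts = np.array([38.0, 30.0, 24.0])   # land surface temperature, deg C
dry = (45.0, -15.0)                  # assumed dry edge: hot, falls with NDVI
wet = (20.0, -2.0)                   # assumed wet edge: cool, nearly flat
print(tvdi(ts, ndvi, dry, wet))
```

Fitting the two edges from the scene's own NDVI–Ts scatter is what ties the index to local conditions; the VI used for the x-axis (NDVI vs. VARI, etc.) is exactly the choice the study evaluates.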
24 pages, 13944 KiB  
Article
A Comparative Analysis of Spatial Resolution Sentinel-2 and Pleiades Imagery for Mapping Urban Tree Species
by Fabio Recanatesi, Antonietta De Santis, Lorenzo Gatti, Alessio Patriarca, Eros Caputi, Giulia Mancini, Chiara Iavarone, Carlo Maria Rossi, Gabriele Delogu, Miriam Perretta, Lorenzo Boccia and Maria Nicolina Ripa
Land 2025, 14(1), 106; https://doi.org/10.3390/land14010106 - 7 Jan 2025
Viewed by 474
Abstract
Urbanization poses significant challenges to ecosystems, resources, and human well-being, necessitating sustainable planning. Urban vegetation, particularly trees, provides critical ecosystem services such as carbon sequestration, air quality improvement, and biodiversity conservation. Traditional tree assessments are resource-intensive and time-consuming. Recent advances in remote sensing, especially high-resolution multispectral imagery and object-based image analysis (OBIA), offer efficient alternatives for mapping urban vegetation. This study evaluates and compares the efficacy of Sentinel-2 and Pléiades satellite imagery in classifying tree species within historic urban parks in Rome—Villa Borghese, Villa Ada Savoia, and Villa Doria Pamphilj. Pléiades imagery demonstrated superior classification accuracy, achieving an overall accuracy (OA) of 89% and a Kappa index of 0.84 in Villa Ada Savoia, compared to Sentinel-2’s OA of 66% and Kappa index of 0.47. Specific tree species, such as Pinus pinea (Stone Pine), reached a user accuracy (UA) of 84% with Pléiades versus 53% with Sentinel-2. These insights underscore the potential of integrating high-resolution remote sensing data into urban forestry practices to support sustainable urban management and planning.
16 pages, 5136 KiB  
Article
Characterization of Hazelnut Trees in Open Field Through High-Resolution UAV-Based Imagery and Vegetation Indices
by Maurizio Morisio, Emanuela Noris, Chiara Pagliarani, Stefano Pavone, Amedeo Moine, José Doumet and Luca Ardito
Sensors 2025, 25(1), 288; https://doi.org/10.3390/s25010288 - 6 Jan 2025
Viewed by 391
Abstract
The increasing demand for hazelnut kernels is favoring an upsurge in hazelnut cultivation worldwide, but ongoing climate change threatens this crop, causing yield decreases and leaving orchards subject to uncontrolled pathogen and parasite attacks. Technical advances in precision agriculture are expected to help farmers control the physio-pathological status of crops more efficiently. Here, we report a straightforward approach to monitoring hazelnut trees in an open field using aerial multispectral pictures taken by drones. A dataset of 4112 images, each with 2-megapixel resolution per tree and covering RGB, Red Edge, and near-infrared frequencies, was obtained from 185 hazelnut trees located in two different orchards of the Piedmont region (northern Italy). To increase accuracy, and especially to reduce false negatives, the image of each tree was divided into nine quadrants. For each quadrant, nine different vegetation indices (VIs) were computed, and in parallel, each tree quadrant was tagged as “healthy/unhealthy” by visual inspection. Three supervised binary classification algorithms were used to build models capable of predicting the status of the tree quadrant, using the VIs as predictors. Of the nine VIs considered, only five (GNDVI, GCI, NDREI, NRI, and GI) were good predictors, while NDVI, SAVI, RECI, and TCARI were not. Using them, a model accuracy of about 65%, with 13% false negatives, was reached largely independently of the algorithm, demonstrating that some VIs allow inferring the physio-pathological condition of these trees. These achievements support the use of drone-captured images for rapid, non-destructive physiological characterization of hazelnut trees. This approach offers a sustainable strategy for supporting farmers in their decision-making during agricultural practices. Full article
(This article belongs to the Section Smart Agriculture)
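Several of the vegetation indices named above have standard published per-pixel definitions. A minimal sketch, computing them from reflectance bands, is shown below; the band argument names and the epsilon guard against division by zero are our assumptions, and NRI is omitted because its definition varies across the literature:

```python
import numpy as np

def vegetation_indices(nir, red, green, red_edge):
    """Standard per-pixel vegetation indices from multispectral
    reflectance bands (scalars or NumPy arrays)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    green, red_edge = np.asarray(green, float), np.asarray(red_edge, float)
    eps = 1e-9  # guard against division by zero in dark pixels
    return {
        "NDVI":  (nir - red) / (nir + red + eps),          # normalized difference VI
        "GNDVI": (nir - green) / (nir + green + eps),      # green NDVI
        "GCI":   nir / (green + eps) - 1.0,                # green chlorophyll index
        "NDREI": (nir - red_edge) / (nir + red_edge + eps),# normalized diff. red-edge
        "GI":    green / (red + eps),                      # greenness index
    }

# example reflectances for a single vigorous-canopy pixel
vi = vegetation_indices(nir=0.5, red=0.1, green=0.2, red_edge=0.3)
```

Computing these per quadrant rather than per tree, as the study does, keeps localized symptoms from being averaged away across a healthy canopy.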
18 pages, 5357 KiB  
Article
Assessment of Peacock Spot Disease (Fusicladium oleagineum) in Olive Orchards Through Agronomic Approaches and UAV-Based Multispectral Imaging
by Hajar Hamzaoui, Ilyass Maafa, Hasnae Choukri, Ahmed El Bakkali, Salma El Iraqui El Houssaini, Rachid Razouk, Aziz Aziz, Said Louahlia and Khaoula Habbadi
Horticulturae 2025, 11(1), 46; https://doi.org/10.3390/horticulturae11010046 - 6 Jan 2025
Viewed by 386
Abstract
Olive leaf spot (OLS), caused by Fusicladium oleagineum, is a significant disease affecting olive orchards, reducing yields and compromising olive tree health. Early and accurate detection of this disease is critical for effective management. This study presents a comprehensive assessment of OLS disease progression in olive orchards by integrating agronomic measurements and multispectral imaging techniques. Key disease parameters—incidence, severity, diseased leaf area, and disease index—were systematically monitored from March to October, revealing peak values of 45% incidence in April and 35% severity in May. Multispectral drone imagery, using sensors for the NIR, Red, Green, and Red Edge spectral bands, enabled the calculation of vegetation indices. Indices incorporating the Red Edge and near-infrared bands, such as the Red Edge index and SR705-750, exhibited the strongest correlations with disease severity (correlation coefficients of 0.72 and 0.68, respectively). This combined approach highlights the potential of remote sensing for early disease detection and supports precision agriculture practices by facilitating targeted interventions and optimized orchard management. The findings underscore the effectiveness of integrating traditional agronomic assessment with advanced spectral analysis to improve OLS disease surveillance and promote sustainable olive cultivation. Full article
(This article belongs to the Section Fruit Production Systems)
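Correlation coefficients like the 0.72 and 0.68 reported above are typically Pearson correlations between a per-plot vegetation index and the measured disease severity; the abstract does not name the estimator, so treating it as Pearson's r is our assumption. A minimal sketch:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length
    samples, e.g. a vegetation index and disease severity per plot."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm**2).sum() * (ym**2).sum())
```

With monthly field scores (March to October) paired against index values from each drone flight, such a routine would rank candidate indices by how tightly they track severity, which is how the Red Edge-based indices would have been identified as the strongest predictors.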