Search Results (2,763)

Search Parameters:
Keywords = high-resolution remote-sensing images

21 pages, 3978 KiB  
Article
Application and Evaluation of the AI-Powered Segment Anything Model (SAM) in Seafloor Mapping: A Case Study from Puck Lagoon, Poland
by Łukasz Janowski and Radosław Wróblewski
Remote Sens. 2024, 16(14), 2638; https://doi.org/10.3390/rs16142638 - 18 Jul 2024
Viewed by 82
Abstract
The digital representation of the seafloor, a challenge in UNESCO’s Ocean Decade initiative, is essential for supporting sustainable development and protecting the marine environment, in line with the goals of the United Nations’ 2030 program. Accuracy in seafloor representation can be achieved through remote sensing measurements, including acoustic and laser sources. Integrating ground-truth information facilitates comprehensive seafloor assessment. The current seafloor mapping paradigm benefits from the object-based image analysis (OBIA) approach, which manages high-resolution remote sensing measurements effectively. A critical OBIA step is the segmentation process, for which various algorithms are available. Recent advances in artificial intelligence have led to the development of AI-powered segmentation algorithms, such as the Segment Anything Model (SAM) by Meta AI. This paper presents the first evaluation of the SAM approach for seafloor mapping. The benchmark remote sensing dataset covers Puck Lagoon, Poland, and includes measurements from various sources, primarily multibeam echosounders, bathymetric lidar, airborne photogrammetry, and satellite imagery. The SAM algorithm’s performance was evaluated on an affordable workstation equipped with an NVIDIA GPU, enabling use of the CUDA architecture. The growing popularity of and demand for AI-based services suggest their widespread application in future underwater remote sensing studies, regardless of the measurement technology used (acoustic, laser, or imagery). Applying SAM to Puck Lagoon seafloor mapping may benefit other seafloor mapping studies intending to employ AI technology.
(This article belongs to the Special Issue Advanced Remote Sensing Technology in Geodesy, Surveying and Mapping)
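For readers who want to experiment with this kind of segmentation, the minimal sketch below runs Meta AI's publicly released SAM automatic mask generator on a raster tile. The file name, checkpoint choice, and tiling are assumptions for illustration, not the workflow used in the paper.

```python
# Hypothetical sketch: applying Meta AI's Segment Anything Model (SAM) to a
# seafloor raster tile as an OBIA-style segmentation step.
# The tile path and checkpoint file are illustrative assumptions.
import numpy as np
import rasterio
import torch
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

with rasterio.open("puck_lagoon_tile.tif") as src:        # hypothetical file
    img = src.read([1, 2, 3]).transpose(1, 2, 0)           # HxWx3 array for SAM
    img = np.clip((img - img.min()) / (np.ptp(img) + 1e-9) * 255, 0, 255).astype(np.uint8)

device = "cuda" if torch.cuda.is_available() else "cpu"    # CUDA if an NVIDIA GPU is present
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth").to(device)
masks = SamAutomaticMaskGenerator(sam).generate(img)        # list of dicts with 'segmentation' masks
print(f"{len(masks)} candidate seafloor objects")
```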

20 pages, 5228 KiB  
Article
Remote Sensing Image Change Detection Based on Deep Learning: Multi-Level Feature Cross-Fusion with 3D-Convolutional Neural Networks
by Sibo Yu, Chen Tao, Guang Zhang, Yubo Xuan and Xiaodong Wang
Appl. Sci. 2024, 14(14), 6269; https://doi.org/10.3390/app14146269 - 18 Jul 2024
Viewed by 125
Abstract
Change detection (CD) in high-resolution remote sensing imagery remains challenging due to the complex nature of objects and varying spectral characteristics across different times and locations. Convolutional neural networks (CNNs) have shown promising performance in CD tasks by extracting meaningful semantic features. However, traditional 2D-CNNs may struggle to accurately integrate deep features from multi-temporal images, limiting their ability to improve CD accuracy. This study proposes a Multi-level Feature Cross-Fusion (MFCF) network with 3D-CNNs for remote sensing image change detection. The network aims to effectively extract and fuse deep features from multi-temporal images to identify surface changes. To bridge the semantic gap between high-level and low-level features, an MFCF module is introduced. A channel attention mechanism (CAM) is also integrated to enhance model performance, interpretability, and generalization capabilities. The proposed methodology is validated on the LEVIR construction dataset (LEVIR-CD). The experimental results demonstrate superior performance compared to the current state-of-the-art in evaluation metrics including recall, F1 score, and IoU. The MFCF network, which combines 3D-CNNs and a CAM, effectively utilizes multi-temporal information and deep feature fusion, resulting in precise and reliable change detection in remote sensing imagery. This study significantly contributes to the advancement of change detection methods, facilitating more efficient management and decision making across various domains such as urban planning, natural resource management, and environmental monitoring.
(This article belongs to the Special Issue Advances in Image Recognition and Processing Technologies)
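As an illustration of the building blocks named in the abstract (a 3D convolution over a bi-temporal stack plus a channel attention mechanism), here is a hedged PyTorch sketch; layer sizes and shapes are assumptions, and this is not the authors' MFCF implementation.

```python
# Minimal sketch (not the published MFCF code): a squeeze-and-excitation style
# channel attention module applied to features from a 3D convolution fusing two epochs.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                        # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))          # global average pool -> channel weights
        return x * w[:, :, None, None]

conv3d = nn.Conv3d(3, 16, kernel_size=(2, 3, 3), padding=(0, 1, 1))  # fuse T=2 epochs
pair = torch.randn(1, 3, 2, 256, 256)            # (B, bands, time, H, W) bi-temporal RGB
feat = conv3d(pair).squeeze(2)                   # (B, 16, 256, 256) fused temporal features
out = ChannelAttention(16)(feat)
print(out.shape)
```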

14 pages, 2596 KiB  
Article
Occurrence of Wetness on the Fruit Surface Modeled Using Spatio-Temporal Temperature Data from Sweet Cherry Tree Canopies
by Nicolas Tapia-Zapata, Andreas Winkler and Manuela Zude-Sasse
Horticulturae 2024, 10(7), 757; https://doi.org/10.3390/horticulturae10070757 - 17 Jul 2024
Viewed by 304
Abstract
Typically, fruit cracking in sweet cherry is associated with the occurrence of free water at the fruit surface level due to direct (rain and fog) and indirect (cold exposure and dew) mechanisms. Recent advances in close range remote sensing have enabled the monitoring of the temperature distribution with high spatial resolution based on light detection and ranging (LiDAR) and thermal imaging. The fusion of LiDAR-derived geometric 3D point clouds and merged thermal data provides spatially resolved temperature data at the fruit level as LiDAR 4D point clouds. This paper aimed to investigate the thermal behavior of sweet cherry canopies using this new method with emphasis on the surface temperature of fruit around the dew point. Sweet cherry trees were stored in a cold chamber (6 °C) and subsequently scanned at different time intervals at room temperature. A total of 62 sweet cherry LiDAR 4D point clouds were identified. The estimated temperature distribution was validated by means of manual reference readings (n = 40), where average R2 values of 0.70 and 0.94 were found for ideal and real scenarios, respectively. The canopy density was estimated using the ratio of the number of LiDAR points of fruit related to the canopy. The occurrence of wetness on the surface of sweet cherry was visually assessed and compared to an estimated dew point (Ydew) index. At mean Ydew of 1.17, no wetness was observed on the fruit surface. The canopy density ratio had a marginal impact on the thermal kinetics and the occurrence of wetness on the surface of sweet cherry in the slender spindle tree architecture. The modelling of fruit surface wetness based on estimated fruit temperature distribution can support ecophysiological studies on tree architectures considering resilience against climate change and in studies on physiological disorders of fruit.
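To make the dew-point reasoning concrete, the following toy sketch (not the authors' model) flags likely surface wetness by comparing an estimated fruit surface temperature with the Magnus-formula dew point; the numeric values and the comparison rule are assumptions.

```python
# Illustrative sketch: wetness is expected when the fruit surface temperature
# falls to or below the dew point of the surrounding air (Magnus approximation).
# Sample temperatures and humidity are assumed, not measured values from the study.
import math

def dew_point_c(t_air_c: float, rh_percent: float) -> float:
    """Magnus-formula dew point in deg C from air temperature and relative humidity."""
    a, b = 17.62, 243.12
    gamma = math.log(rh_percent / 100.0) + a * t_air_c / (b + t_air_c)
    return b * gamma / (a - gamma)

t_air, rh = 20.0, 75.0            # room conditions after cold storage (assumed)
t_fruit = 9.5                     # LiDAR/thermal-derived fruit surface temperature (assumed)
td = dew_point_c(t_air, rh)
print(f"dew point = {td:.1f} C, wetness expected: {t_fruit <= td}")
```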

22 pages, 8451 KiB  
Article
Research on the Temporal and Spatial Changes and Driving Forces of Rice Fields Based on the NDVI Difference Method
by Jinglian Tian, Yongzhong Tian, Wenhao Wan, Chenxi Yuan, Kangning Liu and Yang Wang
Agriculture 2024, 14(7), 1165; https://doi.org/10.3390/agriculture14071165 - 17 Jul 2024
Viewed by 203
Abstract
Rice is a globally important food crop, and it is crucial to accurately and conveniently obtain information on rice fields, understand their spatial patterns, and grasp their dynamic changes to address food security challenges. In this study, Chongqing’s Yongchuan District was selected as the research area. By utilizing UAVs (Unmanned Aerial Vehicles) to collect multi-spectral remote sensing data during three seasons, the phenological characteristics of rice fields were analyzed using the NDVI (Normalized Difference Vegetation Index). Based on Sentinel data with a resolution of 10 m, the NDVI difference method was used to extract rice fields between 2019 and 2023. Furthermore, the reasons for changes in rice fields over the five years were also analyzed. First, a simulation model of the rice harvesting period was constructed using data from 32 sampling points through multiple regression analysis. Based on the model, the study area was classified into six categories, and the necessary data for each region were identified. Next, the NDVI values for the pre-harvest and post-harvest periods of rice fields, as well as the differences between them, were calculated for the various regions. Additionally, 35 rice field samples were chosen each year from high-resolution images provided by Google. The thresholds for extracting rice fields were determined by statistically analyzing the differences in NDVI values within the sample areas. Using these thresholds, rice fields corresponding to the six harvesting regions were extracted separately. The rice fields extracted from the different regions were merged to obtain the rice fields of the study area from 2019 to 2023, and the accuracy of the extraction results was verified. The five years of rice field maps were then analyzed from both temporal and spatial perspectives. The temporal analysis used a transition matrix of rice field changes and the rice fields’ dynamic degree, while the spatial changes were analyzed by incorporating DEM (Digital Elevation Model) data. Finally, a logistic regression model was employed to investigate the causes of both temporal and spatial changes in the rice fields. The results indicated the following: (1) The simulation model of the rice harvesting period can quickly and accurately determine the best period of remote sensing images needed to extract rice fields. (2) The confusion matrix shows the effectiveness of the NDVI difference method in extracting rice fields. (3) The total area of rice fields in the study area did not change much each year, but there were still significant spatial adjustments. Over the five years, the spatial distribution of gained rice fields was relatively uniform, while the lost rice fields showed obvious regional differences; combined with the altitude analysis, gained rice fields tended to occur in lower-lying areas. (4) The logistic regression analysis revealed that gained rice fields tended to be found in regions with convenient irrigation, flat terrain, lower altitude, and proximity to residential areas. Conversely, lost rice fields were typically located in areas with inconvenient irrigation, long distances from residential areas, low population, and negative topography.
(This article belongs to the Section Digital Agriculture)
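The core of the NDVI difference method is simple enough to sketch: compute NDVI before and after harvest and flag pixels whose drop exceeds a sample-derived threshold. The snippet below is a hedged illustration; file names, band order, and the threshold value are assumptions rather than the study's actual settings.

```python
# Minimal NDVI-difference sketch (illustrative, not the authors' exact workflow).
import numpy as np
import rasterio

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    return (nir - red) / (nir + red + 1e-9)

with rasterio.open("pre_harvest.tif") as pre, rasterio.open("post_harvest.tif") as post:
    ndvi_pre = ndvi(pre.read(1).astype(float), pre.read(2).astype(float))    # band order assumed
    ndvi_post = ndvi(post.read(1).astype(float), post.read(2).astype(float))

delta = ndvi_pre - ndvi_post          # rice parcels show a sharp NDVI drop after harvest
rice_mask = delta > 0.35              # threshold from sample statistics (assumed value)
print(f"rice pixels: {int(rice_mask.sum())}")
```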

16 pages, 4099 KiB  
Article
Multi-Frequency Spectral–Spatial Interactive Enhancement Fusion Network for Pan-Sharpening
by Yunxuan Tang, Huaguang Li, Guangxu Xie, Peng Liu and Tong Li
Electronics 2024, 13(14), 2802; https://doi.org/10.3390/electronics13142802 - 16 Jul 2024
Viewed by 220
Abstract
The objective of pan-sharpening is to effectively fuse high-resolution panchromatic (PAN) images with limited spectral information and low-resolution multispectral (LR-MS) images, thereby generating a fused image with a high spatial resolution and rich spectral information. However, current fusion techniques face significant challenges, including insufficient edge detail, spectral distortion, increased noise, and limited robustness. To address these challenges, we propose a multi-frequency spectral–spatial interaction enhancement network (MFSINet) that comprises the spectral–spatial interactive fusion (SSIF) and multi-frequency feature enhancement (MFFE) subnetworks. The SSIF enhances both spatial and spectral fusion features by optimizing the characteristics of each spectral band through band-aware processing. The MFFE employs a variant of wavelet transform to perform multiresolution analyses on remote sensing scenes, enhancing the spatial resolution, spectral fidelity, and the texture and structural features of the fused images by optimizing directional and spatial properties. Moreover, qualitative analysis and quantitative comparative experiments using the IKONOS and WorldView-2 datasets indicate that this method significantly improves the fidelity and accuracy of the fused images.
(This article belongs to the Topic Computational Intelligence in Remote Sensing: 2nd Edition)
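As background rather than a reproduction of MFSINet, the sketch below shows a classic wavelet-substitution pan-sharpening baseline: the multispectral approximation coefficients are kept and the PAN detail coefficients are injected. Array shapes and the wavelet choice are assumptions.

```python
# Classic wavelet-substitution pan-sharpening baseline (not the MFSINet above).
import numpy as np
import pywt

def wavelet_pansharpen(ms_up: np.ndarray, pan: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    """ms_up: (bands, H, W) multispectral upsampled to PAN size; pan: (H, W)."""
    pan_coeffs = pywt.wavedec2(pan, wavelet, level=2)
    sharpened = []
    for band in ms_up:
        coeffs = pywt.wavedec2(band, wavelet, level=2)
        # keep the MS approximation (spectral content), take PAN detail coefficients
        fused = [coeffs[0]] + list(pan_coeffs[1:])
        sharpened.append(pywt.waverec2(fused, wavelet))
    return np.stack(sharpened)

ms = np.random.rand(4, 256, 256)   # e.g. upsampled IKONOS-like MS bands (synthetic)
pan = np.random.rand(256, 256)
print(wavelet_pansharpen(ms, pan).shape)
```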

22 pages, 23824 KiB  
Article
DEDNet: Dual-Encoder DeeplabV3+ Network for Rock Glacier Recognition Based on Multispectral Remote Sensing Image
by Lujun Lin, Lei Liu, Ming Liu, Qunjia Zhang, Min Feng, Yasir Shaheen Khalil and Fang Yin
Remote Sens. 2024, 16(14), 2603; https://doi.org/10.3390/rs16142603 - 16 Jul 2024
Viewed by 226
Abstract
Understanding the distribution of rock glaciers provides key information for investigating and recognizing the status and changes of the cryosphere environment. Deep learning algorithms and red–green–blue (RGB) bands from high-resolution satellite images have been extensively employed to map rock glaciers. However, the near-infrared (NIR) band offers rich spectral information and sharp edge features that could significantly contribute to semantic segmentation tasks, but it is rarely utilized in constructing rock glacier identification models due to the limitation of three input bands for classical semantic segmentation networks, like DeeplabV3+. In this study, a dual-encoder DeeplabV3+ network (DEDNet) was designed to overcome the flaws of the classical DeeplabV3+ network (CDNet) when identifying rock glaciers using multispectral remote sensing images by extracting spatial and spectral features from RGB and NIR bands, respectively. This network, trained with manually labeled rock glacier samples from the Qilian Mountains, established a model with accuracy, precision, recall, specificity, and mIoU (mean intersection over union) of 0.9131, 0.9130, 0.9270, 0.9195, and 0.8601, respectively. The well-trained model was applied to identify new rock glaciers in a test region, achieving a producer’s accuracy of 93.68% and a user’s accuracy of 94.18%. Furthermore, the model was employed in two study areas in northern Tien Shan (Kazakhstan) and Daxue Shan (Hengduan Shan, China) with high accuracy, which proved that the DEDNet offers an innovative solution to more accurately map rock glaciers on a larger scale due to its robustness across diverse geographic regions.
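The dual-encoder idea can be sketched in a few lines of PyTorch: one encoder consumes RGB, another consumes NIR, and their features are concatenated before a segmentation head. This is a toy stand-in for illustration, not the published DEDNet architecture or weights.

```python
# Toy dual-encoder segmentation sketch; layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class DualEncoderSeg(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        def enc(in_ch):                          # tiny stand-in for a DeeplabV3+ backbone
            return nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.rgb_enc, self.nir_enc = enc(3), enc(1)
        self.head = nn.Conv2d(128, n_classes, 1)

    def forward(self, rgb, nir):
        feats = torch.cat([self.rgb_enc(rgb), self.nir_enc(nir)], dim=1)
        return self.head(feats)                  # per-pixel rock-glacier logits

model = DualEncoderSeg()
logits = model(torch.randn(1, 3, 128, 128), torch.randn(1, 1, 128, 128))
print(logits.shape)                              # (1, 2, 128, 128)
```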

17 pages, 2726 KiB  
Article
A Remote Sensing Image Super-Resolution Reconstruction Model Combining Multiple Attention Mechanisms
by Yamei Xu, Tianbao Guo and Chanfei Wang
Sensors 2024, 24(14), 4492; https://doi.org/10.3390/s24144492 - 11 Jul 2024
Viewed by 193
Abstract
Remote sensing images are characterized by high complexity, significant scale variations, and abundant details, which present challenges for existing deep learning-based super-resolution reconstruction methods. These algorithms often exhibit limited convolutional receptive fields and thus struggle to establish global contextual information, which can lead to inadequate utilization of both global and local details and limited generalization capabilities. To address these issues, this study introduces a novel multi-branch residual hybrid attention block (MBRHAB). This innovative approach is part of a proposed super-resolution reconstruction model for remote sensing data, which incorporates various attention mechanisms to enhance performance. First, the model employs window-based multi-head self-attention to model long-range dependencies in images. A multi-branch convolution module (MBCM) is then constructed to enlarge the convolutional receptive field for an improved representation of global information. Convolutional attention is subsequently combined across channel and spatial dimensions to strengthen associations between different features and areas containing crucial details, thereby augmenting local semantic information. Finally, the model adopts a parallel design to enhance computational efficiency. Generalization performance was assessed using a cross-dataset approach involving two training datasets (NWPU-RESISC45 and PatternNet) and a third test dataset (UCMerced-LandUse). Experimental results confirmed that the proposed method surpasses existing super-resolution algorithms, including bicubic interpolation, SRCNN, ESRGAN, Real-ESRGAN, IRN, and DSSR, in terms of PSNR and SSIM across various magnification scales.
(This article belongs to the Section Remote Sensors)
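Since the results above are reported in PSNR and SSIM, here is a short reminder of how those metrics are typically computed with scikit-image; the arrays below are synthetic stand-ins, not the study's images.

```python
# PSNR and SSIM on a synthetic ground-truth / reconstruction pair.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

hr = np.random.rand(256, 256).astype(np.float64)             # ground-truth high-resolution tile
sr = np.clip(hr + 0.05 * np.random.randn(256, 256), 0, 1)     # super-resolved estimate (synthetic)

psnr = peak_signal_noise_ratio(hr, sr, data_range=1.0)
ssim = structural_similarity(hr, sr, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```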

15 pages, 2905 KiB  
Article
Estimating Sugarcane Maturity Using High Spatial Resolution Remote Sensing Images
by Esteban Rodriguez Leandro, Muditha K. Heenkenda and Kerin F. Romero
Crops 2024, 4(3), 333-347; https://doi.org/10.3390/crops4030024 - 11 Jul 2024
Viewed by 253
Abstract
Sugarcane suffers from the increased frequency and severity of droughts and floods, negatively affecting growing conditions. Climate change has affected cultivation, and the growth dynamics have changed over the years. The identification of the development stages of sugarcane is necessary to reduce its vulnerability. Traditional methods are inefficient when detecting those changes, especially when estimating sugarcane maturity, a critical step in sugarcane production. Hence, the study aimed to develop a cost- and time-effective method to estimate sugarcane maturity using high spatial-resolution remote sensing data. Images were acquired using a drone. Field samples were collected and measured in the laboratory for brix and pol values. Normalized Difference Water Index, Green Normalized Difference Vegetation Index and green band were chosen (highest correlation with field samples) for further analysis. Random forest (RF), Support Vector Machine (SVM), and multi-linear regression models were used to predict sugarcane maturity using the brix and pol variables. The best performance was obtained from the RF model. Hence, the maturity index of the study area was calculated based on the RF model results. It was found that the field plot has not yet reached maturity for harvesting. The developed cost- and time-effective method allows temporal crop monitoring and optimizes the harvest time.
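To illustrate the regression step described above, the sketch below fits a random forest to synthetic index-versus-brix data with scikit-learn; the feature names and values are assumptions, not the study's field measurements.

```python
# Random-forest regression of brix from vegetation/water indices (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((60, 3))                     # columns: NDWI, GNDVI, green reflectance (assumed)
y = 15 + 5 * X[:, 1] - 3 * X[:, 0] + rng.normal(0, 0.5, 60)   # synthetic brix values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out samples: {rf.score(X_te, y_te):.2f}")
```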

19 pages, 2418 KiB  
Article
Research on Tobacco Field Semantic Segmentation Method Based on Multispectral Unmanned Aerial Vehicle Data and Improved PP-LiteSeg Model
by Jun Zhang, Zhenping Qiang, Hong Lin, Zhuqun Chen, Kaibo Li and Shuang Zhang
Agronomy 2024, 14(7), 1502; https://doi.org/10.3390/agronomy14071502 - 11 Jul 2024
Viewed by 357
Abstract
In recent years, the estimation of tobacco field areas has become a critical component of precision tobacco cultivation. However, traditional satellite remote sensing methods face challenges such as high costs, low accuracy, and susceptibility to noise, making it difficult to meet the demand for high precision. Additionally, optical remote sensing methods perform poorly in regions with complex terrain. Therefore, Unmanned Aerial Vehicle multispectral remote sensing technology has emerged as a viable solution due to its high resolution and rich spectral information. This study employed a DJI Mavic 3M equipped with high-resolution RGB and multispectral cameras to collect tobacco field data covering five bands: RGB, RED, RED EDGE, NIR, and GREEN in Agang Town, Luoping County, Yunnan Province, China. To ensure the accuracy of the experiment, we used 337, 242, and 215 segmented tobacco field images for model training, targeting both RGB channels and seven-channel data. We developed a tobacco field semantic segmentation method based on PP-LiteSeg and deeply customized the model to adapt to the characteristics of multispectral images. The input layer’s channel number was adjusted to multiple channels to fully utilize the information from the multispectral images. The model structure included an encoder, decoder, and SPPM module, which used a multi-layer convolution structure to achieve feature extraction and segmentation of multispectral images. The results indicated that compared to traditional RGB images, multispectral images offered significant advantages in handling edges and complex terrain for semantic segmentation. Specifically, the predicted area using the seven-channel data was 11.43 m² larger than that obtained with RGB channels. Additionally, the seven-channel model achieved a prediction accuracy of 98.84%. This study provides an efficient and feasible solution for estimating tobacco field areas based on multispectral images, offering robust support for modern agricultural management.
(This article belongs to the Special Issue Advances in Data, Models, and Their Applications in Agriculture)
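One practical detail mentioned above is widening the input layer to accept seven channels. The hedged sketch below shows the common trick of replacing a backbone's first convolution and reusing the RGB filters; it uses a torchvision DeepLabV3 stand-in, not PP-LiteSeg itself, and the band layout is an assumption.

```python
# Widening a segmentation network's stem convolution from 3 to 7 input channels.
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights=None, weights_backbone=None, num_classes=2)
old = model.backbone.conv1                                    # original 3-channel stem conv
new = nn.Conv2d(7, old.out_channels, kernel_size=old.kernel_size,
                stride=old.stride, padding=old.padding, bias=False)
with torch.no_grad():
    new.weight[:, :3] = old.weight                            # copy RGB filters
    new.weight[:, 3:] = old.weight.mean(dim=1, keepdim=True)  # init extra bands with the RGB mean
model.backbone.conv1 = new

out = model(torch.randn(1, 7, 256, 256))["out"]               # 7-band UAV tile (synthetic)
print(out.shape)
```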

27 pages, 10341 KiB  
Article
Typhoon-Induced Forest Damage Mapping in the Philippines Using Landsat and PlanetScope Images
by Benjamin Jonah Perez Magallon and Satoshi Tsuyuki
Land 2024, 13(7), 1031; https://doi.org/10.3390/land13071031 - 9 Jul 2024
Viewed by 328
Abstract
Forests provide valuable resources for households in the Philippines, particularly in poor and upland communities, making forests an integral part of building resilient communities. This relationship becomes complex during extreme events such as typhoons, as forests can contribute to the intensity and impact of disasters. However, little attention has been paid to forest cover losses caused by typhoons during disaster assessments. In this study, forest damage caused by typhoons was measured using harmonic analysis of time series (HANTS) with Landsat-8 Operational Land Imager (OLI) images. The ΔHarmonic Vegetation Index was computed as the difference between the HANTS-fitted and the actually observed vegetation index values. This was used to identify damaged areas in the forest regions and create a damage map. To validate the reliability of the results, the maps produced using the ΔHarmonic VI were compared with the damage mapped from PlanetScope’s high-resolution pre- and post-typhoon images. The method achieved an overall accuracy of 69.20%, comparable to traditional remote sensing techniques used in forest damage assessment, such as ΔVI and land cover change detection. To further the understanding of the relationship between forests and typhoon occurrence, the presence of a time lag in the observations was investigated, and different contributing factors to forest damage were identified. Most of the observed forest damage was in forest areas with slopes facing the typhoon direction and in vulnerable areas such as near the coast and on hilltops. By making it easier to identify forest areas that are vulnerable to typhoon damage, this study will help the government and forest management sectors preserve forests, ultimately contributing to more resilient communities.
(This article belongs to the Special Issue Geospatial Data in Landscape Ecology and Biodiversity Conservation)
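A HANTS-style harmonic fit reduces to a small least-squares problem. The synthetic sketch below fits a mean plus one annual harmonic to an NDVI series and uses the fitted-minus-observed residual as the damage signal, mirroring the ΔHarmonic VI idea in simplified form (this is not the HANTS implementation itself).

```python
# Simplified harmonic fit to a synthetic NDVI time series.
import numpy as np

doy = np.arange(0, 365, 16, dtype=float)                       # Landsat-like revisit days
ndvi = 0.6 + 0.2 * np.sin(2 * np.pi * doy / 365) + np.random.normal(0, 0.02, doy.size)
ndvi[-3:] -= 0.3                                               # sudden post-typhoon drop (synthetic)

# design matrix for mean + first annual harmonic
A = np.column_stack([np.ones_like(doy),
                     np.cos(2 * np.pi * doy / 365),
                     np.sin(2 * np.pi * doy / 365)])
coef, *_ = np.linalg.lstsq(A, ndvi, rcond=None)
fitted = A @ coef
delta_vi = fitted - ndvi                                       # large positive values flag damage
print(np.round(delta_vi[-3:], 2))
```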

27 pages, 25257 KiB  
Article
A Study on the Object-Based High-Resolution Remote Sensing Image Classification of Crop Planting Structures in the Loess Plateau of Eastern Gansu Province
by Rui Yang, Yuan Qi, Hui Zhang, Hongwei Wang, Jinlong Zhang, Xiaofang Ma, Juan Zhang and Chao Ma
Remote Sens. 2024, 16(13), 2479; https://doi.org/10.3390/rs16132479 - 6 Jul 2024
Viewed by 541
Abstract
The timely and accurate acquisition of information on the distribution of the crop planting structure in the Loess Plateau of eastern Gansu Province, one of the most important agricultural areas in Western China, is crucial for promoting fine management of agriculture and ensuring food security. This study uses multi-temporal high-resolution remote sensing images to determine optimal segmentation scales for various crops, employing the estimation of scale parameter 2 (ESP2) tool and the Ratio of Mean Absolute Deviation to Standard Deviation (RMAS) model. The Canny edge detection algorithm is then applied for multi-scale image segmentation. By incorporating crop phenological factors and using an L1-regularized logistic regression model, we optimized 39 spatial feature factors, including spectral, textural, geometric, and index features. Within a multi-level classification framework, the Random Forest (RF) classifier and a Convolutional Neural Network (CNN) model are used to classify the cropping patterns in four test areas based on the multi-scale segmented images. The results indicate that integrating the Canny edge detection algorithm with the optimal segmentation scales calculated using the ESP2 tool and the RMAS model produces crop parcels with more complete boundaries and better separability. Additionally, optimizing spatial features using the L1-regularized logistic regression model, combined with phenological information, enhances classification accuracy. Within the OBIC framework, the RF classifier achieves higher accuracy in classifying cropping patterns. The overall classification accuracies for the four test areas are 91.93%, 94.92%, 89.37%, and 90.68%, respectively. This paper introduces crop phenological factors that effectively improve the extraction precision of the fragmented agricultural planting structure in the Loess Plateau of eastern Gansu Province. Its findings have important application value in crop monitoring, management, food security, and other related fields.
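The L1-regularized feature-selection step mentioned above can be sketched with scikit-learn; the 39 features and labels below are synthetic placeholders rather than the authors' object feature table.

```python
# L1-penalized logistic regression as a feature-selection step (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.random((500, 39))                       # 39 candidate object features (stand-ins)
y = (X[:, 0] + 0.8 * X[:, 5] - X[:, 20] + rng.normal(0, 0.2, 500) > 0.5).astype(int)

Xs = StandardScaler().fit_transform(X)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(Xs, y)
selected = np.flatnonzero(clf.coef_[0] != 0)    # features surviving the L1 penalty
print(f"kept {selected.size} of 39 features:", selected)
```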

21 pages, 15396 KiB  
Article
Development of an Imaging Spectrometer with a High Signal-to-Noise Ratio Based on High Energy Transmission Efficiency for Soil Organic Matter Detection
by Jize Fan, Yuwei Wang, Guochao Gu, Zhe Li, Xiaoxu Wang, Hanshuang Li, Bo Li and Denghui Hu
Sensors 2024, 24(13), 4385; https://doi.org/10.3390/s24134385 - 5 Jul 2024
Viewed by 377
Abstract
Hyperspectral detection of the change rate of organic matter content in agricultural remote sensing requires a high signal-to-noise ratio (SNR). However, due to the large number of components and their efficiency limitations, it is difficult to improve the SNR. This study uses a high-efficiency convex grating with a diffraction efficiency exceeding 50% across the 360–850 nm range, a back-illuminated Complementary Metal Oxide Semiconductor (CMOS) detector with 95% efficiency at its peak wavelength, and silver-coated mirrors to develop an imaging spectrometer for detecting soil organic matter (SOM). The designed system achieves a spectral resolution of 10 nm in the 360–850 nm range, a swath of 100 km, and a spatial resolution of 100 m at an orbital height of 648.2 km. The design also uses the basic Offner structure with few components and places the Offner mirrors on the same sphere, which enables rapid co-alignment. A theoretical analysis of the developed Offner imaging spectrometer, based on the classical Rowland circle structure with a 21.8 mm slit length, is performed; its capacity for suppressing +2nd-order diffraction stray light with the filter is simulated; and the imaging quality after meeting the tolerance requirements is analyzed in combination with the surface-shape characteristics of the high-efficiency grating. Testing showed that the grating has a diffraction efficiency above 50% and that the silver-coated mirrors have an average reflectance above 95%. Finally, laboratory tests show that the SNR over the waveband exceeds 300 and reaches 800 at 550 nm, which is higher than some current instruments in orbit for soil observation. The proposed imaging spectrometer has a spectral resolution of 10 nm, and its modulation transfer function (MTF) is greater than 0.23 at the Nyquist frequency, making it suitable for remote sensing observation of the SOM change rate. The manufacture of such a high-efficiency broadband grating and the development of the proposed instrument with high energy transmission efficiency provide a feasible technical solution for observing faint targets with a high SNR.
(This article belongs to the Section Optical Sensors)

25 pages, 22898 KiB  
Article
Research on Segmentation Method of Maize Seedling Plant Instances Based on UAV Multispectral Remote Sensing Images
by Tingting Geng, Haiyang Yu, Xinru Yuan, Ruopu Ma and Pengao Li
Plants 2024, 13(13), 1842; https://doi.org/10.3390/plants13131842 - 4 Jul 2024
Viewed by 752
Abstract
The accurate instance segmentation of individual crop plants is crucial for achieving a high-throughput phenotypic analysis of seedlings and smart field management in agriculture. Current crop monitoring techniques employing remote sensing predominantly focus on population analysis, thereby lacking precise estimations for individual plants. This study concentrates on maize, a critical staple crop, and leverages multispectral remote sensing data sourced from unmanned aerial vehicles (UAVs). A large-scale SAM image segmentation model is employed to efficiently annotate maize plant instances, thereby constructing a dataset for maize seedling instance segmentation. The study evaluates the experimental accuracy of six instance segmentation algorithms: Mask R-CNN, Cascade Mask R-CNN, PointRend, YOLOv5, Mask Scoring R-CNN, and YOLOv8, employing various combinations of multispectral bands for a comparative analysis. The experimental findings indicate that the YOLOv8 model exhibits exceptional segmentation accuracy, notably in the NRG band, with bbox_mAP50 and segm_mAP50 accuracies reaching 95.2% and 94%, respectively, surpassing other models. Furthermore, YOLOv8 demonstrates robust performance in generalization experiments, indicating its adaptability across diverse environments and conditions. Additionally, this study simulates and analyzes the impact of different resolutions on the model’s segmentation accuracy. The findings reveal that the YOLOv8 model sustains high segmentation accuracy even at reduced resolutions (1.333 cm/px), meeting the phenotypic analysis and field management criteria.
(This article belongs to the Section Plant Modeling)
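For orientation, the sketch below shows how a YOLOv8 segmentation model is typically trained and applied with the ultralytics package; the dataset YAML, checkpoint, and image path are assumptions, not the authors' released configuration.

```python
# Typical ultralytics YOLOv8 segmentation workflow (hypothetical dataset and paths).
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")                     # pretrained segmentation checkpoint
model.train(data="maize_seedlings.yaml", epochs=100, imgsz=640)   # hypothetical dataset config

results = model("uav_multispectral_nrg_tile.jpg")  # NRG composite exported as an image (assumed)
for r in results:
    n = 0 if r.masks is None else len(r.masks)
    print(f"{n} maize seedling instances")
```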

19 pages, 43187 KiB  
Article
Large-Scale Land Cover Mapping Framework Based on Prior Product Label Generation: A Case Study of Cambodia
by Hongbo Zhu, Tao Yu, Xiaofei Mi, Jian Yang, Chuanzhao Tian, Peizhuo Liu, Jian Yan, Yuke Meng, Zhenzhao Jiang and Zhigao Ma
Remote Sens. 2024, 16(13), 2443; https://doi.org/10.3390/rs16132443 - 3 Jul 2024
Viewed by 608
Abstract
Large-scale land cover mapping (LLCM) based on deep learning models necessitates a substantial number of high-precision sample datasets. However, the limited availability of such datasets poses challenges in regularly updating land cover products. A commonly referenced method involves utilizing prior products (PPs) as labels to achieve up-to-date land cover mapping. Nonetheless, the accuracy of PPs at the regional level remains uncertain, and the Remote Sensing Image (RSI) corresponding to the product is not publicly accessible. Consequently, a sample dataset constructed through geographic location matching may lack precision. Errors in such datasets are not only due to inherent product discrepancies but can also arise from temporal and scale disparities between the RSI and PPs. To solve these problems, this paper proposes an LLCM framework that generates labels from PPs. The framework consists of three main parts. First, initial label generation: the collected PPs are integrated based on Dempster-Shafer (D-S) evidence theory, and initial labels are obtained using the generated trust map. Second, dynamic label correction: a two-stage training method based on the initial labels is adopted. The correction model is pretrained in the first stage; a confidence probability (CP) correction module with a dynamic threshold and an NDVI correction module are then introduced in the second stage. The initial labels are iteratively corrected while the model is trained using the joint correction loss, and the corrected labels are obtained after training. Finally, the classification model is trained using the corrected labels. Using the proposed land cover mapping framework, this study produced a 10 m spatial resolution land cover map of Cambodia for 2020 from PPs. The overall accuracy of the land cover map was 91.68% and the Kappa value was 0.8808. Based on these results, the proposed mapping framework can effectively use PPs to update medium-resolution large-scale land cover datasets, providing a powerful solution for label acquisition in LLCM projects.
(This article belongs to the Special Issue Deep Learning Techniques Applied in Remote Sensing)
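The D-S evidence fusion at the heart of the label-generation step can be illustrated with a toy two-class example; the mass values below are made up, and this is not the paper's trust-map code.

```python
# Dempster's rule of combination for two mass functions over a two-class frame.
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y                     # mass assigned to contradictory evidence
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

crop, forest = frozenset({"crop"}), frozenset({"forest"})
theta = crop | forest                             # total ignorance
m_product_a = {crop: 0.7, forest: 0.2, theta: 0.1}   # e.g. from one prior land cover product
m_product_b = {crop: 0.6, forest: 0.3, theta: 0.1}
print(dempster_combine(m_product_a, m_product_b))
```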

26 pages, 10366 KiB  
Article
Integrating Sentinel 2 Imagery with High-Resolution Elevation Data for Automated Inundation Monitoring in Vegetated Floodplain Wetlands
by Jessica T. Heath, Liam Grimmett, Tharani Gopalakrishnan, Rachael F. Thomas and Joanne Lenehan
Remote Sens. 2024, 16(13), 2434; https://doi.org/10.3390/rs16132434 - 2 Jul 2024
Viewed by 782
Abstract
Monitoring inundation in flow-dependent floodplain wetlands is important for understanding the outcomes of environmental water deliveries that aim to inundate different floodplain wetland vegetation types. The most effective way to monitor inundation across large landscapes is with remote sensing. Spectral water indices are often used to detect water in the landscape, but there are challenges in using them to map inundation within complex vegetated floodplain wetlands. The current method used for monitoring inundation in the large floodplain wetlands that are targets for environmental water delivery in the New South Wales portion of the Murray–Darling Basin (MDB) in eastern Australia accounts for the complex mixing of water with vegetation and soil, but it is a time-consuming process focused on individual wetlands. In this study, we developed the automated inundation monitoring (AIM) method to enable efficient mapping of inundation in floodplain wetlands, with a focus on the lower Lachlan floodplain, using 25 Sentinel-2 image dates spanning 2019 to 2023. A local adaptive thresholding (ATH) approach applied to a suite of spectral indices, combined with the best available DEM and a cropping layer, was integrated into the AIM method. The resulting AIM maps were validated against high-resolution drone images and vertical and oblique aerial images. Although instances of omission and commission errors were identified in dense vegetation and along narrow creek lines, the AIM method showed high mapping accuracy, with an overall accuracy of 0.8. The AIM method could be adapted to other MDB wetlands, which would further support inundation monitoring across the basin.
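As a simplified illustration of the index-thresholding step (a global Otsu threshold here, rather than the paper's local adaptive thresholding), the sketch below derives an NDWI water mask from Sentinel-2 green and NIR bands; the band file names are assumptions.

```python
# NDWI water mask with a global Otsu threshold (simplified illustration).
import numpy as np
import rasterio
from skimage.filters import threshold_otsu

with rasterio.open("S2_B03_green.tif") as g, rasterio.open("S2_B08_nir.tif") as n:
    green = g.read(1).astype(float)
    nir = n.read(1).astype(float)

ndwi = (green - nir) / (green + nir + 1e-9)      # McFeeters NDWI: high values indicate water
t = threshold_otsu(ndwi)
inundation = ndwi > t
print(f"threshold = {t:.3f}, inundated fraction = {inundation.mean():.2%}")
```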
