
Search Results (507)

Search Parameters:
Keywords = multispectral camera

28 pages, 5923 KiB  
Article
Inter-Annual Variability of Peatland Vegetation Captured Using Phenocam- and UAV Imagery
by Gillian Simpson, Tom Wade, Carole Helfter, Matthew R. Jones, Karen Yeung and Caroline J. Nichol
Remote Sens. 2025, 17(3), 526; https://doi.org/10.3390/rs17030526 - 4 Feb 2025
Viewed by 248
Abstract
Plant phenology is an important driver of inter-annual variability in peatland carbon uptake. However, the use of traditional phenology datasets (e.g., manual surveys, satellite remote sensing) to quantify this link is hampered by their limited spatial and temporal coverage. This study examined the use of phenology cameras (phenocams) and uncrewed aerial vehicles (UAVs) for monitoring phenology in a Scottish temperate peatland. Data were collected at the site over multiple growing seasons using a UAV platform fitted with a multispectral Parrot Sequoia camera. We found that greenness indices calculated using data from both platforms were in strong agreement with each other, and exhibited strong correlations with rates of gross primary production (GPP) at the site. Greenness maps generated with the UAV data were combined with fine-scale vegetation classifications, and highlighted the variable sensitivity of different plant species to dry spells over the study period. While a lack of suitable weather conditions for surveying limited the UAV data temporally, the phenocam provided a near-continuous record of phenology. The latter revealed substantial temporal variability in the relationship between canopy greenness and peatland GPP, which although strong over the growing season as a whole (rs = 0.88, p < 0.01), was statistically insignificant during the peak growing season. Full article
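The greenness indices compared here are chromatic coordinates computed from camera digital numbers. A minimal sketch, assuming the green chromatic coordinate (GCC) commonly used in phenocam work (the abstract does not name the exact index), with a toy seasonal series to illustrate the rank correlation against GPP:

import numpy as np
from scipy.stats import spearmanr

def gcc(rgb):
    """Green chromatic coordinate: G / (R + G + B), per pixel."""
    rgb = rgb.astype(float)
    return rgb[..., 1] / rgb.sum(axis=-1)

rng = np.random.default_rng(0)
frame = rng.integers(1, 255, (480, 640, 3))          # stand-in phenocam frame
print("ROI mean GCC:", round(float(gcc(frame).mean()), 3))

# Toy seasonal series: GCC loosely tracking gross primary production (GPP).
days = np.linspace(0, np.pi, 120)
daily_gcc = 0.35 + 0.05 * np.sin(days) + rng.normal(0, 0.004, 120)
daily_gpp = 40 * (daily_gcc - 0.33) + rng.normal(0, 0.2, 120)
rs, p = spearmanr(daily_gcc, daily_gpp)              # rank correlation, as reported
print(f"rs = {rs:.2f}, p = {p:.3g}")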
47 pages, 20552 KiB  
Article
Commissioning an All-Sky Infrared Camera Array for Detection of Airborne Objects
by Laura Domine, Ankit Biswas, Richard Cloete, Alex Delacroix, Andriy Fedorenko, Lucas Jacaruso, Ezra Kelderman, Eric Keto, Sarah Little, Abraham Loeb, Eric Masson, Mike Prior, Forrest Schultz, Matthew Szenher, Wesley Andrés Watters and Abigail White
Sensors 2025, 25(3), 783; https://doi.org/10.3390/s25030783 - 28 Jan 2025
Viewed by 373
Abstract
To date, there is little publicly available scientific data on unidentified aerial phenomena (UAP) whose properties and kinematics purportedly reside outside the performance envelope of known phenomena. To address this deficiency, the Galileo Project is designing, building, and commissioning a multi-modal, multi-spectral ground-based observatory to continuously monitor the sky and collect data for UAP studies via a rigorous long-term aerial census of all aerial phenomena, including natural and human-made. One of the key instruments is an all-sky infrared camera array using eight uncooled long-wave-infrared FLIR Boson 640 cameras. In addition to performing intrinsic and thermal calibrations, we implement a novel extrinsic calibration method using airplane positions from Automatic Dependent Surveillance–Broadcast (ADS-B) data that we collect synchronously on site. Using a You Only Look Once (YOLO) machine learning model for object detection and the Simple Online and Realtime Tracking (SORT) algorithm for trajectory reconstruction, we establish a first baseline for the performance of the system over five months of field operation. Using an automatically generated real-world dataset derived from ADS-B data, a dataset of synthetic 3D trajectories, and a hand-labeled real-world dataset, we find an acceptance rate (fraction of in-range airplanes passing through the effective field of view of at least one camera that are recorded) of 41% for ADS-B-equipped aircraft, and a mean frame-by-frame aircraft detection efficiency (fraction of recorded airplanes in individual frames which are successfully detected) of 36%. The detection efficiency is heavily dependent on weather conditions, range, and aircraft size. Approximately 500,000 trajectories of various aerial objects are reconstructed from this five-month commissioning period. These trajectories are analyzed with a toy outlier search focused on the large sinuosity of apparent 2D reconstructed object trajectories. About 16% of the trajectories are flagged as outliers and manually examined in the IR images. Of these ∼80,000 outliers, 144 trajectories remain ambiguous; they are likely mundane objects but cannot be further elucidated at this stage of development without information about distance and kinematics or other sensor modalities. We demonstrate the application of a likelihood-based statistical test to evaluate the significance of this toy outlier analysis. Our observed count of ambiguous outliers combined with systematic uncertainties yields an upper limit of 18,271 outliers for the five-month interval at a 95% confidence level. This test is applicable to all of our future outlier searches. Full article
(This article belongs to the Section Sensors and Robotics)
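The toy outlier search ranks reconstructed 2D trajectories by sinuosity, i.e., path length divided by end-to-end distance. A minimal sketch of that metric; the 1.5 cutoff below is an illustrative assumption, not the paper's threshold:

import numpy as np

def sinuosity(track):
    """track: (N, 2) array of 2D image-plane positions along one trajectory."""
    steps = np.diff(track, axis=0)
    path_len = np.linalg.norm(steps, axis=1).sum()
    chord = np.linalg.norm(track[-1] - track[0])
    return np.inf if chord == 0 else path_len / chord

rng = np.random.default_rng(1)
straight = np.column_stack([np.linspace(0, 100, 50), np.linspace(0, 20, 50)])
wiggly = straight + rng.normal(0, 3.0, straight.shape)

for name, trk in [("straight", straight), ("wiggly", wiggly)]:
    s = sinuosity(trk)
    print(name, round(s, 2), "outlier" if s > 1.5 else "ok")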
26 pages, 12275 KiB  
Article
Estimating Winter Wheat Canopy Chlorophyll Content Through the Integration of Unmanned Aerial Vehicle Spectral and Textural Insights
by Huiling Miao, Rui Zhang, Zhenghua Song and Qingrui Chang
Remote Sens. 2025, 17(3), 406; https://doi.org/10.3390/rs17030406 - 24 Jan 2025
Viewed by 475
Abstract
Chlorophyll content is an essential parameter for evaluating the growth condition of winter wheat, and its accurate monitoring through remote sensing is of great significance for early warnings about winter wheat growth. In order to investigate unmanned aerial vehicle (UAV) multispectral technology’s capability to estimate the chlorophyll content of winter wheat, this study proposes a method for estimating the relative canopy chlorophyll content (RCCC) of winter wheat based on UAV multispectral images. Concretely, an M350RTK UAV with an MS600 Pro multispectral camera was utilized to collect data, immediately followed by ground chlorophyll measurements with a Dualex handheld instrument. Then, the band information and texture features were extracted by image preprocessing to calculate the vegetation indices (VIs) and the texture indices (TIs). Univariate and multivariate regression models were constructed using random forest (RF), backpropagation neural network (BPNN), kernel extremum learning machine (KELM), and convolutional neural network (CNN), respectively. Finally, the optimal model was utilized for spatial mapping. The results provided the following indications: (1) Red-edge vegetation indices (RIs) and TIs were key to estimating RCCC. Univariate regression models were acceptable during the flowering and filling stages, while the superior multivariate models, incorporating multiple features, revealed more complex relationships, improving R² by 0.35% to 69.55% over the optimal univariate models. (2) The RF model showed notable performance in both univariate and multivariate regressions, with the RF model incorporating RIs and TIs during the flowering stage achieving the best results (R²_train = 0.93, RMSE_train = 1.36, RPD_train = 3.74, R²_test = 0.79, RMSE_test = 3.01, RPD_test = 2.20). With more variables, BPNN, KELM, and CNN models effectively leveraged neural network advantages, improving training performance. (3) Compared to using single-feature indices for RCCC estimation, combining vegetation indices and texture indices increased the R² values of some models by 0.16% to 40.70%. Integrating UAV multispectral spectral and texture data allows effective RCCC estimation for winter wheat, aiding wheatland management, though further work is needed to extend the applicability of the developed estimation models. Full article
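The multivariate models regress RCCC on stacked vegetation-index and texture features. A minimal sketch with scikit-learn's RandomForestRegressor standing in for the paper's RF model; the feature matrix and target below are synthetic:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(2)
n = 200
vis = rng.normal(size=(n, 5))        # stand-ins for red-edge vegetation indices (RIs)
tis = rng.normal(size=(n, 5))        # stand-ins for texture indices (TIs)
X = np.hstack([vis, tis])            # multivariate model = concatenated features
rccc = 30 + 5 * vis[:, 0] - 2 * tis[:, 1] + rng.normal(0, 1.5, n)  # toy target

X_tr, X_te, y_tr, y_te = train_test_split(X, rccc, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"R2 = {r2_score(y_te, pred):.2f}, RMSE = {rmse:.2f}")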
20 pages, 6209 KiB  
Article
Monitoring and Prediction of Wild Blueberry Phenology Using a Multispectral Sensor
by Kenneth Anku, David Percival, Mathew Vankoughnett, Rajasekaran Lada and Brandon Heung
Remote Sens. 2025, 17(2), 334; https://doi.org/10.3390/rs17020334 - 19 Jan 2025
Viewed by 456
Abstract
(1) Background: Research and development in remote sensing have been used to determine and monitor crop phenology. This approach assesses the internal and external changes of the plant. Therefore, the objective of this study was to determine the potential of using a multispectral sensor to predict phenology in wild blueberry fields. (2) Method: A UAV equipped with a five-banded multispectral camera was used to collect aerial imagery. Sites consisted of two commercial fields, Lemmon Hill and Kemptown. A randomized complete block design (RCBD) with six replications, four treatments, and a plot size of 6 × 8 m with a 2 m buffer between plots was used. Orthomosaic maps and vegetative indices were generated. (3) Results: There were significant correlations between VIs and growth parameters at different stages. The F4/F5 and F6/F7 stages showed the highest correlation values among all growth stages. LAI, floral, and vegetative bud stages could be estimated at the tight cluster (F4/F5) and bloom (F6/F7) stages with R2/CCC = 0.90/0.84. Variable importance showed that NDVI, ENDVI, GLI, VARI, and GRVI contributed significantly to achieving these predicted values, with NDRE showing low effects. (4) Conclusion: This implies that the F4/F5 and F6/F7 stages are good stages for making phenological predictions and estimations about wild blueberry plants. Full article
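The named indices are simple band combinations of the five-band imagery. A minimal sketch of several of them, assuming bands ordered blue, green, red, red-edge, NIR; the ENDVI form shown is the common (NIR+G-2B)/(NIR+G+2B) variant, which may differ from the authors' exact definition:

import numpy as np

def indices(img):
    """img: (H, W, 5) float reflectance, bands = blue, green, red, red-edge, NIR."""
    b, g, r, re, nir = (img[..., i] for i in range(5))
    eps = 1e-9                                   # guard against division by zero
    return {
        "NDVI":  (nir - r) / (nir + r + eps),
        "NDRE":  (nir - re) / (nir + re + eps),
        "GLI":   (2 * g - r - b) / (2 * g + r + b + eps),
        "VARI":  (g - r) / (g + r - b + eps),
        "GRVI":  (g - r) / (g + r + eps),
        "ENDVI": (nir + g - 2 * b) / (nir + g + 2 * b + eps),
    }

img = np.random.default_rng(3).uniform(0.01, 0.6, (4, 4, 5))  # toy reflectance
for name, v in indices(img).items():
    print(name, round(float(v.mean()), 3))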
18 pages, 7292 KiB  
Article
Concurrent Viewing of H&E and Multiplex Immunohistochemistry in Clinical Specimens
by Larry E. Morrison, Tania M. Larrinaga, Brian D. Kelly, Mark R. Lefever, Rachel C. Beck and Daniel R. Bauer
Diagnostics 2025, 15(2), 164; https://doi.org/10.3390/diagnostics15020164 - 13 Jan 2025
Viewed by 477
Abstract
Background/Objectives: Performing hematoxylin and eosin (H&E) staining and immunohistochemistry (IHC) on the same specimen slide provides advantages that include specimen conservation and the ability to combine the H&E context with biomarker expression at the individual cell level. We previously used invisible deposited chromogens and dual-camera imaging, including monochrome and color cameras, to implement simultaneous H&E and IHC. Using this approach, conventional H&E staining could be simultaneously viewed in color on a computer monitor alongside a monochrome video of the invisible IHC staining, while manually scanning the specimen. Methods: We have now simplified the microscope system to a single camera and increased the IHC multiplexing to four biomarkers using translational assays. The color camera used in this approach also enabled multispectral imaging, similar to monochrome cameras. Results: Application is made to several clinically relevant specimens, including breast cancer (HER2, ER, and PR), prostate cancer (PSMA, P504S, basal cell, and CD8), Hodgkin’s lymphoma (CD15 and CD30), and melanoma (LAG3). Additionally, invisible chromogenic IHC was combined with conventional DAB IHC to present a multiplex IHC assay with unobscured DAB staining, suitable for visual interrogation. Conclusions: Simultaneous staining and detection, as described here, provides the pathologist a means to evaluate complex multiplexed assays, while seated at the microscope, with the added multispectral imaging capability to support digital pathology and artificial intelligence workflows of the future. Full article
(This article belongs to the Special Issue New Promising Diagnostic Signatures in Histopathological Diagnosis)
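Recovering several co-localized chromogens from multispectral frames is commonly posed as linear spectral unmixing; the abstract does not spell out the algorithm, so this is a generic sketch with invented endmember spectra:

import numpy as np

rng = np.random.default_rng(4)
# Columns: assumed absorption spectra of three chromogens over 8 wavelengths.
endmembers = np.abs(rng.normal(1.0, 0.3, (8, 3)))

true_conc = np.array([0.7, 0.2, 0.5])                    # per-pixel chromogen amounts
pixel = endmembers @ true_conc + rng.normal(0, 0.01, 8)  # measured spectrum

# Least-squares unmixing: recover concentrations from the measured spectrum.
conc, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
print(np.round(conc, 2))   # approximately [0.7, 0.2, 0.5]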
24 pages, 8166 KiB  
Article
UAV Remote Sensing Technology for Wheat Growth Monitoring in Precision Agriculture: Comparison of Data Quality and Growth Parameter Inversion
by Jikai Liu, Weiqiang Wang, Jun Li, Ghulam Mustafa, Xiangxiang Su, Ying Nian, Qiang Ma, Fengxian Zhen, Wenhui Wang and Xinwei Li
Agronomy 2025, 15(1), 159; https://doi.org/10.3390/agronomy15010159 - 10 Jan 2025
Viewed by 565
Abstract
The quality of the image data and the potential to invert crop growth parameters are essential for effectively using unmanned aerial vehicle (UAV)-based sensor systems in precision agriculture (PA). However, the existing research falls short of providing a comprehensive examination of sensor data quality and the inversion potential of crop growth parameters, and there is still ambiguity regarding how data quality affects inversion potential. Therefore, this study explored the application potential of RGB and multispectral (MS) images acquired from three lightweight UAV platforms in the realm of PA: the DJI Mavic 2 Pro (M2P), Phantom 4 Multispectral (P4M), and Mavic 3 Multispectral (M3M). The reliability of pixel-scale data quality was evaluated based on image quality assessment metrics, and three winter wheat growth parameters, above-ground biomass (AGB), plant nitrogen content (PNC) and soil and plant analysis development (SPAD), were inverted using machine learning models based on multi-source image features at the plot scale. The results indicated that the RGB image quality from the M3M outperformed that of the M2P, while the MS image quality was marginally superior to that of the P4M. Nevertheless, these advantages in pixel-scale data quality did not improve inversion accuracy for crop parameters at the plot scale. Spectral features (SFs) derived from the P4M-based MS sensor demonstrated significant advantages in AGB inversion (R2 = 0.86, rRMSE = 27.47%), while SFs derived from the M2P-based RGB camera exhibited the best performance in SPAD inversion (R2 = 0.60, rRMSE = 7.67%). Additionally, combining spectral and textural features derived from the P4M-based MS sensor yielded the highest accuracy in PNC inversion (R2 = 0.82, rRMSE = 14.62%). This study clarified the data quality of three prevalent UAV-mounted sensor systems in PA and their influence on parameter inversion potential, offering guidance for selecting appropriate sensors and monitoring key crop growth parameters. Full article
(This article belongs to the Section Agricultural Biosystem and Biological Engineering)
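Inversion accuracy above is reported as R² and relative RMSE. A minimal sketch of the rRMSE convention assumed here (RMSE normalized by the observed mean, in percent), with toy biomass values:

import numpy as np

def rrmse(y_obs, y_pred):
    """Relative RMSE in percent: RMSE / mean(observed) * 100."""
    y_obs, y_pred = np.asarray(y_obs, float), np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_obs - y_pred) ** 2))
    return 100.0 * rmse / y_obs.mean()

agb_obs = np.array([5.1, 6.3, 7.8, 9.2, 10.5])   # toy above-ground biomass, t/ha
agb_pred = np.array([4.7, 6.8, 7.1, 9.9, 10.0])
print(f"rRMSE = {rrmse(agb_obs, agb_pred):.2f}%")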
21 pages, 10149 KiB  
Article
Minimizing Seam Lines in UAV Multispectral Image Mosaics Utilizing Irradiance, Vignette, and BRDF
by Hoyong Ahn, Chansol Kim, Seungchan Lim, Cheonggil Jin, Jinsu Kim and Chuluong Choi
Remote Sens. 2025, 17(1), 151; https://doi.org/10.3390/rs17010151 - 4 Jan 2025
Viewed by 532
Abstract
Unmanned aerial vehicle (UAV) imaging provides the ability to obtain high-resolution images at a lower cost than satellite imagery and aerial photography. However, multiple UAV images need to be mosaicked to obtain images of large areas, and the resulting UAV multispectral image mosaics typically contain seam lines. To address this problem, we applied irradiance, vignette, and bidirectional reflectance distribution function (BRDF) filters and performed field work using a DJI Mavic 3 Multispectral (M3M) camera to collect data. We installed a calibrated reference tarp (CRT) in the center of the collection area and conducted three types of flights (BRDF, vignette, and validation) to measure the irradiance, radiance, and reflectance—which are essential for irradiance correction—using a custom reflectance box (ROX). A vignette filter was generated from the vignette parameter, and the anisotropy factor (ANIF) was calculated by measuring the radiance at the nadir, following which the BRDF model parameters were calculated. The calibration approaches were divided into the following categories: a vignette-only process, which solely applied vignette and irradiance corrections, and the full process, which included irradiance, vignette, and BRDF. The accuracy was verified through a validation flight. The radiance uncertainty at the seam line ranged from 3.00 to 5.26% in the 80% lap mode when using nine images around the CRT, and from 4.06 to 6.93% in the 50% lap mode when using all images with the CRT. The term ‘lap’ in ‘lap mode’ refers to both overlap and sidelap. The images that were subjected to the vignette-only process had a radiance difference of 4.48–6.98%, while that of the full process images was 1.44–2.40%, indicating that the seam lines were difficult to find with the naked eye and that the process was successful. Full article
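The vignette step divides each pixel by a radial falloff model before mosaicking. A minimal sketch using an even-order polynomial gain in normalized radius, a common parameterization; the paper's exact vignette model and coefficients are assumptions here:

import numpy as np

def vignette_gain(h, w, k):
    """Radial polynomial gain: 1 + k1*r^2 + k2*r^4 + k3*r^6, r normalized to [0, 1]."""
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r = np.hypot(yy - cy, xx - cx) / np.hypot(cy, cx)
    return 1 + k[0] * r**2 + k[1] * r**4 + k[2] * r**6

h, w = 480, 640
k = (-0.35, 0.10, -0.02)                      # toy falloff coefficients
gain = vignette_gain(h, w, k)

raw = np.full((h, w), 1000.0) * gain          # flat scene darkened toward corners
corrected = raw / gain                        # the vignette-only correction step
print(round(corrected.min(), 1), round(corrected.max(), 1))  # ~1000 everywhere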
16 pages, 3623 KiB  
Article
Background Light Suppression for Multispectral Imaging in Surgical Settings
by Moritz Gerlich, Andreas Schmid, Thomas Greiner and Stefan Kray
Sensors 2025, 25(1), 141; https://doi.org/10.3390/s25010141 - 29 Dec 2024
Viewed by 582
Abstract
Multispectral imaging (MSI) enables non-invasive tissue differentiation based on spectral characteristics and has shown great potential as a tool for surgical guidance. However, adapting MSI to open surgeries is challenging. Systems that rely on light sources present in the operating room experience limitations due to frequent lighting changes, which distort the spectral data and require countermeasures such as disruptive recalibrations. On the other hand, MSI systems that rely on dedicated lighting require external light sources, such as surgical lights, to be turned off during open surgery settings. This disrupts the surgical workflow and extends operation times. To this end, we present an approach that addresses these issues by combining active illumination with smart background suppression. By alternately capturing images with and without a modulated light source at a desired wavelength, we isolate the target signal, enabling artifact-free spectral scanning. We demonstrate the performance of our approach using a smart pixel camera, emphasizing its signal-to-noise ratio (SNR) advantage over a conventional high-speed camera. Our results show that accurate reflectance measurements can be achieved in clinical settings with high background illumination. Medical application is demonstrated through the estimation of blood oxygenation, and its suitability for open surgeries is discussed. Full article
(This article belongs to the Section Sensing and Imaging)
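The suppression scheme alternates frames with the modulated source on and off; subtracting paired frames cancels the static room lighting. A minimal numpy sketch of that differencing, with the smart-pixel demodulation hardware abstracted away and noise levels invented:

import numpy as np

rng = np.random.default_rng(5)
frames = 100
background = 2000.0           # strong, roughly static surgical lighting
signal = 50.0                 # reflectance signal from the modulated source

# Even frames: source on; odd frames: source off (noise approximated as Gaussian).
on = background + signal + rng.normal(0, 10, frames // 2)
off = background + rng.normal(0, 10, frames // 2)

estimate = (on - off).mean()                  # background cancels in the difference
snr = estimate / (on - off).std(ddof=1) * np.sqrt(frames // 2)
print(f"signal estimate = {estimate:.1f}, SNR = {snr:.0f}")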
49 pages, 22036 KiB  
Review
Investigation into UAV Applications for Environmental Ice Detection and De-Icing Technology
by Qingying Li, Zhijie Chai, Rao Yao, Tian Bai and Huanyu Zhao
Drones 2025, 9(1), 5; https://doi.org/10.3390/drones9010005 - 24 Dec 2024
Viewed by 810
Abstract
In cold environments, ice formation poses significant risks to infrastructure such as transportation systems and power transmission. Yet, traditional de-icing methods are often time-consuming, hazardous, and inefficient. In this regard, unmanned aerial vehicles (UAVs) have shown great potential in environmental ice detection and de-icing applications. This study comprehensively reviews the application of UAVs in ice detection and de-icing operations in external environments, emphasizing their potential to replace traditional manual methods. Firstly, the latest developments in UAV-based external ice detection technology are examined, with a focus on the unique capabilities of sensors such as multispectral cameras, infrared imagers, and LiDAR in capturing specific ice features. Subsequently, the implementation and effectiveness of chemical, mechanical, and thermal de-icing methods delivered via UAV platforms are evaluated, focusing on their operational efficiency and adaptability. In addition, key operational requirements are reviewed, including environmental adaptability, mission planning and execution, and command transmission, as well as system design and manufacturing. Finally, the practical challenges involved in deploying UAVs under complex weather conditions are examined and solutions are proposed. These are aimed at promoting future research and ultimately driving the adoption of UAV technology in de-icing applications. Full article
(This article belongs to the Special Issue Recent Development in Drones Icing)
18 pages, 10480 KiB  
Article
Bacterial and Viral-Induced Changes in the Reflectance Spectra of Nicotiana benthamiana Plants
by Alyona Grishina, Maxim Lysov, Maria Ageyeva, Victoria Diakova, Oksana Sherstneva, Anna Brilkina and Vladimir Vodeneev
Horticulturae 2024, 10(12), 1363; https://doi.org/10.3390/horticulturae10121363 - 19 Dec 2024
Viewed by 713
Abstract
Phytopathogens pose a serious threat to agriculture, causing a decrease in yield and product quality. This necessitates the development of methods for early detection of phytopathogens, which will reduce losses and improve product quality by using lower quantities of agrochemicals. In this study, the efficiency of spectral imaging in the early detection and differentiation of diseases caused by pathogens of different types (Potato virus X (PVX) and the bacterium Pseudomonas syringae) was analyzed. An evaluation of the visual symptoms of diseases demonstrated the presence of pronounced symptoms in the case of bacterial infection and an almost complete absence of visual symptoms in the case of viral infection. P. syringae caused severe inhibition of photosynthetic activity in the infected leaf, while PVX did not have a pronounced effect on photosynthetic activity. Reflectance spectra of infected and healthy plants were detected in the range from 400 to 1000 nm using a hyperspectral camera, and the dynamics of infection-induced changes during disease progression were analyzed. P. syringae caused a strong increase in reflectance in the blue and red spectral ranges, as well as a decrease in the near-infrared range. PVX-induced changes in the reflectance spectrum had smaller amplitudes compared to P. syringae, and were localized mainly in the red edge (RE) range. The entire set of normalized reflectance indices (NRI) for the analyzed spectral range was calculated. The most sensitive NRIs to bacterial (NRI510/545, NRI510/850) and viral (NRI600/850, NRI700/850) infections were identified. The use of these indices makes it possible to detect the disease at an early stage. The study of the identified NRIs demonstrated the possibility of using the multispectral imaging method in early pathogen detection, which has high performance and a low cost of analysis. Full article
(This article belongs to the Section Plant Pathology and Disease Management (PPDM))
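Each NRIλ1/λ2 is a normalized two-band ratio taken from the 400–1000 nm reflectance spectrum. A minimal sketch that evaluates the index pairs the study found most sensitive, on an invented leaf spectrum:

import numpy as np

wavelengths = np.arange(400, 1001, 5)                  # nm, hyperspectral sampling
rng = np.random.default_rng(6)
# Toy leaf spectrum: low in the visible, high on the near-infrared plateau.
reflectance = 0.35 + 0.3 * np.tanh((wavelengths - 700) / 40) \
              + rng.normal(0, 0.005, wavelengths.size)

def nri(wl1, wl2):
    """NRI_wl1/wl2 = (R(wl1) - R(wl2)) / (R(wl1) + R(wl2))."""
    r1 = reflectance[np.argmin(np.abs(wavelengths - wl1))]
    r2 = reflectance[np.argmin(np.abs(wavelengths - wl2))]
    return (r1 - r2) / (r1 + r2)

for pair in [(510, 545), (510, 850), (600, 850), (700, 850)]:
    print(f"NRI{pair[0]}/{pair[1]} = {nri(*pair):+.3f}")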
18 pages, 7162 KiB  
Article
A Rice Leaf Area Index Monitoring Method Based on the Fusion of Data from RGB Camera and Multi-Spectral Camera on an Inspection Robot
by Yan Li, Xuerui Qi, Yucheng Cai, Yongchao Tian, Yan Zhu, Weixing Cao and Xiaohu Zhang
Remote Sens. 2024, 16(24), 4725; https://doi.org/10.3390/rs16244725 - 18 Dec 2024
Viewed by 834
Abstract
Automated monitoring of the rice leaf area index (LAI) using near-ground sensing platforms, such as inspection robots, is essential for modern rice precision management. These robots are equipped with various complementary sensors, where specific sensor capabilities partially overlap to provide redundancy and enhanced reliability. Thus, leveraging multi-sensor fusion technology to improve the accuracy of LAI monitoring has become a crucial research focus. This study presents a rice LAI monitoring model based on the fused data from RGB and multi-spectral cameras with an ensemble learning algorithm. The results indicate that the estimation accuracy of the rice LAI monitoring model is effectively improved by fusing the vegetation index and textures from RGB and multi-spectral sensors. The model based on the LightGBM regression algorithm has the most improvement in accuracy, with a coefficient of determination (R2) of 0.892, a root mean square error (RMSE) of 0.270, and a mean absolute error (MAE) of 0.160. Furthermore, the accuracy of LAI estimation in the jointing stage is higher than in the heading stage. At the jointing stage, both LightGBM based on optimal RGB image features and Random Forest based on fused features achieved an R2 of 0.95. This study provides a technical reference for automatically monitoring rice growth parameters in the field using inspection robots. Full article
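The fusion model stacks RGB- and multispectral-derived features into one design matrix and fits a gradient-boosting regressor. A minimal sketch with LightGBM's scikit-learn interface on synthetic features; hyperparameters and the toy LAI target are assumptions:

import numpy as np
from lightgbm import LGBMRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error

rng = np.random.default_rng(7)
n = 300
rgb_feats = rng.normal(size=(n, 6))     # stand-ins for RGB indices and textures
ms_feats = rng.normal(size=(n, 6))      # stand-ins for multispectral features
X = np.hstack([rgb_feats, ms_feats])    # sensor fusion = feature concatenation
lai = 3 + 0.8 * rgb_feats[:, 0] + 0.6 * ms_feats[:, 2] + rng.normal(0, 0.25, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, lai, test_size=0.3, random_state=0)
model = LGBMRegressor(n_estimators=400, learning_rate=0.05).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"R2 = {r2_score(y_te, pred):.3f}, MAE = {mean_absolute_error(y_te, pred):.3f}")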
19 pages, 4990 KiB  
Article
A 3D Surface Reconstruction Pipeline for Plant Phenotyping
by Lina Stausberg, Berit Jost, Lasse Klingbeil and Heiner Kuhlmann
Remote Sens. 2024, 16(24), 4720; https://doi.org/10.3390/rs16244720 - 17 Dec 2024
Viewed by 641
Abstract
Plant phenotyping plays a crucial role in crop science and plant breeding. However, traditional methods often involve time-consuming and manual observations. Therefore, it is essential to develop automated, sensor-driven techniques that can provide objective and rapid information. Various methods rely on camera systems, including RGB, multi-spectral, and hyper-spectral cameras, which offer valuable insights into plant physiology. In recent years, 3D sensing systems such as laser scanners have gained popularity due to their ability to capture structural plant parameters that are difficult to obtain using spectral sensors. Unlike images, point clouds are not structured and require pre-processing steps to extract precise information and handle noise or missing points. One approach is to generate mesh-based surface representations using triangulation. A key challenge in the 3D surface reconstruction of plants is the pre-processing of point clouds, which involves removing non-plant noise from the scene, segmenting point clouds from populations to individual plants, and further dividing individual plants into their respective organs. In this study, we will not focus on the segmentation aspect but rather on the other pre-processing steps, like denoising parameters, which depend on the data type. We present an automated pipeline for converting high-resolution point clouds into surface models of plants. The pipeline incorporates additional pre-processing steps such as outlier removal, denoising, and subsampling to ensure the accuracy and quality of the reconstructed surfaces. Data were collected using three different sensors: a handheld scanner, a terrestrial laser scanner (TLS), and a mobile mapping platform, under varying conditions from controlled laboratory environments to complex field settings. The investigation includes five different plant species, each with distinct characteristics, to demonstrate the potential of the pipeline. In a next step, phenotypic traits such as leaf area, leaf area index (LAI), and leaf angle distribution (LAD) were calculated to further illustrate the pipeline’s potential and effectiveness. The pipeline is based on the Open3D framework and is available open source. Full article
(This article belongs to the Section Environmental Remote Sensing)
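Since the pipeline is built on Open3D, its pre-processing stages map directly onto that API. A hedged sketch of the outlier-removal, subsampling, normal-estimation, and meshing chain; the file name, voxel size, and the choice of Poisson reconstruction are illustrative assumptions rather than the paper's exact settings:

import open3d as o3d

pcd = o3d.io.read_point_cloud("plant_scan.ply")        # hypothetical input file

# Denoise: drop points far from their neighborhood's mean distance.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Subsample to a uniform density before meshing.
pcd = pcd.voxel_down_sample(voxel_size=0.002)          # meters, sensor-dependent

# Normals are required by Poisson surface reconstruction.
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))

mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
print("reconstructed surface area:", mesh.get_surface_area())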
24 pages, 6309 KiB  
Article
Enhancing Multispectral Breast Imaging Quality Through Frame Accumulation and Hybrid GA-CPSO Registration
by Tsabeeh Salah M. Mahmoud, Adnan Munawar, Muhammad Zeeshan Nawaz and Yuanyuan Chen
Bioengineering 2024, 11(12), 1281; https://doi.org/10.3390/bioengineering11121281 - 17 Dec 2024
Viewed by 691
Abstract
Multispectral transmission imaging has emerged as a promising technique for imaging breast tissue with high resolution. However, the method encounters challenges such as low-grayscale, noisy transmission images with weak signals, primarily due to the strong absorption and scattering of light in breast tissue. A common approach to improve the signal-to-noise ratio (SNR) and overall image quality is frame accumulation. However, factors such as camera jitter and respiratory motion during image acquisition can cause frame misalignment, degrading the quality of the accumulated image. To address these issues, this study proposes a novel image registration method. A hybrid approach combining a genetic algorithm (GA) and a constriction factor-based particle swarm optimization (CPSO), referred to as GA-CPSO, is applied for image registration before frame accumulation. The efficiency of this hybrid method is enhanced by incorporating a squared constriction factor (SCF), which speeds up the registration process and improves convergence towards optimal solutions. The GA identifies potential solutions, which are then refined by CPSO to expedite convergence. This methodology was validated on sequences of breast frames taken at light wavelengths of 600 nm, 620 nm, 670 nm, and 760 nm, and the accuracy improvement was confirmed by several mathematical assessments. It demonstrated high accuracy (99.93%) and reduced registration time. As a result, the GA-CPSO approach significantly improves the effectiveness of frame accumulation and enhances overall image quality. This study lays the groundwork for precise multispectral transmission image segmentation and classification. Full article
(This article belongs to the Special Issue Optical Imaging for Biomedical Applications)
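CPSO is standard particle swarm optimization with Clerc's constriction factor chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|, phi = c1 + c2 > 4, damping the velocity update. A minimal sketch on a toy 2D translation-registration objective; the paper's squared constriction factor and GA seeding are modifications not reproduced here:

import numpy as np

rng = np.random.default_rng(8)
true_shift = np.array([3.2, -1.7])

def cost(shift):
    """Toy registration objective: misalignment energy, minimized at true_shift."""
    return np.sum((shift - true_shift) ** 2)

c1 = c2 = 2.05
phi = c1 + c2
chi = 2 / abs(2 - phi - np.sqrt(phi**2 - 4 * phi))     # ~0.7298

n, dim = 20, 2
pos = rng.uniform(-10, 10, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = chi * (vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos))
    pos += vel
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print(np.round(gbest, 3))   # converges to ~[3.2, -1.7]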
20 pages, 7839 KiB  
Article
Normalized Difference Vegetation Index Prediction for Blueberry Plant Health from RGB Images: A Clustering and Deep Learning Approach
by A. G. M. Zaman, Kallol Roy and Jüri Olt
AgriEngineering 2024, 6(4), 4831-4850; https://doi.org/10.3390/agriengineering6040276 - 16 Dec 2024
Viewed by 801
Abstract
In precision agriculture (PA), monitoring individual plant health is crucial for optimizing yields and minimizing resources. The normalized difference vegetation index (NDVI), a widely used health indicator, typically relies on expensive multispectral cameras. This study introduces a method for predicting the NDVI of blueberry plants using RGB images and deep learning, offering a cost-effective alternative. To identify individual plant bushes, K-means and Gaussian Mixture Model (GMM) clustering were applied. RGB images were transformed into the HSL (hue, saturation, lightness) color space, and the hue channel was constrained using percentiles to exclude extreme values while preserving relevant plant hues. Further refinement was achieved through adaptive pixel-to-pixel distance filtering combined with the Davies–Bouldin Index (DBI) to eliminate pixels deviating from the compact cluster structure. This enhanced clustering accuracy and enabled precise NDVI calculations. A convolutional neural network (CNN) was trained and tested to predict NDVI-based health indices. The model achieved strong performance with mean squared losses of 0.0074, 0.0044, and 0.0021 for training, validation, and test datasets, respectively. The test dataset also yielded a mean absolute error of 0.0369 and a mean percentage error of 4.5851. These results demonstrate the NDVI prediction method’s potential for cost-effective, real-time plant health assessment, particularly in agrobotics. Full article
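Bush segmentation constrains the hue channel to a percentile band before the distance filtering. A minimal sketch of that hue gating (hue is defined identically in HSV and HSL, so skimage's HSV conversion suffices); the 5th/95th percentile bounds are illustrative, not the paper's cutoffs:

import numpy as np
from skimage.color import rgb2hsv

img = np.random.default_rng(9).random((64, 64, 3))     # stand-in RGB image in [0, 1]
hue = rgb2hsv(img)[..., 0]                             # hue channel in [0, 1]

lo, hi = np.percentile(hue, [5, 95])                   # drop extreme hue values
mask = (hue >= lo) & (hue <= hi)                       # pixels kept as plant candidates
print(f"kept {mask.mean():.1%} of pixels, hue band [{lo:.2f}, {hi:.2f}]")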
16 pages, 6401 KiB  
Article
Estimation of Water Interception of Winter Wheat Canopy Under Sprinkler Irrigation Using UAV Image Data
by Xueqing Zhou, Haijun Liu and Lun Li
Water 2024, 16(24), 3609; https://doi.org/10.3390/w16243609 - 15 Dec 2024
Viewed by 584
Abstract
Canopy water interception is a key parameter to study the hydrological cycle, water utilization efficiency, and energy balance in terrestrial ecosystems. Especially in sprinkler-irrigated farmlands, the canopy interception further influences field energy distribution and microclimate, then plant transpiration and photosynthesis, and finally crop yield and water productivity. To reduce field damage and increase measurement accuracy compared with traditional canopy water interception measurement, UAVs equipped with multispectral cameras were used to extract in situ crop canopy information. Based on the correlation coefficient (r), vegetative indices that are sensitive to canopy interception were screened out and then used to develop canopy interception models using linear regression (LR), random forest (RF), and back propagation neural network (BPNN) methods, and lastly these models were evaluated by root mean square error (RMSE) and mean relative error (MRE). Results show the canopy water interception is most closely related to the relative normalized difference vegetation index (R△NDVI), with an r of 0.76. The seven indices with the highest r values, in decreasing order, are R△NDVI, reflectance values of the blue band (Blue), reflectance values of the near-infrared band (Nir), three-band gradient difference vegetation index (TGDVI), difference vegetation index (DVI), normalized difference red edge index (NDRE), and soil-adjusted vegetation index (SAVI); these were chosen to develop canopy interception models. All the developed linear regression models based on three indices (R△NDVI, Blue, and NDRE), the RF model, and the BPNN model performed well in canopy water interception estimation (r: 0.53–0.76, RMSE: 0.18–0.27 mm, MRE: 21–27%) when the interception is less than 1.4 mm. The three methods underestimate the canopy interception by 18–32% when interception is higher than 1.4 mm, which could be due to the saturation of NDVI when the leaf area index is higher than 4.0. Because linear regression is easy to perform, the linear regression method with NDVI is recommended for canopy interception estimation of sprinkler-irrigated winter wheat. The proposed linear regression method and the R△NDVI index can further be used to estimate the canopy water interception of other plants as well as forest canopies. Full article
(This article belongs to the Special Issue Agricultural Water-Land-Plant System Engineering)
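The recommended model is a single-predictor linear regression of interception on the vegetation index, evaluated with RMSE and MRE. A minimal numpy sketch on synthetic data; the fitted coefficients are not the paper's:

import numpy as np

rng = np.random.default_rng(10)
r_ndvi = rng.uniform(0.2, 0.8, 60)                          # toy R-delta-NDVI values
interception = 1.6 * r_ndvi + 0.1 + rng.normal(0, 0.1, 60)  # mm, toy ground truth

slope, intercept = np.polyfit(r_ndvi, interception, 1)      # least-squares line
pred = slope * r_ndvi + intercept

rmse = np.sqrt(np.mean((interception - pred) ** 2))         # mm
mre = np.mean(np.abs(interception - pred) / interception) * 100
print(f"I = {slope:.2f} * VI + {intercept:.2f}; RMSE = {rmse:.2f} mm, MRE = {mre:.0f}%")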