
Search Results (8,168)

Search Parameters:
Keywords = image sensors

17 pages, 996 KiB  
Article
Geographically-Informed Modeling and Analysis of Platform Attitude Jitter in GF−7 Sub-Meter Stereo Mapping Satellite
by Haoran Xia, Xinming Tang, Fan Mo, Junfeng Xie and Xiang Li
ISPRS Int. J. Geo-Inf. 2024, 13(11), 413; https://doi.org/10.3390/ijgi13110413 - 15 Nov 2024
Abstract
The GF−7 satellite, China’s inaugural sub-meter-level stereoscopic mapping satellite, has been deployed for a wide range of applications, including natural resource investigation, environmental monitoring, fundamental surveying, and the development of global geospatial information resources. The satellite’s stable platform and reliable imaging systems are crucial for achieving high-quality imaging and precise attitude measurements. However, the satellite’s operation is affected by both internal and external factors, which induce vibrations in the satellite platform, thereby affecting image quality and mapping accuracy. To address this challenge, this paper proposes a novel method for constructing a satellite platform vibration model based on geographic location information. The model is developed by integrating composite data from star sensors and gyroscopes (gyro) with subsatellite point location data. The experimental methodology involves the composite processing of gyro data and star sensor optical axis angles, integration of the processed data through time-matching and normalization, and denoising of the integrated data, followed by trigonometric fitting to capture the periodic characteristics of platform vibrations. The positions of the satellite substellar points are determined from the satellite orbit data. A rigorous geometric imaging model is then used to construct a vibration model with geographic location correlation in combination with the satellite subsatellite point positions. The experimental results demonstrate the following: (1) Over the same temporal range, there is a significant convergence in the waveform similarities between the gyro data and the star sensor optical axis angles, indicating a strong correlation in the jitter information; (2) The platform vibration exhibits a robust correlation with the satellite’s geographic location along its orbit. 
Specifically, the model reveals that the GF-7 satellite experiences the maximum vibration amplitude between 5° S and 20° S latitude during its ascending phase, and the minimum vibration amplitude between 5° N and 20° N latitude during the descending phase. The model established in this study offers theoretical support for optimizing satellite attitude and mitigating platform vibrations.
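The trigonometric-fitting step this abstract describes can be illustrated with a toy sketch. This is not the authors' code: it assumes the dominant jitter frequency is already known (e.g., from a spectral peak), which makes the fit linear in the sine/cosine coefficients and solvable by ordinary least squares.

```python
import numpy as np

def fit_sinusoid(t, y, freq_hz):
    """Least-squares fit of y ≈ a + b*sin(2πft) + c*cos(2πft) for a known frequency f."""
    w = 2 * np.pi * freq_hz
    A = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
    (a, b, c), *_ = np.linalg.lstsq(A, y, rcond=None)
    amplitude = np.hypot(b, c)   # peak amplitude of the periodic jitter
    phase = np.arctan2(c, b)     # phase offset of the oscillation
    return a, amplitude, phase

# Synthetic attitude-jitter series: 0.5-unit oscillation at 1.2 Hz plus sensor noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 2000)
y = 0.1 + 0.5 * np.sin(2 * np.pi * 1.2 * t + 0.3) + rng.normal(0, 0.02, t.size)
offset, amp, _ = fit_sinusoid(t, y, 1.2)
```

With the frequency fixed, the recovered amplitude and offset match the simulated jitter to within the noise level.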
32 pages, 2457 KiB  
Systematic Review
Artificial Intelligence Applied to Support Agronomic Decisions for the Automatic Aerial Analysis Images Captured by UAV: A Systematic Review
by Josef Augusto Oberdan Souza Silva, Vilson Soares de Siqueira, Marcio Mesquita, Luís Sérgio Rodrigues Vale, Jhon Lennon Bezerra da Silva, Marcos Vinícius da Silva, João Paulo Barcelos Lemos, Lorena Nunes Lacerda, Rhuanito Soranz Ferrarezi and Henrique Fonseca Elias de Oliveira
Agronomy 2024, 14(11), 2697; https://doi.org/10.3390/agronomy14112697 - 15 Nov 2024
Abstract
Integrating advanced technologies such as artificial intelligence (AI) with traditional agricultural practices has changed how activities are developed in agriculture, with the aim of automating manual processes and improving the efficiency and quality of farming decisions. With the advent of deep learning models such as convolutional neural network (CNN) and You Only Look Once (YOLO), many studies have emerged given the need to develop solutions to problems and take advantage of all the potential that this technology has to offer. This systematic literature review aims to present an in-depth investigation of the application of AI in supporting the management of weeds, plant nutrition, water, pests, and diseases. This systematic review was conducted using the PRISMA methodology and guidelines. Data from different papers indicated that the main research interests comprise five groups: (a) type of agronomic problems; (b) type of sensor; (c) dataset treatment; (d) evaluation metrics and quantification; and (e) AI technique. The inclusion (I) and exclusion (E) criteria adopted in this study included: (I1) articles that obtained AI techniques for agricultural analysis; (I2) complete articles written in English; (I3) articles from specialized scientific journals; (E1) articles that did not describe the type of agrarian analysis used; (E2) articles that did not specify the AI technique used and that were incomplete or abstract; (E3) articles that did not present substantial experimental results. The articles were searched on the official pages of the main scientific bases: ACM, IEEE, ScienceDirect, MDPI, and Web of Science. The papers were categorized and grouped to show the main contributions of the literature to support agricultural decisions using AI. This study found that AI methods perform better in supporting weed detection, classification of plant diseases, and estimation of agricultural yield in crops when using images captured by Unmanned Aerial Vehicles (UAVs). 
Furthermore, CNN and YOLO, as well as their variations, present the best results for all groups presented. This review also points out the limitations and potential challenges of working with deep learning models, aiming to contribute to knowledge systematization and to benefit researchers and professionals regarding AI applications in mitigating agronomic problems.
(This article belongs to the Section Precision and Digital Agriculture)
14 pages, 2811 KiB  
Article
Carbon Dot Micelles Synthesized from Leek Seeds in Applications for Cobalt (II) Sensing, Metal Ion Removal, and Cancer Therapy
by Teh-Hua Tsai, Wei Lo, Hsiu-Yun Wang and Tsung-Lin Tsai
J. Funct. Biomater. 2024, 15(11), 347; https://doi.org/10.3390/jfb15110347 - 15 Nov 2024
Abstract
Popular photoluminescent (PL) nanomaterials, such as carbon dots, have attracted substantial attention from scientists due to their photophysical properties, biocompatibility, low cost, and diverse applicability. Carbon dots have been used in sensors, cell imaging, and cancer therapy. Leek seeds, which have anticancer, antimicrobial, and antioxidant functions, serve as a traditional Chinese medicine; however, they have not been studied as a precursor of carbon dots. In this study, leek seeds underwent a supercritical fluid extraction process. The leek seed extract was then carbonized using a dry heating method, followed by hydrolysis to form carbon dot micelles (CD-micelles). CD-micelles exhibited analyte-induced PL quenching against Co2+ through the static quenching mechanism, with the formation of self-assembled Co2+-CD-micelle sphere particles. In addition, CD-micelles extracted metal ions through liquid–liquid extraction, with removal efficiencies of >90% for Pb2+, Al3+, Fe3+, Cr3+, Pd2+, and Au3+. Moreover, CD-micelles exhibited ABTS•+ radical scavenging ability and cytotoxicity toward cisplatin-resistant lung cancer cells. CD-micelles killed cisplatin-resistant small-cell lung cancer cells in a dose-dependent manner, reducing the cancer cell survival rate to 12.8 ± 4.2%, a treatment effect similar to that of cisplatin. Consequently, CD-micelles functionalized as novel antioxidants show great potential as anticancer nanodrugs in cancer treatment.
(This article belongs to the Section Biomaterials for Cancer Therapies)
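Static PL quenching of the kind reported here is conventionally analyzed with the Stern–Volmer relation F0/F = 1 + Ks[Q]. The sketch below fits the quenching constant Ks from a synthetic quenching curve; the numbers are illustrative, not the paper's data.

```python
import numpy as np

def stern_volmer_constant(conc, intensity, f0):
    """Fit F0/F = 1 + Ks*[Q] through the origin; returns the quenching constant Ks."""
    q = np.asarray(conc, dtype=float)
    ratio = f0 / np.asarray(intensity, dtype=float)
    # Least-squares slope of (F0/F - 1) vs. quencher concentration.
    return float(q @ (ratio - 1) / (q @ q))

# Synthetic quenching curve: unquenched intensity F0 = 1000, Ks = 0.04 per µM.
f0 = 1000.0
conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])   # quencher (e.g., Co2+) in µM
intensity = f0 / (1 + 0.04 * conc)
ks = stern_volmer_constant(conc, intensity, f0)
```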

19 pages, 73341 KiB  
Article
A Comparative Study on the Use of Smartphone Cameras in Photogrammetry Applications
by Photis Patonis
Sensors 2024, 24(22), 7311; https://doi.org/10.3390/s24227311 - 15 Nov 2024
Abstract
The evaluation of smartphone camera technology for close-range photogrammetry includes assessing captured photos for 3D measurement. In this work, experiments are conducted on a range of smartphones to study distortion levels and accuracy performance in close-range photogrammetry applications. Analytical methods and specialized digital tools are employed to evaluate the results. OpenCV functions estimate the distortions introduced by the lens. Diagrams, evaluation images, statistical quantities, and indicators are utilized to compare the results among sensors. The accuracy achieved in photogrammetry is examined using photogrammetric bundle adjustment in a real-world application. Finally, generalized conclusions are drawn regarding this technology’s use in close-range photogrammetry applications.
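The lens distortion that OpenCV calibration estimates follows the Brown–Conrady model. A minimal numpy sketch of its radial terms (k1, k2) applied to normalized image coordinates is shown below; this is an illustration of the model, not the paper's calibration pipeline, and it omits the tangential terms.

```python
import numpy as np

def apply_radial_distortion(xy, k1, k2):
    """Brown–Conrady radial model on normalized coords: x' = x·(1 + k1·r² + k2·r⁴)."""
    xy = np.asarray(xy, dtype=float)
    r2 = np.sum(xy**2, axis=-1, keepdims=True)   # squared radius from principal point
    return xy * (1 + k1 * r2 + k2 * r2**2)

# Three normalized image points; negative k1 gives barrel distortion (points pulled inward).
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.3, 0.4]])
distorted = apply_radial_distortion(pts, k1=-0.2, k2=0.05)
```

Calibration solves the inverse problem: given many observed grid points, it finds the k1, k2 (and other coefficients) that best explain their displacement.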

13 pages, 16117 KiB  
Article
A Stride Toward Wine Yield Estimation from Images: Metrological Validation of Grape Berry Number, Radius, and Volume Estimation
by Bernardo Lanza, Davide Botturi, Alessandro Gnutti, Matteo Lancini, Cristina Nuzzi and Simone Pasinetti
Sensors 2024, 24(22), 7305; https://doi.org/10.3390/s24227305 - 15 Nov 2024
Abstract
Yield estimation is a key topic in precision agriculture, especially for small fruits and in-field scenarios. This paper focuses on the metrological validation of a novel deep-learning model that robustly estimates both the number and the radii of grape berries in vineyards using color images, allowing the computation of the visible (and total) volume of grape clusters, which is necessary to reach the ultimate goal of estimating yield production. The proposed algorithm is validated by analyzing its performance on a custom dataset. The number of berries, their mean radius, and the grape cluster volume are converted to millimeters and compared to reference values obtained through manual measurements. The validation experiment also analyzes the uncertainties of the parameters. Results show that the algorithm can reliably estimate the number (MPE = 5%, σ = 6%) and the radius of the visible portion of the grape clusters (MPE = 0.8%, σ = 7%). Instead, the volume estimated in px³ results in an MPE = 0.4% with σ = 21%; thus, the corresponding volume in mm³ is affected by high uncertainty. This analysis highlighted that half of the total uncertainty on the volume is due to the camera–object distance d and the parameter R used to account for the proportion of visible grapes with respect to the total grapes in the grape cluster. This issue is mostly due to the absence of a reliable depth measure between the camera and the grapes, which could be overcome by using depth sensors in combination with color images. Despite being preliminary, the results prove that the model and the metrological analysis are a remarkable advancement toward a reliable approach for directly estimating yield from 2D pictures in the field.
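The large volume uncertainty follows directly from first-order error propagation: for a sphere, V = 4/3·π·r³, so σV/V ≈ 3·σr/r, and a 7% radius spread alone inflates to about 21% in volume, consistent with the figures above. A small sketch with hypothetical berry numbers (not the paper's data):

```python
import math

def sphere_volume_with_uncertainty(r, sigma_r):
    """V = 4/3·π·r³ with first-order propagation: σV/V ≈ 3·σr/r."""
    v = 4.0 / 3.0 * math.pi * r**3
    sigma_v = v * 3.0 * (sigma_r / r)
    return v, sigma_v

# Hypothetical berry: radius 8 mm known to ±0.56 mm (a 7% relative spread).
v, sigma_v = sphere_volume_with_uncertainty(8.0, 0.56)
```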

17 pages, 2380 KiB  
Article
Nondestructive Detection of Litchi Stem Borers Using Multi-Sensor Data Fusion
by Zikun Zhao, Sai Xu, Huazhong Lu, Xin Liang, Hongli Feng and Wenjing Li
Agronomy 2024, 14(11), 2691; https://doi.org/10.3390/agronomy14112691 - 15 Nov 2024
Abstract
To enhance lychee quality assessment and address inconsistencies in post-harvest pest detection, this study presents a multi-source fusion approach combining hyperspectral imaging, X-ray imaging, and visible/near-infrared (Vis/NIR) spectroscopy. Traditional single-sensor methods are limited in detecting pest damage, particularly in lychees with complex skins, as they often fail to capture both external and internal fruit characteristics. By integrating multiple sensors, our approach overcomes these limitations, offering a more accurate and robust detection system. Significant differences were observed between pest-free and infested lychees. Pest-free lychees exhibited higher hardness, soluble sugars (11% higher in flesh, 7% higher in peel), vitamin C (50% higher in flesh, 2% higher in peel), polyphenols, anthocyanins, and ORAC values (26%, 9%, and 14% higher, respectively). The Vis/NIR data processed with SG+SNV+CARS yielded a partial least squares regression (PLSR) model with an R² of 0.82, an RMSE of 0.18, and accuracy of 89.22%. The hyperspectral model, using SG+MSC+SPA, achieved an R² of 0.69, an RMSE of 0.23, and 81.74% accuracy, while the X-ray method with support vector regression (SVR) reached an R² of 0.69, an RMSE of 0.22, and 76.25% accuracy. Through feature-level fusion, Recursive Feature Elimination with Cross-Validation (RFECV), and dimensionality reduction using PCA, we optimized hyperparameters and developed a Random Forest model. This model achieved 92.39% accuracy in pest detection, outperforming the individual methods by 3.17%, 10.25%, and 16.14%, respectively. The multi-source fusion approach also improved the overall accuracy by 4.79%, highlighting the critical role of sensor fusion in enhancing pest detection and supporting the development of automated non-destructive systems for lychee stem borer detection.
(This article belongs to the Section Precision and Digital Agriculture)
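The SNV step in the SG+SNV+CARS preprocessing chain is a simple per-spectrum normalization that removes baseline offset and multiplicative scatter effects. A minimal numpy sketch (illustrative, not the authors' code):

```python
import numpy as np

def snv(spectra):
    """Standard Normal Variate: each spectrum scaled to zero mean, unit std dev."""
    x = np.asarray(spectra, dtype=float)
    mean = x.mean(axis=1, keepdims=True)
    std = x.std(axis=1, keepdims=True)
    return (x - mean) / std

# Two toy Vis/NIR spectra with the same shape but different baseline and scale:
raw = np.array([[1.0, 2.0, 3.0, 4.0],
                [10.0, 20.0, 30.0, 40.0]])
corrected = snv(raw)
```

After SNV the two spectra become identical, which is exactly the scatter/baseline invariance the preprocessing aims for.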

25 pages, 2899 KiB  
Article
Learning Omni-Dimensional Spatio-Temporal Dependencies for Millimeter-Wave Radar Perception
by Hang Yan, Yongji Li, Luping Wang and Shichao Chen
Remote Sens. 2024, 16(22), 4256; https://doi.org/10.3390/rs16224256 - 15 Nov 2024
Abstract
Reliable environmental perception capabilities are a prerequisite for achieving autonomous driving. Cameras and LiDAR are sensitive to illumination and weather conditions, while millimeter-wave radar avoids these issues. Existing models rely heavily on image-based approaches, which may not be able to fully characterize radar sensor data or efficiently further utilize them for perception tasks. This paper rethinks the approach to modeling radar signals and proposes a novel U-shaped multilayer perceptron network (U-MLPNet) that aims to enhance the learning of omni-dimensional spatio-temporal dependencies. Our method involves innovative signal processing techniques, including a 3D CNN for spatio-temporal feature extraction and an encoder–decoder framework with cross-shaped receptive fields specifically designed to capture the sparse and non-uniform characteristics of radar signals. We conducted extensive experiments using a diverse dataset of urban driving scenarios to characterize the sensor’s performance in multi-view semantic segmentation and object detection tasks. Experiments showed that U-MLPNet achieves competitive performance against state-of-the-art (SOTA) methods, improving the mAP by 3.0% and mDice by 2.7% in RD segmentation and AR and AP by 1.77% and 2.03%, respectively, in object detection. These improvements signify an advancement in radar-based perception for autonomous vehicles, potentially enhancing their reliability and safety across diverse driving conditions.
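The segmentation quality above is reported via mDice. The Dice coefficient for a single class is 2·|A ∩ B| / (|A| + |B|) over boolean masks; a minimal sketch (illustrative, not the U-MLPNet evaluation code):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice = 2·|A ∩ B| / (|A| + |B|) for boolean segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, target).sum() / denom

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
score = dice_coefficient(pred, target)
```

mDice is then the mean of this score across classes (and typically across samples).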

15 pages, 4260 KiB  
Article
Microwave-Assisted Synthesis of N, S Co-Doped Carbon Quantum Dots for Fluorescent Sensing of Fe(III) and Hydroquinone in Water and Cell Imaging
by Zhaochuan Yu, Chao Deng, Wenhui Ma, Yuqian Liu, Chao Liu, Tingwei Zhang and Huining Xiao
Nanomaterials 2024, 14(22), 1827; https://doi.org/10.3390/nano14221827 - 14 Nov 2024
Abstract
The detection of heavy metal ions and organic pollutants in water sources remains a critical challenge due to their detrimental effects on human health and the environment. Herein, a nitrogen and sulfur co-doped carbon quantum dot (NS-CQDs) fluorescent sensor was developed using a microwave-assisted carbonization method for the detection of Fe3+ ions and hydroquinone (HQ) in aqueous solutions. NS-CQDs exhibit excellent optical properties, enabling sensitive detection of Fe3+ and HQ, with detection limits as low as 3.40 and 0.96 μM. Notably, with the alternating introduction of Fe3+ and HQ, NS-CQDs exhibit significant fluorescence (FL) quenching and recovery properties. Based on this property, a reliable “on-off-on” detection mechanism was established, enabling continuous and reversible detection of Fe3+ and HQ. Furthermore, the low cytotoxicity of NS-CQDs was confirmed through successful imaging of HeLa cells, indicating their potential for real-time intracellular detection of Fe3+ and HQ. This work not only provides a green and rapid synthesis strategy for CQDs but also highlights their versatility as fluorescent probes for environmental monitoring and bioimaging applications.
(This article belongs to the Special Issue Nanomaterials in Electrochemical Electrode and Electrochemical Sensor)
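Detection limits like the 3.40 and 0.96 μM quoted above are conventionally derived from a linear calibration curve as LOD = 3.3·σ_blank / slope. A sketch with synthetic numbers (not the paper's calibration data):

```python
import numpy as np

def limit_of_detection(conc, signal, sigma_blank):
    """LOD = 3.3·σ_blank / m, where m is the slope of the linear calibration curve."""
    slope, _intercept = np.polyfit(conc, signal, 1)
    return 3.3 * sigma_blank / slope

# Synthetic calibration: sensor response vs. analyte concentration in µM.
conc = np.array([0.0, 2.0, 4.0, 8.0, 16.0])
signal = 5.0 + 12.0 * conc        # hypothetical slope: 12 signal units per µM
lod = limit_of_detection(conc, signal, sigma_blank=4.0)
```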

28 pages, 3209 KiB  
Article
DESAT: A Distance-Enhanced Strip Attention Transformer for Remote Sensing Image Super-Resolution
by Yujie Mao, Guojin He, Guizhou Wang, Ranyu Yin, Yan Peng and Bin Guan
Remote Sens. 2024, 16(22), 4251; https://doi.org/10.3390/rs16224251 - 14 Nov 2024
Abstract
Transformer-based methods have demonstrated impressive performance in image super-resolution tasks. However, when applied to large-scale Earth observation images, the existing transformers encounter two significant challenges: (1) insufficient consideration of spatial correlation between adjacent ground objects; and (2) performance bottlenecks due to the underutilization of the upsample module. To address these issues, we propose a novel distance-enhanced strip attention transformer (DESAT). The DESAT integrates distance priors, easily obtainable from remote sensing images, into the strip window self-attention mechanism to capture spatial correlations more effectively. To further enhance the transfer of deep features into high-resolution outputs, we designed an attention-enhanced upsample block, which combines the pixel shuffle layer with an attention-based upsample branch implemented through the overlapping window self-attention mechanism. Additionally, to better simulate real-world scenarios, we constructed a new cross-sensor super-resolution dataset using Gaofen-6 satellite imagery. Extensive experiments on both simulated and real-world remote sensing datasets demonstrate that the DESAT outperforms state-of-the-art models by up to 1.17 dB along with superior qualitative results. Furthermore, the DESAT achieves more competitive performance in real-world tasks, effectively balancing spatial detail reconstruction and spectral transform, making it highly suitable for practical remote sensing super-resolution applications.
(This article belongs to the Special Issue Deep Learning for Remote Sensing Image Enhancement)
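Super-resolution gains like the "up to 1.17 dB" above are typically measured as PSNR differences, with PSNR = 10·log10(MAX² / MSE). A minimal sketch (illustrative, not the DESAT evaluation code):

```python
import numpy as np

def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio in dB: 10·log10(MAX² / MSE)."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(estimate, float)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(max_val**2 / mse)

ref = np.zeros((4, 4))
est = np.full((4, 4), 0.1)   # uniform 0.1 error on a [0, 1]-range image
value = psnr(ref, est)
```

A uniform 0.1 error gives MSE = 0.01 and hence 20 dB; a "+1.17 dB" model produces a proportionally smaller MSE on the same test set.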
22 pages, 8304 KiB  
Article
Application of Imaging Algorithms for Gas–Water Two-Phase Array Fiber Holdup Meters in Horizontal Wells
by Ao Li, Haimin Guo, Yue Niu, Xin Lu, Yiran Zhang, Haoxun Liang, Yongtuo Sun, Yuqing Guo and Dudu Wang
Sensors 2024, 24(22), 7285; https://doi.org/10.3390/s24227285 - 14 Nov 2024
Abstract
The flow dynamics of low-yield horizontal wells demonstrate considerable complexity and unpredictability, chiefly attributable to the structural attributes of the wellbore and the interplay of gas–water two-phase flow. In horizontal wellbores, precisely predicting flow patterns using conventional approaches is often problematic. Consequently, accurate monitoring and analysis of water holdup in gas–water two-phase flows are essential. This study performs a gas–water two-phase flow simulation experiment under diverse total flow and water cut conditions, utilizing air and tap water to represent downhole gas and formation water, respectively. The experiment relies on the measurement principles of an array fiber holdup meter (GAT) and the response characteristics of the sensors. In the experiment, GAT was utilized for real-time water holdup measurement, and the acquired sensor data were analyzed using three interpolation algorithms: simple linear interpolation, inverse distance weighted interpolation, and Gaussian radial basis function interpolation. The results were subsequently post-processed and visualized with MATLAB software (version 2020), generating two-dimensional representations of water holdup in the wellbore. The study findings demonstrate that, at total flows of 300 m³/d and 500 m³/d, the simple linear interpolation approach yields superior accuracy in water holdup calculations, with imaging outcomes closely aligning with the actual gas–water flow patterns and the authentic gas–water distribution. As total flow and water cut increase, the gas–water two-phase flow progressively shifts from stratified smooth flow to stratified wavy flow.
Under these higher-flow conditions, the Gaussian radial basis function and inverse distance weighted interpolation algorithms exhibit superior accuracy in water holdup calculations, effectively representing the fluctuating features of the gas–water interface and yielding imaging outcomes that align more closely with experimentally observed gas–water flow patterns.
(This article belongs to the Section Physical Sensors)
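Of the three algorithms compared, inverse distance weighted (IDW) interpolation is the simplest to state: the holdup at a query point is a weighted average of the sensor readings, with weights 1/dᵖ. A minimal 2-D sketch with hypothetical probe positions and readings (not the study's data or code):

```python
import numpy as np

def idw_interpolate(points, values, query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at `query` from scattered sensor readings."""
    points = np.asarray(points, dtype=float)
    d = np.linalg.norm(points - np.asarray(query, dtype=float), axis=1)
    if np.any(d < eps):                  # query coincides with a sensor: return it exactly
        return float(np.asarray(values)[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.dot(w, values) / w.sum())

# Four hypothetical fiber-probe water-holdup readings on a wellbore cross-section.
sensors = [(0.0, 1.0), (0.0, -1.0), (1.0, 0.0), (-1.0, 0.0)]
holdup = [0.9, 0.2, 0.5, 0.5]
center = idw_interpolate(sensors, holdup, (0.0, 0.0))
```

Evaluating this on a pixel grid over the pipe cross-section yields the kind of two-dimensional holdup image the study visualizes.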

17 pages, 23351 KiB  
Article
FPGA Readout for Frequency-Multiplexed Array of Micromechanical Resonators for Sub-Terahertz Imaging
by Leonardo Gregorat, Marco Cautero, Alessandro Pitanti, Leonardo Vicarelli, Monica La Mura, Alvise Bagolini, Rudi Sergo, Sergio Carrato and Giuseppe Cautero
Sensors 2024, 24(22), 7276; https://doi.org/10.3390/s24227276 - 14 Nov 2024
Abstract
Field programmable gate arrays (FPGAs) have not only enhanced traditional sensing methods, such as pixel detection (CCD and CMOS), but also enabled the development of innovative approaches with significant potential for particle detection. This is particularly relevant in terahertz (THz) ray detection, where microbolometer-based focal plane arrays (FPAs) using microelectromechanical (MEMS) resonators are among the most promising solutions. Designing high-performance, high-pixel-density sensors is challenging without FPGAs, which are crucial for deterministic parallel processing, fast ADC/DAC control, and handling large data throughput. This paper presents a MEMS-resonator detector, fully managed via an FPGA, capable of controlling pixel excitation and tracking resonance-frequency shifts due to radiation using parallel digital lock-in amplifiers. The innovative FPGA architecture, based on a lock-in matrix, enhances the open-loop readout technique by a factor of 32. Measurements were performed on a frequency-multiplexed, 256-pixel sensor designed for imaging applications.
(This article belongs to the Special Issue Application of FPGA-Based Sensor Systems)
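Each digital lock-in in such a readout mixes the pixel signal with in-phase and quadrature references at the excitation frequency and low-pass filters the products to recover the amplitude at that frequency. A toy float-based sketch using a plain mean as the low-pass stage (illustrative only; the paper's FPGA implementation is fixed-point and streaming):

```python
import numpy as np

def lockin_amplitude(signal, fs, f_ref):
    """Recover the amplitude of the f_ref component via sin/cos mixing + averaging."""
    t = np.arange(len(signal)) / fs
    i = np.mean(signal * np.sin(2 * np.pi * f_ref * t))   # in-phase product
    q = np.mean(signal * np.cos(2 * np.pi * f_ref * t))   # quadrature product
    return 2.0 * np.hypot(i, q)   # factor 2: mixing halves the component's amplitude

# One pixel carrier at 5 kHz (amplitude 0.8) plus an interfering tone at another frequency.
fs = 100_000.0
t = np.arange(0, 0.1, 1 / fs)
sig = 0.8 * np.sin(2 * np.pi * 5_000.0 * t + 0.7) + 0.3 * np.sin(2 * np.pi * 12_345.0 * t)
amp = lockin_amplitude(sig, fs, 5_000.0)
```

The off-frequency tone averages out, which is why many such lock-ins can share one frequency-multiplexed line.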

21 pages, 8968 KiB  
Article
Lightning Detection Using GEO-KOMPSAT-2A/Advanced Meteorological Imager and Ground-Based Lightning Observation Sensor LINET Data
by Seung-Hee Lee and Myoung-Seok Suh
Remote Sens. 2024, 16(22), 4243; https://doi.org/10.3390/rs16224243 - 14 Nov 2024
Abstract
In this study, GEO-KOMPSAT-2A/Advanced Meteorological Imager (GK2A/AMI) and Lightning NETwork (LINET) data were used for lightning detection. A total of 20 lightning cases from the summer of 2020–2021 were selected, with 14 cases for training and 6 for validation to develop lightning detection algorithms. Since these two datasets have different spatiotemporal resolutions, spatiotemporal matching was performed to use them together. To find the optimal lightning detection algorithm, we designed 25 experiments and selected the best experiment by evaluating the detection level. Although the best experiment had a high POD (>0.9) before post-processing, it also showed over-detection of lightning. To minimize the over-detection problem, statistical and Region-Growing post-processing methods were applied, improving the detection performance (FAR: −19.14~−24.32%; HSS: +76.92~+86.41%; Bias: −59.3~−66.9%). Also, a sensitivity analysis of the collocation criterion between the two datasets showed that the detection level improved when the spatial criterion was relaxed. These results suggest that detecting lightning in mid-latitude regions, including the Korean Peninsula, is possible by using GK2A/AMI data. However, reducing the variability in detection performance and the high FAR associated with anvil clouds and addressing the parallax problem of thunderstorms in mid-latitude regions are necessary to improve the detection performance.
(This article belongs to the Section Atmospheric Remote Sensing)
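The verification scores quoted above (POD, FAR, HSS, Bias) all derive from a 2×2 contingency table of hits, misses, false alarms, and correct negatives. A sketch with hypothetical counts (not the study's numbers):

```python
def verification_scores(hits, misses, false_alarms, correct_negatives):
    """Standard dichotomous detection scores from a 2×2 contingency table."""
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false-alarm ratio
    bias = (hits + false_alarms) / (hits + misses)  # frequency bias (>1 = over-detection)
    n = hits + misses + false_alarms + correct_negatives
    # Expected number correct by chance, for the Heidke skill score.
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_negatives + misses) * (correct_negatives + false_alarms)) / n
    hss = (hits + correct_negatives - expected) / (n - expected)
    return pod, far, bias, hss

pod, far, bias, hss = verification_scores(80, 20, 40, 860)
```

Post-processing that removes false alarms lowers FAR and Bias while raising HSS, which is the pattern the abstract reports.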

13 pages, 333 KiB  
Review
A Comprehensive Review of Advanced Diagnostic Techniques for Endometriosis: New Approaches to Improving Women’s Well-Being
by Greta Kaspute, Egle Bareikiene, Urte Prentice, Ilona Uzieliene, Diana Ramasauskaite and Tatjana Ivaskiene
Medicina 2024, 60(11), 1866; https://doi.org/10.3390/medicina60111866 - 14 Nov 2024
Abstract
According to the World Health Organization (WHO), endometriosis affects roughly 10% (190 million) of reproductive-age women and girls worldwide (2023). The diagnostic challenge in endometriosis lies in the limited value of clinical tools, making it crucial to address diagnostic complexities in patients with suggestive symptoms and inconclusive clinical or imaging findings. Saliva micro ribonucleic acid (miRNA) signatures, nanotechnologies, and artificial intelligence (AI) have opened up new perspectives on endometriosis diagnosis. The aim of this article is to review innovations at the intersection of new technology and AI in diagnosing endometriosis. Aberrant epigenetic regulation, such as DNA methylation in endometriotic cells (ECs), is associated with the pathogenesis and development of endometriosis. By leveraging nano-sized sensors, biomarkers specific to endometriosis can be detected with high sensitivity and specificity. A chemotherapeutic agent carried in an LDL-like nano-emulsion targets rapidly dividing cells in patients with endometriosis. One developed sensor demonstrated effective carbohydrate antigen 19-9 detection within the normal physiological range. Researchers have also developed magnetic nanoparticles composed of iron oxide. As novel methods continue to emerge at the forefront of endometriosis diagnostic research, it becomes imperative to explore the impact of nanotechnology and AI on the development of innovative diagnostic solutions.
(This article belongs to the Section Obstetrics and Gynecology)
21 pages, 10593 KiB  
Article
Improved Phase Gradient Autofocus Method for Multi-Baseline Circular Synthetic Aperture Radar Three-Dimensional Imaging
by Shiliang Yi, Hongtu Xie, Yuanjie Zhang, Zhitao Wu, Mengfan Ge, Nannan Zhu, Zheng Lu and Pengcheng Qin
Remote Sens. 2024, 16(22), 4242; https://doi.org/10.3390/rs16224242 - 14 Nov 2024
Abstract
Multi-baseline circular synthetic aperture radar (MB CSAR) can be applied to obtain a three-dimensional (3D) image of the observed scene. However, phase errors caused by radar platform motion or atmospheric propagation delay restrict its 3D imaging capability, and because positioning sensors have limited accuracy, phase error calibration of MB CSAR data is an essential step in the 3D imaging procedure. Phase gradient autofocus (PGA) is widely utilized to estimate the phase errors but suffers from shifts in the direction perpendicular to the line of sight and long iteration times in some sub-apertures. In this paper, an improved PGA method for MB CSAR 3D imaging is proposed, which suppresses the shifts and reduces computation time. The method is based on phase gradient estimation, but the prominent units are selected with an energy criterion. Weighted phase gradient estimation is then applied to suppress the influence of prominent units with poor quality. Finally, a contrast criterion is adopted to reach faster convergence. Experimental results on measured MB CSAR data (the Gotcha dataset) demonstrate the validity and feasibility of the proposed phase error calibration method for MB CSAR 3D imaging.
(This article belongs to the Section Engineering Remote Sensing)
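The abstract above builds on the classic PGA kernel: center the dominant scatterer in each range bin, estimate the azimuth phase gradient from adjacent-pulse products summed across bins, integrate, and detrend. Below is a minimal NumPy sketch of that baseline loop with a simple energy-based selection of prominent bins; the function name, the median-energy criterion, and the amplitude-implicit weighting are illustrative assumptions, not the paper's exact weighting scheme or contrast-based stopping rule, and the full image-domain/FFT machinery of standard PGA is omitted for brevity.

```python
import numpy as np

def pga_phase_error(data, n_iter=5):
    """Estimate a 1-D azimuth phase error from complex data
    (rows = range bins, cols = azimuth pulses) with a basic PGA-style loop."""
    data = data.copy()
    n_az = data.shape[1]
    total_phi = np.zeros(n_az)
    for _ in range(n_iter):
        # Circularly shift the strongest sample of each bin to the center
        # (full PGA does this in the image domain before an FFT; simplified here).
        shifted = np.empty_like(data)
        for r, row in enumerate(data):
            peak = np.argmax(np.abs(row))
            shifted[r] = np.roll(row, n_az // 2 - peak)
        # Energy criterion: keep only the strongest range bins.
        energy = np.sum(np.abs(shifted) ** 2, axis=1)
        keep = shifted[energy >= np.median(energy)]
        # Phase-gradient estimate across selected bins; strong bins dominate
        # the sum, so the weighting is implicit in the amplitudes.
        prod = np.sum(np.conj(keep[:, :-1]) * keep[:, 1:], axis=0)
        dphi = np.angle(prod)
        # Integrate the gradient and remove the linear trend
        # (a linear phase is only a shift, not a defocus).
        phi = np.concatenate(([0.0], np.cumsum(dphi)))
        phi -= np.linspace(phi[0], phi[-1], n_az)
        data *= np.exp(-1j * phi)[None, :]
        total_phi += phi
    return total_phi
```

On synthetic data with a known quadratic phase error and pre-centered scatterers, the loop recovers the detrended error in the first pass and subsequent passes contribute nothing, which is the expected fixed-point behavior.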
20 pages, 6095 KiB  
Article
MSANet: LiDAR-Camera Online Calibration with Multi-Scale Fusion and Attention Mechanisms
by Fengguang Xiong, Zhiqiang Zhang, Yu Kong, Chaofan Shen, Mingyue Hu, Liqun Kuang and Xie Han
Remote Sens. 2024, 16(22), 4233; https://doi.org/10.3390/rs16224233 - 14 Nov 2024
Abstract
Sensor data fusion is increasingly crucial in the field of autonomous driving, and LiDAR-camera fusion has become a prevalent research topic. However, accurate calibration between the different modalities is essential for effective fusion. Current calibration methods often depend on specific targets or manual intervention, which is time-consuming and limits generalization. To address these issues, we introduce MSANet: LiDAR-Camera Online Calibration with Multi-Scale Fusion and Attention Mechanisms, an end-to-end deep learning-based online calibration network for inferring the 6-degree-of-freedom (6-DOF) rigid-body transformation between 2D images and 3D point clouds. By fusing multi-scale features, we obtain feature representations rich in detail and semantic information. An attention module performs feature correlation across the two modalities to complete feature matching. Rather than regressing the precise parameters directly, MSANet corrects deviations online, aligning the initial calibration with the ground truth. Extensive experiments on the KITTI dataset demonstrate that our method performs well across various scenarios; in particular, the average translation prediction error improves by 2.03 cm over the best result among the compared methods.
(This article belongs to the Section Remote Sensing Image Processing)
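The 6-DOF extrinsic the abstract describes is what maps LiDAR points into the camera frame before projection; online calibration networks of this kind refine an initial extrinsic so that projected points line up with image features. A minimal sketch of that core projection step is below; the function name, matrix layout (4x4 homogeneous extrinsic, 3x3 pinhole intrinsic), and depth threshold are generic illustrative choices, not MSANet's actual interface.

```python
import numpy as np

def project_lidar_to_image(points, T_cam_lidar, K):
    """Project Nx3 LiDAR points into pixel coordinates.

    T_cam_lidar : 4x4 homogeneous extrinsic (LiDAR frame -> camera frame)
    K           : 3x3 pinhole camera intrinsic matrix
    Returns (uv, in_front): Mx2 pixel coords and an N-long visibility mask.
    """
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    cam = (T_cam_lidar @ pts_h.T)[:3]     # points in the camera frame
    in_front = cam[2] > 1e-6              # discard points behind the camera
    uv = K @ cam[:, in_front]
    uv = uv[:2] / uv[2]                   # perspective divide
    return uv.T, in_front
```

With an identity extrinsic, a point 2 m straight ahead lands on the principal point; an online calibrator perturbs `T_cam_lidar` by a small predicted SE(3) correction and re-projects until the alignment error is minimized.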