Search Results (250)

Search Parameters:
Keywords = shot noise

31 pages, 4045 KiB  
Article
A Stochastic Model for Traffic Incidents and Free Flow Recovery in Road Networks
by Fahem Mouhous, Djamil Aissani and Nadir Farhi
Mathematics 2025, 13(3), 520; https://doi.org/10.3390/math13030520 - 4 Feb 2025
Viewed by 301
Abstract
This study addresses the disruptive impact of incidents on road networks, which often lead to traffic congestion. If not promptly managed, congestion can propagate and intensify over time, significantly delaying the recovery of free-flow conditions. We propose an enhanced model based on an exponential decay of the time required for free flow recovery between incident occurrences. Our approach integrates a shot noise process, assuming that incidents follow a non-homogeneous Poisson process. The increases in recovery time following incidents are modeled using exponential and gamma distributions. We derive key performance metrics, providing insights into congestion risk and the unlocking phenomenon, including the probability of the first passage time for our process to exceed a predefined congestion threshold. This probability is analyzed using two methods: (1) an exact simulation approach and (2) an analytical approximation technique. Utilizing the analytical approximation, we estimate critical extreme quantities, such as the minimum incident clearance rate, the minimum intensity of recovery time increases, and the maximum intensity of incident occurrences required to avoid exceeding a specified congestion threshold with a given probability. These findings offer valuable tools for managing and mitigating congestion risks in road networks. Full article
(This article belongs to the Section E: Applied Mathematics)
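To make the shot-noise construction above concrete, here is a minimal Monte Carlo sketch of a process of this general type: incidents arrive according to a non-homogeneous Poisson process (simulated by thinning), each incident adds an exponentially distributed increment to the required recovery time, and the process decays exponentially between incidents. The intensity function, parameter values, and congestion threshold are illustrative assumptions, not the paper's calibration.

```python
"""Toy Monte Carlo sketch of a shot-noise congestion model (not the authors' code)."""
import numpy as np

rng = np.random.default_rng(0)
LAM0, MU, BETA = 0.4, 1.0, 0.3          # incidents/hour, mean jump 1/MU hours, decay rate
THRESHOLD, HORIZON, N_RUNS = 5.0, 72.0, 10_000

def lam(t):
    """Assumed time-varying incident intensity (daily cycle), bounded by 1.5 * LAM0."""
    return LAM0 * (1.0 + 0.5 * np.sin(2 * np.pi * t / 24.0))

def first_passage_exceeds(threshold, horizon):
    """Exact path simulation: does the shot-noise process exceed `threshold`
    before `horizon`? Exceedance can only happen at jump instants, since the
    process only decays between incidents."""
    t, t_last, x = 0.0, 0.0, 0.0
    lam_max = 1.5 * LAM0                     # upper bound for thinning
    while True:
        t += rng.exponential(1.0 / lam_max)  # next candidate event time
        if t > horizon:
            return False
        if rng.random() > lam(t) / lam_max:  # thinning: reject candidate
            continue
        x = x * np.exp(-BETA * (t - t_last)) + rng.exponential(1.0 / MU)
        t_last = t
        if x > threshold:                    # first passage above congestion level
            return True

hits = sum(first_passage_exceeds(THRESHOLD, HORIZON) for _ in range(N_RUNS))
print(f"P(first passage < {HORIZON} h) ≈ {hits / N_RUNS:.3f}")
```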
16 pages, 467 KiB  
Article
A Zero-Shot Framework for Low-Resource Relation Extraction via Distant Supervision and Large Language Models
by Peisheng Han, Geng Liang and Yongfei Wang
Electronics 2025, 14(3), 593; https://doi.org/10.3390/electronics14030593 - 2 Feb 2025
Viewed by 295
Abstract
While Large Language Models (LLMs) have significantly advanced various benchmarks in Natural Language Processing (NLP), the challenge of low-resource tasks persists, primarily due to the scarcity of data and difficulties in annotation. This study introduces LoRE, a framework designed for zero-shot relation extraction in low-resource settings, which blends distant supervision with the powerful capabilities of LLMs. LoRE addresses the challenges of data sparsity and noise inherent in traditional distant supervision methods, enabling high-quality relation extraction without requiring extensive labeled data. By leveraging LLMs for zero-shot open information extraction and incorporating heuristic entity and relation alignment with semantic disambiguation, LoRE enhances the accuracy and relevance of the extracted data. Low-resource tasks refer to scenarios where labeled data are extremely limited, making traditional supervised learning approaches impractical. This study aims to develop a robust framework that not only tackles these challenges but also demonstrates the theoretical and practical implications of zero-shot relation extraction. The Chinese Person Relationship Extraction (CPRE) dataset, developed under this framework, demonstrates LoRE’s proficiency in extracting person-related triples. The CPRE dataset consists of 1000 word pairs, capturing diverse semantic relationships. Extensive experiments on the CPRE, IPRE, and DuIE datasets show significant improvements in dataset quality and a reduction in manual annotation efforts. These findings highlight the potential of LoRE to advance both the theoretical understanding and practical applications of relation extraction in low-resource settings. Notably, the performance of LoRE on the manually annotated DuIE dataset attests to the quality of the CPRE dataset, rivaling that of manually curated datasets, and highlights LoRE’s potential for reducing the complexities and costs associated with dataset construction for zero-shot and low-resource tasks. Full article
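A minimal sketch of the overall idea (zero-shot open relation extraction with an LLM followed by heuristic relation alignment), not the authors' implementation; `call_llm`, the prompt, and the relation inventory are hypothetical placeholders.

```python
"""LLM-based zero-shot relation extraction with heuristic alignment (sketch only)."""
import json

RELATION_INVENTORY = {            # hypothetical target schema for alignment
    "spouse": {"wife", "husband", "married to", "spouse"},
    "parent": {"father", "mother", "parent of"},
    "colleague": {"coworker", "colleague", "works with"},
}

PROMPT = (
    "Extract all (head, relation, tail) triples about person relationships "
    "from the sentence below. Answer as a JSON list of 3-element lists.\n"
    "Sentence: {sentence}"
)

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in whatever LLM client is available")

def align_relation(surface: str) -> str | None:
    """Heuristic alignment: map an open-vocabulary relation phrase onto the schema."""
    surface = surface.lower().strip()
    for canonical, synonyms in RELATION_INVENTORY.items():
        if surface == canonical or any(s in surface for s in synonyms):
            return canonical
    return None                    # discard triples that cannot be disambiguated

def extract(sentence: str) -> list[tuple[str, str, str]]:
    raw = json.loads(call_llm(PROMPT.format(sentence=sentence)))
    triples = []
    for head, rel, tail in raw:
        canonical = align_relation(rel)
        if canonical is not None:
            triples.append((head, canonical, tail))
    return triples
```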
25 pages, 10469 KiB  
Article
Noise Analysis for Correlation-Assisted Direct Time-of-Flight
by Ayman Morsy, Jonathan Vrijsen, Jan Coosemans, Tuur Bruneel and Maarten Kuijk
Sensors 2025, 25(3), 771; https://doi.org/10.3390/s25030771 - 27 Jan 2025
Viewed by 369
Abstract
The development of a correlation-assisted direct time-of-flight (CA-dToF) pixel provides a novel solution for time-of-flight applications that combines low power consumption, robust ambient shot noise suppression, and a compact design. However, the pixel’s implementation introduces systematic errors, affecting its performance. We investigate the pixel’s robustness against various noise sources, including timing jitter, kTC noise, switching noise, and photon shot noise. Additionally, we address limitations such as the SPAD deadtime, and source follower gain mismatch and offset, identifying their impact on performance. The paper also proposes solutions to enhance the pixel’s overall reliability and to improve the pixel’s implementation. Full article
(This article belongs to the Special Issue State-of-the-Art Sensors Technologies in Belgium 2024-2025)
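For intuition on the photon shot noise term discussed above, the following toy Monte Carlo builds a direct time-of-flight histogram with Poisson-distributed signal and ambient counts and measures the resulting spread of a centroid-based timing estimate; bin width, photon budgets, and pulse shape are made-up numbers.

```python
"""Toy Monte Carlo of photon shot noise in a direct time-of-flight histogram."""
import numpy as np

rng = np.random.default_rng(1)
N_BINS, BIN_PS = 128, 100              # 128 TDC bins of 100 ps each (assumed)
TRUE_BIN, SIGMA_BINS = 40, 2.0         # laser return centred on bin 40
SIGNAL_PHOTONS, AMBIENT_PER_BIN = 50, 3.0

bins = np.arange(N_BINS)
pulse = np.exp(-0.5 * ((bins - TRUE_BIN) / SIGMA_BINS) ** 2)
signal_rate = SIGNAL_PHOTONS * pulse / pulse.sum()   # expected signal photons per bin

errors_ps = []
for _ in range(2000):
    counts = rng.poisson(signal_rate + AMBIENT_PER_BIN)     # shot noise: Poisson counts
    idx = np.arange(TRUE_BIN - 8, TRUE_BIN + 8)             # window around the known peak
    window = counts[idx]                                    # (argmax search omitted for brevity)
    centroid = np.sum(idx * window) / window.sum()
    errors_ps.append((centroid - TRUE_BIN) * BIN_PS)

print(f"timing std due to shot noise ≈ {np.std(errors_ps):.1f} ps")
```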
27 pages, 1964 KiB  
Article
Zero-Shot Rolling Bearing Fault Diagnosis Based on Attribute Description
by Guorong Fan, Lijun Li, Yue Zhao, Hui Shi, Xiaoyi Zhang and Zengshou Dong
Electronics 2025, 14(3), 452; https://doi.org/10.3390/electronics14030452 - 23 Jan 2025
Viewed by 377
Abstract
Traditional fault diagnosis methods for rolling bearings rely on numerous labeled samples, which are difficult to obtain in engineering applications. Moreover, when unseen fault categories appear in the test set, these models fail to achieve accurate diagnoses, as those categories are not represented in the training data. To address these challenges, a zero-shot fault diagnosis model for rolling bearings is proposed, which realizes knowledge transfer from seen to unseen categories by constructing attribute information, thereby reducing the dependence on labeled samples. First, an attribute generation method, the Discrete Label Embedding Method (DLEM), based on word embedding and envelope analysis, is designed to generate fault attributes. Fault features are extracted using the Swin Transformer model. Then, the attributes and features are input into the proposed Distribution Consistency and Multi-modal Cross Alignment Variational Autoencoder (DCMCA-VAE), which is built on a Convolutional Residual SE-Attention Variational Autoencoder (CRS-VAE). The CRS-VAE replaces fully connected layers with convolutional layers and incorporates residual connections with the Squeeze-and-Excitation Joint Attention Mechanism (SE-JAM) for improved feature extraction. The DCMCA-VAE also incorporates a reconstruction alignment module with the proposed distribution consistency loss L_WT and multi-modal cross alignment loss L_MCA, which generates high-quality, class-discriminative features for classification. Across multiple noisy datasets, the model effectively distinguishes unseen categories and is more robust than competing models, achieving 100% classification accuracy on the SQ dataset and more than 85% on the CWRU dataset when unseen and seen categories appear simultaneously under noise interference. Full article
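The core zero-shot mechanism (recognising unseen classes purely from attribute descriptions) can be illustrated with a generic nearest-attribute classifier; the attribute table below is a made-up placeholder, not the paper's DLEM output.

```python
"""Generic attribute-based zero-shot classification via nearest attribute matching."""
import numpy as np

# Hypothetical attribute descriptions, one row per fault class, including a
# class never seen during training.
CLASS_ATTRIBUTES = {
    "healthy":    np.array([0.0, 0.0, 0.0, 1.0]),
    "inner_race": np.array([1.0, 0.0, 1.0, 0.0]),
    "outer_race": np.array([0.0, 1.0, 1.0, 0.0]),
    "ball_fault": np.array([1.0, 1.0, 0.0, 0.0]),   # unseen during training
}

def predict_zero_shot(attr_pred: np.ndarray) -> str:
    """Assign the class whose attribute vector is closest (cosine similarity)
    to the attribute vector predicted from the vibration-signal features."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(CLASS_ATTRIBUTES, key=lambda c: cos(attr_pred, CLASS_ATTRIBUTES[c]))

# A regressor trained on seen classes maps features -> attributes; unseen
# classes are then recognised purely from their attribute descriptions:
print(predict_zero_shot(np.array([0.9, 0.8, 0.1, 0.05])))   # -> "ball_fault"
```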
14 pages, 7866 KiB  
Article
The First Seismic Imaging of the Holy Cross Fault in the Łysogóry Region, Poland
by Eslam Roshdy, Artur Marciniak, Rafał Szaniawski and Mariusz Majdański
Appl. Sci. 2025, 15(2), 511; https://doi.org/10.3390/app15020511 - 7 Jan 2025
Viewed by 627
Abstract
The Holy Cross Mountains represent an isolated outcrop of Palaeozoic rocks located in the Trans-European Suture Zone, which is the boundary between the Precambrian East European Craton and the Phanerozoic mobile belts of South-Western Europe. Despite extensive structural history studies, high-resolution seismic profiling has not been applied to this region until now. This research introduces near-surface seismic imaging of the Holy Cross Fault, which separates two tectonic units with different stratigraphic and deformation histories. In our study, we utilize a carefully designed weight drop source survey with 5 m shot and receiver spacing and 4.5 Hz geophones. The imaging technique, combining seismic reflection profiling and travel time tomography, reveals detailed fault geometries down to 400 m. Precise data processing, including static corrections and noise attenuation, significantly enhanced the signal-to-noise ratio and seismic resolution. Furthermore, the paper discusses various fault imaging techniques and their shortcomings. The data reveal a complex network of intersecting fault strands, confirming the overall thrust geometry of the fault system, in agreement with the region’s tectonic evolution. These findings enhance understanding of the Holy Cross Mountains’ structural framework and provide valuable reference data for future studies of similar tectonic environments. Full article
(This article belongs to the Special Issue Earthquake Engineering and Seismic Risk)
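As a small illustration of the static corrections mentioned above, the sketch below applies a simple elevation static shift to a single trace; the datum, replacement velocity, elevations, and sample rate are invented numbers.

```python
"""Toy elevation static correction: shift a trace so source and receiver are
referenced to a common datum (illustrative only)."""
import numpy as np

DT = 0.001                 # sample interval [s]
V_REPLACEMENT = 2000.0     # replacement velocity for the near surface [m/s]
DATUM = 300.0              # reference datum elevation [m]

def static_shift_samples(src_elev, rec_elev):
    """Time shift (in samples) moving both source and receiver to the datum."""
    t_static = ((src_elev - DATUM) + (rec_elev - DATUM)) / V_REPLACEMENT
    return int(round(t_static / DT))

def apply_static(trace, n_shift):
    """Shift a trace by n_shift samples, padding with zeros."""
    out = np.zeros_like(trace)
    if n_shift >= 0:
        out[:len(trace) - n_shift] = trace[n_shift:]
    else:
        out[-n_shift:] = trace[:len(trace) + n_shift]
    return out

trace = np.random.default_rng(0).normal(size=2000)      # stand-in for a field trace
corrected = apply_static(trace, static_shift_samples(src_elev=312.0, rec_elev=308.0))
```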
16 pages, 4575 KiB  
Article
Deep-Learning-Based Reconstruction of Single-Breath-Hold 3 mm HASTE Improves Abdominal Image Quality and Reduces Acquisition Time: A Quantitative Analysis
by Felix Kubicka, Qinxuan Tan, Tom Meyer, Dominik Nickel, Elisabeth Weiland, Moritz Wagner and Stephan Rodrigo Marticorena Garcia
Curr. Oncol. 2025, 32(1), 30; https://doi.org/10.3390/curroncol32010030 - 3 Jan 2025
Viewed by 542
Abstract
Purpose: Breath-hold T2-weighted half-Fourier acquisition single-shot turbo spin echo (HASTE) magnetic resonance imaging (MRI) of the upper abdomen with a slice thickness below 5 mm suffers from high image noise and blurring. The purpose of this prospective study was to improve image quality and accelerate image acquisition by using single-breath-hold T2-weighted HASTE with deep learning (DL) reconstruction (DL-HASTE) with a 3 mm slice thickness. Method: MRI of the upper abdomen with DL-HASTE was performed in 35 participants (5 healthy volunteers and 30 patients) at 3 Tesla. In a subgroup of five healthy participants, signal-to-noise ratio (SNR) analysis was used after DL reconstruction to identify the smallest possible slice thickness (1, 2, 3, 4, 5 mm). DL-HASTE was acquired with a 3 mm slice thickness (DL-HASTE-3 mm) in 30 patients and compared with 5 mm DL-HASTE (DL-HASTE-5 mm) and with standard HASTE (standard-HASTE-5 mm). Image quality and motion artifacts were assessed quantitatively using Laplacian variance and semi-quantitatively by two radiologists using five-point Likert scales. Results: In the five healthy participants, DL-HASTE-3 mm was identified as the optimal slice thickness (SNR 23.227 ± 3.901). Both DL-HASTE-3 mm and DL-HASTE-5 mm were assigned significantly higher overall image quality scores than standard-HASTE-5 mm (Laplacian variance, both p < 0.001; Likert scale, p < 0.001). Compared with DL-HASTE-5 mm (1.10 × 10⁻⁵ ± 6.93 × 10⁻⁶), DL-HASTE-3 mm (1.56 × 10⁻⁵ ± 8.69 × 10⁻⁶) provided a significantly higher Laplacian variance (p < 0.001) and higher sharpness sub-scores for the intestinal tract, adrenal glands, and small anatomic structures (bile ducts, pancreatic ducts, and vessels; p < 0.05). Lesion detectability was rated excellent for both DL-HASTE-3 mm and DL-HASTE-5 mm (both: 5 [IQR 4–5]) and was assigned higher scores than standard-HASTE-5 mm (4 [IQR 4–5]; p < 0.001). DL-HASTE reduced the acquisition time by 63–69% compared with standard-HASTE-5 mm (p < 0.001). Conclusions: DL-HASTE is a robust abdominal MRI technique that improves image quality while at the same time reducing acquisition time compared with the routine clinical HASTE sequence. Using ultra-thin DL-HASTE-3 mm results in an even greater improvement with a similar SNR. Full article
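The variance-of-Laplacian metric used above for quantitative sharpness assessment is straightforward to compute; a minimal version (assuming a 2D magnitude image and SciPy) is:

```python
"""Variance-of-Laplacian sharpness score (minimal sketch; the clinical pipeline
surely differs in preprocessing and normalisation)."""
import numpy as np
from scipy.ndimage import laplace

def laplacian_variance(image: np.ndarray) -> float:
    """Higher values indicate sharper edges / less blur in the 2D slice."""
    img = image.astype(np.float64)
    img = (img - img.mean()) / (img.std() + 1e-12)   # intensity normalisation
    return float(laplace(img).var())

# Usage: compare the same anatomy reconstructed two ways, e.g.
# laplacian_variance(dl_haste_3mm_slice) vs laplacian_variance(standard_haste_slice)
```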
16 pages, 360 KiB  
Article
EduDCM: A Novel Framework for Automatic Educational Dialogue Classification Dataset Construction via Distant Supervision and Large Language Models
by Changyong Qi, Longwei Zheng, Yuang Wei, Haoxin Xu, Peiji Chen and Xiaoqing Gu
Appl. Sci. 2025, 15(1), 154; https://doi.org/10.3390/app15010154 - 27 Dec 2024
Viewed by 576
Abstract
Educational dialogue classification is a critical task for analyzing classroom interactions and fostering effective teaching strategies. However, the scarcity of annotated data and the high cost of manual labeling pose significant challenges, especially in low-resource educational contexts. This article presents the EduDCM framework for the first time, offering an original approach to addressing these challenges. EduDCM innovatively integrates distant supervision with the capabilities of Large Language Models (LLMs) to automate the construction of high-quality educational dialogue classification datasets. EduDCM reduces the noise typically associated with distant supervision by leveraging LLMs for context-aware label generation and incorporating heuristic alignment techniques. To validate the framework, we constructed the EduTalk dataset, encompassing diverse classroom dialogues labeled with pedagogical categories. Extensive experiments on EduTalk and publicly available datasets, combined with expert evaluations, confirm the superior quality of EduDCM-generated datasets. Models trained on EduDCM data achieved a performance comparable to that of manually annotated datasets. Expert evaluations using a 5-point Likert scale show that EduDCM outperforms Template-Based Generation and Few-Shot GPT in terms of annotation accuracy, category coverage, and consistency. These findings emphasize EduDCM’s novelty and its effectiveness in generating high-quality, scalable datasets for low-resource educational NLP tasks, thus reducing manual annotation efforts. Full article
(This article belongs to the Special Issue Intelligent Systems and Tools for Education)
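One simple way to reduce distant-supervision noise in a pipeline of this kind is to keep an LLM-proposed label only when it agrees with a weak heuristic labeller. The sketch below shows such an agreement filter; the categories, keyword cues, and `llm_label` stub are hypothetical, not EduDCM's actual components.

```python
"""Agreement filter between a heuristic labeller and an LLM labeller (sketch)."""

HEURISTIC_KEYWORDS = {
    "question":    ("?", "why", "how", "what"),
    "feedback":    ("well done", "good", "try again", "incorrect"),
    "instruction": ("open your book", "write down", "look at"),
}

def heuristic_label(utterance: str) -> str | None:
    text = utterance.lower()
    for category, cues in HEURISTIC_KEYWORDS.items():
        if any(cue in text for cue in cues):
            return category
    return None

def llm_label(utterance: str) -> str:
    raise NotImplementedError("plug in a context-aware LLM labelling prompt here")

def accept_example(utterance: str) -> tuple[str, str] | None:
    """Keep (utterance, label) only when the two weak sources agree."""
    weak, strong = heuristic_label(utterance), llm_label(utterance)
    if weak is not None and weak == strong:
        return utterance, weak
    return None            # disagreement -> route to manual review or discard
```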
15 pages, 2958 KiB  
Article
CFGAN: A Conditional Filter Generative Adversarial Network for Signal Modulation Mode Recognition
by Fan Zhou, Jiayi Wang, Lan Zhang, Yang Wang, Xi Chen and Peiying Zhang
Electronics 2025, 14(1), 12; https://doi.org/10.3390/electronics14010012 - 24 Dec 2024
Viewed by 358
Abstract
In applications of generative adversarial networks, improving the quality of the generated signals while maintaining the modulation recognition accuracy of convolutional neural networks is an important open problem. In this paper, a generative sample quality screening method is proposed for the problem of low-quality samples generated by generative adversarial networks under few-shot conditions. It establishes a sample expansion mode without fixing the network parameters, learns the real data distribution by continually updating the network weights, and enhances the quality of the expanded samples with a two-stage quality screening procedure. A generative adversarial network is designed for this method, which reduces the time required for generating samples by extracting different features of the few-shot signals. Experiments were conducted under few-shot conditions with signal-to-noise ratios of −8 to 12 dB and three expansion ratios of 1:1, 1:2, and 1:3. Compared with expansion by the general method, the average modulation mode recognition accuracy of the QCO-CFGAN method with quality screening is improved by 2.65%, 2.46%, and 2.73%, respectively, demonstrating its effectiveness under these conditions. Full article
(This article belongs to the Topic Recent Advances in Security, Privacy, and Trust)
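The quality-screening step can be pictured as a confidence filter over generated samples, as in the hedged sketch below; the two thresholds and the two-stage split are illustrative assumptions rather than the paper's criteria.

```python
"""Sketch of two-stage quality screening of GAN-generated modulation signals."""
import numpy as np

def screen_generated_samples(samples, labels, predict_proba,
                             stage1_thresh=0.5, stage2_thresh=0.8):
    """Two quality screenings: a loose pass drops obvious failures, then a
    strict pass keeps only confidently-correct samples for expansion.

    samples: (N, ...) array of generated IQ signals
    labels:  (N,) intended modulation class indices
    predict_proba: callable mapping samples -> (N, n_classes) probabilities
    """
    probs = predict_proba(samples)
    conf = probs[np.arange(len(labels)), labels]   # confidence on the intended class
    keep = conf >= stage1_thresh                   # screening 1: coarse filter
    strict = conf >= stage2_thresh                 # screening 2: high-quality subset
    return (samples[strict], labels[strict],
            samples[keep & ~strict], labels[keep & ~strict])
```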
22 pages, 893 KiB  
Article
Joint Design of Transmitter Precoding and Optical Intelligent Reflecting Surface Configuration for Photon-Counting MIMO Systems Under Poisson Shot Noise
by Jian Wang, Xiaolin Zhou, Fanghua Li, Yongkang Chen, Chaoyi Cai and Haoze Xu
Appl. Sci. 2024, 14(24), 11994; https://doi.org/10.3390/app142411994 - 21 Dec 2024
Viewed by 620
Abstract
Intelligent reflecting surfaces (IRSs) have emerged as a promising technology to enhance link reliability in a cost-effective manner, especially for line-of-sight (LOS) link blocking caused by obstacles. In this paper, we investigate an IRS-assisted single-cell photon-counting communication system in the presence of building shadows, where one IRS is deployed to assist the communication between a multi-antenna base station (BS) and multiple single-antenna users. Photon counting has been widely adopted in sixth-generation (6G) optical communications due to its exceptional detection capability for low-power optical signals. However, the correlation between signal and noise complicates analyses. To this end, we first derive the channel gain of the IRS-assisted MIMO system, followed by the derivation of the mean square error (MSE) of the system using probabilistic methods. Given the constraints of the transmit power and IRS configuration, we propose an optimization problem aimed at minimizing the MSE of the system. Next, we present an alternating optimization (AO) algorithm that transforms the original problem into two convex subproblems and analyze its convergence and complexity. Finally, numerical results demonstrate that the IRS-assisted scheme significantly reduces the MSE and bit error rate (BER) of the system, outperforming other baseline schemes. Full article
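To illustrate the alternating-optimization structure (two convex subproblems solved in turn), here is a deliberately simplified toy in which the transmit-power and unit-modulus constraints are replaced by a small ridge penalty, so that both the precoder and the reflection-vector updates reduce to regularized least squares; it shows the AO pattern only, not the paper's exact formulation.

```python
"""Toy alternating optimization for a relaxed IRS-assisted downlink MSE objective."""
import numpy as np

rng = np.random.default_rng(2)
K, N_IRS, N_TX, LAM = 6, 16, 4, 0.1      # users, IRS elements, BS antennas, ridge weight
H = (rng.normal(size=(K, N_IRS)) + 1j * rng.normal(size=(K, N_IRS))) / np.sqrt(2)
G = (rng.normal(size=(N_IRS, N_TX)) + 1j * rng.normal(size=(N_IRS, N_TX))) / np.sqrt(2)
d = np.ones(K, dtype=complex)            # desired per-user symbol gains

def ridge_ls(A, y, lam):
    """Solve min_x ||y - A x||^2 + lam ||x||^2 (a convex subproblem)."""
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + lam * np.eye(n), A.conj().T @ y)

theta = np.exp(1j * rng.uniform(0, 2 * np.pi, N_IRS))   # initial reflection vector

for it in range(10):
    A = H @ np.diag(theta) @ G           # effective channel with theta fixed
    w = ridge_ls(A, d, LAM)              # subproblem 1: precoder update
    B = H @ np.diag(G @ w)               # effective channel with w fixed
    theta = ridge_ls(B, d, LAM)          # subproblem 2: reflection update
    mse = np.linalg.norm(d - B @ theta) ** 2 / K
    print(f"iter {it}: MSE = {mse:.4f}")
```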
23 pages, 4727 KiB  
Article
Self-Supervised and Zero-Shot Learning in Multi-Modal Raman Light Sheet Microscopy
by Pooja Kumari, Johann Kern and Matthias Raedle
Sensors 2024, 24(24), 8143; https://doi.org/10.3390/s24248143 - 20 Dec 2024
Viewed by 613
Abstract
Advancements in Raman light sheet microscopy have provided a powerful, non-invasive, marker-free method for imaging complex 3D biological structures, such as cell cultures and spheroids. By combining 3D tomograms made by Rayleigh scattering, Raman scattering, and fluorescence detection, this modality captures complementary spatial and molecular data, critical for biomedical research, histology, and drug discovery. Despite its capabilities, Raman light sheet microscopy faces inherent limitations, including low signal intensity, high noise levels, and restricted spatial resolution, which impede the visualization of fine subcellular structures. Traditional enhancement techniques like Fourier transform filtering and spectral unmixing require extensive preprocessing and often introduce artifacts. More recently, deep learning techniques, which have shown great promise in enhancing image quality, face their own limitations. Specifically, conventional deep learning models require large quantities of high-quality, manually labeled training data for effective denoising and super-resolution tasks, which is challenging to obtain in multi-modal microscopy. In this study, we address these limitations by exploring advanced zero-shot and self-supervised learning approaches, such as ZS-DeconvNet, Noise2Noise, Noise2Void, Deep Image Prior (DIP), and Self2Self, which enhance image quality without the need for large labeled datasets. This study offers a comparative evaluation of zero-shot and self-supervised learning methods, assessing their effectiveness in denoising, resolution enhancement, and preserving biological structures in multi-modal Raman light sheet microscopic images. Our results demonstrate significant improvements in image clarity, offering a reliable solution for visualizing complex biological systems. These methods pave the way for future advancements in high-resolution imaging, with broad potential for enhancing biomedical research and discovery. Full article
22 pages, 5498 KiB  
Article
Small-Sample Target Detection Across Domains Based on Supervision and Distillation
by Fusheng Sun, Jianli Jia, Xie Han, Liqun Kuang and Huiyan Han
Electronics 2024, 13(24), 4975; https://doi.org/10.3390/electronics13244975 - 18 Dec 2024
Viewed by 496
Abstract
To address the issues of significant object discrepancies, low similarity, and image noise interference between source and target domains in object detection, we propose a supervised learning approach combined with knowledge distillation. Initially, student and teacher models are jointly trained through supervised and distillation-based approaches, iteratively refining the inter-model weights to mitigate the issue of model overfitting. Secondly, a combined convolutional module is integrated into the feature extraction network of the student model, to minimize redundant computational effort; an explicit visual center module is embedded within the feature pyramid network, to bolster feature representation; and a spatial grouping enhancement module is incorporated into the region proposal network, to mitigate the adverse effects of noise on the outcomes. Ultimately, the model undergoes a comprehensive optimization process that leverages the loss functions originating from both the supervised and knowledge distillation phases. The experimental results demonstrate that this strategy significantly boosts classification and identification accuracy on cross-domain datasets; compared with TFA (Task-agnostic Fine-tuning and Adapter), CD-FSOD (Cross-Domain Few-Shot Object Detection), and DeFRCN (Decoupled Faster R-CNN for Few-Shot Object Detection), it increases detection accuracy by 1.67% and 1.87% in the 1-shot and 5-shot settings, respectively. Full article
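The joint supervised-plus-distillation objective can be written compactly; the sketch below is the generic form (cross-entropy on ground-truth labels plus a temperature-scaled KL term matching teacher and student logits), with the weighting and temperature as assumed hyperparameters rather than the paper's exact loss.

```python
"""Generic supervised + knowledge-distillation loss (sketch)."""
import torch
import torch.nn.functional as F

def supervised_distillation_loss(student_logits, teacher_logits, labels,
                                 temperature=4.0, alpha=0.5):
    ce = F.cross_entropy(student_logits, labels)                 # supervised term
    kd = F.kl_div(                                               # distillation term
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * ce + (1.0 - alpha) * kd
```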
16 pages, 7607 KiB  
Article
Airwave Noise Identification from Seismic Data Using YOLOv5
by Zhenghong Liang, Lu Gan, Zhifeng Zhang, Xiuju Huang, Fengli Shen, Guo Chen and Rongjiang Tang
Appl. Sci. 2024, 14(24), 11636; https://doi.org/10.3390/app142411636 - 12 Dec 2024
Viewed by 688
Abstract
Airwave interference presents a major source of noise in seismic exploration, posing significant challenges to the quality control of raw seismic data. With the increasing data volume in 3D seismic exploration, manual identification methods fall short of meeting the demands of high-density 3D seismic surveys. This study employs the YOLOv5 model, a widely used tool in object detection, to achieve rapid identification of airwave noise in seismic profiles. Initially, the model was pre-trained on the COCO dataset—a large-scale dataset designed for object detection—and subsequently fine-tuned using a training set specifically labeled for airwave noise data. The fine-tuned model achieved an accuracy and recall rate of approximately 85% on the test dataset, successfully identifying not only the presence of noise but also its location, confidence levels, and range. To evaluate the model’s effectiveness, we applied the YOLOv5 model trained on 2D data to seismic records from two regions: 2D seismic data from Ningqiang, Shanxi, and 3D seismic data from Xiushui, Sichuan. The overall prediction accuracy in both regions exceeded 90%, with the accuracy and recall rates for airwave noise surpassing 83% and 90%, respectively. The evaluation time for single-shot 3D seismic data (over 8000 traces) was less than 2 s, highlighting the model’s exceptional transferability, generalization ability, and efficiency. These results demonstrate that the YOLOv5 model is highly effective for detecting airwave noise in raw seismic data across different regions, marking the first successful attempt at computer recognition of airwaves in seismic exploration. Full article
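Running a fine-tuned YOLOv5 detector on seismic profiles rendered as images follows the standard torch.hub workflow; the weights file and image name below are hypothetical, and the confidence threshold is an assumption.

```python
"""Inference with a fine-tuned YOLOv5 model on a seismic profile image (sketch)."""
import torch

# Load YOLOv5 with custom weights (fetches the repo via torch.hub, or use a local clone).
# 'airwave_best.pt' is a hypothetical checkpoint fine-tuned on labelled airwave noise.
model = torch.hub.load("ultralytics/yolov5", "custom", path="airwave_best.pt")
model.conf = 0.25                      # confidence threshold for detections

results = model("shot_gather_profile.png")   # seismic profile rendered as an image
for *xyxy, conf, cls in results.xyxy[0].tolist():
    x1, y1, x2, y2 = xyxy
    print(f"airwave candidate: box=({x1:.0f},{y1:.0f},{x2:.0f},{y2:.0f}) conf={conf:.2f}")
```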
16 pages, 3032 KiB  
Article
Polarization Division Multiplexing CV-QKD with Pilot-Aided Polarization-State Sensing
by Zicong Tan, Tao Wang, Yuehan Xu, Xu Liu, Lang Li, Beibei Zhang, Yuchao Liu, Peng Huang and Guihua Zeng
Mathematics 2024, 12(22), 3599; https://doi.org/10.3390/math12223599 - 17 Nov 2024
Viewed by 793
Abstract
Continuous-variable quantum key distribution (CV-QKD) with local local oscillator (LLO) is well-studied for its security and simplicity, but enhancing performance and interference resistance remains challenging. In this paper, we utilize polarization division multiplexing (PDM) to enhance spectral efficiency and significantly increase the key rate of the CV-QKD system. To address dynamic changes in the state of polarization (SOP) in Gaussian modulated coherent states (GMCS) signals due to polarization impairment effects, we designed a time-division multiplexing pilot scheme to sense and recover changes in SOP in GMCS signals, along with other digital signal processing methods. Experiments over 20 km show that our scheme maintains low excess noise levels (0.062 and 0.043 in shot noise units) and achieves secret key rates of 4.65 Mbps and 5.66 Mbps for the two polarization orientations, totaling 10.31 Mbps. This work confirms the effectiveness of PDM GMCS-CV-QKD and offers technical guidance for high-rate QKD within metropolitan areas. Full article
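The pilot-aided polarization-sensing idea can be sketched as a least-squares Jones-matrix estimate from known pilot symbols, whose inverse then de-rotates the signal samples; the dimensions, rotation model, and noise level below are toy assumptions, not the paper's DSP chain.

```python
"""Toy pilot-aided state-of-polarization estimation and compensation."""
import numpy as np

rng = np.random.default_rng(3)

# Known transmitted pilots (2 x Np) and the corresponding received pilots,
# e.g. extracted from the time-division-multiplexed pilot slots.
Np = 64
tx_pilots = (rng.normal(size=(2, Np)) + 1j * rng.normal(size=(2, Np))) / np.sqrt(2)

phi = 0.3                                            # unknown SOP rotation (toy model)
J_true = np.array([[np.cos(phi), -np.sin(phi)],
                   [np.sin(phi),  np.cos(phi)]], dtype=complex)
rx_pilots = J_true @ tx_pilots + 0.01 * (rng.normal(size=(2, Np))
                                         + 1j * rng.normal(size=(2, Np)))

# Least-squares Jones-matrix estimate: J_hat = Y X^H (X X^H)^{-1}
J_hat = rx_pilots @ tx_pilots.conj().T @ np.linalg.inv(tx_pilots @ tx_pilots.conj().T)

# De-rotate the (toy) signal samples received between pilot slots.
rx_signal = J_true @ ((rng.normal(size=(2, 256)) + 1j * rng.normal(size=(2, 256))) / np.sqrt(2))
recovered = np.linalg.inv(J_hat) @ rx_signal
print("Jones estimation error:", np.linalg.norm(J_hat - J_true))
```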
12 pages, 2745 KiB  
Article
Single-Shot Time-Lapse Target-Oriented Velocity Inversion Using Machine Learning
by Katerine Rincon, Ramon C. F. Araújo, Moisés M. Galvão, Samuel Xavier-de-Souza, João M. de Araújo, Tiago Barros and Gilberto Corso
Appl. Sci. 2024, 14(21), 10047; https://doi.org/10.3390/app142110047 - 4 Nov 2024
Viewed by 705
Abstract
In this study, we used machine learning (ML) to estimate time-lapse velocity variations in a reservoir region from seismic data. To accomplish this task, we needed an adequate training set that could map seismic data to velocity perturbation. We generated a synthetic seismic database by simulating reservoirs of varying velocities using a 2D velocity model typical of Brazilian pre-salt ocean bottom node (OBN) acquisitions in the Santos basin, Brazil. The largest velocity change at the injector well was around 3% of the empirical velocity model, mimicking a realistic scenario. The acquisition geometry consisted of 1 shot and 49 receivers. For each synthetic reservoir, the corresponding seismic data were obtained from a one-shot forward wave propagation under the acoustic approximation. We studied the reservoir illumination to optimize the input data of the ML inversion. We split the set of synthetic reservoirs into training (80%) and testing (20%) subsets. The ML inversion was restricted to the reservoir zone, i.e., it was target-oriented. We obtained good similarity between the true and ML-inverted reservoir anomalies; the similarity diminished when non-repeatability noise was present. Full article
(This article belongs to the Section Earth Sciences)
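A minimal version of the workflow (synthetic one-shot records in, reservoir-zone velocity perturbations out, 80/20 split) might look like the following; the random arrays stand in for the simulated database, and the data shapes and regressor choice are placeholders, not the authors' network.

```python
"""Sketch of a target-oriented ML inversion: regress velocity perturbations in the
reservoir zone from single-shot seismic records."""
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Hypothetical synthetic database: each row is one simulated reservoir scenario.
# X: flattened single-shot records (49 receivers x decimated time samples);
# y: velocity perturbation on the reservoir grid only (target-oriented inversion).
n_scenarios, n_trace_samples, n_reservoir_cells = 500, 49 * 200, 32 * 8
X = np.random.default_rng(4).normal(size=(n_scenarios, n_trace_samples))
y = np.random.default_rng(5).normal(scale=0.03, size=(n_scenarios, n_reservoir_cells))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(256, 128), max_iter=200)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```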
17 pages, 7790 KiB  
Article
A Self-Supervised One-Shot Learning Approach for Seismic Noise Reduction
by Catarina de Nazaré Pereira Pinheiro, Roosevelt de Lima Sardinha, Pablo Machado Barros, André Bulcão, Bruno Vieira Costa and Alexandre Gonçalves Evsukoff
Appl. Sci. 2024, 14(21), 9721; https://doi.org/10.3390/app14219721 - 24 Oct 2024
Viewed by 3571
Abstract
Neural networks have been used in various computer vision applications, including noise removal. However, removing seismic noise via deep learning approaches faces a specific issue: the scarcity of labeled data. To address this difficulty, this work introduces an adaptation of the Noise2Self algorithm featuring a one-shot learning approach tailored for the seismic context. Essentially, the method leverages a single noisy image for training, utilizing a context-centered masking system and convolutional neural network (CNN) architectures, thus eliminating the dependence on previously labeled data. In tests with Gaussian noise, the method was competitive with established approaches such as Noise2Noise. Under real noise conditions, it demonstrated effective noise suppression with a smaller architecture. Therefore, our proposed method is a robust alternative for noise removal that is especially valuable in scenarios lacking sufficient data and labels. With a new approach to processing seismic images, particularly in terms of denoising, our method contributes to the ongoing evolution and enhancement of techniques in this field. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
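A compact sketch of the masked, one-shot training scheme described above: random pixels are hidden and replaced with local context, and a small CNN is trained on the single noisy image to predict the hidden values, with the loss evaluated only at masked positions. The architecture, masking fraction, and iteration count are illustrative, not the paper's configuration.

```python
"""One-image, self-supervised (masked) denoising sketch in the Noise2Self spirit."""
import torch
import torch.nn as nn

def masked_batch(noisy, frac=0.02):
    """Hide a random fraction of pixels by replacing them with a neighbouring
    value; return (masked input, mask of hidden positions)."""
    masked = noisy.clone()
    mask = torch.rand_like(noisy) < frac
    shifted = torch.roll(noisy, shifts=(1, 1), dims=(-2, -1))   # crude context fill
    masked[mask] = shifted[mask]
    return masked, mask

net = nn.Sequential(                       # tiny CNN denoiser
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

noisy = torch.randn(1, 1, 128, 128)        # stand-in for a single noisy seismic section

for step in range(500):                    # one-shot training on the single image
    inp, mask = masked_batch(noisy)
    pred = net(inp)
    loss = ((pred - noisy) ** 2 * mask).sum() / mask.sum()   # loss only on masked pixels
    opt.zero_grad()
    loss.backward()
    opt.step()

denoised = net(noisy).detach()             # inference on the same (or a similar) section
```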