
Search Results (205)

Search Parameters:
Keywords = image dehazing

19 pages, 3737 KiB  
Article
End-to-End Multi-Scale Adaptive Remote Sensing Image Dehazing Network
by Xinhua Wang, Botao Yuan, Haoran Dong, Qiankun Hao and Zhuang Li
Sensors 2025, 25(1), 218; https://doi.org/10.3390/s25010218 - 2 Jan 2025
Viewed by 422
Abstract
Satellites frequently encounter atmospheric haze during imaging, leading to the loss of detailed information in remote sensing images and significantly compromising image quality. This detailed information is crucial for applications such as Earth observation and environmental monitoring. To address these issues, this paper proposes an end-to-end multi-scale adaptive feature extraction method for remote sensing image dehazing (MSD-Net). In our network model, we introduce a dilated convolution adaptive module to extract global and local detail features of remote sensing images. This module extracts important image features at different scales: the dilated convolutions enlarge the receptive field to capture broader contextual information and thereby obtain a more global feature representation, while an adaptive attention mechanism allows the module to automatically adjust its receptive field based on image content. In this way, important features at different scales can be flexibly extracted to better adapt to variations in detail across remote sensing images. To fully exploit features at different scales, we also adopt feature fusion: integrating information across scales yields more accurate and richer feature representations. This process helps recover lost detail in remote sensing images and thereby enhances overall image quality. Extensive experiments on the HRRSD and RICE datasets show that the proposed method restores the original details and texture of remote sensing images better than current state-of-the-art dehazing methods. Full article
(This article belongs to the Section Sensing and Imaging)

20 pages, 13020 KiB  
Article
Multi-Dimensional and Multi-Scale Physical Dehazing Network for Remote Sensing Images
by Hao Zhou, Le Wang, Qiao Li, Xin Guan and Tao Tao
Remote Sens. 2024, 16(24), 4780; https://doi.org/10.3390/rs16244780 - 22 Dec 2024
Viewed by 528
Abstract
Haze obscures remote sensing images, making it difficult to extract valuable information. To address this problem, we propose a fine detail extraction network that aims to restore image details and improve image quality. Specifically, to capture fine details, we design multi-scale and multi-dimensional extraction blocks and then fuse them to optimize feature extraction. The multi-scale extraction block adopts multi-scale pixel attention and channel attention to extract and combine global and local information from the image. Meanwhile, the multi-dimensional extraction block uses depthwise separable convolutional layers to capture additional dimensional information. Additionally, we integrate an atmospheric scattering model unit into the network to enhance both the dehazing effectiveness and stability. Our experiments on the SateHaze1k and HRSD datasets demonstrate that the proposed method efficiently handles remote sensing images with varying levels of haze, successfully recovers fine details, and achieves superior results compared to existing state-of-the-art dehazing techniques. Full article
(This article belongs to the Special Issue Deep Learning for Remote Sensing Image Enhancement)

24 pages, 2450 KiB  
Article
Progressive Pruning of Light Dehaze Networks for Static Scenes
by Byeongseon Park, Heekwon Lee, Yong-Kab Kim and Sungkwan Youm
Appl. Sci. 2024, 14(23), 10820; https://doi.org/10.3390/app142310820 - 22 Nov 2024
Viewed by 513
Abstract
This paper introduces a progressive pruning method for Light DeHaze Networks, focusing on static scenes captured by fixed cameras. We develop a progressive pruning algorithm that aims to reduce computational complexity while maintaining dehazing quality within a specified threshold. Our key contributions include a fine-tuning strategy for specific scenes, channel importance analysis, and a progressive pruning approach that accounts for layer-wise sensitivity. Our experiments demonstrate the effectiveness of the method: targeting a specific PSNR (Peak Signal-to-Noise Ratio) threshold, the algorithm achieved optimal results at a certain pruning ratio, significantly reducing the number of channels in the target layer while keeping PSNR above the threshold and preserving good structural similarity, before automatically stopping when performance dropped below the target. This demonstrates the algorithm's ability to find an optimal balance between model compression and performance maintenance. This research enables efficient deployment of high-quality dehazing algorithms in resource-constrained environments, applicable to traffic monitoring and outdoor surveillance. Our method paves the way for more accessible image dehazing systems, enhancing visibility in various real-world hazy conditions while optimizing computational resources for fixed camera setups. Full article
(This article belongs to the Special Issue Advances in Neural Networks and Deep Learning)
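The threshold-driven pruning loop described in the abstract above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: ranking channels by the L1 norm of their filter weights and the `evaluate_psnr` callback (which would re-run the dehazer on a validation scene) are assumptions.

```python
import numpy as np

def channel_importance(weights):
    """Rank output channels of a conv layer by the L1 norm of their filters.

    weights: array of shape (out_channels, in_channels, kH, kW).
    """
    return np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)

def progressive_prune(weights, evaluate_psnr, psnr_threshold):
    """Remove the least-important channel one at a time, stopping as soon as
    pruning further would drop the evaluated PSNR below the threshold."""
    keep = list(range(weights.shape[0]))
    while len(keep) > 1:
        importance = channel_importance(weights[keep])
        candidate = [c for i, c in enumerate(keep) if i != int(np.argmin(importance))]
        if evaluate_psnr(candidate) < psnr_threshold:
            break  # quality target would be violated; keep the current set
        keep = candidate
    return keep
```

A real deployment would fine-tune between pruning steps, as the paper's scene-specific strategy suggests; this sketch only shows the stopping criterion.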

17 pages, 3307 KiB  
Article
MCADNet: A Multi-Scale Cross-Attention Network for Remote Sensing Image Dehazing
by Tao Tao, Haoran Xu, Xin Guan and Hao Zhou
Mathematics 2024, 12(23), 3650; https://doi.org/10.3390/math12233650 - 21 Nov 2024
Viewed by 784
Abstract
Remote sensing image dehazing (RSID) aims to remove haze from remote sensing images to enhance their quality. Although existing deep learning-based dehazing methods have made significant progress, it is still difficult to completely remove the uneven haze, which often leads to color or structural differences between the dehazed image and the original image. In order to overcome this difficulty, we propose the multi-scale cross-attention dehazing network (MCADNet), which offers a powerful solution for RSID. MCADNet integrates multi-kernel convolution and a multi-head attention mechanism into the U-Net architecture, enabling effective multi-scale information extraction. Additionally, we replace traditional skip connections with a cross-attention-based gating module, enhancing feature extraction and fusion across different scales. This synergy enables the network to maximize the overall similarity between the restored image and the real image while also restoring the details of the complex texture areas in the image. We evaluate MCADNet on two benchmark datasets, Haze1K and RICE, demonstrating its superior performance. Ablation experiments further verify the importance of our key design choices in enhancing dehazing effectiveness. Full article
(This article belongs to the Special Issue Image Processing and Machine Learning with Applications)

16 pages, 7450 KiB  
Article
Latent Graph Attention for Spatial Context in Light-Weight Networks: Multi-Domain Applications in Visual Perception Tasks
by Ayush Singh, Yash Bhambhu, Himanshu Buckchash, Deepak K. Gupta and Dilip K. Prasad
Appl. Sci. 2024, 14(22), 10677; https://doi.org/10.3390/app142210677 - 19 Nov 2024
Viewed by 526
Abstract
Global contexts in images are quite valuable in image-to-image translation problems. Conventional attention-based and graph-based models capture the global context to a large extent; however, these are computationally expensive. Moreover, existing approaches are limited to only learning the pairwise semantic relation between any two points in the image. In this paper, we present Latent Graph Attention (LGA), a computationally inexpensive (linear to the number of nodes) and stable modular framework for incorporating the global context in existing architectures. This framework particularly empowers small-scale architectures to achieve performance closer to that of large architectures, making the light-weight architectures more useful for edge devices with lower compute power and lower energy needs. LGA propagates information spatially using a network of locally connected graphs, thereby facilitating the construction of a semantically coherent relation between any two spatially distant points that also takes into account the influence of the intermediate pixels. Moreover, the depth of the graph network can be used to adapt the extent of contextual spread to the target dataset, thereby able to explicitly control the added computational cost. To enhance the learning mechanism of LGA, we also introduce a novel contrastive loss term that helps our LGA module to couple well with the original architecture at the expense of minimal additional computational load. We show that incorporating LGA improves performance in three challenging applications, namely transparent object segmentation, image restoration for dehazing and optical flow estimation. Full article

11 pages, 1981 KiB  
Article
Image Dehazing Technique Based on DenseNet and the Denoising Self-Encoder
by Kunxiang Liu, Yue Yang, Yan Tian and Haixia Mao
Processes 2024, 12(11), 2568; https://doi.org/10.3390/pr12112568 - 16 Nov 2024
Viewed by 892
Abstract
The application value of low-quality photos taken in foggy conditions is significantly lower than that of clear images. As a result, restoring the original image information and enhancing the quality of images degraded on hazy days are crucial. Commonly used deep learning techniques such as DehazeNet, AOD-Net, and Li have shown encouraging progress in image dehazing applications. However, these methods suffer from shallow network structures that limit their estimation capability, from a reliance on atmospheric scattering models that makes the final results prone to error accumulation, and from unstable training and slow convergence. To address these problems, this paper proposes an improved end-to-end convolutional neural network based on the denoising self-encoder and DenseNet (DAE-DenseNet): the denoising self-encoder forms the main body of the network, the encoder extracts features from hazy images, the decoder reconstructs those features to recover the image, and a boosting module performs local and global feature fusion before outputting the dehazed image. On a public test dataset, DAE-DenseNet achieves a PSNR of 22.60, considerably higher than the other methods. Experiments show that the proposed method outperforms the other algorithms, with no color oversaturation or over-dehazing in the results; the dehazed images are closest to the real images and look natural, making the dehazing performance very competitive. Full article
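For reference, the PSNR figure quoted above (22.60) follows the standard definition; a minimal NumPy implementation:

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```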

20 pages, 22039 KiB  
Article
A Nonconvex Approach with Structural Priors for Restoring Underwater Images
by Hafiz Shakeel Ahmad Awan and Muhammad Tariq Mahmood
Mathematics 2024, 12(22), 3553; https://doi.org/10.3390/math12223553 - 13 Nov 2024
Viewed by 667
Abstract
Underwater image restoration is a crucial task in various computer vision applications, including underwater target detection and recognition, autonomous underwater vehicles, underwater rescue, marine organism monitoring, and marine geological survey. Among other categories, the physics-based methods restore underwater images by improving the transmission map through optimization or regularization techniques. Conventional optimization-based methods often do not consider the effect of structural differences between guidance and transmission maps. To address this issue, in this paper, we present a regularization-based method for restoring underwater images that uses coherent structures between the guidance map and the transmission map. The proposed approach models the optimization of transmission maps through a nonconvex energy function comprising data and smoothness terms. The smoothness term includes static and dynamic structural priors, and the optimization problem is solved using a majorize-minimize algorithm. We evaluate the proposed method on benchmark datasets, and the results demonstrate the superiority of the proposed method over state-of-the-art techniques in terms of improving transmission maps and producing high-quality restored images. Full article

18 pages, 3921 KiB  
Article
Image Dehazing Enhancement Strategy Based on Polarization Detection of Space Targets
by Shuzhuo Miao, Zhengwei Li, Han Zhang and Hongwen Li
Appl. Sci. 2024, 14(21), 10042; https://doi.org/10.3390/app142110042 - 4 Nov 2024
Viewed by 728
Abstract
In view of the fact that the technology of polarization detection performs better at identifying targets through clouds and fog, the recognition ability of the space target detection system under haze conditions will be improved by applying the technology. However, due to the low ambient brightness and limited target radiation information during space target detection, the polarization information of space target is seriously lost, and the advantages of polarization detection technology in identifying targets through clouds and fog cannot be effectively exerted under the condition of haze detection. In order to solve the above problem, a dehazing enhancement strategy specifically applied to polarization images of space targets is proposed. Firstly, a hybrid multi-channel interpolation method based on regional correlation analysis is proposed to improve the calculation accuracy of polarization information during preprocessing. Secondly, an image processing method based on full polarization information inversion is proposed to obtain the degree of polarization of the image after inversion and the intensity of the image after dehazing. Finally, the image fusion method based on discrete cosine transform is used to obtain the dehazing polarization fusion enhancement image. The effectiveness of the proposed image processing strategy is verified by carrying out simulated and real space target detection experiments. Compared with other methods, by using the proposed image processing strategy, the quality of the polarization images of space targets obtained under the haze condition is significantly improved. Our research results have important practical implications for promoting the wide application of polarization detection technology in the field of space target detection. Full article
(This article belongs to the Section Aerospace Science and Engineering)
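The polarization quantities the strategy above works with are conventionally derived from Stokes parameters. A minimal sketch computing the degree of linear polarization from four polarizer-angle intensity images follows; this is the standard textbook formulation, not the paper's hybrid multi-channel interpolation or inversion method.

```python
import numpy as np

def degree_of_linear_polarization(i0, i45, i90, i135):
    """DoLP from intensities behind polarizers at 0, 45, 90, and 135 degrees.

    Stokes parameters: S0 (total intensity), S1, S2 (linear components).
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-8)  # guard divide-by-zero
```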

15 pages, 6308 KiB  
Article
Physics-Driven Image Dehazing from the Perspective of Unmanned Aerial Vehicles
by Tong Cui, Qingyue Dai, Meng Zhang, Kairu Li, Xiaofei Ji, Jiawei Hao and Jie Yang
Electronics 2024, 13(21), 4186; https://doi.org/10.3390/electronics13214186 - 25 Oct 2024
Viewed by 804
Abstract
Drone vision is widely used in change detection, disaster response, and military reconnaissance due to its wide field of view and flexibility. However, under haze and thin cloud conditions, image quality is usually degraded due to atmospheric scattering. This results in issues like color distortion, reduced contrast, and lower clarity, which negatively impact the performance of subsequent advanced visual tasks. To improve the quality of unmanned aerial vehicle (UAV) images, we propose a dehazing method based on calibration of the atmospheric scattering model. We designed two specialized neural network structures to estimate the two unknown parameters in the atmospheric scattering model: the atmospheric light intensity A and medium transmission t. However, calculation errors always occur in both processes for estimating the two unknown parameters. The error accumulation for atmospheric light and medium transmission will cause the deviation in color fidelity and brightness. Therefore, we designed an encoder-decoder structure for irradiance guidance, which not only eliminates error accumulation but also enhances the detail in the restored image, achieving higher-quality dehazing results. Quantitative and qualitative evaluations indicate that our dehazing method outperforms existing techniques, effectively eliminating haze from drone images and significantly enhancing image clarity and quality in hazy conditions. Specifically, the compared experiment on the R100 dataset demonstrates that the proposed method improved the peak signal-to-noise ratio (PSNR) and structure similarity index measure (SSIM) metrics by 6.9 dB and 0.08 over the second-best method, respectively. On the N100 dataset, the method improved the PSNR and SSIM metrics by 8.7 dB and 0.05 over the second-best method, respectively. Full article
(This article belongs to the Special Issue Deep Learning-Based Image Restoration and Object Identification)
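The atmospheric scattering model whose two unknowns (atmospheric light A and medium transmission t) the abstract above estimates is the standard formulation I = J·t + A·(1 − t). A minimal NumPy sketch of the forward model and its inversion; the clamping value `t_min` is a common heuristic, not taken from the paper.

```python
import numpy as np

def hazy_from_clear(J, t, A):
    """Forward atmospheric scattering model: I = J*t + A*(1 - t)."""
    return J * t + A * (1.0 - t)

def dehaze(I, t, A, t_min=0.1):
    """Invert the model to recover scene radiance J from the hazy image I."""
    t = np.maximum(t, t_min)  # clamp to avoid amplifying noise where t -> 0
    return (I - A * (1.0 - t)) / t
```

In the paper's setting, A and t are each predicted by a dedicated network rather than given, which is why error accumulation between the two estimates matters.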

19 pages, 4551 KiB  
Article
Autonomous Single-Image Dehazing: Enhancing Local Texture with Haze Density-Aware Image Blending
by Siyeon Han, Dat Ngo, Yeonggyu Choi and Bongsoon Kang
Remote Sens. 2024, 16(19), 3641; https://doi.org/10.3390/rs16193641 - 29 Sep 2024
Viewed by 892
Abstract
Single-image dehazing is an ill-posed problem that has attracted a myriad of research efforts. However, virtually all methods proposed thus far assume that input images are already affected by haze; little effort has been spent on autonomous single-image dehazing. Moreover, deep learning dehazing models, despite their widely claimed generalizability, do not exhibit satisfactory performance on images with varying haze conditions. In this paper, we present a novel approach for autonomous single-image dehazing. Our approach consists of four major steps: sharpness enhancement, adaptive dehazing, image blending, and adaptive tone remapping. A global haze density weight drives the adaptive dehazing and tone remapping to handle images with various haze conditions, including those that are haze-free or affected by mild, moderate, and dense haze. Meanwhile, the proposed approach adopts patch-based haze density weights to guide the image blending, resulting in enhanced local texture. Comparative performance analysis with state-of-the-art methods demonstrates the efficacy of our proposed approach. Full article
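The density-weighted image blending step described above amounts to a per-pixel convex combination of the input and its dehazed version. A generic sketch, assuming a precomputed density map in [0, 1]; the paper's patch-based weights are more elaborate.

```python
import numpy as np

def blend_by_haze_density(hazy, dehazed, density):
    """Lean on the dehazed image where estimated haze density is high,
    and keep the input where the scene is already clear."""
    w = np.clip(density, 0.0, 1.0)
    return w * dehazed + (1.0 - w) * hazy
```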

16 pages, 8351 KiB  
Article
SCL-Dehaze: Toward Real-World Image Dehazing via Semi-Supervised Codebook Learning
by Tong Cui, Qingyue Dai, Meng Zhang, Kairu Li and Xiaofei Ji
Electronics 2024, 13(19), 3826; https://doi.org/10.3390/electronics13193826 - 27 Sep 2024
Viewed by 830
Abstract
Existing dehazing methods deal with real-world haze images with difficulty, especially scenes with thick haze. One of the main reasons is lacking real-world pair data and robust priors. To improve dehazing ability in real-world scenes, we propose a semi-supervised codebook learning dehazing method. The codebook is used as a strong prior to guide the hazy image recovery process. However, the following two issues arise when the codebook is applied to the image dehazing task: (1) Latent space features obtained from the coding of degraded hazy images suffer from matching errors when nearest-neighbour matching is performed. (2) Maintaining a good balance of image recovery quality and fidelity for heavily degraded dense hazy images is difficult. To reduce the nearest-neighbor matching error rate in the vector quantization stage of VQGAN, we designed the unit dual-attention residual transformer module (UDART) to correct the latent space features. The UDART can make the latent features obtained from the encoding stage closer to those of the corresponding clear image. To balance the quality and fidelity of the dehazing result, we design a haze density guided weight adaptive module (HDGWA), which can adaptively adjust the multi-scale skip connection weights according to haze density. In addition, we use mean teacher, a semi-supervised learning strategy, to bridge the domain gap between synthetic and real-world data and enhance the model generalization in real-world scenes. Comparative experiments show that our method achieves improvements of 0.003, 2.646, and 0.019 over the second-best method for the no-reference metrics FADE, MUSIQ, and DBCNN, respectively, on the real-world dataset URHI. Full article
(This article belongs to the Special Issue Deep Learning-Based Image Restoration and Object Identification)

14 pages, 6866 KiB  
Article
MSNet: A Multistage Network for Lightweight Image Dehazing with Content-Guided Attention and Adaptive Encoding
by Lingrui Dai, Hongrui Liu and Shuoshi Li
Electronics 2024, 13(19), 3812; https://doi.org/10.3390/electronics13193812 - 26 Sep 2024
Viewed by 779
Abstract
Image dehazing is a critical technique aimed at improving the visual clarity of images. The diverse nature of hazy environments poses significant challenges in developing an efficient and lightweight dehazing model. In this paper, we design a multistage network (MSNet) with content-guided attention and adaptive encoding. The multistage dehazing framework decomposes the complex task of image dehazing into three distinct stages, thereby substantially reducing model complexity. Additionally, we introduce a content-guided attention mechanism that assigns varying weights to different image content elements based on their specific characteristics, thereby improving the efficiency of nonhomogeneous dehazing. Furthermore, we present an adaptive encoder that employs a dual-branch feature extraction structure combined with a gating mechanism, enabling dynamic adjustment of the interactions between the two branches according to the input image. Extensive experimental evaluations on three popular dehazing datasets demonstrate the effectiveness of our proposed MSNet. Full article

18 pages, 8451 KiB  
Article
Remote Sensing Image Dehazing via Dual-View Knowledge Transfer
by Lei Yang, Jianzhong Cao, He Bian, Rui Qu, Huinan Guo and Hailong Ning
Appl. Sci. 2024, 14(19), 8633; https://doi.org/10.3390/app14198633 - 25 Sep 2024
Cited by 1 | Viewed by 708
Abstract
Remote-sensing image dehazing (RSID) is crucial for applications such as military surveillance and disaster assessment. However, current methods often rely on complex network architectures, compromising computational efficiency and scalability. Furthermore, the scarcity of annotated remote-sensing-dehazing datasets hinders model development. To address these issues, a Dual-View Knowledge Transfer (DVKT) framework is proposed to generate a lightweight and efficient student network by distilling knowledge from a pre-trained teacher network on natural image dehazing datasets. The DVKT framework includes two novel knowledge-transfer modules: Intra-layer Transfer (Intra-KT) and Inter-layer Knowledge Transfer (Inter-KT) modules. Specifically, the Intra-KT module is designed to correct the learning bias of the student network by distilling and transferring knowledge from a well-trained teacher network. The Inter-KT module is devised to distill and transfer knowledge about cross-layer correlations. This enables the student network to learn hierarchical and cross-layer dehazing knowledge from the teacher network, thereby extracting compact and effective features. Evaluation results on benchmark datasets demonstrate that the proposed DVKT framework achieves superior performance for RSID. In particular, the distilled model achieves a significant speedup with less than 6% of the parameters and computational cost of the original model, while maintaining a state-of-the-art dehazing performance. Full article

20 pages, 3181 KiB  
Article
Dehazing Algorithm Integration with YOLO-v10 for Ship Fire Detection
by Farkhod Akhmedov, Rashid Nasimov and Akmalbek Abdusalomov
Fire 2024, 7(9), 332; https://doi.org/10.3390/fire7090332 - 23 Sep 2024
Cited by 3 | Viewed by 1348
Abstract
Ship fire detection presents significant challenges in computer vision-based approaches due to factors such as the considerable distances from which ships must be detected and the unique conditions of the maritime environment. The presence of water vapor and high humidity further complicates detection and classification for deep learning models, as these factors can obscure visual clarity and introduce noise into the data. In this research, we describe the development of a custom ship fire dataset and a YOLO (You Only Look Once)-v10 model fine-tuned in combination with dehazing algorithms. Our approach integrates the power of deep learning with sophisticated image processing to deliver a comprehensive solution for ship fire detection. The results demonstrate the efficacy of using YOLO-v10 in conjunction with a dehazing algorithm, highlighting significant improvements in detection accuracy and reliability. Experimental results show that the developed YOLO-v10-based ship fire detection model outperforms several YOLO and other detection models in precision (97.7%), recall (98%), and mAP@0.5 score (89.7%). However, the model reached a relatively lower F1 score compared with the YOLO-v8 and ship-fire-net models. In addition, the dehazing approach significantly improves the model's detection performance in hazy environments. Full article
(This article belongs to the Section Fire Science Models, Remote Sensing, and Data)

16 pages, 4126 KiB  
Article
An Efficient Multi-Scale Wavelet Approach for Dehazing and Denoising Ultrasound Images Using Fractional-Order Filtering
by Li Wang, Zhenling Yang, Yi-Fei Pu, Hao Yin and Xuexia Ren
Fractal Fract. 2024, 8(9), 549; https://doi.org/10.3390/fractalfract8090549 - 23 Sep 2024
Viewed by 1023
Abstract
Ultrasound imaging is widely used in medical diagnostics due to its non-invasive and real-time capabilities. However, existing methods often overlook the benefits of fractional-order filters for denoising and dehazing. Thus, this work introduces an efficient multi-scale wavelet method for dehazing and denoising ultrasound images using a fractional-order filter, which integrates a guided filter, directional filter, fractional-order filter, and haze removal to the different resolution images generated by a multi-scale wavelet. In the directional filter stage, an eigen-analysis of each pixel is conducted to extract structural features, which are then classified into edges for targeted filtering. The guided filter subsequently reduces speckle noise in homogeneous anatomical regions. The fractional-order filter allows the algorithm to effectively denoise while improving edge definition, irrespective of the edge size. Haze removal can effectively eliminate the haze caused by attenuation. Our method achieved significant improvements, with PSNR reaching 31.25 and SSIM 0.905 on our ultrasound dataset, outperforming other methods. Additionally, on external datasets like McMaster and Kodak24, it achieved the highest PSNR (29.68, 28.62) and SSIM (0.858, 0.803). Clinical evaluations by four radiologists confirmed its superiority in liver and carotid artery images. Overall, our approach outperforms existing speckle reduction and structural preservation techniques, making it highly suitable for clinical ultrasound imaging. Full article
(This article belongs to the Section Life Science, Biophysics)
