An Improved Random Forest Approach on GAN-Based Dataset Augmentation for Fog Observation
Abstract
1. Introduction
2. Data and Methodology
2.1. Experiment and Data
2.2. Data Augmentation
2.3. Extraction of Image Features Related to Fog Density
2.4. Deep Learning Approaches: VGG16, VGG19, ResNet50, DenseNet169 and the Improved Random Forest
- Hierarchical Clustering. Initially, each decision tree is treated as an independent cluster. The Dunn index is used to calculate the similarity between any two decision trees, and the two clusters with the smallest similarity are merged. This process is repeated until the number of remaining clusters reaches a predetermined value. Then, the decision tree with the best classification performance is selected from each cluster to form a new Random Forest model.
- K-Medoids Clustering. The cluster centers obtained from hierarchical clustering serve as the initial medoids for K-Medoids clustering. The similarity between each unassigned decision tree and each cluster center is calculated, and the trees are reassigned according to the nearest-neighbor principle. The best-performing decision tree within each cluster is then selected as the new cluster center. This process is repeated until the cluster centers stabilize or the maximum number of iterations is reached.
- Model Training and Prediction. The preprocessed feature data are input into the improved Random Forest model for training. The model constructs a large number of decision trees, each independently predicting the fog density, and the final output is obtained by averaging the predictions of all decision trees. A simplified sketch of the tree-selection pipeline is given after this list.
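The tree-selection step can be sketched in a few lines of Python. This is a minimal illustration under simplifying assumptions, not the authors' implementation: pairwise tree similarity is taken to be prediction agreement on a held-out validation set rather than the Dunn index, scikit-learn's AgglomerativeClustering stands in for the Dunn-index-guided merging, and the K-Medoids refinement step is omitted.

```python
# Simplified sketch of hybrid-clustering tree selection (assumptions noted above).
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy data standing in for the 16 fog-relevant image features (F1-F16).
X, y = make_classification(n_samples=1500, n_features=16, n_informative=10,
                           n_classes=5, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Pairwise tree similarity = fraction of validation samples on which two trees
# agree; 1 - similarity serves as the distance for hierarchical clustering.
tree_preds = np.array([t.predict(X_val) for t in forest.estimators_])
agreement = (tree_preds[:, None, :] == tree_preds[None, :, :]).mean(axis=2)

n_clusters = 50  # predetermined number of clusters to retain
# scikit-learn >= 1.2; older versions use affinity="precomputed" instead of metric.
labels = AgglomerativeClustering(n_clusters=n_clusters, metric="precomputed",
                                 linkage="average").fit_predict(1.0 - agreement)

# Keep the tree with the best validation accuracy from each cluster.
tree_acc = (tree_preds == y_val).mean(axis=1)
kept = [int(np.where(labels == c)[0][np.argmax(tree_acc[labels == c])])
        for c in range(n_clusters)]

# The pruned forest predicts by majority vote over the retained trees.
votes = tree_preds[kept].astype(int)
pruned_pred = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
print(f"full forest:   {forest.score(X_val, y_val):.3f}")
print(f"pruned forest: {(pruned_pred == y_val).mean():.3f}")
```

The intent of the selection is to keep one strong representative per group of mutually similar trees, so the pruned forest preserves diversity while discarding redundant members.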
2.5. Assessment Method
3. Results
3.1. Augmented Data
3.2. Relationship Between Image Features and Fog Density
3.3. Estimation of Fog Density
4. Conclusions and Discussion
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Li, Z. Studies of Fog in China over the Past 40 Years. Acta Meteorol. Sin. 2001, 5, 616–624.
- Bao, Z.; Tang, Y.; Li, C. Road Traffic Safety Technology Series: Highway Traffic Safety and Meteorological Impact, 1st ed.; People’s Traffic Press: Beijing, China, 2008; pp. 1–15. ISBN 9787114070693.
- GB/T 27964-2011; Fog Forecast. Meteorological Standard. General Administration of Quality Supervision, Inspection and Quarantine of the People’s Republic of China; Standardization Administration of China: Beijing, China, 2011; pp. 1–6.
- Wang, Y.; Jia, L.; Li, X.; Lu, Y.; Hua, D. A measurement method for slant visibility with slant path scattered radiance correction by lidar and the SBDART model. Opt. Express 2020, 29, 837–853.
- Xian, J.; Han, Y.; Huang, S.; Sun, D.; Li, X. Novel lidar algorithm for horizontal visibility measurement and sea fog monitoring. Opt. Express 2018, 26, 34853–34863.
- Li, Y.; Sun, H.; Xu, M. The Present Situation and Problems on Detecting Fog by Remote Sensing with Meteorological Satellite. Remote Sens. Technol. Appl. 2000, 15, 223–227.
- Tang, K.; Yang, J.; Wang, J. Investigating Haze-Relevant Features in a Learning Framework for Image Dehazing. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014.
- Yuan, D.; Huang, J.; Yang, X.; Cui, J. Improved random forest classification approach based on hybrid clustering selection. In Proceedings of the 2020 Chinese Automation Congress (CAC), Shanghai, China, 6–8 November 2020; pp. 1559–1563.
- Li, Q.; Tang, S.; Peng, X.; Ma, Q. A Method of Visibility Detection Based on the Transfer Learning. J. Atmos. Ocean. Technol. 2019, 36, 1945–1956.
- Lo, W.L.; Zhu, M.; Fu, H. Meteorology Visibility Estimation by Using Multi-Support Vector Regression Method. J. Adv. Inf. Technol. 2020, 11, 40–47.
- Jonnalagadda, J.; Hashemi, M. Forecasting Atmospheric Visibility Using Auto Regressive Recurrent Neural Network. In Proceedings of the 2020 IEEE 21st International Conference on Information Reuse and Integration for Data Science, Las Vegas, NV, USA, 11–13 August 2020; pp. 209–215.
- Li, J.; Lo, W.L.; Fu, H.; Chung, H.S.H. A Transfer Learning Method for Meteorological Visibility Estimation Based on Feature Fusion Method. Appl. Sci. 2021, 11, 997.
- Lo, W.L.; Shu, H.; Fu, H. Experimental Evaluation of PSO Based Transfer Learning Method for Meteorological Visibility Estimation. Atmosphere 2021, 12, 828.
- Li, Y.; Ji, Y.; Fu, J.; Chang, X. FGS-Net: A Visibility Estimation Method Based on Statistical Feature Stream in Fog Area. Res. Sq. 2023.
- Choi, Y.; Choe, H.-G.; Choi, J.Y.; Kim, K.T.; Kim, J.-B.; Kim, N.-I. Automatic Sea Fog Detection and Estimation of Visibility Distance on CCTV. J. Coast. Res. 2018, 85, 881–885.
- Zhang, F.; Yu, T.; Li, Z.; Wang, K.; Chen, Y.; Huang, Y.; Kuang, Q. Deep Quantified Visibility Estimation for Traffic Image. Atmosphere 2022, 14, 61.
- Busch, C.; Debes, E. Wavelet transform for visibility analysis in fog situations. IEEE Intell. Syst. 1998, 13, 66–71.
- Hautière, N.; Tarel, J.-P.; Lavenant, J.; Aubert, D. Automatic fog detection and estimation of visibility distance through use of an on board camera. Mach. Vis. Appl. 2006, 17, 8–20.
- Negru, M.; Nedevschi, S. Image based fog detection and visibility estimation for driving assistance systems. In Proceedings of the 2013 IEEE 9th International Conference on Intelligent Computer Communication and Processing (ICCP), Cluj-Napoca, Romania, 5–7 September 2013; pp. 163–168.
- Guo, F.; Peng, H.; Tang, J.; Zou, B.; Tang, C. Visibility detection approach to road scene foggy images. KSII Trans. Internet Inf. Syst. 2016, 10, 4419–4441.
- Wauben, W.; Roth, M. Exploration of fog detection and visibility estimation from camera images. In Proceedings of the WMO Technical Conference on Meteorological and Environmental Instruments and Methods of Observation (CIMO TECO), Madrid, Spain, 27–30 September 2016; pp. 1–14.
- Yang, L.; Muresan, R.; Al-Dweik, A.; Hadjileontiadis, L.J. Image based visibility estimation algorithm for intelligent transportation systems. IEEE Access 2018, 6, 76728–76740.
- Cheng, X.; Liu, G.; Hedman, A.; Wang, K.; Li, H. Expressway visibility estimation based on image entropy and piecewise stationary time series analysis. arXiv 2018, arXiv:1804.04601.
- Zhu, Q.; Mai, J.; Shao, L. A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior. IEEE Trans. Image Process. 2015, 24, 3522–3533.
- Chai, J.; Zeng, H.; Li, A.; Ngai, E.W.T. Deep learning in computer vision: A critical review of emerging techniques and application scenarios. Mach. Learn. Appl. 2021, 6, 100134.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv 2015, arXiv:1512.03385.
- Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. arXiv 2016, arXiv:1608.06993.
- Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
- Yang, W.; Zhao, Y.; Li, Q.; Zhu, F.; Su, Y. Multi visual feature fusion based fog visibility estimation for expressway surveillance using deep learning network. Expert Syst. Appl. 2023, 234, 121151.
- Miao, K.; Zhou, J.; Tao, P.; Liu, C.; Tao, Y. Visibility recognition of fog figure based on self-adaptive hybrid convolutional neural network. Comput. Eng. Appl. 2020, 56, 205–212.
- Huang, L.; Zhang, Z.; Xiao, P.; Sun, J.; Zhou, X. Classification and application of highway visibility based on deep learning. Trans. Atmos. Sci. 2022, 45, 203–211.
- Karras, T.; Aittala, M.; Hellsten, J.; Laine, S.; Lehtinen, J.; Aila, T. Training generative adversarial networks with limited data. Adv. Neural Inf. Process. Syst. 2020, 33, 12104–12114.
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. Commun. ACM 2014, 63, 139–144.
- Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212.
- Ruderman, D.L. The statistics of natural images. Netw. Comput. Neural Syst. 1994, 5, 517–548.
- Makkar, D.; Malhotra, M. Single Image Haze Removal Using Dark Channel Prior. Int. J. Eng. Comput. Sci. 2016, 5, 15467–15473.
- Hasler, D.; Suesstrunk, S.E. Measuring colorfulness in natural images. Proc. SPIE 2003, 5007, 87.
- Choi, L.K.; You, J.; Bovik, A.C. Referenceless Prediction of Perceptual Fog Density and Perceptual Image Defogging. IEEE Trans. Image Process. 2015, 24, 3888–3901.
- Gu, K.; Zhai, G.; Yang, X.; Zhang, W. Using Free Energy Principle for Blind Image Quality Assessment. IEEE Trans. Multimed. 2015, 17, 50–63.
- Berns, R.S. Billmeyer and Saltzman’s Principles of Color Technology, 4th ed.; John Wiley & Sons: Hoboken, NJ, USA, 2021.
- Ruderman, D.L.; Cronin, T.W.; Chiao, C.-C. Statistics of cone responses to natural images: Implications for visual coding. J. Opt. Soc. Am. A 1998, 15, 2036.
| Fog Density | Visibility Range |
|---|---|
| Light Fog | 1000–10,000 m |
| Moderate Fog | 500–1000 m |
| Dense Fog | 200–500 m |
| Thick Fog | 50–200 m |
| Very Thick Fog | <50 m |
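As a worked illustration, the grading in the table above maps directly to a small lookup function. The function name, the lower-bound-inclusive treatment of boundary values, and the "No Fog" label for visibilities of 10,000 m and above are illustrative assumptions, not part of the standard text.

```python
def fog_grade(visibility_m: float) -> str:
    """Map a visibility observation in metres to the fog grade in the table above."""
    if visibility_m < 50:
        return "Very Thick Fog"
    if visibility_m < 200:
        return "Thick Fog"
    if visibility_m < 500:
        return "Dense Fog"
    if visibility_m < 1000:
        return "Moderate Fog"
    if visibility_m < 10000:
        return "Light Fog"
    return "No Fog"  # outside the table's range

print(fog_grade(320))  # -> "Dense Fog"
```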
| Metric | 0–50 m | 50–200 m | 200–500 m | 500–1000 m | 1000–10,000 m | Average |
|---|---|---|---|---|---|---|
| Inception Score | 3.81 | 3.21 | 4.26 | 3.42 | 3.33 | 3.61 |
| FID value | 98.01 | 98.19 | 98.91 | 93.53 | 92.69 | 96.27 |
| Fog-Relevant Features | Serial Number | Correlation Coefficient |
|---|---|---|
| Coefficients of MSCN variance | F1 | 0.493 |
| Dark channel | F2 | 0.562 |
| Colorfulness | F3 | 0.581 |
| Sharpness | F4 | 0.457 |
| Coefficient of sharpness variance | F5 | 0.477 |
| Entropy | F6 | 0.481 |
| Combination of saturation and value in HSV space | F7 | 0.440 |
| Chroma | F8 | 0.632 |
| Variance of chroma | F9 | 0.534 |
| Weber contrast of luminance | F10 | 0.555 |
| Local contrast | F11 | 0.512 |
| Contrast energy (gray) | F12 | 0.354 |
| Contrast energy (yb) | F13 | 0.313 |
| Contrast energy (rg) | F14 | 0.367 |
| Gradient magnitude | F15 | 0.295 |
| Color variance | F16 | 0.308 |
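As one concrete example of how a listed feature can be computed, the sketch below estimates the dark channel (F2): the per-pixel minimum over the colour channels followed by a local minimum filter, summarised here by its mean. The 15-pixel window and the use of the mean as a scalar summary are illustrative assumptions, not the settings used in the paper.

```python
# Illustrative computation of a dark-channel feature; parameters are assumptions.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel_mean(rgb: np.ndarray, window: int = 15) -> float:
    """Mean dark-channel value of an RGB image scaled to [0, 1].

    Fog raises the minimum intensity within local patches, so denser fog
    tends to produce a brighter (larger) dark channel.
    """
    per_pixel_min = rgb.min(axis=2)                    # minimum over R, G, B
    dark = minimum_filter(per_pixel_min, size=window)  # local minimum filter
    return float(dark.mean())

# Example on a synthetic image (a real fog image would be loaded instead).
img = np.random.rand(240, 320, 3)
print(dark_channel_mean(img))
```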
| Model | Very Thick Fog | Thick Fog | Dense Fog | Moderate Fog | Light Fog | Total |
|---|---|---|---|---|---|---|
| VGG-16 (%) | 63.2 | 73.3 | 84.1 | 85.5 | 87.7 | 83.9 |
| VGG-19 (%) | 64.7 | 71.4 | 84.7 | 86.6 | 86.3 | 85.6 |
| ResNet-50 (%) | 68.5 | 76.5 | 85.3 | 88.4 | 90.1 | 86.9 |
| DenseNet-169 (%) | 64.7 | 69.3 | 83.4 | 89.6 | 90.1 | 85.8 |
| Random Forest (%) | 55.1 | 68.7 | 83.4 | 88.9 | 90.8 | 84.1 |
| Random Forest based on hybrid clustering (%) | 58.5 | 70.5 | 85.5 | 89.7 | 91.1 | 86.4 |
| Model | Very Thick Fog | Thick Fog | Dense Fog | Moderate Fog | Light Fog | Total |
|---|---|---|---|---|---|---|
| VGG-16 (%) | 81.3 | 88.3 | 86.1 | 88.5 | 89.7 | 86.2 |
| VGG-19 (%) | 82.7 | 86.2 | 89.7 | 90.3 | 91.3 | 89.6 |
| ResNet-50 (%) | 83.4 | 89.3 | 91.3 | 89.1 | 90.4 | 88.9 |
| DenseNet-169 (%) | 84.3 | 89.6 | 91.4 | 92.6 | 93.6 | 91.2 |
| Random Forest (%) | 89.0 | 92.7 | 91.4 | 89.9 | 92.3 | 90.1 |
| Random Forest based on hybrid clustering (%) | 89.8 | 91.7 | 94.5 | 92.7 | 94.9 | 93.0 |
| Reference | Method | Visibility Range (m) | Feature Extractor | Classifier/Regressor | Dataset | Accuracy |
|---|---|---|---|---|---|---|
| Li et al. [12] | Deep learning approach based on the fusion of features extracted from selected subregions for visibility estimation. | 0–12,000 | VGG-16 | Multi-SVR | HKO (4841 images) | 0.88 |
| Lo et al. [13] | PSO-based transfer learning approach for feature selection, with a Multi-SVR model to estimate visibility. | 10,000–40,000 | VGG-19 / DenseNet / ResNet-50 / VGG-16 / VGG-19 / DenseNet / ResNet-50 | Multi-SVR | Private dataset (6048 images) | 0.88 / 0.90 / 0.91 / 0.90 / 0.90 / 0.91 / 0.93 |
| Liu et al. [14] | STCN-Net model that combines engineered and learned features. | 50–10,000 | Swin-T + ResNet-18 | Fully connected | VID I | 0.98 |
| Choi et al. [15] | Detection of daytime sea fog and estimation of visibility distance from CCTV images. | 0–20,000 | VGG-19 | Fully connected | Private dataset (5104 images) | 0.72 |
| Zhang et al. [16] | Estimation of quantified visibility based on physical laws and deep learning architectures. | 0–35,000 | DQVENet | Specific algorithm | QVEData | 0.87 |