
EHNQ: Subjective and Objective Quality Evaluation of Enhanced Night-Time Images

Published: 01 September 2023

Abstract

Vision-based practical applications, such as consumer photography and automated driving, rely heavily on enhancing the visibility of images captured at night. For this reason, various image enhancement algorithms (EHAs) have been proposed. However, little attention has been paid to the quality evaluation of enhanced night-time images. In this paper, we conduct the first dedicated exploration of the subjective and objective quality evaluation of enhanced night-time images. First, we build an enhanced night-time image quality (EHNQ) database, the largest of its kind so far, containing 1,500 enhanced images generated from 100 real night-time images using 15 different EHAs. Subsequently, we perform a subjective quality evaluation and obtain subjective quality scores on the EHNQ database. Thereafter, we present an objective blind quality index for enhanced night-time images (BEHN). Enhanced night-time images usually suffer from inappropriate brightness and contrast, deformed structure, and unnatural colorfulness. In BEHN, we capture perceptual features that are highly relevant to these three types of corruption, and we design an ensemble training strategy to map the extracted features into a quality score. Finally, we conduct extensive experiments on the EHNQ and EAQA databases, whose results validate the performance of the proposed BEHN against state-of-the-art approaches. Our EHNQ database is publicly available for download at <uri>https://sites.google.com/site/xiangtaooo/</uri>.
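The pipeline the abstract outlines — extract perceptual features tied to brightness/contrast, structure, and colorfulness, then map them to a quality score with an ensemble of trained regressors — can be illustrated with a minimal sketch. The functions `extract_features` and `ensemble_score` below are hypothetical stand-ins, not the paper's BEHN implementation: as assumptions, the sketch uses RMS contrast, mean gradient magnitude as a structure proxy, and the Hasler–Süsstrunk colorfulness measure, with untrained linear scorers in place of the paper's ensemble.

```python
import numpy as np

def extract_features(rgb):
    """Toy perceptual features for one enhanced night-time image.

    `rgb` is an H x W x 3 float array in [0, 1]. The four features
    (brightness, contrast, structure, colorfulness) merely illustrate
    the three corruption types named in the abstract; they are NOT the
    actual BEHN features described in the paper.
    """
    gray = rgb @ np.array([0.299, 0.587, 0.114])          # luma
    brightness = gray.mean()                              # global luminance
    contrast = gray.std()                                 # RMS contrast
    gy, gx = np.gradient(gray)                            # structure proxy:
    structure = np.hypot(gx, gy).mean()                   # mean gradient magnitude
    rg = rgb[..., 0] - rgb[..., 1]                        # opponent channels for
    yb = 0.5 * (rgb[..., 0] + rgb[..., 1]) - rgb[..., 2]  # Hasler-Suesstrunk colorfulness
    colorfulness = np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())
    return np.array([brightness, contrast, structure, colorfulness])

def ensemble_score(features, models):
    """Average the predictions of several linear scorers (w, b) --
    a stand-in for an ensemble of trained regressors."""
    return float(np.mean([features @ w + b for w, b in models]))

# Usage with random stand-in data and untrained stand-in models;
# in the paper the regressors would be fit to the subjective scores.
rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))
feats = extract_features(image)
models = [(rng.standard_normal(4), 0.0) for _ in range(3)]
score = ensemble_score(feats, models)
```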


Cited By

  • (2024) "Benchmark Dataset and Pair-Wise Ranking Method for Quality Evaluation of Night-Time Image Enhancement," IEEE Transactions on Multimedia, vol. 26, pp. 9436–9449, doi: 10.1109/TMM.2024.3391907. Online publication date: 22-Apr-2024.
  • (2024) "Gap-Closing Matters: Perceptual Quality Evaluation and Optimization of Low-Light Image Enhancement," IEEE Transactions on Multimedia, vol. 26, pp. 3430–3443, doi: 10.1109/TMM.2023.3312851. Online publication date: 1-Jan-2024.
  • (2024) "Progressive Bidirectional Feature Extraction and Enhancement Network for Quality Evaluation of Night-Time Images," IEEE Transactions on Multimedia, vol. 26, pp. 1690–1705, doi: 10.1109/TMM.2023.3284988. Online publication date: 1-Jan-2024.
  • (2024) "Multitask Deep Neural Network With Knowledge-Guided Attention for Blind Image Quality Assessment," IEEE Transactions on Circuits and Systems for Video Technology, vol. 34, no. 8, pp. 7577–7588, doi: 10.1109/TCSVT.2024.3375344. Online publication date: 1-Aug-2024.


Published In

IEEE Transactions on Circuits and Systems for Video Technology, Volume 33, Issue 9, September 2023, 882 pages

Publisher

IEEE Press


Qualifiers

  • Research-article

