
Unified Blind Quality Assessment of Compressed Natural, Graphic, and Screen Content Images

Published: 01 November 2017

Abstract

Digital images in the real world are created by a variety of means and have diverse properties. A photographic natural scene image (NSI) may exhibit substantially different characteristics from a computer graphic image (CGI) or a screen content image (SCI). This poses major challenges for objective image quality assessment: existing approaches lack effective mechanisms to capture such content-type variations and therefore generalize poorly from one type to another. To tackle this problem, we first construct a cross-content-type (CCT) database, which contains 1,320 distorted NSIs, CGIs, and SCIs, compressed using the high efficiency video coding (HEVC) intra coding method and the screen content compression (SCC) extension of HEVC. We then carry out a subjective experiment on the database in a well-controlled laboratory environment. Moreover, we propose a unified content-type adaptive (UCA) blind image quality assessment model that is applicable across content types. A key step in UCA is to incorporate, through a multi-scale weighting framework, the variations in human perceptual characteristics when viewing different content types. This leads to superior performance on the constructed CCT database. UCA is training-free, which implies strong generalizability. To verify this, we test UCA on other databases containing JPEG, MPEG-2, H.264, and HEVC compressed images/videos, and observe that it consistently achieves competitive performance.
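The multi-scale weighting idea described in the abstract can be illustrated with a minimal sketch: per-scale quality estimates are pooled with weights chosen according to the content type. The weight values, scale count, and function names below are hypothetical illustrations, not the actual UCA model from the paper.

```python
import numpy as np

def multiscale_pool(scale_scores, weights):
    """Combine per-scale quality scores into one score via a convex combination.

    Hypothetical sketch of content-type-adaptive multi-scale weighting;
    the real UCA weights are derived from perceptual considerations in the paper.
    """
    scale_scores = np.asarray(scale_scores, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize so weights sum to 1
    return float(np.dot(scale_scores, weights))

# Hypothetical weights: screen content (text-heavy) is typically viewed at
# closer-to-pixel scales, while natural scenes rely more on coarse structure.
weights_by_type = {
    "NSI": [0.1, 0.2, 0.3, 0.4],  # emphasize coarse scales
    "SCI": [0.4, 0.3, 0.2, 0.1],  # emphasize fine scales
}

scores = [0.9, 0.8, 0.7, 0.6]  # per-scale quality estimates (fine -> coarse)
print(multiscale_pool(scores, weights_by_type["NSI"]))  # 0.70
print(multiscale_pool(scores, weights_by_type["SCI"]))  # 0.80
```

The same per-scale scores thus yield different overall quality predictions depending on content type, which is the mechanism the abstract attributes to UCA's cross-content-type adaptivity.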



Publisher: IEEE Press
