Research article

ESIM: Edge Similarity for Screen Content Image Quality Assessment

Published: 01 October 2017

Abstract

In this paper, an accurate full-reference image quality assessment (IQA) model for screen content images (SCIs), called the edge similarity (ESIM) model, is proposed. It is motivated by the fact that the human visual system (HVS) is highly sensitive to the edges that abound in SCIs; essential edge features are therefore extracted and exploited for conducting IQA of SCIs. The key novelty of the proposed ESIM lies in the extraction and use of three salient edge attributes, namely edge contrast, edge width, and edge direction. The first two attributes are generated simultaneously from the input SCI based on a parametric edge model, while the third is derived directly from the input SCI. These three features are extracted from the reference SCI and the distorted SCI individually. The degree of similarity is then computed independently for each edge attribute, and the three similarity measurements are combined through the proposed edge-width pooling strategy to generate the final ESIM score. To evaluate the proposed model, a new and, to date, the largest SCI database (denoted SCID) has been established and made publicly available for download. The database contains 1800 distorted SCIs generated from 40 reference SCIs; for each reference SCI, nine distortion types are investigated, with five degradation levels produced per type. Extensive simulation results show that the proposed ESIM model is more consistent with HVS perception in evaluating distorted SCIs than multiple state-of-the-art IQA methods.
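The measurement pipeline described in the abstract — compute a similarity map for each of the three edge attributes, combine the maps, then pool the result with edge-width weights — can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the SSIM-style similarity form, the stability constants `c1`–`c3`, and the max-based pooling weight are placeholders, and the parametric edge model that actually extracts contrast and width from an SCI is omitted.

```python
import numpy as np

def feature_similarity(f_ref, f_dst, c):
    """SSIM-style elementwise similarity between two feature maps.

    Equals 1 where the maps agree; the constant c (assumed value)
    stabilizes the ratio where both features are near zero.
    """
    return (2.0 * f_ref * f_dst + c) / (f_ref**2 + f_dst**2 + c)

def esim_score(contrast_r, contrast_d,
               width_r, width_d,
               direction_r, direction_d,
               c1=1e-3, c2=1e-3, c3=1e-3):
    """Hypothetical ESIM-style score: per-feature similarity maps,
    combined multiplicatively, pooled with edge-width weights."""
    s_contrast = feature_similarity(contrast_r, contrast_d, c1)
    s_width = feature_similarity(width_r, width_d, c2)
    s_direction = feature_similarity(direction_r, direction_d, c3)
    # Combine the three similarity maps (multiplicative fusion is an assumption).
    s = s_contrast * s_width * s_direction
    # Edge-width pooling: weight each location by the larger edge width
    # of the two images (assumed form of the pooling weight).
    w = np.maximum(width_r, width_d)
    return float((s * w).sum() / (w.sum() + 1e-12))
```

With identical reference and distorted feature maps every similarity map is 1, so the pooled score is 1; any feature degradation lowers the corresponding similarity map and hence the score.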




Publisher: IEEE Press


Cited By

  • "Unifying Pictorial and Textual Features for Screen Content Image Quality Evaluation," Proc. 2024 Int. Conf. Multimedia Retrieval, pp. 1099–1103, May 2024. 10.1145/3652583.3657610
  • "Width-Adaptive CNN: Fast CU Partition Prediction for VVC Screen Content Coding," IEEE Trans. Multimedia, vol. 26, pp. 9372–9382, Jun. 2024. 10.1109/TMM.2024.3410116
  • "Deep Feature Statistics Mapping for Generalized Screen Content Image Quality Assessment," IEEE Trans. Image Process., vol. 33, pp. 3227–3241, May 2024. 10.1109/TIP.2024.3393754
  • "Graph-Represented Distribution Similarity Index for Full-Reference Image Quality Assessment," IEEE Trans. Image Process., vol. 33, pp. 3075–3089, Apr. 2024. 10.1109/TIP.2024.3390565
  • "The Bjøntegaard Bible: Why Your Way of Comparing Video Codecs May Be Wrong," IEEE Trans. Image Process., vol. 33, pp. 987–1001, Jan. 2024. 10.1109/TIP.2023.3346695
  • "Blind quality assessment of screen content images via edge histogram descriptor and statistical moments," The Visual Computer, vol. 40, no. 8, pp. 5341–5356, Aug. 2024. 10.1007/s00371-023-03108-1
  • "Just noticeable visual redundancy forecasting," Proc. 37th AAAI Conf. Artif. Intell., pp. 2965–2973, Feb. 2023. 10.1609/aaai.v37i3.25399
  • "Visual Security Index Combining CNN and Filter for Perceptually Encrypted Light Field Images," ACM Trans. Multimedia Comput. Commun. Appl., vol. 20, no. 1, pp. 1–15, Aug. 2023. 10.1145/3612924
  • "Compressed Screen Content Image Super Resolution," ACM Trans. Multimedia Comput. Commun. Appl., vol. 19, no. 6, pp. 1–20, Mar. 2023. 10.1145/3589963
  • "Visual Redundancy Removal of Composite Images via Multimodal Learning," Proc. 31st ACM Int. Conf. Multimedia, pp. 6765–6773, Oct. 2023. 10.1145/3581783.3612118
