Subjective and Objective Quality Assessment for in-the-Wild Computer Graphics Images

Published: 11 December 2023

Abstract

Computer graphics images (CGIs) are generated by computer programs and are widely viewed in scenarios such as gaming and streaming media. In practice, the quality of CGIs consistently suffers from poor rendering during production, inevitable compression artifacts during transmission in multimedia applications, and low aesthetic quality resulting from poor composition and design. However, few works have addressed the challenge of computer graphics image quality assessment (CGIQA). Most image quality assessment (IQA) metrics are developed for natural scene images (NSIs) and validated on databases of NSIs with synthetic distortions, which makes them unsuitable for in-the-wild CGIs. To bridge the gap between evaluating the quality of NSIs and CGIs, we construct a large-scale in-the-wild CGIQA database consisting of 6,000 CGIs (CGIQA-6k) and carry out a subjective experiment in a well-controlled laboratory environment to obtain accurate perceptual ratings of the CGIs. We then propose an effective deep learning–based no-reference (NR) IQA model that utilizes both distortion and aesthetic quality representations. Experimental results show that the proposed method outperforms all other state-of-the-art NR IQA methods on the constructed CGIQA-6k database and on other CGIQA-related databases. The database is released at https://github.com/zzc-1998/CGIQA6K.
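As a concrete illustration of the model design described above, the Python sketch below shows one plausible two-branch NR IQA network that fuses a distortion-oriented representation with an aesthetic-oriented one and regresses a single quality score. The backbone choice (ImageNet-pretrained ResNet-50), feature dimensions, and fusion head are illustrative assumptions, not the authors' released implementation; the official code and database are available at the GitHub link above.

# Illustrative two-branch NR IQA sketch (assumptions, not the paper's official code):
# one branch for distortion-aware features, one for aesthetics-aware features,
# concatenated and regressed to a single quality score.
import torch
import torch.nn as nn
import torchvision.models as models


class DualBranchNRIQA(nn.Module):
    def __init__(self):
        super().__init__()
        # Distortion branch: ImageNet-pretrained ResNet-50, used as a feature extractor.
        self.distortion_branch = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.distortion_branch.fc = nn.Identity()  # expose 2048-d pooled features
        # Aesthetic branch: a second pretrained backbone; in practice it could be
        # fine-tuned on an aesthetics database (e.g., AVA) before joint training.
        self.aesthetic_branch = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.aesthetic_branch.fc = nn.Identity()
        # Simple fusion + regression head mapping concatenated features to one score.
        self.regressor = nn.Sequential(
            nn.Linear(2048 * 2, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.2),
            nn.Linear(512, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d_feat = self.distortion_branch(x)   # distortion-aware representation
        a_feat = self.aesthetic_branch(x)    # aesthetics-aware representation
        fused = torch.cat([d_feat, a_feat], dim=1)
        return self.regressor(fused).squeeze(-1)  # predicted quality score per image


if __name__ == "__main__":
    model = DualBranchNRIQA().eval()
    dummy = torch.randn(2, 3, 384, 384)  # a small batch of RGB CGIs
    with torch.no_grad():
        print(model(dummy).shape)  # torch.Size([2])

In such a design, both branches would typically be trained or fine-tuned jointly against the mean opinion scores collected in the subjective experiment, e.g., with an L1 or MSE regression loss.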



    Published In

    ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 20, Issue 4
    April 2024
    676 pages
    EISSN: 1551-6865
    DOI: 10.1145/3613617
    Editor: Abdulmotaleb El Saddik

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 11 December 2023
    Online AM: 02 November 2023
    Accepted: 26 October 2023
    Revised: 15 September 2023
    Received: 17 July 2023
    Published in TOMM Volume 20, Issue 4


    Author Tags

    1. Computer graphics images
    2. in-the-wild distortions
    3. image quality assessment
    4. no-reference

    Qualifiers

    • Research-article

    Article Metrics

    • Downloads (Last 12 months): 452
    • Downloads (Last 6 weeks): 44
    Reflects downloads up to 10 Nov 2024


    Cited By

    • GMS-3DQA: Projection-Based Grid Mini-patch Sampling for 3D Model Quality Assessment. ACM Transactions on Multimedia Computing, Communications, and Applications 20, 6 (2024), 1–19. DOI: 10.1145/3643817. Online publication date: 8 March 2024.
    • THQA: A Perceptual Quality Assessment Database for Talking Heads. 2024 IEEE International Conference on Image Processing (ICIP), 15–21. DOI: 10.1109/ICIP51287.2024.10647507. Online publication date: 27 October 2024.
    • AttentionLUT: Attention Fusion-Based Canonical Polyadic LUT for Real-Time Image Enhancement. ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3255–3259. DOI: 10.1109/ICASSP48485.2024.10445905. Online publication date: 14 April 2024.
    • Q-Instruct: Improving Low-Level Visual Abilities for Multi-Modality Foundation Models. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 25490–25500. DOI: 10.1109/CVPR52733.2024.02408. Online publication date: 16 June 2024.
