
Textured Mesh Quality Assessment: Large-scale Dataset and Deep Learning-based Quality Metric

Published: 05 June 2023

Abstract

Over the past decade, three-dimensional (3D) graphics have become highly detailed to mimic the real world, causing an explosion in their size and complexity. Certain applications and device constraints necessitate their simplification and/or lossy compression, which can degrade their visual quality. Thus, to ensure the best Quality of Experience, it is important to evaluate visual quality accurately in order to drive the compression and find the right compromise between visual quality and data size. In this work, we focus on subjective and objective quality assessment of textured 3D meshes. We first establish a large-scale dataset, which includes 55 source models quantitatively characterized in terms of geometric, color, and semantic complexity, and corrupted by combinations of five types of compression-based distortions applied to the geometry, texture mapping, and texture image of the meshes. This dataset contains over 343k distorted stimuli. We propose an approach to select a challenging subset of 3,000 stimuli, for which we collected 148,929 quality judgments from over 4,500 participants in a large-scale crowdsourced subjective experiment. Leveraging our subject-rated dataset, we propose a learning-based quality metric for 3D graphics. Our metric demonstrates state-of-the-art results on our dataset of textured meshes and on a dataset of distorted meshes with vertex colors. Finally, we present an application of our metric and dataset to explore the influence of distortion interactions and content characteristics on the perceived quality of compressed textured meshes.
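The abstract reports that the learned metric achieves "state-of-the-art results" against subject-rated data. In quality-assessment work of this kind, a metric's predictions are conventionally benchmarked against Mean Opinion Scores (MOS) using Pearson (PLCC) and Spearman rank-order (SROCC) correlation. The sketch below illustrates that standard evaluation protocol in plain Python; the MOS and predicted values are made-up toy data, not figures from the paper:

```python
def ranks(values):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    """PLCC: linear correlation between predicted scores and MOS."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def srocc(x, y):
    """SROCC: Pearson correlation computed on the ranks,
    i.e. how well the metric preserves the MOS ordering."""
    return pearson(ranks(x), ranks(y))

# Toy example: five stimuli with subjective MOS and metric predictions.
mos  = [1.2, 2.5, 3.0, 4.1, 4.8]
pred = [0.9, 2.1, 3.3, 3.9, 5.2]
# SROCC is exactly 1.0 here because the rank order matches perfectly,
# while PLCC stays slightly below 1.0 due to the non-linear offsets.
print("SROCC:", srocc(mos, pred))
print("PLCC: ", pearson(mos, pred))
```

In practice, a non-linear (e.g. logistic) mapping is often fitted between predictions and MOS before computing PLCC, per VQEG-style evaluation procedures; the raw Pearson value above omits that step for brevity.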

Supplementary Material

tog-22-0011-File004 (tog-22-0011-file004.mov)
Supplementary video




Published In

ACM Transactions on Graphics, Volume 42, Issue 3
June 2023, 181 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/3579817
Editor: Carol O'Sullivan

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 05 June 2023
Online AM: 14 April 2023
Accepted: 09 March 2023
Revised: 16 December 2022
Received: 23 February 2022
Published in TOG Volume 42, Issue 3


Author Tags

  1. Computer graphics
  2. perception
  3. 3D mesh
  4. texture
  5. visual quality assessment
  6. subjective quality evaluation
  7. objective quality evaluation
  8. dataset
  9. perceptual metric
  10. deep learning
  11. crowdsourcing

Qualifiers

  • Research-article

Funding Sources

  • French National Research Agency as part of ANR-PISCo


Article Metrics

  • Downloads (Last 12 months)715
  • Downloads (Last 6 weeks)131
Reflects downloads up to 16 Oct 2024

Cited By
  • Theia: Gaze-driven and Perception-aware Volumetric Content Delivery for Mixed Reality Headsets. Proceedings of the 22nd Annual International Conference on Mobile Systems, Applications and Services (2024), 70–84. DOI: 10.1145/3643832.3661858
  • Multi-view Stereo of an Object Immersed in a Refractive Medium. Journal of Electronic Imaging 33, 3 (2024). DOI: 10.1117/1.JEI.33.3.033005
  • A Survey on Realistic Virtual Human Animations: Definitions, Features and Evaluations. Computer Graphics Forum 43, 2 (2024). DOI: 10.1111/cgf.15064
  • Subjective and Objective Quality Assessment of Rendered Human Avatar Videos in Virtual Reality. IEEE Transactions on Image Processing 33 (2024), 5740–5754. DOI: 10.1109/TIP.2024.3468881
  • Perceptual Crack Detection for Rendered 3D Textured Meshes. Proceedings of the 16th International Conference on Quality of Multimedia Experience (QoMEX) (2024), 1–7. DOI: 10.1109/QoMEX61742.2024.10598253
  • A Subjective Quality Evaluation of 3D Mesh With Dynamic Level of Detail in Virtual Reality. Proceedings of the IEEE International Conference on Image Processing (ICIP) (2024), 1225–1231. DOI: 10.1109/ICIP51287.2024.10647962
  • A Reduced-Reference Quality Assessment Metric for Textured Mesh Digital Humans. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2024), 2965–2969. DOI: 10.1109/ICASSP48485.2024.10447636
  • SJTU-TMQA: A Quality Assessment Database for Static Mesh with Texture Map. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2024), 7875–7879. DOI: 10.1109/ICASSP48485.2024.10445942
  • Regularized Joint Self-training: A Cross-domain Generalization Method for Image Classification. Engineering Applications of Artificial Intelligence 134 (2024), 108707. DOI: 10.1016/j.engappai.2024.108707
  • A Quality-Based Criteria for Efficient View Selection. Robotics, Computer Vision and Intelligent Systems (2024), 193–209. DOI: 10.1007/978-3-031-59057-3_13
