DOI: 10.1145/2647868.2654917

Impact of Ultra High Definition on Visual Attention

Published: 03 November 2014

Abstract

Ultra high definition (UHD) TV is rapidly replacing high definition (HD) TV, but little is known about its effects on human visual attention. A clear understanding of these effects is important, since accurate models, evaluation methodologies, and metrics for visual attention are essential in many areas, including image and video compression, camera and display manufacturing, artistic content creation, and advertisement. In this paper, we address this problem by creating a dataset of UHD resolution images with corresponding eye-tracking data, and we show that there is a statistically significant difference between viewing strategies when watching UHD and HD content. Furthermore, by evaluating five representative computational models of visual saliency, we demonstrate that their accuracy decreases on UHD content compared to HD content. To improve the accuracy of computational models at higher resolutions, we therefore propose a segmentation-based resolution-adaptive weighting scheme. Our approach demonstrates that taking the resolution of the images into account improves the performance of computational models.
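The abstract names a segmentation-based, resolution-adaptive weighting scheme but does not describe its details. The following is a minimal hypothetical sketch (in Python with NumPy) of the general idea of re-weighting a baseline saliency map per segment with a resolution-dependent factor; the function name, the weighting form, and the reference resolution are illustrative assumptions, not the authors' implementation.

import numpy as np

def resolution_adaptive_weighting(saliency, segments, height_px, reference_height=1080):
    """Re-weight a saliency map per segment, scaling with image resolution.

    saliency         : 2-D float array in [0, 1] from any baseline saliency model.
    segments         : 2-D int array of region labels (e.g. from a graph-based
                       segmentation), same shape as saliency.
    height_px        : vertical resolution of the content (2160 for UHD).
    reference_height : resolution the baseline model was tuned for (assumed 1080).
    """
    # Assumed behaviour: at higher resolutions, large homogeneous regions are
    # down-weighted more strongly, concentrating saliency in smaller segments.
    resolution_factor = height_px / float(reference_height)   # 2.0 for UHD vs. HD
    weighted = np.zeros_like(saliency)
    for label in np.unique(segments):
        mask = segments == label
        region_share = mask.mean()                 # fraction of the frame covered
        weight = 1.0 / (1.0 + resolution_factor * region_share)   # assumed form
        weighted[mask] = saliency[mask] * weight
    # Renormalise so maps remain comparable across resolutions.
    return weighted / (weighted.max() + 1e-12)

# Toy usage: random values stand in for a real saliency model and segmentation.
rng = np.random.default_rng(0)
sal = rng.random((2160, 3840))                     # UHD-sized saliency map
seg = (np.arange(2160)[:, None] // 540) * 4 + np.arange(3840)[None, :] // 960
uhd_weighted = resolution_adaptive_weighting(sal, seg, height_px=2160)

Presumably such a weighting would be applied to the output of each evaluated saliency model before comparison with the eye-tracking fixation data; see the paper itself for the actual scheme.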


Published In

MM '14: Proceedings of the 22nd ACM international conference on Multimedia
November 2014
1310 pages
ISBN: 9781450330633
DOI: 10.1145/2647868
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. eye-tracking
  2. saliency map
  3. subjective evaluations
  4. ultra high definition
  5. visual attention

Qualifiers

  • Research-article

Conference

MM '14
Sponsor: MM '14: 2014 ACM Multimedia Conference
November 3-7, 2014
Orlando, Florida, USA

Acceptance Rates

MM '14 Paper Acceptance Rate: 55 of 286 submissions (19%)
Overall Acceptance Rate: 2,145 of 8,556 submissions (25%)

Article Metrics

  • Downloads (last 12 months): 11
  • Downloads (last 6 weeks): 2
Reflects downloads up to 01 Feb 2025

Cited By

View all
  • (2022) Quality of 8K Ultra-High-Definition Television Viewing Experience in Practical Viewing Conditions. IEEE Transactions on Broadcasting, 68(1):2-12. Online publication date: Mar. 2022. DOI: 10.1109/TBC.2021.3105031
  • (2021) A visual discomfort recognition model based on the fusion of multisource information. Journal of the Society for Information Display, 30(2):128-140. Online publication date: 22 Oct. 2021. DOI: 10.1002/jsid.1084
  • (2018) Subjective and Objective Quality Assessment of Compressed 4K UHD Videos for Immersive Experience. IEEE Transactions on Circuits and Systems for Video Technology, 28(7):1467-1480. Online publication date: Jul. 2018. DOI: 10.1109/TCSVT.2017.2683504
  • (2016) A new HD and UHD video eye tracking dataset. In Proceedings of the 7th International Conference on Multimedia Systems, pages 1-6. Online publication date: 10 May 2016. DOI: 10.1145/2910017.2910622
