DOI: 10.1145/3604321.3604331 · research-article

High-level cinematic knowledge to predict inter-observer visual congruency

Published: 27 October 2023
  • Abstract

    When watching the same visual stimulus, humans exhibit a wide range of gaze behaviors. These variations can be caused by bottom-up factors (i.e. features of the stimulus itself) or top-down factors (i.e. characteristics of the observers). Inter-observer visual congruency (IOC) is a measure of this range. Moreover, cinematic techniques, such as camera motion or shot editing, have been shown to have a significant impact on this measure [17]. In this work, we first propose a metric for measuring IOC in videos that takes into account the dynamic nature of the stimuli. We then propose a model for predicting IOC in the context of feature films, using high-level cinematic annotations as prior information in a deep learning framework.
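    The abstract does not specify the paper's video metric, but the general leave-one-out idea behind inter-observer congruency scores (in the spirit of the IOVC of Le Meur et al. [9]) can be sketched as follows. The function below is a hypothetical illustration, not the authors' method: for each observer, it computes the fraction of that observer's fixations that fall within a chosen radius of at least one fixation from any other observer, then averages over observers, so a score near 1 means highly congruent gaze.

```python
# Illustrative leave-one-out inter-observer congruency (IOC) on one frame.
# Hypothetical sketch only: the radius, the hit criterion, and the averaging
# are assumptions, not the metric proposed in the paper.
import math

def ioc_score(fixations_per_observer, radius=50.0):
    """fixations_per_observer: list (one entry per observer) of lists of
    (x, y) fixation coordinates in pixels. Returns a score in [0, 1]."""
    scores = []
    for i, own in enumerate(fixations_per_observer):
        # Pool the fixations of every other observer (leave-one-out).
        others = [p for j, obs in enumerate(fixations_per_observer)
                  if j != i for p in obs]
        if not own or not others:
            continue
        # Count this observer's fixations landing near any other fixation.
        hits = sum(
            1 for (x, y) in own
            if any(math.hypot(x - ox, y - oy) <= radius for (ox, oy) in others)
        )
        scores.append(hits / len(own))
    return sum(scores) / len(scores) if scores else 0.0
```

    For example, three observers all fixating within a few pixels of the same point yield a score of 1.0, while two observers fixating opposite corners of the frame yield 0.0.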

    References

    [1]
    Katherine Breeden and Pat Hanrahan. 2017. Gaze Data for the Analysis of Attention in Feature Films. ACM Transactions on Applied Perception 14, 4 (2017). https://doi.org/10.1145/3127588
    [2]
    Alexandre Bruckert, Marc Christie, and Olivier Le Meur. 2022. Where to look at the movies: Analyzing visual attention to understand movie editing. Behavior Research Methods (2022), 1–20.
    [3]
    Alexandre Bruckert, Yat Hong Lam, Marc Christie, and Olivier Le Meur. 2019. Deep Learning For Inter-Observer Congruency Prediction. In 2019 IEEE International Conference on Image Processing (ICIP). 3766–3770. https://doi.org/10.1109/ICIP.2019.8803596
    [4]
    Hannah F. Chua, Julie E. Boland, and Richard E. Nisbett. 2005. Cultural variation in eye movements during scene perception. Proceedings of the National Academy of Sciences 102, 35 (2005), 12629–12633. https://doi.org/10.1073/pnas.0506162102
    [5]
    Michael Dorr, Thomas Martinetz, Karl R. Gegenfurtner, and Erhardt Barth. 2010. Variability of eye movements when viewing dynamic natural scenes. Journal of Vision 10, 10 (2010), 28–28.
    [6]
    Robert B. Goldstein, Russell L. Woods, and Eli Peli. 2007. Where people look when watching movies: do all viewers look at the same place? Computers in Biology and Medicine 37, 7 (2007), 957–964. https://doi.org/10.1016/j.compbiomed.2006.08.018
    [7]
    Samyak Jain, Pradeep Yarlagadda, Shreyank Jyoti, Shyamgopal Karthik, Ramanathan Subramanian, and Vineet Gandhi. 2021. ViNet: Pushing the limits of Visual Modality for Audio-Visual Saliency Prediction. arXiv:2012.06170 [cs.CV]
    [8]
    Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, and Andrew Zisserman. 2017. The Kinetics Human Action Video Dataset. arXiv:1705.06950 [cs.CV]
    [9]
    Olivier Le Meur, Thierry Baccino, and Aline Roumy. 2011. Prediction of the Inter-Observer Visual Congruency (IOVC) and Application to Image Ranking. In Proceedings of the 19th ACM International Conference on Multimedia. 373–382. https://doi.org/10.1145/2072298.2072347
    [10]
    Olivier Le Meur, Antoine Coutrot, Zhi Liu, Pia Rämä, Adrien Le Roch, and Andrea Helo. 2017. Visual Attention Saccadic Models Learn to Emulate Gaze Patterns From Childhood to Adulthood. IEEE Transactions on Image Processing 26, 10 (2017), 4777–4789. https://doi.org/10.1109/TIP.2017.2722238
    [11]
    Parag K. Mital, Tim J. Smith, Robin L. Hill, and John M. Henderson. 2011. Clustering of Gaze During Dynamic Scene Viewing is Predicted by Motion. Cognitive Computation 3, 1 (2011), 5–24. https://doi.org/10.1007/s12559-010-9074-z
    [12]
    Robert J Peters, Asha Iyer, Laurent Itti, and Christof Koch. 2005. Components of bottom-up gaze allocation in natural images. Vision research 45, 18 (2005), 2397–2416.
    [13]
    Anis Rahman, Denis Pellerin, and Dominique Houzet. 2014. Influence of number, location and size of faces on gaze in video. Journal of Eye Movement Research 7, 2 (2014), 891–901. https://doi.org/10.16910/jemr.7.2.5
    [14]
    Shafin Rahman and Neil D. B. Bruce. 2016. Factors Underlying Inter-Observer Agreement in Gaze Patterns: Predictive Modelling and Analysis. In Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications (ETRA ’16). 155–162. https://doi.org/10.1145/2857491.2857495
    [15]
    Umesh Rajashekar, Lawrence K. Cormack, and Alan C. Bovik. 2004. Point-of-gaze analysis reveals visual search strategies. In Human Vision and Electronic Imaging IX, Vol. 5292. SPIE, 296–306. https://doi.org/10.1117/12.537118
    [16]
    Yasuhito Sawahata, Rajiv Khosla, Kazuteru Komine, Nobuyuki Hiruma, Takayuki Itou, Seiji Watanabe, Yuji Suzuki, Yumiko Hara, and Nobuo Issiki. 2008. Determining comprehension and quality of TV programs using eye-gaze tracking. Pattern Recognition 41, 5 (2008), 1610–1626. https://doi.org/10.1016/j.patcog.2007.10.010
    [17]
    Tim J. Smith and Parag K. Mital. 2013. Attentional synchrony and the influence of viewing task on gaze behavior in static and dynamic scenes. Journal of Vision 13, 8 (2013), 16–16. https://doi.org/10.1167/13.8.16
    [18]
    Antonio Torralba, Monica S. Castelhano, Aude Oliva, and John M. Henderson. 2006. Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search. Psychological Review 113 (2006), 766–786.
    [19]
    Wenguan Wang, Jianbing Shen, Jianwen Xie, Ming-Ming Cheng, Haibin Ling, and Ali Borji. [n. d.]. Revisiting Video Saliency Prediction in the Deep Learning Era. https://mmcheng.net/videosal/.
    [20]
    Wenguan Wang, Jianbing Shen, Jianwen Xie, Ming-Ming Cheng, Haibin Ling, and Ali Borji. 2019. Revisiting Video Saliency Prediction in the Deep Learning Era. IEEE Transactions on Pattern Analysis and Machine Intelligence (2019). https://doi.org/10.1109/TPAMI.2019.2924417
    [21]
    Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu, and Kevin Murphy. 2018. Rethinking Spatiotemporal Feature Learning: Speed-Accuracy Trade-offs in Video Classification. In ECCV 2018. 318–335. https://doi.org/10.1007/978-3-030-01267-0_19
    [22]
    Jiaomin Yue, Qiang Lu, Dandan Zhu, Xiongkuo Min, Xiao-Ping Zhang, and Guangtao Zhai. 2021. Inter-Observer Visual Congruency in Video-Viewing. In 2021 International Conference on Visual Communications and Image Processing (VCIP). 1–5. https://doi.org/10.1109/VCIP53242.2021.9675428
    [23]
    C. Zach, T. Pock, and H. Bischof. 2007. A Duality Based Approach for Realtime TV-L1 Optical Flow. In Pattern Recognition. 214–223. https://doi.org/10.1007/978-3-540-74936-3

      Published In

      IMXw '23: Proceedings of the 2023 ACM International Conference on Interactive Media Experiences Workshops
      June 2023
      143 pages
      ISBN:9798400708459
      DOI:10.1145/3604321

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. cinematography
      2. gaze congruency
      3. neural networks

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Conference

      IMXw '23

      Acceptance Rates

      Overall Acceptance Rate 69 of 245 submissions, 28%

