DOI: 10.1145/3536220.3558544
Short paper · Open access

Impact of aesthetic movie highlights on semantics and emotions: a preliminary analysis

Published: 07 November 2022
    Abstract

    Aesthetic highlight detection is a key challenge in understanding the affective processes underlying emotional movie experience. Aesthetic highlights in movies are scenes with aesthetic values and attributes in terms of form and content. A deep understanding of human emotions experienced while watching movies, and the automatic recognition of emotions evoked by movies, are critically important for a wide range of applications, such as affective content creation, analysis, and summarization.
    Many empirical studies on emotions have formulated theory-driven and data-driven models to uncover the underlying mechanisms of emotions using discrete and dimensional paradigms. Nevertheless, these approaches do not fully reveal all the underlying processes of emotional experience. Recent neuroscience findings have led to the development of multi-process frameworks that aim to characterize emotion as a multi-componential phenomenon. In particular, multi-process frameworks can be useful for studying emotional movie experience.
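    To make the contrast between these paradigms concrete, the sketch below shows how one emotional response to a movie scene might be encoded under each. The component names follow the component process model (one of the paper's author tags); all field names and values here are invented for illustration, not taken from the paper's protocol.

```python
# Three ways to encode one emotional response to a movie scene.
# Values are invented; only the structure of each paradigm matters here.

# Discrete paradigm: one label from a fixed set of basic emotions.
discrete = "fear"

# Dimensional paradigm: coordinates in a valence/arousal(/dominance) space.
dimensional = {"valence": -0.6, "arousal": 0.8, "dominance": -0.2}

# Multi-componential paradigm (component process model): ratings across
# several concurrent components of the emotion episode.
componential = {
    "appraisal":  {"novelty": 0.9, "goal_relevance": 0.7},
    "motivation": {"avoidance": 0.8},
    "physiology": {"heart_rate_change": 0.6},
    "expression": {"facial_tension": 0.7},
    "feeling":    {"intensity": 0.8},
}
```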
    In this work, we carry out statistical analysis of the componential paradigm on emotions while watching aesthetic highlights in full-length movies. We focus on the effect of the aesthetic highlights on intensity of emotional movie experience. We explore occurrence frequency of different semantic categories involved in constructing different types of the aesthetic highlights. Moreover, we investigate the applicability of machine learning classifiers in predicting the aesthetic highlights from movie scene semantics based features.
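    As a rough illustration of the classification step, the sketch below trains a simple classifier to separate highlight from non-highlight scenes using semantic-category frequency features. This is a minimal sketch under stated assumptions, not the authors' pipeline: the synthetic data, the number of semantic categories, and the choice of logistic regression as a baseline are all placeholders, since the abstract only states that machine learning classifiers were investigated.

```python
# Minimal sketch: predicting aesthetic-highlight scenes from semantic
# features. Data layout and classifier choice are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-scene features: occurrence frequencies of semantic
# categories (e.g., faces, landscapes, dialogue) within each scene.
n_scenes, n_categories = 200, 12
X = rng.random((n_scenes, n_categories))   # stand-in for real frequencies
y = rng.integers(0, 2, size=n_scenes)      # 1 = aesthetic highlight

# Logistic regression as one plausible baseline classifier.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print(f"5-fold F1: {scores.mean():.2f} +/- {scores.std():.2f}")
```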


    Published In

    ICMI '22 Companion: Companion Publication of the 2022 International Conference on Multimodal Interaction
    November 2022
    225 pages
    ISBN: 9781450393898
    DOI: 10.1145/3536220
    This work is licensed under a Creative Commons Attribution 4.0 International License.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. aesthetic movie highlights
    2. affective computing
    3. component process model
    4. emotions
    5. movie highlight detection

    Qualifiers

    • Short-paper
    • Research
    • Refereed limited

    Conference

    ICMI '22

    Acceptance Rates

    Overall Acceptance Rate 453 of 1,080 submissions, 42%

