research-article
A Protocol for Cross-Validating Large Crowdsourced Data: The Case of the LIRIS-ACCEDE Affective Video Dataset

Published: 07 November 2014

Abstract

Recently, we released LIRIS-ACCEDE, a large affective video dataset annotated through crowdsourcing along both the induced valence and arousal axes using pairwise comparisons. In this paper, we design an annotation protocol that enables the scoring of induced affective feelings, in order to cross-validate the annotations of the LIRIS-ACCEDE dataset and identify any potential bias. In a controlled setup, we collected ratings from 28 users on a subset of video clips carefully selected from the dataset by computing inter-observer reliabilities on the crowdsourced data. In contrast to the crowdsourced rankings, which were gathered in unconstrained environments, users were asked to rate each video using the Self-Assessment Manikin tool. The significant correlation between the crowdsourced rankings and the controlled ratings validates the reliability of the dataset for future use in affective video analysis and paves the way for the automatic generation of ratings over the whole dataset.
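The cross-validation step described above amounts to checking that the crowdsourced rank order of clips agrees with the controlled SAM ratings, for instance via a rank correlation such as Spearman's rho. A minimal pure-Python sketch of that check (the clip ranks and ratings below are illustrative values, not taken from LIRIS-ACCEDE):

```python
def ranks(values):
    """Assign average ranks to values (ties share the mean rank)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a group of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Illustrative data: crowdsourced valence ranks for six clips vs. the
# mean SAM ratings (1-9 scale) collected in the controlled setup.
crowd_rank = [1, 2, 3, 4, 5, 6]
sam_rating = [2.1, 2.8, 4.0, 5.5, 6.9, 7.3]
print(f"Spearman rho = {spearman(crowd_rank, sam_rating):.3f}")
# → Spearman rho = 1.000 (perfectly monotone agreement in this toy case)
```

A rho significantly above zero would indicate that the unconstrained crowdsourced comparisons and the laboratory SAM ratings capture the same underlying ordering of induced affect.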


Cited By

  • (2016) Towards Making an Anonymous and One-Stop Online Reporting System for Third-World Countries. Proceedings of the 7th Annual Symposium on Computing for Development, pages 1-4. DOI: 10.1145/3001913.3006633
  • (2016) Crowdsourcing Empathetic Intelligence. ACM Transactions on Intelligent Systems and Technology, 7(4):1-27. DOI: 10.1145/2897369
  • (2015) LIRIS-ACCEDE: A Video Database for Affective Content Analysis. IEEE Transactions on Affective Computing, 6(1):43-55. DOI: 10.1109/TAFFC.2015.2396531
  • (2015) Dynamic time-alignment k-means kernel clustering for time sequence clustering. 2015 IEEE International Conference on Image Processing (ICIP), pages 2532-2536. DOI: 10.1109/ICIP.2015.7351259


    Published In

    CrowdMM '14: Proceedings of the 2014 International ACM Workshop on Crowdsourcing for Multimedia
    November 2014, 84 pages
    ISBN: 9781450331289
    DOI: 10.1145/2660114
    General Chairs: Judith Redi, Mathias Lux

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


    Publisher

    Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. affective computing
    2. affective video datasets
    3. crowdsourced annotations
    4. experimental validation
    5. inter-rater reliability

    Qualifiers

    • Research-article

    Conference

    MM '14
    Sponsor: MM '14: 2014 ACM Multimedia Conference
    November 7, 2014
    Orlando, Florida, USA

    Acceptance Rates

    CrowdMM '14 Paper Acceptance Rate 8 of 26 submissions, 31%;
    Overall Acceptance Rate 16 of 42 submissions, 38%

