Personalized music emotion classification via active learning

Published: 02 November 2012

Abstract

We propose using active learning in a personalized music emotion classification framework to address subjectivity, one of the most challenging issues in music emotion recognition (MER). Personalization is the most direct way to tackle subjectivity in MER, but almost all state-of-the-art personalized MER systems require a huge amount of user participation, which is a non-negligible problem in real systems. Active learning seeks to reduce human annotation effort by automatically selecting the most informative instances for human relabeling to train the classifier. Experimental results on a Chinese music dataset demonstrate that our method can reduce the human annotation requirement by as much as 80% without decreasing the F-measure. We also investigated different query selection criteria for active learning and found that the informativeness criterion, which selects the most uncertain instances, performed best in general. Finally, we show that the condition for successful active learning in personalized MER is label consistency from the same user.
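
To make the query-selection idea concrete, the following is a minimal, hypothetical sketch of uncertainty-sampling ("informativeness") active learning with a probabilistic SVM. It assumes scikit-learn's SVC in place of the paper's LIBSVM setup, and placeholder arrays X_pool / y_oracle standing in for audio features and per-listener emotion labels; it illustrates the general technique only and is not the authors' implementation.

import numpy as np
from sklearn.svm import SVC

def uncertainty_sampling(X_pool, y_oracle, n_init=10, n_queries=20, seed=0):
    """Iteratively query the least-confident pool instance for a user label."""
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X_pool), size=n_init, replace=False))
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

    for _ in range(n_queries):
        # Probabilistic SVM (Platt scaling) trained on the currently labeled songs.
        clf = SVC(kernel="rbf", probability=True)
        clf.fit(X_pool[labeled], y_oracle[labeled])

        # Informativeness criterion: the instance whose top class probability
        # is lowest is the one the current model is least certain about.
        probs = clf.predict_proba(X_pool[unlabeled])
        query = unlabeled[int(np.argmin(probs.max(axis=1)))]

        # The listener supplies the personalized label for the queried song.
        labeled.append(query)
        unlabeled.remove(query)

    return SVC(kernel="rbf", probability=True).fit(X_pool[labeled], y_oracle[labeled])

In this sketch the simulated "user feedback" is read from y_oracle; in a deployed system each queried song would instead be played to the listener, whose annotation replaces the oracle lookup.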

Published In

MIRUM '12: Proceedings of the second international ACM workshop on Music information retrieval with user-centered and multimodal strategies
November 2012
82 pages
ISBN:9781450315913
DOI:10.1145/2390848

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 02 November 2012

Author Tags

  1. active learning
  2. music emotion classification
  3. personalization

Qualifiers

  • Research-article

Conference

MM '12: ACM Multimedia Conference
November 2, 2012
Nara, Japan

Cited By

  • (2023) A Taxonomy of Methods, Tools, and Approaches for Enabling Collaborative Annotation. Proceedings of the XXII Brazilian Symposium on Human Factors in Computing Systems, pp. 1-12. DOI: 10.1145/3638067.3638074. Online publication date: 16-Oct-2023.
  • (2022) TROMPA-MER: an open dataset for personalized music emotion recognition. Journal of Intelligent Information Systems, 60(2), 549-570. DOI: 10.1007/s10844-022-00746-0. Online publication date: 19-Sep-2022.
  • (2021) Adaptability of Simple Classifier and Active Learning in Music Emotion Recognition. Proceedings of the 4th International Conference on Electronics, Communications and Control Engineering, pp. 13-19. DOI: 10.1145/3462676.3462679. Online publication date: 9-Apr-2021.
  • (2021) Music Emotion Recognition: Toward new, robust standards in personalized and context-sensitive applications. IEEE Signal Processing Magazine, 38(6), 106-114. DOI: 10.1109/MSP.2021.3106232. Online publication date: Nov-2021.
  • (2017) Developing a benchmark for emotional analysis of music. PLOS ONE, 12(3), e0173392. DOI: 10.1371/journal.pone.0173392. Online publication date: 10-Mar-2017.
  • (2017) Component Tying for Mixture Model Adaptation in Personalization of Music Emotion Recognition. IEEE/ACM Transactions on Audio, Speech and Language Processing, 25(7), 1409-1420. DOI: 10.1109/TASLP.2017.2693565. Online publication date: 1-Jul-2017.
  • (2015) Modeling the Affective Content of Music with a Gaussian Mixture Model. IEEE Transactions on Affective Computing, 6(1), 56-68. DOI: 10.1109/TAFFC.2015.2397457. Online publication date: 1-Jan-2015.
  • (2014) Linear regression-based adaptation of music emotion recognition models for personalization. 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2149-2153. DOI: 10.1109/ICASSP.2014.6853979. Online publication date: May-2014.
  • (2012) 2nd international ACM workshop on music information retrieval with user-centered and multimodal strategies (MIRUM). Proceedings of the 20th ACM international conference on Multimedia, pp. 1509-1510. DOI: 10.1145/2393347.2396541. Online publication date: 29-Oct-2012.
