DOI: 10.1145/2663204.2663247

Unsupervised Domain Adaptation for Personalized Facial Emotion Recognition

Published: 12 November 2014

  • Abstract

    The way in which human beings express emotions depends on their specific personality and cultural background. As a consequence, person-independent facial expression classifiers usually fail to accurately recognize emotions that vary between individuals. On the other hand, training a person-specific classifier for each new user is a time-consuming activity that involves collecting hundreds of labeled samples. In this paper we present a personalization approach in which only unlabeled target-specific data are required. The method builds on our previous paper [20], in which a regression framework is proposed to learn the relation between a user's specific sample distribution and the parameters of her/his classifier. Once this relation is learned, a target classifier can be constructed using only the new user's sample distribution to transfer the personalized parameters. The novelty of this paper with respect to [20] is a new method for representing the source sample distributions that uses only the Support Vectors of the source classifiers. Moreover, we present a simplified regression framework which achieves the same or even slightly superior experimental results with respect to [20] while being much easier to reproduce.
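    The parameter-transfer idea described above can be sketched in code. The following is a minimal illustration on synthetic data, not the authors' implementation: it substitutes a sample-mean descriptor for the Support-Vector-based distribution representation proposed in the paper, and plain ridge regression for the regression framework; all variable names and modeling choices here are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_sources, n_samples, n_features = 10, 80, 4

# Step 1: for each source subject, train a person-specific classifier and
# pair its parameters with a descriptor of that subject's sample distribution.
descriptors, parameters = [], []
for _ in range(n_sources):
    shift = rng.normal(scale=1.0, size=n_features)   # subject-specific bias
    X = rng.normal(size=(n_samples, n_features)) + shift
    y = (X[:, 0] > shift[0]).astype(int)             # subject-specific decision rule
    svm = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
    # Simplified descriptor: the sample mean (the paper instead summarizes each
    # source distribution using the Support Vectors of its classifier).
    descriptors.append(X.mean(axis=0))
    parameters.append(np.concatenate([svm.coef_.ravel(), svm.intercept_]))

# Step 2: learn the relation between sample distributions and classifier
# parameters with a regression over the source pairs.
transfer = Ridge(alpha=1.0).fit(np.stack(descriptors), np.stack(parameters))

# Step 3: for a new target user, predict a personalized classifier directly
# from the target's (unlabeled) sample distribution; no target labels needed.
t_shift = rng.normal(scale=1.0, size=n_features)
X_target = rng.normal(size=(n_samples, n_features)) + t_shift
wb = transfer.predict(X_target.mean(axis=0)[None, :])[0]
w, b = wb[:-1], wb[-1]
y_pred = (X_target @ w + b > 0).astype(int)
```

    Because labels for the target user are never used, the adaptation is fully unsupervised: personalization comes entirely from the regression learned on the labeled source subjects.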

    References

    [1] T. Ahonen, A. Hadid, and M. Pietikäinen. Face description with local binary patterns: Application to face recognition. IEEE Trans. on PAMI, 28(12):2037--2041, 2006.
    [2] G. Blanchard, G. Lee, and C. Scott. Generalizing from several related classification tasks to a new unlabeled sample. In NIPS, 2011.
    [3] L. Bruzzone and M. Marconcini. Domain adaptation problems: A DASVM classification technique and a circular validation strategy. IEEE Trans. on PAMI, 32(5):770--787, 2010.
    [4] J. Chen, X. Liu, P. Tu, and A. Aragones. Learning person-specific models for facial expression and action unit recognition. Pattern Recognition Letters, 34(15):1964--1970, 2013.
    [5] W.-S. Chu, F. De La Torre, and J. F. Cohn. Selective transfer machine for personalized facial action unit detection. In CVPR, 2013.
    [6] H. Daumé III. Frustratingly easy domain adaptation. In Proc. of Association for Computational Linguistics, pages 256--263, 2007.
    [7] H. Dibeklioğlu, T. Gevers, A. A. Salah, and R. Valenti. A smile can reveal your age: Enabling facial dynamics in age estimation. In ACM Multimedia, 2012.
    [8] P. Ekman. Universals and cultural differences in facial expressions of emotion. In Proc. Nebraska Symp. Motivation, 1971.
    [9] A. Gretton, A. Smola, J. Huang, M. Schmittfull, K. Borgwardt, and B. Schölkopf. Covariate shift by kernel mean matching. In Dataset Shift in Machine Learning, 2009.
    [10] Z. Hammal and J. F. Cohn. Automatic detection of pain intensity. In ICMI, 2012.
    [11] T. Joachims. Transductive inference for text classification using support vector machines. In ICML, 1999.
    [12] G. Littlewort, M. S. Bartlett, and K. Lee. Faces of pain: Automated measurement of spontaneous facial expressions of genuine and posed pain. In ICMI, 2007.
    [13] P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews. The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression. In CVPR Workshops (CVPRW), 2010.
    [14] P. Lucey, J. F. Cohn, K. M. Prkachin, P. E. Solomon, S. W. Chew, and I. Matthews. Painful monitoring: Automatic pain monitoring using the UNBC-McMaster shoulder pain expression archive database. Image and Vision Computing, 30(3):197--205, 2012.
    [15] A. Martinez and S. Du. A model of the perception of facial expressions of emotion by humans: Research overview and perspectives. Journal of Machine Learning Research, 13:1589--1608, 2012.
    [16] S. J. Pan and Q. Yang. A survey on transfer learning. IEEE Trans. on Knowledge and Data Engineering, 22(10):1345--1359, 2010.
    [17] K. M. Prkachin and P. E. Solomon. The structure, reliability and validity of pain expression: Evidence from patients with shoulder pain. Pain, 139(2):267--274, 2008.
    [18] O. Rudovic, V. Pavlovic, and M. Pantic. Context-sensitive conditional ordinal random fields for facial action intensity estimation. In ICCV Workshops, 2013.
    [19] E. Sangineto. Pose and expression independent facial landmark localization using dense SURF and the Hausdorff distance. IEEE Trans. on PAMI, 35(3):624--638, 2013.
    [20] E. Sangineto, G. Zen, E. Ricci, and N. Sebe. We are not all equal: Personalizing models for facial expression analysis with transductive parameter transfer. In ACM Multimedia, 2014.
    [21] M. F. Valstar, M. Pantic, Z. Ambadar, and J. F. Cohn. Spontaneous vs. posed facial behavior: Automatic analysis of brow actions. In ICMI, 2006.
    [22] J. Yang, R. Yan, and A. G. Hauptmann. Adapting SVM classifiers to data with shifted distributions. In IEEE International Conference on Data Mining (ICDM) Workshops, 2007.
    [23] Z. Zeng, M. Pantic, G. I. Roisman, and T. S. Huang. A survey of affect recognition methods: Audio, visual, and spontaneous expressions. IEEE Trans. on PAMI, 31(1):39--58, 2009.


    Published In

    ICMI '14: Proceedings of the 16th International Conference on Multimodal Interaction
    November 2014, 558 pages
    ISBN: 9781450328852
    DOI: 10.1145/2663204

    Publisher

    Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. action unit detection
    2. facial expression recognition
    3. unsupervised domain adaptation

    Qualifiers

    • Poster

    Conference

    ICMI '14

    Acceptance Rates

    ICMI '14 Paper Acceptance Rate: 51 of 127 submissions, 40%
    Overall Acceptance Rate: 453 of 1,080 submissions, 42%


    Cited By

    • (2024) Synthesizing facial expressions in dyadic human–robot interaction. Signal, Image and Video Processing, 18(S1):909--918. DOI: 10.1007/s11760-024-03202-4
    • (2023) Sample Expansion and Classification Model of Maize Leaf Diseases Based on the Self-Attention CycleGAN. Sustainability, 15(18):13420. DOI: 10.3390/su151813420
    • (2023) Gaze Target Detection Based on Predictive Consistency Embedding. Journal of Image and Signal Processing, 12(2):144--157. DOI: 10.12677/JISP.2023.122015
    • (2022) Multimodal Across Domains Gaze Target Detection. In Proceedings of the 2022 International Conference on Multimodal Interaction, 420--431. DOI: 10.1145/3536221.3556624
    • (2022) Toward Personalized Affect-Aware Socially Assistive Robot Tutors for Long-Term Interventions with Children with Autism. ACM Transactions on Human-Robot Interaction, 11(4):1--28. DOI: 10.1145/3526111
    • (2022) Adapting the Interplay Between Personalized and Generalized Affect Recognition Based on an Unsupervised Neural Framework. IEEE Transactions on Affective Computing, 13(3):1349--1365. DOI: 10.1109/TAFFC.2020.3002657
    • (2022) Automatic Recognition Methods Supporting Pain Assessment: A Survey. IEEE Transactions on Affective Computing, 13(1):530--552. DOI: 10.1109/TAFFC.2019.2946774
    • (2022) Multimodal Self-Assessed Personality Estimation During Crowded Mingle Scenarios Using Wearables Devices and Cameras. IEEE Transactions on Affective Computing, 13(1):46--59. DOI: 10.1109/TAFFC.2019.2930605
    • (2022) Continual Learning for Adaptive Affective Human-Robot Interaction. In 2022 10th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), 1--5. DOI: 10.1109/ACIIW57231.2022.10086015
    • (2022) DeepFN: Towards Generalizable Facial Action Unit Recognition with Deep Face Normalization. In 2022 10th International Conference on Affective Computing and Intelligent Interaction (ACII), 1--8. DOI: 10.1109/ACII55700.2022.9953868
