DOI: 10.1145/2522848.2522859

One of a kind: inferring personality impressions in meetings

Published: 09 December 2013

Abstract

We present an analysis of personality prediction in small groups based on trait attributes from external observers. We use a rich set of automatically extracted audio-visual nonverbal features, including speaking-turn, prosodic, visual activity, and visual focus of attention features. We also investigate whether the thin-sliced impressions of external observers generalize to the whole meeting in the personality prediction task. Using ridge regression, we analyze both the regression and classification performance of personality prediction. Our experiments show that the extraversion trait can be predicted with high accuracy in a binary classification task, and that visual activity features give higher accuracies than audio ones. The highest accuracy for the extraversion trait is 75%, obtained with a combination of audio-visual features. The openness-to-experience trait also reaches significant accuracy, but only when the whole meeting is used as the unit of processing.
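The ridge-regression setup the abstract describes can be sketched as follows. This is a minimal illustration on synthetic data: the feature matrix, weights, and median split are hypothetical stand-ins, not the paper's actual nonverbal features or evaluation protocol.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(0)
# Hypothetical stand-ins for per-participant nonverbal features
# (e.g. speaking time, visual activity) and observer-rated trait scores.
X = rng.normal(size=(40, 4))
true_w = np.array([1.5, -0.5, 0.0, 2.0])
y = X @ true_w + 0.1 * rng.normal(size=40)

w = ridge_fit(X, y, lam=1.0)
pred = X @ w

# Binary classification by thresholding the regressed score at the
# median, i.e. a high/low split on the predicted trait.
labels = (pred > np.median(pred)).astype(int)
true_labels = (y > np.median(y)).astype(int)
accuracy = (labels == true_labels).mean()
```

Thresholding the regression output at the median is one simple way to turn a trait-score regressor into the kind of binary high/low classifier the abstract evaluates.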




Published In

ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction
December 2013, 630 pages
ISBN: 9781450321297
DOI: 10.1145/2522848
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. multimodal analysis
  2. nonverbal behavior
  3. personality prediction
  4. social interaction

Qualifiers

  • Poster

Conference

ICMI '13

Acceptance Rates

ICMI '13 Paper Acceptance Rate: 49 of 133 submissions, 37%
Overall Acceptance Rate: 453 of 1,080 submissions, 42%


Cited By

  • Investigation on the Use of Mora in Assessment of L2 Speakers' Japanese Language Proficiency. Social Computing and Social Media, pp. 67–83, Jun 2024. DOI: 10.1007/978-3-031-61305-0_5
  • Estimating and Visualizing Persuasiveness of Participants in Group Discussions. Journal of Information Processing, vol. 31, pp. 34–44, 2023. DOI: 10.2197/ipsjjip.31.34
  • Analysis of the Relationship among Recruitment Evaluations and Personality Impressions using Video Interviews. Total Quality Science, vol. 9, no. 1, pp. 53–61, Oct 2023. DOI: 10.17929/tqs.9.53
  • Co-Located Human–Human Interaction Analysis Using Nonverbal Cues: A Survey. ACM Computing Surveys, vol. 56, no. 5, pp. 1–41, Nov 2023. DOI: 10.1145/3626516
  • GraphITTI: Attributed Graph-based Dominance Ranking in Social Interaction Videos. Companion Publication of the 25th International Conference on Multimodal Interaction, pp. 323–329, Oct 2023. DOI: 10.1145/3610661.3616184
  • Synerg-eye-zing: Decoding Nonlinear Gaze Dynamics Underlying Successful Collaborations in Co-located Teams. Proceedings of the 25th International Conference on Multimodal Interaction, pp. 545–554, Oct 2023. DOI: 10.1145/3577190.3614104
  • Self-Supervised Learning of Person-Specific Facial Dynamics for Automatic Personality Recognition. IEEE Transactions on Affective Computing, vol. 14, no. 1, pp. 178–195, Jan 2023. DOI: 10.1109/TAFFC.2021.3064601
  • Multimodal Dyadic Impression Recognition via Listener Adaptive Cross-Domain Fusion. ICASSP 2023 – IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1–5, Jun 2023. DOI: 10.1109/ICASSP49357.2023.10095072
  • Personality trait estimation in group discussions using multimodal analysis and speaker embedding. Journal on Multimodal User Interfaces, vol. 17, no. 2, pp. 47–63, Feb 2023. DOI: 10.1007/s12193-023-00401-0
  • Analysis on the Language Use of L2 Japanese Speakers Regarding to Their Proficiency in Group Discussion Conversations. Social Computing and Social Media, pp. 55–67, Jul 2023. DOI: 10.1007/978-3-031-35915-6_5
