Chapter

Multimodal analysis of social signals

Published: 01 October 2018

References

[1]
O. Aran and D. Gatica-Perez. 2013. One of a kind: Inferring personality impressions in meetings. In Proceedings of the ACM International Conference on Multimodal Interaction, pp. 11--18.
[2]
T. Baltrusaitis, C. Ahuja, and L.-P. Morency. 2017. Multimodal machine learning: A survey and taxonomy. Technical report, arXiv. http://arxiv.org/abs/1705.09406.
[3]
L. Batrinca, B. Lepri, N. Mana, and F. Pianesi. 2012. Multimodal recognition of personality traits in human-computer collaborative tasks. In Proceedings of the ACM International Conference on Multimodal Interaction, pp. 39--46.
[4]
L. M. Batrinca, N. Mana, B. Lepri, F. Pianesi, and N. Sebe. 2011. Please, tell me about yourself: Automatic personality assessment using short self-presentations. In Proceedings of the 13th International Conference on Multimodal Interfaces, pp. 255--262.
[5]
P. Berkhin. 2006. A survey of clustering data mining techniques. In J. Kogan, C. Nicholas, and M. Teboulle, editors, Grouping Multidimensional Data, pp. 25--72. Springer-Verlag.
[6]
J.-I. Biel, V. Tsiminaki, J. Dines, and D. Gatica-Perez. 2013. Hi YouTube!: Personality impressions and verbal content in social video. In Proceedings of the ACM International Conference on Multimodal Interaction, pp. 119--126.
[7]
D. M. Blei. 2012. Probabilistic topic models. Communications of the ACM, 55(4): 77--84.
[8]
P. Brunet and R. Cowie. 2012. Towards a conceptual framework of research on Social Signal Processing. Journal of Multimodal User Interfaces, 6(3-4): 101--115.
[9]
M. Chatterjee, S. Park, L.-P. Morency, and S. Scherer. 2015. Combining two perspectives on classifying multimodal data for recognizing speaker traits. In Proceedings of the ACM International Conference on Multimodal Interaction, pp. 7--14.
[10]
L. Chen, G. Feng, J. Joe, C. W. Leong, C. Kitchen, and C. M. Lee. 2014. Towards automated assessment of public speaking skills using multimodal cues. In Proceedings of the 16th International Conference on Multimodal Interaction, pp. 200--203.
[11]
K. Curtis, G. J. F. Jones, and N. Campbell. 2015. Effects of good speaking techniques on audience engagement. In Proceedings of the ACM International Conference on Multimodal Interaction, pp. 35--42.
[12]
C. Darwin. 1872. The Expression of the Emotions in Man and Animals. John Murray.
[13]
E. Delaherche and M. Chetouani. 2011. Characterization of coordination in an imitation task: Human evaluation and automatically computable cues. In Proceedings of the 13th International Conference on Multimodal Interfaces, pp. 343--350.
[14]
S. Demyanov, J. Bailey, K. Ramamohanarao, and C. Leckie. 2015. Detection of deception in the Mafia party game. In Proceedings of the ACM International Conference on Multimodal Interaction, pp. 335--342.
[15]
H. Dibeklioğlu, Z. Hammal, Y. Yang, and J. F. Cohn. 2015. Multimodal detection of depression in clinical interviews. In Proceedings of the ACM International Conference on Multimodal Interaction, pp. 307--310.
[16]
S. Ghosh, M. Chatterjee, and L.-P. Morency. 2014. A multimodal context-based approach for distress assessment. In Proceedings of the ACM International Conference on Multimodal Interaction, pp. 240--246.
[17]
J. F. Grafsgaard, J. B. Wiggins, A. K. Vail, K. E. Boyer, E. N. Wiebe, and J. C. Lester. 2014. The additive value of multimodal features for predicting engagement, frustration, and learning during tutoring. In Proceedings of the ACM International Conference on Multimodal Interaction, pp. 42--49.
[18]
H. Hung and B. Kröse. 2011. Detecting F-formations as dominant sets. In Proceedings of the International Conference on Multimodal Interfaces, pp. 231--238.
[19]
K. Kalimeri, B. Lepri, and F. Pianesi. 2013. Going beyond traits: Multimodal classification of personality states in the wild. In Proceedings of the ACM International Conference on Multimodal Interaction, pp. 27--34.
[20]
J. Kittler, M. Hatef, R. P. W. Duin, and J. Matas. 1998. On combining classifiers. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(3): 226--239.
[21]
R. Kohavi and D. Wolpert. 1996. Bias plus variance decomposition for zero-one loss functions. In Proceedings of the International Conference on Machine Learning, pp. 275--283.
[22]
L. I. Kuncheva and C. J. Whitaker. 2003. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Machine Learning, 51(2): 181--207.
[23]
M. Mehu and K. Scherer. 2012. A psycho-ethological approach to Social Signal Processing. Cognitive Processing, 13(2): 397--414.
[24]
G. Mohammadi, S. Park, K. Sagae, A. Vinciarelli, and L.-P. Morency. 2013. Who is persuasive?: The role of perceived personality and communication modality in social multimedia. In Proceedings of the ACM International Conference on Multimodal Interaction, pp. 19--26.
[25]
Y. Nakano and Y. Fukuhara. 2012. Estimating conversational dominance in multiparty interaction. In Proceedings of the ACM International Conference on Multimodal Interaction, pp. 77--84.
[26]
L. S. Nguyen and D. Gatica-Perez. 2015. I would hire you in a minute: Thin slices of nonverbal behavior in job interviews. In Proceedings of the ACM International Conference on Multimodal Interaction, pp. 51--58.
[27]
F. Nihei, Y. I. Nakano, Y. Hayashi, H.-H. Huang, and S. Okada. 2014. Predicting influential statements in group discussions using speech and head motion information. In Proceedings of the ACM International Conference on Multimodal Interaction, pp. 136--143.
[28]
D. J. Ozer and V. Benet-Martinez. 2006. Personality and the prediction of consequential outcomes. Annual Review of Psychology, 57: 401--421.
[29]
S. Park, H. S. Shim, M. Chatterjee, K. Sagae, and L.-P. Morency. 2014. Computational analysis of persuasiveness in social multimedia: A novel dataset and multimodal prediction approach. In Proceedings of the ACM International Conference on Multimodal Interaction, pp. 50--57.
[30]
S. R. Partan and P. Marler. 1999. Communication goes multimodal. Science, 283(5406): 1272--1273.
[31]
S. R. Partan and P. Marler. 2005. Issues in the classification of multimodal communication signals. The American Naturalist, 166(2): 231--245.
[32]
I. Poggi. 2007. Mind, Hands, Face and Body: A Goal and Belief View of Multimodal Communication. Weidler.
[33]
I. Poggi and F. D'Errico. 2012. Social Signals: A framework in terms of goals and beliefs. Cognitive Processing, 13(2): 427--445.
[34]
N. Raiman, H. Hung, and G. Englebienne. 2011. Move, and I will tell you who you are: Detecting deceptive roles in low-quality data. In Proceedings of the 13th International Conference on Multimodal Interfaces, pp. 201--204.
[35]
V. Ramanarayanan, C. W. Leong, L. Chen, G. Feng, and D. Suendermann-Oeft. 2015. Evaluating speech, face, emotion and body movement time-series features for automated multimodal presentation scoring. In Proceedings of the ACM International Conference on Multimodal Interaction, pp. 23--30.
[36]
C. Rowe and T. Guilford. 1996. Hidden colour aversions in domestic chicks triggered by pyrazine odours of insect warning displays. Nature, 383(6600): 520--522.
[37]
F. A. Salim, F. Haider, O. Conlan, S. Luz, and N. Campbell. 2015. Analyzing multimodality of video for user engagement assessment. In Proceedings of the ACM International Conference on Multimodal Interaction, pp. 287--290.
[38]
S. J. Scheffer, G. W. Uetz, and G. E. Stratton. 1996. Sexual selection, male morphology, and the efficacy of courtship signalling in two wolf spiders (Araneae: Lycosidae). Behavioral Ecology and Sociobiology, 38(1): 17--23.
[39]
S. Scherer, G. Stratou, and L.-P. Morency. 2013. Audiovisual behavior descriptors for depression assessment. In Proceedings of the ACM International Conference on Multimodal Interaction, pp. 135--140.
[40]
B. Siddiquie, D. Chisholm, and A. Divakaran. 2015. Exploiting multimodal affect and semantics to identify politically persuasive web videos. In Proceedings of the 2015 ACM International Conference on Multimodal Interaction, pp. 203--210.
[41]
S. Strohkorb, I. Leite, N. Warren, and B. Scassellati. 2015. Classification of children's social dominance in group interactions with robots. In Proceedings of the ACM International Conference on Multimodal Interaction, pp. 227--234.
[42]
R. Subramanian, Y. Yan, J. Staiano, O. Lanz, and N. Sebe. 2013. On the relationship between head pose, social attention and personality prediction for unstructured and dynamic group interactions. In Proceedings of the ACM International Conference on Multimodal Interaction, pp. 3--10.
[43]
E. K. Tang, P. N. Suganthan, and X. Yao. 2006. An analysis of diversity measures. Machine Learning, 65(5): 247--271.
[44]
A. K. Vail, J. F. Grafsgaard, J. B. Wiggins, J. C. Lester, and K. E. Boyer. 2014. Predicting learning and engagement in tutorial dialogue: A personality-based model. In Proceedings of the ACM International Conference on Multimodal Interaction, pp. 255--262.
[45]
A. Vinciarelli, M. Pantic, and H. Bourlard. 2009. Social Signal Processing: Survey of an emerging domain. Image and Vision Computing Journal, 27(12): 1743--1759.
[46]
A. Vinciarelli, M. Pantic, D. Heylen, C. Pelachaud, I. Poggi, F. D'Errico, and M. Schroeder. 2012. Bridging the gap between social animal and unsocial machine: A survey of Social Signal Processing. IEEE Transactions on Affective Computing, 3(1): 69--87.
[47]
T. Wörtwein, M. Chollet, B. Schauerte, L.-P. Morency, R. Stiefelhagen, and S. Scherer. 2015. Multimodal public speaking performance assessment. In Proceedings of the ACM International Conference on Multimodal Interaction, pp. 43--50.
[48]
P. Zachar. 2014. Beyond natural kinds: Toward a "relevant" "scientific" taxonomy in psychiatry. In H. Kincaid and J. A. Sullivan, editors, Classifying Psychopathology, pp. 75--104. MIT Press.

Cited By

  • (2021) EMIDAS. In Proceedings of the 36th Annual ACM Symposium on Applied Computing, pp. 107--115. DOI: 10.1145/3412841.3441891. Online publication date: 22-Mar-2021.
  • (2019) Natural language generation for social robotics: opportunities and challenges. Philosophical Transactions of the Royal Society B: Biological Sciences, 374(1771): 20180027. DOI: 10.1098/rstb.2018.0027. Online publication date: 29-Apr-2019.
Published In

ACM Books
The Handbook of Multimodal-Multisensor Interfaces: Signal Processing, Architectures, and Detection of Emotion and Cognition - Volume 2
October 2018
2034 pages
ISBN:9781970001716
DOI:10.1145/3107990

Publisher

Association for Computing Machinery and Morgan & Claypool
