Multimodal sentiment intensity analysis in videos: Facial gestures and verbal messages

A Zadeh, R Zellers, E Pincus… - IEEE Intelligent …, 2016 - ieeexplore.ieee.org
People share their opinions, stories, and reviews through online video-sharing websites every day. The automatic analysis of these online opinion videos is bringing new or understudied research challenges to the field of computational linguistics and multimodal analysis. Among these challenges is the fundamental question of exploiting the dynamics between visual gestures and verbal messages to better model sentiment. This article addresses this question in four ways: introducing the first multimodal dataset with opinion-level sentiment intensity annotations; studying the prototypical interaction patterns between facial gestures and spoken words when inferring sentiment intensity; proposing a new computational representation, called the multimodal dictionary, based on a language-gesture study; and evaluating the authors' proposed approach in a speaker-independent paradigm for sentiment intensity prediction. The authors' study identifies four interaction types between facial gestures and verbal content: neutral, emphasizer, positive, and negative interactions. Experiments show a statistically significant improvement when using the multimodal dictionary representation over the conventional early fusion representation (that is, feature concatenation).
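The baseline the abstract names, early fusion, is simply feature concatenation across modalities. Below is a minimal sketch of that baseline, assuming per-segment facial-gesture descriptors and averaged word embeddings as the visual and verbal features; the feature names and dimensions are illustrative assumptions, and the paper's multimodal dictionary representation is not reproduced here.

```python
# Hypothetical illustration of the "early fusion" baseline named in the
# abstract: one joint feature vector built by concatenating the visual and
# verbal features of an opinion segment. Dimensions are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Assumed per-segment features:
# - visual: statistics of facial-gesture descriptors over the segment
# - verbal: mean word-embedding vector of the spoken words
visual = rng.standard_normal(35)   # e.g., facial expression descriptors
verbal = rng.standard_normal(300)  # e.g., averaged word embeddings

# Early fusion: concatenate the two blocks into a single representation,
# which is then fed to any standard regressor for sentiment intensity.
early_fused = np.concatenate([visual, verbal])
print(early_fused.shape)  # (335,)
```

The weakness of this baseline, which motivates the multimodal dictionary, is that concatenation treats the two modalities as independent blocks and leaves any word-gesture interaction (neutral, emphasizer, positive, or negative, in the authors' taxonomy) for the downstream model to discover on its own.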