eMotions: A Large-Scale Dataset for Emotion Recognition in Short Videos

X Wu, H Sun, J Xue, R Zhai, X Kong, J Nie, L He
arXiv preprint arXiv:2311.17335, 2023 - arxiv.org
Nowadays, short videos (SVs) are essential to information acquisition and sharing in our daily lives. The prevailing use of SVs to spread emotions makes emotion recognition in SVs necessary. Given the lack of SV emotion data, we introduce a large-scale dataset named eMotions, comprising 27,996 videos. We alleviate the impact of subjectivity on labeling quality through careful annotator allocation and multi-stage annotation, and we additionally provide category-balanced and test-oriented variants via targeted data sampling. Emotion recognition in commonly studied video types (e.g., facial expressions and postures) is well established, but understanding emotions in SVs remains challenging: their greater content diversity widens semantic gaps and makes emotion-related features harder to learn, and their prevalent audio-visual co-expression leaves each modality emotionally incomplete, creating information gaps. To tackle these problems, we present an end-to-end baseline method, AV-CPNet, which employs a video transformer to better learn semantically relevant representations. We further design a two-stage cross-modal fusion module to complementarily model the correlations between audio and visual features. The EP-CE Loss, which incorporates three emotion polarities, is then applied to guide model optimization. Extensive experimental results on nine datasets verify the effectiveness of AV-CPNet. Datasets and code will be released at https://github.com/XuecWu/eMotions.
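The abstract only states that the EP-CE Loss incorporates three emotion polarities (negative, neutral, positive) to guide optimization; it does not give the formulation. Below is a minimal, hypothetical sketch of one way such a polarity-aware cross-entropy could look: a standard cross-entropy over fine-grained emotion classes plus a second cross-entropy over polarity groups obtained by aggregating class probabilities. The class-to-polarity mapping, the weighting, and the function names are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of a polarity-aware cross-entropy ("EP-CE"-style) loss.
# The exact formulation, class-to-polarity mapping, and weighting are assumptions.
import torch
import torch.nn.functional as F

# Assumed mapping from fine-grained emotion classes to polarities
# (0 = negative, 1 = neutral, 2 = positive); adjust to the dataset's label set.
CLASS_TO_POLARITY = torch.tensor([2, 2, 1, 0, 0, 0, 2])  # e.g. 7 emotion classes


def ep_ce_loss(logits, targets, polarity_weight=0.5):
    """Cross-entropy over emotion classes plus cross-entropy over their polarities."""
    # Standard fine-grained emotion classification term.
    emotion_loss = F.cross_entropy(logits, targets)

    # Aggregate class probabilities into the three polarity buckets.
    probs = logits.softmax(dim=-1)  # (batch, num_classes)
    polarity_probs = torch.zeros(probs.size(0), 3, device=probs.device)
    polarity_probs.index_add_(1, CLASS_TO_POLARITY.to(probs.device), probs)

    # Polarity label of each sample follows from its emotion label.
    polarity_targets = CLASS_TO_POLARITY.to(targets.device)[targets]
    polarity_loss = F.nll_loss(torch.log(polarity_probs + 1e-8), polarity_targets)

    return emotion_loss + polarity_weight * polarity_loss


# Usage: logits from an audio-visual model, integer emotion labels.
logits = torch.randn(4, 7)
targets = torch.tensor([0, 3, 6, 2])
loss = ep_ce_loss(logits, targets)
```

The intuition behind such a two-level term is that even when the fine-grained emotion class is hard to pin down from partially expressive audio-visual content, the coarser polarity is often still recoverable, so penalizing polarity errors separately can steer optimization toward emotion-consistent representations.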