Analyzing audiovisual data for understanding user's emotion in human–computer interaction environment

Juan Yang (College of Computer Science and Technology, School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China)
Zhenkun Li (College of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China)
Xu Du (National Engineering Research Center for E-Learning, Central China Normal University, Wuhan, China)

Data Technologies and Applications

ISSN: 2514-9288

Article publication date: 1 November 2023

Issue publication date: 15 April 2024

Abstract

Purpose

Although numerous signal modalities are available for emotion recognition, audio and visual modalities are the most common and predominant forms through which human beings express their emotional states in daily communication. Therefore, achieving automatic and accurate audiovisual emotion recognition is of great importance for developing engaging and empathetic human–computer interaction environments. However, two major challenges exist in the field of audiovisual emotion recognition: (1) how to effectively capture representations of each single modality and eliminate redundant features and (2) how to efficiently integrate information from these two modalities to generate discriminative representations.

Design/methodology/approach

A novel key-frame extraction-based attention fusion network (KE-AFN) is proposed for audiovisual emotion recognition. KE-AFN integrates key-frame extraction with multimodal interaction and fusion to enhance audiovisual representations and reduce redundant computation, filling the research gaps of existing approaches. Specifically, a local-maximum-based content analysis is designed to extract key-frames from videos in order to eliminate data redundancy. Two modules, a "Multi-head Attention-based Intra-modality Interaction Module" and a "Multi-head Attention-based Cross-modality Interaction Module", are proposed to mine and capture intra- and cross-modality interactions, further reducing data redundancy and producing more powerful multimodal representations.
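
The Python sketch below illustrates, under stated assumptions, how such a pipeline could look: a local-maximum-based key-frame selector over inter-frame content change, followed by multi-head attention modules for intra- and cross-modality interaction and an attention-based fusion head. The names (select_key_frames, AttentionFusion), the use of PyTorch's nn.MultiheadAttention, and all shapes and hyperparameters are illustrative assumptions, not the authors' released implementation of KE-AFN.

```python
# A minimal, hypothetical sketch of the two ideas summarized above: key-frame
# selection via local maxima of inter-frame content change, and multi-head
# attention for intra- and cross-modality interaction. All names, shapes and
# hyperparameters are assumptions for illustration only.
import numpy as np
import torch
import torch.nn as nn


def select_key_frames(frames: np.ndarray, max_frames: int = 16) -> np.ndarray:
    """Keep frames whose inter-frame content change is a local maximum (assumed criterion)."""
    # frames: (T, H, W, C); mean absolute difference of consecutive frames is
    # used as a crude content-change signal.
    diff = np.abs(frames[1:].astype(np.float32) - frames[:-1].astype(np.float32))
    change = diff.reshape(diff.shape[0], -1).mean(axis=1)          # length T - 1
    # a frame is a peak if its change score exceeds both neighbours
    peaks = [t + 1 for t in range(1, len(change) - 1)
             if change[t] > change[t - 1] and change[t] > change[t + 1]]
    if not peaks:                                                  # fallback: uniform sampling
        idx = np.linspace(0, len(frames) - 1, max_frames, dtype=int)
        return frames[idx]
    peaks = sorted(peaks, key=lambda t: -change[t - 1])[:max_frames]
    return frames[sorted(peaks)]


class AttentionFusion(nn.Module):
    """Hypothetical intra-/cross-modality interaction block built on nn.MultiheadAttention."""

    def __init__(self, dim: int = 256, heads: int = 4, num_classes: int = 8):
        super().__init__()
        self.intra_audio = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.intra_visual = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.audio_to_visual = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.visual_to_audio = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, audio_feats: torch.Tensor, visual_feats: torch.Tensor) -> torch.Tensor:
        # audio_feats: (B, Ta, dim); visual_feats: (B, Tv, dim), e.g. key-frame features
        a, _ = self.intra_audio(audio_feats, audio_feats, audio_feats)     # intra-modality
        v, _ = self.intra_visual(visual_feats, visual_feats, visual_feats)
        a2v, _ = self.audio_to_visual(v, a, a)    # visual queries attend to audio
        v2a, _ = self.visual_to_audio(a, v, v)    # audio queries attend to visual
        fused = torch.cat([a2v.mean(dim=1), v2a.mean(dim=1)], dim=-1)      # pooled fusion
        return self.classifier(fused)             # emotion logits
```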

Findings

Extensive experiments on two benchmark datasets (i.e. RAVDESS and CMU-MOSEI) demonstrate the effectiveness and rationality of KE-AFN. Specifically, (1) KE-AFN is superior to state-of-the-art baselines for audiovisual emotion recognition. (2) Exploring the supplementary and complementary information of different modalities can provide more emotional clues for better emotion recognition. (3) The proposed key-frame extraction strategy can enhance performance by more than 2.79 per cent in accuracy. (4) Both exploring intra- and cross-modality interactions and employing attention-based audiovisual fusion lead to better prediction performance.

Originality/value

The proposed KE-AFN can support the development of engaging and empathetic human–computer interaction environments.

Acknowledgements

The study was supported by the National Natural Science Foundation of China (62107032) and the Philosophy and Social Science Research Project of Hubei Provincial Education Department (21Q029).

Citation

Yang, J., Li, Z. and Du, X. (2024), "Analyzing audiovisual data for understanding user's emotion in human–computer interaction environment", Data Technologies and Applications, Vol. 58 No. 2, pp. 318-343. https://doi.org/10.1108/DTA-08-2023-0414

Publisher

Emerald Publishing Limited

Copyright © 2023, Emerald Publishing Limited
