
Multimodal Emotion Recognition


Multimodal emotion recognition is the task of inferring a person's emotional state from multiple signal modalities, such as speech, text, and facial expressions.
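In practice this usually means encoding each modality separately and then fusing the results. As a rough illustration, the minimal late-fusion sketch below (in PyTorch) uses one small encoder per modality and a linear classifier over the concatenated embeddings; the feature dimensions, the hidden size, and the seven-emotion label set are illustrative assumptions, not details taken from any paper listed here.

```python
# A minimal late-fusion sketch, not the method of any listed paper.
# Each modality (audio, text, face) is assumed to arrive as a pre-extracted
# feature vector; dimensions and the 7-class emotion set are placeholders.
import torch
import torch.nn as nn

class LateFusionEmotionClassifier(nn.Module):
    def __init__(self, audio_dim=128, text_dim=300, face_dim=256,
                 hidden_dim=64, num_emotions=7):
        super().__init__()
        # One small encoder per modality, mapping to a shared hidden size.
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden_dim), nn.ReLU())
        self.text_enc = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.ReLU())
        self.face_enc = nn.Sequential(nn.Linear(face_dim, hidden_dim), nn.ReLU())
        # Classifier over the concatenated modality embeddings (late fusion).
        self.head = nn.Linear(3 * hidden_dim, num_emotions)

    def forward(self, audio, text, face):
        fused = torch.cat(
            [self.audio_enc(audio), self.text_enc(text), self.face_enc(face)],
            dim=-1,
        )
        return self.head(fused)  # unnormalized emotion logits

# Usage with random tensors standing in for real modality features.
model = LateFusionEmotionClassifier()
logits = model(torch.randn(4, 128), torch.randn(4, 300), torch.randn(4, 256))
print(logits.shape)  # torch.Size([4, 7])
```

Many of the papers below go beyond this simple concatenation, for example with cross-modal attention or by handling missing modalities, but the encode-then-fuse structure is the common starting point.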

Papers and Code

MultiMAE-DER: Multimodal Masked Autoencoder for Dynamic Emotion Recognition

Apr 28, 2024

Dynamic Modality and View Selection for Multimodal Emotion Recognition with Missing Modalities

Apr 18, 2024

Cooperative Sentiment Agents for Multimodal Sentiment Analysis

Apr 19, 2024

Multimodal Emotion Recognition by Fusing Video Semantic in MOOC Learning Scenarios

Apr 11, 2024

MIPS at SemEval-2024 Task 3: Multimodal Emotion-Cause Pair Extraction in Conversations with Multimodal Language Models

Apr 11, 2024

MMA-DFER: MultiModal Adaptation of unimodal models for Dynamic Facial Expression Recognition in-the-wild

Apr 13, 2024

GMP-ATL: Gender-augmented Multi-scale Pseudo-label Enhanced Adaptive Transfer Learning for Speech Emotion Recognition via HuBERT

May 03, 2024

UniMEEC: Towards Unified Multimodal Emotion Recognition and Emotion Cause

Mar 30, 2024

Recursive Joint Cross-Modal Attention for Multimodal Fusion in Dimensional Emotion Recognition

Mar 30, 2024

Samsung Research China-Beijing at SemEval-2024 Task 3: A multi-stage framework for Emotion-Cause Pair Extraction in Conversations

Apr 25, 2024