*GenSumm*: A Joint Framework for Multi-Task Tweet Classification and Summarization Using Sentiment Analysis and Generative Modelling
Social media platforms like Twitter serve as a medium of communication among people, government agencies, NGOs, and other relief-providing agencies amid the widespread humanitarian havoc of a disaster outbreak, when other communication means might not be ...
Contrastive Learning Based Modality-Invariant Feature Acquisition for Robust Multimodal Emotion Recognition With Missing Modalities
Multimodal emotion recognition (MER) aims to understand the way that humans express their emotions by exploring complementary information across modalities. However, it is hard to guarantee that full-modality data is always available in real-world ...
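The truncated abstract names contrastive learning as the route to modality-invariant features. As a rough illustration only, a standard cross-modal InfoNCE objective of this kind is sketched below in PyTorch; the symmetric formulation, the temperature value, and the two-modality pairing are assumptions, not the authors' exact design.

```python
import torch
import torch.nn.functional as F

def cross_modal_infonce(z_a: torch.Tensor, z_v: torch.Tensor,
                        temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE loss pulling paired embeddings from two modalities together.

    z_a, z_v: (batch, dim) embeddings of the same samples from two modalities;
    matching rows are positives, all other rows serve as negatives.
    """
    z_a = F.normalize(z_a, dim=-1)
    z_v = F.normalize(z_v, dim=-1)
    logits = (z_a @ z_v.t()) / temperature                # (batch, batch) similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Symmetric loss over both retrieval directions (a->v and v->a).
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

Minimizing such a loss drives the modality encoders to map the same sample to nearby points, which is what makes a shared representation usable when one modality is missing at test time.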
Fusion and Discrimination: A Multimodal Graph Contrastive Learning Framework for Multimodal Sarcasm Detection
Identifying sarcastic clues from both textual and visual information has become an important research issue, called Multimodal Sarcasm Detection. In this article, we investigate multimodal sarcasm detection from a novel perspective, where a multimodal ...
VAD: A Video Affective Dataset With Danmu
- Shangfei Wang,
- Xin Li,
- Feiyi Zheng,
- Jicai Pan,
- Xuewei Li,
- Yanan Chang,
- Zhou’an Zhu,
- Qiong Li,
- Jiahe Wang,
- Yufei Xiao
Although video affective content analysis has great potential in many applications, it has not been thoroughly studied due to limited datasets. In this article, we construct a large-scale video affective dataset with danmu (VAD). It consists of 19,267 ...
FBSTCNet: A Spatio-Temporal Convolutional Network Integrating Power and Connectivity Features for EEG-Based Emotion Decoding
Electroencephalography (EEG)-based emotion recognition plays a key role in the development of affective brain-computer interfaces (BCIs). However, emotions are complex and extracting salient EEG features underlying distinct emotional states is inherently ...
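The abstract mentions integrating power and connectivity features; the sketch below shows one conventional way to compute each from raw EEG (Welch band power and Pearson channel correlations). These specific feature choices are assumptions for illustration, not FBSTCNet's actual pipeline.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg: np.ndarray, fs: float, band=(8.0, 13.0)) -> np.ndarray:
    """Mean spectral power per channel within a band; eeg is (channels, samples)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(int(2 * fs), eeg.shape[-1]), axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, mask].mean(axis=-1)                     # shape: (channels,)

def connectivity(eeg: np.ndarray) -> np.ndarray:
    """Channel-by-channel Pearson correlation as a simple connectivity feature."""
    return np.corrcoef(eeg)                               # shape: (channels, channels)
```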
CFN-ESA: A Cross-Modal Fusion Network With Emotion-Shift Awareness for Dialogue Emotion Recognition
Multimodal emotion recognition in conversation (ERC) has garnered growing attention from research communities in various fields. In this paper, we propose a Cross-modal Fusion Network with Emotion-Shift Awareness (CFN-ESA) for ERC. Extant approaches ...
Hierarchical Shared Encoder With Task-Specific Transformer Layer Selection for Emotion-Cause Pair Extraction
Emotion-Cause Pair Extraction (ECPE) aims to extract emotions and their causes from a document. Powerful emotion and cause extraction abilities have proven essential in achieving accurate ECPE. However, most existing methods employ shared feature learning ...
Evaluation of Virtual Agents’ Hostility in Video Games
Non-Playable Characters (NPCs) are a subtype of virtual agents that populate video games by endorsing social roles in the narrative. To infer NPCs’ roles, players evaluate NPCs’ appearance and behaviors, usually by ascribing human traits to ...
Modeling Category Semantic and Sentiment Knowledge for Aspect-Level Sentiment Analysis
To classify the sentiment polarity of the aspect entity in a sentence, most existing research treats the semantic knowledge between a given aspect of a sentence and its corresponding context as the significant clue for the task. However, available ...
CiABL: Completeness-Induced Adaptative Broad Learning for Cross-Subject Emotion Recognition With EEG and Eye Movement Signals
Although multimodal physiological data from the central and peripheral nervous systems can objectively reflect human emotional states, individual differences caused by non-stationarity and low signal-to-noise ratios bring several challenges to ...
MAST-GCN: Multi-Scale Adaptive Spatial-Temporal Graph Convolutional Network for EEG-Based Depression Recognition
Recently, depression recognition through EEG has gained significant attention. However, two challenges have not been properly addressed in prior automated depression recognition and classification studies: 1) EEG data lacks an explicit topological ...
Improving Representation With Hierarchical Contrastive Learning for Emotion-Cause Pair Extraction
Emotion-cause pair extraction (ECPE) aims to extract emotions and their corresponding cause from a document. The previous works have made great progress. However, there exist two major issues in existing works. First, most existing works mainly focus on ...
Spectral-Spatial Attention Alignment for Multi-Source Domain Adaptation in EEG-Based Emotion Recognition
In electroencephalography-based (EEG-based) emotion recognition, high non-stationarity and individual differences in EEG signals could lead to significant discrepancies between sessions/subjects, making generalization to a new session/subject very ...
Multimodal Prediction of Obsessive-Compulsive Disorder and Comorbid Depression Severity and Energy Delivered by Deep Brain Electrodes
- Saurabh Hinduja,
- Ali Darzi,
- Itir Onal Ertugrul,
- Nicole Provenza,
- Ron Gadot,
- Eric A. Storch,
- Sameer A. Sheth,
- Wayne K. Goodman,
- Jeffrey F. Cohn
To develop reliable, valid, and efficient measures of obsessive-compulsive disorder (OCD) severity, comorbid depression severity, and total electrical energy delivered (TEED) by deep brain stimulation (DBS), we trained and compared random forests ...
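The abstract states that random forests were trained to predict severity measures from multimodal features. A minimal regression setup in scikit-learn might look like the following; the feature matrix, hyperparameters, and cross-validation protocol here are placeholders, not the study's.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Placeholder data: rows are sessions, columns are multimodal features
# (e.g., facial, acoustic, and stimulation descriptors); y is a severity score.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(120, 40)), rng.normal(size=120)

model = RandomForestRegressor(n_estimators=500, random_state=0)
mae = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error").mean()
print(f"cross-validated MAE: {mae:.3f}")
```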
Bridge Graph Attention Based Graph Convolution Network With Multi-Scale Transformer for EEG Emotion Recognition
In multichannel electroencephalograph (EEG) emotion recognition, most graph-based studies employ shallow graph models to learn spatial characteristics, since increasing network depth causes node over-smoothing. To address over-smoothing, we ...
A Weighted Co-Training Framework for Emotion Recognition Based on EEG Data Generation Using Frequency-Spatial Diffusion Transformer
Emotion recognition based on EEG signals has been a challenging task. The acquisition of EEG signals is complex, time-consuming, and costly. Artificial Intelligence Generated Content (AIGC) technology has been developing rapidly in image and sound ...
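To make the co-training idea concrete, here is a toy weighted co-training loop on two feature views, where a generated pool (standing in for the diffusion transformer's synthetic EEG) is pseudo-labelled by each view's classifier and weighted by its prediction confidence. Logistic regression stands in for the paper's models, and the confidence-weighting rule is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def weighted_cotrain(X1, X2, y, X1_gen, X2_gen, rounds=3):
    """Toy weighted co-training over two views (e.g., frequency vs. spatial).

    Each round, the classifier on one view pseudo-labels the generated pool
    for the other view, with sample weights set to its prediction confidence.
    """
    clf1 = LogisticRegression(max_iter=1000).fit(X1, y)
    clf2 = LogisticRegression(max_iter=1000).fit(X2, y)
    for _ in range(rounds):
        p1, p2 = clf1.predict_proba(X1_gen), clf2.predict_proba(X2_gen)
        y1, w1 = clf1.classes_[p1.argmax(1)], p1.max(1)   # view-1 labels + confidence
        y2, w2 = clf2.classes_[p2.argmax(1)], p2.max(1)   # view-2 labels + confidence
        # Retrain each view on real data plus the other view's weighted pseudo-labels.
        clf1.fit(np.vstack([X1, X1_gen]), np.concatenate([y, y2]),
                 sample_weight=np.concatenate([np.ones(len(y)), w2]))
        clf2.fit(np.vstack([X2, X2_gen]), np.concatenate([y, y1]),
                 sample_weight=np.concatenate([np.ones(len(y)), w1]))
    return clf1, clf2
```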
Weakly Correlated Multimodal Sentiment Analysis: New Dataset and Topic-Oriented Model
- Wuchao Liu,
- Wengen Li,
- Yu-Ping Ruan,
- Yulou Shu,
- Juntao Chen,
- Yina Li,
- Caili Yu,
- Yichao Zhang,
- Jihong Guan,
- Shuigeng Zhou
Existing multimodal sentiment analysis models focus more on fusing highly correlated image-text pairs, and thus achieve unsatisfactory performance on multimodal social media data, which usually manifests weak correlations between different modalities. To ...
Boosting Micro-Expression Recognition via Self-Expression Reconstruction and Memory Contrastive Learning
Micro-expression (ME) is an instinctive reaction that is not controlled by conscious thought. It reveals one's inner feelings, which is significant for sentiment analysis and lie detection. Since micro-expressions are expressed as subtle facial changes within ...
SynSem-ASTE: An Enhanced Multi-Encoder Network for Aspect Sentiment Triplet Extraction With Syntax and Semantics
Aspect Sentiment Triplet Extraction (ASTE) is an essential task in fine-grained opinion mining and sentiment analysis that involves extracting triplets consisting of aspect terms, opinion terms, and their associated sentiment polarities from texts. While ...
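For readers unfamiliar with the task, the triplet output format that ASTE systems produce can be captured in a few lines; the class and the example sentence below are illustrative only.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class SentimentTriplet:
    """One ASTE prediction: (aspect term, opinion term, sentiment polarity)."""
    aspect: str
    opinion: str
    polarity: Literal["positive", "negative", "neutral"]

# "The battery life is great but the screen is dim."
triplets = [
    SentimentTriplet("battery life", "great", "positive"),
    SentimentTriplet("screen", "dim", "negative"),
]
```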
EmoTake: Exploring Drivers’ Emotion for Takeover Behavior Prediction
- Yu Gu,
- Yibing Weng,
- Yantong Wang,
- Meng Wang,
- Guohang Zhuang,
- Jinyang Huang,
- Xiaolan Peng,
- Liang Luo,
- Fuji Ren
The rise of semi-automated vehicles allows drivers to engage in various non-driving-related tasks, which may stimulate diverse emotions and thus affect takeover safety. Though the effects of emotion on takeover behavior have recently been examined, how ...
EEG Microstates and fNIRS Metrics Reveal the Spatiotemporal Joint Neural Processing Features of Human Emotions
Emotions deeply influence human behavior and decision-making. Currently, the spatiotemporal joint neural processing pattern of human emotions remains largely unclear. This study employed EEG-fNIRS simultaneous recordings to capture the spatiotemporal ...
*VyaktitvaNirdharan*: Multimodal Assessment of Personality and Trait Emotional Intelligence
Automatic personality assessment (APA) has immense potential to improve decision-making and human-machine interaction. Numerous techniques for APA have been proposed in existing literature, with prior psychological studies demonstrating convergent ...
Rethinking Inconsistent Context and Imbalanced Regression in Depression Severity Prediction
As one of the world's most prevalent mental illnesses, depression is not easy to detect since it affects different people in different ways. Recently, linguistic features extracted from transcribed texts have been widely explored in depression ...
Affect-Conditioned Image Generation
In creativity support and computational co-creativity contexts, the task of discovering appropriate prompts for use with text-to-image generative models remains difficult. In many cases the creator wishes to evoke a certain impression with the image, but ...
U-Shaped Distribution Guided Sign Language Emotion Recognition With Semantic and Movement Features
Emotional expression is a bridge to human communication, especially for the hearing impaired. This paper proposes a sign language emotion recognition method based on semantic and movement features by exploring the relationship between emotion valence and ...
Towards Generalised and Incremental Bias Mitigation in Personality Computing
Building systems for predicting human socio-emotional states has promising applications; however, if trained on biased data, such systems could inadvertently yield biased decisions. Bias mitigation remains an open problem: it tackles the correction of ...
A Wide Evaluation of ChatGPT on Affective Computing Tasks
With the rise of foundation models, a new artificial intelligence paradigm has emerged: simply prompting general-purpose foundation models to solve problems, instead of training a separate machine learning model for each problem. Such models ...
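For a task like sentiment classification, the prompting paradigm this abstract describes reduces to building a zero-shot prompt and parsing the model's reply. The sketch below assumes a hypothetical `call_llm` function that sends a prompt to any chat-style model and returns its text reply; the prompt wording is illustrative, not taken from the paper.

```python
def sentiment_prompt(text: str) -> str:
    """Zero-shot prompt for a chat-style foundation model (wording illustrative)."""
    return (
        "Classify the sentiment of the following text as positive, negative, "
        f"or neutral. Reply with one word.\n\nText: {text}\nSentiment:"
    )

def classify(text: str, call_llm) -> str:
    """call_llm: any function mapping a prompt string to the model's reply string."""
    label = call_llm(sentiment_prompt(text)).strip().lower()
    return label if label in {"positive", "negative", "neutral"} else "neutral"
```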