- research-article, December 2024
Interpretable Clusters for Representing Citizens’ Sense of Belonging through Interaction with Cultural Heritage
- Guillermo Jiménez-Díaz,
- Belen Diaz-Agudo,
- Luis Emilio Bruni,
- Nele Kadastik,
- Anna Follo,
- Rossana Damiano,
- Manuel Striani,
- Angel Sanchez-Martin,
- Antonio Lieto
Journal on Computing and Cultural Heritage (JOCCH), Volume 17, Issue 4, Article No. 74, Pages 1–22. https://doi.org/10.1145/3665142
The EU H2020 project Social Cohesion, Participation, and Inclusion through Cultural Engagement (SPICE) focuses on developing, designing, and implementing new methods and digital tools for citizen curation. This article delineates several software tools ...
- short-paper, December 2024
Envisioning Ubiquitous Biosignal Interaction with Multimedia
- Ekaterina R. Stepanova,
- Alice C. Haynes,
- Laia Turmo Vidal,
- Francesco Chiossi,
- Abdallah El Ali,
- Luis Quintero,
- Yoav Luft,
- Nadia Campo Woytuk,
- Sven Mayer
MUM '24: Proceedings of the International Conference on Mobile and Ubiquitous Multimedia, Pages 495–500. https://doi.org/10.1145/3701571.3701609
Biosensing technologies are on their way to becoming ubiquitous in multimedia interaction. These technologies capture physiological data, such as heart rate, breathing, skin conductance, and brain activity. Researchers are exploring biosensing from ...
- panel, November 2024
What should we do with Emotion AI? Towards an Agenda for the Next 30 Years
CSCW Companion '24: Companion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing, Pages 98–101. https://doi.org/10.1145/3678884.3689135
What should we do with emotion AI? Should we regulate, ban, promote, or re-imagine it? Emotion AI, a class of affective computing technologies used in personal and social computing, comprises emergent and controversial techniques aiming to classify human ...
- research-article, November 2024
The Synergy of Dialogue and Art: Exploring the Potential of Multimodal AI Chatbots in Emotional Support
CSCW Companion '24: Companion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing, Pages 147–153. https://doi.org/10.1145/3678884.3681843
The rapid advancements in generative AI have spurred the development of AI chatbots for emotional support. In this work, we designed ArtTheraCat, a novel multimodal chatbot that promotes mental health by integrating supportive dialogue with artistic ...
- research-article, November 2024
U.S. Job-Seekers' Organizational Justice Perceptions of Emotion AI-Enabled Interviews
Proceedings of the ACM on Human-Computer Interaction (PACMHCI), Volume 8, Issue CSCW2, Article No. 454, Pages 1–42. https://doi.org/10.1145/3686993
Emotion AI is increasingly used to automatically evaluate asynchronous hiring interviews. Although touted for increasing hiring fit and reducing bias, it is unclear how job-seekers perceive emotion AI-enabled asynchronous interviews. This gap is striking, ...
- research-article, November 2024
MSP-GEO Corpus: A Multimodal Database for Understanding Video-Learning Experience
ICMI '24: Proceedings of the 26th International Conference on Multimodal Interaction, Pages 488–497. https://doi.org/10.1145/3678957.3685737
Video-based learning has become a popular, scalable, and effective approach for students to learn new skills. Many of the challenges for video-based learning can be addressed with machine learning models. However, the available datasets often lack the ...
- research-article, November 2024
Integrating Multimodal Affective Signals for Stress Detection from Audio-Visual Data
ICMI '24: Proceedings of the 26th International Conference on Multimodal Interaction, Pages 22–32. https://doi.org/10.1145/3678957.3685717
Stress detection in real-world settings presents significant challenges due to the complexity of human emotional expression influenced by biological, psychological, and social factors. While traditional methods like EEG, ECG, and EDA sensors provide ...
- short-paper, October 2024
MRAC'24 Track 2: 2nd International Workshop on Multimodal and Responsible Affective Computing
MRAC '24: Proceedings of the 2nd International Workshop on Multimodal and Responsible Affective Computing, Pages 39–40. https://doi.org/10.1145/3689092.3696103
MRAC'24 is the continuation of last year's MRAC'23. The main goal of this workshop is to promote the application of affective computing technology in real-world scenarios. We have equipped this workshop with the MER'24 Challenge, which provides a platform ...
- research-article, October 2024
MRAC Track 1: 2nd Workshop on Multimodal, Generative and Responsible Affective Computing
MRAC '24: Proceedings of the 2nd International Workshop on Multimodal and Responsible Affective Computing, Pages 1–6. https://doi.org/10.1145/3689092.3690042
With the rapid advancements in multimodal generative technology, Affective Computing research has provoked discussion about the potential consequences of AI systems equipped with emotional intelligence. Affective Computing involves the design, evaluation, ...
- research-article, October 2024
Can Expression Sensitivity Improve Macro- and Micro-Expression Spotting in Long Videos?
MRAC '24: Proceedings of the 2nd International Workshop on Multimodal and Responsible Affective Computing, Pages 30–38. https://doi.org/10.1145/3689092.3689396
Spotting people's expressions is pivotal, as it directly reflects emotions, particularly those underlying feelings and intentions that may not be expressed verbally. Therefore, detecting macro- and micro-expressions plays a critical role in psychological ...
- keynote, October 2024
Seeing in 3D: Assistive Robotics with Advanced Computer Vision
MRAC '24: Proceedings of the 2nd International Workshop on Multimodal and Responsible Affective Computing, Pages 8–9. https://doi.org/10.1145/3689092.3689392
Robotics has made significant progress in structured and constrained environments, e.g., manufacturing. However, it is still in its infancy when it comes to applications in unstructured and unconstrained situations, e.g., social environments. In ...
- short-paper, October 2024
MuSe '24: The 5th Multimodal Sentiment Analysis Challenge and Workshop: Social Perception & Humor
MuSe'24: Proceedings of the 5th on Multimodal Sentiment Analysis Challenge and Workshop: Social Perception and Humor, Pages 10–11. https://doi.org/10.1145/3689062.3695939
The 5th Multimodal Sentiment Analysis Challenge (MuSe), a workshop in conjunction with ACM Multimedia '24, is focused on Multimodal Machine Learning in the domain of Affective Computing. Two different sub-challenges are proposed: Social Perception Sub-...
- short-paper, October 2024
Multimodal Humor Detection and Social Perception Prediction
MuSe'24: Proceedings of the 5th on Multimodal Sentiment Analysis Challenge and Workshop: Social Perception and Humor, Pages 60–64. https://doi.org/10.1145/3689062.3689376
Parallel audio-visual-text data contains a vast amount of information, so it is essential to develop machine learning algorithms that can utilise it efficiently. In this work, we investigated unimodal and multimodal solutions for MuSe Humor and ...
- research-article, October 2024
The MuSe 2024 Multimodal Sentiment Analysis Challenge: Social Perception and Humor Recognition
- Shahin Amiriparian,
- Lukas Christ,
- Alexander Kathan,
- Maurice Gerczuk,
- Niklas Müller,
- Steffen Klug,
- Lukas Stappen,
- Andreas König,
- Erik Cambria,
- Björn W. Schuller,
- Simone Eulitz
MuSe'24: Proceedings of the 5th on Multimodal Sentiment Analysis Challenge and Workshop: Social Perception and Humor, Pages 1–9. https://doi.org/10.1145/3689062.3689088
The Multimodal Sentiment Analysis Challenge (MuSe) 2024 addresses two contemporary multimodal affect and sentiment analysis problems: In the Social Perception Sub-Challenge (MuSe-Perception), participants will predict 16 different social attributes of ...
- research-article, October 2024
LLM-Driven Multimodal Fusion for Human Perception Analysis
- Sergio Esteban-Romero,
- Iván Martín-Fernández,
- Manuel Gil-Martín,
- David Griol-Barres,
- Zoraida Callejas-Carrión,
- Fernando Fernández-Martínez
MuSe'24: Proceedings of the 5th on Multimodal Sentiment Analysis Challenge and Workshop: Social Perception and Humor, Pages 45–51. https://doi.org/10.1145/3689062.3689084
The Multimodal Sentiment Analysis Challenge presents two distinct sub-challenges related to human perception characteristics. This paper focuses on the MUSE-PERCEPTION challenge, which aims to predict the perceptual attributes of CEOs from video data. ...
- research-article, October 2024
Larger Encoders, Smaller Regressors: Exploring Label Dimensionality Reduction and Multimodal Large Language Models as Feature Extractors for Predicting Social Perception
- Iván Martín-Fernández,
- Sergio Esteban-Romero,
- Jaime Bellver-Soler,
- Fernando Fernández-Martínez,
- Manuel Gil-Martín
MuSe'24: Proceedings of the 5th on Multimodal Sentiment Analysis Challenge and Workshop: Social Perception and Humor, Pages 20–27. https://doi.org/10.1145/3689062.3689083
Designing reliable automatic models for social perception can contribute to a better understanding of human behavior, enabling more trustworthy experiences in the multimedia on-line communication environment. However, predicting social attributes from ...
- research-article, October 2024
Joint Contrastive Learning with Feature Alignment for Cross-Corpus EEG-based Emotion Recognition
BCIMM '24: Proceedings of the 1st International Workshop on Brain-Computer Interfaces (BCI) for Multimedia Understanding, Pages 9–17. https://doi.org/10.1145/3688862.3689112
The integration of human emotions into multimedia applications shows great potential for enriching user experiences and enhancing engagement across various digital platforms. Unlike traditional methods such as questionnaires, facial expressions, and ...
- abstract, October 2024
Label-Efficient Emotion and Sentiment Analysis
MM '24: Proceedings of the 32nd ACM International Conference on Multimedia, Pages 11300–11301. https://doi.org/10.1145/3664647.3689173
Emotion and sentiment analysis (ESA) helps machines serve humans more intelligently. However, collecting large-scale high-quality datasets for training ESA models in a supervised manner is expensive, time-consuming, and difficult in practice. This ...
- research-article, October 2024
Towards Engagement Prediction: A Cross-Modality Dual-Pipeline Approach using Visual and Audio Features
MM '24: Proceedings of the 32nd ACM International Conference on Multimedia, Pages 11383–11389. https://doi.org/10.1145/3664647.3688986
Engagement estimation is crucial for advancing natural human-computer interaction, allowing artificial agents to dynamically adjust their responses based on user engagement levels and creating more intuitive and immersive experiences. Despite ...
- research-article, October 2024
WSEL: EEG Feature Selection with Weighted Self-expression Learning for Incomplete Multi-dimensional Emotion Recognition
MM '24: Proceedings of the 32nd ACM International Conference on Multimedia, Pages 350–359. https://doi.org/10.1145/3664647.3681570
Due to the small size of valid samples, multi-source EEG features with high dimensionality can easily cause problems such as overfitting and poor real-time performance of the emotion recognition classifier. Feature selection has been demonstrated as an ...