- abstract, October 2018
Group Interaction Frontiers in Technology
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 660–662, https://doi.org/10.1145/3242969.3272960
Analysis of group interaction and team dynamics is an important topic in a wide variety of fields, owing to the amount of time that individuals typically spend in small groups for both professional and personal purposes, and given how crucial group ...
- demonstration, October 2018
MIRIAM: A Multimodal Interface for Explaining the Reasoning Behind Actions of Remote Autonomous Systems
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 557–558, https://doi.org/10.1145/3242969.3266297
Autonomous systems in remote locations have a high degree of autonomy, and there is a need to explain what they are doing and why, in order to increase transparency and maintain trust. This is particularly important in hazardous, high-risk scenarios. ...
- demonstration, October 2018
Online Privacy-Safe Engagement Tracking System
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 553–554, https://doi.org/10.1145/3242969.3266295
Tracking learners' engagement is useful for monitoring their learning quality. With an increasing number of online video courses, a system that can automatically track learners' engagement is expected to significantly help in improving the outcomes of ...
- demonstration, October 2018
EVA: A Multimodal Argumentative Dialogue System
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 551–552, https://doi.org/10.1145/3242969.3266292
This work introduces EVA, a multimodal argumentative dialogue system capable of discussing controversial topics with the user. The interaction is structured as an argument game in which the user and the system select respective moves in order to ...
- abstract, October 2018
Human-Habitat for Health (H3): Human-habitat Multimodal Interaction for Promoting Health and Well-being in the Internet of Things Era
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 664–665, https://doi.org/10.1145/3242969.3265862
This paper presents an introduction to the "Human-Habitat for Health (H3): Human-habitat multimodal interaction for promoting health and well-being in the Internet of Things era" workshop, which was held at the 20th ACM International Conference on ...
- short-paper, October 2018
EmotiW 2018: Audio-Video, Student Engagement and Group-Level Affect Prediction
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 653–656, https://doi.org/10.1145/3242969.3264993
This paper details the sixth Emotion Recognition in the Wild (EmotiW) challenge. EmotiW 2018 is a grand challenge at the ACM International Conference on Multimodal Interaction 2018, Colorado, USA. The challenge aims at providing a common platform to ...
- research-article, October 2018
Cascade Attention Networks For Group Emotion Recognition with Face, Body and Image Cues
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 640–645, https://doi.org/10.1145/3242969.3264991
This paper presents our approach for the group-level emotion recognition sub-challenge of EmotiW 2018. The task is to classify an image into one of the group emotions: positive, negative, or neutral. Our approach mainly explores three cues, ...
- short-paper, October 2018
Group-Level Emotion Recognition Using Hybrid Deep Models Based on Faces, Scenes, Skeletons and Visual Attentions
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 635–639, https://doi.org/10.1145/3242969.3264990
This paper presents a hybrid deep learning network submitted to the 6th Emotion Recognition in the Wild (EmotiW 2018) Grand Challenge [9], in the category of group-level emotion recognition. Advanced deep learning models trained individually on faces, ...
- short-paper, October 2018
Multi-Feature Based Emotion Recognition for Video Clips
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 630–634, https://doi.org/10.1145/3242969.3264989
In this paper, we present our latest progress in emotion recognition techniques, which combine acoustic features and facial features in both non-temporal and temporal modes. This paper presents the details of our techniques used in the Audio-Video ...
- research-article, October 2018
Group-Level Emotion Recognition using Deep Models with A Four-stream Hybrid Network
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 623–629, https://doi.org/10.1145/3242969.3264987
Group-level Emotion Recognition (GER) in the wild is a challenging task that has been gaining considerable attention. Most recent works utilize two channels of information, a channel involving only faces and a channel containing the whole image, to solve this problem. ...
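Multi-stream approaches like the one described above typically combine per-stream predictions at the score level. As a generic, hypothetical sketch (not this paper's actual network), weighted late fusion of class probabilities from several streams can look like this; the stream names and weights are illustrative assumptions:

```python
import numpy as np

def late_fusion(stream_probs, weights=None):
    """Fuse class-probability predictions from several streams.

    stream_probs: list of (num_classes,) arrays, one per stream
    weights: optional per-stream weights (default: uniform average)
    Returns the index of the predicted class.
    """
    probs = np.stack(stream_probs)            # (num_streams, num_classes)
    if weights is None:
        weights = np.ones(len(stream_probs)) / len(stream_probs)
    fused = np.average(probs, axis=0, weights=weights)
    return int(np.argmax(fused))

# Hypothetical streams: faces, whole image, skeletons, visual attention,
# over three group emotions (positive, neutral, negative)
face  = np.array([0.6, 0.3, 0.1])
scene = np.array([0.4, 0.4, 0.2])
skel  = np.array([0.5, 0.2, 0.3])
attn  = np.array([0.7, 0.2, 0.1])
print(late_fusion([face, scene, skel, attn]))  # → 0 (positive)
```

In practice the per-stream weights are usually tuned on a validation set rather than fixed to a uniform average.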
- research-article, October 2018
An Ensemble Model Using Face and Body Tracking for Engagement Detection
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 616–622, https://doi.org/10.1145/3242969.3264986
Precise detection and localization of learners' engagement levels are useful for monitoring their learning quality. In the EmotiW Challenge's engagement detection task, we proposed a series of novel improvements, including (a) a cluster-based framework ...
- short-paper, October 2018
An Attention Model for Group-Level Emotion Recognition
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 611–615, https://doi.org/10.1145/3242969.3264985
In this paper we propose a new approach for classifying the global emotion of images containing groups of people. To achieve this task, we consider two different and complementary sources of information: (i) a global representation of the entire image, (ii) ...
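Attention models for group-level recognition generally learn to weight individual faces by their relevance to the group emotion. As a minimal, hypothetical sketch (not the authors' model), softmax attention pooling of per-face features against a global context vector can be written as:

```python
import numpy as np

def attention_pool(face_feats, query):
    """Softmax-weighted pooling of per-face features.

    face_feats: (num_faces, dim) array of face descriptors
    query: (dim,) context vector used to score each face
    Returns a single (dim,) pooled group feature.
    """
    scores = face_feats @ query                      # relevance per face
    scores = scores - scores.max()                   # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over faces
    return weights @ face_feats                      # weighted combination

# Toy example: three 2-d face descriptors, context favoring dimension 0
faces = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.5, 0.5]])
global_ctx = np.array([1.0, 0.0])
pooled = attention_pool(faces, global_ctx)
print(pooled.shape)  # (2,)
```

In a full model, the query would itself be learned (e.g., derived from the global image representation the abstract mentions) rather than fixed by hand.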
- research-article, October 2018
Predicting Engagement Intensity in the Wild Using Temporal Convolutional Network
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 604–610, https://doi.org/10.1145/3242969.3264984
Engagement is the holy grail of learning, whether in a classroom setting or on an online learning platform. Studies have shown that engagement of the student while learning can benefit students as well as the teacher if the engagement level of the ...
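The core building block of a temporal convolutional network is a causal, dilated 1-D convolution, where each output depends only on the current and past time steps. As a generic, hypothetical sketch (not this paper's architecture), a single causal dilated convolution can be implemented as:

```python
import numpy as np

def causal_conv1d(x, kernel, dilation=1):
    """Causal dilated 1D convolution: output at time t depends only on
    x[t], x[t - dilation], x[t - 2*dilation], ... (no future leakage).

    x: (T,) input sequence; kernel: (k,) filter weights
    """
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # left-pad so output stays causal
    return np.array([
        sum(kernel[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

x = np.arange(6, dtype=float)                      # [0, 1, 2, 3, 4, 5]
out = causal_conv1d(x, np.array([1.0, 1.0]), dilation=2)
print(out)  # each output is x[t] + x[t-2], with zeros before the start
```

Stacking such layers with exponentially increasing dilations (1, 2, 4, ...) gives the network a long receptive field over the video's feature sequence while keeping every prediction causal.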
- short-paper, October 2018
Automatic Engagement Prediction with GAP Feature
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 599–603, https://doi.org/10.1145/3242969.3264982
In this paper, we propose an automatic engagement prediction method for the Engagement in the Wild sub-challenge of EmotiW 2018. We first design a novel Gaze-AU-Pose (GAP) feature taking into account the information of gaze, action units, and head pose ...
- short-paper, October 2018
Deep Recurrent Multi-instance Learning with Spatio-temporal Features for Engagement Intensity Prediction
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 594–598, https://doi.org/10.1145/3242969.3264981
This paper elaborates on the winning approach for engagement intensity prediction in the EmotiW Challenge 2018. The task is to predict the engagement level of a subject when he or she is watching an educational video in diverse conditions and different ...
- short-paper, October 2018
An Occam's Razor View on Learning Audiovisual Emotion Recognition with Small Training Sets
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 589–593, https://doi.org/10.1145/3242969.3264980
This paper presents a light-weight and accurate deep neural model for audiovisual emotion recognition. To design this model, the authors followed a philosophy of simplicity, drastically limiting the number of parameters to learn from the target datasets, ...
- short-paper, October 2018
Video-based Emotion Recognition Using Deeply-Supervised Neural Networks
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal InteractionPages, Pages 584–588, https://doi.org/10.1145/3242969.3264978
Emotion recognition (ER) based on natural facial images/videos has been studied for some years and is considered a comparatively hot topic in the field of affective computing. However, it remains a challenge to perform ER in the wild, given the noises ...
- research-article, October 2018
Large Vocabulary Continuous Audio-Visual Speech Recognition
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 538–541, https://doi.org/10.1145/3242969.3264976
We like to converse with other people using both sounds and visuals, as our perception of speech is bimodal. Essentially echoing the same speech structure, we manage to integrate the two modalities and often understand the message better than with the ...
- research-article, October 2018
Responding with Sentiment Appropriate for the User's Current Sentiment in Dialog as Inferred from Prosody and Gaze Patterns
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 529–533, https://doi.org/10.1145/3242969.3264974
Multi-modal sentiment detection from natural video/audio streams has recently received much attention. I propose to use this multi-modal information to develop a technique, Sentiment Coloring, which utilizes the detected sentiments to generate effective ...
- research-article, October 2018
Data Driven Non-Verbal Behavior Generation for Humanoid Robots
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 520–523, https://doi.org/10.1145/3242969.3264970
Social robots need non-verbal behavior to make an interaction pleasant and efficient. Most of the models for generating non-verbal behavior are rule-based and hence can produce a limited set of motions and are tuned to a particular scenario. In contrast, ...