- invited-talk, October 2016
Embodied media: expanding human capacity via virtual reality and telexistence (keynote)
ICMI '16: Proceedings of the 18th ACM International Conference on Multimodal Interaction, Page 3. https://doi.org/10.1145/2993148.3011261
The information we acquire in real life gives us a holistic experience that fully incorporates a variety of sensations and bodily motions such as seeing, hearing, speaking, touching, smelling, tasting, and moving. However, the sensory modalities that ...
- extended-abstract, October 2016
International workshop on multimodal analyses enabling artificial agents in human-machine interaction (workshop summary)
ICMI '16: Proceedings of the 18th ACM International Conference on Multimodal Interaction, Pages 604–605. https://doi.org/10.1145/2993148.3007634
In this paper, a brief overview of the third workshop on Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction is given. The paper focuses on the main aspects intended to be discussed in the workshop, reflecting the main scope of the ...
- extended-abstract, October 2016
1st international workshop on multi-sensorial approaches to human-food interaction (workshop summary)
ICMI '16: Proceedings of the 18th ACM International Conference on Multimodal Interaction, Pages 601–603. https://doi.org/10.1145/2993148.3007633
This is an introductory paper for the workshop entitled ‘Multi-Sensorial Approaches to Human-Food Interaction’ held at ICMI 2016, which took place on the 16th of November, 2016 in Tokyo, Japan. Here we discuss our objectives and the relevance of the ...
- extended-abstract, October 2016
ERM4CT 2016: 2nd international workshop on emotion representations and modelling for companion systems (workshop summary)
ICMI '16: Proceedings of the 18th ACM International Conference on Multimodal Interaction, Pages 593–595. https://doi.org/10.1145/2993148.3007630
In this paper the organisers present a brief overview of the 2nd International Workshop on Emotion Representations and Modelling for Companion Systems (ERM4CT). The ERM4CT 2016 Workshop is held in conjunction with the 18th ACM International Conference ...
- demonstration, October 2016
Multimodal system for public speaking with real time feedback: a positive computing perspective
ICMI '16: Proceedings of the 18th ACM International Conference on Multimodal Interaction, Pages 408–409. https://doi.org/10.1145/2993148.2998536
A multimodal system for public speaking with real time feedback has been developed using the Microsoft Kinect. The system has been developed within the paradigm of positive computing, which focuses on designing for user wellbeing. The system detects ...
- demonstration, October 2016
Multimodal affective feedback: combining thermal, vibrotactile, audio and visual signals
ICMI '16: Proceedings of the 18th ACM International Conference on Multimodal Interaction, Pages 400–401. https://doi.org/10.1145/2993148.2998522
In this paper we describe a demonstration of our multimodal affective feedback designs, used in research to expand the emotional expressivity of interfaces. The feedback leverages inherent associations and reactions to thermal, vibrotactile, auditory ...
- short-paper, October 2016
Kawaii feeling estimation by product attributes and biological signals
ICMI '16: Proceedings of the 18th ACM International Conference on Multimodal Interaction, Pages 563–566. https://doi.org/10.1145/2993148.2997621
Kansei values are critical factors in manufacturing in Japan. As one kansei value, kawaii, a positive adjective with connotations such as cute, lovable, and charming, is becoming more important. Our research systematically ...
- short-paper, October 2016
Multimodal positive computing system for public speaking with real-time feedback
ICMI '16: Proceedings of the 18th ACM International Conference on Multimodal Interaction, Pages 541–545. https://doi.org/10.1145/2993148.2997616
A multimodal system with real-time feedback for public speaking has been developed. The system has been developed within the paradigm of positive computing, which focuses on designing for user wellbeing. To date we have focused on the following ...
- research-article, October 2016
Semi-situated learning of verbal and nonverbal content for repeated human-robot interaction
ICMI '16: Proceedings of the 18th ACM International Conference on Multimodal Interaction, Pages 13–20. https://doi.org/10.1145/2993148.2993190
Content authoring of verbal and nonverbal behavior is a limiting factor when developing agents for repeated social interactions with the same user. We present PIP, an agent that crowdsources its own multimodal language behavior using a method we call ...
- short-paper, October 2016
Smooth eye movement interaction using EOG glasses
ICMI '16: Proceedings of the 18th ACM International Conference on Multimodal Interaction, Pages 307–311. https://doi.org/10.1145/2993148.2993181
Orbits combines a visual display and an eye motion sensor to allow a user to select between options by tracking a cursor with the eyes as the cursor travels in a circular path around each option. Using an off-the-shelf Jins MEME pair of eyeglasses, we ...
- short-paper, October 2016
Sound emblems for affective multimodal output of a robotic tutor: a perception study
ICMI '16: Proceedings of the 18th ACM International Conference on Multimodal Interaction, Pages 256–260. https://doi.org/10.1145/2993148.2993169
Human and robot tutors alike have to give careful consideration as to how feedback is delivered to students to provide a motivating yet clear learning context. Here, we performed a perception study to investigate attitudes towards negative and positive ...