- Article, November 2006
  Human computing and machine understanding of human behavior: a survey
  ICMI '06: Proceedings of the 8th international conference on Multimodal interfaces, pages 239–248. https://doi.org/10.1145/1180995.1181044
  A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next generation ...
- Article, November 2006
  Using maximum entropy (ME) model to incorporate gesture cues for SU detection
  ICMI '06: Proceedings of the 8th international conference on Multimodal interfaces, pages 185–192. https://doi.org/10.1145/1180995.1181035
  Accurate identification of sentence units (SUs) in spontaneous speech has been found to improve the accuracy of speech recognition, as well as downstream applications such as parsing. In recent multimodal investigations, gestural features were utilized, ...
- Article, November 2006
  Gaze-X: adaptive affective multimodal interface for single-user office scenarios
  ICMI '06: Proceedings of the 8th international conference on Multimodal interfaces, pages 171–178. https://doi.org/10.1145/1180995.1181032
  This paper describes an intelligent system that we developed to support affective multimodal human-computer interaction (AMM-HCI) where the user's actions and emotions are modeled and then used to adapt the HCI and support the user in his or her ...
- Article, November 2006
  A 'need to know' system for group classification
  ICMI '06: Proceedings of the 8th international conference on Multimodal interfaces, pages 155–161. https://doi.org/10.1145/1180995.1181030
  This paper outlines the design of a distributed sensor classification system with abnormality detection intended for groups of people who are participating in coordinated activities. The system comprises an implementation of a distributed Dynamic ...
- Article, November 2006
  Collaborative multimodal photo annotation over digital paper
  ICMI '06: Proceedings of the 8th international conference on Multimodal interfaces, pages 131–132. https://doi.org/10.1145/1180995.1181023
  The availability of metadata annotations over media content such as photos is known to enhance retrieval and organization, particularly for large data sets. The greatest challenge for obtaining annotations remains getting users to perform the large ...
- Article, November 2006
  Cross-modal coordination of expressive strength between voice and gesture for personified media
  ICMI '06: Proceedings of the 8th international conference on Multimodal interfaces, pages 43–50. https://doi.org/10.1145/1180995.1181006
  The aim of this paper is to clarify the relationship between the expressive strengths of gestures and voice for embodied and personified interfaces. We conduct perceptual tests using a puppet interface, while controlling singing-voice expressions, to ...
- Article, November 2006
  Automatic detection of group functional roles in face to face interactions
  ICMI '06: Proceedings of the 8th international conference on Multimodal interfaces, pages 28–34. https://doi.org/10.1145/1180995.1181003
  In this paper, we discuss a machine learning approach to automatically detect functional roles played by participants in a face to face interaction. We shortly introduce the coding scheme we used to classify the roles of the group members and the corpus ...
- Article, November 2006
  Collaborative multimodal photo annotation over digital paper
  ICMI '06: Proceedings of the 8th international conference on Multimodal interfaces, pages 4–11. https://doi.org/10.1145/1180995.1181000
  The availability of metadata annotations over media content such as photos is known to enhance retrieval and organization, particularly for large data sets. The greatest challenge for obtaining annotations remains getting users to perform the large ...