- Research article, November 2011
Virtual worlds and active learning for human detection
ICMI '11: Proceedings of the 13th international conference on multimodal interfaces, pages 393–400. https://doi.org/10.1145/2070481.2070556
Image-based human detection is of paramount interest due to its potential applications in fields such as advanced driving assistance, surveillance and media analysis. However, even detecting non-occluded standing humans remains a challenge of intensive ...
- Research article, November 2011
Please, tell me about yourself: automatic personality assessment using short self-presentations
ICMI '11: Proceedings of the 13th international conference on multimodal interfaces, pages 255–262. https://doi.org/10.1145/2070481.2070528
Personality plays an important role in the way people manage the images they convey in self-presentations and employment interviews, trying to affect others' first impressions and increase effectiveness. This paper addresses the automatically ...
- Research article, November 2011
Finding audio-visual events in informal social gatherings
ICMI '11: Proceedings of the 13th international conference on multimodal interfaces, pages 247–254. https://doi.org/10.1145/2070481.2070527
In this paper we address the problem of detecting and localizing objects that can be both seen and heard, e.g., people. This may be solved within the framework of data clustering. We propose a new multimodal clustering algorithm based on a Gaussian ...
- Poster, November 2011
Robust user context analysis for multimodal interfaces
ICMI '11: Proceedings of the 13th international conference on multimodal interfaces, pages 81–88. https://doi.org/10.1145/2070481.2070498
Multimodal interfaces that enable natural means of interaction using multiple modalities such as touch, hand gestures, speech, and facial expressions represent a paradigm shift in human-computer interfaces. Their aim is to allow rich and intuitive ...
- Research article, November 2011
Adaptive facial expression recognition using inter-modal top-down context
ICMI '11: Proceedings of the 13th international conference on multimodal interfaces, pages 27–34. https://doi.org/10.1145/2070481.2070488
The role of context in recognizing a person's affect is being increasingly studied. In particular, context arising from the presence of multi-modal information such as faces, speech and head pose has been used in recent studies to recognize facial ...
- Research article, November 2011
Crowdsourced data collection of facial responses
ICMI '11: Proceedings of the 13th international conference on multimodal interfaces, pages 11–18. https://doi.org/10.1145/2070481.2070486
In the past, collecting data to train facial expression and affect recognition systems has been time-consuming and has often led to data that do not include spontaneous expressions. We present the first crowdsourced data collection of dynamic, natural and ...
- Keynote, November 2011
Still looking at people
ICMI '11: Proceedings of the 13th international conference on multimodal interfaces, pages 1–2. https://doi.org/10.1145/2070481.2070483
There is a great need for programs that can describe what people are doing from video. Among other applications, such programs could be used to search for scenes in consumer video; in surveillance applications; to support the design of buildings and of ...