DOI: 10.1145/2070481.2070498 · ICMI '11 Conference Proceedings · Poster

Robust user context analysis for multimodal interfaces

Published: 14 November 2011

Abstract

Multimodal interfaces that enable natural interaction through multiple modalities such as touch, hand gestures, speech, and facial expressions represent a paradigm shift in human-computer interfaces. Their aim is to allow rich and intuitive multimodal interaction similar to human-to-human communication. From the multimodal system's perspective, beyond the input modalities themselves, user context information such as states of attention and activity, and the identities of interacting users, can greatly improve the interaction experience. For example, when sensors such as cameras (webcams, depth sensors, etc.) and microphones are always on and continuously capturing signals from their environment, user context information is very useful for distinguishing genuine system-directed activity from ambient speech and gesture in the surroundings, and for singling out the "active user" from among a set of users. Information about user identity may be used to personalize the system's interface and behavior -- e.g., the look of the GUI, modality recognition profiles, and information layout -- to suit the specific user. In this paper, we present a set of algorithms and an architecture that performs audiovisual analysis of user context using sensors such as cameras and microphone arrays, integrating components for lip activity and audio direction detection (speech activity), face detection and tracking (attention), and face recognition (identity). The proposed architecture manages and fuses the component data flows with low latency, low memory footprint, and low CPU load, since such a system is typically required to run continuously in the background and report attention, activity, and identity events to consuming applications in real time.
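The abstract mentions audio direction detection using microphone arrays. A standard building block for this is estimating the time delay of arrival between a microphone pair; a common technique is generalized cross-correlation with phase transform (GCC-PHAT). The sketch below is illustrative only (not the paper's implementation) and assumes NumPy; function and variable names are hypothetical.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the delay of `sig` relative to `ref` (in seconds) using
    generalized cross-correlation with phase transform (GCC-PHAT)."""
    n = len(sig) + len(ref)            # zero-pad to avoid circular wrap-around
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    R /= np.abs(R) + 1e-12             # PHAT weighting: keep phase, drop magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:            # restrict search to physically plausible lags
        max_shift = min(int(fs * max_tau), max_shift)
    # reorder so that lag 0 sits at index `max_shift`
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = int(np.argmax(np.abs(cc))) - max_shift
    return shift / fs

# Usage: white noise reaching the second microphone 25 samples later
fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(2048)
y = np.roll(x, 25)                     # simulated delayed copy of the signal
tau = gcc_phat(y, x, fs, max_tau=0.005)
```

Given the estimated delay and the known microphone spacing, the direction of arrival follows from simple geometry; the PHAT weighting makes the peak sharp and relatively robust to room reverberation, which is why it is popular for always-on context sensing.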


Cited By

  • (2022) An Interaction Design Dome to Guide Software Robot Development in the Banking Sector. Digital Science, pp. 244–256. DOI: 10.1007/978-3-030-93677-8_21
  • (2013) DESI. Proceedings of the 11th Asia Pacific Conference on Computer Human Interaction, pp. 388–399. DOI: 10.1145/2525194.2525308
  • (2012) GPU-based approaches for real-time sound source localization using the SRP-PHAT algorithm. The International Journal of High Performance Computing Applications, 27(3):291–306. DOI: 10.1177/1094342012452166
  • (2012) Designing multiuser multimodal gestural interactions for the living room. Proceedings of the 14th ACM International Conference on Multimodal Interaction, pp. 61–62. DOI: 10.1145/2388676.2388693
  • (2011) The picture says it all! Proceedings of the 13th International Conference on Multimodal Interfaces, pp. 89–96. DOI: 10.1145/2070481.2070499

Published In

ICMI '11: Proceedings of the 13th International Conference on Multimodal Interfaces
November 2011, 432 pages
ISBN: 9781450306416
DOI: 10.1145/2070481

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. human-computer interaction
  2. multimodal systems
  3. speech
  4. user context

Qualifiers

  • Poster

Conference

ICMI '11

Acceptance Rates

Overall acceptance rate: 453 of 1,080 submissions (42%)
