- Article, November 2006
A new approach to haptic augmentation of the GUI
ICMI '06: Proceedings of the 8th International Conference on Multimodal Interfaces, pages 372–379. https://doi.org/10.1145/1180995.1181064
Most users do not experience the same level of fluency in their interactions with computers that they do with physical objects in their daily life. We believe that much of this results from the limitations of unimodal interaction. Previous efforts in ...
- Article, November 2006
Multimodal fusion: a new hybrid strategy for dialogue systems
ICMI '06: Proceedings of the 8th International Conference on Multimodal Interfaces, pages 357–363. https://doi.org/10.1145/1180995.1181061
This is a new hybrid fusion strategy based primarily on the implementation of two former and differentiated approaches to multimodal fusion [11] in multimodal dialogue systems. Both approaches, their predecessors and their respective advantages and ...
- Article, November 2006
Using redundant speech and handwriting for learning new vocabulary and understanding abbreviations
ICMI '06: Proceedings of the 8th International Conference on Multimodal Interfaces, pages 347–356. https://doi.org/10.1145/1180995.1181060
New language constantly emerges from complex, collaborative human-human interactions such as meetings; for instance, a presenter may handwrite a new term on a whiteboard while saying it. Fixed vocabulary recognizers fail on such new terms, ...
- Article, November 2006
Word graph based speech recognition error correction by handwriting input
ICMI '06: Proceedings of the 8th International Conference on Multimodal Interfaces, pages 339–346. https://doi.org/10.1145/1180995.1181059
We propose a convenient handwriting user interface for correcting speech recognition errors efficiently. Via the proposed hand-marked correction on the displayed recognition result, substitution, deletion and insertion errors can be corrected ...
- Article, November 2006
The benefits of multimodal information: a meta-analysis comparing visual and visual-tactile feedback
- Matthew S. Prewett,
- Liuquin Yang,
- Frederick R. B. Stilson,
- Ashley A. Gray,
- Michael D. Coovert,
- Jennifer Burke,
- Elizabeth Redden,
- Linda R. Elliot
ICMI '06: Proceedings of the 8th International Conference on Multimodal Interfaces, pages 333–338. https://doi.org/10.1145/1180995.1181057
Information display systems have become increasingly complex and more difficult for human cognition to process effectively. Based upon Wickens' Multiple Resource Theory (MRT), information delivered using multiple modalities (i.e., visual and tactile) ...
- Article, November 2006
Enabling multimodal communications for enhancing the ability of learning for the visually impaired
ICMI '06: Proceedings of the 8th International Conference on Multimodal Interfaces, pages 326–332. https://doi.org/10.1145/1180995.1181056
Students who are blind are typically one to three years behind their seeing counterparts in mathematics and science. We posit that a key reason for this resides in the inability of such students to access multimodal embodied communicative behavior of ...
- Article, November 2006
Toward haptic rendering for a virtual dissection
ICMI '06: Proceedings of the 8th International Conference on Multimodal Interfaces, pages 310–317. https://doi.org/10.1145/1180995.1181054
In this paper we present a novel data structure combined with geometrically efficient techniques to simulate a "tissue peeling" method for deformable bodies. This is done to preserve the basic shape of a body in conjunction with soft-tissue deformation ...
- Article, November 2006
Haptic phonemes: basic building blocks of haptic communication
ICMI '06: Proceedings of the 8th International Conference on Multimodal Interfaces, pages 302–309. https://doi.org/10.1145/1180995.1181053
A haptic phoneme represents the smallest unit of a constructed haptic signal to which a meaning can be assigned. These haptic phonemes can be combined serially or in parallel to form haptic words, or haptic icons, which can hold more elaborate meanings ...
- Article, November 2006
Explorations in sound for tilting-based interfaces
ICMI '06: Proceedings of the 8th International Conference on Multimodal Interfaces, pages 295–301. https://doi.org/10.1145/1180995.1181052
Everyday experience, as well as recent studies, suggests that information contained in ecological sonic feedback may improve human control of, and interaction with, a system. This notion is particularly worthwhile to consider in the context of mobile, tilting-...
- Article, November 2006
Toward open-microphone engagement for multiparty interactions
ICMI '06: Proceedings of the 8th International Conference on Multimodal Interfaces, pages 273–280. https://doi.org/10.1145/1180995.1181049
There currently is considerable interest in developing new open-microphone engagement techniques for speech and multimodal interfaces that perform robustly in complex mobile and multiparty field environments. State-of-the-art audio-visual open-...
- Article, November 2006
Collaborative multimodal photo annotation over digital paper
ICMI '06: Proceedings of the 8th International Conference on Multimodal Interfaces, pages 131–132. https://doi.org/10.1145/1180995.1181023
The availability of metadata annotations over media content such as photos is known to enhance retrieval and organization, particularly for large data sets. The greatest challenge for obtaining annotations remains getting users to perform the large ...
- Article, November 2006
Comparing the effects of visual-auditory and visual-tactile feedback on user performance: a meta-analysis
- Jennifer L. Burke,
- Matthew S. Prewett,
- Ashley A. Gray,
- Liuquin Yang,
- Frederick R. B. Stilson,
- Michael D. Coovert,
- Linda R. Elliot,
- Elizabeth Redden
ICMI '06: Proceedings of the 8th International Conference on Multimodal Interfaces, pages 108–117. https://doi.org/10.1145/1180995.1181017
In a meta-analysis of 43 studies, we examined the effects of multimodal feedback on user performance, comparing visual-auditory and visual-tactile feedback to visual feedback alone. Results indicate that adding an additional modality to visual feedback ...
- Article, November 2006
Which one is better?: information navigation techniques for spatially aware handheld displays
ICMI '06: Proceedings of the 8th International Conference on Multimodal Interfaces, pages 100–107. https://doi.org/10.1145/1180995.1181016
Information navigation techniques for handheld devices support interacting with large virtual spaces on small displays, for example finding targets on a large-scale map. Since only a small part of the virtual space can be shown on the screen at once, ...
- Article, November 2006
From vocal to multimodal dialogue management
ICMI '06: Proceedings of the 8th International Conference on Multimodal Interfaces, pages 59–67. https://doi.org/10.1145/1180995.1181008
Multimodal, speech-enabled systems pose different research problems when compared to unimodal, voice-only dialogue systems. One of the important issues is the question of what a multimodal interface should look like in order to make the multimodal ...
- Article, November 2006
Human perception of intended addressee during computer-assisted meetings
ICMI '06: Proceedings of the 8th International Conference on Multimodal Interfaces, pages 20–27. https://doi.org/10.1145/1180995.1181002
Recent research aims to develop new open-microphone engagement techniques capable of identifying when a speaker is addressing a computer versus human partner, including during computer-assisted group interactions. The present research explores: (1) how ...
- Article, November 2006
Collaborative multimodal photo annotation over digital paper
ICMI '06: Proceedings of the 8th International Conference on Multimodal Interfaces, pages 4–11. https://doi.org/10.1145/1180995.1181000
The availability of metadata annotations over media content such as photos is known to enhance retrieval and organization, particularly for large data sets. The greatest challenge for obtaining annotations remains getting users to perform the large ...