DOI: 10.1145/958432.958457
Article

Capturing user tests in a multimodal, multidevice informal prototyping tool

Published: 05 November 2003

Abstract

Interaction designers are increasingly faced with the challenge of creating interfaces that incorporate multiple input modalities, such as pen and speech, and span multiple devices. Few early stage prototyping tools allow non-programmers to prototype these interfaces. Here we describe CrossWeaver, a tool for informally prototyping multimodal, multidevice user interfaces. This tool embodies the informal prototyping paradigm, leaving design representations in an informal, sketched form, and creates a working prototype from these sketches. CrossWeaver allows a user interface designer to sketch storyboard scenes on the computer, specifying simple multimodal command transitions between scenes. The tool also allows scenes to target different output devices. Prototypes can run across multiple standalone devices simultaneously, processing multimodal input from each one. Thus, a designer can visually create a multimodal prototype for a collaborative meeting or classroom application. CrossWeaver captures all of the user interaction when running a test of a prototype. This input log can quickly be viewed visually for the details of the users' multimodal interaction or it can be replayed across all participating devices, giving the designer information to help him or her analyze and iterate on the interface design.
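The storyboard model the abstract describes (sketched scenes, multimodal command transitions between them, scenes targeting different devices, and a log of every test input that can be replayed) is essentially a state machine over scenes. The following sketch illustrates that idea only; it is not CrossWeaver's code or API, and all names (Scene, Transition, Prototype) are invented for illustration.

```python
# Hypothetical sketch of the storyboard-as-state-machine idea from the
# abstract. Class and field names are invented, not CrossWeaver's API.
import time
from dataclasses import dataclass, field

@dataclass
class Transition:
    modality: str       # e.g. "pen", "speech", "click"
    command: str        # e.g. a spoken word or gesture name
    target_scene: str   # scene shown when this command arrives

@dataclass
class Scene:
    name: str
    output_device: str  # scenes can target different devices, e.g. "pc", "pda"
    transitions: list = field(default_factory=list)

class Prototype:
    def __init__(self, scenes, start):
        self.scenes = scenes
        self.start = start
        self.current = start
        self.log = []   # (timestamp, scene, modality, command)

    def handle_input(self, modality, command):
        """Log the raw event, then follow a matching transition if any."""
        self.log.append((time.time(), self.current, modality, command))
        for t in self.scenes[self.current].transitions:
            if t.modality == modality and t.command == command:
                self.current = t.target_scene
                return

    def replay(self):
        """Re-run a captured test session from the start scene, mirroring
        the replay capability the abstract describes."""
        events, self.log = self.log, []
        self.current = self.start
        for _, _, modality, command in events:
            self.handle_input(modality, command)

# Example: a speech command moves the test from a PC scene to a PDA scene.
scenes = {
    "home": Scene("home", "pc", [Transition("speech", "next", "detail")]),
    "detail": Scene("detail", "pda"),
}
proto = Prototype(scenes, "home")
proto.handle_input("speech", "next")
assert proto.current == "detail"
```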



Reviews

Birol O. Aygün

This paper describes an interesting tool, developed by the authors, that helps designers create an on-screen prototype of a device. The authors use a bread toaster to illustrate the kind of design activity they support. Basically, their approach consists of providing a storyboard of the various actions a design may take, providing an easy way to specify the sequence of actions (put in the slices of bread, then pull down the toaster bar), and graphically displaying the output of each action on one or more output devices. These output devices might include a personal computer (PC) screen, a personal digital assistant (PDA) device, or a speaker (to which text-to-speech output may be directed). The emphasis of the paper is on the flexibility with which the input to the model can be provided (via mouse click, keyboard, or voice input). The input may be a single input from one or more of these devices (a mouse click, a character from the keyboard, or a word input via a microphone), or an AND combination of inputs entered within one second of each other (a keyboard input of "d" and voice input of "down"). The tool contains many other capabilities, which I don't have the space to mention. The tool seems to be thought out and implemented fairly thoroughly. In fact, some of its capabilities may not be initially used by novice users. It appears to me that all of this multimodal, multidevice input and output may not be necessary to design the user interface of a toaster. It would be interesting if the authors had described other applications where these capabilities may indeed be justified. There may be many: the dashboard controls in a car, a complex audio/video set, and so on. The authors should have tried out their software on designers of these types of equipment to better understand their requirements. The software seems like a solution looking for a problem. Perhaps this is simply the technological exploration phase of a project, before a more disciplined software engineering approach is applied to understanding the requirements, which in turn may be met with available technology. Online Computing Reviews Service




Published In

ICMI '03: Proceedings of the 5th international conference on Multimodal interfaces
November 2003
318 pages
ISBN: 1581136218
DOI: 10.1145/958432
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. informal prototyping
  2. mobile interface design
  3. multidevice
  4. multimodal
  5. pen and speech input
  6. sketching

Qualifiers

  • Article

Conference

ICMI-PUI03
Sponsor: ICMI-PUI03: International Conference on Multimodal User Interfaces
November 5 - 7, 2003
Vancouver, British Columbia, Canada

Acceptance Rates

ICMI '03 Paper Acceptance Rate: 45 of 130 submissions, 35%
Overall Acceptance Rate: 453 of 1,080 submissions, 42%


Cited By

  • (2022) Prototyping InContext: Exploring New Paradigms in User Experience Tools. Proceedings of the 2022 International Conference on Advanced Visual Interfaces, 1-5. https://doi.org/10.1145/3531073.3531175. Online publication date: 6-Jun-2022.
  • (2019) Editing Spatial Layouts through Tactile Templates for People with Visual Impairments. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1-11. https://doi.org/10.1145/3290605.3300436. Online publication date: 2-May-2019.
  • (2014) Electronic sketching on a multi-platform context. International Journal of Human-Computer Studies, 72(1), 45-52. https://doi.org/10.1016/j.ijhcs.2013.08.018. Online publication date: 1-Jan-2014.
  • (2013) Interactive prototyping of tabletop and surface applications. Proceedings of the 5th ACM SIGCHI symposium on Engineering interactive computing systems, 229-238. https://doi.org/10.1145/2494603.2480313. Online publication date: 24-Jun-2013.
  • (2010) Component-based high fidelity interactive prototyping of post-WIMP interactions. International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, 1-4. https://doi.org/10.1145/1891903.1891961. Online publication date: 8-Nov-2010.
  • (2009) An open source workbench for prototyping multimodal interactions based on off-the-shelf heterogeneous components. Proceedings of the 1st ACM SIGCHI symposium on Engineering interactive computing systems, 245-254. https://doi.org/10.1145/1570433.1570480. Online publication date: 15-Jul-2009.
  • (2008) A three-dimensional characterization space of software components for rapidly developing multimodal interfaces. Proceedings of the 10th international conference on Multimodal interfaces, 149-156. https://doi.org/10.1145/1452392.1452421. Online publication date: 20-Oct-2008.
  • (2008) Design and usability evaluation of multimodal interaction with finite state machines: a conceptual framework. Journal on Multimodal User Interfaces, 2(1), 53-60. https://doi.org/10.1007/s12193-008-0004-2. Online publication date: 23-Apr-2008.
  • (2008) Adapting paper prototyping for designing user interfaces for multiple display environments. Personal and Ubiquitous Computing, 12(3), 269-277. https://doi.org/10.1007/s00779-007-0147-2. Online publication date: 25-Jan-2008.
  • (2007) Natural multimodal dialogue systems. Proceedings of the 9th international conference on Multimodal interfaces, 291-298. https://doi.org/10.1145/1322192.1322243. Online publication date: 12-Nov-2007.

