
Interpretation of Shape-Related Iconic Gestures in Virtual Environments

  • Conference paper
Gesture and Sign Language in Human-Computer Interaction (GW 2001)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2298)

Abstract

So far, approaches to gesture recognition have focused mainly on deictic and emblematic gestures. Iconic gestures, viewed as iconic signs in the sense of Peirce, differ from deictics and emblems in that their relation to the referent is based on similarity. In the work reported here, the breakdown of the complex notion of similarity provides the key idea toward a computational model of gesture semantics for iconic gestures. Based on an empirical study, we describe first steps towards a recognition model for shape-related iconic gestures and its implementation in a prototype gesture recognition system. Observations focus on spatial concepts and their relation to features of iconic gestural expressions. The recognition model is based on a graph-matching method which compares the decomposed geometrical structures of gesture and object.

Keywords: Windows, Icons, Menus, and Pointing interfaces
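The graph-matching idea described in the abstract can be sketched in code. The toy version below is purely illustrative and is not the authors' implementation (which builds on Messmer's preprocessed model-graph algorithms, reference 14): gesture and object are each decomposed into a graph whose nodes carry shape attributes and whose edges carry spatial relations, and matching searches for the node mapping that preserves the most attributes and relations. All node names, attributes, and relations here are invented for illustration.

```python
# Illustrative sketch: brute-force matching between a "gesture graph" and an
# "object graph", both decomposed into shape primitives (nodes) connected by
# spatial relations (edges). Not the paper's algorithm; just the core idea.
from itertools import permutations

def match_score(gesture, obj, mapping):
    """Count attribute-compatible nodes plus preserved edge relations."""
    score = 0
    for g, o in mapping.items():
        if gesture["nodes"][g] == obj["nodes"][o]:  # e.g. both 'round'
            score += 1
    for (a, b), rel in gesture["edges"].items():
        if obj["edges"].get((mapping[a], mapping[b])) == rel:
            score += 1
    return score

def best_match(gesture, obj):
    """Try every injective node mapping; return the highest-scoring one."""
    g_nodes, o_nodes = list(gesture["nodes"]), list(obj["nodes"])
    best = (0, None)
    for perm in permutations(o_nodes, len(g_nodes)):
        candidate = dict(zip(g_nodes, perm))
        best = max(best, (match_score(gesture, obj, candidate), candidate),
                   key=lambda t: t[0])
    return best

# A gesture depicting a round part above an elongated part ...
gesture = {"nodes": {"g1": "round", "g2": "elongated"},
           "edges": {("g1", "g2"): "above"}}
# ... matched against a bottle decomposed into cap, neck, and body.
bottle = {"nodes": {"cap": "round", "neck": "elongated", "body": "elongated"},
          "edges": {("cap", "neck"): "above", ("neck", "body"): "above"}}

score, mapping = best_match(gesture, bottle)  # maps g1->cap, g2->neck
```

Because the gesture graph only captures a coarse, similarity-based view of the object, the match is a best-scoring partial correspondence rather than an exact isomorphism; a realistic system would replace the brute-force search with an efficient subgraph-matching algorithm.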

References

  1. Christian Benoît, Jean-Claude Martin, Catherine Pelachaud, Lambert Schomaker, and Bernhard Suhm. Audio-visual and multimodal speech systems. In D. Gibbon, editor, Handbook of Standards and Resources for Spoken Language Systems, Supplement Volume D. To appear.

  2. P. Feyereisen, M. Van de Wiele, and F. Dubois. The meaning of gestures: What can be understood without speech? European Bulletin of Cognitive Psychology, 8:3–25, 1988.

  3. U. Hadar and B. Butterworth. Iconic gestures, imagery, and word retrieval in speech. Semiotica, 115(1/2):147–172, 1997.

  4. Caroline Hummels. Gestural design tools: prototypes, experiments and scenarios. PhD thesis, Technische Universiteit Delft, 2000.

  5. A. Kendon. Gesticulation and speech: Two aspects of the process of utterance. In M. R. Key, editor, The Relationship of Verbal and Nonverbal Communication, pages 207–227. Mouton, The Hague, 1980.

  6. Sotaro Kita. How representational gestures help speaking. In McNeill [13], chapter 8, pages 162–185.

  7. David B. Koons, Carlton J. Sparrell, and Kristinn R. Thorisson. Intelligent Multimedia Interfaces, chapter 11. MIT Press, Cambridge, Mass., USA, 1993.

  8. Stefan Kopp and Ipke Wachsmuth. A knowledge-based approach for lifelike gesture animation. In W. Horn, editor, ECAI 2000: Proceedings of the 14th European Conference on Artificial Intelligence, pages 663–667, Amsterdam, 2000. IOS Press.

  9. Marc Erich Latoschik. A general framework for multimodal interaction in virtual reality systems: PrOSA. In VR2001 workshop proceedings: The Future of VR and AR Interfaces: Multi-modal, Humanoid, Adaptive and Intelligent, 2001. in press.

  10. W. J. Levelt. Speaking. MIT Press, Cambridge, Massachusetts, 1989.

  11. D. McNeill. Hand and Mind: What Gestures Reveal about Thought. University of Chicago Press, Chicago, 1992.

  12. David McNeill. Catchments and contexts: Non-modular factors in speech and gesture production. In McNeill [13], chapter 15, pages 312–328.

  13. David McNeill, editor. Language and Gesture. Language, Culture and Cognition. Cambridge University Press, Cambridge, 2000.

  14. Bruno T. Messmer. Efficient Graph Matching Algorithms for Preprocessed Model Graphs. PhD thesis, University of Bern, Switzerland, 1996.

  15. Jean-Luc Nespoulous and André Roch Lecours. Gestures: Nature and function. In Nespoulous, Perron, and Lecours, editors, The Biological Foundations of Gestures: Motor and Semiotic Aspects. Lawrence Erlbaum Associates, Hillsdale, NJ, 1986.

  16. Vladimir I. Pavlovic, Rajeev Sharma, and Thomas S. Huang. Visual interpretation of hand gestures for human-computer interaction: A review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):677–695, July 1997.

  17. Charles Sanders Peirce. Collected Papers of Charles Sanders Peirce. The Belknap Press of Harvard University Press, Cambridge, 1965.

  18. Siegmund Prillwitz, Regina Leven, Heiko Zienert, Thomas Hanke, and Jan Henning. HamNoSys Version 2.0: Hamburg Notation System for Sign Languages: An Introductory Guide, volume 5 of International Studies on Sign Language and Communication of the Deaf. Signum Press, Hamburg, 1989.

  19. Francis Quek, David McNeill, Robert Bryll, Susan Duncan, Xin-Feng Ma, Cemil Kirbas, Karl E. McCullough, and Rashid Ansari. Gesture and speech multimodal conversational interaction in monocular video. Course Notes of the Interdisciplinary College “Cognitive and Neurosciences”, Günne, Germany, March 2001.

  20. Timo Sowa and Ipke Wachsmuth. Coverbal iconic gestures for object descriptions in virtual environments: An empirical study. Technical Report 2001/03, Collaborative Research Center “Situated Artificial Communicators” (SFB 360), University of Bielefeld, 2001.

  21. Carlton J. Sparrell and David B. Koons. Interpretation of coverbal depictive gestures. In AAAI Spring Symposium Series: Intelligent Multi-Media Multi-Modal Systems, pages 8–12. Stanford University, March 1994.

  22. Henrik Tramberend. Avocado: A distributed virtual reality framework. In Proceedings of the IEEE Virtual Reality, pages 14–21, 1999.

Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Sowa, T., Wachsmuth, I. (2002). Interpretation of Shape-Related Iconic Gestures in Virtual Environments. In: Wachsmuth, I., Sowa, T. (eds) Gesture and Sign Language in Human-Computer Interaction. GW 2001. Lecture Notes in Computer Science, vol 2298. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-47873-6_3

  • DOI: https://doi.org/10.1007/3-540-47873-6_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-43678-2

  • Online ISBN: 978-3-540-47873-7
