Abstract
This paper presents a novel approach to sign language recognition that achieves extremely high classification rates from minimal training data. Key to this approach is a two-stage classification procedure in which an initial classification stage extracts a high-level description of hand shape and motion. This high-level description is based upon sign linguistics and describes actions at a conceptual level easily understood by humans. Moreover, such a description generalises broadly over temporal activity, naturally overcoming the variability of people and environments. A second classification stage then models the temporal transitions of individual signs using a classifier bank of Markov chains combined with Independent Component Analysis. We demonstrate classification rates as high as 97.67% for a lexicon of 43 words using only single-instance training, outperforming previous approaches that require thousands of training examples.
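The second-stage classifier bank can be illustrated with a minimal sketch (a hypothetical discrete symbol alphabet and add-one-style smoothing are assumed, and the ICA step is omitted): each sign is modelled as a first-order Markov chain over high-level feature symbols, trained from a single example sequence and selected by log-likelihood at classification time.

```python
import math
from collections import defaultdict

# Assumed size of the discrete high-level symbol alphabet (illustrative only).
ALPHABET_SIZE = 16

class MarkovChainBank:
    """Bank of first-order Markov chains, one chain per sign."""

    def __init__(self, smoothing=1e-3):
        self.smoothing = smoothing
        self.models = {}  # sign label -> nested transition-count table

    def fit(self, sign, sequence):
        # Single-instance training: count symbol-to-symbol transitions
        # observed in one example sequence of the sign.
        counts = defaultdict(lambda: defaultdict(float))
        for a, b in zip(sequence, sequence[1:]):
            counts[a][b] += 1.0
        self.models[sign] = counts

    def _log_likelihood(self, counts, sequence):
        # Smoothed log-likelihood of a sequence under one chain.
        ll = 0.0
        for a, b in zip(sequence, sequence[1:]):
            total = sum(counts[a].values())
            p = (counts[a][b] + self.smoothing) / (total + self.smoothing * ALPHABET_SIZE)
            ll += math.log(p)
        return ll

    def classify(self, sequence):
        # Pick the sign whose chain assigns the sequence the highest likelihood.
        return max(self.models,
                   key=lambda s: self._log_likelihood(self.models[s], sequence))
```

Usage: train each chain on one observed symbol sequence per sign, then classify a new sequence by maximum likelihood over the bank.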
© 2004 Springer-Verlag Berlin Heidelberg
Bowden, R., Windridge, D., Kadir, T., Zisserman, A., Brady, M. (2004). A Linguistic Feature Vector for the Visual Interpretation of Sign Language. In: Pajdla, T., Matas, J. (eds) Computer Vision - ECCV 2004. ECCV 2004. Lecture Notes in Computer Science, vol 3021. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-24670-1_30
Print ISBN: 978-3-540-21984-2
Online ISBN: 978-3-540-24670-1