Abstract
For deaf persons to have ready access to information and communication technologies (ICTs), these technologies must be usable in sign language (SL), i.e., they must include interlanguage interfaces. Such applications will be accepted by deaf users only if they are reliable and respect the specificities of SL, namely the use of space and iconicity as the structuring principles of the language. Before developing ICT applications, these features must be modeled, both to enable the analysis of SL videos and to generate SL messages by means of signing avatars. This paper presents a signing space model, implemented within a context of automatic analysis and automatic generation, both currently under development.
Notes
A proform is a handshape that refers to an entity previously signed in the discourse. The proform not only identifies an entity among several, but also conveys a particular point of view on this entity with respect to the context. It is used to spatialize an entity in the signing space and to express relations between entities and their actions.
Allen’s temporal relations are denoted as follows: v: or (disjunction), =: equal, <: precedes, m: immediately precedes (meets), o: partially overlaps, e: completely overlaps at end.
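The interval relations listed in this note can be made concrete with a small sketch. The following Python function (an illustration of Allen-style interval reasoning, not code from the paper) classifies a pair of intervals, given as (start, end) tuples, into the relations named above; the function name and representation are assumptions for this example.

```python
def allen_relation(a, b):
    """Classify the relation between intervals a and b, each a (start, end) tuple.

    Returns one of the symbols used in the note:
      '=' equal, '<' precedes, 'm' immediately precedes (meets),
      'o' partially overlaps, 'e' completely overlaps at end.
    Returns None for any of Allen's other relations (starts, during, ...).
    """
    a_start, a_end = a
    b_start, b_end = b
    if (a_start, a_end) == (b_start, b_end):
        return '='                      # same start and end
    if a_end < b_start:
        return '<'                      # a ends strictly before b starts
    if a_end == b_start:
        return 'm'                      # a's end coincides with b's start
    if a_start < b_start < a_end < b_end:
        return 'o'                      # a overlaps the beginning of b
    if a_end == b_end and a_start > b_start:
        return 'e'                      # a ends with b but starts later
    return None
```

For example, `allen_relation((0, 3), (3, 5))` yields `'m'`: the first interval ends exactly where the second begins.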
References
Allen, J.F.: Towards a general theory of action and time. In: Allen, J., Hendler, J., Tate, A. (eds.) Readings in Planning, pp. 464–479. Kaufmann, San Mateo (1990)
Baader, F., et al. (eds.): The Description Logic Handbook. Cambridge University Press, Cambridge (2003). ISBN 0521781760
Bowden, R., Windridge, D., Kadir, T., Zisserman, A., Brady, M.: A linguistic feature vector for the visual interpretation of sign language. In: Pajdla, T., Matas, J. (eds.) Proceedings of 8th European Conference on Computer Vision, ECCV04. LNCS3022, vol. 1, pp. 391–401. Springer (2004)
Braffort, A.: Reconnaissance et Compréhension de gestes, application à la langue des signes. PhD thesis, Université Paris-XI Orsay (1996)
Braffort, A.: ARGo: an architecture for sign language recognition and interpretation. In: Harling, P., Edwards, A. (eds.) “Progress in Gestural Interaction”, 1st International Gesture Workshop (GW’96), Springer, Heidelberg (1997)
Braffort, A.: Research on computer science and sign language: ethical aspects. In: Wachsmuth, I., Sowa, T. (eds.) “Gesture and Sign Language in Human-Computer Interaction”, selected revised papers of the 4th International Gesture Workshop (GW’01), LNCS LNAI 2298, Springer, Heidelberg (2002)
Braffort, A., Bossard, B., Segouat, J., Bolot, L., Lejeune, F.: Modélisation des relations spatiales en langue des signes française. In: Proceedings of Traitement Automatique de la Langue des Signes, CNRS, ATALA (2005)
Braffort, A., Lejeune, F.: Spatialised semantic relations in French sign language: toward a computational modelling. In: Gibet, S. (ed.) “Gesture in Human-Computer Interaction and Simulation”, selected revised papers of the 6th International Gesture Workshop (GW’05), LNCS LNAI 3881, Springer, Heidelberg (2006)
Cuxac, C.: French sign language: proposition of a structural explanation by iconicity. In: Braffort, A., Gherbi, R., Gibet, S., et al. (eds.) “Gesture-based Communication in Human-Computer Interaction”, selected revised papers of the 3rd International Gesture Workshop (GW’99), LNCS LNAI 1739, Springer, Heidelberg (1999)
Dalle, P., Lenseigne, B.: Vision-based sign language processing using a predictive approach and linguistic knowledge. In: IAPR Conference on Machine Vision Applications–MVA Tsukuba Science City, Japan. IAPR, pp. 510–513 (2005)
Fasel, B., Luettin, J.: Automatic facial expression analysis: a survey. Pattern Recognit. 36, 259–275 (2003)
Filhol, M., Braffort, A.: A sequential approach to lexical sign description, LREC 2006—Workshop on Sign Languages, Genova, Italy (2006)
Garcia, B., Boutet, D., Braffort, A., Dalle, P.: Sign language in graphical form: methodology, modellisation and representations for gestural communication. In: Interacting Bodies (ISGS), Lyon, France (2005)
Gavrila, D.M.: The visual analysis of human movement: a survey. Comput. Vis. Image Underst. 73(1), 82–98 (1999)
Hanke, T.: HamNoSys—an introductory guide. Signum, Hamburg (1989)
Huenerfauth, M.: Spatial representation of classifier predicates for machine translation into American sign language. In: Workshop on Representation and Processing of Sign Language, 4th International Conference on Language Resources and Evaluation (LREC 2004), pp. 24–31, Lisbon, Portugal (2004)
Kennaway, R.: Synthetic animation of deaf signing gestures. In: Wachsmuth, I., Sowa, T. (eds.) “Gesture and Sign Language in Human-Computer Interaction”, selected revised papers of the 4th International Gesture Workshop (GW’01), LNCS LNAI 2298, Springer, Heidelberg (2002)
Lenseigne, B., Gianni, F., Dalle, P.: A new gesture representation for sign language analysis. In: Workshop on Representation and Processing of Sign Language, 4th International Conference on Language Resources and Evaluation (LREC 2004), pp. 85–90, Lisbon, Portugal (2004)
Lenseigne, B., Dalle, P.: Using signing space as a representation for sign language processing. In: Gibet, S. (ed.) “Gesture in Human-Computer Interaction and Simulation”, selected revised papers of the 6th International Gesture Workshop (GW’05), LNCS LNAI 3881, Springer, Heidelberg (2006)
Liddell, S.: Grammar, Gesture and Meaning in American Sign Language. Cambridge University Press, Cambridge (2003)
Marshall, I., Safar, E.: Sign language generation in an ALE HPSG (invited paper). In: Muller, S. (ed.) Proceedings of the 11th International Conference on Head-Driven Phrase Structure Grammar (HPSG 2004), Center for Computational Linguistics, Katholieke Universiteit Leuven, pp. 189–201 (2004)
Mercier, H., Peyras, J., Dalle, P.: Toward an efficient and accurate AAM fitting on appearance varying faces. In: 7th International Conference on Automatic Face and Gesture Recognition, Southampton, UK, pp. 363–368 (2005)
Ong, S., Ranganath, S.: Automatic sign language analysis: a survey and the future beyond lexical meaning. IEEE Trans. Pattern Anal. Mach. Intell. 27(6), 873–891 (2005)
Vogler, C., Metaxas, D.: Handshapes and movements: multiple-channel American sign language recognition. In: Camurri, A., Volpe, G. (eds.) “Gesture-based Communication in Human-Computer Interaction”, selected revised papers of the 5th International Gesture Workshop (GW’03), LNCS LNAI, vol. 2915, Springer, Heidelberg (2004)
Yang, M., Kriegman, D.J., Ahuja, N.: Detecting faces in images: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 24(1), 34–58 (2002)
Cite this article
Braffort, A., Dalle, P. Sign language applications: preliminary modeling. Univ Access Inf Soc 6, 393–404 (2008). https://doi.org/10.1007/s10209-007-0103-y