Providing signed content on the Internet by synthesized animation

Published: 01 September 2007

Abstract

Written information is often of limited accessibility to deaf people who use sign language. The eSign project was undertaken as a response to the need for technologies enabling efficient production and distribution over the Internet of sign language content. By using an avatar-independent scripting notation for signing gestures and a client-side web browser plug-in to translate this notation into motion data for an avatar, we achieve highly efficient delivery of signing, while avoiding the inflexibility of video or motion capture. Tests with members of the deaf community have indicated that the method can provide an acceptable quality of signing.
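
The abstract describes a client-side pipeline: a compact, avatar-independent gesture script is sent over the network, and a browser plug-in expands it into motion data for whichever avatar is installed locally. The following Python sketch illustrates that idea in outline only; the toy notation, posture names, and skeleton mapping are hypothetical and are not the eSign project's actual SiGML format or plug-in API.

```python
# Illustrative sketch only: translating an avatar-independent gesture
# script into motion data. This is NOT the eSign/SiGML implementation;
# the notation, posture table, and joint names are invented.
from dataclasses import dataclass

@dataclass
class Keyframe:
    time: float            # seconds from the start of the sign
    joint_rotations: dict  # joint name -> (x, y, z) Euler angles, degrees

# A toy "script": each step names a posture and how long to hold it.
# A real notation (e.g. HamNoSys/SiGML) is far richer, covering handshape,
# orientation, location, movement, and non-manual features.
SCRIPT = [
    ("hand_raised", 0.4),
    ("hand_forward", 0.3),
    ("hand_rest", 0.3),
]

# Avatar-specific posture table. A different avatar supplies its own table,
# which is what keeps the script itself avatar-independent.
POSTURES = {
    "hand_raised":  {"r_shoulder": (0, 0, 80),  "r_elbow": (0, 45, 0)},
    "hand_forward": {"r_shoulder": (0, 60, 40), "r_elbow": (0, 10, 0)},
    "hand_rest":    {"r_shoulder": (0, 0, 0),   "r_elbow": (0, 0, 0)},
}

def script_to_keyframes(script):
    """Expand the notation into timed keyframes for this avatar's skeleton."""
    frames, t = [], 0.0
    for posture_name, duration in script:
        frames.append(Keyframe(t, POSTURES[posture_name]))
        t += duration
    return frames

if __name__ == "__main__":
    for kf in script_to_keyframes(SCRIPT):
        print(f"{kf.time:.1f}s  {kf.joint_rotations}")
```

The point of the split is the one the abstract makes: the script is tiny compared with video or motion-capture data, so delivery is efficient, and because the avatar-specific expansion happens client-side, the same script can drive any avatar without re-authoring the content.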




Published In

ACM Transactions on Computer-Human Interaction, Volume 14, Issue 3 (September 2007), 124 pages
ISSN: 1073-0516
EISSN: 1557-7325
DOI: 10.1145/1279700

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

Animation, HamNoSys, SiGML, avatar, deaf accessibility, scripting, sign language, virtual reality
