DOI: 10.1145/3519391.3519396

Understanding Challenges and Opportunities of Technology-Supported Sign Language Learning

Published: 18 April 2022

Abstract

Around 466 million people in the world live with hearing loss, with many benefiting from sign language as a means of communication. Advances in technology-supported learning have made the autodidactic acquisition of sign languages, e.g., American Sign Language (ASL), possible. However, little is known about best practices for teaching signs using technology. This work investigates four conditions for teaching ASL signs: audio, visual, electrical muscle stimulation (EMS), and visual combined with EMS. In a user study, we compare participants’ accuracy in executing signs, their recall after a two-week break, and the user experience. Our results show that the conditions involving EMS resulted in the best overall user experience. Moreover, ten ASL experts rated the signs performed under the combined visual and EMS condition highest. We conclude with the potentials and drawbacks of each condition and present implications that will benefit the design of future learning systems.

Supplementary Material

MP4 File (ahs2022-5.mp4)
Supplemental video


Cited By

  • (2024) Understanding User Acceptance of Electrical Muscle Stimulation in Human-Computer Interaction. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10.1145/3613904.3642585. Online publication date: 11 May 2024.
  • (2022) From Perception to Action: A Review and Taxonomy on Electrical Muscle Stimulation in HCI. Proceedings of the 21st International Conference on Mobile and Ubiquitous Multimedia, 159–171. https://doi.org/10.1145/3568444.3568460. Online publication date: 27 November 2022.


Published In

AHs '22: Proceedings of the Augmented Humans International Conference 2022
March 2022
350 pages
ISBN:9781450396325
DOI:10.1145/3519391
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Sign language learning
  2. audio
  3. electrical muscle stimulation
  4. visual

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

AHs 2022: Augmented Humans 2022
March 13–15, 2022
Kashiwa, Chiba, Japan

Article Metrics

  • Downloads (last 12 months): 51
  • Downloads (last 6 weeks): 11
Reflects downloads up to 06 Oct 2024

