Research Article | Public Access

Evaluation of Language Feedback Methods for Student Videos of American Sign Language

Published: 04 April 2017
    Abstract

    This research investigates how best to present video-based feedback information to students learning American Sign Language (ASL); these results are relevant not only for the design of a software tool for providing automatic feedback to students but also in the context of how ASL instructors could convey feedback on students’ submitted work. It is known that deaf children benefit from early exposure to language, and higher levels of written language literacy have been measured in deaf adults who were raised in homes using ASL. In addition, prior work has established that new parents of deaf children benefit from technologies to support learning ASL. As part of a long-term project to design a tool to automatically analyze a video of a student’s signing and provide immediate feedback about fluent and non-fluent aspects of their movements, we conducted a study to compare multiple methods of conveying feedback to ASL students, using videos of their signing. Through two user studies with a Wizard-of-Oz design, we compared multiple types of feedback with regard to users’ subjective judgments of system quality and the degree to which students’ signing improved (as judged by an ASL instructor who analyzed recordings of students’ signing before and after they viewed each type of feedback). The initial study revealed that displaying to students videos of their own signing, augmented with feedback messages about their errors or correct ASL usage, yielded higher subjective scores and greater signing improvement. Students gave higher subjective scores to a version in which time-synchronized pop-up messages appeared overlaid on the student's video to indicate errors or correct ASL usage. In a subsequent study, we found that providing images of correct ASL face and hand movements as part of the feedback yielded even higher subjective evaluation scores from ASL students using the system.
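
    The abstract describes a feedback condition in which time-synchronized pop-up messages appeared overlaid on the student's own video. As a rough illustration only (this is not the authors' implementation; the data shape, function, and element names below are hypothetical), the following TypeScript sketch shows one way such overlays could be driven from a list of time-stamped feedback annotations, using the standard HTML5 video "timeupdate" event:

        // Hypothetical sketch: show feedback messages over a recorded signing video
        // during the time spans an annotator (or, eventually, an automatic system)
        // has marked as errors or as correct ASL usage.
        interface FeedbackMessage {
          startSec: number;            // playback time at which the message appears
          endSec: number;              // playback time at which it disappears
          text: string;                // e.g., "Eyebrows should be raised during this question"
          kind: "error" | "correct";   // error notice vs. praise for correct usage
        }

        function attachFeedbackOverlay(
          video: HTMLVideoElement,
          overlay: HTMLElement,
          messages: FeedbackMessage[],
        ): void {
          video.addEventListener("timeupdate", () => {
            const t = video.currentTime;
            // Display every message whose annotated time span contains the current playback time.
            const active = messages.filter(m => t >= m.startSec && t <= m.endSec);
            overlay.textContent = active.map(m => m.text).join("   ");
            overlay.hidden = active.length === 0;
            // Tag the overlay so CSS can style error and correct-usage messages differently.
            overlay.dataset.kind = active.some(m => m.kind === "error") ? "error" : "correct";
          });
        }

    In a study setting like the one described, the message list could simply be authored by hand for each student video (consistent with the Wizard-of-Oz design), while a future automatic-analysis tool would generate the same structure from its recognition output.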

    Supplementary Material

    huenerfauth (huenerfauth.zip)
    Supplemental movie, appendix, image, and software files for "Evaluation of Language Feedback Methods for Student Videos of American Sign Language"

    Published In

    ACM Transactions on Accessible Computing, Volume 10, Issue 1
    Special Issue (Part 2) of Papers from ASSETS 2015
    April 2017, 90 pages
    ISSN: 1936-7228
    EISSN: 1936-7236
    DOI: 10.1145/3064528

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 04 April 2017
    Accepted: 01 December 2016
    Received: 01 April 2016
    Published in TACCESS Volume 10, Issue 1

    Author Tags

    1. American Sign Language
    2. education
    3. feedback
    4. user study

    Qualifiers

    • Research-article
    • Research
    • Refereed

    Article Metrics

    • Downloads (last 12 months): 96
    • Downloads (last 6 weeks): 14
    Reflects downloads up to 26 Jul 2024

    Cited By

    • (2023) Multi-Modal Multi-Channel American Sign Language Recognition. International Journal of Artificial Intelligence and Robotics Research 1(1). DOI: 10.1142/S2972335324500017. Online publication date: 20-Dec-2023.
    • (2022) Towards Accessible Sign Language Assessment and Learning. Proceedings of the 2022 International Conference on Multimodal Interaction, 626-631. DOI: 10.1145/3536221.3556623. Online publication date: 7-Nov-2022.
    • (2022) Understanding ASL Learners’ Preferences for a Sign Language Recording and Automatic Feedback System to Support Self-Study. Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility, 1-5. DOI: 10.1145/3517428.3550367. Online publication date: 23-Oct-2022.
    • (2021) The FATE Landscape of Sign Language AI Datasets. ACM Transactions on Accessible Computing 14(2), 1-45. DOI: 10.1145/3436996. Online publication date: 21-Jul-2021.
    • (2021) An Intelligent Sign Communication Machine for People Impaired with Hearing and Speaking Abilities. Advanced Computing, 75-86. DOI: 10.1007/978-981-16-0401-0_6. Online publication date: 11-Feb-2021.
    • (2021) Engendering Trust in Automated Feedback: A Two Step Comparison of Feedbacks in Gesture Based Learning. Artificial Intelligence in Education, 190-202. DOI: 10.1007/978-3-030-78292-4_16. Online publication date: 11-Jun-2021.
    • (2020) Feedback Strategies for Embodied Agents to Enhance Sign Language Vocabulary Learning. Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents, 1-8. DOI: 10.1145/3383652.3423871. Online publication date: 20-Oct-2020.
    • (2020) Implementing Gesture Recognition in a Sign Language Learning Application. 2020 31st Irish Signals and Systems Conference (ISSC), 1-6. DOI: 10.1109/ISSC49989.2020.9180197. Online publication date: Jul-2020.
    • (2019) Student Perceptions of Mobile Video Recording to Learn American Sign Language. International Journal of Mobile and Blended Learning 11(1), 1-11. DOI: 10.4018/IJMBL.2019010101. Online publication date: 1-Jan-2019.
    • (2018) Recognizing American Sign Language Gestures from Within Continuous Videos. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2145-214509. DOI: 10.1109/CVPRW.2018.00280. Online publication date: Jul-2018.
