DOI: 10.1145/3657242.3658598

Towards Human Action Recognition in Open Environments: A Proof of Concept on Spanish Sign Language Vowels

Published: 19 June 2024
  Abstract

    Gestures and body language are fundamental communication mechanisms between human beings. Making machines recognize this language is an active area of research in computer vision, with applications as diverse as video surveillance and healthcare. Most of these developments, however, are carried out in closed, controlled environments (e.g., a laboratory with fixed conditions of light, climate, furniture, and backgrounds). Real-time motion and gesture recognition through cameras in uncontrolled, open, or outdoor environments (such as a shopping mall, museum lobby, train station platform, or airport terminal), where the system must interact with passers-by, has received far less attention. The task poses technological, social, and legal challenges that need to be carefully addressed. This work is a first step towards human action recognition in open environments, adapted to contactless interaction using video captured in real time by a camera in these spaces. As a proof of concept, we explore the development of a system for real-time recognition of Spanish Sign Language (Lengua de Signos Española, LSE), selecting the vowels as the target gestures.
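
    The abstract states the goal (real-time, camera-based recognition of the five LSE vowel gestures) without implementation detail. For orientation only, the Python snippet below is a minimal sketch of one plausible per-frame pipeline, not the paper's actual method: it assumes MediaPipe Hands for hand-landmark extraction and OpenCV for camera capture, and classify_vowel is a hypothetical placeholder for a trained classifier.

    # Illustrative sketch only, not the system from the paper. Assumes
    # MediaPipe Hands (hand landmarks) and OpenCV (camera capture);
    # classify_vowel is a hypothetical stub for a trained model.
    import cv2
    import mediapipe as mp

    def classify_vowel(features):
        # Hypothetical placeholder: a real system would feed the 42-dim
        # landmark vector to a model trained on labeled LSE vowel gestures.
        return "?"

    def run(camera_index=0):
        cap = cv2.VideoCapture(camera_index)
        # Vowels are static one-handed gestures, so per-frame detection
        # of a single hand is sufficient.
        with mp.solutions.hands.Hands(max_num_hands=1,
                                      min_detection_confidence=0.5) as hands:
            while cap.isOpened():
                ok, frame = cap.read()
                if not ok:
                    break
                # MediaPipe expects RGB input; OpenCV captures BGR.
                result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
                if result.multi_hand_landmarks:
                    lm = result.multi_hand_landmarks[0].landmark
                    # 21 landmarks, (x, y) normalized to the frame size.
                    features = [c for p in lm for c in (p.x, p.y)]
                    cv2.putText(frame, classify_vowel(features), (10, 40),
                                cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
                cv2.imshow("LSE vowels", frame)
                if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
                    break
        cap.release()
        cv2.destroyAllWindows()

    if __name__ == "__main__":
        run()

    In the open environments the paper targets, such a loop would additionally have to cope with the technological, social, and legal factors the abstract mentions (varying backgrounds and lighting, passers-by, consent), which is precisely the gap the work addresses.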



      Published In

      Interacción '24: Proceedings of the XXIV International Conference on Human Computer Interaction
      June 2024
      155 pages

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. Computer Vision
      2. Contactless Interface
      3. Human Action Recognition
      4. Human Gesture Recognition
      5. Spanish Sign Language
      6. Uncontrolled Environment

      Qualifiers

      • Short-paper
      • Research
      • Refereed limited

      Funding Sources

      • Comunidad Autónoma de La Rioja
      • MCIN/AEI/10.13039/501100011033

      Conference

      INTERACCION 2024

      Acceptance Rates

      Overall acceptance rate: 109 of 163 submissions, 67%
