Sign Language Recognition System Using Machine Learning
https://doi.org/10.22214/ijraset.2022.47576
International Journal for Research in Applied Science & Engineering Technology (IJRASET)
ISSN: 2321-9653; IC Value: 45.98; SJ Impact Factor: 7.538
Volume 10 Issue XI Nov 2022- Available at www.ijraset.com
Abstract: Speech and language are the primary means by which people understand one another. We share our thoughts by hearing each other, and today we can even issue commands through voice recognition. But what of those who hear nothing and consequently cannot speak? Sign language is the main communication tool for hearing-impaired and mute people, and automatic sign language interpretation is an extensive area of research aimed at supporting their independent lives. Many techniques and algorithms have been developed in this area using image processing and artificial intelligence. Every sign language recognition system is trained to recognize signs and convert them into the desired output. The goal of the proposed system is to give a voice to those who cannot speak: in this document, ambidextrous Indian Sign Language gestures are captured as a series of images, processed using Python, and then converted to speech and text.
Technical keywords: machine learning, sign language, pre-processing.
I. INTRODUCTION
Sign languages are in use on a wide and global scale. Several sign languages are commonly used around the world, among them ASL (American Sign Language), ISL (Indian Sign Language), BSL (Bangladeshi Sign Language), and MSL (Malay Sign Language). These languages were created and developed through a great deal of work and hands-on testing, with the intention of being practical for deaf and mute persons. Where a spoken language is built from words and their meanings, a sign language is built from a "character" and the "action of this character". Since those who are deaf from birth cannot hear, we cannot teach them the meaning of a sign simply by writing the word. We are motivated by the aim of using new technologies for the betterment of humanity, and we found that technologies such as machine learning can be used to overcome the disadvantages caused by this physical disability.
II. LITERATURE SURVEY
The authors of article [3] present a Hidden Markov Model based sign-language-to-speech conversion system for Tamil. Fluent speakers communicate their thoughts, ideas and experiences through voice interaction with the people around them. Achieving the same level of communication is difficult for the deaf and mute population, who express themselves through sign language; easy communication between the former and the latter is necessary for them to become an integral part of society. The goal of this work is to develop a sign language recognition system that helps realize this necessity. In the proposed work, a hand gesture recognition module based on an accelerometer and a gyroscope is developed to recognize various hand gestures, which are translated into Tamil phrases, and an HMM-based text-to-speech synthesizer is built to convert the corresponding text into synthetic speech.
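To make the per-gesture HMM idea of [3] concrete, the following minimal sketch (not from the paper) trains one Gaussian HMM per gesture on accelerometer/gyroscope sequences and classifies a new sequence by maximum log-likelihood. It assumes the Python hmmlearn library; the data shapes, state count and gesture labels are illustrative.

# One-HMM-per-gesture classification sketch, assuming hmmlearn.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_gesture_models(sequences_by_gesture, n_states=4):
    """Fit one Gaussian HMM per gesture class.

    sequences_by_gesture: dict mapping a gesture label to a list of
    (T_i, 6) arrays of accelerometer + gyroscope readings (hypothetical).
    """
    models = {}
    for gesture, seqs in sequences_by_gesture.items():
        X = np.vstack(seqs)                # concatenate all training sequences
        lengths = [len(s) for s in seqs]   # per-sequence lengths for fit()
        m = GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=100)
        m.fit(X, lengths)
        models[gesture] = m
    return models

def classify(models, seq):
    """Pick the gesture whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda g: models[g].score(seq))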
The author of article [4] presents an American Sign Language to text and speech translator. The 2001 study by Viola and Jones was a milestone in the development of an algorithm capable of detecting human faces in real time. The original technique was used only for face detection, but many researchers have since applied it to detect other objects such as eyes, mouths, car license plates and traffic signs; hand signs are among the objects it has successfully detected. The article proposes a system that can automatically detect static hand signs of the alphabet in American Sign Language (ASL). For this, two concepts are combined: AdaBoost and Haar-like classifiers. To increase the accuracy of the system, a large database is used for the training process, which yields impressive results. The translator was implemented and trained using a dataset of 28,000 positive hand-sign images (1,000 images for each sign, at different scales and lighting conditions) and a dataset of 11,100 negative images. All positive images were taken with a Logitech webcam, with the image size set to the standard VGA resolution of 640x480. Experiments show that the system can recognize all signs with an accuracy of 98.7%. The input of this system is live video, and the output is text and speech.
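As an illustration of the Haar-plus-AdaBoost detection idea behind [4], the sketch below runs OpenCV's cascade classifier (an implementation of the Viola-Jones technique) over live video. The cascade file name is hypothetical: a hand-sign cascade would have to be trained from positive and negative images as the paper describes, since OpenCV ships face cascades, not ASL ones.

# Viola-Jones-style detection with OpenCV's cascade classifier.
import cv2

cascade = cv2.CascadeClassifier("asl_a_cascade.xml")  # hypothetical model file

cap = cv2.VideoCapture(0)              # live video input, as in the paper
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale slides the trained cascade over the image at
    # several scales, returning bounding boxes of detections.
    hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in hits:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("ASL sign detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()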
The authors of article [5] describe the design and implementation of a sign-to-speech/text system for deaf and mute people. The paper presents an approach to the design and implementation of a smart glove. Several studies have sought an easier way for non-vocal people to communicate with vocal people and express themselves to the hearing world; most of this development has targeted sign languages, mainly American Sign Language. This research aims to develop a sign-to-Arabic translator based on a smart glove connected wirelessly to a microcontroller and to text/voice presentation devices. An approach to displaying Arabic text was developed and programmed, and the entire system was implemented, programmed, encapsulated and tested with very good results.
The author of article [6] also addresses the design and implementation of a sign-to-speech/text system. The paper explores two approaches to the design and implementation of a sign language translator, both developed and implemented to display text and voice in two languages (Arabic and English). In the first part of the article, a vision-based system is developed and demonstrated. In the second part, a glove-based system is designed and implemented; it is based on a wirelessly interfaced glove, a microcontroller and a presentation device for Arabic/English sign translation. Both systems were tested with very good results.
The author of article [7] presents Talking Hands, gloves for Indian Sign Language to speech translation. According to recent statistics, about 7.5% of the population is hearing impaired, and Indian Sign Language is their only means of communication. The paper presents a technique for improving the sign language recognition system. In the proposed method, sensors built into the glove detect gestures, which are converted to speech using a Bluetooth module and an Android smartphone. The gloves help produce artificial speech, providing an environment similar to everyday communication that is otherwise difficult to achieve for people with speech impairments.
The authors of article [8] present a sign-language-to-speech helper system for people with speech disorders. People with speech and hearing impairments need to socialize with people without disabilities, who can converse easily in their daily lives. Deaf and mute people all over the world use the sign language of their country or local region to communicate with others, but only someone who has undergone training is able to understand it. Sign language mainly consists of hand and head gestures that express the meaning of letters or words; non-verbal cues such as hand shape, hand orientation, head movements and body movements convey the speaker's idea. People with speech impairments use sign language to communicate, but it is often misinterpreted or incorrectly identified, which creates a communication gap between physically disabled people and everyone else. By recognizing sign language automatically, this gap can be overcome, allowing a deaf-mute person to communicate without acoustic sound, through signs alone.
The authors of article [9] describe converting Devanagari printed text to speech using OCR. The article describes a neoteric approach to bridging the communication gap between deaf and hearing people. In every community there is a group of disabled people who face serious difficulties in communication due to their speech and hearing impairments.
These people use various gestures and symbols to send and receive messages, a way of communication called sign language. Still, the communication problem does not end there, because speakers of natural languages do not understand sign language, which results in a communication gap. There is therefore a need for a system that can act as an interpreter and translator between a sign language speaker and a natural language speaker. For this purpose, a software solution was developed in this research using a recent Microsoft technology, Kinect for Windows V2. The proposed system, called Deaf Talk, works as a sign language interpreter and translator that provides a dual mode of communication between sign language speakers and natural language speakers. The dual mode of communication comprises the following independent modules: (1) sign/gesture-to-speech conversion, and (2) speech-to-sign-language conversion. In the sign-to-speech module, a person with a speech impediment positions themselves within the Kinect's field of view (FOV) and performs sign language gestures. The system receives the gestures through the Kinect sensor and recognizes them by comparing them against a database of stored, trained gestures. Once a gesture is identified, it is mapped to the keyword corresponding to that gesture. The keywords are then sent to a text-to-speech conversion engine, which speaks the sentence for the natural language speaker. Conversely, the speech-to-sign module translates spoken language into sign language: a hearing person positions themselves in the field of view of the Kinect sensor and speaks in their native language (in this case English). The system converts the speech to text using a speech-to-text API; the keywords are then mapped to their corresponding pre-saved animated gestures, and the animations for the spoken sentence are played on the screen. In this way, the disabled person can visualize a spoken sentence translated into 3D animated sign language. Deaf Talk's accuracy is 87 percent for speech-to-sign and 84 percent for sign-to-speech conversion.
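The two mapping steps of Deaf Talk can be pictured with a short sketch. The dictionaries, gesture ids and animation file names below are hypothetical placeholders; Kinect gesture capture and the speech-to-text API are out of scope here, and pyttsx3 stands in for the text-to-speech engine.

# Sketch of the gesture->keyword->speech and speech->keyword->animation
# mappings described for Deaf Talk [9]. All data here is hypothetical.
import pyttsx3

GESTURE_TO_KEYWORD = {0: "hello", 1: "thank you", 2: "help"}    # hypothetical
KEYWORD_TO_ANIMATION = {"hello": "anim/hello.mp4",              # hypothetical
                        "thank you": "anim/thank_you.mp4"}

def sign_to_speech(gesture_id, engine):
    """Speak the keyword mapped to a recognized gesture id."""
    word = GESTURE_TO_KEYWORD.get(gesture_id)
    if word:
        engine.say(word)
        engine.runAndWait()

def speech_to_sign(transcript):
    """Return animation clips for keywords found in recognized speech."""
    return [clip for word, clip in KEYWORD_TO_ANIMATION.items()
            if word in transcript.lower()]

engine = pyttsx3.init()
sign_to_speech(0, engine)                  # speaks "hello"
print(speech_to_sign("Hello, thank you!"))  # both stored clips match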
The authors of article [10] present a sign-to-speech prototype using an SVM classifier. About 70 million people in the world are mute, and some of them are children who suffer from non-verbal autism. Communication between speech-impaired people and others is very difficult: people with speech impairments normally use sign language to communicate, but not everyone understands it. In this paper, a prototype is designed that produces speech output for sign language gestures to bridge the communication gap between people with speech impairments and everyone else. The prototype consists of a glove with built-in flex sensors, gyroscopes and accelerometers. These sensors capture the user's gestures in real time. An Arduino Nano microcontroller collects the data from these sensors and sends it to a PC via Bluetooth. The computer processes the data sent by the Arduino and runs a machine learning algorithm, a Support Vector Machine (SVM), to classify the sign language gestures and predict the word associated with each gesture. The prototype is very compact and can recognize both American Sign Language (ASL) and Indian Sign Language (ISL); it not only gives mute people a voice but also makes them multilingual.
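A minimal sketch of the classification step, assuming scikit-learn: feature vectors built from the flex sensor, gyroscope and accelerometer readings are fed to an SVM. The feature count, class count and random placeholder data are illustrative, not values from [10]; in the prototype the readings would arrive from the Arduino Nano over Bluetooth.

# SVM classification of glove sensor frames, assuming scikit-learn.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical dataset: 500 gesture samples, 11 features each
# (5 flex sensors + 3-axis gyroscope + 3-axis accelerometer).
X = np.random.rand(500, 11)
y = np.random.randint(0, 5, size=500)    # 5 hypothetical gesture classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

clf = SVC(kernel="rbf", C=1.0)           # the SVM classifier, as in [10]
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# A live reading would be classified the same way:
reading = np.random.rand(1, 11)          # one 11-value sensor frame
print("predicted gesture id:", clf.predict(reading)[0])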
III. ALGORITHM
The system accepts input in the form of a live image dataset. Since the system processes data and trains on the dataset, three modules are used: pre-processing, feature extraction and classification, all built around the CNN algorithm. First, the live images are fed in; then the dataset is pre-processed (the pre-processing step cleans each image and removes blur and noise). Next, the system extracts the parameters or features of each image in the feature extraction stage. Finally, in the classification stage, the CNN algorithm recognizes the sign language gesture. These are the steps used to train the CNN; a sketch is given below.
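The following sketch outlines this pipeline, assuming TensorFlow/Keras and OpenCV. The image size, layer configuration and number of sign classes are illustrative choices rather than values from the paper; in a CNN the convolutional layers act as the feature extractor and the dense layers perform the classification.

# Pre-processing plus a small CNN classifier, assuming TensorFlow/Keras.
import cv2
import numpy as np
from tensorflow.keras import layers, models

NUM_CLASSES = 26          # e.g., one class per alphabet sign (assumption)
IMG_SIZE = 64             # input images resized to 64x64 (assumption)

def preprocess(frame):
    """Pre-processing: grayscale, denoise, resize, normalize to [0, 1]."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    clean = cv2.GaussianBlur(gray, (3, 3), 0)   # one simple denoising choice
    resized = cv2.resize(clean, (IMG_SIZE, IMG_SIZE))
    return resized.astype("float32") / 255.0

def build_cnn():
    """Conv layers extract features; dense layers classify the sign."""
    model = models.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model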
V. MATHEMATICAL MODEL
Let S be the whole system, S = {IP, Pro, OP}, where:
A. Input: IP = {u, F}, where u is the user and F is the set of image files used as input.
B. Procedure (Pro): 1) capture the image of the sign and compare it with the dataset; 2) according to the matching image stored in the dataset, give a voice alert message to the user.
C. Output (OP): after sign detection, the voice message is announced to the user. A minimal end-to-end sketch of this flow follows.
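This hedged sketch strings the IP, Pro and OP steps together: capture a sign image, classify it with a trained CNN, and speak the result. The model file name and class labels are assumptions, and the pre-processing mirrors the Algorithm section sketch.

# End-to-end IP -> Pro -> OP flow, assuming OpenCV, Keras and pyttsx3.
import cv2
import numpy as np
import pyttsx3
from tensorflow.keras.models import load_model

IMG_SIZE = 64                                  # must match training (assumed)
model = load_model("sign_cnn.h5")              # hypothetical trained CNN file
CLASS_NAMES = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # assumed labels
engine = pyttsx3.init()                        # offline text-to-speech engine

cap = cv2.VideoCapture(0)                      # IP: capture the user's sign
ok, frame = cap.read()
cap.release()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    x = cv2.resize(gray, (IMG_SIZE, IMG_SIZE)).astype("float32") / 255.0
    probs = model.predict(x.reshape(1, IMG_SIZE, IMG_SIZE, 1))  # Pro: classify
    sign = CLASS_NAMES[int(np.argmax(probs))]
    engine.say(sign)                           # OP: voice message to the user
    engine.runAndWait()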
VII. LIMITATIONS
Most people do not understand sign language, yet a child should not have to rely on a translator. Fine motor difficulties can lead to garbled signs. It is also arguable that AAC (augmentative and alternative communication) devices are faster to learn than signing.
VIII. CONCLUSION
Sign language is a tool to reduce the communication gap between deaf and mute people and everyone else. The system proposed above provides a methodology that aims to make this two-way communication possible. The method facilitates the conversion of signs into speech and removes the need for a translator, since real-time conversion is used; the system acts as the voice of a deaf and mute person. This project is a step towards helping specially-abled people. It can be further improved by making it more user-friendly, efficient and portable, and compatible with multiple signs as well as dynamic signs. It could also be adapted to mobile phones, using the phone's built-in camera. The operating range can be increased by using a longer-range transceiver module or Wi-Fi.
REFERENCES
[1] Neha Poddar, Shrushti Rao, Shruti Sawant, Vrushali Somavanshi, Prof. Sumita Chandak (February 2015), "A Study of Sign Language Translation Using Gesture Recognition", International Journal of Advanced Research in Computer and Communication Engineering.
[2] Jayshree R. Pansare, Maya Ingle (2016), "Vision-Based Approach for American Sign Language Recognition Using Edge Orientation Histogram", 2016 International Conference on Image, Vision and Computing.
[3] Arslan Arif, Syed Tahir Hussain Rizvi, Iqra Jawaid, Muhammad Adam Waleed, "Techno-Talk: An American Sign Language (ASL) Translator", CoDIT'16, April 6-8, 2016, Malta.
[4] Justin K. Chen, Debabrata Sengupta, Rukmani Ravi Sundaram, "Sign Language Gesture Recognition with Unsupervised Feature Learning".
[5] Matheesha Fernando, Janaka Wijayanayaka, "Real Time Sign Language Recognition", 2013 IEEE 8th International Conference on Industrial and Information Systems (ICIIS 2013), Aug. 18-20, 2013, Sri Lanka.
[6] M. Aarthi, P. Vijayalakshmi, "Sign Language to Speech Conversion", 2016 Fifth International Conference on Recent Trends in Information Technology.
[7] Caixia Wu, Chong Pan, Yufeng Jin, Shengli Sun, Guangyi Shi, "Improvement of Chinese Sign Language Translation System based on Collaboration of Arm and Finger Sensing Nodes", The 6th Annual IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems, June 19-22, 2016.
[8] Poonam Chavan, Prof. Tushar Ghorpade, Prof. Puja Padiya, "Indian Sign Language to Forecast Text using Leap Motion Sensor and RF Classifier", 2016 Symposium on Colossal Data Analysis and Networking (CDAN).
[9] Ms. Manisha D. Raut, Ms. Pallavi Dhok, Mr. Ketan Machhale, Ms. Jaspreet Manjeet Hora, "A System for Recognition of Indian Sign Language for Deaf People using Otsu's Algorithm", International Research Journal of Engineering and Technology (IRJET).
[10] C. Swapna, Shabnam Shaikh, "Literature Survey on Hand Gesture Recognition for Video Processing", International Journal of Advanced Research in Computer Science and Software Engineering.