International Journal of Computer Applications Technology and Research, 2020
The most important thing in communication is hearing what isn't being said, and this holds especially for people who are deaf and mute. Sign language is the most natural means of exchanging information among them, yet deaf and mute people have to rely on an interpreter or some form of visual communication when interacting with the rest of society. To bridge the gap between the deaf and mute community and the rest of the world, a translation system is needed that makes the entire process of communication fluent. American Sign Language (ASL) is a visual and gestural language, i.e. the brain processes linguistic information through the eyes. This paper proposes a system that interprets the sign-language gestures enacted by a person as plain text. It is a robust as well as real-time Intelligent Translation System, ensuring that neither of these two parameters is compromised. The application captures a live video stream of the user performing ASL from a webcam and translates the gestures into the corresponding text using image processing techniques and deep learning models. The recognized letters are combined to form words, which are passed to a spell checker and then combined into sentences. The proposed solution would thus enable quick and effective communication between the deaf and mute community and the rest of society.
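A minimal sketch of the pipeline the abstract describes (webcam capture, per-frame letter classification, word assembly, spell correction), assuming OpenCV for video capture, a pre-trained Keras CNN, and the pyspellchecker package. The model file name "asl_letter_model.h5", the 26-letter label set, and the 64x64 grayscale preprocessing are placeholders for illustration, not details from the paper, and a real system would segment gestures rather than classify every frame.

```python
# Sketch only: assumes a pre-trained letter classifier saved as
# "asl_letter_model.h5" (hypothetical) and 26 letter classes A-Z.
import cv2
import numpy as np
from tensorflow.keras.models import load_model
from spellchecker import SpellChecker  # pyspellchecker package

LABELS = [chr(c) for c in range(ord('A'), ord('Z') + 1)]   # assumed class labels
model = load_model("asl_letter_model.h5")                  # hypothetical model file
spell = SpellChecker()

def classify_frame(frame):
    """Preprocess one webcam frame and predict the signed letter."""
    roi = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)           # simple grayscale preprocessing
    roi = cv2.resize(roi, (64, 64)).astype("float32") / 255.0
    probs = model.predict(roi.reshape(1, 64, 64, 1), verbose=0)[0]
    return LABELS[int(np.argmax(probs))]

cap = cv2.VideoCapture(0)        # live webcam stream
letters = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    letters.append(classify_frame(frame))   # one prediction per frame (simplification)
    cv2.imshow("ASL input", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):   # press 'q' to finish the word
        break
cap.release()
cv2.destroyAllWindows()

word = "".join(letters)
print("corrected word:", spell.correction(word))  # spell-check the assembled word
```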