Smart Translation
The aim of this project is to develop a communication system for deaf people. The
project has two phases: (1) it converts an audio message into sign language, and (2) it
converts images into text/speech. The first phase takes audio as input, converts the
recorded message into text, and displays the relevant predefined Indian Sign Language
images or GIFs. This system makes communication between hearing and deaf people
easier. In the second phase we collect images, train a model on them using a CNN, and
display the result.
INTRODUCTION
The aim of this project is to improve communication with people who have hearing
difficulties and use a sign language to express themselves. At first sight, building a sign
language converter may not seem difficult. Sign language is said to be the mother language
of deaf people; it combines movements of the hands, arms, and body with facial expressions.
There are about 135 sign languages in use around the world, including American Sign
Language, Indian Sign Language, and British Sign Language. This project uses Indian Sign
Language. The system allows the deaf community to take part in everything that hearing
people do, from daily interaction to accessing information. The application takes speech as
input, converts it into text, and then displays the corresponding Indian Sign Language
images.
Problem statement
Sign language is a communication language used by deaf people that relies on the face,
hands, and eyes rather than the vocal tract. A sign language recognizer is a tool for
recognizing the sign language of deaf and dumb people. Gesture recognition is an important
topic because segmenting a foreground object from a cluttered background is a challenging
problem. There is a difference between a human looking at an image and a computer looking
at one: humans find it easy to identify what is in an image, but computers do not, which is
why computer vision problems remain a challenge.
EXISTING SYSTEM
Many projects have been done on sign languages that take sign language as input and
produce text or audio as output, but audio-to-sign-language conversion systems have rarely
been developed. Such a system is useful to both hearing and deaf people.
PROPOSED SYSTEM
In this project we introduce an audio-to-sign-language translator built with Python.
(1) The system takes audio as input, transcribes the recording using the Google
speech API, displays the text on screen, and finally produces the sign code of
the input using an ISL (Indian Sign Language) generator. Each word in the
sentence is checked against a dictionary of images and GIFs representing
words. If a word is not found, a synonym is substituted for it. A set of
gestures is predefined in the system.
(2) The system takes an Indian Sign Language image as input and converts it
into text/speech.
Modules
1. Audio to Sign Language
2. Image to Text/Speech
Build Model
NLP
Natural language processing (NLP) is the ability of a computer program to understand human
language as it is spoken and written -- referred to as natural language. It is a component of
artificial intelligence.
NLP enables computers to understand natural language as humans do. Whether the language
is spoken or written, natural language processing uses artificial intelligence to take real-world
input, process it, and make sense of it in a way a computer can understand. Just as humans
have different sensors -- such as ears to hear and eyes to see -- computers have programs to
read and microphones to collect audio. And just as humans have a brain to process that input,
computers have a program to process their respective inputs. At some point in processing, the
input is converted to code that the computer can understand.
OBJECTIVES
The proposed system provides learning material for deaf children, which helps them learn
language basics.
This skill will serve them throughout their lives, at school and afterwards, and is the
motivation behind taking up this research.
SYSTEM REQUIREMENTS
HARDWARE REQUIREMENTS
RAM: 4 GB
SOFTWARE REQUIREMENTS
Framework : Django
Two-way Smart Communication System for Deaf & Dumb and Normal People.
Areesha Gul, Batool Zehra, Sadia Shah, Nazish Javed, and Muhammad Imran Saleem [1]
mention in their research that they introduced a two-way smart communication system for
deaf and dumb people as well as for hearing people. The system consists of two parts: in the
first part, a hardware system is used by the deaf and dumb person to convey messages to a
hearing person; in the second part, an Android application lets the hearing person respond
without having to learn sign language. This makes a two-way smart communication system
with an accuracy of 92.5%.
Development of full duplex intelligent communication system for deaf and dumb people.
Surbhi Rathi and Ujwalla Gawande [2] mention in their research that they introduced a
dual-way communication system that recognizes a hand gesture of Indian Sign Language and
converts the recognized gesture into speech and text, and vice versa. The system uses
skin-colour filtering techniques for segmentation. For feature extraction, eigenvector and
eigenvalue techniques are used, and an eigenvalue-weighted Euclidean distance based
classifier is used for classification. The system predicts word signs made with one or both
hands and supports dual-way communication.
A Novel approach as an aid for blind, deaf and dumb people. Rajapandian B, Harini V,
Raksha D, and Sangeetha V [3] mention in their research that their system is a very useful
tool that removes communication barriers for people suffering from any combination of
deafness, dumbness, and blindness, both among themselves and with other people. A person
can communicate with others according to his ability and desire. When a deaf and dumb
person uses American Sign Language to convey a message, those who cannot understand
sign language can use the device to get the output in Braille, as audio, or as normal text
displayed on an LCD.
Implementation Of Gesture Based Voice And Language Translator For Dumb People.
L. Anusha and Y. Usha Devi [4] mention in their research that a trajectory recognition
algorithm is used to convert gestures into English alphabets. Voice RSS and Microsoft
Translate are used to produce the corresponding voice output in Microsoft-supported
languages. The system consists of three modes: (1) training mode generates and stores
features in a database; (2) testing mode compares the generated features with the stored
features; (3) translation mode generates voice output for the given gestures.
SOFTWARE REQUIREMENT ANALYSIS
INTRODUCTION TO SRS
2.2 PURPOSE
Scope
The scope of this system is to present a review of the data mining techniques used for the
translation of sign language. It is evident from the system that data mining techniques such
as classification are highly efficient for this prediction task.
SOFTWARE ARCHITECTURE:
Python:
Flask:
Anaconda:
Anaconda is a free and open-source distribution of the Python and R programming
languages. The distribution ships with the Python interpreter and various packages related
to machine learning and data science.
Essentially, the idea behind Anaconda is to make it easy for people interested in those
fields to install all (or most) of the required packages with a single installation. It includes:
Conda, an open-source package and environment management system, which makes it easy
to install/update packages and create/load environments.
Jupyter Notebook, a shareable notebook that combines live code, visualizations, and text.
Numpy:
NumPy is the fundamental package for scientific computing with Python. Among other
things, it provides:
• useful linear algebra, Fourier transform, and random number capabilities
Pandas:
pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use
data structures and data analysis tools for the Python programming language.
pandas is a NumFOCUS sponsored project. This helps ensure the success of pandas'
development as a world-class open-source project, and makes it possible to donate to
the project.
FUNCTIONAL REQUIREMENTS:
Data Collection
Data Preprocessing
Algorithm
(NLP, CNN)
Prediction
Comparison
Study
Methodology:
This project is divided into two sections: (1) it translates an audio message into sign
language, and (2) it translates images/video into text/speech.
(1) Audio to sign language: in the audio-to-sign-language conversion we take audio as
input and display a GIF file as output. We first collect a dataset of GIF files. A GUI is
developed for taking audio input, and the audio captured through the microphone is
converted to text. The generated text is split into words and stored in a dictionary file. A
dependency parser is applied to the sentence to obtain the relationships between its words,
and each obtained word is treated as text. Text pre-processing is done using NLP, followed
by dictionary-based machine translation of the input sentence using ISL grammar rules and
generation of the sign language output as a GIF file.
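The text pre-processing step above can be sketched as a small function that splits the recognised sentence into tokens and drops the function words that ISL typically omits before dictionary lookup. The stop-word list below is a simplifying assumption for illustration, not the project's actual ISL grammar rules.

```python
# Hedged sketch of the NLP pre-processing step: tokenise the recognised
# sentence and drop words that ISL typically omits, keeping the rest
# for gesture lookup. STOP_WORDS is an illustrative assumption.
STOP_WORDS = {"is", "am", "are", "the", "a", "an", "to", "of"}

def preprocess_for_isl(text):
    """Split recognised speech into lowercase tokens usable for lookup."""
    tokens = [w.strip(".,!?").lower() for w in text.split()]
    return [w for w in tokens if w and w not in STOP_WORDS]
```

For example, `preprocess_for_isl("I am going to the market.")` keeps only the content words for the ISL dictionary step.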
(2) Image/video to text/speech
In this approach we collect Indian Sign Language datasets of deaf and dumb people and
apply pre-processing techniques (resizing, noise removal, gray conversion) to the collected
data. The data is split into training and test sets, and a CNN algorithm is applied to the
training data. The trained model is built and stored in the system. We then pass video as
input to the model, which recognizes the language from the user's hand movements.
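The pre-processing arithmetic above can be illustrated in pure Python. A real pipeline would use OpenCV and NumPy for these operations; this sketch only shows the standard luminance formula for gray conversion and a seeded train/test split, with all names chosen for the example.

```python
import random

# Pure-Python sketch of two pre-processing steps: gray conversion and a
# train/test split. A real pipeline would use OpenCV/NumPy instead.
def to_gray(pixel):
    """Convert one (R, G, B) pixel using the standard luminance weights."""
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def train_test_split(samples, test_ratio=0.2, seed=0):
    """Shuffle the dataset reproducibly and split it into train and test."""
    data = list(samples)
    random.Random(seed).shuffle(data)
    cut = int(len(data) * (1 - test_ratio))
    return data[:cut], data[cut:]
```

With `test_ratio=0.2`, a dataset of 10 samples yields 8 training and 2 test samples; the fixed seed keeps the split reproducible between runs.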
CNN:
Convolutional Neural Networks have a different architecture than regular Neural Networks.
Regular Neural Networks transform an input by putting it through a series of hidden layers.
Every layer is made up of a set of neurons, where each layer is fully connected to all neurons
in the layer before. Finally, there is a last fully-connected layer — the output layer — that
represents the predictions.
Convolutional Neural Networks are a bit different. First of all, the layers are organised in 3
dimensions: width, height and depth. Further, the neurons in one layer do not connect to all the
neurons in the next layer but only to a small region of it. Lastly, the final output will be
reduced to a single vector of probability scores, organized along the depth dimension.
Module 1: Region Proposal. Generate and extract category independent region proposals,
e.g. candidate bounding boxes.
Module 2: Feature Extractor. Extract feature from each candidate region, e.g. using a deep
convolutional neural network.
Module 3: Classifier. Classify the features as one of the known classes, e.g. using a CNN classifier model.
Product Functions
2. Prediction: each algorithm performs prediction based on the given constraints. By
entering audio, the user can view the translation details along with the result.
4. Comparison study: the prediction results and the stages of each algorithm are
represented through graphs.
General Constraints
The results generated have to be entered into the system, and any erroneous value or any
value entered out of bounds will not be understood by the system. If the database crashes,
all the information collected and the results generated will be of no use.
This section provides a detailed description of all inputs into and outputs from the system.
It also gives a description of the hardware, software and communication interfaces and
provides basic prototypes of the user interface.
Hardware Requirements
Software Requirements
Framework: Flask
Tool : Anaconda
Non-Functional requirements
4. Reliability: The system should not crash; it should identify invalid input and produce a
suitable error message.
5. Usability: The interface should be intuitive and easily navigable and user friendly.
6. Integrity: The software does not store any cache data and does not use system resources
in the background.
Feasibility Study
The feasibility study helps find solutions to the problems of the project and shows what the
new system will look like.
Technical Feasibility
The project entitled "smart translation for physically challenged people" is technically
feasible because of the features mentioned below. The project is developed in Python. A
web server is used to run the translation on a local server, which neatly coordinates
between the design and coding parts. It provides a Graphical User Interface to design
the application while the coding is done in Python. At the same time, it provides a high
level of reliability, availability, and compatibility.
Economic Feasibility
In economic feasibility, a cost-benefit analysis is done in which expected costs and benefits
are evaluated; this analysis is the most important part of judging the effectiveness of the
proposed system. The system is feasible because it does not exceed the estimated cost and
the estimated benefits at least match it.
The project entitled smart translation for physically challenged people is technically feasible
because of the features mentioned below. The system identifies the sign language
recognition behaviour and its stages based on data. The performance of the data mining
techniques is compared based on their execution time and displayed through a graph.
SYSTEM DESIGN
The Software Design will be used to aid software development for the Android application
by providing the details of how the application should be built. Within the Software Design,
the specifications are narrative and graphical documentation of the software design for the
project, including use case models, sequence diagrams, and other supporting requirement
information.
4.1.1 Scope
This Software Design Document is for a base-level system which will work as a proof of
concept, providing a base level of functionality to show feasibility for large-scale
production use. In this Software Design Document, the focus is placed on generation of the
documents and modification of the documents. The system will be used in conjunction with
other pre-existing systems and will consist largely of a document interaction facade that
abstracts document interactions and the handling of document objects. This document
provides the design specifications of smart translation.
Architecture:
DATA FLOW DIAGRAM:
DFD 1:
ACTIVITY DIAGRAM: