May 2018
ABSTRACT: We live in a technology era in which the whole world fits into a small screen. Gadgets play a vital role in human life, and the touch screen has become the interface through which human beings interact with and operate them. Touch screens, also known as human machine interfaces (HMI), have largely replaced keypads. The many issues caused by touch screens have driven the evolution of the touchless touch screen. Here a machine learning method is used to implement the touchless touch screen: actions such as swipe left and swipe right are recognized, and a semi-supervised method implements these actions by comparing them with a trained dataset.
I. INTRODUCTION
Data science, also called data-driven science, is an interdisciplinary field of scientific methods, processes, algorithms and systems used to extract knowledge or insights from data in various forms. It is a concept that unifies machine learning, data analysis and their related methods.
Machine learning is a field of computer science that enables computer systems to learn from data without being explicitly programmed. The term was coined in 1959 by Arthur Samuel. Machine learning explores the study and construction of algorithms that can learn from and make predictions on data. The focus is essentially on the trained dataset; with its help the machine can learn and perform actions on its own, without any external interference. Machine learning is a subfield of artificial intelligence whose primary objective is to understand the structure of data and then fit the data into models that can be used and understood by people. Although machine learning is a field of computer science, it differs considerably from conventional computational approaches. In traditional computing, algorithms are sets of explicitly programmed instructions used by computers to solve a problem. Machine learning algorithms instead allow the computer to train on data inputs and use statistical analysis to arrive at the desired output.
In machine learning, tasks are broadly classified by how learning is done, or how feedback on learning is given to the system being developed. The most widely used machine learning methods are supervised learning and unsupervised learning. Supervised learning trains the algorithm on labeled data, whereas unsupervised learning trains on unlabeled data. The blend of the two gives rise to a third method, semi-supervised learning, which trains the algorithm on both labeled and unlabeled data.
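As an illustration of the semi-supervised idea, the following minimal Python sketch propagates labels from a few labeled samples to unlabeled ones using scikit-learn's LabelSpreading. The toy feature vectors and the class interpretations are illustrative assumptions, not the paper's actual data.

```python
# Minimal semi-supervised sketch: a few labeled samples plus unlabeled
# samples (marked -1); LabelSpreading infers labels for the unlabeled ones.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

X = np.array([[0.1, 0.2], [0.2, 0.1],       # class 0 (e.g. "swipe left")
              [0.9, 0.8], [0.8, 0.9],       # class 1 (e.g. "swipe right")
              [0.15, 0.18], [0.85, 0.82]])  # unlabeled samples
y = np.array([0, 0, 1, 1, -1, -1])          # -1 marks an unlabeled sample

model = LabelSpreading(kernel="knn", n_neighbors=2)
model.fit(X, y)                  # labels propagate to the unlabeled points
print(model.transduction_)       # inferred labels for all six samples
```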
II. LITERATURE SURVEY
Nilofar E. Chand (2017) in [1] proposed optical pattern recognition using a solid-state optical matrix sensor with a lens to detect hand motions. The main drawback of this work is that the sensors used have low sensitivity and a small area of coverage. Operation was not reliable at temperatures above 35 degrees Celsius, and the sensor was insensitive to very slow motion of the object, so object detection was not done accurately.
Aditi Ohol, Suman Goudar, Sukita Shettigar and Shubham Anand (2017) in [2] implemented leap motion technology for a touchless touch screen application. Leap motion focuses only on the space above the hardware and cannot see through the fingers, for example when one finger overlaps another. The leap motion controller consists of 2 cameras and 3 infrared LEDs that track infrared light at a wavelength of 850 nm, which is outside the visible spectrum. This infrared light cannot be used over long distances, and the rays can be blocked by almost anything, from rain droplets to fog and dust particles. It is also hazardous, as continuous exposure may lead to eyesight problems and even blindness.
Mona M. Moussa (2015) in [3] proposed an enhanced method for human action recognition. In this paper an unsupervised method was used to create the dataset, and SIFT (scale-invariant feature transform) was used to extract features from the images. SIFT does not work well under lighting changes, when images are rotated, or when images are blurred.
Darshana Mistry and Asim Banerjee (2017) in [5] presented a comparison between SIFT and SURF. SIFT is used for finding distinctive features. The paper reports that SURF is three times faster than SIFT because it uses integral images and box filters.
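The speed claim rests on the integral-image trick: once the integral image is built, the sum of any box-filter region costs four array lookups regardless of the box size. A small illustrative sketch follows; the image contents and box coordinates are arbitrary.

```python
# Integral-image box sums: O(1) per box, any box size.
import numpy as np

def box_sum(ii, x, y, w, h):
    """Sum of pixels in the w x h box at (x, y), using integral image ii."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

img = np.random.randint(0, 256, (100, 100)).astype(np.float64)
# Integral image with a zero row/column padded on top and left.
ii = np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

print(box_sum(ii, 10, 20, 30, 15))  # four lookups, constant time
print(img[20:35, 10:40].sum())      # same value, but O(w*h) work
```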
Qing Lei (2016) in [6] proposed multi-surface analysis for human action recognition in video. Features are extracted along horizontal and vertical surfaces, and SVM (support vector machine) and NBNN (naive Bayes nearest neighbor) are used for classification and feature extraction. Here the extraction process takes more time.
Li Liu (2016) in [4] proposed learning spatio-temporal representations for action recognition using a genetic programming approach, in which programs are encoded as sets of genes. This approach is very time consuming for feature extraction.
III. PROPOSED SYSTEM
In this paper we propose semi-supervised action recognition for the touchless touch screen. The system is trained with a dataset containing labeled and unlabeled videos of various actions: for example, scrolling down is represented by a top-to-bottom motion performed with a hand, pen, pencil or any other stick, and similarly left scrolling, right scrolling, up scrolling, zoom in, zoom out, and increasing or decreasing the volume or brightness (fig. 1). The actions performed by the user are recognized by a separate web camera, so the use of sensors is eliminated. The user's actions are recognized and compared with the trained dataset by extracting features from both the user's action and the dataset. Features are extracted with SURF (speeded up robust features), and the extracted features are compared using PROSAC (progressive sample consensus).
Fig 1: Different hand gestures given to the system to build the trained dataset.
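A minimal sketch of the SURF extraction step is given below. It assumes OpenCV's contrib build: SURF is patented and lives in the xfeatures2d module, which may require a build with nonfree modules enabled. The file name and Hessian threshold are illustrative choices, not the system's actual settings.

```python
# SURF keypoint/descriptor extraction on one frame (opencv-contrib build).
import cv2

frame = cv2.imread("gesture_frame.png", cv2.IMREAD_GRAYSCALE)  # one split frame
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
keypoints, descriptors = surf.detectAndCompute(frame, None)
if descriptors is not None:
    # Each keypoint gets a 64-dimensional SURF descriptor by default.
    print(len(keypoints), "keypoints,", descriptors.shape, "descriptor matrix")
```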
IV. WORKING
The machine (PC, tablet or mobile phone) is fitted with a separate webcam which recognizes the hand gestures or actions performed by the user. The machine is already trained with similar action videos, stored as the trained dataset. The action performed by the user is captured by the webcam and sent to the video splitter, where the Xuggler API splits the action video into frames. The frames are then given to the OpenCV tool, which detects the objects while excluding the background details. The SURF algorithm is applied to the detected object to extract its features, and the extracted features are compared with the features of the trained dataset. Score-based classification is then done using the PROSAC algorithm, and the action with the highest score is taken as the detected action. If the detected action matches the trained dataset the value is 1; otherwise it is 0. When the value is 1 the machine is operated according to the detected action; when it is 0 the machine performs no action.
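The sketch below outlines how the matching and scoring stage could look, under stated assumptions: OpenCV (version 4.5 or later) exposes PROSAC through its USAC framework (cv2.USAC_PROSAC in findHomography), and the names match_score, detect_action, dataset and the score threshold are hypothetical, not taken from the paper. The PROSAC inlier count serves as the match score, and the best-scoring action is accepted only if it clears the threshold, which realizes the 1/0 decision described above.

```python
# Hedged sketch of the matching/scoring stage. match_score, detect_action,
# dataset and threshold are hypothetical names; PROSAC is reached through
# OpenCV's USAC framework (cv2.USAC_PROSAC, OpenCV >= 4.5).
import cv2
import numpy as np

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
matcher = cv2.BFMatcher(cv2.NORM_L2)

def match_score(query_img, train_kp, train_desc):
    """Score one trained action: PROSAC inlier count between feature sets."""
    kp, desc = surf.detectAndCompute(query_img, None)
    if desc is None or len(kp) < 4:
        return 0
    # Lowe-style ratio test keeps only distinctive matches.
    good = [m for m, n in matcher.knnMatch(desc, train_desc, k=2)
            if m.distance < 0.7 * n.distance]
    if len(good) < 4:
        return 0
    src = np.float32([kp[m.queryIdx].pt for m in good])
    dst = np.float32([train_kp[m.trainIdx].pt for m in good])
    # PROSAC separates geometrically consistent inliers from outliers.
    _, mask = cv2.findHomography(src, dst, cv2.USAC_PROSAC, 3.0)
    return int(mask.sum()) if mask is not None else 0

# dataset: {action_name: (keypoints, descriptors)} built from training videos.
def detect_action(frame, dataset, threshold=20):
    scores = {name: match_score(frame, kp, desc)
              for name, (kp, desc) in dataset.items()}
    best = max(scores, key=scores.get)
    # Value 1 (match) drives the machine; value 0 means no action.
    return (best, 1) if scores[best] >= threshold else (None, 0)
```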
V. FUTURE ENHANCEMENT
The memory needed to store the dataset is considerable; future enhancement can therefore focus on reducing the storage occupied by the trained dataset.
VI. CONCLUSION
With technology advancing rapidly nowadays, machine learning has become a trending field. We have proposed a touchless touch screen in which actions are recognized with the help of semi-supervised recognition. The main aim was to eliminate externally connected devices. The leap motion controller relies on infrared rays, which may be harmful under continuous use; the machine learning approach used here is cost effective compared with the controllers and sensors used in other technologies.
REFERENCES
[1] Nilofar E. Chand, "Study of touchless touch screen technology," International Journal of Current Engineering and Scientific Research, vol. 4, pp. 48-52, 2017.
[2] Aditi Ohol, Suman Goudar, Sukita Shettigar, Shubham Anand, "Touch Less Touch Screen User Interface," International Journal of Technical Research and Applications, vol. 42, pp. 59-63, Aug. 2017.
[3] Mona M. Moussa, Elsayed Hemayed, Magda B. Fayek, Heba A. El Nemr, "An enhanced method for human action recognition," Journal of Advanced Research, vol. 36, pp. 163-169, Jan. 2015.
[4] L. Liu, L. Shao, X. Li, and K. Lu, "Learning spatio-temporal representations for action recognition: A genetic programming approach," IEEE Trans. Cybernetics, vol. 46, pp. 158-170, Jan. 2016.
[5] Darshana Mistry and Asim Banerjee, "Comparison of Feature Detection Approaches: SIFT and SURF," vol. 1, pp. 1-7, Nov. 2017.
[6] Qing Lei, "Multi Surface Analysis for Human Action Recognition in Video," SpringerPlus, vol. 5, 2016.