

International Journal of Engineering & Technology, 7 (3.24) (2018) 454-460

International Journal of Engineering & Technology


Website: www.sciencepubco.com/index.php/IJET

Research paper

General Regression Neural Network Approach for Image Transformation Based Hybrid Graphical Password Authentication System
P. Baby Maruthi, Research Scholar, SPMVV, Tirupathi.

Prof. K. Sandhya Rani, Dept. of Computer Science, SPMVV, Tirupathi.

Abstract

In this digital generation, computer and information security play a prominent role for both individuals and business organizations. In today's interconnected business environment, information is the most valuable asset, and protecting it is of utmost importance to both individuals and organizations. The task of protecting information can be achieved through authentication. Today, textual password authentication, a username and password combination, is commonly used for many web applications. But textual passwords are the weakest form of authentication and are easily guessed by an attacker applying various techniques such as brute force, dictionary attacks, etc. To provide security against such attacks, graphical passwords are an alternative authentication mechanism for replacing textual passwords. This paper proposes an image transformation based hybrid graphical password authentication model that utilizes a general regression neural network model and feature extraction methods for user identification. Three types of image transformations, namely normal image, mirror image and shift image, are considered to enhance security. Three feature extraction techniques, SURF, LBP and HOG, are considered for extracting image features. The performance of the proposed model is analysed in terms of usability, security and storage space, and the results show that the proposed system is resistant against various attacks such as brute force, dictionary attacks, shoulder surfing, etc.

Keywords: Image Transformation, Feature Extraction, Graphical Passwords, General Regression Neural Network

1. Introduction

Today, information security plays a prominent role and has become a part of everyday life. The task of protecting information can be achieved by means of authentication, the process of verifying that a user is who he or she claims to be. Authentication grants authorized users limited access to resources and prevents access by unauthorized persons. Nowadays, the most popular and widely used mechanism is textual password authentication. However, text passwords are easily guessed by attackers. An alternative method for replacing textual passwords is graphical passwords, in which authentication is achieved either by selecting icons or pictures or by drawing symbols or a signature. Graphical passwords build on the fact that humans can remember pictures better than text, and they also provide more resistance to dictionary attacks, brute force attacks, etc. For these reasons, graphical passwords are increasingly adopted in web and mobile applications.

2. Related Work

In paper [2], the author proposed a password authentication model using a Hopfield Neural Network for both textual and graphical passwords. Here, the passwords are converted into probabilistic values, and the paper presents how user authentication can be performed for both textual and graphical passwords using those values. The author claimed that the proposed graphical user authentication model provides better accuracy and quicker response times for registration and password changes.

In [3], user authentication with back propagation for both graphical and text passwords is proposed. Both the text password and the graphical password are normalized before being supplied as input to a multilayer feed-forward back propagation neural network consisting of one or more hidden layers. In this model, only weights are stored in the database and the server does not maintain a password table. The training times of different networks were evaluated using the Hopfield Neural Network (HNN), Back Propagation Neural Network (BPNN), Brain-State-in-a-Box (BSB) and Bidirectional Associative Memory (BAM); BAM takes less time than the other networks, including the feed-forward ones.

In paper [4], password authentication using associative memories such as the Hopfield neural network (HNN), bidirectional associative memory (BAM) and Brain-State-in-a-Box (BSB) is proposed. To eliminate the drawbacks of password authentication using HNN, a bidirectional associative memory (BAM) has been introduced. Other neural memory models such as BSB, HNN, BAM and context-sensitive associative memory (CSAM) were also introduced and compared with
Copyright © 2018 Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted
use, distribution, and reproduction in any medium, provided the original work is properly cited.

their corresponding training time. CSAM takes less training time when compared to the other associative memory models. The memory capacity and accuracy results are compared and presented in this paper.

A Scalable Shoulder-Surfing Resistant Textual-Graphical Password Authentication Scheme (S3PAS) is proposed in [5]; it provides authentication for both textual and graphical passwords using neural networks. S3PAS generates random images at the time of the login session. The username is given as input to a feed-forward neural network and the weights are stored for mapping.

3. General Regression Neural Network

The General Regression Neural Network (GRNN) was proposed by Donald F. Specht [6] in 1991. It is essentially non-linear regression theory for function approximation or estimation. GRNN is a one-pass learning algorithm with a parallel structure, used for classification and prediction problems. It requires only a fraction of the time for training on the samples and creating the network, and it is very much faster than regular standard feed-forward neural networks. It consists of four basic layers, namely the input layer, pattern layer, summation layer and output layer. The block diagram of the general regression neural network is shown in Figure 1. The input layer retrieves input from the input vector and transmits the data to the pattern layer. The second layer is the pattern layer; the number of neurons in this layer is equal to the number of training samples. This layer computes the Euclidean distance between the stored patterns and the input pattern, and a Gaussian function is applied in order to obtain a high accuracy estimate. The outputs of the pattern layer are transferred as input to the next layer, i.e. the summation layer.

The summation layer consists of two parts, namely the numerator part and the denominator part. The numerator part consists of the summation of the products of the training output data and the activation functions. The denominator part consists of the summation of all activation functions. In the output layer, one neuron calculates the output by dividing the numerator part of the summation layer by the denominator part. The value of the output is calculated using the following equations:

Di^2 = (X - Xi)^T (X - Xi)

Y(X) = [ Σi Yi exp(-Di^2 / 2σ^2) ] / [ Σi exp(-Di^2 / 2σ^2) ]

where X is the input sample, Xi is the i-th training sample, and Di^2 is the squared Euclidean distance between the vectors X and Xi. In the above equation, the output function and the accuracy of the GRNN are governed by the smoothing factor σ. Large values of the smoothing factor are appropriate for irregular data, whereas small values of the smoothing factor are suitable for regular data, in order to acquire good performance.

Fig. 1: General Regression Neural Network (input layer, pattern layer, summation layer, output layer)

4. Feature Extraction

Feature extraction plays a significant role in the task of image identification and verification. Features are the valuable information extracted from the images. The primary goal of a feature extraction technique is to reduce the original input image to specific features which are capable of distinguishing it from other input image patterns; the extracted features have a lower dimension than the original image. Rather than storing an entire image in the database, the proposed model employs feature extraction techniques to extract significant features from the image; those features are given as the target vector, with the username as input, for training the neural network. Here, the image is converted into a feature vector. In this paper, three types of features are considered: Speeded-Up Robust Features (SURF), Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG) [7-13].

• SURF
The SURF descriptor is used to find interest points in an image using the determinant of the Hessian matrix. SURF finds extreme interest points over the scale space and the feature direction to generate feature vectors. Thus, SURF is very useful for determining the similarity of images.

• LBP
LBP is a texture-based feature extraction technique in which the image is divided into small regions called cells. For calculating LBP values, each pixel in a cell is compared with its eight neighboring pixel values. After computing the LBP value for each pixel in the image, a histogram is calculated for each cell, which can be viewed as a 256-dimensional feature vector. The histograms of all cells are then concatenated, and the outcome is the feature vector of the entire image.

• HOG
The histogram of oriented gradients is another feature extraction method used for object detection. It is a dense feature extraction method, as it extracts features from all locations of the image or from regions of interest. It counts the frequency of gradient orientations in local portions of an image. To get the local portions, the image is divided into small portions called blocks, and each cell contains a fixed number of gradient orientation bins; the HOG for each cell consists of the weighted gradient contributions to the corresponding angular bins. To achieve robustness, normalization is performed for each histogram vector in a block.
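To make the layer computations of Section 3 concrete, here is a minimal Python sketch of the GRNN output equation. This is an illustration of Specht's formula with invented toy data (the ASCII-coded inputs, target values and smoothing factor are assumptions), not the authors' implementation:

```python
import numpy as np

def grnn_predict(X_train, Y_train, x, sigma=0.5):
    """General Regression Neural Network output (Specht, 1991).

    Pattern layer: squared Euclidean distance Di^2 between the input
    sample x and every stored training sample Xi, passed through a
    Gaussian activation. Summation layer: numerator = sum of Yi times
    the activations, denominator = sum of the activations.
    Output layer: numerator / denominator.
    """
    d2 = np.sum((X_train - x) ** 2, axis=1)   # Di^2 for each stored pattern
    w = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian activations
    return np.sum(Y_train * w) / np.sum(w)    # weighted average of targets

# With a very small sigma the prediction collapses to the target of the
# nearest stored pattern, which is the recall behaviour exploited here.
X_train = np.array([[72.0, 101.0], [97.0, 110.0]])  # e.g. ASCII-coded usernames
Y_train = np.array([1234.0, 5678.0])                # stored target values
print(grnn_predict(X_train, Y_train, np.array([72.0, 101.0]), sigma=0.1))
```

A small smoothing factor suits this regular, exact-recall setting, matching the paper's remark that small σ fits regular data.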

5. Proposed System

The combination of click-based and text-based approaches is used in the proposed model to improve the strength of graphical user authentication in terms of usability and security. The proposed model uses an image transformation technique for graphical passwords and classifies images into three different types of transformations: normal image, mirror image and shift image. During registration, only one text password is assigned to the user after the selection of the graphical password. In the authentication phase, the user has to enter the text password based on the type of image transformation that appears. In addition to image transformations, the General Regression Neural Network (GRNN) and feature extraction techniques are considered in the proposed model, which consists of two phases, namely the registration phase and the authentication phase.

• Registration Phase
In the registration phase, a new user registers for enrolment. The user has to enter a unique username and then the number of images to be selected as a graphical password. An image grid is displayed, from which the user chooses the images making up the graphical password; the maximum number of images that can be selected from the grid is three. After image selection, the server forwards a four-digit text password to the user, and finally an orientation screen is displayed. The orientation screen helps the user understand the concept of image transformations involved in the authentication phase and how the passwords are transformed from one image transformation to another. There are three types of image transformations: normal image, mirror image and shift image. If the normal image is displayed on the screen, the user should enter the text password already assigned in the registration phase. If a mirror image is displayed on the screen, the user should enter the text password in reverse order. If the displayed image is a shift image, the user should enter the text password with the last character set to zero. The detailed description of the registration process is explained in [1]. After registration has been completed, feature extraction techniques are applied to the graphical passwords; the feature extraction techniques adopted in this paper are discussed in the following section.

a) Training the General Regression Neural Network with Graphical Passwords

In the proposed model, General Regression Neural Networks are used for graphical password authentication. For training the GRNN, three feature extraction techniques, namely SURF, LBP and HOG, are considered, and these features are extracted from the graphical passwords selected by the user. Three General Regression Neural Networks are designed for the three feature extraction techniques: GRNN1 creates the network for SURF features, GRNN2 for LBP features and GRNN3 for HOG features. In the registration phase, users may select up to a maximum of three images. In the proposed model, 300 significant image features are taken from each feature descriptor. Suppose the user selects one image as the graphical password: 300 image features are extracted using the SURF feature descriptor and stored in a feature vector; similarly, 300 features are extracted using the LBP descriptor and 300 using the HOG descriptor, each stored in a separate vector. If the user selects two images as the graphical password, 150 SURF features are extracted from the first image and the next 150 SURF features from the second image, and both feature vectors are concatenated into a single vector, so that the resultant feature vector contains 300 SURF features covering both images. Simultaneously, the graphical password features are extracted with the other two feature descriptors (LBP and HOG) in the same way; the same procedure is applied whenever the user selects two images as a graphical password. Suppose the user selects three images as the graphical password: 100 features are extracted from the first image, 100 from the second image and the last 100 from the third image, and the resultant feature vector is obtained by concatenating all three sets of image features into a single vector. Whenever the user selects three images as a graphical password, the same procedure is applied to extract features with the three feature descriptors (SURF, LBP and HOG). Likewise, all the users' graphical passwords are collected, the three feature extraction techniques are applied, features are extracted using the above procedure, and these features are stored in separate vectors. Three types of GRNN are considered for recognition of the user-given graphical passwords, as shown in Figure 2. For the convenience of training, the input vectors are converted into ASCII format.

b) Training the General Regression Neural Network with Textual Passwords

The proposed image transformation based hybrid graphical user authentication model utilizes three types of text passwords, corresponding to the three types of image transformations: normal image, mirror image and shift image. For normal images, text passwords are assigned by the server to the user at the time of registration. For mirror images, the text password is transformed into reverse order and is then called the mirror text password. For shift images, the text password is transformed so that its last digit becomes zero.

To improve security with text passwords, three GRNNs are designed for the three types of text passwords: normal, mirror and shift. For the convenience of training, the input vectors are converted into the ASCII values of the usernames. For recognition of the text password assigned for normal images, the General Regression Neural Network GRNN4 is trained with usernames as the input vector and the normal text password as the target vector. For recognition of the mirror text passwords assigned for mirror images, GRNN5 is trained with usernames as the input vector and the mirror text password as the target vector. For recognition of the shift text passwords assigned for shift images, GRNN6 is trained with usernames as the input vector and the shift text password as the target vector. If these networks are properly trained, the three networks store the pairs of usernames and the three different passwords.
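The three text-password forms described in b) follow directly from the stated rules: identity for normal images, reverse order for mirror images, last digit forced to zero for shift images. A minimal sketch (the function names and the sample password are illustrative, not from the paper):

```python
def normal_password(pw: str) -> str:
    # Normal image: the password exactly as assigned at registration.
    return pw

def mirror_password(pw: str) -> str:
    # Mirror image: the password transformed into reverse order.
    return pw[::-1]

def shift_password(pw: str) -> str:
    # Shift image: the password with its last digit forced to zero.
    return pw[:-1] + "0"

pw = "4827"                      # a four-digit password assigned by the server
print(normal_password(pw))       # 4827
print(mirror_password(pw))       # 7284
print(shift_password(pw))        # 4820
```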

• Authentication Phase

User recognition is done through the authentication phase. The General Regression Neural Network approach for the Image Transformation based Graphical Password Authentication System (GRNNITGPAS) uses two-factor authentication: the first factor is graphical password authentication and the second is text password authentication. To provide more security, the user needs to prove his identity by submitting both types of credentials at the time of authentication. In the authentication phase, the user enters a valid username and an image grid is then displayed. In the first level of authentication, the user should recognize the correct graphical password, in the same order in which the images were selected at the time of registration. Once the graphical password is verified successfully, the user is allowed to enter the second level of authentication. In the second level, the user proves his identity by entering the proper text password based on the displayed transformation, i.e. normal, mirror or shift image.

The procedure of the GRNNITGPAS system during the authentication phase can be divided into three modules: the Main Module, the Graphical Password Verification Module and the Text Password Verification Module.
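The Main Module that follows classifies each selected image by a transforming factor T = M - Ni, the mean of the selected image minus the stored mean of the corresponding normal image. A minimal sketch of that rule, using the thresholds given in Step 7 of the Main Module (the example mean values are invented):

```python
def classify_transformation(selected_mean: float, normal_mean: float) -> str:
    """Return the status flag used by the Main Module:
    'N' (normal), 'M' (mirror) or 'S' (shift)."""
    t = selected_mean - normal_mean   # transforming factor T = M - Ni
    if t == 0:
        return "N"                    # identical mean: normal image
    elif 0 < t < 1:
        return "M"                    # small positive offset: mirror image
    else:
        return "S"                    # anything else: shift image

print(classify_transformation(118.40, 118.40))  # N
print(classify_transformation(118.90, 118.40))  # M
print(classify_transformation(121.40, 118.40))  # S
```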

Fig. 2: Training GRNN for Recognition of Graphical Passwords (GRNN1 trained with username as input and the SURF feature vector as target; GRNN2 with the LBP feature vector as target; GRNN3 with the HOG feature vector as target)

Fig. 3: Training GRNN for Recognition of Textual Passwords (GRNN4 trained with username as input and the normal text password as target; GRNN5 with the mirror text password as target; GRNN6 with the shift text password as target)

• Main Module

User identification starts with the main module. The methodology of the main module algorithm is specified as follows.
Step 1: The user enters a username.
Step 2: If the username is valid, the system displays the image grid. The user should carefully recognize the images while selecting his graphical password from the grid, as the images appear in different transformations: normal image, mirror image or shift image.
Step 3: If the user selects more than one image in the grid as a graphical password, the user should recognize the images in the same order in which they were selected at the time of registration. The maximum number of images chosen from the image grid is three.

Step 4: To determine which type of transformed image the user-selected graphical password belongs to, the mean of every normal image in the grid is calculated and stored in a vector; the value Ni in this vector represents the mean value of the i-th image in the grid.
Step 5: Compute the mean value M of the user-selected graphical password.
Step 6: Find the transforming factor T using the following equation:
T = M - Ni
The transforming factor T determines which type of transformed image each image selected by the user from the grid belongs to.
Step 7: Compute T for each image selected by the user as a graphical password.
If T = 0 then
    Normal image <- user-selected image; update status S = 'N'
Else if 0 < T < 1 then
    Mirror image <- user-selected image; update status S = 'M'
Else
    the user-selected image is a shift image; update status S = 'S'
Step 8: Each user-selected graphical image and its status are given as input to the graphical password verification module.

• Graphical Password Verification Module

The graphical password verification module retrieves the user-selected graphical password along with its status from the main module. The proposed model uses three feature extraction techniques: SURF, LBP and HOG. The sequence of steps in the graphical password verification module is specified as follows:
Step 1: The three most significant types of image features (SURF, LBP and HOG) are extracted from each user-selected image as per the procedure explained in 5.1.1 and stored in three different feature vectors.
Step 2: The ASCII value of the username is given as input to the trained GRNN1. The output of GRNN1 is compared with the stored user-selected graphical password.
Step 3: To compare the similarity between feature vectors across the images, compute the cosine similarity, correlation and Euclidean distance. The cosine similarity and correlation are always one for similar feature vectors, and the Euclidean distance approximates to zero.
Step 4: The ASCII value of the username is given as input to the trained GRNN2. The output of GRNN2 is compared with the stored user-selected graphical password.
Step 5: Go to Step 3.
Step 6: The ASCII value of the username is given as input to the trained GRNN3. The output of GRNN3 is compared with the stored user-selected graphical password.
Step 7: Go to Step 3.
Step 8: Once the graphical password is verified successfully, each user-selected graphical image and its status are given as input to the textual password verification module.

The procedure for the text password verification module is explained below.

• Textual Password Verification Module

Text password verification is the second level of authentication: when the user passes graphical password verification successfully, the user should enter text passwords based on the transformed graphical image displayed on the screen. This module receives the user-selected graphical password and its status information. The sequence of steps in the text password verification module is specified as follows.
Step 1: The user should recognize which type of image transformation the displayed graphical password belongs to.
Step 2: Suppose the user selected one image as the graphical password; then he needs to enter one text password for that image.
2.1: If the user-selected graphical password displayed on the screen is a normal image, the user should enter the text password for normal images, which is the same as the one assigned in the registration phase.
2.1.1: The ASCII value of the username is given as input to the trained GRNN4 and the output of the network is compared with the user-entered text password.
2.1.2: If it matches, the authentication is successful; otherwise, authentication fails.
2.2: If the user-selected graphical password displayed on the screen is a mirror image, the user should enter the text password in reverse order.
2.2.1: The ASCII value of the username is given as input to the trained GRNN5 and the output of the network is compared with the user-entered text password.
2.2.2: If it matches, the authentication is successful; otherwise, authentication fails.
2.3: If the user-selected graphical password displayed on the screen is a shift image, the user should enter the text password with its last digit as zero.
2.3.1: The ASCII value of the username is given as input to the trained GRNN6 and the output of the network is compared with the user-entered text password.
2.3.2: If it matches, the authentication is successful; otherwise, authentication fails.
Step 3: If the user selects more than one image as a graphical password, he should enter his text password as many times as the number of images selected. The text password is not the same in all cases; it differs based on the transformed image.
Step 4: If the user enters his text passwords correctly, the user is authenticated; otherwise, authentication fails.

Three trained GRNN networks are used for textual password authentication, but depending on the type of image transformation of the graphical password, only the corresponding trained GRNN is invoked for text password authentication; the other two trained networks are not considered in the authentication process. As only one trained network is consulted, the response time of text password authentication is fast.

The performance of the proposed GRNNITGPAS model is explained in the following section.

6. Experimental Results

The effectiveness of the proposed graphical password authentication system can be determined by its usability and security. Usability is very important for developing a good user authentication model that achieves efficiency, effectiveness and satisfaction.

• Training Time

Once the participants had registered successfully, feature extraction was applied in order to store the graphical passwords in the database. The proposed prototype uses three feature extraction techniques: SURF, LBP and HOG. To evaluate the performance of each feature descriptor, the three feature vectors are given to three different GRNNs. The training time of the three GRNNs is calculated and shown in Table I.

Table I: Training time (in sec) using GRNN for graphical passwords

No. of Participants   GRNN with SURF   GRNN with LBP   GRNN with HOG   Total Images
20                    6.79             5.25            6.18            39
50                    8.89             7.34            8.32            107
100                   12.72            11.89           11.93           216

In the above table, using the SURF feature descriptor, the training time of the GRNN for 20 participants is 6.79 sec; using the LBP and HOG feature descriptors, the training times are 5.25 and 6.18 sec respectively. As the table shows, training on LBP features takes the least time of the three feature descriptors.

Three neural networks are designed for the three types of text passwords. The training time of the three GRNNs using the three text passwords (normal, mirror and shift) is computed and shown in Table II. The training times of the GRNN for the three feature descriptors over the registered participants are summarized in the following graph.

Table II: Training time (in sec) using GRNN for text passwords

No. of Participants   GRNN4 with Normal Text Password   GRNN5 with Mirror Text Password   GRNN6 with Shift Text Password
20                    1.34                              1.31                              1.56
50                    1.43                              1.41                              1.66
100                   1.47                              1.45                              1.73

Fig. 3: Login Time of GRNN with Three Feature Descriptors

In the above graph, it is clear that the SURF feature descriptor takes more time to create the network than the other two feature descriptors, LBP and HOG.

• Login Time

The participants' login time was also recorded in order to evaluate the effectiveness of the proposed prototype. The login times for the three feature extraction methods are given in Table III. The average login time of the proposed prototype using the SURF feature descriptor is 48.15 sec; using the LBP feature descriptor, the mean is 46.95 sec; and with the HOG feature descriptor, the login time is 46.3 sec.

Table III: Login time of GRNN (Total No. of Participants = 20, Total Number of Images = 39)

Feature Descriptor   Total Login Time (sec)   Mean (sec)   Median (sec)   Standard Deviation
SURF                 963                      48.15        53             16.82
LBP                  939                      46.95        51             15.26
HOG                  926                      46.3         50             15.15
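Step 3 of the Graphical Password Verification Module above compares stored and recomputed feature vectors using cosine similarity, correlation and Euclidean distance. A minimal sketch of that three-way check (the tolerance and the sample vectors are assumptions for illustration):

```python
import numpy as np

def vectors_match(a, b, tol=1e-6):
    """Identical feature vectors give cosine similarity and correlation
    of 1 and a Euclidean distance approaching 0, as stated in Step 3 of
    the Graphical Password Verification Module."""
    cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    corr = np.corrcoef(a, b)[0, 1]       # Pearson correlation coefficient
    euclid = np.linalg.norm(a - b)       # Euclidean distance
    return bool(cosine > 1 - tol and corr > 1 - tol and euclid < tol)

a = np.array([0.12, 0.55, 0.90, 0.33])
print(vectors_match(a, a.copy()))                        # True
print(vectors_match(a, np.array([0.9, 0.1, 0.2, 0.7])))  # False
```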

• Storage Space Analysis

The proposed prototype requires very little space for accommodating graphical passwords. The comparison between storing an entire image in a database and storing the graphical password obtained by extracting features with each feature descriptor is shown in Table IV.

Table IV: Storage space analysis

Participants   Storage space for graphical       After applying feature extraction     Total Number
               passwords in a database (KB)      techniques using GRNN (KB)            of Images
                                                 SURF     LBP     HOG
20             612                               49       40      44                   39
50             1731                              118      95      104                  107
100            3502                              221      178     195                  216

Table V: Standard error rate measures

Feature Descriptor   MSE          R value   RMSD         NRMSD        MAPE
SURF                 1.5050e-09   0.9999    3.8795e-05   0.0016       0.1290
LBP                  0.1022       1.0000    0.3196       3.1215e-04   0.9576
HOG                  1.0543e-06   1.0000    0.0010       0.0015       0.1481
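The claimed space saving can be checked directly from the Table IV figures with a quick arithmetic sketch (variable names are illustrative):

```python
# Storage in KB taken from Table IV: raw images vs. extracted feature vectors.
raw = {20: 612, 50: 1731, 100: 3502}
features = {
    "SURF": {20: 49, 50: 118, 100: 221},
    "LBP":  {20: 40, 50: 95,  100: 178},
    "HOG":  {20: 44, 50: 104, 100: 195},
}

ratios = {(name, n): kb[n] / raw[n] for name, kb in features.items() for n in raw}
for (name, n), r in sorted(ratios.items()):
    print(f"{name}, {n} participants: {r:.1%} of raw image storage")

# Every descriptor stays below roughly 10% of the raw image storage,
# consistent with the figure quoted in the text below.
assert max(ratios.values()) < 0.10
```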

The total number of images chosen for creating graphical passwords by the 20 participants is 39, and the storage space for storing these entire graphical passwords (images) in a database is 612 KB. The above table clearly shows that the proposed graphical user authentication model accommodates graphical passwords in far less space by using feature descriptors and GRNN: the feature descriptors utilize only about 10% of the space of the images actually stored in a database.

Three measures were implemented to evaluate the performance: Mean Squared Error (MSE), R (regression) value, Root Mean Square Deviation (RMSD), Normalized Root-Mean-Square Deviation (NRMSD) and Mean Absolute Percentage Error (MAPE), calculated and shown in Table V. In Table V, it is clear that the MSE, RMSD, NRMSD and MAPE are very low and the R value is close to 1 using GRNN, so satisfactory results were obtained. The following section describes common security attacks against the proposed hybrid graphical password authentication model.

• Shoulder Surfing

When authentication systems are placed in public places, shoulder surfing attacks are quite common: people may capture the password by direct observation, and there is also the chance that an entire authentication session is recorded. In the proposed system, it is very hard to log in even if the entire session is recorded.

session. The reason is images are shuffled in a grid and also image neural memory models,” International Journal of Advanced
transformations also applied on the images in a grid. During login Information Technology (IJAIT), vol. 2, no. 1, pp. 75–85, 2012.
[5] Vachaspati, Pranjal, A. S. N. Chakravarthy, UCEV and Vizianagaram.
session the user has to enter text passwords which are changed “A Novel Soft Computing Authentication Scheme for Textual and
dynamically based on the image transformations. Hence, the Graphical Passwords.” (2013).
proposed system provides security against shoulder surfing attack. [6] Specht D (1991) A general regression neural network. IEEE Trans
 Dictionary Attacks
In general, an attacker can easily guess a textual password by using a dictionary attack, whereas in the case of graphical password authentication the password cannot be guessed in this way. The proposed system uses graphical password selection as the primary authentication method; only after that is text password authentication performed. Moreover, these text passwords are not stored anywhere in the database. Dictionary attacks are therefore completely infeasible, because no pre-existing information about the graphical and text passwords is available.
 Spyware Attack
The proposed system protects against spyware attacks because graphical password recognition is the preliminary step, and usernames and user credentials are not stored anywhere in the database. An attacker can succeed only if he knows that the passwords exist and where they are stored, so it is almost impossible to break the system with spyware alone, and attempting to do so imposes considerable time and cost overhead on the attacker.
7. Conclusion
In this paper, the proposed graphical password authentication system utilizes image transformations together with three feature extraction techniques: SURF, LBP and HOG. A General Regression Neural Network (GRNN) is adopted for graphical password authentication: three GRNNs are developed, one for each of the three feature descriptors. The response times of the three trained GRNNs for graphical passwords are measured and compared across the three feature descriptors. Three types of text passwords (normal, mirror and shift) are also trained using GRNN, and their response times are likewise computed. The performance of the GRNN is measured in terms of various error metrics, and satisfactory results are obtained. The usability and security features of the system are also analysed and presented in this paper. The security analysis of the proposed general regression neural network approach for the image transformation based hybrid graphical password authentication system shows that the system is robust against shoulder surfing, brute force, dictionary and spyware attacks.
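The GRNN at the core of the system is a one-pass kernel regressor [6]: a prediction is a Gaussian-weighted average of the stored training targets. A minimal sketch of this prediction rule, assuming feature descriptors are already reduced to fixed-length numeric vectors and σ is hand-picked, is:

```python
import numpy as np

class GRNN:
    """Minimal General Regression Neural Network (Specht, 1991):
    prediction is a Gaussian-kernel weighted average of training targets."""

    def __init__(self, sigma=0.5):
        self.sigma = sigma

    def fit(self, X, y):
        # GRNN is a lazy learner: "training" just stores the samples.
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y, dtype=float)
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        # Squared Euclidean distances between queries and stored patterns.
        d2 = ((X[:, None, :] - self.X[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2 * self.sigma ** 2))
        return (w @ self.y) / w.sum(axis=1)

# Toy usage with made-up 2-D "feature descriptors".
X = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
y = [0.0, 1.0, 2.0]
model = GRNN(sigma=0.3).fit(X, y)
pred = model.predict([[1.0, 1.0]])
```

A small σ makes the network behave like nearest-neighbour lookup; a large σ smooths predictions over many stored patterns.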
References
[1] P. Baby Maruthi and K. Sandhya Rani, "Image Transformation Based Hybrid Graphical Password Authentication System", 2018 IADS International Conference on Computing, Communications & Data Engineering (CCODE), 7-8 February 2018. Available at SSRN: https://ssrn.com/abstract=3168339 or http://dx.doi.org/10.2139/ssrn.3168339
[2] A. S. N. Chakravarthy, P. S. Avadhani and P. E. S. N. Krishna Prasad, "A Novel Approach for Authenticating Textual or Graphical Passwords Using Hopfield Neural Network", Advanced Computing: An International Journal (ACIJ), vol. 2, no. 4, July 2011.
[3] A. S. N. Chakravarthy and P. S. Avadhani, "A Probabilistic Approach for Authenticating Text or Graphical Passwords Using Back Propagation", IJCSNS International Journal of Computer Science and Network Security, vol. 11, no. 5, May 2011.
[4] P. E. S. N. K. Prasad, A. S. N. Chakravarthy and B. D. C. N. Prasad, "Performance evaluation of password authentication using associative neural memory models", International Journal of Advanced Information Technology (IJAIT), vol. 2, no. 1, pp. 75-85, 2012.
[5] P. Vachaspati and A. S. N. Chakravarthy, "A Novel Soft Computing Authentication Scheme for Textual and Graphical Passwords", 2013.
[6] D. Specht, "A general regression neural network", IEEE Transactions on Neural Networks, vol. 2, no. 6, pp. 568-576, 1991.
[7] J. T. Pedersen, "Study group SURF: Feature detection & description", 2011.
[8] H. Bay, T. Tuytelaars and L. Van Gool, "Speeded-Up Robust Features (SURF)", Computer Vision and Image Understanding, Elsevier, 2008.
[9] M. Pietikäinen, A. Hadid, G. Zhao and T. Ahonen, "Local Binary Patterns for Still Images", Computational Imaging and Vision book series (CIVI, vol. 40), pp. 13-47.
[10] T. Ojala, M. Pietikäinen and T. Mäenpää, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971-987, 2002.
[11] M. Heikkilä, M. Pietikäinen and C. Schmid, "Description of interest regions with local binary patterns", Pattern Recognition, vol. 42, no. 3, pp. 425-436, 2009.
[12] A. I. Awad and M. Hassaballah, Image Feature Detectors and Descriptors: Foundations and Applications, Springer, 2016. doi:10.1007/978-3-319-28854-3
[13] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 886-893, 2005.