
International Journal of Image Processing and Visual Communication

ISSN (Online) 2319-1724: Volume 6, Issue 1, February 2019

Face Recognition Based on LBP of GLCM Symmetrical Local Regions
Narasimha Reddy B.V, Chidambaram A, K B Raja and Venugopal K R
University Visvesvaraya College of Engineering, Bangalore University, Bangalore, India
drnsreddy2020@gmail.com, chiduanjan@gmail.com

Abstract— Biometrics are used to identify human beings effectively for various applications. In this paper, we propose face recognition based on LBP of GLCM symmetrical local regions. The face image databases ORL, JAFFE and YALE are used to test the performance of the proposed method. The Gray Level Co-Occurrence Matrix (GLCM) technique is used with 135° orientation pixel pairs, with symmetry, to generate initial features. The Local Binary Pattern (LBP) is applied on the resized GLCM matrix to generate effective final features. The Euclidean Distance (ED) is used to compare the final features of database and test images for performance verification. It is observed that the proposed method performs better than the existing methods.

Keywords—Biometrics, Face Recognition, GLCM, LBP

1. INTRODUCTION

Biometrics is used to evaluate the characteristics of biometric traits in terms of features for the identification of human beings. Biometrics is broadly classified into two groups, viz., physiological and behavioral biometric traits. Parts of the human body such as the finger, palm, iris and face belong to the group of physiological biometrics. The characteristics of physiological traits remain almost constant for decades, hence identification of human beings is efficient and robust. Behavioral parameters of human beings such as signature, voice, keystroke and gait belong to the group of behavioral traits; they vary with time, circumstance and health conditions, hence identification of human beings is relatively less efficient and robust. Face recognition is a challenging task in biometrics, as the appearance of the same face looks different under variations in pose, expression, occlusion and illumination. Face recognition has attracted researchers in the past decade owing to its high potential in a vast range of applications such as surveillance, online identification of persons, access to electronic gadgets and digital marketing. Face images are identified by extracting features based on spatial domain, frequency domain and hybrid domain techniques. The similarities and differences in the features of face images are computed using distance formulae such as Euclidean distance, Hamming distance, Chi-square etc., and also using classifiers such as neural networks, support vector machines etc.

Contribution: In this paper, the initial features of face images are extracted using the GLCM technique. The LBP is applied on the resized GLCM matrix to obtain final features to recognize a person.

Organization: The literature survey on existing techniques of face recognition is given in section 2. The proposed model is discussed in section 3. In section 4, the proposed algorithm is explained. The performance analysis is discussed in section 5. The final concluding remarks are given in section 6.

2. LITERATURE SURVEY

Feiping Nie et al. [1] presented an effective optimization algorithm to solve a general L1-norm maximization problem and then proposed a robust principal component analysis with non-greedy L1-norm maximization. Depending upon the algorithm solving the L1-norm maximization problem, the projection directions are optimized. Compared with the greedy method, the robust principal component analysis with non-greedy L1-norm maximization is efficient. Practical results on real-world datasets such as JAFFE, UMIST, Yale, Coil20, Palm and USPS prove that the non-greedy method always evolves a better solution than the traditional greedy technique. Young Woong Park and Diego Klabjan [2] presented algorithms to minimize the L1-PCA problem on the fitting error of the reconstructed data. An exact reweighted algorithm is developed first, the approximation algorithm is then developed based on eigenpair approximation, and at the final stage the approximation is extended to the stochastic singular value decomposition. In order to minimize the reconstruction error, the IRLS algorithm is used to show the convergence of the eigenvalues, and to speed up the algorithm for large-scale data the approximated IRLS algorithm is used. Approximated IRLS with stochastic SVD decreases the size of the matrix and recovers the output by decomposition.

http://www.ijipvc.org/IJIPVCV6I101.html 1
The experimental results show that the three algorithms perform well, and scalability is enhanced by the use of eigenpair approximation and stochastic singular value decomposition. Qiang Yu et al. [3] proposed DiaPCA with a non-greedy L1-norm maximization algorithm that is more robust to outliers, since the greedy technique for L1-norm maximization has the problem of the projection vectors getting stuck. To overcome this problem, the non-greedy algorithm is implemented with Diagonal PCA. In DiaPCA the original images are transformed into diagonal images, and eigendecomposition is then used to get the optimized projection vectors. It preserves and retains the image characteristics and is more robust to outliers than the L2-norm based methods. The method is proposed for feature extraction and face recognition, and the performance of the DiaPCA-L1 non-greedy method is greater than that of the conventional DiaPCA, 2DPCA and 2DPCA-L1. Practical results on the Yale and ORL databases verify the algorithms successfully.

Irfan Ali Tunio et al. [4] proposed a password-based authentication system and used the MATLAB software for face recognition. The database consists of a group of students with 50 facial images. The system uses the eigenvalue method along with Principal Component Analysis, and the implementation is done using hardware. In the approach, the input image is taken and processed for normalization parameters, eigenvalue decomposition is applied, and the output image is obtained. The hardware consists of a green LED for access allowed and a red LED for access refused. This system can be used in offices and colleges for security reasons. Since MATLAB cannot handle larger databases, the C++ platform is used to overcome the problem. Xin Liu et al. [5] proposed an open-source face recognition method called VIPLFaceNet, consisting of a seven-layer convolutional neural network: seven convolution layers and three fully connected layers. VIPLFaceNet has features such as a 9×9 size for the convolution layer, removal of all local normalization layers, and decomposition of the 5×5 AlexNet layers into two 3×3 convolution layers. Compared with AlexNet, VIPLFaceNet requires 20% of the training time and 60% of the testing time, and reduces the error rate by up to 40% in face recognition on LFW. VIPLFaceNet reaches 98.60% accuracy with a single network. Written fully in C++, the VIPLFaceNet SDK is released under the BSD license, and the algorithm is used for research purposes in academic as well as industrial domains under real-time scenarios. Rachid Ahdid et al. [6] proposed an automatic 3-dimensional face recognition system, using the 3D fast marching algorithm to obtain iso-geodesic curves of human face surfaces and Riemannian geometry to calculate the geodesic distance between a pair of facial curves. The reference point is calculated by using the nose tip in the 3D face. The Fast Marching method is a technique that brings a solution to the boundary-related problems of the Eikonal equation. Classification algorithms such as K-Nearest Neighbor (KNN), Neural Networks (NN) and Support Vector Machines (SVM) are implemented. Fiqri Malik Abdul Azis et al. [7] presented a system which can recognize the human face correctly in darkness. In the absence of light it is very difficult to recognize different human faces, so the accuracy of the system decreases; to repair this drawback, image enhancements that can improve the picture quality are applied, using techniques such as Histogram Equalization, Contrast Limited Adaptive Equalization and Local Enhancement. An eigenface method which uses Principal Component Analysis is used for recognition.

Qin-Qin Tao et al. [8] proposed an algorithm called Locality-sensitive Support Vector Machine using Kernel Combination (LS-KC-SVM). Each local region of the image is represented by a local model to achieve small within-class variation; the variations can be posture of the face, dark and bright illumination, and exaggerated expression. The local features are more competent than the global features. The Convolutional Neural Network (CNN) is capable of using multiple local networks to learn local face features, since CNN has self-learning characteristics. To make use of the property of local features, global and local kernels are applied to the features, thus introducing the combining kernel into the LSSVM. Experiments show that the method is superior to existing face recognition techniques on databases such as the CMU+MIT and FDDB datasets. Ranokmon Rujirakul and Chakchai So-In [9] presented an approach for expressive face recognition with artificial intelligence. Three stages are used in the system: Histogram Equalization, Principal Component Analysis and Extreme Learning Machine. For preprocessing, the histogram curve of the image is adjusted. In the second stage, a deep learning artificial intelligence with Principal Component Analysis is applied for feature extraction, and an Extreme Learning Machine is adapted as the baseline classifier. This combination can attain high accuracy with a proportional time-complexity trade-off. The well-known LFW and KDEF databases are chosen to verify the underlying method against a conventional PCA with various classification techniques such as PCANet. In the final investigational results, the projected method outperforms the traditional method. Ejaz Ul Haq et al. [10] proposed a model which operates by taking different sample images, extracting Local Binary Patterns, and building histogram curves for the input images. The extraction includes preprocessing and LBP extraction of the face images; the next stage includes learning methods, and an SVM classifier is used to classify the facial images. Taking the experimental results into account, their HE-Deep PCA-ELM achieves better results with an average of 83% and takes 208 s of computation time; therefore using HE-Deep learning methods can improve facial recognition. Pradipta K. Banerjee and Asit K. Datta [11] presented a band-pass

correlation filter in the frequency domain for facial recognition under dark illumination and noisy environments. The band-pass filter is designed through the selection of a high-pass filter and a continuous wavelet filter, and the operating range is chosen for the wavelet filter. The filter is used to remove the unwanted frequencies and smoothen the image peaks. The Band-Pass Correlation Filter shows a unique and sharp peak at the correlation of YaleB and PIE faces. The correlation filter is able to achieve high accuracy on both the correlation planes and the ROC curves of images under a noisy environment. The PSR values are significantly higher after implementing the BPCF compared to standard high-pass filters.

Deokwoo Lee and Hamid Krim [12] proposed a system which can reduce an image from high dimension to lower dimension by using the geodesic distance, with the coefficients obtained using the Fourier Transform. The geodesic distance can be calculated between the reference point and any chosen point, and these distances can be built into a matrix. Thus the 3D image is converted to a 1D representation without losing the original data of the image, for easy computation and analysis. As the Fourier Transform is applied, it gives the pixel values in terms of coefficients, using sampling internally. A larger number of coefficients leads to a better face recognition ratio. The number of sample points on each curve is large enough, so the experimental results reveal that the Fourier domain provides higher accuracy than the conventional methods. Shailaja A. Patil and Pramod J. Deore [13] presented a combination of Local Binary Pattern (LBP) and Independent Component Analysis (ICA) to overcome challenges existing in face recognition. The challenges are pose variations, illumination variation and facial expressions; these factors severely cause problems in achieving face detection. Three images are used in the training database, and the remaining images are used for testing. The main characteristic of the Local Binary Pattern is a binary code describing the local texture pattern, built by thresholding a neighborhood by the gray value at the centre. The unwanted features are reduced by independent component analysis, which extracts the non-Gaussian components, and Euclidean Distance is used for matching the values of the test and training databases. Jin Liu et al. [14] presented a Multi-Directional Local Binary Pattern (MDLBP) used to improve the performance of face recognition. MDLBP calculates the difference between average pixel values of blocks, substituting for the single pixel values. This technique is also used to obtain a vector of the spatial information of the image, and it reduces the image dimension. The method is applied on databases such as CMU PIE and extended Yale B. MDLBP retains greater accuracy in face recognition even under variations of illumination.

Hae-Min Moon et al. [15] proposed a method which can perform face recognition at longer distances. As the distance increases, the recognition rate decreases; the method resolves the change in the recognition rate resulting from distance change in long-distance recognition. Face images are captured at various distances from 1 m to 9 m, and the face size is then reduced by bilinear interpolation. The background illumination is adjusted by histogram equalization, and a Convolutional Neural Network (CNN) is then applied to extract the features. The CNN proposed in the system consists of two convolution layers and two sub-sampling phases, finally obtaining the feature vector. In the feature matching stage, the Euclidean distance is used to calculate the distance between the trained and test face features. Alaa Eleyan and Hasan Demirel [16] proposed a system for automatic facial recognition using Haralick and GLCM features. The two methods can extract the features for face classification: in the first method the Haralick features are obtained from the GLCM, and in the second method the GLCM is converted to a feature vector directly. For feature matching, nearest neighbor and neural network classifiers are used. From the practical performance results, the GLCM method is superior to principal component analysis, linear discriminant analysis, local binary patterns and Gabor wavelets. The ORL, FERET, FRAV2D and Yale B databases have shown very good results using the GLCM method with different numbers of levels.

3. PROPOSED MODEL

In this section, the face recognition model using GLCM, with LBP applied on local regions of the resized GLCM, is used to extract effective features for better performance of the proposed method. The block diagram of the proposed model is shown in Figure 1.

Fig 1 Proposed model

3.1 Face databases:
The standard available databases ORL, JAFFE and YALE are used to test the performance of the proposed model.

3.1.1 ORL Database [17]:
The database stands for the Olivetti and Oracle Research Laboratory database. The ORL database consists of a set of face images taken at the lab between April 1992 and April 1994. Images of 40 persons were taken, with 10 different images of each person. All images were taken at different times, with varying facial expressions and varying lighting, against a dark homogeneous background. The faces are in a frontal view, in an upright position, with slight left-right rotation. There are a total of 400 images of size 112×92 with 256 gray levels, and the image files are in jpg format. The face image samples of a person are shown in Figure 2.

Fig 2 Samples of ORL face database

3.1.2 JAFFE Database [18]:
There are 23 different images of each of 10 distinct subjects or persons. All images were taken in an upright, frontal position against a white homogeneous background. The image files are in TIFF format. The size of each image is 256×256 pixels with 256 grey levels per pixel; hence the database contains a total of 230 grayscale images. Each image of a person has a different facial expression or configuration: center-light, w/glasses, happy, left-light, w/no glasses, normal, right-light, sad, sleepy, surprised, and wink. The face image samples of a person are shown in Figure 3.

Fig 3 Sample of JAFFE face database

3.1.3 YALE Database [19]:
The database comprises the YALE face data. Images of 15 persons were captured, with 11 unique images per person. Each person's image was taken in an upright, frontal position against a white homogeneous background. The image files are in JPEG format. The size of each image is 320×243 pixels, i.e., a width of 320 pixels and a height of 243 pixels. Each image has a different facial expression or configuration: center-light, happy, left-light, with glasses, without glasses, normal, right-light, sad, sleepy, surprised, and wink. The face images of a person are shown in Figure 4.

Fig 4 Sample of YALE face database

3.2 Gray Level Co-Occurrence Matrix:

Texture is one among the essential attributes used in distinguishing objects or regions of interest in an image. Texture contains important information about the structural arrangement of surfaces. Textural features based on gray-tone spatial dependencies have general applicability in image classification. The three elementary model elements used for human understanding of an image are spectral, textural and contextual features. Spectral features describe the average tonal variations in various ranges of the visible and infrared parts of the spectrum. Textural features of an image are characterized by variations in pixel values in relation to different patterns of pixels. The textural features presented by Haralick et al. [20] contain information regarding image texture attributes like homogeneity, gray-tone linear dependencies, brightness, the range and nature of the regions present, and also

the complexity of the image. Contextual characteristics consist of information derived from blocks of pictorial data surrounding the region being inspected. The texture attributes of images are obtained and placed in a matrix using the variance and difference-of-entropy information.

A co-occurrence matrix G is defined on an n × m image I, parameterized by an offset (Δx, Δy), as given in Equation 1:

G(u, v) = Σ_{x=1..n} Σ_{y=1..m} { 1, if I(x, y) = u and I(x+Δx, y+Δy) = v; 0, otherwise }    (1)

where the offset (Δx, Δy) indicates the distance between the pixel of interest and its neighbor. The offset parameters (Δx, Δy) make the co-occurrence matrix sensitive to orientation, and different offsets can be chosen to achieve a given orientation. The offsets {[0 1], [-1 1], [-1 0], [-1 -1]} correspond to the 0°, 45°, 90° and 135° angles of orientation.

The value u indicates the initial pixel, and the values v are computed by taking the suitable offset in order to obtain the pixels of interest. The two summations, which vary for x from 1 to n and for y from 1 to m, are used for the computation of the pixels of interest in both rows and columns of size n and m.

The method was first established to utilize co-occurrence probabilities, using the co-occurrence matrix for obtaining a range of texture attributes. The proposed technique computes the matrix of an image using a distance parameter d with orientation θ. Take the 4×4 test image matrix given in Figure 5, with four intensity values 1 to 4. A general GLC-Matrix for the image is presented in Figure 6, where #(u, v) indicates the number of repetitions of gray levels u and v as neighbors fulfilling the condition stated by the distance parameter.

4 1 2 1
1 4 1 1
2 3 4 1
3 1 4 2

Fig 5 Test image matrix

Gray tone   1        2        3        4
1           #(1,1)   #(1,2)   #(1,3)   #(1,4)
2           #(2,1)   #(2,2)   #(2,3)   #(2,4)
3           #(3,1)   #(3,2)   #(3,3)   #(3,4)
4           #(4,1)   #(4,2)   #(4,3)   #(4,4)

Fig 6 General representation of GLCM

(a) θ=0°        (b) θ=45°
1 1 0 2         2 0 0 1
1 0 1 0         0 0 0 1
1 0 0 1         1 0 1 0
3 1 0 0         2 1 0 0

(c) θ=90°       (d) θ=135°
2 1 1 1         2 2 0 0
2 0 0 0         0 0 0 1
0 1 0 1         1 0 0 0
2 0 0 1         0 0 1 2

Fig 7 GLCM matrices for different angles

The four GLCM matrices for the angles 0°, 45°, 90° and 135° of the test matrix shown in Figure 5 are shown in Figure 7. The zero-degree GLCM matrix is obtained by considering the frequency of pixel pairs in the horizontal direction. The 45° GLCM matrix is formed by considering the frequency of pixel pairs at 45°. The 90° and 135° GLCM matrices are obtained from the frequency of pixel pairs at 90° and 135° respectively.

As an example for the 0° GLCM, the pixel pair (4,1) appears in the 1st, 2nd and 3rd rows of the test matrix, i.e., 3 times; hence 3 is filled in the 4th row, 1st column of the 0° GLCM matrix. Similarly, the pixel pair (1,1) appears only once, in the 2nd row of the test matrix; hence 1 is filled in the 1st row, 1st column of the 0° GLCM matrix, and this is continued to cover all pixel pairs in the horizontal direction.
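The construction of Equation 1 and the matrices of Figure 7 can be reproduced with a short NumPy sketch. This is a minimal illustration written for this article, not the authors' code; the function name and the (row, column) offset convention are our own:

```python
import numpy as np

def glcm(img, drow, dcol, levels):
    """Co-occurrence matrix per Equation 1: G[u, v] counts pixel pairs
    where I[r, c] == u and I[r + drow, c + dcol] == v (1-based gray levels)."""
    g = np.zeros((levels, levels), dtype=int)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + drow, c + dcol
            if 0 <= r2 < rows and 0 <= c2 < cols:
                g[img[r, c] - 1, img[r2, c2] - 1] += 1
    return g

# Test image of Figure 5.
test = np.array([[4, 1, 2, 1],
                 [1, 4, 1, 1],
                 [2, 3, 4, 1],
                 [3, 1, 4, 2]])

g0 = glcm(test, 0, 1, 4)      # offset [0 1]:  0°, horizontal right neighbour
g135 = glcm(test, -1, -1, 4)  # offset [-1 -1]: 135°, up-left neighbour
g_sym = g135 + g135.T         # symmetric 135° variant, as used in section 3.2.1
```

Running this, `g0` reproduces Figure 7(a) and `g135` reproduces Figure 7(d); adding the transpose gives the symmetric matrix in which the pairs (u, v) and (v, u) both accumulate.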
3.2.1 GRAY LEVEL CO-OCCURRENCE MATRIX WITH 135° AND SYMMETRICITY:

The pixel pairs at an angle of 135°, with symmetry, are considered to extract the initial features in the proposed method. A 4×5 matrix is considered and converted into an 8×8 GLCM 135° symmetric matrix, as shown in Figure 8. The original matrix is shown in Figure 8(a) and the corresponding GLCM in Figure 8(b). The pixel pair (1,3) at an angle of 135° occurs only once in the matrix; hence the value 1 is filled in the 1st row, 3rd column, and also in the 3rd row, 1st column, because of the symmetry condition. The pixel pair (7,2) at an angle of 135° appears twice; hence the value 2 is filled in the 7th row, 2nd column, and also in the 2nd row, 7th column, as shown in Figure 8(b). Similarly, all pairs at an angle of 135° are mapped, with symmetry, to generate the GLCM matrix. The process continues till all pixel pairs at an angle of 135° are exhausted.

Fig 8 GLCM conversion

The GLCM on an image of the ORL database with 135° symmetry is demonstrated. A single image of the ORL face database, shown in Figure 9, is considered, and GLCM is applied on this image to obtain the corresponding GLCM image and matrix shown in Figures 10 and 11 respectively.

Fig 9 Single image (dimension 112×92)

Fig 10 Image obtained after applying GLCM

262  380   27   33   12    3    1  0
380 3068  312   23   14   11    3  0
 27  312  834  231   55   27    3  0
 33   23  231  388  265   78    8  0
 12   14   55  265 1534  700   29  1
  3   11   27   78  700 6390  606  3
  1    3    3    8   29  606 2070  3
  0    0    0    0    1    3    3  0

Fig 11 Matrix obtained after applying GLCM

3.2.5 Local Binary Pattern
The technique [21] is extensively used in two-dimensional texture examination, such as identification of biometric traits, and in a few other image processing applications. A 3×3 portion of an image is considered and LBP is applied on the neighborhood geometry of the image. Each pixel intensity value of the image is replaced by a new value based on the binary values of the eight adjoining pixels. The pixel considered as the centre pixel (Xc, Yc), surrounded by eight pixels, is replaced by a new decimal value based on the eight surrounding binary values obtained on comparison with the initial centre pixel intensity value. A 3×3 matrix is considered and the LBP technique is used to replace the centre pixel value by a new value based on the values of the surrounding pixels, as shown in Figure 12.

Zs1 Zs2 Zs3        137 135 115
Zs8  Zc Zs4         99  82  79
Zs7 Zs6 Zs5         70  54  45

(a) 3×3 matrix     (b) Pixel intensity values

B7 B6 B5           1 1 1
B0    B4           1   0
B1 B2 B3           0 0 0

(c) Binary matrix  (d) Binary equivalent values

Fig 12 LBP operation

The 3×3 matrix and its corresponding pixel intensity values are shown in Figures 12 (a) and (b). The binary 3×3 matrix and its corresponding binary equivalent values are shown in Figures 12 (c) and (d). The centre pixel intensity value is Zc, and the surrounding pixel intensity values are represented by Zsp, where p varies from 1 to 8. The binary equivalents of the surrounding pixels are represented by Bp, based on Equation 2:

Bp = { 1, if Zsp ≥ Zc; 0, if Zsp < Zc }    (2)

The binary equivalent values are converted into a decimal value by assigning a weight to each bit, and the computed decimal value is assigned to the centre pixel using Equation 3:

Dc = Σ_{p=0..7} Bp·2^p    (3)

where Dc is the centre pixel value after the LBP technique.

Dc = B0·2^0 + B1·2^1 + B2·2^2 + B3·2^3 + B4·2^4 + B5·2^5 + B6·2^6 + B7·2^7
   = 1×2^0 + 0 + 0 + 0 + 0 + 1×2^5 + 1×2^6 + 1×2^7
   = 1 + 32 + 64 + 128
   = 225

The original centre pixel intensity value Zc = 82 is replaced by the LBP value Dc = 225, which is based on the surrounding pixel intensity values. Similarly, all pixel intensity values in an image are replaced by their LBP equivalent values to obtain the texture features of the image. The visualization of LBP on the GLCM is shown in Figure 13.

Fig 13 Image obtained after applying LBP on GLCM

3.6 EUCLIDEAN DISTANCE (ED):
The ED is used to calculate the similarity between the database images and the test images; if the ED is smaller than the threshold, the test image matches the database image and the recognition is computed. If p = (p1, p2, …, pM) and q = (q1, q2, …, qM) are the feature vectors of two images, then the ED is computed using Equation 4:

ED = √( Σ_{i=1..M} (pi − qi)² )    (4)

where M = number of coefficients in a vector, pi = coefficient values of the vectors in the database, and qi = coefficient values of the vectors of the test image.

4. PROPOSED ALGORITHM:

Problem Definition: The face images are recognized to verify a person for security purposes. The proposed algorithm depends on GLCM and LBP analysis to recognize individuals successfully.

Objectives: The face recognition algorithm is developed with the following objectives:
(i) To increase the maximum and optimum TSR values.
(ii) To reduce the values of FRR, FAR and EER.

The algorithm is developed by applying LBP analysis on the GLCM matrix to obtain better performance parameters; the algorithm is provided in Table 1.

Table 1 Proposed Algorithm

Input: Standard databases of face images.
Output: Computation of performance parameters.

1. The face images are considered from various available standard face databases.
2. The face images are converted from RGB values to gray intensity scale.
3. The face image of each person is considered and Graycomatrix is applied.
4. The Graycomatrix function results in the gray level co-occurrence matrix with 135° orientation, symmetricity and an 8×8 matrix size.
5. The 8×8 matrix is resized to a 90×90 matrix for segmentation into 3×3 matrices.
6. The LBP technique is applied on the 90×90 GLCM matrix, which results in an 88×88 feature matrix.
7. In the test section, the image to be tested is converted from RGB to gray intensity values.
8. The procedure is repeated from step 3 to step 6 to obtain the LBP feature matrix.

9. The ED is used to compare the final LBP features of the database and test images to compute the performance parameters.

5. PERFORMANCE ANALYSIS:

In this section, the definitions of the performance parameters used to evaluate the proposed method, and the performance evaluation of the proposed approach using various face databases, are discussed.

5.1 Definitions of performance parameters:

The parameters used to test the proposed method, such as error rates and success rates, are explained in this section.

5.1.1 False Rejection Rate (FRR):

The number of genuine persons rejected as persons not belonging to the database. It is defined as the ratio of the total number of genuine persons rejected to the total number of Persons Inside the Database (PID), as given in Equation 5:

%FRR = (Total number of genuine persons rejected / Total number of persons in PID) × 100    (5)

5.1.2 False Acceptance Rate (FAR):

The number of intruders accepted as genuine persons. It is defined as the ratio of the number of intruders accepted to the total number of Persons Outside the Database (POD), as given in Equation 6:

%FAR = (Number of intruders accepted / Number of persons in POD) × 100    (6)

5.1.3 Equal Error Rate (EER):

Ideally, the FRR is one when the FAR is zero, and the FRR is zero when the FAR is one. As the FRR decreases, the FAR increases; for an FRR of zero the corresponding TSR is 100%, but the FAR is also 100%, which is a disadvantage. The operating point is selected such that both FRR and FAR are minimum, i.e., at the intersection of the FAR and FRR curves for a particular threshold; this is called the EER, as given in Equation 7:

EER = FRR = FAR    (7)

5.1.4 Total Success Rate (TSR):

The number of persons identified correctly in the database; it is defined as the ratio of the number of persons identified correctly to the total number of persons in the PID, as given in Equation 8:

%TSR = (Number of persons identified correctly / Total number of persons in PID) × 100    (8)

5.1.5 Optimum TSR:

The value of TSR corresponding to the EER, for which both FRR and FAR are minimum.

5.2 Performance evaluation of the proposed approach using various face databases:

The face recognition investigations to assess the proposed approach using the ORL, JAFFE and YALE face databases are described in the following sections.

5.2.1 Results using ORL face database:

(i) Performance evaluation with a PID and POD combination of 10:10

The variations of FRR, FAR and TSR for different threshold values for the PID and POD combination of 10:10 are tabulated in Table 2 and plotted in Figure 14.

Table 2: The variations of performance parameters such as FAR and FRR for different values of threshold with PID and POD of 10:10.

THRESHOLD  %TSR  %FRR  %FAR
0          0     1     0
0.1        40    0.6   0
0.2        80    0.2   0
0.3        100   0     0.1
0.4        100   0     0.6
0.5        100   0     0.9
0.6        100   0     1
0.7        100   0     1
0.8        100   0     1
0.9        100   0     1
1          100   0     1
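The EER of Equation 7 is the crossing point of the FRR and FAR curves; from a threshold sweep such as Table 2 it can be estimated by locating where FRR − FAR changes sign and interpolating between the two bracketing thresholds. The linear interpolation below is our own assumption (the paper does not state how the EER is read off the curves):

```python
import numpy as np

# Threshold sweep from Table 2 (ORL, PID:POD = 10:10), FRR and FAR as fractions.
thr = np.array([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
frr = np.array([1, 0.6, 0.2, 0, 0, 0, 0, 0, 0, 0, 0])
far = np.array([0, 0, 0, 0.1, 0.6, 0.9, 1, 1, 1, 1, 1])

def eer(thr, frr, far):
    """Linearly interpolate the crossing of the FRR and FAR curves (Equation 7)."""
    d = frr - far
    i = np.where(np.diff(np.sign(d)) != 0)[0][0]  # first sign change of FRR - FAR
    t = d[i] / (d[i] - d[i + 1])                  # fractional position of the crossing
    eer_thr = thr[i] + t * (thr[i + 1] - thr[i])
    eer_val = frr[i] + t * (frr[i + 1] - frr[i])
    return eer_thr, eer_val

t, e = eer(thr, frr, far)
```

With the Table 2 values this interpolation places the crossing near a threshold of 0.27; the exact EER read from the plotted curves depends on the granularity of the sweep, so it may differ slightly from a value estimated this way.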


Fig 14: Variations of performance parameters with PID and POD of 10:10.

The values of TSR and FAR increase to their maximum values as the threshold increases, whereas the values of FRR decrease with increasing threshold. It is observed that the maximum percentage values of TSR, FRR and FAR are 100%. The EER and optimum TSR values are 8% and 95% respectively.

(ii) Performance Evaluation with PID and POD combination of 10:20

The variations of FRR, FAR and TSR for different threshold values for the PID and POD combination of 10:20 are tabulated in Table 3 and plotted in Figure 15.

Table 3: The variations of performance parameters such as FAR and FRR for different values of threshold with PID and POD of 10:20.

THRESHOLD   %TSR   %FRR   %FAR
0              0    1      0
0.1           40    0.6    0
0.2           80    0.2    0
0.3          100    0      0.1
0.4          100    0      0.6
0.5          100    0      0.8
0.6          100    0      0.85
0.7          100    0      0.95
0.8          100    0      1
0.9          100    0      1
1            100    0      1

Fig 15: Variations of performance parameters with PID and POD of 10:20.

The values of TSR and FAR increase to their maximum values as the threshold increases, whereas the values of FRR decrease with increasing threshold. It is observed that the maximum percentage values of TSR, FRR and FAR are 100%. The EER and optimum TSR values are 8% and 95% respectively.

(iii) Performance Evaluation with PID and POD combination of 10:30

The variations of FRR, FAR and TSR for different threshold values for the PID and POD combination of 10:30 are tabulated in Table 4 and plotted in Figure 16.

Table 4: The variations of performance parameters such as FAR and FRR for different values of threshold with PID and POD of 10:30.

THRESHOLD   %TSR   %FRR   %FAR
0              0    1      0
0.1           40    0.6    0
0.2           80    0.2    0
0.3          100    0      0.28
0.4          100    0      0.63
0.5          100    0      0.78
0.6          100    0      0.88
0.7          100    0      0.98
0.8          100    0      1
0.9          100    0      1
1            100    0      1
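The EER values reported in these sections are read from the crossing point of the FRR and FAR curves. One way to locate that crossing between sampled thresholds is linear interpolation; the sketch below is a generic helper with hypothetical curve samples, not the authors' procedure.

```python
# Locate the FRR/FAR crossing (EER) between two adjacent sampled thresholds
# by linear interpolation, assuming FRR is non-increasing and FAR non-decreasing.
def equal_error_rate(frr, far):
    for i in range(len(frr) - 1):
        d0, d1 = frr[i] - far[i], frr[i + 1] - far[i + 1]
        if d0 >= 0 >= d1:              # sign change: the curves cross in this interval
            if d0 == d1:
                return frr[i]
            w = d0 / (d0 - d1)         # fractional position of the crossing
            return frr[i] + w * (frr[i + 1] - frr[i])
    return None

# Hypothetical curves sampled at thresholds 0, 0.1, 0.2, 0.3, 0.4 (as fractions):
frr = [1.0, 0.5, 0.2, 0.0, 0.0]
far = [0.0, 0.0, 0.1, 0.3, 0.6]
print(equal_error_rate(frr, far))  # the crossing lies between the 0.2 and 0.3 samples
```

On coarsely sampled curves like the tables here, the interpolated value is only an approximation of the true EER.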

Fig 16: Variations of performance parameters with PID and POD of 10:30.

The values of TSR and FAR increase to their maximum values as the threshold increases, whereas the values of FRR decrease with increasing threshold. It is observed that the maximum percentage values of TSR, FRR and FAR are 100%. The EER and optimum TSR values are 12% and 90% respectively.

(iv) Performance Evaluation with PID and POD combination of 20:10

The variations of FRR, FAR and TSR for different threshold values for the PID and POD combination of 20:10 are tabulated in Table 5 and plotted in Figure 17.

Table 5: The variations of performance parameters such as FAR and FRR for different values of threshold with PID and POD of 20:10.

THRESHOLD   %TSR   %FRR   %FAR
0              0    1      0
0.1           40    0.6    0
0.2           75    0.2    0
0.3           95    0      0.3
0.4           95    0      0.6
0.5           95    0      0.7
0.6           95    0      0.8
0.7           95    0      1
0.8           95    0      1
0.9           95    0      1
1             95    0      1

Fig 17: Variations of performance parameters with PID and POD of 20:10.

The values of TSR and FAR increase to their maximum values as the threshold increases, whereas the values of FRR decrease with increasing threshold. It is observed that the maximum percentage values of TSR, FRR and FAR are 100%. The EER and optimum TSR values are 12% and 86% respectively.

(v) Performance Evaluation with PID and POD combination of 20:20

The variations of FRR, FAR and TSR for different threshold values for the PID and POD combination of 20:20 are tabulated in Table 6 and plotted in Figure 18.

Table 6: The variations of performance parameters such as FAR and FRR for different values of threshold with PID and POD of 20:20.

THRESHOLD   %TSR   %FRR   %FAR
0              0    1      0
0.1           40    0.6    0
0.2           80    0.2    0
0.3          100    0      0.28
0.4          100    0      0.63
0.5          100    0      0.78
0.6          100    0      0.88
0.7          100    0      0.98
0.8          100    0      1
0.9          100    0      1
1            100    0      1
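All of these tables rest on the same pipeline: a GLCM over 135°-oriented pixel pairs with the symmetry property, LBP applied to the resulting co-occurrence matrix, and Euclidean-distance matching. A rough sketch of that pipeline is given below; the 8-level quantization, the 3×3 LBP neighbourhood and the random images are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def glcm_135_symmetric(image, levels=8):
    """GLCM over 135-degree pixel pairs (offset (-1, -1)), counted symmetrically."""
    img = np.clip(image.astype(np.int64) * levels // 256, 0, levels - 1)
    glcm = np.zeros((levels, levels))
    for r in range(1, img.shape[0]):
        for c in range(1, img.shape[1]):
            i, j = img[r, c], img[r - 1, c - 1]  # pixel and its 135-degree neighbour
            glcm[i, j] += 1
            glcm[j, i] += 1                      # symmetry property
    return glcm

def lbp_features(mat):
    """Basic 3x3 LBP codes over the interior cells, flattened as a feature vector."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for r in range(1, mat.shape[0] - 1):
        for c in range(1, mat.shape[1] - 1):
            code = sum(1 << b for b, (dr, dc) in enumerate(offs)
                       if mat[r + dr, c + dc] >= mat[r, c])
            codes.append(code)
    return np.array(codes, dtype=float)

def euclidean(f1, f2):
    return float(np.sqrt(np.sum((f1 - f2) ** 2)))

# Identification: a test image takes the database identity with the smallest ED.
rng = np.random.default_rng(0)
db = {name: lbp_features(glcm_135_symmetric(rng.integers(0, 256, (64, 64))))
      for name in ("person_A", "person_B")}
probe = lbp_features(glcm_135_symmetric(rng.integers(0, 256, (64, 64))))
best = min(db, key=lambda name: euclidean(db[name], probe))
print(best, len(probe))  # nearest identity and the 36-element feature length
```

With 8 gray levels the GLCM is 8×8, so the interior-cell LBP yields a 36-element feature vector; thresholding the smallest distance then gives the accept/reject decision that the tables in this section sweep over.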
Fig 18: Variations of performance parameters with PID and POD of 20:20.

The values of TSR and FAR increase to their maximum values as the threshold increases, whereas the values of FRR decrease with increasing threshold. It is observed that the maximum percentage values of TSR, FRR and FAR are 100%. The EER and optimum TSR values are 13% and 82% respectively.

(vi) Comparison of performance parameters for different combinations of PID and POD

The variations of percentage EER, optimum TSR and maximum TSR for PID and POD combinations of 10:10, 10:20, 10:30, 20:10 and 20:20 are shown in Table 7.

Table 7: The percentage EER, percentage optimum TSR and percentage maximum TSR values for PID and POD combinations of 10:10, 10:20, 10:30, 20:10 and 20:20.

PID-POD Combinations   %EER   %Optimum TSR   %Maximum TSR
10:10                   08         95             100
10:20                   08         95             100
10:30                   12         90             100
20:10                   12         86              95
20:20                   13         82              95

5.2.2 Results using JAFFE face database:

(i) Performance Evaluation with PID and POD combination of 5:4

The variations of FRR, FAR and TSR for different threshold values for the PID and POD combination of 5:4 are tabulated in Table 8 and plotted in Figure 19.

Table 8: The variations of performance parameters such as FAR and FRR for different values of threshold with PID and POD of 5:4.

THRESHOLD   %TSR   %FRR   %FAR
0              0    1      0
0.1          100    0      0.25
0.2          100    0      0.75
0.3          100    0      0.75
0.4          100    0      1
0.5          100    0      1
0.6          100    0      1
0.7          100    0      1
0.8          100    0      1
0.9          100    0      1
1            100    0      1

Fig 19: Variations of performance parameters with PID and POD of 5:4.

The values of TSR and FAR increase to their maximum values as the threshold increases, whereas the values of FRR decrease with increasing threshold. It is observed that the maximum percentage values of TSR, FRR and FAR are 100%. The EER and optimum TSR values are 20% and 90% respectively.

(ii) Performance Evaluation with PID and POD combination of 5:5

The variations of FRR, FAR and TSR for different threshold values for the PID and POD combination of 5:5 are tabulated in Table 9 and plotted in Figure 20.
Table 9: The variations of performance parameters such as FAR and FRR for different values of threshold with PID and POD of 5:5.

THRESHOLD   %TSR   %FRR   %FAR
0              0    1      0
0.1          100    0      0.2
0.2          100    0      0.8
0.3          100    0      0.8
0.4          100    0      1
0.5          100    0      1
0.6          100    0      1
0.7          100    0      1
0.8          100    0      1
0.9          100    0      1
1            100    0      1

Fig 20: Variations of performance parameters with PID and POD of 5:5.

The values of TSR and FAR increase to their maximum values as the threshold increases, whereas the values of FRR decrease with increasing threshold. It is observed that the maximum percentage values of TSR, FRR and FAR are 100%. The EER and optimum TSR values are 18% and 91% respectively.

(iii) Performance Evaluation with PID and POD combination of 4:6

The variations of FRR, FAR and TSR for different threshold values for the PID and POD combination of 4:6 are tabulated in Table 10 and plotted in Figure 21.

Table 10: The variations of performance parameters such as FAR and FRR for different values of threshold with PID and POD of 4:6.

THRESHOLD   %TSR   %FRR   %FAR
0              0    1      0
0.1          100    0      0.18
0.2          100    0      0.83
0.3          100    0      0.83
0.4          100    0      1
0.5          100    0      1
0.6          100    0      1
0.7          100    0      1
0.8          100    0      1
0.9          100    0      1
1            100    0      1

Fig 21: Variations of performance parameters with PID and POD of 4:6.

The values of TSR and FAR increase to their maximum values as the threshold increases, whereas the values of FRR decrease with increasing threshold. It is observed that the maximum percentage values of TSR, FRR and FAR are 100%. The EER and optimum TSR values are 15% and 94% respectively.

(iv) Performance Evaluation with PID and POD combination of 3:7

The variations of FRR, FAR and TSR for different threshold values for the PID and POD combination of 3:7 are tabulated in Table 11 and plotted in Figure 22.

Table 11: The variations of performance parameters such as FAR and FRR for different values of threshold with PID and POD of 3:7.

THRESHOLD   %TSR   %FRR   %FAR
0              0    1      0
0.1          100    0      0.13
0.2          100    0      0.71
0.3          100    0      0.85
0.4          100    0      1
0.5          100    0      1
0.6          100    0      1
0.7          100    0      1
0.8          100    0      1
0.9          100    0      1
1            100    0      1

Fig 22: Variations of performance parameters with PID and POD of 3:7.

The values of TSR and FAR increase to their maximum values as the threshold increases, whereas the values of FRR decrease with increasing threshold. It is observed that the maximum percentage values of TSR, FRR and FAR are 100%. The EER and optimum TSR values are 13% and 90% respectively.

(v) Comparison of performance parameters for different combinations of PID and POD

The variations of percentage EER, optimum TSR and maximum TSR for PID and POD combinations of 5:4, 5:5, 4:6 and 3:7 are shown in Table 12.

Table 12: The percentage EER, percentage optimum TSR and percentage maximum TSR values for PID and POD combinations of 5:4, 5:5, 4:6 and 3:7.

PID-POD Combinations   %EER   %Optimum TSR   %Maximum TSR
5:4                     20         90             100
5:5                     18         91             100
4:6                     15         94             100
3:7                     13         90             100

5.2.3 Results using YALE face database:

(i) Performance Evaluation with PID and POD combination of 10:5

The variations of FRR, FAR and TSR for different threshold values for the PID and POD combination of 10:5 are tabulated in Table 13 and plotted in Figure 23.

Table 13: The variations of performance parameters such as FAR and FRR for different values of threshold with PID and POD of 10:5.

THRESHOLD   %TSR   %FRR   %FAR
0             10    0.9    0
0.1           70    0.3    0
0.2           90    0.1    0
0.3          100    0      0.4
0.4          100    0      0.6
0.5          100    0      0.8
0.6          100    0      0.8
0.7          100    0      0.8
0.8          100    0      0.8
0.9          100    0      1
1            100    0      1

Fig 23: Variations of performance parameters with PID and POD of 10:5.
The values of TSR and FAR increase to their maximum values as the threshold increases, whereas the values of FRR decrease with increasing threshold. It is observed that the maximum percentage values of TSR, FRR and FAR are 100%. The EER and optimum TSR values are 9% and 91% respectively.

(ii) Performance Evaluation with PID and POD combination of 5:10

The variations of FRR, FAR and TSR for different threshold values for the PID and POD combination of 5:10 are tabulated in Table 14 and plotted in Figure 24.

Table 14: The variations of performance parameters such as FAR and FRR for different values of threshold with PID and POD of 5:10.

THRESHOLD   %TSR   %FRR   %FAR
0             20    0.8    0
0.1           80    0.2    0
0.2          100    0      0.1
0.3          100    0      0.2
0.4          100    0      0.6
0.5          100    0      0.9
0.6          100    0      0.9
0.7          100    0      0.9
0.8          100    0      0.9
0.9          100    0      0.9
1            100    0      1

Fig 24: Variations of performance parameters with PID and POD of 5:10.

The values of TSR and FAR increase to their maximum values as the threshold increases, whereas the values of FRR decrease with increasing threshold. It is observed that the maximum percentage values of TSR, FRR and FAR are 100%. The EER and optimum TSR values are 8% and 93% respectively.

(iii) Performance Evaluation with PID and POD combination of 5:5

The variations of FRR, FAR and TSR for different threshold values for the PID and POD combination of 5:5 are tabulated in Table 15 and plotted in Figure 25.

Table 15: The variations of performance parameters such as FAR and FRR for different values of threshold with PID and POD of 5:5.

THRESHOLD   %TSR   %FRR   %FAR
0             20    0.8    0
0.1           80    0.2    0
0.2          100    0      0.2
0.3          100    0      0.2
0.4          100    0      0.6
0.5          100    0      1
0.6          100    0      1
0.7          100    0      1
0.8          100    0      1
0.9          100    0      1
1            100    0      1

Fig 25: Variations of performance parameters with PID and POD of 5:5.

The values of TSR and FAR increase to their maximum values as the threshold increases, whereas the values of FRR decrease with increasing threshold. It is observed that the maximum percentage values of TSR, FRR and FAR are 100%. The EER and optimum TSR values are 10% and 90% respectively.

(iv) Performance Evaluation with PID and POD combination of 8:7

The variations of FRR, FAR and TSR for different threshold values for the PID and POD combination of 8:7 are tabulated in Table 16 and plotted in Figure 26.

Table 16: The variations of performance parameters such as FAR and FRR for different values of threshold with PID and POD of 8:7.

THRESHOLD   %TSR   %FRR   %FAR
0             12    0.88   0
0.1           62    0.38   0
0.2           86    0.12   0.42
0.3          100    0      0.71
0.4          100    0      0.86
0.5          100    0      0.86
0.6          100    0      0.86
0.7          100    0      0.86
0.8          100    0      0.86
0.9          100    0      1
1            100    0      1

Fig 26: Variations of performance parameters with PID and POD of 8:7.

The values of TSR and FAR increase to their maximum values as the threshold increases, whereas the values of FRR decrease with increasing threshold. It is observed that the maximum percentage values of TSR, FRR and FAR are 100%. The EER and optimum TSR values are 10% and 91% respectively.

(v) Comparison of performance parameters for different combinations of PID and POD

The variations of percentage EER, optimum TSR and maximum TSR for PID and POD combinations of 10:5, 5:10, 5:5 and 8:7 are shown in Table 17.

Table 17: The percentage EER, percentage optimum TSR and percentage maximum TSR values for PID and POD combinations of 10:5, 5:10, 5:5 and 8:7.

PID-POD Combinations   %EER   %Optimum TSR   %Maximum TSR
10:5                    09         91             100
5:10                    08         93             100
5:5                     10         90             100
8:7                     10         91             100

5.2.4 Comparison of the proposed method with existing methods:

The performance of the proposed method is compared with the current methods presented by Alaa Eleyan et al. [16], Ganapathi V Sagar et al. [22], Pallavi D. Wadkar and Megha Wankhade [23] and Arindam Kar et al. [24]. It is observed that the percentage TSR of the proposed method is higher than that of the current methods, as the texture features are extracted using GLCM and LBP.

Table 18: Comparison of TSR values of the proposed method with current methods.

Sl   Technique          Database   %Maximum TSR
1    GLCM [16]           ORL        91.60
2    Conv+DWT [22]       ORL        93.33
3    PCA+2DPCA [23]      ORL        90.5

4    LBP [24]            ORL        87.8
5    PROPOSED METHOD     ORL       100

The proposed method is better for the following reasons:

1) The texture features are extracted using the GLCM technique with symmetric pairs at 135° to obtain effective initial features.

2) The final features are extracted by applying one more texture feature technique, LBP, on the GLC matrix to obtain much more effective features for the identification of face images.

6. CONCLUSION

Face recognition biometric algorithms are utilized as part of security applications. In this paper, we proposed face recognition based on GLCM and LBP analysis. The face images of various databases such as ORL, JAFFE and YALE are used. The GLCM technique is applied to the face images of each person, which yields the gray level co-occurrence matrix with 135° orientation and the symmetric property. LBP is then applied on the obtained 8×8 matrix to generate the feature vectors. In the test section, as in the database section, the GLCM function is applied on the image with 135° orientation and the symmetric property, and the LBP analysis is then applied on the resulting matrix to obtain the feature vectors. The features of the database and test images are compared using ED to compute the performance parameters. It is observed that the performance of the proposed method is better than that of the current methods. In future, combining different orientation angles of the pixel pairs of interest, varying the distance to the adjacent pixels, and using the Hamming distance for matching may further improve the face recognition rate.

7. REFERENCES

[1] Feiping Nie, Heng Huang, Chris Ding, Dijun Luo, and Hua Wang, "Robust Principle Component Analysis with Non-Greedy L1-Norm Maximization", International Joint Conference on Artificial Intelligence, vol.2, pp.1-19, July 2011.
[2] Young Woong Park and Diego Klabjan, "Three Iteratively Reweighted Least Squares Algorithms for L1-Norm Principle Component Analysis", Knowledge and Information Systems, Springer, vol.54, issue 3, pp.541-565, February 2017.
[3] Qiang Yu, Rong Wang, Xiaojun Yang, Bing Nan Li and Minli Yao, "Diagonal Principle Component Analysis with Non-Greedy L1-Norm Maximization for Face Recognition", Neurocomputing, vol.171, pp.57-62, June 2015.
[4] Irfan Ali Tunio, Shafiullah Soomro, Toufique Ahmed Soomro, Mohammad Tarique Bhatti and Mohsin Shaikh, "Face Recognition System by using Eigen Value Decomposition", International Journal of Computer Science and Network Security, vol.18, no.5, pp.8-12, May 2018.
[5] Xin Liu, Meina Kan, Wanglong Wu, Shiguang Shan, and Xilin Chen, "VIPL FaceNet: An Open Source Deep Face Recognition SDK", Higher Education Press and Springer-Verlag Heidelberg, vol.11, issue 2, pp.508-518, August 2016.
[6] Rachid Ahdid, Khaddouj Taifi, Said Said, Mohamed Fakir and Bouzid Manaut, "Automatic Face Recognition System using Iso-Geodesic Curves in Riemanian Manifold", International Conference on Computer Graphics, Imagination and Visualization, pp.72-77, 2017.
[7] Fiqri Malik Abdul Azis, Muhammad Nasrun, Casi Setianingsih and Muhammad Ary Murti, "Face Recognition in Night Day using Method Eigenface", International Conference on Signals and Systems, pp.103-108, 2018.
[8] Qin-Qin Tao, Shu Zhan, Xiao-Hong Li and Toru Kurihara, "Robust Face Detection using Local CNN and SVM based on Kernel Combination", Neurocomputing, pp.1-16, 2015.
[9] Kanokmon Rujirakul and Chakchai So-In, "Histogram Equalized Deep PCA with ELM Classification for Expressive Face Recognition", Web Information System and Application Conference, pp.1-4, 2018.
[10] Ejaz Ul Haq, Xu Huarong and Muhammad Irfan Khattak, "Face Recognition by SVM using Local Binary Patterns", Web Information System and Application Conference, pp.176-179, 2017.
[11] Pradipta K. Banerjee and Asit K. Datta, "Band-Pass Correlation Filter for Illumination- and Noise-Tolerant Face Recognition", Signal, Image and Video Processing, Springer, pp.1-8, March 2016.
[12] Deokwoo Lee and Hamid Krim, "3D Face Recognition in the Fourier Domain using Deformed Circular Curves", Springer Science+Business Media New York, pp.1-24, May 2015.
[13] Shailaja A. Patil and Pramod J. Deore, "Local Binary Pattern based Face Recognition System for Automotive Security", International Conference on Signal Processing, Computing and Control, pp.13-17, 2015.
[14] Jin Liu, Yue Chen and Shengnan Sun, "Face Recognition based on Multi-Direction Local Binary Pattern", IEEE International Conference on Computer and Communications, pp.1606-1610, 2017.
[15] Hae-Min Moon, Chang Ho Seo and Sung Bum Pan, "A Face Recognition System based on Convolution Neural Network using Multiple Distance Face", Soft Computing, Springer, pp.1-8, 2016.
[16] Alaa Eleyan and Hasan Demirel, "Co-Occurrence Matrix and its Statistical Features as a New Approach for Face Recognition", Turkish Journal of Electrical Engineering and Computer Sciences, vol.19, no.1, pp.97-107, 2011.
[17] ORL Database, http://www.camrol.co.uk
[18] JAFFE Database, http://www.kasrl.org/jaffe_download.html
[19] YALE Database, http://www.yaledatabase.com
[20] R. M. Haralick, K. Shanmugam and I. Dinstein, "Textural Features for Image Classification", IEEE Transactions on Systems, Man and Cybernetics, vol.3, pp.610-621, November 1973.
[21] Timo Ahonen, Abdenour Hadid and Matti Pietikainen, "Face Description with Local Binary Patterns: Application to Face Recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.28, no.12, pp.2037-2041, 2006.
[22] Ganapathi V Sagar, Savitha Y Barker, K B Raja, K Suresh Babu and Venugopal K R, "Convolution based Face Recognition using DWT and Feature Vector Compression", IEEE International Conference on Image Information Processing, vol.24, no.7, pp.444-449, 2015.
[23] Pallavi D. Wadkar and Megha Wankhade, "Face Recognition using Discrete Wavelet Transform", International Journal of Advanced Engineering Technology, vol.3, pp.239-242, 2012.
[24] Arindam Kar, Debotosh Bhattacharjee, Dipak Kumar Basu, Mita Nasipuri and Mahantapas Kundu, "An Adaptive Block Based Integrated LDP, GLCM, and Morphological Features for Face Recognition", International Journal of Research and Reviews in Computer Science, vol.2, no.5, pp.1-7, 2011.
