Face Recognition Using Local Patterns
Volume: 3 Issue: 10
ISSN: 2321-8169
5884 - 5889
Abstract: Deriving an effective face representation is an essential task for automatic face recognition. In this paper we use a feature descriptor called the Local Directional Number pattern (LDN), which allows individual faces to be recognized under different lighting, pose, and expression conditions. Face recognition involves several challenging problems in the fields of image analysis and human-computer interaction. To address them, our proposed work uses a local pattern, the Local Directional Number pattern (LDN), a compact six-bit code for face recognition and understanding. The LDN method encodes the directional information of a face image by convolving it with a compass mask, which extracts the edge response values in eight directions in the neighborhood of each pixel. For each pixel, the directions of the maximum positive and minimum negative responses generate the LDN code, producing an LDN image. The LDN image is then divided into blocks, a histogram is computed for each block, and the histograms are concatenated to form the feature vector that acts as the descriptor representing the face image. We perform experiments under various illumination, pose, and expression conditions.
Keywords: Biometrics, face recognition, feature extraction, feature vector, local patterns, Local Binary Pattern, Local Directional Number Pattern.
I. INTRODUCTION
the micropatterns. Section III discusses the proposed Local Directional Number Pattern (LDN) in detail. Section IV presents the experimental results and discussion. Finally, conclusions are given in Section V.
II. LITERATURE SURVEY
Figure 1.
In the Local Tetra Pattern (LTrP) approach, the direction of each center pixel gc is quantized into one of four values from its first-order derivatives I¹_0°(gc) and I¹_90°(gc) along the 0° and 90° directions:

I¹_Dir(gc) = { 1, if I¹_0°(gc) ≥ 0 and I¹_90°(gc) ≥ 0
               2, if I¹_0°(gc) < 0 and I¹_90°(gc) ≥ 0
               3, if I¹_0°(gc) < 0 and I¹_90°(gc) < 0
               4, if I¹_0°(gc) ≥ 0 and I¹_90°(gc) < 0 }          (4)

The second-order tetra pattern compares the direction of the center pixel with that of each of its eight neighbors g1, ..., g8:

LTrP²(gc) = {f3(I¹_Dir(gc), I¹_Dir(g1)), ..., f3(I¹_Dir(gc), I¹_Dir(g8))}          (5)

f3(I¹_Dir(gc), I¹_Dir(gP)) = { 0,            if I¹_Dir(gc) = I¹_Dir(gP)
                               I¹_Dir(gP),   otherwise }          (6)

Therefore, the generalized formulation of the nth-order LTrP can be defined by using the (n-1)th-order derivatives in the horizontal and vertical directions, I^(n-1)_θ(gP)|θ=0°,90°, as

LTrPⁿ(gc) = {f3(I^(n-1)_Dir(gc), I^(n-1)_Dir(g1)), f3(I^(n-1)_Dir(gc), I^(n-1)_Dir(g2)), ..., f3(I^(n-1)_Dir(gc), I^(n-1)_Dir(gP))}|P=8          (7)
From (5) and (6) we obtain an 8-bit tetra pattern for each center pixel. The patterns are then separated into four parts according to the direction of the center pixel. Finally, the tetra patterns for each direction are converted into three binary patterns.
LTrP²(gc)|Direction=φ = Σ_{p=1}^{8} 2^(p-1) × f4(LTrP²(gc))|Direction=φ          (8)

f4(LTrP²(gc))|Direction=φ = { 1, if LTrP²(gc) = φ
                              0, otherwise }          (9)

where φ = 2, 3, 4.
Similarly, the tetra patterns for the remaining three directions (parts) of the center pixels are converted into binary patterns, so we obtain 12 (4 × 3) binary patterns in total. Therefore a 4 × 3 × 8 = 96-bit LTrP pattern is generated, which increases the feature length and introduces high redundancy.
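The tetra-pattern construction above can be sketched as follows. This is a minimal illustration, not the authors' implementation; taking the derivatives toward the right and lower neighbours, and the helper names, are assumptions made for the sketch.

```python
import numpy as np

def ltrp_direction(img):
    """Quantize each interior pixel into one of four tetra directions
    (Eq. 4) from the first-order derivatives along 0 and 90 degrees."""
    # first-order derivatives: horizontal and vertical neighbours (assumed
    # to be the right and lower neighbours in this sketch)
    d0 = img[1:-1, 2:] - img[1:-1, 1:-1]    # I1_0(gc)  = I(gh) - I(gc)
    d90 = img[2:, 1:-1] - img[1:-1, 1:-1]   # I1_90(gc) = I(gv) - I(gc)
    direction = np.empty(d0.shape, dtype=np.uint8)
    direction[(d0 >= 0) & (d90 >= 0)] = 1
    direction[(d0 < 0) & (d90 >= 0)] = 2
    direction[(d0 < 0) & (d90 < 0)] = 3
    direction[(d0 >= 0) & (d90 < 0)] = 4
    return direction

def ltrp2(direction, y, x):
    """8-element second-order tetra pattern for one centre pixel
    (Eqs. 5-6): 0 where a neighbour shares the centre's direction,
    otherwise the neighbour's own direction."""
    centre = direction[y, x]
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
               (0, -1), (1, -1), (1, 0), (1, 1)]
    return [0 if direction[y + dy, x + dx] == centre
            else int(direction[y + dy, x + dx])
            for dy, dx in offsets]
```

Each of the four possible pattern values {2, 3, 4} is then thresholded into its own 8-bit binary pattern per Eqs. (8)-(9), which is where the 96-bit length comes from.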
III. PROPOSED SYSTEM
Mathematical Modeling
Let f(x,y) denote the input image of an object, where x and y are the coordinates of a pixel and f(x,y) is its intensity or gray level. The input image may be color or monochrome; a color image is first converted to gray scale and then processed. In the proposed work we use two different compass masks, which are convolved with the input image in eight directions to obtain the directional information from which the LDN code is generated.
The input image f(x,y) is processed with the masks to obtain the LDN code

LDN(x,y) = 8 i_{x,y} + j_{x,y}          (10)

where (x,y) is the central pixel of the neighborhood being coded, i_{x,y} is the directional number of the maximum positive response, and j_{x,y} is the directional number of the minimum negative response.
The directional numbers of the maximum and minimum responses are calculated as

i_{x,y} = arg max_i { Π_i(x,y) | 0 ≤ i ≤ 7 }          (11)

j_{x,y} = arg min_j { Π_j(x,y) | 0 ≤ j ≤ 7 }          (12)
where Π_i is the convolution of the original image I with the ith mask M_i:

Π_i = I * M_i          (13)
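Equations (10)-(13) can be sketched as below. The eight Kirsch masks listed are the standard ones; the "valid"-mode, correlation-style filtering and the helper names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# The eight standard Kirsch compass masks M0..M7
# (east, north-east, north, north-west, west, south-west, south, south-east).
KIRSCH = [np.array(m) for m in [
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],    # M0 east
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],    # M1 north-east
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],    # M2 north
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],    # M3 north-west
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],    # M4 west
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],    # M5 south-west
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],    # M6 south
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],    # M7 south-east
]]

def conv_valid(img, mask):
    """'Valid' 2-D filtering of img with a 3x3 mask (Eq. 13),
    in the unflipped (correlation) form common for compass masks."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += mask[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return out

def ldn_code(img):
    """LDN code per pixel (Eqs. 10-12): 8 * argmax_i Pi_i + argmin_j Pi_j."""
    responses = np.stack([conv_valid(img, m) for m in KIRSCH])  # Pi_i = I * M_i
    i_max = responses.argmax(axis=0)   # direction of maximum positive response
    j_min = responses.argmin(axis=0)   # direction of minimum negative response
    return 8 * i_max + j_min
```

Since i and j each take values 0-7, the code fits in six bits (0-63), which is the "six-bit compact code" referred to in the abstract.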
The Kirsch compass mask used is shown in Figure 3. In addition, we use the Sobel operator, computed as a gradient over the 3 × 3 neighborhood z1 ... z9 according to the equations below.

The Sobel gradient components are

gx = df/dx = (z7 + 2z8 + z9) - (z1 + 2z2 + z3)          (14)

gy = df/dy = (z3 + 2z6 + z9) - (z1 + 2z4 + z7)          (15)

The magnitude of the gradient is approximated by

M(x,y) = |gx| + |gy|          (16)
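As a small worked check of equations (14)-(16), assuming the z1...z9 neighbourhood is supplied row by row (the function name is illustrative):

```python
def sobel_gradient(z):
    """Sobel gradient components and approximate magnitude (Eqs. 14-16)
    for a 3x3 neighbourhood given row by row: z[0] = (z1, z2, z3), etc."""
    z1, z2, z3 = z[0]
    z4, z5, z6 = z[1]
    z7, z8, z9 = z[2]
    gx = (z7 + 2 * z8 + z9) - (z1 + 2 * z2 + z3)   # Eq. (14)
    gy = (z3 + 2 * z6 + z9) - (z1 + 2 * z4 + z7)   # Eq. (15)
    m = abs(gx) + abs(gy)                          # Eq. (16)
    return gx, gy, m
```

For the neighbourhood [[1,2,3],[4,5,6],[7,8,9]] this gives gx = 24, gy = 8, and magnitude 32.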
Face matching is then performed by comparing the feature histograms with the chi-square dissimilarity measure,

χ²(H¹, H²) = Σ_{k=1} (H¹_k - H²_k)² / (H¹_k + H²_k)          (18)

where H¹ and H² are the descriptors of the two face images being compared.
Table No I. Performance Analysis

Gallery images (training): 2 face images/object

No. of    Probe images   Correctly recognized images        Recognition rate (%)
objects   (testing)      Kirsch mask    Sobel operator      Kirsch mask    Sobel operator
10        20             19             18                  95             90
20        40             37             35                  92.5           87.5
40        80             75             72                  93.7           90

Recognition rate = (No. of images recognized correctly / Total no. of images in the probe set) × 100 (%).
The LDN image is divided into N regions and a histogram H_k is computed for each region k. The final face descriptor is the concatenated histogram

LH = H_1 ⊕ H_2 ⊕ ... ⊕ H_N          (17)

where ⊕ is the concatenation operation and N is the number of regions of the divided face.
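Equation (17) can be sketched as follows; the 4 × 4 grid and the 64-bin histograms (one bin per possible LDN code) are illustrative choices, not values fixed by the paper.

```python
import numpy as np

def ldn_descriptor(ldn_img, grid=(4, 4), bins=64):
    """Concatenated block histograms (Eq. 17): divide the LDN image into
    grid[0] x grid[1] regions, histogram each region's codes, and
    concatenate the histograms into one feature vector LH."""
    hists = []
    for band in np.array_split(ldn_img, grid[0], axis=0):
        for block in np.array_split(band, grid[1], axis=1):
            h, _ = np.histogram(block, bins=bins, range=(0, bins))
            hists.append(h)
    return np.concatenate(hists)   # LH = H1 (+) H2 (+) ... (+) HN
```

The resulting vector has N × bins entries and sums to the number of pixels in the LDN image, since every pixel falls into exactly one block and one bin.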
Table No II. Performance Analysis of YALE Database
Gallery images (training): 2 face images/object

No. of    Probe images   Correctly recognized images        Recognition rate (%)
objects   (testing)      Kirsch mask    Sobel operator      Kirsch mask    Sobel operator
10        10             9              10                  90             100
10        20             17             19                  85             95
15        30             25             26                  83.3           86.6

Recognition rate = (No. of images recognized correctly / Total no. of images in the probe set) × 100 (%).
Gallery images (training): 3 face images/object

No. of    Probe images   Correctly recognized images        Recognition rate (%)
objects   (testing)      Kirsch mask    Sobel operator      Kirsch mask    Sobel operator
5         15             12             12                  80             80
8         24             21             20                  87.5           83.3
10        30             25             22                  83.3           73.3
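The recognition rates reported in the tables follow directly from the counts, as a quick check:

```python
def recognition_rate(correct, probe_total):
    """Recognition rate used in the tables: correctly recognized probe
    images as a percentage of the probe set."""
    return 100.0 * correct / probe_total
```

For example, 19 of 20 probe images recognized with the Kirsch mask gives 95%, and 25 of 30 gives 83.3%.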
V. CONCLUSION
IJRITCC | October 2015, Available @ http://www.ijritcc.org