IRIS Thesis

Download as pdf or txt
Download as pdf or txt
You are on page 1of 40

ABSTRACT

Iris recognition is regarded as the most reliable and accurate biometric
system available. A biometric system provides automatic identification of
an individual based on a unique feature or characteristic possessed by that
individual. So far, however, iris recognition has been regarded as a
specific and separate system, dedicated exclusively to high-security
personnel and expensive applications.
The work presented in this thesis involved developing a real-time, open-source
iris recognition system in order to verify both the uniqueness of the
human iris and its performance as a biometric.
The objective is to design and implement an iris-scan biometric security
system to guarantee secure access to information and prevent security
breaches. Our model proposes scanning the iris to verify the identity of the
individual before granting access to banking information.
An image of the eye is captured by an ultra-compact iris scanner and is
uploaded to the cloud via a mobile phone. Several stages of image
processing techniques are implemented in MATLAB to obtain a unique
biometric template of the iris region. The three stages, enrolment,
identification and verification, are achieved through a self-developed
Android application.
The project enables the common man to use a robust, miniaturised and
simple biometric iris recognition system via a mobile phone for secure
online transactions. The model has been appropriately tested in real time
and proves to be efficient for practical applications.

KEYWORDS: biometric, iris recognition, mobile based, real-time system, online transaction security.

ACKNOWLEDGEMENT

We wish to take this opportunity to express our sincere gratitude to our project guide
Dr. M. Bhaskar, Associate Professor, Department of Electronics and Communication
Engineering, National Institute of Technology, Tiruchirappalli for his valuable
guidance, constant motivation and extensive support throughout the project.

We express our sincere thanks to Dr. D. Sriram Kumar, Head of the Department,
Electronics and Communication Engineering, for his encouragement during the
implementation of the project.

We would like to thank all the faculty members, supporting staff and students of the
Department of Electronics and Communication Engineering for their continuous
support and cooperation.

Portions of the research in this thesis use the CASIA iris image database collected by
the Institute of Automation, Chinese Academy of Sciences, and we thank them for these eye
images, which proved extremely useful to this research.

We also wish to extend our humble gratitude to the IoT Lab and the 1981 Alumni
Batch of NIT-Trichy for their technical guidance and financial assistance.
Finally, we thank our friends and family for their moral support which has aided us in
the successful completion of the project.

Srivignessh PSS
T Vignesh
Vignesh K


TABLE OF CONTENTS

ABSTRACT
ACKNOWLEDGEMENT
LIST OF TABLES
LIST OF FIGURES

1 INTRODUCTION
1.1 MOTIVATION
1.2 BACKGROUND
1.3 OUTLINE
1.3.1 BIOMETRIC IDENTIFICATION PROCESS
1.3.2 MODES OF OPERATION
2 MULTIPLE STAGES OF IRIS RECOGNITION
3 ACQUISITION OF THE IRIS IMAGE
3.1 IRISHIELD-USB MK 2120U
3.1.1 SPECIFICATIONS
4 MOBILE PHONE INTEGRATION
4.1 INTRODUCTION
4.2 CAPTURE STATUS
4.3 UPLOADING THE IMAGE
4.4 IMPLEMENTATION
4.5 CAPTURED IMAGES
5 SEGMENTATION
5.1 OVERVIEW
5.2 LITERATURE REVIEW
5.2.1 HOUGH TRANSFORM
5.2.2 DAUGMAN'S INTEGRO-DIFFERENTIAL OPERATOR
5.2.3 EYELASH AND NOISE DETECTION
5.3 IMPLEMENTATION
5.3.1 FINDING TOP AND BOTTOM EYELID
5.4 EXPERIMENTAL RESULTS
6 NORMALIZATION
6.1 OVERVIEW
6.2 LITERATURE REVIEW
6.2.1 DAUGMAN'S RUBBER SHEET MODEL
6.2.2 IMAGE REGISTRATION
6.3 IMPLEMENTATION
6.4 EXPERIMENTAL RESULTS
7 FEATURE EXTRACTION
7.1 OVERVIEW
7.2 LITERATURE REVIEW
7.2.1 WAVELET TRANSFORM
7.2.2 GABOR FILTERS
7.2.3 HILBERT TRANSFORM
7.2.4 1D LOG-GABOR FILTERS
7.3 IMPLEMENTATION
7.4 EXPERIMENTAL RESULTS
8 TEMPLATE MATCHING
8.1 OVERVIEW
8.2 LITERATURE REVIEW
8.2.1 HAMMING DISTANCE
8.2.2 WEIGHTED EUCLIDEAN DISTANCE
8.2.3 NORMALIZED CORRELATION
8.3 IMPLEMENTATION
8.3.1 ACCOUNTING FOR ROTATIONAL INCONSISTENCIES
9 RESULTS AND DISCUSSIONS
9.1 INTRODUCTION
9.2 UNIQUENESS OF IRIS PATTERNS
9.3 SHIFTING OF TEMPLATES
9.4 RECOGNITION OF INDIVIDUALS
9.5 FINAL OVERVIEW
9.6 INTRA AND INTER CLASS COMPARISONS
10 CONCLUSIONS
10.1 SUMMARY
10.2 SUMMARY OF FINDINGS
10.3 SCOPE FOR FUTURE RESEARCH
11 REFERENCES

LIST OF TABLES

3.1 SPECIFICATIONS OF IRISHIELD USB-MK 2120U
9.1 EYE IMAGE DATA SETS
9.2 DEGREE OF FREEDOM AND DECIDABILITY
9.3 OBSERVATION TABLE

LIST OF FIGURES

1.1 HUMAN EYE DIAGRAM
2.1 FLOWCHART FOR MULTIPLE STAGES
2.2 IRIS IMAGES AT VARIOUS STAGES
3.1 IRISHIELD USB MK 2120U
4.1 FLOWCHART FOR CAPTURE PROCESS
4.2 IMAGE ACQUISITION BY APP
4.3 ABOUT THE APP
4.4 CAPTURED IMAGES
5.1 EDGE MAP LOCALIZATION
5.2 SEGMENTATION RESULTS
6.1 DAUGMAN'S RUBBER SHEET MODEL
6.2 RADIAL AND ANGULAR RESOLUTION
6.3 NORMALIZATION RESULTS
7.1 FEATURE EXTRACTION RESULTS
8.1 ILLUSTRATION OF SHIFTING PROCESS
9.1 SAMPLE DATA SET OF A STUDENT
9.2 INTER CLASS FROM NITT (NO SHIFTS)
9.3 INTER CLASS FROM CASIA (NO SHIFTS)
9.4 INTER CLASS FROM NITT (8 SHIFTS)
9.5 INTER CLASS FROM CASIA (10 SHIFTS)
9.6 INTER AND INTRA CLASS COMPARISON

CHAPTER 1
INTRODUCTION
1.1 MOTIVATION
Technologies that exploit biometrics have the potential for application to the
identification and verification of individuals for controlling access to secured areas or
materials. A wide variety of biometrics have been marshalled in support of this
challenge. Resulting systems include those based on automated recognition of
fingerprints, hand shape, signature, voice, face and iris patterns.
However, apart from face and iris recognition, other techniques are highly invasive, and
there is a possibility of covert evaluation. Although face recognition is an active topic
of research, the inherent difficulty of the problem might restrict its wider
application. Iris recognition proves to be an alternative for non-invasive authentication
of people. The spatial patterns that are apparent in the human iris are distinctive to every
individual, and the variability in appearance of any one iris is well enough constrained
to make possible an automated recognition system based on currently available machine
vision technologies.

1.2 BACKGROUND
The iris is a thin diaphragm stretching across the anterior portion of the eye, supported
by the lens, which gives it the shape of a truncated cone in three dimensions. At its base,
the iris is attached to the ciliary body, and at the opposite end it opens into the pupil,
which controls the amount of light entering the human eye. The cornea lies in front of
the iris and provides a transparent protective covering.

Fig 1.1

Fig 1.1 shows a labelled diagram of the human eye and the iris region.

The iris is an externally visible, yet protected organ whose unique epigenetic pattern
remains stable throughout adult life. These characteristics make it very attractive for
use as a biometric for identifying individuals. To appreciate the richness of the iris as
a pattern for recognition, it is useful to consider its structure in a bit more detail. The
visual appearance of the iris is a direct result of its multi-layered structure.

The iris possesses a highly distinguishing texture with the following merits:
The right eye differs from the left eye.
Twins have different iris textures.
The iris pattern remains unchanged after the age of two.
It does not degrade over time or with the environment.
Claims that the structure of the iris is unique to an individual and is stable with age
come from two main sources. The first source of evidence is clinical observations.
During the course of examining large numbers of eyes, ophthalmologists and anatomists
have noted that the detailed pattern of an iris, even the left and right iris of a single
person, seems to be highly distinctive. The second source of evidence is developmental
biology. It has been proven that while the general structure of the iris is genetically
determined, the particulars of its minutiae are critically dependent on circumstances like
the initial conditions in the embryonic precursor to the iris. Therefore, they are highly
unlikely to be replicated via the natural course of events.

1.3 OUTLINE
Mobile devices have been widely used for social communication, storing large amounts
of private data, and online banking. It is important to build a reliable and user-friendly
biometric recognition system for online mobile payment and sensitive data protection.
The objective is to build and implement a real-time and robust biometric iris recognition
system in which:

1.3.1 BIOMETRIC IDENTIFICATION PROCESS

i. An image of the iris is captured through the mobile camera and uploaded to the server via Wi-Fi or the internet.
ii. The server creates a biometric template of the iris using image processing techniques, implementing a prominent algorithm at each stage.
iii. It also compares the template with the enrolled pre-existing templates and sends back a matching score (a floating point number between 0 and 1) to the mobile.
iv. A decision-making function is processed with the matching score as its parameter, and the identity of the user is validated (a minimal sketch of such a decision function follows this list).
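As an illustration, a minimal MATLAB sketch of such a decision function is given below. The function name and the use of 0.40 as the default threshold are illustrative assumptions, not the project's actual code; the experimentally chosen separation point is discussed in Chapter 9.

function valid = decidematch(score, threshold)
% Decide whether a matching score indicates the same iris. The score is
% a Hamming-distance-style measure in [0, 1], where lower means more
% similar; the threshold is the chosen separation point.
if nargin < 2
    threshold = 0.40;   % assumed default; see Table 9.3
end
valid = (score < threshold);
end

For example, decidematch(0.32) would report a valid match, while decidematch(0.47) would not.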

1.3.2 MODES OF OPERATION

i. Enrolment mode, for adding templates to the database;
ii. Identification mode, where a template is created for an individual and a match is then searched for in the database of pre-enrolled templates; and
iii. Verification mode, which indicates whether the authentication is valid or invalid.

The development tool used will be MATLAB, with emphasis both on the software for
performing recognition and on the hardware for capturing an eye image. A rapid
application development (RAD) approach will be employed in order to produce results
quickly; MATLAB provides an excellent RAD environment, with its image processing
toolbox and high-level programming methodology.
To test the system, two data sets of eye images have been used as inputs:

i. a database of 250 greyscale eye images, courtesy of the Chinese Academy of Sciences Institute of Automation (CASIA); and
ii. a database of 80 camera-captured eye images of 40 different students.

CHAPTER 2
MULTIPLE STAGES OF IRIS RECOGNITION
The system is composed of a number of sub-systems, which correspond to the stages
of iris recognition. These stages are segmentation (locating the iris region in an
eye image), normalisation (creating a dimensionally consistent representation of the iris
region), and feature encoding (creating a template containing only the most
discriminating features of the iris); refer to Fig 2.1.
The input to the system will be an eye image, and the output will be an iris template,
which will provide a mathematical representation of the iris region (refer Fig 2.2).

Fig 2.1 Flowchart: Iris Image Capture → Segmentation → Normalization → Feature Extraction → Template Matching → Authentication

Fig 2.2 (a) original iris image, (b) localized iris image, (c) normalized iris image, (d) enhanced iris image
Fig 2.1 is a flowchart depicting overview of the different components of the system.
Fig 2.2 shows the various stages of an iris image during the recognition process.

CHAPTER 3
ACQUISITION OF THE IRIS IMAGE
In order to generalise the project as a robust model and to ensure a wide scope of
applications, it is essential that all iris images are captured by the same camera, so
that the large variations between different mobile camera images do not affect the
performance of the system.
3.1 IriShield-USB MK 2120U
The IriShield USB MK 2120U is a ready-to-use single-iris capture camera. The compact
camera is powered via the USB port, as shown in Fig 3.1. Its low power consumption
and mobile OS support make it suitable for use with smartphones, tablets and other
handheld devices.
Each eye is illuminated by an infrared LED, so irises can be captured in various indoor
and outdoor environments. The captured iris images are compliant with the ISO/IEC
19794-6 standard.
Fig 3.1
3.1.1 SPECIFICATIONS
Table 3.1 gives the important specifications of the IriShield USB MK 2120U:

Device name:            IriShield USB MK 2120U
Manufacturer:           IriTech, Inc.
Device connection:      USB 2.0
Supported OS:           Microsoft Windows, Linux, Android
Illumination:           Infrared
Iris image size:        640 x 480 pixels
Device weight:          300 grams
Operating temperature:  0 °C to 45 °C

Table 3.1

CHAPTER 4
MOBILE PHONE INTEGRATION
4.1 INTRODUCTION:
In this project, an Android application has been developed to capture the iris
image using the IriShield-USB MK2120U and upload the captured image to the local
server. The uploaded image is then used for further processing in Octave/MATLAB on the
server.
The Android application uses the package com.iritech.iddk.demo, provided by
IriTech, Inc. exclusively for this purpose. The application takes into account
factors such as distance and illumination, and reports the image quality through the
Iris Quality Score and usable area. These parameters are provided by the IddkIrisQuality
class:

IddkResult getResultQuality(IddkIrisQuality qualities);

IddkImage consists of the image format (.jpg), width, height and the image data.
4.2 CAPTURE STATUS:
These are the 5 main states of capturing provided by the IddkCaptureStatus class.

IDDK_IDLE

IDDK_READY

IDDK_CAPTURING

IDDK_COMPLETE

IDDK_ABORT

Initially the mobile phone is in the IDDK_IDLE state. When the START button is pressed, it
enters the capture state. After StartCapture, the capturing process enters
IDDK_READY. Then, in Auto Capture Mode (IDDK_AUTO_CAPTURE),
streaming images from the iris camera immediately go through a Quality Measurement
(QM)-based live evaluation, which detects qualified iris images.
When the first eye image is detected by the live evaluation, the capturing process enters
the IDDK_CAPTURING status. After a reasonable duration of time, or once a reasonable
number of qualified eye images has been detected, the process completes and its status
changes to IDDK_COMPLETE. If something abnormal happens (e.g., no images arrive from
the iris camera, or StopCapture is called in the middle), the capturing process is
terminated before finishing its normal routine, and IDDK_ABORT is returned. The final
status remains until the next StartCapture.
Media Activity is used to capture the image using the module. The application is simple
to use, as it mainly contains two buttons, START and STOP. The flowchart in Fig 4.1
explains its functions:

Fig 4.1 Flowchart for Capture Process


SCORE CALCULATOR:
There are two scores: total_score and useable_area.
total_score is the total score of the captured image, ranging from 0 to 100. The higher
the score, the better the image quality. Enrolled images should have a stringently high
total score, e.g., at least 70.
useable_area is the percentage of usable iris area in the captured image, ranging from
0 to 100. The lower the score, the more occluded the iris, and a highly occluded iris
significantly affects the matching accuracy. Enrolled images should have a high value of
usable iris area, e.g., greater than 70. These results are obtained from the
IddkIrisQuality class.

4.3 UPLOADING THE IMAGE:

A local server is hosted using the XAMPP application on Windows, and once the image is
captured it is uploaded to the server. Once received on the laptop, the image is
processed in MATLAB, comparisons are made with the existing database, and the
result is returned.

4.4 IMPLEMENTATION:
The Android application was developed using Android Studio 2.1 and tested on an Android
mobile phone; the captured image was uploaded to the local server, where the processing
takes place. Fig 4.2 shows the image of an eye captured by the IR camera as displayed
in the app. The details of the app are given in Fig 4.3.

Fig. 4.2 Image is captured and quality is calculated

Fig. 4.3 About the App page
4.5 CAPTURED IMAGES: Sample right and left eye images captured by the iris
camera are shown in Fig 4.4.

Fig 4.4 (a) Right Eye Image, (b) Left Eye Image

CHAPTER 5
SEGMENTATION
5.1 OVERVIEW
The first stage of iris recognition is to isolate the actual iris region in a digital eye image.
The iris region can be approximated by two circles, one for the iris/sclera boundary and
another, interior to the first, for the iris/pupil boundary. The eyelids and eyelashes
normally occlude the upper and lower parts of the iris region. Also, specular reflections
can occur within the iris region corrupting the iris pattern.
Hence, a technique is required to isolate and exclude these artefacts as well as to locate
the circular iris region. The success of segmentation depends on the imaging quality of the
eye images. The segmentation stage is critical to the success of an iris recognition
system, since data that is falsely represented as iris pattern data will corrupt the
biometric templates generated, resulting in poor recognition rates.

5.2 LITERATURE REVIEW


5.2.1 Hough Transform
The circular Hough transform can be employed to deduce the radius and centre
coordinates of the pupil and iris regions. Firstly, an edge map is generated by calculating
the first derivatives of intensity values in an eye image and then thresholding the result.
From the edge map, votes are cast in Hough space for the parameters of circles passing
through each edge point. These parameters are the centre coordinates (x_c, y_c) and the
radius r, which are able to define any circle according to the equation

(x - x_c)^2 + (y - y_c)^2 - r^2 = 0    (1)

As shown in Fig 5.1, circle localisation is made more accurate and efficient when a
directionally biased edge map is used, since there are fewer edge points to cast votes
in the Hough space.

Fig 5.1 (a) Original image, (b) edge map, (c) horizontal edge map, (d) vertical edge map

5.2.2 Daugman's Integro-differential Operator


The operator searches for the circular path where there is maximum change in pixel
values, by varying the radius and centre x and y position of the circular contour. The
operator is applied iteratively with the amount of smoothing progressively reduced in
order to attain precise localisation. Eyelids are localised in a similar manner, with the
path of contour integration changed from circular to an arc.
The integro-differential operator can be seen as a variation of the Hough transform, since
it too makes use of first derivatives of the image and performs a search to find geometric
parameters. Since it works with raw derivative information, it does not suffer from the
thresholding problems of the Hough transform. However, the algorithm can fail where
there is noise in the eye image, such as from reflections, since it works only on a local
scale.
5.2.3 Eyelash and Noise Detection
Eyelashes are treated as belonging to two types, separable eyelashes, which are isolated
in the image, and multiple eyelashes, which are bunched together and overlap in the eye
image. Separable eyelashes are detected using 1D Gabor filters, since the convolution
of a separable eyelash with the Gaussian smoothing function results in a low output
value. Thus, if a resultant point is smaller than a threshold, it is noted that this point
belongs to an eyelash. Multiple eyelashes are detected using the variance of intensity.
If the variance of intensity values in a small window is lower than a threshold, the centre
of the window is considered as a point in an eyelash.

5.3 IMPLEMENTATION

It was decided to use circular Hough transform for detecting the iris and pupil
boundaries. This involves first employing Canny edge detection to generate an edge
map. Gradients were biased in the vertical direction for the outer iris/sclera boundary.
Vertical and horizontal gradients were weighted equally for the inner iris/pupil
boundary using a modified version of Kovesi's Canny edge detection MATLAB
function.
The range of radius values to search for was set manually, depending on the database
used.


For the CASIA database, values of the iris radius range from 90 to 150 pixels, while the
pupil radius ranges from 28 to 75 pixels.
For the self-captured images of the students, the range of values is as follows:

Pupil radius: 5 to 40 pixels

Iris radius: 30 to 70 pixels

In order to make the circle detection process more efficient and accurate, the Hough
transform for the iris/sclera boundary was performed first, then the Hough transform
for the iris/pupil boundary was performed within the iris region, instead of the whole
eye region, since the pupil is always within the iris region. After this process is
complete, six parameters are stored: the radius and the x and y centre coordinates for
both circles.
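To make the voting step concrete, a minimal MATLAB sketch of a circular Hough transform over a binary edge map is given below. It is an illustrative sketch rather than the project's actual code: the function name houghcircle and its arguments are assumptions, and a practical implementation would additionally restrict the voting using gradient direction for speed.

function [cx, cy, r] = houghcircle(edgeMap, rMin, rMax)
% Circular Hough transform: for each candidate radius, every edge pixel
% votes for all centres lying at that radius from it; the best-scoring
% (centre, radius) triple over all radii is returned.
[rows, cols] = size(edgeMap);
[ey, ex] = find(edgeMap);                 % edge pixel coordinates
theta = 0 : pi/60 : 2*pi;                 % sampling of the voting circle
best = 0; cx = 0; cy = 0; r = rMin;
for rad = rMin:rMax
    acc = zeros(rows, cols);              % accumulator for this radius
    for p = 1:numel(ex)
        cxs = round(ex(p) - rad*cos(theta));
        cys = round(ey(p) - rad*sin(theta));
        ok = cxs >= 1 & cxs <= cols & cys >= 1 & cys <= rows;
        if any(ok)
            idx = sub2ind([rows, cols], cys(ok), cxs(ok));
            % accumarray handles repeated indices correctly
            acc = acc + reshape(accumarray(idx(:), 1, [rows*cols, 1]), rows, cols);
        end
    end
    [m, i] = max(acc(:));
    if m > best
        best = m;
        [cy, cx] = ind2sub([rows, cols], i);
        r = rad;
    end
end
end

For example, [cx, cy, r] = houghcircle(edges, 90, 150) would search the iris radius range used for the CASIA database.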
5.3.1 FINDING TOP AND BOTTOM EYELID
After the iris is localised, the eyelids have to be blackened and removed. The
following MATLAB fragment is used to detect the top and bottom eyelids, where
imageiris is the localised iris image, rowp is the row of the pupil centre and r is
the pupil radius:

topeyelid = imageiris(1 : (rowp - r), :);                      % region above the pupil
toplines = findline(topeyelid);                                % fit a line to the top eyelid
bottomeyelid = imageiris((rowp + r) : size(imageiris, 1), :);  % region below the pupil
botlines = findline(bottomeyelid);                             % fit a line to the bottom eyelid

5.4 EXPERIMENTAL RESULTS

Fig 5.2 shows images of the same iris before and after the segmentation process
using the circular Hough transform.

Fig 5.2 (a) Input Eye Image, (b) Segmented Iris

CHAPTER 6
NORMALIZATION
6.1 OVERVIEW
Once the iris region is successfully segmented from an eye image, the next stage is to
transform the iris region so that it has fixed dimensions in order to allow comparisons.
The dimensional inconsistencies between eye images are mainly due to the stretching
of the iris caused by pupil dilation from varying levels of illumination.
The normalisation process will produce iris regions, which have the same constant
dimensions, so that two photographs of the same iris under different conditions will
have characteristic features at the same spatial location.

6.2 LITERATURE REVIEW


6.2.1 Daugman's Rubber Sheet Model
The homogeneous rubber sheet model devised by Daugman remaps each point within
the iris region to a pair of polar coordinates (r, θ), where r lies on the interval [0, 1]
and θ is an angle in [0, 2π]. The remapping of the iris region from (x, y) Cartesian
coordinates to the normalised non-concentric polar representation is modelled as

I(x(r, θ), y(r, θ)) → I(r, θ)

where I(x, y) is the iris region image, (x, y) are the original Cartesian coordinates,
and (r, θ) are the corresponding normalised polar coordinates.

Fig 6.1
A diagrammatic representation of Daugman's rubber sheet model is shown in Fig 6.1.
Even though the homogeneous rubber sheet model accounts for pupil dilation, imaging
distance and non-concentric pupil displacement, it does not compensate for rotational
inconsistencies.

6.2.2 Image Registration

The system employs an image registration technique which geometrically warps a
newly acquired image, I_a(x, y), into alignment with a selected database image, I_d(x, y)
[4]. When choosing a mapping function (u(x, y), v(x, y)) to transform the original
coordinates, the image intensity values of the new image are made to be close to those
of corresponding points in the reference image. The mapping function must be chosen
so as to minimise

\iint (I_d(x, y) - I_a(x - u, y - v))^2 \, dx \, dy    (2)

6.3 IMPLEMENTATION
Normalisation of the segmented iris image is done using Daugman's rubber sheet
model. The problem of rotational inconsistencies in this method is overcome
during the template matching stage and will be discussed later.

The centre of the pupil is considered the reference point, and radial vectors pass
through the iris region, as shown in Fig 6.2. A number of data points are selected
along each radial line; this is defined as the radial resolution. The number of radial
lines going around the iris region is defined as the angular resolution.

Fig 6.2

For the normalisation process, the parameter values used are:

Radial resolution = 20
Angular resolution = 240

With these settings, a 9600-bit iris template is created.

Since the pupil can be non-concentric to the iris, a remapping formula is needed to
rescale points depending on the angle around the circle. This is given by:

r' = \sqrt{\alpha}\,\beta \pm \sqrt{\alpha\beta^2 - \alpha + r_I^2}    (3)

\alpha = o_x^2 + o_y^2    (4)

\beta = \cos\left(\pi - \arctan\left(\frac{o_y}{o_x}\right) - \theta\right)    (5)

where (o_x, o_y) is the displacement of the centre of the pupil relative to the centre of
the iris, r' is the distance between the edge of the pupil and the edge of the iris at
angle θ, and r_I is the radius of the iris. The remapping formula first gives the radius of
the iris region doughnut as a function of the angle θ.
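A minimal MATLAB sketch of the sampling step is given below for the simplified case of concentric pupil and iris circles (the non-concentric correction of Eqs (3) to (5) is omitted); the variable names and synthetic input values are illustrative assumptions.

eyeImg = rand(480, 640);               % placeholder image; use a real eye image here
cx = 320; cy = 240;                    % assumed common centre of pupil and iris
rPupil = 40; rIris = 110;              % assumed boundary radii in pixels
radialRes = 20;                        % data points per radial line
angularRes = 240;                      % radial lines around the iris
polarIris = zeros(radialRes, angularRes);
theta = 2*pi * (0:angularRes-1) / angularRes;
for j = 1:angularRes
    for i = 1:radialRes
        frac = (i - 0.5) / radialRes;  % position across the iris band, in (0, 1)
        rad = rPupil + frac * (rIris - rPupil);
        x = round(cx + rad * cos(theta(j)));
        y = round(cy + rad * sin(theta(j)));
        polarIris(i, j) = eyeImg(y, x);   % sample the iris ring into polar form
    end
end

Each column of polarIris then corresponds to one radial line, matching the 20 x 240 polar arrays shown in Fig 6.3.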

6.4 EXPERIMENTAL RESULTS

Fig 6.3 shows the various images of the same iris during the normalisation process,
including the polar iris and noise pattern arrays obtained using Daugman's rubber sheet
model, whose unique features are to be extracted further.

Fig 6.3 (a) Normalized Iris Image [Dimension: 320 x 240], (b) Polar Iris Pattern Array [Dimension: 20 x 240], (c) Polar Noise Array [Dimension: 20 x 240]

CHAPTER 7
FEATURE EXTRACTION
7.1 OVERVIEW
In order to provide accurate recognition of individuals, the most discriminating
information present in an iris pattern must be extracted. Only the significant features of
the iris must be encoded so that comparisons between templates can be made. Most iris
recognition systems make use of a band pass decomposition of the iris image to create
a biometric template.
In this stage, texture analysis methods are used to extract the significant features from
the normalized iris image. The extracted features will be encoded to generate a
biometric template.
7.2 LITERATURE REVIEW
7.2.1 Wavelet Transform
Wavelets can be used to decompose the data in the iris region into components that
appear at different resolutions. Wavelets have the advantage over traditional Fourier
transform in that the frequency data is localised, allowing features which occur at the
same position and resolution to be matched up. A number of wavelet filters, also called
a bank of wavelets, is applied to the 2D iris region, one for each resolution with each
wavelet a scaled version of some basis function. The output of applying the wavelets is
then encoded in order to provide a compact and discriminating representation of the iris
pattern.

7.2.2 Gabor Filters


Gabor filters are able to provide optimum conjoint representation of a signal in space
and spatial frequency. A Gabor filter is constructed by modulating a sine/cosine wave
with a Gaussian. This is able to provide the optimum conjoint localisation in both
space and frequency, since a sine wave is perfectly localised in frequency, but not
localised in space. Modulation of the sine with a Gaussian provides localisation in
space, though with loss of localisation in frequency.
Decomposition of a signal is accomplished using a quadrature pair of Gabor filters, with
a real part specified by a cosine modulated by a Gaussian, and an imaginary part
specified by a sine modulated by a Gaussian. The real and imaginary filters are also
known as the even symmetric and odd symmetric components respectively. The centre
frequency of the filter is specified by the frequency of the sine/cosine wave, and the
bandwidth of the filter is specified by the width of the Gaussian.
Each pattern is then demodulated to extract its phase information using quadrature 2D
Gabor wavelets. The phase information is quantised into four quadrants in the complex
plane. Each quadrant is represented with two bits of phase information. Therefore, each
pixel in the normalised image is demodulated into a two-bit code in the template.

7.2.3 Hilbert Transform

The Hilbert transform can be used to extract significant information from the iris
texture [5]. An analytic image is constructed from the original image and its Hilbert
transform, and can be used to analyse the iris texture. The emergent frequency and
instantaneous phase are computed from the analytic image. The emergent frequency is
formed from three different dominant frequencies of the analytic image, while the
instantaneous phase is the arctangent function of the real and imaginary parts of the
analytic image. The feature vector is encoded by thresholding the emergent frequency
and the instantaneous phase. The filtering is performed in the Fourier domain using
purely real filters.

7.2.4 1D Log-Gabor Filters

A disadvantage of the Gabor filter is that the even symmetric filter will have a DC
component whenever the bandwidth is larger than one octave [20]. However, a zero DC
component can be obtained for any bandwidth by using a Gabor filter which is Gaussian
on a logarithmic scale; this is known as the Log-Gabor filter. The frequency response
of a 1D Log-Gabor filter is given as

G(f) = \exp\left(\frac{-(\log(f / f_0))^2}{2 (\log(\sigma / f_0))^2}\right)    (6)

where f_0 represents the centre frequency and σ gives the bandwidth of the filter.

7.3 IMPLEMENTATION
Feature encoding was implemented by convolving the normalised iris pattern with 1D
Log-Gabor wavelets. The 2D normalised pattern is broken up into a number of 1D
signals, and these 1D signals are convolved with the 1D Log-Gabor wavelets. The rows of
the 2D normalised pattern are taken as the 1D signals; each row corresponds to a circular
ring on the iris region. The angular direction is taken rather than the radial one (which
corresponds to columns of the normalised pattern), since maximum independence occurs
in the angular direction.
The encoding process produces a bitwise template containing a number of bits of
information, and a corresponding noise mask which corresponds to corrupt areas within
the iris pattern, and marks bits in the template as corrupt. Since the phase information
will be meaningless at regions where the amplitude is zero, these regions are also
marked in the noise mask. The total number of bits in the template will be the angular
resolution times the radial resolution, times 2, times the number of filters used.
In the proposed system, the total number of bits in the biometric template is equal to
240 x 20 x 2 x 1 = 9600 bits in a binary template.
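A minimal MATLAB sketch of encoding one row of the normalised pattern with a 1D Log-Gabor filter follows; the parameter values match those reported in Chapter 10, but the variable names and synthetic input row are illustrative assumptions, and for brevity the filter is applied over the full FFT axis rather than to positive frequencies only.

row = rand(1, 240);                    % one ring of the polar iris pattern (placeholder)
n = numel(row);
f = (0:n-1) / n;                       % normalised frequency axis
f0 = 1/12;                             % centre frequency for a wavelength of 12 pixels
sigmaOnF = 0.5;                        % bandwidth parameter sigma/f0
G = zeros(1, n);
G(2:end) = exp(-(log(f(2:end)/f0)).^2 ./ (2*log(sigmaOnF)^2));  % Eq (6); G(1) = 0 removes DC
resp = ifft(fft(row) .* G);            % filter the row in the frequency domain
bits = [real(resp) >= 0; imag(resp) >= 0];   % two bits of quantised phase per sample

Repeating this over all 20 rows yields the 20 x 240 x 2 = 9600-bit template described above.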

7.4 EXPERIMENTAL RESULTS

Fig 7.1

The image shown in Fig 7.1 is the binary biometric template obtained from an
input iris image using 1D Log-Gabor filters; binary 1s and 0s are represented by
white and black in the template, respectively.
Each input iris image is converted into such a binary biometric template, and these
templates are used for the enrolment, identification and verification of
individuals based on a matching algorithm.

CHAPTER 8
TEMPLATE MATCHING
8.1 OVERVIEW
The template that is generated in the feature encoding process will also need a
corresponding matching metric, which gives a measure of similarity between two iris
templates. This metric should give one range of values when comparing templates
generated from the same eye, known as intra-class comparisons, and another range of
values when comparing templates created from different irises, known as inter-class
comparisons.
These two cases should give distinct and separate values, so that a decision can be made
with high confidence as to whether two templates are from the same iris, or from two
different irises.
8.2 LITERATURE REVIEW
8.2.1 Hamming Distance
Hamming distance is defined as the fractional measure of dissimilarity between two
binary templates; a value of zero represents a perfect match. The Hamming
distance gives a measure of how many bits differ between two bit patterns, from which
a decision can be made as to whether the two patterns were generated from different
irises or from the same one.
In comparing the bit patterns X and Y, the Hamming distance, HD, is defined as the sum
of disagreeing bits (the sum of the exclusive-OR between X and Y) over N, the total number
of bits in the bit pattern:

HD = \frac{1}{N} \sum_{j=1}^{N} X_j \oplus Y_j    (7)

Since an individual iris region contains features with high degrees of freedom, each iris
region will produce a bit pattern which is independent of that produced by another iris;
on the other hand, two iris codes produced from the same iris will be highly correlated.

If two bit patterns are completely independent, such as iris templates generated from
different irises, the Hamming distance between the two patterns should equal 0.5.

8.2.2 Weighted Euclidean Distance

The weighted Euclidean distance (WED) can be used to compare two templates,
especially if the template is composed of integer values. The weighted Euclidean
distance gives a measure of how similar a collection of values is between two
templates. It is specified as

WED(k) = \sum_{i=1}^{N} \frac{(f_i - f_i^{(k)})^2}{(\delta_i^{(k)})^2}    (8)

where f_i is the ith feature of the unknown iris, f_i^{(k)} is the ith feature of iris
template k, and \delta_i^{(k)} is the standard deviation of the ith feature in iris template k.
The unknown iris template is found to match iris template k when WED is a minimum at k.

8.2.3 Normalized Correlation


Normalized correlation between two representations is calculated for goodness of
match. It is defined as the normalized similarity of corresponding points in the iris
region. The correlations are performed over small blocks of pixels in four different
spatial frequency bands.
This technique accounts for local variations in image intensity. However, the normalised
correlation method is not computationally efficient, because entire images are used for
the comparisons.

8.3 IMPLEMENTATION
For matching, the Hamming distance was chosen as the metric for recognition, since
bit-wise comparisons were necessary. The Hamming distance algorithm employed also
incorporates noise masking, so that only significant bits are used in calculating the
Hamming distance between two iris templates.
When taking the Hamming distance, only those bits in the iris patterns that
correspond to 0 bits in the noise masks of both iris patterns are used in the calculation.
The Hamming distance is thus calculated using only the bits generated from the true
iris region, and this modified Hamming distance formula is given as

HD = \frac{\sum_{j=1}^{N} X_j \oplus Y_j \wedge \overline{Xn_j} \wedge \overline{Yn_j}}{N - \sum_{j=1}^{N} Xn_j \vee Yn_j}    (9)

where X_j and Y_j are the two bit-wise templates to compare, Xn_j and Yn_j are the
corresponding noise masks for X_j and Y_j, and N is the number of bits represented by
each template.
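A minimal MATLAB sketch of this masked Hamming distance is given below; it is an illustrative implementation of Eq (9) with an assumed function name, where templates and masks are logical row vectors and a 1 in a mask marks a corrupt bit.

function hd = maskedhd(X, Y, Xn, Yn)
% Masked Hamming distance: count disagreeing bits only where both
% noise masks are clear, and normalise by the number of usable bits.
usable = ~(Xn | Yn);                 % bits valid in both templates
nUsable = sum(usable);
if nUsable == 0
    hd = NaN;                        % no usable bits to compare
else
    hd = sum(xor(X, Y) & usable) / nUsable;
end
end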

8.3.1 Accounting for Rotational Inconsistencies


In order to account for rotational inconsistencies, when the Hamming distance of two
templates is calculated, one template is shifted left and right bit-wise and a number of
Hamming distance values are calculated from successive shifts. An example to
understand the shifting process is given in Fig 8.1. This bit-wise shifting in the
horizontal direction corresponds to rotation of the original iris region by an angle given
by the angular resolution used. If an angular resolution of 180 is used, each shift will
correspond to a rotation of 2 degrees in the iris region.
This method is suggested by Daugman, and corrects for misalignments in the
normalised iris pattern caused by rotational differences during imaging. From the
calculated Hamming distance values, only the lowest is taken, since this corresponds to
the best match between two templates (the lowest is case 2 as in Fig 8.1). The number
of bits moved during each shift is given by two times the number of filters used, since
each filter will generate two bits of information from one pixel of the normalised region.
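Building on the maskedhd sketch above, a minimal MATLAB sketch of this rotation-tolerant matching follows; the function name and parameters are illustrative assumptions.

function hd = shiftedhd(X, Y, Xn, Yn, maxShifts, bitsPerShift)
% Shift one template left and right in steps of bitsPerShift (two times
% the number of filters, i.e. 2 for a single Log-Gabor filter) and keep
% the lowest masked Hamming distance over all shifts.
hd = Inf;
for s = -maxShifts:maxShifts
    k = s * bitsPerShift;
    hd = min(hd, maskedhd(circshift(X, [0, k]), Y, ...
                          circshift(Xn, [0, k]), Yn));
end
end

For example, hd = shiftedhd(X, Y, Xn, Yn, 8, 2) corresponds to the 8-shift matching used for the NITT database in Chapter 9.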

Fig 8.1 An illustration of the shifting process.



CHAPTER 9
RESULTS AND DISCUSSIONS
9.1 INTRODUCTION:
In this chapter, the performance of the iris recognition system as a whole is examined.
Tests were carried out to find the best separation point, so that the false accept and
false reject rates are minimised, and to confirm that iris recognition can perform
accurately as a biometric for the recognition of individuals.
There are a number of parameters that affect the accuracy of the iris system, and we have
calibrated these values for the best results. These parameters are:

Number of shifts
Inter-class and intra-class comparisons, with and without shifts
Degrees of freedom (DOF)
Decidability
FAR and FRR probabilities

These simulations are done with two data sets, one from The Chinese Academy
of Sciences - Institute of Automation (CASIA) eye image database and another from
our own database collected from students of the National Institute of Technology
Tiruchirappalli (NITT). Comparisons have been made based on the experiments conducted
on these databases. For each student in the NITT testing database, three different
images of the same eye were taken, as shown in Fig 9.1.

Fig 9.1 Sample Data Set of One Student: 3 images each for the right and left eye, respectively.


Table 9.1 Eye Image Data Sets

Set Name   Number of Eye Images   Possible Intra-Class Comparisons   Possible Inter-Class Comparisons
CASIA      624                    1679                               192,699
NITT       120                    120                                7,800

9.2 UNIQUENESS OF IRIS PATTERNS:

Uniqueness was determined by comparing templates generated from different eyes to
each other and examining the distribution of Hamming distance values produced; this
distribution is known as the inter-class comparisons. Comparing templates generated
from the same eye gives the intra-class comparisons. Table 9.1 gives an overview of
the total number of possible comparisons in each database.
Theoretically, the mean Hamming distance for comparisons between inter-class iris
templates will be 0.5. This is because, if truly independent, the bits in each template can
be thought of as being set randomly, so there is a 50% chance of a bit being set to 0 and a
50% chance of it being set to 1.
The best way to determine uniqueness is by measuring the number of degrees of freedom.
It can be calculated by approximating the collection of inter-class Hamming distance
values as a binomial distribution. The number of degrees of freedom, DOF, can be
calculated from the formula

DOF = \frac{p(1 - p)}{\sigma^2}    (10)

where p is the mean and σ the standard deviation of the inter-class Hamming distance
distribution.
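As a worked illustration, DOF can be computed in MATLAB from a vector hdInter of inter-class Hamming distances (the variable name is an assumption):

p = mean(hdInter);                 % mean inter-class Hamming distance
dof = p * (1 - p) / var(hdInter);  % degrees of freedom, Eq (10)

For the shifted NITT results in Table 9.2, p = 0.4673 and a standard deviation of 0.0163 give a DOF of about 937, in line with the tabulated value.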


Fig 9.2 Inter Class Comparison from NITT Database (with no Shifts)

Fig 9.3 Inter Class Comparison from CASIA Database (with no Shifts)

To study the inter-class comparisons, a graph is plotted with the Hamming distance along
the x axis and its corresponding frequency of occurrence when comparing different iris
patterns along the y axis.

9.3 SHIFTING OF TEMPLATES:

The templates are shifted left and right to account for rotational inconsistencies in the
eye image, and the lowest Hamming distance is taken as the actual Hamming distance.
Due to this, the mean Hamming distance for inter-class template comparisons will be
slightly lower than 0.5, since the lowest Hamming distance out of several comparisons
between shifted templates is taken.
Fig 9.4 and Fig 9.5 depict bar graphs of the inter-class comparisons, with the
Hamming distance along the x axis and frequency along the y axis, taking into account
the shifting process in each database separately: the CASIA database with a 10-bit shift
and the NITT database with an 8-bit shift. As the number of shifts increases, the mean
Hamming distance for inter-class comparisons decreases accordingly, as shown in the
graphs below.

Fig 9.4 Inter Class Comparisons from NITT Database (with 8 shifts)

Fig 9.5 Inter Class Comparisons from CASIA Database (with 10 shifts)

9.4 RECOGNITION OF INDIVIDUALS:

The iris system should be able to recognise a given individual any number of times,
and the inter-class and intra-class Hamming distances should have minimum overlap. If
the Hamming distance between two templates is less than the separation point, the
templates were generated from the same iris and a match is found. Otherwise, if the
Hamming distance is greater than the separation point, the two templates are considered
to have been generated from different irises. Shifting should also be taken into account.
A metric called decidability takes into account the mean and standard
deviation of the intra-class and inter-class distributions. The greater the decidability,
the greater the separation of the intra-class and inter-class distributions, which allows
for more accurate recognition:

d' = \frac{|\mu_1 - \mu_2|}{\sqrt{(\sigma_1^2 + \sigma_2^2)/2}}    (11)

where \mu_1 and \mu_2 are the means, and \sigma_1^2 and \sigma_2^2 the variances, of the
inter-class and intra-class distributions respectively.
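In MATLAB, the decidability of Eq (11) can be computed directly from vectors of inter-class and intra-class Hamming distances (hdInter and hdIntra are assumed names):

d = abs(mean(hdInter) - mean(hdIntra)) / ...
    sqrt((var(hdInter) + var(hdIntra)) / 2);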

However, the inter-class and intra-class distributions overlap in some regions, resulting
in incorrect matches or false accepts. Based on this intersection, two parameters, the
False Accept Rate (FAR) and the False Reject Rate (FRR), are measured.
The False Reject Rate (FRR) measures the probability of an enrolled individual not
being identified by the system. The False Accept Rate (FAR) measures the probability
of an individual being wrongly identified as another individual. The false accept rate is
defined by the normalised area between 0 and the separation point, κ, in the inter-class
distribution P_diff; the false reject rate is defined as the normalised area between the
separation point, κ, and 1 in the intra-class distribution P_same:

FAR = \frac{\int_0^{\kappa} P_{diff}(x)\,dx}{\int_0^{1} P_{diff}(x)\,dx}    (12)

FRR = \frac{\int_{\kappa}^{1} P_{same}(x)\,dx}{\int_0^{1} P_{same}(x)\,dx}    (13)
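Empirically, these rates can be estimated in MATLAB by counting comparisons on either side of the separation point (the variable names are assumptions):

kappa = 0.40;                                   % separation point from Table 9.3
FAR = sum(hdInter < kappa) / numel(hdInter);    % impostor comparisons accepted
FRR = sum(hdIntra >= kappa) / numel(hdIntra);   % genuine comparisons rejected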

9.5 FINAL OVERVIEW:

Table 9.2 Degree of Freedom and Decidability

Parameter            Without Shifting   With Shifting (8 bits)
Mean                 0.4960             0.4673
Standard Deviation   0.0243             0.0163
DOF                  425.0248           939.4595
Decidability         2.2308             4.3705

In order to improve the efficiency of the system in real time, eye images of the NITT
database are matched with an 8-bit shift, and the values of degrees of freedom and
decidability for both cases (with and without shifting) are tabulated in Table 9.2.
It can be observed from Table 9.2 that the shifting process roughly doubles the values
of both parameters, which indicates improved performance.
Table 9.3 Observation Table

Optimum number of shifts    8 bits
Optimum separation point    0.40
False Reject Rate (FRR)     3.33%  [4/120]
False Accept Rate (FAR)     0.14%  [11/7800]

Table 9.3 gives the experimental values of FAR and FRR for the 8-bit shifted
process on the NITT data set. It is observed that the optimum threshold value, or
separation point, of the Hamming distance was 0.4; a value below this implies that a
match is verified.

9.6 INTRA AND INTER CLASS COMPARISON

From the above observations it is clear that shifting the bits gives a good result by
decreasing the mean and standard deviation, thereby increasing both the degrees of
freedom and the decidability. Based on the observations conducted for the two data sets,
the threshold for classification has been found to be practically 0.4 in terms of
Hamming distance (HD).

Fig 9.6 Intra and Inter Class Comparison

Fig 9.6 is a further analysis of the differences between intra-class and inter-class
comparisons. It is noted that all intra-class comparisons have HD values less than 0.4
and the inter-class comparison values are higher than 0.4, as expected theoretically.


CHAPTER 10
CONCLUSIONS

10.1 SUMMARY
This thesis has presented an iris recognition system, which was tested using two
databases of greyscale eye images in order to verify the claimed performance of iris
recognition technology.

Firstly, an automatic segmentation algorithm was presented, which would localise the
iris region from an eye image and isolate eyelid, eyelash and reflection areas. Automatic
segmentation was achieved through the use of the circular Hough transform for
localising the iris and pupil regions, and the linear Hough transform for localising
occluding eyelids. Thresholding was also employed for isolating eyelashes and
reflections.

Next, the segmented iris region was normalised to eliminate dimensional
inconsistencies between iris regions. This was achieved by implementing a version of
Daugman's rubber sheet model, where the iris is modelled as a flexible rubber sheet
which is unwrapped into a rectangular block with constant polar dimensions.

Finally, features of the iris were encoded by convolving the normalised iris region with
1D Log-Gabor filters and phase quantising the output in order to produce a bit-wise
biometric template. The Hamming distance was chosen as a matching metric, which
gave a measure of how many bits disagreed between two templates. A failure of
statistical independence between two templates would result in a match; that is, the two
templates were deemed to have been generated from the same iris if the Hamming
distance produced was lower than a set threshold.


10.2 SUMMARY OF FINDINGS


Analysis of the developed iris recognition system has revealed a number of interesting
conclusions. It can be stated that segmentation is the critical stage of iris recognition,
since areas that are wrongly identified as iris regions will corrupt biometric templates
resulting in very poor recognition. The results presented in Chapter 5 have also shown
that segmentation can be the most difficult stage of iris recognition, because its success
is dependent on the imaging quality of the eye images. With the CASIA database, only 83%
of the images segmented successfully, owing to varied imaging conditions, while with
the real-time NITT data set, 97.6% of the images segmented correctly.

Another interesting finding was that the encoding process required only one 1D Log-Gabor
filter to provide accurate recognition, whereas the open literature mentions the use
of multi-scale representations in the encoding process. Also, the optimum centre
wavelength was found to be dependent on imaging conditions, since different lighting
conditions will produce features of different frequencies.

For both data sets, a filter bandwidth with σ/f of 0.5, a centre wavelength of 12 pixels,
and a template resolution of 20 pixels by 240 pixels was found to provide optimum
encoding. For the NITT data set, recognition with false accept and false reject rates
of 0.14% and 3.33% respectively was possible. A near-perfect recognition rate was
achieved with the CASIA data set: with a separation point of 0.4, a false accept
rate of 0.005% and a false reject rate of 0.238% were possible. These results confirm that
iris recognition is a reliable and accurate biometric technology.


10.3 SCOPE FOR FUTURE RESEARCH


The system presented in this thesis was able to perform accurately; however, there
are still a number of issues which need to be addressed. First of all, the automatic
segmentation was not perfect, since it could not successfully segment the iris regions
for all of the eye images in the two databases. In order to improve the automatic
segmentation algorithm, a more elaborate eyelid and eyelash detection system could be
implemented, such as the one suggested by Kong and Zhang [15].

An improvement could also be made in the speed of the system. The most computationally
intensive stages include performing the Hough transform, and calculating Hamming
distance values between templates to search for a match. Since the system is
implemented in MATLAB, which is an interpreted language, speed benefits could be
gained by implementing computationally intensive parts in C or C++. Speed was not one
of the objectives in developing this system, but it would have to be considered if
the system were used for real-time recognition.

An optimisation whose feasibility could be examined, given a suitable acquisition camera,
is the use of both eyes to improve the recognition rate. In this case, two templates
would be created for each individual, one for the left eye and one for the right eye. This
configuration would only accept an individual if both eyes matched corresponding
templates stored in the database. The recognition rates produced by this optimisation
would need to be balanced against the increased imaging difficulty and inconvenience to
the user.


CHAPTER 11
REFERENCES

[1] J. Daugman. How iris recognition works. Proceedings of 2002 International Conference on Image Processing, Vol. 1, 2002.

[2] E. Wolff. Anatomy of the Eye and Orbit. 7th edition. H. K. Lewis & Co. Ltd, 1976.

[3] R. Wildes. Iris recognition: an emerging biometric technology. Proceedings of the IEEE, Vol. 85, No. 9, 1997.

[4] J. Daugman. Biometric personal identification system based on iris analysis. United States Patent, Patent Number: 5,291,560, 1994.

[5] J. Daugman. High confidence visual recognition of persons by a test of statistical independence. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 11, 1993.

[6] S. Sanderson, J. Erbetta. Authentication for secure environments based on iris scanning technology. IEE Colloquium on Visual Biometrics, 2000.

[7] R. Wildes, J. Asmuth, G. Green, S. Hsu, R. Kolczynski, J. Matey, S. McBride. A system for automated iris recognition. Proceedings IEEE Workshop on Applications of Computer Vision, Sarasota, FL, pp. 121-128, 1994.

[8] W. Boles, B. Boashash. A human identification technique using images of the iris and wavelet transform. IEEE Transactions on Signal Processing, Vol. 46, No. 4, 1998.

[9] S. Lim, K. Lee, O. Byeon, T. Kim. Efficient iris recognition through improvement of feature vector and classifier. ETRI Journal, Vol. 23, No. 2, Korea, 2001.

[10] S. Noh, K. Pae, C. Lee, J. Kim. Multiresolution independent component analysis for iris identification. The 2002 International Technical Conference on Circuits/Systems, Computers and Communications, Phuket, Thailand, 2002.

[11] Y. Zhu, T. Tan, Y. Wang. Biometric personal identification based on iris patterns. Proceedings of the 15th International Conference on Pattern Recognition, Spain, Vol. 2, 2000.

[12] C. Tisse, L. Martin, L. Torres, M. Robert. Person identification technique using human iris recognition. International Conference on Vision Interface, Canada, 2002.

[13] Chinese Academy of Sciences Institute of Automation. Database of 756 Greyscale Eye Images. http://www.sinobiometrics.com, Version 1.0, 2003.

[14] C. Barry, N. Ritter. Database of 120 Greyscale Eye Images. Lions Eye Institute, Perth, Western Australia.

[15] W. Kong, D. Zhang. Accurate iris segmentation based on novel reflection and eyelash detection model. Proceedings of 2001 International Symposium on Intelligent Multimedia, Video and Speech Processing, Hong Kong, 2001.

[16] L. Ma, Y. Wang, T. Tan. Iris recognition using circular symmetric filters. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, 2002.

[17] N. Ritter. Location of the pupil-iris border in slit-lamp images of the cornea. Proceedings of the International Conference on Image Analysis and Processing, 1999.

[18] M. Kass, A. Witkin, D. Terzopoulos. Snakes: active contour models. International Journal of Computer Vision, 1987.

[19] N. Tun. Recognising Iris Patterns for Person (or Individual) Identification. Honours thesis, The University of Western Australia, 2002.

[20] D. Field. Relations between the statistics of natural images and the response properties of cortical cells. Journal of the Optical Society of America, 1987.

[21] P. Burt, E. Adelson. The Laplacian pyramid as a compact image code. IEEE Transactions on Communications, Vol. 31, No. 4, 1983.

[22] A. Oppenheim, J. Lim. The importance of phase in signals. Proceedings of the IEEE, Vol. 69, pp. 529-541, 1981.

[23] P. Burt, E. Adelson. The Laplacian pyramid as a compact image code. IEEE Transactions on Communications, Vol. COM-31, No. 4, 1983.

[24] J. Daugman. Biometric decision landscapes. Technical Report No. TR482, University of Cambridge Computer Laboratory, 2000.

[25] T. Lee. Image representation using 2D Gabor wavelets. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 18, No. 10, 1996.

[26] R. Y. F. Ng, N. Y. H. Tay, K. M. Mok. A review of iris recognition algorithms. International Symposium on Information Technology, 2008.

[27] L. Masek. Recognition of Human Iris Patterns for Biometric Identification. The University of Western Australia, 2003.

[28] http://biometrics.idealtest.org/ (CASIA Dataset)

[29] http://www.biometricupdate.com/tag/iris-recognition
