ACKNOWLEDGEMENT
We wish to take this opportunity to express our sincere gratitude to our project guide
Dr. M. Bhaskar, Associate Professor, Department of Electronics and Communication
Engineering, National Institute of Technology, Tiruchirappalli for his valuable
guidance, constant motivation and extensive support throughout the project.
We express our sincere thanks to Dr. D. Sriram Kumar, Head of the Department,
Electronics and Communication Engineering, for his encouragement during the
implementation of the project.
We would like to thank all the faculty members, supporting staff and students of the
Department of Electronics and Communication Engineering for their continuous
support and cooperation.
Portions of the research in this thesis use the CASIA iris image database collected by
the Institute of Automation, Chinese Academy of Sciences, and we thank them for these
eye images, which proved extremely useful to this research.
We also wish to extend our humble gratitude to the IOT Lab and the 1981 Alumni
Batch of NIT-Trichy for their technical guidance and financial assistance.
Finally, we thank our friends and family for their moral support which has aided us in
the successful completion of the project.
Srivignessh PSS
T Vignesh
Vignesh K
TABLE OF CONTENTS

ABSTRACT
ACKNOWLEDGEMENT
LIST OF TABLES
LIST OF FIGURES
1 INTRODUCTION
   1.1 MOTIVATION
   1.2 BACKGROUND
   1.3 OUTLINE
2 MULTIPLE STAGES OF IRIS RECOGNITION
3 ACQUISITION OF THE IRIS IMAGE
   3.1 IRISHIELD USB MK 2120U
      3.1.1 SPECIFICATIONS
4 MOBILE PHONE INTEGRATION
   4.1 INTRODUCTION
   4.2 CAPTURE STATUS
   4.4 IMPLEMENTATION
   4.5 CAPTURED IMAGES
5 SEGMENTATION
   5.1 OVERVIEW
   5.3 IMPLEMENTATION
      5.3.1 FINDING TOP AND BOTTOM EYELID
6 NORMALIZATION
   6.1 OVERVIEW
   6.3 IMPLEMENTATION
7 FEATURE EXTRACTION
   7.1 OVERVIEW
   7.2 LITERATURE REVIEW
      7.2.1 WAVELET TRANSFORM
   7.3 IMPLEMENTATION
8 TEMPLATE MATCHING
   8.1 OVERVIEW
   8.2 LITERATURE REVIEW
      8.2.1 HAMMING DISTANCE
      8.2.2 WEIGHTED EUCLIDEAN DISTANCE
      8.2.3 NORMALIZED CORRELATION
   8.3 IMPLEMENTATION
      8.3.1 ACCOUNTING FOR ROTATIONAL INCONSISTENCIES
9 RESULTS AND DISCUSSIONS
   9.1 INTRODUCTION
10 CONCLUSION
   10.1 SUMMARY
CHAPTER 1
INTRODUCTION
1.1 MOTIVATION
Technologies that exploit biometrics have the potential for application to the
identification and verification of individuals for controlling access to secured areas or
materials. A wide variety of biometrics have been marshalled in support of this
challenge. Resulting systems include those based on automated recognition of
fingerprints, hand shape, signature, voice, face and iris patterns.
However, apart from face and iris recognition, other techniques are highly invasive;
face and iris, by contrast, even offer the possibility of covert evaluation. Although
face recognition is an active topic of research, the inherent difficulty of the problem
might prevent or restrict its wider application. Iris recognition proves to be a viable
alternative for non-invasive authentication of people. The spatial patterns that are
apparent in the human iris are distinctive to every individual. The variability in
appearance of one iris might be well enough constrained to make possible an automated
recognition system based on currently available machine vision technologies.
1.2 BACKGROUND
The iris is a thin diaphragm stretching across the anterior portion of the eye and
supported by the lens, which gives it the shape of a truncated cone in three dimensions.
At its base, the iris is attached to the ciliary body, and at the opposite end it opens
into the pupil, which controls the amount of light entering the human eye. The cornea
lies in front of the iris and provides a transparent protective covering.
Fig 1.1
Fig 1.1 shows a labelled diagram of the human eye and the iris region.
The iris is an externally visible, yet protected organ whose unique epigenetic pattern
remains stable throughout adult life. These characteristics make it very attractive for
use as a biometric for identifying individuals. To appreciate the richness of the iris as
a pattern for recognition, it is useful to consider its structure in a bit more detail. The
visual appearance of the iris is a direct result of its multi-layered structure.
The iris possesses a highly distinguishing texture with the following merits:
The right eye differs from the left eye.
Twins have different iris textures.
The iris pattern remains unchanged after the age of two.
It does not degrade over time or with the environment.
Claims that the structure of the iris is unique to an individual and is stable with age
come from two main sources. The first source of evidence is clinical observations.
During the course of examining large numbers of eyes, ophthalmologists and anatomists
have noted that the detailed pattern of an iris, even the left and right iris of a single
person, seems to be highly distinctive. The second source of evidence is developmental
biology. It has been proven that while the general structure of the iris is genetically
determined, the particulars of its minutiae are critically dependent on circumstances like
the initial conditions in the embryonic precursor to the iris. Therefore, they are highly
unlikely to be replicated via the natural course of events.
1.3 OUTLINE
Mobile devices are widely used for social communication, storing large amounts of
private data, and online banking. It is important to build a reliable and user-friendly
biometric recognition system for online mobile payment and sensitive data protection.
The objective is to build and implement a real-time and robust biometric iris recognition
system in which:
i. An image of the iris is captured through the mobile camera and uploaded to the
server via Wi-Fi or the internet.
ii. The server creates a biometric template of the iris using image processing
techniques and by implementing prominent algorithms in each stage.
iii. It also compares the template with the enrolled pre-existing templates and sends
back a matching score (a floating point number between 0 and 1) to the mobile.
The development tool used will be MATLAB, and emphasis will be on both the
software for performing recognition, and the hardware for capturing an eye image. A
rapid application development (RAD) approach will be employed in order to produce
results quickly. MATLAB provides an excellent RAD environment, with its image
processing toolbox, and high level programming methodology.
To test the system, two data sets of eye images have been used as inputs:
i. The CASIA eye image database from the Institute of Automation, Chinese Academy
of Sciences.
ii. Our own database of eye images collected from students of the National Institute
of Technology, Tiruchirappalli (NITT).
CHAPTER 2
MULTIPLE STAGES OF IRIS RECOGNITION
The system is to be composed of a number of sub-systems, which correspond to the stages
of iris recognition. These stages are segmentation (locating the iris region in an eye
image), normalisation (creating a dimensionally consistent representation of the iris
region), and feature encoding (creating a template containing only the most
discriminating features of the iris); refer to Fig 2.1.
The input to the system will be an eye image, and the output will be an iris template,
which will provide a mathematical representation of the iris region (refer Fig 2.2).
Fig 2.1: Segmentation, Normalization, Feature Extraction, Authentication
Fig 2.2: (d) enhanced iris image
Fig 2.1 is a flowchart depicting an overview of the different components of the system.
Fig 2.2 shows the various stages of an iris image during the recognition process.
CHAPTER 3
ACQUISITION OF THE IRIS IMAGE
In order to generalise the project as a robust model and to ensure its wide scope of
applications, it is essential that all the iris images are captured by the same camera.
This ensures that the large variations between images from different mobile cameras do
not affect the performance of the system.
3.1 IriShield USB MK 2120U
The IriShield USB MK 2120U is a ready-to-use single-iris capture camera. The compact
camera is powered via the USB port, as shown in Fig 3.1. Its low power consumption and
mobile OS support make it suitable for use with smartphones, tablets or other handheld
devices.
Each eye is illuminated by an infrared LED, so irises can be captured in various indoor
and outdoor environments. The captured iris images are compliant with the ISO/IEC
19794-6 standard.
Fig 3.1
3.1.1 SPECIFICATIONS
Table 3.1 gives the important specifications of the IriShield USB MK 2120U:
Device name             IriShield USB MK 2120U
Manufacturer            IriTech, Inc.
Device connection       USB 2.0
Supported OS            Microsoft Windows, Linux, Android
Illumination            Infrared
Device weight           300 grams
Operating temperature   0 °C to 45 °C

Table 3.1
CHAPTER 4
MOBILE PHONE INTEGRATION
4.1 INTRODUCTION:
In this project, an Android application has been developed to capture the iris image
using the IriShield USB MK 2120U and upload the captured image to the local server. The
uploaded image is then used for further processing in Octave/MATLAB on the server.
The Android application uses the package com.iritech.iddk.demo, provided by IriTech,
Inc. exclusively for this purpose. The application takes into account factors like
distance and illumination, and reports the image quality as an Iris Quality Score and a
usable iris area. These parameters are provided by the IddkIrisQuality class.
IddkImage consists of the image format (.jpg), width, height and the image data.
4.2 CAPTURE STATUS:
These are the five main states of capturing provided by the IddkCaptureStatus class:
IDDK_IDLE
IDDK_READY
IDDK_CAPTURING
IDDK_COMPLETE
IDDK_ABORT
Initially the mobile phone is in the IDDK_IDLE state. When the START button is pressed,
it enters the capture state. After StartCapture, the capturing process enters
IDDK_READY. Then, in the case of Auto Capture Mode (IDDK_AUTO_CAPTURE), streaming
images from the iris camera immediately go through the Quality Measurement (QM)-based
live evaluation, which detects qualified iris images.
When the first eye image is detected by the live evaluation, the capturing process
enters the IDDK_CAPTURING status. After a reasonable duration of time, or once a
reasonable number of qualified eye images has been detected, the process completes and
its status changes to IDDK_COMPLETE.
Two quality parameters are reported for the captured image. The Iris Quality Score is
the total score of the captured image, ranging from 0 to 100; the higher the score, the
better the image quality, and enrolled images should have a stringently high total
score, e.g., at least 70. The usable iris area is the percentage of unoccluded iris in
the captured image, also ranging from 0 to 100; the lower this value, the more occluded
the iris, and a highly occluded iris significantly affects the matching accuracy.
Enrolled images should therefore have a high usable iris area, e.g., greater than 70.
These results are obtained from the IddkIrisQuality class.
4.4 IMPLEMENTATION:
The Android application was developed using Android Studio 2.1 and tested on an Android
mobile phone; the captured image was uploaded to the local server, and processing takes
place on the server. Fig 4.2 shows the image of an eye captured by the IR camera as
displayed in the app. The details of the app are given in Fig 4.3.
4.5 CAPTURED IMAGES: Sample right and left eye images captured by the iris camera are
shown in Fig 4.4.
Fig 4.4
CHAPTER 5
SEGMENTATION
5.1 OVERVIEW
The first stage of iris recognition is to isolate the actual iris region in a digital eye image.
The iris region can be approximated by two circles, one for the iris/sclera boundary and
another, interior to the first, for the iris/pupil boundary. The eyelids and eyelashes
normally occlude the upper and lower parts of the iris region. Also, specular reflections
can occur within the iris region corrupting the iris pattern.
Hence, a technique is required to isolate and exclude these artefacts as well as to
locate the circular iris region. The success of segmentation depends on the imaging
quality of the eye images. The segmentation stage is critical to the success of an iris
recognition system, since data that is falsely represented as iris pattern data will
corrupt the biometric templates generated, resulting in poor recognition rates.
As shown in Fig 5.1, circle localisation is made more accurate and efficient since
there are fewer edge points to cast votes in the Hough space.
Fig 5.1: (b) edge map, (c) horizontal edge map, (d) vertical edge map
5.3 IMPLEMENTATION
It was decided to use the circular Hough transform for detecting the iris and pupil
boundaries. This involves first employing Canny edge detection to generate an edge
map. Gradients were biased in the vertical direction for the outer iris/sclera
boundary, while vertical and horizontal gradients were weighted equally for the inner
iris/pupil boundary, using a modified version of Kovesi's Canny edge detection MATLAB
function.
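The directional bias can be illustrated with a short MATLAB sketch. This is a hedged
illustration only: the derivative kernel, the weights wV and wH, and the variable name
eyeimage are assumptions, and the project itself used a modified version of Kovesi's
Canny function rather than this code.

    % Illustrative direction-biased gradient for the Canny stage (assumed weights).
    img = double(eyeimage);                  % grayscale eye image
    gx  = conv2(img, [-1 0 1],  'same');     % horizontal derivative
    gy  = conv2(img, [-1 0 1]', 'same');     % vertical derivative
    wV = 1.0; wH = 0.0;                      % outer iris/sclera boundary: vertical bias
    % wV = 1.0; wH = 1.0;                    % inner iris/pupil boundary: equal weights
    grad = sqrt((wV*gy).^2 + (wH*gx).^2);    % biased gradient magnitude for the edge map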
The range of radius values to search for was set manually, depending on the database
used.
For the CASIA database, values of the iris radius range from 90 to 150 pixels, while the
pupil radius ranges from 28 to 75 pixels.
For the self-captured images of the students, the range of values is as follows:
In order to make the circle detection process more efficient and accurate, the Hough
transform for the iris/sclera boundary was performed first; then the Hough transform
for the iris/pupil boundary was performed within the detected iris region, instead of
the whole eye region, since the pupil is always within the iris region. After this
process was complete, six parameters were stored: the radius and the x and y centre
coordinates for each of the two circles.
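A minimal sketch of this two-stage search is given below. It assumes MATLAB with the
Image Processing Toolbox and a grayscale image eyeimage; houghcircle is an illustrative
local helper written for this sketch, not the project's actual routine, and the radius
ranges are the CASIA values quoted above.

    % Two-stage circular Hough search: iris/sclera boundary over the whole
    % eye image first, then the pupil boundary only inside the iris disc.
    edges = edge(eyeimage, 'canny');                      % Canny edge map
    [ri, ci, irisR] = houghcircle(edges, 90, 150);        % iris radius 90-150 px
    [yy, xx] = ndgrid(1:size(edges,1), 1:size(edges,2));
    inside = (xx - ci).^2 + (yy - ri).^2 < irisR^2;       % restrict pupil search
    [rp, cp, pupilR] = houghcircle(edges & inside, 28, 75);  % pupil radius 28-75 px

    function [rowc, colc, bestr] = houghcircle(edges, rmin, rmax)
        % Accumulate votes over centre position and radius; return the peak.
        [rows, cols] = find(edges);
        H = zeros([size(edges), rmax - rmin + 1]);
        theta = linspace(0, 2*pi, 120);                   % coarse angular sampling
        for k = 1:numel(rows)
            for r = rmin:rmax
                xc = round(cols(k) - r*cos(theta));
                yc = round(rows(k) - r*sin(theta));
                ok = xc >= 1 & xc <= size(edges,2) & yc >= 1 & yc <= size(edges,1);
                idx = unique(sub2ind(size(H), yc(ok), xc(ok), ...
                             repmat(r - rmin + 1, 1, nnz(ok))));
                H(idx) = H(idx) + 1;                      % cast votes
            end
        end
        [~, m] = max(H(:));
        [rowc, colc, ir] = ind2sub(size(H), m);
        bestr = ir + rmin - 1;
    end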
5.3.1 FINDING TOP AND BOTTOM EYELID
After the iris is localised, the eyelids have to be blackened and removed. The
following functions are used to detect the top and bottom eyelids.
% rowp: row coordinate of the pupil centre; r: pupil radius (from the Hough stage)
topeyelid = imageiris(1 : (rowp - r), :);                       % region above the pupil
toplines = findline(topeyelid);                                 % linear Hough line fit
bottomeyelid = imageiris((rowp + r) : size(imageiris, 1), :);   % region below the pupil
bottomlines = findline(bottomeyelid);
Fig 5.2
CHAPTER 6
NORMALIZATION
6.1 OVERVIEW
Once the iris region is successfully segmented from an eye image, the next stage is to
transform the iris region so that it has fixed dimensions in order to allow comparisons.
The dimensional inconsistencies between eye images are mainly due to the stretching
of the iris caused by pupil dilation from varying levels of illumination.
The normalisation process will produce iris regions, which have the same constant
dimensions, so that two photographs of the same iris under different conditions will
have characteristic features at the same spatial location.
Fig 6.1
A diagrammatic representation of Daugman's Rubber Sheet Model is shown in Fig 6.1.
Even though the homogeneous rubber sheet model accounts for pupil dilation, imaging
distance and non-concentric pupil displacement, it does not compensate for rotational
inconsistencies.
The rubber sheet model remaps each point within the iris region to a pair of polar
coordinates (r, θ), where r lies in the interval [0, 1] and θ is the angle in [0, 2π]:

    I( x(r, θ), y(r, θ) ) -> I(r, θ)                                (2)

with x(r, θ) = (1 − r) xp(θ) + r xl(θ) and y(r, θ) = (1 − r) yp(θ) + r yl(θ), where
I(x, y) is the iris region image and (xp, yp), (xl, yl) are the coordinates of the
pupil and iris boundaries along the direction θ.
6.3 IMPLEMENTATION
Normalization of the segmented iris image is done using Daugman's Rubber Sheet Model
technique. The problem of rotational inconsistencies in this method is overcome during
the Template Matching stage and will be discussed later.
Fig 6.2
Since the pupil can be non-concentric to the iris, a remapping formula is needed to
rescale points depending on the angle around the circle. This is given by the formula:
    r' = √α · β ± √( α β² − α + rI² )                               (3)

with

    α = ox² + oy²                                                   (4)
    β = cos( π − arctan(oy / ox) − θ )                              (5)

where the displacement of the centre of the pupil relative to the centre of the iris is
given by ox and oy, r' is the distance between the edge of the pupil and the edge of
the iris at an angle θ around the region, and rI is the radius of the iris. The
remapping formula first gives the radius of the iris region doughnut as a function of
the angle θ.
Fig 6.3 shows the various images of the same iris during the normalization process,
including the polar iris and noise pattern arrays obtained using Daugman's Rubber Sheet
Model, whose unique features have to be extracted further.
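The remapping can be sketched in a few lines of MATLAB. The sketch below is a
simplified illustration that assumes concentric pupil and iris circles with a shared
centre (cx, cy) and known radii pupilR and irisR (all hypothetical names); the actual
implementation applies the non-concentric correction of equations (3) to (5).

    % Simplified rubber-sheet normalisation: sample the iris annulus on a
    % fixed polar grid so every iris maps to the same rectangular pattern.
    radialRes = 20; angularRes = 240;               % template resolution (see Ch. 7)
    theta = linspace(0, 2*pi, angularRes);
    rnorm = linspace(0, 1, radialRes)';             % r in [0,1] across the annulus
    R = pupilR + rnorm * (irisR - pupilR);          % radial sample positions
    X = cx + R .* cos(theta);                       % sample grid in image coordinates
    Y = cy + R .* sin(theta);
    polariris = interp2(double(eyeimage), X, Y);    % bilinear sampling
    noisemask = isnan(polariris);                   % samples falling outside the image
    polariris(noisemask) = 0;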
CHAPTER 7
FEATURE EXTRACTION
7.1 OVERVIEW
In order to provide accurate recognition of individuals, the most discriminating
information present in an iris pattern must be extracted. Only the significant features of
the iris must be encoded so that comparisons between templates can be made. Most iris
recognition systems make use of a band pass decomposition of the iris image to create
a biometric template.
In this stage, texture analysis methods are used to extract the significant features from
the normalized iris image. The extracted features will be encoded to generate a
biometric template.
7.2 LITERATURE REVIEW
7.2.1 Wavelet Transform
Wavelets can be used to decompose the data in the iris region into components that
appear at different resolutions. Wavelets have the advantage over traditional Fourier
transform in that the frequency data is localised, allowing features which occur at the
same position and resolution to be matched up. A number of wavelet filters, also called
a bank of wavelets, is applied to the 2D iris region, one for each resolution with each
wavelet a scaled version of some basis function. The output of applying the wavelets is
then encoded in order to provide a compact and discriminating representation of the iris
pattern.
The real and imaginary parts of a 2D Gabor filter are known as the even symmetric and
odd symmetric components respectively. The centre frequency of the filter is specified
by the frequency of the sine/cosine wave, and the bandwidth of the filter is specified
by the width of the Gaussian.
Each pattern is then demodulated to extract its phase information using quadrature 2D
Gabor wavelets. The phase information is quantized into four quadrants in the complex
plane. Each quadrant is represented by two bits of phase information. Therefore, each
pixel in the normalized image is demodulated into a two-bit code in the template.
The filters used in this work are Log-Gabor filters, which have the frequency response

    G(f) = exp( −(log(f / f0))² / (2 (log(σ / f0))²) )              (6)

where f0 represents the centre frequency, and σ gives the bandwidth of the filter.
7.3 IMPLEMENTATION
Feature encoding was implemented by convolving the normalised iris pattern with 1D
Log-Gabor wavelets. The 2D normalised pattern is broken up into a number of 1D signals,
and these 1D signals are convolved with 1D Log-Gabor wavelets. The rows of the 2D
normalised pattern are taken as the 1D signals; each row corresponds to a circular ring
on the iris region. The angular direction is taken rather than the radial one, which
corresponds to columns of the normalised pattern, since maximum independence occurs in
the angular direction.
The encoding process produces a bitwise template containing a number of bits of
information, and a corresponding noise mask which corresponds to corrupt areas within
the iris pattern, and marks bits in the template as corrupt. Since the phase information
will be meaningless at regions where the amplitude is zero, these regions are also
marked in the noise mask. The total number of bits in the template will be the angular
resolution times the radial resolution, times 2, times the number of filters used.
In the proposed system, the total number of bits in the biometric template is equal to
240 x 20 x 2 x 1 = 9600 bits in a binary template.
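A hedged sketch of this encoding stage is given below. It assumes the 20 x 240
normalised pattern polariris from the previous stage and uses the filter parameters
reported in Chapter 10 (centre wavelength 12 pixels, sigma/f0 = 0.5); it is an
illustration of the technique, not the project's exact code.

    % 1D Log-Gabor phase encoding applied row by row to the normalised pattern.
    wavelength = 12; sigmaOnf = 0.5;                 % parameters from Chapter 10
    [nrows, ncols] = size(polariris);                % 20 x 240
    fo = 1 / wavelength;                             % filter centre frequency
    freq = (1:fix(ncols/2)) / ncols;                 % positive frequency bins
    logGabor = zeros(1, ncols);
    logGabor(2:fix(ncols/2)+1) = exp(-(log(freq/fo)).^2 / (2*log(sigmaOnf)^2));
    template = false(nrows, 2*ncols);                % 2 phase bits per pixel
    for r = 1:nrows
        sig = ifft(fft(polariris(r,:)) .* logGabor); % complex filter response
        template(r, 1:2:end) = real(sig) > 0;        % even-symmetric phase bit
        template(r, 2:2:end) = imag(sig) > 0;        % odd-symmetric phase bit
    end
    % total bits: 20 rows x 240 columns x 2 = 9600, as computed above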
Fig 7.1
The image shown in Fig 7.1 is that of the binary biometric template obtained from an
input iris image using 1D Log-Gabor filters. The binary 1s and 0s are represented by
white and black in the template, respectively.
Each input iris image is converted into a binary biometric template as shown in Fig 7.1
and these templates are used for the enrolment, identification and verification of the
individuals based on a matching algorithm.
CHAPTER 8
TEMPLATE MATCHING
8.1 OVERVIEW
The template that is generated in the feature encoding process will also need a
corresponding matching metric, which gives a measure of similarity between two iris
templates. This metric should give one range of values when comparing templates
generated from the same eye, known as intra-class comparisons, and another range of
values when comparing templates created from different irises, known as inter-class
comparisons.
These two cases should give distinct and separate values, so that a decision can be made
with high confidence as to whether two templates are from the same iris, or from two
different irises.
8.2 LITERATURE REVIEW
8.2.1 Hamming Distance
The Hamming distance is defined as a fractional measure of dissimilarity between two
binary templates; a value of zero represents a perfect match. It gives a measure of how
many bits differ between two bit patterns. Using the Hamming distance of two bit
patterns, a decision can be made as to whether the two patterns were generated from
different irises or from the same one.
In comparing the bit patterns X and Y, the Hamming distance, HD, is defined as the sum
of disagreeing bits (sum of the exclusive-OR between X and Y) over N, the total number
of bits in the bit pattern. It is given by
    HD = (1/N) · Σ(j=1..N) ( Xj XOR Yj )                            (7)
Since an individual iris region contains features with high degrees of freedom, each
iris region will produce a bit pattern which is independent of that produced by another
iris; on the other hand, two iris codes produced from the same iris will be highly
correlated. If two bit patterns are completely independent, such as iris templates
generated from different irises, the Hamming distance between the two patterns should
equal 0.5.
8.2.2 WEIGHTED EUCLIDEAN DISTANCE
The weighted Euclidean distance (WED) compares templates composed of integer feature
values, measuring the distance between the unknown iris and each enrolled iris
template k:

    WED(k) = Σ(i=1..N) ( fi − fi(k) )² / ( δi(k) )²                 (8)

where fi is the ith feature of the unknown iris, fi(k) is the ith feature of iris
template k, and δi(k) is the standard deviation of the ith feature in iris template k.
The unknown iris template is found to match iris template k when WED is a minimum at k.
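For illustration, WED-based matching can be sketched in two lines of MATLAB; the
variable names here (f, F, S) are hypothetical, not from the project code.

    % f: 1 x M feature vector of the unknown iris; F: K x M enrolled feature
    % vectors; S: K x M per-feature standard deviations of the K templates.
    wed = sum(((f - F).^2) ./ (S.^2), 2);   % equation (8): one WED per template
    [~, kbest] = min(wed);                  % best match = template with minimum WED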
8.3 IMPLEMENTATION
For matching, the Hamming distance was chosen as a metric for recognition, since bitwise comparisons were necessary. The Hamming distance algorithm employed also
incorporates noise masking, so that only significant bits are used in calculating the
Hamming distance between two iris templates.
Now when taking the Hamming distance, only those bits in the iris pattern that
correspond to 0 bits in noise masks of both iris patterns will be used in the calculation.
The Hamming distance will be calculated using only the bits generated from the true
iris region, and this modified Hamming distance formula is given as
    HD = ( Σ(j=1..N) (Xj XOR Yj) AND Xn'j AND Yn'j ) / ( N − Σ(k=1..N) Xnk OR Ynk )   (9)

where Xj and Yj are the two bitwise templates to compare, Xnj and Ynj are the
corresponding noise masks for Xj and Yj (Xn'j and Yn'j denote their complements), and N
is the number of bits represented by each template.
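This masked comparison can be written as a small MATLAB function. The sketch below is a
hedged illustration of equation (9) with assumed logical-array inputs, not the
project's exact routine.

    % Noise-masked Hamming distance of equation (9).
    % X, Y: logical bit templates; Xn, Yn: noise masks (true = corrupt bit).
    function hd = hammingdist(X, Y, Xn, Yn)
        valid = ~Xn & ~Yn;                 % bits clean in both templates
        nvalid = nnz(valid);
        if nvalid == 0
            hd = NaN;                      % no usable bits to compare
            return
        end
        hd = nnz(xor(X, Y) & valid) / nvalid;
    end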
CHAPTER 9
RESULTS AND DISCUSSIONS
9.1 INTRODUCTION:
In this chapter, the performance of the iris recognition system as a whole is examined.
Tests were carried out to find the best separation point, so that the false accept and
false reject rates are minimised, and to confirm that iris recognition can perform
accurately as a biometric for the recognition of individuals.
There are a number of parameters that affect the accuracy of the iris recognition
system, and we have calibrated these values for the best results. These parameters are:
Number of shifts
Inter-class and intra-class comparisons with and without shifts
Decidability
The above simulations are performed with two data sets: one from the Chinese Academy of
Sciences Institute of Automation (CASIA) eye image database, and another from our own
database collected from students of the National Institute of Technology,
Tiruchirappalli (NITT). Based on the experiments conducted on these databases, we have
made comparisons. For each student in the NITT testing database, three different images
of the same eye were taken, as shown in Fig 9.1.
Fig 9.1 Sample Data Set of One Student: 3 images each for the right and left eye,
respectively.
Set Name   Images   Intra-Class Comparisons   Inter-Class Comparisons
CASIA      624      1679                      192,699
NITT       120      120                       7,800

Table 9.1
22
Fig 9.2 Inter Class Comparison from NITT Database (with no Shifts)
Fig 9.3 Inter Class Comparison from CASIA Database (with no Shifts)
To study the inter-class comparisons, a graph is plotted with the Hamming distance
along the x-axis and the frequency of its occurrence when comparing different iris
patterns along the y-axis.
As the number of shifts increases, the mean Hamming distance for inter-class
comparisons will decrease accordingly as shown in the graphs below.
Fig 9.4 Inter Class Comparisons from NITT Database (with 8 shifts)
24
Fig 9.5 Inter Class Comparisons from CASIA Database (with 10 shifts)
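The shift-and-match step itself can be sketched as follows, reusing the hammingdist
sketch from Chapter 8; the shift count and the two-bits-per-pixel stride are
assumptions consistent with the template layout described earlier.

    % Rotational-shift matching: compare at several circular bit shifts and
    % keep the minimum Hamming distance (8 shifts in each direction shown).
    bestHD = inf;
    for s = -8:8
        Xs  = circshift(X,  2*s, 2);       % shift by whole pixels: 2 bits each
        Xns = circshift(Xn, 2*s, 2);
        bestHD = min(bestHD, hammingdist(Xs, Y, Xns, Yn));
    end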
The decidability d' measures how well separated the inter-class and intra-class Hamming
distance distributions are:

    d' = |μ1 − μ2| / √( (σ1² + σ2²) / 2 )                           (11)

where μ1 and μ2 are the means of the inter-class and intra-class distributions, and σ1²
and σ2² are the variances of the inter-class and intra-class distributions.
25
However, the inter-class and intra-class distributions overlap in some regions,
resulting in incorrect matches or false accepts. Based on this intersection, two
parameters, the False Accept Rate (FAR) and the False Reject Rate (FRR), are measured.
The False Reject Rate (FRR) measures the probability of an enrolled individual not
being identified by the system. The False Accept Rate (FAR) measures the probability of
an individual being wrongly identified as another individual. The false accept rate is
defined by the normalised area between 0 and the separation point, λ, in the
inter-class distribution Pdiff. The false reject rate is defined as the normalised area
between the separation point, λ, and 1 in the intra-class distribution Psame.
    FRR = ∫(λ to 1) Psame(x) dx / ∫(0 to 1) Psame(x) dx             (12)

    FAR = ∫(0 to λ) Pdiff(x) dx / ∫(0 to 1) Pdiff(x) dx             (13)
                     Without Shifting   With Shifting
Mean                 0.4960             0.4673
Standard Deviation   0.0243             0.0163
DOF                  425.0248           939.4595
Decidability         2.2308             4.3705

Table 9.2
In order to improve the efficiency of the system in real time, eye images of the NITT
database are matched with an 8-bit shift, and the values of Degrees of Freedom (DOF)
and Decidability for both cases (with and without shifting) are tabulated in Table 9.2.
It can be observed from Table 9.2 that the shifting process roughly doubles both
parameters, which indicates improved performance.
Table 9.3 Observation Table

Shifts   Threshold   FRR             FAR
8 bits   0.40        3.33% [4/120]   0.14% [11/7800]
Table 9.3 gives the experimental values of FAR and FRR for the 8-bit shifted process on
the NITT data set. It is observed that the optimum threshold value, or separation
point, of the Hamming distance was 0.4; a Hamming distance below this value implies a
verified match.
Fig 9.6 presents a further analysis of the differences between intra-class and
inter-class comparisons. It is noted that all intra-class comparisons have HD values
less than 0.4 and the inter-class comparison values are higher than 0.4, as expected
theoretically.
CHAPTER 10
CONCLUSIONS
10.1 SUMMARY
This thesis has presented an iris recognition system, which was tested using two
databases of greyscale eye images in order to verify the claimed performance of iris
recognition technology.
Firstly, an automatic segmentation algorithm was presented, which would localise the
iris region from an eye image and isolate eyelid, eyelash and reflection areas. Automatic
segmentation was achieved through the use of the circular Hough transform for
localising the iris and pupil regions, and the linear Hough transform for localising
occluding eyelids. Thresholding was also employed for isolating eyelashes and
reflections.
Finally, features of the iris were encoded by convolving the normalised iris region with
1D Log-Gabor filters and phase quantising the output in order to produce a bit-wise
biometric template. The Hamming distance was chosen as a matching metric, which
gave a measure of how many bits disagreed between two templates. A failure of the test
of statistical independence between two templates would result in a match; that is, the
two templates were deemed to have been generated from the same iris if the Hamming
distance produced was lower than a set threshold.
Another interesting finding was that the encoding process required only one 1D
Log-Gabor filter to provide accurate recognition, even though the open literature
mentions the use of multi-scale representation in the encoding process. Also, the
optimum centre wavelength was found to be dependent on imaging conditions, since
different lighting conditions will produce features of different frequencies.
For both data sets, a filter bandwidth given by σ/f0 of 0.5, a centre wavelength of 12
pixels, and a template resolution of 20 by 240 pixels were found to provide optimum
encoding. For the NITT data set, high recognition accuracy was achieved, with false
accept and false reject rates of 0.14% and 3.33% respectively. A near-perfect
recognition rate was achieved with the CASIA data set: with a separation point of 0.4,
a false accept rate of 0.005% and a false reject rate of 0.238% were possible. These
results confirm that iris recognition is a reliable and accurate biometric technology.
An improvement could also be made in the speed of the system. The most computationally
intensive stages include performing the Hough transform and calculating Hamming
distance values between templates to search for a match. Since the system is
implemented in MATLAB, which is an interpreted language, speed benefits could be gained
by implementing the computationally intensive parts in C or C++. Speed was not one of
the objectives in developing this system, but it would have to be considered if the
system were used for real-time recognition.