Abstract
Iris recognition is widely used in environments where verifying a person's identity is necessary, so maintaining the reliability and stability of such systems in harsh environments is a challenging problem. Iris segmentation is one of the most important processes in iris recognition for preserving these characteristics; indeed, a poor segmentation may compromise the performance of the entire system. This paper presents a comparative study of four segmentation algorithms within a high-reliability iris verification system. These segmentation approaches are implemented, evaluated and compared based on their accuracy using three unconstrained databases, one of which is a video iris database. The results show that, for an ultra-high-security verification system at FAR = 0.01 %, segmentation 3 (Viterbi) presents the best results.
1 Introduction
Nowadays, iris recognition is widely used in environments where verifying a person's identity is necessary, so maintaining a stable and reliable iris recognition system that remains effective in unconstrained environments is a challenging problem. Indeed, under difficult conditions the person to be recognized usually moves their head in different ways, giving rise to non-ideal images (with occlusion, off-angle gaze, motion blur and defocus) [1, 2]. Most iris recognition systems achieve recognition rates higher than 99 % under controlled conditions; these cases are well documented in [3–6]. However, recognition rates may decrease significantly in unconstrained environments if the capabilities of the key processing stages are not developed accordingly. For this reason, some improvements have been incorporated into the acquisition module [7–9]. As mentioned above, iris segmentation is one of the most important processes in iris recognition for preserving a highly reliable and stable iris verification system in harsh environments, and it may compromise the performance of the entire system. Thus, this paper presents a comparative study of four segmentation algorithms within a high-reliability iris verification system. These segmentation approaches are implemented, evaluated and compared based on their accuracy using three unconstrained databases, one of which is a video iris database. The remainder of this paper is organized as follows. Section 2 presents the iris verification scheme and the segmentation approaches. Section 3 presents the experimental results, and Sect. 4 gives the conclusions of this work.
2 Iris Verification Scheme and Segmentation Approaches
The iris verification scheme (Fig. 1) comprises the following steps. After one or more iris images of a person are captured, either with a single type of sensor or with multiple sensors, the input image is preprocessed. In this step the inner and outer boundaries of the iris are extracted using at least one of the four selected segmentation algorithms [10, 17]. Once the iris segmentation is obtained, a coordinate transformation is performed to obtain the normalized iris image.
Finally, the feature extraction, matching and similarity-degree steps [17] are performed, as shown in Fig. 1.
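As an illustration of how these stages chain together, the following is a minimal Python sketch of the pipeline in Fig. 1. All function bodies are hypothetical stand-ins (the paper's actual implementation is in C), and the decision threshold is only an example value.

```python
# Hypothetical sketch of the verification pipeline in Fig. 1; every function
# body here is a placeholder, not the paper's implementation.

def segment(image):
    """Placeholder for one of the four segmentation algorithms:
    returns (pupil_circle, iris_circle) as (cx, cy, radius)."""
    h, w = len(image), len(image[0])
    return ((w // 2, h // 2, 20), (w // 2, h // 2, 60))

def normalize(image, pupil, iris):
    """Placeholder polar unwrapping (rubber-sheet style normalization)."""
    return [[0] * 256 for _ in range(32)]

def extract_features(norm):
    """Placeholder binary iris-code extraction."""
    return [bit for row in norm for bit in row]

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two iris codes."""
    diff = sum(a != b for a, b in zip(code_a, code_b))
    return diff / len(code_a)

def verify(probe_img, gallery_code, threshold=0.32):
    """Accept the claimed identity if the Hamming distance is small enough."""
    pupil, iris = segment(probe_img)
    code = extract_features(normalize(probe_img, pupil, iris))
    return hamming_distance(code, gallery_code) <= threshold
```

The threshold Th of 0.32 is illustrative only; in the paper it is chosen per application from the ROC analysis of Sect. 3.3.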
2.1 Viterbi-Based Segmentation Algorithm
The first step of this segmentation approach [10, 11] consists of a rough localization of the pupil area. First, specular reflections due to illuminators are removed by filling the white holes. Then, a morphological opening removes dark areas smaller than a disk-shaped structuring element. After these operations the pupil is almost the largest dark area, surrounded by the iris, which is darker than the sclera and the skin. Consequently, the sum of intensity values over large windows in the image is computed, and the minimum corresponds to the pupil area. With the pupil roughly located, a morphological reconstruction allows estimating a first center, which is required for exploiting the Viterbi algorithm. The second step consists of accurately extracting the pupil contour and a well-estimated pupil circle for normalization. Relying on the pupil center, the Viterbi algorithm is used to extract the accurate pupil contour, which is then used to build the iris mask for recognition purposes. The Viterbi algorithm is exploited at two resolutions, corresponding to the number of points considered to estimate the contour. At high resolution (all pixels are considered), it finds precise contours, while at low resolution it retrieves coarse contours that are used to improve the accuracy of the normalization circles. One advantage of this approach is that it can be easily generalized to elliptic normalization curves as well as to other parametric normalization curves in polar coordinates. Another clear benefit is that it does not require any threshold on the gradient map. Moreover, the implementation is generic: the system can be used on different databases presenting various degradations, without any adaptation.
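The coarse pupil search described above (minimum of the intensity sums over large windows) can be sketched as follows. This is an assumed illustration using an integral image for constant-time window sums, not the authors' code; window size and data layout are arbitrary choices.

```python
# Sketch (assumption, not the authors' implementation) of the coarse pupil
# search: slide a large window over the image and keep the window whose
# intensity sum is minimal, since the pupil is the darkest large region.

def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img over rows < y, cols < x."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def darkest_window(img, win):
    """Return (x, y) of the top-left corner of the win x win window with
    the minimal intensity sum, each sum computed in O(1) via the table."""
    h, w = len(img), len(img[0])
    ii = integral_image(img)
    best, best_xy = None, None
    for y in range(h - win + 1):
        for x in range(w - win + 1):
            s = (ii[y + win][x + win] - ii[y][x + win]
                 - ii[y + win][x] + ii[y][x])
            if best is None or s < best:
                best, best_xy = s, (x, y)
    return best_xy
```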
2.2 Contrast-Adjusted Hough Transform Segmentation Algorithm
Contrast-adjusted Hough Transform (CHT) is based on Masek's implementation [10, 12] of a Hough transform approach with database-specific contrast adjustment. This method is well known to imply a high computational cost. To mitigate it, a Canny edge detector is used to find the boundary curves that guide the localization of the iris and pupil boundaries, together with enhancement techniques that remove unlikely edges.
2.3 Weighted Adaptive Hough and Ellipsopolar Transform Segmentation Algorithm
Weighted Adaptive Hough and Ellipsopolar Transforms (WHT) [10, 13] is the iris segmentation algorithm implemented in the USIT toolbox. The algorithm applies Gaussian weighting functions to incorporate model-specific prior knowledge. An adaptive Hough transform is applied at multiple resolutions to estimate the approximate position of the iris center. A subsequent polar transform detects the first elliptic limbic or pupillary boundary, and an ellipsopolar transform finds the second boundary based on the outcome of the first. This way, both iris images with a clear limbic boundary (typical for visible-wavelength capture) and images with a clear pupillary boundary (typical for near infrared) can be processed in a uniform manner.
2.4 Modified Hough Transform Segmentation Algorithm
Modified Hough Transform (MHT) uses the circular Hough transform initially employed by Wildes et al. [14] combined with a Canny edge detector [15–17]. From the edge map, votes are cast in Hough space for the parameters of circles passing through each edge point. These parameters are the centre coordinates \( \left[ {({\text{x}}_{\text{p}} ,{\text{y}}_{\text{p}} ), ({\text{x}}_{\text{i}} ,{\text{y}}_{\text{i}} )} \right] \) and the radii \( \left[ {{\text{r}}_{\text{p}} ,{\text{r}}_{\text{i}} } \right] \) of the pupil and iris outer boundaries, respectively.
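As an illustrative sketch (not the paper's implementation), the voting step of a circular Hough transform for a single fixed radius r can be written as follows; in practice the vote is cast over a range of radii as well, and the angular step is an arbitrary discretization choice.

```python
# Minimal circular Hough voting sketch: every edge point votes for the
# centres (a, b) of all circles of radius r passing through it; the
# accumulator cell with the most votes is taken as the circle centre.
import math
from collections import Counter

def hough_circle(edge_points, r, width, height):
    votes = Counter()
    for (x, y) in edge_points:
        for t in range(0, 360, 5):          # discretized directions
            a = int(round(x - r * math.cos(math.radians(t))))
            b = int(round(y - r * math.sin(math.radians(t))))
            if 0 <= a < width and 0 <= b < height:
                votes[(a, b)] += 1
    return votes.most_common(1)[0][0]       # centre with the most votes
```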
3 Experimental Results
The aim of this research was to explore the capacity of robust segmentation methods to increase recognition rates in unconstrained environments, within a high-reliability iris verification system.
3.1 Databases
To develop robust iris preprocessing, feature extraction and matching methods for unconstrained environments, it is necessary to use databases collected with different iris cameras and under different capture conditions. For this research we used two still-image databases and one video database. In CASIA-V3-INTERVAL (images) [18], all iris images are 8-bit gray-level JPEG files collected under near infrared illumination, with 320 × 280 pixel resolution (2639 images, 395 classes); almost all subjects are Chinese. CASIA-V4-THOUSAND (images) [19] contains 20,000 iris images from 1,000 subjects; the main sources of intra-class variation are eyeglasses and specular reflections. The MBGC-V2 (video) database [20] provides 986 near infrared eye videos, all acquired with an LG2200 EOU iris capture system [21]. This database presents noise factors, especially those related to reflections, contrast, luminosity, eyelid and eyelash obstruction of the iris, and focus, which make it the most appropriate for the objectives of real iris systems in uncontrolled environments. Sample images from the three databases are shown in Fig. 2.
Table 1 shows the detailed specifications of the iris databases used.
3.2 Quality for Segmented Images
In this part we consider two quality categories for segmented images: good segmented and bad segmented images. Good segmented images contain more than 60 % of the iris texture and less than 40 % of eyelids, eyelashes or other elements that do not belong to the eye (noise elements). Bad segmented images contain more than 40 % of noise elements (see Fig. 3).
Table 2 shows the segmentation results obtained on the analyzed databases. The evaluation was carried out manually by inspecting the segmented iris images. As a measure of segmentation performance we computed the percentage of good segmented images (PGI) for each evaluated database by expression (1):

\( {\text{PGI}} = \frac{\text{NGSI}}{\text{NTI}} \times 100\,\% \)   (1)

where NGSI is the number of good segmented images in the database and NTI is the total number of images in the database.
To choose the best segmentation method we evaluated the mean value of PGI (MS) for each segmentation method over all databases by expression (2):

\( {\text{MS}} = \frac{1}{N}\sum\nolimits_{i = 1}^{N} {{\text{PGI}}_{i} } \)   (2)

where N is the number of evaluated databases (here N = 3).
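As a sketch, expressions (1) and (2) amount to the following (illustrative Python, not the authors' C implementation; the figures in the usage note are only examples):

```python
# Sketch of expressions (1) and (2): per-database PGI and the mean score MS
# used to rank the segmentation methods.

def pgi(n_good, n_total):
    """Percentage of good segmented images in a database, expression (1)."""
    return 100.0 * n_good / n_total

def mean_score(pgi_values):
    """Mean PGI over all evaluated databases, expression (2)."""
    return sum(pgi_values) / len(pgi_values)
```

For instance, PGI values of 100 %, 80 % and 86 % over three databases give MS of roughly 88.7 %.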
From Table 2 it can be seen that, taking into account the MS values obtained for each segmentation method, the two best performances were obtained by Viterbi and the Weighted Adaptive Hough transform. These methods obtained stable results on the three evaluated databases.
3.3 Experimental Results in Verification Task
The recognition tests were conducted using the experimental design presented in Fig. 1. All these processes were implemented in the C language.
The matching probes generate two distributions, Inter-Class and Intra-Class (Hamming distances for impostors and clients, respectively), which are useful for comparing the performance of the segmentation algorithms. To evaluate any identity verification system, it is necessary to determine the point at which the FAR (false accept rate) and the FRR (false reject rate) have the same value, called the EER (equal error rate), because it allows the user to determine the appropriate threshold Th for a given application. Table 3 contains the above-mentioned values for the verification scheme (Fig. 1).
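A hedged sketch of how FAR, FRR and the EER operating point can be computed from the two Hamming-distance distributions follows; the threshold scan granularity and all names are assumptions, not the paper's code.

```python
# Sketch: FAR/FRR at a threshold, and the EER found by scanning thresholds.

def far_frr(genuine, impostor, th):
    """FAR: fraction of impostor distances accepted (<= th);
    FRR: fraction of genuine distances rejected (> th)."""
    far = sum(d <= th for d in impostor) / len(impostor)
    frr = sum(d > th for d in genuine) / len(genuine)
    return far, frr

def equal_error_rate(genuine, impostor, steps=1000):
    """Scan thresholds in [0, 1]; return (threshold, EER) at the point
    where the |FAR - FRR| gap is smallest."""
    best = None
    for i in range(steps + 1):
        th = i / steps
        far, frr = far_frr(genuine, impostor, th)
        gap = abs(far - frr)
        if best is None or gap < best[2]:
            best = (th, (far + frr) / 2, gap)
    return best[0], best[1]
```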
3.3.1 ROC Curve Analysis
The receiver operating characteristic (ROC) curve was used to obtain the optimal decision threshold. In a ROC curve the false accept rate is plotted as a function of the false reject rate for different threshold points. High-security systems are primarily interested in not allowing unauthorized users to access restricted places; otherwise, a falsely accepted user would gain access. Therefore, these biometric systems work at low FAR / high FRR values: it is preferable for the system to reject a valid user than to grant access to an unauthorized one. FAR and FRR are the most common indicators of recognition accuracy when the biometric system operates in verification mode (see Fig. 4).
Table 4 contains the results obtained by choosing the optimal decision threshold for discriminating between the Intra-Class and Inter-Class distributions. This behavior was characterized using ROC curves, FAR and GAR (GAR = 1 − FRR) [22].
Under the conditions of CASIA-V3-Interval, the database captured under the most controlled conditions, the best performance is obtained by the WHT segmentation method, with GAR = 92.47 % at FAR = 2.39 % (see Fig. 3A).
Under the conditions of the CASIA-V4-Thousand database, the highest rating was also obtained by the WHT segmentation method, with GAR = 91.6 % at FAR = 4.85 % (see Fig. 3B). For the MBGC database, the CHT segmentation method obtained the best results, with GAR = 95.83 % at FAR = 1.21 % (see Fig. 3C). Overall, these results show that the WHT method behaves stably across the three databases.
The accuracy for an ultra-high-security verification system at FAR = 0.01 % was estimated from the ROC curves (false reject rate versus false accept rate). Table 5 reports the GAR for each of the automatic segmentation methods.
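This operating point can be read from the same score distributions by fixing a target FAR and reporting GAR = 1 − FRR; a minimal sketch, assuming a simple threshold scan (function name and granularity are illustrative):

```python
# Sketch (assumed helper, not from the paper): find the best achievable
# GAR = 1 - FRR among thresholds whose FAR stays at or below a target
# (e.g. 0.0001 for the paper's FAR <= 0.01 % operating point).

def gar_at_far(genuine, impostor, target_far, steps=1000):
    best_gar = 0.0
    for i in range(steps + 1):
        th = i / steps
        far = sum(d <= th for d in impostor) / len(impostor)
        if far <= target_far:
            frr = sum(d > th for d in genuine) / len(genuine)
            best_gar = max(best_gar, 1.0 - frr)
    return best_gar
```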
Under the conditions of the CASIA-V3-Interval database, the four evaluated segmentation algorithms have very similar performance, with GAR = 87.9–93.2 %, the best performance being shown by the Viterbi method at FAR ≤ 0.01 %. This shows that when image capture conditions are controlled, segmentation methods generally perform well.
Under the conditions of the CASIA-V4-Thousand database, the highest rating was also obtained by the Viterbi segmentation method, with GAR = 90.2 % at FAR ≤ 0.01 %, although in general the three algorithms based on the Hough transform obtained stable results.
For the MBGC-V2 database, the CHT and WHT segmentation methods obtained the best results, with GAR = 95.6 % and 95.5 % respectively at FAR ≤ 0.01 %; the Viterbi algorithm also obtained good results, with GAR = 95.1 % at FAR ≤ 0.01 %.
Overall, the results demonstrate that the Viterbi method behaves stably across the three databases, while the other three methods did not achieve the best results at this operating point. This also corresponds with the segmentation evaluation: although the Viterbi method does not obtain the best MS value, it maintains constant stability across the three evaluated databases, with PGI values of 100, 80 and 86 %.
4 Conclusions
In this paper we have presented a comparative study of four segmentation algorithms within a high-reliability iris verification system. These segmentation approaches were implemented, evaluated and compared based on their accuracy using three unconstrained databases, one of which is a video iris database. The ability of the system to work with non-ideal iris images is of significant importance because this is a common realistic scenario.
The first test results show that, based on Table 2, the best segmentation method was WHT. On the other hand, based on Table 4, at the optimal threshold the average GAR is 93.23 % for WHT, 91.34 % for CHT, 91.59 % for Viterbi and 88.30 % for MHT. This shows that the WHT method presents the best performance under normal operation of the system. However, if we raise the robustness of the system by working at low FAR / high FRR values (FAR ≤ 0.01 %), then, based on Table 5, the average GAR is 92.83 % for Viterbi, 91.43 % for CHT, 90.90 % for WHT and 85.50 % for MHT. These differences are likely due to the accuracy of the segmentation methods.
It was shown that the Viterbi segmentation method has the most stable performance when segmenting images taken under different conditions, since the features extracted from images segmented by it contain more information than those extracted from images segmented by the other methods. It can therefore be used in a real iris recognition system for ultra-high security. Combining this method with a set of preprocessing techniques to improve image quality could produce a significant increase in recognition rates.
We believe that this problem can also be addressed using a combination of clustering algorithms. In particular, an algorithm of this kind may be able to cope with the potentially high dimensionality of the image and exploit the spatial relationships between pixels to obtain better results.
The computational cost is another line we will continue investigating, since fusing segmentations requires running at least two segmentation methods, which increases the computation time depending on the nature of the methods used simultaneously.
References
Cao, Y., Wang, Z., Lv, Y.: Genetic algorithm based parameter identification of defocused image. In: ICCCSIT 2008, International Conference on Computer Science and Information Technology, pp. 439–442, September 2008
Colores, J.M., García-Vázquez, M., Ramírez-Acosta, A., Pérez-Meana, H.: Iris image evaluation for non-cooperative biometric iris recognition system. In: Batyrshin, I., Sidorov, G. (eds.) MICAI 2011, Part II. LNCS, vol. 7095, pp. 499–509. Springer, Heidelberg (2011)
Daugman, J.: The importance of being random: statistical principles of iris recognition. Pattern Recogn. 36, 279–291 (2003)
Phillips, P., Scruggs, W., Toole, A.: FRVT 2006 and ICE 2006 large-scale results, Technical report, National Institute of Standards and Technology, NISTIR 7408 (2007)
Proenca, H., Alexandre, L.: The NICE.I: noisy iris challenge evaluation. In: Proceedings of the IEEE First International Conference on Biometrics: Theory, Applications and Systems, vol. 1, pp. 1–4 (2007)
Newton, E.M., Phillips, P.J.: Meta-analysis of third-party evaluations of iris recognition. IEEE Trans. Syst. Man Cybern. 39(1), 4–11 (2009)
Kalka, N.D., Zuo, J., Schmid, N.A., Cukic, B.: Image quality assessment for iris biometric. In: SPIE 6202: Biometric Technology for Human Identification III, vol. 6202, pp. D1–D11 (2006)
Chen, Y., Dass, S.C., Jain, A.K.: Localized iris image quality using 2-d wavelets. In: Zhang, D., Jain, A.K. (eds.) ICB 2005. LNCS, vol. 3832, pp. 373–381. Springer, Heidelberg (2005)
Belcher, C., Du, Y.: A selective feature information approach for iris image quality measure. IEEE Trans. Inf. Forensics Secur. 3(3), 572–577 (2008)
Sanchez-Gonzalez, Y., Chacon-Cabrera, Y., Garea-Llano, E.: A comparison of fused segmentation algorithms for iris verification. In: Bayro-Corrochano, E., Hancock, E. (eds.) CIARP 2014. LNCS, vol. 8827, pp. 112–119. Springer, Heidelberg (2014)
Sutra, G., Garcia-Salicetti, S., Dorizzi, B.: The viterbi algorithm at different resolutions for enhanced iris segmentation. In: 2012 5th IAPR International Conference on Biometrics (ICB), pp. 310–316. IEEE (2012)
Masek, L.: Recognition of human iris patterns for biometric identification. Technical report (2003)
Uhl, A., Wild, P.: Weighted adaptive hough and ellipsopolar transforms for realtime iris segmentation. In: 2012 5th IAPR International Conference on Biometrics (ICB), pp. 283–290. IEEE (2012)
Wildes, R.P., Asmuth, J.C., Green, G.L., Hsu, S.C., Kolczynski, R.J., Matey, J.R., McBride, S.E.: A system for automated iris recognition. In: Proceedings of the Second IEEE Workshop on Applications of Computer Vision, pp. 121–128. IEEE (1994)
Canny, J.F.: Finding edges and lines in images. M.S. thesis, Massachusetts Institute of Technology (1983)
Hough, P.V.C.: Method and means for recognizing complex patterns. U.S. Patent 3 069 654 (1962)
Colores-Vargas, J.M., García-Vázquez, M., Ramírez-Acosta, A., Pérez-Meana, H., Nakano-Miyatake, M.: Video images fusion to improve iris recognition accuracy in unconstrained environments. In: Carrasco-Ochoa, J.A., Martínez-Trinidad, J.F., Rodríguez, J.S., di Baja, G.S. (eds.) MCPR 2012. LNCS, vol. 7914, pp. 114–125. Springer, Heidelberg (2013)
CASIA-V3-Interval. The Center of Biometrics and Security Research, CASIA Iris Image Database. http://biometrics.idealtest.org/
CASIA-V4-Thousands. The Center of Biometrics and Security Research, CASIA Iris Image Database. http://biometrics.idealtest.org/
Multiple Biometric Grand Challenge. http://face.nist.gov/mbgc/
Bowyer, K.W., Hollingsworth, K., Flynn, P.J.: Image understanding for iris biometrics: a survey. Comput. Vis. Image Underst. 110(2), 281–307 (2008)
Zweig, M., Campbell, G.: Receiver-operating characteristic ROC plots: a fundamental evaluation tool in clinical medicine. Clin. Chem. 39, 561–577 (1993)
Acknowledgment
This research was supported by SIP2015 project grant from Instituto Politécnico Nacional from México and Iris Project grant from Advanced Technologies Application Center from Cuba.
© 2015 Springer International Publishing Switzerland
García-Vázquez, M.S., Garea-Llano, E., Colores-Vargas, J.M., Zamudio-Fuentes, L.M., Ramírez-Acosta, A.A. (2015). A Comparative Study of Robust Segmentation Algorithms for Iris Verification System of High Reliability. In: Carrasco-Ochoa, J., Martínez-Trinidad, J., Sossa-Azuela, J., Olvera López, J., Famili, F. (eds) Pattern Recognition. MCPR 2015. Lecture Notes in Computer Science(), vol 9116. Springer, Cham. https://doi.org/10.1007/978-3-319-19264-2_16
DOI: https://doi.org/10.1007/978-3-319-19264-2_16
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-19263-5
Online ISBN: 978-3-319-19264-2