
Minutiae extraction for fingerprint recognition

2008 5th International Multi-Conference on Systems, Signals and Devices (SSD 2008), August 2008. DOI: 10.1109/SSD.2008.4632892

Yusra Al-Najjar, Alaa Sheta
Information Technology Department, Al-Balqa Applied University, Salt, Jordan
usra7@yahoo.com, asheta2@yahoo.com

ABSTRACT

Automatic Personal Identification (API) is a challenge for numerous everyday applications such as passports, cellular telephones, automatic teller machines, and driver licenses. It is important to achieve a high degree of confidence when handling such applications, and biometrics is increasingly adopted for this purpose. In the past years, the development of fingerprint identification systems has received a great deal of attention. The goal of this paper is to present a complete identification process for fingerprint recognition through the extraction of matching minutiae. The performance of the proposed system is tested on a database of fingerprints from different people, and experimental results are presented.

Index Terms— Fingerprint, minutiae extraction, termination, bifurcation.

1. INTRODUCTION

Fingerprint technology is the most widely used form of biometric technology. Traditional knowledge-based (password or Personal Identification Number (PIN)) and token-based (driver license and ID card) identification methods are prone to fraud, because a PIN may be forgotten or guessed by others and a token may be lost or stolen [1]. Therefore biometrics, which refers to identifying an individual based on physiological or behavioral characteristics, is more reliable. For decades, fingerprints have been used for biometric recognition because of their high immutability and individuality [2]. Immutability refers to the persistence of the fingerprint over time, whereas individuality is related to the uniqueness of ridge details across individuals. The probability that two fingerprints are alike is 1 in 1.9 × 10^15 [3]. These features make fingerprints extremely effective where a high degree of security is required.

Minutiae detection can be categorized into two types: global and local. A global representation gives an overall characteristic of the finger, where a single representation is valid for the entire fingerprint, whereas a local representation consists of segments derived from regions of the fingerprint. Typically, global representations are used to classify fingerprints into categories such as right loop, left loop, and arch. The global classification schema of fingerprints is provided in [4] and shown in Figure 1. Major local representations of fingerprints are based on finger ridges.

Figure 1. Global classification schema of fingerprints [4]
Fingerprints possess many features called local features. Minutiae are the minute details of the fingerprint [5]. The two most commonly used minutiae are ridge endings (a point where a ridge ends abruptly) and ridge bifurcations (a point where a ridge splits into two ridges) [6]. The global features are used to classify fingerprints into six major classes, whereas the minutiae details are used for fingerprint-based person identification. In this paper, we adopt the fingerprint enhancement method proposed in [7]; for feature extraction, we employ the technique proposed in [8].

The paper is organized as follows. Section 2 describes the proposed methodology for enhancing the fingerprint image and for extracting the fingerprint characteristics. Section 3 presents the experimental results that justify the method used.

2. PROPOSED METHODOLOGY

The proposed methodology is based on collecting a database of fingerprints from different persons, scanned with a fingerprint reader (Microsoft Fingerprint Reader). The acquired images are of size 390 × 355 pixels at a device resolution of 512 dpi. Figure 2 shows a block diagram of the approach adopted in this study. The proposed methodology consists of five stages: 1) image acquisition, 2) image enhancement, 3) image binarization, 4) image thinning, and 5) image feature extraction.

Figure 2. Fingerprint recognition system

2.1. Image Acquisition

The images can be obtained in two different ways:

• Live-scan print: the image is obtained by scanning the fingertip using a flatbed scanner or any other scanner.
• Offline print: the traditional method, in which the fingerprint impression is taken in ink on a card or paper and later fed into the system database [9].

A number of sensors can be used to obtain fingerprint images:

• Optical sensors use a small camera to take an image of a fingerprint pressed against a clear screen. Since FTIR (Frustrated Total Internal Reflection) devices sense a three-dimensional surface, it is difficult to fool them with a photograph or image of a fingerprint.
• Thermal sensors measure the temperature difference between the ridges and valleys of the fingerprint surface.
• Capacitive sensors measure the pressure difference between ridges and valleys, which causes a capacitance difference on the sensor surface [10].

2.2. Image Enhancement

Image enhancement is a critical step in automatic fingerprint matching. The objective of this stage is to facilitate the extraction of minutiae from the input fingerprint images, since minutiae extraction relies strongly on the quality of the input images. Enhancement is a pre-processing step consisting of the stages listed below.

2.2.1. Segmentation and Normalization

Segmentation is the process of separating the foreground regions of the image from the background regions. First, the image is divided into blocks and the grey-scale variance is calculated for each block. If the variance is less than a global threshold, the block is assigned to the background; otherwise, it is assigned to the foreground. The grey-level variance for a block of size W × W is defined as

V(k) = \frac{1}{W^2} \sum_{i=0}^{W-1} \sum_{j=0}^{W-1} \left( I(i,j) - M(k) \right)^2    (1)

where V(k) is the variance of block k, I(i,j) is the grey level at pixel (i,j), and M(k) is the mean grey level of block k.

Normalization offsets and rescales the image values; in this work the image is normalized to have zero mean and unit standard deviation:

N(i,j) = \begin{cases} M_0 + \sqrt{\dfrac{V_0 \, (I(i,j) - M)^2}{V}}, & \text{if } I(i,j) > M, \\ M_0 - \sqrt{\dfrac{V_0 \, (I(i,j) - M)^2}{V}}, & \text{otherwise,} \end{cases}    (2)

where M and V are the estimated mean and variance of I(i,j), and M_0 and V_0 are the desired mean and variance [7].
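As a concrete illustration of Equations (1) and (2), the following minimal Python sketch computes the block-variance foreground mask and normalizes the image. It assumes NumPy and a grey-scale image held in a 2-D array; the function name and the values of the block size W, the variance threshold, and the target mean/variance M0 and V0 are illustrative, not those used in the paper.

```python
import numpy as np

def segment_and_normalize(img, W=16, var_threshold=100.0, M0=0.0, V0=1.0):
    """Block-variance segmentation (Eq. 1) and normalization (Eq. 2).

    img: 2-D grey-scale array. W, var_threshold, M0 and V0 are illustrative."""
    img = img.astype(np.float64)
    h, w = img.shape
    mask = np.zeros_like(img, dtype=bool)

    # Eq. (1): per-block variance decides foreground vs. background.
    for i in range(0, h, W):
        for j in range(0, w, W):
            block = img[i:i + W, j:j + W]
            if block.var() >= var_threshold:
                mask[i:i + W, j:j + W] = True   # foreground block

    # Eq. (2): shift and scale so the image has mean M0 and variance V0.
    M, V = img.mean(), img.var()
    dev = np.sqrt(V0 * (img - M) ** 2 / V)
    normalized = np.where(img > M, M0 + dev, M0 - dev)
    return normalized, mask
```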
2.2.2. Orientation Estimation

This stage estimates the local ridge orientation of the normalized fingerprint image. It requires three parameters as input: 1) the sigma of the derivative of Gaussian used to compute the image gradients, 2) the block sigma, i.e. the sigma of the Gaussian weighting used to sum the gradient moments, and 3) the orientation-smoothing sigma, i.e. the sigma of the Gaussian used to smooth the final orientation vector field [1].

2.2.3. Frequency

Frequency is an important parameter used in the construction of the filter. The frequency image represents the local ridge frequency of the fingerprint. The image is divided into blocks of size W × W, and the grey-level values of all pixels inside each block are projected along a direction orthogonal to the local ridge orientation. The ridge spacing S(i,j) is computed from the median number of pixels between consecutive peaks of this projection [7], and the frequency F(i,j) for a block centred at pixel (i,j) is defined as

F(i,j) = \frac{1}{S(i,j)}    (3)

2.2.4. Filtering

This process enhances the fingerprint image using oriented filters; it requires the normalized image, the orientation image, and the frequency image. Enhancement is performed by convolving the image with a filter tuned to the local ridge orientation and frequency: the convolution at pixel (i,j) uses the corresponding orientation value O(i,j) and ridge frequency value F(i,j) [7]. The application of the filter G to obtain the enhanced image E is given by

E(i,j) = \sum_{u=-w_x/2}^{w_x/2} \sum_{v=-w_y/2}^{w_y/2} G(u, v; O(i,j), F(i,j)) \, N(i-u, j-v)    (4)

where O is the orientation image, F is the frequency image, N is the normalized image, and w_x and w_y are the width and height of the filter mask [11, 1]. Figure 3 displays the proposed enhancement process.

Figure 3. Image enhancement stages
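The sketch below illustrates the oriented filtering of Equation (4) under simplifying assumptions: the orientation and frequency are treated as constant within each block, and a standard even-symmetric Gabor kernel stands in for the filter G, in the spirit of [7, 11]. The kernel parameters, block size, and the orientation convention (ridge direction versus its normal) are assumptions that would have to match the orientation-estimation step; SciPy is used for the convolution, and the full-image convolution is recomputed per block for clarity rather than speed.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, freq, sigma_x=4.0, sigma_y=4.0, size=11):
    """Even-symmetric Gabor kernel G(u, v; O, F) tuned to orientation theta
    (radians) and ridge frequency freq (cycles/pixel). Parameters illustrative."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    # Rotate coordinates; the cosine wave varies along the rotated x axis.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-0.5 * (xr ** 2 / sigma_x ** 2 + yr ** 2 / sigma_y ** 2))
    return envelope * np.cos(2.0 * np.pi * freq * xr)

def enhance(norm_img, orient_img, freq_img, block=16):
    """Block-wise version of Eq. (4): convolve with a kernel tuned to the
    local orientation O(i, j) and frequency F(i, j) of each block."""
    out = np.zeros_like(norm_img, dtype=np.float64)
    h, w = norm_img.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            theta = orient_img[i:i + block, j:j + block].mean()
            freq = freq_img[i:i + block, j:j + block].mean()
            kernel = gabor_kernel(theta, freq)
            # Unoptimized: convolves the whole image, then keeps one block.
            out[i:i + block, j:j + block] = \
                convolve(norm_img, kernel)[i:i + block, j:j + block]
    return out
```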
2.3. Binarization

Binarization is the process by which a grey-scale image is converted into two levels, black and white (0 and 1). Binarization is applied to the ridge/valley structure of the filtered image with a threshold of 0 [7]: pixels below the threshold are turned black and pixels above it are turned white. After this operation, the ridges of the fingerprint are highlighted in black while the valleys are white.

2.4. Thinning (Skeletonization)

Skeletonization, or thinning, is applied to the binarized image obtained in the previous step; ridge patterns are thinned until they are represented by one-pixel-wide lines. Fingerprint thinning is usually implemented via morphological operations such as erosion and dilation, which reduce the width of the ridges to a single pixel while preserving the extent and connectivity of the original shape [12]. A good fingerprint thinning algorithm should preserve the topology of the image and keep the original connectivity. Ridge thinning also eliminates the redundant pixels of the ridges [13].

The erosion of A by the structuring element B is defined as

A \ominus B = \{ z \mid (B)_z \cap A^c = \emptyset \}    (5)

Erosion may corrupt some features, so we also use dilation, defined as

A \oplus B = \{ z \mid (\hat{B})_z \cap A \neq \emptyset \}    (6)

Applying dilation before erosion closes small gaps before the image is thinned. The morphological closing of an image A by a structuring element B, denoted A • B, is simply dilation of A by B followed by erosion of the result by B:

A • B = (A \oplus B) \ominus B    (7)

2.5. Feature Extraction

Extraction of appropriate features is one of the most important tasks for a recognition system [14]. Feature extraction is performed by applying a 3 × 3 filter over the thinned image according to the following rules (a sketch implementing these rules, together with the false-minutiae removal described next, is given after the lists below):

• if the central pixel is 1 and the sum of the pixels inside the block is 2, the central pixel is a termination;
• if the central pixel is 1 and the sum is 4, the central pixel is a bifurcation;
• if the central pixel is 1 and the sum is neither 2 nor 4, the central pixel is an ordinary ridge pixel [15].

In [16], a two-stage procedure was presented for extracting minutiae:

• Extracting minutiae: in this stage we find all the terminations and bifurcations in the fingerprint.
• Removing false minutiae: this stage excludes many spurious minutiae (terminations and bifurcations) according to the following rules, where D is a specified distance threshold:
  – if the distance between a termination and a bifurcation is smaller than D, the minutia is removed;
  – if the distance between two bifurcations is smaller than D, the minutia is removed;
  – if the distance between two terminations is smaller than D, the minutia is removed.
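The following minimal sketch applies the 3 × 3 sum rules to a one-pixel-wide binary skeleton (ridge pixels equal to 1) and then discards minutiae that lie closer together than the distance threshold D, in the spirit of the two-stage procedure above. The threshold value is illustrative, and both members of a too-close pair are discarded here, which is one possible reading of the removal rules.

```python
import numpy as np

def extract_minutiae(skel, D=10):
    """Detect terminations and bifurcations on a binary skeleton using the
    3x3 sum rules, then drop minutiae closer than the illustrative threshold D."""
    h, w = skel.shape
    terminations, bifurcations = [], []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if skel[i, j] != 1:
                continue
            s = skel[i - 1:i + 2, j - 1:j + 2].sum()  # 3x3 sum, centre included
            if s == 2:
                terminations.append((i, j))
            elif s == 4:
                bifurcations.append((i, j))

    # False-minutiae removal: discard any pair of minutiae closer than D.
    points = [(p, 'termination') for p in terminations] + \
             [(p, 'bifurcation') for p in bifurcations]
    keep = [True] * len(points)
    for a in range(len(points)):
        for b in range(a + 1, len(points)):
            (ya, xa), _ = points[a]
            (yb, xb), _ = points[b]
            if np.hypot(ya - yb, xa - xb) < D:
                keep[a] = keep[b] = False
    return [pt for pt, ok in zip(points, keep) if ok]
```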
2.5.1. Region of Interest (ROI)

The ROI is the region of the image in which we are interested. To determine this region, we take the thinned image and apply morphological operations such as closing, filling, area opening, and erosion to it. After specifying the ROI, minutiae lying outside it are suppressed.

2.5.2. Minutiae Orientation

Once the true minutiae have been determined, the orientation of both terminations and bifurcations has to be found [17].

• Termination orientation: to find the orientation of a termination, we use a table of 5 × 5 blocks for different values of the angle θ and analyse the positions of the pixels connected to the centre within a 5 × 5 boundary. Keeping only the edge pixels, we take the first non-zero pixel and compare its location with the table to obtain the corresponding angle of the termination (see the sketch below).
• Bifurcation orientation: each bifurcation has three bounding non-zero pixels, so the same process as for a termination is applied three times instead of once [18]. A 5 × 5 boundary is used because a 3 × 3 boundary does not provide enough information, while a 7 × 7 boundary may provide too much.
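A minimal sketch of the termination case is given below. It assumes a binary skeleton and a minutia located at least two pixels from the image border; instead of the precomputed 5 × 5 angle table described above, it derives the angle of the first non-zero boundary pixel directly with atan2, which is an assumption about how that table would be built. For a bifurcation, the same computation would simply be repeated for each of its three boundary pixels.

```python
import numpy as np

def termination_angle(skel, i, j):
    """Estimate the orientation of a termination at (i, j) from its 5x5
    neighbourhood: keep only the boundary ring of the block and convert the
    first non-zero boundary pixel into an angle (stand-in for the 5x5 table)."""
    block = skel[i - 2:i + 3, j - 2:j + 3].copy()
    block[1:4, 1:4] = 0                       # keep only the 5x5 boundary ring
    ys, xs = np.nonzero(block)
    if len(ys) == 0:
        return None                           # isolated point: no orientation
    dy, dx = ys[0] - 2, xs[0] - 2             # offset of first ring pixel from centre
    return float(np.arctan2(-dy, dx))         # image rows grow downwards, hence -dy
```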
2.6. Exporting Minutiae

In this stage the extracted data are exported to a file. The file contains the terminations as given in Table 1 and the bifurcations as given in Table 2. The exported data are the x, y location of each minutia and its orientation θ.

Table 1. Terminations

  X     Y     θ
  82    23    1.57
  160   27   -0.52
  89    43    3.14
  62    65   -2.62
  231   87    2.36
  79    88    0.00
  73    92   -2.62
  91    138  -1.57
  137   163   2.09
  227   173  -1.57
  162   174   2.09
  205   174  -1.57
  105   185  -0.79
  243   189  -1.57
        199   2.36
             -1.57

Table 2. Bifurcations

  X     Y     θ1     θ2     θ3
  67    46   -2.62  -1.05  -0.52
  153   121   2.62   1.57   0.79
  185   134   3.14  -1.57  -1.05
  146   162   3.14   1.57   0.79
  189   172  -2.62  -1.57  -1.05
  127   181   2.09   2.09   0.52
  254   204  -2.36  -1.57  -0.79
  119   217   3.14  -1.57   0.00
  267   223  -2.62   1.05  -0.79
  114   229  -2.62   1.57  -0.52
  135   250   2.36  -2.36   0.52
  211   251   2.36  -2.09   0.52

3. EXPERIMENTAL RESULTS

Figure 4 shows the stages of the enhancement process. Figure 4-a displays the original image, the binary image is shown in Figure 4-b, and Figure 4-c displays the thinned image. Figure 4-d illustrates the region of interest overlaid on the thinned image, with the extracted minutiae plotted on it. Figure 5 displays the thinned image with the true extracted minutiae plotted on it.

Figure 4. Stages of processing the fingerprint image: a) original image, b) binarized image, c) thinned image, d) region of interest

Figure 5. Extracted minutiae

4. CONCLUSION

In this paper we presented a complete fingerprint recognition methodology for extracting matching minutiae. The proposed methodology consists of five stages: image acquisition, image enhancement, image binarization, image thinning, and image feature extraction. The performance of the proposed system was tested on a database of fingerprints from different people, and the experimental results are promising.

5. REFERENCES

[1] S. Prabhakar, Fingerprint Classification and Matching Using a Filterbank, Ph.D. thesis, Michigan State University, 2001.
[2] C. Wu, Advanced Feature Extraction Algorithms for Automatic Fingerprint Recognition Systems, Ph.D. thesis, State University of New York at Buffalo, 2007.
[3] V. Govindaraju, Z. Shi, and J. Schneider, "Feature extraction using a chain-coded contour representation," in International Conference on Audio- and Video-Based Biometric Person Authentication, 2003.
[4] A. Jain, L. Hong, S. Pankanti, and R. Bolle, "An identity authentication system using fingerprints," http://www.research.ibm.com/ecvg/pubs/sharatproc.pdf, 2007.
[5] C. Sharma, "DSP implementation of a fingerprint-based biometric system," Tech. Rep., University of Auckland, 2005.
[6] F. A. Afsar, M. Arif, and M. Hussain, "Fingerprint identification and verification system using minutiae matching," in National Conference on Emerging Technologies, 2004.
[7] R. Thai, "Fingerprint image enhancement and minutiae extraction," Tech. Rep., University of Western Australia, 2003.
[8] N. Ratha, S. Chen, and A. Jain, "Adaptive flow orientation based feature extraction in fingerprint images," Pattern Recognition, vol. 28, no. 11, pp. 1657–1672, 1995.
[9] T. Sarkodie-Gyan, "Fingerprint recognition using fuzzy inferencing techniques," Tech. Rep., University of Texas at El Paso, 2006.
[10] D. Roberge, C. Soutar, and B. V. K. Vijaya Kumar, "High-speed fingerprint verification using an optical correlator," in Optical Pattern Recognition IX, pp. 123–133, 1998.
[11] L. Hong, Y. Wan, and A. K. Jain, "Fingerprint image enhancement: Algorithm and performance evaluation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 8, pp. 777–789, 1998.
[12] D. Maio, D. Maltoni, A. K. Jain, and S. Prabhakar, Handbook of Fingerprint Recognition, Springer-Verlag, 2003.
[13] X. Luo, J. Tian, and Y. Wu, "A minutia matching algorithm in fingerprint verification," in International Conference on Pattern Recognition, 2000, pp. 833–836.
[14] M. Tico and P. Kuosmanen, "A topographic method for fingerprint segmentation," in International Conference on Image Processing, 1999, pp. 36–40.
[15] S. Prabhakar, A. Jain, and S. Pankanti, "Learning fingerprint minutiae location and type," in 15th International Conference on Pattern Recognition (ICPR), Barcelona, September 3-8, 2000.
[16] T.-Y. Jea, Minutiae-Based Partial Fingerprint Recognition, Ph.D. thesis, State University of New York, 2005.
[17] N. K. Ratha, S. Chen, and A. Jain, "Adaptive flow orientation-based feature extraction in fingerprint images," Pattern Recognition, vol. 28, no. 11, pp. 1657–1672, 1995.
[18] A. Jain, Y. Chen, and M. Demirkus, "A fingerprint recognition algorithm combining phase-based image matching and feature-based matching," in International Conference on Biometrics (ICB), 2005, pp. 316–325.