
Open access peer-reviewed chapter

Biometric Systems and Their Applications

Written By

Souhail Guennouni, Anass Mansouri and Ali Ahaitouf

Submitted: 19 October 2018 Reviewed: 30 January 2019 Published: 01 March 2019

DOI: 10.5772/intechopen.84845

From the Edited Volume

Visual Impairment and Blindness - What We Know and What We Have to Know

Edited by Giuseppe Lo Giudice and Angel Catalá


Abstract

Security concerns are growing in many sectors, along with the computing techniques deployed to address them: access control to computers, e-commerce, banking, etc. Traditionally, there are two ways of identifying an individual. The first is knowledge-based: it relies on something the individual knows, such as the PIN code used to activate a mobile phone. The second is based on the possession of a token, which may be an identity document, a key, a badge, etc. These two methods can be combined for increased security, as in bank cards. Each, however, has its weaknesses: a password can be forgotten or guessed by a third party, and a badge (or ID or key) can be lost or stolen. Biometric features offer an alternative to these two identification modes. Their advantage is that they are universal, measurable, unique, and permanent. The interest of biometric applications can be summed up in two goals: to make everyday life easier and to prevent fraud.

Keywords

  • biometry
  • object detection
  • recognition
  • security

1. Introduction

The increasing performance of computers over the last decade has stimulated the development of general-purpose computer vision algorithms. Object recognition is one of the major problems of computer vision and receives special attention, driven by the desire to build artificially intelligent systems. The first step toward any kind of intelligence is perception, followed by reasoning and action.

Human perception relies heavily on vision. Since artificially intelligent systems are largely inspired by human perception and reasoning, visual perception is an important source of information for many potential systems.

Recently, there has been rising interest in eye-tracking technology, mainly driven by the industrial growth of domains such as augmented reality, smart cars, and web application testing, for which solid eye-tracking technology is essential. Eye movement recognition, combined with other biometrics such as sound recognition, can enable smooth interaction with virtual environments.

A good example of a smart system is the autonomous car. It perceives the surrounding world and road signs while adapting its behavior to changing situations. Such a car contains many different sensors that help it perceive the necessary information, and visual perception of the surrounding world is among the most important: it can be used to recognize pedestrians, cars, animals, or even unspecified objects on the road that could pose a threat to human life.

Improving object recognition algorithms will benefit not only artificially intelligent systems but many other useful applications. One example is the tourism industry, where augmented reality applications (Figure 1) are becoming increasingly popular, especially since the widespread adoption of smartphones. Video surveillance is another natural application of object detection algorithms, given the need for quick and timely analysis of the video scenes captured by cameras.

Figure 1.

Augmented reality.

Indeed, scene comprehension includes many separable tasks ranging from object recognition to the categorization of scenes and events. Object detection is a complex discipline that can be divided into three main directions (a small sketch of their typical outputs follows the list):

  • Image classification: The search for images in the majority of search engines is a typical case of image classification algorithms.

  • Object detection: Locating the requested object within the image is the desired output in many of the systems mentioned.

  • Segmentation of objects: Which pixels belong to which object? It is more precise than object detection, since the object is delineated at the pixel level rather than by a bounding box.
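
To make the distinction concrete, the short sketch below contrasts the typical outputs of the three tasks. The data structures are purely illustrative assumptions and are not tied to any particular library.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

# Image classification: one label (or a ranked list of labels) for the whole image.
@dataclass
class ClassificationResult:
    label: str            # e.g. "chair"
    score: float          # confidence in [0, 1]

# Object detection: a label plus a bounding box for each detected instance.
@dataclass
class DetectionResult:
    label: str
    score: float
    box: tuple            # (x_min, y_min, x_max, y_max) in pixel coordinates

# Segmentation: a per-pixel assignment of class (or instance) identifiers.
@dataclass
class SegmentationResult:
    mask: np.ndarray      # H x W array, mask[y, x] = class id of that pixel

# A single image may yield one classification, several detections,
# and exactly one segmentation mask covering every pixel.
results: List[DetectionResult] = [
    DetectionResult(label="chair", score=0.91, box=(34, 50, 180, 220)),
    DetectionResult(label="person", score=0.87, box=(200, 10, 320, 240)),
]
```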


2. Issues and challenges

Object detection is a difficult task mainly because of the many ways an object's appearance can change. The design of any candidate method must take the following difficulties into account:

  • Intra-class variations: An object category can exhibit a large number of variations (Figure 2 illustrates different types of chairs). This is a problem for methods based on specific features that do not cover all possible variations of the object.

  • Luminance conditions: Variations in luminance conditions change the appearance of the object, mainly in color and reflection.

  • Point of view: Most objects in images are three-dimensional, while images are only two-dimensional, so we see only a particular view of a given object. The same object may look different from other points of view, and not all of its features are visible from a single viewpoint.

  • Scale: The size of the object may differ, and the detector should be able to find the object at any size or scale (see the pyramid sketch after Figure 2).

  • Location: Detection is much easier if you know where the desired object is. The situation differs greatly between having no prior knowledge of the object's location and knowing, for example, that the image contains a single object located at its center.

  • Orientation: Humans do not have obvious problems recognizing the same object with a different orientation, but many algorithms do. Invariance to this possibility is often crucial.

  • Occlusion and truncation: Occlusion of one object by another often causes difficulty; sometimes even humans cannot see enough features of the object to recognize it correctly. Truncation is the same problem arising when parts of the object fall outside the picture.

  • Background clutter: In an ideal image, the background is almost nonexistent and the image contains only one object, such as a single chair. This situation is not typical in the real world: when you photograph an object, there are almost always many other objects in the background in which the recognition algorithm is not interested, and the scene is often so complex that it is difficult to pick out the desired object(s) among the others.

  • Out of context: Context is often used to increase the likelihood of categories that commonly co-occur; for example, cars and roads are often associated. However, we cannot rely too much on context, because it can sometimes be misleading.

  • Multiple instances: The image often contains several objects of the same category. Some algorithms can identify regions of different categories in the image, but they cannot identify individual instances of the same object category.

  • Pose: One of the biggest challenges is the invariant detection of the pose. Many objects change their appearance by changing their shape. For example, it is desirable to detect a person in any posture of their body [1].

  • Instance-level recognition vs. object class recognition: It is necessary to appreciate the difference between these problems; clearly, different methods are needed to detect a human face in general and to recognize the specific person behind that face.

Figure 2.

Image search results.
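
Many detectors deal with the scale problem listed above by scanning a fixed-size model over an image pyramid. The sketch below builds such a pyramid with simple 2x2 average pooling; the function names and the pooling choice are illustrative assumptions rather than a prescribed method.

```python
import numpy as np

def downsample_2x(image: np.ndarray) -> np.ndarray:
    """Halve the resolution of a grayscale image by averaging 2x2 blocks."""
    h, w = image.shape
    h, w = h - h % 2, w - w % 2          # crop to even dimensions
    blocks = image[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

def image_pyramid(image: np.ndarray, min_size: int = 32):
    """Yield successively halved versions of the image until it gets too small."""
    level = image.astype(float)
    while min(level.shape) >= min_size:
        yield level
        level = downsample_2x(level)

# A fixed-size detector applied to every level effectively searches all scales:
# an object too large for the detector at full resolution becomes detectable
# once the image has been reduced enough.
image = np.random.rand(256, 320)          # stand-in for a real grayscale image
for i, level in enumerate(image_pyramid(image)):
    print(f"level {i}: {level.shape}")
```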

After analyzing the potential problems associated with recognition tasks, it seems natural to imitate the human visual perception system. The first moment of human comprehension of an image is a very general pass that picks out the basic categories (buildings, people, cars, etc.). After getting the big picture, attention focuses on the things of interest. While focusing, humans observe objects of interest to take in more details and recognize more features; a feature is a general term for a particular part of the object that describes its appearance. Humans have special predispositions for certain objects (e.g., faces) and situations (mainly involving danger and movement) to which they are more sensitive. A typical situation is seeing someone far away and recognizing only that it is a person; as you get closer and focus on this person, you pick up more and more details that allow you to decide whether it is someone you know and to recall their name. Humans can perform instance-level recognition, as in this example, but they first distinguish the object category to narrow the subsequent search.


3. Biometric systems overview

3.1 History of biometrics

Biometrics has been a concern for centuries, and several techniques have been used to prove one's identity reliably. Since prehistory, humans have known that fingerprints are unique, so a fingerprint signature was enough to prove an individual's identity. Indeed, two centuries before Christ, the Emperor Ts-In-She authenticated certain seals with a fingerprint.

At the end of the nineteenth century, in France, Alphonse Bertillon took the first steps toward scientific policing. He proposed the first biometric method that can be described as a scientific approach: bertillonage allowed the identification of criminals through several physiological measurements.

In the second half of the nineteenth century, fingerprinting was put to practical use by William James Herschel, a British officer in India, who had the idea of having his contractors sign with their fingerprints so they could be traced easily if contracts were not honored. Police departments subsequently began using the fingerprint as a unique and reliable feature for identifying an individual.

Biometrics continues to grow, especially in the field of secure identity documents such as national identity cards, passports, and driving licenses. The technology now runs on new platforms, including microprocessor-based smart cards.

3.2 Biometric market development

The biometric market has developed greatly thanks to the many advancements and innovations the field has seen in recent decades. This growth has been reinforced by the security concerns of several countries, which have driven investment in the area and the widespread use of biometric solutions in many social and legal fields.

As shown by the statistics in Figure 3, between 2007 and 2015 there was a considerable increase in the private sector's market share, owing to the growing need for biometric solutions in that sector, especially among smartphone and camera manufacturers.

Figure 3.

Distribution of the global biometric market.

According to ABI Research [2], the global biometric market will pass the $30 billion mark by 2021, 118% higher than the 2015 market. In this context, consumer electronics, and smartphones in particular, are boosting the biometric sector: two billion embedded fingerprint sensors are expected to be sold in 2021, an average annual increase of 40% over 5 years.

3.3 Biometric systems

A biometric system is a system that recognizes an individual from a certain characteristic using mathematical algorithms and biometric data. Biometric systems can operate in several modes: some require users to be enrolled beforehand, while other identification systems do not require this phase.

  • Enrollment mode is a learning phase that aims to collect biometric information about the people to be identified. Several data acquisition campaigns can be carried out to make the recognition system robust to temporal variations of the data. During this phase, the biometric characteristics of individuals are captured by a biometric sensor, represented in digital form (signatures), and finally stored in the database. The processing related to enrollment has no time constraint, since it is performed off-line.

  • The verification or authentication mode is a “one-to-one” comparison, in which the system validates a person's identity by comparing the captured biometric data with that person's biometric template stored in the system's database. In this mode, the system answers the question “Is this user who he/she claims to be?”; the claimed identity is typically supplied via a personal identification number, a user name, or a smart card.

  • The identification mode is a “one-to-N” comparison, in which the system recognizes an individual by matching him/her against the models in the database; the person may not be enrolled at all. This mode consists in associating an identity with a person (a minimal sketch contrasting the two modes follows this list).
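
The difference between verification (one-to-one) and identification (one-to-N) can be summarized in a few lines. The sketch below assumes a generic similarity function and a small dictionary of enrolled templates; both are placeholders for whatever modality and matcher a concrete system uses.

```python
import numpy as np

def similarity(sample: np.ndarray, template: np.ndarray) -> float:
    """Placeholder matcher: cosine similarity between feature vectors."""
    return float(np.dot(sample, template) /
                 (np.linalg.norm(sample) * np.linalg.norm(template) + 1e-12))

def verify(sample, claimed_id, templates, threshold=0.8):
    """One-to-one comparison: is the user who he/she claims to be?"""
    return similarity(sample, templates[claimed_id]) >= threshold

def identify(sample, templates, threshold=0.8):
    """One-to-N comparison: who (if anyone) in the database is this?"""
    scores = {uid: similarity(sample, tpl) for uid, tpl in templates.items()}
    best_id, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_id if best_score >= threshold else None   # None: not enrolled

# Toy database of enrolled templates (feature vectors), purely illustrative.
templates = {"alice": np.array([1.0, 0.1, 0.0]), "bob": np.array([0.0, 1.0, 0.2])}
probe = np.array([0.9, 0.2, 0.0])
print(verify(probe, "alice", templates))   # True: 1:1 check against Alice's template
print(identify(probe, templates))          # "alice": best match above the threshold
```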

Figure 4 presents the architecture of a biometric system, which consists of the following elements:

  • The capture module that represents the entry point of the biometric system and consists in acquiring the biometric data in order to extract a digital representation. This representation is used later in the following phases.

  • The signal-processing module, which refines the digital representation acquired by the capture module (notably during enrollment) so as to reduce the processing time of the verification and identification phases.

  • The storage module that contains the biometric templates of the system enrollees.

  • The matching module that compares the data extracted by the extraction module with the data of the registered models and determines the degree of similarity between the two biometric data.

  • The decision module that determines whether the similarity score returned by the matching module is sufficient to decide on the identity of an individual.

Figure 4.

Biometric system architecture.
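
Read as a pipeline, the architecture of Figure 4 chains the five modules together. The sketch below wires hypothetical capture, signal-processing, storage, matching, and decision steps into an enrollment flow and a verification flow; every function body is a stand-in assumption, not a reference implementation.

```python
import numpy as np

def capture() -> np.ndarray:
    """Capture module: acquire raw biometric data from a sensor (stubbed here)."""
    return np.random.rand(64, 64)                  # stand-in for a sensor image

def extract_features(raw: np.ndarray) -> np.ndarray:
    """Signal-processing module: reduce the raw sample to a compact template."""
    return raw.mean(axis=0)                        # toy feature vector

class TemplateStore:
    """Storage module: keeps the templates of enrolled users."""
    def __init__(self):
        self._db = {}
    def save(self, user_id: str, template: np.ndarray):
        self._db[user_id] = template
    def get(self, user_id: str) -> np.ndarray:
        return self._db[user_id]

def match(probe: np.ndarray, template: np.ndarray) -> float:
    """Matching module: similarity score between a probe and a stored template."""
    return float(1.0 / (1.0 + np.linalg.norm(probe - template)))

def decide(score: float, threshold: float = 0.5) -> bool:
    """Decision module: accept or reject based on the similarity score."""
    return score >= threshold

# Enrollment (off-line): capture -> signal processing -> storage.
store = TemplateStore()
store.save("alice", extract_features(capture()))

# Verification (on-line): capture -> signal processing -> matching -> decision.
probe = extract_features(capture())
print(decide(match(probe, store.get("alice"))))
```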


4. Performance of biometric systems

4.1 Performance evaluation

To evaluate the accuracy of a biometric system and measure its performance, a large number of recognition attempts are made on the system, and all the similarity scores are saved.

By applying a variable score threshold to the similarity scores, pairs of false rejection rate (FRR) and false acceptance rate (FAR) can be calculated. The false rejection rate, or FRR, measures the likelihood that the biometric system will incorrectly reject an access attempt by an authorized user; it is stated as the ratio of the number of false rejections to the number of identification attempts. The false acceptance rate, or FAR, measures the likelihood that the biometric system will incorrectly accept an access attempt by an unauthorized user; it is stated as the ratio of the number of false acceptances to the number of identification attempts.

The results are presented either as such pairs, i.e., the FRR at a certain FAR level, or as a graph like Figure 5. The rates can be expressed in several ways, for example, as percentages (1%), fractions (1/100), decimals (0.01), or powers of ten (10⁻²). When comparing two systems, the more accurate one shows a lower FRR at the same FAR level. Some systems do not report the similarity score, only the decision; in this case, a performance evaluation yields only a single FRR/FAR pair rather than a continuous curve. If the operating mode (the security level) is adjustable, i.e., there is a means of controlling the score threshold used internally, the performance evaluation can be repeated in different modes to obtain further FRR/FAR pairs.
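
In practice, the FRR/FAR pairs are obtained from two lists of saved similarity scores, one for genuine attempts and one for impostor attempts. The sketch below sweeps a decision threshold over both lists; the synthetic score distributions are illustrative only.

```python
import numpy as np

def far_frr_curve(genuine_scores, impostor_scores, num_thresholds=100):
    """Return (thresholds, FAR, FRR) computed by sweeping a decision threshold.

    A comparison is accepted when its score is >= the threshold, so:
      FRR(t) = fraction of genuine scores below t (false rejections)
      FAR(t) = fraction of impostor scores at or above t (false acceptances)
    """
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    thresholds = np.linspace(0.0, 1.0, num_thresholds)
    frr = np.array([(genuine < t).mean() for t in thresholds])
    far = np.array([(impostor >= t).mean() for t in thresholds])
    return thresholds, far, frr

# Illustrative scores: genuine attempts tend to score high, impostors low.
rng = np.random.default_rng(0)
genuine = np.clip(rng.normal(0.8, 0.1, 10_000), 0, 1)
impostor = np.clip(rng.normal(0.3, 0.1, 10_000), 0, 1)

t, far, frr = far_frr_curve(genuine, impostor)
# The threshold where FAR and FRR are closest approximates the equal error rate (EER).
eer_index = np.argmin(np.abs(far - frr))
print(f"EER ~ {(far[eer_index] + frr[eer_index]) / 2:.3%} at threshold {t[eer_index]:.2f}")
```

Plotting FRR against FAR over the swept thresholds yields the DET curve shown in Figure 5.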

Figure 5.

DET graph sample.

4.2 Evaluation mode

There are three modes of performance evaluation: technological, scenario, and operational evaluation. When evaluating biometric algorithms, technological evaluations are the most common and often the most feasible: since this type of evaluation uses saved samples, the results are reproducible, and the evaluation is neither tedious nor complicated.

  • Technological evaluation: Evaluation using recorded data, e.g., previously acquired fingerprints

  • Scenario evaluation: End-to-end evaluation of the system using a prototype or simulated environment

  • Operational evaluation: Evaluation in which the performance of a complete biometric system is determined in an application environment with a specific population

The biggest disadvantage of technological evaluations is that they do not necessarily reflect the final conditions of use of the system. For this reason, it is important to collect a set of samples of the conditions of use of the target system when preparing an assessment.

4.3 Database

The recorded samples used in technology evaluations are collected into databases. Data collection is performed with a group of volunteers, at least some of whom provide multiple acquisitions of the same biometric object (e.g., the same finger) so that genuine attempts can be formed. To make collection efficient, samples of several objects can be collected from each volunteer, for example, all ten fingers. The characteristics of the database have a great impact on the results of an evaluation: apart from the capabilities of the biometric algorithm itself, the amount of information available to characterize the objects largely determines the outcome.

4.4 Degree of confidence

To be able to assert an FRR of 1% at a FAR of 1/1,000,000 (i.e., when the system operates in a mode where one out of one million impostor attempts is, falsely, considered a match, one percent of the genuine attempts would fail), at least one million impostor attempts are needed (an impostor attempt being the comparison of a user's sample against another person's template). It is not difficult to see that the uncertainty of such an assertion would be rather high: the result depends heavily on how the two most similar samples in the database happen to score. When viewing and comparing DET (detection error trade-off) graphs, it is important to understand that the uncertainty is higher toward the edges of the plot. The number of comparisons made is not the only factor affecting confidence; the key to better statistical significance is to make as many uncorrelated attempts as possible.
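
This uncertainty can be quantified with a standard binomial confidence interval on the observed error counts. The sketch below uses the exact Clopper-Pearson bounds via the beta distribution in SciPy; the trial counts are illustrative, and the calculation assumes independent comparisons, which real cross-comparisons only approximate.

```python
from scipy.stats import beta

def clopper_pearson(errors: int, trials: int, confidence: float = 0.95):
    """Exact two-sided binomial confidence interval for an error rate."""
    alpha = 1.0 - confidence
    lower = 0.0 if errors == 0 else beta.ppf(alpha / 2, errors, trials - errors + 1)
    upper = 1.0 if errors == trials else beta.ppf(1 - alpha / 2, errors + 1, trials - errors)
    return lower, upper

# Zero false acceptances observed in one million impostor attempts:
lo, hi = clopper_pearson(errors=0, trials=1_000_000)
print(f"FAR lies between {lo:.2e} and {hi:.2e} with 95% confidence")
# The upper bound is about 3.7e-6, in line with the "rule of three" heuristic:
# observing no errors in N independent trials supports a claim of roughly 3/N, not 1/N.
```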


5. Applications of biometric systems

Biometric systems can be used in a large number of applications. From a security perspective, biometrics can help make transactions and everyday life both safer and more practical. The following domains use biometric solutions to meet their respective needs:

  • Legal applications:

    • Justice and law enforcement: Biometric technology and law enforcement have a very long shared history, and many important innovations in identity management have emerged from this relationship. Today, the biometrics applied by police forces is truly multimodal: fingerprint, face, and voice recognition each play a role in improving public safety and keeping track of persons of interest.

  • Government applications:

    • Border control and airports: A key area of application for biometric technology is the border, where it helps to automate the border-crossing process. Reliable, automated passenger screening initiatives and automated gates help improve the international travel experience while increasing the efficiency of government agencies and keeping borders safer than ever before.

    • Healthcare: In healthcare, biometrics brings an enhanced security model. Medical records are among the most valuable personal documents; doctors need to be able to access them quickly, and they need to be accurate. Weak security and poor record-keeping can make the difference between a timely, accurate diagnosis and healthcare fraud.

  • Commercial applications:

    • Security: As connectivity continues to spread around the world, it is clear that old security methods are simply not strong enough to protect what is most important. Fortunately, biometric technology is more accessible than ever, ready to provide added security and convenience for everything that needs to be protected, from a car door to the phone’s PIN.

    • Finance: Financial identification, verification, and authentication in commerce are among the most popular applications of biometric technology, helping to make banking, purchasing, and account management safer, more convenient, and more accountable. In the financial area, biometric solutions help ensure that a customer is the person he/she claims to be when accessing sensitive financial data, by capturing his/her unique biometric characteristics and comparing them to a template stored on a device or on a secure server. Today's banking and payment technologies use a wide range of biometric modalities: fingerprint, iris, voice, face, palm vein, behavioral, and other types of biometric recognition are used alone or combined in a multifactor system to lock down accounts and guard against fraud.

    • Mobile: Mobile biometric solutions live at the intersection of connectivity and identity. They integrate one or more biometric modalities for authentication or identification purposes and take advantage of smartphones, tablets, other handheld devices, wearable technology, and the Internet of Things for versatile deployment. Thanks to the versatility of modern mobile technology and the proliferation of mobile paradigms in the consumer, public, and private spheres, mobile biometrics is becoming more and more important.

    • Eye movement tracking applications:

      • Automotive industry: There is an established relationship between eye movement and attention, so tracking the car driver's eye movements can be very helpful in measuring the degree of sleepiness, tiredness, or drowsiness. The driver's sleepiness can be detected by analyzing either blink duration and amplitude or the level of gaze activity [3].

      • Screen navigation: One of the most important applications for people with disabilities is screen navigation. Using cameras, an application can track a person's eye movements in order to scroll a web page, write text, or perform actions by clicking buttons on a computer or mobile device. This kind of application has recently been gaining attention due to the rapid development of, and growing need for, new means of screen navigation, especially on mobile platforms.

      • Aviation: Flight simulators track the pilot's eye and head movements in order to analyze the pilot's behavior under realistic circumstances. Such a simulator can evaluate a pilot's performance based on eye movements combined with other information. It can also serve as an important training tool for new pilots, helping them to look at the primary flight display (PFD) more regularly so as to monitor the various aircraft indicators.

5.1 Detection and recognition of dynamic shapes

Detection of dynamic shapes is an important and rapidly evolving research area in image processing. The goal is to recognize objects in an image, or in a sequence of images, from information about their shapes; indeed, shape is one of the most discriminating features in an image. However, the description and representation of a shape remain a major challenge for the recognition task.

The quality of a descriptor is measured by its discriminative power, that is, its ability to distinguish different shapes reliably despite geometric variations such as translation and rotation.

In addition, a reliable descriptor must withstand the various perturbations that affect the shape of an object, such as noise and distortion, which can genuinely alter the shape and make the recognition task more complicated.

5.2 Representation and description of planar shapes

Shape representation and description techniques can generally be split into two main classes: contour-based methods and region-based methods. This classification depends on whether the shape features are extracted from the outline only or from the entire region of the shape. Within each class, the approaches can further be divided into global and local (structural) approaches, depending on whether the representation relies on the whole shape or on parts of it (primitives). They can also be distinguished by the space in which the shape characteristics are computed, spatial or transform. Global methods are not always robust against occlusion and image noise, and they require a complete and correct segmentation of the objects in the images; in general, however, the segmentation process partitions objects into regions or contour fragments that do not necessarily correspond to whole objects.

Contour-based approaches exploit only the boundary of the object to characterize the shape, ignoring its inner content. The most commonly used representation in contour-based recognition methods is the shape signature [4]. For a given shape, the signature is essentially a 1D parameterization of the shape contour, obtained from a scalar function such as the radial distance, angle, curvature, or velocity. Note that the signature of a whole shape (a closed curve) is often a periodic function; this is not the case for a part of a shape (an open curve), whose two ends are not contiguous. Contour-based descriptors include Fourier descriptors [5, 6], wavelet descriptors [7, 8], multi-scale curvature [9], the shape context [10], contour moments [11], and chain codes [12, 13]. Since these descriptors are calculated using only the contour pixels, their computational complexity is low, and their feature vectors are generally compact.
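
As an illustration of the contour-based family, the sketch below computes a simple centroid-distance signature for a closed contour and derives translation-, scale-, and starting-point-invariant Fourier descriptor magnitudes from it. It is a minimal sketch of one common formulation, not a reproduction of the cited descriptors.

```python
import numpy as np

def radial_signature(contour: np.ndarray, num_samples: int = 64) -> np.ndarray:
    """1D shape signature: distance from the centroid to resampled contour points."""
    centroid = contour.mean(axis=0)
    # Resample the closed contour to a fixed number of points (by index here;
    # arc-length resampling would be more accurate).
    idx = np.linspace(0, len(contour) - 1, num_samples).astype(int)
    points = contour[idx]
    return np.linalg.norm(points - centroid, axis=1)

def fourier_descriptor(signature: np.ndarray, num_coeffs: int = 10) -> np.ndarray:
    """Compact descriptor from the magnitude spectrum of the signature."""
    spectrum = np.abs(np.fft.fft(signature))
    # Dividing by the DC term normalizes for scale; the magnitudes are already
    # invariant to the starting point (a circular shift of the signature).
    return spectrum[1:num_coeffs + 1] / (spectrum[0] + 1e-12)

# Illustrative closed contour: a noisy circle sampled at 200 points.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = np.stack([np.cos(theta), np.sin(theta)], axis=1) + 0.01 * np.random.randn(200, 2)
descriptor = fourier_descriptor(radial_signature(contour))
print(descriptor.round(4))
```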

In region-based approaches, all the pixels of the object are considered when characterizing the shape. This type of method exploits not only the information from the shape boundary but also that of the inner region. The majority of region-based methods use moment descriptors, such as Zernike moments [14], Legendre moments [15], or invariant geometric moments [16]; other methods include grid descriptors [17] and the shape matrix [18]. Since a region-based descriptor makes use of all the pixels constituting the shape, it can effectively describe a wide variety of shapes in a single descriptor. However, region-based feature vectors are usually large, and the computation time remains considerable.
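
For the region-based family, the sketch below computes raw and normalized central geometric moments of a binary shape mask, from which the classic rotation invariants (e.g., Hu's first invariant) are built. It assumes a binary mask as input and is intended only as an illustration.

```python
import numpy as np

def geometric_moment(mask: np.ndarray, p: int, q: int) -> float:
    """Raw moment m_pq = sum over shape pixels of x^p * y^q for a binary mask."""
    ys, xs = np.nonzero(mask)
    return float(np.sum((xs ** p) * (ys ** q)))

def normalized_central_moment(mask: np.ndarray, p: int, q: int) -> float:
    """Central moment mu_pq normalized for translation and scale (eta_pq)."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)
    x_bar, y_bar = xs.mean(), ys.mean()
    mu = np.sum(((xs - x_bar) ** p) * ((ys - y_bar) ** q))
    return float(mu / m00 ** (1 + (p + q) / 2))

# Illustrative binary shape: a filled rectangle inside a 100x100 image.
mask = np.zeros((100, 100), dtype=bool)
mask[20:60, 30:80] = True

# First Hu invariant phi_1 = eta_20 + eta_02 (invariant to rotation).
phi1 = normalized_central_moment(mask, 2, 0) + normalized_central_moment(mask, 0, 2)
print(f"area = {geometric_moment(mask, 0, 0):.0f}, phi1 = {phi1:.4f}")
```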

It should be emphasized that contour-based shape description is often considered more relevant than region-based description, because the shape of an object is essentially distinguished by its boundary; in most cases, the central part of the object does not contribute much to pattern recognition [13].


6. Conclusion

In this chapter, we presented different biometric techniques used in industry as well as their performance.

We started with an overview of biometrics and biometric systems. Then we presented the different issues and challenges related to the implementation of such systems.

After that, we presented a performance evaluation of different biometric systems given the issues and challenges previously stated. Then we presented an overview of some important biometric elements such as the databases and the degree of confidence. Furthermore, a detailed analysis of different domains of application of several biometric techniques was presented with a focus on eye movement tracking techniques.

Finally, the different approaches to the recognition of dynamic and planar shapes were discussed in the last section.


Conflict of interest

We have no conflicts of interest to disclose.

References

  1. Shotton J et al. Real-time human pose recognition in parts from single depth images. In: Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference. IEEE; 2011. pp. 1297-1304
  2. David P. Fueling innovation beyond security—biometrics in payments. Available from: https://www.abiresearch.com/market-research/product/1031105-fueling-innovation-beyond-security-biometr [Accessed: 01 February 2018]
  3. Barbuceanu F, Antonya C. Eye tracking applications. Bulletin of the Transilvania University of Brasov. Engineering Sciences. Series I. 2009;2:17
  4. Quinlan JR. C4.5: Programs for Machine Learning. Elsevier; 2014
  5. Boykov Y, Veksler O, Zabih R. Fast approximate energy minimization via graph cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2001;23(11):1222-1239
  6. Ballerini L et al. A query-by-example content-based image retrieval system of non-melanoma skin lesions. In: MICCAI International Workshop on Medical Content-Based Retrieval for Clinical Decision Support. Springer; 2009. pp. 31-38
  7. LeCun Y et al. Backpropagation applied to handwritten zip code recognition. Neural Computation. 1989;1(4):541-551
  8. Platt J et al. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in Large Margin Classifiers. 1999;10(3):61-74
  9. Hao F, Qiu G, He H. Feature combination beyond basic arithmetics. In: BMVC; Citeseer. 2011. pp. 1-11
  10. Sivic J, Zisserman A. Video Google: A text retrieval approach to object matching in videos. IEEE; 2003. p. 1470
  11. Tomasz A. Using contour information and segmentation for object registration, modeling and retrieval [PhD thesis]. Dublin City University; 2006
  12. Shotton J, Johnson M, Cipolla R. Semantic texton forests for image categorization and segmentation. In: Computer Vision and Pattern Recognition, 2008 (CVPR 2008). IEEE; 2008. pp. 1-8
  13. Leibe B, Leonardis A, Schiele B. Combined object categorization and segmentation with an implicit shape model. In: Workshop on Statistical Learning in Computer Vision, ECCV. Vol. 2.5. 2004. p. 7
  14. Bengio S, Weston J, Grangier D. Label embedding trees for large multi-class tasks. In: Advances in Neural Information Processing Systems. 2010. pp. 163-171
  15. Zhang P, Peng J, Domeniconi C. Kernel pooled local subspaces for classification. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics). 2005;35(3):489-502
  16. Fergus R, Perona P, Zisserman A. A Sparse Object Category Model for Efficient Learning and Exhaustive Recognition. IEEE; 2005. pp. 380-387
  17. Theocharides T, Vijaykrishnan N, Irwin MJ. A parallel architecture for hardware face detection. In: ISVLSI. Vol. 6. Citeseer; 2006. p. 452
  18. Kullback S, Leibler RA. On information and sufficiency. The Annals of Mathematical Statistics. 1951;22(1):79-86
