FACE RECOGNITION ATTENDANCE SYSTEM
A Project Report
Submitted in partial fulfillment of the requirement for the award of the degree of
Bachelor of Technology
In
Electronics and Communication Engineering
Submitted by
N. Durga Bhavani (166N1A0435)
P. N. L. Rama Devi (166N1A0444)
K. Leela Mohan (166N1A0421)
Under the guidance of
Mrs. K. Prasanthi, M.Tech,
Assistant Professor
(2016-20)
SRINIVASA INSTITUTE OF ENGINEERING AND TECHNOLOGY
(Approved by AICTE, New Delhi, Permanently affiliated to JNTUK, Kakinada)
(An ISO 9001:2015 Certified Institute, Accredited by NAAC with ‘A’ Grade)
(Recognised by UGC Under Sections 2(f) & 12(B))
NH-216, Amalapuram-Kakinada Highway, Cheyyeru (V), AMALAPURAM, E.G.Dt. PIN-533222
CERTIFICATE
This is to certify that the Project Report entitled "Face Recognition Attendance System", being submitted by N. Durga Bhavani, P. N. L. Rama Devi, and K. Leela Mohan, is in partial fulfillment of the requirements for the award of the degree of Bachelor of Technology in Electronics and Communication Engineering during the academic period 2016-20.
EXTERNAL EXAMINER
Acknowledgment
We express our heartfelt thanks to Dr. V. Radhika, our beloved Head of the
Department, for providing us with valuable advice and timely guidance.
We would like to thank the Principal, Dr. K. Narendra Kumar and Management of
“Srinivasa Institute of Engineering & Technology”, for providing us with the requisite
facilities to carry out our project on the campus.
Our heartfelt thanks to our project coordinator, Mr. G. Vijaya Raju, and to all the
faculty members of our department for imparting the theoretical and practical knowledge
that we put to use in our project. We are also indebted to the non-teaching staff for their
cooperation.
We would like to thank our Friends and Family members for their help and support in
making our project a success.
DECLARATION
We, N. Durga Bhavani (166N1A0435), P. N. L. Rama Devi
(166N1A0444), and K. Leela Mohan (166N1A0421), hereby declare that the project report
entitled "Face Recognition Attendance System", carried out under the esteemed guidance of
Mrs. K. Prasanthi, M.Tech., Assistant Professor, is submitted in partial fulfillment of the
requirements for the award of the degree of Bachelor of Technology in Electronics and
Communication Engineering.
Appendix 57
References/Bibliography 70
LIST OF FIGURES
Fig. No Name of the Figure Page No
6.2 Checking Python 42
6.3 Install Numpy 43
6.4 Install Opencv 43
6.5 Checking Opencv version 43
6.6 Student Registration Flow 44
6.7 Taking Attendance Flow 45
6.8 Generating Attendance Flow 46
7.1 Interface 47
7.2 Create Dataset 48
7.3 Capturing Images 48
7.4 Database 49
8.1 Interface 50
8.2 Create Dataset 51
8.3 Capturing Images 51
8.4 Database 52
8.5 Face Recognition 53
LIST OF TABLES
ABSTRACT
The world is being updated faster and faster, yet some things remain unchanged; one of
them is the attendance system. Traditionally, attendance has been taken by signatures on
paper, a time-consuming and burdensome process. Techniques such as RFID-based
attendance systems have since emerged, but they are still open to malpractice. In this
project, a new technology is proposed, namely a face recognition attendance system. In this
system, a camera is placed that takes a picture of the person and compares it with the
database. If the image matches the database, attendance is marked for that person. The
comparison is done using the Local Binary Pattern Histogram (LBPH) algorithm. Previously,
Principal Component Analysis (PCA) was used; compared to PCA, LBPH is the more
efficient algorithm, and by using it this system gives the best results.
Face Recognition Attendance System Using LBPH Algorithm
CHAPTER-1
INTRODUCTION
1. Introduction
The Attendance Management System has become an important factor in the education
field. Such a system should help professors and lecturers mark attendance while reducing
their effort and time consumption. In this project we provide complete details about the
Attendance Management System using the face recognition LBPH algorithm, along with
complete details about each student, maintained on a regular basis.
1.1 Current Systems
At present, attendance marking involves manual entry on paper sheets by professors
and teachers, but this is a very time-consuming process, and the chance of proxy attendance
is another problem that arises with this type of marking. There are also attendance marking
systems such as RFID (Radio Frequency Identification) and biometrics, but these systems are
currently not very popular in schools and classrooms. So, why not shift to an automated
attendance system that works on a face recognition technique? Be it a classroom or an entry
gate, it will mark the attendance of students, professors, employees, etc.
1.2 Motivation
The main motivation for this project was the slow and inefficient traditional manual
attendance system, which made us think: why not make it automated, fast, and much more
efficient? Such face detection techniques are also used by departments like crime
investigation, where CCTV footage is used to detect faces from a crime scene and compare
them with a criminal database to recognize suspects.
1.3 Scope
The scope of the system is to create a high-tech environment in the institution. By
using the face recognition attendance management system, the institution will move from a
traditional attendance system to a technical one. This system will add features to the dynamic
classroom management system by marking the attendance of every student using image
processing and computer vision technology in every classroom. This will help the institution
use the technology in effective ways:
● Make the attendance process easier and more effective.
● Help faculty with the attendance process every time.
● Manage and organize the attendance data of every student.
1.4 Objectives
● Detection of a unique face image amidst other natural components such as walls and
backgrounds.
● Detection of faces with varying characteristics such as beards, spectacles, etc.
● Extraction of the unique characteristic features of a face useful for face recognition.
● Effective recognition of unique faces in a class (individual recognition).
● Automated updating of the attendance sheet without human intervention.
● Helping lecturers improve and organize the process of tracking and managing student
attendance.
● Providing a valuable attendance service for both teachers and students, reducing manual
process errors through an automated and reliable attendance system.
● Increasing privacy and security, so that a student cannot mark attendance for himself or a
friend when they are not present.
CHAPTER-2
LITERATURE SURVEY
Face detection is a computer technology that determines the location and size of a human
face in an arbitrary (digital) image. The facial features are detected, and any other objects like
trees, buildings, and bodies are ignored. It can be regarded as a specific case of object-class
detection, where the task is finding the locations and sizes of all objects in an image that
belong to a given class. Face detection can also be regarded as a more general case of face
localization, in which the task is to find the locations and sizes of a known number of faces
(usually one). Basically, there are two types of approaches to detect the facial part in a given
image: the feature-based approach and the image-based approach. The feature-based
approach tries to extract features of the image and match them against knowledge of face
features, while the image-based approach tries to get the best match between training and
testing images.
2.1 FEATURE-BASED APPROACH
Active Shape Models (ASMs) focus on complex non-rigid features such as the actual physical
and higher-level appearance of features. ASMs aim at automatically locating landmark points
that define the shape of any statistically modelled object in an image; in the case of faces,
these are features such as the eyes, lips, nose, mouth, and eyebrows. The training stage of an
ASM involves building a statistical facial model from a training set containing images with
manually annotated landmarks.
ASMs are classified into three groups: snakes, PDM (Point Distribution Models), and
deformable templates.
2.1.1 Snakes: The first type uses a generic active contour called a snake, first introduced by
Kass et al. in 1987. Snakes are used to identify head boundaries [8,9,10,11,12]. To achieve
this, a snake is first initialized in the proximity of a head boundary; it then locks onto nearby
edges and subsequently assumes the shape of the head. The evolution of a snake is achieved
by minimizing an energy function Esnake (an analogy with physical systems), denoted as

Esnake = Einternal + Eexternal

where Einternal and Eexternal are the internal and external energy functions. The internal
energy depends on the intrinsic properties of the snake and defines its natural evolution,
which is typically shrinking or expanding. The external energy counteracts the internal
energy and enables the contour to deviate from its natural evolution and eventually assume
the shape of nearby features, such as the head boundary at a state of equilibrium. There are
two main considerations in forming snakes: the selection of energy terms and the energy
minimization procedure. Elastic energy is commonly used as the internal energy; it varies
with the distance between control points on the snake, giving the contour an elastic-band
characteristic that causes it to shrink or expand. The external energy, on the other hand,
relies on image features. Energy minimization is performed by optimization techniques such
as steepest gradient descent, which requires heavy computation. Huang and Chen and Lam
and Yan both employ fast iteration methods using greedy algorithms. Snakes have some
demerits: the contour often becomes trapped on false image features, and snakes are not
suitable for extracting non-convex features.
2.1.2 Deformable Templates:
Deformable templates were introduced by Yuille et al. to take into account the a priori
knowledge of facial features and to improve on the performance of snakes. Locating a facial
feature boundary is not an easy task, because the local evidence of facial edges is difficult to
organize into a sensible global entity using generic contours; the low brightness contrast
around some of these features also makes edge detection difficult. Yuille et al. took the
concept of snakes a step further by incorporating global information about the eye to improve
the reliability of the extraction process. Deformable template approaches were developed to
solve this problem. Deformation is based on local valleys, edges, peaks, and brightness.
Beyond the face boundary, salient feature extraction (eyes, nose, mouth, and eyebrows) is a
great challenge in face recognition. The total energy is

E = Ev + Ee + Ep + Ei + Einternal

where Ev, Ee, Ep, and Ei are the external energies due to valleys, edges, peaks, and image
brightness, and Einternal is the internal energy.
2.2 LOW-LEVEL ANALYSIS
This group of methods is based on low-level visual features like color, intensity, edges, and
motion.
2.2.1 Skin Color Base:
Color is a vital feature of human faces. Using skin color as a feature for tracking a face has
several advantages: color processing is much faster than processing other facial features, and
under certain lighting conditions color is orientation invariant. This property makes motion
estimation much easier, because only a translation model is needed. Tracking human faces
using color as a feature also has several problems, since the color representation of a face
obtained by a camera is influenced by many factors (ambient light, object movement, etc.).
Three different face detection algorithms are mainly available, based on the RGB, YCbCr,
and HSI color space models. In the implementation of these algorithms there are three main
steps: classify the skin region in the color space, apply a threshold to mask the skin region,
and draw a bounding box to extract the face image. Crowley and Coutaz suggested one of the
simplest skin color algorithms for detecting skin pixels. The perceived human color varies as
a function of the relative direction to the illumination. The pixels of a skin region can be
detected using a normalized color histogram, and can be normalized for changes in intensity
by dividing by luminance: an [R, G, B] vector is converted into an [r, g] vector of normalized
color, which provides a fast means of skin detection. This algorithm fails when there are
other skin regions such as legs, arms, etc. Chai and Ngan [27] suggested a skin color
classification algorithm in the YCbCr color space. Research found that pixels belonging to
the skin region have similar Cb and Cr values, so if the thresholds are chosen as [Cr1, Cr2]
and [Cb1, Cb2], a pixel is classified as skin tone if its [Cr, Cb] values fall within those
thresholds. The skin color distribution then gives the face portion of the color image. This
algorithm also has the constraint that the image should have only the face as its skin region.
Kjeldsen and Kender defined a color predicate in HSV color space to separate skin regions
from the background. Skin color classification in HSI color space works the same way as in
YCbCr, but here the responsible values are hue (H) and saturation (S). Similarly, the
thresholds are chosen as [H1, S1] and [H2, S2], and a pixel is classified as skin tone if its
[H, S] values fall within the thresholds; this distribution gives the localized face image. Like
the above two algorithms, this algorithm has the same constraint.
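The YCbCr skin classification described above can be sketched in NumPy. This is a minimal illustration, not the implementation from the cited papers: the conversion coefficients follow the common ITU-R BT.601 approximation, and the threshold ranges [Cr1, Cr2] = [133, 173] and [Cb1, Cb2] = [77, 127] are widely used example values.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    # ITU-R BT.601 full-range approximation of the RGB -> YCbCr conversion
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def skin_mask(rgb, cr_range=(133, 173), cb_range=(77, 127)):
    # A pixel is classified as skin tone if its [Cr, Cb] values fall
    # within the chosen thresholds [Cr1, Cr2] and [Cb1, Cb2]
    _, cb, cr = rgb_to_ycbcr(rgb)
    return ((cr >= cr_range[0]) & (cr <= cr_range[1]) &
            (cb >= cb_range[0]) & (cb <= cb_range[1]))
```

The resulting boolean mask can then be used to bound and extract the face region, subject to the constraint noted above that other skin regions (arms, legs) also pass the test.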
2.3 MOTION BASE:
When a video sequence is available, motion information can be used to locate moving
objects. Moving silhouettes like the face and body parts can be extracted by simply
thresholding accumulated frame differences. Besides face regions, facial features can also be
located by frame differences.
2.3.1 Gray Scale Base:
Gray information within a face can also be treated as an important feature. Facial features
such as eyebrows, pupils, and lips generally appear darker than their surrounding facial
regions. Various recent feature extraction algorithms search for local gray minima within
segmented facial regions. In these algorithms, the input images are first enhanced by contrast
stretching and gray-scale morphological routines to improve the quality of local dark patches
and thereby make detection easier. The extraction of dark patches is then achieved by
low-level gray-scale thresholding. Yang and Huang presented a new approach based on the
gray-scale behaviour of faces in pyramid (mosaic) images. This system utilizes a hierarchical
face location method consisting of three levels: the higher two levels are based on mosaic
images at different resolutions, and in the lowest level an edge detection method is applied.
Moreover, this algorithm gives a fine response in complex backgrounds where the size of the
face is unknown.
2.3.2 Edge Base:
Face detection based on edges was introduced by Sakai et al. This work was based on
analysing line drawings of faces from photographs, aiming to locate facial features. Later,
Craw et al. proposed a hierarchical framework based on Sakai et al.'s work to trace a human
head outline. Remarkable works were subsequently carried out by many researchers in this
area. The method suggested by Anila and Devarajan is very simple and fast. They proposed a
framework that consists of three steps: initially, the images are enhanced by applying a
median filter for noise removal and histogram equalization for contrast adjustment; in the
second step, the edge image is constructed from the enhanced image by applying the Sobel
operator; then a novel edge tracking algorithm is applied to extract sub-windows from the
enhanced image based on edges. Finally, they used the Back Propagation Neural network
(BPN) algorithm to classify each sub-window as either face or non-face.
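The Sobel step mentioned above can be sketched as follows. This is a plain NumPy illustration of the operator itself (not the cited authors' full framework), convolving the standard 3×3 Sobel kernels and returning the gradient magnitude.

```python
import numpy as np

def sobel_edges(gray):
    # Standard 3x3 Sobel kernels for horizontal and vertical gradients
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Replicate border pixels so the output keeps the input size
    padded = np.pad(gray.astype(float), 1, mode="edge")
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)  # gradient magnitude per pixel
```

Thresholding the returned magnitude yields the edge image from which sub-windows would be extracted.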
2.4 FEATURE ANALYSIS
These algorithms aim to find structural features that exist even when the pose, viewpoint, or
lighting conditions vary, and then use these to locate faces. These methods are designed
mainly for face localization.
2.4.1Feature Searching
Paul Viola and Michael Jones presented an approach for object detection which
minimizes computation time while achieving high detection accuracy. Paul Viola and Michael
Jones [39] proposed a fast and robust method for face detection which is 15 times quicker than any
technique at the time of release with 95% accuracy at around 17 fps.The technique relies on the
use of simple Haar-like features that are evaluated quickly through the use of a new image
representation. Based on the concept of an ―Integral Image‖ it generates a large set of features
and uses the boosting algorithm AdaBoost to reduce the overcomplete set and the introduction of
a degenerative tree of the boosted classifiers provides for robust and fast interferences. The
detector is applied in a scanning fashion and used on gray-scale images, the scanned window that
is applied can also be scaled, as well as the features evaluated.
All the methods discussed so far are able to track faces, but some issues, like locating faces of
various poses in complex backgrounds, remain truly difficult. To reduce this difficulty,
investigators group facial features into face-like constellations using more robust modelling
approaches such as statistical analysis. Various types of face constellations have been
proposed by Burl et al., who use statistical shape theory on the features detected from a
multiscale Gaussian derivative filter. Huang et al. also apply a Gaussian filter for
pre-processing in a framework based on image feature analysis.
2.5 IMAGE BASE APPROACH
Neural networks have gained much attention in many pattern recognition problems, such as
OCR, object recognition, and autonomous robot driving. Since face detection can be treated
as a two-class pattern recognition problem, various neural network algorithms have been
proposed. The advantage of using neural networks for face detection is the feasibility of
training a system to capture the complex class-conditional density of face patterns. However,
one demerit is that the network architecture has to be extensively tuned (number of layers,
number of nodes, learning rates, etc.) to get exceptional performance. One of the earliest
hierarchical neural networks was proposed by Agui et al. [43]. The first stage has two parallel
subnetworks whose inputs are filtered intensity values from an original image. The inputs to
the second-stage network consist of the outputs from the subnetworks and extracted feature
values; an output at the second stage indicates the presence of a face in the input region.
Propp and Samal developed one of the earliest neural networks for face detection [44]. Their
network consists of four layers with 1,024 input units, 256 units in the first hidden layer,
eight units in the second hidden layer, and two output units. Feraud and Bernier presented a
detection method using autoassociative neural networks [45], [46], [47]. The idea is based on
[48], which shows that an autoassociative network with five layers is able to perform a
nonlinear principal component analysis. One autoassociative network is used to detect
frontal-view faces and another is used to detect faces turned up to 60 degrees to the left and
right of the frontal view. Later, Lin et al. presented a face detection system using a
probabilistic decision-based neural network (PDBNN) [49]. The architecture of a PDBNN is
similar to a radial basis function (RBF) network with modified learning rules and a
probabilistic interpretation.
An early example of employing eigenvectors in face recognition was the work of Kohonen,
in which a simple neural network was demonstrated to perform face recognition on aligned
and normalized face images. Kirby and Sirovich suggested that images of faces can be
linearly encoded using a modest number of basis images, an idea arguably first proposed by
Pearson in 1901 and then by Hotelling in 1933. Given a collection of n × m pixel training
images represented as vectors of size m × n, basis vectors spanning an optimal subspace are
determined such that the mean square error between the projection of the training images
onto this subspace and the original images is minimized. They call the set of optimal basis
vectors eigenpictures, since these are simply the eigenvectors of the covariance matrix
computed from the vectorized face images in the training set. Experiments with a set of 100
images show that a face image of 91 × 50 pixels can be effectively encoded using only 50
eigenpictures.
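The eigenpicture idea above can be sketched with NumPy. This is a toy illustration of the linear encoding (computing the eigenvectors via SVD of the mean-centered image matrix rather than forming the covariance matrix explicitly); the function names are illustrative.

```python
import numpy as np

def eigenpictures(faces, k):
    # faces: (n_images, n_pixels) matrix of vectorized, aligned face images
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Rows of vt are the eigenvectors of the covariance matrix of `centered`
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]  # keep the k leading eigenpictures

def project(faces, mean, basis):
    # Encode each face as k coefficients over the eigenpicture basis
    return (faces - mean) @ basis.T
```

Reconstructing from the coefficients (`coeffs @ basis + mean`) recovers the faces up to the mean square error the subspace discards.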
SVMs were first applied to face detection by Osuna et al. SVMs work as a new paradigm for
training polynomial function, neural network, or radial basis function (RBF) classifiers.
SVMs work on an induction principle called structural risk minimization, which aims to
minimize an upper bound on the expected generalization error. An SVM classifier is a linear
classifier in which the separating hyperplane is chosen to minimize the expected
classification error of unseen test patterns. Osuna et al. developed an efficient method to train
an SVM for large-scale problems and applied it to face detection. Based on two test sets of
10,000,000 test patterns of 19 × 19 pixels, their system has slightly lower error rates and runs
approximately 30 times faster than the system by Sung and Poggio. SVMs have also been
used to detect faces and pedestrians in the wavelet domain.
CHAPTER-3
METHODOLOGY
3.1 Methodology
The proposed system is based on face recognition. When a student comes in front of the
camera module, his/her image is captured and recognized with validation. When recognition
and validation succeed, his/her attendance is marked automatically.
In this system, the user gets a login interface to interact with the system. If login succeeds,
the interface displays the home page of the proposed system. The block diagram of the
automatic attendance system is shown in the figure, and its components are explained as
follows:
1. Camera:
The camera is placed at the entrance of the classroom to capture the students' face
images properly. The images then go to the further process of face detection.
4. Face Recognition:
The most important part of this system is face recognition: an automatic method of
identifying and verifying a person from the images and videos provided by the camera.
5. Attendance Marker:
A student is marked as present in the attendance record when a face from the
particular date folder is matched; the rest of the students, whose faces are not matched, are
marked as absent.
6. Database:
Here the student attendance data is stored with date and time, and the data can be
retrieved at any time.
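The attendance marker and database steps above can be sketched as follows. This is a minimal illustration using a CSV file as the store (the real system's storage format is not specified here); the function name and file layout are assumptions.

```python
import csv
from datetime import datetime

def mark_attendance(recognized_ids, enrolled_ids, path="attendance.csv"):
    # Students whose faces were matched are marked Present; the rest Absent.
    # Each row stores the student ID, status, and the date/time of marking.
    now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["student_id", "status", "timestamp"])
        for sid in enrolled_ids:
            status = "Present" if sid in recognized_ids else "Absent"
            writer.writerow([sid, status, now])
```

Storing the timestamp with every row is what lets the data be retrieved for any date later.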
Fig: 3.2 Flowchart
(Flow: Start → Image Capture → Face Recognition → matching of the captured image
against the trained recognizer; if matched, attendance is marked "PRESENT" and a report is
generated; if not matched, attendance is not marked. The attendance record is then updated
and the process stops.)
This system saves the time and effort of the attendance process and gives good accuracy. The
proposed system updates the attendance marked for the students, and can be used in schools,
colleges, and libraries.
CHAPTER-4
INTRODUCTION TO IMAGE PROCESSING
4.1 Introduction
Various techniques have been developed in image processing during the last four to
five decades. Most of the techniques were developed for enhancing images obtained from
unmanned spacecraft, space probes, and military reconnaissance flights. Image processing
systems are becoming popular due to the easy availability of powerful personal computers,
large memory devices, graphics software, etc. Image processing is used in many fields:
● Remote Sensing
● Medical Imaging
● Non-destructive Evaluation
● Forensic Studies
● Textiles
● Material Science.
● Military
● Film industry
● Document processing
● Printing Industry
The common steps in image processing are image scanning, storing, enhancing, and
interpretation.
4.2 Analog and Digital Image Processing
Analog image processing refers to the alteration of an image through electrical means;
the most common example is the television image.
In digital image processing, digital computers are used to process the image. The image is
converted to digital form using a scanner-digitizer [6] (as shown in Figure 1) and then
processed. Digital image processing is defined as subjecting numerical representations of
objects to a series of operations in order to obtain a desired result. It starts with one image
and produces a modified version of it; it is therefore a process that takes an image into
another. The stages of digital image processing include:
● Image representation
● Image preprocessing
● Image enhancement
● Image restoration
● Image analysis
● Image reconstruction
● Image data compression
4.3 Image Representation
Fig: 4.1 Digitization
The 2D continuous image f(x,y) is divided into N rows and M columns. The
intersection of a row and a column is called a pixel. The value assigned to the integer
coordinates [m,n], with m = 0,1,2,...,M-1 and n = 0,1,2,...,N-1, is f[m,n]. In fact, in most cases
f(x,y) can be considered the physical signal that impinges on the face of a sensor, which is
sampled and quantized to produce f[m,n]. Typically, an image file such as BMP, JPEG, or
TIFF has a header and picture information. The header usually includes details like a format
identifier (typically the first information), resolution, number of bits per pixel, compression
type, etc.
4.4 Image Preprocessing
4.4.1 Scaling
4.4.2 Magnification
This is usually done to improve the scale of display for visual interpretation or
sometimes to match the scale of one image to another. To magnify an image by a factor of 2, each
pixel of the original image is replaced by a block of 2x2 pixels, all with the same brightness value
as the original pixel.
Fig:4.2 Magnification
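The 2× magnification described above (each pixel replaced by a 2×2 block of the same brightness) can be sketched in one line of NumPy:

```python
import numpy as np

def magnify2x(img):
    # Replace each pixel with a 2x2 block of the same brightness value,
    # doubling the image size in both directions
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
```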
4.4.3Reduction
To reduce a digital image to the original data, every mth row and mth column of
the original imagery is selected and displayed. Another way of accomplishing the same is by taking
the average in 'm x m' block and displaying this average after proper rounding of the resultant
value.
Fig:4.3 Reduction
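Both reduction approaches described above (subsampling every mth row/column, and block averaging with rounding) can be sketched as:

```python
import numpy as np

def reduce_by(img, m):
    # Keep every m-th row and m-th column (subsampling)
    return img[::m, ::m]

def reduce_avg(img, m):
    # Average each m x m block and round the result
    h, w = img.shape
    blocks = img[:h - h % m, :w - w % m].reshape(h // m, m, w // m, m)
    return np.rint(blocks.mean(axis=(1, 3))).astype(img.dtype)
```

Block averaging generally preserves more of the original detail than plain subsampling, at the cost of a little extra computation.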
4.4.4 Mosaic
Mosaic is a process of combining two or more images to form a single large image
without radiometric imbalance. Mosaic is required to get the synoptic view of the entire area,
otherwise capture as small images.
Fig:4.4 Mosaic
4.5 Image Enhancement
Sometimes images obtained from satellites and from conventional and digital cameras
lack contrast and brightness because of the limitations of the imaging subsystems and the
illumination conditions while capturing the image. Images may also contain different types of
noise. In image enhancement, the goal is to accentuate certain image features for subsequent
analysis or for image display [1,2]. Examples include contrast and edge enhancement,
pseudo-coloring, noise filtering, sharpening, and magnifying. Image enhancement is useful in
feature extraction, image analysis, and image display. The enhancement process itself does
not increase the inherent information content in the data; it simply emphasizes certain
specified image characteristics. Enhancement algorithms are generally interactive and
application dependent. Some enhancement techniques are:
● Contrast Stretching
● Noise Filtering
● Histogram modification
4.5.1 Contrast Stretching
Some images (e.g. over water bodies, deserts, dense forests, snow, clouds, and hazy
conditions over heterogeneous regions) are homogeneous, i.e., they do not have much change
in their gray levels. In terms of the histogram representation, they are characterized by very
narrow peaks. The homogeneity can also be due to incorrect illumination of the scene. Such
images are not easily interpretable, due to poor human perceptibility: only a narrow range of
gray levels is present in the image, out of the much wider range available. Contrast stretching
methods are designed for such frequently encountered situations, and different stretching
techniques have been developed to stretch the narrow range to the whole of the available
dynamic range.
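A basic linear contrast stretch, the simplest of the stretching techniques mentioned above, maps the image's narrow gray-level range onto the full dynamic range:

```python
import numpy as np

def contrast_stretch(img, out_min=0, out_max=255):
    # Linearly map the narrow input range [img.min(), img.max()]
    # onto the full available dynamic range [out_min, out_max]
    lo, hi = float(img.min()), float(img.max())
    if hi == lo:
        # Perfectly homogeneous image: nothing to stretch
        return np.full_like(img, out_min)
    stretched = (img.astype(float) - lo) * (out_max - out_min) / (hi - lo) + out_min
    return stretched.astype(np.uint8)
```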
4.6 Image Restoration
In image restoration, a degraded image is restored to its original quality by inverting
the physical degradation phenomenon, such as defocus, linear motion, atmospheric
degradation, and additive noise.
4.7 Image Compression
Compression is a very essential tool for archiving image data, transferring image data
over networks, etc. Various techniques are available for lossy and lossless compression. One
of the most popular compression techniques, JPEG (Joint Photographic Experts Group), uses
a Discrete Cosine Transform (DCT) based compression technique. Currently, wavelet-based
compression techniques are used for higher compression ratios with minimal loss of data.
CHAPTER-5
LOCAL BINARY PATTERN HISTOGRAM (LBPH)
Local Binary Pattern (LBP) is a simple yet very efficient texture operator which labels the
pixels of an image by thresholding the neighborhood of each pixel and considers the result as a
binary number.
LBP was first described in 1994 and has since been found to be a powerful feature for
texture classification. It has further been determined that when LBP is combined with the
histogram of oriented gradients (HOG) descriptor, it improves detection performance
considerably on some datasets. Using LBP combined with histograms, we can represent face
images with a simple data vector. As LBP is a visual descriptor, it can also be used for face
recognition tasks, as can be seen in the following step-by-step explanation.
5.1.1 Step-by-Step
Now that we know a little more about face recognition and LBPH, let's go further and
look at the steps of the algorithm. The LBPH uses four parameters:
● Radius: the radius is used to build the circular local binary pattern and represents the
radius around the central pixel. It is usually set to 1.
● Neighbors: the number of sample points to build the circular local binary pattern. Keep in
mind: the more sample points you include, the higher the computational cost. It is usually set
to 8.
● Grid X: the number of cells in the horizontal direction. The more cells, the finer the grid, the
higher the dimensionality of the resulting feature vector. It is usually set to 8.
● Grid Y: the number of cells in the vertical direction. The more cells, the finer the grid, the
higher the dimensionality of the resulting feature vector. It is usually set to 8.
First, we need to train the algorithm. To do so, we need to use a dataset with
the facial images of the people we want to recognize. We need to also set an ID (it may be a number
or the name of the person) for each image, so the algorithm will use this information to recognize
an input image and give you an output. Images of the same person must have the same ID. With
the training set already constructed, let’s see the LBPH computational steps.
The first computational step of the LBPH is to create an intermediate image that
describes the original image in a better way, by highlighting the facial characteristics. To do so,
the algorithm uses a concept of a sliding window, based on the parameters radius and neighbors.
Based on the image above, let’s break it into several small steps so we can understand it easily:
● It can also be represented as a 3x3 matrix containing the intensity of each pixel (0~255).
● Then, we need to take the central value of the matrix to be used as the threshold.
● This value will be used to define the new values from the 8 neighbors.
● For each neighbor of the central value (threshold), we set a new binary value. We set 1 for
values equal or higher than the threshold and 0 for values lower than the threshold.
● Now, the matrix will contain only binary values (ignoring the central value). We need to
concatenate each binary value from each position from the matrix line by line into a new binary
value (e.g. 10001101). Note: some authors use other approaches to concatenate the binary
values (e.g. clockwise direction), but the final result will be the same.
● Then, we convert this binary value to a decimal value and set it to the central value of the
matrix, which is actually a pixel from the original image.
● At the end of this procedure (the LBP procedure), we have a new image which better represents
the characteristics of the original image.
● Note: the LBP procedure was later extended to use different values for radius and neighbors;
this extension is called Circular LBP.
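The steps above can be sketched in Python with NumPy. This is a minimal illustration of the basic 3x3 LBP operator (reading the neighbors clockwise from the top-left corner), not OpenCV's optimized implementation; the sample pixel values are made up:

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP: every interior pixel is replaced by an 8-bit code
    built from comparisons with its 8 neighbors (clockwise from top-left)."""
    h, w = gray.shape
    out = np.zeros((h, w), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = gray[y, x]  # central value used as the threshold
            code = 0
            for dy, dx in offsets:
                # set 1 for neighbors equal or higher than the threshold, 0 otherwise
                code = (code << 1) | (1 if gray[y + dy, x + dx] >= center else 0)
            out[y, x] = code
    return out

# a 3x3 example: the center value 5 is the threshold
patch = np.array([[9, 1, 6],
                  [2, 5, 8],
                  [3, 7, 4]], dtype=np.uint8)
print(lbp_image(patch)[1, 1])  # 180 (binary 10110100)
```

As the text notes, a different concatenation order (e.g. line by line instead of clockwise) yields different codes for individual pixels, but the method works the same way.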
Sampling neighbors that do not fall exactly on pixel centers (as in Circular LBP) can be done
using bilinear interpolation: if a data point falls between pixels, the values of the 4 nearest
pixels (2x2) are used to estimate the value of the new data point.
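A minimal sketch of the bilinear interpolation just described, assuming the query point lies inside the image so that all 4 neighboring pixels exist:

```python
import numpy as np

def bilinear(img, y, x):
    """Estimate the intensity at a fractional (y, x) position from the
    4 nearest pixels (a 2x2 block), weighted by distance."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = y0 + 1, x0 + 1
    fy, fx = y - y0, x - x0  # fractional parts in [0, 1)
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return float((1 - fy) * top + fy * bottom)

img = np.array([[0.0, 10.0],
                [20.0, 30.0]])
print(bilinear(img, 0.5, 0.5))  # 15.0, the midpoint of the 2x2 block
```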
HISTOGRAM
Introduction To Histogram
In digital image processing, the histogram is a graphical representation of a digital
image: a plot of the number of pixels at each tonal value. Nowadays, image histograms are built
into digital cameras, and photographers use them to see the distribution of captured tones.
In the graph, the horizontal axis represents the tonal variations while the vertical axis
represents the number of pixels with that particular tone. Black and dark areas are represented
on the left side of the horizontal axis, medium grey is represented in the middle, and light and
white areas on the right.
Applications of Histograms
● In digital image processing, histograms are used for simple calculations in software.
● It is used to analyze an image. Properties of an image can be predicted by the detailed study
of the histogram.
● The brightness of the image can be adjusted by having the details of its histogram.
● The contrast of the image can be adjusted according to the need by having details of the x-
axis of a histogram.
● It is used for image equalization. Gray level intensities are expanded along the x-axis to
produce a high contrast image.
● Histograms are used in thresholding, which improves the appearance of the image.
● If we have input and output histogram of an image, we can determine which type of
transformation is applied in the algorithm.
Based on the image above, we can extract the histogram of each region as follows:
● As we have an image in grayscale, each histogram (from each grid cell) will contain only 256
positions (0~255), representing the occurrences of each pixel intensity.
● Then, we need to concatenate the histograms to create a new, bigger histogram. Supposing
we have 8x8 grids, we will have 8x8x256 = 16,384 positions in the final histogram. The final
histogram represents the characteristics of the original image.
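The grid-and-concatenate step can be sketched as follows; the function and variable names are illustrative, not part of any library API:

```python
import numpy as np

def lbph_feature(lbp_img, grid_x=8, grid_y=8):
    """Split the LBP image into grid_x * grid_y cells and concatenate
    the 256-bin histogram of every cell into one feature vector."""
    h, w = lbp_img.shape
    feats = []
    for gy in range(grid_y):
        for gx in range(grid_x):
            cell = lbp_img[gy * h // grid_y:(gy + 1) * h // grid_y,
                           gx * w // grid_x:(gx + 1) * w // grid_x]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            feats.append(hist)
    # with an 8x8 grid: 8 * 8 * 256 = 16,384 positions
    return np.concatenate(feats)

lbp_img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(lbph_feature(lbp_img).shape)  # (16384,)
```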
In this step, the algorithm is already trained: each histogram created represents one
image from the training dataset. So, given an input image, we perform the steps again for this
new image and create a histogram which represents it.
● So to find the image that matches the input image, we just need to compare the two histograms
and return the image with the closest histogram.
● We can use various approaches to compare the histograms (i.e., to calculate the distance
between two histograms), for example: Euclidean distance, chi-square, absolute value, etc.
In this example, we use the well-known Euclidean distance:
D = sqrt( sum_i (hist1_i - hist2_i)^2 ), summed over all positions i of the two histograms.
● The algorithm output is then the ID of the image with the closest histogram. The algorithm
should also return the calculated distance, which can be used as a 'confidence'
measurement. Note: don't be fooled by the name 'confidence'; lower confidences are
better, because they mean the distance between the two histograms is smaller.
● We can then use a threshold on the 'confidence' to automatically estimate whether the algorithm
has correctly recognized the image: we can assume recognition was successful if the confidence
is lower than the defined threshold.
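The matching step above can be sketched like this; the function name and the threshold value are illustrative assumptions, not the values OpenCV uses internally:

```python
import numpy as np

def predict(query_hist, train_hists, train_ids, threshold=2500.0):
    """Return (id, confidence) for the closest training histogram.
    Lower confidence (distance) is better; above the threshold -> 'Unknown'."""
    # Euclidean distance between the query histogram and each training histogram
    dists = [float(np.sqrt(np.sum((query_hist - h) ** 2))) for h in train_hists]
    best = int(np.argmin(dists))
    if dists[best] > threshold:
        return "Unknown", dists[best]
    return train_ids[best], dists[best]

# toy 2-bin "histograms" just to show the mechanics
train = [np.array([3.0, 4.0]), np.array([10.0, 0.0])]
ids = [1, 2]
print(predict(np.array([0.0, 0.0]), train, ids, threshold=100.0))  # (1, 5.0)
```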
5.5 Conclusions
Face Recognition Attendance System Using LBPH Algorithm
CHAPTER-6
SOFTWARE REQUIREMENT ANALYSIS
6.1 General Description
Over the past decade, the process of taking students' attendance has developed and
changed. The driving force behind this development is the desire to automate, facilitate, and
speed up the process, saving time and effort. The traditional manual way is time consuming and
error prone. In this project, we attempt to reduce wasted time, eliminate buddy clocking, and
automate the process. Our system uses facial recognition technology to record attendance through
a high-resolution digital camera that detects and recognizes faces and compares the recognized
faces with the students' face images stored in the face database.
Once a recognized face matches a stored image, attendance is marked in the attendance
database for that person. The process repeats if any faces are missed. For example, if 4 faces
are missed because of a bad position during the detection phase, this phase starts again to detect
and recognize the missed faces and continue the attendance process. By the end of the month, a
monthly report is sent to the lecturer containing attendance and absence rates as a chart along
with the names of absentees. A warning message is also sent to a student who exceeds the allowed
number of absences.
This product, the Face Recognition Attendance Management System, depends on the vast
database of GLA University for the records of the students. The product will automate the various
tasks associated with handling student details and surveillance inside the class, thus helping
the college authorities to ensure the smooth working of these processes.
● Student Registration
● Face Detection
● Face Recognition
o Feature Extraction
o Feature Classification
● Attendance management system
It will allow uploading, updating and deletion of the contents of the system.
● The system will only allow authenticated users to log in and/or make changes to it.
● It will allow the user to mark the attendance of students via the face recognition technique.
● It will detect faces via webcam and then recognize them.
● After recognition, it will mark the attendance of the recognized student and update the
attendance record.
● The user will be able to print these record details afterward.
6.6 Contribution
Face recognition is the most natural biometric recognition technology according
to the cognitive rules of human beings; its algorithm is roughly ten times more complex than a
fingerprint algorithm. Compared to fingerprint recognition, face recognition has the following advantages:
6.7.1.1 User Interfaces
▪ GUI along with the meaningful frames and buttons.
▪ Reports are generated as per the requirement.
The machine will have to be part of the Local Area Network to access the central database.
The functional requirements that our product Face Recognition Attendance Management
System will fulfill are given below.
6.9.1 OpenCV
OpenCV's built-in face detector breaks the problem of identifying a face into
thousands of smaller, bite-sized tasks, each of which is easy to solve. These tasks
are also called classifiers.
Computer Vision : Computer Vision can be defined as a discipline that
explains how to reconstruct, interpret, and understand a 3D scene from its 2D
images, in terms of the properties of the structures present in the scene. It deals with
modeling and replicating human vision using computer software and hardware.
Features of OpenCV Library:
Using the OpenCV library, you can:
● Detect specific objects, such as faces, eyes and cars, in videos or images.
● Analyze video, i.e., estimate the motion in it, subtract the background, and track
objects in it.
OpenCV was originally developed in C++. In addition, Python and Java
bindings are provided. OpenCV runs on various operating systems such as
Windows, Linux, OS X, FreeBSD, NetBSD, OpenBSD, etc.
Before we install OpenCV in our working environment, we need to install
Python and NumPy.
After installing Python successfully, we can check it in the Command
Prompt (CMD).
Install OpenCV 4:
We are going to use the pip install method.
Open CMD as administrator.
Use the command: pip install opencv-python
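Collected as commands, the installation steps above look like this (assuming pip and python are on the PATH):

```shell
# install NumPy first, then OpenCV
pip install numpy
pip install opencv-python

# verify the install by printing the OpenCV version
python -c "import cv2; print(cv2.__version__)"
```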
IMPLEMENTATION AND
USER-INTERFACE
CHAPTER-7
This is the first GUI page of FRAS (Face Recognition Attendance System), in which the
login option is shown; by clicking on the login button we enter the next page.
This is the next page, where the user has to enter the username and password to log in.
After scanning the faces of all the students present, an attendance sheet is generated,
in which we can modify the attendance one last time before saving it.
Fig:7.1 Interface
The admin can create a training model for the face recognition algorithm from the dataset
created by Generate dataset, and can also grant access to any faculty member who wants to take
attendance by creating their user roll number and name.
This will get the dataset from a specific location, generate a training model using the
LBPH algorithm, and save it at a specific location under the name trainer.yml.
The database consists of three tables, named access, attendance, and student. The access
table stores the name, user ID and password for the admin and faculty. The student table stores
the name and ID of each student. The attendance table stores the date and attendance of students.
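As an illustration only, the three tables described above could be created with SQLite as follows; the column names and types here are assumptions based on the description, not the project's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path in the real system
conn.executescript("""
CREATE TABLE access (name TEXT, user_id TEXT PRIMARY KEY, password TEXT);
CREATE TABLE student (name TEXT, id INTEGER PRIMARY KEY);
CREATE TABLE attendance (student_id INTEGER, date TEXT, present INTEGER,
                         FOREIGN KEY (student_id) REFERENCES student(id));
""")

# hypothetical sample rows, matching the column order above
conn.execute("INSERT INTO student VALUES ('Bhavani', 435)")
conn.execute("INSERT INTO attendance VALUES (435, '2020-03-01', 1)")
print(conn.execute("SELECT COUNT(*) FROM attendance").fetchone()[0])  # 1
```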
CHAPTER-8
RESULTS
8.1 Result
Fig:8.1 Interface
When the train file is run, the interface window opens as shown in the figure
above. A student's data is created by entering the Name and ID; then, clicking the
'Take Image' button opens the webcam and captures the student's image, and the
student's data is saved to an Excel sheet in the 'Student Details' folder.
Fig:8.4 Database
After creating the data, we click the 'Train Image' button, which trains on the
student's images using the LBPH algorithm.
Attendance is taken by clicking 'Track Image'. When this button is clicked, the
webcam opens, the student's face is detected and compared with the database, and if
it matches, the student's 'ID' and 'Name' are shown as in the figure below.
Attendance is registered with the date and time in Excel sheet format for the
respective student if a match is found; otherwise, no record is created.
CHAPTER-9
9.1 Conclusion
Advantages
● Easy To Use
● Inexpensive
Applications
Some further work remains on this project: alerting students by SMS regarding their
attendance. A GSM module is used for this purpose, and the student's parents receive this
SMS alert.
APPENDIX
import tkinter as tk
import cv2, os
import shutil
import csv
import numpy as np
import pandas as pd
import datetime
import time
from PIL import Image  # needed by getImagesAndLabels below
window = tk.Tk()
window.title("Face_Recogniser")
dialog_title = 'QUIT'
#window.geometry('1280x720')
window.configure(background='blue')
#window.attributes('-fullscreen', True)
window.grid_rowconfigure(0, weight=1)
window.grid_columnconfigure(0, weight=1)
#path = "profile.jpg"
#Creates a Tkinter-compatible photo image, which can be used everywhere Tkinter expects an
image object.
#img = ImageTk.PhotoImage(Image.open(path))
#The Label widget is a standard Tkinter widget used to display a text or image on the screen.
#cv_img = cv2.imread("img541.jpg")
#canvas.pack(side="left")
lbl.place(x=400, y=200)
txt.place(x=700, y=215)
lbl2.place(x=400, y=300)
txt2.place(x=700, y=315)
lbl3.place(x=400, y=400)
message2.place(x=700, y=650)
def clear():
    txt.delete(0, 'end')
    res = ""
    message.configure(text=res)

def clear2():
    txt2.delete(0, 'end')
    res = ""
    message.configure(text=res)
def is_number(s):
    try:
        float(s)
        return True
    except ValueError:
        pass
    try:
        import unicodedata
        unicodedata.numeric(s)
        return True
    except (TypeError, ValueError):
        pass
    return False
def TakeImages():
    Id = (txt.get())
    name = (txt2.get())
    cam = cv2.VideoCapture(0)
    harcascadePath = "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(harcascadePath)
    sampleNum = 0
    while(True):
        ret, img = cam.read()
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, 1.3, 5)
        for (x, y, w, h) in faces:
            cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
            sampleNum = sampleNum + 1
            # save the captured face region (reconstructed step; file naming assumed)
            cv2.imwrite("TrainingImage/" + name + "." + Id + "." + str(sampleNum) + ".jpg",
                        gray[y:y+h, x:x+w])
            cv2.imshow('frame', img)
        if cv2.waitKey(100) & 0xFF == ord('q'):
            break
        elif sampleNum > 60:
            break
cam.release()
cv2.destroyAllWindows()
writer = csv.writer(csvFile)
writer.writerow(row)
csvFile.close()
message.configure(text= res)
else:
if(is_number(Id)):
message.configure(text= res)
if(name.isalpha()):
message.configure(text= res)
def TrainImages():
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    harcascadePath = "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(harcascadePath)
    faces, Id = getImagesAndLabels("TrainingImage")
    recognizer.train(faces, np.array(Id))
    recognizer.save("TrainingImageLabel/Trainner.yml")
    message.configure(text=res)
def getImagesAndLabels(path):
    imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
    faces = []
    Ids = []
    # loop through all the image paths, loading the Ids and the images
    for imagePath in imagePaths:
        pilImage = Image.open(imagePath).convert('L')
        imageNp = np.array(pilImage, 'uint8')
        Id = int(os.path.split(imagePath)[-1].split(".")[1])
        faces.append(imageNp)
        Ids.append(Id)
    return faces, Ids
def TrackImages():
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.read("TrainingImageLabel/Trainner.yml")
    harcascadePath = "haarcascade_frontalface_default.xml"
    faceCascade = cv2.CascadeClassifier(harcascadePath)
    df = pd.read_csv("StudentDetails/StudentDetails.csv")
    cam = cv2.VideoCapture(0)
    font = cv2.FONT_HERSHEY_SIMPLEX
    col_names = ['Id', 'Name', 'Date', 'Time']
    gID = []
while True:
ret, im =cam.read()
gray=cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
faces=faceCascade.detectMultiScale(gray, 1.2,5)
for(x,y,w,h) in faces:
cv2.rectangle(im,(x,y),(x+w,y+h),(225,0,0),2)
ts = time.time()
date = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d')
timeStamp = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
aa=df.loc[df['Id'] == Id]['Name'].values
tt=str(Id)+"-"+aa
attendance.loc[len(attendance)] = [Id,aa,date,timeStamp]
fileName="Attendance/Attendance_"+str(Id)+".csv"
gID = np.append(gID,Id)
else:
Id='Unknown'
tt=str(Id)
noOfFile=len(os.listdir("ImagesUnknown"))+1
attendance=attendance.drop_duplicates(subset=['Id'],keep='first')
cv2.imshow('im',im)
if (cv2.waitKey(1)==ord('q')):
break
ts = time.time()
date = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d')
timeStamp = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
Hour,Minute,Second=timeStamp.split(":")
fileName="Attendance/Attendance_"+date+".csv"
attendance.to_csv(fileName,index=False)
mylist = list(dict.fromkeys(gID))
length = len(mylist)
i=0
fileName="Attendance/Attendance_"+str(mylist[i])+".csv"
file_exists = os.path.isfile(fileName)
headers = ['Date']
if not file_exists:
writer.writerow({'Date': [date]})
i += 1
cam.release()
cv2.destroyAllWindows()
res=attendance
message2.configure(text= res)
clearButton.place(x=950, y=200)
clearButton2.place(x=950, y=300)
takeImg.place(x=200, y=500)
trainImg.place(x=500, y=500)
trackImg.place(x=800, y=500)
quitWindow.place(x=1100, y=500)
copyWrite.tag_configure("superscript", offset=10)
copyWrite.configure(state="disabled",fg="red" )
copyWrite.pack(side="left")
copyWrite.place(x=800, y=750)
window.mainloop()
REFERENCES/BIBLIOGRAPHY
Books
1) Learning OpenCV 3 Computer Vision with Python by Joe Minichino and Joseph Howse
2) Learning Python by Mark Lutz and David Ascher
3) Learning OpenCV: Computer Vision with the OpenCV Library
4) Learning OpenCV 3 Computer Vision with Python, Second Edition: Unleash the power
of computer vision with Python using OpenCV, 2nd Edition
References
Websites:
1) Diepen, G. (2017). Detecting and tracking a face with Python and OpenCV. Available at
https://www.guidodiepen.nl/2017/02/detecting-and-tracking-a-face-with-python-and-opencv
2) https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_tutorials.html
3) https://www.tutorialspoint.com/python/python_gui_programming.html
4) https://www.pyimagesearch.com/2015/05/11/creating-a-face-detection-api-with-python-and-
opencv-in-just-5-minutes/
5) https://www.bogotobogo.com/python/OpenCV_Python/python_opencv3.php