Report On Face Recognition System
1. Introduction
This paper shows how face detection and recognition algorithms from image processing can be used to build a system that detects and recognises the frontal faces of students in a classroom. "A face is the front part of a person's head from the forehead to the chin, or the corresponding part of an animal" (Oxford Dictionary). In human interactions, the face is the most important factor, as it carries important information about an individual, and all humans have the ability to recognise individuals from their faces. The proposed solution is to develop a working prototype of a system that facilitates class control for Kingston University lecturers by detecting the frontal faces of students in a picture taken in a classroom. The second part of the system performs facial recognition against a small database.
In recent years, much research has been carried out, and face recognition and detection systems have been developed, some of which are used by social media platforms such as Facebook, in banking apps, and by government bodies such as the Metropolitan Police.
There are two predominant approaches to the face recognition problem: geometric (feature based) and photometric (view based). As researchers' interest in face recognition continued, many different algorithms were developed, three of which have been well studied in the face recognition literature: Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Elastic Bunch Graph Matching (EBGM). One related line of work recovers object shape from images taken under different lighting conditions; the shape of the recovered object is defined by a gradient map, which is made up of an array of surface normals (Zhao and Chellappa, 2006) (Figure 1).
Face detection involves separating image windows into two classes: one containing faces, and one containing the background (clutter). It is difficult because, although commonalities exist between faces, they can vary considerably in terms of age, skin colour and facial expression. The problem is further complicated by differing lighting conditions, image qualities and geometries, as well as the possibility of partial occlusion and disguise. An ideal face detector would therefore be able to detect the presence of any face under any set of lighting conditions, upon any background. The face detection task can be broken down into two steps. The first step is a classification task that takes some arbitrary image as input and outputs a binary value of yes or no, indicating whether there are any faces present in the image. The second step is the face localization task, which takes an image as input and outputs the location of any face or faces within that image as a bounding box (x, y, width, height). The face detection pipeline used here begins with the following step:
1. Pre-Processing: To reduce the variability in the faces, the images are processed before they are fed into the network. All positive examples, that is the face images, are obtained by cropping images with frontal faces to include only the front view. All the cropped images are then corrected for lighting through standard algorithms.
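As an illustration, a minimal pre-processing sketch in Python with OpenCV might look as follows; the file name, target size and the use of histogram equalization as the lighting-correction step are assumptions for illustration, not necessarily the exact algorithms used here:

import cv2

# Hypothetical cropped frontal-face image (file name assumed)
img = cv2.imread("student_face.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # work on gray levels
gray = cv2.resize(gray, (100, 100))            # normalize the window size (size assumed)
equalized = cv2.equalizeHist(gray)             # standard lighting correction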
2. LITERATURE SURVEY
Face detection is a computer technology that determines the location and size of a human face in an arbitrary (digital) image. The facial features are detected, and any other objects such as trees, buildings and bodies are ignored. Face detection can be regarded as a specific case of object-class detection, where the task is to find the locations and sizes of all objects in an image that belong to a given class. It can also be regarded as a more general case of face localization, in which the task is to find the locations and sizes of a known number of faces (usually one). Basically, there are two types of approaches to detecting the facial part in a given image: the feature-based approach and the image-based approach. The feature-based approach tries to extract features of the image and match them against knowledge of face features, while the image-based approach tries to get the best match between training and testing images.
2.1 FEATURE BASE APPROACH:
a) Active Shape Model: Active shape models focus on complex non-rigid features such as the actual physical and higher-level appearance of features. Active Shape Models (ASMs) are aimed at automatically locating landmark points that define the shape of any statistically modelled object in an image, in this case facial features such as the eyes, lips, nose, mouth and eyebrows. The training stage of an ASM involves building a statistical facial model from a training set containing images with manually annotated landmarks.
b) Snakes: The first type uses a generic active contour called a snake, first introduced by Kass et al. in 1987. Snakes are used to identify head boundaries [8,9,10,11,12]. In order to achieve the task, a snake is first initialized in the proximity of a head boundary. It then locks onto nearby edges and subsequently assumes the shape of the head. The evolution of a snake is achieved by minimizing an energy function Esnake (by analogy with physical systems), denoted as

Esnake = Einternal + Eexternal,

where Einternal and Eexternal are the internal and external energy functions. Internal energy is the part that depends on the intrinsic properties of the snake and defines its natural evolution, which is typically shrinking or expanding. The external energy counteracts the internal energy and enables the contour to deviate from this natural evolution and eventually assume the shape of nearby features, i.e. the head boundary, at a state of equilibrium. There are two main considerations in forming snakes: the selection of energy terms and energy minimization. Elastic energy is commonly used as the internal energy; it varies with the distance between control points on the snake, giving the contour an elastic-band characteristic that causes it to shrink or expand. The external energy, on the other hand, relies on image features. Energy minimization is done by optimization techniques such as steepest gradient descent, which is computationally expensive; Huang and Chen, and Lam and Yan, both employ fast iteration methods based on greedy algorithms. Snakes have some demerits: the contour often becomes trapped on false image features, and snakes are not suitable for extracting non-convex features.
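As a concrete illustration of the energy-minimization idea, scikit-image ships an active_contour implementation; the sketch below, with an assumed image file and an assumed circular initialization around the head, follows the same Einternal/Eexternal trade-off (alpha and beta weight the internal elastic/smoothness energy, and the image term supplies the external energy), but it is not one of the classical formulations cited above:

import numpy as np
from skimage import io, color, filters
from skimage.segmentation import active_contour

img = color.rgb2gray(io.imread("head.jpg"))      # assumed input image
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([120 + 90 * np.sin(s),    # initial contour rows (assumed center/radius)
                        150 + 80 * np.cos(s)])   # initial contour cols
snake = active_contour(filters.gaussian(img, sigma=3), init,
                       alpha=0.015, beta=10, gamma=0.001)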
2.1.1 Deformable Templates:
Deformable templates were introduced by Yuille et al. to take into account a priori knowledge of facial features and to improve on the performance of snakes. Locating a facial feature boundary is not an easy task, because the local evidence of facial edges is difficult to organize into a sensible global entity using generic contours. The low brightness contrast around some of these features also makes the edge detection process difficult. Yuille et al. took the concept of snakes a step further by incorporating global information about the eye to improve the reliability of the extraction process.
Deformable template approaches were developed to solve this problem. Deformation is based on local valley, edge, peak and brightness information. Besides the face boundary, salient feature (eyes, nose, mouth and eyebrows) extraction is a great challenge in face recognition. The total energy is

E = Ev + Ee + Ep + Ei + Einternal,

where Ev, Ee, Ep and Ei are the external energy terms due to valley, edge, peak and image brightness respectively, and Einternal is the internal energy.
Independently of computerized image analysis, and before ASMs were developed, researchers developed statistical models of shape. The idea is that once you represent shapes as vectors, you can apply standard statistical methods to them just like any other multivariate object. These models learn allowable constellations of shape points from training examples and use principal components to build what is called a Point Distribution Model. These have been used in diverse ways, for example for categorizing Iron Age brooches. Ideal Point Distribution Models can only deform in ways that are characteristic of the object. Cootes and his colleagues were seeking models that do exactly that, so if a beard, say, covers the chin, the shape model can "override the image" to approximate the position of the chin under the beard. It was therefore natural (but perhaps only in retrospect) to adopt Point Distribution Models. This synthesis of ideas from image processing and statistical shape modelling led to the Active Shape Model. The first parametric statistical shape model for image analysis, based on principal components of inter-landmark distances, was presented by Cootes and Taylor. Building on this approach, Cootes, Taylor and their colleagues then released a series of papers that culminated in what we call the classical Active Shape Model.
2.2 LOW LEVEL ANALYSIS:
This approach is based on low-level visual features such as color, intensity, edges and motion.
Skin Color Base: Color is a vital feature of human faces. Using skin color as a feature for tracking a face has several advantages: color processing is much faster than processing other facial features, and under certain lighting conditions color is orientation invariant. This property makes motion estimation much easier, because only a translation model is needed. Tracking human faces using color as a feature also has several problems; for example, the color representation of a face obtained by a camera is influenced by many factors (ambient light, object movement, etc.).
There are mainly three face detection algorithms based on color space models: RGB, YCbCr and HSI. The implementation of these algorithms involves three main steps, viz.
(1) Classify the skin region in the color space,
Crowley and Coutaz suggested one of the simplest skin color algorithms for detecting skin pixels. The perceived human color varies as a function of the relative direction to the illumination. The pixels of a skin region can be detected using a normalized color histogram and can be normalized for changes in intensity by dividing by luminance. An [R, G, B] vector is thus converted into an [r, g] vector of normalized color, which provides a fast means of skin detection. This algorithm fails when there are other skin regions in view, such as legs and arms. Chai and Ngan [27] suggested a skin color classification algorithm using the YCbCr color space. Research found that pixels belonging to a skin region have similar Cb and Cr values, so thresholds [Cr1, Cr2] and [Cb1, Cb2] are chosen, and a pixel is classified as having skin tone if its [Cr, Cb] values fall within the thresholds. The skin color distribution then gives the face portion in the color image. This algorithm has the constraint that the face should be the only skin region in the image. Kjeldsen and Kender defined a color predicate in HSV color space to separate skin regions from the background. Skin color classification in HSI color space works the same way as in YCbCr color space, but here the responsible values are hue (H) and saturation (S): thresholds [H1, S1] and [H2, S2] are chosen, and a pixel is classified as having skin tone if its [H, S] values fall within the thresholds, which gives the localized face image. This algorithm has the same constraint as the two above.
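A minimal sketch of the YCbCr thresholding scheme described above, using OpenCV; the input file is hypothetical, and the thresholds [Cr1, Cr2] = [133, 173] and [Cb1, Cb2] = [77, 127] are commonly cited approximations that would need tuning per dataset:

import cv2
import numpy as np

img = cv2.imread("classroom.jpg")                  # hypothetical input
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)     # note: OpenCV orders channels Y, Cr, Cb
lower = np.array([0, 133, 77], dtype=np.uint8)     # [Y min, Cr1, Cb1]
upper = np.array([255, 173, 127], dtype=np.uint8)  # [Y max, Cr2, Cb2]
mask = cv2.inRange(ycrcb, lower, upper)            # 255 where the pixel has skin tone
skin = cv2.bitwise_and(img, img, mask=mask)        # skin-color distribution of the image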
Face detection based on edges was introduced by Sakai et al. This work was based on analysing line drawings of faces from photographs, aiming to locate facial features. Later, Craw et al. proposed a hierarchical framework based on Sakai et al.'s work to trace a human head outline, and thereafter remarkable work was carried out by many researchers in this specific area. The method suggested by Anila and Devarajan is very simple and fast. They proposed a framework consisting of three steps: first, the images are enhanced by applying a median filter for noise removal and histogram equalization for contrast adjustment; second, the edge image is constructed from the enhanced image by applying the Sobel operator; then a novel edge tracking algorithm is applied to extract sub-windows from the enhanced image based on edges. Finally, they used the Back Propagation Neural Network (BPN) algorithm to classify each sub-window as either face or non-face.
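The enhancement and edge-image steps of such a framework can be sketched in a few lines of OpenCV; this is only the generic median-filter / histogram-equalization / Sobel portion with an assumed input file, not Anila and Devarajan's edge-tracking or BPN stages:

import cv2

img = cv2.imread("input.jpg", 0)                   # assumed grayscale input
# Step 1: enhancement - median filter for noise, histogram equalization for contrast
enhanced = cv2.equalizeHist(cv2.medianBlur(img, 5))
# Step 2: edge image via the Sobel operator (gradient magnitude of x and y responses)
gx = cv2.Sobel(enhanced, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(enhanced, cv2.CV_64F, 0, 1, ksize=3)
edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))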
These algorithms aim to find structural features that exist even when the pose, viewpoint, or
lighting conditions vary, and then use these to locate faces. These methods are designed
mainly for face localization.
Paul Viola and Michael Jones presented an approach for object detection which minimizes
computation time while achieving high detection accuracy. Paul Viola and Michael Jones
[39] proposed a fast and robust method for face detection which is 15 times quicker than any
technique at the time of release with 95% accuracy at around 17 fps.The technique relies on the
use of simple Haar-like features that are evaluated quickly through the use of a new image
representation. Based on the concept of an ―Integral Image‖ it generates a large set of features
and uses the boosting algorithm AdaBoost to reduce the over complete set and the introduction
of a degenerative tree of the boosted classifiers provides for robust and fast interferences. The
detector is applied in a scanning fashion and used on gray-scale images, the scanned window
that is applied can also be scaled, as well as the features evaluated.
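The integral image itself is simple to express in NumPy; the sketch below illustrates the idea (it is not Viola and Jones' implementation). Once the integral image is built, the sum of any rectangle needs only four array references, and a Haar-like feature is just the difference of such rectangle sums:

import numpy as np

def integral_image(img):
    # ii[y, x] = sum of all pixels above and to the left of (y, x), inclusive
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    # Sum of the w x h rectangle with top-left corner (x, y), in constant time
    A = ii[y - 1, x - 1] if (x > 0 and y > 0) else 0
    B = ii[y - 1, x + w - 1] if y > 0 else 0
    C = ii[y + h - 1, x - 1] if x > 0 else 0
    D = ii[y + h - 1, x + w - 1]
    return D - B - C + A

A two-rectangle Haar-like feature is then, for example, rect_sum(ii, x, y, w, h) - rect_sum(ii, x + w, y, w, h).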
Gabor Feature Method:
Sharif et al. proposed an Elastic Bunch Graph Map (EBGM) algorithm that successfully implements face detection using Gabor filters. The proposed system applies 40 different Gabor filters to an image, yielding 40 filtered images with different angles and orientations. Next, the maximum intensity points in each filtered image are calculated and marked as fiducial points. The system then reduces these points according to the distance between them, and computes the distances between the reduced points using the distance formula. Finally, the distances are compared with the database; if a match occurs, the faces in the image are detected. The equation of the Gabor filter is given in [40].
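OpenCV's getGaborKernel can be used to sketch such a 40-filter bank; the 5 wavelengths x 8 orientations and all kernel parameters below are assumed illustrative values, not those of Sharif et al.:

import cv2
import numpy as np

img = cv2.imread("face.jpg", 0)                      # assumed grayscale input
responses = []
for lambd in (4, 6, 8, 10, 12):                      # 5 scales (wavelengths, assumed)
    for theta in np.arange(0, np.pi, np.pi / 8):     # 8 orientations
        # getGaborKernel(ksize, sigma, theta, lambd, gamma, psi)
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, lambd, 0.5, 0)
        responses.append(cv2.filter2D(img, cv2.CV_32F, kernel))
# Fiducial-point candidates: maximum-intensity location in each filtered image
points = [np.unravel_index(np.argmax(r), r.shape) for r in responses]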
2.5 CONSTELLATION METHOD
All the methods discussed so far are able to track faces, but locating faces of various poses against a complex background remains truly difficult. To reduce this difficulty, investigators group facial features in face-like constellations using more robust modelling approaches such as statistical analysis. Various types of face constellations have been proposed by Burl et al., who established the use of statistical shape theory on features detected with a multiscale Gaussian derivative filter. Huang et al. also apply a Gaussian filter for pre-processing in a framework based on image feature analysis.

Image Base Approach:
An early example of employing eigenvectors in face recognition was by Kohonen, in which a simple neural network was demonstrated to perform face recognition on aligned and normalized face images. Kirby and Sirovich suggested that images of faces can be linearly encoded using a modest number of basis images; the idea was arguably proposed first by Pearson in 1901 and then by Hotelling in 1933. Given a collection of training images of n by m pixels, represented as vectors of size m x n, basis vectors spanning an optimal subspace are determined such that the mean square error between the projection of the training images onto this subspace and the original images is minimized. They call the set of optimal basis vectors eigenpictures, since these are simply the eigenvectors of the covariance matrix computed from the vectorized face images in the training set. Experiments with a set of 100 images show that a face image of 91 x 50 pixels can be effectively encoded using only 50 eigenpictures.
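The eigenpicture encoding can be sketched directly in NumPy; the random matrix below merely stands in for 100 vectorized 91 x 50 training images:

import numpy as np

X = np.random.rand(100, 91 * 50)        # stand-in for 100 flattened face images
mean_face = X.mean(axis=0)
A = X - mean_face                        # center the data
# Rows of Vt are the eigenvectors of the covariance matrix (the eigenpictures)
U, S, Vt = np.linalg.svd(A, full_matrices=False)
eigenpictures = Vt[:50]                  # keep the 50 leading basis vectors
# Encode a face with 50 coefficients and reconstruct it from them
coeffs = eigenpictures @ (X[0] - mean_face)
reconstruction = mean_face + eigenpictures.T @ coeffs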
2.8 HARDWARE REQUIREMENTS
Processor: Intel Core i3 or above
RAM: 4.0 GB or above
Hard disk: 250 GB or above
Framework: OpenCV
Operating System: Windows 8 or above
3. DIGITAL IMAGE PROCESSING
Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation, and processing of image data for machine perception. In this second application area, interest focuses on procedures for extracting image information in a form suitable for computer processing. Examples include automatic character recognition and industrial machine vision for product assembly and inspection.
Image:
An image refers to a 2D light intensity function f(x, y), where (x, y) denotes spatial coordinates and the value of f at any point (x, y) is proportional to the brightness or gray level of the image at that point. A digital image is an image f(x, y) that has been discretized both in spatial coordinates and in brightness. The elements of such a digital array are called image elements or pixels.
The storage and processing requirements increase rapidly with the spatial resolution and the number of gray levels.
Example: A 256 gray-level image of size 256 x 256 occupies 64K bytes of memory (256 x 256 pixels x 1 byte per pixel = 65,536 bytes).
Types of image processing
Low level processing means performing basic operations on images, such as reading an image, resizing, rotating, RGB to gray-level conversion, histogram equalization, etc. The output image obtained after low level processing is a raw image. Medium level processing means extracting regions of interest from the output of the low level processing; it deals with the identification of boundaries, i.e. edges, a process called segmentation. High level processing deals with adding artificial intelligence to the medium level processed signal.
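A minimal sketch of these low and medium level operations with OpenCV, assuming a hypothetical input file and Canny as the edge detector:

import cv2

img = cv2.imread("input.jpg")                     # hypothetical input
# Low level: resize, rotate, RGB-to-gray conversion, histogram equalization
small = cv2.resize(img, (320, 240))
rotated = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
equalized = cv2.equalizeHist(gray)
# Medium level: boundary identification (segmentation) via edge detection
edges = cv2.Canny(equalized, 100, 200)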
The fundamental steps in digital image processing are:
1. Image acquisition: to acquire a digital image.
2. Image pre-processing: to improve the image in ways that increase the chances for success of the other processes.
3. Image segmentation: to partition an input image into its constituent parts or objects.
4. Image representation: to convert the input data to a form suitable for computer processing.
5. Image description: to extract features that result in some quantitative information of interest, or features that are basic for differentiating one class of objects from another.
3.3 ELEMENTS OF DIGITAL IMAGE PROCESSING SYSTEMS
A digital image processing system contains the following blocks as shown in the figure
The basic operations performed in a digital image processing system include
1. Acquisition
2. Storage
3. Processing
4. Communication
5. Display
A simple image model: f(x, y) = i(x, y) r(x, y), where i(x, y) is the illumination incident on the scene, with 0 < i(x, y) < ∞, and r(x, y) is the reflectance of the objects in the scene, with 0 < r(x, y) < 1.
Examples of typical ranges of illumination i(x, y) for visible light (average values):
• Sun on a clear day: ~90,000 lm/m^2, down to ~10,000 lm/m^2 on a cloudy day
Example of a typical reflectance r(x, y): 0.93 for snow.
Fig 3.3: Image types and their descriptions (table not reproduced).
4. Project Activities
- Open the software AMS_Run.py.
- After it runs, you need to give your face data to the system: enter your ID and name in the box, then click the `Take Images` button.
- It will collect 200 images of your face and save them in the `TrainingImage` folder.
- After that we need to train a model: click the `Train Image` button.
- After training, click `Automatic Attendance`; it can fill the attendance by your face using our trained model (the model is saved in `TrainingImageLabel`).
- It will create a `.csv` file of attendance according to time & subject.
- You can store the data in a database (install wampserver); change the DB name accordingly in `AMS_Run.py`.
- The `Manually Fill Attendance` button in the UI is for filling attendance manually (without face recognition); it also creates a `.csv` file and stores it in the database.
5. Modules
A module allows you to logically organize your Python code. Grouping related code into a
module makes the code easier to understand and use. A module is a Python object with arbitrarily
named attributes that you can bind and reference.
Simply, a module is a file consisting of Python code. A module can define functions, classes and
variables. A module can also include runnable code.
Example
The Python code for a module named aname normally resides in a file named aname.py. Here's an example of a simple module, support.py:
def print_func(par):
    print("Hello :", par)
    return
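Assuming support.py is on the module search path, it can then be used like this:

import support

support.print_func("Zara")   # prints: Hello : Zara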
A from...import statement such as `from fib import fibonacci` does not import the entire module fib into the current namespace; it just introduces the item fibonacci from the module fib into the global symbol table of the importing module.
5.4 Namespaces and Scoping
Variables are names (identifiers) that map to objects. A namespace is a dictionary of variable
names (keys) and their corresponding objects (values).
A Python statement can access variables in a local namespace and in the global namespace. If a
local and a global variable have the same name, the local variable shadows the global variable.
Each function has its own local namespace. Class methods follow the same scoping rule as
ordinary functions.
Python makes educated guesses on whether variables are local or global. It assumes that any
variable assigned a value in a function is local.
Therefore, in order to assign a value to a global variable within a function, you must first use the
global statement.
The statement global VarName tells Python that VarName is a global variable. Python stops
searching the local namespace for the variable.
For example, we define a variable Money in the global namespace. Within a function, we then assign Money a value, so Python assumes Money is a local variable. However, we accessed the value of the (local) variable Money before setting it, so an UnboundLocalError results. Uncommenting the global statement fixes the problem.
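A sketch of that example (the function name AddMoney is assumed for illustration):

Money = 2000                     # defined in the global namespace

def AddMoney():
    # Uncomment the following line to fix the UnboundLocalError:
    # global Money
    Money = Money + 1            # local 'Money' is read before assignment

print(Money)
AddMoney()                       # raises UnboundLocalError as written
print(Money)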
5.7 Packages in Python
A package is a hierarchical file directory structure that defines a single Python application
environment that consists of modules and subpackages and sub-subpackages, and so on.
Consider a file Pots.py available in the Phone directory. This file has the following lines of source code:
#!/usr/bin/python
def Pots():
    print("I'm Pots Phone")
In a similar way, we have another two files containing different functions with the same names as above:
Phone/Isdn.py, containing the function Isdn()
Phone/G3.py, containing the function G3()
Now, create one more file, __init__.py, in the Phone directory:
Phone/__init__.py
To make all of your functions available when you've imported Phone, you need to put explicit import statements in __init__.py as follows (in Python 3, these must be relative imports):
from .Pots import Pots
from .Isdn import Isdn
from .G3 import G3
After you add these lines to __init__.py, all of these functions are available when you import the Phone package.
#!/usr/bin/python
import Phone

Phone.Pots()
Phone.Isdn()
Phone.G3()
When the above code is executed, it produces the following result:
I'm Pots Phone
I'm ISDN Phone
I'm 3G Phone
6. Frameworks Used
6.1 OpenCV
OpenCV is a library designed to solve computer vision problems; OpenCV-Python is its Python interface. OpenCV was originally developed by Intel in 1999 and was later supported by Willow Garage. It supports a wide variety of programming languages such as C++, Python and Java, and runs on multiple platforms including Windows, Linux and macOS.
OpenCV-Python is essentially a wrapper around the original C++ library. Through it, all of the OpenCV array structures get converted to and from NumPy arrays, which makes it easy to integrate OpenCV with other libraries that use NumPy, such as SciPy and Matplotlib.
Next, let us look at some of the basic operations that we can perform with OpenCV.
import cv2
# colored Image
img = cv2.imread("Penguins.jpg", 1)

As seen in the above piece of code, the first requirement is to import the OpenCV module. Later we read the image using imread; the 1 in the parameters denotes that it is a color image. If the parameter were 0 instead of 1, it would mean that the image being imported is a black and white image. The name of the image here is 'Penguins'. Pretty straightforward, right?
# Black and White (gray scale)
img_1 = cv2.imread("Penguins.jpg", 0)
print(img.shape)

By the shape of the image, we mean the shape of the NumPy array. As you can see from executing the code, the matrix consists of 768 rows and 1024 columns.
import cv2
img = cv2.imread("Penguins.jpg", 1)
cv2.imshow("Penguins", img)
cv2.waitKey(0)
# cv2.waitKey(2000)
cv2.destroyAllWindows()
As you can see, we first import the image using imread. We require a window output to display the image, so we use the imshow function to display it by opening a window. There are two parameters to the imshow function: the name of the window and the image object to be displayed. Then we wait for a user event: waitKey makes the window static until the user presses a key, and the parameter passed to it is the time in milliseconds. Lastly, we use destroyAllWindows to close the window once waitKey returns.
import cv2
img = cv2.imread("Penguins.jpg", 1)
resized_image = cv2.resize(img, (650, 500))  # target size chosen arbitrarily
cv2.imshow("Penguins", resized_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
6.6 Face Detection Using OpenCV
This seems complex at first but it is very easy. Let me walk you through the entire process and you
will feel the same.
Step 1: Considering our prerequisites, we will require an image, to begin with. Later we need to
create a cascade classifier which will eventually give us the features of the face.
Step 2: This step involves making use of OpenCV, which will read the image and the features file. At this point, the primary data are NumPy arrays. All we need to do is search the NumPy ndarray for the row and column values of the face; this is the array holding the face rectangle coordinates.
Step 3: This final step involves displaying the image with the rectangular face box.
First, we create a CascadeClassifier object to extract the features of the face as explained earlier.
The path to the XML file which contains the face features is the parameter here.
The next step would be to read an image with a face in it and convert it into a grayscale image using COLOR_BGR2GRAY. Following this, we search for the coordinates of the face in the image. This is done using detectMultiScale. What coordinates, you ask? The coordinates of the face rectangle. The scaleFactor shrinks the search scale by 5% per step until the face is found; on the whole, the smaller the value, the greater the accuracy.
6.7 Adding the rectangular face box:
This logic is very simple, as simple as making use of a for loop. We draw the rectangle using cv2.rectangle, passing parameters such as the image object, the BGR values of the box outline and the width of the rectangle outline. Check out the following code:
import cv2

# Create a CascadeClassifier Object
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
# Reading the image as it is
img = cv2.imread("photo.jpg")
# Reading the image as a gray scale image
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Search the co-ordinates of the face(s) in the image
faces = face_cascade.detectMultiScale(gray_img, scaleFactor=1.05, minNeighbors=5)
for x, y, w, h in faces:
    img = cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 3)
resized = cv2.resize(img, (int(img.shape[1] / 7), int(img.shape[0] / 7)))
cv2.imshow("Gray", resized)
cv2.waitKey(0)
cv2.destroyAllWindows()
7. Conclusion and Future Work
7.1 Conclusion
We developed an Attendance System Using Face Recognition, which uses various Python frameworks (OpenCV, Pillow, NumPy, etc.). Through this project we can easily take attendance without using pen and paper. The automatic attendance-filling feature in this project needs a webcam to capture the student's image: attendance is marked only if the student is physically present in the class; otherwise the student is marked as absent. Teachers can fill in the attendance of students manually if the webcam is not working properly.
We tried our best to develop our project as per the SRS and in the specified period of time, and it is a matter of satisfaction that we finished the work accordingly. However, this project has a broad scope. Given time and opportunity, we would like to add the following features to this app:
1. Students Registration
2. Fees Management
3. Entry Pass for students
4. Visitors Profile for Campus Visit
References
Appendix
Libraries Used
Automatic Management System
GUI for manually filling attendance
# Excerpts from AMS_Run.py. Imports assumed for these excerpts:
import time
import datetime
import csv
import os
import cv2
import numpy as np
import pymysql
import tkinter as tk
from tkinter import *
from PIL import Image

def manually_fill():
    global sb
    sb = tk.Tk()
    sb.iconbitmap('AMS.ico')
    sb.title("Enter subject name...")
    sb.geometry('580x320')
    sb.configure(background='snow')

    def err_screen_for_subject():
        def ec_delete():
            ec.destroy()
        global ec
        ec = tk.Tk()
        ec.geometry('300x100')
        ec.iconbitmap('AMS.ico')
        ec.title('Warning!!')
        ec.configure(background='snow')
        Label(ec, text='Please enter your subject name!!!', fg='red',
              bg='white', font=('times', 16, ' bold ')).pack()
        Button(ec, text='OK', command=ec_delete, fg="black", bg="lawn green",
               width=9, height=1, activebackground="Red",
               font=('times', 15, ' bold ')).place(x=90, y=50)

    def fill_attendance():
        ts = time.time()
        Date = datetime.datetime.fromtimestamp(ts).strftime('%Y_%m_%d')
        timeStamp = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
        Time = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
        Hour, Minute, Second = timeStamp.split(":")
        # Connect to the database
        try:
            global cursor
            connection = pymysql.connect(host='localhost', port=3306,
                                         user='user', password='',
                                         db='manually_fill_attendance')
            cursor = connection.cursor()
        except Exception as e:
            print(e)
        try:
            cursor.execute(sql)  # create a table ('sql' is defined elsewhere in AMS_Run.py)
        except Exception as ex:
            print(ex)
        if subb == '':
            err_screen_for_subject()
        else:
            sb.destroy()
            MFW = tk.Tk()
            MFW.iconbitmap('AMS.ico')
            MFW.title("Manually attendance of " + str(subb))
            MFW.geometry('880x470')
            MFW.configure(background='snow')
            def del_errsc2():
                errsc2.destroy()

            def err_screen1():
                global errsc2
                errsc2 = tk.Tk()
                errsc2.geometry('330x100')
                errsc2.iconbitmap('AMS.ico')
                errsc2.title('Warning!!')
                errsc2.configure(background='snow')
                Label(errsc2, text='Please enter Student & Enrollment!!!',
                      fg='red', bg='white',
                      font=('times', 16, ' bold ')).pack()
                Button(errsc2, text='OK', command=del_errsc2, fg="black",
                       bg="lawn green", width=9, height=1,
                       activebackground="Red",
                       font=('times', 15, ' bold ')).place(x=90, y=50)
            global ENR_ENTRY
            ENR_ENTRY = tk.Entry(MFW, width=20, validate='key', bg="yellow",
                                 fg="red", font=('times', 23, ' bold '))
            ENR_ENTRY['validatecommand'] = (ENR_ENTRY.register(testVal),
                                            '%P', '%d')
            ENR_ENTRY.place(x=290, y=105)

            def remove_enr():
                ENR_ENTRY.delete(first=0, last=22)

            def remove_student():
                ...  # body continues in AMS_Run.py (truncated in this excerpt)
Taking images for the dataset
def take_img():
    l1 = txt.get()
    l2 = txt2.get()
    if l1 == '':
        err_screen()
    elif l2 == '':
        err_screen()
    else:
        try:
            cam = cv2.VideoCapture(0)
            detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
            Enrollment = txt.get()
            Name = txt2.get()
            sampleNum = 0
            while True:
                ret, img = cam.read()
                gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
                faces = detector.detectMultiScale(gray, 1.3, 5)
                for (x, y, w, h) in faces:
                    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
                    # incrementing sample number
                    sampleNum = sampleNum + 1
                    # saving the captured face in the dataset folder
                    cv2.imwrite("TrainingImage/ " + Name + "." + Enrollment +
                                '.' + str(sampleNum) + ".jpg",
                                gray[y:y + h, x:x + w])
                    cv2.imshow('Frame', img)
                # check for a 'q' key press (waitKey pauses 1 ms)
                if cv2.waitKey(1) & 0xFF == ord('q'):
                    break
                # break if the sample number is more than 70
                elif sampleNum > 70:
                    break
            cam.release()
            cv2.destroyAllWindows()
            ts = time.time()
            Date = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d')
            Time = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
            row = [Enrollment, Name, Date, Time]
            with open('StudentDetails/StudentDetails.csv', 'a+') as csvFile:
                writer = csv.writer(csvFile, delimiter=',')
                writer.writerow(row)
            res = ("Images Saved for Enrollment : " + Enrollment +
                   " Name : " + Name)
            Notification.configure(text=res, bg="SpringGreen3", width=50,
                                   font=('times', 18, 'bold'))
            Notification.place(x=250, y=400)
        except FileExistsError as F:
            f = 'Student Data already exists'
            Notification.configure(text=f, bg="Red", width=21)
            Notification.place(x=450, y=400)
Training the model
def trainimg():
    # LBPH recognizer requires the opencv-contrib-python package
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    global detector
    detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
    try:
        global faces, Id
        faces, Id = getImagesAndLabels("TrainingImage")
    except Exception as e:
        l = 'please make "TrainingImage" folder & put Images'
        Notification.configure(text=l, bg="SpringGreen3", width=50,
                               font=('times', 18, 'bold'))
        Notification.place(x=350, y=400)
    recognizer.train(faces, np.array(Id))
    try:
        recognizer.save("TrainingImageLabel/Trainner.yml")
    except Exception as e:
        q = 'Please make "TrainingImageLabel" folder'
        Notification.configure(text=q, bg="SpringGreen3", width=50,
                               font=('times', 18, 'bold'))
        Notification.place(x=350, y=400)

def getImagesAndLabels(path):
    imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
    # create empty face list
    faceSamples = []
    # create empty ID list
    Ids = []
    # now loop through all the image paths, loading the Ids and the images
    for imagePath in imagePaths:
        # load the image and convert it to gray scale
        pilImage = Image.open(imagePath).convert('L')
        # convert the PIL image into a numpy array
        imageNp = np.array(pilImage, 'uint8')
        # get the Id from the image file name
        Id = int(os.path.split(imagePath)[-1].split(".")[1])
        # extract the face from the training image sample
        faces = detector.detectMultiScale(imageNp)
        # if a face is there, append it to the list along with its Id
        for (x, y, w, h) in faces:
            faceSamples.append(imageNp[y:y + h, x:x + w])
            Ids.append(Id)
    return faceSamples, Ids
window.grid_rowconfigure(0, weight=1)
window.grid_columnconfigure(0, weight=1)
window.iconbitmap('AMS.ico')

def on_closing():
    from tkinter import messagebox
    if messagebox.askokcancel("Quit", "Do you want to quit?"):
        window.destroy()

window.protocol("WM_DELETE_WINDOW", on_closing)
message.place(x=80, y=20)
Notification = tk.Label(window, text="All things good", bg="Green",
                        fg="white", width=15, height=3,
                        font=('times', 17, 'bold'))

def testVal(inStr, acttyp):
    if acttyp == '1':  # insert
        if not inStr.isdigit():
            return False
    return True
AP = tk.Button(window, text="Check Register students", command=admin_panel,
               fg="black", bg="cyan", width=19, height=1,
               activebackground="Red", font=('times', 15, ' bold '))
AP.place(x=990, y=410)
FA = tk.Button(window, text="Automatic Attendance", fg="white",
               command=subjectchoose, bg="blue2", width=20, height=3,
               activebackground="Red", font=('times', 15, ' bold '))
FA.place(x=690, y=500)
window.mainloop()