
Published by: International Journal of Engineering Research & Technology (IJERT)
http://www.ijert.org  ISSN: 2278-0181
Vol. 9 Issue 06, June-2020

Face Recognition based Attendance System


Dhanush Gowda H.L¹, K Vishal², Keertiraj B. R³, Neha Kumari Dubey⁴, Pooja M. R.⁵
¹,²,³,⁴ Department of Computer Science and Engineering
⁵ Associate Professor, Department of Computer Science and Engineering
Vidyavardhaka College of Engineering, Mysuru, Karnataka, India

Abstract - Managing attendance can be a great burden on teachers if it is done by hand. To resolve this problem, a smart, automated attendance management system is used. By using this system, the problem of proxies and of students being marked present even though they are not physically present can easily be solved. The system marks attendance using a live video stream. Frames are extracted from the video using OpenCV. The main implementation steps in this type of system are face detection and recognition of the detected faces, for which dlib is used. The recognized faces are then matched against a database containing the students' faces. This model is an effective technique to manage the attendance of students.

Keywords: Attendance system, Automated attendance, Image Processing, Face detection, Feature matching, Face recognition.

I. INTRODUCTION

The human face plays an important role in our day-to-day life, mostly for identification of a person. Face recognition is a part of biometric identification that extracts the facial features of a face and then stores them as a unique face print to uniquely recognize a person. Biometric face recognition technology has gained the attention of many researchers because of its wide range of applications. Face recognition technology is better than other biometric-based recognition techniques such as fingerprint, palm print and iris because of its non-contact process. Face recognition can also recognize a person from a distance, without any contact or interaction with the person. Face recognition techniques are currently implemented in social media websites like Facebook, at airports and railway stations, and in crime investigations. Face recognition can also be used in crime reports: the captured photo can be stored in a database and used to identify a person. Facebook uses facial recognition to automate the process of tagging people. For face recognition we require a large dataset and complex features to identify a person under all conditions, such as changes in illumination, age and pose. Recent research shows steady improvement in facial recognition systems, and in the last ten years there has been huge development in recognition techniques.

However, most current facial recognition techniques work well only when the number of people in one frame is very small and under controlled illumination, proper positioning of faces and clear images. For face recognition there is a need for large data sets and complex features to uniquely identify the different subjects while handling obstacles like illumination, pose and aging. During the recent few years, good improvement has been made in facial recognition systems; in comparison to the last decade, one can observe an enormous development in the world of face recognition. Currently, most facial recognition systems perform well with a limited number of faces in the frame, and these methodologies have been tested under controlled lighting conditions, proper face poses and non-blurry images. The face recognition system proposed in this paper for an attendance system is able to recognize multiple faces in a frame without any control over illumination or the position of the faces.

II. RELATED WORKS

The paper "Individual Stable Space: An Approach to Face Recognition Under Uncontrolled Conditions" by Xin Geng says that most face recognition systems need the faces to be fed into them according to certain rules, such as under controlled illumination, at a particular position, under a particular view angle, and without any obstacles. Such systems are called face recognition under controlled conditions. These rules restrict the use of face recognition in many real-time applications, because real-time applications cannot satisfy them; such applications need techniques which do not require any strict control over the subjects, that is, face recognition under uncontrolled conditions. The paper proposes one such system, but the system needs an image as input and one person per image, which is a drawback and a hindrance to using it in real-time applications like attendance systems. [1]

In the paper "Anti-Cheating Presence System Based on 3WPCA-Dual Vision Face Recognition", Edy Winarno proposed a system that can detect cheating in facial recognition-based systems, such as using a photograph of an authorized person or an image similar to the authorized person. They used a dual vision camera, also called a stereo vision camera, which produces one image from each of its two lenses. After obtaining the two images, they used a half-join method to combine the left half and the right half images of a person into a single image, which then undergoes feature extraction using the 3WPCA method. The recognition of cheating using this system is 98%. [2]


In this paper the author designed and explained an improvement of a picture-based attendance system that captures the faces of many students, and it may be the next generation of all the biometric devices in use today. The human face is highly distinctive and has a great degree of variability, so the system needs to be fast and accurate in detecting students' facial structures. The system involves registering students by taking their images and then using these images for marking attendance. Continuous registration is required to achieve good, sharp accuracy. The paper describes the system and finally provides evidence to support it. The project can also be used in online examinations for certification, to identify the student taking the test. [3]

There are a number of systems for taking attendance; traditional methods, such as paper lists, have drawbacks and are hard to use, and even biometric attendance has limitations. Human error creeps into systems such as fingerprint scanning, where a scan is not accepted because of wet, dirty, very dry or peeled fingers. So, the author proposes a mobile attendance system with NFC and facial authorization as a safety facility, with the possibility of storing the data in the cloud using a Raspberry Pi. The paper reviews related work on attendance management systems, NFC, facial authorization, microcomputers and the cloud, and then presents a new method, system design and planning. The outcome is a mobile-based attendance system which reduces the usage of paper and the time and energy wasted on taking attendance. [4]

Computers are intelligent enough to communicate with humans from various perspectives, and if attendance is based on such interaction it is more acceptable to both humans and computers during the validation process. The author is concerned with integrating and developing a student recognition system using the "survival-ing" algorithm; an embedding is then used for classifying a person's face. The system offers a variety of applications such as attendance systems, security, etc. After building the system, the resulting output is shown in the paper. [5]

Figure 1: Traditional neural network versus CNN.

In the paper "Face Recognition based Attendance System using Machine Learning Algorithms" by Radhika C. Damale, the author describes the identification of a person by facial features, known as facial recognition. Facial features can be used in various computer vision algorithms such as face recognition, emotion detection and multi-camera surveillance applications. Face recognition systems are attracting scholars' attention. In this work, different methods such as SVM, MLP and CNN are discussed, and a DNN is used for face detection. For the SVM and MLP approaches, features are extracted using techniques like PCA and LDA. In the CNN approach, images are fed directly to the CNN module as features. The approach shows good detection accuracy for CNN-based methods: SVM, MLP and CNN achieve test accuracies of 87%, 86.5% and 98% respectively on self-generated databases. [6]

In the paper "Class Attendance System based on Face Recognition" written by Priyanka Wagh, histogram equalization of the image is performed so that students sitting in the last rows can be detected conveniently. The image is then passed on for face detection. Since the efficiency of the AdaBoost algorithm is the best among these alternatives, the paper uses this algorithm for detecting the faces of students, using Haar feature classifiers and the cascade concepts of the AdaBoost algorithm. Every student's face is cropped and different features are extracted from it, such as the distance between the eyes, the nose and the outline of the face. Using these faces as Eigen features, the student is recognized by comparing them with the face database, and their attendance is marked. A database of faces has to be created for the purpose of comparison. [7]

Figure 2: Face Recognition model.

The classroom attendance system based on face recognition technology uses the camera to monitor the scene information. It triggers the capture of the student's face photo and reads the student's information when the student signs in with a campus card, which prevents non-school personnel from entering the classroom and prevents proxy attendance in classes. [8]

In the paper "Automated Attendance System Using Face Recognition" written by Akshara Jadhav, Akshay Jadhav, Tushar Ladhe and Krishna Yeolekar, the recognized face is extracted and subjected to pre-processing. This pre-processing step includes histogram equalization of the extracted face image, which is resized to 100x100. In this system, after recognizing the faces of the students, the names are updated into an Excel sheet. The Excel sheet is generated by an export tool present in the database system, and the database can also generate monthly and weekly reports of students' attendance records. The steps are: capture the student's image; apply face detection algorithms to detect the face; extract the region of interest as a rectangular bounding box; convert to grayscale, apply histogram equalization and resize to 100x100 (i.e., apply pre-processing); if in the enrolment stage, store the face in the database, otherwise apply PCA/LDA/LBPH for feature extraction. [9]


In the paper "An Attendance Marking System based on Face Recognition" written by Khem Puthea, Rudy Hartanto and Risanuri Hidayat, the proposed system uses a machine learning technique named principal component analysis (PCA) for face recognition, together with other machine learning algorithms used in computer vision. A technique called the Haar classifier is used to train the system to detect a face. When the faces are captured by a camera, they are first converted to grayscale and then an image subtraction process is applied to them. The resulting image is stored on the server for further processing, which is done later. [10]

The author proposed a strategy where the system runs as an online web server, so that the attendance results are accessible to an authenticated web client. The facial recognition is done by implementing Local Binary Patterns (LBP): the first processing step is to detect and crop the region of interest (ROI), which is the human face, and then apply the Haar feature-based cascade algorithm. After that, the image features are extracted using LBPs, and the LBP algorithm compares the extracted features with the trained datasets. Later, by pressing 'c' (as in capture) on the keyboard, the attendance results are stored in a MySQL database, so that they are available to the web server. [11]

In the paper "Face Recognition Based Attendance System" written by Nandhini R, the author mentions that the fundamental working principle of the project is that the captured video data is converted into images in order to detect and recognize the faces. A CNN (Convolutional Neural Network) algorithm is implemented to recognize the faces; a CNN uses a structure like a multilayer perceptron, designed to process the requirements faster. After detecting and processing a face, it is compared with the faces present in the students' database to update the attendance of the students. The post-processing component involves updating the names of the students into an Excel sheet. The Excel sheets can be maintained on a weekly or monthly basis to record the students' attendance. [12]

In the paper "Student Attendance System in Classroom Using Face Recognition Technique" written by Samuel Lukas, Aditya Rama Mitra, Ririn Ikana Desanti and Dion Krisnadi, the author says that the number of features of any student's facial image is kept constant, for example 16 DCT coefficients. The process is carried out by performing grayscale normalization, histogram equalization, the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT). Further examination of the failure to recognize the remaining facial images shows that a student may possibly be recognized as another student (or students), considering that the overall recognition rate resulting from the analysis does not meet the required standard. [13]

In the paper "Attendance System based on Face Recognition" written by Venkata Kalyan Polamarasetty, Muralidhar Reddy Reddem, Dheeraj Ravi and Mahith Sai Madala, they propose that the image has to be captured from a webcam or an external camera. To do so, in MATLAB, they install the drivers from the MathWorks site depending on the type of camera being used. Next, they use at least 500 to 1000 captures of every individual, and for a higher level of accuracy they use an HD camera. For face detection, the cascade object detector is used together with the bounding-box technique. The captured faces are cropped into small images with a resolution of 112x92, around 11 KB in size. The faces in the database are then loaded into the workspace along with the gallery images. All the extracted HOG features are stored as an array list, and the system returns the label which the given input matches or approximately matches. [14]


Table 1: Comparison of different techniques used in face recognition-based attendance systems.

1. Individual Stable Space: An Approach to Face Recognition Under Uncontrolled Conditions - Xin Geng, Zhi-Hua Zhou, K. Smith-Miles.
Concept used: Face recognition under uncontrolled conditions [1].
Advantages: The paper addresses face recognition under uncontrolled conditions.
Disadvantages: The video sequences, verification, and multiple persons per image required by most real applications cannot be handled.

2. Anti-Cheating Presence System Based on 3WPCA-Dual Vision Face Recognition - Edy Winarno, Wiwien Hadikurniawati, Imam Husni Al Amin, Muji Sukur.
Concept used: Dual vision face recognition using 3WPCA [2].
Advantages: It can anticipate falsification of face data, with recognition accuracy of up to 98%.
Disadvantages: The relative angle of the target's face influences the recognition score profoundly.

3. Prototype model for an Intelligent Attendance System based on facial Identification - Raj Malik, Praveen Kumar, Amit Verma, Seema Rawat.
Concept used: AdaBoost algorithm with a PCA and LDA hybrid algorithm [7].
Advantages: By using this system, the chances of fake attendance and proxies can be reduced.
Disadvantages: Works only with a single image of a subject.

4. Convolutional Neural Network Approach for Vision Based Student Recognition System - Nusrat Mubin Ara.
Concept used: AlexNet CNNs and RFID technology [8].
Advantages: Uses the camera to monitor the scene information.
Disadvantages: The AlexNet system won't work for all the students until it is improved; RFID technology uses electronic tags, which can't be used in all cases.

5. NFC Based Mobile Attendance System with Facial Authorization on Raspberry Pi and Cloud Server - Siti Ummi Masruroh.
Concept used: Iris recognition [9].
Advantages: Real-time face detection, and efficient.
Disadvantages: Iris recognition needs to improve under different light conditions.

6. Face recognition-based Attendance System using Machine Learning Algorithms - Radhika C. Damale.
Concept used: Local Binary Patterns, Support Vector Machine and LDA, based on OpenCV and FLTK [10].
Advantages: Continuous automatic attendance system.
Disadvantages: The system has issues with performance and accuracy.

7. Class Attendance System based on Face Recognition - Priyanka Wagh.
Concept used: Local Binary Patterns, Support Vector Machine and LDA, based on OpenCV and FLTK [11].
Advantages: Continuous automatic attendance system.
Disadvantages: The system has issues with performance and accuracy.

8. Design of Classroom Attendance System Based on Face Recognition - Wenxian Zeng.
Concept used: A CNN that uses a structure like a multilayer perceptron, designed to process the requirements faster [12].
Advantages: The automated classroom attendance system helps in increasing accuracy and speed, ultimately achieving the high-precision real-time attendance needed for automatic classroom evaluation.
Disadvantages: Other methods with greater accuracy can be used to build the system.

9. Automated Attendance System Using Face Recognition - Akshara Jadhav, Akshay Jadhav, Tushar Ladhe, Krishna Yeolekar.
Concept used: DWT.
Advantages: Facial images can be recognized successfully, giving a total recognition rate of 82%; this figure is perceived as the best recognition rate that can be obtained from the data.
Disadvantages: Higher accuracy can still be obtained.

10. An Attendance Marking System based on Face Recognition - Khem Puthea, Rudy Hartanto and Risanuri Hidayat.
Concept used: The complete system is implemented in MATLAB [14].
Advantages: A user-friendly application for face recognition is created.
Disadvantages: Similar techniques are used.

III. METHODOLOGY

In order to mark attendance, we follow a series of steps which include enrolment, face detection, face recognition, and then marking the attendance in a database. Unlike with Eigenfaces and Fisherfaces, in most modern face verification systems training and enrolment are two different steps. Training is performed on millions of images; enrolment, on the other hand, is performed using a small set of images. In the case of Dlib, enrolling a person is simply passing a few images of the person through the network to obtain 128-dimensional feature descriptors corresponding to each image. In other words, we convert each image to a feature in a high-dimensional space. In this high-dimensional space, features belonging to the same person will be close to each other and far away from those of different persons.
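As an illustration of this idea, the short sketch below computes a single 128-dimensional descriptor with dlib's Python bindings. The model file names and the image path are assumptions made for this sketch and are not specified in the paper.

```python
import cv2
import dlib

# Pretrained models distributed with dlib (paths are assumptions for this sketch).
detector = dlib.get_frontal_face_detector()
shape_predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
face_encoder = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

# Read an image with OpenCV (BGR) and convert it to RGB, which dlib expects.
img_bgr = cv2.imread("student.jpg")          # hypothetical enrolment image
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)

# Detect faces, then compute one 128-D descriptor per detected face.
for rect in detector(img_rgb, 1):
    landmarks = shape_predictor(img_rgb, rect)
    descriptor = face_encoder.compute_face_descriptor(img_rgb, landmarks)
    print(len(descriptor))                   # 128
```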


A. Traditional Image Classification Pipeline Versus Dlib's Face Recognition Model

In a traditional image classification pipeline, we convert the image into a feature vector (or, equivalently, a point) in a higher-dimensional space. This is done by calculating a feature descriptor (e.g. HOG) for an image patch. Once the image is represented as a point in a higher-dimensional space, we then use a learning algorithm like SVM to partition the space using hyperplanes that separate points representing different classes.

Figure 3: Traditional image classification pipeline.

Even though on the surface Deep Learning looks very different from the above model, there are conceptual similarities.

Figure 4: Dlib's Face Recognition module.

As Figure 4 shows, Dlib's Face Recognition module is based on a CNN architecture called ResNet. As in most CNN architectures, ResNet contains a bank of Convolutional (Conv) Layers followed by a Fully Connected (FC) Layer. The bank of convolutional layers produces a feature vector in a higher-dimensional space, just like the HOG descriptor.

The most important differences between a bank of convolutional layers and the HOG descriptor are:

1. HOG is a fixed descriptor: there is an exact recipe for calculating it. A bank of conv layers, on the other hand, contains many convolution filters. These filters are learned from the data, so unlike HOG they adapt to the problem at hand.

2. The FC layer does the same job as the SVM classifier in traditional approaches: it classifies the feature vector. In fact, the final FC layer is sometimes replaced by an SVM.

Usually, when we use the word "distance" between two points we are talking about the Euclidean distance between them. For example, the distance between the 3D points (1, 0, 1) and (1, 3, 5) is

sqrt((1-1)^2 + (0-3)^2 + (1-5)^2) = sqrt(25) = 5.    Eq. (1.1)

In general, if we have two n-dimensional vectors x and y, the L2 distance (also called the Euclidean distance) between them is given by

d_L2(x, y) = sqrt((x1-y1)^2 + (x2-y2)^2 + ... + (xn-yn)^2).    Eq. (1.2)

However, in mathematics a distance (also known as a metric) has a much broader definition. For example, a different kind of distance is called the L1 distance. It is the sum of the absolute values of the differences of the elements of the two vectors:

d_L1(x, y) = |x1-y1| + |x2-y2| + ... + |xn-yn|.    Eq. (1.3)

The following rules define when a function involving two vectors can be called a metric. A mapping d(x, y) is called a metric if:

1. The distance between any two points is greater than or equal to zero: d(x, y) ≥ 0.
2. A point has zero distance from itself: d(x, x) = 0.
3. The distance from x to y is the same as the distance from y to x: d(x, y) = d(y, x).
4. Triangle inequality: for any three points x, y and z, d(x, y) + d(y, z) ≥ d(z, x).
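As a quick numerical check of Eqs. (1.1)-(1.3), the following snippet computes the L2 and L1 distances between the example points used above; NumPy is assumed purely for illustration.

```python
import numpy as np

x = np.array([1, 0, 1])
y = np.array([1, 3, 5])

l2 = np.sqrt(np.sum((x - y) ** 2))   # Euclidean (L2) distance, Eq. (1.2)
l1 = np.sum(np.abs(x - y))           # L1 distance, Eq. (1.3)

print(l2)  # 5.0, as in Eq. (1.1)
print(l1)  # 7
```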


1) Deep Metric Learning

Any image can be vectorized by simply storing all the pixel values in a tall vector. This vector represents a point in a higher-dimensional space. However, this space is not very good for measuring distances: in a face recognition application, the points representing two different images of the same person may be very far apart, while the points representing images of two different people may be close by.

Deep Metric Learning is a class of techniques that uses Deep Learning to learn a lower-dimensional, effective metric space in which images are represented by points such that images of the same class are clustered together and images of different classes are far apart. Instead of directly reducing the dimension of the pixel space, the convolution layers first calculate meaningful features, which are then implicitly used to create the metric space. It turns out we can use the same CNN architecture we use for image classification for deep metric learning.

Figure 5: CNN for classification task.

Figure 5 shows a CNN that is trained to take as input a 150x150 colour image (which is the same as a vector of size 150x150x3 = 67,500) and output the probability that it belongs to one of 128 different animal classes.

Figure 6: CNN for metric learning.

In Deep Metric Learning, as shown in Figure 6, the architecture remains the same as for the CNN classification task, but the loss function is changed. In other words, you input an image and the output is a point in 128-dimensional space. If you want to find how closely related two images are, you simply pass both images through the CNN, obtain the two points in this 128-dimensional space, and compare them using the simple L2 (Euclidean) distance between them.

2) Metric Loss

Millions of images are typically used to train a production-ready CNN. Obviously, these millions of images cannot be used simultaneously to update the knobs of the CNN. Training is done iteratively using one small batch of images at a time; this small batch is called a mini batch. As mentioned in the previous section, we need to define a new loss function so that the CNN output is a point in this 128-dimensional space. The loss function is defined over all pairs of images in a mini batch.

Figure 7: Metric loss defined by Dlib's Face Recognizer.

For simplicity, the concept is shown in 2D in Figure 7. The loss is defined in terms of two parameters: 1) the threshold (T) and 2) the margin (M). The blue and the red dots represent images of two different classes. For the metric loss to be 0, the maximum distance between any two points of the same class should be (T - M) and the minimum distance between any two points of different classes should be (T + M). Let p1 and p2 represent the points corresponding to images I1 and I2 in the 128-dimensional space. If the images belong to the same class, the loss is given by max(0, ||p1 - p2|| - T + M). On the other hand, if I1 and I2 have two different class labels, then their contribution to the loss function is max(0, T - ||p1 - p2|| + M). Figure 7 shows how this loss function prefers embeddings where images of the same class are clustered together and images of different classes are separated by a large margin.

3) Hard Negative Mining

In a mini batch, there are many more non-matching pairs (images from different classes) than matching pairs (images from the same class). It is important to take this imbalance into account while calculating the metric loss function. If there are N matching pairs that share the same class in a mini batch, then the algorithm includes ONLY the N worst non-matching pairs in the loss computation. In other words, it performs hard negative mining on the mini batch by picking the worst non-matching pairs.
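A minimal sketch of the pairwise metric loss and the hard negative mining step described above is given below. dlib implements this internally; the NumPy version here, including the assumed threshold and margin values, is only illustrative.

```python
import numpy as np

def metric_loss(embeddings, labels, T=0.6, M=0.04):
    """Pairwise hinge loss with hard negative mining over one mini batch.

    embeddings: (B, 128) array of CNN outputs; labels: (B,) class ids.
    T (threshold) and M (margin) are assumed values for illustration.
    """
    match_losses, nonmatch_losses = [], []
    B = len(labels)
    for i in range(B):
        for j in range(i + 1, B):
            d = np.linalg.norm(embeddings[i] - embeddings[j])
            if labels[i] == labels[j]:
                # Same class: penalize pairs farther apart than T - M.
                match_losses.append(max(0.0, d - T + M))
            else:
                # Different classes: penalize pairs closer than T + M.
                nonmatch_losses.append(max(0.0, T - d + M))
    # Hard negative mining: keep only the N worst non-matching pairs,
    # where N is the number of matching pairs in the mini batch.
    N = len(match_losses)
    worst_nonmatch = sorted(nonmatch_losses, reverse=True)[:N]
    return sum(match_losses) + sum(worst_nonmatch)
```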
B. Enrolment

For enrolment we define a smaller ResNet neural network; training was also done using this network. The images of the persons we are going to enrol are structured in the following way: there are subfolders, and each subfolder contains images of one person. We store this mapping of images and their corresponding labels to use later during testing. We then process the enrolment images one by one and convert each image from BGR to RGB format, because Dlib uses RGB as its default format. We then convert the OpenCV BGR image to Dlib's cv_image, and then Dlib's cv_image to Dlib's matrix format, since Dlib's cv_image format is not recognized by the neural network module. We detect faces in the image; for each face we detect the facial landmarks and get a normalized and warped patch of the detected face. We then compute the face descriptor using the facial landmarks. This is a 128-dimensional vector which represents a face. Finally, we save the labels and names, as well as the face descriptors and their corresponding labels, to disk.
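A minimal enrolment sketch in this spirit is shown below, assuming one subfolder of images per person and NumPy files for storage; the folder layout, file names and model paths are illustrative assumptions, not details taken from the paper.

```python
import os
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
shape_predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
face_encoder = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

# Assumed layout: each subfolder of "enrolment/" holds images of one person.
names = sorted(os.listdir("enrolment"))
descriptors, labels = [], []

for label, person in enumerate(names):
    folder = os.path.join("enrolment", person)
    for file in os.listdir(folder):
        img = cv2.imread(os.path.join(folder, file))
        if img is None:
            continue                                       # skip unreadable files
        rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)         # dlib expects RGB
        for rect in detector(rgb, 1):
            shape = shape_predictor(rgb, rect)             # facial landmarks
            desc = face_encoder.compute_face_descriptor(rgb, shape)  # 128-D vector
            descriptors.append(np.array(desc))
            labels.append(label)

# Persist descriptors, labels and the label-to-name mapping for recognition.
np.save("descriptors.npy", np.array(descriptors))
np.save("labels.npy", np.array(labels))
np.save("names.npy", np.array(names))
```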
C. Face Detection and Recognition

Given a new image of a person, we can verify if it is the same person by checking the distance between the enrolled faces and the new face in the 128-dimensional space. We read the name-label mapping and the descriptors from disk. We then read the query image, which is an image of a classroom with multiple students, and convert it from BGR to RGB format, because Dlib uses RGB as its default format. We then convert the OpenCV image to Dlib's cv_image, and then Dlib's cv_image to Dlib's matrix format, since Dlib's cv_image format is not recognized by the neural network module. We detect faces in the query image; for each face we detect the facial landmarks and get a warped patch of 150x150. We then compute the face descriptor for each face, calculate the Euclidean distance between the face descriptors in the query image and the face descriptors of the enrolled images, and find the enrolled face for which the distance is minimum. Dlib specifies that, in general, if two face descriptor vectors have a Euclidean distance between them of less than 0.6, they are from the same person; otherwise they are from different people. This threshold will vary depending upon the number of images enrolled and the variations (illumination, camera quality) between the enrolled images and the query image. We use a threshold of 0.5. If the minimum distance is less than the threshold, we find the name of the person from the index; otherwise the person in the query image is marked as unknown.
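The recognition step can be sketched in the same style; the 0.5 threshold follows the text, while the file names and data layout carry over from the hypothetical enrolment sketch above.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
shape_predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
face_encoder = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

# Load what the enrolment sketch stored.
descriptors = np.load("descriptors.npy")
labels = np.load("labels.npy")
names = np.load("names.npy")

THRESHOLD = 0.5   # per the text; dlib's general guideline is 0.6

query = cv2.imread("classroom.jpg")                 # hypothetical query image
rgb = cv2.cvtColor(query, cv2.COLOR_BGR2RGB)

for rect in detector(rgb, 1):
    shape = shape_predictor(rgb, rect)
    desc = np.array(face_encoder.compute_face_descriptor(rgb, shape))
    # Euclidean distance from this face to every enrolled descriptor.
    distances = np.linalg.norm(descriptors - desc, axis=1)
    best = int(np.argmin(distances))
    if distances[best] < THRESHOLD:
        print("Recognized:", names[labels[best]])
    else:
        print("Unknown person")
```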


D. Attendance Marking

For each face detected and matched with an enrolled face, the attendance is marked for the corresponding USN in the database. The name of the student, along with the day and time of attendance, is also stored in the database.
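A minimal sketch of this attendance-marking step is shown below, assuming a simple SQLite table; the schema, column names and file name are illustrative assumptions, not details given in the paper.

```python
import sqlite3
from datetime import datetime

def mark_attendance(usn, name, db_path="attendance.db"):
    """Insert one attendance record with the current date and time."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS attendance (usn TEXT, name TEXT, marked_at TEXT)"
    )
    conn.execute(
        "INSERT INTO attendance (usn, name, marked_at) VALUES (?, ?, ?)",
        (usn, name, datetime.now().isoformat()),
    )
    conn.commit()
    conn.close()

# Example: called once for every recognized face.
mark_attendance("4VV17CS001", "Student Name")
```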
CONCLUSION

With the above method, a strong outcome can be achieved. This is accomplished using OpenCV for frame extraction and dlib for face recognition. The method provides higher accuracy in recognizing multiple faces from a single frame with a lower response time.
REFERENCES
[1] Xin Geng, Zhi-Hua Zhou, K. Smith-Miles, "Individual Stable Space: An Approach to Face Recognition Under Uncontrolled Conditions," IEEE Transactions on Neural Networks, 2008.
[2] Edy Winarno, Wiwien Hadikurniawati, Imam Husni Al Amin, Muji Sukur, "Anti-Cheating Presence System Based on 3WPCA-Dual Vision Face Recognition," Faculty of Information Technology, Universitas Stikubank, Semarang, Indonesia.
[3] Raj Malik, Praveen Kumar, Amit Verma, Seema Rawat, "Prototype Model for an Intelligent Attendance System Based on Facial Identification," Amity University, Uttar Pradesh.
[4] Siti Ummi Masruroh, Andrew Fiade, Imelda Ristanti Julia, "NFC Based Mobile Attendance System with Facial Authorization on Raspberry Pi and Cloud Server."
[5] Nusrat Mubin Ara, "Convolutional Neural Network Approach for Vision Based Student Recognition System," Dept. of CSE, SUST, Sylhet, Bangladesh.
[6] Radhika C. Damale, "Face Recognition-Based Attendance System Using Machine Learning Algorithms," Department of Electronics and Telecommunications, Cummins College of Engineering for Women, Pune, Maharashtra, India.
[7] Priyanka Wagh, "Class Attendance System Based on Face Recognition."
[8] Wenxian Zeng, "Design of Classroom Attendance System Based on Face Recognition."
[9] Akshara Jadhav, Akshay Jadhav, Tushar Ladhe, Krishna Yeolekar, "Automated Attendance System Using Face Recognition."
[10] Khem Puthea, Rudy Hartanto, Risanuri Hidayat, "An Attendance Marking System Based on Face Recognition."
[11] Omar Abdul Rhman Salim, "Class Attendance Management System Using Face Recognition," Department of Electrical and Computer Engineering, Faculty of Engineering, International Islamic University Malaysia, Kuala Lumpur, Malaysia.
[12] Nandhini R, Duraimurugan N, "Face Recognition Based Attendance System."
[13] Samuel Lukas, Aditya Rama Mitra, Ririn Ikana Desanti, Dion Krisnadi, "Student Attendance System in Classroom Using Face Recognition Technique," Informatics Department, Computer System Department, Information System Department, Universitas Pelita Harapan, Karawaci, Indonesia.
[14] Venkata Kalyan Polamarasetty, Muralidhar Reddy Reddem, Dheeraj Ravi, Mahith Sai Madala, "Attendance System Based on Face Recognition."
