Face Detection and Recognition
CONTENTS
Abstract
Introduction
Literature Survey
Problem Statement
Proposed System
Implementation
Results and Discussion
Advantages
Disadvantages
Applications
Conclusion and Future Work
References
ABSTRACT
Face detection and recognition, a technique that uses various features of the face to verify and identify individuals, is becoming increasingly popular in industrial applications including security, surveillance, and access control. Because machine learning can learn complicated patterns from enormous datasets, it is an effective approach for facial recognition. Machine-learning-based face recognition is a methodology comprising three distinct stages: face detection, feature extraction, and classification. The first stage applies an algorithm that locates the facial region in a given image or video source. Next, the feature extraction stage identifies key geometric and textural characteristics of the detected face. Lastly, the extracted features are passed to a classification model that verifies or identifies the individual.
INTRODUCTION
Face detection and recognition is a technique for using a person's face to identify or confirm their identity. It is one of the most significant applications of computer vision and has considerable commercial appeal. Deep learning-based techniques have substantially improved face recognition technology in recent years. Because of its vast range of applications in surveillance, law enforcement, biometrics, marketing, and many other fields, face identification in static images and video sequences recorded under unconstrained conditions is one of the most extensively researched problems in computer vision. Face recognition became increasingly popular in the early 1990s after the historical Eigenface method was introduced, and holistic strategies dominated the face recognition community throughout the 1990s and 2000s. Effectively and efficiently analysing the features related to facial information is a challenging task that requires a lot of time and effort. Recently, many facial-recognition-based algorithms for automatic attendance have been proposed.
LITERATURE SURVEY
Several non-deep-learning methods are available for face detection, including the OpenCV-based face detectors and the Haar cascades from the well-known work of Viola and Jones, as well as later detectors based on the histogram of oriented gradients (HOG). For recognition, PCA is used to describe face images in terms of a set of basis functions, or eigenfaces; the Eigenface method was applied to early identification problems. PCA is an unsupervised technique, so the process does not rely on class labels, and in the eigenface implementation the Euclidean distance between projections is used for matching. Multilinear principal component analysis extends this idea: face pictures and videos are naturally multidimensional arrays, whereas standard PCA flattens each face image into a 1D vector and applies a linear projection to that vector. In a biometric-based attendance system, an enrolment image of every individual is required; this database development phase consists of capturing images of each individual. In the proposed system, after the faces are recognized, the corresponding names are shown on the video output, and the result is generated by the exporting mechanism present in the database system.
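To make the eigenface idea concrete, the following is a minimal, non-authoritative Python sketch of PCA-based recognition with Euclidean-distance matching. It assumes a placeholder array X of flattened, aligned grayscale face images and a placeholder label vector y; neither name comes from the surveyed work.

# Minimal eigenface sketch: PCA projection plus nearest-neighbour matching by Euclidean distance.
# X is an (n_samples, n_pixels) array of flattened grayscale faces, y the identity labels (placeholders).
import numpy as np
from sklearn.decomposition import PCA

def build_eigenface_model(X, n_components=50):
    pca = PCA(n_components=n_components, whiten=True)
    projections = pca.fit_transform(X)          # each face described by its eigenface coefficients
    return pca, projections

def recognize(probe, pca, projections, y):
    probe_proj = pca.transform(probe.reshape(1, -1))
    distances = np.linalg.norm(projections - probe_proj, axis=1)   # Euclidean distance in eigenface space
    return y[int(np.argmin(distances))]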
2. “Facial Expression Classification Based on SVM, KNN and MLP Classifiers”
In this paper, the Extended Cohn-Kanade (CK+) dataset was used. The dataset contains different emotions for each person and consists of image sequences converted from video frames. Many sequences in the dataset are missing labels, so only the labelled emotions were considered and used in the experiment. The first image frame of each sequence is used as the neutral emotion and the last image frame as the labelled emotion. The work identifies human expressions across eight emotions: neutral, anger, contempt, disgust, fear, happiness, sadness, and surprise. The Viola-Jones algorithm was used for face detection. The original images were digitized into either 640x490 or 640x480 pixel arrays with 8-bit gray-scale or 24-bit color values. Feature extraction is the most significant stage in facial expression recognition (FER), and the efficiency of FER depends on the techniques used in this stage. Three classifiers were utilized to perform the classification of facial expressions: SVM, MLP, and K-NN. The experimental results show that SVM proved to be the better classifier, with a correct classification rate of 93.53%. In future work, a new dataset, which is currently being collected, can be used to test the presented method with different machine learning algorithms to provide better accuracy.
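As a hedged illustration only (not the cited paper's exact pipeline), the classification stage with an SVM could be sketched as follows in Python. The feature matrix `features` and label vector `emotions` are placeholder names standing in for features extracted from the detected face regions, and the kernel and hyperparameters are assumptions.

# Illustrative SVM expression classifier operating on pre-extracted facial features.
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# 'features' and 'emotions' are placeholder arrays standing in for the CK+-derived data.
X_train, X_test, y_train, y_test = train_test_split(
    features, emotions, test_size=0.2, random_state=42, stratify=emotions)

clf = SVC(kernel="rbf", C=10, gamma="scale")   # RBF-kernel SVM as one plausible choice
clf.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))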
PROBLEM STATEMENT
Face recognition faces many challenges due to illumination variations, large dimensionality, uncontrolled environments, pose variations, and aging. In recent years, face recognition has seen remarkable improvements in accuracy that help overcome these challenges, but illumination change remains a difficult problem. The system should optimize algorithms and data structures for efficient computation and memory usage, and it should implement measures to ensure the privacy and security of individuals' facial data, including data encryption, access control, and compliance with privacy regulations.
During training, the system received a training data comprising grayscale images of faces with their respective
expression label and learns a set of weights for the network. The training step took as input an image with a face.
Thereafter, an intensity normalization is applied to the image. The normalized images are used to train the
Convolutional Network. To ensure that the training performance is not affected by the order of presentation of the
examples, validation dataset is used to choose the final best set of weights out of a set of trainings performed with
samples presented in different orders. The output of the training step is a set of weights that achieve the best result
with the training data. During test, the system received a grayscale image of a face from test dataset, and output
the predicted expression by using the final network weights learned during training. Its output is a single number
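The sketch below illustrates this training setup in Keras; it is not the exact architecture used. The 48x48 grayscale input size, the layer sizes, and the number of classes are assumptions, and x_train, y_train, x_val, y_val are placeholder arrays of normalized images and labels.

# Illustrative CNN for normalized grayscale face/expression images (architecture details are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models

num_classes = 7                                      # hypothetical number of expression classes
model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),                 # intensity-normalized grayscale input
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_classes, activation="softmax"), # a single predicted label per face
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x_train/x_val hold normalized images, y_train/y_val the expression labels (placeholders).
history = model.fit(x_train, y_train, validation_data=(x_val, y_val),
                    epochs=30, batch_size=64, shuffle=True)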
IMPLEMENTATION
Import the dataset: The next step is to load the dataset into Colab. To import a dataset, we first need to import some of the required libraries. Once that is done, the data currently stored in Google Drive can be imported with a few lines of code, as in the sketch below.
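The exact loading code is not reproduced here; one common way to do it in Colab, assuming the dataset lives under a hypothetical Drive folder, is:

# Mount Google Drive in Colab and point to the dataset folder (the path is a placeholder).
from google.colab import drive
drive.mount('/content/drive')

DATASET_DIR = '/content/drive/MyDrive/face_dataset'   # hypothetical location of the images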
Data pre-processing: After importing the libraries and data, we proceed to data pre-processing. Images come in many different formats and color spaces (color, grayscale, and so on), and they should be standardized and normalized before being fed to the neural network.
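As one hedged example of such pre-processing, the snippet below converts an image to grayscale, resizes it to a fixed shape, and scales pixel intensities to [0, 1]; the 48x48 target size is an assumed choice, not a value given in the text.

# Simple pre-processing: grayscale conversion, resizing, and intensity normalization.
import cv2
import numpy as np

def preprocess(image_path, size=(48, 48)):         # target size is an assumed choice
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # drop color information
    gray = cv2.resize(gray, size)
    return gray.astype(np.float32) / 255.0          # scale intensities to [0, 1]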
Extract characteristics from the face: A dataset is required first; you may even make one yourself. Just be careful to group all of your photos into folders, with one folder holding all of a particular person's pictures.
Model building: Downloading the dataset is the initial step before running the code; many facial datasets are available online. After obtaining the dataset, you may start with the code. A few Python packages are loaded here, and dlib is used for facial feature recognition. Using the facial landmark predictor from the dlib library, we estimate the 68 (x, y) coordinate positions that map the structure of the face. OpenCV (cv2) is used to operate on the images. This step loads all the pretrained models; four pretrained files are needed for it to work. Two lists are then initialized, one for faces and one for names: the 128-dimensional face encodings computed by the face encoder from all training pictures are stored in the face list, with the associated labels in the name list.
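A rough sketch of this step using two of dlib's standard pretrained model files is given below; the .dat files must be downloaded separately, and the paths shown are assumptions about where they are stored.

# Compute 68 facial landmarks and a 128-dimensional face encoding with dlib.
# The .dat model files are downloaded separately; the paths below are placeholders.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
encoder = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

known_encodings, known_names = [], []               # one list for encodings, one for labels

def encode_face(image_path, name):
    img = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    for rect in detector(img, 1):                   # detect faces in the training picture
        shape = predictor(img, rect)                # 68 (x, y) landmark points
        encoding = encoder.compute_face_descriptor(img, shape)   # 128-d face code
        known_encodings.append(list(encoding))
        known_names.append(name)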
Face detection: The process of finding faces in an image is referred to as face detection. If faces are found in the image, face detection returns the location and extent of each face. To recognize faces, image windows must be divided into two categories: those that contain faces and those that contain only background (clutter). This is not easy since, even though different faces share similarities, there can be significant differences between them in terms of age, skin color, and facial features. Partial occlusion and disguise add further complexity to that already introduced by varying illumination, picture quality, and geometry. As a result, the ideal face detector would be able to detect a face in any setting, independent of the illumination or background.
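As a simple, non-authoritative example of a face detector (one of the classical options mentioned in the literature survey, not necessarily the one used in this system), OpenCV's bundled Haar cascade can be used as follows.

# Detect faces in an image with OpenCV's bundled Haar cascade and draw their bounding boxes.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(image_path):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:                      # location and extent of each detected face
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return img, faces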
Face recognition: The process of identifying a person from an image of their face, using a model trained on data from a dataset, is known as facial recognition. It is one of the most effective biometric methods for identifying a person. It has advantages over other biometric techniques in that it can be used without the user's active involvement and is non-intrusive. Surveillance, smart cards, entertainment, law enforcement, data security, image database analysis, non-military usage, and human-computer interaction are just a few of the many applications of face recognition technology. The input for face recognition is a digital still or moving picture, and the output is the processed identity of the person shown in the input. The facial recognition process can be divided into two stages. The first stage is image processing: acquiring face pictures through scanning, improving image quality, cropping, filtering, edge detection, and feature extraction. The second stage is the facial recognition method itself, which may combine genetic algorithms and other artificial intelligence techniques.
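Continuing the hedged dlib sketch from the model building step, an unknown face can be identified by comparing its 128-dimensional encoding against the stored training encodings; the 0.6 distance threshold is the value commonly used with dlib's encoder and is an assumption here, not a figure from this report.

# Match an unknown 128-d encoding against the stored training encodings by Euclidean distance.
import numpy as np

def identify(unknown_encoding, known_encodings, known_names, threshold=0.6):
    distances = np.linalg.norm(np.array(known_encodings) - np.array(unknown_encoding), axis=1)
    best = int(np.argmin(distances))
    return known_names[best] if distances[best] <= threshold else "Unknown"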
RESULTS AND DISCUSSION
Result 1: This result illustrates the accuracy comparison between the proposed model and earlier research models. From Figure 03, the decision tree classifier attained lower accuracy than the random forest and CNN models, and the CNN model attained higher accuracy than the random forest classifier. This demonstrates that our proposed model obtained the highest accuracy compared to the other models. Figure 03 shows the comparative bar graph.
Fig 03 : Accuracy comparison graph between the proposed model and earlier research models
Result 2: This result illustrates the F1-score comparison between the proposed model and earlier research models. From Figure 04, CNN attained a lower F1-score than the random forest and decision tree classifiers. The random forest classifier attained a higher F1-score than the CNN and decision tree classifiers, while our proposed model has a higher F1-score than all the other models. Figure 04 shows the comparative bar graph.
Fig 04 : F1-score comparison graph between the proposed model and earlier research models
Result 3: Figure 05 illustrates the loss comparison between the proposed model and earlier research models. From Figure 05, the decision tree classifier obtained the highest loss compared to the random forest and CNN models. This demonstrates that our proposed model has a lower loss than the other models, as shown below.
Fig 05 : Loss comparison graph between the proposed model and earlier research models
Result 4: From Figure 06, the CNN model achieved lower precision than the other models, while the random forest classifier attained the highest precision compared to the CNN and decision tree classifiers. This demonstrates that our proposed model has higher precision than the other models, as shown below.
Fig 06 : Precision comparison graph between the proposed model and earlier research models
Result 5: Figure 07 illustrates the recall comparison between the proposed model and earlier research models. The random forest classifier has higher recall than CNN, while the CNN model attained the lowest recall among the compared models, as shown below. This demonstrates that our proposed model has higher recall than the other models.
Fig 07 : Recall comparison graph between the proposed model and earlier research models
Result 6: Figure 08 shows the training loss graph of the proposed model. This result demonstrates that our proposed model has a lower loss than the other models.
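For reference, the accuracy, precision, recall, F1-score, and confusion matrix reported in these comparisons can be computed from model predictions with scikit-learn as in the hedged snippet below; y_true and y_pred are placeholder arrays for the test labels and the model's predictions.

# Compute the comparison metrics from true and predicted labels.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

# y_true and y_pred are placeholders for the test labels and model predictions.
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="weighted"))
print("recall   :", recall_score(y_true, y_pred, average="weighted"))
print("f1-score :", f1_score(y_true, y_pred, average="weighted"))
print(confusion_matrix(y_true, y_pred))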
ADVANTAGES
Increased Security: With the help of this technology, it is easier to track down thieves or other trespassers, and it can also help identify terrorists or other criminals from a face scan alone.
Fast and Accurate: The process of recognizing a face is very fast, taking a second or less.
Automation of Identification: There is no longer any need for human assistance in the identification process. With facial recognition technology the process is completely automated; it not only takes seconds but is also highly accurate.
Cost-efficiency: Since this technology is automated, it reduces the need for human assistance to personally verify a match. This means it can save costs on hiring security staff and other security measures.
No Contact: It is preferred over other biometric options such as fingerprint scanning because of its non-contact process. People need not worry about problems associated with fingerprint identification technology, such as germs or smudges.
DISADVANTAGES
Surveillance Angle: The identification process depends heavily on the surveillance angle used to capture the face; multiple angles are needed for the recognition software to capture a face reliably.
Data Storage: Data storage is gold in today's world, and storing many thousands of faces requires a lot of space.
Legislation: There are concerns that biometrics is progressing too rapidly for regulators, legislators, and the judicial system to set up standardized rules and precedents around its use.
Other technical challenges include:
Misalignment
Pose Variation
Illumination Variation
Expression Variation
APPLICATIONS
Healthcare
Automotive industry
Education
Finance
CONCLUSION AND FUTURE WORK
System uses a deep learning approach to designing and building a system for recognizing and identifying people by
their faces. The whole process of creating this face recognition system is outlined, beginning with data training and
ending with the implementation of the suggested method of face recognition. The reliability of the system is mostly
unaffected by external variables. This model is trained using a CNN-based technique with a high sample size of photos
for each candidate. This has resulted in a massive dataset and increased precision. Examining the data, it becomes clear
that the ambient lighting affects the identification procedure. When the lighting is poor, the recognition system is more
likely to make mistakes. To fix this, additional low-light-quality training photos should be used to create the face
classifier. This study's findings indicate that, compared to the decision tree, random forest, and CNN models, face
recognition on images provides the highest accuracy. The parameters such as recall, precision, and f1-score of the
proposed method are higher as equated to existing methods. A confusion matrix is generated and the confusion matrix
of the suggested method is better as equated to existing methods. To further improve the recognition rate in uncontrolled
environments, this work may be expanded to discover a strategy for selecting optimal features from extracted features
and also an improved classification model.
REFERENCES