Face Detection in Python Using OpenCV
Abstract
Computer vision is a research field that tries to perceive and represent 3D information about objects in the world. Its essence is to reconstruct the visual aspects of a 3D object by analyzing the 2D information extracted from it. 3D object surface reconstruction and representation not only provide theoretical benefits, but are also required by numerous applications.
Face detection is the process of analyzing an input image to determine the number, location, size, and orientation of any faces it contains. Face detection is the basis for face tracking and face recognition, and its results directly affect the process and accuracy of face recognition. The common face detection methods are: the knowledge-based approach, the statistics-based approach, and approaches that integrate different features or methods. The knowledge-based approach can achieve face detection in complex background images to some extent and also obtain high detection speed, but it needs more integrated features to further enhance its adaptability. The statistics-based approach detects faces by judging all possible regions of the image with a classifier: it treats the face region as one class of patterns and uses a large number of “face” and “non-face” training samples to construct the classifier.
Introduction
Face Detection has been one of the hottest topics in computer vision for the past few years. This technology has been available for some years now and is being used all over the place: from cameras that make sure faces are in focus before you take a picture, to Facebook, which tags people automatically once you upload a picture (before, you did that manually, remember?).
In short, how Face Detection and Face Recognition work when unlocking your phone is as follows:
You look at your phone, and it extracts your face from an image (the nerdy name
for this process is face detection). Then, it compares the current face with the one it
saved before during training and checks if they both match (its nerdy name is face
recognition) and, if they do, it unlocks itself.
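As a toy sketch of that detect-then-recognize pipeline (assuming the opencv-contrib-python package for cv2.face, a previously trained LBPH model saved as the placeholder file enrolled.yml with the owner enrolled under label 0, and a placeholder snapshot current_view.jpg):

import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("enrolled.yml")            # model saved earlier during enrolment ("training")

frame = cv2.imread("current_view.jpg")     # what the front camera sees right now
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):     # face detection
    face = cv2.resize(gray[y:y + h, x:x + w], (200, 200))
    label, distance = recognizer.predict(face)                   # face recognition
    if label == 0 and distance < 60:       # threshold of 60 is an arbitrary assumption
        print("Unlock")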
As you can see, this technology is useful for much more than playful face filters. People are getting pretty interested in it because of its ample applications. ATMs with facial recognition and face detection software have been introduced for withdrawing money. Also, emotion analysis is gaining relevance for research purposes.
An ATM with a facial recognition system.
OpenCV provides two types of pre-trained classifiers for face detection:
1. Haar Classifier
2. LBP Classifier
Both of these classifiers process images in grayscale, basically because we don't need color information to decide whether a picture has a face or not (we'll talk more about this later on). As these classifiers are pre-trained in OpenCV, their learned knowledge files also come bundled with OpenCV. To run a classifier, we need to load its knowledge file first; without it, the classifier has no knowledge, just like a newly born baby. Each file starts with the name of the classifier it belongs to, for example a Haar cascade classifier. These are the two types of classifiers we will be using to analyze Casper.
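A minimal sketch of this loading step, assuming the opencv-python package (the LBP cascade file path is an assumption, since that file is distributed with the OpenCV source tree and may need to be fetched separately):

import cv2

# The Haar cascade file ships with the opencv-python package under cv2.data.haarcascades.
haar_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# The LBP cascade is distributed with the OpenCV source tree (data/lbpcascades);
# the local file name below is an assumption.
lbp_face_cascade = cv2.CascadeClassifier("lbpcascade_frontalface.xml")

if haar_face_cascade.empty() or lbp_face_cascade.empty():
    raise IOError("Failed to load one of the cascade knowledge files")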
HAAR CLASSIFIER
The Haar classifier is a machine-learning-based approach, an algorithm created by Paul Viola and Michael Jones, which (as mentioned before) is trained from many, many positive images (with faces) and negative images (without faces).
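A minimal sketch of running this pre-trained Haar classifier on a single image, assuming the opencv-python package and a placeholder test image named casper.jpg:

import cv2

# "casper.jpg" is a placeholder file name for the test image
image = cv2.imread("casper.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)       # the classifier works on grayscale

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# scaleFactor and minNeighbors are typical starting values, not tuned for any dataset
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                            # draw a box around each detection
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("casper_detected.jpg", image)
print("Found", len(faces), "face(s)")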
PROJECT OVERVIEW
The statistics-based method has strong adaptability and robustness; however, detection requires searching all candidate regions of the image and has high computational complexity. The cascade classifier method offers real-time detection speed and high detection accuracy, but it needs a long training time. A digital image is composed of a finite number of digital values, called picture elements or pixels. Pixel values typically represent gray levels, colours, heights, opacities, etc. It is to be noted that digitization implies that a digital image is an approximation of a real scene. Recently there has been tremendous growth in the field of computer vision. The conversion of this huge amount of low-level information into usable high-level information is the subject of computer vision. It deals with the development of the theoretical and algorithmic basis by which useful information about the world can be extracted from images.
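As a small illustration of the pixel representation described above (a sketch assuming the opencv-python package and a placeholder image file name):

import cv2

# "casper.jpg" is a placeholder file name for the test image
img = cv2.imread("casper.jpg")               # colour image: height x width x 3 (BGR channels)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

print(img.shape, img.dtype)                  # e.g. (480, 640, 3) uint8
print(gray.shape)                            # (480, 640): one gray level per pixel
print(gray[0, 0])                            # a single pixel value in the range 0..255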
Here we will work with face detection. Initially, the algorithm needs a lot of positive images (images with faces) and negative images (images without faces) to train the classifier. Then we need to extract features from them. For this, Haar features are used. They are just like our convolutional kernels. Each feature is a single value obtained by subtracting the sum of pixels under the white rectangle from the sum of pixels under the black rectangle.
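To make the feature computation concrete, here is a small illustrative sketch (not OpenCV's internal code) that evaluates one two-rectangle Haar-like feature on a 24x24 patch, using an integral image so that each rectangle sum costs only four lookups:

import numpy as np

patch = np.random.randint(0, 256, size=(24, 24)).astype(np.int64)  # toy grayscale patch

# Integral image padded with a zero row/column so that
# sum(patch[y0:y1, x0:x1]) == ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
ii = np.zeros((25, 25), dtype=np.int64)
ii[1:, 1:] = patch.cumsum(axis=0).cumsum(axis=1)

def rect_sum(y0, x0, y1, x1):
    # Sum of patch[y0:y1, x0:x1] using four integral-image lookups
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

# A vertical two-rectangle edge feature: white half on the left, black half on the right
white = rect_sum(0, 0, 24, 12)
black = rect_sum(0, 12, 24, 24)
feature_value = black - white   # sum under white subtracted from sum under black
print(feature_value)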
PROBLEM DEFINITION
Existing system
Computer vision is a research field that tries to perceive and represent 3D information about objects in the world. Its essence is to reconstruct the visual aspects of a 3D object by analyzing the 2D information extracted from it. 3D object surface reconstruction and representation not only provide theoretical benefits, but are also required by numerous applications.
Proposed system
In this project, we are going to describe a system that can detect and track a human face and give out the necessary information as per our training data.
This project can serve as a use case in biometrics, often as part of (or together with) a user interface and image database management. Some recent digital cameras use face detection for autofocus.
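A minimal sketch of such a detect-and-track loop, assuming the opencv-python package and a webcam available at index 0:

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                  # default webcam; index 0 is an assumption
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                          minSize=(60, 60))
    for (x, y, w, h) in faces:             # draw a box around each detected face
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow("Face detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()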
User Documentation
SYSTEM REQUIREMENTS:
● Anaconda software to be installed.
● 4 GB RAM recommended.
TESTING OBJECTIVES
• To ensure that during operation the system will perform as per specification.
• To make sure that the system meets the user requirements during operation.
• To make sure that during operation, incorrect input, processing, and output will be detected.
• To see that when correct inputs are fed to the system the outputs are correct.
• To verify that the controls incorporated in the system work as intended.
• Testing is a process of executing a program with the intent of finding an error.
• A good test case is one that has a high probability of finding an as yet undiscovered error.
The software developed has been tested successfully using the following testing strategies. Any errors that are encountered are corrected, and the affected part of the program, procedure, or function is put to testing again until all the errors are removed. A successful test is one that uncovers an as yet undiscovered error. Note that the results of system testing will show that the system is working correctly; this gives confidence to the system designers and the users of the system and prevents frustration during the implementation process.
Unit Testing:
Unit testing is essentially for the verification of the code produced during the coding phase, and the goal is to test the internal logic of the module/program. In the generic code project, unit testing is done during the coding phase of the data entry forms to check whether the functions are working properly or not. In this phase, all the drivers are also tested to verify whether they are correctly connected.
Integration Testing:
All the tested modules are combined into subsystems, which are then tested. The goal is to see if the modules are properly integrated, with the emphasis on testing the interfaces between the modules. In the generic code project, integration testing is done mainly on the table creation module and the insertion module.
Validation Testing
This testing concentrates on confirming that the software is error-free in all respects. All the specified validations are verified and the software is subjected to rigorous testing. It also aims at determining the degree of deviation of the designed software from the specification; any deviations found are listed and corrected.
System Testing
This testing is a series of different tests whose primary purpose is to fully exercise the computer-based system. This involves:
• Implementing the system in a simulated production environment and testing
it.
• Introducing errors and testing for error handling.
OUTPUT SCREENS
CONCLUSION
This project proposed a real-time face recognition system (RTFRS). RTFRS has been implemented in four implementation variants (i.e., CPU Mono, CPU Parallel, Hybrid Mono, and Hybrid Parallel). The Fisherface algorithm is employed for the recognition phase and the Haar-cascade algorithm is employed for the detection phase. In addition, these implementations are based on industry-standard tools, including the Open Computer Vision (OpenCV) library, EmguCV version windows universal CUDA 2.9.0.1922, and heterogeneous processing units. The experiment consists of applying 400 images of 40 persons' faces (10 images per person) and defining, training, and recognizing these images on the four variants; the experiment took place in the same environment for all of them. The speed-up factor is measured with respect to the CPU Mono implementation (the slowest of the four variants). The practical results demonstrated that the Hybrid Parallel variant is the fastest of all, because it gives the largest overall speed-up. The CPU Parallel variant also gives an overall speed-up over the baseline, and the Hybrid Mono variant gives a little improvement, about 1.04. Thus, employing parallel processing on modern computer architectures can accelerate a face recognition system.