
Face Detection Based Attendance System

Abstract:
Automatic face recognition (AFR) technologies have seen dramatic improvements in
performance over the past years, and such systems are now widely used for security and
commercial applications. This project presents an automated system for human face recognition
against a real-time background, which a college can use to mark the attendance of its students.
Real-time face recognition is thus a real-world solution for a routine, day-to-day
administrative task. The task is difficult because real-time background subtraction in an
image is still a challenge. Faces are detected in real time, and a simple, fast Principal
Component Analysis (PCA) is used to recognize the detected faces with a high accuracy rate.
The matched face identifies the correct user. Our system maintains the collection of user
facial features as datasets and uses them for verification.
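As a rough illustration of the PCA step, the following sketch projects flattened face
vectors onto their top principal components ("eigenfaces") and matches a probe against the
nearest enrolled projection. It uses numpy, with random vectors standing in for real face
images; function names such as `fit_pca` and `identify` are illustrative, not from any
particular library.

```python
# A minimal sketch of PCA-based face matching, assuming numpy is available.
# Random vectors stand in for flattened grayscale face images.
import numpy as np

def fit_pca(faces, n_components=8):
    """Compute the mean face and the top principal components ("eigenfaces")."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data; rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, components):
    """Project a face vector into the low-dimensional eigenface space."""
    return components @ (face - mean)

def identify(face, mean, components, gallery):
    """Return the enrolled id whose projection is nearest to the probe face."""
    probe = project(face, mean, components)
    return min(gallery, key=lambda uid: np.linalg.norm(gallery[uid] - probe))

rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 64))          # 20 "images", 64 pixels each
mean, comps = fit_pca(faces)
gallery = {i: project(f, mean, comps) for i, f in enumerate(faces)}
print(identify(faces[3], mean, comps, gallery))   # matches enrolled face 3
```

In the real system the gallery would hold one projection per enrolled employee, and a
distance threshold would reject unknown faces rather than always returning the nearest id.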

INTRODUCTION:

Maintaining attendance is very important in all institutes for checking the
performance of employees (4). Every institute has its own method in this regard.
Some take attendance manually using the old paper- or file-based approach,
and some have adopted methods of automatic attendance using biometric
techniques. In these methods, however, employees have to wait in a long queue
at the time they enter the office. Many biometric systems are available, but the
key authentication steps are the same in all of them. Every biometric system consists
of an enrolment process, in which the unique features of a person are stored in the
database, followed by the processes of identification and verification. These two
processes compare the biometric features of a person with the previously stored
template captured at the time of enrollment. Biometric templates can be of many types,
such as fingerprints, iris, face, hand geometry, signature, gait, and voice. Our
system uses the face recognition approach for the automatic attendance of
employees in the office room environment without employee intervention (2).

Face recognition consists of two steps: first, faces are detected in the image,
and then the detected faces are compared with the database for verification. A
number of methods have been proposed for face detection, e.g. the AdaBoost
algorithm, the FloatBoost algorithm, the S-AdaBoost algorithm, Support Vector
Machines (SVM), and the Bayes classifier. The efficiency of a face recognition
algorithm can be increased with a fast face detection algorithm; of the above
methods, SURF is the most efficient, and our system utilizes it for the
detection of faces in the office room image. Face recognition techniques can be
divided into two types: appearance-based techniques, which use texture features
applied to the whole face or to specific regions, and feature-based techniques,
which use geometric features such as the mouth, nose, eyes, eyebrows, and cheeks,
and the relations between them. Statistical tools such as Linear Discriminant
Analysis (LDA), Principal Component Analysis (PCA), kernel methods, neural
networks, and eigenfaces have been used for the construction of face templates.
An illumination-invariant algorithm is utilized for removing lighting effects inside
the office room.
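The text does not name the specific illumination-invariant algorithm used, so the
following is only one common choice for this step, histogram equalization, sketched with
numpy on a synthetic low-contrast 8-bit image:

```python
# Histogram equalization as a simple illumination-normalization step.
# One common choice, not necessarily the algorithm the system uses.
import numpy as np

def equalize_histogram(image):
    """Spread an 8-bit grayscale image's intensities over the full 0-255 range."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()          # CDF value of the darkest level present
    # Classic histogram-equalization mapping of each grey level through the CDF.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[image]

# A synthetic dim, low-contrast image: intensities confined to 40..79.
dim = (np.arange(1024) % 40 + 40).reshape(32, 32).astype(np.uint8)
out = equalize_histogram(dim)
print(dim.min(), dim.max())   # 40 79
print(out.min(), out.max())   # 0 255
```

After equalization, faces captured under dim office lighting occupy the same intensity
range as well-lit ones, which makes the subsequent detection and matching steps less
sensitive to lighting.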

Objectives:

The objectives follow from the general structure of a biometric system: an
enrolment process, in which the unique features of a person are stored in the
database, and the subsequent processes of identification and verification, which
compare the biometric features of a person with the previously stored template
captured at the time of enrollment. Biometric templates can be of many types,
such as fingerprints, iris, face, hand geometry, signature, gait, and voice. Our
system uses the face recognition approach for the automatic attendance of
employees in the office room environment without employee intervention (2).
Face recognition consists of two steps: first, faces are detected in the image,
and then the detected faces are compared with the database for verification.
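The enrol / verify cycle above can be sketched in a few lines of standard-library Python;
the feature vectors, names and threshold below are purely illustrative, and in the real
system the features would come from the face-recognition front end:

```python
# A stdlib-only sketch of biometric enrolment and 1:1 verification.
import math

database = {}                      # template storage keyed by person id

def enroll(person_id, features):
    """Store the unique features captured at enrollment time."""
    database[person_id] = features

def verify(person_id, features, threshold=0.5):
    """Compare a live capture against the stored template (1:1 match)."""
    template = database[person_id]
    distance = math.dist(template, features)   # Euclidean distance
    return distance <= threshold

enroll("alice", [0.10, 0.80, 0.30])
print(verify("alice", [0.12, 0.79, 0.31]))   # True: close to the template
print(verify("alice", [0.90, 0.10, 0.75]))   # False: too far from it
```

Identification (1:N) would instead compare the capture against every stored template and
pick the closest one, as in the PCA sketch earlier.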

Methodology:

Software Development Life Cycle:

There are various software development approaches defined and designed for use
during the software development process; these approaches are also referred to as
"Software Development Process Models". Each process model follows a particular
life cycle in order to ensure success in the process of software development.
Requirements:

Business requirements are gathered in this phase. This phase is the main
focus of the project managers and stakeholders. Meetings with managers, stakeholders
and users are held in order to determine the requirements. Who is going to
use the system? How will they use the system? What data should be input into the
system? What data should be output by the system? These are the general questions
that get answered during the requirements gathering phase. This produces a large
list of functionality that the system should provide, which describes the functions the
system should perform, the business logic that processes data, the data stored and
used by the system, and how the user interface should work. The overall result describes
the system as a whole and what it does, not how it is actually going to do it.

Design:

The software system design is produced from the results of the requirements
phase. Architects have the ball in their court during this phase, and this is where
their focus lies. This is where the details of how the system will work are
produced. Architecture, including hardware and software, communication, and
software design (UML is produced here) are all part of the deliverables of the design
phase.

Implementation:

Code is produced from the deliverables of the design phase during
implementation, and this is the longest phase of the software development life
cycle. For a developer, this is the main focus of the life cycle because this is where
the code is produced. Implementation may overlap with both the design and testing
phases. Many tools exist (CASE tools) to automate the production of
code using information gathered and produced during the design phase.

Testing:

During testing, the implementation is tested against the requirements to
make sure that the product actually solves the needs addressed and gathered
during the requirements phase. Unit tests and system/acceptance tests are done
during this phase. Unit tests act on a specific component of the system, while
system tests act on the system as a whole.

So, in a nutshell, that is a very basic overview of the general software development
life cycle model. Now let's delve into some of the traditional and widely used
variations.

SDLC METHODOLOGIES:
This document plays a vital role in the software development life cycle (SDLC) as it
describes the complete requirements of the system. It is meant for use by the developers
and will be the basis during the testing phase. Any changes made to the requirements
in the future will have to go through a formal change approval process.

The SPIRAL MODEL was defined by Barry Boehm in his 1988 article, "A Spiral
Model of Software Development and Enhancement." This model was not the first
to discuss iterative development, but it was the first to explain why
iteration matters.

As originally envisioned, the iterations were typically 6 months to 2 years
long. Each phase starts with a design goal and ends with a client reviewing the
progress thus far. Analysis and engineering efforts are applied at each phase of
the project, with an eye toward the end goal of the project.

The following diagram shows how the spiral model works:


The steps for Spiral Model can be generalized as follows:

 The new system requirements are defined in as much detail as possible.
This usually involves interviewing a number of users representing all the
external or internal users and other aspects of the existing system.

 A preliminary design is created for the new system.

 A first prototype of the new system is constructed from the preliminary
design. This is usually a scaled-down system, and represents an
approximation of the characteristics of the final product.

 A second prototype is evolved by a fourfold procedure:

1. Evaluating the first prototype in terms of its strengths, weaknesses, and risks.

2. Defining the requirements of the second prototype.

3. Planning and designing the second prototype.

4. Constructing and testing the second prototype.

 At the customer's option, the entire project can be aborted if the risk is
deemed too great. Risk factors might involve development cost overruns,
operating-cost miscalculation, or any other factor that could, in the
customer's judgment, result in a less-than-satisfactory final product.
 The existing prototype is evaluated in the same manner as was the
previous prototype, and if necessary, another prototype is developed from
it according to the fourfold procedure outlined above.

 The preceding steps are iterated until the customer is satisfied that the
refined prototype represents the final product desired.

 The final system is constructed, based on the refined prototype.

 The final system is thoroughly evaluated and tested. Routine
maintenance is carried out on a continuing basis to prevent large-scale
failures and to minimize downtime.

2.2 STUDY OF THE SYSTEM

For flexibility of use, the interface has been developed with graphics
concepts in mind, accessed through a browser interface. The GUIs at the top
level have been categorized as follows:

1. Administrative User Interface Design

2. The Operational and Generic User Interface Design

The administrative user interface concentrates on the consistent information
that is practically part of the organizational activities and which needs proper
authentication for data collection. The interface helps the administration with
all the transactional states, such as data insertion, data deletion, and data updating,
along with executive data search capabilities.

The operational and generic user interface helps the users of the system in
transactions over the existing data and required services. The operational user
interface also helps ordinary users manage their own information in a customized
manner, as per the assisted flexibilities.

2.3. INPUT AND OUTPUT


2.3.1. INPUT DESIGN

Input design is a part of overall system design. The main objectives during
input design are as given below:

 To produce a cost-effective method of input.
 To achieve the highest possible level of accuracy.
 To ensure that the input is acceptable to and understood by the user.

INPUT STAGES:

The main input stages can be listed as below:

 Data recording
 Data transcription
 Data conversion
 Data verification
 Data control
 Data transmission
 Data validation
 Data correction

INPUT TYPES:

It is necessary to determine the various types of inputs. Inputs can be categorized


as follows:
 External inputs, which are prime inputs for the system.
 Internal inputs, which are user communications with the system.
 Operational, which are the computer department's communications to the system.
 Interactive, which are inputs entered during a dialogue.

INPUT MEDIA:

At this stage a choice has to be made about the input media. To decide on
the input media, consideration has to be given to:

 Type of input
 Flexibility of format
 Speed
 Accuracy
 Verification methods
 Rejection rates
 Ease of correction
 Storage and handling requirements
 Security
 Easy to use
 Portability
Keeping in view the above description of the input types and input media, it
can be said that most of the inputs are internal and interactive. As the
input data is to be directly keyed in by the user, the keyboard can be considered
the most suitable input device.
2.3.2. OUTPUT DESIGN

Outputs from computer systems are required primarily to communicate the results
of processing to users. They are also used to provide a permanent copy of the
results for later consultation. The various types of outputs in general are:

 External outputs, whose destination is outside the organization.
 Internal outputs, whose destination is within the organization and which are the
user's main interface with the computer.
 Operational outputs, whose use is purely within the computer department.
 Interface outputs, which involve the user in communicating directly.

OUTPUT DEFINITION:

The outputs should be defined in terms of the following points:


 Type of the output
 Content of the output
 Format of the output
 Location of the output
 Frequency of the output
 Volume of the output
 Sequence of the output
It is not always desirable to print or display data exactly as it is held on a computer.
It should be decided which form of output is the most suitable.

Literature Survey:

"Face recognition: A literature survey," ACM Computing Surveys
As one of the most successful applications of image analysis and understanding,
face recognition has recently received significant attention, especially during the
past several years. At least two reasons account for this trend: the first is the wide
range of commercial and law enforcement applications, and the second is the
availability of feasible technologies after 30 years of research. Even though current
machine recognition systems have reached a certain level of maturity, their success
is limited by the conditions imposed by many real applications. For example,
recognition of face images acquired in an outdoor environment with changes in
illumination and/or pose remains a largely unsolved problem. In other words,
current systems are still far away from the capability of the human perception
system. This paper provides an up-to-date critical survey of still- and video-based
face recognition research. There are two underlying motivations for us to write this
survey paper: the first is to provide an up-to-date review of the existing literature,
and the second is to offer some insights into the studies of machine recognition of
faces. To provide a comprehensive survey, we not only categorize existing
recognition techniques but also present detailed descriptions of representative
methods within each category. In addition, relevant topics such as psychophysical
studies, system evaluation, and issues of illumination and pose variation are
covered.

"Speeded-Up Robust Features," Computer Vision and Image Understanding

This article presents a novel scale- and rotation-invariant detector and descriptor,
coined SURF (Speeded-Up Robust Features). SURF approximates or even
outperforms previously proposed schemes with respect to repeatability,
distinctiveness, and robustness, yet can be computed and compared much faster.

This is achieved by relying on integral images for image convolutions; by building
on the strengths of the leading existing detectors and descriptors (specifically,
using a Hessian matrix-based measure for the detector, and a distribution-based
descriptor); and by simplifying these methods to the essential. This leads to a
combination of novel detection, description, and matching steps.

The paper encompasses a detailed description of the detector and descriptor and
then explores the effects of the most important parameters. We conclude the article
with SURF’s application to two challenging, yet converse goals: camera
calibration as a special case of image registration, and object recognition. Our
experiments underline SURF’s usefulness in a broad range of topics in computer
vision.

Analysis of Local Appearance-Based Face Recognition: Effects of Feature Selection
and Feature Normalization

In this paper, the effects of feature selection and feature normalization on the
performance of a local appearance-based face recognition scheme are presented.
From the local features that are extracted using a block-based discrete cosine
transform, three feature sets are derived. These local feature vectors are
normalized in two different ways: by making them unit norm, and by dividing
each coefficient by its standard deviation as learned from the training set. The
input test face images are then classified using four different distance measures:
L1 norm, L2 norm, cosine angle, and covariance between feature vectors.
Extensive experiments have been conducted on the AR and CMU PIE face
databases. The experimental results show the importance of using appropriate
feature sets and of normalizing the feature vector.
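The two normalization schemes described, unit-norm scaling and per-coefficient division
by a training-set standard deviation, can be sketched with numpy as follows; the data here
is synthetic and the function names are illustrative:

```python
# The two feature-normalization schemes from the surveyed paper, as a sketch.
import numpy as np

def unit_norm(vectors):
    """Scale each row (feature vector) to unit L2 norm."""
    return vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

def std_normalize(train, test):
    """Divide each coefficient by its standard deviation learned from training data."""
    std = train.std(axis=0)
    return test / std

rng = np.random.default_rng(2)
# Coefficients with very different scales, as DCT coefficients typically have.
train = rng.normal(scale=[1.0, 5.0, 0.2], size=(100, 3))
test = rng.normal(scale=[1.0, 5.0, 0.2], size=(10, 3))

u = unit_norm(test)
print(np.allclose(np.linalg.norm(u, axis=1), 1.0))   # True: every row has norm 1
s = std_normalize(train, test)   # coefficients now on comparable scales
```

Without such normalization, large-scale coefficients would dominate the L1/L2 distance
measures the paper compares, which is exactly why the authors study these schemes.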

Architecture Diagram:

SYSTEM ANALYSIS

The Systems Development Life Cycle (SDLC), or Software Development Life Cycle in
systems engineering, information systems and software engineering, is the process of creating
or altering systems, and the models and methodologies that people use to develop these
systems.

In software engineering the SDLC concept underpins many kinds of software development
methodologies. These methodologies form the framework for planning and controlling the
creation of an information system: the software development process.

SOFTWARE MODEL OR ARCHITECTURE ANALYSIS:

Structured project management techniques (such as an SDLC) enhance
management's control over projects by dividing complex tasks into manageable sections. A
software life cycle model is either a descriptive or a prescriptive characterization of how
software is or should be developed. None of the SDLC models, however, discuss key issues
like the change management, incident management and release management processes within the
SDLC process; these are addressed in the overall project management instead. In the proposed
hypothetical model, the concept of user-developer interaction in the conventional SDLC model
has been converted into a three-dimensional model comprising the user, the owner and the
developer. The "one size fits all" approach to applying SDLC methodologies is no longer
appropriate. We have made an attempt to address the above-mentioned defects by using a new
hypothetical model for SDLC described elsewhere. The drawback of addressing these management
processes under the overall project management is that key technical issues pertaining to the
software development process are missed; that is, these issues are discussed in project
management at the surface level but not at the ground level.

2.5 Functional Requirements

Outputs from computer systems are required primarily to communicate the results of
processing to users. They are also used to provide a permanent copy of the results for later
consultation. The various types of outputs in general are:

 External outputs, whose destination is outside the organization.
 Internal outputs, whose destination is within the organization and which are the
user's main interface with the computer.
 Operational outputs, whose use is purely within the computer department.
 Interface outputs, which involve the user in communicating directly.
 Understanding user’s preferences, expertise level and his business requirements
through a friendly questionnaire.
 Input data can be in four different forms - Relational DB, text files, .xls and xml files. For
testing and demo you can choose data from any domain. User-B can provide business
data as input.

Non-Functional Requirements:
1. Secure access of confidential data (user’s details). SSL can be used.

2. 24 X 7 availability.

3. Better component design to get better performance at peak time

4. Flexible service-based architecture will be highly desirable for future extension.

FEASIBILITY STUDY:
Preliminary investigation examines project feasibility: the likelihood that the system will be
useful to the organization. The main objective of the feasibility study is to test the technical,
operational and economical feasibility of adding new modules and debugging the old running
system. All systems are feasible if they have unlimited resources and infinite time. There are
three aspects in the feasibility study portion of the preliminary investigation:

 Technical Feasibility
 Economical Feasibility
 Operation Feasibility

2.1. Technical Feasibility:

The first step in the feasibility study is that the organization or company has to decide
which technologies are suitable for development, considering the existing system.

The technical issue usually raised during the feasibility stage of the investigation includes
the following:
 Does the necessary technology exist to do what is suggested?
 Does the proposed equipment have the technical capacity to hold the data required to use
the new system?
 Will the proposed system provide adequate response to inquiries, regardless of the
number or location of users?
 Can the system be upgraded if developed?
 Are there technical guarantees of accuracy, reliability, ease of access and data security?
Earlier, no system existed to cater to the needs of the 'Secure Infrastructure Implementation
System'. The current system developed is technically feasible. It is a web-based user interface
for the audit workflow at NIC-CSD, and thus provides easy access to the users. The database's
purpose is to create, establish and maintain a workflow among various entities in order to
facilitate all concerned users in their various capacities or roles. Permission to the users would be
granted based on the roles specified. Therefore, it provides the technical guarantee of
accuracy, reliability and security. The software and hardware requirements for the development of
this project are not many and are already available in-house at NIC or are available free as
open source. The work for the project is done with the current equipment and existing software
technology. The necessary bandwidth exists to provide fast feedback to the users irrespective of
the number of users using the system.

This application uses technologies like Visual Studio 2012 and SQL Server 2014; these are free
software that can be downloaded from the web.

Visual Studio 2013 is the tool/technology used.

2.2. ECONOMICAL FEASIBILITY


A system that can be developed technically, and that will be used if installed, must still be a
good investment for the organization. In economic feasibility, the development cost of
creating the system is evaluated against the ultimate benefit derived from the new system.
Financial benefits must equal or exceed the costs.
The system is economically feasible. It does not require any additional hardware or
software. Since the interface for this system is developed using the existing resources and
technologies available at NIC, there is only nominal expenditure, and economic feasibility is
certain.

Determining Economic Feasibility:

Assessing the economic feasibility of an implementation means performing a
cost/benefit analysis, which, as its name suggests, compares the full/real costs of the
application to its full/real financial benefits. The alternatives should be evaluated on the
basis of their contribution to net cash flow, the amount by which the benefits exceed the
costs, because the primary objective of all investments is to improve overall organizational
performance.
Type: Quantitative

Potential Costs:
 Hardware/software upgrades
 Fully-burdened cost of labor (salary + benefits)
 Support costs for the application
 Expected operational costs
 Training costs for users to learn the application
 Training costs to train developers in new/updated technologies

Potential Benefits:
 Reduced operating costs
 Reduced personnel costs from a reduction in staff
 Increased revenue from additional sales of your organization's products/services

Type: Qualitative

Potential Costs:
 Increased employee dissatisfaction from fear of change

Potential Benefits:
 Improved decisions as the result of access to accurate and timely information
 Raising of existing, or introduction of a new, barrier to entry within your industry
to keep competition out of your market
 Positive public perception that your organization is an innovator

The table includes both qualitative factors (costs or benefits that are subjective in
nature) and quantitative factors (costs or benefits for which monetary values can easily be
identified). Both kinds of factors need to be taken into account when performing a
cost/benefit analysis.

2.3. OPERATIONAL FEASIBILITY


Proposed projects are beneficial only if they can be turned into information systems
that will meet the organization's operating requirements. Operational feasibility aspects of the
project are to be taken as an important part of the project implementation. Some of the important
issues raised to test the operational feasibility of a project include the following:
 Is there sufficient support for the management from the users?
 Will the system be used and work properly once it is developed and implemented?
 Will there be any resistance from the users that will undermine the possible application
benefits?
This system is targeted to be in accordance with the above-mentioned issues. Beforehand, the
management issues and user requirements have been taken into consideration, so there is no
question of resistance from the users that could undermine the possible application benefits.
The well-planned design would ensure the optimal utilization of the computer resources and
would help in the improvement of performance status.

Not only must an application make economic and technical sense, it must also make operational
sense. 

Operations Issues:
 What tools are needed to support operations?
 What skills will operators need to be trained in?
 What processes need to be created and/or updated?
 What documentation does operations need?

Support Issues:
 What documentation will users be given?
 What training will users be given?
 How will change requests be managed?

Very often you will need to improve the existing operations, maintenance, and support
infrastructure to support the operation of the new application that you intend to develop. To
determine what the impact will be, you will need to understand both the current operations and
support infrastructure of your organization and the operations and support characteristics of
your new application. To operate this END-TO-END VMS application, the user does not need any
technical knowledge of the technologies used to develop this project (ASP.NET, C#.NET).
The application provides a rich user interface through which the user can perform operations
in a flexible manner.

SELECTED SOFTWARE
IMPLEMENTATION (PYTHON):

What Is A Script?

Up to this point, I have concentrated on the interactive programming capability of Python.
This is a very useful capability that allows you to type in a program and have it executed
immediately in interactive mode.

Scripts are reusable:

Basically, a script is a text file containing the statements that comprise a Python program.  Once
you have created the script, you can execute it over and over without having to retype it each
time.

Scripts are editable:

Perhaps more importantly, you can make different versions of the script by modifying the
statements from one file to the next using a text editor. Then you can execute each of the
individual versions. In this way, it is easy to create different programs with a minimum amount
of typing.
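For example, a script is nothing more than a text file of Python statements, which can be
run repeatedly with `python <filename>`; the file name and contents below are illustrative:

```python
# hello_faces.py - a minimal example of a Python script: a plain text file of
# statements that can be executed over and over without retyping them.
# (The file name and its contents are illustrative only.)

names = ["alice", "bob", "carol"]

for name in names:
    print(f"Marking attendance for {name}")
```

Saving this as `hello_faces.py` and running `python hello_faces.py` executes the same
statements each time; editing the file produces a new version without retyping the rest.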

You will need a text editor:

Just about any text editor will suffice for creating Python script files.

You can use Microsoft Notepad, Microsoft WordPad, Microsoft Word, or just about any word
processor if you want to.
Difference between a script and a program

Script: Scripts are distinct from the core code of the application, which is usually written in a
different language, and are often created or at least modified by the end user. Scripts are often
interpreted from source code or byte code, whereas the applications they control are
traditionally compiled to native machine code.

Program: A program has an executable form that the computer can use directly to execute
the instructions. The same program also exists in a human-readable source code form, from
which the executable form is derived (e.g., by compilation).

Python

What is Python? Chances are you are asking yourself this. You may have found this document
because you want to learn to program but don't know anything about programming languages.
Or you may have heard of programming languages like C, C++, C#, or Java and want to know
what Python is and how it compares to those "big name" languages. Hopefully I can explain it
for you.

Python concepts

If you are not interested in the hows and whys of Python, feel free to skip to the next
chapter. In this chapter I will try to explain why I think Python is one of the best
languages available and why it's a great one to start programming with.

• Open source general-purpose language.

• Object Oriented, Procedural, Functional

• Easy to interface with C/ObjC/Java/Fortran


• Easy-ish to interface with C++ (via SWIG)

• Great interactive environment

Python is a high-level, interpreted, interactive and object-oriented scripting language. Python
is designed to be highly readable. It frequently uses English keywords where other
languages use punctuation, and it has fewer syntactical constructions than other languages.

 Python is Interpreted − Python is processed at runtime by the interpreter. You do not
need to compile your program before executing it. This is similar to PERL and PHP.

 Python is Interactive − You can actually sit at a Python prompt and interact with the
interpreter directly to write your programs.

 Python is Object-Oriented − Python supports an object-oriented style or technique of
programming that encapsulates code within objects.

 Python is a Beginner's Language − Python is a great language for beginner-level
programmers and supports the development of a wide range of applications, from
simple text processing to WWW browsers to games.

History of Python

Python was developed by Guido van Rossum in the late eighties and early nineties at
the National Research Institute for Mathematics and Computer Science in the Netherlands.

Python is derived from many other languages, including ABC, Modula-3, C, C++, Algol-68,
SmallTalk, and Unix shell and other scripting languages.

Python is copyrighted. Like Perl, Python source code is now available under the GNU General
Public License (GPL).
Python is now maintained by a core development team at the institute, although Guido van
Rossum still holds a vital role in directing its progress.

Python Features
Python's features include −

 Easy-to-learn − Python has few keywords, simple structure, and a clearly defined
syntax. This allows the student to pick up the language quickly.

 Easy-to-read − Python code is more clearly defined and visible to the eyes.

 Easy-to-maintain − Python's source code is fairly easy-to-maintain.

 A broad standard library − Python's bulk of the library is very portable and cross-
platform compatible on UNIX, Windows, and Macintosh.

 Interactive Mode − Python has support for an interactive mode which allows
interactive testing and debugging of snippets of code.

 Portable − Python can run on a wide variety of hardware platforms and has the same
interface on all platforms.

 Extendable − You can add low-level modules to the Python interpreter. These modules
enable programmers to add to or customize their tools to be more efficient.

 Databases − Python provides interfaces to all major commercial databases.

 GUI Programming − Python supports GUI applications that can be created and ported
to many system calls, libraries and windows systems, such as Windows MFC,
Macintosh, and the X Window system of Unix.

 Scalable − Python provides a better structure and support for large programs than shell
scripting.
Apart from the above-mentioned features, Python has a big list of good features, few are listed
below −

 It supports functional and structured programming methods as well as OOP.

 It can be used as a scripting language or can be compiled to byte-code for building large
applications.

 It provides very high-level dynamic data types and supports dynamic type checking.

 It supports automatic garbage collection.

 It can be easily integrated with C, C++, COM, ActiveX, CORBA, and Java.

Dynamic vs Static Types

Python is a dynamically typed language. Many other languages are statically typed, such
as C/C++ and Java. A statically typed language requires the programmer to explicitly tell the
computer what type of "thing" each data value is.

For example, in C if you had a variable that was to contain the price of something, you would
have to declare the variable as a “float” type.

This tells the compiler that the only data that can be used for that variable must be a floating
point number, i.e. a number with a decimal point.

If any other data value was assigned to that variable, the compiler would give an error when
trying to compile the program.

Python, however, doesn’t require this. You simply give your variables names and assign values
to them. The interpreter takes care of keeping track of what kinds of objects your program is
using. This also means that you can change the size of the values as you develop the program.
Say you have another decimal number (a.k.a. a floating point number) you need in your
program.

With a static typed language, you have to decide the memory size the variable can take when
you first initialize that variable. A double is a floating point value that can handle a much larger
number than a normal float (the actual memory sizes depend on the operating environment).

If you declare a variable to be a float but later on assign a value that is too big to it, your
program will fail; you will have to go back and change that variable to be a double.

With Python, it doesn’t matter. You simply give it whatever number you want and Python will
take care of manipulating it as needed. It even works for derived values.

For example, say you are dividing two numbers. One is a floating point number and one is an
integer. Python realizes that it's more accurate to keep track of decimals, so it automatically
calculates the result as a floating point number.
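A minimal sketch of this behavior at the interpreter (the variable names are illustrative):

```python
# Dynamic typing: the same name can be rebound to values of different types.
price = 10          # starts life as an int
price = 19.99       # now a float; no declaration or re-declaration needed

# Mixed-type arithmetic: dividing a float by an int yields a float.
result = 7.0 / 2
print(type(price).__name__)   # float
print(result)                 # 3.5
```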

Variables

Variables are nothing but reserved memory locations to store values. This means that
when you create a variable you reserve some space in memory.

Based on the data type of a variable, the interpreter allocates memory and decides what can be
stored in the reserved memory. Therefore, by assigning different data types to variables, you
can store integers, decimals or characters in these variables.

Standard Data Types


The data stored in memory can be of many types. For example, a person's age is stored
as a numeric value and his or her address is stored as alphanumeric characters. Python has
various standard data types that are used to define the operations possible on them and the
storage method for each of them.

Python has five standard data types −


 Numbers

 String

 List

 Tuple

 Dictionary

Python Numbers
Number data types store numeric values. Number objects are created when you assign a
value to them.
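For example, each assignment below creates a number object of a different numeric type:

```python
# Number objects come into existence on assignment.
count = 42        # int
price = 9.99      # float
z = 2 + 3j        # complex
print(type(count).__name__, type(price).__name__, type(z).__name__)
```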

Python Strings
Strings in Python are identified as a contiguous set of characters represented in
quotation marks. Python allows pairs of either single or double quotes. Subsets of strings can
be taken using the slice operator ([ ] and [:]) with indexes starting at 0 at the beginning of the
string and working their way from -1 at the end.
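The slice operator in action (the string itself is just an example):

```python
s = "Hello, Python"
print(s[0])       # 'H'      -- indexing starts at 0
print(s[0:5])     # 'Hello'  -- slice up to, but not including, index 5
print(s[-6:])     # 'Python' -- negative indexes count back from the end
```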

Python Lists
Lists are the most versatile of Python's compound data types. A list contains items
separated by commas and enclosed within square brackets ([]). To some extent, lists are similar
to arrays in C. One difference between them is that the items belonging to a list can be of
different data types.

The values stored in a list can be accessed using the slice operator ([ ] and [:]) with indexes
starting at 0 at the beginning of the list and working their way to -1 at the end. The plus (+) sign is the
list concatenation operator, and the asterisk (*) is the repetition operator.
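A short illustration of these list operations (the sample values are arbitrary):

```python
items = [1, "two", 3.0]        # items may have different data types
print(items[0])                # 1
print(items[1:])               # ['two', 3.0]
print(items + [4])             # concatenation with +
print([0] * 3)                 # repetition with *
items[0] = 99                  # lists are mutable
print(items)                   # [99, 'two', 3.0]
```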

Python Tuples
A tuple is another sequence data type that is similar to the list. A tuple consists of a
number of values separated by commas. Unlike lists, however, tuples are enclosed within
parentheses.

The main differences between lists and tuples are: Lists are enclosed in brackets ( [ ] ) and their
elements and size can be changed, while tuples are enclosed in parentheses ( ( ) ) and cannot be
updated. Tuples can be thought of as read-only lists.
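The read-only nature of tuples can be demonstrated directly:

```python
point = (3, 4)                 # parentheses, not square brackets
print(point[0])                # 3 -- indexing works just like a list
try:
    point[0] = 99              # but updating an element fails
except TypeError:
    print("tuples cannot be updated")
```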

Python Dictionary
Python's dictionaries are a kind of hash table. They work like the associative arrays or
hashes found in Perl and consist of key-value pairs. A dictionary key can be almost any Python
type, but keys are usually numbers or strings. Values, on the other hand, can be any arbitrary
Python object.

Dictionaries are enclosed by curly braces ({ }) and values can be assigned and accessed using
square braces ([]).
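For example (the keys and values here are illustrative):

```python
ages = {"alice": 30, "bob": 25}   # curly braces enclose key-value pairs
print(ages["alice"])              # 30 -- access with square braces
ages["carol"] = 41                # assignment with square braces
print("carol" in ages)            # True
print(sorted(ages))               # ['alice', 'bob', 'carol']
```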

Different modes in python

Python has two basic modes: normal and interactive.

The normal mode is the mode where the scripted and finished .py files are run in the Python
interpreter.

Interactive mode is a command line shell which gives immediate feedback for each statement,
while keeping previously fed statements in active memory. As new lines are fed into the
interpreter, the fed program is evaluated both in part and in whole.

20 Python libraries
1. Requests. The most famous HTTP library, written by Kenneth Reitz. It's a must-have for every
Python developer.
2. Scrapy. If you are involved in web scraping then this is a must-have library for you. After using
this library you won't use any other.
3. wxPython. A GUI toolkit for Python. I have primarily used it in place of tkinter. You will really
love it.
4. Pillow. A friendly fork of PIL (Python Imaging Library). It is more user-friendly than PIL and
is a must-have for anyone who works with images.
5. SQLAlchemy. A database library. Many love it and many hate it. The choice is yours.
6. BeautifulSoup. I know it's slow, but this XML and HTML parsing library is very useful for
beginners.
7. Twisted. The most important tool for any network application developer. It has a very
beautiful API and is used by a lot of famous Python developers.
8. NumPy. How can we leave out this very important library? It provides advanced math
functionality for Python.
9. SciPy. When we talk about NumPy then we have to talk about SciPy. It is a library of
algorithms and mathematical tools for Python and has caused many scientists to switch from
Ruby to Python.
10. matplotlib. A numerical plotting library. It is very useful for any data scientist or data
analyst.
11. Pygame. Which developer does not like to play games and develop them? This library will
help you achieve your goal of 2D game development.
12. Pyglet. A 3D animation and game creation engine. This is the engine in which the
famous Python port of Minecraft was made.
13. PyQt. A GUI toolkit for Python. It is my second choice after wxPython for developing
GUIs for my Python scripts.
14. PyGTK. Another Python GUI library. It is the library in which the famous BitTorrent
client was created.
15. Scapy. A packet sniffer and analyzer for Python, made in Python.
16. pywin32. A Python library which provides some useful methods and classes for interacting
with Windows.
17. NLTK. Natural Language Toolkit. I realize most people won't be using this one, but it's
generic enough. It is a very useful library if you want to manipulate strings, but its capacity
goes beyond that. Do check it out.
18. nose. A testing framework for Python. It is used by millions of Python developers. It is a
must-have if you do test-driven development.
19. SymPy. SymPy can do algebraic evaluation, differentiation, expansion, complex numbers,
etc. It is contained in a pure Python distribution.
20. IPython. I just can't stress enough how useful this tool is. It is a Python prompt on steroids. It
has completion, history, shell capabilities, and a lot more. Make sure that you take a look at it.

Numpy

NumPy's main object is the homogeneous multidimensional array. It is a table of elements
(usually numbers), all of the same type, indexed by a tuple of positive integers. In NumPy,
dimensions are called axes. The number of axes is the rank.

• Offers Matlab-ish capabilities within Python

• Fast array operations

• 2D arrays, multi-D arrays, linear algebra etc.
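A small sketch of these ideas, assuming NumPy is installed:

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])   # a 2-D array: two axes
print(a.ndim)         # 2 -- the number of axes (the rank)
print(a.shape)        # (2, 3)
print(a * 2)          # fast element-wise arithmetic
print(a.sum(axis=0))  # [5 7 9] -- sums along the first axis
```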

Matplotlib

• High quality plotting library.

Python class and objects

These are the building blocks of OOP. A class creates a new object. This object can be
anything, whether an abstract data concept or a model of a physical object, e.g. a chair. Each
class has individual characteristics unique to that class, including variables and methods. Classes
are very powerful and currently "the big thing" in most programming languages. Hence, there
are several chapters dedicated to OOP later in the book.
The class is the most basic component of object-oriented programming. Previously, you learned
how to use functions to make your program do something.

Now we will move into the big, scary world of Object-Oriented Programming (OOP). To be honest,
it took me several months to get a handle on objects.

When I first learned C and C++, I did great; functions just made sense for me.

Having messed around with BASIC in the early ’90s, I realized functions were just like
subroutines so there wasn’t much new to learn.

However, when my C++ course started talking about objects, classes, and all the new features
of OOP, my grades definitely suffered.

Once you learn OOP, you’ll realize that it’s actually a pretty powerful tool. Plus many Python
libraries and APIs use classes, so you should at least be able to understand what the code is
doing.

One thing to note about Python and OOP: it's not mandatory to use objects in your code; use
them in whatever way works best. Maybe you don't need a full-blown class with initialization code
and methods to just return a calculation. With Python, you can get as technical as you want.

As you've already seen, Python can do just fine with functions. Unlike languages such as Java,
you aren't tied down to a single way of doing things; you can mix functions and classes as
necessary in the same program. This lets you build the code in the way that best fits the problem.

Objects are an encapsulation of variables and functions into a single entity. Objects get their
variables and functions from classes. Classes are essentially a template to create your objects.

Here’s a brief list of Python OOP ideas:

• The class statement creates a class object and gives it a name. This creates a new namespace.
• Assignments within the class create class attributes. These attributes are accessed by
qualifying the name using dot syntax: ClassName.Attribute.

• Class attributes export the state of an object and its associated behavior. These attributes are
shared by all instances of a class.

• Calling a class (just like a function) creates a new instance of the class.

This is where the multiple copies part comes in.

• Each instance gets ("inherits") the default class attributes and gets its own namespace. This
prevents instance objects from overlapping and confusing the program.

• Using the term self identifies a particular instance, allowing for per-instance attributes. This
allows items such as variables to be associated with a particular instance.
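The points above can be sketched with a small, hypothetical class (the names are illustrative):

```python
class Chair:
    """The class statement creates a class object and a new namespace."""
    legs = 4                      # class attribute, shared by all instances

    def __init__(self, color):
        self.color = color        # self gives each instance its own attribute

# Calling the class like a function creates new, independent instances.
red = Chair("red")
blue = Chair("blue")
print(Chair.legs)                 # 4 -- qualified as ClassName.Attribute
print(red.color, blue.color)      # red blue -- per-instance namespaces
```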

Inheritance

First off, classes allow you to modify a program without really making changes to it.

To elaborate, by subclassing a class, you can change the behavior of the program by simply
adding new components to it rather than rewriting the existing components.

As we’ve seen, an instance of a class inherits the attributes of that class.

However, classes can also inherit attributes from other classes. Hence, a subclass inherits from
a superclass allowing you to make a generic superclass that is specialized via subclasses.

The subclasses can override the logic in a superclass, allowing you to change the behavior of
your classes without changing the superclass at all.
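A minimal sketch of a subclass overriding a generic superclass (class names are illustrative):

```python
class Animal:                      # a generic superclass
    def speak(self):
        return "..."

class Dog(Animal):                 # a subclass specializing Animal
    def speak(self):               # overrides the superclass logic
        return "Woof"

print(Animal().speak())   # ...
print(Dog().speak())      # Woof -- behavior changed without touching Animal
```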

Operator Overloads
Operator overloading simply means that objects that you create from classes can respond to
actions (operations) that are already defined within Python, such as addition, slicing, printing,
etc.

Even though these actions can be implemented via class methods, using overloading ties the
behavior more closely to Python's object model, and the object interfaces are more consistent
with Python's built-in objects; hence, overloading is easier to learn and use.

User-made classes can override nearly all of Python’s built-in operation methods.
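For instance, a small illustrative class can respond to addition and printing by overriding `__add__` and `__repr__`:

```python
class Vector:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):              # responds to the + operator
        return Vector(self.x + other.x, self.y + other.y)

    def __repr__(self):                    # responds to printing
        return "Vector(%d, %d)" % (self.x, self.y)

v = Vector(1, 2) + Vector(3, 4)
print(v)    # Vector(4, 6)
```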

Exceptions

I've talked about exceptions before, but now I will talk about them in depth. Essentially,
exceptions are events that modify a program's flow, either intentionally or due to errors.

They are special events that can occur due to an error, e.g. trying to open a file that doesn’t
exist, or when the program reaches a marker, such as the completion of a loop.

Exceptions, by definition, don’t occur very often; hence, they are the "exception to the rule"
and a special class has been created for them. Exceptions are everywhere in Python.

Virtually every module in the standard Python library uses them, and Python itself will raise
them in a lot of different circumstances.

Here are just a few examples:

• Accessing a non-existent dictionary key will raise a KeyError exception.

• Searching a list for a non-existent value will raise a ValueError exception.

• Calling a non-existent method will raise an AttributeError exception.

• Referencing a non-existent variable will raise a NameError exception.

• Mixing datatypes without coercion will raise a TypeError exception.
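Two of the cases above, sketched with try/except so the program keeps running:

```python
d = {"a": 1}
try:
    d["missing"]                  # non-existent dictionary key
except KeyError:
    print("caught KeyError")      # the program continues

try:
    [1, 2, 3].index(99)           # non-existent list value
except ValueError:
    print("caught ValueError")
```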


One use of exceptions is to catch a fault and allow the program to continue working; we have
seen this before when we talked about files.

This is the most common way to use exceptions. When programming with the Python
command line interpreter, you don’t need to worry about catching exceptions.

Your program is usually short enough to not be hurt too much if an exception occurs.

Plus, having the exception occur at the command line is a quick and easy way to tell if your code
logic has a problem.

However, if the same error occurs in your real program, it will fail and stop working.
Exceptions can be created manually in the code by raising an exception.

A raised exception operates exactly like a system-caused exception, except that the programmer is doing it on
purpose. This can be for a number of reasons. One of the benefits of using exceptions is that, by
their nature, they don't put any overhead on the code processing.

Because exceptions aren’t supposed to happen very often, they aren’t processed until they
occur.

Exceptions can be thought of as a special form of the if/elif statements. You can realistically do
the same thing with if blocks as you can with exceptions.

However, as already mentioned, exceptions aren’t processed until they occur; if blocks are
processed all the time.

Proper use of exceptions can help the performance of your program.

The more infrequent the error might occur, the better off you are to use exceptions; using if
blocks requires Python to always test extra conditions before continuing.

Exceptions also make code management easier: if your programming logic is mixed in with
error-handling if statements, it can be difficult to read, modify, and debug your program.

User-Defined Exceptions
I won't spend too much time talking about this, but Python does allow a programmer to
create their own exceptions.

You probably won’t have to do this very often but it’s nice to have the option when necessary.

However, before making your own exceptions, make sure there isn’t one of the built-in
exceptions that will work for you.

They have been "tested by fire" over the years and not only work effectively but have also been
optimized for performance and are bug-free.

Making your own exceptions involves object-oriented programming, which will be covered in
the next chapter.

To make a custom exception, the programmer determines which base exception to use as the
class to inherit from; e.g. an exception for negative numbers or one for imaginary
numbers would probably fall under the ArithmeticError exception class.

To make a custom exception, simply inherit the base exception and define what it will do.
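A minimal sketch of the negative-number case mentioned above (the exception and function names are illustrative):

```python
class NegativeNumberError(ArithmeticError):
    """A custom exception inheriting from a built-in base."""
    pass

def check_positive(n):
    if n < 0:
        raise NegativeNumberError("%d is negative" % n)
    return n

try:
    check_positive(-5)
except NegativeNumberError as err:
    print("caught:", err)     # caught: -5 is negative
```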

Python modules

Python allows us to store our code in files (also called modules). This is very useful for
more serious programming, where we do not want to retype a long function definition from the
very beginning just to change one mistake. In doing this, we are essentially defining our own
modules, just like the modules defined already in the Python library.

To support this, Python has a way to put definitions in a file and use them in a script or in an
interactive instance of the interpreter. Such a file is called a module; definitions from a module
can be imported into other modules or into the main module.
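A sketch of the workflow: write a definition to a file, then import it as a module (`greetings` is an illustrative name):

```python
import sys
sys.path.insert(0, ".")                        # ensure the current directory is searched

# Save a definition in a file -- the file becomes a module.
with open("greetings.py", "w") as f:
    f.write("def hello(name):\n    return 'Hello, ' + name\n")

import greetings                               # definitions from greetings.py are now usable
print(greetings.hello("world"))                # Hello, world
```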

Testing code
As indicated above, code is usually developed in a file using an editor.

To test the code, import it into a Python session and try to run it.

Usually there is an error, so you go back to the file, make a correction, and test again.

This process is repeated until you are satisfied that the code works.

The entire process is known as the development cycle.

There are two types of errors that you will encounter. Syntax errors occur when the form of
some command is invalid.

This happens when you make typing errors such as misspellings, or call something by the wrong
name, and for many other reasons. Python will always give an error message for a syntax error.
The second type, run-time errors, occur only while the code runs and are reported as exceptions.

Functions in Python

It is possible, and very useful, to define our own functions in Python. Generally
speaking, if you need to do a calculation only once, then use the interpreter. But when you
or others need to perform a certain type of calculation many times, then define a
function.

You use functions in programming to bundle a set of instructions that you want
to use repeatedly or that, because of their complexity, are better self-contained in a
sub-program and called when needed. That means that a function is a piece of code
written to carry out a specified task.

To carry out that specific task, the function might or might not need multiple
inputs. When the task is carried out, the function may or may not return one or more
values. Python also ships with many ready-made built-in functions, for example:

help(), min(), print().
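A small user-defined function alongside a built-in (the names and values are illustrative):

```python
def area(width, height=1.0):
    """Return the area of a rectangle; height is optional."""
    return width * height

print(area(3, 4))     # 12  -- both inputs supplied
print(area(5))        # 5.0 -- the default height is used
print(min(3, 1, 2))   # 1   -- built-ins like min() need no definition
```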

Python Namespace

Generally speaking, a namespace (sometimes also called a context) is a naming system
for making names unique to avoid ambiguity. Everybody knows a namespacing system from
daily life, i.e. the naming of people by first name and family name (surname).

An example is a network: each network device (workstation, server, printer, ...) needs a
unique name and address. Yet another example is the directory structure of file systems.

The same file name can be used in different directories; the files can then be uniquely accessed via
their pathnames.
Many programming languages use namespaces or contexts for identifiers. An identifier defined
in a namespace is associated with that namespace.

This way, the same identifier can be independently defined in multiple namespaces. (Like the
same file names in different directories) Programming languages, which support namespaces,
may have different rules that determine to which namespace an identifier belongs. 

Namespaces in Python are implemented as Python dictionaries; this means each is a mapping from
names (keys) to objects (values). The user doesn't have to know this to write a Python program
or to use namespaces.

Some namespaces in Python:

 global names of a module

 local names in a function or method invocation

 built-in names: this namespace contains built-in functions (e.g. abs(), cmp(), ...) and
built-in exception names
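The three namespaces above can be seen side by side (the identifier names are illustrative):

```python
x = "global"                  # defined in the module's global namespace

def shadow():
    x = "local"               # an independent x in the local namespace
    return x

print(shadow())   # local
print(x)          # global -- the two identifiers never collide
print(abs(-3))    # 3 -- abs lives in the built-in namespace
```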

Garbage Collection

The garbage collector module (gc) exposes the underlying memory management mechanism of Python,
the automatic garbage collector. The module includes functions for controlling how the
collector operates and for examining the objects known to the system, either pending collection
or stuck in reference cycles and unable to be freed.
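A sketch of the reference-cycle case using the standard gc module (the class and attribute names are illustrative):

```python
import gc

class Node:
    pass

a, b = Node(), Node()
a.partner, b.partner = b, a   # a reference cycle: a -> b -> a
del a, b                      # the names are gone, but the cycle remains

found = gc.collect()          # run the cycle collector by hand;
print(found >= 2)             # it reports how many unreachable objects it found
```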

Existing system:
 The user authentication process was manual, using the old paper or file based approach;
some organizations have adopted methods of automatic attendance using biometric techniques.
Disadvantages:
 With these existing methods, users have to wait for a long time in a queue when they
enter the office. Many biometric systems are available, but the key authentication steps are
the same in all the techniques.
 Every biometric system consists of an enrolment process, in which the unique features of a
person are stored in the database, followed by the processes of identification and
verification.

Proposed system:
The system consists of a camera that captures images of the users and sends them to the image
enhancement module. After enhancement, the image passes to the Face Detection and
Recognition modules, where the user's face is recognized against the existing dataset. This is shown in
the experimental setup in the Figure. At the time of enrolment, templates of the face images of
individual users are stored in the face dataset. All the faces are detected from the input
image and the algorithm compares them one by one with the face database.

Advantages:
 In this way a lot of time is saved, and this highly secure process provides an effective way of
performing user authentication.
 Attendance is maintained in an Excel sheet, so anyone can access it for purposes such as
administration, or the employees can access it themselves.
SOFTWARE REQUIREMENT SPECIFICATION

5.1 Requirements Specification:

The requirements specification deals with the software and hardware resources that need to be
installed on a server to provide optimal functioning for the application. These software and
hardware requirements need to be installed before the packages are installed. They are the
most common set of requirements defined by any operating system, and they provide compatible
support to the operating system when developing an application.

5.1.1 HARDWARE REQUIREMENTS:

The hardware requirement specifies each interface of the software elements and the
hardware elements of the system. These hardware requirements include configuration
characteristics.
 System : Pentium IV 2.4 GHz.
 Hard Disk : 100 GB.
 Monitor : 15 VGA Color.
 Mouse : Logitech.
 RAM : 1 GB.

5.1.2 SOFTWARE REQUIREMENTS:

The software requirements specify the use of all required software products like data
management system. The required software product specifies the numbers and version. Each
interface specifies the purpose of the interfacing software as related to this software product.


Software Requirement:

 Programming language: Python

 Front end: tkinter
 Library: OpenCV
System Design

7.1 SYSTEM ARCHITECTURE

The purpose of the design phase is to arrange a solution to the problem specified by the
requirements document. This phase is the first step in moving from the problem domain to
the solution domain. The design phase satisfies the requirements of the system. The design
of a system is probably the most critical factor affecting the quality of the
software. It has a major impact on the later phases, particularly testing and maintenance.

The output of this phase is the design document. This document is analogous to a
blueprint of the solution and is used later during implementation, testing and
maintenance. The design activity is commonly divided into two separate phases: System
Design and Detailed Design.

System Design, also referred to as top-level design, aims to identify the modules that
should be in the system, the specifications of those modules, and the way they interact
with one another to produce the desired results.

At the end of the system design, all the major data structures, file formats, output
formats, and the major modules in the system and their specifications are decided.
System design is the process or art of defining the architecture, components,
modules, interfaces, and data for a system to satisfy the specified requirements. Users can view it
as the application of systems theory to development.

In Detailed Design, the internal logic of each of the modules specified in system design is
determined. During this phase, the details of the data of a module are usually
specified in a high-level design description language that is independent of the target
language in which the software will eventually be implemented.
In system design the focus is on identifying the modules, whereas during
detailed design the focus is on designing the logic for each of the modules.
Figure 7.1: Architecture diagram

7.2 DATA FLOW DIAGRAMS

A Data Flow Diagram (DFD) can also be termed a bubble chart. It is a pictorial or graphical
form which can be applied to represent the input data to a system, the multiple functions
carried out on the data, and the output generated by the system.

It is a graphical tool used to describe and analyze the movement of data through
a system, manual or automatic, together with the processes, stores of data, and delays
in the system. The transformation of data from input to output, through
processes, may be described logically and independently of the physical components associated with the
system. The DFD is also known as a data flow graph or a bubble chart. The basic notation
used to create a DFD is as follows:

 Dataflow:

 Process:

 Source:

 Data Store:

 Rhombus: decision

7.3 UML DIAGRAMS


The Unified Modeling Language allows the software engineer to express an analysis model
using a modeling notation that is governed by a set of syntactic, semantic and pragmatic rules.

A UML system is represented using five different views that describe the system from distinctly
different perspectives. Each view is defined by a set of diagrams, as follows.

User Model View

This view represents the system from the user's perspective. The analysis representation
describes a usage scenario from the end-user's perspective.

Structural Model view

In this model the data and functionality are viewed from inside the system. This model view
models the static structures.

Behavioral Model View

It represents the dynamic or behavioral parts of the system, depicting the interactions
between the various structural elements described in the user model and structural
model views.

Implementation Model View


In this view the structural and behavioral parts of the system are represented as they are to be
built.

7.3.1 USE CASE DIAGRAM


A use case diagram, at its simplest, is a representation of a user's interaction with the
system, depicting the specifications of a use case. A use case diagram can portray the
different types of users of a system and the various ways that they interact with the system.
This type of diagram is typically used in conjunction with the textual use case and will often be
accompanied by other types of diagrams as well.
Figure 7.3.1 Use Case Diagram

7.3.2 CLASS DIAGRAM

The class diagram is the main building block of object-oriented modeling. It is used both for
general conceptual modeling of the systematics of the application and for detailed modeling,
translating the models into programming code. Class diagrams can also be used for data
modeling. The classes in a class diagram represent both the main objects and interactions in the
application and the classes to be programmed. In the diagram, classes are represented with
boxes which contain three parts:

The upper part holds the name of the class

The middle part contains the attributes of the class

The bottom part gives the methods or operations the class can take or undertake.
Figure 7.3.2: Class Diagram.

7.3.3 SEQUENCE DIAGRAM

A sequence diagram is a kind of interaction diagram that shows how processes operate
with one another and in what order. It is a construct of a Message Sequence Chart. A sequence
diagram shows object interactions arranged in time sequence. It depicts the objects and classes
involved in the scenario and the sequence of messages exchanged between the objects needed
to carry out the functionality of the scenario. Sequence diagrams are typically associated with
use case realizations in the Logical View of the system under development. Sequence diagrams
are sometimes called event diagrams, event scenarios, and timing diagrams.
Figure 7.3.3: Sequence diagram

7.3.4 ACTIVITY DIAGRAM

Activity diagrams are graphical representations of workflows of stepwise activities and actions
with support for choice, iteration and concurrency. In the Unified Modeling Language, activity
diagrams can be used to describe the business and operational step-by-step workflows of
components in a system. An activity diagram shows the overall flow of control.

Figure 7.3.4: Activity Diagram


Component Diagram

Implementation/coding
Screens:
Code

from cx_Freeze import setup, Executable
import sys, os

PYTHON_INSTALL_DIR = os.path.dirname(os.path.dirname(os.__file__))
os.environ['TCL_LIBRARY'] = os.path.join(PYTHON_INSTALL_DIR, 'tcl', 'tcl8.6')
os.environ['TK_LIBRARY'] = os.path.join(PYTHON_INSTALL_DIR, 'tcl', 'tk8.6')

base = None
if sys.platform == 'win32':
    base = None

executables = [Executable("train.py", base=base)]

packages = ["idna", "os", "sys", "cx_Freeze", "tkinter", "cv2", "setup",
            "numpy", "PIL", "pandas", "datetime", "time"]
options = {
    'build_exe': {
        'packages': packages,
    },
}

setup(
    name="ToolBox",
    options=options,
    version="0.0.1",
    description='Vision ToolBox',
    executables=executables
)

# Build with: python setup.py build

CONCLUSION:

The Automated Attendance System has been envisioned for the purpose of reducing the
errors that occur in the traditional (manual) attendance-taking system. The aim is to
automate attendance and build a system that is useful to an organization such as an institute.
It provides an efficient and accurate method of attendance in the office environment that can
replace the old manual methods. This method is secure, reliable and
available for use. No specialized hardware is needed for installing the system in the
office; it can be constructed using a camera and a computer.

