A Project Report on
PYTHON PROJECT
Submitted in partial fulfilment of the requirements for the award of the
Degree of Bachelor of Technology
Branch: Computer Science and Engineering
SUBMITTED BY:
SHUBHAM DIGRA
PRAKASH SINGH
SUSHANT PANDITA
AY-2022-2023
DEPARTMENT OF COMPUTER
INSTITUTES, AMRITSAR
MAY, 2021
CHAPTER 1
ThinkNEXT has multiple smart card printing, smart card encoding
and barcode label printing machines of its own to provide better and more
effective customer support solutions.
ThinkNEXT has also set up its own placement consultancy and
has numerous placement partner companies to provide the best possible
placements in the IT/Electronics industry. ThinkNEXT has numerous clients
across the globe. ThinkNEXT also has offices in the USA, Canada, New
Delhi, Shimla and Bathinda.
ThinkNEXT Technologies has developed its own cloud computing based
Cloud Campus 4.0 to facilitate knowledge and placement centric services. It
is a unique concept for effective and collaborative learning. ThinkNEXT
Cloud Campus is a step towards not only 100% placements, but also better
job offers even after placements.
1.2.1 ThinkNEXT Industrial Training Programs under Digital India
Scheme (ESDM) and PMKVY 2.0
ThinkNEXT is an accredited training partner for the Digital India
Government Scheme (ESDM) and PMKVY 2.0. Under these schemes, ThinkNEXT
offers free 6-month industrial training in the following programs:
1. Telecom Technician – PC Hardware and Networking
2. Embedded Systems
3. PLC/SCADA (Advanced)
4. Computer Hardware
5. Junior Software Developer
6. Computer Networking and Storage
Under this scheme, dual certification will be provided to students, i.e. from
ThinkNEXT and the Government of India.
Students will also be provided National Skill Certificates approved by 5
Government bodies.
1.2.2 ThinkNEXT Industrial Training Programs:
CSE/IT/MCA:
1. SAP (ABAP)
2. PHP
3. Android
4. Java
5. SAP (ABAP, MM, PP, SD, HR)
6. .Net
7. Web Designing
8. Professional Hardware, Networking, CCNA, CCNP (With Routers and Managed Switches)
9. Software Testing
10. Digital Marketing
Electronics/Electrical:
1. SAP (ABAP, MM, PP)
2. Embedded Systems
3. PLC/SCADA (Industrial Automation)
4. Professional Hardware, Networking, CCNA (With Routers and Managed Switches)
5. Android
Mechanical:
1. SAP (MM, PP)
2. AutoCAD
3. Solidworks
4. CNC Programming
5. Solidcam/Delcam/Mastercam
6. CATIA
7. CREO
8. ANSYS
9. NX Unigraphics
Mechanical Industry Tie-ups:
1. Ess Dee Engineers
2. Urgent Engineering
3. 3D Technologies Private Limited
Civil/Architecture:
1. SAP (MM)
2. AutoCAD
3. STAADPro
4. 3DS Max
5. Revit
6. Primavera
Civil Companies Tie-ups:
1. Bajwa Developers Limited
2. TDI Group
3. JLPL Group
1.3 Why ThinkNEXT?
1. National Icon Award Winner for "Best Web Development and Industrial Training Company"
2. Google Partner, Microsoft Accredited Professional, Facebook Blueprint and HubSpot Certified Company
3. Got the Award for "Excellence in Industrial Training" at Corporate Summit 2017
4. An ISO 9001:2008 Certified, Private Limited Company
5. National Skill Development Corporation Partner Company (NSDC Partner)
6. Accredited Training Partner of the National Institute of Electronics and Information Technology, Department of Electronics and Information Technology, Ministry of Communications and Information Technology
7. Approved by the Ministry of Corporate Affairs, Govt. of India. Corporate Identity No. U72200PB2011PTCO35677
8. Affiliated to the Indian Testing Board
9. Accredited Training Partner of ISTQB (International Software Testing Qualifications Board)
10. Approved by the Department of IT (DoIT), Punjab
11. Approved by the Board of Apprenticeship Training, Ministry of HRD, Govt. of India
12. Member of CII (Confederation of Indian Industry), Membership No. N5238P
13. Approved by the Ministry of Skill Development and Entrepreneurship
14. Accredited Training Partner for PMKVY 2.0, Skill Development in Electronics Systems Design and Manufacturing for Digital India
15. Accredited Training Partner for PSDM (Punjab Skill Development Mission)
1.4 Clients:
Some of our prestigious software clients for various ThinkNEXT
products/services are:
1. Coromandel International Limited, Secunderabad
2. Medzel, USA
3. Nature9 Inc., USA
4. Punjab Technical University, Jalandhar
5. Maharaja Ranjit Singh Punjab Technical University, Bathinda
6. Guru Kashi University, Talwandi Sabo
7. Rayat Group of Institutions, Ropar
8. Aryans Group of Institutions, Rajpura
9. Punjabi University Patiala
10. Bhai Gurdas Group of Institutions
11. Baba Farid Group of Institutions, Bathinda
12. SUS Group of Institutions, Tangori
13. Asra Group of Institutions, Sangrur
14. Yadavindra College of Engineering and Technology, Talwandi Sabo
15. Guru Nanak Dev Dental College Sunam
16. Shiva International School, Bilaspur
17. Akal Degree College, Sangrur
18. St. Xavier School, Mansa
19. DAV School, Mansa
20. Eternal University, Baru Sahib
21. Shiva Group of Institutions, Bilaspur
22. Swami Devi Dyal Group of Institutions, Barwala
23. SRM Global Group of Institutions, Narayangarh
And many others…..
CHAPTER 2
PYTHON
Python is commonly used for:
1. Web development
2. Software development
3. Mathematics
4. System scripting
2.4 MACHINE LEARNING AND ARTIFICIAL INTELLIGENCE USING PYTHON: WHY PYTHON?
For a programming problem involving string manipulation and search in a
dictionary, 'scripting languages' (Perl, Python, Rexx, Tcl) turned out to be more
productive than 'conventional languages' (C, C++, Java). In terms of run
time and memory consumption, they often turn out better than Java and not
much worse than C or C++. In general, the differences between languages
tend to be smaller than the typical differences due to different programmers
within the same language. (See Lutz Prechelt, An empirical comparison of
C, C++, Java, Perl, Python, Rexx, and Tcl, IEEE Computer, Vol. 33, No. 10,
pp. 23-29, Oct 2000.)
Other Advantages of Python:
It's surprisingly easy to embed Python, or rather the Python interpreter, into
C programs. By doing this you can add features from Python that could take
months to code in C. Vice versa, it's possible to extend the Python
interpreter by adding a module written in C. One reason to do this is if a C
library exists that does something which Python doesn't. Another good
reason is if you need something to run faster than you can manage in
Python.
The Python Standard Library contains an enormous number of useful
modules and is part of every standard Python installation. After having
learned the essentials of Python, it is necessary to become familiar with the
Python Standard Library because many problems can be solved quickly and
easily if you are acquainted with the possibilities that these libraries offer.
2.5.1 VARIABLES
As the name implies, a variable is something which can change. A variable
is a way of referring to a memory location used by a computer program. A
variable is a symbolic name for this physical location. This memory location
contains values, like numbers, text or more complicated types.
A variable can be seen as a container (or some say a pigeonhole) to store
certain values. While the program is running, variables are accessed and
sometimes changed, i.e. a new value will be assigned to the variable.
One of the main differences between Python and statically typed languages
like C, C++ or Java is the way it deals with types. In statically typed
languages every variable must have a fixed data type, e.g. if a variable is
of type integer, solely integers can be saved in the variable. In Java or C,
every variable has to be declared before it can be used. Declaring a variable
means binding it to a data type.
Declaration of variables is not required in Python. If there is need of a
variable, you think of a name and start using it as a variable.
Another remarkable aspect of Python: Not only the value of a variable may
change during program execution but the type as well. You can assign an
integer value to a variable, use it as an integer for a while and then assign a
string to the variable.
In the following line of code, we assign the value 42 to a variable:
i = 42
The equal "=" sign in the assignment shouldn't be seen as "is equal to". It
should be "read" or interpreted as "is set to", meaning in our example "the
variable i is set to 42". Now we will increase the value of this variable by 1:
>>> i = i + 1
>>> print i
43
>>>
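To illustrate the point about changing types, the following short session (our own example, run under Python 3, where type() reports the class of an object) rebinds the same variable first to an integer and then to a string:
>>> i = 42
>>> type(i)
<class 'int'>
>>> i = "forty-two"
>>> type(i)
<class 'str'>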
Repetition
Strings can be repeated, i.e. repeatedly concatenated, with the asterisk operator
"*":
"*-*" * 3 -> "*-**-**-*"
Indexing
"Python"[0] will result in "P"
Slicing
Substrings can be created with the slice or slicing notation, i.e. two indices
in square brackets separated by a colon:
"Python"[2:4] will result in "th"
String Slicing
Size
len("Python") will result in 6
if condition_1:
    statement_block_1
elif condition_2:
    statement_block_2
else:
    statement_block_3
If the condition "condition_1" is True, the statements in the block
statement_block_1 will be executed. If not, condition_2 will be evaluated. If
condition_2 evaluates to True, statement_block_2 will be executed; if
condition_2 is False, the statements in statement_block_3 will be executed.
III. True or False
Unfortunately it is not as easy in real life as it is in Python to differentiate
between true and false:
The following objects are evaluated by Python as False:
numerical zero values (0, 0L, 0.0, 0.0+0.0j),
the Boolean value False,
empty strings,
empty lists and empty tuples,
empty dictionaries.
plus the special value None.
All other values are considered to be True.
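A quick way to check these rules is the built-in bool() function; the following short session (our own illustration) confirms them:
>>> bool(0), bool(0.0), bool(""), bool([]), bool({}), bool(None)
(False, False, False, False, False, False)
>>> bool(42), bool("text"), bool([0])
(True, True, True)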
IV Abbreviated IF statement
C programmers usually know the following abbreviated notation for the if
construct:
max = (a > b) ? a : b;
This is an abbreviation for the following C code:
if (a > b)
max=a;
else
max=b;
C programmers have to get used to a different notation in Python:
max = a if (a > b) else b
V Print statement
There are hardly any computer programs and of course hardly any Python
programs, which don't communicate with the outside world. Above all a
program has to deliver its result in some way. One form of output goes to
the standard output by using the print statement in Python.
>>> print "Hello User"
Hello User
>>> answer = 42
>>> print "The answer is: " + str(answer)
The answer is: 42
>>>
It's possible to put the arguments inside of parentheses:
>>> print("Hallo")
Hallo
>>> print("Hallo","Python")
('Hallo', 'Python')
>>> print "Hallo","Python"
Hallo Python
>>>
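The transcripts above reflect Python 2, where print is a statement. In Python 3, print is a function, so the parentheses are mandatory and multiple arguments are printed separated by spaces instead of as a tuple. For comparison (our own example):
>>> print("Hallo", "Python")
Hallo Python
>>> print("The answer is:", 42)
The answer is: 42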
The list is a most versatile data type in Python. It can be written as a list of
comma-separated items (values) between square brackets. Lists are related
to arrays of programming languages like C, C++ or Java, but Python lists are
by far more flexible than "classical" arrays. For example, items in a list need
not all have the same type. Furthermore lists can grow in a program run,
while in C the size of an array has to be fixed at compile time.
An example of a list:
languages = ["Python", "C", "C++", "Java", "Perl"]
There are different ways of accessing the elements of a list. Most probably
the easiest way for C programmers will be through indices, i.e. the numbers
of the lists are enumerated starting with 0:
>>> languages = ["Python", "C", "C++", "Java", "Perl"]
>>> languages[0]
'Python'
>>> languages[1]
'C'
>>> languages[2]
'C++'
>>> languages[3]
'Java'
2.10 Sublists
Lists can have sublists as elements. These sublists may contain sublists as
well, i.e. lists can be recursively constructed by sublist structures.
>>> person = [["Marc","Mayer"],["17, Oxford Str",
"12345","London"],"07876-7876"]
>>> name = person[0]
>>> print name
['Marc', 'Mayer']
>>> first_name = person[0][0]
>>> print first_name
Marc
>>> last_name = person[0][1]
>>> print last_name
Mayer
>>> address = person[1]
>>> street = person[1][0]
>>> print street
17, Oxford Str
2.11 Tuples
A tuple is an immutable list, i.e. a tuple cannot be changed in any way once
it has been created. A tuple is defined analogously to lists, except that the set
of elements is enclosed in parentheses instead of square brackets. The rules
for indices are the same as for lists. Once a tuple has been created, you can't
add elements to a tuple or remove elements from a tuple.
Tuples are faster than lists.
If you know that some data doesn't have to be changed, you should use
tuples instead of lists, because this protects your data against accidental
changes.
Tuples can be used as keys in dictionaries, while lists can't.
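To illustrate the last point, here is a small example of our own in which a tuple works as a dictionary key while a list raises an error:
>>> capitals = {("France", "Europe"): "Paris", ("India", "Asia"): "New Delhi"}
>>> capitals[("France", "Europe")]
'Paris'
>>> capitals[["France", "Europe"]]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'list'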
The following example shows how to define a tuple and how to access a
tuple. Furthermore we can see that we raise an error, if we try to assign a
new value to an element of a tuple:
>>> t = ("tuples", "are", "immutable")
>>> t[0]
'tuples'
>>> t[0]="assignments to elements are not possible"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment
Generalization
Lists and strings have many common properties, e.g. the
elements of a list or the characters of a string appear in a defined order and
can be accessed through indices. There are other data types with similar
properties like tuple, buffer and xrange. In Python these data types are called
"sequence data types" or "sequential data types".
Operators and methods are the same for "sequence data types", as we will
see in the following text.
2.12 SLICING
When you want to extract part of a string, or some part of a list, you use in
Python the slice operator. The syntax is simple. Actually it looks a little bit
like accessing a single element with an index, but instead of just one number
we have more, separated with a colon ":". We have a start and an end index,
one or both of them may be missing. It's best to study the mode of operation
of slice by having a look at examples:
>>> str = "Python is great"
>>> first_six = str[0:6]
>>> first_six
'Python'
>>> starting_at_five = str[5:]
>>> starting_at_five
'n is great'
>>> a_copy = str[:]
>>> without_last_five = str[0:-5]
>>> without_last_five
'Python is '
>>>
Length:
The length of a sequence, i.e. a list, a string or a tuple,
can be determined with the function len(). For strings it counts the number
of characters, and for lists or tuples the number of elements is counted,
whereby a sublist counts as 1 element.
>>> txt = "Hello World"
>>> len(txt)
11
>>> a = ["Swen", 45, 3.54, "Basel"]
>>> len(a)
4
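As a small additional example of our own showing that a sublist counts as a single element:
>>> nested = ["Swen", [45, 3.54], "Basel"]
>>> len(nested)
3
>>> len(nested[1])
2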
2.13 SET METHODS
1. add(element)
A method which adds an element, which has to be immutable, to a set.
>>> colours = {"red","green"}
>>> colours.add("yellow")
>>> colours
set(['green', 'yellow', 'red'])
>>> colours.add(["black","white"])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'list'
>>>
Of course, an element will only be added, if it is not already contained in the
set. If it is already contained, the method call has no effect.
2. clear()
All elements will be removed from the set.
>>> cities = {"Stuttgart", "Konstanz", "Freiburg"}
>>> cities.clear()
>>> cities
set([])
>>>
3. copy()
Creates a shallow copy, which is returned.
>>> more_cities = {"Winterthur","Schaffhausen","St. Gallen"}
>>> cities_backup = more_cities.copy()
>>> more_cities.clear()
>>> cities_backup
set(['St. Gallen', 'Winterthur', 'Schaffhausen'])
>>>
4. difference()
This method returns the difference of two or more sets as a new set.
>>> x = {"a","b","c","d","e"}
>>> y = {"b","c"}
>>> z = {"c","d"}
>>> x.difference(y)
set(['a', 'e', 'd'])
>>> x.difference(y).difference(z)
set(['a', 'e'])
>>>
5. difference_update()
The method difference_update removes all elements of another set from this
set. x.difference_update(y) is the same as "x = x - y"
>>> x = {"a","b","c","d","e"}
>>> y = {"b","c"}
>>> x.difference_update(y)
>>>
>>> x = {"a","b","c","d","e"}
>>> y = {"b","c"}
>>> x = x - y
>>> x
set(['a', 'e', 'd'])
>>>
6. discard(el)
An element el will be removed from the set, if it is contained in the set. If el
is not a member of the set, nothing will be done.
>>> x = {"a","b","c","d","e"}
>>> x.discard("a")
>>> x
set(['c', 'b', 'e', 'd'])
>>> x.discard("z")
>>> x
set(['c', 'b', 'e', 'd'])
>>>
7. remove(el)
works like discard(), but if el is not a member of the set, a KeyError will be
raised.
>>> x = {"a","b","c","d","e"}
>>> x.remove("a")
>>> x
set(['c', 'b', 'e', 'd'])
8. intersection(s)
Returns the intersection of the instance set and the set s as a new set. In other
words: A set with all the elements which are contained in both sets is
returned.
>>> x = {"a","b","c","d","e"}
>>> y = {"c","d","e","f","g"}
>>> x.intersection(y)
set(['c', 'e', 'd'])
>>>
This can be abbreviated with the ampersand operator "&":
>>> x = {"a","b","c","d","e"}
>>> y = {"c","d","e","f","g"}
>>> x & y
set(['c', 'e', 'd'])
>>>
2.14 LAMBDA, MAP, REDUCE, FILTER:
1. Lambda Operator:
The lambda operator or lambda function is a way to create small anonymous
functions, i.e. functions without a name. These functions are throw-away
functions, i.e. they are just needed where they have been created. Lambda
functions are mainly used in combination with the functions filter(), map()
and reduce(). The lambda feature was added to Python due to the demand
from Lisp programmers.
The general syntax of a lambda function is quite simple:
lambda argument_list: expression
The argument list consists of a comma separated list of arguments and the
expression is an arithmetic expression using these arguments. You can
assign the function to a variable to give it a name.
The following example of a lambda function returns the sum of its two
arguments:
>>> f = lambda x, y : x + y
>>> f(1,1)
2
2. The map() Function:
The advantage of the lambda operator can be seen when it is used in
combination with the map() function.
map() can be applied to more than one list. The lists have to have the same
length. map() will apply its lambda function to the elements of the argument
lists, i.e. it first applies to the elements with the 0th index, then to the
elements with the 1st index until the n-th index is reached:
>>> a = [1,2,3,4]
>>> b = [17,12,11,10]
>>> c = [-1,-4,5,9]
>>> map(lambda x,y:x+y, a,b)
[18, 14, 14, 14]
>>> map(lambda x,y,z:x+y+z, a,b,c)
[17, 10, 19, 23]
>>> map(lambda x,y,z:x+y-z, a,b,c)
[19, 18, 9, 5]
We can see in the example above that the parameter x gets its values from
the list a, while y gets its values from b and z from list c.
3. Filtering:
The function filter(function, list) offers an elegant way to filter out all the
elements of a list, for which the function function returns True.
The function filter(f,l) needs a function f as its first argument. f returns a
Boolean value, i.e. either True or False. This function will be applied to
every element of the list l. Only if f returns True will the element of the list
be included in the result list.
>>> fib = [0,1,1,2,3,5,8,13,21,34,55]
>>> result = filter(lambda x: x % 2, fib)
>>> print result
[1, 1, 3, 5, 13, 21, 55]
>>> result = filter(lambda x: x % 2 == 0, fib)
>>> print result
[0, 2, 8, 34]
>>>
4. Reducing a List:
The function reduce(func, seq) continually applies the function func() to the
sequence seq. It returns a single value.
Examples of reduce()
Determining the maximum of a list of numerical values by using reduce:
>>> f = lambda a,b: a if (a > b) else b
>>> reduce(f, [47,11,42,102,13])
102
>>>
Calculating the sum of the numbers from 1 to 100:
>>> reduce(lambda x, y: x+y, range(1,101))
5050
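The sessions above assume Python 2, where map() and filter() return lists and reduce() is a built-in function. In Python 3, map() and filter() return iterators and reduce() has been moved to the functools module, so the same examples would be written roughly like this:
from functools import reduce

a = [1, 2, 3, 4]
b = [17, 12, 11, 10]
print(list(map(lambda x, y: x + y, a, b)))        # [18, 14, 14, 14]

fib = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
print(list(filter(lambda x: x % 2 == 0, fib)))    # [0, 2, 8, 34]

print(reduce(lambda x, y: x + y, range(1, 101)))  # 5050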
CHAPTER 3
A. ADVANCED TOPICS
The sys Module and System Programming
Like all the other modules, the sys module has to be imported with the import statement, i.e.
import sys
The sys module provides information about constants, functions and
methods of the Python interpreter. dir(sys) gives a summary of the
available constants, functions and methods. Another possibility is the help()
function. Using help(sys) provides valuable detailed information.
The module sys informs us e.g. about the maximal recursion depth
(sys.getrecursionlimit()) and provides the possibility to change it
(sys.setrecursionlimit()).
The current version number of Python can be accessed as well:
>>> import sys
>>> sys.version
'2.6.5 (r265:79063, Apr 16 2010, 13:57:41) \n[GCC 4.4.3]'
>>> sys.version_info
(2, 6, 5, 'final', 0)
>>>
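For example, the recursion limit mentioned above can be read and changed as follows (a small session of our own; the default value can differ between installations):
>>> import sys
>>> sys.getrecursionlimit()
1000
>>> sys.setrecursionlimit(2000)
>>> sys.getrecursionlimit()
2000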
A "graph"1 in mathematics and computer science consists of "nodes", also
known as "vertices". Nodes may or may not be connected with one another.
In our illustration, - which is a pictorial representation of a graph, - the node
"a" is connected with the node "c", but "a" is not connected with "b". The
connecting line between two nodes is called an edge. If the edges between
the nodes are undirected, the graph is called an undirected graph. If an edge
is directed from one vertex (node) to another, a graph is called a directed
graph. An directed edge is called an arc.
Though graphs may look very theoretical, many practical problems can be
represented by graphs. They are often used to model problems or situations
in physics, biology, psychology and above all in computer science. In
computer science, graphs are used to represent networks of communication,
data organization, computational devices and the flow of computation.
They are also used to represent data organisation, like the file
system of an operating system, or communication networks. The link
structure of websites can be seen as a graph as well, i.e. a directed graph,
because a link is a directed edge or an arc.
Python has no built-in data type or class for graphs, but it is easy to
implement them in Python. One data type is ideal for representing graphs in
Python, i.e. dictionaries. The graph in our illustration can be implemented in
the following way:
graph = { "a" : ["c"],
"b" : ["c", "e"],
"c" : ["a", "b", "d", "e"],
"d" : ["c"],
"e" : ["c", "b"],
"f" : []
}
The keys of the dictionary above are the nodes of our graph. The
corresponding values are lists with the nodes, which are connected by an
edge. There is no simpler and more elegant way to represent a graph.
An edge can be seen as a 2-tuple with nodes as elements, i.e. ("a","b").
Function to generate the list of all edges:
def generate_edges(graph):
    edges = []
    for node in graph:
        for neighbour in graph[node]:
            edges.append((node, neighbour))
    return edges

print(generate_edges(graph))
This code generates the following output, if combined with the previously
defined graph dictionary:
$ python3 graph_simple.py
[('a', 'c'), ('c', 'a'), ('c', 'b'), ('c', 'd'), ('c', 'e'), ('b', 'c'), ('b', 'e'), ('e', 'c'), ('e', 'b'),
('d', 'c')]
Paths in Graphs
We want to find now the shortest path from one node to another node.
Before we come to the Python code for this problem, we will have to present
some formal definitions.
Adjacent vertices:
Two vertices are adjacent when they are both incident to a common edge.
Path in an undirected Graph:
A path in an undirected graph is a sequence of vertices P = (v1, v2, ..., vn)
∈ V × V × ... × V such that v_i is adjacent to v_(i+1) for 1 ≤ i < n. Such a path
P is called a path of length n from v1 to vn.
Simple Path:
A path with no repeated vertices is called a simple path.
Example:
(a, c, e) is a simple path in our graph, as well as (a,c,e,b). (a,c,e,b,c,d) is a
path but not a simple path, because the node c appears twice.
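Before turning to trees, here is a minimal sketch (our own illustrative code, not necessarily the implementation developed later in the report) of how one path between two nodes can be found with a simple recursive search over the dictionary representation introduced above:
def find_path(graph, start, end, path=None):
    """Return one path from start to end as a list of nodes, or None."""
    if path is None:
        path = []
    path = path + [start]
    if start == end:
        return path
    if start not in graph:
        return None
    for node in graph[start]:
        if node not in path:          # avoid running in cycles
            newpath = find_path(graph, node, end, path)
            if newpath:
                return newpath
    return None

print(find_path(graph, "a", "b"))     # e.g. ['a', 'c', 'b']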
A tree is an undirected graph which contains no cycles. This means that any
two vertices of the graph are connected by exactly one simple path.
A forest is a disjoint union of trees. Contrary to forests in nature, a forest in
graph theory can consist of a single tree!
A graph with one vertex and no edge is a tree (and a forest).
(Figure: an example of a tree)
(Figure: a graph which is a forest but not a tree)
(Figure: overview of forests with one vertex)
B. NUMERICAL PROGRAMMING WITH PYTHON
3. Veracity: the uncertainty or imprecision of data.
4. Variety: the many sources and types of data, both structured and unstructured.
The big question is how useful Python is for these purposes. If we used only
Python without any special modules, the language would perform poorly on
the previously mentioned tasks. We will describe the necessary tools in the
following chapter.
3B.3.4 Advantages of using Numpy with Python:
1. array oriented computing
2. efficiently implemented multi-dimensional arrays
3. designed for scientific computation
A Simple Numpy Example:
Before we can use NumPy we will have to import it. It has to be imported
like any other module:
import numpy
But you will hardly ever see this. Numpy is usually renamed to np:
import numpy as np
Our first simple Numpy example deals with temperatures. Given is a list
with values, e.g. temperatures in Celsius:
cvalues = [20.1, 20.8, 21.9, 22.5, 22.7, 22.3, 21.8, 21.2, 20.9, 20.1]
We will turn our list "cvalues" into a one-dimensional numpy array:
C = np.array(cvalues)
print(C)
[ 20.1 20.8 21.9 22.5 22.7 22.3 21.8 21.2 20.9 20.1]
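To see the benefit of array-oriented computing, the whole list of temperatures can be converted to Fahrenheit in a single expression, without an explicit loop (the conversion is our own small addition to the example):
F = C * 9 / 5 + 32      # element-wise arithmetic on the whole array
print(F)                # approximately [68.18 69.44 71.42 ... 68.18]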
With the module NumPy, Python supports all the basic matrix arithmetic operations like
1. Matrix addition
2. Matrix subtraction
3. Matrix multiplication
4. Scalar product
5. Cross product
6. and lots of other operations on matrices
The standard arithmetic operators
+, -, *, /, ** and %
are applied element-wise, which means that the arrays have to have the same
shape.
>>> x = np.array([1,5,2])
>>> y = np.array([7,4,1])
>>> x + y
array([8, 9, 3])
>>> x * y
array([ 7, 20, 2])
>>> x - y
array([-6, 1, 1])
>>> x / y
array([0, 1, 2])
>>> x % y
array([1, 1, 0])
3B.6 Matrix Class
The matrix objects are a subclass of the numpy arrays (ndarray). The matrix
objects inherit all the attributes and methods of ndarray. One difference is
that numpy matrices are strictly 2-dimensional, while numpy arrays can be
of any dimension, i.e. they are n-dimensional.
The most important advantage of matrices is that they provide convenient
notation for matrix multiplication. If X and Y are two matrices, then X *
Y defines the matrix multiplication. If, on the other hand, X and Y are
ndarrays, X * Y defines an element-by-element multiplication.
>>> x = np.array( ((2,3), (3, 5)) )
>>> y = np.array( ((1,2), (5, -1)) )
>>> x * y
array([[ 2, 6],
[15, -5]])
>>> x = np.matrix( ((2,3), (3, 5)) )
>>> y = np.matrix( ((1,2), (5, -1)) )
>>> x * y
matrix([[17, 1],
[28, 1]])
Matrix Product
The matrix product of two matrices can be calculated if the number of
columns of the left matrix is equal to the number of rows of the second (right)
matrix. The product of an (l × m)-matrix A = (a_ij) and an (m × n)-matrix
B = (b_jk) is an (l × n)-matrix C = (c_ik) whose entries are given by

c_ik = a_i1*b_1k + a_i2*b_2k + ... + a_im*b_mk   (for i = 1, ..., l and k = 1, ..., n)
If we want to perform matrix multiplication with two numpy arrays
(ndarray), we have to use the dot product:
>>> x = np.array( ((2,3), (3, 5)) )
>>> y = np.matrix( ((1,2), (5, -1)) )
>>> np.dot(x,y)
matrix([[17, 1],
[28, 1]])
Alternatively, we can cast them into matrix objects and use the "*" operator:
>>> np.mat(x) * np.mat(y)
matrix([[17, 1],
[28, 1]])
3B.7 Matplotlib
It is common practice to rename matplotlib.pyplot to plt. We will use the
plot function of pyplot in our first example. We will pass a list of values to
the plot function. Plot takes these as y values. The indices of the list are
automatically taken as the x values. The command %matplotlib inline
only makes sense if you work with an IPython notebook. It makes sure that
the graphs will be depicted inside of the document and not as independent
windows:
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot([-1, -4.5, 16, 23])
plt.show()
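A slightly more complete sketch, with made-up x and y values of our own, passes both coordinate lists and labels the axes:
import matplotlib.pyplot as plt

days = [0, 1, 2, 3, 4, 5, 6]
celsius_values = [25.6, 24.1, 26.7, 28.3, 27.5, 30.5, 32.8]

plt.plot(days, celsius_values, marker="o")
plt.xlabel("Day")
plt.ylabel("Degrees Celsius")
plt.show()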
3B.8 Introduction into Pandas
The pandas we are writing about in this chapter have nothing to do with the
cute panda bears, and they are not what visitors might expect in a
Python tutorial either. Pandas is a Python module which rounds up the
capabilities of Numpy, Scipy and Matplotlib. The name pandas is
derived from "Python and data analysis" and "panel data".
There is often some confusion about whether Pandas is an alternative to
Numpy, SciPy and Matplotlib. The truth is that it is built on top of Numpy.
This means that Numpy is required by pandas. Scipy and Matplotlib on the
other hand are not required by pandas but they are extremely useful. That's
why the Pandas project lists them as "optional dependency".
Pandas is a software library written for the Python programming language. It
is used for data manipulation and analysis. It provides special data structures
and operations for the manipulation of numerical tables and time series.
Pandas is free software released under the three-clause BSD license.
3B.8.1 Data Structures:
We will start with the following two important data structures of Pandas:
1. Series and
2. DataFrame
3B.8.1.1 Series
A Series is a one-dimensional labelled array-like object. It is capable of
holding any data type, e.g. integers, floats, strings, Python objects, and so
on. It can be seen as a data structure with two arrays: one functioning as the
index, i.e. the labels, and the other one contains the actual data.
We define a simple Series object in the following example by instantiating a
Pandas Series object with a list. We will later see that we can use other data
objects for example Numpy arrays and dictionaries as well to instantiate a
Series object.
import pandas as pd
S = pd.Series([11, 28, 72, 3, 5, 8])
S
The above Python code returned the following result:
0 11
1 28
2 72
3 3
4 5
5 8
dtype: int64
We haven't defined an index in our example, but we see two columns in our
output: The right column contains our data, whereas the left column contains
the index. Pandas created a default index starting with 0 going to 5, which is
the length of the data minus 1.
We can directly access the index and the values of our Series S:
print(S.index)
print(S.values)
RangeIndex(start=0, stop=6, step=1)
[11 28 72 3 5 8]
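An index of our own choosing can be passed with the index parameter (a small additional sketch; the exact column alignment of the output may vary slightly):
fruits = ['apples', 'oranges', 'cherries', 'pears']
quantities = [20, 33, 52, 10]
S2 = pd.Series(quantities, index=fruits)
print(S2)

apples      20
oranges     33
cherries    52
pears       10
dtype: int64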
3B.8.1.2 DataFrame
The underlying idea of a DataFrame is based on spreadsheets. We can see
the data structure of a DataFrame as tabular and spreadsheet-like. A
DataFrame logically corresponds to a "sheet" of an Excel document. A
DataFrame has both a row and a column index.
Like a spreadsheet or Excel sheet, a DataFrame object contains an ordered
collection of columns. Each column consists of a unique data type, but
different columns can have different types, e.g. the first column may consist
of integers, while the second one consists of boolean values, and so on.
There is a close connection between the DataFrames and the Series of
Pandas. A DataFrame can be seen as a concatenation of Series, each Series
having the same index, i.e. the index of the DataFrame.
A DataFrame has a row and column index; it's like a dict of Series with a
common index.
cities = {"name": ["London", "Berlin", "Madrid", "Rome",
"Paris", "Vienna", "Bucharest", "Hamburg",
"Budapest", "Warsaw", "Barcelona",
"Munich", "Milan"],
"population": [8615246, 3562166, 3165235, 2874038,
2273305, 1805681, 1803425, 1760433,
1754000, 1740119, 1602386, 1493900,
1350680],
"country": ["England", "Germany", "Spain", "Italy",
"France", "Austria", "Romania",
"Germany", "Hungary", "Poland", "Spain",
"Germany", "Italy"]}
city_frame = pd.DataFrame(cities)
city_frame
The above Python code returned the following:
      country       name  population
0     England     London     8615246
1     Germany     Berlin     3562166
2       Spain     Madrid     3165235
3       Italy       Rome     2874038
4      France      Paris     2273305
5     Austria     Vienna     1805681
6     Romania  Bucharest     1803425
7     Germany    Hamburg     1760433
8     Hungary   Budapest     1754000
9      Poland     Warsaw     1740119
10      Spain  Barcelona     1602386
11    Germany     Munich     1493900
12      Italy      Milan     1350680
Numpy is a module which provides the basic data structures, implementing
multi-dimensional arrays and matrices. Besides that the module supplies the
necessary functionalities to create and manipulate these data structures.
SciPy is based on top of Numpy, i.e. it uses the data structures provided by
NumPy. It extends the capabilities of NumPy with further useful functions
for minimization, regression, Fourier-transformation and many others.
The youngest child in this family of modules is Pandas. Pandas uses all
of the previously mentioned modules. It is built on top of them to provide a
module for the Python language which is capable of data manipulation
and analysis. The special focus of Pandas is offering data structures
and operations for manipulating numerical tables and time series. The name
is derived from the term "panel data". Pandas is well suited for working with
tabular data as it is known from spreadsheet programs like Excel.
We take pictures to preserve great moments in time. Pickled memories ready
to be "opened" in the future at will.
Similar to pickling things, we have to pay attention to the right
preservatives. Of course, mobile phones also provide us with a range of
image processing software, but as soon as we need to manipulate a huge
quantity of photographs we need other tools. This is when programming and
Python come into play. Python and its modules like Numpy, Scipy,
Matplotlib and other special modules provide the optimal functionality to be
able to cope with the flood of pictures.
To provide you with the necessary knowledge this chapter of our Python
tutorial deals with basic image processing and manipulation. For this
purpose we use the modules NumPy, Matplotlib and SciPy.
We start with the scipy package misc. The helpfile says that scipy.misc
contains "various utilities that don't have another home".
# the following line is only necessary in Python notebook:
%matplotlib inline
from scipy import misc
ascent = misc.ascent()
import matplotlib.pyplot as plt
plt.gray()
plt.imshow(ascent)
plt.show()
Additionally to the image, we can see the axis with the ticks. This may be
very interesting, if you need some orientations about the size and the pixel
position, but in most cases, you want to see the image without this
information. We can get rid of the ticks and the axis by adding the command
plt.axis("off"):
from scipy import misc
ascent = misc.ascent()
import matplotlib.pyplot as plt
plt.axis("off")  # removes the axis and the ticks
plt.gray()
plt.imshow(ascent)
plt.show()
1. Tiling an Image
The function imag_tile, which we are going to design, can be best explained
with the following diagram:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
def imag_tile(img, n, m=1):
    """
    The image "img" will be repeated n times in the horizontal
    (axis=1) and m times in the vertical (axis=0) direction.
    """
    if n == 1:
        tiled_img = img
    else:
        lst_imgs = []
        for i in range(n):
            lst_imgs.append(img)
        tiled_img = np.concatenate(lst_imgs, axis=1)
    if m > 1:
        lst_imgs = []
        for i in range(m):
            lst_imgs.append(tiled_img)
        tiled_img = np.concatenate(lst_imgs, axis=0)
    return tiled_img
basic_pattern = mpimg.imread('decorators_b2.png')
decorators_img = imag_tile(basic_pattern, 3, 3)
plt.axis("off")
plt.imshow(decorators_img)
This gets us the following output:
<matplotlib.image.AxesImage at 0x7f29cf529a20>
(Truncated output: the image is stored as a NumPy array of RGB float values, mostly rows of [1., 1., 1.], i.e. white pixels, dtype=float32.)
CHAPTER 4
4.3 k-Nearest-Neighbor Classifier
Before we actually start with writing a nearest neighbor classifier, we need
to think about the data, i.e. the learnset. We will use the "iris" dataset
provided by the datasets of the sklearn module.
The data set consists of 50 samples from each of three species of Iris
1) Iris setosa,
2) Iris virginica and
3) Iris versicolor.
Four features were measured from each sample: the length and the width of
the sepals and petals, in centimetres.
import numpy as np
from sklearn import datasets
iris = datasets.load_iris()
iris_data = iris.data
iris_labels = iris.target
print(iris_data[0], iris_data[79], iris_data[100])
print(iris_labels[0], iris_labels[79], iris_labels[100])
[5.1 3.5 1.4 0.2] [5.7 2.6 3.5 1. ] [6.3 3.3 6. 2.5]
0 1 2
The following code is only necessary to visualize the data of our learnset.
Our data consists of four values per iris item, so we will reduce the data to
three values by summing up the third and fourth value. This way, we are
capable of depicting the data in 3-dimensional space:
# the following line is only necessary if you use an IPython notebook
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

# learnset_data and learnset_labels are the training subset of iris_data and
# iris_labels created earlier (on a page not reproduced here)
colours = ("r", "g", "y")
X = []
for iclass in range(3):
    X.append([[], [], []])
    for i in range(len(learnset_data)):
        if learnset_labels[i] == iclass:
            X[iclass][0].append(learnset_data[i][0])
            X[iclass][1].append(learnset_data[i][1])
            X[iclass][2].append(sum(learnset_data[i][2:]))

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for iclass in range(3):
    ax.scatter(X[iclass][0], X[iclass][1], X[iclass][2], c=colours[iclass])
plt.show()
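The classifier itself is not shown on this page. As a minimal sketch (our own illustrative code, not the exact implementation used in the report), the core of a k-nearest-neighbor classifier is a distance function plus a majority vote among the k closest training samples:
def distance(instance1, instance2):
    """Euclidean distance between two feature vectors."""
    return np.linalg.norm(np.asarray(instance1) - np.asarray(instance2))

def knn_predict(training_data, training_labels, unknown, k=3):
    """Predict the label of 'unknown' by a majority vote of its k nearest neighbours."""
    dists = [distance(unknown, sample) for sample in training_data]
    nearest = np.argsort(dists)[:k]
    votes = [training_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

print(knn_predict(iris_data, iris_labels, [5.0, 3.4, 1.5, 0.2]))   # expected: 0 (Iris setosa)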
4.4 Neural Networks
Even though the above image is already an abstraction for a biologist, we
can further abstract
it:
Building Principle of a Simple Artificial Neural Network
We will write a very simple Neural Network implementing the logical
"And" and "Or" functions.
Let's start with the "And" function. It is defined for two inputs:
Input1  Input2  Output
  0       0       0
  0       1       0
  1       0       0
  1       1       1
import numpy as np

class Perceptron:
    def __init__(self, input_length, weights=None):
        if weights is None:
            self.weights = np.ones(input_length) * 0.5
        else:
            self.weights = weights

    @staticmethod
    def unit_step_function(x):
        if x > 0.5:
            return 1
        return 0

    def __call__(self, in_data):
        weighted_input = self.weights * in_data
        weighted_sum = weighted_input.sum()
        return Perceptron.unit_step_function(weighted_sum)

p = Perceptron(2, np.array([0.5, 0.5]))
for x in [np.array([0, 0]), np.array([0, 1]),
          np.array([1, 0]), np.array([1, 1])]:
    y = p(np.array(x))
    print(x, y)
[0 0] 0
[0 1] 0
[1 0] 0
[1 1] 1
Line Separation:
In the following program, we train a neural network to classify two clusters
in a 2-dimensional space. We show this in the following diagram with the
two classes class1 and class2. We will create those points randomly with the
help of a line: the points of class2 will be above the line and the points of
class1 will be below the line.
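The full training program follows later; as a rough sketch of our own (with an arbitrarily chosen line y = 0.5*x + 10), the two labelled clusters described above could be generated like this:
import numpy as np

np.random.seed(7)
npoints = 50
points = np.random.uniform(low=0, high=100, size=(npoints, 2))

# class2: points above the line y = 0.5*x + 10, class1: points below it
labels = np.where(points[:, 1] > 0.5 * points[:, 0] + 10, "class2", "class1")

for point, label in zip(points[:5], labels[:5]):
    print(point, label)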
CHAPTER 5
5.2 HISTORY
Face Detection has been one of the hottest topics of computer vision for the
past few years.
Face Detection is the process of finding and locating human faces in digital
visual data (images/videos). In the 1960s, government agencies in the U.S.A. made
a contract with Woodrow W. Bledsoe of Panoramic Research Inc. for the
development of the first semi-automatic face recognition system. The
detection of faces was manual, as this system relied solely on the
administrator to locate features such as eyes, ears, nose and mouth on the
photographs. It calculated distances and ratios to a common reference point
that were compared to the reference data. As can be observed, for a
large set of visual data the above process becomes humanly impossible,
unreliable and extremely difficult, so it led to the need for a system which can
detect human faces with more accuracy and speed.
straightforward problem as it involves various challenges such as face
definition, pose and scale variation, image orientation, facial expressions,
facial deformities, illumination conditions, occlusions and background
noises. Face detection techniques can be mainly classified into four
categories: Knowledge based methods, Feature-Invariant approaches,
Appearance based methods and Template matching methods . Knowledge
based methods use the face knowledge to encode rules based on face
structure and symmetrical positions of different parts of face like eyes, nose
and mouth. The challenge in this approach is that it is difficult to translate
human knowledge into a well-defined rule set. Feature-invariant approaches
use features such as edges, geometric shapes and facial features such as eyes,
nose, ears, mouth and hairline to build a statistical model which describes their
relationships. The main challenges in this approach are face deformities,
illumination conditions, pose variations, facial expressions and occlusions.
Appearance based methods use features based upon appearance such as
Eigenfaces (PCA), Neural Networks, SVMs and AdaBoost. Here the challenges
are illumination conditions, facial deformities and the speed and accuracy of
operation. In template matching based methods, pre-defined templates have
to be stored and correlation values with the standard patterns are computed,
e.g. for the face contour, eyes, nose and mouth independently. The limitation so
far is that they cannot effectively deal with variations in scale, pose and shape.
The challenges are how to represent the template, how to model deformations,
and efficient matching algorithms.
Here we will deal with detection. OpenCV already contains many pre-
trained classifiers for face, eyes, smile etc. Those XML files are stored in
opencv/data/haarcascades/ folder. Let's create a face and eye detector with
OpenCV.
Each file starts with the name of the classifier it belongs to. For example
face_cascade=cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
Haar features:
OpenCV's algorithm is currently using the following Haar-like features
which are the input to the basic classifiers.
Haar-Features are good at detecting edges and lines. This makes them especially
effective in face detection. For example, in a small image of Beyonce, this
Haar-feature would be able to detect her eye (an area that is dark on top and
brighter underneath).
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
We use cv2.CascadeClassifier.detectMultiScale() to find faces or eyes, and it
is defined like this:
cv2.CascadeClassifier.detectMultiScale(image[, scaleFactor[,
minNeighbors[, flags[, minSize[, maxSize]]]]])
Where the parameters are:
1. image : Matrix of the type CV_8U containing an image where objects
are detected.
2. scaleFactor : Parameter specifying how much the image size is
reduced at each image scale.
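Putting these pieces together, a minimal detection script could look as follows. This is a sketch under the assumption that the Haar cascade XML files and a test image, here called 'test_image.jpg', are available in the working directory; parameter values such as scaleFactor=1.3 and minNeighbors=5 are typical choices, not prescribed ones:
import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')

img = cv2.imread('test_image.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detect faces, then search for eyes inside every detected face region
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
    roi_gray = gray[y:y + h, x:x + w]
    roi_color = img[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(roi_gray)
    for (ex, ey, ew, eh) in eyes:
        cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)

cv2.imshow('Detected faces and eyes', img)
cv2.waitKey(0)
cv2.destroyAllWindows()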
5.4 BACK END TECHNOLOGIES:
Compared to other languages like C/C++, Python is slower. But another
important feature of Python is that it can be easily extended with C/C++.
This feature helps us to write computationally intensive code in C/C++
and create a Python wrapper for it so that we can use these wrappers as
Python modules. This gives us two advantages: first, our code is as fast
as the original C/C++ code (since it is the actual C++ code working in the
background) and second, it is very easy to code in Python. This is how
OpenCV-Python works: it is a Python wrapper around the original C++
implementation.
And the support of Numpy makes the task even easier. Numpy is a
highly optimized library for numerical operations with a MATLAB-
style syntax. All the OpenCV array structures are converted to-and-from
Numpy arrays. So whatever operations you can do in Numpy, you can
combine them with OpenCV, which increases the number of weapons in your
arsenal. Besides that, several other libraries like SciPy and Matplotlib which
support Numpy can be used with it.
So OpenCV-Python is an appropriate tool for fast prototyping of
computer vision problems.
5.4.1 Key Features
• Optimized for real time image processing & computer vision
applications
• The primary interface of OpenCV is in C++
• There are also full interfaces in C, Python and Java
• OpenCV applications run on Windows, Android, Linux, Mac and iOS
• Optimized for Intel processors
5.4.2 USES
What it can do :
1. Read and write images.
2. Detection of faces and their features.
3. Detection of shapes like circles, rectangles, etc. in an image, e.g. detection
of coins in images.
4. Text recognition in images, e.g. reading number plates.
5. Modifying image quality and colors, e.g. Instagram, CamScanner.
6. Developing augmented reality apps.
and many more...
5.4.3 Which Language it supports :
1. C++
2. Android SDK
3. Java
4. Python
5. C (Not recommended)
5.4.4 Some Advantages of using OpenCV :
1. Simple to learn, lots of tutorials available.
2. Works with almost all the famous languages.
3. Free to use.
5.5 NUMPY
In 2011, PyPy started development on an implementation of the NumPy API
for PyPy. It is not yet fully compatible with NumPy.
5.5.1 TRAITS
NumPy targets the CPython reference implementation of Python, which is a
non-optimizing bytecode interpreter. Mathematical algorithms written for
this version of Python often run much slower than compiled equivalents.
NumPy addresses the slowness problem partly by providing
multidimensional arrays and functions and operators that operate efficiently
on arrays, requiring rewriting some code, mostly inner loops using NumPy.
Using NumPy in Python gives functionality comparable to MATLAB since
they are both interpreted, and they both allow the user to write fast programs
as long as most operations work on arrays or matrices instead of scalars. In
comparison, MATLAB boasts a large number of additional toolboxes,
notably Simulink, whereas NumPy is intrinsically integrated with Python, a
more modern and complete programming language. Moreover,
complementary Python packages are available; SciPy is a library that adds
more MATLAB-like functionality and Matplotlib is a plotting package that
provides MATLAB-like plotting functionality. Internally, both MATLAB
and NumPy rely on BLAS and LAPACK for efficient linear algebra
computations.
Python bindings of the widely used computer vision library OpenCV utilize
NumPy arrays to store and operate on data. Since images with multiple
channels are simply represented as three-dimensional arrays, indexing,
slicing or masking with other arrays are very efficient ways to access
specific pixels of an image. The NumPy array as universal data structure in
OpenCV for images, extracted feature points, filter kernels and many more
vastly simplifies the programming workflow and debugging.
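For instance, assuming an image has already been loaded as a NumPy array named img with shape (height, width, 3), pixels and regions are accessed with ordinary array indexing (a small sketch of our own; here a dummy image is used in place of a real photograph):
import numpy as np

# a dummy 100x200 RGB image; in practice this would come from cv2.imread(...)
img = np.zeros((100, 200, 3), dtype=np.uint8)

region = img[20:60, 50:150]       # a rectangular region of interest (a view, not a copy)
blue_channel = img[:, :, 0]       # a single colour channel as a 2-D array
img[20:60, 50:150] = 255          # set the whole region to white in one operation

print(region.shape, blue_channel.shape)   # (40, 100, 3) (100, 200)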
5.5.3 Limitations
Inserting or appending entries to an array is not as trivially possible as it is
with Python's lists. The np.pad(...) routine to extend arrays actually creates
new arrays of the desired shape and padding values, copies the given array
into the new one and returns it. NumPy's np.concatenate([a1,a2]) operation
does not actually link the two arrays but returns a new one, filled with the
entries from both given arrays in sequence. Reshaping the dimensionality of
an array with np.reshape(...) is only possible as long as the number of
elements in the array does not change. These circumstances originate from
the fact that NumPy's arrays must be views on contiguous memory buffers.
A replacement package called Blaze attempts to overcome this limitation.
Algorithms that are not expressible as a vectorized operation will typically
run slowly because they must be implemented in "pure Python", while
vectorization may increase memory complexity of some operations from
constant to linear, because temporary arrays must be created that are as large
as the inputs. Runtime compilation of numerical code has been implemented
by several groups to avoid these problems; open source solutions that
interoperate with NumPy include scipy.weave, numexpr and Numba.
Cython and Pythran are static-compiling alternatives to these.
The future is here, all we have to do is face it. At least that is what the latest
face recognition and detection developers think. That really comes as no
surprise, I mean, how many people do you think have used Snapchat to send
a selfie with a crazy filter today? And how many have browsed through
potential photos of themselves on Facebook to make sure they were tagged?
(Or weren’t, in cases of truly embarrassing photos like the ones a soon to be
ex-friend thought the world needed to see.)
The truth is that facial recognition and detection software is popping up
everywhere. And this can be a really great thing. Let’s say someone is trying
to steal your identity or use photos of you under a different name online.
With facial recognition technology, ideally, you could search the whole of
the internet to see where each and every photo of your face is posted. And
apart from our lives on social media, facial recognition software can also
offer protection from and prevention of other threats. From using facial
recognition in smart security cameras to its uses in digital medical
applications, facial recognition software might help us in creating a safer,
healthier future.
1. Identifiable online daters. An important part of online dating is, of
course, anonymity. You make up a screen name because you want an
element of surprise when you meet someone — and because you don’t want
creepers showing up at your office uninvited. In 2010, Acquisti published
the study, “Privacy in the Age of Augmented Reality.” He and his fellow
researchers analyzed 6,000 online profiles on a dating site in the same US
city. Using four cloud computing cores and the facial recognition software
PittPatt, they were able to identify 1 in 10 of these anonymous daters. And
remember, this technology has improved three-fold since then.
2. Better tools for law enforcement. After the Boston Marathon
bombing, the Boston police commissioner said that facial recognition
software had not helped them identify Dzhokhar and Tamerlan Tsarnaev,
despite the fact that the two were in public records databases—and
photographed at the scene. Only, those images were taken from far away,
the brothers were wearing sunglasses and caps, and many shots of them
were in profile — all things that make facial recognition difficult. Experts
say that technology can overcome these difficulties. In an interview with
Salon.com, Acquisti said that the increasing resolution of photos will help
(hello, gigapixel!), as will the improved computational capabilities of
computers and the ever-expanding mountain of data available from social
networks. In a fascinating article via Yahoo, Paul Schuepp of the company
Animetrics shares a more specific advance: software that turns 2D images
into a simulated 3D model of a person’s face. In a single second, it can turn
an unidentifiable partial snapshot into a very identifiable headshot. He
claims the software can boost identification rates from 35 percent to 85
percent.
3. Full body recognition? Allyson Rice of the University of Texas at
Dallas has an idea for how facial recognition software could become even
more accurate for law enforcement purposes — by becoming body
recognition software. In a study published this month in Psychological
Science, Rice and her fellow researchers asked college students to discern
whether two photos — which had stumped facial recognition software —
were indeed of the same person. They used eye-tracking equipment to
discern how the participants were making the call. In the end, they found
that students were far more accurate in their answers when the face and body
of the subject was shown. And while participants reported judging based on
facial features, their eyes were spending more time examining body build,
stance, and other body features. “Psychologists and computer scientists have
concentrated almost exclusively on the role of the face in person
recognition,” Rice tells The Telegraph. “But our results show that the body
can also provide important and useful identity information for person
recognition.”
4. A face scan for your phone. “Face Unlock” is a feature that allows you
to unlock Android smartphones using your “faceprint,” i.e. a map of the
unique structure of your face. This is just the beginning of face-as-security
measure. In June, according to eWeek.com, Google patented a technology
that would turn goofy facial expressions — a wink, a scrunched nose, a
smile, a stuck-out tongue — into a code to unlock devices. The hope: that
this would be harder to spoof than a faceprint. Turns out, apps such as
FastAccess Anywhere, which uses your face as a password, can reportedly
be fooled with a simple photo, says USA Today.
5. Facial recognition as advertising. Could facial recognition technology
be used to influence what we buy? Very likely. In 2012, an interactive ad for
Choice for Girls was launched at bus stops in London. These billboards
were able to scan passersby, judge their gender and show them appropriate
content. Girls and women got a video, while boys and men got statistics on a
subject. This ad was for a good cause, but this technology will no doubt
expand — and could allow corporations and organizations to tap into our
personal lives in unpredictable ways. Personalized ads as we walk down the
street, a la the classic scene in Minority Report, yes. But as Acquisti notes in
his talk, there’s a potentially more subtle application of this technology too:
ads that can identify us and our two favorite friends on Facebook. From
there, it’s a snap to create a composite image of a person who’ll star in an ad
targeted just to us. For more in what’s coming in the facial recognition
advertising realm, check out Leslie Stahl’s 60 Minutes segment “A Face in
the Crowd: Say goodbye to anonymity.” Among other fascinating tidbits, it
introduces us to FaceDeals, which notes when you've walked into an
establishment, mines your Facebook likes and texts you a deal created
just for you.
6. Shattered Glass. As Acquisti notes in his talk, the fact that someone’s
face can be used to find out private information is especially disconcerting
given Google Glass’ emergence on the scene. In June, US lawmakers
questioned Google about the privacy implications of the device and, in
response, Google stressed that they “won’t be approving any facial
recognition Glassware at this time.” But of course, it’s not completely up to
them. In July, Stephen Balaban announced to NPR and the world that he had
hacked Glass in order to give it facial recognition powers. “Essentially what
I am building is an alternative operating system that runs on Glass but is not
controlled by Google,” he said. On a similar note, one Michael DiGiovanni
created a program called Winky for Glass that lets the wearer take a photo
with a wink, rather than using the voice command.
7. Your face as currency. In July, a Finnish company called Uniqul
released a video of a project in the works, a pay-by-face authentication
system. The idea? At a store, rather than paying with cash or a credit card,
you give a “meaningful nod” to a scanner to make a purchase. A Huffington
Post article describes this new tech, and also gives a peak at the Millennial
ATM, which uses facial recognition as its primary security method.
Facial detection is evolving rapidly. What here sounds cool and useful to
you, and what sounds like a trip to Scarytown? For me, I may well be
investing in these custom t-shirts, which claim to trip up facial detection.
5.7 CONCLUSION:
There is no doubt that a lot of research work has been done in the area of face
detection, but the goal is still far from achieved: to mimic the human vision
system in detecting and identifying human faces. To meet that goal, a lot
of work still has to be done in this area. As per the literature survey, the
following directions for future work in this area are proposed:
1. The training of Haar features in the seminal Viola-Jones face detector takes a
long time, which may be a couple of days if serial processing is used. There is
scope to apply parallel computing to enhance the speed of feature
training. To date, not much work has addressed this performance issue.
4. Use of holistic features for performing various tasks in the process of face
extraction from video, such as face detection, face quality estimation, face
quality enhancement and face recognition,
instead of using a separate feature for each task.
5.8 REFERENCES:
References for making this project: firstly, our teacher Mr. Sunil Kumar; secondly,
the internet and websites such as Google along with the sites given above, and YouTube
video tutorials; and lastly, lots of practice.