Chapter 1 Introduction to Data Structures and Algorithms
Algorithm
An algorithm is a procedure with well-defined steps for solving a particular problem. It is a finite set of instructions, written in order, to accomplish a certain predefined task. It is not the complete program or code; it is only the solution (logic) to a problem, which can be represented either as an informal description, as a flowchart, or as pseudocode.
o Search: An algorithm for searching for an item inside a data structure (a minimal sketch of a linear search appears after this list).
o Delete: An algorithm for deleting an existing element from the data structure.
o Update: An algorithm for updating an existing element inside a data structure.
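As noted above, here is a minimal sketch of a linear search in C; the array contents and the key being searched for are made-up values chosen only for the example.

#include <stdio.h>

/* Returns the index of key in arr[0..n-1], or -1 if it is not present. */
int linear_search(const int arr[], int n, int key)
{
    for (int i = 0; i < n; i++) {
        if (arr[i] == key)
            return i;
    }
    return -1;
}

int main(void)
{
    int arr[] = {7, 3, 9, 4, 1};
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("index of 9: %d\n", linear_search(arr, n, 9));   /* prints 2 */
    return 0;
}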
Example: Design an algorithm to multiply two numbers x and y and display the result in z.
o Step 1 START
o Step 2 Read the values of x and y
o Step 3 z ← x * y
o Step 4 Display z
o Step 5 STOP
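A direct translation of these steps into C might look like the following sketch, with each step marked in a comment (the input values are read from the user to match Step 2).

#include <stdio.h>

int main(void)
{
    int x, y, z;

    /* Step 2: read the values of x and y */
    printf("Enter x and y: ");
    if (scanf("%d %d", &x, &y) != 2)
        return 1;

    /* Step 3: z <- x * y */
    z = x * y;

    /* Step 4: display z */
    printf("z = %d\n", z);

    return 0;   /* Step 5: STOP */
}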
Characteristics of an Algorithm
o Output: An algorithm must have one or more well-defined outputs, and they should match the desired output.
o Finiteness: An algorithm must terminate after a finite number of steps.
Asymptotic Analysis
It is used to mathematically estimate the running time of any operation inside an algorithm.
Example: Suppose the running time of one operation is f(n) and that of another operation is g(n²). This means the running time of the first operation will increase linearly with an increase in n, while the running time of the second operation will increase quadratically. Similarly, the running times of both operations will be nearly the same if n is sufficiently small.
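A small, made-up C sketch of this idea: the first function does work proportional to n, the second proportional to n², so for n = 1000 the second does one thousand times more work.

#include <stdio.h>

/* Illustrative only: work proportional to n (the loop body runs n times). */
long linear_work(int n)
{
    long count = 0;
    for (int i = 0; i < n; i++)
        count++;
    return count;
}

/* Illustrative only: work proportional to n * n (the inner body runs n * n times). */
long quadratic_work(int n)
{
    long count = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            count++;
    return count;
}

int main(void)
{
    printf("%ld %ld\n", linear_work(1000), quadratic_work(1000));   /* prints 1000 1000000 */
    return 0;
}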
Worst case: It defines the input for which the algorithm takes the longest time.
Best case: It defines the input for which the algorithm takes the least time.
Asymptotic Notations
The commonly used asymptotic notations for describing the running-time complexity of an algorithm are given below:
Big-O Notation (O)
It is the formal way to express the upper bound of an algorithm's running time. It measures the worst-case time complexity, i.e. the longest amount of time an algorithm can take to complete its operation. It is written as f(n) = O(g(n)).
For example: If f(n) and g(n) are two functions defined for positive integers, then f(n) is O(g(n)) (read as "f(n) is big-oh of g(n)" or "f(n) is on the order of g(n)") if there exist constants c and n0 such that:
f(n) ≤ c·g(n) for all n ≥ n0
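A small worked example (the function f(n) = 3n + 2 is made up purely for illustration): if f(n) = 3n + 2 and g(n) = n, then 3n + 2 ≤ 4n for all n ≥ 2, so f(n) is O(n) with c = 4 and n0 = 2.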
Omega Notation (Ω)
It is the formal way to represent the lower bound of an algorithm's running time. It measures the best-case time complexity, i.e. the least amount of time an algorithm can possibly take to complete. If we state that an algorithm takes at least a certain amount of time, without giving an upper bound, we use big-Ω notation, i.e. the Greek letter "omega". It is used to bound the growth of the running time from below for large input sizes.
If the running time is Ω(f(n)), then for large enough values of n the running time is at least k·f(n) for some constant k. It is represented as shown below:
running time ≥ k·f(n) for all n ≥ n0
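A small worked example with the same made-up function: if the running time is 3n + 2, then 3n + 2 ≥ 3n for all n ≥ 1, so the running time is Ω(n) with k = 3 and n0 = 1.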
Theta Notation (θ)
It is the formal way to express both the upper bound and the lower bound of an algorithm's running time.
Consider an algorithm whose running time is θ(n): once n gets large enough, the running time is at most k2·n and at least k1·n for some constants k1 and k2. It is represented as shown below:
k1·n ≤ running time ≤ k2·n for all n ≥ n0
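Continuing the same made-up example: since 3n ≤ 3n + 2 ≤ 4n for all n ≥ 2, a running time of 3n + 2 is θ(n) with k1 = 3, k2 = 4 and n0 = 2.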
Common Asymptotic Notations
constant - O(1)
logarithmic - O(log n)
linear - O(n)
quadratic - O(n²)
cubic - O(n³)
polynomial - n^O(1)
exponential - 2^O(n)
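The C sketch below gives one made-up example function for each of the most common growth rates above; the comments name the complexity each function is meant to illustrate.

#include <stdio.h>

/* O(1): constant - a single array access, independent of n. */
int first_element(const int a[]) { return a[0]; }

/* O(log n): logarithmic - binary search over a sorted array. */
int binary_search(const int a[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key) return mid;
        if (a[mid] < key)  lo = mid + 1;
        else               hi = mid - 1;
    }
    return -1;
}

/* O(n): linear - summing n elements. */
long sum(const int a[], int n)
{
    long s = 0;
    for (int i = 0; i < n; i++) s += a[i];
    return s;
}

/* O(n^2): quadratic - checking every pair of elements for equality. */
long count_equal_pairs(const int a[], int n)
{
    long count = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (a[i] == a[j]) count++;
    return count;
}

int main(void)
{
    int a[] = {1, 3, 3, 7, 9};   /* kept sorted so binary_search works */
    int n = sizeof(a) / sizeof(a[0]);
    printf("%d %d %ld %ld\n",
           first_element(a), binary_search(a, n, 7), sum(a, n),
           count_equal_pairs(a, n));
    return 0;
}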