LECTURE 1 - Algorithms Basics
From the data structure point of view, the following are some important
categories of algorithms −
Search − Algorithm to search an item in a data structure.
Sort − Algorithm to sort items in a certain order.
Insert − Algorithm to insert an item in a data structure.
Update − Algorithm to update an existing item in a data structure.
Delete − Algorithm to delete an existing item from a data structure.
Characteristics of an Algorithm
Not all procedures can be called an algorithm. An algorithm should have the
following characteristics −
Unambiguous − Each of its steps should be clear and lead to only one interpretation.
Input − It should have 0 or more well-defined inputs.
Output − It should have 1 or more well-defined outputs.
Finiteness − It must terminate after a finite number of steps.
Feasibility − It should be feasible with the available resources.
Independent − Its step-by-step directions should be independent of any particular programming language.
Example
Let's try to learn algorithm-writing by using an example.
Problem − Design an algorithm to add two numbers and display the result.
Step 1 − START
Step 2 − declare three integers a, b & c
Step 3 − define values of a & b
Step 4 − add values of a & b
Step 5 − store output of step 4 to c
Step 6 − print c
Step 7 − STOP
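The seven steps above translate directly into code. A minimal Python sketch
(the function name is illustrative):

```python
def add_and_display(a, b):
    # Steps 4-5: add the values of a and b, store the output in c.
    c = a + b
    # Step 6: print c.
    print(c)
    return c

add_and_display(10, 20)  # prints 30
```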
Algorithm Analysis
Efficiency of an algorithm can be analyzed at two different stages, before
implementation and after implementation. They are the following −
A Priori Analysis − A theoretical analysis of the algorithm, carried out before implementation. Efficiency is measured by assuming that all other factors, such as processor speed, are constant and have no effect on the implementation.
A Posteriori Analysis − An empirical analysis, carried out after implementation. The algorithm is implemented in a programming language and executed on a target machine, and actual statistics such as running time and space required are collected.
Algorithm Complexity
Suppose X is an algorithm and n is the size of the input data. The time and
space used by algorithm X are the two main factors that decide the
efficiency of X.
Time Factor − Time is measured by counting the number of key operations
such as comparisons in the sorting algorithm.
Space Factor − Space is measured by counting the maximum memory space
required by the algorithm.
Space Complexity
Space complexity of an algorithm represents the amount of memory space
required by the algorithm in its life cycle. The space required by an
algorithm is equal to the sum of the following two components −
A fixed part that is a space required to store certain data and variables, that are
independent of the size of the problem. For example, simple variables and
constants used, program size, etc.
A variable part is a space required by variables, whose size depends on the size
of the problem. For example, dynamic memory allocation, recursion stack
space, etc.
For example −
Algorithm: SUM(A, B)
Step 1 − START
Step 2 − C ← A + B
Step 3 − STOP
Here we have three variables A, B, and C and one constant. Hence S(P) = 1
+ 3. Now, the space depends on the data types of the given variables and
constants, and it will be multiplied accordingly.
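The two components can be seen side by side in a short sketch (the function
names are illustrative):

```python
def sum_two(a, b):
    # Fixed part: a, b, and c occupy space that does not depend on the
    # problem size, so the space requirement is constant, O(1).
    c = a + b
    return c

def first_n_squares(n):
    # Variable part: the list holds n values, so the space grows as O(n).
    squares = [i * i for i in range(n)]
    return squares
```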
Time Complexity
Time complexity of an algorithm represents the amount of time required by
the algorithm to run to completion. Time requirements can be defined as a
numerical function T(n), where T(n) can be measured as the number of
steps, provided each step consumes constant time.
For example, addition of two n-bit integers takes n steps. Consequently, the
total computational time is T(n) = c ∗ n, where c is the time taken for
the addition of two bits. Here, we observe that T(n) grows linearly as the
input size increases.
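The n-bit addition above can be sketched as a ripple-carry loop that counts
one "step" per bit position, making T(n) = c ∗ n concrete (an illustrative
model, not a hardware adder):

```python
def add_n_bit(x_bits, y_bits):
    # x_bits and y_bits are equal-length lists of bits, least significant
    # bit first. One loop iteration (one "step") per bit gives T(n) = c * n.
    assert len(x_bits) == len(y_bits)
    result, carry, steps = [], 0, 0
    for xb, yb in zip(x_bits, y_bits):
        total = xb + yb + carry
        result.append(total % 2)   # sum bit at this position
        carry = total // 2         # carry into the next position
        steps += 1
    result.append(carry)           # possible final carry bit
    return result, steps

bits, steps = add_n_bit([1, 1], [1, 0])  # 3 + 1 = 4, i.e. [0, 0, 1]; 2 steps
```

Doubling the number of bits doubles the step count, which is exactly the
linear growth the text describes.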
Asymptotic Analysis
Asymptotic analysis of an algorithm refers to defining the mathematical
bounds of its run-time performance. Using asymptotic analysis, we can
conclude the best-case, average-case, and worst-case scenarios of an
algorithm.
Asymptotic Notations
Following are the commonly used asymptotic notations to calculate the
running time complexity of an algorithm.
Ο Notation
Ω Notation
θ Notation
Big Oh Notation, Ο
The notation Ο(n) is the formal way to express the upper bound of an
algorithm's running time. It measures the worst case time complexity or the
longest amount of time an algorithm can possibly take to complete.
Big O describes how quickly the runtime grows, relative to the input, as
the input gets larger. Our algorithm may have steps that seem expensive
when n is small but are eclipsed eventually by other steps as n gets
larger. For Big O notation analysis, we care more about the parts that grow
fastest as the input grows, because everything else is quickly eclipsed as
n gets very large.
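As a first example, consider a function that only ever touches the first
element of its input (a sketch; the name is illustrative):

```python
def print_first_item(items):
    # Touches exactly one element, no matter how long the list is.
    print(items[0])

print_first_item(["apple", "banana", "cherry"])  # prints apple
```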
This function runs in O(1) time (or “constant time”) relative to its input.
This means that the input array could be 1 item or 1,000 items, but this
function would still just require one “step.”
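Next, consider a function that visits every item in the array (again an
illustrative sketch):

```python
def print_all_items(items):
    # One print per item: the work grows linearly with len(items).
    for item in items:
        print(item)

print_all_items(["a", "b", "c"])
```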
This function runs in O(n) time (or “linear time”), where n is the number
of items in the array. This means that if the array has 10 items, I have to
print 10 times. If it has 1,000 items, I have to print 1,000 times.
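Finally, consider two nested loops over the same array (illustrative
sketch):

```python
def print_all_pairs(items):
    # The outer loop runs n times; the inner loop runs n times per outer
    # iteration, so the body executes n * n times in total.
    for first in items:
        for second in items:
            print(first, second)

print_all_pairs([1, 2])  # prints 4 pairs
```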
In this example I am nesting two loops. If the array has n items, the outer
loop runs n times and the inner loop runs n times for each iteration of the
outer loop, giving us n² total prints. Thus this function runs in O(n²) time
(or “quadratic time”). If the array has 10 items, I have to print 100 times.
If it has 1,000 items, I have to print 1,000,000 times.
Omega Notation, Ω
The notation Ω(n) is the formal way to express the lower bound of an
algorithm's running time. It measures the best-case time complexity, or the
minimum amount of time an algorithm can possibly take to complete.
Theta Notation, θ
The notation θ(n) is the formal way to express both the lower bound and the
upper bound of an algorithm's running time. It is defined as follows −
θ(f(n)) = { g(n) if and only if g(n) = Ο(f(n)) and g(n) = Ω(f(n)) for all n > n0 }
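Linear search illustrates these bounds (a sketch; the comparison counter is
added for illustration): its best case takes one comparison, so the running
time is Ω(1), while its worst case examines every item, so it is Ο(n).

```python
def linear_search(items, target):
    # Returns (index, comparisons); index is -1 if target is absent.
    comparisons = 0
    for i, item in enumerate(items):
        comparisons += 1
        if item == target:
            return i, comparisons        # best case: 1 comparison
    return -1, comparisons               # worst case: len(items) comparisons

best = linear_search([7, 2, 3], 7)       # (0, 1)
worst = linear_search([1, 2, 3], 9)      # (-1, 3)
```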
Following is a list of some common asymptotic notations −
Constant − Ο(1)
Logarithmic − Ο(log n)
Linear − Ο(n)
Quadratic − Ο(n²)
Cubic − Ο(n³)
Polynomial − n^Ο(1)
Exponential − 2^Ο(n)