Lecture5_Algorithm Writing and Analysis

The document provides an overview of data structures and algorithms, focusing on algorithm writing and analysis. It covers various algorithm categories, properties, structures, and the importance of algorithm analysis, including time and space complexity. Additionally, it discusses greedy algorithms and their applications, highlighting their potential limitations in achieving globally optimized solutions.
Data Structures and Algorithms

(DSAs)

Algorithm Writing and Analysis

Lecture 5
Outline
Overview
Algorithm Categories
Algorithm Properties
Structure of an Algorithm
Writing Algorithms
Algorithm Analysis
Asymptotic Analysis
Reasons for Analysis of Algorithms
Greedy Algorithm
Overview
An algorithm is a step-by-step procedure that defines a set of instructions to be executed in a certain order to get the desired output.

Algorithms are generally created independent of underlying languages, i.e. an algorithm can be implemented in more than one programming language.
Algorithm Categories
From the data structure point of view, the following are some important categories of algorithms:
Search: Algorithm to search an item in a data
structure.
Sort: Algorithm to sort items in a certain order.
Insert: Algorithm to insert item in a data
structure.
Update: Algorithm to update an existing item
in a data structure.
Delete: Algorithm to delete an existing item
from a data structure.
Algorithm Properties
 Unambiguous: An algorithm should be clear and unambiguous. Each of its steps (or phases), and their inputs/outputs, should be clear and must lead to only one meaning.
 Input: An algorithm should have 0 or more well-defined inputs.
 Output: An algorithm should have 1 or more well-defined outputs, and they should match the desired output.
 Finiteness: An algorithm must terminate after a finite number of steps.
 Feasibility: An algorithm should be feasible with the available resources.
 Independent: An algorithm should have step-by-step directions that are independent of any programming code.
Algorithm Properties
Finiteness: An algorithm must terminate after
a finite number of steps.
Definiteness: The steps of the algorithm must
be precisely defined or unambiguously specified.
Generality: An algorithm must be generic
enough to solve all problems of a particular
class.
Effectiveness: The operations of the algorithm must be basic enough to be carried out, in principle, with pencil and paper. They should not be so complex as to warrant writing another algorithm for the operation.
Structure of an Algorithm
An algorithm has the following structure:
1. Input Step
2. Assignment Step
3. Decision Step
4. Repetitive Step
5. Output Step
Writing Algorithms
There are no well-defined standards for writing algorithms. Rather, it is problem and resource dependent. Algorithms are never written to support a particular programming code.

All programming languages share basic code constructs like loops (do, for, while) and flow control (if-else). These common constructs can be used to write an algorithm.

We usually write algorithms in a step-by-step manner, but that is not always the case. Algorithm writing is a process that is carried out after the problem domain is well defined. That is, we should know the problem domain for which we are designing a solution.
Writing Algorithms - Example
Let's try to learn algorithm-writing by using an
example.
Problem − Design an algorithm to add two
numbers and display the result.
Writing Algorithms - Example
Algorithms tell the programmers how to code
the program. Alternatively, the algorithm can
be written as;
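The two versions the slides refer to are not reproduced here; the following is a sketch of the usual presentation (a step-numbered form first, then the terser form normally preferred in analysis):

```
Method 1 (step-numbered):
Step 1 − START
Step 2 − declare three integers a, b and c
Step 3 − define values of a and b
Step 4 − add values of a and b
Step 5 − store the output of step 4 in c
Step 6 − print c
Step 7 − STOP

Method 2 (terse pseudocode):
START ADD
   get values of a and b
   c ← a + b
   display c
STOP
```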
Writing Algorithms
In the design and analysis of algorithms, the second method is usually used to describe an algorithm.
It makes it easy for the analyst to analyze the algorithm while ignoring all unwanted definitions: they can observe which operations are being used and how the process flows.
Writing step numbers is optional.
We design an algorithm to get a solution to a given problem.
A problem can be solved in more than one way.
Writing Algorithms
Many solution algorithms can be derived for
a given problem.
Algorithm Analysis
 Efficiency of an algorithm can be analyzed at two different stages, before implementation and after implementation:
 A Priori Analysis − This is a theoretical analysis of an algorithm. Efficiency of an algorithm is measured by assuming that all other factors, for example processor speed, are constant and have no effect on the implementation.
 A Posteriori Analysis − This is an empirical analysis of an algorithm. The selected algorithm is implemented in a programming language and executed on a target machine. In this analysis, actual statistics, like running time and space required, are collected.
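An a posteriori measurement of the kind described above can be sketched in Python (a minimal illustration; `linear_search` and the input size are assumptions, not from the slides):

```python
import time

def linear_search(items, target):
    """Return the index of target in items, or -1 if absent."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

# A posteriori analysis: run the implemented algorithm on a
# target machine and collect actual running-time statistics.
data = list(range(100_000))
start = time.perf_counter()
index = linear_search(data, 99_999)
elapsed = time.perf_counter() - start

print(index)    # position at which the target was found
print(elapsed)  # measured wall-clock time in seconds
```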

 We shall focus on a priori algorithm analysis, which deals with the execution or running time of the various operations involved.
Algorithm Complexity
Suppose K is an algorithm and n is the size of the input data. The time and space used by the algorithm K are the two main factors that decide the efficiency of K.
 Time Factor – Time is measured by counting the
number of key operations such as comparisons in
the sorting algorithm.
 Space Factor − Space is measured by counting the
maximum memory space required by the algorithm.
The complexity of an algorithm, f(n), gives the running time and/or the storage space required by the algorithm in terms of n, the size of the input data.
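The time factor can be made concrete by counting a key operation. The helper below is a hypothetical illustration (not from the slides) that counts comparisons while finding a maximum:

```python
def find_max_with_count(items):
    """Return (maximum, comparison_count) for a non-empty list.

    The comparison count is the 'key operation' used as the time factor.
    """
    comparisons = 0
    maximum = items[0]
    for value in items[1:]:
        comparisons += 1        # one key operation per element examined
        if value > maximum:
            maximum = value
    return maximum, comparisons

m, c = find_max_with_count([3, 1, 4, 1, 5, 9, 2, 6])
print(m, c)  # maximum 9, 7 comparisons for n = 8 elements
```

For n elements, exactly n − 1 comparisons are performed, so the time factor grows linearly with the input size.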
Space Complexity
Space complexity of an algorithm represents the amount of memory space required by the algorithm in its life cycle. The space required by an algorithm is equal to the sum of the following two components:

 A fixed part: the space required to store certain data and variables that are independent of the size of the problem. For example, simple variables and constants used, program size, etc.

 A variable part: the space required by variables whose size depends on the size of the problem. For example, dynamic memory allocation, recursion stack space, etc.
Space Complexity
Space complexity S(P) of any algorithm P is S(P) = C + Sp(I), where C is the fixed part and Sp(I) is the variable part of the algorithm, which depends on instance characteristic I. Following is a simple example that tries to explain the concept.

Here we have three variables, A, B, and C, and one constant. Hence S(P) = 1 + 3. Now, the space depends on the data types of the given variables and constant, and the total will be multiplied accordingly.
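The example the slide refers to is not reproduced; a minimal sketch consistent with the description (three variables A, B, C and one constant; the specific constant 10 is an assumption) is:

```python
def sum_example(A, B):
    # Three variables (A, B, C) and one constant (10):
    # this fixed space requirement is independent of any input size,
    # so it all belongs to the fixed part C of S(P) = C + Sp(I).
    C = A + B + 10
    return C

print(sum_example(2, 3))  # 15
```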
Time Complexity
Time complexity of an algorithm represents
the amount of time required by the algorithm
to run to completion. Time requirements can
be defined as a numerical function T(n), where
T(n) can be measured as the number of steps,
provided each step consumes constant time.
For example, addition of two n-bit integers
takes n steps. Consequently, the total
computational time is T(n) = c*n, where c is
the time taken for the addition of two bits.
Here, we observe that T(n) grows linearly as
the input size increases.
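The n-step addition described above can be sketched as follows (a hypothetical illustration; representing the integers as bit lists and counting steps explicitly are assumptions, not from the slides):

```python
def add_bits(a_bits, b_bits):
    """Add two n-bit integers given as bit lists, least significant first.

    Each loop iteration is one constant-time step, so T(n) = c * n.
    """
    n = len(a_bits)
    result, carry, steps = [], 0, 0
    for i in range(n):
        steps += 1                      # one bit addition per step
        total = a_bits[i] + b_bits[i] + carry
        result.append(total % 2)
        carry = total // 2
    if carry:
        result.append(carry)
    return result, steps

# 3 (binary 11) + 1 (binary 01), least significant bit first:
bits, steps = add_bits([1, 1], [1, 0])
print(bits, steps)  # [0, 0, 1] (binary 100 = 4), 2 steps for n = 2
```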
Algorithm Analysis
Suppose M is an algorithm, and suppose n is the size of the input data.

Clearly the complexity f(n) of M increases as n increases. It is usually the rate of increase of f(n), compared with some standard functions, that is analyzed. The most common computing times are:

O(1), O(log₂ n), O(n), O(n log₂ n), O(n²), O(n³), O(2ⁿ)
Asymptotic Analysis
Asymptotic analysis of an algorithm refers to defining the mathematical bounds of its run-time performance. Using asymptotic analysis, we can conclude the best-case, average-case, and worst-case scenarios of an algorithm.
Asymptotic analysis is input bound, i.e., if there is no input to the algorithm, it is concluded to work in constant time.
Other than the input, all other factors are considered constant.
Asymptotic Analysis
Asymptotic analysis refers to computing the running time of any operation in mathematical units of computation. For example, the running time of one operation may be computed as f(n) and that of another operation as g(n²). This means the first operation's running time will increase linearly as n increases, while the second operation's running time will increase quadratically. Similarly, the running times of both operations will be nearly the same if n is sufficiently small.
The time required by an algorithm falls under three types:
Best Case: Minimum time required for program execution.
Average Case: Average time required for program execution.
Worst Case: Maximum time required for program execution.
Asymptotic Notations
Following are the commonly used asymptotic notations to calculate the running time complexity of an algorithm:

Ο Notation

Ω Notation

θ Notation
Big Oh Notation, O
The notation Ο(n) is the formal way to express
the upper bound of an algorithm's running time.
It measures the worst case time complexity or
the longest amount of time an algorithm can
possibly take to complete.

For example, for a function f(n)
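The slide's formula is not reproduced; the standard formal definition, stated here as a reconstruction, is:

```latex
O(f(n)) = \{\, g(n) \;:\; \exists\, c > 0,\ n_0 \ \text{such that}\ g(n) \le c \cdot f(n) \ \text{for all}\ n > n_0 \,\}
```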


Omega Notation, Ω
The notation Ω(n) is the formal way to express
the lower bound of an algorithm's running time.
It measures the best case time complexity or the
best amount of time an algorithm can possibly
take to complete.
Omega Notation, Ω
For example, for a function f(n)
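The slide's formula is not reproduced; the standard formal definition, stated here as a reconstruction, is:

```latex
\Omega(f(n)) = \{\, g(n) \;:\; \exists\, c > 0,\ n_0 \ \text{such that}\ g(n) \ge c \cdot f(n) \ \text{for all}\ n > n_0 \,\}
```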
Theta Notation, θ
The notation θ(n) is the formal way to express
both the lower bound and the upper bound of
an algorithm's running time. It is represented
as follows
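The slide's formula is not reproduced; the standard formal definition, stated here as a reconstruction, is:

```latex
\theta(f(n)) = \{\, g(n) \;:\; g(n) = O(f(n)) \ \text{and}\ g(n) = \Omega(f(n)) \,\}
```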
Common Asymptotic Notations
Reasons for Analyzing Algorithms
1. To predict the resources that the algorithm requires:
Computational time (CPU consumption).
Memory space (RAM consumption).
Communication bandwidth consumption.

2. To predict the running time of an algorithm:
Total number of primitive operations executed.
Greedy Algorithms
An algorithm is designed to achieve an optimum solution for a given problem. In the greedy algorithm approach, decisions are made from the given solution domain. Being greedy, the closest solution that seems to provide an optimum solution is chosen.

Greedy algorithms try to find a localized optimum solution, which may eventually lead to a globally optimized solution. In general, however, greedy algorithms do not provide globally optimized solutions.
Counting Coins
This problem is to count up to a desired value by choosing the fewest possible coins, and the greedy approach forces the algorithm to pick the largest possible coin. If we are provided coins of €1, €2, €5 and €10 and we are asked to count €18, the greedy procedure will be:
1 − Select one €10 coin; the remaining count is 8.
2 − Then select one €5 coin; the remaining count is 3.
3 − Then select one €2 coin; the remaining count is 1.
4 − Finally, the selection of one €1 coin solves the problem.
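The greedy procedure above can be sketched as follows (the function name is a hypothetical illustration):

```python
def greedy_coins(denominations, amount):
    """Pick the largest possible coin at each step (greedy choice)."""
    picked = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            picked.append(coin)
            amount -= coin
    return picked

print(greedy_coins([1, 2, 5, 10], 18))  # [10, 5, 2, 1] → 4 coins
```

Sorting the denominations in descending order is what encodes the greedy choice of always taking the largest coin first.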
Counting Coins
Though it seems to work fine (for this count we need to pick only 4 coins), if we slightly change the problem, the same approach may not be able to produce the optimum result.
For a currency system where we have coins of value 1, 7, and 10, counting coins for the value 18 will still be optimum, but for a count like 15 it may use more coins than necessary.
For example, the greedy approach will use 10 + 1 + 1 + 1 + 1 + 1, a total of 6 coins, whereas the same problem could be solved by using only 3 coins (7 + 7 + 1).
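The failure case can be checked in code. The sketch below compares the greedy pick with the true minimum computed by dynamic programming (both helpers are illustrative additions, not from the slides):

```python
def greedy_coins(denominations, amount):
    """Largest-coin-first greedy selection."""
    picked = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            picked.append(coin)
            amount -= coin
    return picked

def min_coins(denominations, amount):
    """Minimum number of coins needed, via dynamic programming."""
    INF = amount + 1                      # sentinel: more than any real answer
    best = [0] + [INF] * amount
    for value in range(1, amount + 1):
        for coin in denominations:
            if coin <= value:
                best[value] = min(best[value], best[value - coin] + 1)
    return best[amount]

print(len(greedy_coins([1, 7, 10], 15)))  # 6 coins: 10 + 1 + 1 + 1 + 1 + 1
print(min_coins([1, 7, 10], 15))          # 3 coins: 7 + 7 + 1
```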
Counting Coins
 Hence, we may conclude that the greedy approach picks an immediately optimized solution and may fail where global optimization is a major concern.

 Examples
 Many well-known algorithms use the greedy approach. Here is a list of a few of them:
 Travelling Salesman Problem
 Prim's Minimal Spanning Tree Algorithm
 Kruskal's Minimal Spanning Tree Algorithm
 Dijkstra's Shortest Path Algorithm
 Graph - Map Coloring
 Graph - Vertex Cover
 Knapsack Problem
 Job Scheduling Problem
 There are lots of similar problems that use the greedy approach to find an optimum solution.
Questions
