LECTURE 1 - Algorithms Basics

Algorithms Basics

An algorithm is a step-by-step procedure, which defines a set of instructions to
be executed in a certain order to get the desired output. Algorithms are
generally created independent of underlying languages, i.e., an algorithm
can be implemented in more than one programming language.

From the data structure point of view, the following are some important
categories of algorithms −

 Search − Algorithm to search for an item in a data structure (a minimal sketch follows this list).
 Sort − Algorithm to sort items in a certain order.
 Insert − Algorithm to insert an item in a data structure.
 Update − Algorithm to update an existing item in a data structure.
 Delete − Algorithm to delete an existing item from a data structure.
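As an illustration of the Search category, here is a minimal linear-search sketch in Python (the function name and sample list are illustrative, not from the lecture):

def linear_search(items, target):
    # Examine each position in turn until the target is found.
    for index, value in enumerate(items):
        if value == target:
            return index  # position of the first match
    return -1  # target is not present

print(linear_search([4, 2, 7, 1], 7))  # prints 2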

Characteristics of an Algorithm
Not all procedures can be called an algorithm. An algorithm should have the
following characteristics −

 Unambiguous − Algorithm should be clear and unambiguous. Each of its steps
(or phases), and their inputs/outputs, should be clear and must lead to only one
meaning.
 Input − An algorithm should have 0 or more well-defined inputs.
 Output − An algorithm should have 1 or more well-defined outputs, and should
match the desired output.
 Finiteness − Algorithms must terminate after a finite number of steps.
 Feasibility − Should be feasible with the available resources.
 Independent − An algorithm should have step-by-step directions, which should
be independent of any programming code.

How to Write an Algorithm?

There are no well-defined standards for writing algorithms. Rather, it is
problem and resource dependent. Algorithms are never written to support a
particular programming code.

As we know, all programming languages share basic code constructs
like loops (do, for, while), flow control (if-else), etc. These common
constructs can be used to write an algorithm.

We generally write algorithms in a step-by-step manner, but that is not always
the case. Algorithm writing is a process that is executed after the problem
domain is well-defined. That is, we should know the problem domain for which
we are designing a solution.

Example
Let's try to learn algorithm-writing by using an example.

Problem − Design an algorithm to add two numbers and display the result.
Step 1 − START
Step 2 − declare three integers a, b & c
Step 3 − define values of a & b
Step 4 − add values of a & b
Step 5 − store output of step 4 to c
Step 6 − print c
Step 7 − STOP

Algorithms tell the programmers how to code the program. Alternatively,
the algorithm can be written as −
Step 1 − START ADD
Step 2 − get values of a & b
Step 3 − c ← a + b
Step 4 − display c
Step 5 − STOP
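
Either form translates directly into code. Here is a minimal Python sketch of the ADD algorithm (variable names follow the pseudocode; reading the inputs interactively is an assumption for illustration):

# Step 1 − START ADD
def add():
    # Step 2 − get values of a & b
    a = int(input("Enter a: "))
    b = int(input("Enter b: "))
    # Step 3 − c ← a + b
    c = a + b
    # Step 4 − display c
    print(c)
    # Step 5 − STOP

add()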

In the design and analysis of algorithms, usually the second method is used to
describe an algorithm. It makes it easy for the analyst to analyze the
algorithm, ignoring all unwanted definitions, and to observe what operations
are being used and how the process flows.

Writing step numbers is optional.

We design an algorithm to get a solution to a given problem. A problem can
be solved in more than one way. Hence, many solution algorithms can be
derived for a given problem. The next step is to analyze those proposed
solution algorithms and implement the most suitable one.
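
For example, the problem "compute 1 + 2 + … + n" admits at least two solution algorithms, sketched below in Python (function names are illustrative):

def sum_loop(n):
    # Add the numbers one at a time: about n steps.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    # Use the identity 1 + 2 + ... + n = n(n + 1)/2: a constant number of steps.
    return n * (n + 1) // 2

print(sum_loop(100), sum_formula(100))  # both print 5050

Analysis would favor the second algorithm, since its running time does not grow with n.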

Algorithm Analysis
Efficiency of an algorithm can be analyzed at two different stages, before
implementation and after implementation. They are the following −

 A Priori Analysis − This is a theoretical analysis of an algorithm. Efficiency of
an algorithm is measured by assuming that all other factors, for example,
processor speed, are constant and have no effect on the implementation.
 A Posteriori Analysis − This is an empirical analysis of an algorithm. The
selected algorithm is implemented using a programming language and then
executed on the target machine. In this analysis, actual statistics like
running time and space required are collected.
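
A posteriori analysis can be as simple as timing an implementation on real inputs. A minimal sketch using Python's standard time module (the workload function is illustrative):

import time

def work(n):
    # Illustrative workload: sum the first n integers with a loop.
    total = 0
    for i in range(n):
        total += i
    return total

start = time.perf_counter()
work(1_000_000)
elapsed = time.perf_counter() - start
print(f"running time: {elapsed:.4f} seconds")  # an actual, machine-dependent statistic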

We shall learn about a priori algorithm analysis. Algorithm analysis deals
with the execution or running time of the various operations involved. The
running time of an operation can be defined as the number of computer
instructions executed per operation.

Algorithm Complexity
Suppose X is an algorithm and n is the size of the input data. The time and
space used by the algorithm X are the two main factors that decide the
efficiency of X.

 Time Factor − Time is measured by counting the number of key operations,
such as comparisons in a sorting algorithm.
 Space Factor − Space is measured by counting the maximum memory space
required by the algorithm.

The complexity of an algorithm f(n) gives the running time and/or the
storage space required by the algorithm in terms of n as the size of the
input data.

Space Complexity
Space complexity of an algorithm represents the amount of memory space
required by the algorithm in its life cycle. The space required by an
algorithm is equal to the sum of the following two components −

 A fixed part, that is, the space required to store certain data and variables that
are independent of the size of the problem. For example, simple variables and
constants used, program size, etc.
 A variable part, that is, the space required by variables whose size depends on
the size of the problem. For example, dynamic memory allocation, recursion
stack space, etc.

Space complexity S(P) of any algorithm P is S(P) = C + SP(I), where C is
the fixed part and SP(I) is the variable part of the algorithm, which depends
on instance characteristic I. Following is a simple example that tries to
explain the concept −

Algorithm: SUM(A, B)
Step 1 - START
Step 2 - C ← A + B + 10
Step 3 - Stop

Here we have three variables A, B, and C, and one constant. Hence S(P) = 1
+ 3. Now, the actual space depends on the data types of the given variables
and constants, and it is multiplied accordingly.
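
To see the fixed and variable parts in code, compare two Python sketches (illustrative, not from the lecture). The first uses a fixed number of variables regardless of n; the second allocates a list whose size grows with n:

def sum_constant_space(n):
    # Fixed part only: 'total' and 'i' exist no matter how large n is.
    total = 0
    for i in range(n):
        total += i
    return total

def sum_linear_space(n):
    # Variable part: the list holds n elements, so space grows with the problem size.
    values = list(range(n))
    return sum(values)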

Time Complexity
Time complexity of an algorithm represents the amount of time required by
the algorithm to run to completion. Time requirements can be defined as a
numerical function T(n), where T(n) can be measured as the number of
steps, provided each step consumes constant time.
For example, addition of two n-bit integers takes n steps. Consequently, the
total computational time is T(n) = c ∗ n, where c is the time taken for
the addition of two bits. Here, we observe that T(n) grows linearly as the
input size increases.
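
A Python sketch of that bit-by-bit addition (representing the integers as lists of bits, least significant first, is an assumption for illustration) makes the linear step count visible:

def add_bits(a_bits, b_bits):
    # Each loop iteration adds one pair of bits, so n bits take n steps: T(n) = c * n.
    carry = 0
    result = []
    for a, b in zip(a_bits, b_bits):
        total = a + b + carry
        result.append(total % 2)
        carry = total // 2
    result.append(carry)
    return result

print(add_bits([1, 1, 0], [0, 1, 1]))  # 3 + 6 = 9, i.e. [1, 0, 0, 1]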

Asymptotic Analysis
Asymptotic analysis of an algorithm refers to defining the mathematical
bounds/framing of its run-time performance. Using asymptotic
analysis, we can very well conclude the best case, average case, and worst
case scenarios of an algorithm.

Asymptotic analysis is input bound, i.e., if there's no input to the algorithm,
it is concluded to work in constant time. Other than the "input", all other
factors are considered constant.

Asymptotic analysis refers to computing the running time of any operation
in mathematical units of computation. For example, the running time of one
operation may be computed as f(n) and that of another operation as g(n²).
This means the running time of the first operation will increase linearly
with the increase in n, while the running time of the second operation will
increase quadratically as n increases. Similarly, the running times of both
operations will be nearly the same if n is significantly small.

Usually, the time required by an algorithm falls under three types −

 Best Case − Minimum time required for program execution.
 Average Case − Average time required for program execution.
 Worst Case − Maximum time required for program execution.
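
Linear search illustrates all three cases; here is a sketch in Python with an illustrative step counter:

def linear_search_steps(items, target):
    # Return (index, steps), where steps counts the comparisons performed.
    steps = 0
    for index, value in enumerate(items):
        steps += 1
        if value == target:
            return index, steps
    return -1, steps

data = [5, 3, 8, 1]
print(linear_search_steps(data, 5))  # best case: found at position 0 in 1 step
print(linear_search_steps(data, 9))  # worst case: not found, all n = 4 steps used

On average, a successful search examines about half of the list.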

Asymptotic Notations
Following are the commonly used asymptotic notations to calculate the
running time complexity of an algorithm.

 Ο Notation
 Ω Notation
 θ Notation
Big Oh Notation, Ο
The notation Ο(n) is the formal way to express the upper bound of an
algorithm's running time. It measures the worst case time complexity, or the
longest amount of time an algorithm can possibly take to complete.

For example, for a function f(n),

Ο(f(n)) = { g(n) : there exist c > 0 and n0 such that g(n) ≤ c · f(n) for all n > n0 }
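
As a worked instance of this definition: for g(n) = 3n + 2, we have 3n + 2 ≤ 4n whenever n ≥ 2, so taking c = 4 and n0 = 2 gives g(n) = Ο(n).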

Further Explanation of Big O Notation

Big O Notation is the language we use to describe the complexity of an
algorithm. In other words, Big O Notation is the language we use for talking
about how long an algorithm takes to run. It is how we compare the
efficiency of different approaches to a problem. With Big O Notation we
express the runtime in terms of how quickly it grows relative to the input,
as the input gets larger.

Let’s break down the last line:

1. how quickly the runtime grows − It is difficult to determine the exact
runtime of an algorithm, because it depends on the speed of the computer's
processor. So instead of talking about the runtime directly, we use Big O
Notation to talk about how quickly the runtime grows.

2. relative to the input − If we were measuring our runtime directly, we
could express our speed in seconds and minutes. Since we are measuring
how quickly our runtime grows, we need to express our speed in terms of
something else. With Big O Notation, we use the size of the input, which
we call "n". So we can say things like the runtime grows "on the order of
the size of the input" (O(n)) or "on the order of the square of the size of
the input" (O(n²)).

3. as the input gets larger — Our algorithm may have steps that seem
expensive when n is small but are eclipsed eventually by other steps as n
gets larger. For Big O Notation analysis, we care more about the stuff
that grows fastest as the input grows, because everything else is quickly
eclipsed as n gets very large.

Some Examples of Big O Notation

The following function runs in O(1) time (or "constant time") relative to its
input. This means that the input array could be 1 item or 1,000 items, but
this function would still just require one "step."
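
The code for this example is not reproduced in the lecture; a minimal Python sketch matching the description might be:

def print_first_item(items):
    # One step, no matter how many items the list holds: O(1).
    print(items[0])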

The next function runs in O(n) time (or "linear time"), where n is the number
of items in the array. This means that if the array has 10 items, it has to
print 10 times. If it has 1,000 items, it has to print 1,000 times.
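
Again, a minimal sketch consistent with the description (assumed, since the original code is not included):

def print_all_items(items):
    # One print per item: n steps for n items, so O(n).
    for item in items:
        print(item)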
The final example nests two loops. If the array has n items, the outer
loop runs n times and the inner loop runs n times for each iteration of the
outer loop, giving us n² total prints. Thus this function runs in O(n²) time
(or "quadratic time"). If the array has 10 items, it has to print 100 times.
If it has 1,000 items, it has to print 1,000,000 times.
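
A matching sketch (assumed, as above):

def print_all_pairs(items):
    # The inner loop runs n times for each of the n outer iterations: n * n prints, O(n²).
    for first in items:
        for second in items:
            print(first, second)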

Omega Notation, Ω
The notation Ω(n) is the formal way to express the lower bound of an
algorithm's running time. It measures the best case time complexity, or the
minimum amount of time an algorithm can possibly take to complete.

For example, for a function f(n),

Ω(f(n)) = { g(n) : there exist c > 0 and n0 such that g(n) ≥ c · f(n) for all n > n0 }
Theta Notation, θ
The notation θ(n) is the formal way to express both the lower bound and
the upper bound of an algorithm's running time. It is represented as follows −

θ(f(n)) = { g(n) : g(n) = Ο(f(n)) and g(n) = Ω(f(n)) }
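
Continuing the earlier worked example: g(n) = 3n + 2 satisfies 3n ≤ 3n + 2 ≤ 4n for all n ≥ 2, so g(n) = Ω(n) with c = 3 and g(n) = Ο(n) with c = 4; hence g(n) = θ(n).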

Common Asymptotic Notations

Following is a list of some common asymptotic notations −

Constant − Ο(1)
Logarithmic − Ο(log n)
Linear − Ο(n)
n log n − Ο(n log n)
Quadratic − Ο(n²)
Cubic − Ο(n³)
Polynomial − n^Ο(1)
Exponential − 2^Ο(n)
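
To get a feel for how these classes separate, a small Python sketch comparing a few of them at increasing n (the chosen values of n are illustrative):

import math

# Compare how some common complexity classes grow with the input size n.
for n in (10, 100, 1000):
    print(f"n={n:>5}  log n={math.log2(n):7.1f}  n log n={n * math.log2(n):10.1f}  n^2={n**2:9d}")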
