Introduction To Algorithms
Learning objectives:
After studying this lesson, students should be able to:
• Discuss what an algorithm is and how to use it to represent the solution of a problem.
• Use flowcharts and pseudocode to represent algorithms.
I. ALGORITHM
I.1 Definitions:
• Algorithm: an ordered set of unambiguous steps that produces a result and terminates in a finite
time.
• It is a step-by-step problem-solving procedure, especially an established, recursive computational
procedure for solving a problem in a finite number of steps.
The specification of an algorithm must be “precise”. To write a precise algorithm, we need to pay attention
to three things:
o Capability. Make sure that the computer knows what and how to do the operations.
o Language. Ensure that the description is unambiguous—that is, it can only be read and
understood one way.
o Context. Make few assumptions about the input and execution setting.
I.2 Properties of algorithms:
To write algorithms that are specified well enough for a computer to follow, make sure every algorithm
has five essential properties:
❖ Inputs specified
❖ Outputs specified
❖ Definiteness or unambiguous
❖ Effectiveness
❖ Finiteness
1. Inputs specified:
An algorithm accepts zero or more inputs. The inputs are the data that will be transformed during the
computation to produce the output. We must specify the type of the data, the amount of data, and the
form that the data will take.
• Suppose the algorithm is a recipe. We must list the ingredients (type of inputs), their quantities
(amount of input), and their preparation, if any (form of inputs), as in “1/4 cup onion, minced.”
2. Outputs specified:
The outputs are the data resulting from the computation, that is, the intended result. An algorithm produces at least one output.
3. Definiteness:
Algorithms must specify every step. Definiteness means specifying the sequence of operations for
transforming the inputs into the outputs. Every detail of each step must be spelled out, including how to
handle errors. Definiteness ensures that if the algorithm is performed at different times or by different
agents, the same result is produced.
4. Effectiveness:
Every step must be basic enough that it can, in principle, be carried out exactly and in a finite amount of
time.
5. Finiteness:
An algorithm must have finiteness; that is, the algorithm has to stop after a finite number of instructions
are executed. It must eventually stop, either with the right output or with a statement that no solution is
possible.
If no answer comes back, we cannot tell whether the algorithm is still working on an answer or is just plain
"stuck". Finiteness becomes an issue for computer algorithms because if the algorithm does not specify
when to stop the repetition, the computer will continue to repeat the instructions forever. For example,
dividing 3 into 10 produces the repeating decimal 3.333..., so the algorithm must state how many digits to
produce. Any process having these five properties will be called an algorithm.
We use algorithms in our daily life. For example, to wash hands, either of the following algorithms may be used.

Algorithm 1:
1. Start
2. Turn on water
3. Dispense soap
4. Wash hands till clean
5. Rinse soap off
6. Turn off water
7. Dry hands
8. Stop

Algorithm 2:
1. Start
2. Turn on water
3. Dispense soap
4. Repeat: rub hands together
5. Until hands are clean
6. Rinse soap off
7. Turn off water
8. Dry hands
9. Stop
Each of the above-mentioned algorithms terminates after a finite number of steps. This illustrates the
property of finiteness. Every action of the algorithm is precisely defined; hence, there is no scope for
ambiguity.
b) Advantages of flowchart
The flowchart shows how the program works before you begin actually coding it. Some advantages of
flowcharting are the following.
c) Limitations of Flowcharts
Flowchart can be used for designing the basic concept of the program in pictorial form, but cannot be
used for programming purposes. Some of the limitations of the flowchart are given below:
• Complex: The major disadvantage in using flowcharts is that when a program is very large, the
flowcharts may continue for many pages, making them hard to follow.
• Costly: If the flowchart is to be drawn for a huge program, the time and cost factor of program
development may get out of proportion, making it a costly affair.
• Difficult to Modify: Due to its symbolic nature, any change or modification to a flowchart
usually requires redrawing the entire logic again, and redrawing a complex flowchart is not a
simple task.
• No Update: Usually, programs are updated regularly. However, the corresponding update of
flowcharts may not take place, especially in the case of large programs.
II.2 Pseudocode
It is an English-like representation of the code required for an algorithm. It is part English and part
structured code.
Pseudocode is a detailed yet readable description of what an algorithm must do, expressed in a
formally-styled natural language rather than in a programming language. It describes the entire
logic of the algorithm so that implementation becomes a rote mechanical task of translating line by
line into source code.
a) Pseudocode Structures
Before going ahead with pseudocode, let us discuss some keywords, which are often used to indicate
input, output and processing operations.
b) Example of pseudocode

c) Advantages of pseudocode
• It is easier to develop a program from pseudocode than from a flowchart or decision table.
• It is often easy to translate pseudocode into a programming language, a step which can be
accomplished by less-experienced programmers (ease of understanding).
• Unlike flowcharts, pseudocode is compact and does not tend to run over many pages. Its simple
structure and readability make it easier to modify as well (reduced complexity).
Although pseudocode is a very simple mechanism to simplify problem-solving logic, it has its own
limitations. Some of the most notable limitations are as follows:
Computer scientists have defined three constructs for a structured program or algorithm. The three
general programming constructs are:
❖ sequence,
❖ decision (selection)
❖ repetition (loop)
Each of these constructs can be embedded inside any other construct. It has been proven that these three
basic constructs for flow of control are sufficient to implement any 'proper' algorithm.
The sequence construct is a linear progression where one task is performed sequentially after another. The
actions are performed in the same sequence (top to bottom) in which they are written.
(The corresponding flowchart is not reproduced here.)

Pseudocode:
Begin
  Input a
  P = 4 × a
  S = a × a
  Print P, S
End

C:
#include <stdio.h>
int main()
{
    int a, P, S;
    printf("Enter the length");
    scanf("%d", &a);
    P = 4 * a;
    S = a * a;
    printf("\nPerimeter = %d", P);
    printf("\nSurface = %d", S);
    return 0;
}
Note that there is no branching and no process is repeated again. Each process is contributing something
to the next process.
III.1 Selection (Decision)
Selection is the process of deciding which choice is made between two or more alternative courses
of action. Selection logic is depicted as an IF-THEN-ELSE-ENDIF or CASE-ENDCASE
structure. As the name suggests, in the IF-THEN-ELSE-ENDIF construct, if the condition is
true the true-alternative actions are performed, and if the condition is false the false-alternative
actions are performed.
a) IF-THEN-ELSE-ENDIF construct

Pseudocode:
IF condition THEN
  List of actions
ELSE
  List of different actions
ENDIF

C:
if (condition)
{
    List of actions
}
else
{
    List of different actions
}
Note that the ELSE keyword and 'Action 2' are optional. In case you do not want to choose between
two alternative courses of action, simply use IF-THEN-ENDIF.
Pseudocode:
IF condition THEN
  List of actions
ENDIF

C:
if (condition)
{
    List of actions
}
Hence, if the condition is true, the list of actions in the IF-THEN-ENDIF construct is performed
before moving on to the other actions in the process. If the condition is false, the rest of the actions
in the process are executed directly. Let us write a pseudocode to find the largest of three numbers.
b) CASE-ENDCASE construct
If there are a number of conditions to be checked, then using multiple IFs may look very clumsy. Hence,
it is advisable to use the CASE-ENDCASE selection construct for multiple-way selection logic. A CASE
construct indicates a multiway branch based on many conditions. The CASE construct uses four keywords,
CASE, OF, OTHERS and ENDCASE, along with conditions that are used to select the alternative to execute.
Example (IF-THEN-ELSE):

Pseudocode:
START
  Input N
  IF N < 100 THEN
    N = N + 100
  ELSE
    N = N - 100
  ENDIF
  Print N
STOP

C:
#include <stdio.h>
int main()
{
    int N;
    printf("Enter the number: ");
    scanf("%d", &N);
    if (N < 100)
        N = N + 100;
    else
        N = N - 100;
    printf("\nnumber is %d", N);
    return 0;
}
Pseudocode:
CASE expression OF
  Condition 1: Sequence 1
  Condition 2: Sequence 2
  ...
  Condition n: Sequence n
  OTHERS: default sequence
ENDCASE

C:
switch (expression)
{
    case value1: sequence1; break;
    case value2: sequence2; break;
    ...
    case valuen: sequencen; break;
    default: default sequence;
}
Example:

Pseudocode:
START
  READ code
  CASE code OF
    'A': discount = 0.0
    'B': discount = 0.1
    'C': discount = 0.2
    OTHERS: discount = 0.3
  ENDCASE
  DISPLAY discount
STOP

C:
#include <stdio.h>
int main()
{
    char code;
    float discount;
    scanf(" %c", &code);
    switch (code)
    {
        case 'A': discount = 0.0; break;
        case 'B': discount = 0.1; break;
        case 'C': discount = 0.2; break;
        default:  discount = 0.3;
    }
    printf("discount is: %f ", discount);
    return 0;
}
The looping construct is used when some particular task(s) is to be repeated a number of times
according to a specified condition. By using looping, the programmer avoids repeating the same
set of instructions. Like the selection, the loop is also represented in a flowchart by a diamond; the
difference is just the orientation of the arrows.
In case of WHILE-ENDWHILE, the loop will continue as long as the condition is true. The loop
is entered only if the condition is true. The 'statement' is performed for each iteration. At the
conclusion of each iteration, the condition is evaluated and the loop continues as long as the
condition is true.
Pseudocode:
START
  INITIALIZE Count to zero
  WHILE Count < 10
    PRINT Count
    ADD 1 to Count
  ENDWHILE
STOP

C:
#include <stdio.h>
int main()
{
    int i = 0;
    while (i < 10)
    {
        printf("%d ", i);
        i++;
    }
    return 0;
}
The DO-WHILE (REPEAT-UNTIL) loop is similar to the WHILE-ENDWHILE, except that the test is
performed at the bottom of the loop instead of at the top.
• Like a while loop, a do-while loop is a loop that repeats while some condition is satisfied.
• Unlike a while loop, a do-while loop tests its condition at the end of the loop. This means that
its sequence of activities always runs at least once.
The 'statement' in this type of loop is always performed at least once, because the test is performed
after the statement is executed. At the end of each iteration the condition is evaluated. Note the
difference in the sense of the test: a REPEAT-UNTIL loop repeats until its condition becomes true
(it terminates when the condition is true), whereas a C do-while loop repeats while its condition
remains true (it terminates when the condition becomes false).
Example: To display the first ten natural numbers using a DO-WHILE loop

Pseudocode:
START
  INITIALIZE Count to zero
  REPEAT
    PRINT Count
    ADD 1 to Count
  UNTIL Count is 10
STOP

C:
#include <stdio.h>
int main()
{
    int i = 0;
    do
    {
        printf("%d ", i);
        i++;
    } while (i < 10);
    return 0;
}
c) FOR Loop
• The counter has the following three numeric values:
  – Initial counter value
  – Increment (the amount to add to the counter each time the loop runs)
  – Final counter value
• The loop ends when the counter reaches the final counter value or, if there is an associated test
condition, when the test condition is true.
Output:
Hello World
Hello World
Hello World
Hello World
APPLICATION EXERCISES
Exercise 1: Write the flowchart corresponding to the following pseudo code
Exercise 4:
On a separate sheet of paper, make a flowchart organizing the “flow” of getting ready to go to
school in the morning. Be sure to include the following steps in your chart, but don’t be afraid to
add other things if you need them
A greedy algorithm would take the blue path, as a result of shortsightedness, rather than the orange path,
which yields the largest sum.
ADVANTAGES
➢ Always taking the best available choice is usually easy. It usually requires sorting the choices.
➢ Repeatedly taking the next available best choice is usually linear work. But don't forget the cost
of sorting the choices.
➢ Much cheaper than exhaustive search. Much cheaper than most other algorithms.
DISADVANTAGES
➢ Sometimes greedy algorithms fail to find the globally optimal solution because they do not
consider all the data. The choice made by a greedy algorithm may depend on choices it has made
so far, but it is not aware of future choices it could make.
The sub-problems are optimized, that is to say, we try to find the minimum or maximum solution of the
problem.
Consider an example of the Fibonacci series. The following series is the Fibonacci series: 0, 1, 1, 2,
3, 5, 8, 13, 21, 34, 55, 89, 144, ,…
The numbers in the above series are not randomly calculated. Mathematically, we could write each of the terms
using the formula:

F(n) = F(n - 1) + F(n - 2), for n >= 2

with the base values F(0) = 0 and F(1) = 1. To calculate the other numbers, we follow the above relationship. For
example, F(2) is the sum of F(0) and F(1), which is equal to 1.
Dynamic programming can follow either of two approaches:
➢ Top-down approach
➢ Bottom-up approach
The top-down approach uses the memoization technique, while the bottom-up approach uses the
tabulation method.
Here memoization is equal to the sum of recursion and caching. Recursion means calling the function itself,
while caching means storing the intermediate results.
In the bottom-up (tabulation) approach, the table is filled from the base values upward:
Initially, the first two values are F(0) = 0 and F(1) = 1.
When i = 2, the values 0 and 1 are added: F(2) = 1.
When i = 3, the values 1 and 1 are added: F(3) = 2.
When i = 4, the values 2 and 1 are added: F(4) = 3.
In divide and conquer approach, the problem in hand, is divided into smaller sub-problems and then each problem
is solved independently. Then solution of all sub-problems is finally merged in order to obtain the solution of the
original problem.
Divide/Break
This step involves breaking the problem into smaller sub-problems. Sub-problems should represent a part of the
original problem. This step generally takes a recursive approach to divide the problem until no sub-problem is
further divisible. At this stage, sub-problems become atomic in nature but still represent some part of the actual
problem.
Conquer/Solve
This step involves solving the smaller sub-problems. Usually, at this level, the sub-problems are small enough
to be solved directly.
Merge/Combine
When the smaller sub-problems are solved, this stage recursively combines them until they formulate a solution of
the original problem. This algorithmic approach works recursively, and the conquer and merge steps work so
closely that they appear as one.
Examples
The following computer algorithms are based on divide-and-conquer programming approach −
• Merge Sort
• Quick Sort
• Binary Search
• Strassen's Matrix Multiplication
• Closest pair (points)
There are various ways available to solve any computer problem, but the mentioned are a good example of divide
and conquer approach.
ADVANTAGES:
Divide and conquer is a powerful tool for solving conceptually difficult problems: all it requires is a way
of breaking the problem into sub-problems, of solving the trivial cases and of combining sub-problems to
the original problem. Similarly, decrease and conquer only requires reducing the problem to a single
smaller problem, such as the classic Tower of Hanoi puzzle, which reduces moving a tower of height n to
moving a tower of height n − 1.
➢ Algorithm efficiency
The divide-and-conquer paradigm often helps in the discovery of efficient algorithms. It was the key, for
example, to Karatsuba’s fast multiplication method, the quicksort and mergesort algorithms, the Strassen
algorithm for matrix multiplication, and fast Fourier transforms.
In all these examples, the divide and conquer approach led to an improvement in the asymptotic cost of
the solution. For example, if (a) the base cases have constant-bounded size, the work of splitting the
problem and combining the partial solutions is proportional to the problem's size n, and (b) there is a
bounded number p of sub-problems of size about n/p at each stage, then the cost of the divide-and-conquer
algorithm will be O(n log n), with the logarithm taken to base p.
➢ Parallelism
Divide and conquer algorithms are naturally adapted for execution in multi-processor machines,
especially shared-memory systems where the communication of data between processors does not need
to be planned in advance, because distinct sub-problems can be executed on different processors.
In computations with rounded arithmetic, e.g. with floating point numbers, a divide-and-conquer
algorithm may yield more accurate results than a superficially equivalent iterative method. For example,
one can add N numbers either by a simple loop that adds each datum to a single variable, or by a D&C
algorithm called pairwise summation that breaks the data set into two halves, recursively computes the
sum of each half, and then adds the two sums. While the second method performs the same number of
additions as the first, and pays the overhead of the recursive calls, it is usually more accurate.
DISADVANTAGES:
One of the most common issues with this sort of algorithm is the fact that the recursion is slow, which in
some cases outweighs any advantages of this divide and conquer process.
Another concern is that sometimes it can become more complicated than a basic iterative approach,
especially in cases with a large n. In other words, if someone wanted to add a large set of numbers
together, a simple loop that adds them one by one would be a much simpler approach than dividing the
numbers into two groups, adding each group recursively, and then adding the sums of the two groups
together.
Another downfall is that sometimes, once the problem is broken down into sub-problems, the same
sub-problem can occur many times. In cases like these, it can often be easier to identify and save the
solution to the repeated sub-problem, a technique commonly referred to as memoization.
The last recognizable implementation issue is that these algorithms can be carried out by a non-recursive
program that stores the sub-problems on an explicit stack, which gives more freedom in deciding the
order in which the sub-problems should be solved.
These implementation issues do not make this process a bad decision when it comes to solving difficult
problems, but rather this paradigm is the basis of many frequently used algorithms.
Branch and bound (BB, B&B, or BnB) is an algorithm design paradigm for discrete and combinatorial
optimization problems.
For example
➢ Tree sort
➢ Graph sort
The complexity of an algorithm is a function describing the efficiency of the algorithm in terms of the
amount of data the algorithm must process. Usually there are natural units for the domain and range of this
function. There are two main complexity measures of the efficiency of an algorithm:
• Time complexity is the amount of time taken by an algorithm to run, as a function of the length of
the input. It measures the time taken to execute each statement of code in an algorithm.
• Space complexity is a function describing the amount of memory (space) an algorithm takes in
terms of the amount of input to the algorithm.
For example, we might say "this algorithm takes n^2 time," where n is the number of items in the input. Or
we might say "this algorithm takes constant extra space," because the amount of extra memory needed
does not vary with the number of items processed.
The first algorithm is defined to print a statement only once; its measured execution time is shown as 0
nanoseconds. The second algorithm prints the same statement, but this time inside a FOR loop that runs
10 times. In the second algorithm, the time taken to execute both lines of code, the FOR loop and the
print statement, is 2 milliseconds. The time taken increases as N increases, since the statement is going
to be executed N times.
The time complexity of Insertion Sort in the best case is O(n). In the worst case, the time complexity is O(n^2).
The time complexity of Merge Sort in the best case is O(nlogn). In the worst case, the time complexity is also
O(nlogn); this sorting technique has a stable time complexity for all kinds of cases, because Merge Sort always
divides the array into two halves and takes linear time to merge them.
The time complexity of Quick Sort in the best case is O(nlogn). In the worst case, the time complexity is O(n^2).
Quicksort is considered to be the fastest of the sorting algorithms due to its performance of O(nlogn) in the best
and average cases.
Let us now dive into the time complexities of some searching algorithms and understand which of them is faster.
Linear Search follows sequential access. The time complexity of Linear Search in the best case is O(1); in the
worst case, the time complexity is O(n).
Binary Search is the faster of the two searching algorithms. However, for smaller arrays, linear search does a
better job. The time complexity of Binary Search in the best case is O(1). In the worst case, the time complexity
is O(log n).