Learning Module: Algorithm and Complexity (AL101)
LEARNING MODULE
IN
ALGORITHM AND
COMPLEXITY
(AL101)
MIDTERM
1ST SEMESTER A.Y. 2020-2021
FELIPE G. ANTE JR
CS Instructor
LEARNING TASK 6-7 (April 12-24,2021)
TOPICS
DIVIDE & CONQUER
MAX-MIN PROBLEM
MERGE SORT
TOPIC OVERVIEW
Learning Task 6-7 will discuss Divide & Conquer, the Max-Min Problem, and Merge Sort.
1. Identify the complexities in the Divide and Conquer and Max-Min Problem design strategies
2. Identify the complexities in the Merge Sort design strategy
-------------------------------------------------------------------------------------------------------------------------------
ACTIVITY 6-7
-------------------------------------------------------------------------------------------------------------------------------
PRE-ASSESSMENT 6-7
-------------------------------------------------------------------------------------------------------------------------------
Broadly, we can understand the divide-and-conquer approach as a three-step process.
Divide/Break
This step involves breaking the problem into smaller sub-problems. Sub-problems should
represent a part of the original problem. This step generally takes a recursive approach to divide
the problem until no sub-problem is further divisible. At this stage, sub-problems become atomic
in nature but still represent some part of the actual problem.
Conquer/Solve
This step receives many smaller sub-problems to be solved. Generally, at this level, the
problems are considered 'solved' on their own.
Merge/Combine
When the smaller sub-problems are solved, this stage recursively combines them until
they formulate a solution of the original problem. This algorithmic approach works recursively, and
the conquer and merge steps work so closely together that they appear as one.
Examples
The following computer algorithms are based on divide-and-conquer programming approach −
Merge Sort
Quick Sort
Binary Search
Strassen's Matrix Multiplication
Closest pair (points)
There are various ways to solve any computer problem, but the ones mentioned above are
good examples of the divide and conquer approach.
Problem Statement
The Max-Min Problem in algorithm analysis is finding the maximum and minimum value
in an array.
Solution
To find the maximum and minimum numbers in a given array numbers[] of size n, the
following algorithms can be used. First we present the naïve method, and then the divide and
conquer approach.
Naïve Method
The naïve method is a basic way to solve any problem. In this method, the maximum and
minimum numbers are found separately. To find the maximum and minimum numbers, the
following straightforward algorithm can be used.
Algorithm: Max-Min-Element (numbers[])
   max := numbers[1]
   min := numbers[1]
   for i = 2 to n do
      if numbers[i] > max then
         max := numbers[i]
      if numbers[i] < min then
         min := numbers[i]
   return (max, min)
Analysis
The number of comparisons in the naïve method is 2n − 2.
The number of comparisons can be reduced using the divide and conquer approach. Following
is the technique.
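The technique splits the array in half, finds the max and min of each half recursively, and combines the two answers with two more comparisons. A minimal Python sketch (ours for illustration; the module itself does not give this code, and the function name is our choice):

```python
def max_min(numbers, low, high):
    """Return (max, min) of numbers[low..high] by divide and conquer."""
    if low == high:                      # one element: it is both max and min
        return numbers[low], numbers[low]
    if high == low + 1:                  # two elements: a single comparison
        if numbers[low] > numbers[high]:
            return numbers[low], numbers[high]
        return numbers[high], numbers[low]
    mid = (low + high) // 2
    max1, min1 = max_min(numbers, low, mid)       # conquer the left half
    max2, min2 = max_min(numbers, mid + 1, high)  # conquer the right half
    # combine: two more comparisons
    return max(max1, max2), min(min1, min2)
```

On an array of size n this performs about 3n/2 − 2 comparisons, matching the recurrence analyzed in this section.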
So,
T(n) = 2T(n/2) + 2 = 2(2T(n/4) + 2) + 2 = ... = 3n/2 − 2
Compared to the naïve method, the divide and conquer approach makes fewer comparisons.
However, in asymptotic notation both approaches are represented by O(n).
Merge sort is a sorting technique based on the divide and conquer technique. With a worst-
case time complexity of O(n log n), it is one of the most respected algorithms.
Merge sort first divides the array into equal halves and then combines them in a sorted manner.
How Does Merge Sort Work?
To understand merge sort, we take an unsorted array, say {14, 33, 27, 10, 35, 19, 42, 44}.
We know that merge sort first divides the whole array recursively into equal halves until the
atomic values are reached. We see here that an array of 8 items is divided into two arrays of
size 4.
This does not change the sequence of appearance of items in the original. Now we divide these
two arrays into halves.
We further divide these arrays until we reach atomic values which can no longer be divided.
Now, we combine them in exactly the same manner as they were broken down.
We first compare the elements of each list and then combine them into another list in a sorted
manner. We see that 14 and 33 are already in sorted positions. We compare 27 and 10, and in
the target list of 2 values we put 10 first, followed by 27. We change the order of 19 and 35,
whereas 42 and 44 are placed sequentially.
In the next iteration of the combining phase, we compare lists of two data values and merge
them into a list of four data values, placing all in sorted order.
After the final merging, the list is fully sorted: {10, 14, 19, 27, 33, 35, 42, 44}.
Pseudocode
We shall now see the pseudocode for the merge sort functions. As our algorithm points out, there
are two main functions − divide (mergesort) and merge.
Merge sort works with recursion, and we shall see our implementation in the same way.
procedure mergesort( var a as array )
   if ( n == 1 ) return a

   var l1 as array = a[0] ... a[n/2]
   var l2 as array = a[n/2+1] ... a[n]

   l1 = mergesort( l1 )
   l2 = mergesort( l2 )

   return merge( l1, l2 )
end procedure

procedure merge( var a as array, var b as array )
   var c as array

   while ( a and b have elements )
      if ( a[0] > b[0] )
         add b[0] to the end of c
         remove b[0] from b
      else
         add a[0] to the end of c
         remove a[0] from a
      end if
   end while

   while ( a has elements )
      add a[0] to the end of c
      remove a[0] from a
   end while

   while ( b has elements )
      add b[0] to the end of c
      remove b[0] from b
   end while

   return c
end procedure
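The pseudocode translates naturally into a runnable sketch. Here is an illustrative Python version (the post-assessment asks for C#, so treat this only as a reference for the logic; names are ours):

```python
def merge_sort(a):
    """Sort a list using merge sort: divide, conquer, then merge."""
    if len(a) <= 1:               # atomic list: already sorted
        return a
    mid = len(a) // 2
    l1 = merge_sort(a[:mid])      # sort each half recursively
    l2 = merge_sort(a[mid:])
    return merge(l1, l2)

def merge(a, b):
    """Combine two sorted lists into one sorted list."""
    c = []
    i = j = 0
    while i < len(a) and j < len(b):   # take the smaller head each time
        if a[i] > b[j]:
            c.append(b[j]); j += 1
        else:
            c.append(a[i]); i += 1
    c.extend(a[i:])                    # at most one of these is non-empty
    c.extend(b[j:])
    return c
```

Running it on the module's example array reproduces the walkthrough above.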
-------------------------------------------------------------------------------------------------------------------------------
POST-ASSESSMENT 6-7
1. Create an algorithm for the following Design Strategies and show the flowchart
Merge Sort
Quick Sort
Binary Search
2. Using C# create a sample program for Merge Sort. Show the program output.
------------------------------------------------------------------------------------------------------------------------------
TOPICS
BINARY SEARCH
STRASSEN’S MATRIX MULTIPLICATION
GREEDY METHOD
DYNAMIC PROGRAMMING
TOPIC OVERVIEW
Learning Task 8-9 focuses on other important design strategies such as Binary Search,
Strassen’s Matrix Multiplication, the Greedy Method, and Dynamic Programming.
ACTIVITY 8-9
-----------------------------------------------------------------------------------------------------------------------------
PRE-ASSESSMENT 8-9
-------------------------------------------------------------------------------------------------------------------------------
I. BINARY SEARCH
Binary search is a fast search algorithm with a run-time complexity of O(log n). This search
algorithm works on the principle of divide and conquer. For this algorithm to work properly, the
data collection must be sorted.
Binary search looks for a particular item by comparing the middle-most item of the
collection with the target. If a match occurs, the index of the item is returned. If the middle item
is greater than the target, the item is searched for in the sub-array to the left of the middle item.
Otherwise, the item is searched for in the sub-array to the right of the middle item. This process
continues on the sub-array until the size of the sub-array reduces to zero.
How Does Binary Search Work?
For a binary search to work, the target array must be sorted. We shall learn the process
of binary search with an example. Consider the sorted array {10, 14, 19, 26, 27, 31, 33, 35, 42, 44}
(indexed from 0), and let us assume that we need to search for the location of value 31.
First, we determine the middle of the array using this formula −
mid = low + (high - low) / 2
Here it is 0 + (9 - 0) / 2 = 4 (the integer part of 4.5). So 4 is the mid of the array.
Now we compare the value stored at location 4 with the value being searched, i.e. 31. We find
that the value at location 4 is 27, which is not a match. Because the target value 31 is greater
than 27 and the array is sorted, we know that the target must be in the upper portion of the
array.
We change our low to mid + 1 and find the new mid value again.
low = mid + 1
mid = low + (high - low) / 2
Our new mid is 7 now. We compare the value stored at location 7 with our target value 31.
The value stored at location 7 is not a match; rather, it is greater than what we are looking for,
so the value must be in the lower part from this location. Hence, we set high to mid − 1 and
calculate the mid again; this time it is 5.
We compare the value stored at location 5 with our target value. We find that it is a match.
We conclude that the target value 31 is stored at location 5.
Binary search halves the searchable items with each comparison, and thus drastically reduces
the number of comparisons to be made.
Pseudocode
The pseudocode of binary search algorithms should look like this −
Procedure binary_search
   A ← sorted array
   n ← size of array
   x ← value to be searched

   Set lowerBound = 1
   Set upperBound = n

   while x not found
      if upperBound < lowerBound
         EXIT: x does not exist

      set midPoint = lowerBound + ( upperBound - lowerBound ) / 2

      if A[midPoint] < x
         set lowerBound = midPoint + 1

      if A[midPoint] > x
         set upperBound = midPoint - 1

      if A[midPoint] = x
         EXIT: x found at location midPoint
   end while
end procedure
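A runnable version of this pseudocode, in Python for illustration (names are ours, and indices here are 0-based rather than the pseudocode's 1-based):

```python
def binary_search(A, x):
    """Return the index of x in sorted list A, or -1 if x is absent."""
    lower, upper = 0, len(A) - 1
    while lower <= upper:
        # written this way to avoid overflow in fixed-width languages
        mid = lower + (upper - lower) // 2
        if A[mid] < x:
            lower = mid + 1     # target is in the upper half
        elif A[mid] > x:
            upper = mid - 1     # target is in the lower half
        else:
            return mid          # match found
    return -1                   # search space reduced to zero
```

On the example array above, searching for 31 returns location 5.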
Problem Statement
Let us consider two matrices X and Y. We want to calculate the resultant matrix Z by
multiplying X and Y.
Naïve Method
First, we will discuss the naïve method and its complexity. Here, we are calculating Z = X × Y.
Using the naïve method, two matrices (X and Y) can be multiplied if their orders are p
× q and q × r. Following is the algorithm.
Algorithm: Matrix-Multiplication (X, Y, Z)
   for i = 1 to p do
      for j = 1 to r do
         Z[i,j] := 0
         for k = 1 to q do
            Z[i,j] := Z[i,j] + X[i,k] × Y[k,j]
Complexity
Here, we assume that integer operations take O(1) time. There are three for loops in this
algorithm, nested within one another. Hence, for n × n matrices, the algorithm takes O(n^3) time
to execute.
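The three nested loops can be written out directly. A Python sketch of the naïve method (ours; the function name is illustrative):

```python
def matrix_multiply(X, Y):
    """Naive O(p*q*r) multiplication of a p x q matrix by a q x r matrix."""
    p, q, r = len(X), len(Y), len(Y[0])
    Z = [[0] * r for _ in range(p)]        # result initialized to zeros
    for i in range(p):
        for j in range(r):
            for k in range(q):             # dot product of row i and column j
                Z[i][j] += X[i][k] * Y[k][j]
    return Z
```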
In Strassen’s algorithm, X, Y, and Z are each divided into four n/2 × n/2 sub-matrices: X into
blocks A, B, C, D; Y into blocks E, F, G, H; and Z into blocks I, J, K, L. Using Strassen’s
Algorithm, compute the following −
M1 := (A + C) × (E + F)
M2 := (B + D) × (G + H)
M3 := (A − D) × (E + H)
M4 := A × (F − H)
M5 := (C + D) × E
M6 := (A + B) × H
M7 := D × (G − E)
Then,
I := M2 + M3 − M6 − M7
J := M4 + M6
K := M5 + M7
L := M1 − M3 − M4 − M5
Analysis
T(n) = c                    if n = 1
T(n) = 7·T(n/2) + d·n^2     otherwise
where c and d are constants.
Using this recurrence relation, we get T(n) = O(n^log2 7).
Hence, the complexity of Strassen’s matrix multiplication algorithm is O(n^log2 7) ≈ O(n^2.81).
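As a sketch (ours, not the module's), the seven products can be checked on 2 × 2 matrices, where the blocks A…H degenerate to scalars; in the full algorithm they are n/2 × n/2 sub-matrices and each product is computed recursively:

```python
def strassen_2x2(X, Y):
    """Multiply two 2x2 matrices with only 7 multiplications,
    using the M1..M7 products defined above (blocks are scalars here)."""
    (A, B), (C, D) = X
    (E, F), (G, H) = Y
    M1 = (A + C) * (E + F)
    M2 = (B + D) * (G + H)
    M3 = (A - D) * (E + H)
    M4 = A * (F - H)
    M5 = (C + D) * E
    M6 = (A + B) * H
    M7 = D * (G - E)
    I = M2 + M3 - M6 - M7    # equals A*E + B*G
    J = M4 + M6              # equals A*F + B*H
    K = M5 + M7              # equals C*E + D*G
    L = M1 - M3 - M4 - M5    # equals C*F + D*H
    return [[I, J], [K, L]]
```

Expanding the combinations algebraically confirms each entry matches the ordinary matrix product, which is why the recursion needs only 7 multiplications instead of 8.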
Greedy algorithms build a solution part by part, choosing the next part in such a way that
it gives an immediate benefit. This approach never reconsiders the choices taken previously. It
is mainly used to solve optimization problems, and it is easy to implement and quite efficient in
most cases. Hence, we can say that a greedy algorithm is an algorithmic paradigm based on a
heuristic that makes the locally optimal choice at each step in the hope of finding a globally
optimal solution.
In many problems, it does not produce an optimal solution though it gives an approximate
(near optimal) solution in a reasonable time.
Areas of Application
Greedy approach is used to solve many problems, such as
Finding the shortest path between two vertices using Dijkstra’s algorithm.
Finding the minimal spanning tree in a graph using Prim’s /Kruskal’s algorithm, etc.
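A classic small illustration of the greedy paradigm (our example, not one from the module) is making change with the fewest coins by always taking the largest denomination that still fits. With denominations like 25/10/5/1 this local choice happens to be globally optimal; with other coin systems it can be only near-optimal, which is exactly the caveat noted above:

```python
def greedy_change(amount, coins=(25, 10, 5, 1)):
    """Greedily pick the largest coin that still fits; never reconsider."""
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:      # locally optimal choice at each step
            result.append(coin)
            amount -= coin
    return result
```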
In dynamic programming, computed solutions to sub-problems are stored in a table, so that
these don’t have to be re-computed. Hence, this technique is needed where overlapping
sub-problems exist.
For example, Binary Search does not have overlapping sub-problems, whereas a recursive
program for Fibonacci numbers has many overlapping sub-problems.
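For instance, storing each computed Fibonacci value in a table (memoization) removes the overlapping sub-problems. A Python sketch, using the standard library's cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Each fib(k) is computed once and stored, so the overlapping
    sub-problems of the naive recursion become table lookups."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Without the cache this recursion is exponential; with it, fib(n) takes only O(n) additions.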
Optimal Sub-Structure
A given problem has the Optimal Substructure Property if an optimal solution of the given
problem can be obtained using optimal solutions of its sub-problems. For example, the Shortest
Path problem has the following optimal substructure property −
If a node x lies in the shortest path from a source node u to destination node v, then the
shortest path from u to v is the combination of the shortest path from u to x, and the shortest
path from x to v.
The standard shortest path algorithms, such as Floyd-Warshall (all-pairs) and Bellman-Ford
(single-source), are typical examples of Dynamic Programming.
Steps of Dynamic Programming Approach
Dynamic Programming algorithm is designed using the following four steps −
Characterize the structure of an optimal solution.
Recursively define the value of an optimal solution.
Compute the value of an optimal solution, typically in a bottom-up fashion.
Construct an optimal solution from the computed information.
------------------------------------------------------------------------------------------------------------------------------
POST-ASSESSMENT 8-9
NAME: ____________________________ COURSE & SECTION: ______ SCORE: ____
2. The code below uses the Greedy Method; explain the code, and explain why it is a Greedy
Method/Approach. (15 pts)
A decreases by 20,
B increases by 5
-------------------------------------------------------------------------------------------------------------------------------
TOPICS
SPANNING TREE
SHORTEST PATHS
MULTISTAGE GRAPH
TRAVELLING SALESMAN PROBLEM
TOPIC OVERVIEW
Learning Task 10 will discuss another set of design strategies in algorithms, such as the
Spanning Tree, Shortest Paths, Multistage Graph, and Travelling Salesman Problem.
1. Identify the techniques or methods for solving more complex algorithm design strategies
2. Analyze all the design strategies and apply them in creating a quality product such as software
or firmware.
-------------------------------------------------------------------------------------------------------------------------------
PRE-ASSESSMENT 10
Answer the following questions. Use a separate sheet of paper for your answer. 10 pts. each.
-------------------------------------------------------------------------------------------------------------------------------
CONTENT DEVELOPMENT 10
I. SPANNING TREE
A spanning tree is a subset of Graph G, which has all the vertices covered with minimum
possible number of edges. Hence, a spanning tree does not have cycles and it cannot be
disconnected.
By this definition, we can draw a conclusion that every connected and undirected Graph
G has at least one spanning tree. A disconnected graph does not have any spanning tree, as it
cannot be spanned to all its vertices.
We can find multiple spanning trees from one complete graph. A complete undirected graph can
have a maximum of n^(n−2) spanning trees, where n is the number of nodes. In the above
example, n is 3, hence 3^(3−2) = 3 spanning trees are possible.
General Properties of Spanning Tree
We now understand that one graph can have more than one spanning tree. Following are a
few properties of the spanning tree connected to graph G −
A connected graph G can have more than one spanning tree.
All possible spanning trees of graph G, have the same number of edges and vertices.
The spanning tree does not have any cycle (loops).
Removing one edge from the spanning tree will make the graph disconnected, i.e. the
spanning tree is minimally connected.
Adding one edge to the spanning tree will create a circuit or loop, i.e. the spanning tree
is maximally acyclic.
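A spanning tree of minimum total weight can be built greedily. As a sketch (ours; names are illustrative), Kruskal's algorithm repeatedly adds the cheapest edge that does not create a cycle, which yields exactly the cycle-free, minimally connected subgraph described by the properties above:

```python
def kruskal(n, edges):
    """Minimum spanning tree of a graph with vertices 0..n-1.
    edges: list of (weight, u, v); returns the list of chosen edges."""
    parent = list(range(n))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):     # try the cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                  # the edge joins two components: no cycle
            parent[ru] = rv
            tree.append((w, u, v))
    return tree                       # n - 1 edges if the graph is connected
```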
Shortest path algorithms have various uses, most notably in route planning software
such as Google Maps, which relies on shortest path algorithms to generate routes.
As there are a number of different shortest path algorithms, we’ve gathered the most important to
help you understand how they work and which is the best.
Dijkstra’s Algorithm
Dijkstra’s Algorithm stands out from the rest due to its ability to find the shortest path from
one node to every other node within the same graph data structure. This means that rather than
just finding the shortest path from the starting node to one specific node, the algorithm finds the
shortest path to every single reachable node – provided the graph doesn’t change.
The algorithm runs until all of the reachable nodes have been visited. Therefore, you would
only need to run Dijkstra’s algorithm once, and save the results to be used again and again –
unless the graph data structure changes in any way.
In the case of a change in the graph, you would need to rerun the algorithm to ensure you have
the most up-to-date shortest paths for your data structure.
Let’s take our routing example from above: if you want to go from A to B in the shortest way
possible, but you know that some roads are heavily congested, blocked, or undergoing works,
then when using Dijkstra, the algorithm will find the shortest path while avoiding any edges with
larger weights, thereby finding you the shortest route.
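A compact sketch of Dijkstra's algorithm (ours; edge weights must be non-negative, and a binary heap keeps the next-closest node at hand):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distance from source to every reachable node.
    graph: {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)          # closest unsettled node
        if d > dist.get(u, float('inf')):
            continue                        # stale heap entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd                # found a shorter path to v
                heapq.heappush(heap, (nd, v))
    return dist
```

Note that one run from a source produces distances to every reachable node, which is exactly why the results can be cached until the graph changes.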
Bellman-Ford Algorithm
Similar to Dijkstra’s algorithm, the Bellman-Ford algorithm works to find the shortest path
between a given node and all other nodes in the graph. Though it is slower than the former,
Bellman-Ford makes up for this disadvantage with its versatility. Unlike Dijkstra’s algorithm,
Bellman-Ford is capable of handling graphs in which some of the edge weights are negative.
It’s important to note that if there is a negative cycle – in which the edges sum to a negative
value – in the graph, then there is no shortest or cheapest path, and the algorithm cannot
report a correct route, since the cost could be driven down forever by repeating the cycle.
However, Bellman-Ford is able to detect negative cycles and report their existence.
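A sketch of Bellman-Ford with the negative-cycle check just described (ours; names are illustrative):

```python
def bellman_ford(n, edges, source):
    """Single-source shortest paths on vertices 0..n-1.
    edges: list of (u, v, w); returns (dist, has_negative_cycle)."""
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):                  # relax every edge n-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # one extra pass: any further improvement implies a negative cycle
    has_neg = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, has_neg
```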
Floyd-Warshall Algorithm
The Floyd-Warshall algorithm stands out in that, unlike the previous two algorithms, it is not a
single-source algorithm: it calculates the shortest distance between every pair of nodes in the
graph, rather than only from a single node. It works by breaking the main problem into
smaller ones, then combining the answers to solve the main shortest path problem.
Floyd-Warshall is extremely useful when it comes to generating routes for multi-stop trips,
as it calculates the shortest path between all the relevant nodes. For this reason, much route
planning software utilizes this algorithm, as it provides the most optimized route from any given
location. Therefore, no matter where you currently are, Floyd-Warshall will determine the fastest
way to get to any other node on the graph.
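A sketch of Floyd-Warshall (ours): for each candidate intermediate node k, it asks whether routing through k shortens the path from i to j:

```python
def floyd_warshall(dist):
    """All-pairs shortest paths. dist: n x n matrix where dist[i][j] is
    the direct edge weight (float('inf') if no edge, 0 on the diagonal).
    The matrix is updated in place and returned."""
    n = len(dist)
    for k in range(n):                # allow k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```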
Johnson’s Algorithm
Johnson’s algorithm works best with sparse graphs – ones with fewer edges – as its runtime
depends on the number of edges. So, the fewer edges, the faster it will generate a route.
This algorithm differs from the rest in that it relies on two other algorithms to determine the
shortest path. First, it uses Bellman-Ford to detect negative cycles and to reweight the edges
so that none remains negative. Then, on this reweighted graph, it relies on Dijkstra’s algorithm
to calculate the shortest paths in the original graph that was inputted.
Final Note
More often than not, the best algorithm to use won’t be left up to you to decide; rather, it
will be dependent upon the type of graph you are using and the shortest path problem that is
being solved. For example, for problems with negative-weight edges you would turn to Bellman-
Ford, whereas for sparse graphs with no negative edges you would turn to Dijkstra’s algorithm.
When it comes down to it, many aspects of these algorithms are the same; however, when you
look at performance and use, that’s where the differences come to light. Therefore, rather than
asking which algorithm is the best, you should consider which is the right one for the type of
graph you are operating on and the shortest path problem you are trying to solve.
According to the formula, we have to calculate Cost(i, j) using the following steps.
Step-1: Cost (k-2, j)
In this step, three nodes (nodes 4, 5, and 6) are selected as j. Hence, we have three options to
choose the minimum cost at this step.
Cost(3, 4) = min {c(4, 7) + Cost(7, 9), c(4, 8) + Cost(8, 9)} = 7
Cost(3, 5) = min {c(5, 7) + Cost(7, 9), c(5, 8) + Cost(8, 9)} = 5
Cost(3, 6) = min {c(6, 7) + Cost(7, 9), c(6, 8) + Cost(8, 9)} = 5
Step-2: Cost (k-3, j)
Two nodes are selected as j because at stage k − 3 = 2 there are two nodes, 2 and 3. So,
j takes the values 2 and 3.
Cost(2, 2) = min {c(2, 4) + Cost(4, 8) + Cost(8, 9),
                  c(2, 6) + Cost(6, 8) + Cost(8, 9)} = 8
Cost(2, 3) = min {c(3, 4) + Cost(4, 8) + Cost(8, 9),
                  c(3, 5) + Cost(5, 8) + Cost(8, 9),
                  c(3, 6) + Cost(6, 8) + Cost(8, 9)} = 10
Step-3: Cost (k-4, j)
Cost(1, 1) = min {c(1, 2) + Cost(2, 6) + Cost(6, 8) + Cost(8, 9),
                  c(1, 3) + Cost(3, 5) + Cost(5, 8) + Cost(8, 9),
                  c(1, 3) + Cost(3, 6) + Cost(6, 8) + Cost(8, 9)} = 12
Hence, the path having the minimum cost is 1 → 3 → 5 → 8 → 9.
Problem Statement
A traveler needs to visit all the cities from a list, where the distances between all the cities are
known and each city should be visited just once. What is the shortest possible route such that
he visits each city exactly once and returns to the origin city?
Solution
The travelling salesman problem is one of the most notorious computational problems. We can
use a brute-force approach to evaluate every possible tour and select the best one. For n
vertices in a graph, there are (n − 1)! possibilities.
Instead of brute force, using a dynamic programming approach the solution can be obtained in
less time, though there is no polynomial-time algorithm.
Let us consider a graph G = (V, E), where V is a set of cities and E is a set of weighted
edges. An edge e(u, v) represents that vertices u and v are connected. Distance between
vertex u and v is d(u, v), which should be non-negative.
Suppose we have started at city 1 and after visiting some cities now we are in city j. Hence,
this is a partial tour. We certainly need to know j, since this will determine which cities are most
convenient to visit next. We also need to know all the cities visited so far, so that we don't repeat
any of them. Hence, this is an appropriate sub-problem.
For a subset of cities S ⊆ {1, 2, 3, ... , n} that includes 1, and j ∈ S, let C(S, j) be the length
of the shortest path visiting each node in S exactly once, starting at 1 and ending at j.
When |S| > 1, we define C(S, 1) = ∞, since the path cannot both start and end at 1.
Now, let us express C(S, j) in terms of smaller sub-problems. We need to start at 1 and end at j.
We should select the next-to-last city i in such a way that
C(S, j) = min {C(S − {j}, i) + d(i, j) : i ∈ S and i ≠ j}
Algorithm: Traveling-Salesman-Problem
   C({1}, 1) = 0
   for s = 2 to n do
      for all subsets S ⊆ {1, 2, 3, ..., n} of size s containing 1
         C(S, 1) = ∞
         for all j ∈ S and j ≠ 1
            C(S, j) = min {C(S − {j}, i) + d(i, j) for i ∈ S and i ≠ j}
   return min over j of C({1, 2, 3, ..., n}, j) + d(j, 1)
Analysis
There are at most 2^n · n sub-problems, and each one takes linear time to solve.
Therefore, the total running time is O(2^n · n^2).
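This dynamic programming algorithm (often called Held-Karp) can be sketched in Python. The code is ours, with 0-based city indices (city 0 plays the role of city 1 in the text), and the distance matrix in the test is assumed for illustration:

```python
from itertools import combinations

def held_karp(d):
    """d: n x n distance matrix. Returns the cost of the cheapest tour
    that starts and ends at city 0 and visits every city exactly once."""
    n = len(d)
    # C[(S, j)] = cheapest path that starts at 0, visits every city in
    # frozenset S exactly once, and ends at j (0 itself is not stored in S)
    C = {}
    for j in range(1, n):
        C[(frozenset([j]), j)] = d[0][j]          # base cases: direct hops
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in S:                            # best next-to-last city i
                C[(S, j)] = min(C[(S - {j}, i)] + d[i][j]
                                for i in S if i != j)
    full = frozenset(range(1, n))
    # close the tour by returning from the last city j to city 0
    return min(C[(full, j)] + d[j][0] for j in full)
```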
Example
In the following example, we will illustrate the steps to solve the travelling salesman problem.
The minimum cost of the tour is 35.
Start from cost {1, {2, 3, 4}, 1}; we get the minimum value for d[1, 2]. When s = 3, select
the path from 1 to 2 (cost is 10), then go backwards. When s = 2, we get the minimum value for
d[4, 2]; select the path from 2 to 4 (cost is 10), then go backwards.
When s = 1, we get the minimum value for d[4, 3]; selecting the path 4 to 3 (cost is 9), we
then go to the s = Φ step, where we get the minimum value for d[3, 1] (cost is 6).
POST-ASSESSMENT 10
NAME: ____________________________ COURSE & SECTION: ______ SCORE: ____
AL101 Property of Norzagaray College
Introduction to Algorithms