
ALGORITHM DESIGN

UNIT 4

Design and Analysis of Algorithms

What is meant by Algorithm Analysis?

Algorithm analysis is an important part of computational complexity theory, which provides a theoretical estimate of the resources an algorithm requires to solve a specific computational problem. Analysis of algorithms is the determination of the amount of time and space resources required to execute an algorithm.

Why is Analysis of Algorithms Important?

 To predict the behavior of an algorithm without implementing it on a specific computer.
 It is much more convenient to have simple measures for the efficiency of an algorithm than to implement the algorithm and test its efficiency every time a certain parameter in the underlying computer system changes.
 It is impossible to predict the exact behavior of an algorithm, because there are too many influencing factors.
 The analysis is therefore only an approximation; it is not perfect.
 More importantly, by analyzing different algorithms, we can compare them to determine the best one for our purpose.

Types of Algorithm Analysis:

1. Best case
2. Worst case
3. Average case
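As a small illustration (a sketch added here, not present in the original notes), consider linear search, whose best, worst and average cases differ:

def linear_search(arr, key):
    # Return the index of key in arr, or -1 if it is not present.
    for i, value in enumerate(arr):
        if value == key:
            return i      # best case: key is the first element, O(1) comparisons
    return -1             # worst case: key is last or absent, O(n) comparisons

# Average case: if the key is equally likely to be at any position,
# about (n + 1) / 2 comparisons are needed, which is still O(n).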
Basics on Analysis of Algorithms:
1. What is an algorithm and why is its analysis important?
2. Asymptotic Notation and Analysis (Based on input size) in Complexity Analysis
of Algorithms
3. Worst, Average and Best Case Analysis of Algorithms
4. Types of Asymptotic Notations in Complexity Analysis of Algorithms
5. How to Analyse Loops for Complexity Analysis of Algorithms
6. How to analyse Complexity of Recurrence Relation
7. Introduction to Amortized Analysis
Asymptotic Notations:
1. Analysis of Algorithms | Big-O analysis
2. Difference between Big Oh, Big Omega and Big Theta
3. Examples of Big-O analysis
4. Difference between big O notations and tilde
5. Analysis of Algorithms | Big-Ω (Big-Omega) Notation
6. Analysis of Algorithms | Big-Θ (Big-Theta) Notation
Types of Asymptotic Notations in Complexity Analysis of Algorithms
We have discussed Asymptotic Analysis and the Worst, Average, and Best Cases of Algorithms. The main idea of asymptotic analysis is to measure the efficiency of algorithms in a way that does not depend on machine-specific constants and does not require algorithms to be implemented or the running times of programs to be compared. Asymptotic notations are mathematical tools to represent the time complexity of algorithms for asymptotic analysis.
1. Theta Notation (Θ-Notation):
Theta notation encloses the function from above and below. Since it represents both the upper and the lower bound of the running time of an algorithm, it is used for analyzing the average-case complexity of an algorithm.
2. Big-O Notation (O-Notation):
Big-O notation represents the upper bound of the running time of an algorithm. Therefore, it gives the worst-case complexity of an algorithm.
3. Omega Notation (Ω-Notation):
Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides the best-case complexity of an algorithm.
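For reference, the standard formal definitions of these notations (a supplementary sketch, not part of the original text) are:

\[
\begin{aligned}
f(n) \in O(g(n)) &\iff \exists\, c > 0,\ n_0 > 0 \text{ such that } 0 \le f(n) \le c\,g(n) \text{ for all } n \ge n_0,\\
f(n) \in \Omega(g(n)) &\iff \exists\, c > 0,\ n_0 > 0 \text{ such that } 0 \le c\,g(n) \le f(n) \text{ for all } n \ge n_0,\\
f(n) \in \Theta(g(n)) &\iff \exists\, c_1, c_2 > 0,\ n_0 > 0 \text{ such that } c_1\,g(n) \le f(n) \le c_2\,g(n) \text{ for all } n \ge n_0.
\end{aligned}
\]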

Design and Analysis - Divide and Conquer

Using the divide and conquer approach, the problem at hand is divided into smaller sub-problems, and then each sub-problem is solved independently. When we keep dividing the sub-problems into even smaller sub-problems, we eventually reach a stage where no more division is possible. Those smallest possible sub-problems are solved directly, since they take very little time to compute. The solutions of all sub-problems are finally merged in order to obtain the solution of the original problem.
Broadly, we can understand the divide-and-conquer approach as a three-step process.
Divide/Break
This step involves breaking the problem into smaller sub-problems. Sub-problems should
represent a part of the original problem. This step generally takes a recursive approach to
divide the problem until no sub-problem is further divisible. At this stage, sub-problems
become atomic in size but still represent some part of the actual problem.
Conquer/Solve
This step receives a lot of smaller sub-problems to be solved. Generally, at this level, the
problems are considered 'solved' on their own.
Merge/Combine
When the smaller sub-problems are solved, this stage recursively combines them until they form a solution to the original problem. This algorithmic approach works recursively, and the conquer and merge steps work so closely together that they appear as one.
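As a small illustration of the three steps (a sketch added here for clarity, not taken from the original text), the following finds the maximum element of an array by dividing, conquering and combining:

def find_max(arr, lo, hi):
    # Conquer/Solve: an atomic sub-problem (a single element) is solved directly
    if lo == hi:
        return arr[lo]
    # Divide/Break: split the range into two halves
    mid = (lo + hi) // 2
    left_max = find_max(arr, lo, mid)
    right_max = find_max(arr, mid + 1, hi)
    # Merge/Combine: the larger of the two partial results is the answer
    return left_max if left_max >= right_max else right_max

# Example: find_max([7, 2, 9, 4], 0, 3) returns 9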
Pros and cons of Divide and Conquer Approach
The divide and conquer approach supports parallelism, as the sub-problems are independent. Hence, an algorithm designed using this technique can run on a multiprocessor system or on different machines simultaneously.
In this approach, most of the algorithms are designed using recursion, so memory usage is high: recursion uses the function call stack, where the state of each function call needs to be stored.

Data Structures - Merge Sort Algorithm

Merge sort is a sorting technique based on the divide and conquer technique. With a worst-case time complexity of O(n log n), it is one of the most respected algorithms.

Merge sort first divides the array into equal halves and then combines them in a sorted
manner.

How Merge Sort Works?

To understand merge sort, consider the following unsorted array:

{14, 33, 27, 10, 35, 19, 42, 44}

We know that merge sort first divides the whole array iteratively into equal halves until atomic values are reached. Here, the array of 8 items is divided into two arrays of size 4:

{14, 33, 27, 10} and {35, 19, 42, 44}

This does not change the sequence of appearance of the items in the original. Now we divide these two arrays into halves:

{14, 33}, {27, 10}, {35, 19} and {42, 44}

We divide these arrays further and reach atomic values, which can no longer be divided:

{14}, {33}, {27}, {10}, {35}, {19}, {42} and {44}

Now, we combine them in exactly the same manner as they were broken down. We first compare the elements of each pair of lists and then combine them into another list in sorted order. We see that 14 and 33 are already in sorted positions. We compare 27 and 10, and in the target list of 2 values we put 10 first, followed by 27. We change the order of 19 and 35, whereas 42 and 44 are placed sequentially:

{14, 33}, {10, 27}, {19, 35} and {42, 44}

In the next iteration of the combining phase, we compare lists of two data values and merge them into lists of four data values, placing all elements in sorted order:

{10, 14, 27, 33} and {19, 35, 42, 44}

After the final merging, the list looks like this:

{10, 14, 19, 27, 33, 35, 42, 44}
Now we should learn some programming aspects of merge sorting.
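A straightforward Python sketch of the procedure described above (one possible implementation; the original pages contain no code):

def merge_sort(arr):
    # An atomic list of zero or one element is already sorted.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # sort the left half
    right = merge_sort(arr[mid:])   # sort the right half
    return merge(left, right)       # combine the two sorted halves

def merge(left, right):
    # Merge two sorted lists into one sorted list.
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])         # append whatever remains of either list
    result.extend(right[j:])
    return result

# Example with the array used above:
# merge_sort([14, 33, 27, 10, 35, 19, 42, 44]) returns [10, 14, 19, 27, 33, 35, 42, 44]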

QuickSort – Data Structure and Algorithm Tutorials


QuickSort is a sorting algorithm based on the Divide and Conquer algorithm that picks an
element as a pivot and partitions the given array around the picked pivot by placing the pivot
in its correct position in the sorted array.
How does QuickSort work?
The key process in quickSort is partition(). The goal of partition() is to place the pivot (any element can be chosen as the pivot) at its correct position in the sorted array, putting all smaller elements to the left of the pivot and all greater elements to its right. Once the pivot is placed at its correct position, quickSort is applied recursively to the sub-arrays on either side of it, and this finally sorts the array.
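A compact Python sketch using the Lomuto partition scheme with the last element as the pivot (one common variant; other pivot choices follow the same pattern):

def partition(arr, low, high):
    # Place arr[high] (the pivot) at its correct position and return that index.
    pivot = arr[high]
    i = low - 1                      # boundary of the "smaller than pivot" region
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quick_sort(arr, low, high):
    # Sort arr[low..high] in place.
    if low < high:
        p = partition(arr, low, high)   # pivot ends up at index p
        quick_sort(arr, low, p - 1)     # sort the elements left of the pivot
        quick_sort(arr, p + 1, high)    # sort the elements right of the pivot

# Example:
# data = [10, 7, 8, 9, 1, 5]
# quick_sort(data, 0, len(data) - 1)   # data becomes [1, 5, 7, 8, 9, 10]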
Greedy Algorithms (General Structure and Applications)

The general structure of a greedy algorithm can be summarized in the following steps:

1. Identify the problem as an optimization problem where we need to find the best
solution among a set of possible solutions.
2. Determine the set of feasible solutions for the problem.
3. Identify the optimal substructure of the problem, meaning that the optimal
solution to the problem can be constructed from the optimal solutions of its
subproblems.
4. Develop a greedy strategy to construct a feasible solution step by step, making the
locally optimal choice at each step.
5. Prove the correctness of the algorithm by showing that the locally optimal choices at each step lead to a globally optimal solution.

Some common applications of greedy algorithms include:

1. Coin change problem: Given a set of coins with different denominations, find the minimum number of coins required to make a given amount of change.
2. Fractional knapsack problem: Given a set of items with weights and values, fill a knapsack with a maximum weight capacity with the most valuable items, allowing fractional amounts of items to be included.
3. Huffman coding: Given a set of characters and their frequencies in a message, construct a binary code with minimum average length for the characters.
4. Shortest path algorithms: Given a weighted graph, find the shortest path between two nodes.
5. Minimum spanning tree: Given a weighted graph, find a tree that spans all nodes with the minimum total weight.
Greedy algorithms can be very efficient and provide fast solutions for many problems. However, it is important to keep in mind that they may not always provide the optimal solution, so the problem must be analyzed carefully to ensure the correctness of the algorithm.
Greedy algorithms work step by step and always choose the step that provides an immediate profit/benefit. They choose the "locally optimal solution" without thinking about future consequences. Greedy algorithms may not always lead to the globally optimal solution, because they do not consider the entire data: the choice made by the greedy approach does not take future data and choices into account. In some cases, making the decision that looks right at that moment gives the best solution (greedy), but in other cases it does not. The greedy technique is used for optimization problems (where we have to find the maximum or minimum of something) and is best suited to decisions based on the immediate situation.
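As a small worked example of the greedy strategy (a sketch for the coin change application listed above; the greedy choice is only guaranteed to be optimal for canonical coin systems such as 1, 2, 5, 10, 20, 50):

def greedy_coin_change(amount, denominations):
    # Locally optimal choice: always take the largest coin that still fits.
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            amount -= coin
            coins.append(coin)
    return coins if amount == 0 else None   # None if the amount cannot be formed

# Example: greedy_coin_change(93, [1, 2, 5, 10, 20, 50]) returns [50, 20, 20, 2, 1]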

Introduction to Knapsack Problem, its Types and How to solve them

The Knapsack problem is an example of a combinatorial optimization problem. It is also commonly known as the "Rucksack Problem". The name of the problem comes from the maximization problem stated below:
Given a bag with a maximum weight capacity of W and a set of items, each having a weight and a value associated with it, decide the number of each item to take in a collection such that the total weight is less than the capacity and the total value is maximized.
Types of Knapsack Problem:
The knapsack problem can be classified into the following types:
1. Fractional Knapsack Problem
2. 0/1 Knapsack Problem
3. Bounded Knapsack Problem
4. Unbounded Knapsack Problem
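For the fractional variant, the classic greedy approach sorts items by value-to-weight ratio and takes as much of the best remaining item as fits (a sketch; the 0/1 variant requires dynamic programming instead):

def fractional_knapsack(items, capacity):
    # items is a list of (value, weight) pairs; returns the maximum total value.
    # Greedy choice: consider items in decreasing order of value per unit weight.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total_value = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)            # whole item, or the fraction that fits
        total_value += value * (take / weight)
        capacity -= take
    return total_value

# Example: fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50) returns 240.0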

Dynamic Programming

What is Dynamic Programming?
Dynamic Programming is mainly an optimization over plain recursion. Wherever we see a recursive solution that has repeated calls for the same inputs, we can optimize it using Dynamic Programming. The idea is simply to store the results of subproblems so that we do not have to re-compute them when they are needed later. This simple optimization reduces time complexities from exponential to polynomial.
For example, if we write a simple recursive solution for Fibonacci numbers, we get exponential time complexity, and if we optimize it by storing the solutions of subproblems, the time complexity reduces to linear.
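The Fibonacci example mentioned above can be sketched both ways (added here for illustration): plain recursion is exponential because it recomputes the same subproblems, while storing results makes it linear.

def fib_naive(n):
    # Plain recursion: exponential time, since subproblems are recomputed many times.
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_memo(n, memo=None):
    # Memoized recursion: each subproblem is solved once, so the time is O(n).
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

# Example: fib_memo(50) returns 12586269025 almost instantly,
# while fib_naive(50) would take an impractically long time.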
Topics:
Basic Concepts
Advanced Concepts
Standard Dynamic Programming problems
Basic Concepts:
1. What is memoization? A Complete tutorial
2. Introduction to Dynamic Programming – Data Structures and Algorithm
Tutorials
3. Tabulation vs Memoization
4. Optimal Substructure Property
5. Overlapping Subproblems Property
6. How to solve a Dynamic Programming Problem?
Advanced Concepts:
1. Bitmasking and Dynamic Programming | Set 1
2. Bitmasking and Dynamic Programming | Set-2 (TSP)
3. Digit DP | Introduction
4. Sum over Subsets | Dynamic Programming
Optimal Binary Search Tree | DP-24

An Optimal Binary Search Tree (OBST), also known as a Weighted Binary Search Tree, is
a binary search tree that minimizes the expected search cost. In a binary search tree, the
search cost is the number of comparisons required to search for a given key.
In an OBST, each node is assigned a weight that represents the probability of the key being searched for. The sum of all the weights in the tree is 1.0. The expected search cost of a node is the sum of the product of its depth and weight, and the expected search costs of its children.
To construct an OBST, we start with a sorted list of keys and their probabilities. We then build a table that contains the expected search cost for all possible sub-trees of the original list. We can use dynamic programming to fill in this table efficiently. Finally, we use this table to construct the OBST.
The time complexity of constructing an OBST is O(n^3), where n is the number of keys. However, with some optimizations, we can reduce the time complexity to O(n^2). Once the OBST is constructed, the time complexity of searching for a key is O(log n), the same as for a regular binary search tree.
The OBST is a useful data structure in applications where the keys have different probabilities of being searched for. It can be used to improve the efficiency of searching and retrieval operations in databases, compilers, and other computer programs.
Problem statement: Given a sorted array keys[0..n-1] of search keys and an array freq[0..n-1] of frequency counts, where freq[i] is the number of searches for keys[i], construct a binary search tree of all keys such that the total cost of all the searches is as small as possible.
Let us first define the cost of a BST. The cost of a BST node is the level of that node multiplied by its frequency. The level of the root is 1.
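A dynamic programming sketch of the O(n^3) construction described above, computing only the minimum total search cost (using frequencies and the "root at level 1" cost definition given above):

def optimal_bst_cost(freq):
    # freq[i] is the search frequency of the i-th key (keys are assumed sorted).
    # cost[i][j] holds the minimum cost of a BST built from keys i..j.
    n = len(freq)
    cost = [[0] * n for _ in range(n)]
    for i in range(n):
        cost[i][i] = freq[i]                  # a single key is a root at level 1
    for length in range(2, n + 1):            # size of the key range
        for i in range(n - length + 1):
            j = i + length - 1
            freq_sum = sum(freq[i:j + 1])     # every key moves one level deeper under a new root
            cost[i][j] = freq_sum + min(
                (cost[i][r - 1] if r > i else 0) + (cost[r + 1][j] if r < j else 0)
                for r in range(i, j + 1)      # try every key in the range as the root
            )
    return cost[0][n - 1]

# Example: optimal_bst_cost([34, 8, 50]) returns 142
# (the optimal tree puts the third key at the root).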
