Module 4 Algorithmic Thinking With Python (1)

The document discusses various computational approaches to problem-solving, including brute-force, divide-and-conquer, dynamic programming, greedy algorithms, and randomized approaches. Each method is explained with examples, advantages, and disadvantages, emphasizing the importance of selecting the appropriate strategy based on the problem's characteristics. The conclusion highlights the significance of effective problem-solving skills in professional settings.


CIT 108 PROBLEM SOLVING STRATEGIES

UNIT 3 COMPUTATIONAL APPROACHES TO PROBLEM-SOLVING

1.0 Introduction
2.0 Intended Learning Outcome
3.0 Main Content
3.1 Brute-force Approach
3.2 Divide-and-conquer Approach
3.2.1 Example: The Merge Sort Algorithm
3.2.2 Advantages of Divide and Conquer Approach
3.2.3 Disadvantages of Divide and Conquer Approach
3.3 Dynamic Programming Approach
3.3.1 Example: Fibonacci series
3.3.2 Recursion vs Dynamic Programming
3.4 Greedy Algorithm Approach
3.4.1 Characteristics of the Greedy Algorithm
3.4.2 Motivations for Greedy Approach
3.4.3 Greedy Algorithms vs Dynamic Programming
3.5 Randomized Approach
4.0 Conclusion
5.0 Summary
6.0 Self-Assessment Exercise
7.0 References/Further Reading

1.0 INTRODUCTION

Solving a problem involves finding a way to move from a current situation to a desired outcome. To be able to solve a problem using computational approaches, the problem itself needs to have certain characteristics:

 The problem needs to be clearly defined — this means that one should be able to identify the current situation, the end goal, the possible means of reaching the end goal, and the potential obstacles.
 The problem needs to be computable — one should consider what type of calculations are required, and whether these are feasible within a reasonable time frame and processing capacity.
 The data requirements of the problem need to be examined, such as what types of data the problem involves, and the storage capacity required to keep this data.
 One should be able to determine whether the problem can be approached using decomposition and abstraction, as these methods are key for tackling complex problems.

Once these features of the given problem are identified, an informed decision can then be made as to whether the problem can be solved using computational approaches.

2.0 INTENDED LEARNING OUTCOME

At the end of this unit, students should be able to:

 Describe the various computational approaches available for solving a problem
 Classify computational approaches based on their paradigms
 Evaluate which computational approach is best suited for a given problem
 Apply a computational approach to solve a problem

3.0 MAIN CONTENT

3.1 Brute-force Approach

This strategy is characterised by a lack of sophistication in its approach to the solution. It typically takes the most direct or obvious route, without attempting to minimise the number of operations required to compute the solution.

The brute-force approach arises quite often in searching. In a searching problem, we are required to look through a list of candidates in an attempt to find a desired object. In many cases, the structure of the problem itself allows us to eliminate a large number of the candidates without having to actually search through them. As an analogy, consider the problem of trying to find a frozen pie in an unfamiliar grocery store. You would immediately go to the frozen food aisle, without bothering to look down any of the other aisles. Thus, at the outset of your search, you would eliminate the need to search down most of the aisles in the store. The brute-force approach, however, ignores such possibilities and naively searches through all candidates in an attempt to find the desired object. This approach is otherwise known as exhaustive search.

Example:

Imagine a small padlock with 4 digits, each from 0-9. You forgot your combination, but you don't want to buy another padlock. Since you can't remember any of the digits, you have to use a brute-force method to open the lock. So you set all the numbers back to 0 and try them one by one: 0000, 0001, 0002, and so on until it opens. In the worst-case scenario, it would take 10⁴, or 10,000, tries to find your combination.
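The exhaustive search described above can be sketched in Python. This is a minimal illustration, not a prescribed implementation: `is_correct` stands in for physically trying the lock, and the secret combination is made up for the example.

```python
from itertools import product

def crack_padlock(is_correct):
    # Exhaustive search: try every 4-digit combination from 0000 to 9999.
    for digits in product("0123456789", repeat=4):
        guess = "".join(digits)
        if is_correct(guess):
            return guess
    return None  # no combination matched

# Hypothetical secret; in the worst case (e.g. "9999") all 10,000
# candidates are tried before the lock opens.
secret = "4831"
print(crack_padlock(lambda guess: guess == secret))  # prints 4831
```

Note that nothing about the lock is exploited to prune the search; every candidate is generated and tested in turn, which is exactly what makes the approach exhaustive.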

3.2 Divide-and-conquer Approach

In the divide and conquer strategy, a problem is solved recursively by applying three steps at each level of the recursion: divide, conquer, and combine.

Divide

“Divide” is the first step of the divide and conquer strategy. In this step the problem is divided into smaller sub-problems until they are small enough to be solved. At this step, sub-problems become smaller but still represent some part of the actual problem. As stated above, recursion is used to implement the divide and conquer algorithm. A recursive algorithm calls itself with smaller or simpler input values, known as the recursive case. So, when the divide step is implemented, the recursive case is determined, which divides the problem into smaller sub-problems.

Conquer

Then comes the “conquer” step, where we straightforwardly solve the sub-problems. By now, the input has already been divided into the smallest possible parts, and we now solve them by performing basic operations. The conquer step is normally implemented with recursion by specifying the recursive base case. Once the sub-problems become small enough that they can no longer be divided, we say that the recursion “bottoms out” and that we have gotten down to the base case. Once the base case is arrived at, the sub-problem is solved.

Combine

In this step, the solutions of the sub-problems are combined to solve the whole problem. The output returned from solving the base case will be the input of larger sub-problems. So, after reaching the base case, we begin to go back up, solving larger sub-problems with the input returned from smaller sub-problems. In this step, we merge the output from the conquer step to solve bigger sub-problems. Solutions to smaller sub-problems propagate from the bottom up until they are used to solve the whole original problem.

3.2.1 Example: The Merge Sort Algorithm

The merge sort algorithm closely follows the divide and conquer paradigm. In the merge sort algorithm, we divide the n-element sequence to be sorted into two subsequences of n/2 elements each. Next, we sort the two subsequences recursively using merge sort. Finally, we combine the two sorted subsequences to produce the sorted answer.

Consider an unsorted array. First, divide the array into two halves. Again, divide each subpart recursively into two halves until you get individual elements. Now, combine the individual elements in a sorted manner. Here, the conquer and combine steps go side by side.
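The steps above can be sketched in Python. This is an illustrative implementation of merge sort following the divide, conquer, and combine steps just described; the sample array is invented for the example.

```python
def merge(left, right):
    # Combine step: repeatedly take the smaller front element of the
    # two sorted halves.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # one of these slices is empty;
    merged.extend(right[j:])  # the other holds the leftovers
    return merged

def merge_sort(arr):
    if len(arr) <= 1:              # base case: recursion bottoms out
        return arr
    mid = len(arr) // 2            # divide: split into two halves
    left = merge_sort(arr[:mid])   # conquer: sort each half recursively
    right = merge_sort(arr[mid:])
    return merge(left, right)      # combine: merge the sorted halves

print(merge_sort([38, 27, 43, 10]))  # prints [10, 27, 38, 43]
```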

3.2.2 Advantages of Divide and Conquer Approach

The first, and probably the most recognisable, benefit of the divide and conquer paradigm is that it allows us to solve difficult problems. Being given a difficult problem can often be discouraging if there is no idea how to go about solving it. The divide and conquer method, however, reduces the degree of difficulty, since it divides the problem into easily solvable sub-problems.

Another advantage of this paradigm is that it often plays a part in finding other efficient algorithms. In fact, it played a central role in the discovery of the quick sort and merge sort algorithms. It also uses memory caches effectively: when the sub-problems become simple enough, they can be solved within a cache, without having to access the slower main memory, which saves time and makes the algorithm more efficient. In some cases, it can even produce more precise outcomes in computations with rounded arithmetic than iterative methods would.

In the divide and conquer strategy, problems are divided into sub-problems that can be executed independently from each other. This makes the strategy well suited for parallel execution.

3.2.3 Disadvantages of Divide and Conquer Approach

One of the most common issues with this sort of algorithm is that the recursion is slow, which in some cases outweighs any advantage of the divide and conquer process. Another concern is that it can sometimes become more complicated than a basic iterative approach, especially in cases with a large n. For example, if someone wanted to add a collection of numbers together, a simple loop would be a much simpler approach than dividing the numbers into two groups, adding each group recursively, and then adding the sums of the two groups together.

3.3 Dynamic Programming Approach

The dynamic programming approach is similar to divide-and-conquer in that both solve problems by breaking them down into several sub-problems that can be solved recursively. The difference between the two is that in the dynamic programming approach, the results obtained from solving smaller sub-problems are reused in the calculation of larger sub-problems. Thus, dynamic programming is a bottom-up technique that usually begins by solving the smallest sub-problems, saving these results, and then reusing them to solve larger and larger sub-problems until the solution to the original problem is obtained. This is in contrast to the divide-and-conquer approach, which solves problems in a top-down fashion. In that case the original problem is solved by breaking it down into increasingly smaller sub-problems, and no attempt is made to reuse previous results in the solution of any of the sub-problems.

It is important to realise that a dynamic programming approach is only justified if there is some degree of overlap in the sub-problems. The underlying idea is to avoid calculating the same result twice. This is usually accomplished by constructing a table in memory and filling it with known results as they are calculated (memoization). These results are then used to solve larger sub-problems. Note that retrieving a given result from this table takes Θ(1) time.

Dynamic programming is often used to solve optimisation problems. In an optimisation problem, there is typically a large number of possible solutions, each with a cost associated with it. The goal is to find a solution that has the smallest cost (i.e., the optimal solution).

3.3.1 Example: Fibonacci Series

Let's find the Fibonacci sequence up to the 5th term. A Fibonacci series is the sequence of numbers in which each number is the sum of the two preceding ones, starting from 0 and 1. For example: 0, 1, 1, 2, 3.

Algorithm

Let 𝑛 be the index of the term, counting from 0.

1. If 𝑛 ≤ 1, return 𝑛.
2. Else return the sum of the two preceding terms.

We are calculating the Fibonacci sequence up to the 5th term.

1. The first term is 0.
2. The second term is 1.
3. The third term is the sum of 0 (from step 1) and 1 (from step 2), which is 1.
4. The fourth term is the sum of the third term (from step 3) and the second term (from step 2), i.e. 1 + 1 = 2.
5. The fifth term is the sum of the fourth term (from step 4) and the third term (from step 3), i.e. 2 + 1 = 3.

Hence, we have the sequence 0, 1, 1, 2, 3. Here, we have used the results of the previous steps as shown below. This is called a dynamic programming approach.


F(0) = 0
F(1) = 1
F(2) = F(1) + F(0)
F(3) = F(2) + F(1)
F(4) = F(3) + F(2)
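The table above translates directly into a bottom-up Python sketch; the function name is illustrative, not part of the unit.

```python
def fib_dp(n):
    # Bottom-up dynamic programming: solve the smallest sub-problems
    # first, save the results, and reuse them for larger ones.
    if n <= 1:
        return n
    table = [0, 1]  # F(0) and F(1)
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])  # reuse saved results
    return table[n]

print([fib_dp(i) for i in range(5)])  # prints [0, 1, 1, 2, 3]
```

Each F(i) is computed exactly once and thereafter looked up in Θ(1) time, in contrast to the naive recursion, which recomputes the same terms repeatedly.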

3.3.2 Recursion vs Dynamic Programming

Dynamic programming is mostly applied to recursive algorithms. This is not a coincidence: most optimisation problems require recursion, and dynamic programming is used for optimisation. But not all problems that use recursion can use dynamic programming. Unless there are overlapping sub-problems, as in the Fibonacci sequence problem, a recursion can only reach the solution using a divide and conquer approach. This is the reason why a recursive algorithm like merge sort cannot use dynamic programming: its sub-problems do not overlap in any way.

3.4 Greedy Algorithm Approach

In a greedy algorithm, at each decision point the choice that has the smallest immediate (i.e., local) cost is selected, without attempting to look ahead to determine whether this choice is part of an optimal solution to the problem as a whole (i.e., a global solution). By locally optimal, we mean a choice that is optimal with respect to some small portion of the total information available about the problem.

The most appealing aspect of greedy algorithms is that they are simple and efficient: typically very little effort is required to compute each local decision. For general optimisation problems, however, this strategy will not always produce globally optimal solutions. Nevertheless, there are certain optimisation problems for which a greedy strategy is, in fact, guaranteed to yield a globally optimal solution.


3.4.1 Characteristics of the Greedy Algorithm

The important characteristics of a Greedy algorithm are:

1. There is an ordered list of resources, with costs or value attributions that quantify constraints on the system.
2. The maximum quantity of resources is taken in the time a constraint applies.
3. For example, in an activity scheduling problem, the resource costs are in hours, and the activities need to be performed in serial order.

3.4.2 Motivations for Greedy Approach

Here are the reasons for using the greedy approach:

 The greedy approach has few trade-offs, which may make it suitable for optimisation.
 One prominent reason is to achieve the most feasible solution immediately. In the activity selection problem, if more activities can be done before finishing the current activity, these activities can be performed within the same time.
 Another reason is to divide a problem recursively based on a condition, with no need to combine all the solutions.
 In the activity selection problem, the “recursive division” step is achieved by scanning the list of items only once and considering certain activities.
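To make the activity selection problem concrete, here is a Python sketch; the function name and the meeting times are invented for the example. Each activity is a (start, finish) pair, and the greedy choice is the compatible activity that finishes earliest.

```python
def select_activities(activities):
    # Greedy choice: among the remaining compatible activities, always
    # take the one that finishes earliest.
    schedule = []
    last_finish = float("-inf")
    # Sort by finish time, then scan the list only once.
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:  # compatible with the schedule so far
            schedule.append((start, finish))
            last_finish = finish
    return schedule

meetings = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(select_activities(meetings))  # prints [(1, 4), (5, 7), (8, 11)]
```

This is one of the problems for which the greedy strategy is, in fact, guaranteed to be globally optimal.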

3.4.3 Greedy Algorithms vs Dynamic Programming

Greedy algorithms are similar to dynamic programming in the sense that both are tools for optimisation. However, greedy algorithms look for locally optimal solutions, in other words a greedy choice, in the hope of finding a global optimum. Hence a greedy algorithm can make a choice that looks optimal at the time but becomes costly down the line, and it does not guarantee a global optimum. Dynamic programming, on the other hand, finds optimal solutions to sub-problems and then makes an informed choice to combine the results of those sub-problems to find the optimal solution to the whole problem.

3.5 Randomized Approach

This approach depends not only on the input data, but also on the values provided by a random number generator. If some portion of an algorithm involves choosing between a number of alternatives, and it is difficult to determine the optimal choice, then it is often more effective to choose the course of action at random rather than taking the time to determine the best alternative. This is particularly true in cases where there are a large number of choices, most of which are “good.”

Although randomising an algorithm will typically not improve its worst-case running time, it can be used to ensure that no particular input always produces the worst-case behaviour. Specifically, because the behaviour of a randomised algorithm is determined by a sequence of random numbers, it would be unusual for the algorithm to behave the same way on successive runs even when it is supplied with the same input data.
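A common concrete instance of this idea is quicksort with a randomly chosen pivot. The sketch below is illustrative, with invented data: because the pivot is random, no fixed input can reliably trigger the worst-case behaviour.

```python
import random

def randomized_quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)  # the randomised decision
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([9, 3, 7, 1, 8, 2]))  # prints [1, 2, 3, 7, 8, 9]
```

The result is the same on every run; only the sequence of pivot choices, and hence the running time on a given input, varies.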

Randomised approaches are best suited to game-theoretic situations where we want to ensure fairness in the face of mutual suspicion. This approach is widely used in computer and information security as well as in various computer-based games.

4.0 CONCLUSION

Solving problems is a key professional skill. Quickly weighing up available options and taking decisive action to select the best computational approach to a problem is integral to efficient performance.

It is important to always get the problem-solving process right, avoiding taking too little time to define the problem or generate potential solutions. A wide range of computational techniques for problem solving exist, and each can be appropriate given the peculiarities of the problem and the individual involved. The important skills to attain are to assess the situation independently of any other factors, and to know when to trust your own instincts and when to ask for a second opinion on a potential solution to a problem.

5.0 SUMMARY

In this Unit, computational approaches for solving a problem were discussed, viz. brute force, divide and conquer, dynamic programming, greedy algorithm, and randomised. The technique for classifying the computational approaches based on their paradigms was deliberated, and the computational approaches best suited for a given problem were evaluated and recommended. The Unit concluded by stressing the importance of applying the appropriate computational approach to solve a problem.


6.0 SELF ASSESSMENT EXERCISE

1. State the characteristics of the greedy algorithm.
2. Explain how the divide-and-conquer algorithm works.
3. Define the brute-force approach in the problem-solving process.
4. In what problem-solving scenario is dynamic programming a preferred option?
5. Give an instance where the use of a randomised algorithm is desirable.

7.0 REFERENCES/FURTHER READINGS

Chevalier, M., Giang, C., Piatti, A., & Mondada, F. (2020). Fostering computational thinking through educational robotics: A model for creative computational problem solving. International Journal of STEM Education, 7(1), 1-18.

Costa, E. J. F., Campos, L. M. R. S., & Guerrero, D. D. S. (2017). Computational thinking in mathematics education: A joint approach to encourage problem-solving ability. In 2017 IEEE Frontiers in Education Conference (FIE), 1-8. IEEE.

Doleck, T., Bazelais, P., Lemay, D. J., Saxena, A., & Basnet, R. B. (2017). Algorithmic thinking, cooperativity, creativity, critical thinking, and problem solving: Exploring the relationship between computational thinking skills and academic performance. Journal of Computers in Education, 4(4), 355-369.

Priemer, B., Eilerts, K., Filler, A., Pinkwart, N., Rösken-Winter, B., Tiemann, R., & Zu Belzen, A. U. (2020). A framework to foster problem-solving in STEM and computing education. Research in Science & Technological Education, 38(1), 105-130.

Roughgarden, T. (2019). Algorithms Illuminated, Part 3: Greedy Algorithms and Dynamic Programming. Soundlikeyourself Publishing, LLC.