
Week 2-3 - Analysis of Algorithms

The document discusses time complexity analysis of algorithms. It covers measuring runtime experimentally and theoretically, calculating complexity by counting primitive operations, and asymptotic analysis using Big-O notation. The worst-case time complexity is generally used to evaluate and compare algorithms.

Learning Objectives:

● Algorithm analysis
● Time complexity
● Experimental approach
● Theoretical approach
○ Count primitive operations
● Worst-case time complexity
1. Program Performance
As we know from the previous lecture, data structures and algorithms (DSA) can help to improve the quality of an application. To measure that effectiveness, we use the term Program Performance: the amount of computer memory/space and time needed to run a program.
Memory/space complexity is the amount of memory that a program uses when it runs. Usually measured in bytes, KB, or MB.
Time complexity is the amount of time that a program needs to run. Usually measured by the number of operations.

2. Time complexity
Time complexity is the amount of computer time a program needs to run, as a function
of the length of the input.
Unlike space complexity, modern users usually care more about the time an app takes
than about the space it uses in memory, because memory can be upgraded while time
remains the real constraint. If there are many solutions to a problem, we'll choose the
fastest.
Besides that, some systems impose upper limits on program execution times, and
nowadays, real-time response is one of the requirements of an application.
In some cases, we can solve the problem theoretically, but not practically: the program
just takes too long when the input size gets larger. E.g., factoring the large integers
used in RSA public-key cryptosystems.

3. Measuring the Runtime (Experimental approach)

The experimental approach means measuring the actual running time of an algorithm
on a computer with many data sets of varying sizes. The actual running time is
determined using a system call that reads the clock.
Disadvantages:
1. The algorithm must be implemented, which may be difficult, and the result can
depend on the language (each language executes differently, so the times will
differ).
2. Results are not indicative of the running time on input sizes that were not
included in the experiment.
3. In order to compare two algorithms, the same hardware and software
environments must be used.
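As a minimal sketch of the experimental approach in Java: time the same algorithm (here, sorting an array is used as a stand-in workload; the class name, input sizes, and random seed are arbitrary choices for illustration) on data sets of varying sizes using a system call to read the clock.

```java
import java.util.Arrays;
import java.util.Random;

public class TimingExperiment {
    // The algorithm under test (sorting is just a stand-in example).
    static void run(int[] data) {
        Arrays.sort(data);
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        // Try several input sizes, as the experimental approach requires.
        for (int n : new int[] {1_000, 10_000, 100_000}) {
            int[] data = rnd.ints(n).toArray();
            long start = System.nanoTime();          // system call to read the clock
            run(data);
            long elapsed = System.nanoTime() - start;
            System.out.println("n = " + n + ": " + (elapsed / 1_000_000.0) + " ms");
        }
    }
}
```

Note that running this on two different machines (or JVM versions) gives different numbers for the same algorithm, which is exactly disadvantage 3 above.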
4. Measuring the Runtime (Theoretical approach)
To fix the problems of the experimental approach, the theoretical approach
characterizes how the time taken by an algorithm grows with the input size; it won't
measure the actual running time.
Advantages:
1. Uses a high-level description of the algorithm (pseudo-code or diagrams), instead
of a language-dependent implementation.
2. Expresses the runtime as a function of the input size, n. (So, we know what
happens when n gets larger.)
3. Accounts for all possible inputs.
4. Evaluates the runtime independently of the hardware/software environment.
Pseudo-code: A description of an algorithm that is:
● More structured than usual prose
● Less formal than a programming language
Control flow:
● if … then … [else …]
● while … do …
● repeat … until …
● for … do …
Method declaration
● Algorithm name (param1, param2)
● Input …
● Output …
Method:
● Calls: object method (args)
● Returns: return value
Expressions:
● ← Assignment (like = in Java)
● = Equality testing (like == in Java)
● n² Superscripts and other mathematical formatting allowed
5. How to evaluate
Inspect the pseudo-code/diagram and count the number of primitive operations
executed by the algorithm:
● An addition or an assignment = 1 operation
● Calling a method or returning from a method = 1 operation
● Indexing into an array = 1 operation
● A comparison = 1 operation, etc.
T(n): a function of the input size n
Primitive operations:
● Basic computations performed by an algorithm
● Largely independent from the programming language
● Assumed to take a constant amount of time in the RAM model
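As a sketch of this counting, here is the classic find-the-maximum loop in Java, with a rough per-line tally in the comments (the method name arrayMax is illustrative, and exact counts vary with the counting convention; the point is that T(n) comes out proportional to n):

```java
public class ArrayMax {
    // Returns the largest element of a non-empty array a.
    // Per-line primitive-operation tallies are noted in the comments.
    static int arrayMax(int[] a) {
        int currentMax = a[0];               // 1 index + 1 assignment = 2 ops
        for (int i = 1; i < a.length; i++) { // init: 1; test: n times; increment: n-1 times
            if (a[i] > currentMax) {         // 1 index + 1 comparison, done n-1 times
                currentMax = a[i];           // at most 1 index + 1 assignment per pass
            }
        }
        return currentMax;                   // 1 op
    }

    public static void main(String[] args) {
        System.out.println(arrayMax(new int[] {3, 7, 2, 9, 4})); // prints 9
    }
}
```

Summing the tallies gives, in the worst case, roughly 7n − 2 primitive operations under this convention, so T(n) grows linearly in n regardless of the hardware.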

6. Best-case Time Complexity


The best case of an algorithm A is a function
T: ℕ → ℕ, where T(n) is the minimum number of steps performed by A on an
input of size n.

7. Average-case Time Complexity


The average-case time complexity of an algorithm A is a function
T: ℕ → ℕ, where T(n) is the average number of steps performed by A on an input
of size n.

8. Worst-case Time Complexity


The worst case of an algorithm A is a function
T: ℕ → ℕ, where T(n) is the maximum number of steps performed by A on an
input of size n.
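A small Java sketch makes the three cases concrete: in linear search (a hypothetical helper written for illustration), the step count depends entirely on where the target sits in the array.

```java
public class LinearSearch {
    // Returns the index of target in a, or -1 if absent.
    // The number of comparisons performed depends on the input:
    //   best case:  target is at index 0      -> 1 comparison
    //   worst case: target is last or absent  -> n comparisons
    static int search(int[] a, int target) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == target) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] a = {5, 1, 9, 3, 7};
        System.out.println(search(a, 5)); // best case: found immediately, prints 0
        System.out.println(search(a, 7)); // found at the last index, prints 4
        System.out.println(search(a, 4)); // worst case: absent, prints -1
    }
}
```

Averaging over all positions the target could occupy gives the average case, roughly n/2 comparisons when the target is present.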

We need to analyze the worst case to:

● Prepare for the worst: know in advance what happens when the input size gets
huge (programmers need to care about this when designing algorithms).
● Many algorithms perform at their worst case a large part of the time.
● The best case is not very informative, because many algorithms perform exactly
the same in the best case -> hard to compare their performance.
● Determining average-case performance is not always easy.
● The worst case gives us an upper bound on performance.
-> Use worst-case running time as the main measure of time complexity.

9. Asymptotic behavior
The leading term (the fastest-growing term) dominates the running time.
To compare the time complexity of two algorithms, we compare the growth rates of
their leading terms. (Note: compare n³ and 2ⁿ at n = 1, 2, 4, 8, …)
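The suggested comparison of n³ against 2ⁿ can be tabulated directly (the class name is illustrative):

```java
public class GrowthRates {
    public static void main(String[] args) {
        // Compare n^3 against 2^n at n = 1, 2, 4, 8, 16.
        for (int n : new int[] {1, 2, 4, 8, 16}) {
            long cube = (long) n * n * n;
            long pow2 = 1L << n;   // bit shift computes 2^n exactly
            System.out.println("n=" + n + "  n^3=" + cube + "  2^n=" + pow2);
        }
    }
}
```

For small n the polynomial n³ is ahead (at n = 2, n³ = 8 beats 2ⁿ = 4), but the exponential soon takes over for good: at n = 16, 2ⁿ = 65536 while n³ is only 4096. This is why only the growth rate of the leading term matters asymptotically.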

10. Big-Oh Notation


Big-Oh represents the worst-case time complexity of an algorithm.
Big-Oh represents an upper bound on the growth rate of a function.
● Rule for sums: O(f(n)) + O(g(n)) is O(max(f(n), g(n)))
● Rule for products: O(f(n)) · O(g(n)) is O(f(n) · g(n))
● The running time of each assignment, read, or write statement can be taken to be
O(1).
● The running time of a sequence of statements is determined by the sum rule.
● The running time of an if-statement is the cost of the conditionally executed
statements, plus the time for evaluating the condition.
● The time to execute a loop is the sum, over all iterations of the loop, of the time
to execute the body plus the time to evaluate the condition.
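Applying the product rule to nested loops can be checked empirically: a sketch (the class and method names are illustrative) that counts how often the O(1) body of a doubly nested loop runs, confirming n iterations × n iterations = n² total steps, i.e. O(n²).

```java
public class LoopAnalysis {
    // Counts executions of the inner loop body for input size n.
    // By the product rule: O(n) outer iterations x O(n) inner = O(n^2).
    static long countSteps(int n) {
        long steps = 0;
        for (int i = 0; i < n; i++) {       // runs n times
            for (int j = 0; j < n; j++) {   // runs n times per outer iteration
                steps++;                    // O(1) body
            }
        }
        return steps;                       // total: n * n
    }

    public static void main(String[] args) {
        System.out.println(countSteps(10)); // prints 100, i.e. 10^2
    }
}
```

By the sum rule, placing this nested loop after any O(n) preprocessing step would still give O(max(n², n)) = O(n²) overall.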
