

Chapter 2: Fundamentals of the Analysis of Algorithm Efficiency

There are two kinds of efficiency: time efficiency and space efficiency. Time
efficiency, also called time complexity, indicates how fast the algorithm in question runs.
Space efficiency, also called space complexity, refers to the amount of memory units
required by the algorithm in addition to the space needed for its input and output.

Measuring an input’s size

Obviously, almost all algorithms run longer on larger inputs. Therefore, it is logical
to investigate an algorithm’s efficiency as a function of some parameter 𝑛 indicating the
algorithm’s input size.

Units for Measuring Running Time

We can simply use some standard unit of time measurement, such as seconds or
milliseconds, to measure the running time of a program implementing the
algorithm. However, there are some drawbacks to such an approach.

One possible approach is to count the number of times the algorithm’s basic
operation is executed.
Note: The basic operation of an algorithm is the most important one. It contributes the most
to the total running time.

Let 𝑓(𝑛) be the polynomial that represents the number of times the algorithm’s basic
operation is executed on inputs of size 𝑛, and let 𝑡 be the execution time of the basic
operation on a particular computer. Then we can estimate the running time 𝑇(𝑛) of a
program implementing this algorithm on that computer by the formula

𝑇(𝑛) ≈ 𝑡 × 𝑓(𝑛)

The count 𝑓(𝑛) does not contain any information about operations that are not basic.
As a result, the count itself is often computed only approximately. Further, the constant 𝑡
is also an approximation whose reliability is not always easy to assess. However, the
formula can give a reasonable estimate of the algorithm’s running time.

Orders/Rates of Growth

Given a polynomial f(n) that represents the number of times the algorithm's basic
operation is executed on inputs of size n, for large values of n, constant factors as well as
all terms except the one of the largest degree can be ignored.

Example: Consider the two functions 0.1n² + n + 100 and 0.1n².

   n      0.1n²      0.1n² + n + 100
   10     10         120
   100    1000       1200
   1000   100000     101100

Example: Given f(n) = (1/2)n(n − 1). If the value of n is large enough, then

f(n) = (1/2)n² − (1/2)n ≈ (1/2)n²

By the way, we may wonder how much longer the algorithm will run if we double
its input size. The answer is about four times longer. Why?

T(2n)/T(n) ≈ f(2n)/f(n) = ((1/2)(2n)²) / ((1/2)n²) = 4

The following table illustrates the values (some approximate) of several standard
functions important for the analysis of algorithms:

   n      log₂n   n      n·log₂n    n²      n³      2ⁿ         n!
   10¹    3.3     10¹    3.3×10¹    10²     10³     10³        3.6×10⁶
   10²    6.6     10²    6.6×10²    10⁴     10⁶     1.3×10³⁰   9.3×10¹⁵⁷
   10³    10      10³    10×10³     10⁶     10⁹
   10⁴    13      10⁴    13×10⁴     10⁸     10¹²
   10⁵    17      10⁵    17×10⁵     10¹⁰    10¹⁵
   10⁶    20      10⁶    20×10⁶     10¹²    10¹⁸

The exponential function 2𝑛 and the factorial function 𝑛! grow so fast that their
values become astronomically large even for rather small values of 𝑛. Both of them are
often referred to as “exponential-growth functions” (or simply “exponential”).

“The greatest singer in the world cannot save a bad song!”

Example: Binary search

Example: Computing the n-th Fibonacci number in a recursive manner.

Fibonacci(n) {
if (n ≤ 1)
return n;
return Fibonacci(n - 1) + Fibonacci(n - 2);
}

If this algorithm is programmed on a computer that performs one billion operations
per second, then for n = 40, 60, 80, 100, 120, 160, 200 the running time grows from
under a second at n = 40 to astronomically many years long before n = 200, because
the number of operations grows exponentially with n.

Worst-Case, Best-Case, and Average-Case Efficiencies

For many algorithms the running time depends not only on an input size but also on
the specifics of a particular input.

Example: Sequential search – the algorithm that searches for a given search key 𝑘
in a list of 𝑛 elements.
search(a[1 .. n], k) {
for (i = 1; i ≤ n; i++)
if (a[i] == k)
return 1;
return 0;
}

The best-case efficiency of an algorithm is its efficiency for the best-case input of
size 𝑛, which is an input of size 𝑛 for which the algorithm runs the fastest among all
possible inputs of that size.

The worst-case efficiency of an algorithm is its efficiency for the worst-case input
of size 𝑛, which is an input of size 𝑛 for which the algorithm runs the longest among all
possible inputs of that size.

The average-case efficiency of an algorithm indicates the algorithm's behavior on
a “typical” or “random” input. To analyze the algorithm's average-case efficiency, we must
make some assumptions about possible inputs of size n.

Asymptotic Notations

In the following discussion, 𝑓(𝑛) and 𝑔(𝑛) can be any nonnegative functions
defined on the set of natural numbers. In the context we are interested in, 𝑓(𝑛) will be an
algorithm’s running time, and 𝑔(𝑛) will be some simple function to compare with.
Informally:
– O(𝑔(𝑛)) is the set of all functions with a lower or same order of growth as 𝑔(𝑛).
– Ω(𝑔(𝑛)) is the set of all functions with a higher or same order of growth as 𝑔(𝑛).
– Θ(𝑔(𝑛)) is the set of all functions that have the same order of growth as 𝑔(𝑛).

Big O notation

O(g(n)) = {f(n) : ∃c ∈ ℝ⁺ and n₀ ∈ ℕ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀}

Explanation: A function f(n) is said to be in O(g(n)) if f(n) is bounded above by some
positive constant multiple of g(n) for all large n.
Example: If f(n) = 2n + 1 and g(n) = n², then f(n) ∈ O(n²).

Big Ω notation

Ω(g(n)) = {f(n) : ∃c ∈ ℝ⁺ and n₀ ∈ ℕ such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀}

Explanation: A function f(n) is said to be in Ω(g(n)) if f(n) is bounded below by some
positive constant multiple of g(n) for all large n.
Example: If f(n) = n³ + 2n² + 3 and g(n) = n², then f(n) ∈ Ω(n²).

Big Θ notation

Θ(g(n)) = {f(n) : ∃c₁, c₂ ∈ ℝ⁺ and n₀ ∈ ℕ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀}

Explanation: A function f(n) is said to be in Θ(g(n)) if f(n) is bounded both above and
below by positive constant multiples of g(n) for all large n.
Example: If f(n) = (1/2)n² − 3n and g(n) = n², then f(n) ∈ Θ(n²).

Some related theorems

Theorem 1: For nonnegative functions f(n) and g(n):

f(n) ∈ Θ(g(n)) ⇔ f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n))

Theorem 2: Given f(n) = ∑_{i=0}^{d} aᵢnⁱ with a_d > 0:

f(n) ∈ O(nᵈ)

where the constant c = ∑_{i=0}^{d} |aᵢ| witnesses the bound for all n > 1.

Theorem 3: If f₁(n) ∈ O(g₁(n)) and f₂(n) ∈ O(g₂(n)), then

f₁(n) + f₂(n) ∈ O(max(g₁(n), g₂(n)))

(The analogous assertions are true for the Θ and Ω notations as well.)

Mathematical Analysis of Nonrecursive Algorithms

The general plan for analyzing the time efficiency of nonrecursive algorithms is as
follows:

1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation.
3. Check whether the number of times the basic operation is executed depends only on
the size of an input. If it also depends on some additional property, the worst-case,
average-case, and best-case efficiencies have to be investigated separately.
4. Set up a sum expressing the number of times the algorithm’s basic operation is
executed.
5. Using standard formulas and rules of sum manipulation, either find a closed-form
formula for the count or, at the very least, establish its order of growth.

Example: Finding the value of the largest element in a list of 𝑛 numbers

MaxElement(a[1 .. n]) {
max = a[1];
for (i = 2; i ≤ n; i++)
if (a[i] > max)
max = a[i];
return max;
}

Example: Multiplying two matrices

MatrixMultiplication(a[1 .. n, 1 .. n], b[1 .. n, 1 .. n]) {
for (i = 1; i ≤ n; i++)
for (j = 1; j ≤ n; j++) {
c[i, j] = 0;
for (k = 1; k ≤ n; k++)
c[i, j] = c[i, j] + a[i, k] * b[k, j];
}
}

Example: Bubble sort

BubbleSort(a[1 .. n]) {
    for (i = 2; i ≤ n; i++)
        for (j = n; j ≥ i; j--)
            if (a[j - 1] > a[j])
                swap(a[j - 1], a[j]);
}

Let C(n) denote the number of times the comparison is executed. Now, we find a
formula expressing it as a function of the size n:

C(n) = n(n − 1)/2 ∈ Θ(n²)

A minor improvement of Bubblesort

BubbleSort(a[1 .. n]) {
    flag = true;
    m = 1;
    while (flag) {
        flag = false;
        m++;
        for (j = n; j ≥ m; j--)
            if (a[j - 1] > a[j]) {
                swap(a[j - 1], a[j]);
                flag = true;
            }
    }
}

The best-case efficiency: 𝐵(𝑛) = (𝑛 − 1)

The worst-case efficiency:

W(n) = n(n − 1)/2 ∈ Θ(n²)

The average-case efficiency:

The standard assumption is that the probability that the program stops after each
iteration of the while loop is 1/(n − 1).
Let C(i) denote the number of times the comparison is executed after the i-th
iteration of the while loop. Then, we can find the average number of comparisons A(n) as
follows:

A(n) = (1/(n − 1)) ∑_{i=1}^{n−1} C(i) ∈ Θ(n²)

Example: Insertion sort

Original version:

InsertionSort(a[1 .. n]) {
    for (i = 2; i ≤ n; i++) {
        v = a[i];
        j = i - 1;
        while (j ≥ 1 && a[j] > v) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = v;
    }
}

Insertion sort with a sentinel (a[0] holds a copy of v, so the bound check j ≥ 1 becomes
unnecessary):

InsWithSentinel(a[1 .. n]) {
    for (i = 2; i ≤ n; i++) {
        a[0] = v = a[i];
        j = i - 1;
        while (a[j] > v) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = v;
    }
}

The best-case efficiency: 𝐵(𝑛) = 𝑛 − 1 ∈ Θ(𝑛)

The worst-case efficiency:

W(n) = ∑_{i=2}^{n} (i − 1) = n(n − 1)/2 ∈ Θ(n²)

The average-case efficiency: Let C(i) denote the average number of times the
comparison is executed when the algorithm inserts the i-th element into the sorted
subarray to its left.

C(i) = (1/i) × (i − 1) + ∑_{j=1}^{i−1} (1/i) × j

The average number of comparisons A(n) is as follows:

A(n) = ∑_{i=2}^{n} C(i) ≈ (n² − n)/4 + n − ln n − γ ≈ n²/4 ∈ Θ(n²)
Hint:

∑_{i=1}^{n} 1/i = 1 + 1/2 + 1/3 + ⋯ + 1/n ≈ ln n + γ

where γ = 0.5772… is Euler's constant.

Example: Finding the number of binary digits in the binary representation of a
positive decimal integer.

BitCount(n) {
count = 1;
while (n > 1) {
count++;
n = n / 2;
}
return count;
}

The exact formula for the number of times the comparison will be executed is
actually ⌊log₂ n⌋ + 1 ∈ Θ(log n). This formula also gives the number of bits in the
binary representation of n.

Recurrence relations

Example: How many binary strings of length n contain no two adjacent 0’s?

Example: Rabbits and the Fibonacci Numbers (Fibonacci, 1202)

A young pair of rabbits (one of each sex) is placed on a desert island. A pair of
rabbits does not breed until they are 2 months old. After they are 2 months old, each pair
of rabbits produces another pair each month. Find a recurrence relation for the number of
pairs of rabbits on the island after n months, assuming that no rabbits ever die.
Let Fₙ denote the number of pairs of rabbits after n months. Initially there are no
rabbits on the island, so F₀ = 0.

   Month   Reproducing pairs   Young pairs   Total pairs
   1       0                   1             1
   2       0                   1             1
   3       1                   1             2
   4       1                   2             3
   5       2                   3             5
   6       3                   5             8

To find the number of pairs after 𝑛 months, add the number on the island the
previous month, 𝐹𝑛−1 , and the number of newborn pairs, which equals 𝐹𝑛−2 , because each
newborn pair comes from a pair at least 2 months old.
Consequently, the Fibonacci sequence is defined by the initial conditions 𝐹0 =
0, 𝐹1 = 1, and the recurrence relation:
𝐹𝑛 = 𝐹𝑛−1 + 𝐹𝑛−2 for 𝑛 = 2,3,4, …

Solving recurrence relations

We say that we have solved the recurrence relation together with the initial
conditions when we find an explicit formula, called a closed formula, for the terms of the
sequence.

Example: Solve the recurrence relation 𝑥𝑛 = 2𝑥𝑛−1 + 1 with the initial condition
𝑥1 = 1.

First approach: Forward substitution

x₁ = 1 = 2¹ − 1
x₂ = 2x₁ + 1 = 3 = 2² − 1
x₃ = 2x₂ + 1 = 7 = 2³ − 1
x₄ = 2x₃ + 1 = 15 = 2⁴ − 1
⋮
xₙ = 2xₙ₋₁ + 1 = 2ⁿ − 1

In this approach, we find successive terms beginning with the initial condition and
ending with 𝑥𝑛 .

Second approach: Backward substitution


xₙ = 2xₙ₋₁ + 1 = 2(2xₙ₋₂ + 1) + 1 = 2²xₙ₋₂ + 2¹ + 2⁰
   = 2²(2xₙ₋₃ + 1) + 2¹ + 2⁰ = 2³xₙ₋₃ + 2² + 2¹ + 2⁰
   = ⋯ = 2ⁱxₙ₋ᵢ + 2ⁱ⁻¹ + ⋯ + 2² + 2¹ + 2⁰

Based on the initial condition x₁ = 1, let's set n − i = 1, so i = n − 1. Therefore:

xₙ = 2ⁿ⁻¹x₁ + 2ⁿ⁻² + ⋯ + 2² + 2¹ + 2⁰ = 2ⁿ − 1

This approach is called backward substitution because we began with xₙ and
iterated to express it in terms of earlier and earlier terms of the sequence until we
reached the initial condition x₁.

Note: When we use forward/backward substitution, we essentially guess a formula for the
terms of the sequence. We need to use mathematical induction to prove that our guess is
correct.

Solving Linear Recurrence Relations


A wide variety of recurrence relations occur in mathematical models. Some of these recurrence
relations can be solved using iteration (forward/backward substitution) or some other ad
hoc technique. However, one important class of recurrence relations can be explicitly
solved in a systematic way. These are recurrence relations that express the terms of a
sequence as linear combinations of previous terms.

Definition: A linear homogeneous recurrence relation of degree k with constant
coefficients is a recurrence relation of the form

xₙ = c₁xₙ₋₁ + c₂xₙ₋₂ + ⋯ + cₖxₙ₋ₖ

or, equivalently,

xₙ − c₁xₙ₋₁ − c₂xₙ₋₂ − ⋯ − cₖxₙ₋ₖ = 0

where c₁, c₂, …, cₖ ∈ ℝ, cₖ ≠ 0, together with the k initial conditions

x₀ = C₀, x₁ = C₁, …, xₖ₋₁ = Cₖ₋₁

Note: The recurrence relation xₙ = xₙ₋₁² + xₙ₋₂ is not linear. The recurrence relation
xₙ = 2xₙ₋₁ + 3 is not homogeneous. The recurrence relation xₙ = n·xₙ₋₁ does not have
constant coefficients.

Solving linear homogeneous recurrence relations with constant coefficients

We can observe that xₙ = rⁿ is a solution of the recurrence relation

xₙ = c₁xₙ₋₁ + c₂xₙ₋₂ + ⋯ + cₖxₙ₋ₖ

if and only if

rⁿ = c₁rⁿ⁻¹ + c₂rⁿ⁻² + ⋯ + cₖrⁿ⁻ᵏ

When both sides of this equation are divided by rⁿ⁻ᵏ (for r ≠ 0) and the right-hand
side is subtracted from the left, we obtain the equation

rᵏ − c₁rᵏ⁻¹ − c₂rᵏ⁻² − ⋯ − cₖ = 0
We call this the characteristic equation of the recurrence relation. The solutions of
this equation are called the characteristic roots of the recurrence relation. These
characteristic roots can be used to give an explicit formula for all the solutions of the
recurrence relation.

For simplicity, let's consider linear homogeneous recurrence relations of degree
two. The characteristic equation in this case has the following form:

ar² + br + c = 0

Case 1: Suppose that the equation has two distinct roots r₁ ∈ ℝ and r₂ ∈ ℝ. Then the
general solution is:

xₙ = αr₁ⁿ + βr₂ⁿ

Case 2: Suppose that the equation has only one (double) root r ∈ ℝ. Then the general
solution is:

xₙ = αrⁿ + βnrⁿ

In both cases the constants α, β ∈ ℝ are determined by the initial conditions.

Example: What is the solution of the recurrence relation

xₙ = xₙ₋₁ + 2xₙ₋₂

with initial conditions x₀ = 2, x₁ = 7?
Hint: The solution is xₙ = 3 × 2ⁿ − (−1)ⁿ

Example: Find an explicit formula for the following recurrence relation

xₙ = 6xₙ₋₁ − 9xₙ₋₂

with initial conditions x₀ = 0, x₁ = 3.
Hint: The solution is xₙ = n·3ⁿ

Mathematical Analysis of Recursive Algorithms

The general plan for analyzing the time efficiency of recursive algorithms is as follows:

1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation.
3. Check whether the number of times the basic operation is executed can vary on
different inputs of the same size. If it can, the worst-case, average-case, and best-
case efficiencies have to be investigated separately.
4. Set up a recurrence relation, with an appropriate initial condition, for the number of
times the basic operation is executed.
5. Solve the recurrence or, at least, ascertain the order of growth of its solution.

Example: Finding the factorial of 𝑛: 𝑛!

Factorial(n) {
    if (n == 0)
        return 1;
    return Factorial(n - 1) * n;
}

Let M(n) denote the number of times the basic operation is executed. The
recurrence relation is as follows:

M(n) = M(n − 1) + 1

with the initial condition M(0) = 0.
Hint: M(n) ∈ Θ(n)

Example: Tower of Hanoi puzzle with n disks

HNTower(n, left, middle, right) {
    if (n) {
        HNTower(n - 1, left, right, middle);
        Movedisk(1, left, right);
        HNTower(n - 1, middle, left, right);
    }
}

Let M(n) denote the number of moves. The recurrence relation is as follows:

M(n) = M(n − 1) + 1 + M(n − 1) = 2M(n − 1) + 1

with the initial condition M(1) = 1.
Hint: M(n) = 2ⁿ − 1 ∈ Θ(2ⁿ)

Example: Finding the number of binary digits in the binary representation of a
positive decimal integer.

BitCount(n) {
if (n == 1) return 1;
return BitCount(n / 2) + 1;
}

Let A(n) denote the number of times the basic operation is executed. Then, the
number of additions made in computing BitCount(n / 2) is A(⌊n/2⌋). The recurrence
relation is as follows:

A(n) = A(⌊n/2⌋) + 1

with the initial condition A(1) = 0.

Definition:
Let 𝑔(𝑛) be a nonnegative function defined on the set of natural numbers. 𝑔(𝑛) is
called smooth if it is eventually nondecreasing and
𝑔(2𝑛) ∈ Θ(𝑔(𝑛))

Theorem “Smoothness rule”:

Let f(n) be an eventually nondecreasing function and g(n) be a smooth function.
If f(n) ∈ Θ(g(n)) for values of n that are powers of b, where b ≥ 2, then

f(n) ∈ Θ(g(n)) for all n

(The analogous results hold for the cases of O and Ω as well.)

The standard approach to solving such a recurrence is to solve it only for n = 2ᵏ
and then take advantage of the above theorem, which claims that the order of growth
observed for n = 2ᵏ gives a correct answer about the order of growth for all values of n.
Let's assume that n = 2ᵏ. The recurrence relation A(n) = A(⌊n/2⌋) + 1 takes the
form:

A(2ᵏ) = A(2ᵏ⁻¹) + 1

with the initial condition A(2⁰) = 0. Backward substitution then gives A(2ᵏ) = k.
Hint: A(n) = log₂ n ∈ Θ(log n)

Example: Computing the n-th Fibonacci number

The recurrence relation is rewritten as follows:

Fₙ − Fₙ₋₁ − Fₙ₋₂ = 0

with two initial conditions F₀ = 0, F₁ = 1. Then, the characteristic equation of the
recurrence relation is:

r² − r − 1 = 0

and

r₁,₂ = (1 ± √((−1)² − 4(1)(−1))) / 2 = (1 ± √5) / 2

are two distinct roots of this equation.
Hence,

Fₙ = α((1 + √5)/2)ⁿ + β((1 − √5)/2)ⁿ

How do we determine the values of α and β? Based on the two initial conditions, we
may construct a system of equations:

F₀ = α((1 + √5)/2)⁰ + β((1 − √5)/2)⁰ = α + β = 0
F₁ = α((1 + √5)/2)¹ + β((1 − √5)/2)¹ = 1
Solving this system of equations gives us α = 1/√5 and β = −1/√5. Therefore,

Fₙ = (1/√5)((1 + √5)/2)ⁿ − (1/√5)((1 − √5)/2)ⁿ

Let φ = (1 + √5)/2 ≈ 1.61803 and φ̂ = (1 − √5)/2 = −1/φ ≈ −0.61803; then

Fₙ = (1/√5)(φⁿ − φ̂ⁿ)

Some algorithms for computing the n-th Fibonacci number

Recursive approach

Fibonacci(n) {
if (n ≤ 1)
return n;
return Fibonacci(n - 1) + Fibonacci(n - 2);
}

Let A(n) denote the number of times the basic operation is executed to compute
Fₙ. We get the following recurrence equation for it:

A(n) = A(n − 1) + A(n − 2) + 1 for n > 1

with two initial conditions A(0) = 0, A(1) = 0. After solving this recurrence equation, we
get:

A(n) = (1/√5)(φⁿ⁺¹ − φ̂ⁿ⁺¹) − 1 ∈ Θ(φⁿ)

Nonrecursive approach

It's easy to construct a linear-time algorithm using the formula Fₙ = (1/√5)(φⁿ − φ̂ⁿ).
Note: In practice we may use the formula Fₙ = round(φⁿ/√5), since |φ̂ⁿ|/√5 < 1/2 for
every n ≥ 0.

Dynamic programming (Θ(𝑛))

Version 1 (three variables):

Fibonacci(n) {
    if (n ≤ 1)
        return n;
    f0 = 0; f1 = 1;
    for (i = 2; i ≤ n; i++) {
        fn = f0 + f1;
        f0 = f1;
        f1 = fn;
    }
    return fn;
}

Version 2 (two variables):

Fibonacci(n) {
    if (n ≤ 1)
        return n;
    f0 = 0;
    f1 = 1;
    for (i = 2; i ≤ n; i++) {
        f1 = f1 + f0;
        f0 = f1 - f0;
    }
    return f1;
}

Matrix approach
It’s easy to prove the correctness of the following equation using mathematical
induction:
[ F(n+1)  F(n)   ]   [ 1  1 ]ⁿ
[ F(n)    F(n−1) ] = [ 1  0 ]      for n ≥ 1

The question is how to efficiently compute [1 1; 1 0]ⁿ. The following formula is our
answer:

[1 1; 1 0]ⁿ = ([1 1; 1 0]^(n/2))²                      if n is even
[1 1; 1 0]ⁿ = ([1 1; 1 0]^⌊n/2⌋)² × [1 1; 1 0]         if n is odd
Obviously, the running time of the “matrix” approach is Θ(log 𝑛).
Recursive version

void multiply(int F[2][2], int T[2][2]) {
    int t1 = F[0][0] * T[0][0] + F[0][1] * T[1][0];
    int t2 = F[0][0] * T[0][1] + F[0][1] * T[1][1];
    int t3 = F[1][0] * T[0][0] + F[1][1] * T[1][0];
    int t4 = F[1][0] * T[0][1] + F[1][1] * T[1][1];
    F[0][0] = t1; F[0][1] = t2;
    F[1][0] = t3; F[1][1] = t4;
}

void power(int F[2][2], int n) {
    if (n <= 1)
        return;
    int T[2][2] = {{1, 1}, {1, 0}};
    power(F, n / 2);
    multiply(F, F);          // square: F = F * F
    if (n % 2 != 0)
        multiply(F, T);      // odd exponent: one extra factor
}

int fib(int n) {
    int F[2][2] = {{1, 1}, {1, 0}};
    if (n == 0) return 0;
    power(F, n - 1);
    return F[0][0];
}

int main() {
    cout << fib(5);
}

The weakness of the above code is its recursive calls; an iterative (“loop”) version
avoids the call overhead and is usually faster in practice.

While loop version

int Fibonacci(int n) {
i = 1; j = 0;
k = 0; h = 1;
while (n) {
if (n % 2) {
t = j * h;
j = i * h + j * k + t;
i = i * k + t;
}
t = h * h;
h = 2 * k * h + t;
k = k * k + t;
n = n / 2;
}
return j;
}
