
CHAPTER 2 (8 LECTURES)

ROOTS OF NON-LINEAR EQUATIONS IN ONE VARIABLE

1. Introduction
Finding one or more roots (zeros) of the equation
f(x) = 0
is one of the most commonly occurring problems of applied mathematics. In most cases explicit
solutions are not available and we must be satisfied with being able to find a root to any specified
degree of accuracy. The numerical procedures for finding the roots are called iterative methods.
Definition 1.1 (Simple and multiple root). A zero (root) has a “multiplicity”, which is the number
of times that its associated factor appears in the equation. A root having multiplicity one is called a
simple root. For example, f(x) = (x − 1)(x − 2) has simple roots at x = 1 and x = 2, but g(x) = (x − 1)²
has a root of multiplicity 2 at x = 1, which is therefore not a simple root.
A root with multiplicity m ≥ 2 is called a multiple (or repeated) root. For example, in the equation
(x − 1)² = 0, x = 1 is a multiple (double) root.
If a polynomial has a multiple root, its derivative also shares that root.
More precisely, let α be a root of the equation f(x) = 0, and imagine writing it in the factored form
f(x) = (x − α)^m φ(x)
with some integer m ≥ 1 and some continuous function φ(x) for which φ(α) ≠ 0. Then we say that α
is a root of f(x) of multiplicity m.
Now we study some iterative methods to solve the non-linear equations.

2. The Bisection Method


2.1. Method. Let f(x) be a continuous function on some given interval [a, b] satisfying the condition
f(a)f(b) < 0. Then by the Intermediate Value Theorem the function f(x) must have at least one
root in [a, b]. The bisection method repeatedly bisects the interval [a, b] and then selects the subinterval
in which a root must lie for further processing. It is a very simple and robust method, but it is also
relatively slow. Usually [a, b] is chosen to contain only one root α.

Figure 1. Bisection method

Example 1. Perform five iterations of the bisection method to obtain the smallest positive root of the
equation x³ − 5x + 1 = 0.

Sol. We write f(x) = x³ − 5x + 1.
Since f(0) > 0 and f(1) < 0, the smallest positive root lies in the interval (0, 1).¹ Taking a0 = 0 and
b0 = 1, we obtain c1 = (a0 + b0)/2 = 0.5.
Now f(c1) = −1.375, so f(a0)f(c1) < 0.
This implies the root lies in the interval [0, 0.5].
Now we take a1 = 0 and b1 = 0.5; then c2 = (a1 + b1)/2 = 0.25,
f(c2) = −0.2343, and f(a1)f(c2) < 0,
which implies the root lies in the interval [0, 0.25].
Applying the same procedure, we obtain the remaining iterations given in the following table.
Table 1. Iterations in bisection method
n   a_{n-1}   b_{n-1}   c_n       sign of f(a_{n-1})f(c_n)
1   0         1         0.5       < 0
2   0         0.5       0.25      < 0
3   0         0.25      0.125     > 0
4   0.125     0.25      0.1875    > 0
5   0.1875    0.25      0.21875   < 0

Root lies in (0.1875, 0.21875), and we take the midpoint (0.1875 + 0.21875)/2 = 0.203125 as the root α.
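The computation above can be reproduced with a short routine; the following is a minimal Python sketch (the helper name `bisection` is illustrative):

```python
def bisection(f, a, b, n_iter):
    """Perform n_iter bisection steps on [a, b], assuming f(a)*f(b) < 0."""
    for _ in range(n_iter):
        c = a + (b - a) / 2          # midpoint, written to avoid overflow
        if f(a) * f(c) < 0:          # root lies in [a, c]
            b = c
        else:                        # root lies in [c, b]
            a = c
    return a + (b - a) / 2           # midpoint of the final bracket

f = lambda x: x**3 - 5*x + 1
print(bisection(f, 0.0, 1.0, 5))     # 0.203125, matching the five hand iterations
```

After five steps the bracket is (0.1875, 0.21875), whose midpoint agrees with the value obtained by hand.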
Further we discuss the convergence of approximate solution to exact solution. In this step firstly we
define the usual meaning of convergence and order of convergence.
Definition 2.1 (Convergence). A sequence {x_n} is said to converge to a point α with order p if there
exists a constant c such that
lim_{n→∞} |x_{n+1} − α| / |x_n − α|^p = c.
The constant c is known as the asymptotic error constant.


Two cases are given special attention.
(i) If p = 1 (and c < 1), the sequence is linearly convergent.
(ii) If p = 2, the sequence is quadratically convergent.
Definition 2.2. Let {β_n} be a sequence which converges to zero and let {x_n} be a sequence converging
to α. If there exist a constant c > 0 and an integer N > 0 such that
|x_n − α| ≤ c|β_n|, ∀n ≥ N,
then we say that {x_n} converges to α with rate of convergence O(β_n).
2.2. Convergence analysis. Now we analyze the convergence of the iterations generated by the
bisection method.
Theorem 2.3. Suppose that f ∈ C[a, b] and f (a)·f (b) < 0. The Bisection method generates a sequence
{ck } approximating a zero α of f with linear convergence.
Proof. Let [a1 , b1 ], [a2 , b2 ], · · · denote the successive intervals produced by the bisection algorithm.
Thus
a = a1 ≤ a2 ≤ · · · ≤ b1 = b
b = b1 ≥ b2 ≥ · · · ≥ a1 = a.
¹Choice of initial approximations: Initial approximations to the root are often known from the physical significance of
the problem. Graphical methods can be used to locate a zero of f(x), and any value in the neighborhood of the root can be
taken as an initial approximation.
If the given equation f(x) = 0 can be written as f1(x) = f2(x), then the point of intersection of the graphs
y = f1(x) and y = f2(x) gives the root of the equation. Any value in the neighborhood of this point can be taken as an
initial approximation.

This implies {a_n} and {b_n} are monotonic and bounded, and hence convergent.
Since
b1 − a1 = b − a,
b2 − a2 = (1/2)(b1 − a1) = (1/2)(b − a),
........................
b_n − a_n = (1/2^{n−1})(b1 − a1),   (2.1)
we have
lim_{n→∞} (b_n − a_n) = 0.
Here b − a denotes the length of the original interval with which we started. Taking limits,
lim_{n→∞} a_n = lim_{n→∞} b_n = α (say).
Since f is a continuous function,
lim_{n→∞} f(a_n) = f(lim_{n→∞} a_n) = f(α).
The bisection method ensures that
f(a_n)f(b_n) < 0,
which implies
lim_{n→∞} f(a_n)f(b_n) = f²(α) ≤ 0
=⇒ f(α) = 0,
i.e. the common limit of {a_n} and {b_n} is a zero of f in [a, b].
Since the root α lies in either [a_n, c_n] or [c_n, b_n],
|α − c_n| < c_n − a_n = b_n − c_n = (1/2)(b_n − a_n).
This is the error bound for c_n. Combining it with (2.1), we obtain the further bound
|α − c_n| < (1/2^n)(b − a).
This shows that the iterates c_n converge to α as n → ∞.
By the definition of convergence, we can say that the bisection method converges linearly with rate 1/2.

Illustrations: 1. Since the method brackets the root, it is guaranteed to converge; however, it can be
very slow.
2. Computing c_n: it might happen that at a certain iteration n, computing c_n = (a_n + b_n)/2 gives
an overflow. It is better to compute c_n as
c_n = a_n + (b_n − a_n)/2.
3. Stopping criteria: since this is an iterative method, we must determine some stopping criterion
that will allow the iteration to stop. The criterion “|f(c_n)| very small” can be misleading, since it is
possible to have |f(c_n)| very small even when c_n is not close to the root.
Let us now find the minimum number of iterations N needed with the bisection method to achieve a
desired accuracy ε. The interval length after N iterations is (b − a)/2^N. So, to obtain an accuracy
of ε, we must have (b − a)/2^N ≤ ε. That is,
2^{−N}(b − a) ≤ ε,
or
N ≥ [log(b − a) − log ε]/log 2.
Note the number N depends only on the initial interval [a, b] bracketing the root.
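This bound translates directly into code; a minimal sketch (the function name is illustrative):

```python
import math

def min_bisection_iterations(a, b, eps):
    """Smallest N with (b - a) / 2**N <= eps."""
    return math.ceil((math.log(b - a) - math.log(eps)) / math.log(2))

print(min_bisection_iterations(0.0, 1.0, 1e-2))   # 7 (ceil of 6.6439)
```

For the interval [0, 1] and ε = 10⁻² this gives N = 7, in agreement with Example 2 below.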
4. If a function just touches the x-axis, for example f(x) = x², then we cannot find a and b such that
f(a)f(b) < 0, even though x = 0 is a root of f(x) = 0.

5. For functions with a singularity at which the sign reverses, the bisection method may converge on
the singularity. An example is f(x) = 1/x: we can choose a and b such that f(a)f(b) < 0, but the
function is not continuous, so the theorem guaranteeing a root is not applicable.
Example 2. Use the bisection method to find a solution accurate to within 10^{−2} for x³ − 7x² + 14x − 6 = 0
on [0, 1].
Sol. Number of iterations:
N ≥ [log(1 − 0) − log(10^{−2})]/log 2 = 6.6439.
Thus, a minimum of 7 iterations is needed to obtain the desired accuracy using the bisection
method. This yields the following results for the midpoints c_n and f(c_n):

n cn f (cn )
0 0.5 −0.6250000
1 0.75 0.9843750
2 0.625 0.2597656
3 0.5625 −0.1618652
4 0.59375000 0.0540466
5 0.57812500 −0.0526237
6 0.58593750 0.0010313

Example 3. The sum of two numbers is 20. If each number is added to its square root, the
product of the resulting sums is 155.55. Perform five iterations of the bisection method to determine the
two numbers.
Sol. Let x and y be the two numbers. Then
x + y = 20.
Now x is added to √x and y is added to √y. The product of these sums is
(x + √x)(y + √y) = 155.55.
∴ (x + √x)(20 − x + √(20 − x)) = 155.55.
Writing the above equation as a root-finding problem,
f(x) = (x + √x)(20 − x + √(20 − x)) − 155.55 = 0.
As f(6)f(7) < 0, there is a root in the interval (6, 7).
The iterations of the bisection method are shown below. Therefore the root is 6.53125.

n   a          b          c          sign of f(a)f(c)
1   6.000000   7.000000   6.500000   > 0
2   6.500000   7.000000   6.750000   < 0
3   6.500000   6.750000   6.625000   < 0
4   6.500000   6.625000   6.562500   > 0
5   6.500000   6.562500   6.531250   < 0

If x = 6.53125, then y = 20 − 6.53125 = 13.46875.

3. Fixed-point iteration method


A fixed point for a function is a number at which the value of the function does not change when
the function is applied. The terminology was first used by the Dutch mathematician L. E. J. Brouwer
(1882-1962) in the early 1900s.
The number α is a fixed point for a given function g if g(α) = α.
In this section we consider the problem of finding solutions to fixed-point problems and the connection
between fixed-point problems and the root-finding problems we wish to solve. Root-finding problems
and fixed-point problems are equivalent in the following sense:
ROOTS OF NON-LINEAR EQUATIONS 5

Given a root-finding problem f(x) = 0, we can define functions g with a fixed point at the root in a number
of ways. Conversely, if the function g has a fixed point at α, then the function defined by f(x) = x − g(x)
has a zero at α.
Although the problems we wish to solve are in the root-finding form, the fixed-point form is easier to
analyze, and certain fixed-point choices lead to very powerful root-finding techniques.
Example 4. Determine any fixed points of the function g(x) = x2 − 2.
Sol. A fixed point x for g has the property that
x = g(x) = x2 − 2
which implies that
0 = x2 − x − 2 = (x + 1)(x − 2).
A fixed point for g occurs precisely when the graph of y = g(x) intersects the graph of y = x, so g has
two fixed points, one at x = −1 and the other at x = 2.

Fixed-point iterations: We now consider solving an equation x = g(x) for a root α by the iteration
xn+1 = g(xn ), n ≥ 0,
with x0 as an initial guess to α.
Each solution of x = g(x) is called a fixed point of g.

For example, consider solving x² − 3 = 0. We can write this as x = g(x) in several ways:
1. x = x² + x − 3, or more generally x = x + c(x² − 3), c ≠ 0.
2. x = 3/x.
3. x = (x + 3/x)/2.
Let x0 = 2.
Table 2. Iterations in the three cases
n   1      2     3
0   2.0    2.0   2.0
1   3.0    1.5   1.75
2   9.0    2.0   1.732143
3   87.0   1.5   1.732051


Now √3 = 1.732051, and it is clear that the third choice works; but why do the other two fail?
Which rewritings converge will be answered by the convergence result below (which requires
|g′(α)| < 1 and a ≤ g(x) ≤ b, ∀x ∈ [a, b], in a neighborhood of α).
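The three rewritings can be compared numerically; a minimal sketch (the helper `fixed_point` is illustrative):

```python
def fixed_point(g, x0, n_iter):
    """Iterate x_{n+1} = g(x_n) and return the whole trajectory."""
    xs = [x0]
    for _ in range(n_iter):
        xs.append(g(xs[-1]))
    return xs

# Three rewritings of x**2 - 3 = 0 as x = g(x), started at x0 = 2:
print(fixed_point(lambda x: x**2 + x - 3, 2.0, 3))   # diverges: 2, 3, 9, 87
print(fixed_point(lambda x: 3 / x,        2.0, 3))   # oscillates: 2, 1.5, 2, 1.5
print(fixed_point(lambda x: (x + 3/x)/2,  2.0, 3))   # converges toward sqrt(3)
```

Only the third choice settles down, exactly as in Table 2.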
Lemma 3.1. Let g(x) be a continuous function on [a, b] and assume that a ≤ g(x) ≤ b, ∀x ∈ [a, b].
Then x = g(x) has at least one solution in [a, b].
Proof. Let g be a continuous function on [a, b] and assume that a ≤ g(x) ≤ b, ∀x ∈ [a, b].
Consider φ(x) = g(x) − x.
If g(a) = a or g(b) = b, the proof is trivial. Hence we assume that g(a) ≠ a and g(b) ≠ b.
Since a ≤ g(x) ≤ b,
=⇒ g(a) > a and g(b) < b.
Now
φ(a) = g(a) − a > 0
and
φ(b) = g(b) − b < 0.
Since φ is continuous and φ(a)φ(b) < 0, by the Intermediate Value Theorem φ has at least one
zero in [a, b], i.e. there exists some α ∈ [a, b] s.t.
g(α) = α.
Graphically, the roots are the intersection points of y = x & y = g(x) as shown in the Figure.

Figure 2. An example of Lemma

Theorem 3.2 (Contraction Mapping Theorem). Let g and g′ be continuous functions on [a, b], and
assume that g satisfies a ≤ g(x) ≤ b, ∀x ∈ [a, b]. Furthermore, assume that there exists a positive
constant λ < 1 with
|g′(x)| ≤ λ, ∀x ∈ (a, b).
Then
1. x = g(x) has a unique solution α in the interval [a, b].
2. The iterates x_{n+1} = g(x_n), n ≥ 0, converge to α for any choice of x0 ∈ [a, b].
3.
|α − x_n| ≤ [λ^n/(1 − λ)]|x1 − x0|, n ≥ 0.
4. Convergence is linear.
Proof. Let g and g′ be continuous on [a, b] and assume that a ≤ g(x) ≤ b, ∀x ∈ [a, b]. By the
previous Lemma, there exists at least one solution to x = g(x).
By the Mean Value Theorem, for any x, y ∈ [a, b] there exists a point c between them such that
g(x) − g(y) = g′(c)(x − y),
hence
|g(x) − g(y)| ≤ λ|x − y|, 0 < λ < 1, ∀x, y ∈ [a, b].
1. Suppose x = g(x) has two solutions, say α and β, in [a, b]; then α = g(α) and β = g(β). Now
|α − β| = |g(α) − g(β)| ≤ λ|α − β|
=⇒ (1 − λ)|α − β| ≤ 0.
Since 0 < λ < 1, this forces α = β,
=⇒ x = g(x) has a unique solution in [a, b], which we call α.
2. To check the convergence of the iterates {x_n}, we first observe that they all remain in [a, b]: if
x_n ∈ [a, b], then x_{n+1} = g(x_n) ∈ [a, b].
Now
|α − x_{n+1}| = |g(α) − g(x_n)| = |g′(c_n)||α − x_n|
for some c_n between α and x_n.
=⇒ |α − x_{n+1}| ≤ λ|α − x_n| ≤ λ²|α − x_{n−1}| ≤ · · · ≤ λ^{n+1}|α − x0|.
As n → ∞, λ^n → 0, which implies x_n → α. Also
|α − x_n| ≤ λ^n|α − x0|.   (3.1)

3. To find the bound: since
|α − x0| = |α − x1 + x1 − x0|
≤ |α − x1| + |x1 − x0|
≤ λ|α − x0| + |x1 − x0|,
we get
(1 − λ)|α − x0| ≤ |x1 − x0|
=⇒ |α − x0| ≤ [1/(1 − λ)]|x1 − x0|
=⇒ λ^n|α − x0| ≤ [λ^n/(1 − λ)]|x1 − x0|.
Therefore, using (3.1),
|α − x_n| ≤ λ^n|α − x0| ≤ [λ^n/(1 − λ)]|x1 − x0|.
4. Now
|α − x_{n+1}| / |α − x_n| = |g′(c_n)|
for some c_n between α and x_n.
Since x_n → α, we have c_n → α. Hence
lim_{n→∞} |α − x_{n+1}| / |α − x_n| = |g′(α)|.
If |g′(α)| < 1, this shows that the iterates are linearly convergent with rate (asymptotic error
constant) |g′(α)|. If in addition g′(α) ≠ 0, then the formula proves that convergence is exactly linear,
with no higher order of convergence being possible.
Illustrations: 1. In practice it can be difficult to find an interval [a, b] for which the condition
a ≤ g(x) ≤ b is satisfied. On the other hand, if |g′(α)| > 1, then the iteration x_{n+1} = g(x_n) will
not converge to α.
When |g′(α)| = 1, no conclusion can be drawn, and even if convergence occurs the method would be
far too slow to be practical.
2. If
|α − x_n| ≤ [λ^n/(1 − λ)]|x1 − x0| < ε,
where ε is the desired accuracy, this bound can be used to find the number of iterations needed to
achieve the accuracy ε.
Also, from part 2, |α − x_n| ≤ λ^n|α − x0| ≤ λ^n max{x0 − a, b − x0} < ε can be used to find the
number of iterations.
3. The possible behavior of the fixed-point iterates {x_n} is shown in Figure 3 for various values of
g′(α). To see the convergence, consider the case of x1 = g(x0), the height of y = g(x) at x0. We bring
the number x1 back to the x-axis by using the line y = x and the height y = x1. We continue this with
each iterate, obtaining a stair-step behavior when g′(α) > 0. When g′(α) < 0, the iterates oscillate
around the fixed point α, as can be seen in the Figure. In the first figure (top) the iterations are
monotonically convergent, in the second oscillatory convergent, in the third divergent, and in the last
oscillatory divergent.
Theorem 3.3. Let α be a root of x = g(x), and let g(x) be p times continuously differentiable
for all x ∈ [α − δ, α + δ] with g(x) ∈ [α − δ, α + δ], for some p ≥ 2. Furthermore assume
g′(α) = · · · = g^{(p−1)}(α) = 0.   (3.2)
Then, if the initial guess x0 is sufficiently close to α, the iteration
x_{n+1} = g(x_n), n ≥ 0,
will have order of convergence p.

Figure 3. Convergent and non-convergent sequences xn+1 = g(xn )

Proof. Let g(x) be p times continuously differentiable for all x ∈ [α − δ, α + δ] with g(x) ∈ [α − δ, α + δ],
satisfying the conditions in equation (3.2) stated above.
Now expand g(x_n) in a Taylor polynomial about α:
x_{n+1} = g(x_n) = g(α + x_n − α)
= g(α) + (x_n − α)g′(α) + · · · + [(x_n − α)^{p−1}/(p − 1)!] g^{(p−1)}(α) + [(x_n − α)^p/p!] g^{(p)}(ξ_n),
for some ξ_n between x_n and α.
Using equation (3.2) and g(α) = α, we obtain
x_{n+1} − α = [(x_n − α)^p/p!] g^{(p)}(ξ_n)
=⇒ (x_{n+1} − α)/(x_n − α)^p = g^{(p)}(ξ_n)/p!
=⇒ (α − x_{n+1})/(α − x_n)^p = (−1)^{p−1} g^{(p)}(ξ_n)/p!.
Taking limits as n → ∞ on both sides,
lim_{n→∞} (α − x_{n+1})/(α − x_n)^p = (−1)^{p−1} g^{(p)}(α)/p!.
By the definition of convergence, the iterations have order of convergence p.

Example 5. Consider the equation x³ − 7x + 2 = 0 in [0, 1]. Write a fixed-point iteration which will
converge to the solution.
Sol. We rewrite the equation in the form x = (x³ + 2)/7 and define the fixed-point iteration
x_{n+1} = (x_n³ + 2)/7.
Now g(x) = (x³ + 2)/7;
then g : [0, 1] → [0, 1] and |g′(x)| ≤ 3/7 < 1, ∀x ∈ [0, 1].
Hence by the Contraction Mapping Theorem the sequence {x_n} defined above converges to the
unique solution of the given equation. Starting with x0 = 0.5, we compute
x1 = 0.303571429
x2 = 0.28971083
x3 = 0.289188016.
Therefore the root correct to three decimals is 0.289.
Example 6. The equation e^x = 4x² has a root in [4, 5]. Show that we cannot find that root using
x = g(x) = (1/2)e^{x/2} in the fixed-point iteration method. Can you find another iterative formula which
will locate that root? If yes, find the third iterate with x0 = 4.5. Also find the error bound.
Sol. Here g(x) = (1/2)e^{x/2} and g′(x) = (1/4)e^{x/2} > 1 for all x ∈ [4, 5]; therefore the fixed-point
iteration fails to converge to the root in [4, 5].
Now consider x = g(x) = ln(4x²), for which |g′(x)| = 2/x ≤ 1/2 < 1 for all x ∈ [4, 5].
Also 4 ≤ g(x) ≤ 5, so the fixed-point iteration converges to the root in [4, 5].
Using the fixed-point iteration method with x0 = 4.5 gives the iterates
x1 = g(x0) = ln(4 × 4.5²) = 4.3944
x2 = 4.3469
x3 = 4.3253.
Now λ = max_{4≤x≤5} |g′(x)| = g′(4) = 0.5.
We have the error bound
|α − x3| ≤ [0.5³/(1 − 0.5)]|4.3944 − 4.5| = 0.0264.
Example 7. The equation x³ + 4x² − 10 = 0 has a unique root in [1, 2]. Write fixed-point
representations which converge to the unique solution.
Sol. We discuss several possibilities for g(x).
(1) x = g1(x) = x − x³ − 4x² + 10.
For g1(x) = x − x³ − 4x² + 10, we have g1(1) = 6 and g1(2) = −12, so g1 does not map [1, 2] into
itself. Moreover, g1′(x) = 1 − 3x² − 8x, so |g1′(x)| > 1 for all x in [1, 2]. Although the Convergence
Theorem does not guarantee that the method must fail for this choice of g, there is no reason
to expect convergence.
(2) x³ = 10 − 4x²
=⇒ x² = 10/x − 4x
=⇒ x = [(10/x) − 4x]^{1/2} = g2(x).
With g2(x) = [10/x − 4x]^{1/2}, we can see that g2 does not map [1, 2] into [1, 2], and the sequence
{x_n}_{n=0}^∞ is not defined when x0 = 1.5. Moreover, there is no interval containing α ≈ 1.365
such that |g2′(x)| < 1, because |g2′(α)| ≈ 3.4. There is no reason to expect that this method will
converge.

(3) 4x² = 10 − x³
=⇒ x = (1/2)(10 − x³)^{1/2} = g3(x).
For the function g3(x) = (1/2)(10 − x³)^{1/2}, we have
g3′(x) = −(3/4)x²(10 − x³)^{−1/2} < 0 on [1, 2],
so g3 is strictly decreasing on [1, 2]. However, |g3′(2)| ≈ 2.12, so the condition |g3′(x)| ≤ λ < 1
fails on [1, 2]. A closer examination of the sequence {x_n}_{n=0}^∞ starting with x0 = 1.5 shows that
it suffices to consider the interval [1, 1.5] instead of [1, 2]. On this interval it is still true that
g3′(x) < 0 and g3 is strictly decreasing, but, additionally,
1 < 1.28 ≈ g3(1.5) ≤ g3(x) ≤ g3(1) = 1.5
for all x ∈ [1, 1.5]. This shows that g3 maps the interval [1, 1.5] into itself. It is also true that
|g3′(x)| ≤ |g3′(1.5)| ≈ 0.66 on this interval, so the Convergence Theorem confirms the convergence.
(4) x³ + 4x² = 10
=⇒ x²(x + 4) = 10
=⇒ x = [10/(x + 4)]^{1/2} = g4(x).
For g4(x) we have
|g4′(x)| = 5/[√10 (4 + x)^{3/2}] ≤ 5/[√10 (5)^{3/2}] < 0.15, for all x ∈ [1, 2].
The bound on the magnitude of g4′(x) is much smaller than the bound (found in (3)) on the
magnitude of g3′(x), which explains the more rapid convergence using g4.
(5) The sequence defined by
g5(x) = x − (x³ + 4x² − 10)/(3x² + 8x)
converges much more rapidly than our other choices. In the next section (Newton’s method)
we will see where this choice came from and why it is so effective.
Starting with x0 = 1.5, the following table shows some of the iterates for the five choices.
n   (1)      (2)             (3)           (4)           (5)
0   1.5      1.5             1.5           1.5           1.5
1   −0.875   0.8165          1.286953768   1.348399725   1.373333333
2   6.732    2.9969          1.402540804   1.367376372   1.365262015
3   −4.697   (−8.65)^{1/2}   1.345458374   1.364957015   1.365230014
4                            1.375170253   1.365264748   1.365230013
5                            1.360094193   1.365225594
6                            1.367846968   1.365230576
Example 8. Use a fixed-point method to determine a solution accurate to within 10^{−4} for x = tan x,
for x in [4, 5].
Sol. Using g(x) = tan x and x0 = 4 gives x1 = g(x0) = tan 4 = 1.158, which is not in the interval [4, 5].
So we need a different fixed-point function.
If we note that x = tan x implies
1/x = 1/tan x,
then
x = x − 1/x + 1/tan x.
Starting with x0 = 4 and taking g(x) = x − 1/x + 1/tan x,
we obtain x1 = 4.61369, x2 = 4.49596, x3 = 4.49341, x4 = 4.49341.
As x3 and x4 agree to five decimals, it is reasonable to assume that these values are sufficiently accurate.
Example 9. The iterates x_{n+1} = 2 − (1 + c)x_n + cx_n³ will converge to α = 1 for some values of the
constant c (provided that x0 is sufficiently close to α). Find the values of c for which convergence occurs.
For what values of c, if any, is the convergence quadratic?
Sol. This is a fixed-point iteration
x_{n+1} = g(x_n)
with
g(x) = 2 − (1 + c)x + cx³.
α = 1 is a fixed point, and for convergence we need |g′(α)| < 1:
|−(1 + c) + 3cα²| = |2c − 1| < 1
=⇒ 0 < c < 1.
For quadratic convergence we need
g′(α) = 0 and g″(α) ≠ 0.
g′(α) = 2c − 1 = 0 gives c = 1/2, and for this value g″(α) = 6cα = 3 ≠ 0. Hence the convergence is
quadratic for c = 1/2.
Example 10. Which of the following iterations,
a. x_{n+1} = (1/4)(x_n² + 6/x_n),
b. x_{n+1} = 4 − 6/x_n²,
is suitable to find a root of the equation x³ = 4x² − 6 in the interval [3, 4]? Estimate the number of
iterations required to achieve 10^{−3} accuracy, starting from x0 = 3.
Sol. a. Let g(x) = (1/4)(x² + 6/x), which is continuous in [3, 4], but g′(x) > 1 for all x ∈ (3, 4). So this
choice of g(x) is not suitable.
b. Let g(x) = 4 − 6/x², which is continuous in [3, 4] with g(x) ∈ [3, 4] for all x ∈ [3, 4].
Also |g′(x)| = |12/x³| < 1 for all x ∈ (3, 4). The Contraction Mapping Theorem then implies
that a unique fixed point exists in [3, 4]. To find an approximation that is accurate to within 10^{−3},
we need to determine the number of iterations n so that
|α − x_n| ≤ [λ^n/(1 − λ)]|x1 − x0| < 10^{−3}.
Here λ = max_{3≤x≤4} |g′(x)| = 4/9, and using the fixed-point method with x0 = 3 we have
x1 = g(x0) = 10/3, so
[(4/9)^n/(1 − 4/9)]|10/3 − 3| < 10^{−3}.
Solving for n, we get n = 8.
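The final step (solving for n) can be checked numerically against the error bound; a minimal sketch:

```python
lam = 4 / 9                 # λ = max |g'(x)| on [3, 4]
x0, x1 = 3.0, 10 / 3        # x1 = g(x0) = 4 - 6/x0**2
eps = 1e-3

# Increase n until λ^n / (1 - λ) · |x1 - x0| drops below eps:
n = 1
while lam**n / (1 - lam) * abs(x1 - x0) >= eps:
    n += 1
print(n)   # 8
```

The bound first drops below 10⁻³ at n = 8, confirming the estimate above.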

4. Iteration method based on first degree equation


4.1. The Secant Method. Let f(x) = 0 be the given non-linear equation.
Let (x0, f(x0)) and (x1, f(x1)) be two points on the curve y = f(x). Then the equation of the secant
line joining these two points on the curve is
y − f(x1) = [(f(x1) − f(x0))/(x1 − x0)](x − x1).
Let the intersection point of the secant line with the x-axis be (x2, 0); then at x = x2, y = 0. Therefore
0 − f(x1) = [(f(x1) − f(x0))/(x1 − x0)](x2 − x1)
=⇒ x2 = x1 − [(x1 − x0)/(f(x1) − f(x0))] f(x1).
Here x0 and x1 are two approximations of the root. The point (x2, 0) can be taken as the next approximation
of the root. This method is called the secant or chord method, and successive iterations are
given by
x_{n+1} = x_n − [(x_n − x_{n−1})/(f(x_n) − f(x_{n−1}))] f(x_n), n = 1, 2, . . .
Geometrically, in this method we replace the unknown function by a straight line (chord) passing
through (x0, f(x0)) and (x1, f(x1)), take the point of intersection of the straight line with the
x-axis as the next approximation to the root, and continue the process.

Figure 4. Secant method
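The secant update translates directly into code; a minimal sketch (the zero-denominator guard is an extra safeguard, not part of the formula):

```python
import math

def secant(f, x0, x1, n_iter):
    """Apply the secant update n_iter times starting from x0, x1."""
    for _ in range(n_iter):
        if f(x1) == f(x0):            # flat chord: cannot divide
            break
        x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
    return x1

# Root of cos x - x e^x = 0 (Example 11 below starts from the same guesses):
f = lambda x: math.cos(x) - x * math.exp(x)
print(secant(f, 0.0, 1.0, 8))   # ≈ 0.517757
```

Eight steps from x0 = 0, x1 = 1 already agree with the root to many digits, reflecting the superlinear convergence derived in Example 12.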

Example 11. Apply the secant method to find the root of the equation
cos x − x e^x = 0.
Sol. Let f(x) = cos x − x e^x.
The successive iterations of the secant method are given by
x_{n+1} = x_n − [(x_n − x_{n−1})/(f(x_n) − f(x_{n−1}))] f(x_n), n = 1, 2, . . .
As f(0)f(1) < 0, we take initial guesses x0 = 0 and x1 = 1, and obtain
x2 = 0.3146653378
x3 = 0.4467281466
etc.
Example 12. Let f ∈ C²[a, b]. If α is a simple root of f(x) = 0, then show that the sequence {x_n}
generated by the secant method has order of convergence 1.618.
Sol. We assume that α is a simple root of f(x) = 0, so f(α) = 0.
Let x_n = α + ε_n.
An iterative method is said to have order of convergence p if
|x_{n+1} − α| = C|x_n − α|^p,
or equivalently
|ε_{n+1}| = C|ε_n|^p.
Successive iterations of the secant method are given by
x_{n+1} = x_n − [(x_n − x_{n−1})/(f(x_n) − f(x_{n−1}))] f(x_n), n = 1, 2, . . .
The error equation is written as
ε_{n+1} = ε_n − [(ε_n − ε_{n−1})/(f(α + ε_n) − f(α + ε_{n−1}))] f(α + ε_n).
By expanding f(α + ε_n) and f(α + ε_{n−1}) in Taylor series about α (and using f(α) = 0), we obtain
ε_{n+1} = ε_n − (ε_n − ε_{n−1})[ε_n f′(α) + (1/2)ε_n² f″(α) + · · ·] / {(ε_n − ε_{n−1})[f′(α) + (1/2)(ε_n + ε_{n−1}) f″(α) + · · ·]}
= ε_n − [ε_n + (1/2)ε_n² f″(α)/f′(α) + · · ·][1 + (1/2)(ε_{n−1} + ε_n) f″(α)/f′(α) + · · ·]^{−1}
= ε_n − [ε_n + (1/2)ε_n² f″(α)/f′(α) + · · ·][1 − (1/2)(ε_{n−1} + ε_n) f″(α)/f′(α) + · · ·]
= (1/2)[f″(α)/f′(α)] ε_n ε_{n−1} + O(ε_n² ε_{n−1} + ε_n ε_{n−1}²).
Therefore
ε_{n+1} ≈ A ε_n ε_{n−1}, where A = (1/2) f″(α)/f′(α).
This relation is called the error equation. Now by the definition of the order of convergence, we expect
a relation of the following type:
ε_{n+1} = C ε_n^p.
Shifting the index down by one, ε_n = C ε_{n−1}^p, so ε_{n−1} = C^{−1/p} ε_n^{1/p}.
Hence
C ε_n^p = A ε_n C^{−1/p} ε_n^{1/p}
=⇒ ε_n^p = A C^{−(1+1/p)} ε_n^{1+1/p}.
Comparing the powers of ε_n on both sides, we get
p = 1 + 1/p, i.e. p² − p − 1 = 0,
which gives two values of p; one is p = (1 + √5)/2 ≈ 1.618 and the other is negative (we neglect the
negative value, as the order of convergence is non-negative).
Therefore the order of convergence of the secant method is 1.618, which is less than 2.
4.2. Newton’s Method. Let f(x) = 0 be the given non-linear equation.
Let the tangent line at the point (x0, f(x0)) on the curve y = f(x) intersect the x-axis at (x1, 0). The
equation of the tangent is given by
y − f(x0) = f′(x0)(x − x0),
where f′(x0) is the slope of the tangent at x0. At x = x1, y = 0:
0 − f(x0) = f′(x0)(x1 − x0)
=⇒ x1 = x0 − f(x0)/f′(x0).
Here x0 is the initial approximation of the root.
This is called Newton’s method, and successive iterations are given by
x_{n+1} = x_n − f(x_n)/f′(x_n), n = 0, 1, . . . .
The method can be obtained directly from the secant method by taking the limit x_{n−1} → x_n. In the
limiting case the chord joining the points (x_{n−1}, f(x_{n−1})) and (x_n, f(x_n)) becomes the tangent at
(x_n, f(x_n)).

In this case the problem of finding the root of the equation is equivalent to finding the point of intersection
of the tangent to the curve y = f(x) at the point (x_n, f(x_n)) with the x-axis.

Figure 5. Newton’s method
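The iteration is equally short in code; a minimal sketch, applied to the illustrative cubic x³ − x − 1 = 0 (not an example from these notes):

```python
def newton(f, df, x0, n_iter):
    """Apply x_{n+1} = x_n - f(x_n)/f'(x_n) n_iter times."""
    x = x0
    for _ in range(n_iter):
        x = x - f(x) / df(x)
    return x

# Real root of x**3 - x - 1 = 0 (approximately 1.3247):
root = newton(lambda x: x**3 - x - 1, lambda x: 3*x**2 - 1, 1.5, 5)
print(root)
```

Five steps from x0 = 1.5 already give the root to machine precision, which is the quadratic convergence established in the Convergence Analysis below.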


Example 13. Use Newton’s method to compute √2.
Sol. This number satisfies f(x) = 0 where f(x) = x² − 2.
Since f′(x) = 2x, it follows that in Newton’s method we obtain the next iterate from the previous
iterate x_n by
x_{n+1} = x_n − (x_n² − 2)/(2x_n) = x_n/2 + 1/x_n.
Starting with x0 = 1, we obtain
x1 = 1/2 + 1/1 = 1.5
x2 = 1.5/2 + 1/1.5 = 1.41666667
x3 = 1.41421569
x4 = 1.41421356
x5 = 1.41421356.
Since the fourth and fifth iterates agree to eight decimal places, we conclude that 1.41421356 is a
solution of f(x) = 0 correct to at least eight decimal places.
Example 14. Perform four iterations of Newton’s method to obtain the approximate value of 17^{1/3},
starting with x0 = 2.0.
Sol. Let x = 17^{1/3}, which implies x³ = 17.
Let f(x) = x³ − 17.
Newton’s approximations are given by
x_{n+1} = x_n − (x_n³ − 17)/(3x_n²) = (2x_n³ + 17)/(3x_n²), n = 0, 1, 2, . . . .
Starting with x0 = 2.0, we obtain
x1 = 2.75, x2 = 2.582645, x3 = 2.571332, x4 = 2.571282, etc.

4.2.1. Newton’s method can go bad.
• Once Newton’s method catches scent of the root, it usually hunts it down with amazing
speed. But since the method is based only on local information, namely f(x_n) and f′(x_n),
its sense of smell is deficient.
• If the initial estimate is not close enough to the root, Newton’s method may not converge,
or may converge to the wrong root.
• Let f(x) be twice continuously differentiable on the closed finite interval [a, b] and suppose the
following conditions are satisfied:
(i) f(a)f(b) < 0.
(ii) f′(x) ≠ 0, ∀x ∈ [a, b].
(iii) Either f″(x) ≥ 0 or f″(x) ≤ 0, ∀x ∈ [a, b].
(iv) At the end points a, b,
|f(a)|/|f′(a)| < b − a,   |f(b)|/|f′(b)| < b − a.
Then Newton’s method converges to the unique solution α of f(x) = 0 in [a, b] for any
choice of x0 ∈ [a, b].
Conditions (i) and (ii) guarantee that there is one and only one solution in [a, b]. Condition
(iii) states that the graph of f(x) is either concave from above or concave from below, and
furthermore together with condition (ii) implies that f′(x) is monotone on [a, b]. Added to
these, condition (iv) states that the tangent to the curve at either endpoint intersects the
x-axis within the interval [a, b].

Figure 6. An example where Newton’s method will not work.

The following example shows that the choice of initial guess is very important for convergence.
Example 15. Use Newton’s method to find a non-zero solution of x = 2 sin x.
Sol. Let f(x) = x − 2 sin x.
Then f′(x) = 1 − 2 cos x, and the Newton iteration is
x_{n+1} = x_n − f(x_n)/f′(x_n) = x_n − (x_n − 2 sin x_n)/(1 − 2 cos x_n) = 2(sin x_n − x_n cos x_n)/(1 − 2 cos x_n).

Figure 7. One more example of where Newton’s method will not work.

Let x0 = 1.1. The next six estimates, to 3 decimal places, are
x1 = 8.453, x2 = 5.256, x3 = 203.384, x4 = 118.019, x5 = −87.471, x6 = −203.637.
The iterations diverge.
Note that choosing x0 = π/3 ≈ 1.0472 leads to immediate disaster, since then 1 − 2 cos x0 = 0 and
therefore x1 does not exist. The trouble was caused by the choice of x0.
Let us see whether we can do better. Draw the curves y = x and y = 2 sin x. A quick sketch shows that
they meet a bit past π/2. Take x0 = 1.5. Here are the next five estimates:
x1 = 2.076558, x2 = 1.910507, x3 = 1.895622, x4 = 1.895494, x5 = 1.895494.
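The sensitivity to the starting point can be verified directly; a minimal sketch using the same f and f′:

```python
import math

f  = lambda x: x - 2 * math.sin(x)
df = lambda x: 1 - 2 * math.cos(x)

def newton_steps(x, n):
    """Return the list x0, x1, ..., xn of Newton iterates."""
    xs = [x]
    for _ in range(n):
        xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
    return xs

print(newton_steps(1.1, 6))   # first step jumps to 8.453; the iterates then wander
print(newton_steps(1.5, 5))   # settles on 1.895494...
```

From x0 = 1.1 the small denominator 1 − 2 cos x0 throws the first iterate far away, while x0 = 1.5 converges in a handful of steps.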
Example 16. Find, correct to 5 decimal places, the x-coordinate of the point on the curve y = ln x
which is closest to the origin. Use Newton’s method.
Sol. Let (x, ln x) be a general point on the curve, and let S(x) be the square of the distance from
(x, ln x) to the origin. Then
S(x) = x² + ln² x.
We want to minimize the distance, which is equivalent to minimizing the square of the distance. Now
the minimization process takes the usual route. Note that S(x) is only defined for x > 0. We have
S′(x) = 2x + 2 ln x / x = (2/x)(x² + ln x).
Our problem thus comes down to solving the equation S′(x) = 0. We could use Newton’s method
directly on S′(x), but the calculations are more pleasant if we observe that S′(x) = 0 is equivalent to
x² + ln x = 0.
Let f(x) = x² + ln x. Then f′(x) = 2x + 1/x and we get the recurrence relation
x_{k+1} = x_k − (x_k² + ln x_k)/(2x_k + 1/x_k), k = 0, 1, · · ·
We need a suitable starting point x0. Experimentation with a calculator suggests that we take
x0 = 0.65.
Then x1 = 0.6529181 and x2 = 0.65291864.
Since x1 agrees with x2 to 5 decimal places, we can decide that, to 5 places, the minimum
distance occurs at x = 0.65292.

4.3. Convergence Analysis.

Theorem 4.1. Let f ∈ C 2 [a, b]. If α is a simple root of f (x) = 0 and f 0 (α) 6= 0, then Newton’s method
generates a sequence {xn } converging quadratically to root α for any initial approximation x0 near to
α.

Proof. The proof is based on analyzing Newton’s method as the fixed point iteration scheme xn+1 =
g(xn ), for n ≥ 1, with
f (x)
g(x) = x − .
f 0 (x)
Let λ be in (0, 1). We first find an interval [α − δ, α + δ] such that g(x) ∈ [α − δ, α + δ] and for which
|g 0 (x)| ≤ λ, for all x ∈ (α − δ, α + δ).
Since f′ is continuous and f′(α) ≠ 0, f′ remains non-zero in some neighborhood of α.
Thus g is defined and continuous in a neighborhood of α. Also in that neighborhood

g′(x) = 1 − [f′(x)f′(x) − f(x)f″(x)]/[f′(x)]² = f(x)f″(x)/[f′(x)]².   (4.1)

Now since f(α) = 0,
g′(α) = f(α)f″(α)/[f′(α)]² = 0.
Since g′ is continuous with g′(α) = 0 and 0 < λ < 1, there exists a number δ > 0 such that
|g′(x)| ≤ λ, ∀x ∈ [α − δ, α + δ].

Now we will show that g maps [α − δ, α + δ] into [α − δ, α + δ].


If x ∈ [α − δ, α + δ], the Mean Value Theorem implies that for some number ξ between x and α,

|g(x) − α| = |g(x) − g(α)| = |g 0 (ξ)| |x − α| ≤ λ|x − α| < |x − α|.

It follows that |x − α| ≤ δ =⇒ |g(x) − α| ≤ δ. Hence, g maps [α − δ, α + δ] into [α − δ, α + δ]. All
the hypotheses of the Fixed-Point Convergence Theorem (Contraction Mapping) are now satisfied, so
the sequence {xn} converges to the root α. Further, from Eq. (4.1),

g″(α) = f″(α)/f′(α),
which is non-zero provided f″(α) ≠ 0; combined with g′(α) = 0, this shows that the convergence is of
second order.
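The quadratic convergence can be observed numerically. The following sketch (our own illustration on f(x) = x² − 2 with root α = √2, not from the text) tabulates the ratios e_{n+1}/e_n², which settle near the constant f″(α)/(2f′(α)) = 1/(2√2):

```python
import math

# Newton's method on f(x) = x^2 - 2; track the errors e_n = |x_n - sqrt(2)|.
alpha = math.sqrt(2)
x = 1.5
errors = []
for _ in range(5):
    errors.append(abs(x - alpha))
    x = x - (x * x - 2) / (2 * x)  # Newton step

# Quadratic convergence: e_{n+1}/e_n^2 approaches f''(a)/(2 f'(a)) = 1/(2*sqrt(2)).
ratios = [errors[n + 1] / errors[n] ** 2 for n in range(3)]
print(ratios)  # each ratio is close to 0.3536
```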

Example 17. Find all the roots of cos x − x2 − x = 0 to five decimal places.

Sol. f(x) = cos x − x² − x = 0 has two roots, one in each of the intervals (−2, −1) and (0, 1).
Applying Newton’s method,
xn+1 = xn − (cos xn − xn² − xn)/(− sin xn − 2xn − 1).
Taking x0 = −1.5 for the root in the interval (−2, −1), we obtain

x1 = −1.27338985, x2 = −1.25137907, x3 = −1.25115186, x4 = −1.25114184.

Starting with x0 = 0.5, we can obtain the root in (0, 1) and iterations are given by

x1 = 0.55145650, x2 = 0.55001049, x3 = 0.55000935.

Hence roots correct to five decimals are −1.25115 and 0.55001.
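Example 17 is easy to verify with a short script (our own sketch; the `newton` helper and stopping tolerance are assumed):

```python
import math

def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton iteration, stopping when the step size drops below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: math.cos(x) - x * x - x
df = lambda x: -math.sin(x) - 2 * x - 1

r1 = newton(f, df, -1.5)  # root in (-2, -1)
r2 = newton(f, df, 0.5)   # root in (0, 1)
print(round(r1, 5), round(r2, 5))  # -1.25115 0.55001
```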



4.4. Newton’s method for multiple roots. Let α be a root of f (x) = 0 with multiplicity m. In
this case we can write
f (x) = (x − α)m φ(x).
In this case
f (α) = f 0 (α) = ... = f (m−1) (α) = 0, f (m) (α) 6= 0.
Recall that we can regard Newton’s method as a fixed-point method:
xn+1 = g(xn), g(x) = x − f(x)/f′(x).
Then we substitute
f (x) = (x − α)m φ(x)
to obtain
g(x) = x − (x − α)^m φ(x) / [m(x − α)^(m−1) φ(x) + (x − α)^m φ′(x)]
     = x − (x − α)φ(x) / [mφ(x) + (x − α)φ′(x)].
Therefore we obtain
g′(α) = 1 − 1/m.
For m > 1 this is non-zero, and therefore Newton’s method is only linearly convergent.
There are ways of improving the speed of convergence of Newton’s method, creating a modified method
that is again quadratically convergent. In particular, consider the fixed point iteration formula
xn+1 = g(xn), g(x) = x − m f(x)/f′(x)
in which we assume to know the multiplicity m of the root α being sought. Then modifying the above
argument on the convergence of Newton’s method, we obtain
g′(α) = 1 − m · (1/m) = 0
and the iteration method will be quadratically convergent. But most of the time we don’t know the
multiplicity.
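A quick experiment illustrates the difference the factor m makes. The test function f(x) = (x − 1)², the tolerance, and the helper below are our own assumptions for illustration:

```python
# Plain Newton vs the multiplicity-aware variant on f(x) = (x - 1)^2,
# which has a root of multiplicity m = 2 at x = 1.
def iterations_needed(step, x0, tol=1e-10, max_iter=200):
    x, n = x0, 0
    while abs(x - 1.0) >= tol and n < max_iter:
        x = step(x)
        n += 1
    return n  # iterations required to reach the tolerance

f = lambda x: (x - 1.0) ** 2
df = lambda x: 2.0 * (x - 1.0)

plain = iterations_needed(lambda x: x - f(x) / df(x), x0=2.0)
with_m = iterations_needed(lambda x: x - 2 * f(x) / df(x), x0=2.0)
print(plain, with_m)  # plain halves the error each step; with m = 2 it lands in one
```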
One method of handling the problem of multiple roots of a function f is to define
µ(x) = f(x)/f′(x).
If α is a zero of f of multiplicity m with f (x) = (x − α)m φ(x), then
µ(x) = (x − α)^m φ(x) / [m(x − α)^(m−1) φ(x) + (x − α)^m φ′(x)]
     = (x − α) φ(x) / [mφ(x) + (x − α)φ′(x)]
also has a zero at α. However, φ(α) 6= 0, so
φ(α) / [mφ(α) + (α − α)φ′(α)] = 1/m ≠ 0,
and α is a simple zero of µ(x). Newton’s method can then be applied to µ(x) to give
g(x) = x − µ(x)/µ′(x) = x − [f(x)/f′(x)] / ({[f′(x)]² − f(x)f″(x)}/[f′(x)]²),
which simplifies to
g(x) = x − f(x)f′(x) / {[f′(x)]² − f(x)f″(x)}.
If g has the required continuity conditions, functional iteration applied to g will be quadratically con-
vergent regardless of the multiplicity of the zero of f. Theoretically, the only drawback to this method
is the additional calculation of f 00 (x) and the more laborious procedure of calculating the iterates. In

practice, however, multiple roots can cause serious round-off problems because the denominator of the
above expression consists of the difference of two numbers that are both close to 0.
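A sketch of Newton's method applied to µ(x) = f(x)/f′(x), i.e. the iteration x_{n+1} = x_n − f f′/([f′]² − f f″). The cubic test function (a triple root at x = 1) and the guard against a vanishing denominator are our own choices:

```python
def mu_newton(f, df, d2f, x0, tol=1e-10, max_iter=100):
    """Newton's method on u(x) = f(x)/f'(x); handles multiple roots of f."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        den = df(x) ** 2 - fx * d2f(x)
        if den == 0.0:  # iterate has landed on the root exactly
            break
        step = fx * df(x) / den
        x -= step
        if abs(step) < tol:
            break
    return x

# Test function: f(x) = (x - 1)^3 (x + 2), triple root at x = 1.
f = lambda x: (x - 1) ** 3 * (x + 2)
df = lambda x: 3 * (x - 1) ** 2 * (x + 2) + (x - 1) ** 3
d2f = lambda x: 6 * (x - 1) * (x + 2) + 6 * (x - 1) ** 2

root = mu_newton(f, df, d2f, x0=1.5)
print(abs(root - 1.0) < 1e-8)  # True, despite the multiplicity-3 root
```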
Example 18. Let f (x) = ex − x − 1. Show that f has a zero of multiplicity 2 at x = 0. Show that
Newton’s method with x0 = 1 converges to this zero but not quadratically.
Sol. We have f (x) = ex − x − 1, f 0 (x) = ex − 1 and f 00 (x) = ex .
Now f(0) = 1 − 0 − 1 = 0, f′(0) = 1 − 1 = 0, and f″(0) = 1 ≠ 0. Therefore f has a zero of
multiplicity 2 at x = 0.
Starting with x0 = 1, the iterates
xn+1 = xn − f(xn)/f′(xn)
are
x1 = 0.58198, x2 = 0.31906, x3 = 0.16800, x4 = 0.08635, x5 = 0.04380, x6 = 0.02206.
The error is roughly halved at each step, as expected for a double root, so the convergence is only
linear.
By using the modified Newton’s method
xn+1 = xn − f(xn)f′(xn) / {[f′(xn)]² − f(xn)f″(xn)}.
Starting with x0 = 1.0, we obtain
x1 = −0.023421, x2 = −0.0084527, x3 = −0.000011889.
We observe that the modified Newton’s method converges much faster to the root 0.
Example 19. The equation f (x) = x3 − 7x2 + 16x − 12 = 0 has a double root at x = 2.0. Starting
with x0 = 1, find the root correct to three decimals with Newton’s and its modified version.
Sol. Firstly we apply simple Newton’s method and successive iterations are given by
xn+1 = xn − (xn³ − 7xn² + 16xn − 12)/(3xn² − 14xn + 16), n = 0, 1, 2, . . .
Starting with x0 = 1.0, we obtain
x1 = 1.4, x2 = 1.652632, x3 = 1.806484, x4 = 1.89586
x5 = 1.945653, x6 = 1.972144, x7 = 1.985886, x8 = 1.992894
x9 = 1.996435, x10 = 1.998214, x11 = 1.999106, x12 = 1.999553.
The root correct to 3 decimal places, 2.000, is reached only after 12 iterations.
If we apply modified Newton’s Method then
xn+1 = xn − 2(xn³ − 7xn² + 16xn − 12)/(3xn² − 14xn + 16), n = 0, 1, 2, . . . (taking m = 2).
Starting with x0 = 1.0, we obtain
x1 = 1.8, x2 = 1.984615, x3 = 1.999884.
The root correct to 3 decimal places is 2.000, and in this case far fewer iterations are needed to
reach the desired accuracy.
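Both runs in Example 19 can be reproduced directly. A sketch (the helper `run` is our own):

```python
# Example 19: f(x) = x^3 - 7x^2 + 16x - 12 has a double root at x = 2.
f = lambda x: x ** 3 - 7 * x ** 2 + 16 * x - 12
df = lambda x: 3 * x ** 2 - 14 * x + 16

def run(m, x0=1.0, n=3):
    """Return the first n iterates of x <- x - m * f(x)/f'(x)."""
    x, out = x0, []
    for _ in range(n):
        x = x - m * f(x) / df(x)
        out.append(round(x, 6))
    return out

print(run(1))  # starts 1.4, 1.652632, ... and creeps toward 2 (linear)
print(run(2))  # [1.8, 1.984615, 1.999884] -- m = 2 restores fast convergence
```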

Exercises
(1) Use the bisection method to find solutions accurate to within 10−3 for the following problems.
a. x − 2^(−x) = 0 for 0 ≤ x ≤ 1   b. e^x − x² + 3x − 2 = 0 for 0 ≤ x ≤ 1
c. x + 1 − 2 sin(πx) = 0 for 0 ≤ x ≤ 0.5 and 0.5 ≤ x ≤ 1.
(2) Find an approximation to ∛25 correct to within 10−3 using the bisection algorithm.
(3) Find a bound for the number of iterations needed to achieve an approximation by bisection
method with accuracy 10−2 to the solution of x3 − x − 1 = 0 lying in the interval [1, 2]. Find
an approximation to the root with this degree of accuracy.
(4) Sketch the graphs of y = x and y = 2 sin x. Use the bisection method to find an approximation
to within 10−3 to the first positive value of x with x = 2 sin x.

(5) Let f (x) = (x + 2)(x + 1)2 x(x − 1)3 (x − 2). To which zero of f does the bisection method
converge when applied on the following intervals?
a. [-1.5, 2.5] b. [-0.5, 2.4] c. [-0.5, 3] d. [-3, -0.5].
(6) For each of the following equations, use the given interval or determine an interval [a, b] on
which fixed-point iteration will converge. Estimate the number of iterations necessary to obtain
approximations accurate to within 10−2 , and perform the calculations.
a. 2 + sin x − x = 0 use [2, 3]   b. x³ − 2x − 5 = 0 use [2, 3]   c. 3x² − e^x = 0   d. x − cos x = 0.
(7) Use the fixed-point iteration method to find smallest and second smallest positive roots of the
equation tan x = 4x, correct to 4 decimal places.
(8) Show that g(x) = π + 0.5 sin(x/2) has a unique fixed point on [0, 2π]. Use fixed-point iteration
to find an approximation to the fixed point that is accurate to within 10−2 . Also estimate the
number of iterations required to achieve 10−2 accuracy, and compare this theoretical estimate
to the number actually needed.
(9) Find all the zeros of f(x) = x² + 10 cos x by using the fixed-point iteration method for an
appropriate iteration function g. Find the zeros accurate to within 10−2 .
(10) What is the order of convergence of the iteration
xn+1 = xn(xn² + 3a)/(3xn² + a)
as it converges to the fixed point α = √a?
(11) Let A be a given positive constant and g(x) = 2x − Ax².
a. Show that if fixed-point iteration converges to a nonzero limit, then the limit is α = 1/A,
so the inverse of a number can be found using only multiplications and subtractions.
b. Find an interval about 1/A for which fixed-point iteration converges, provided x0 is in that
interval.
(12) Consider the root-finding problem f (x) = 0 with root α, with f 0 (x) 6= 0. Convert it to the
fixed-point problem
x = x + cf (x) = g(x)
with c a nonzero constant. How should c be chosen to ensure rapid convergence of
xn+1 = xn + cf (xn )
to α (provided that x0 is chosen sufficiently close to α)? Apply your way of choosing c to the
root-finding problem x³ − 5 = 0.
(13) Show that if A is any positive number, then the sequence defined by
xn = (1/2) xn−1 + A/(2 xn−1), for n ≥ 1,
converges to √A whenever x0 > 0. What happens if x0 < 0?
(14) A particle starts at rest on a smooth inclined plane whose angle θ is changing at a constant
rate
dθ/dt = ω < 0.
At the end of t seconds, the position of the object is given by
x(t) = −(g/(2ω²)) [ (e^(ωt) − e^(−ωt))/2 − sin ωt ].
Suppose the particle has moved 1.7 ft in 1 s. Find, to within 10−5, the rate ω at which θ
changes. Assume that g = 32.17 ft/s².
(15) Use secant method to find solutions accurate to within 10−3 for the following problems.
a. −x³ − cos x = 0 with x0 = −1 and x1 = 0.
b. x − cos x = 0, x ∈ [0, π/2].
c. e^x + 2^(−x) + 2 cos x − 6 = 0, x ∈ [1, 2].
(16) Use Newton’s method to find solutions accurate to within 10−3 to the following problems.
a. x − e^(−x) = 0 for 0 ≤ x ≤ 1.
b. 2x cos 2x − (x − 2)2 = 0 for 2 ≤ x ≤ 3 and 3 ≤ x ≤ 4.

(17) Use Newton’s method to approximate the positive root of 2 cos x = x⁴ correct to six decimal
places.
(18) A calculator is defective: it can only add, subtract, and multiply. Use the equation 1/x = 1.732,
Newton’s method, and the defective calculator to find 1/1.732 correct to 4 decimal places.
(19) Find all positive roots of the equation
10 ∫_0^x e^(−t²) dt = 1
correct to six decimals using Newton’s method.
(20) a. Apply Newton’s method to the function
f(x) = √x for x ≥ 0, f(x) = −√(−x) for x < 0,
with the root α = 0. What is the behavior of the iterates? Do they converge, and if so, at what
rate?
b. Do the same but with
f(x) = ∛(x²) for x ≥ 0, f(x) = −∛(x²) for x < 0.
(21) Suppose that
f(x) = e^(−1/x²) for x ≠ 0, f(0) = 0.
The function f is continuous everywhere, in fact differentiable arbitrarily often everywhere, and
0 is the only solution of f(x) = 0. Show that if x0 = 0.0001, it takes more than one hundred
million iterations of Newton’s method to get below 0.00005.
(22) The function f (x) = sin x has a zero on the interval (3, 4), namely, x = π. Perform three
iterations of Newton’s method to approximate this zero, using x0 = 4. Determine the absolute
error in each of the computed approximations. What is the apparent order of convergence?
(23) Apply Newton’s method with x0 = 0.8 to the equation f(x) = x³ − x² − x + 1 = 0, and
verify that the convergence is only of first-order. Further show that root α = 1 has multiplicity
2 and then apply the modified Newton’s method with m = 2 and verify that the convergence
is of second-order.
(24) Use Newton’s method to approximate, to within 10−4, the value of x that produces the point
on the graph of y = x² that is closest to (1, 0).
(25) Use Newton’s method and the modified Newton’s method to find a solution of
cos(x + √2) + x(x/2 + √2) = 0, for −2 ≤ x ≤ −1
accurate to within 10−3 .
(26) The function f(x) = tan πx − 6 has a zero at (arctan 6)/π ≈ 0.447431543. Use ten iterations of
each of the following methods to approximate this root. Which method is most successful and
why?
a. Bisection method in interval [0,1].
b. Secant method with x0 = 0 and x1 = 0.48.
c. Newton’s method with x0 = 0.4.
(27) Suppose α is a zero of multiplicity m of f , where f (m) is continuous on an open interval
containing α. Show that the fixed-point method x = g(x) with the following g has second-
order convergence:
g(x) = x − m f(x)/f′(x).
(28) An object falling vertically through the air is subjected to viscous resistance as well as to the
force of gravity. Assume that an object with mass m is dropped from a height s0 and that the
height of the object after t seconds is
s(t) = s0 − (mg/k) t + (m²g/k²)(1 − e^(−kt/m)),

where g = 32.17 ft/s2 and k represents the coefficient of air resistance in lb-s/ft. Suppose
s0 = 300 ft, m = 0.25 lb, and k = 0.1 lb-s/ft. Find, to within 0.01 s, the time it takes this
quarter-pounder to hit the ground.

Appendix A. Algorithms
Algorithm (Bisection):
To determine a root of f (x) = 0 that is accurate within a specified tolerance value ε, given values a
and b such that f (a) f (b) < 0.
Repeat:
   Set c = (a + b)/2.
   If f(a) f(c) < 0, then set b = c; otherwise set a = c.
Until |a − b| ≤ ε (tolerance value).
Print root as c.
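A direct Python transcription of the algorithm (the test equation from exercise 3 and the tolerance are our own choices):

```python
def bisect(f, a, b, eps=1e-8):
    """Bisection: requires f continuous with f(a)*f(b) < 0."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while abs(b - a) > eps:
        c = (a + b) / 2
        if f(a) * f(c) < 0:
            b = c
        else:
            a = c
    return (a + b) / 2

# Exercise 3: root of x^3 - x - 1 in [1, 2]
root = bisect(lambda x: x ** 3 - x - 1, 1.0, 2.0)
print(round(root, 6))  # 1.324718
```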

Algorithm (Fixed-point):
To find the fixed point of g in an interval [a, b], given the equation x = g(x) with an initial guess
x0 ∈ [a, b]
1. n = 1.
2. xn = g(xn−1 ).
3. If |xn − xn−1 | < ε then step 5.
4. n → n + 1; go to 2.
5. End of Procedure.
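The procedure translates directly to code. A sketch (ours), using the function from exercise 8 as a test case:

```python
import math

def fixed_point(g, x0, eps=1e-8, max_iter=200):
    """Iterate x = g(x) until successive iterates differ by less than eps."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Exercise 8: g(x) = pi + 0.5*sin(x/2) on [0, 2*pi]
p = fixed_point(lambda x: math.pi + 0.5 * math.sin(x / 2), x0=math.pi)
print(round(p, 6))
```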

Algorithm (Secant):
1. Give inputs and take two initial guesses x0 and x1 .
2. Start iterations:
x2 = x1 − f1 (x1 − x0)/(f1 − f0), where fi = f(xi).
3. If |f (x2 )| < ε (error tolerance) then stop and print the root.
4. Otherwise set x0 = x1, x1 = x2 and repeat step 2. Also check whether the number of iterations
has exceeded the maximum allowed.
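A sketch of the algorithm in Python (our own; exercise 15b is used as the test problem):

```python
import math

def secant(f, x0, x1, eps=1e-8, max_iter=100):
    """Secant iteration: x2 = x1 - f1*(x1 - x0)/(f1 - f0)."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(f(x2)) < eps:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    raise RuntimeError("secant method did not converge")

# Exercise 15b: x - cos x = 0 on [0, pi/2]
r = secant(lambda x: x - math.cos(x), 0.0, math.pi / 2)
print(round(r, 6))  # 0.739085
```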

Algorithm (Newton’s):
Let f : R → R be a differentiable function. The following algorithm computes an approximate solution
x of the equation f (x) = 0.
1. Choose an initial guess x0.
2. for n = 0, 1, 2, . . . do
      if |f(xn)| is sufficiently small then
         x∗ = xn; return x∗
      end
3.    xn+1 = xn − f(xn)/f′(xn)
      If |xn+1 − xn| is sufficiently small then
         x∗ = xn+1; return x∗
      end
4. end (for main loop)
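In code, the algorithm might look like this (a sketch; the tolerance values and the test equation from exercise 17 are our own choices):

```python
import math

def newton(f, df, x0, eps=1e-10, max_iter=50):
    """Newton's method; stops when f or the correction step is small enough."""
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) < eps:
            return x
        step = f(x) / df(x)
        x -= step
        if abs(step) < eps:
            return x
    raise RuntimeError("Newton's method did not converge")

# Exercise 17: positive root of 2*cos(x) = x^4
r = newton(lambda x: 2 * math.cos(x) - x ** 4,
           lambda x: -2 * math.sin(x) - 4 * x ** 3,
           x0=1.0)
print(round(r, 6))
```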

Bibliography
[Burden] Richard L. Burden, J. Douglas Faires and Annette Burden, “Numerical Analysis,” Cengage
Learning, 10th edition, 2015.

[Atkinson] K. Atkinson and W. Han, “Elementary Numerical Analysis,” John Wiley and Sons, 3rd
edition, 2004.
