Selection
MATH20602 - 2014
1. (from May 21, 2010)
State Lagrange's formula for the interpolating polynomial of degree $n$ or less, $p_n(x)$, which passes through
the points $(x_i, f(x_i))$, $i = 0, 1, \dots, n$, where all the points $x_i$ are distinct.
The following is a partial tabulation of the function f (x) = ln(1 + x):
x       0.3      0.4      0.6      0.7
f(x)    0.2624   0.3365   0.4700   0.5306
Using all four of these points in Lagrange's formula, estimate ln(1.5).
Solution 1. Lagrange's formula is
$p_n(x) = \sum_{i=0}^{n} L_i(x) f(x_i)$,
where
$L_i(x) = \prod_{j=0,\, j \neq i}^{n} \dfrac{x - x_j}{x_i - x_j}$.
For evaluating $\ln(1.5)$, we compute $p_n(x)$ for $x = 0.5$ (as $p_n(x)$ interpolates $f(x) = \ln(1+x)$) using the
following table:
x_i    f(x_i)    x - x_i    L_i(x)    L_i(x) f(x_i)
0.3    0.2624      0.2      -1/6        -0.04373
0.4    0.3365      0.1       2/3         0.22433
0.6    0.4700     -0.1       2/3         0.31333
0.7    0.5306     -0.2      -1/6        -0.08843
                                  Sum =   0.4055
As indicated, the interpolating polynomial evaluated at $x = 0.5$ is the sum of the last column, so we have
the approximation $\ln(1.5) \approx 0.4055$.
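As a quick numerical check (not part of the original exam solution), the following Python sketch evaluates the Lagrange form directly at $x = 0.5$ and compares it with $\ln(1.5)$:

```python
# Sketch: evaluate the Lagrange interpolating polynomial at x = 0.5
# for the tabulated values of f(x) = ln(1 + x).

import math

xs = [0.3, 0.4, 0.6, 0.7]
fs = [0.2624, 0.3365, 0.4700, 0.5306]

def lagrange(x, xs, fs):
    """Evaluate the interpolating polynomial through (xs[i], fs[i]) at x."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        Li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                Li *= (x - xj) / (xi - xj)
        total += Li * fi
    return total

print(lagrange(0.5, xs, fs))   # approx. 0.4055
print(math.log(1.5))           # 0.4055 to four decimal places
```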
2. (from May 22, 2012)
(a) Define the divided differences f [xi , xi+1 , . . . , xi+k ] for a function f (x).
(b) Consider the quadratic polynomial
$p_2(x) = f[x_0] + f[x_0, x_1](x - x_0) + f[x_0, x_1, x_2](x - x_0)(x - x_1)$.
Show that this polynomial interpolates f (x) at the points (xi , f (xi )), i = 0, 1, 2.
(c) Use divided differences to construct the quadratic polynomial p2 (x) that passes through the points
(0.1, 0.1248), (0.2, 0.2562), and (0.4, 0.6108).
(d) Given that all these points lie on the curve y = f (x), use the polynomial p2 (x) of the previous part to
estimate f (0.3).
Solution 2. (a) The divided differences $f[x_i, \dots, x_{i+k}]$ are defined recursively as follows:
$f[x_i] := f(x_i)$,
$f[x_i, x_{i+1}] := \dfrac{f[x_{i+1}] - f[x_i]}{x_{i+1} - x_i}$,
$f[x_i, x_{i+1}, \dots, x_{i+k}] := \dfrac{f[x_{i+1}, \dots, x_{i+k}] - f[x_i, \dots, x_{i+k-1}]}{x_{i+k} - x_i}$.
(b) The Newton interpolation polynomial of degree 2 is defined as
$p_2(x) = f[x_0] + f[x_0, x_1](x - x_0) + f[x_0, x_1, x_2](x - x_0)(x - x_1)$.
It is required to satisfy the interpolation condition $p_2(x_i) = f(x_i)$ for $i = 0, 1, 2$. We verify this as follows:
$p_2(x_0) = f[x_0] + 0 + 0 = f(x_0)$,
$p_2(x_1) = f[x_0] + f[x_0, x_1](x_1 - x_0) = f(x_0) + \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}(x_1 - x_0) = f(x_1), \qquad (0.1)$
and
$p_2(x_2) = f[x_0] + f[x_0, x_1](x_2 - x_0) + f[x_0, x_1, x_2](x_2 - x_0)(x_2 - x_1)$
$\quad = f[x_0] + f[x_0, x_1](x_2 - x_0) + \dfrac{f[x_1, x_2] - f[x_0, x_1]}{x_2 - x_0}(x_2 - x_0)(x_2 - x_1)$
$\quad = f[x_0] + f[x_0, x_1](x_2 - x_0) + f[x_1, x_2](x_2 - x_1) - f[x_0, x_1](x_2 - x_1)$
$\quad = f[x_0] + f[x_0, x_1](x_1 - x_0) + f[x_1, x_2](x_2 - x_1)$
$\quad = f(x_1) + \dfrac{f(x_2) - f(x_1)}{x_2 - x_1}(x_2 - x_1)$   (we used identity (0.1) here)
$\quad = f(x_2)$.
(c) The divided difference table for the given points is

0.1   0.1248
               1.314
0.2   0.2562              1.53
               1.773
0.4   0.6108
The Newton interpolation polynomial has the form
$p_2(x) = 0.1248 + 1.314(x - 0.1) + 1.53(x - 0.1)(x - 0.2)$.
(d) Evaluated at $x = 0.3$ we get
$p_2(0.3) = 0.1248 + 0.2 \cdot 1.314 + 0.1 \cdot 0.2 \cdot 1.53 = 0.4182$.
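For reference, here is a short Python sketch (an illustration, not from the original paper) that builds the divided-difference coefficients for these three points and evaluates the Newton form at $x = 0.3$ by nested multiplication:

```python
# Sketch: Newton divided differences for the three given points,
# then evaluation of p2 at x = 0.3 by nested (Horner-like) multiplication.

xs = [0.1, 0.2, 0.4]
fs = [0.1248, 0.2562, 0.6108]

def divided_differences(xs, fs):
    """Return the coefficients [f[x0], f[x0,x1], f[x0,x1,x2], ...]."""
    coeffs = list(fs)
    n = len(xs)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - k])
    return coeffs

def newton_eval(x, xs, coeffs):
    """Evaluate the Newton form with nested multiplication."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[i]) + coeffs[i]
    return result

c = divided_differences(xs, fs)   # [0.1248, 1.314, 1.53]
print(c)
print(newton_eval(0.3, xs, c))    # 0.4182
```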
3. (from May 21, 2010)
The integral $\int_1^2 \frac{e^{-x}}{x}\,dx$ is to be estimated using the Composite Trapezium Rule
$\int_{x_0}^{x_n} f(x)\,dx \approx \frac{h}{2}\Bigl(f(x_0) + 2\sum_{j=1}^{n-1} f(x_j) + f(x_n)\Bigr)$,
where $x_j = x_0 + jh$, $j = 0, 1, \dots, n$, and $h$ is the constant step size. The error in this quadrature rule is
given by
$-\frac{1}{12}(x_n - x_0)\, h^2 f''(\xi)$
for some $\xi \in (x_0, x_n)$. Find the largest value of $h$ that can be used if the error in the estimate is to be no
more than $5 \times 10^{-4}$ in modulus.
Solution 3. For $f(x) = e^{-x}/x$ we have $f''(x) = e^{-x}\bigl(\tfrac{1}{x} + \tfrac{2}{x^2} + \tfrac{2}{x^3}\bigr)$. As $|f''(x)|$ is decreasing on the interval $[1, 2]$, it takes its largest value at $x = 1$ in that interval. Hence,
$|f''(x)| \le e^{-1}(1 + 2 + 2) = 5e^{-1} \approx 1.84, \qquad x \in [1, 2]$.
Let $E(h)$ be the error term in the composite trapezium rule with step length $h$. Since $x_n - x_0 = 1$,
$|E(h)| \le \frac{1}{12}\, h^2 \cdot 1.84$.
We require that
$\frac{1}{12}\, h^2 \cdot 1.84 \le 5 \times 10^{-4}$.
This is satisfied if $h \le 5.7 \times 10^{-2}$.
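A small Python sketch can confirm this step-size bound numerically; the choice $n = 18$ subintervals (so that $h < 0.057$) and the fine reference grid used for comparison are my own illustration, not part of the exam solution:

```python
# Sketch: check the step-size bound for the composite trapezium rule
# applied to f(x) = exp(-x)/x on [1, 2].

import math

def f(x):
    return math.exp(-x) / x

def trapezium(f, a, b, n):
    """Composite trapezium rule with n subintervals of width h = (b-a)/n."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    total += sum(f(a + j * h) for j in range(1, n))
    return h * total

# |f''(x)| <= 5/e on [1, 2], so |E(h)| <= h^2 * (5/e) / 12.
h_max = math.sqrt(12 * 5e-4 / (5 / math.e))
print(h_max)                                        # about 0.057

# With n = 18 (h = 1/18 < h_max) the error is comfortably below 5e-4;
# a very fine trapezium sum stands in for the exact value.
reference = trapezium(f, 1.0, 2.0, 20000)
print(abs(trapezium(f, 1.0, 2.0, 18) - reference))  # < 5e-4
```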
4. (from June 3, 2011)
(a) The numerical integration formula
$\int_0^h x^{1/2} f(x)\,dx \approx h^{3/2}\bigl(\alpha f(0) + \beta f(h)\bigr) + h^{5/2}\bigl(\gamma_1 f'(0) + \gamma_2 f'(h)\bigr)$
is required to be exact whenever $f(x)$ is a polynomial of degree 3 or less. Show that $\alpha = \frac{16}{63}$ and $\beta = \frac{26}{63}$, and find the values of $\gamma_1$ and $\gamma_2$.
(b) Estimate
$\int_0^{\pi/2} x^{1/2} \sin(x)\,dx$
by using the formula derived above. (Give three significant figures in your answer.)
Solution 4. (a) The idea here is to use the exactness conditions to build a system of equations that has the unknown parameters $\alpha, \beta, \gamma_1, \gamma_2$ as its solution. We start by comparing the exact integrals with the interpolants
for $f(x) = 1, x, x^2, x^3$.
The exact integrals are
$\int_0^h x^{1/2}\,dx = \frac{2}{3} h^{3/2}, \qquad \int_0^h x^{1/2}\, x\,dx = \frac{2}{5} h^{5/2}$,
$\int_0^h x^{1/2}\, x^2\,dx = \frac{2}{7} h^{7/2}, \qquad \int_0^h x^{1/2}\, x^3\,dx = \frac{2}{9} h^{9/2}$.
If we denote the interpolants by $I(f)$, then they are given by
$I(1) = h^{3/2}(\alpha + \beta)$,
$I(x) = h^{3/2}\,\beta h + h^{5/2}(\gamma_1 + \gamma_2)$,
$I(x^2) = h^{3/2}\,\beta h^2 + 2 h^{5/2}\,\gamma_2 h$,
$I(x^3) = h^{3/2}\,\beta h^3 + 3 h^{5/2}\,\gamma_2 h^2$.
Equating each interpolant with the corresponding exact integral gives
$\alpha + \beta = \frac{2}{3}, \qquad \beta + \gamma_1 + \gamma_2 = \frac{2}{5}, \qquad \beta + 2\gamma_2 = \frac{2}{7}, \qquad \beta + 3\gamma_2 = \frac{2}{9}$.
These are four equations in four unknowns. Subtracting the third equation from the last one, we get
$\gamma_2 = \frac{2}{9} - \frac{2}{7} = -\frac{4}{63}$.
Substituting back gives
$\beta = \frac{2}{7} - 2\gamma_2 = \frac{26}{63}, \qquad \alpha = \frac{2}{3} - \beta = \frac{16}{63}, \qquad \gamma_1 = \frac{2}{5} - \beta - \gamma_2 = \frac{16}{315}$.
(b) Setting $h = \pi/2$ and $f(x) = \sin(x)$, $f'(x) = \cos(x)$, the numerical integration formula reads
$\int_0^{\pi/2} x^{1/2} \sin(x)\,dx \approx \left(\frac{\pi}{2}\right)^{3/2}\!\left(\frac{16}{63}\sin(0) + \frac{26}{63}\sin(\pi/2)\right) + \left(\frac{\pi}{2}\right)^{5/2}\!\left(\frac{16}{315}\cos(0) - \frac{4}{63}\cos(\pi/2)\right)$
$\quad = \frac{26}{63}\left(\frac{\pi}{2}\right)^{3/2} + \frac{16}{315}\left(\frac{\pi}{2}\right)^{5/2} = 0.9695556 \approx 0.970$ (to three significant figures).
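The following Python sketch (my own check, with the names a, b, g1, g2 standing in for $\alpha, \beta, \gamma_1, \gamma_2$) applies the derived rule to $f(x) = \sin(x)$ on $[0, \pi/2]$ and compares it against a crude midpoint sum of $\sqrt{x}\sin(x)$:

```python
# Sketch: apply the derived rule
#   int_0^h sqrt(x) f(x) dx ~ h^(3/2)*(a f(0) + b f(h)) + h^(5/2)*(g1 f'(0) + g2 f'(h))
# with a = 16/63, b = 26/63, g1 = 16/315, g2 = -4/63, to f(x) = sin(x) on [0, pi/2].

import math

a, b = 16 / 63, 26 / 63
g1, g2 = 16 / 315, -4 / 63

def rule(f, df, h):
    return h**1.5 * (a * f(0.0) + b * f(h)) + h**2.5 * (g1 * df(0.0) + g2 * df(h))

h = math.pi / 2
print(rule(math.sin, math.cos, h))   # about 0.9696

# Crude comparison: a midpoint sum of sqrt(x)*sin(x) on a fine grid,
# which should land close to the rule's estimate.
n = 100000
dx = h / n
check = sum(math.sqrt((k + 0.5) * dx) * math.sin((k + 0.5) * dx) for k in range(n)) * dx
print(check)
```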
5. (from May 21, 2010)
Solve the following system of simultaneous linear equations by carrying out two iterations of the Gauss-Seidel method using the initial vector $(0.3, 0.3, 0.25)^T$:
$\begin{pmatrix} 20 & -8 & 0 \\ -4 & 20 & 4 \\ 0 & -4 & 20 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 6 \\ 6 \\ 5 \end{pmatrix}$.
Show that the Gauss-Seidel method will converge from any initial vector x0 for this set of equations.
Solution 5. The Gauss-Seidel method is the iteration
$x^{(k+1)} = T x^{(k)} + c, \qquad T = -(L + D)^{-1} U, \quad c = (L + D)^{-1} b$,
where $A = L + D + U$ with $L$, $D$ and $U$ denoting the strictly lower triangular, diagonal, and strictly upper triangular parts,
respectively.
We can also write this iteration in terms of the individual components of the vectors being iterated. The
componentwise version of the Gauss-Seidel iteration scheme for this system of equations is given as
$x_1^{(k+1)} = \frac{1}{20}\bigl(6 + 8 x_2^{(k)}\bigr)$,
$x_2^{(k+1)} = \frac{1}{20}\bigl(6 + 4 x_1^{(k+1)} - 4 x_3^{(k)}\bigr)$,
$x_3^{(k+1)} = \frac{1}{20}\bigl(5 + 4 x_2^{(k+1)}\bigr)$.
Starting with the initial value $x^{(0)} = (0.3, 0.3, 0.25)^T$, we get
$x_1^{(1)} = \frac{1}{20}(6 + 8 \cdot 0.3) = 0.42$,
$x_2^{(1)} = \frac{1}{20}(6 + 4 \cdot 0.42 - 4 \cdot 0.25) = 0.3340$,
$x_3^{(1)} = \frac{1}{20}(5 + 4 \cdot 0.3340) = 0.3168$,
and
$x_1^{(2)} = \frac{1}{20}(6 + 8 \cdot 0.3340) = 0.4336$,
$x_2^{(2)} = \frac{1}{20}(6 + 4 \cdot 0.4336 - 4 \cdot 0.3168) = 0.3234$,
$x_3^{(2)} = \frac{1}{20}(5 + 4 \cdot 0.3234) = 0.3147$.
The method converges if and only if $\rho(T) < 1$. To verify this, we compute the eigenvalues of $T$ as the
roots of the polynomial
$\det\bigl(\lambda(L + D) + U\bigr) = 0$.
Written out, this determinant looks like
$\det \begin{pmatrix} 20\lambda & -8 & 0 \\ -4\lambda & 20\lambda & 4 \\ 0 & -4\lambda & 20\lambda \end{pmatrix} = 20\lambda(400\lambda^2 + 16\lambda) + 8(-80\lambda^2) = 8000\lambda^3 - 320\lambda^2 = 0$.
Two of the roots are $0$, while the third root is
$\lambda = \frac{320}{8000} = 0.04$.
This is also the value of the spectral radius, which is clearly $< 1$, and therefore the method converges.
Alternatively, we can read off an expression for the matrix $T$ from the componentwise iteration:
$T = \begin{pmatrix} 0 & \frac{2}{5} & 0 \\ 0 & \frac{2}{25} & -\frac{1}{5} \\ 0 & \frac{2}{125} & -\frac{1}{25} \end{pmatrix}$.
Then one can compute a norm (for example, the $\infty$-norm $\|T\|_\infty$) to see that the norm criterion for convergence applies. With the above matrix, clearly $\|T\|_\infty = \frac{2}{5} < 1$, so we have convergence with respect to the
$\infty$-norm and therefore also with respect to the 2-norm.
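For illustration only, here is a Python/NumPy sketch (assuming NumPy is available) that reproduces the two Gauss-Seidel sweeps and the spectral radius of the iteration matrix:

```python
# Sketch: two Gauss-Seidel sweeps for the 3x3 system, plus the spectral
# radius of the iteration matrix T = -(L + D)^{-1} U.

import numpy as np

A = np.array([[20.0, -8.0,  0.0],
              [-4.0, 20.0,  4.0],
              [ 0.0, -4.0, 20.0]])
b = np.array([6.0, 6.0, 5.0])

x = np.array([0.3, 0.3, 0.25])
for sweep in range(2):
    for i in range(3):
        # Use already-updated components (Gauss-Seidel) for indices < i.
        s = b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]
        x[i] = s / A[i, i]
    print(sweep + 1, x)

LD = np.tril(A)            # L + D
U = np.triu(A, k=1)        # strictly upper triangular part
T = -np.linalg.solve(LD, U)
print(max(abs(np.linalg.eigvals(T))))   # spectral radius, 0.04
```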
6. (from May 22, 2012)
(a) Calculate $\|A\|_1$ and $\|A\|_2$ for the matrix
$A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$.
Solution 6. (a) The 1-norm is the maximum absolute column sum, so $\|A\|_1 = \max(1 + 3, 2 + 4) = 6$. For the 2-norm we compute
$A^T A = \begin{pmatrix} 10 & 14 \\ 14 & 20 \end{pmatrix}$,
whose eigenvalues are the roots of $\lambda^2 - 30\lambda + 4 = 0$, that is, $\lambda = 15 \pm \sqrt{221}$. Therefore,
$\|A\|_2 = \sqrt{15 + \sqrt{221}} \approx 5.46$.
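A brief NumPy check of these values (illustrative, not part of the exam solution):

```python
# Sketch: check the matrix norms of A = [[1, 2], [3, 4]] with NumPy.

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

norm1 = np.linalg.norm(A, 1)            # max absolute column sum = 6
norm2 = np.linalg.norm(A, 2)            # largest singular value

# The same 2-norm via the eigenvalues of A^T A:
lam_max = max(np.linalg.eigvalsh(A.T @ A))
print(norm1, norm2, np.sqrt(lam_max))   # 6.0  5.4649...  5.4649...
```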
7. (from May 21, 2010)
Show that the equation
$x^3 - 2x - 2 = 0$
has a root between 1 and 2. By calculating the maximum and minimum values of $x^3 - 2x - 2$, show that
this is the only real root.
Consider the following iteration schemes for computing the real root of the above equation:
(a) $x_{n+1} = \frac{1}{2}(x_n^3 - 2)$, $\quad n = 0, 1, \dots$
(b) $x_{n+1} = \dfrac{2x_n^3 + 2}{3x_n^2 - 2}$, $\quad n = 0, 1, \dots$
Show that for one of these schemes the fixed point theorem does not guarantee convergence, one will
converge linearly, and one will converge quadratically.
Solution 7. The function
$f(x) = x^3 - 2x - 2$
satisfies $f(1) = -3$ and $f(2) = 2$. The sign changes between 1 and 2 ($f(1)f(2) < 0$), and by the intermediate
value theorem and the fact that the function is continuous, it must have a root in that interval.
The derivative is given by $3x^2 - 2$. Setting this to zero, we see that we have critical points at $x = \pm\sqrt{2/3}$. Looking at the
second derivative, we see that $x_1 = \sqrt{2/3}$ is a local minimum, while $x_2 = -\sqrt{2/3}$ is a local maximum. It
follows that on the interval $[1, 2]$ the function $f(x)$ is strictly monotonically increasing (otherwise it would
have a critical point in that interval), and can therefore only have one root there. Moreover, the value at the
local maximum, $f(-\sqrt{2/3}) \approx -0.91$, is negative, so $f(x) < 0$ for all $x \le \sqrt{2/3}$ and there are no further real roots.
In the following, we verify the conditions of the fixed point theorem.
For the iteration (a), $g(x) = \frac{1}{2}(x^3 - 2)$. We first verify that a fixed point of this is indeed a root of the above equation
$f(x) = 0$. Note that
$x = g(x) \iff 2x = x^3 - 2 \iff f(x) = 0$.
Let's verify the conditions of the fixed-point theorem. We have $g(1) = -\frac{1}{2}$ and $g(2) = 3$, both outside
the interval $[1, 2]$. The derivative $g'(x) = 3x^2/2$ also fails the condition $|g'(x)| < 1$ in any interval
around the root in $[1, 2]$, so we can't deduce convergence.
For the iteration (b), $g(x) = \dfrac{2x^3 + 2}{3x^2 - 2}$. Again, a fixed point satisfies
$x = \dfrac{2x^3 + 2}{3x^2 - 2} \iff 3x^3 - 2x = 2x^3 + 2 \iff f(x) = 0$,
so a fixed point is indeed a root of $f(x)$. Next, let's see if we can verify quadratic convergence. From
the lecture notes (Lecture 17) we know that the fixed point iteration converges quadratically if, at the
fixed point $\xi$, we have
$g'(\xi) = 0$.
The derivative of $g$ is given by
$g'(x) = \dfrac{6x^2}{3x^2 - 2} - \dfrac{(2x^3 + 2)\,6x}{(3x^2 - 2)^2} = \dfrac{6x}{3x^2 - 2}\left(x - \dfrac{2x^3 + 2}{3x^2 - 2}\right) = \dfrac{6x}{3x^2 - 2}\bigl(x - g(x)\bigr)$.
If $\xi$ is a fixed point, that is, $\xi = g(\xi)$, then clearly $g'(\xi) = 0$. It follows that we have quadratic
convergence for a starting point that is close enough to $\xi$.
Note that if we write down Newton's formula for the function $f(x)$, we get
$g(x) = x - \dfrac{f(x)}{f'(x)} = x - \dfrac{x^3 - 2x - 2}{3x^2 - 2} = \dfrac{2x^3 + 2}{3x^2 - 2}$.
That is, the iteration $x_{n+1} = g(x_n)$ is precisely the fixed-point formulation of Newton's method.
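To illustrate the contrast, the following Python sketch (the starting value 1.7 is an arbitrary choice of mine, close to the root) runs a few steps of both schemes:

```python
# Sketch: a few steps of both fixed-point schemes from x0 = 1.7, showing
# that (a) wanders away from the root while (b) converges very quickly.

def g_a(x):
    return 0.5 * (x**3 - 2.0)

def g_b(x):
    return (2.0 * x**3 + 2.0) / (3.0 * x**2 - 2.0)

for name, g in (("(a)", g_a), ("(b)", g_b)):
    x = 1.7
    print(name, end=" ")
    for _ in range(5):
        x = g(x)
        print(f"{x:.6f}", end=" ")
    print()
```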
8. (based on May 22, 2012)
Use the Newton iteration (also called Newton-Raphson) method to find (correct to three significant figures)
a root of the equation F (x) = 0 near x = 2, when
$F(x) = x^2 + 10\cos(x)$.
(Here, x is measured in radians).
Solution 8. Given the function $F(x) = x^2 + 10\cos(x)$, the derivative is $F'(x) = 2x - 10\sin(x)$ and
Newton's method is the fixed-point iteration for the function
$g(x) = x - \dfrac{F(x)}{F'(x)} = x - \dfrac{x^2 + 10\cos(x)}{2x - 10\sin(x)}$.
Starting from $x_0 = 2$, the first two iterates are
$x_1 = 2 - \dfrac{F(2)}{F'(2)} = 2 - \dfrac{-0.1615}{-5.0930} = 1.9683, \qquad x_2 = 1.9689$.
As x1 and x2 agree to three significant figures, the required root is 1.97 (to three significant figures).
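The iterates can be reproduced with a few lines of Python (an illustrative sketch, not part of the original solution):

```python
# Sketch: Newton iterates for F(x) = x^2 + 10*cos(x) starting from x0 = 2.

import math

def F(x):
    return x * x + 10.0 * math.cos(x)

def dF(x):
    return 2.0 * x - 10.0 * math.sin(x)

x = 2.0
for k in range(4):
    x = x - F(x) / dF(x)
    print(k + 1, round(x, 6))
# x1 = 1.968296, x2 = 1.968873, ... -> 1.97 to three significant figures
```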
9. (from May 21, 2013)
A quadrature rule $I(f)$ has degree of precision $k$ on $[0, 1]$ if it evaluates the integral
$\int_0^1 f(x)\,dx$ exactly for polynomials of degree up to $k$. Determine a value of $\alpha \in [0, 1]$ such that the degree
of precision of the quadrature rule
$I(f) = \frac{2}{3} f(\alpha) - \frac{1}{3} f(1/2) + \frac{2}{3} f(1 - \alpha)$
on the interval $[0, 1]$ is at least two.
Solution 9. The answer is
$I(f) = \frac{2}{3} f(1/4) - \frac{1}{3} f(1/2) + \frac{2}{3} f(3/4)$,
that is, $\alpha = 1/4$. To determine this, we first evaluate the integrals of $1, x, x^2$:
$\int_0^1 1\,dx = 1, \qquad \int_0^1 x\,dx = \frac{1}{2}, \qquad \int_0^1 x^2\,dx = \frac{1}{3}$.
We then write down the quadrature rule for each of these functions:
$I(1) = \frac{2}{3} - \frac{1}{3} + \frac{2}{3} = 1$,
$I(x) = \frac{2}{3}\alpha - \frac{1}{3} \cdot \frac{1}{2} + \frac{2}{3}(1 - \alpha) = \frac{1}{2}$,
$I(x^2) = \frac{2}{3}\alpha^2 - \frac{1}{3} \cdot \frac{1}{4} + \frac{2}{3}(1 - \alpha)^2 = \frac{1}{3}$.
The first equation is always satisfied. The second equation is also satisfied, since $\alpha + 1 - \alpha = 1$. From the
third one we get a quadratic equation
$16\alpha^2 - 16\alpha + 3 = 0$,
which we see has solutions $1/4$ and $3/4$ in the interval $[0, 1]$.
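A short Python check using exact rational arithmetic (my own illustration) confirms that the rule with $\alpha = 1/4$ integrates the monomials of degree up to at least two exactly:

```python
# Sketch: check the degree of precision of
#   I(f) = (2/3) f(1/4) - (1/3) f(1/2) + (2/3) f(3/4)
# on [0, 1] by applying it to the monomials x^k.

from fractions import Fraction as Fr

nodes = [Fr(1, 4), Fr(1, 2), Fr(3, 4)]
weights = [Fr(2, 3), Fr(-1, 3), Fr(2, 3)]

for k in range(5):
    exact = Fr(1, k + 1)                      # integral of x^k over [0, 1]
    approx = sum(w * x**k for w, x in zip(weights, nodes))
    print(k, exact == approx, approx)         # exact for k = 0, 1, 2 (and 3)
```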
10. (from May 21, 2013)
(a) In order to solve the equation
$e^{-x^2/2} - 3\cos(x^2) = 0$,
perform two iterations of the bisection method with starting values 0.4 and 1.8, followed by one iteration of Newton's method on the approximate value found with the bisection method ($x$ measured in
radians).
(b) How does the Newton method behave if we choose x0 = 0.4 as starting point?
Solution 10.
(a) In order to solve the equation
$f(x) = e^{-x^2/2} - 3\cos(x^2) = 0$,
we will need the derivative
$f'(x) = -x e^{-x^2/2} + 6x \sin(x^2)$.
Starting with 0.4 and 1.8, we get the iterates for the bisection method

x       f(x)       lower bound   upper bound
0.4    -2.0386        0.4            1.8
1.8     3.1838
1.1    -0.5130        1.1            1.8

and the estimate $(1.1 + 1.8)/2 = 1.45$ after the second iterate. Using this as starting point $x_0$ in
Newton's method,
$x_1 = x_0 - f(x_0)/f'(x_0) = 1.45 - 1.8705/6.9921 = 1.1825$.
The function value at this point is $f(1.1825) = -0.0180$, already very close to zero.
(b) At the point 0.4 the derivative is $f'(0.4) = 0.0131$, very small. Consequently, applying a step of
Newton's method we get
$0.4 - f(0.4)/f'(0.4) = 155.8121$.
We end up very far from our original point.
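The whole procedure of part (a) and the runaway step of part (b) can be reproduced with the following Python sketch (illustrative only; the code follows the convention above of taking the second midpoint as the bisection estimate):

```python
# Sketch: two bisection steps on [0.4, 1.8] followed by one Newton step,
# for f(x) = exp(-x^2/2) - 3*cos(x^2).

import math

def f(x):
    return math.exp(-x * x / 2.0) - 3.0 * math.cos(x * x)

def df(x):
    return -x * math.exp(-x * x / 2.0) + 6.0 * x * math.sin(x * x)

a, b = 0.4, 1.8
m = 0.5 * (a + b)          # first bisection iterate: 1.1
if f(a) * f(m) < 0:
    b = m
else:
    a = m                  # f(0.4) and f(1.1) are both negative, so a = 1.1
x0 = 0.5 * (a + b)         # second bisection iterate: 1.45
print(x0)

x1 = x0 - f(x0) / df(x0)
print(x1, f(x1))           # about 1.1825, with f(x1) close to zero

# Part (b): from x0 = 0.4 the derivative is tiny and the step overshoots.
print(0.4 - f(0.4) / df(0.4))   # about 155.8
```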