Ch.-4 Solution of A System of Linear Equations

FACULTY OF ENGINEERING AND TECHNOLOGY

DEPARTMENT OF APPLIED SCIENCE AND HUMANITIES
(4th SEMESTER) B.TECH PROGRAMME
PROBABILITY, STATISTICS AND NUMERICAL
METHODS (203191251)
ACADEMIC YEAR 2019-2020

SOLUTIONS OF A SYSTEM OF LINEAR EQUATIONS:

Iterative methods:

The direct methods lead to exact solutions in many cases but are subject to errors due to round-off and other factors. In an iterative method, an initial approximation to the true solution is assumed to start the method; applying the method repeatedly then yields better and better approximations. For large systems, iterative methods are faster than direct methods, and round-off errors are also smaller. An error made at any stage of the computation gets corrected automatically in the subsequent steps.
We will discuss two iterative methods:
(i) Gauss-Jacobi method
(ii) Gauss-Seidel method

DIAGONALLY DOMINANT PROPERTY (SUFFICIENT CONDITION FOR CONVERGENCE):
A matrix is said to be diagonally dominant if for every row of the matrix, the
magnitude of the diagonal entry in a row is larger than or equal to the sum of the
magnitudes of all the other (non-diagonal) entries in that row. More precisely, the
matrix A is diagonally dominant if
|aii| ≥ Σj≠i |aij|   for all i,

where aij denotes the entry in the ith row and jth column.


Note that this definition uses a weak inequality, and is therefore sometimes
called weak diagonal dominance. If a strict inequality (>) is used, this is
called strict diagonal dominance. The unqualified term diagonal dominance can
mean both strict and weak diagonal dominance.
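This row-wise check translates directly into code. Below is a minimal Python sketch (the function name and the `strict` flag are illustrative choices, not from the text):

```python
def is_diagonally_dominant(A, strict=False):
    """Row-wise test of (weak or strict) diagonal dominance.

    A is a square matrix given as a list of rows. For each row i we
    compare |a_ii| against the sum of |a_ij| over all j != i.
    """
    n = len(A)
    for i in range(n):
        off_diag = sum(abs(A[i][j]) for j in range(n) if j != i)
        diag = abs(A[i][i])
        if diag < off_diag or (strict and diag == off_diag):
            return False
    return True
```

For instance, for the Gauss-Seidel example later in this chapter, `is_diagonally_dominant([[12, 3, -5], [1, 5, 3], [3, 7, 13]], strict=True)` returns True, so convergence of the iterative methods is guaranteed for that system.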
(i) Gauss-Jacobi method:

Given a general set of n equations in n unknowns, we have

a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = c1
a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = c2
...
an1 x1 + an2 x2 + an3 x3 + ... + ann xn = cn

If the diagonal elements are non-zero, each equation is rewritten for the corresponding unknown; that is, the first equation is rewritten with x1 on the left hand side, the second equation is rewritten with x2 on the left hand side, and so on, as follows:

x1 = (c1 − a12 x2 − a13 x3 − ... − a1n xn) / a11
x2 = (c2 − a21 x1 − a23 x3 − ... − a2n xn) / a22
...
x(n−1) = (c(n−1) − a(n−1),1 x1 − a(n−1),2 x2 − ... − a(n−1),(n−2) x(n−2) − a(n−1),n xn) / a(n−1),(n−1)
xn = (cn − an1 x1 − an2 x2 − ... − a(n),(n−1) x(n−1)) / ann

These equations can be rewritten in summation form as

x1 = (c1 − Σj≠1 a1j xj) / a11
x2 = (c2 − Σj≠2 a2j xj) / a22
...
x(n−1) = (c(n−1) − Σj≠n−1 a(n−1),j xj) / a(n−1),(n−1)
xn = (cn − Σj≠n anj xj) / ann

where each sum runs over j = 1, 2, ..., n excluding the indicated index. Hence, for any row i,

xi = (ci − Σj≠i aij xj) / aii,   i = 1, 2, ..., n.

Now, to find the xi's, one assumes an initial guess for the xi's and then uses the rewritten equations to calculate the new estimates. In the Gauss-Jacobi method, all the new estimates of an iteration are computed from the values of the previous iteration. The iteration process is continued until two successive approximations are nearly equal.

Working Rule:

(i) Arrange the equations so that the leading (diagonal) elements are large in magnitude in their respective rows, satisfying the conditions

|a11| ≥ |a12| + |a13|
|a22| ≥ |a21| + |a23|
|a33| ≥ |a31| + |a32|
(ii) Express the variables having large coefficients in terms of other variables.

(iii) Start iteration 1 by assuming the initial values of (x, y, z) as (x0, y0, z0) and obtain (x1, y1, z1) from the rewritten equations; for a 3×3 system these are

x = (c1 − a12 y − a13 z) / a11
y = (c2 − a21 x − a23 z) / a22
z = (c3 − a31 x − a32 y) / a33

(iv) Start iteration 2 by putting the values of (x, y, z) as (x1, y1, z1) and obtain (x2, y2, z2).

(v) The above iteration process is continued until two successive approximations are nearly equal.

Example: Solve the system of linear equations by the Gauss-Jacobi method correct up to 2 decimal places.

6x + 2y − z = 4,  x + 5y + z = 3,  2x + y + 4z = 27

Solution:

Rewriting the equations,

x = (1/6)(4 − 2y + z)
y = (1/5)(3 − x − z)          ......................................(1)
z = (1/4)(27 − 2x − y)

Iteration 1: Assuming x0 = 0, y0 = 0 and z0 = 0 as the initial approximation and putting these in Eq. (1),

x1 = 4/6 ≈ 0.67
y1 = (1/5)(3) = 0.6
z1 = (1/4)(27) = 6.75

Iteration 2: Putting x1, y1 and z1 in Eq. (1),

x2 = (1/6)(4 − 2(0.6) + 6.75) ≈ 1.59
y2 = (1/5)(3 − 0.67 − 6.75) ≈ −0.884
z2 = (1/4)(27 − 2(0.67) − 0.6) ≈ 6.265

Continuing in this way, we obtain

x5 ≈ 2.00
y5 ≈ −1.00
z5 ≈ 6.00

which is the required solution.
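The Gauss-Jacobi working rule can be sketched as a short Python routine. The implementation below is illustrative (the function name, tolerance, and iteration cap are assumptions, not from the text); it is applied to the example system written with the sign convention 6x + 2y − z = 4, x + 5y + z = 3, 2x + y + 4z = 27:

```python
def gauss_jacobi(A, c, x0, tol=1e-6, max_iter=200):
    """Gauss-Jacobi iteration: every component of the new estimate
    is computed from the previous iterate only."""
    n = len(c)
    x = list(x0)
    for _ in range(max_iter):
        # each x_new[i] uses only the old vector x (Jacobi, not Seidel)
        x_new = [
            (c[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

# The example system: 6x + 2y - z = 4, x + 5y + z = 3, 2x + y + 4z = 27
A = [[6, 2, -1], [1, 5, 1], [2, 1, 4]]
c = [4, 3, 27]
x, y, z = gauss_jacobi(A, c, [0, 0, 0])
```

Running this reproduces the hand computation: the first sweep gives (0.67, 0.6, 6.75) and the iterates settle at x ≈ 2, y ≈ −1, z ≈ 6.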

(ii) Gauss-Seidel method:

The system of linear equations is the same as that given in the Gauss-Jacobi method.

Working Rule:

(i) Arrange the equations so that the leading (diagonal) elements are large in magnitude in their respective rows, satisfying the conditions

|a11| ≥ |a12| + |a13|
|a22| ≥ |a21| + |a23|
|a33| ≥ |a31| + |a32|

(ii) Express the variables having large coefficients in terms of other variables.

(iii) Start iteration 1 by assuming the initial values of (x, y, z) as (x0, y0, z0).

(iv) In iteration 1, put y = y0, z = z0 in the equation of x to obtain x1; put x = x1 and z = z0 in the equation of y to obtain y1; put x = x1, y = y1 in the equation of z to obtain z1.

(v) The above iteration process is continued until two successive approximations are nearly equal.

Example: Find the solution to the following system of equations using the Gauss-Seidel method.

12x1 + 3x2 − 5x3 = 1
x1 + 5x2 + 3x3 = 28
3x1 + 7x2 + 13x3 = 76

Use x1 = 1, x2 = 0 and x3 = 1 as the initial guess.

Solution:

The given system is diagonally dominant, as

|a11| = |12| = 12 ≥ |a12| + |a13| = |3| + |−5| = 8
|a22| = |5| = 5 ≥ |a21| + |a23| = |1| + |3| = 4
|a33| = |13| = 13 ≥ |a31| + |a32| = |3| + |7| = 10

and the inequality is strict for at least one row. Hence, the solution should converge using the Gauss-Seidel method.

Rewriting the equations, we get

x1 = (1 − 3x2 + 5x3) / 12
x2 = (28 − x1 − 3x3) / 5
x3 = (76 − 3x1 − 7x2) / 13

Assuming an initial guess of x1 = 1, x2 = 0 and x3 = 1:

Iteration 1

x1 = (1 − 3(0) + 5(1)) / 12 = 0.50000
x2 = (28 − 0.50000 − 3(1)) / 5 = 4.9000
x3 = (76 − 3(0.50000) − 7(4.9000)) / 13 = 3.0923

Iteration 2

x1 = (1 − 3(4.9000) + 5(3.0923)) / 12 = 0.14679
x2 = (28 − 0.14679 − 3(3.0923)) / 5 = 3.7153
x3 = (76 − 3(0.14679) − 7(3.7153)) / 13 = 3.8118

As more iterations are conducted, the solution converges as follows.

Iteration    x1         x2        x3
1            0.50000    4.9000    3.0923
2            0.14679    3.7153    3.8118
3            0.74275    3.1644    3.9708
4            0.94675    3.0281    3.9971
5            0.99177    3.0034    4.0001
6            0.99919    3.0001    4.0001
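The Gauss-Seidel sweep differs from Gauss-Jacobi only in that each freshly computed component is used immediately within the same sweep. A minimal illustrative Python sketch (names and tolerance are assumptions, not from the text), checked against the example system above:

```python
def gauss_seidel(A, c, x0, tol=1e-6, max_iter=200):
    """Gauss-Seidel iteration: each new component overwrites the old
    one at once, so later updates in the same sweep already use it."""
    n = len(c)
    x = list(x0)
    for _ in range(max_iter):
        biggest_change = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new_xi = (c[i] - s) / A[i][i]
            biggest_change = max(biggest_change, abs(new_xi - x[i]))
            x[i] = new_xi        # used immediately by the next rows
        if biggest_change < tol:
            break
    return x

# The worked example: 12x1 + 3x2 - 5x3 = 1, x1 + 5x2 + 3x3 = 28,
# 3x1 + 7x2 + 13x3 = 76, starting from (1, 0, 1)
A = [[12, 3, -5], [1, 5, 3], [3, 7, 13]]
c = [1, 28, 76]
sol = gauss_seidel(A, c, [1, 0, 1])
```

The first sweep reproduces the table's iteration 1 (0.50000, 4.9000, 3.0923), and the iterates converge to (1, 3, 4).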

Examples: Solve
(1) 5x − y + z = 10; 2x + 4y + z = 14; x + y + 8z = 20 by the Gauss-Seidel method, correct up to three decimal places. (x = 2; y = 2; z = 2)
(2) 6x + y + z = 105; 4x + 8y + 3z = 155; 5x + 4y − 10z = 65 by the Gauss-Seidel method, correct up to three decimal places. (x = 15; y = 10; z = 5)
(3) 20x + y − 2z = 17; 3x + 20y − z = −18; 2x − 3y + 20z = 25 by the Gauss-Jacobi method, correct up to three decimal places. (x = 1; y = −1; z = 1)
(4) 28x + 4y − z = 32; 2x + 17y + 4z = 35; x + 3y + 10z = 24 by the Gauss-Jacobi method, correct up to two decimal places. (x ≈ 0.99; y ≈ 1.51; z ≈ 1.85)

ROOTS OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS

We are all aware of algebraic equations, so now we will discuss transcendental functions and equations.

TRANSCENDENTAL FUNCTIONS AND EQUATIONS:

A transcendental function "transcends" algebra in that it cannot be expressed in terms of a finite sequence of the algebraic operations of addition, multiplication, and root extraction. Examples of transcendental functions include the exponential function, the logarithm, and the trigonometric functions.

(e^x, c^x, x^x, log_c x, sin x, etc.)

An equation which includes transcendental functions is known as a transcendental equation.

To solve these types of equations we use the following methods:

1) Bisection Method

2) Regula-Falsi Method (False-Position Method)

3) Newton-Raphson Method

1) Bisection Method: One of the first numerical methods developed to find the root of a nonlinear equation f(x) = 0 was the bisection method (also called the binary-search method). The method is based on the following theorem:

Theorem: An equation f(x) = 0, where f(x) is a real continuous function, has at least one root between xl and xu if f(xl) f(xu) < 0 (Figure 1). Note that if f(xl) f(xu) > 0, there may or may not be a root between xl and xu (Figures 2 and 3). If f(xl) f(xu) < 0, there may be more than one root between xl and xu (Figure 4). So the theorem only guarantees one root between xl and xu.

Fig.1 At least one root exists between two points if the function is real,
continuous, and changes sign.
Fig.2 If function f (x) does not change sign between two points, roots of
f ( x)  0 may still exist between the two points.
Fig.3 If the function f (x) does not change sign between two points, there may not
be any roots f ( x)  0 between the two points.


Fig .4 If the function f (x) changes sign between two points, more than one root for
f ( x)  0 may exist between the two points.

A general rule is:

- If f(xl) and f(xu) have the same sign ( f(xl) f(xu) > 0 ), then either there is no root between xl and xu, or there is an even number of roots between xl and xu.

- If f(xl) and f(xu) have different signs ( f(xl) f(xu) < 0 ), then there is an odd number of roots between xl and xu.

Advantages of Bisection Method:

a) The bisection method is always convergent. Since the method brackets the root, it is guaranteed to converge.
b) As iterations are conducted, the interval gets halved, so one can guarantee the decrease in the error in the solution of the equation.
Drawbacks of Bisection Method:

a) The convergence of the bisection method is slow, as it is based simply on halving the interval.
b) Even if one of the initial guesses is close to the root, the method may still take a large number of iterations to reach it.
c) If a function f(x) just touches the x-axis, such as

f(x) = x² = 0

it will be impossible to find a lower guess xl and an upper guess xu such that

f(xl) f(xu) < 0

Working rule for Bisection Method:

The steps to apply the bisection method to find the root of the equation f(x) = 0 are:

1. Choose xl and xu as two guesses for the root such that f(xl) f(xu) < 0, or in other words, f(x) changes sign between xl and xu.
2. Estimate the root xm of the equation f(x) = 0 as the mid-point between xl and xu:

   xm = (xl + xu) / 2

3. Now check the following:
   a. If f(xl) f(xm) < 0, then the root lies between xl and xm; set xu = xm.
   b. If f(xl) f(xm) > 0, then the root lies between xm and xu; set xl = xm.
   c. If f(xl) f(xm) = 0, then the root is xm. Stop the algorithm if this is true.
4. Find the new estimate of the root

   xm = (xl + xu) / 2

Find the absolute relative approximate error as

εa = | (xm(new) − xm(old)) / xm(new) | × 100

where

xm(new) = estimated root from the present iteration
xm(old) = estimated root from the previous iteration
5. Compare the absolute relative approximate error εa with the pre-specified relative error tolerance εs. If εa > εs, go to Step 3; else stop the algorithm. Note that one should also check whether the number of iterations exceeds the maximum number allowed; if so, terminate the algorithm and notify the user.
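The working rule above can be sketched in Python. The implementation below is illustrative (its name, tolerance, and iteration cap are assumptions); for simplicity it stops when the interval half-width falls below a tolerance rather than using the percentage error of step 5:

```python
import math

def bisection(f, xl, xu, tol=1e-6, max_iter=200):
    """Bisection: repeatedly halve the bracketing interval [xl, xu]."""
    fl = f(xl)
    if fl * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    for _ in range(max_iter):
        xm = 0.5 * (xl + xu)          # step 2/4: midpoint estimate
        fm = f(xm)
        if fm == 0 or 0.5 * (xu - xl) < tol:
            return xm
        if fl * fm < 0:               # step 3a: root in lower half
            xu = xm
        else:                         # step 3b: root in upper half
            xl, fl = xm, fm
    return xm

# The worked example that follows: x^3 - 4x - 9 = 0 on [2, 3]
root = bisection(lambda x: x**3 - 4*x - 9, 2, 3)
```

With a loose tolerance the routine visits the same midpoints as the hand computation (2.5, 2.75, 2.625, 2.6875, ...) and settles near 2.7065.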
Example: Find a root of the equation x³ − 4x − 9 = 0 using the bisection method, correct up to 3 decimal places.

Sol: Let f(x) = x³ − 4x − 9.

Since f(2) is −ve and f(3) is +ve, a root lies between 2 and 3.

Therefore, the first approximation to the root is x1 = (2 + 3)/2 = 2.5.

Now f(2.5) = (2.5)³ − 4(2.5) − 9 = −3.375, i.e., −ve.

Therefore, the root lies between x1 and 3.

Thus the second approximation to the root is x2 = (2.5 + 3)/2 = 2.75.

Now f(2.75) = (2.75)³ − 4(2.75) − 9 = 0.7969, i.e., +ve.

∴ the root lies between x1 and x2.

Thus the third approximation to the root is x3 = (2.5 + 2.75)/2 = 2.625.

Then f(2.625) = (2.625)³ − 4(2.625) − 9 = −1.4121, i.e., −ve.

The root lies between x2 and x3; thus the fourth approximation to the root is x4 = (2.625 + 2.75)/2 = 2.6875.

Repeating this process, the successive approximations are

x5 = 2.71875, x6 = 2.70313, x7 = 2.71094
x8 = 2.70703, x9 = 2.70508, x10 = 2.70605
x11 = 2.70654, x12 = 2.70642

Hence the root is 2.70642.

Example: Find the real root of cos x − x eˣ = 0 by the bisection method, correct up to three decimal places.

Solution: Let f(x) = cos x − x eˣ.
f(0) = 1 > 0 and f(1) = cos 1 − e ≈ −2.17 < 0.
Therefore, the root lies between 0 and 1. Hence, by the bisection method:

n     xm        f(xm)
1     0.5       0.053
2     0.75      −0.85
3     0.625     −0.3567
4     0.5625    −0.14
5     0.5313    −0.04
6     0.5157    0.0064
7     0.5235    −0.017
8     0.5196    −0.0056
9     0.5177    0.0003
10    0.5187    −0.0027
11    0.5182

The root correct to three decimal places is 0.518.

Examples:

(1) Find the real root of the equation 3x − cos x = 1, correct up to four decimal places. (x = 0.6071)

(2) Find the real root of the equation x eˣ = 1, correct up to four decimal places. (x = 0.5671)

2) Regula-Falsi Method (False-Position Method):

False-position method.

A shortcoming of the bisection method is that, in dividing the interval from xl to xu into equal halves, no account is taken of the magnitudes of f(xl) and f(xu). Indeed, if f(xl) is closer to zero, the root is likely to be closer to xl than to xu.
The false-position method uses this property: a straight line joins (xl, f(xl)) and (xu, f(xu)), and the intersection of this line with the x-axis gives an improved estimate xr of the root. By similar triangles,

f(xl) / (xr − xl) = f(xu) / (xr − xu)

which gives

xr = xu − f(xu)(xl − xu) / (f(xl) − f(xu))

or, writing a = xl and b = xu,

xr = (a f(b) − b f(a)) / (f(b) − f(a))
Then xr replaces the initial guess for which the function value has the same sign as f(xr). The difference between the bisection method and the false-position method is that in the bisection method both limits of the interval eventually change, whereas in the false-position method one limit may stay fixed throughout the computation while the other converges on the root.
Working rule for False-Position Method (Regula-Falsi Method):

1. Define the first interval (a, b) such that a solution exists between them. Check f(a) f(b) < 0.
2. Compute the first estimate of the numerical solution xr using the above equation.
3. Find out whether the actual solution is between a and xr, or between xr and b. This is accomplished by checking the sign of the product f(a) f(xr):
   - If f(a) f(xr) < 0, the solution is between a and xr.
   - If f(a) f(xr) > 0, the solution is between xr and b.
4. Select the subinterval that contains the solution, (a, xr) or (xr, b), as the new interval (a, b); go to step 2 and repeat the process. The method of false position always converges to an answer, provided a root is initially bracketed in the interval (a, b).

Pitfalls of the false-position method:

Although the false-position method is an improvement over the bisection method, in some cases the bisection method will converge faster and yield better results.
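Steps 1–4 can be sketched as follows. The implementation below is illustrative (names and stopping test are assumptions, not from the text); it stops when successive estimates xr agree within a tolerance:

```python
import math

def false_position(f, a, b, tol=1e-6, max_iter=200):
    """Regula-Falsi: the next estimate is where the chord through
    (a, f(a)) and (b, f(b)) crosses the x-axis."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("root must be bracketed: f(a)*f(b) < 0")
    xr = a
    for _ in range(max_iter):
        xr_old = xr
        xr = (a * fb - b * fa) / (fb - fa)   # chord x-intercept
        fr = f(xr)
        if fr == 0 or abs(xr - xr_old) < tol:
            break
        if fa * fr < 0:      # root lies in (a, xr): move the upper end
            b, fb = xr, fr
        else:                # root lies in (xr, b): move the lower end
            a, fa = xr, fr
    return xr

# The worked example that follows: f(x) = e^x - 3x^2 on [0.5, 1.0]
root = false_position(lambda x: math.exp(x) - 3 * x**2, 0.5, 1.0)
```

The first chord intercept is (0.5·f(1) − 1·f(0.5)) / (f(1) − f(0.5)) ≈ 0.88067, matching the table below, and the iterates converge to about 0.91001.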

Slow convergence of the false-position method.

Example: Using the false-position method, find a root of the function f(x) = eˣ − 3x² to an accuracy of 5 digits. The root is known to lie between 0.5 and 1.0.

Sol:
We apply the method of false position with a = 0.5 and b = 1.0, using the equation

xr = (a f(b) − b f(a)) / (f(b) − f(a))

The calculations based on the method of False Position are shown in the Table

n    a        b    f(a)     f(b)      xr       f(xr)
1    0.5      1    0.89872  -0.28172  0.88067  0.08577
2    0.88067  1    0.08577  -0.28172  0.90852  0.00441
3    0.90852  1    0.00441  -0.28172  0.90993  0.00022
4    0.90993  1    0.00022  -0.28172  0.91000  0.00001
5    0.91000  1    0.00001  -0.28172  0.91001  0

The root is 0.91 accurate to five digits.

Example: Use the false-position method to find a root of the function f(x) = x² − x − 2 in the range [1, 3], correct up to three decimal places.

Solution: We have f(x) = x² − x − 2.
Given a = 1 and b = 3: f(1) = −2 < 0 and f(3) = 4 > 0.
Using the false-position method, we have xr = (a f(b) − b f(a)) / (f(b) − f(a)).

n    x        f(x)      a        b
0    1        -2
1    3        4         1        3
2    1.6667   -0.8889   1.6667   3
3    1.9091   -0.2644   1.9091   3
4    1.9767   -0.0692   1.9767   3
5    1.9941   -0.0177   1.9941   3
6    1.9985   -0.0044   1.9985   3
7    1.9996   -0.0012   1.9996   3
8    1.9999

The root correct to three decimal places is 2.000.

Examples: (1) Find a positive root of x³ − 4x + 1 = 0, correct up to three decimal places. (x = 0.254)

(2) Find a positive root of x log10 x = 1.2, correct up to three decimal places. (x = 2.740)

3) Newton-Raphson method:

The Newton-Raphson method is based on the principle that if the initial guess of the root of f(x) = 0 is at xi, then if one draws the tangent to the curve at (xi, f(xi)), the point xi+1 where the tangent crosses the x-axis is an improved estimate of the root (see the figure below).

Using the definition of the slope of a function, at x = xi,

f'(xi) = (f(xi) − 0) / (xi − xi+1)

which gives

xi+1 = xi − f(xi) / f'(xi)

This equation is called the Newton-Raphson formula for solving nonlinear equations of the form f(x) = 0. So, starting with an initial guess xi, one can find the next guess xi+1 by using the above equation. One can repeat this process until one finds the root within a desirable tolerance.
Figure. Geometrical illustration of the Newton-Raphson method.

Working rule for Newton-Raphson method:

The steps to find the root of an equation f(x) = 0 using the Newton-Raphson method are:

1. Evaluate f'(x) symbolically.
2. Use an initial guess of the root, xi, to estimate the new value of the root xi+1 as

   xi+1 = xi − f(xi) / f'(xi)

3. Find the absolute relative approximate error εa as

   εa = | (xi+1 − xi) / xi+1 | × 100

4. Compare the absolute relative approximate error εa with the pre-specified relative error tolerance εs. If εa > εs, go to step 2; else stop the algorithm. Also, check whether the number of iterations has exceeded the maximum number of iterations allowed.
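The working rule condenses into a few lines of Python. The sketch below is illustrative (names and tolerance are assumptions); it stops when successive iterates agree within `tol` rather than using the percentage error of step 3:

```python
def newton_raphson(f, fprime, x0, tol=1e-8, max_iter=100):
    """Newton-Raphson: follow the tangent at (xi, f(xi)) to the x-axis."""
    x = x0
    for _ in range(max_iter):
        fp = fprime(x)
        if fp == 0:
            raise ZeroDivisionError("f'(x) vanished; choose another guess")
        x_new = x - f(x) / fp     # the Newton-Raphson formula
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# The worked example that follows: f(x) = x^3 + x - 1, f'(x) = 3x^2 + 1,
# starting from x0 = 1
root = newton_raphson(lambda x: x**3 + x - 1, lambda x: 3 * x**2 + 1, 1.0)
```

The iterates match the hand computation below (1, 0.75, 0.68605, 0.68234, ...), converging to about 0.6823.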

Drawbacks of Newton-Raphson Method:

- Divergence at inflection points: If the initial guess or an iterated value turns out to be close to an inflection point of f(x), that is, near where f''(x) = 0, the iterates may start to diverge away from the true root.

- Division by zero or near zero: The formula of the Newton-Raphson method is

xi+1 = xi − f(xi) / f'(xi)

Consequently, if an iterated value xi is such that f'(xi) ≈ 0, one faces division by zero or by a near-zero number. This gives a large magnitude for the next value, xi+1.

- Root jumping: In some cases where the function f(x) is oscillating and has a number of roots, one may choose an initial guess close to a root; however, the iterates may jump and converge to some other root.

Example: Find the root of the equation x³ + x − 1 = 0, correct up to four decimal places.

Sol:
Let f(x) = x³ + x − 1.

f(0) = −1 and f(1) = 1.

Since f(0) < 0 and f(1) > 0, the root lies between 0 and 1.

Let x0 = 1, and f'(x) = 3x² + 1.

By the Newton-Raphson method, xi+1 = xi − f(xi) / f'(xi).

f(x0) = f(1) = 1
f'(x0) = f'(1) = 4

x1 = x0 − f(x0)/f'(x0) = 1 − 1/4 = 0.75

f(x1) = f(0.75) = 0.171875
f'(x1) = f'(0.75) = 2.6875

x2 = x1 − f(x1)/f'(x1) = 0.75 − 0.171875/2.6875 = 0.68605

f(x2) = f(0.68605) = 0.00894
f'(x2) = f'(0.68605) = 2.41198

x3 = x2 − f(x2)/f'(x2) = 0.68605 − 0.00894/2.41198 = 0.68234

f(x3) = f(0.68234) = 0.000028
f'(x3) = f'(0.68234) = 2.39676

x4 = x3 − f(x3)/f'(x3) = 0.68234 − 0.000028/2.39676 = 0.68233

Since x3 and x4 agree up to four decimal places, the root is 0.6823.

Example: Find a real root of x eˣ = 2, correct up to three decimal places, by using the Newton-Raphson method.

Solution: Let f(x) = x eˣ − 2, so that f'(x) = (1 + x) eˣ.
f(0) = −2 < 0 and f(1) = e − 2 ≈ 0.718 > 0.
The root lies between 0 and 1.
Using the Newton-Raphson method, we have xi+1 = xi − f(xi) / f'(xi).
Choose the initial approximation x0 = 0.5.

i    xi        f(xi)     f'(xi)
0    0.5       -1.1756   2.4731
1    0.9754    0.5867    5.2392
2    0.8634    0.0473    4.4185
3    0.8527    0.0004    4.3464
4    0.8526

The approximate value of the root = 0.8526.

Examples:
(1) Find the root of the equation x⁴ − x − 10 = 0, correct up to three decimal places. (x = 1.856)
(2) Find the root of the equation eˣ = 3x, correct up to three decimal places. (x = 1.512)

Some deductions from the Newton-Raphson formula:

We can derive the following useful results from the Newton-Raphson formula.

1) Iteration formula for 1/N:

Let x = 1/N. Then 1/x = N, i.e., (1/x) − N = 0.

So let f(x) = (1/x) − N, so that f'(x) = −1/x².

By the Newton-Raphson method,

xi+1 = xi − f(xi)/f'(xi)
     = xi − ((1/xi) − N) / (−1/xi²)
     = xi + xi²((1/xi) − N)
     = xi(2 − N xi)

2) Iteration formula for √N:

Let x = √N. Then x² = N, i.e., x² − N = 0.

So let f(x) = x² − N, so that f'(x) = 2x.

By the Newton-Raphson method,

xi+1 = xi − f(xi)/f'(xi)
     = xi − (xi² − N) / (2 xi)
     = (1/2)(xi + N/xi)

3) Iteration formula for 1/√N:

Let x = 1/√N. Then x² = 1/N, i.e., x² − (1/N) = 0.

So let f(x) = x² − (1/N), so that f'(x) = 2x.

By the Newton-Raphson method,

xi+1 = xi − f(xi)/f'(xi)
     = xi − (xi² − 1/N) / (2 xi)
     = (1/2)(xi + 1/(N xi))

4) Iteration formula for the kth root of N:

Let x = N^(1/k). Then x^k = N, i.e., x^k − N = 0.

So let f(x) = x^k − N, so that f'(x) = k x^(k−1).

By the Newton-Raphson method,

xi+1 = xi − f(xi)/f'(xi)
     = xi − (xi^k − N) / (k xi^(k−1))
     = (1/k)((k − 1) xi + N / xi^(k−1))

Example: Evaluate the following (correct to four decimal places) by Newton's iteration method:

(i) 1/31   (ii) √28   (iii) 1/√14   (iv) ∛24

Sol: (i) 1/31

Taking N = 31 in the formula xi+1 = xi(2 − N xi), we have xi+1 = xi(2 − 31 xi). Since an approximate value of 1/31 is 0.03, we take x0 = 0.03.

Then
x1 = x0(2 − 31 x0) = 0.03(2 − 31(0.03)) = 0.0321
x2 = x1(2 − 31 x1) = 0.0321(2 − 31(0.0321)) = 0.032257
x3 = x2(2 − 31 x2) = 0.032257(2 − 31(0.032257)) = 0.03226

Since x2 = x3 up to 4 decimal places, we have 1/31 ≈ 0.0323.

(ii) √28

Taking N = 28 in the formula xi+1 = (1/2)(xi + N/xi), we have xi+1 = (1/2)(xi + 28/xi). Since √25 = 5, we take x0 = 5.

Then
x1 = (1/2)(x0 + 28/x0) = (1/2)(5 + 28/5) = 5.3
x2 = (1/2)(x1 + 28/x1) = (1/2)(5.3 + 28/5.3) = 5.29151
x3 = (1/2)(x2 + 28/x2) = (1/2)(5.29151 + 28/5.29151) = 5.29150

Since x2 = x3 up to 4 decimal places, we have √28 ≈ 5.2915.

(iii) 1/√14

Taking N = 14 in the formula xi+1 = (1/2)(xi + 1/(N xi)), we have xi+1 = (1/2)(xi + 1/(14 xi)). Since 1/√16 = 1/4 = 0.25, we take x0 = 0.25.

Then
x1 = (1/2)(x0 + 1/(14 x0)) = (1/2)(0.25 + 1/(14(0.25))) = 0.26785
x2 = (1/2)(x1 + 1/(14 x1)) = (1/2)(0.26785 + 1/(14(0.26785))) = 0.2672618
x3 = (1/2)(x2 + 1/(14 x2)) = (1/2)(0.2672618 + 1/(14(0.2672618))) = 0.2672612

Since x2 = x3 up to 4 decimal places, we have 1/√14 ≈ 0.2673.


(iv) ∛24

Taking N = 24 and k = 3 in the formula xi+1 = (1/k)((k − 1) xi + N/xi^(k−1)), we have xi+1 = (1/3)(2 xi + 24/xi²). Since ∛27 = 3, we take x0 = 3.

Then
x1 = (1/3)(2 x0 + 24/x0²) = (1/3)(2(3) + 24/(3)²) = 2.8889
x2 = (1/3)(2 x1 + 24/x1²) = (1/3)(2(2.8889) + 24/(2.8889)²) = 2.88451
x3 = (1/3)(2 x2 + 24/x2²) = (1/3)(2(2.88451) + 24/(2.88451)²) = 2.8845

Since x2 = x3 up to 4 decimal places, we have ∛24 ≈ 2.8845.
