
Numerical methods

Numerical solution of systems of linear equations


Consider the following system of m first-degree equations in n unknowns x1, x2, ..., xn:

a11 x1 + a12 x2 + ... + a1j xj + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2j xj + ... + a2n xn = b2
.....................................................
am1 x1 + am2 x2 + ... + amj xj + ... + amn xn = bm
or, in matrix notation,

  [ a11  a12  ...  a1n ]      [ x1 ]      [ b1 ]
  [ a21  a22  ...  a2n ]      [ x2 ]      [ b2 ]
  [ ...  ...  ...  ... ]      [ .. ]  =   [ .. ]
  [ am1  am2  ...  amn ](m×n) [ xn ](n×1) [ bm ](m×1)

i.e.  AX = B
By finding a solution of the above system of equations we mean obtaining values of x1, x2, ..., xn that satisfy all the given equations simultaneously. The system is said to be homogeneous if all the bi (i = 1, ..., m) vanish; otherwise it is called a non-homogeneous system. There are a number of methods to solve such a system of linear equations. These are as follows:
1. Matrix Inversion Method
2. Cramer’s Rule
3. Crout’s and Doolittle’s Methods (Triangularisation Methods)
4. Gauss-Elimination Method
5. Gauss-Jordan’s Method
6. Gauss- Seidel Iterative Method
7. Jacobi Iterative Method
We shall focus on the Triangularisation, Gauss-Elimination and Gauss-Seidel methods only.

Method of factorization or Triangularisation Method (Doolittle’s Triangularisation Method)


This method is based on the fact that a square matrix A can be factorised into the form LU, where L is unit lower triangular and U is upper triangular, provided all the leading principal minors of A are non-singular. It is a standard result of linear algebra that such a factorisation, when it exists, is unique. For definiteness, we consider the linear system
a11x1 + a12x2 +a13x3 = b1
a21x1 + a22x2 +a23x3 = b2
a31x1 + a32x2 + a33x3 = b3
which can be written in the form AX = B        ..........(i)
We seek a factorisation A = LU        ..........(ii)
where

  L = [ 1    0    0  ]
      [ l21  1    0  ]        ..........(iii)
      [ l31  l32  1  ]

and

  U = [ u11  u12  u13 ]
      [ 0    u22  u23 ]        ..........(iv)
      [ 0    0    u33 ]

Then (i) becomes LUX = B        ..........(v)
If we set UX = Y        ..........(vi)
then (v) may be written as LY = B        ..........(vii)
which is equivalent to the system

y1 = b1
l21 y1 + y2 = b2
l31 y1 + l32 y2 + y3 = b3

and can be solved for y1, y2, y3 by forward substitution. Once Y is known, the system (vi) becomes

u11 x1 + u12 x2 + u13 x3 = y1
         u22 x2 + u23 x3 = y2
                  u33 x3 = y3

1 | www.mindvis.in
This system can be solved by backward substitution.
We shall now describe a scheme for computing the matrices L and U, and illustrate the procedure with a matrix of
order 3. From the relation (ii), we obtain
  [ 1    0    0  ] [ u11  u12  u13 ]   [ a11  a12  a13 ]
  [ l21  1    0  ] [ 0    u22  u23 ] = [ a21  a22  a23 ]
  [ l31  l32  1  ] [ 0    0    u33 ]   [ a31  a32  a33 ]

Multiplying the matrices on the left and equating corresponding elements on both sides, we get

u11 = a11,  u12 = a12,  u13 = a13
l21 u11 = a21,  or  l21 = a21/u11
l21 u12 + u22 = a22,  so  u22 = a22 - l21 u12
l21 u13 + u23 = a23,  so  u23 = a23 - l21 u13
l31 u11 = a31,  so  l31 = a31/u11
l31 u12 + l32 u22 = a32,  so  l32 = (a32 - l31 u12)/u22
Lastly,
l31 u13 + l32 u23 + u33 = a33,  so  u33 = a33 - l31 u13 - l32 u23

The unknowns are thus solved in the following order: u11, u12, u13; then l21, u22, u23; lastly, l31, l32, u33.
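The element-by-element scheme above extends directly to an n×n matrix. A minimal Python sketch (using NumPy; the function name and test matrix are illustrative, and no pivoting is done, so all leading principal minors are assumed non-singular):

```python
import numpy as np

def doolittle_lu(A):
    """Doolittle factorisation A = LU with unit lower-triangular L.

    Entries are computed in the order described in the text: a row of U,
    then the corresponding column of L, and so on.
    """
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for i in range(n):
        # Row i of U: u_ik = a_ik - sum over m < i of l_im * u_mk
        for k in range(i, n):
            U[i, k] = A[i, k] - L[i, :i] @ U[:i, k]
        # Column i of L: l_ji = (a_ji - sum over m < i of l_jm * u_mi) / u_ii
        for j in range(i + 1, n):
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = doolittle_lu(A)
# L is unit lower triangular, U is upper triangular, and L @ U reproduces A
```

Once L and U are available, the two triangular solves (forward substitution for LY = B, back substitution for UX = Y) complete the solution.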
Gauss Elimination Method

This is the elementary elimination method: it reduces the system of equations to an equivalent upper-triangular system, which can then be solved by back substitution. (That is, the system AX = B, written as the augmented matrix [A | B], is reduced to [U | B'], which corresponds to an upper-triangular system UX = B' that is easily solved by back substitution.) We consider the system given previously, viz. the system of n linear equations in n unknowns:

a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = b2
.......................................................
an1 x1 + an2 x2 + an3 x3 + ... + ann xn = bn        ..........(i)
There are two steps in the solution of the system, viz. the elimination of unknowns and back substitution.
Step 1: The unknowns are eliminated to obtain an upper-triangular system. To eliminate x1 from the second equation, we multiply the first equation by -a21/a11:

-a21 x1 - a12 (a21/a11) x2 - ... - a1n (a21/a11) xn = -b1 (a21/a11)

Adding the above equation to the second equation of (i), we obtain

(a22 - a12 a21/a11) x2 + (a23 - a13 a21/a11) x3 + ... + (a2n - a1n a21/a11) xn = b2 - b1 (a21/a11)

which can be written as

a'22 x2 + a'23 x3 + ... + a'2n xn = b'2

where a'22 = a22 - a12(a21/a11), etc. Thus the primes indicate that the original elements have changed their values. Similarly, we can multiply the first equation by -a31/a11 and add it to the third equation of the system (i). This eliminates the unknown x1 from the third equation of (i) and we obtain

a'32 x2 + a'33 x3 + ... + a'3n xn = b'3

In a similar fashion we can eliminate x1 from the remaining equations, and after eliminating x1 we obtain the system

a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
         a'22 x2 + a'23 x3 + ... + a'2n xn = b'2
         a'32 x2 + a'33 x3 + ... + a'3n xn = b'3
         .............................................
         a'n2 x2 + a'n3 x3 + ... + a'nn xn = b'n        ..........(ii)
We next eliminate x2 from the last (n - 2) equations of (ii). Before this, it is important to notice that in the process of obtaining the above system, we multiplied the first row by (-a21/a11), i.e. we divided by a11, which is therefore assumed to be nonzero. For this reason, the first equation in the system (ii) is called the pivot equation, and a11 is called the pivot or pivotal element. The method obviously fails if a11 = 0.
We shall discuss this important point after completing the description of the elimination method. Now, to eliminate x2 from the third equation of (ii), we multiply the second equation by (-a'32/a'22) and add it to the third equation. Repeating this process with the remaining equations, we obtain the system
a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
         a'22 x2 + a'23 x3 + ... + a'2n xn = b'2
                  a"33 x3 + ... + a"3n xn = b"3
                  .....................................
                  a"n3 x3 + ... + a"nn xn = b"n        ..........(iii)
In (iii), the double primes indicate that the elements have changed twice. It is easily seen that this procedure can be continued to eliminate x3 from the fourth equation onwards, x4 from the fifth equation onwards, etc., till we finally obtain the upper-triangular form:

a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
         a'22 x2 + a'23 x3 + ... + a'2n xn = b'2
                  a"33 x3 + ... + a"3n xn = b"3
                  .....................................
                            a_nn^(n-1) xn = b_n^(n-1)        ..........(iv)

where a_nn^(n-1) indicates that the element a_nn has changed (n - 1) times. We have thus completed the first step: elimination of unknowns and reduction to the upper-triangular form.
Step 2: We now obtain the required solution from the system (iv). From the last equation of this system, we get

x_n = b_n^(n-1) / a_nn^(n-1)

This is then substituted in the (n - 1)th equation to obtain x_{n-1}, and the process is repeated to compute the other unknowns. We thus compute first x_n, then x_{n-1}, ..., x_2, x_1, in that order. For this reason, the process is called back substitution.
We now come to the important case of any of the pivots being zero or very close to zero. If the pivot is zero, the entire process fails, and if it is close to zero, round-off errors may become large. These problems can be avoided by a procedure called pivoting. If a11 is either zero or very small compared to the other coefficients of the equation, we find the largest available coefficient in the column below the pivot element and interchange the two rows. In this way, we obtain a new pivot equation with a nonzero pivot. Such a process is called partial pivoting, since in this case we search only the column below for the largest element. If, on the other hand, we search both columns and rows for the largest element, the procedure is called complete pivoting. Complete pivoting obviously involves more computational complexity, since interchanging columns changes the order of the unknowns, which invariably requires more programming effort. In comparison, partial pivoting, i.e. row interchange, is easily implemented. For this reason, complete pivoting is rarely used; Gauss elimination with partial pivoting is widely used in practice.
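The two steps (forward elimination with partial pivoting, then back substitution) can be sketched in Python as follows (NumPy for the arrays; the function name and test system are illustrative, and A is assumed square and non-singular):

```python
import numpy as np

def gauss_eliminate(A, b):
    """Gauss elimination with partial pivoting, then back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Step 1: forward elimination to an upper-triangular system
    for k in range(n - 1):
        # partial pivoting: pick the largest pivot in column k, at or below row k
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]   # multiplier a_ik / a_kk
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Step 2: back substitution, computing x_n first, then x_{n-1}, ...
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
b = np.array([5.0, 11.0, 27.0])
x = gauss_eliminate(A, b)   # solves A x = b
```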

Iterative Methods: Gauss-Seidel and Jacobi Methods

There are many methods to solve a linear system of equations; among these are the iterative or indirect methods, which start from an approximation to the true solution and, if convergent, derive a sequence of closer and closer approximations, the cycle of computation being repeated till the required accuracy is obtained. This means that in a direct method the amount of computation is fixed, while in an iterative method the amount of computation depends on the accuracy required.
In general, one should prefer a direct method for the solution of a linear system, but for matrices with a large number of zero elements it is advantageous to use iterative methods, which preserve these zeros. Let the system be given by
a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
..................................................
an1 x1 + an2 x2 + ... + ann xn = bn        ..........(i)
in which the diagonal elements aii do not vanish. If this is not the case, the equations should be rearranged so that this condition is satisfied. Rewriting the system (i) as
x1 = b1/a11 - (a12/a11) x2 - (a13/a11) x3 - ... - (a1n/a11) xn
x2 = b2/a22 - (a21/a22) x1 - (a23/a22) x3 - ... - (a2n/a22) xn
..................................................................
xn = bn/ann - (an1/ann) x1 - (an2/ann) x2 - ... - (a_{n,n-1}/ann) x_{n-1}        ..........(ii)
Suppose x1^(1), x2^(1), ..., xn^(1) are any first approximations to the unknowns x1, x2, ..., xn. Substituting into the right-hand sides of (ii), we find a system of second approximations:
x1^(2) = b1/a11 - (a12/a11) x2^(1) - ... - (a1n/a11) xn^(1)
x2^(2) = b2/a22 - (a21/a22) x1^(1) - ... - (a2n/a22) xn^(1)
..................................................................
xn^(2) = bn/ann - (an1/ann) x1^(1) - ... - (a_{n,n-1}/ann) x_{n-1}^(1)        ..........(iii)
Similarly, if x1^(n), x2^(n), ..., xn^(n) is the system of nth approximations, then the next approximation is given by the formulae

x1^(n+1) = b1/a11 - (a12/a11) x2^(n) - ... - (a1n/a11) xn^(n)
x2^(n+1) = b2/a22 - (a21/a22) x1^(n) - ... - (a2n/a22) xn^(n)
..................................................................
xn^(n+1) = bn/ann - (an1/ann) x1^(n) - ... - (a_{n,n-1}/ann) x_{n-1}^(n)        ..........(iv)
In matrix form, the iteration formula (iv) may be written as
X^(n+1) = B X^(n) + C        ..........(v)
This method is due to Jacobi and is called the “method of simultaneous displacements”; it is also simply called “Jacobi’s method”.
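The sweep in formula (iv), in which every new component is computed from the previous iterate, can be sketched as follows (NumPy; the function name, iteration count and test system are our own choices; convergence is guaranteed, e.g., for strictly diagonally dominant matrices):

```python
import numpy as np

def jacobi(A, b, x0, iters=50):
    """Jacobi iteration (method of simultaneous displacements)."""
    D = np.diag(A)              # diagonal elements a_ii (assumed nonzero)
    R = A - np.diagflat(D)      # off-diagonal part of A
    x = x0.astype(float).copy()
    for _ in range(iters):
        # x_i <- (b_i - sum over j != i of a_ij * x_j) / a_ii, all at once
        x = (b - R @ x) / D
    return x

A = np.array([[10.0, 1.0, 1.0],
              [2.0, 10.0, 1.0],
              [2.0, 2.0, 10.0]])
b = np.array([12.0, 13.0, 14.0])
x = jacobi(A, b, np.zeros(3))   # approaches the exact solution [1, 1, 1]
```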

Gauss Seidel Method

In the first equation of (ii), we substitute the first approximation (x1^(1), x2^(1), ..., xn^(1)) into the right-hand side and denote the result x1^(2). In the second equation, we substitute x1^(2) for x1 and the first approximations for the remaining unknowns, and denote the result x2^(2). In this manner we complete the first stage of the iteration, and the entire process is repeated till the values of x1, x2, ..., xn are obtained to the required accuracy. It is clear, therefore, that this method uses an improved component as soon as it is available; it is accordingly called the method of “successive displacements”, or the “Gauss-Seidel method”.
Note: it can be shown that the Gauss-Seidel method converges roughly twice as fast as the Jacobi method.
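The only change from the Jacobi sweep is that each updated component is used immediately within the same sweep. A sketch (NumPy; names and test system illustrative):

```python
import numpy as np

def gauss_seidel(A, b, x0, iters=50):
    """Gauss-Seidel iteration (method of successive displacements)."""
    n = len(b)
    x = x0.astype(float).copy()
    for _ in range(iters):
        for i in range(n):
            # x[:i] already holds this sweep's updated values
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[10.0, 1.0, 1.0],
              [2.0, 10.0, 1.0],
              [2.0, 2.0, 10.0]])
b = np.array([12.0, 13.0, 14.0])
x = gauss_seidel(A, b, np.zeros(3))   # approaches [1, 1, 1], faster than Jacobi
```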

Numerical integration (Quadrature) by Trapezoidal and Simpson’s rules


The general problem of numerical integration may be stated as follows: given a set of data points (x0, y0), (x1, y1), ..., (xn, yn) of a function y = f(x), where f(x) is not known explicitly, it is required to compute the value of the definite integral

I = ∫_a^b y dx
As in the case of numerical differentiation, we replace f(x) by an interpolating polynomial φ(x) and obtain on integration an approximate value of the definite integral. Thus, different integration formulae can be obtained depending upon the type of interpolation formula used. Let the interval [a, b] be divided into n equal subintervals such that

a = x0 < x1 < x2 < ... < xn = b

Clearly, xn = x0 + nh. Hence the integral becomes

I = ∫_{x0}^{xn} y dx
Approximating y by Newton’s forward difference formula, we obtain

I = ∫_{x0}^{xn} [ y0 + p Δy0 + p(p-1)/2 Δ²y0 + p(p-1)(p-2)/6 Δ³y0 + ... ] dx

Since x = x0 + ph, dx = h dp, and hence the above integral becomes

I = h ∫_0^n [ y0 + p Δy0 + p(p-1)/2 Δ²y0 + p(p-1)(p-2)/6 Δ³y0 + ... ] dp

which gives on simplification

∫_{x0}^{xn} y dx = nh [ y0 + (n/2) Δy0 + n(2n-3)/12 Δ²y0 + n(n-2)²/24 Δ³y0 + ... ]

This is known as the general formula; we can obtain different integration formulae from it by putting n = 1, 2, 3, etc. We derive here a few of these formulae, but it should be remarked that the trapezoidal and Simpson’s 1/3 rules are found to give sufficient accuracy for use in practical problems. The following difference table shows how Δy0, Δy1 and Δ²y0 are derived from (x0, y0), (x1, y1), (x2, y2), etc.:

x0    y0
x1    y1    Δy0
x2    y2    Δy1    Δ²y0

with Δy0 = y1 - y0, Δy1 = y2 - y1, and Δ²y0 = Δy1 - Δy0 = y2 - 2y1 + y0.

Trapezoidal Rule
Setting n = 1 in the general formula, all differences higher than the first will become zero and we obtain:

∫_{x0}^{x1} y dx = h [ y0 + (1/2) Δy0 ] = h [ y0 + (1/2)(y1 - y0) ] = (h/2)(y0 + y1)        ..........(i)

For the next interval [x1, x2], we deduce similarly

∫_{x1}^{x2} y dx = (h/2)(y1 + y2)        ..........(ii)

and so on. For the last interval [x_{n-1}, x_n], we have

∫_{x_{n-1}}^{x_n} y dx = (h/2)(y_{n-1} + y_n)        ..........(iii)

Combining all these expressions, we obtain the rule

∫_{x0}^{xn} y dx = (h/2)[ y0 + 2(y1 + y2 + ... + y_{n-1}) + y_n ]
which is known as the trapezoidal rule. The geometrical significance of this rule is that the curve y = f(x) is replaced by n straight lines joining the points (x0, y0) and (x1, y1); (x1, y1) and (x2, y2); ...; (x_{n-1}, y_{n-1}) and (xn, yn). The area bounded by the curve y = f(x), the ordinates x = x0 and x = xn, and the x-axis is then approximately equal to the sum of the areas of the n trapeziums so obtained.
[Figure: a single trapezium — the chord from (x0, y0) = (a, y0) to (x1, y1) = (b, y1) replaces the curve y = f(x); shaded area = area of trapezium ≈ ∫_a^b f(x) dx.]

Compound Trapezoidal Rule (with 4 points and 3 intervals)

[Figure: three trapezia under y = f(x) at the ordinates x0 = a, x1, x2, x3 = b; shaded area = sum of the areas of the 3 trapezia ≈ ∫_a^b f(x) dx.]
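The composite trapezoidal rule derived above translates into a few lines of Python (the helper name and test integrand are illustrative):

```python
def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals.

    Implements (h/2) * [y0 + 2*(y1 + ... + y_{n-1}) + yn] from the text.
    """
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += 2 * f(a + i * h)
    return h / 2 * total

# Example: integral of x^2 from 0 to 1 (exact value 1/3)
approx = trapezoidal(lambda x: x * x, 0.0, 1.0, 100)
# The rule is exact for polynomials of degree <= 1, e.g. f(x) = 2x + 1
exact_linear = trapezoidal(lambda x: 2 * x + 1, 0.0, 1.0, 1)
```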

Simpson’s 1/3 Rule


This rule is obtained by putting n = 2 in the general formula, i.e. by replacing the curve by n/2 arcs of second-degree polynomials (parabolas). We have

∫_{x0}^{x2} y dx = 2h [ y0 + Δy0 + (1/6) Δ²y0 ]
                 = 2h [ y0 + (y1 - y0) + (1/6)(y2 - 2y1 + y0) ]
                 = (h/3)(y0 + 4y1 + y2)
Similarly,

∫_{x2}^{x4} y dx = (h/3)(y2 + 4y3 + y4)

and finally

∫_{x_{n-2}}^{x_n} y dx = (h/3)(y_{n-2} + 4y_{n-1} + y_n)

Summing up, we obtain

∫_{x0}^{xn} y dx = (h/3)[ y0 + 4(y1 + y3 + y5 + ... + y_{n-1}) + 2(y2 + y4 + ... + y_{n-2}) + y_n ]

which is known as “Simpson’s 1/3 rule”, or simply “Simpson’s rule”. It should be noted that this rule requires the division of the whole range into an even number of subintervals of width h.

Simple Simpson’s Rule
[Figure: one parabolic arc through (x0, y0), (x1, y1), (x2, y2), with x0 = a, x2 = b and spacing h; shaded area ≈ ∫_a^b f(x) dx.]

Compound Simpson’s Rule (7 points, 6 intervals)
[Figure: three parabolic arcs; I = ∫ f(x) dx ≈ I1 + I2 + I3, the sum of the areas under the three arcs.]
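The composite 1/3 rule can be sketched directly from the formula above (names and test integrand are illustrative):

```python
def simpson_13(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even.

    Implements (h/3) * [y0 + 4*(odd ordinates) + 2*(interior even ordinates) + yn].
    """
    if n % 2 != 0:
        raise ValueError("Simpson's 1/3 rule needs an even number of intervals")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        # odd-indexed ordinates get weight 4, interior even-indexed get 2
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return h / 3 * total

# Example: integral of x^3 from 0 to 2 (exact value 4). Simpson's rule is
# exact for cubics, so even n = 2 reproduces it.
approx = simpson_13(lambda x: x ** 3, 0.0, 2.0, 2)
```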

Simpson’s 3/8 Rule

Setting n = 3 in the general formula, we observe that all differences higher than the third become zero, and we obtain

∫_{x0}^{x3} y dx = 3h [ y0 + (3/2) Δy0 + (3/4) Δ²y0 + (1/8) Δ³y0 ]
                 = 3h [ y0 + (3/2)(y1 - y0) + (3/4)(y2 - 2y1 + y0) + (1/8)(y3 - 3y2 + 3y1 - y0) ]
                 = (3h/8)(y0 + 3y1 + 3y2 + y3)

Similarly,

∫_{x3}^{x6} y dx = (3h/8)(y3 + 3y4 + 3y5 + y6)

and so on. Summing up all these, we obtain

∫_{x0}^{xn} y dx = (3h/8)[ (y0 + 3y1 + 3y2 + y3) + (y3 + 3y4 + 3y5 + y6) + ... + (y_{n-3} + 3y_{n-2} + 3y_{n-1} + y_n) ]
                 = (3h/8)[ y0 + 3y1 + 3y2 + 2y3 + 3y4 + 3y5 + 2y6 + ... + 2y_{n-3} + 3y_{n-2} + 3y_{n-1} + y_n ]

This rule, called “Simpson’s 3/8 rule”, requires the number of subintervals to be a multiple of 3 and is not as accurate as Simpson’s 1/3 rule.
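A sketch of the composite 3/8 rule (names and test integrand illustrative):

```python
def simpson_38(f, a, b, n):
    """Composite Simpson's 3/8 rule; n must be a multiple of 3.

    Implements (3h/8) * [y0 + 3y1 + 3y2 + 2y3 + 3y4 + 3y5 + 2y6 + ... + yn].
    """
    if n % 3 != 0:
        raise ValueError("Simpson's 3/8 rule needs n divisible by 3")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        # ordinates shared by two cubic arcs (i divisible by 3) get weight 2
        total += (2 if i % 3 == 0 else 3) * f(a + i * h)
    return 3 * h / 8 * total

# Example: integral of x^3 from 0 to 2 (exact value 4); the 3/8 rule is
# also exact for cubics.
approx = simpson_38(lambda x: x ** 3, 0.0, 2.0, 3)
```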

Truncation Error Formulae for Trapezoidal and Simpson’s Rule

Let h be the step size used in integration.


The truncation error for the simple trapezoidal rule (2 points) is given by

TE = (h^3/12) f''(ξ)

For the composite trapezoidal rule with N_i intervals,

TE(max) = (h^3/12) N_i f''(ξ)

The absolute TE bound for the simple trapezoidal rule is given by

|TE|_bound = (h^3/12) max |f''(ξ)|,   where x0 ≤ ξ ≤ xn

For the composite rule, similarly,

|TE|_bound = (h^3/12) N_i max |f''(ξ)|,   where x0 ≤ ξ ≤ xn

The truncation error for the simple Simpson’s rule (3 points) is given by

TE = (h^5/90) f^(iv)(ξ)

For the composite Simpson’s rule, the truncation error bound is given by

TE(max) = (h^5/90) f^(iv)(ξ) N_si

where N_si is the number of Simpson intervals. Since N_si = N_i/2,

TE(max) = (h^5/90) (N_i/2) f^(iv)(ξ)

The absolute truncation error bound for the simple Simpson’s rule is given by

|TE|_bound = (h^5/90) max |f^(iv)(ξ)|,   where x0 ≤ ξ ≤ xn

The absolute truncation error bound for the composite Simpson’s rule with N_i intervals is given by

|TE|_bound = (h^5/90) (N_i/2) max |f^(iv)(ξ)| = (h^5/180) N_i max |f^(iv)(ξ)|

In all these formulae, N_i = (b - a)/h (where a and b are the limits of integration) and N_i = N_pt - 1 (where N_pt is the number of points used in the integration). Since the TE for the simple trapezoidal rule is proportional to h^3, it is a third-order method, i.e. TE = O(h^3). Since the TE for the simple Simpson’s rule is proportional to h^5, it is a fifth-order method, i.e. TE = O(h^5).
Important notes:
1. The trapezoidal rule gives exact results when integrating polynomials up to degree 1.
2. Simpson’s rule gives exact results when integrating polynomials up to degree 3.
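These orders can be checked numerically: since N_i is proportional to 1/h, the composite trapezoidal error is O(h^2), so halving h should cut the error by about a factor of 4. A self-contained sketch (the integrand and step counts are our own choices):

```python
import math

def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals."""
    h = (b - a) / n
    return h / 2 * (f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n)))

exact = 1 - math.cos(1)   # integral of sin(x) on [0, 1]
e1 = abs(trapezoidal(math.sin, 0.0, 1.0, 10) - exact)
e2 = abs(trapezoidal(math.sin, 0.0, 1.0, 20) - exact)
ratio = e1 / e2   # close to 4, confirming the O(h^2) composite error
```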

Introduction
Analytical methods of solution are applicable only to a limited class of differential equations. Frequently, differential equations appearing in physical problems do not belong to any of these familiar types, and one is obliged to resort to numerical methods. These methods are of even greater importance now that computing machines are available which considerably reduce the time taken for numerical computation.
A number of numerical methods are available for the solution of first-order differential equations of the form:

dy/dx = f(x, y),  given y(x0) = y0        ..........(i)

These methods yield solutions either as a power series in x, from which the values of y can be found by direct substitution, or as a set of values of x and y. The methods of Picard and Taylor series belong to the former class of solutions, whereas those of Euler, Runge-Kutta, Milne, Adams-Bashforth, etc. belong to the latter class. In these latter methods, the values of y are calculated in short steps at equal intervals of x, and the methods are therefore termed step-by-step methods.
The Euler and Runge-Kutta methods are used for computing y over a limited range of x-values, whereas the Milne and Adams-Bashforth methods may be applied for finding y over a wider range of x-values. These latter methods require starting values, which are found by Picard’s method, Taylor series, or Runge-Kutta methods.
The initial condition in (i) is specified at the point x0. Problems in which all the initial conditions are given at the initial point only are called initial value problems. There are also problems where conditions are given at two or more points; these are known as boundary value problems. In this chapter, we shall study three methods commonly used for the solution of first-order differential equations, namely:
1. Euler’s Method
2. Modified Euler’s Method
3. Runge-Kutta Method of Fourth Order (Classical Runge-Kutta Method)

Euler’s method
Consider the equation dy/dx = f(x, y)        ..........(i)
given that y(x0) = y0. Its curve of solution through P(x0, y0) is shown in the figure. Now we have to find the ordinate of any other point Q on this curve.

[Figure: the solution curve through P(x0, y0), with tangent-line approximations P1, P2, ..., Pn at the ordinates through x0 + h, x0 + 2h, ..., x0 + nh; the gap between the approximate point Pn and the true point Q on the curve is the accumulated error.]

Let us divide LM into n subintervals, each of width h, at L1, L2, ..., with h quite small. In the interval LL1, we approximate the curve by its tangent at P. If the ordinate through L1 meets this tangent at P1(x0 + h, y1), then

y1 = L1P1 = LP + R1P1 = y0 + PR1 tan θ = y0 + h (dy/dx)_P = y0 + h f(x0, y0)

Let P1Q1 be the solution curve of (i) through P1, and let its tangent at P1 meet the ordinate through L2 at P2(x0 + 2h, y2). Then

y2 = y1 + h f(x0 + h, y1)

Repeating this process n times, we finally reach an approximation MPn to MQ given by

yn = y_{n-1} + h f(x0 + (n - 1)h, y_{n-1})

In general, we may write

y_{i+1} = y_i + h f(x_i, y_i)

This is Euler’s method of finding an approximate solution of (i).
Obs. In Euler’s method, we approximate the curve of solution by the tangent in each interval, i.e. by a sequence of short straight lines. Unless h is small, the error is bound to be quite significant. This sequence of lines may also deviate considerably from the solution curve. Hence there is a modification of this method, given in the next section and called the modified Euler’s method, which is more accurate.
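The step formula y_{i+1} = y_i + h f(x_i, y_i) is a one-line loop in code. A sketch (the function name and test problem are illustrative):

```python
def euler(f, x0, y0, h, n):
    """Euler's method: advance y_{i+1} = y_i + h*f(x_i, y_i) for n steps."""
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f(x, y)
        x = x + h
    return y

# Example: dy/dx = y with y(0) = 1, whose exact solution is e^x. With a
# small step, the Euler estimate of y(1) approaches e = 2.71828... from
# below, since each tangent step undershoots the convex curve.
approx = euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000)
```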

Modified Euler’s Method


In Euler’s method: y_{i+1} = y_i + h f(x_i, y_i)
In the backward Euler method: y_{i+1} = y_i + h f(x_{i+1}, y_{i+1})        ..........(i)
A numerical method in which y_{i+1} appears on both the LHS and the RHS of the iterative equation is called an implicit method. So the backward Euler method is an implicit method, while Euler’s method is explicit, since y_{i+1} appears only on the left side of the iterative equation. In the backward Euler method, we need to rearrange and solve (i) for y_{i+1} before proceeding further.
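For the linear test equation dy/dx = λy, the implicit equation can be rearranged in closed form, which makes the idea easy to sketch (the function name and test values are our own; for a general f(x, y), the implicit equation would have to be solved numerically at each step, e.g. by Newton's method):

```python
def backward_euler_linear(lam, y0, h, n):
    """Backward Euler for dy/dx = lam*y.

    The implicit step y_{i+1} = y_i + h*lam*y_{i+1} rearranges to
    y_{i+1} = y_i / (1 - h*lam).
    """
    y = y0
    for _ in range(n):
        y = y / (1 - h * lam)
    return y

# Example: dy/dx = -2y, y(0) = 1. The iterates decay toward zero, like the
# exact solution e^(-2x), and remain stable even for large steps h.
approx = backward_euler_linear(-2.0, 1.0, 0.1, 10)
```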

Runge-Kutta Method
The Taylor series method of solving differential equations numerically is restricted by the labour involved in finding the higher-order derivatives. However, there is a class of methods known as Runge-Kutta methods which do not require the calculation of higher-order derivatives. These methods agree with the Taylor series solution up to the terms in h^r, where r differs from method to method and is called the order of the method. Euler’s method, the modified Euler’s method, and Runge’s method are Runge-Kutta methods of lower order; the fourth-order method described below is commonly referred to simply as “the Runge-Kutta method”, or the classical Runge-Kutta method.
Working rule for finding the increment k of y corresponding to an increment h of x by the Runge-Kutta method, for
dy/dx = f(x, y),  y(x0) = y0,
is as follows. Calculate successively
k1 = h f(x0, y0)
k2 = h f(x0 + h/2, y0 + k1/2)
k3 = h f(x0 + h/2, y0 + k2/2)
and
k4 = h f(x0 + h, y0 + k3)
Finally, compute
k = (1/6)(k1 + 2k2 + 2k3 + k4)
which gives the required approximate value y1 = y0 + k.
(Note that k is the weighted mean of k1, k2, k3 and k4.)
Obs. One of the advantages of these methods is that the procedure is identical whether the differential equation is linear or non-linear.
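The working rule above can be sketched as a single-step function (names and test problem illustrative):

```python
def rk4_step(f, x, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    k = (k1 + 2 * k2 + 2 * k3 + k4) / 6   # weighted mean of the four slopes
    return y + k

# Example: dy/dx = y, y(0) = 1. Ten RK4 steps of h = 0.1 give y(1) close to
# e = 2.71828..., accurate to about five decimal places.
x, y = 0.0, 1.0
for _ in range(10):
    y = rk4_step(lambda x, y: y, x, y, 0.1)
    x += 0.1
```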

Stability Analysis
If the effect of round-off error remains bounded as j → ∞ with a fixed step size, then the method is said to be stable; otherwise it is unstable. Unstable methods diverge away from the solution and cause overflow errors.
A general single-step method can be written in the form
y_{j+1} = E · y_j        ..........(i)
The condition for absolute stability is
|E| ≤ 1
Using the test equation y' = λy, let us find the condition for stability of Euler’s method.
Euler’s method gives
y_{j+1} = y_j + h f(x_j, y_j) = y_j + hλ y_j = (1 + hλ) y_j
Comparing with (i), we get E = 1 + hλ.
The condition for stability, |E| ≤ 1, gives
|1 + hλ| ≤ 1
-1 ≤ 1 + hλ ≤ 1
So the condition for stability is
-2 ≤ hλ ≤ 0
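The stability bound is easy to observe numerically on the test equation (the function name and the values of λ and h are our own choices):

```python
def euler_lambda(lam, h, steps, y0=1.0):
    """Euler's method applied to the test equation y' = lam*y.

    Each step multiplies y by E = 1 + h*lam, so the iterates stay bounded
    exactly when |1 + h*lam| <= 1, i.e. -2 <= h*lam <= 0.
    """
    y = y0
    for _ in range(steps):
        y = (1 + h * lam) * y
    return y

# With lam = -10: h = 0.15 gives h*lam = -1.5 (inside [-2, 0], so the
# iterates decay), while h = 0.25 gives h*lam = -2.5 (outside the interval,
# so the iterates oscillate with growing amplitude).
stable = euler_lambda(-10.0, 0.15, 50)
unstable = euler_lambda(-10.0, 0.25, 50)
```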

Questions:

1. The accuracy of Simpson’s rule quadrature for a step size ℎ is


(A) 𝑂(ℎ2 )
(B) 𝑂(ℎ3 )
(C) 𝑂(ℎ4 )
(D) 𝑂(ℎ5 )

2. The values of a function 𝑓(𝑥) are tabulated below

𝑥 𝑓(𝑥)
0 1
1 2
2 1
3 10

Using Newton’s forward difference formula, the cubic polynomial that can be fitted to the above data, is
(A) 2x^3 + 7x^2 − 6x + 2
(B) 2x^3 − 7x^2 + 6x − 2
(C) x^3 − 7x^2 − 6x + 1
(D) 2x^3 − 7x^2 + 6x + 1

3. A delayed unit step function is defined as U(t − a) = 0 for t < a, and 1 for t ≥ a. Its Laplace transform is
(A) a e^(−as)
(B) e^(−as)/s
(C) e^(as)/s
(D) e^(as)/a

4. Starting from 𝑥0 = 1, one step of Newton‐Raphson method in solving the equation 𝑥 3 + 3𝑥 − 7 = 0 gives the next
value (𝑥1 ) as
(A) 𝑥1 = 0.5
(B) 𝑥1 = 1.406
(C) 𝑥1 = 1.5
(D) 𝑥1 = 2

5. Match the items in column I and II.


Column I Column II
P. Gauss‐Seidel method 1. Interpolation
Q. Forward Newton‐Gauss method 2. Non‐linear differential equations
R. Runge‐Kutta method 3. Numerical integration
S. Trapezoidal Rule 4. Linear algebraic equations

(A) P-1, Q-4, R-3, S-2
(B) P-1, Q-4, R-2, S-3
(C) P-1, Q-3, R-2, S-4
(D) P-4, Q-1, R-2, S-3

6. A calculator has accuracy up to 8 digits after decimal place. The value of


∫_0^(2π) sin x dx
when evaluated using the calculator by trapezoidal method with 8 equal intervals, to 5 significant digits is
(A) 0.00000
(B) 1.0000
(C) 0.00500
(D) 0.00025

7. The integral ∫_1^3 (1/x) dx, when evaluated using Simpson’s 1/3 rule on two equal sub‐intervals each of length 1, equals
(A) 1.000
(B) 1.098
(C) 1.111
(D) 1.120
Answers:

1 2 3 4 5 6 7
D D B C B A C

12 | www.mindvis.in

You might also like