Solution Manual - Applied Numerical Methods With MATLAB For Engineers and Scientists
to accompany
Steven C. Chapra
Tufts University
CHAPTER 1
1.1 You are given the following differential equation with the initial condition, v(t = 0) = 0,

dv/dt = g − (cd/m) v²

Multiply both sides by m/cd:

(m/cd) dv/dt = mg/cd − v²

Define a = sqrt(mg/cd):

(m/cd) dv/dt = a² − v²

Integrate by separation of variables,

∫ dv/(a² − v²) = ∫ (cd/m) dt

A table of integrals can be consulted to find that

∫ dx/(a² − x²) = (1/a) tanh⁻¹(x/a)

Therefore, the integration yields

(1/a) tanh⁻¹(v/a) = (cd/m) t + C

If v = 0 at t = 0, then C = 0 and

(1/a) tanh⁻¹(v/a) = (cd/m) t

This result can then be rearranged to give

v = sqrt(gm/cd) tanh(sqrt(g cd/m) t)
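As a spot check, the closed-form result can be verified against the original ODE numerically. The Python sketch below is illustrative only (the parameter values g = 9.81, m = 68.1, cd = 0.25 are assumptions for the check, not part of this derivation):

```python
import math

def v_analytical(t, g=9.81, m=68.1, cd=0.25):
    # v(t) = sqrt(g*m/cd) * tanh(sqrt(g*cd/m) * t)
    return math.sqrt(g * m / cd) * math.tanh(math.sqrt(g * cd / m) * t)

# Verify that dv/dt = g - (cd/m)*v**2 using a central difference
g, m, cd, t, h = 9.81, 68.1, 0.25, 3.0, 1e-6
dvdt = (v_analytical(t + h) - v_analytical(t - h)) / (2 * h)
rhs = g - (cd / m) * v_analytical(t) ** 2
residual = abs(dvdt - rhs)
```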
1.2 This is a transient computation. For the period ending June 1:

Balance = Previous Balance + Deposits – Withdrawals
The balances for the remainder of the periods can be computed in a similar fashion as
tabulated below:
1.3 At t = 12 s, the analytical solution is 50.6175 (Example 1.1). The numerical results are:
step size    v(12)      absolute relative error
2            51.6008    1.94%
1            51.2008    1.15%
0.5          50.9259    0.61%

where

absolute relative error = |(analytical − numerical)/analytical| × 100%

[Plot of absolute relative error (0% to 2%) versus step size (0 to 2.5): the error decreases as the step size is reduced.]
1.4 (a) Taking the Laplace transform of

dv/dt = g − (c'/m) v

gives

sV − v(0) = g/s − (c'/m) V

Solve for V:

V = g/(s(s + c'/m)) + v(0)/(s + c'/m)     (1)

The first term to the right of the equal sign can be evaluated by a partial fraction expansion,

g/(s(s + c'/m)) = A/s + B/(s + c'/m)     (2)

Combining the right-hand side over a common denominator,

g/(s(s + c'/m)) = (A(s + c'/m) + Bs)/(s(s + c'/m))

Equating like powers of s in the numerators gives

A + B = 0
g = (c'/m) A

Therefore,

A = mg/c'     B = −mg/c'

These results can be substituted into Eq. (2), and the result can be substituted back into Eq. (1) to give

V = (mg/c')/s − (mg/c')/(s + c'/m) + v(0)/(s + c'/m)

Taking the inverse transform yields

v = mg/c' − (mg/c') e^(−(c'/m)t) + v(0) e^(−(c'/m)t)

or

v = v(0) e^(−(c'/m)t) + (mg/c')(1 − e^(−(c'/m)t))

where the first term to the right of the equal sign is the general solution and the second is the particular solution. For our case, v(0) = 0, so the final solution is

v = (mg/c')(1 − e^(−(c'/m)t))
(b) The numerical solution can be implemented as

v(2) = 0 + [9.81 − (12.5/68.1)(0)] 2 = 19.62

v(4) = 19.62 + [9.81 − (12.5/68.1)(19.62)] 2 = 32.0374
The computation can be continued and the results summarized and plotted as:
t v dv/dt
0 0 9.81
2 19.6200 6.2087
4 32.0374 3.9294
6 39.8962 2.4869
8 44.8700 1.5739
10 48.0179 0.9961
12 50.0102 0.6304
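The tabulated values can be regenerated with a short Euler loop; a Python sketch:

```python
# Euler's method for dv/dt = g - (cp/m)*v  (linear drag, Prob. 1.4)
g, cp, m, h = 9.81, 12.5, 68.1, 2.0

v, t = 0.0, 0.0
while t < 12:
    v += (g - (cp / m) * v) * h   # v(t+h) = v(t) + (dv/dt)*h
    t += h
# v now holds the Euler estimate at t = 12 s (table value 50.0102)
```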
[Plot of v versus t (t = 0 to 12 s, v = 0 to 60 m/s).]
Note that the analytical solution is included on the plot for comparison.
1.5 (a) The first two steps are
t c dc/dt
0 10.0000 -2.0000
0.1 9.8000 -1.9600
0.2 9.6040 -1.9208
0.3 9.4119 -1.8824
0.4 9.2237 -1.8447
0.5 9.0392 -1.8078
0.6 8.8584 -1.7717
0.7 8.6813 -1.7363
0.8 8.5076 -1.7015
0.9 8.3375 -1.6675
1 8.1707 -1.6341
(b) When the results are plotted on a semi-log plot, they yield a straight line.

[Semi-log plot of c versus t for t = 0 to 1, showing a straight line.]

The slope of this line can be estimated as

(ln(8.1707) − ln(10))/1 = −0.20203
Thus, the slope is approximately equal to the negative of the decay rate.
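A Python sketch of the same Euler computation (dc/dt = −kc with k = 0.2 and h = 0.1) confirms both the tabulated c(1) and the semi-log slope:

```python
import math

# Euler's method for dc/dt = -k*c (Prob. 1.5), k = 0.2, h = 0.1
k, h = 0.2, 0.1
c = 10.0
for _ in range(10):            # integrate from t = 0 to t = 1
    c += (-k * c) * h
slope = (math.log(c) - math.log(10.0)) / 1.0   # slope on the semi-log plot
```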
1.6 The first two steps are

y(0.5) = 0 + [3(400/1200) sin²(0) − 400/1200] 0.5 = 0 + [0 − 0.33333] 0.5 = −0.16667

y(1) = −0.16667 + [sin²(0.5) − 0.33333] 0.5 = −0.21841
The process can be continued to give
t y
0 0
0.5 -0.16667
1 -0.21841
1.5 -0.03104
2 0.299793
2.5 0.546537
3 0.558955
3.5 0.402245
4 0.297103
4.5 0.416811
5 0.727927
[Plot of y versus t for t = 0 to 5 (y ranges from about −0.4 to 0.8).]
1.7 v(t) = (gm/c)(1 − e^(−(c/m)t))

jumper #1: v(10) = (9.8(68.1)/12.5)(1 − e^(−(12.5/68.1)(10))) = 44.87 m/s

jumper #2: 44.87 = (9.8(75)/14)(1 − e^(−(14/75)t))

44.87 = 52.5 − 52.5 e^(−0.18667t)

0.14533 = e^(−0.18667t)

ln 0.14533 = −0.18667t

t = 10.33 s
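The algebra can be double-checked in a few lines of Python:

```python
import math

g = 9.8
# Jumper 1: m = 68.1 kg, c = 12.5 kg/s, velocity after 10 s
v1 = (g * 68.1 / 12.5) * (1 - math.exp(-(12.5 / 68.1) * 10))
# Jumper 2: m = 75 kg, c = 14 kg/s; solve v(t) = v1 for t
vterm = g * 75 / 14                       # terminal velocity, 52.5 m/s
t = -math.log(1 - v1 / vterm) / (14 / 75)
```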
1.8 By continuity, Q1 = Q2 + Q3. Substituting the given values,

30 = 20 + v3 A3

With v3 = 5 m/s,

10 = 5 A3

A3 = 2 m²
1.9 ∑ M in − ∑ M out = 0
CHAPTER 2
2.1
2.2
>> z = linspace(-3,3);
>> f = 1/sqrt(2*pi)*exp(-z.^2/2);
>> plot(z,f)
>> xlabel('z')
>> ylabel('frequency')
2.3 (a)
>> t = linspace(5,30,6)
t =
5 10 15 20 25 30
(b)
>> x = linspace(-3,3,7)
x =
-3 -2 -1 0 1 2 3
2.4 (a)
>> v = -2:.75:1
v =
-2.0000 -1.2500 -0.5000 0.2500 1.0000
(b)
>> r = 6:-1:0
r =
6 5 4 3 2 1 0
2.5
>> F = [10 12 15 9 12 16];
>> x = [0.013 0.020 0.009 0.010 0.012 0.010];
>> k = F./x
k =
1.0e+003 *
0.7692 0.6000 1.6667 0.9000 1.0000 1.6000
>> U = .5*k.*x.^2
U =
0.0650 0.1200 0.0675 0.0450 0.0720 0.0800
>> max(U)
ans =
0.1200
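The same element-by-element computation in Python (k = F/x and stored energy U = k x²/2):

```python
F = [10, 12, 15, 9, 12, 16]                        # forces (N)
x = [0.013, 0.020, 0.009, 0.010, 0.012, 0.010]     # deflections (m)

k = [f / d for f, d in zip(F, x)]                  # spring constants k = F/x
U = [0.5 * ki * di ** 2 for ki, di in zip(k, x)]   # potential energy
U_max = max(U)
```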
2.6
>> TF = 32:3.6:93.2;
>> TC = 5/9*(TF-32);
>> rho = 5.5289e-8*TC.^3-8.5016e-6*TC.^2+6.5622e-5*TC+0.99987;
>> plot(TC,rho)
2.7
A =
0.0350 0.0001 10.0000 2.0000
0.0200 0.0002 8.0000 1.0000
0.0150 0.0010 20.0000 1.5000
0.0300 0.0007 24.0000 3.0000
0.0220 0.0003 15.0000 2.5000
>> U = sqrt(A(:,2))./A(:,1).*(A(:,3).*A(:,4)./(A(:,3)+2*A(:,4))).^(2/3)
U =
0.3624
0.6094
2.5167
1.5809
1.1971
2.8
>> t = 10:10:60;
>> c = [3.4 2.6 1.6 1.3 1.0 0.5];
>> tf = 0:70;
>> cf = 4.84*exp(-0.034*tf);
>> plot(t,c,'s',tf,cf,'--')
2.9
>> t = 10:10:60;
>> c = [3.4 2.6 1.6 1.3 1.0 0.5];
>> tf = 0:70;
>> cf = 4.84*exp(-0.034*tf);
>> semilogy(t,c,'s',tf,cf,'--')
2.10
>> v = 10:10:80;
>> F = [25 70 380 550 610 1220 830 1450];
>> vf = 0:100;
>> Ff = 0.2741*vf.^1.9842;
>> plot(v,F,'d',vf,Ff,':')
2.11
>> v = 10:10:80;
>> F = [25 70 380 550 610 1220 830 1450];
>> vf = 0:100;
>> Ff = 0.2741*vf.^1.9842;
>> loglog(v,F,'d',vf,Ff,':')
2.12
>> x = linspace(0,3*pi/2);
>> c = cos(x);
>> cf = 1-x.^2/2+x.^4/factorial(4)-x.^6/factorial(6);
>> plot(x,c,x,cf,'--')
CHAPTER 3
3.1 The M-file can be written as
function sincomp(x,n)
i = 1;
tru = sin(x);
ser = 0;
fprintf('\n');
fprintf('order true value approximation error\n');
while (1)
if i > n, break, end
ser = ser + (-1)^(i - 1) * x^(2*i-1) / factorial(2*i-1);
er = (tru - ser) / tru * 100;
fprintf('%3d %14.10f %14.10f %12.8f\n',i,tru,ser,er);
i = i + 1;
end
>> sincomp(1.5,8)
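An equivalent Python version of the same series algorithm can be used to cross-check the MATLAB output:

```python
import math

def sincomp(x, n):
    """Evaluate sin(x) by its Maclaurin series, term by term (cf. Prob. 3.1)."""
    tru = math.sin(x)
    ser = 0.0
    for i in range(1, n + 1):
        ser += (-1) ** (i - 1) * x ** (2 * i - 1) / math.factorial(2 * i - 1)
        err = (tru - ser) / tru * 100       # true percent relative error
        print(f"{i:3d} {tru:14.10f} {ser:14.10f} {err:12.8f}")
    return ser

approx = sincomp(1.5, 8)
```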
3.2 The M-file can be written as
function futureworth(P, i, n)
nn = 0:n;
F = P*(1+i).^nn;
y = [nn;F];
fprintf('\n year future worth\n');
fprintf('%5d %14.2f\n',y);
>> futureworth(100000,0.08,8)
3.3 The M-file can be written as
function annualpayment(P, i, n)
nn = 1:n;
A = P*i*(1+i).^nn./((1+i).^nn-1);
y = [nn;A];
fprintf('\n year annualpayment\n');
fprintf('%5d %14.2f\n',y);
>> annualpayment(35000,.076,5)
year annualpayment
1 37660.00
2 19519.34
3 13483.26
4 10473.30
5 8673.76
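The annuity formula A = P·i(1+i)^n/((1+i)^n − 1) is easy to sanity-check; a Python sketch reproducing the 5-year payment:

```python
def annual_payment(P, i, n):
    # A = P*i*(1+i)^n / ((1+i)^n - 1): payment that retires principal P
    # at interest rate i in n equal annual installments
    growth = (1 + i) ** n
    return P * i * growth / (growth - 1)

A5 = annual_payment(35000, 0.076, 5)   # 5-year payment from the table
```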
>> avgtemp(5.2,22.1,0,59)
ans =
-10.8418
>> avgtemp(23.1,33.6,180,242)
ans =
33.0398
>> tankvol(1,0.5)
ans =
0.1309
>> tankvol(1,1.2)
ans =
1.6755
>> tankvol(1,3.0)
ans =
7.3304
>> tankvol(1,3.1)
??? Error using ==> tankvol
overtop
This function can be used to evaluate the test cases. For example, for the first case,
>> [r,th]=polar(1,1)
r =
1.4142
th =
90
x      y      r        θ
1      1      1.4142   45
1      −1     1.4142   −45
1      0      1.0000   0
−1     1      1.4142   135
−1     −1     1.4142   −135
−1     0      1.0000   180
0      1      1.0000   90
0      −1     1.0000   −90
0      0      0.0000   0
function polar2(x, y)
r = sqrt(x .^ 2 + y .^ 2);
n = length(x);
for i = 1:n
if x(i) > 0
th(i) = atan(y(i) / x(i));
elseif x(i) < 0
if y(i) > 0
th(i) = atan(y(i) / x(i)) + pi;
elseif y(i) < 0
th(i) = atan(y(i) / x(i)) - pi;
else
th(i) = pi;
end
else
if y(i) > 0
th(i) = pi / 2;
elseif y(i) < 0
th(i) = -pi / 2;
else
th(i) = 0;
end
end
th(i) = th(i) * 180 / pi;
end
ou = [x;y;r;th];
fprintf('\n x y radius angle\n');
fprintf('%8.2f %8.2f %10.4f %10.4f\n',ou);
This function can be used to evaluate the test cases and display the results in tabular form,
>> polar2(x,y)
x        y        radius     angle
1.00     1.00     1.4142     45.0000
1.00    -1.00     1.4142    -45.0000
1.00     0.00     1.0000     0.0000
-1.00    1.00     1.4142    135.0000
-1.00   -1.00     1.4142   -135.0000
-1.00    0.00     1.0000    180.0000
0.00     1.00     1.0000     90.0000
0.00    -1.00     1.0000    -90.0000
0.00     0.00     0.0000     0.0000
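The quadrant logic above is exactly what a two-argument arctangent provides; a compact Python cross-check using math.atan2:

```python
import math

pts = [(1, 1), (1, -1), (1, 0), (-1, 1), (-1, -1), (-1, 0),
       (0, 1), (0, -1), (0, 0)]

results = []
for x, y in pts:
    r = math.hypot(x, y)                     # radius
    th = math.degrees(math.atan2(y, x))      # angle in degrees
    results.append((r, th))
```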
3.8 The M-file can be written as
>> lettergrade(95)
ans =
A
>> lettergrade(45)
ans =
F
>> lettergrade(80)
ans =
B
3.9 The M-file can be written as
function Manning(A)
A(:,5) = sqrt(A(:,2))./A(:,1).*(A(:,3).*A(:,4)./(A(:,3)+2*A(:,4))).^(2/3);
fprintf('\n n S B H U\n');
fprintf('%8.3f %8.4f %10.2f %10.2f %10.4f\n',A');
>> Manning(A)
n S B H U
0.035 0.0001 10.00 2.00 0.3624
0.020 0.0002 8.00 1.00 0.6094
0.015 0.0010 20.00 1.50 2.5167
0.030 0.0007 24.00 3.00 1.5809
0.022 0.0003 15.00 2.50 1.1971
3.10 The M-file can be written as
function beam(x)
xx = linspace(0,x);
n=length(xx);
for i=1:n
uy(i) = -5/6.*(sing(xx(i),0,4)-sing(xx(i),5,4));
uy(i) = uy(i) + 15/6.*sing(xx(i),8,3) + 75*sing(xx(i),7,2);
uy(i) = uy(i) + 57/6.*xx(i)^3 - 238.25.*xx(i);
end
plot(xx,uy)
function s = sing(xxx,a,n)
if xxx > a
s = (xxx - a).^n;
else
s=0;
end
>> beam(10)
3.11 The M-file can be written as
function cylinder(r, L)
h = linspace(0,2*r);
V = (r^2*acos((r-h)./r)-(r-h).*sqrt(2*r*h-h.^2))*L;
plot(h, V)
>> cylinder(2,5)
CHAPTER 4
4.1 The true value can be computed as

f'(0.577) = 6(0.577)/(1 − 3(0.577)²)² = 2,352,911

Using 3-digit arithmetic with chopping:

x = 0.577
x² = 0.332929 → (chopping) → 0.332
3x² = 0.996
1 − 3x² = 0.004

f'(0.577) = 3.46/(0.004)² = 216,250

εt = |(2,352,911 − 216,250)/2,352,911| × 100% = 90.8%

Using 4-digit arithmetic with chopping:

x = 0.577
x² = 0.332929 → (chopping) → 0.3329
3x² = 0.9987
1 − 3x² = 0.0013

f'(0.577) = 3.462/(0.0013)² = 2,048,521

εt = |(2,352,911 − 2,048,521)/2,352,911| × 100% = 12.9%
Although using more significant digits improves the estimate, the error is still considerable.
The problem stems primarily from the fact that we are subtracting two nearly equal numbers
in the denominator. Such subtractive cancellation is worsened by the fact that the
denominator is squared.
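The chopping experiment can be reproduced in Python with a small helper that truncates a value to a given number of significant digits (the helper is illustrative, not from the problem statement):

```python
import math

def chop(x, sig):
    """Truncate (chop) x to `sig` significant decimal digits."""
    if x == 0:
        return 0.0
    e = math.floor(math.log10(abs(x)))
    scale = 10.0 ** (e - sig + 1)
    return math.trunc(x / scale) * scale

x = 0.577
true = 6 * x / (1 - 3 * x ** 2) ** 2            # full double precision
x2 = chop(x * x, 3)                              # 0.332929 -> 0.332
approx = chop(6 * x, 3) / (1 - 3 * x2) ** 2      # 3.46 / 0.004^2
et = (true - approx) / true * 100                # true percent relative error
```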
4.2 (a) Evaluating the polynomial in its original form with 3-digit chopping gives y = 0.12. Because the true value is 0.043053,

εt = |(0.043053 − 0.12)/0.043053| × 100% = 178.7%

(b) Using the nested form with 3-digit chopping,

y = 0.397 − 0.35 = 0.047

εt = |(0.043053 − 0.047)/0.043053| × 100% = 9.2%
Hence, the second form is superior because it tends to minimize round-off error.
4.3 (a) For this case, xi = 0 and h = x. Thus, the Taylor series is

f(x) = f(0) + f'(0)x + (f''(0)/2!)x² + (f'''(0)/3!)x³ + ···

For f(x) = e^x, f(0) and all the derivatives at 0 equal 1, so

f(x) = 1 + x + x²/2! + x³/3! + ···

(b) The true value is e^−1 = 0.367879 and the step size is h = xi+1 − xi = 1 − 0.25 = 0.75. The complete Taylor series to the third-order term is

f(xi+1) = e^−xi − e^−xi h + e^−xi h²/2 − e^−xi h³/3!

Zero-order approximation:

f(1) ≅ 0.778801

εt = |(0.367879 − 0.778801)/0.367879| × 100% = 111.7%

First-order approximation:

f(1) ≅ 0.778801 − 0.778801(0.75) = 0.1947

εt = |(0.367879 − 0.1947)/0.367879| × 100% = 47.1%

Second-order approximation:

f(1) ≅ 0.778801 − 0.778801(0.75) + 0.778801(0.75²/2) = 0.413738

εt = |(0.367879 − 0.413738)/0.367879| × 100% = 12.5%

Third-order approximation:

f(1) ≅ 0.778801 − 0.778801(0.75) + 0.778801(0.75²/2) − 0.778801(0.75³/6) = 0.358978

εt = |(0.367879 − 0.358978)/0.367879| × 100% = 2.42%
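A Python sketch that generates the four approximations and their true percent relative errors:

```python
import math

# Taylor expansion of f(x) = exp(-x) about xi = 0.25, projected to x = 1
xi, h = 0.25, 0.75
base = math.exp(-xi)                    # e^-0.25 = 0.778801
true = math.exp(-1.0)                   # 0.367879

approx = 0.0
errors = []
for n in range(4):                      # zero- through third-order terms
    approx += (-1) ** n * base * h ** n / math.factorial(n)
    errors.append(abs((true - approx) / true) * 100)
```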
4.4 The true value is cos(π/4) = 0.707107.

zero-order:

cos(π/4) ≅ 1

εt = |(0.707107 − 1)/0.707107| × 100% = 41.42%

first-order:

cos(π/4) ≅ 1 − (π/4)²/2 = 0.691575

εt = |(0.707107 − 0.691575)/0.707107| × 100% = 2.19%

εa = |(0.691575 − 1)/0.691575| × 100% = 44.6%

second-order:

cos(π/4) ≅ 0.691575 + (π/4)⁴/24 = 0.707429

εt = |(0.707107 − 0.707429)/0.707107| × 100% = 0.0456%

εa = |(0.707429 − 0.691575)/0.707429| × 100% = 2.24%

third-order:

cos(π/4) ≅ 0.707429 − (π/4)⁶/720 = 0.707103

εt = |(0.707107 − 0.707103)/0.707107| × 100% = 0.0005%

εa = |(0.707103 − 0.707429)/0.707103| × 100% = 0.046%
4.5 The true value is sin(π/4) = 0.707107.

zero-order:

sin(π/4) ≅ π/4 = 0.785398

εt = |(0.707107 − 0.785398)/0.707107| × 100% = 11.1%

first-order:

sin(π/4) ≅ 0.785398 − (π/4)³/6 = 0.704653

εt = |(0.707107 − 0.704653)/0.707107| × 100% = 0.347%

εa = |(0.704653 − 0.785398)/0.704653| × 100% = 11.46%

second-order:

sin(π/4) ≅ 0.704653 + (π/4)⁵/120 = 0.707143

εt = |(0.707107 − 0.707143)/0.707107| × 100% = 0.0051%

εa = |(0.707143 − 0.704653)/0.707143| × 100% = 0.352%
4.6 The Taylor-series predictions of f(2) based on values at x = 1 (h = 1) are as follows. The true value is f(2) = 102.

zero order:

f(2) ≅ f(1) = −62     εt = |(102 − (−62))/102| × 100% = 160.8%

first order:

f(2) ≅ −62 + 70(1) = 8     εt = |(102 − 8)/102| × 100% = 92.1%

second order:

f''(1) = 150(1) − 12 = 138

f(2) ≅ 8 + (138/2)(1)² = 77     εt = |(102 − 77)/102| × 100% = 24.5%

third order:

f'''(1) = 150

f(2) ≅ 77 + (150/6)(1)³ = 102     εt = 0%
Because we are working with a third-order polynomial, the error is zero. This is due to the
fact that cubics have zero fourth and higher derivatives.
4.7 The true value is ln 3 = 1.098612. Expanding f(x) = ln x about x = 1 with h = 2 gives:

zero order:

f(3) ≅ f(1) = 0     εt = |(1.098612 − 0)/1.098612| × 100% = 100%

first order:

f'(x) = 1/x     f'(1) = 1

f(3) ≅ 0 + 1(2) = 2     εt = |(1.098612 − 2)/1.098612| × 100% = 82.05%

second order:

f''(x) = −1/x²     f''(1) = −1

f(3) ≅ 2 − (2²/2) = 0     εt = |(1.098612 − 0)/1.098612| × 100% = 100%

third order:

f'''(x) = 2/x³     f'''(1) = 2

f(3) ≅ 0 + 2(2³/6) = 2.66667     εt = |(1.098612 − 2.66667)/1.098612| × 100% = 142.7%

fourth order:

f''''(x) = −6/x⁴     f''''(1) = −6

f(3) ≅ 2.66667 − 6(2⁴/24) = −1.33333     εt = |(1.098612 − (−1.33333))/1.098612| × 100% = 221.4%

The estimates worsen as more terms are added because x = 3 lies outside the interval of convergence (0 < x ≤ 2) of the Taylor series for ln x about x = 1.
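A quick Python check of the divergence, using the Taylor series of ln x about x = 1 (terms (−1)^(n−1) h^n/n with h = 2):

```python
import math

h = 2.0                       # step from x = 1 to x = 3
true = math.log(3.0)          # 1.098612

partials = [0.0]              # zero-order estimate f(3) ~ f(1) = 0
s = 0.0
for n in range(1, 5):
    s += (-1) ** (n - 1) * h ** n / n   # nth-order Taylor term
    partials.append(s)
# partials -> 0, 2, 0, 2.66667, -1.33333 (moving away from ln 3)
```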
4.8 The underlying function is the cubic f(x) = 25x³ − 6x² + 7x − 88, so the true derivative is f'(2) = 75(2)² − 12(2) + 7 = 283. The points needed to form the finite divided differences are f(1.75) = 39.85938, f(2) = 102, and f(2.25) = 182.1406.

forward:

f'(2) ≅ (182.1406 − 102)/0.25 = 320.5625     Et = 283 − 320.5625 = −37.5625

backward:

f'(2) ≅ (102 − 39.85938)/0.25 = 248.5625     Et = 283 − 248.5625 = 34.4375

centered:

f'(2) ≅ (182.1406 − 39.85938)/0.5 = 284.5625     Et = 283 − 284.5625 = −1.5625

Both the forward and backward differences should have errors with magnitude approximately equal to

|Et| ≈ (|f''(xi)|/2) h

Therefore, since f''(2) = 150(2) − 12 = 288,

|Et| ≈ (288/2)(0.25) = 36

For the central difference,

Et ≈ −(f'''(xi)/6) h² = −(150/6)(0.25)² = −1.5625

which is exact. This occurs because the underlying function is a cubic equation that has zero fourth and higher derivatives.
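These numbers can be regenerated in Python; the cubic below is consistent with every value used above (f(1.75) = 39.85938, f(2) = 102, f(2.25) = 182.1406):

```python
def f(x):
    # cubic consistent with the tabulated values in Prob. 4.8
    return 25 * x**3 - 6 * x**2 + 7 * x - 88

x, h = 2.0, 0.25
true = 75 * x**2 - 12 * x + 7            # analytical f'(2) = 283

forward  = (f(x + h) - f(x)) / h         # 320.5625
backward = (f(x) - f(x - h)) / h         # 248.5625
centered = (f(x + h) - f(x - h)) / (2*h) # 284.5625
```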
4.9 For h = 0.2 and for h = 0.1, both results are exact because the errors are a function of fourth and higher derivatives, which are zero for a 3rd-order polynomial.
4.10 Use εs = 0.5×10^(2–2) = 0.5%. The true value = 1/(1 – 0.1) = 1.11111…

zero-order:

1/(1 − 0.1) ≅ 1

εt = |(1.11111 − 1)/1.11111| × 100% = 10%

first-order:

1/(1 − 0.1) ≅ 1 + 0.1 = 1.1

εt = |(1.11111 − 1.1)/1.11111| × 100% = 1%

εa = |(1.1 − 1)/1.1| × 100% = 9.0909%

second-order:

1/(1 − 0.1) ≅ 1 + 0.1 + 0.01 = 1.11

εt = |(1.11111 − 1.11)/1.11111| × 100% = 0.1%

εa = |(1.11 − 1.1)/1.11| × 100% = 0.9009%

third-order:

1/(1 − 0.1) ≅ 1 + 0.1 + 0.01 + 0.001 = 1.111

εt = |(1.11111 − 1.111)/1.11111| × 100% = 0.01%

εa = |(1.111 − 1.11)/1.111| × 100% = 0.090009%
The approximate error has fallen below 0.5% so the computation can be terminated.
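The iteration with its stopping test can be expressed as a short Python loop:

```python
# Geometric series for 1/(1 - x) at x = 0.1, stopping when the
# approximate relative error falls below es = 0.5%
x, es = 0.1, 0.5
true = 1 / (1 - x)

s, term, n = 0.0, 1.0, 0
while True:
    s += term                   # add the next series term x**n
    n += 1
    ea = abs(term / s) * 100    # approximate percent relative error
    if ea <= es:
        break
    term *= x
```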
4.11 The function and its derivatives are

f(x) = x − 1 − (1/2) sin x

f'(x) = 1 − (1/2) cos x

f''(x) = (1/2) sin x

f'''(x) = (1/2) cos x

f''''(x) = −(1/2) sin x

Using the Taylor series expansion about the base point x = π/2, we obtain the 1st-, 2nd-, 3rd-, and 4th-order Taylor series functions shown below in the MATLAB program (f1, f2, and f4). Note that the 2nd- and 3rd-order Taylor series functions are the same.
From the plots below, we see that the answer is the 4th Order Taylor Series expansion.
x=0:0.001:3.2;
f=x-1-0.5*sin(x);
subplot(2,2,1);
plot(x,f);grid;title('f(x)=x-1-0.5*sin(x)');hold on
f1=x-1.5;
e1=abs(f-f1); %Calculates the absolute value of the difference/error
subplot(2,2,2);
plot(x,e1);grid;title('1st Order Taylor Series Error');
f2=x-1.5+0.25.*((x-0.5*pi).^2);
e2=abs(f-f2);
subplot(2,2,3);
plot(x,e2);grid;title('2nd/3rd Order Taylor Series Error');
f4=x-1.5+0.25.*((x-0.5*pi).^2)-(1/48)*((x-0.5*pi).^4);
e4=abs(f4-f);
subplot(2,2,4);
plot(x,e4);grid;title('4th Order Taylor Series Error');hold off
[Four-panel plot: f(x) = x−1−0.5 sin x, followed by the absolute errors of the 1st-, 2nd/3rd-, and 4th-order Taylor series over 0 ≤ x ≤ 3.2; the 4th-order error is the smallest (below about 0.015).]
4.12
x f(x) f(x-1) f(x+1) f'(x)-Theory f'(x)-Back f'(x)-Cent f'(x)-Forw
-2.000 0.000 -2.891 2.141 10.000 11.563 10.063 8.563
-1.750 2.141 0.000 3.625 7.188 8.563 7.250 5.938
-1.500 3.625 2.141 4.547 4.750 5.938 4.813 3.688
-1.250 4.547 3.625 5.000 2.688 3.688 2.750 1.813
-1.000 5.000 4.547 5.078 1.000 1.813 1.063 0.313
-0.750 5.078 5.000 4.875 -0.313 0.313 -0.250 -0.813
-0.500 4.875 5.078 4.484 -1.250 -0.813 -1.188 -1.563
-0.250 4.484 4.875 4.000 -1.813 -1.563 -1.750 -1.938
0.000 4.000 4.484 3.516 -2.000 -1.938 -1.938 -1.938
0.250 3.516 4.000 3.125 -1.813 -1.938 -1.750 -1.563
0.500 3.125 3.516 2.922 -1.250 -1.563 -1.188 -0.813
0.750 2.922 3.125 3.000 -0.313 -0.813 -0.250 0.313
1.000 3.000 2.922 3.453 1.000 0.313 1.063 1.813
1.250 3.453 3.000 4.375 2.688 1.813 2.750 3.688
1.500 4.375 3.453 5.859 4.750 3.688 4.813 5.938
1.750 5.859 4.375 8.000 7.188 5.938 7.250 8.563
2.000 8.000 5.859 10.891 10.000 8.563 10.063 11.563
[Plot of f'(x) versus x (−2.5 to 2.5) comparing the theoretical derivative with the backward, centered, and forward difference approximations.]
The second-derivative approximations are developed in the same fashion:

x     f(x)  f(x−h) f(x+h) f(x−2h) f(x+2h) f''-Theory f''-Back f''-Cent f''-Forw
0.500 3.125 3.516 2.922 4.000 3.000 3.000 1.500 3.000 4.500
0.750 2.922 3.125 3.000 3.516 3.453 4.500 3.000 4.500 6.000
1.000 3.000 2.922 3.453 3.125 4.375 6.000 4.500 6.000 7.500
1.250 3.453 3.000 4.375 2.922 5.859 7.500 6.000 7.500 9.000
1.500 4.375 3.453 5.859 3.000 8.000 9.000 7.500 9.000 10.500
1.750 5.859 4.375 8.000 3.453 10.891 10.500 9.000 10.500 12.000
2.000 8.000 5.859 10.891 4.375 14.625 12.000 10.500 12.000 13.500
[Plot of f''(x) versus x (−2.5 to 2.5) comparing the theoretical second derivative with the backward, centered, and forward difference approximations.]
4.13
>> macheps
ans =
2.2204e-016
>> eps
ans =
2.2204e-016
CHAPTER 5
5.1 The function to evaluate is

f(cd) = sqrt(gm/cd) tanh(sqrt(g cd/m) t) − v(t)

or, with the given data,

f(cd) = sqrt(9.81(80)/cd) tanh(sqrt(9.81 cd/80)(4)) − 36

The first iteration of bisection is

xr = (0.1 + 0.2)/2 = 0.15

Because f(0.1) f(0.15) < 0, the root is in the first interval and the upper guess is redefined as xu = 0.15. The second iteration is

xr = (0.1 + 0.15)/2 = 0.125

εa = |(0.125 − 0.15)/0.125| × 100% = 20%

Because f(0.1) f(0.125) > 0, the root is in the second interval and the lower guess is redefined as xl = 0.125. The remainder of the iterations are displayed in the following table:

Thus, after six iterations, we obtain a root estimate of 0.1390625 with an approximate error of 1.12%.
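The six bisection iterations can be automated; a Python sketch:

```python
import math

def f(cd, g=9.81, m=80.0, t=4.0, v=36.0):
    # f(cd) = sqrt(g*m/cd)*tanh(sqrt(g*cd/m)*t) - v
    return math.sqrt(g * m / cd) * math.tanh(math.sqrt(g * cd / m) * t) - v

xl, xu = 0.1, 0.2
for _ in range(6):                 # six bisection iterations
    xr = (xl + xu) / 2
    if f(xl) * f(xr) < 0:
        xu = xr                    # root lies in the lower subinterval
    else:
        xl = xr                    # root lies in the upper subinterval
# xr -> 0.1390625 after six iterations, matching the hand calculation
```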
5.2
function root = bisectnew(func,xl,xu,Ead)
% bisectnew(xl,xu,es,maxit):
% uses bisection method to find the root of a function
% with a fixed number of iterations to attain
% a prespecified tolerance
% input:
% func = name of function
% xl, xu = lower and upper guesses
% Ead = (optional) desired tolerance (default = 0.000001)
% output:
% root = real root
The following is a MATLAB session that uses the function to solve Prob. 5.1 with Ea,d =
0.0001.
fcd =
Inline function:
fcd(cd) = sqrt(9.81*80/cd)*tanh(sqrt(9.81*cd/80)*4)-36
ans =
0.14008789062500
5.3 The same function as in Prob. 5.1 is evaluated, but with false position:

f(cd) = sqrt(9.81(80)/cd) tanh(sqrt(9.81 cd/80)(4)) − 36
The first iteration is

xr = 0.2 − (−1.19738)(0.1 − 0.2)/(0.86029 − (−1.19738)) = 0.141809

Therefore, the root is in the first interval and the upper guess is redefined as xu = 0.141809. The second iteration is

xr = 0.141809 − (−0.03521)(0.1 − 0.141809)/(0.86029 − (−0.03521)) = 0.140165

εa = |(0.140165 − 0.141809)/0.140165| × 100% = 1.17%

Therefore, after only two iterations we obtain a root estimate of 0.140165 with an approximate error of 1.17%, which is below the stopping criterion of 2%.
5.4
function root = falsepos(func,xl,xu,es,maxit)
% falsepos(xl,xu,es,maxit):
% uses the false position method to find the root
% of the function func
% input:
% func = name of function
% xl, xu = lower and upper guesses
% es = (optional) stopping criterion (%) (default = 0.001)
% maxit = (optional) maximum allowable iterations (default = 50)
% output:
% root = real root
if func(xl)*func(xu)>0 %if guesses do not bracket a sign change
error('no bracket') %display an error message and terminate
end
% default values
if nargin<5, maxit=50; end
if nargin<4, es=0.001; end
% false position
iter = 0;
xr = xl;
while (1)
xrold = xr;
xr = xu - func(xu)*(xl - xu)/(func(xl) - func(xu));
iter = iter + 1;
if xr ~= 0, ea = abs((xr - xrold)/xr) * 100; end
test = func(xl)*func(xr);
if test < 0
xu = xr;
elseif test > 0
xl = xr;
else
ea = 0;
end
if ea <= es | iter >= maxit, break, end
end
root = xr;
The following is a MATLAB session that uses the function to solve Prob. 5.1:
fcd =
Inline function:
fcd(cd) = sqrt(9.81*80/cd)*tanh(sqrt(9.81*cd/80)*4)-36
ans =
0.14016503741282
5.5 Summing moments for each segment gives, for 3 < x < 6,

M + 100(x − 3)((x − 3)/2) + 150(x − (2/3)(3)) − 265x = 0

(2) M = −50x² + 415x − 150

For 6 < x < 10,

M + 300(x − 4.5) + 150(x − (2/3)(3)) − 265x = 0

(3) M = −185x + 1650

For 10 < x < 12,

M + 100(12 − x) = 0

(4) M = 100x − 1200
Combining Equations:
Because the curve crosses the axis between 6 and 10, use (3).
Set xl = 6 and xu = 10. Bisection then proceeds as follows:

M(xl) = M(6) = 540, M(xu) = M(10) = −200

xr = (6 + 10)/2 = 8, M(8) = 170 → same sign as M(xl), so xr replaces xl
xr = (8 + 10)/2 = 9, M(9) = −15 → replaces xu
xr = (8 + 9)/2 = 8.5, M(8.5) = 77.5 → replaces xl
xr = (8.5 + 9)/2 = 8.75, M(8.75) = 31.25 → replaces xl
xr = (8.75 + 9)/2 = 8.875, M(8.875) = 8.125 → replaces xl
xr = (8.875 + 9)/2 = 8.9375, M(8.9375) = −3.4375 → replaces xu
5.6
>> x=[-1:0.1:6];
>> f=-12-21*x+18*x.^2-2.75*x.^3;
>> plot(x,f)
>> grid
This plot indicates that roots are located at about –0.4, 2.25 and 4.7.
The first iteration of bisection is

xr = (−1 + 0)/2 = −0.5

Therefore, the root is in the second interval and the lower guess is redefined as xl = −0.5. The second iteration is

xr = (−0.5 + 0)/2 = −0.25

εa = |(−0.25 − (−0.5))/(−0.25)| × 100% = 100%
Therefore, the root is in the first interval and the upper guess is redefined as xu = –0.25. The
remainder of the iterations are displayed in the following table:
iteration   xl        f(xl)      xu        f(xu)       xr        f(xr)      εa
4 −0.5 3.34375 −0.375 −1.4487305 −0.4375 0.8630981 14.29%
5 −0.4375 0.863098 −0.375 −1.4487305 −0.40625 −0.3136673 7.69%
6 −0.4375 0.863098 −0.40625 −0.3136673 −0.421875 0.2694712 3.70%
7 −0.42188 0.269471 −0.40625 −0.3136673 −0.414063 −0.0234052 1.89%
8 −0.42188 0.269471 −0.41406 −0.0234052 −0.417969 0.1227057 0.93%
Thus, after eight iterations, we obtain a root estimate of −0.417969 with an approximate error
of 0.93%, which is below the stopping criterion of 1%.
The first iteration of false position is

xr = 0 − (−12)(−1 − 0)/(29.75 − (−12)) = −0.287425

Therefore, the root is in the first interval and the upper guess is redefined as xu = –0.287425.
The second iteration is

xr = −0.287425 − (−4.4117349)(−1 − (−0.287425))/(29.75 − (−4.4117349)) = −0.3794489

εa = |(−0.3794489 − (−0.2874251))/(−0.3794489)| × 100% = 24.25%
Therefore, the root is in the first interval and the upper guess is redefined as xu = –0.379449.
The remainder of the iterations are displayed in the following table:
Therefore, after five iterations we obtain a root estimate of –0.414022 with an approximate
error of 0.45%, which is below the stopping criterion of 1%.
5.7
>> x=[-0.5:0.1:1.5];
>> f=sin(x)-x.^2;
>> plot(x,f)
>> grid
This plot indicates that a nontrivial root (i.e., nonzero) is located at about 0.85.
The first iteration of bisection is

xr = (0.5 + 1)/2 = 0.75

Therefore, the root is in the second interval and the lower guess is redefined as xl = 0.75. The second iteration is

xr = (0.75 + 1)/2 = 0.875

εa = |(0.875 − 0.75)/0.875| × 100% = 14.29%
Because the product is positive, the root is in the second interval and the lower guess is
redefined as xl = 0.875. The remainder of the iterations are displayed in the following table:
Therefore, after five iterations we obtain a root estimate of 0.890625 with an approximate
error of 1.75%, which is below the stopping criterion of 2%.
5.8 (a) A graph of the function indicates a positive real root at approximately x = 1.4.
[Plot of the function for x = −3 to 3; the curve crosses zero near x = 1.4.]
(b) The first iteration of bisection is

xr = (0.5 + 2)/2 = 1.25

Therefore, the root is in the second interval and the lower guess is redefined as xl = 1.25. The second iteration is

xr = (1.25 + 2)/2 = 1.625

εa = |(1.625 − 1.25)/1.625| × 100% = 23.08%
Therefore, the root is in the first interval and the upper guess is redefined as xu = 1.625. The
remainder of the iterations are displayed in the following table:
Thus, after three iterations, we obtain a root estimate of 1.4375 with an approximate error of
13.04%.
(c) The first iteration of false position is

xr = 2 − 0.6862944(0.5 − 2)/(−2.086294 − 0.6862944) = 1.628707

Therefore, the root is in the first interval and the upper guess is redefined as xu = 1.628707.
The second iteration is

xr = 1.628707 − 0.2755734(0.5 − 1.628707)/(−2.086294 − 0.2755734) = 1.4970143

εa = |(1.4970143 − 1.6287074)/1.4970143| × 100% = 8.8%
Therefore, the root is in the first interval and the upper guess is redefined as xu = 1.497014.
The remainder of the iterations are displayed in the following table:
Therefore, after three iterations we obtain a root estimate of 1.4483985 with an approximate
error of 3.36%.
5.9 (a) Equation (5.6) can be used to determine the number of iterations,

n = 1 + log2(Δx⁰/Ea,d) = 1 + log2(35/0.05) = 10.45121

which must be rounded up to 11 iterations.
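A two-line Python check of the iteration count:

```python
import math

# Number of bisection iterations needed to reduce an initial
# bracket of width 35 to an absolute error of Ead = 0.05
dx0, Ead = 35.0, 0.05
n = 1 + math.log2(dx0 / Ead)
n_iter = math.ceil(n)        # round up to guarantee the tolerance
```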
(b) Here is an M-file that evaluates the temperature in oC using 11 iterations of bisection
based on a given value of the oxygen saturation concentration in freshwater:
function TC = TempEval(osf)
% function to evaluate the temperature in degrees C based
% on the oxygen saturation concentration in freshwater (osf).
xl = 0 + 273.15;
xu = 35 + 273.15;
if fTa(xl,osf)*fTa(xu,osf)>0 %if guesses do not bracket
error('no bracket') %display an error message and terminate
end
xr = xl;
for i = 1:11
xrold = xr;
xr = (xl + xu)/2;
if xr ~= 0, ea = abs((xr - xrold)/xr) * 100; end
test = fTa(xl,osf)*fTa(xr,osf);
if test < 0
xu = xr;
elseif test > 0
xl = xr;
else
ea = 0;
end
end
TC = xr - 273.15;
end
>> TempEval(8)
ans =
26.7798
>> TempEval(10)
ans =
15.3979
>> TempEval(14)
ans =
1.5552
The function to evaluate is

f(y) = 1 − (400/(9.81(3y + y²/2)³))(3 + y)
[Plot of f(y) for y = 0 to 2.5; f rises from large negative values (below −40) and crosses zero near y = 1.5.]
The first iteration of bisection is

xr = (0.5 + 2.5)/2 = 1.5

Therefore, the root is in the second interval and the lower guess is redefined as xl = 1.5. The second iteration is

xr = (1.5 + 2.5)/2 = 2

εa = |(2 − 1.5)/2| × 100% = 25%
Therefore, the root is in the first interval and the upper guess is redefined as xu = 2. The
remainder of the iterations are displayed in the following table:
After eight iterations, we obtain a root estimate of 1.5078125 with an approximate error of
0.52%.
The first iteration of false position is

xr = 2.5 − 0.81303(0.5 − 2.5)/(−32.2582 − 0.81303) = 2.45083

Therefore, the root is in the first interval and the upper guess is redefined as xu = 2.45083.
The second iteration is

xr = 2.45083 − 0.79987(0.5 − 2.45083)/(−32.25821 − 0.79987) = 2.40363

εa = |(2.40363 − 2.45083)/2.40363| × 100% = 1.96%
The root is in the first interval and the upper guess is redefined as xu = 2.40363. The
remainder of the iterations are displayed in the following table:
After ten iterations we obtain a root estimate of 2.09077 with an approximate error of 1.59%.
Thus, after ten iterations, the false position method is converging at a very slow pace and is
still far from the root in the vicinity of 1.5 that we detected graphically.
Discussion: This is a classic example of a case where false position performs poorly and is
inferior to bisection. Insight into these results can be gained by examining the plot that was
developed in part (a). This function violates the premise upon which false position is based: if f(xu) is much closer to zero than f(xl), then the root should be closer to xu than to xl (recall Fig. 5.8). Because of the shape of the present function, the opposite is true.
CHAPTER 6
6.1 The function can be set up for fixed-point iteration by solving it for x:

xi+1 = sin(√xi)

Using an initial guess of 0.5, the first iteration is

x1 = sin(√0.5) = 0.649637

εa = |(0.649637 − 0.5)/0.649637| × 100% = 23%

Second iteration:

x2 = sin(√0.649637) = 0.721524

εa = |(0.721524 − 0.649637)/0.721524| × 100% = 9.96%
iteration xi |εa|
0 0.500000
1 0.649637 23.0339%
2 0.721524 9.9632%
3 0.750901 3.9123%
4 0.762097 1.4691%
5 0.766248 0.5418%
6 0.767772 0.1984%
7 0.768329 0.0725%
8 0.768532 0.0265%
9 0.768606 0.0097%
Thus, after nine iterations, the root is estimated to be 0.768606 with an approximate error
of 0.0097%.
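A Python check of the nine iterations:

```python
import math

# Fixed-point iteration x = sin(sqrt(x)) starting from x0 = 0.5
x = 0.5
for _ in range(9):
    x = math.sin(math.sqrt(x))
# x should match the ninth tabulated iterate, 0.768606
```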
6.2 (a) The function can be set up for fixed-point iteration by solving it for x in two different ways. First, it can be solved for the linear x,

xi+1 = (0.9xi² − 2.5)/1.7

Using an initial guess of 5, the first iteration is

x1 = (0.9(5)² − 2.5)/1.7 = 11.76

εa = |(11.76 − 5)/11.76| × 100% = 57.5%

Second iteration:

x2 = (0.9(11.76)² − 2.5)/1.7 = 71.8

εa = |(71.8 − 11.76)/71.8| × 100% = 83.6%

Thus, this formulation is diverging. The equation can instead be solved for the squared term,

xi+1 = sqrt((1.7xi + 2.5)/0.9)

The first iteration is

x1 = sqrt((1.7(5) + 2.5)/0.9) = 3.496

εa = |(3.496 − 5)/3.496| × 100% = 43.0%

Second iteration:

x2 = sqrt((1.7(3.496) + 2.5)/0.9) = 3.0629

εa = |(3.0629 − 3.496)/3.0629| × 100% = 14.14%
iteration xi |εa|
0 5.000000
1 3.496029 43.0194%
2 3.062905 14.1410%
3 2.926306 4.6680%
4 2.881882 1.5415%
5 2.867287 0.5090%
6 2.862475 0.1681%
7 2.860887 0.0555%
8 2.860363 0.0183%
9 2.860190 0.0061%
Thus, after 9 iterations, the root estimate is 2.860190 with an approximate error of
0.0061%. The result can be checked by substituting it back into the original function,
εa = |(3.424658 − 5)/3.424658| × 100% = 46.0%

Second iteration:

εa = |(2.924357 − 3.424658)/2.924357| × 100% = 17.1%
After 5 iterations, the root estimate is 2.860104 with an approximate error of 0.0000%. The
result can be checked by substituting it back into the original function,
6.3 (a)
>> x = linspace(0,4);
>> y = x.^3-6*x.^2+11*x-6.1;
>> plot(x,y)
>> grid
(b) The Newton-Raphson method can be formulated as

xi+1 = xi − (xi³ − 6xi² + 11xi − 6.1)/(3xi² − 12xi + 11)

Using an initial guess of 3.5, the first iteration yields x1 = 3.191304:

εa = |(3.191304 − 3.5)/3.191304| × 100% = 9.673%

Second iteration (x2 = 3.068699):

εa = |(3.068699 − 3.191304)/3.068699| × 100% = 3.995%

Third iteration (x3 = 3.047317):

εa = |(3.047317 − 3.068699)/3.047317| × 100% = 0.702%
(c) For the secant method, the first iteration is

x1 = 3.5 − 1.775(2.5 − 3.5)/(−0.475 − 1.775) = 2.711111

εa = |(2.711111 − 3.5)/2.711111| × 100% = 29.098%

Second iteration:

x2 = 2.711111 − (−0.45152)(3.5 − 2.711111)/(1.775 − (−0.45152)) = 2.871091

εa = |(2.871091 − 2.711111)/2.871091| × 100% = 5.572%

Third iteration:

x3 = 2.871091 − (−0.31011)(2.711111 − 2.871091)/(−0.45152 − (−0.31011)) = 3.221923

εa = |(3.221923 − 2.871091)/3.221923| × 100% = 10.889%
(d) For the modified secant method with δ = 0.02, the first iteration is

x1 = 3.5 − 0.02(3.5)(1.775)/(2.199893 − 1.775) = 3.207573

εa = |(3.207573 − 3.5)/3.207573| × 100% = 9.117%

Second iteration:

x2 = 3.207573 − 0.02(3.207573)(0.453351)/(0.685016 − 0.453351) = 3.082034

εa = |(3.082034 − 3.207573)/3.082034| × 100% = 4.073%

Third iteration:

x3 = 3.082034 − 0.02(3.082034)(0.084809)/(0.252242 − 0.084809) = 3.050812

εa = |(3.050812 − 3.082034)/3.050812| × 100% = 1.023%
(e)
>> a = [1 -6 11 -6.1]
a =
1.0000 -6.0000 11.0000 -6.1000
>> roots(a)
ans =
3.0467
1.8990
1.0544
6.4 (a)
>> x = linspace(0,4);
>> y = 7*sin(x).*exp(-x)-1;
>> plot(x,y)
>> grid
The lowest positive root seems to be at approximately 0.2.
(b) The Newton-Raphson method can be formulated as

xi+1 = xi − (7 sin(xi)e^−xi − 1)/(7e^−xi(cos(xi) − sin(xi)))

Using an initial guess of 0.3, the first iteration yields x1 = 0.144376:

εa = |(0.144376 − 0.3)/0.144376| × 100% = 107.8%

Second iteration (x2 = 0.169409):

εa = |(0.169409 − 0.144376)/0.169409| × 100% = 14.776%

Third iteration (x3 = 0.170179):

εa = |(0.170179 − 0.169409)/0.170179| × 100% = 0.453%
(c) For the secant method, the first iteration is

x1 = 0.3 − 0.532487(0.4 − 0.3)/(0.827244 − 0.532487) = 0.119347

εa = |(0.119347 − 0.3)/0.119347| × 100% = 151.4%

Second iteration:

x2 = 0.119347 − (−0.26032)(0.3 − 0.119347)/(0.532487 − (−0.26032)) = 0.178664

εa = |(0.178664 − 0.119347)/0.178664| × 100% = 33.2%

Third iteration:

x3 = 0.178664 − 0.04047(0.119347 − 0.178664)/(−0.26032 − 0.04047) = 0.170683

εa = |(0.170683 − 0.178664)/0.170683| × 100% = 4.68%
(d) For the modified secant method with δ = 0.01, the first iteration is

x1 = 0.3 − 0.01(0.3)(0.532487)/(0.542708 − 0.532487) = 0.143698

εa = |(0.143698 − 0.3)/0.143698| × 100% = 108.8%

Second iteration:

x2 = 0.143698 − 0.01(0.143698)(−0.13175)/(−0.12439 − (−0.13175)) = 0.169412

εa = |(0.169412 − 0.143698)/0.169412| × 100% = 15.18%

Third iteration:

x3 = 0.169412 − 0.01(0.169412)(−0.00371)/(0.004456 − (−0.00371)) = 0.170181

εa = |(0.170181 − 0.169412)/0.170181| × 100% = 0.452%
6.5 For the Newton-Raphson method with an initial guess of 0.5825, the first iteration is

x1 = 0.5825 − 50.06217/(−29.1466) = 2.300098

εa = |(2.300098 − 0.5825)/2.300098| × 100% = 74.675%

Second iteration:

x2 = 2.300098 − (−21.546)/0.245468 = 90.07506

εa = |(90.07506 − 2.300098)/90.07506| × 100% = 97.446%
Thus, the result seems to be diverging. However, the computation eventually settles down
and converges (at a very slow rate) on a root at x = 6.5. The iterations can be summarized
as
For the modified secant method with δ = 0.05, the first iteration is

x1 = 0.5825 − 0.05(0.5825)(50.06217)/(49.15724 − 50.06217) = 2.193735

εa = |(2.193735 − 0.5825)/2.193735| × 100% = 73.447%

Second iteration:

x2 = 2.193735 − 0.05(2.193735)(−21.1969)/(−21.5448 − (−21.1969)) = −4.48891

εa = |(−4.48891 − 2.193735)/(−4.48891)| × 100% = 148.87%
− 4.48891
Again, the result seems to be diverging. However, the computation eventually settles down
and converges on a root at x = −0.2. The iterations can be summarized as
Explanation of results: The results are explained by looking at a plot of the function. The
guess of 0.5825 is located at a point where the function is relatively flat. Therefore, the first
iteration results in a prediction of 2.3 for Newton-Raphson and 2.193 for the secant method.
At these points the function is very flat and hence, the Newton-Raphson method jumps to a very high value (90.075), whereas the modified secant method goes in the opposite direction to a negative value (−4.49). Thereafter, the methods slowly converge on the nearest roots.
[Plot of the function for x = −2 to 8 (y from −60 to 60), with annotations marking the first modified secant and Newton-Raphson iterates.]
6.6
function root = secant(func,xrold,xr,es,maxit)
% secant(func,xrold,xr,es,maxit):
% uses secant method to find the root of a function
% input:
% func = name of function
% xrold, xr = initial guesses
% es = (optional) stopping criterion (%)
% maxit = (optional) maximum allowable iterations
% output:
% root = real root
% Secant method
iter = 0;
while (1)
  xrn = xr - func(xr)*(xrold - xr)/(func(xrold) - func(xr));
  iter = iter + 1;
  if xrn ~= 0, ea = abs((xrn - xr)/xrn) * 100; end
  if ea <= es | iter >= maxit, break, end
  xrold = xr;
  xr = xrn;
end
root = xrn;
>> secant(inline('x^3-6*x^2+11*x-6.1'),2.5,3.5)
ans =
3.0467
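For readers working outside MATLAB, the same two-point secant iteration can be sketched in Python (the function and guesses mirror the test problem above; the convergence handling is simplified):

```python
def secant(f, x0, x1, es=1e-7, maxit=50):
    """Secant method: approximate f'(x) with a finite difference
    through the two most recent iterates."""
    for _ in range(maxit):
        x2 = x1 - f(x1) * (x0 - x1) / (f(x0) - f(x1))
        if x2 != 0 and abs((x2 - x1) / x2) * 100 < es:
            return x2
        x0, x1 = x1, x2
    return x2

f = lambda x: x**3 - 6*x**2 + 11*x - 6.1
root = secant(f, 2.5, 3.5)   # about 3.0467, as in the MATLAB run
```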
6.7
function root = modsec(func,xr,delta,es,maxit)
% modsec(func,xr,delta,es,maxit):
% uses the modified secant method
% to find the root of a function
% input:
% func = name of function
% xr = initial guess
% delta = perturbation fraction
% es = (optional) stopping criterion (%)
% maxit = (optional) maximum allowable iterations
% output:
% root = real root
% Modified secant method
iter = 0;
while (1)
  xrold = xr;
  xr = xr - delta*xr*func(xr)/(func(xr+delta*xr)-func(xr));
  iter = iter + 1;
  if xr ~= 0, ea = abs((xr - xrold)/xr) * 100; end
  if ea <= es | iter >= maxit, break, end
end
root = xr;
Test by solving Prob. 6.3:
>> modsec(inline('x^3-6*x^2+11*x-6.1'),3.5,0.02)
ans =
3.0467
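A Python sketch of the modified secant method (the same update rule as the M-file; names are illustrative):

```python
def modsec(f, xr, delta=0.02, es=1e-7, maxit=50):
    """Modified secant: perturb the single current guess by the
    fraction delta to estimate the derivative."""
    for _ in range(maxit):
        xrold = xr
        xr = xr - delta * xr * f(xr) / (f(xr + delta * xr) - f(xr))
        if xr != 0 and abs((xr - xrold) / xr) * 100 < es:
            break
    return xr

f = lambda x: x**3 - 6*x**2 + 11*x - 6.1
root = modsec(f, 3.5, 0.02)   # about 3.0467
```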
f(m) = sqrt(gm/cd) tanh(sqrt(gcd/m) t) − v
Note that
d[tanh u]/dx = sech²(u) du/dx
Therefore,
df(m)/dm = (1/2) sqrt(g/(m cd)) tanh(sqrt(gcd/m) t) − sqrt(gm/cd) (t/2)(sqrt(gcd)/m^(3/2)) sech²(sqrt(gcd/m) t)
The terms premultiplying the tanh and sech² can be simplified to yield the final result
df(m)/dm = (1/2) sqrt(g/(m cd)) tanh(sqrt(gcd/m) t) − (gt/(2m)) sech²(sqrt(gcd/m) t)
x_{i+1} = x_i − (−2 + 6x_i − 4x_i² + 0.5x_i³)/(6 − 8x_i + 1.5x_i²)
iteration      xi          f(xi)        f'(xi)        εa
5          8.996489    92.30526     55.43331     30.524%
6          7.331330    24.01802     27.97196     22.713%
7          6.472684    4.842169     17.06199     13.266%
8          6.188886    0.448386     13.94237     4.586%
9          6.156726    0.005448     13.6041      0.522%
10         6.156325    8.39E−07     13.59991     0.007%
Thus, after an initial jump, the computation eventually settles down and converges on a
root at x = 6.156325.
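The iteration can be reproduced with a short Python sketch (the starting guess of 7 is an assumption chosen inside the basin of the x = 6.156325 root; the tabulated run above starts elsewhere):

```python
f  = lambda x: -2 + 6*x - 4*x**2 + 0.5*x**3
df = lambda x: 6 - 8*x + 1.5*x**2

x = 7.0                        # assumed initial guess
for _ in range(100):
    xold = x
    x = x - f(x) / df(x)       # Newton-Raphson update
    if x != 0 and abs((x - xold) / x) < 1e-12:
        break
```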
This time the solution jumps to an extremely large negative value. The computation
eventually converges at a very slow rate on a root at x = 0.474572.
Explanation of results: The results are explained by looking at a plot of the function. Both
guesses are in a region where the function is relatively flat. Because the two guesses are on
opposite sides of a minimum, both are sent to different regions that are far from the initial
guesses. Thereafter, the methods slowly converge on the nearest roots.
(Plot of f(x) over 0 ≤ x ≤ 6; both initial guesses lie on the flat region surrounding the minimum.)
x = sqrt(a)
which can be reexpressed as the roots problem
f(x) = x² − a
with the derivative
f′(x) = 2x
These functions can be substituted into the Newton-Raphson equation (Eq. 6.6),
x_{i+1} = x_i − (x_i² − a)/(2x_i)
which can be combined over a common denominator to give
x_{i+1} = (x_i + a/x_i)/2
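A minimal Python sketch of this divide-and-average iteration:

```python
def newton_sqrt(a, x=1.0, tol=1e-12):
    """Square root by Newton-Raphson: x <- (x + a/x)/2."""
    while abs(x * x - a) > tol * max(a, 1.0):
        x = 0.5 * (x + a / x)
    return x
```

Starting from the default guess of 1, the iterate converges quadratically for any positive a.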
x_{i+1} = x_i − tanh(x_i² − 9)/(2x_i sech²(x_i² − 9))
Using an initial guess of 3.2, the iterations proceed as
iteration      xi          f(xi)        f'(xi)       εa
2          3.670197    0.999738     0.003844     25.431%
3          −256.413                              101.431%
(b) The solution diverges from its real root of x = 3. Due to the concavity of the slope, the
next iteration will always diverge. The following graph illustrates how the divergence
evolves.
(Plot of f(x) = tanh(x² − 9) near x = 3; the concavity throws each new iterate farther from the root.)
As depicted below, the iterations involve regions of the curve that have flat slopes. Hence,
the solution is cast far from the roots in the vicinity of the original guess.
(Plot of the function over −5 ≤ x ≤ 20; the iterates land on regions of the curve with flat slopes.)
f(x) = (x/(1 − x)) sqrt(6/(2 + x)) − 0.05
MATLAB can be used to develop a plot that indicates that a root occurs in the vicinity of x
= 0.03.
>> f = inline('x./(1-x).*sqrt(6./(2+x))-0.05')
f =
Inline function:
f(x) = x./(1-x).*sqrt(6./(2+x))-0.05
>> x = linspace(0,.2);
>> y = f(x);
>> plot(x,y)
>> format long
>> fzero(f,0.03)
ans =
0.02824944114847
>> a = 0.427*R^2*Tc^2.5/pc
a =
12.55778319740302
>> b = 0.0866*R*Tc/pc
b =
0.00186261539130
f(v) = 65,000 − 0.518(233.15)/(v − 0.0018626) + 12.557783/(v(v + 0.0018626) sqrt(233.15))
MATLAB can be used to generate a plot of the function and to solve for the root. One way
to do this is to develop an M-file for the function,
function y = fvol(v)
R = 0.518;pc = 4600;Tc = 191;
a = 0.427*R^2*Tc^2.5/pc;
b = 0.0866*R*Tc/pc;
T = 273.15-40;p = 65000;
y = p - R*T./(v-b)+a./(v.*(v+b)*sqrt(T));
>> v = linspace(0.002,0.004);
>> fv = fvol(v);
>> plot(v,fv)
>> grid
Thus, a root is located at about 0.0028. The fzero function can be used to refine this
estimate,
vroot =
0.00280840865703
mass = V/v = 3/0.0028084 = 1068.317 kg
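The same root can be located without fzero; a Python sketch using bisection on the bracket [0.002, 0.004] read off the plot (the constants mirror the M-file):

```python
import math

def fvol(v):
    """Redlich-Kwong pressure residual, same constants as fvol.m."""
    R, pc, Tc = 0.518, 4600.0, 191.0
    a = 0.427 * R**2 * Tc**2.5 / pc
    b = 0.0866 * R * Tc / pc
    T, p = 273.15 - 40.0, 65000.0
    return p - R * T / (v - b) + a / (v * (v + b) * math.sqrt(T))

lo, hi = 0.002, 0.004                # bracket from the plot
for _ in range(60):                  # bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if fvol(lo) * fvol(mid) > 0 else (lo, mid)
v = 0.5 * (lo + hi)                  # about 0.0028084
```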
f(h) = V − [r² cos⁻¹((r − h)/r) − (r − h) sqrt(2rh − h²)] L
function y = fh(h,r,L,V)
y = V - (r^2*acos((r-h)/r)-(r-h)*sqrt(2*r*h-h^2))*L;
h =
0.74001521805594
6.16 (a) The function to be evaluated is
f(TA) = 10 − (TA/10) cosh(500/TA) + TA/10
TA =
1.266324360399887e+003
>> x = linspace(-50,100);
>> w = 10;y0 = 5;
>> y = TA/w*cosh(w*x/TA) + y0 - TA/w;
>> plot(x,y),grid
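The root can be checked in Python with bisection (the bracket [1000, 2000] is an assumption based on the magnitude of the fzero result):

```python
import math

# residual of the cable equation at the anchor point
f = lambda TA: 10 - TA / 10 * math.cosh(500 / TA) + TA / 10

lo, hi = 1000.0, 2000.0
for _ in range(60):                  # bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
TA = 0.5 * (lo + hi)                 # about 1266.32
```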
f (t ) = 9e −t sin(2πt ) − 3.5
>> t = linspace(0,2);
>> y = 9*exp(-t) .* sin(2*pi*t) - 3.5;
>> plot(t,y),grid
Thus, there appear to be two roots at approximately 0.1 and 0.4. The fzero function can be
used to obtain refined estimates,
t =
0.06835432096851
t =
0.40134369265980
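Both roots can be confirmed in Python by bisecting the two brackets suggested by the plot (the bracket endpoints are assumptions):

```python
import math

f = lambda t: 9 * math.exp(-t) * math.sin(2 * math.pi * t) - 3.5

def bisect(f, lo, hi, n=60):
    """Simple bisection; assumes f changes sign on [lo, hi]."""
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

t1 = bisect(f, 0.0, 0.25)    # about 0.0684
t2 = bisect(f, 0.25, 0.5)    # about 0.4013
```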
f(ω) = 1/Z − sqrt(1/R² + (ωC − 1/(ωL))²)
Substituting the parameter values gives
f(ω) = 0.01 − sqrt(1/50625 + (0.6×10⁻⁶ω − 2/ω)²)
ans =
220.0202
>> format long
>> fzero('2*40*x^(5/2)/5+0.5*40000*x^2-95*9.8*x-95*9.8*0.43',1)
ans =
0.16662477900186
6.20 If the height at which the throw leaves the right fielder's arm is defined as y = 0, the y at 90
m will be –0.8. Therefore, the function to be evaluated is
f(θ0) = 0.8 + 90 tan(πθ0/180) − 44.1/cos²(πθ0/180)
Note that the angle is expressed in degrees. First, MATLAB can be used to plot this
function versus various angles.
Roots seem to occur at about 40° and 50°. These estimates can be refined with the fzero
function,
theta =
37.8380
theta =
51.6527
Thus, the right fielder can throw at two different angles to attain the same result.
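A Python sketch locating both angles by bisection (the brackets [30, 45] and [45, 60] degrees are assumptions taken from the plot):

```python
import math

def f(theta0):
    """Height residual at x = 90 m for launch angle theta0 (degrees)."""
    rad = math.pi * theta0 / 180
    return 0.8 + 90 * math.tan(rad) - 44.1 / math.cos(rad) ** 2

def bisect(f, lo, hi, n=60):
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

theta1 = bisect(f, 30.0, 45.0)   # about 37.84 degrees
theta2 = bisect(f, 45.0, 60.0)   # about 51.65 degrees
```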
f(h) = πRh² − (π/3)h³ − V
Because this equation is easy to differentiate, the Newton-Raphson is the best choice to
achieve results efficiently. It can be formulated as
x_{i+1} = x_i − (πRx_i² − (π/3)x_i³ − V)/(2πRx_i − πx_i²)
Substituting R = 10 and V = 1000 gives
x_{i+1} = x_i − (π(10)x_i² − (π/3)x_i³ − 1000)/(2π(10)x_i − πx_i²)
Thus, after only three iterations, the root is determined to be 6.355008 with an approximate
relative error of 0.017%.
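A Python sketch of this Newton-Raphson iteration (the starting guess of 5 is an assumption; the manual does not state its initial value):

```python
import math

R, V = 10.0, 1000.0
f  = lambda h: math.pi * R * h**2 - math.pi / 3 * h**3 - V
df = lambda h: 2 * math.pi * R * h - math.pi * h**2

h = 5.0                          # assumed initial guess
for _ in range(20):
    h = h - f(h) / df(h)         # Newton-Raphson update
```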
6.22
>> r = [-2 6 1 -4 8];
>> a = poly(r)
a =
1 -9 -20 204 208 -384
>> polyval(a,1)
ans =
0
b =
1 -4 -12
q =
1 -5 -28 32
r =
0 0 0 0 0 0
>> x = roots(q)
x =
8.0000
-4.0000
1.0000
>> a = conv(q,b)
a =
1 -9 -20 204 208 -384
>> x = roots(a)
x =
8.0000
6.0000
-4.0000
-2.0000
1.0000
>> a = poly(x)
a =
1.0000 -9.0000 -20.0000 204.0000 208.0000 -384.0000
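The poly direction of this round trip amounts to multiplying out the factors (x − r); a pure-Python sketch:

```python
def poly_from_roots(roots):
    """Coefficients (highest power first) of the monic polynomial
    with the given roots, built by repeated multiplication by (x - r)."""
    coeffs = [1.0]
    for r in roots:
        coeffs = coeffs + [0.0]              # multiply by x
        for i in range(len(coeffs) - 1, 0, -1):
            coeffs[i] -= r * coeffs[i - 1]   # subtract r * (old poly)
    return coeffs

c = poly_from_roots([-2, 6, 1, -4, 8])
# c matches MATLAB's poly output: [1, -9, -20, 204, 208, -384]
```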
6.23
>> a = [1 9 26 24];
>> r = roots(a)
r =
-4.0000
-3.0000
-2.0000
r =
-6.0000
-5.0000
-3.0000
-1.0000
G(s) = ((s + 4)(s + 3)(s + 2)) / ((s + 6)(s + 5)(s + 3)(s + 1))
CHAPTER 7
7.1
>> A = rand(3)
A =
0.9501 0.4860 0.4565
0.2311 0.8913 0.0185
0.6068 0.7621 0.8214
>> Aug = [A eye(size(A))]
(1) [E] + [B] = [5 8 13; 8 3 9; 5 0 9]
(2) [E] − [B] = [3 −2 1; −6 1 3; −3 0 −1]
(3) [A] + [F] = undefined
(4) 5[F] = [20 15 35; 5 10 30; 5 0 20]
(5) [A] × [B] = undefined
(6) [B] × [A] = [54 68; 36 45; 24 29]
(9) [D]T = [5 2; 4 1; 3 7; 6 5]
(10) I × [B] = [4 3 7; 1 2 6; 1 0 4]
[−7 3 0; 0 4 7; −4 3 −7]{x1; x2; x3} = {10; −30; 40}
x =
-1.0811
0.8108
-4.7490
>> AT = A'
AT =
-7 0 -4
3 4 3
0 7 -7
>> AI = inv(A)
AI =
-0.1892 0.0811 0.0811
-0.1081 0.1892 0.1892
0.0618 0.0347 -0.1081
7.4
[X] × [Y] = [23 −8; 55 56; −17 24]
[X] × [Z] = [12 8; −30 52; −23 2]
[Y] × [Z] = [4 8; −47 34]
[Z] × [Y] = [6 16; −20 32]
2kx1 − kx2 = m1g
−kx1 + 2kx2 − kx3 = m2g
−kx2 + kx3 = m3g
Substituting the parameter values, the result can be written in matrix form as
[20 −10 0; −10 20 −10; 0 −10 10]{x1; x2; x3} = {19.62; 29.43; 24.525}
A MATLAB session can be used to obtain the solution for the displacements
x =
7.3575
12.7530
15.2055
The parameters can be substituted and the result written in matrix form as
[6 0 −1 0 0; −3 3 0 0 0; 0 −1 9 0 0; 0 −1 −8 11 −2; −3 −1 0 0 4]{c1; c2; c3; c4; c5} = {50; 0; 160; 0; 0}
>> Q = [6 0 -1 0 0;
-3 3 0 0 0;
0 -1 9 0 0;
0 -1 -8 11 -2;
-3 -1 0 0 4];
>> Qc = [50;0;160;0;0];
>> c = Q\Qc
c =
11.5094
11.5094
19.0566
16.9983
11.5094
[0.866 0 −0.5 0 0 0; 0.5 0 0.866 0 0 0; −0.866 −1 0 −1 0 0; −0.5 0 0 0 −1 0; 0 1 0.5 0 0 0; 0 0 −0.866 0 0 −1]{F1; F2; F3; H2; V2; V3} = {0; −1000; 0; 0; 0; 0}
MATLAB can then be used to solve for the forces and reactions,
F =
-500.0220
433.0191
-866.0381
-0.0000
250.0110
749.9890
Therefore,
[1 1 1 0 0 0; 0 −1 0 1 −1 0; 0 0 −1 0 0 1; 0 0 0 0 1 −1; 0 10 −10 0 −15 −5; 5 −10 0 −20 0 0]{i12; i52; i32; i65; i54; i43} = {0; 0; 0; 0; 0; 200}
>> A = [1 1 1 0 0 0 ;
0 -1 0 1 -1 0;
0 0 -1 0 0 1;
0 0 0 0 1 -1;
0 10 -10 0 -15 -5;
5 -10 0 -20 0 0];
>> b = [0 0 0 0 0 200]';
>> i = A\b
i =
6.1538
-4.6154
-1.5385
-6.1538
-1.5385
-1.5385
7.9
kmx =
0.9000
0.0000
-0.1000
CHAPTER 8
8.1 The flop counts for the tridiagonal algorithm in Fig. 8.6 can be summarized as
Thus, as n increases, the effort is much, much less than for a full matrix solved with Gauss
elimination, which is proportional to n³.
8.2 The equations can be expressed in a format that is compatible with graphing x2 versus x1:
x 2 = 0.5 x1 + 3
x2 = −(1/6)x1 + 34/6
(Plot of the two lines; they intersect at x1 = 4, x2 = 5.)
Thus, the solution is x1 = 4, x2 = 5. The solution can be checked by substituting it back into
the equations to give
4( 4) − 8(5) = 16 − 40 = −24
4 + 6(5) = 4 + 30 = 34
8.3 (a) The equations can be expressed in a format that is compatible with graphing x2 versus x1:
x 2 = 0.11x1 + 12
x 2 = 0.114943x1 + 10
which can be plotted as
(Plot of the two nearly parallel lines over 0 ≤ x1 ≤ 1000.)
Thus, the solution is approximately x1 = 400, x2 = 60. The solution can be checked by
substituting it back into the equations to give
(b) Because the lines have very similar slopes, you would expect that the system would be
ill-conditioned.
D = 0·det[2 −1; −2 0] − (−3)·det[1 −1; 5 0] + 7·det[1 2; 5 −2] = 0 + 15 − 84 = −69
x1 = det[2 −3 7; 3 2 −1; 2 −2 0]/D = −68/(−69) = 0.9855
x2 = det[0 2 7; 1 3 −1; 5 2 0]/D = −101/(−69) = 1.4638
x3 = det[0 −3 2; 1 2 3; 5 −2 2]/D = −63/(−69) = 0.9130
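Cramer's rule as applied above can be sketched in Python for the same 3×3 system:

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)

A = [[0, -3, 7], [1, 2, -1], [5, -2, 0]]
b = [2, 3, 2]
D = det3(A)                       # -69
x = []
for j in range(3):                # replace column j with b
    Aj = [row[:] for row in A]
    for i in range(3):
        Aj[i][j] = b[i]
    x.append(det3(Aj) / D)
# x is about [0.9855, 1.4638, 0.9130]
```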
5 x1 − 2 x 2 =2
x1 + 2 x 2 − x3 = 3
− 3x 2 + 7 x3 = 2
Multiply pivot row 1 by 1/5 and subtract the result from the second row to eliminate the a21
term.
5 x1 − 2 x 2 =2
2.4 x 2 − x 3 = 2.6
−3x2 + 7x3 = 2
Pivoting: switch the second and third rows because |−3| > |2.4|,
5x1 − 2x2 = 2
−3x2 + 7x3 = 2
2.4x2 − x3 = 2.6
Multiply pivot row 2 by 2.4/(–3) and subtract the result from the third row to eliminate the
a32 term.
5 x1 − 2 x 2 =2
− 3 x 2 + 7 x3 = 2
4.6 x3 = 4.2
x3 = 4.2/4.6 = 0.913043
x2 = (2 − 7(0.913043))/(−3) = 1.463768
x1 = (2 + 2(1.463768))/5 = 0.985507
(d)
− 3(1.463768) + 7(0.913043) = 2
0.985507 + 2(1.463768) − (0.913043) = 3
5(0.985507) − 2(1.463768) = 2
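The hand elimination can be automated; a Python sketch of Gauss elimination with partial pivoting, applied to the same system:

```python
def gauss_solve(A, b):
    """Gauss elimination with partial pivoting plus back substitution."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))  # pivot row
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

x = gauss_solve([[5, -2, 0], [1, 2, -1], [0, -3, 7]], [2, 3, 2])
# x is about [0.985507, 1.463768, 0.913043]
```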
ans =
0.8600
Prob. 8.4:
ans =
-69
8.6 (a) The equations can be expressed in a format that is compatible with graphing x2 versus x1:
x 2 = 0.5 x1 + 9.5
x 2 = 0.51x1 + 9.4
The resulting plot indicates that the intersection of the lines is difficult to detect:
(Plot of the two nearly coincident lines over 5 ≤ x1 ≤ 20.)
Only when the plot is zoomed is it at all possible to discern that the solution seems to lie at
about x1 = 10 and x2 = 14.5.
(Zoomed plot: the intersection lies near x1 = 10, x2 = 14.5.)
(c) Because the lines have very similar slopes and the determinant is so small, you would
expect that the system would be ill-conditioned.
(d) Multiply the first equation by 1.02/0.5 and subtract the result from the second equation
to eliminate the x1 term from the second equation,
0.5x1 − x2 = −9.5
0.04x2 = 0.58
x2 = 0.58/0.04 = 14.5
This result can be substituted into the first equation, which can be solved for
x1 = (−9.5 + 14.5)/0.5 = 10
(e) Multiply the first equation by 1.02/0.52 and subtract the result from the second equation
to eliminate the x1 term from the second equation,
0.52x1 − x2 = −9.5
−0.03846x2 = −0.16538
The second equation can be solved for
x2 = −0.16538/(−0.03846) = 4.3
This result can be substituted into the first equation, which can be solved for
x1 = (−9.5 + 4.3)/0.52 = −10
Interpretation: The fact that a slight change in one of the coefficients results in a radically
different solution illustrates that this system is very ill-conditioned.
8.7 (a) Multiply the first equation by –3/10 and subtract the result from the second equation to
eliminate the x1 term from the second equation. Then, multiply the first equation by 1/10
and subtract the result from the third equation to eliminate the x1 term from the third
equation.
10x1 + 2x2 − x3 = 27
−5.4x2 + 1.7x3 = −53.4
0.8x2 + 5.1x3 = −24.2
Multiply the second equation by 0.8/(–5.4) and subtract the result from the third equation to
eliminate the x2 term from the third equation,
10x1 + 2x2 − x3 = 27
−5.4x2 + 1.7x3 = −53.4
5.351852x3 = −32.11111
Back substitution can then be applied:
x3 = −32.11111/5.351852 = −6
x2 = (−53.4 − 1.7(−6))/(−5.4) = 8
x1 = (27 − 6 − 2(8))/10 = 0.5
(b) Check:
10(0.5) + 2(8) − (−6) = 27
−3(0.5) − 6(8) + 2(−6) = −61.5
0.5 + 8 + 5(−6) = −21.5
8.8 (a) Pivoting is necessary, so switch the first and third rows,
− 8 x1 + x 2 − 2 x3 = −20
− 3 x1 − x 2 + 7 x3 = −34
2 x1 − 6 x 2 − x3 = −38
Multiply the first equation by –3/(–8) and subtract the result from the second equation to
eliminate the a21 term from the second equation. Then, multiply the first equation by 2/(–8)
and subtract the result from the third equation to eliminate the a31 term from the third
equation.
−8x1 + x2 − 2x3 = −20
−1.375x2 + 7.75x3 = −26.5
−5.75x2 − 1.5x3 = −43
Pivot again by switching the second and third rows:
−8x1 + x2 − 2x3 = −20
−5.75x2 − 1.5x3 = −43
−1.375x2 + 7.75x3 = −26.5
Multiply pivot row 2 by −1.375/(−5.75) and subtract the result from the third row to
eliminate the a32 term.
−8x1 + x2 − 2x3 = −20
−5.75x2 − 1.5x3 = −43
8.108696x3 = −16.21739
x3 = −16.21739/8.108696 = −2
x2 = (−43 + 1.5(−2))/(−5.75) = 8
x1 = (−20 + 2(−2) − 1(8))/(−8) = 4
(b) Check:
2(4) − 6(8) − (−2) = −38
−3(4) − (8) + 7(−2) = −34
−8(4) + (8) − 2(−2) = −20
8.9 Multiply the first equation by –0.4/0.8 and subtract the result from the second equation to
eliminate the x1 term from the second equation.
[0.8 −0.4 0; 0 0.6 −0.4; 0 −0.4 0.8]{x1; x2; x3} = {41; 45.5; 105}
Multiply pivot row 2 by −0.4/0.6 and subtract the result from the third row to eliminate the
x2 term.
[0.8 −0.4 0; 0 0.6 −0.4; 0 0 0.533333]{x1; x2; x3} = {41; 45.5; 135.3333}
x3 = 135.3333/0.533333 = 253.75
x2 = (45.5 − (−0.4)253.75)/0.6 = 245
x1 = (41 − (−0.4)245)/0.8 = 173.75
(b) Check:
0.8(173.75) − 0.4(245) = 41
Q21c 2 + 400 = Q12 c1 + Q13 c1
or collecting terms
Substituting the values for the flows and expressing in matrix form
[120 −20 0; −80 80 0; −40 −60 120]{c1; c2; c3} = {400; 0; 200}
c =
4.0000
4.0000
5.0000
8.11 Equations for the amount of sand, fine gravel and coarse gravel can be written as
where xi = the amount of gravel taken from pit i. MATLAB can be used to solve this
system of equations for
x =
1.0e+003 *
7.0000
4.4000
7.6000
Therefore, we take 7000, 4400 and 7600 m3 from pits 1, 2 and 3 respectively.
8.12 Substituting the parameter values the heat-balance equations can be written for the four
nodes as
− 40 + 2.2T1 − T2 = 4
− T1 + 2.2T2 − T3 = 4
− T2 + 2.2T3 − T4 = 4
− T3 + 2.2T4 − 200 = 4
[2.2 −1 0 0; −1 2.2 −1 0; 0 −1 2.2 −1; 0 0 −1 2.2]{T1; T2; T3; T4} = {44; 4; 4; 204}
T =
50.7866
67.7306
94.2206
135.5548
CHAPTER 9
9.1 The flop counts for LU decomposition can be determined in a similar fashion as was done
for Gauss elimination. The major difference is that the elimination is only implemented for
the left-hand side coefficients. Thus, for every iteration of the inner loop, there are n
multiplications/divisions and n – 1 addition/subtractions. The computations can be
summarized as
Σ_{k=1}^{n−1} (n − k)(n − k) = Σ_{k=1}^{n−1} [n² − 2nk + k²] = n³/3 − n²/2 + n/6
and
Σ_{k=1}^{n−1} (n − k)(n + 1 − k) = n³/3 − n/3
Both sums are n³/3 + O(n²), so the total effort for the elimination phase is
(n³/3 − n²/2 + n/6) + (n³/3 − n/3) = 2n³/3 − n²/2 − n/6
For forward substitution, the numbers of multiplications and subtractions are the same and
equal to
Σ_{i=1}^{n−1} i = (n − 1)n/2 = n²/2 − n/2
Back substitution is the same as for Gauss elimination: n2/2 – n/2 subtractions and n2/2 +
n/2 multiplications/divisions. The entire number of flops can be summarized as
The total number of flops is identical to that obtained with standard Gauss elimination.
[ L][U ] = [ A] (9.7)
⎡ 10 2 − 1⎤
⎢− 3 − 6 2 ⎥
⎢⎣ 1 1 5 ⎥⎦
Multiply the first row by f21 = –3/10 = –0.3 and subtract the result from the second row to
eliminate the a21 term. Then, multiply the first row by f31 = 1/10 = 0.1 and subtract the
result from the third row to eliminate the a31 term. The result is
⎡10 2 − 1⎤
⎢ 0 − 5.4 1.7 ⎥
⎢⎣ 0 0.8 5.1⎥⎦
Multiply the second row by f32 = 0.8/(–5.4) = –0.148148 and subtract the result from the
third row to eliminate the a32 term.
⎡10 2 −1 ⎤
⎢ 0 − 5 .4 1.7 ⎥
⎢⎣ 0 0 5.351852⎥⎦
⎡ 1 0 0⎤ ⎡10 2 −1 ⎤
[ L]{U ] = ⎢− 0.3 1 0 ⎥ ⎢ 0 − 5 .4 1.7 ⎥
⎢⎣ 0.1 − 0.148148 1⎥⎦ ⎢⎣ 0 0 5.351852⎥⎦
Multiplying [L] and [U] yields the original matrix as verified by the following MATLAB
session,
A =
10.0000 2.0000 -1.0000
-3.0000 -6.0000 2.0000
1.0000 1.0000 5.0000
⎡ 1 0 0⎤ ⎡10 2 −1 ⎤
[ L]{U ] = ⎢− 0.3 1 0 ⎥ ⎢ 0 − 5 .4 1.7 ⎥
⎢⎣ 0.1 − 0.148148 1⎥⎦ ⎢⎣ 0 0 5.351852⎥⎦
Forward substitution:
[1 0 0; −0.3 1 0; 0.1 −0.148148 1]{d1; d2; d3} = {27; −61.5; −21.5}
d1 = 27
d2 = −61.5 + 0.3(27) = −53.4
d3 = −21.5 − 0.1(27) + 0.148148(−53.4) = −32.11111
Back substitution:
[10 2 −1; 0 −5.4 1.7; 0 0 5.351852]{x1; x2; x3} = {27; −53.4; −32.11111}
x3 = −32.11111/5.351852 = −6
x2 = (−53.4 − 1.7(−6))/(−5.4) = 8
x1 = (27 − 2(8) − (−1)(−6))/10 = 0.5
[1 0 0; −0.3 1 0; 0.1 −0.148148 1]{d1; d2; d3} = {12; 18; −6}
d1 = 12
d2 = 18 + 0.3(12) = 21.6
d3 = −6 − 0.1(12) − (−0.148148)(21.6) = −4
Back substitution:
[10 2 −1; 0 −5.4 1.7; 0 0 5.351852]{x1; x2; x3} = {12; 21.6; −4}
x3 = −4/5.351852 = −0.747405
x2 = (21.6 − 1.7(−0.747405))/(−5.4) = −4.235294
x1 = (12 − 2(−4.235294) − (−1)(−0.747405))/10 = 1.972318
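The payoff of LU decomposition — factor once, then forward/back substitute for each new right-hand side — can be sketched in Python (Doolittle form, no pivoting, mirroring the hand calculation):

```python
def lu_nopivot(A):
    """Doolittle LU without pivoting: A = L U, unit diagonal on L."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Forward substitution for d, then back substitution for x."""
    n = len(b)
    d = [0.0] * n
    for i in range(n):
        d[i] = b[i] - sum(L[i][j] * d[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (d[i] - s) / U[i][i]
    return x

A = [[10, 2, -1], [-3, -6, 2], [1, 1, 5]]
L, U = lu_nopivot(A)
x1 = lu_solve(L, U, [27, -61.5, -21.5])   # first right-hand side
x2 = lu_solve(L, U, [12, 18, -6])         # reuse the same factors
```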
⎡ 2 − 6 − 1⎤ ⎧⎪ − 38⎫⎪
[ A] = ⎢− 3 − 1 7 ⎥ {b} = ⎨− 34 ⎬
⎢⎣− 8 1 − 2⎥⎦ ⎪⎩− 20⎪⎭
Partial pivot:
⎡− 8 1 − 2 ⎤ ⎧⎪− 20⎫⎪
[ A] = ⎢− 3 − 1 7 ⎥ {b} = ⎨− 34 ⎬
⎢⎣ 2 − 6 − 1 ⎥⎦ ⎪⎩− 38⎪⎭
Forward eliminate
⎡− 8 1 −2 ⎤
[ A] = ⎢ 0 − 1.375 7.75 ⎥
⎢⎣ 0 − 5.75 − 1.5 ⎥⎦
Pivot again
⎡− 8 1 −2 ⎤ ⎧⎪− 20⎫⎪
[ A] = ⎢ 0 − 5.75 − 1.5 ⎥ {b} = ⎨− 38⎬
⎢⎣ 0 − 1.375 7.75 ⎥⎦ ⎪⎩− 34 ⎪⎭
Forward eliminate
⎡− 8 1 −2 ⎤
[ A] = ⎢ 0 − 5.75 − 1.5 ⎥
⎢⎣ 0 0 8.108696 ⎥⎦
⎡ 1 0 0⎤ ⎡ − 8 1 −2 ⎤
[ L]{U ] = ⎢− 0.25 1 0⎥ ⎢ 0 − 5.75 − 1 .5 ⎥
⎣⎢ 0.375 0.23913 1⎦⎥ ⎢⎣ 0 0 8.108696⎦⎥
Forward elimination
⎡ 1 0 0⎤ ⎧⎪− 20⎫⎪
{d } = ⎢− 0.25 1 0⎥ ⎨ − 38⎬
⎢⎣ 0.375 0.23913 1⎥⎦ ⎪⎩− 34 ⎪⎭
d1 = −20
d2 = −38 + 0.25(−20) = −43
d3 = −34 − 0.375(−20) − 0.23913(−43) = −16.21739
Back substitution:
⎡− 8 1 − 2 ⎤ ⎧⎪ x1 ⎫⎪ ⎧⎪ − 20 ⎫⎪
⎢ 0 − 5.75 − 1.5 ⎥ ⎨ x 2 ⎬ = ⎨ − 43 ⎬
⎣⎢ 0 0 8.108696⎦⎥ ⎪⎩ x3 ⎪⎭ ⎪⎩− 16.21739⎪⎭
x3 = −16.21739/8.108696 = −2
x2 = (−43 − (−1.5)(−2))/(−5.75) = 8
x1 = (−20 − 1(8) − (−2)(−2))/(−8) = 4
[m,n] = size(A);
if m~=n, error('Matrix A must be square'); end
L = eye(n);
U = A;
% forward elimination
for k = 1:n-1
  for i = k+1:n
    L(i,k) = U(i,k)/U(k,k);
    U(i,k) = 0;
    U(i,k+1:n) = U(i,k+1:n)-L(i,k)*U(k,k+1:n);
  end
end
L =
1.0000 0 0
-0.3000 1.0000 0
0.1000 -0.1481 1.0000
U =
10.0000 2.0000 -1.0000
0 -5.4000 1.7000
0 0 5.3519
Verification that [L][U] = [A].
>> L*U
ans =
10.0000 2.0000 -1.0000
-3.0000 -6.0000 2.0000
1.0000 1.0000 5.0000
>> [L,U]=lu(A)
L =
1.0000 0 0
-0.3000 1.0000 0
0.1000 -0.1481 1.0000
U =
10.0000 2.0000 -1.0000
0 -5.4000 1.7000
0 0 5.3519
9.7 The result of Example 9.4 can be substituted into Eq. (9.14) to give
a 21 = 2.44949 × 6.123724 = 15
9.8 (a) For the first row (i = 1), Eq. (9.15) is employed to compute
u11 = sqrt(a11) = sqrt(8) = 2.828427
u12 = a12/u11 = 20/2.828427 = 7.071068
u13 = a13/u11 = 15/2.828427 = 5.303301
For the second row (i = 2),
u22 = sqrt(a22 − u12²) = sqrt(80 − (7.071068)²) = 5.477226
u23 = (a23 − u12u13)/u22 = (50 − 7.071068(5.303301))/5.477226 = 2.282177
For the third row (i = 3),
u33 = sqrt(a33 − u13² − u23²) = sqrt(60 − (5.303301)² − (2.282177)²) = 5.163978
The validity of this decomposition can be verified by substituting it and its transpose into
Eq. (9.14) to see if their product yields the original matrix [A]. This is left for an exercise.
(b)
>> A = [8 20 15;20 80 50;15 50 60];
>> U = chol(A)
U =
2.8284 7.0711 5.3033
0 5.4772 2.2822
0 0 5.1640
>> b = [50;250;100];
>> d=U'\b
d =
17.6777
22.8218
-8.8756
>> x=U\d
x =
-2.7344
4.8828
-1.7187
function U = cholesky(A)
% cholesky(A):
% cholesky decomposition without pivoting.
% input:
% A = coefficient matrix
% output:
% U = upper triangular matrix
[m,n] = size(A);
if m~=n, error('Matrix A must be square'); end
for i = 1:n
  s = 0;
  for k = 1:i-1
    s = s + U(k, i) ^ 2;
  end
  U(i, i) = sqrt(A(i, i) - s);
  for j = i + 1:n
    s = 0;
    for k = 1:i-1
      s = s + U(k, i) * U(k, j);
    end
    U(i, j) = (A(i, j) - s) / U(i, i);
  end
end
ans =
2.8284 7.0711 5.3033
0 5.4772 2.2822
0 0 5.1640
>> U = chol(A)
U =
2.8284 7.0711 5.3033
0 5.4772 2.2822
0 0 5.1640
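A Python sketch of the same Cholesky recurrence (Eq. 9.15), producing the upper factor U with A equal to the product of U-transpose and U:

```python
import math

def cholesky_upper(A):
    """Cholesky decomposition returning the upper triangular factor."""
    n = len(A)
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        s = sum(U[k][i] ** 2 for k in range(i))
        U[i][i] = math.sqrt(A[i][i] - s)
        for j in range(i + 1, n):
            s = sum(U[k][i] * U[k][j] for k in range(i))
            U[i][j] = (A[i][j] - s) / U[i][i]
    return U

U = cholesky_upper([[8, 20, 15], [20, 80, 50], [15, 50, 60]])
# U matches the hand computation and MATLAB's chol
```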
CHAPTER 10
10.1 First, compute the LU decomposition. The matrix to be evaluated is
⎡ 10 2 − 1⎤
⎢− 3 − 6 2 ⎥
⎢⎣ 1 1 5 ⎥⎦
Multiply the first row by f21 = –3/10 = –0.3 and subtract the result from the second row to
eliminate the a21 term. Then, multiply the first row by f31 = 1/10 = 0.1 and subtract the
result from the third row to eliminate the a31 term. The result is
⎡10 2 − 1⎤
⎢ 0 − 5.4 1.7 ⎥
⎢⎣ 0 0.8 5.1⎥⎦
Multiply the second row by f32 = 0.8/(–5.4) = –0.148148 and subtract the result from the
third row to eliminate the a32 term.
⎡10 2 −1 ⎤
⎢ 0 − 5 .4 1.7 ⎥
⎢⎣ 0 0 5.351852⎥⎦
⎡ 1 0 0⎤ ⎡10 2 −1 ⎤
[ L]{U ] = ⎢− 0.3 1 0 ⎥ ⎢ 0 − 5 .4 1.7 ⎥
⎢⎣ 0.1 − 0.148148 1⎥⎦ ⎢⎣ 0 0 5.351852⎥⎦
The first column of the matrix inverse can be determined by performing the forward-
substitution solution procedure with a unit vector (with 1 in the first row) as the right-hand-
side vector. Thus, the lower-triangular system, can be set up as,
⎡ 1 0 0⎤ ⎧⎪ d1 ⎫⎪ ⎧⎪1⎫⎪
⎢− 0.3 1 0⎥ ⎨d 2 ⎬ = ⎨0⎬
⎢⎣ 0.1 − 0.148148 1⎥⎦ ⎪⎩ d 3 ⎪⎭ ⎩⎪0⎪⎭
and solved with forward substitution for {d}T = ⎣1 0.3 − 0.055556⎦ . This vector can then
be used as the right-hand side of the upper triangular system,
⎡10 2 − 1 ⎤ ⎧⎪ x1 ⎫⎪ ⎧⎪ 1 ⎫⎪
⎢0 − 5 .4 1.7 ⎥ ⎨ x 2 ⎬ = ⎨ 0 .3 ⎬
⎢⎣ 0 0 5.351852⎥⎦ ⎪⎩ x3 ⎪⎭ ⎪⎩− 0.055556⎪⎭
which can be solved by back substitution for the first column of the matrix inverse,
⎡ 0.110727 0 0⎤
[ A] −1 = ⎢− 0.058824 0 0⎥
⎢⎣ − 0.010381 0 0⎥⎦
⎡ 1 0 0⎤ ⎧⎪ d1 ⎫⎪ ⎧⎪0⎫⎪
⎢− 0.3 1 0⎥ ⎨d 2 ⎬ = ⎨1⎬
⎢⎣ 0.1 − 0.148148 1⎥⎦ ⎪⎩ d 3 ⎪⎭ ⎪⎩0⎪⎭
This can be solved with forward substitution for {d}T = ⎣0 1 0.148148⎦ , and the results
are used with [U] to determine {x} by back substitution to generate the second column of
the matrix inverse,
⎡ 0.110727 0.038062 0⎤
[ A] −1 = ⎢− 0.058824 − 0.176471 0⎥
⎢⎣ − 0.010381 0.027682 0⎥⎦
Finally, the same procedures can be implemented with {b}T = ⎣0 0 1⎦ to solve for {d}T =
⎣0 0 1⎦ , and the results are used with [U] to determine {x} by back substitution to
generate the third column of the matrix inverse,
[A]⁻¹ = [0.110727 0.038062 0.006920; −0.058824 −0.176471 0.058824; −0.010381 0.027682 0.186851]
This result can be checked by multiplying it times the original matrix to give the identity
matrix. The following MATLAB session can be used to implement this check,
ans =
1.0000 -0.0000 -0.0000
0.0000 1.0000 -0.0000
-0.0000 0.0000 1.0000
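The column-by-column inversion procedure can be sketched in Python: factor [A] once, then forward/back substitute against each unit vector:

```python
def lu_inverse(A):
    """Matrix inverse via LU: one factorization, one solve per column."""
    n = len(A)
    # Doolittle LU without pivoting
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    inv = [[0.0] * n for _ in range(n)]
    for col in range(n):
        b = [1.0 if i == col else 0.0 for i in range(n)]
        d = [0.0] * n                     # forward substitution
        for i in range(n):
            d[i] = b[i] - sum(L[i][j] * d[j] for j in range(i))
        x = [0.0] * n                     # back substitution
        for i in range(n - 1, -1, -1):
            s = sum(U[i][j] * x[j] for j in range(i + 1, n))
            x[i] = (d[i] - s) / U[i][i]
        for i in range(n):
            inv[i][col] = x[i]
    return inv

AI = lu_inverse([[10, 2, -1], [-3, -6, 2], [1, 1, 5]])
```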
⎡− 8 1 − 2 ⎤ ⎧⎪ − 38⎫⎪
[ A] = ⎢ 2 − 6 − 1 ⎥ {b} = ⎨− 34 ⎬
⎢⎣− 3 − 1 7 ⎥⎦ ⎪⎩− 20⎪⎭
Forward eliminate
⎡− 8 1 −2 ⎤
[ A] = ⎢ 0 − 5.75 − 1.5 ⎥
⎢⎣ 0 − 1.375 7.75 ⎥⎦
Forward eliminate
⎡− 8 1 −2 ⎤
[ A] = ⎢ 0 − 5.75 − 1.5 ⎥
⎢⎣ 0 0 8.108696 ⎥⎦
⎡ 1 0 0⎤ ⎡ − 8 1 −2 ⎤
[ L]{U ] = ⎢− 0.25 1 0⎥ ⎢ 0 − 5.75 − 1.5 ⎥
⎢⎣ 0.375 0.23913 1⎥⎦ ⎢⎣ 0 0 8.108696⎥⎦
The first column of the matrix inverse can be determined by performing the forward-
substitution solution procedure with a unit vector (with 1 in the first row) as the right-hand-
side vector. Thus, the lower-triangular system, can be set up as,
⎡ 1 0 0⎤ ⎧⎪ d 1 ⎫⎪ ⎧⎪1⎫⎪
⎢− 0.25 1 0⎥ ⎨d 2 ⎬ = ⎨0⎬
⎢⎣ 0.375 0.23913 1⎥⎦ ⎪⎩ d 3 ⎪⎭ ⎪⎩0⎭⎪
and solved with forward substitution for {d}T = ⎣1 0.25 − 0.434783⎦ . This vector can then
be used as the right-hand side of the upper triangular system,
⎡− 8 1 − 2 ⎤ ⎧⎪ x1 ⎫⎪ ⎧⎪ 1 ⎫⎪
⎢ 0 − 5.75 − 1.5 ⎥ ⎨ x 2 ⎬ = ⎨ 0.25 ⎬
⎢⎣ 0 0 8.108696⎥⎦ ⎪⎩ x3 ⎪⎭ ⎪⎩− 0.434783⎪⎭
which can be solved by back substitution for the first column of the matrix inverse,
⎡ - 0.115282 0 0⎤
[ A] −1 = ⎢ − 0.029491 0 0⎥
⎢⎣− 0.053619 0 0⎥⎦
⎡ 1 0 0⎤ ⎧⎪ d 1 ⎫⎪ ⎧⎪0⎫⎪
⎢− 0.25 1 0⎥ ⎨d 2 ⎬ = ⎨1⎬
⎢⎣ 0.375 0.23913 1⎥⎦ ⎪⎩ d 3 ⎪⎭ ⎪⎩0⎪⎭
This can be solved with forward substitution for {d}T = ⎣0 1 − 0.23913⎦ , and the results
are used with [U] to determine {x} by back substitution to generate the second column of
the matrix inverse,
⎡ - 0.115282 − 0.013405 0⎤
[ A] −1 = ⎢ − 0.029491 − 0.16622 0⎥
⎢⎣− 0.053619 − 0.029491 0⎥⎦
Finally, the same procedures can be implemented with {b}T = ⎣0 0 1⎦ to solve for {d}T =
⎣0 0 1⎦ , and the results are used with [U] to determine {x} by back substitution to
generate the third column of the matrix inverse,
[A]⁻¹ = [−0.115282 −0.013405 −0.034852; −0.029491 −0.166220 −0.032172; −0.053619 −0.029491 0.123324]
(a)
>> A = [15 -3 -1;-3 18 -6;-4 -1 12];
>> format long
>> AI = inv(A)
AI =
0.07253886010363 0.01278065630397 0.01243523316062
0.02072538860104 0.06079447322971 0.03212435233161
0.02590673575130 0.00932642487047 0.09015544041451
(b)
>> b = [3800 1200 2350]';
>> format short
>> c = AI*b
c =
320.2073
227.2021
321.5026
(c) The impact of a load to reactor 3 on the concentration of reactor 1 is specified by the
element a13⁻¹ = 0.0124352. Therefore, the increase in the mass input to reactor 3 needed to
induce a 10 g/m3 rise in the concentration of reactor 1 can be computed as
Δb3 = 10/0.0124352 = 804.1667 g/d
Δc3 = 0.0259067(500) + 0.009326(250) = 12.9534 + 2.3316 = 15.285 g/m3
10.4 The mass balances can be written and the result written in matrix form as
[6 0 −1 0 0; −3 3 0 0 0; 0 −1 9 0 0; 0 −1 −8 11 −2; −3 −1 0 0 4]{c1; c2; c3; c4; c5} = {Q01c01; 0; Q03c03; 0; 0}
ans =
0.1698 0.0063 0.0189 0 0
0.1698 0.3396 0.0189 0 0
0.0189 0.0377 0.1132 0 0
0.0600 0.0746 0.0875 0.0909 0.0455
0.1698 0.0896 0.0189 0 0.2500
The concentration in reactor 5 can be computed using the elements of the matrix inverse as
in,
c5 = a51⁻¹Q01c01 + a53⁻¹Q03c03 = 0.1698(5)20 + 0.0189(8)50 = 16.981 + 7.547 = 24.528
[0.866 0 −0.5 0 0 0; 0.5 0 0.866 0 0 0; −0.866 −1 0 −1 0 0; −0.5 0 0 0 −1 0; 0 1 0.5 0 0 0; 0 0 −0.866 0 0 −1]{F1; F2; F3; H2; V2; V3} = {F1,h; F1,v; F2,h; F2,v; F3,h; F3,v}
AI =
0.8660 0.5000 0 0 0 0
0.2500 -0.4330 0 0 1.0000 0
-0.5000 0.8660 0 0 0 0
-1.0000 0.0000 -1.0000 0 -1.0000 0
-0.4330 -0.2500 0 -1.0000 0 0
0.4330 -0.7500 0 0 0 -1.0000
The forces in the members resulting from the two forces can be computed using the
elements of the matrix inverse as in,
F1 = a12⁻¹F1,v + a15⁻¹F3,h = 0.5(−2000) + 0(−500) = −1000 + 0 = −1000
F2 = a22⁻¹F1,v + a25⁻¹F3,h = −0.433(−2000) + 1(−500) = 866 − 500 = 366
F3 = a32⁻¹F1,v + a35⁻¹F3,h = 0.866(−2000) + 0(−500) = −1732 + 0 = −1732
10.6 The matrix can be scaled by dividing each row by the element with the largest absolute
value
A =
>> norm(A,'fro')
ans =
1.9920
>> norm(A,1)
ans =
2.8000
>> norm(A,inf)
ans =
2
ans =
13
>> norm(A,inf)
ans =
11
Prob. 10.3:
>> norm(A,'fro')
ans =
27.6586
>> norm(A,inf)
ans =
27
ans =
8.8963e+016
>> cond(A,inf)
ans =
3.2922e+018
⎡16 4 1⎤
⎢ 4 2 1⎥
⎢⎣49 7 1⎥⎦
The row-sum norm of the inverse is |−2.3333| + 2.8 + 0.5333 = 5.6667. Therefore, the
condition number is
ans =
323.0000
ans =
216.1294
Frobenius norm:
>> cond(A,'fro')
ans =
217.4843
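The row-sum condition number computed above can be reproduced in Python; a sketch for the [16 4 1; 4 2 1; 49 7 1] matrix, forming the inverse from the adjugate:

```python
def inv3(M):
    """Inverse of a 3x3 matrix via the adjugate and determinant."""
    (a, b, c), (d, e, f), (g, h, i) = M
    det = a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[v / det for v in row] for row in adj]

A = [[16, 4, 1], [4, 2, 1], [49, 7, 1]]
rowsum = lambda M: max(sum(abs(v) for v in row) for row in M)
cond = rowsum(A) * rowsum(inv3(A))   # row-sum (infinity-norm) condition number
# cond is 57 * 5.6667 = 323, matching the MATLAB result
```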
>> A = hilb(10);
>> N = cond(A)
N =
1.6025e+013
The digits of precision that could be lost due to ill-conditioning can be calculated as
>> c = log10(N)
c =
13.2048
Thus, about 13 digits could be suspect. A right-hand side vector can be developed
corresponding to a solution of ones:
b =
2.9290
2.0199
1.6032
1.3468
1.1682
1.0349
0.9307
0.8467
0.7773
0.7188
>> x = A\b
x =
1.0000
1.0000
1.0000
1.0000
0.9999
1.0003
0.9995
1.0005
0.9997
1.0001
>> e=max(abs(x-1))
e =
5.3822e-004
>> e=mean(abs(x-1))
e =
1.8662e-004
Thus, some of the results are accurate to only about 3 to 4 significant digits. Because
MATLAB represents numbers to 15 significant digits, this means that about 11 to 12 digits
are suspect.
>> x1 = 4;x2=2;x3=7;x4=10;x5=3;x6=5;
>> A = [x1^5 x1^4 x1^3 x1^2 x1 1;x2^5 x2^4 x2^3 x2^2 x2 1;x3^5 x3^4
x3^3 x3^2 x3 1;x4^5 x4^4 x4^3 x4^2 x4 1;x5^5 x5^4 x5^3 x5^2 x5 1;x6^5
x6^4 x6^3 x6^2 x6 1]
A =
1024 256 64 16 4 1
32 16 8 4 2 1
16807 2401 343 49 7 1
100000 10000 1000 100 10 1
243 81 27 9 3 1
3125 625 125 25 5 1
>> N = cond(A)
N =
1.4492e+007
The digits of precision that could be lost due to ill-conditioning can be calculated as
>> c = log10(N)
c =
7.1611
Thus, about 7 digits might be suspect. A right-hand side vector can be developed
corresponding to a solution of ones:
>> b=[sum(A(1,:));sum(A(2,:));sum(A(3,:));sum(A(4,:));sum(A(5,:));
sum(A(6,:))]
b =
1365
63
19608
111111
364
3906
x =
1.00000000000000
0.99999999999991
1.00000000000075
0.99999999999703
1.00000000000542
0.99999999999630
>> e = max(abs(x-1))
e =
5.420774940034789e-012
>> e = mean(abs(x-1))
e =
2.154110223528960e-012
Some of the results are accurate to about 12 significant digits. Because MATLAB
represents numbers to about 15 significant digits, this means that about 3 digits are suspect.
Thus, for this case, the condition number tends to exaggerate the impact of ill-conditioning.
CHAPTER 11
11.1 (a) The first iteration can be implemented as
x1 = (41 + 0.4x2)/0.8 = (41 + 0.4(0))/0.8 = 51.25
x2 = (25 + 0.4x1 + 0.4x3)/0.8 = (25 + 0.4(51.25) + 0.4(0))/0.8 = 56.875
x3 = (105 + 0.4x2)/0.8 = (105 + 0.4(56.875))/0.8 = 159.6875
Second iteration:
x1 = (41 + 0.4(56.875))/0.8 = 79.6875
x2 = (25 + 0.4(79.6875) + 0.4(159.6875))/0.8 = 150.9375
x3 = (105 + 0.4(150.9375))/0.8 = 206.7188
The approximate errors are
εa,1 = (79.6875 − 51.25)/79.6875 × 100% = 35.69%
εa,2 = (150.9375 − 56.875)/150.9375 × 100% = 62.32%
εa,3 = (206.7188 − 159.6875)/206.7188 × 100% = 22.75%
The remainder of the calculation proceeds until all the errors fall below the stopping
criterion of 5%. The entire computation can be summarized as
iteration  unknown    value       εa        maximum εa
3          x3       230.2344    10.21%     37.11%
4          x1       150.2344    15.65%
           x2       221.4844    10.62%
           x3       241.9922    4.86%      15.65%
5          x1       161.9922    7.26%
           x2       233.2422    5.04%
           x3       247.8711    2.37%      7.26%
6          x1       167.8711    3.50%
           x2       239.1211    2.46%
           x3       250.8105    1.17%      3.50%
Thus, after 6 iterations, the maximum error is 3.5% and we arrive at the result: x1 =
167.8711, x2 = 239.1211 and x3 = 250.8105.
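A Python sketch of the Gauss-Seidel sweep used above (same 3×3 system and zero starting vector, but iterated to a tighter tolerance than the 5% stopping criterion; the fully converged values agree with the elimination solution of Prob. 8.9):

```python
def gauss_seidel(A, b, es=1e-7, maxit=200):
    """Gauss-Seidel: sweep the rows, always using the newest values."""
    n = len(b)
    x = [0.0] * n
    for _ in range(maxit):
        xold = x[:]
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        if all(x[i] != 0 and abs((x[i] - xold[i]) / x[i]) < es
               for i in range(n)):
            break
    return x

A = [[0.8, -0.4, 0.0], [-0.4, 0.8, -0.4], [0.0, -0.4, 0.8]]
x = gauss_seidel(A, [41, 25, 105])   # converges to (173.75, 245, 253.75)
```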
(b) The same computation can be developed with relaxation where λ = 1.2.
First iteration:
x1 = (41 + 0.4x2)/0.8 = (41 + 0.4(0))/0.8 = 51.25
Relaxation yields: x1 = 1.2(51.25) − 0.2(0) = 61.5
x2 = (25 + 0.4(61.5) + 0.4(0))/0.8 = 62
Relaxation yields: x2 = 1.2(62) − 0.2(0) = 74.4
x3 = (105 + 0.4(74.4))/0.8 = 168.45
Relaxation yields: x3 = 1.2(168.45) − 0.2(0) = 202.14
Second iteration:
x1 = (41 + 0.4(74.4))/0.8 = 88.45
Relaxation yields: x1 = 1.2(88.45) − 0.2(61.5) = 93.84
x2 = (25 + 0.4(93.84) + 0.4(202.14))/0.8 = 179.24
Relaxation yields: x2 = 1.2(179.24) − 0.2(74.4) = 200.208
x3 = (105 + 0.4(200.208))/0.8 = 231.354
Relaxation yields: x 3 = 1.2( 231.354) − 0.2(202.14) = 237.1968
εa,1 = (93.84 − 61.5)/93.84 × 100% = 34.46%
εa,2 = (200.208 − 74.4)/200.208 × 100% = 62.84%
εa,3 = (237.1968 − 202.14)/237.1968 × 100% = 14.78%
The remainder of the calculation proceeds until all the errors fall below the stopping
criterion of 5%. The entire computation can be summarized as
Thus, relaxation speeds up convergence. After 6 iterations, the maximum error is 4.997%
and we arrive at the result: x1 = 171.423, x2 = 244.389 and x3 = 253.622.
x1 = (27 − 2x2 + x3)/10 = (27 − 2(0) + 0)/10 = 2.7
x2 = (−61.5 + 3x1 − 2x3)/(−6) = (−61.5 + 3(2.7) − 2(0))/(−6) = 8.9
x3 = (−21.5 − x1 − x2)/5 = (−21.5 − 2.7 − 8.9)/5 = −6.62
Second iteration:
x1 = (27 − 2(8.9) − 6.62)/10 = 0.258
x2 = (−61.5 + 3(0.258) − 2(−6.62))/(−6) = 7.914333
x3 = (−21.5 − 0.258 − 7.914333)/5 = −5.934467
εa,1 = (0.258 − 2.7)/0.258 × 100% = 947%
εa,2 = (7.914333 − 8.9)/7.914333 × 100% = 12.45%
εa,3 = (−5.934467 − (−6.62))/(−5.934467) × 100% = 11.55%
The remainder of the calculation proceeds until all the errors fall below the stopping
criterion of 5%. The entire computation can be summarized as
Thus, after 5 iterations, the maximum error is 0.59% and we arrive at the result: x1 =
0.500253, x2 = 8.000112 and x3 = −6.00007.
x1 = (27 − 2x2 + x3)/10 = (27 − 2(0) + 0)/10 = 2.7
x2 = (−61.5 + 3x1 − 2x3)/(−6) = (−61.5 + 3(0) − 2(0))/(−6) = 10.25
x3 = (−21.5 − x1 − x2)/5 = (−21.5 − 0 − 0)/5 = −4.3
Second iteration:
x1 = (27 − 2(10.25) − 4.3)/10 = 0.22
x2 = (−61.5 + 3(2.7) − 2(−4.3))/(−6) = 7.466667
x3 = (−21.5 − 2.7 − 10.25)/5 = −6.89
εa,1 = (0.22 − 2.7)/0.22 × 100% = 1127%
εa,2 = (7.466667 − 10.25)/7.466667 × 100% = 37.28%
εa,3 = (−6.89 − (−4.3))/(−6.89) × 100% = 37.59%
The remainder of the calculation proceeds until all the errors fall below the stopping
criterion of 5%. The entire computation can be summarized as
iteration  unknown    value       εa       maximum εa
5          x3       -6.0186     0.77%     10.92%
6          x1       0.501047    1.47%
           x2       7.99695     0.14%
           x3       -5.99583    0.38%     1.47%
Thus, after 6 iterations, the maximum error is 1.47% and we arrive at the result: x1 =
0.501047, x2 = 7.99695 and x3 = −5.99583.
Second iteration:
εa,1 = (294.4012 − 253.3333)/294.4012 × 100% = 13.95%
εa,2 = (212.1842 − 108.8889)/212.1842 × 100% = 48.68%
εa,3 = (311.6491 − 289.3519)/311.6491 × 100% = 7.15%
iteration  unknown   value      εa       max εa
2          x1        294.4012   13.95%
           x2        212.1842   48.68%
           x3        311.6491    7.15%   48.68%
3          x1        316.5468    7.00%
           x2        223.3075    4.98%
           x3        319.9579    2.60%    7.00%
4          x1        319.3254    0.87%
           x2        226.5402    1.43%
           x3        321.1535    0.37%    1.43%
5          x1        320.0516    0.23%
           x2        227.0598    0.23%
           x3        321.4388    0.09%    0.23%
Note that after several more iterations, we arrive at the result: x1 = 320.2073, x2 = 227.2021
and x3 = 321.5026.
11.5 The equations must first be rearranged so that they are diagonally dominant:
−8x1 + x2 − 2x3 = −20
2x1 − 6x2 − x3 = −38
−3x1 − x2 + 7x3 = −34
First iteration:
x1 = (−20 − x2 + 2x3)/(−8) = (−20 − 0 + 2(0))/(−8) = 2.5
x2 = (−38 − 2x1 + x3)/(−6) = (−38 − 2(2.5) + 0)/(−6) = 7.166667
x3 = (−34 + 3x1 + x2)/7 = (−34 + 3(2.5) + 7.166667)/7 = −2.761905
Second iteration:
x1 = (−20 − 7.166667 + 2(−2.761905))/(−8) = 4.08631
x2 = (−38 − 2(4.08631) + (−2.761905))/(−6) = 8.155754
x3 = (−34 + 3(4.08631) + 8.155754)/7 = −1.94076
εa,1 = |(4.08631 − 2.5)/4.08631| × 100% = 38.82%
εa,2 = |(8.155754 − 7.166667)/8.155754| × 100% = 12.13%
εa,3 = |(−1.94076 − (−2.761905))/(−1.94076)| × 100% = 42.31%
The remainder of the calculation proceeds until all the errors fall below the stopping
criterion of 5%. The entire computation can be summarized as
Thus, after 3 iterations, the maximum error is 2.92% and we arrive at the result: x1 =
4.004659, x2 = 7.99168 and x3 = −1.99919.
(b) The same computation can be developed with relaxation where λ = 1.2.
First iteration:
x1 = (−20 − x2 + 2x3)/(−8) = (−20 − 0 + 2(0))/(−8) = 2.5
Relaxation yields: x1 = 1.2(2.5) − 0.2(0) = 3
x2 = (−38 − 2x1 + x3)/(−6) = (−38 − 2(3) + 0)/(−6) = 7.333333
Relaxation yields: x2 = 1.2(7.333333) − 0.2(0) = 8.8
x3 = (−34 + 3x1 + x2)/7 = (−34 + 3(3) + 8.8)/7 = −2.3142857
Relaxation yields: x3 = 1.2(−2.3142857) − 0.2(0) = −2.7771429
Second iteration:
x1 = (−20 − 8.8 + 2(−2.7771429))/(−8) = 4.2942857
Relaxation yields: x1 = 1.2(4.2942857) − 0.2(3) = 4.5531429
x2 = (−38 − 2(4.5531429) − 2.7771429)/(−6) = 8.3139048
Relaxation yields: x2 = 1.2(8.3139048) − 0.2(8.8) = 8.2166857
x3 = (−34 + 3(4.5531429) + 8.2166857)/7 = −1.7319837
Relaxation yields: x3 = 1.2(−1.7319837) − 0.2(−2.7771429) = −1.5229518
εa,1 = |(4.5531429 − 3)/4.5531429| × 100% = 34.11%
εa,2 = |(8.2166857 − 8.8)/8.2166857| × 100% = 7.1%
εa,3 = |(−1.5229518 − (−2.7771429))/(−1.5229518)| × 100% = 82.35%
The remainder of the calculation proceeds until all the errors fall below the stopping
criterion of 5%. The entire computation can be summarized as
iteration  unknown   unrelaxed    relaxed      εa      max εa
…          x3        −2.022594    −2.050162    8.07%   8.068%
6          x1        4.0048286    4.0122254    1.11%
           x2        8.0124354    8.0272613    1.11%
           x3        −1.990866    −1.979007    3.60%   3.595%
Thus, relaxation actually seems to retard convergence. After 6 iterations, the maximum
error is 3.595% and we arrive at the result: x1 = 4.0122254, x2 = 8.0272613 and x3 =
−1.979007.
11.6 As ordered, none of the sets will converge. However, if Set 1 and 3 are reordered so that
they are diagonally dominant, they will converge on the solution of (1, 1, 1).
Set 1: 8x + 3y + z = 12
2x + 4y – z = 5
−6x +7z = 1
Set 3: 3x + y − z = 3
x + 4y – z = 4
x + y +5z =7
Because it is not diagonally dominant, Set 2 will not converge on the correct solution of (1,
1, 1). However, it will also not diverge; rather, it will oscillate. How this occurs
depends on how the equations are ordered. For example, if they are ordered as
−2x + 4y − 5z = −3
2y – z = 1
−x + 3y + 5z = 7
iteration  unknown   value       εa
…          x2        0.749984     66.66%
           x3        1.499994     66.67%   127.29%
8          x1        −0.75002    466.65%
           x2        1.249997     40.00%
           x3        0.499999    200.00%   466.65%
−x + 3y + 5z = 7
2y – z = 1
−2x + 4y − 5z = −3
f1(x, y) = −x² + x + 0.5 − y
f2(x, y) = x² − 5xy − y
The partial derivatives can be computed and evaluated at the initial guesses
∂f1,0/∂x = −2x + 1 = −2(1.2) + 1 = −1.4      ∂f1,0/∂y = −1
∂f2,0/∂x = 2x − 5y = 2(1.2) − 5(1.2) = −3.6      ∂f2,0/∂y = −1 − 5x = −1 − 5(1.2) = −7
They can then be used to compute the determinant of the Jacobian for the first iteration:
|J| = −1.4(−7) − (−1)(−3.6) = 6.2
The function values at the initial guesses are f1,0 = −0.94 and f2,0 = −6.96, so the first iteration gives
x = 1.2 − (−0.94(−7) − (−6.96)(−1))/6.2 = 1.26129
y = 1.2 − (−6.96(−1.4) − (−0.94)(−3.6))/6.2 = 0.174194
The computation can be repeated until an acceptable accuracy is obtained. The results are
summarized as
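The two-equation Newton-Raphson calculation above can be sketched in Python; the update formulas and starting point (1.2, 1.2) follow the hand calculation, and the Jacobian entries are coded from the partial derivatives given above:

```python
# Newton-Raphson for f1 = -x^2 + x + 0.5 - y and f2 = x^2 - 5xy - y,
# starting from x = y = 1.2 (mirrors the hand calculation above).
x, y = 1.2, 1.2
for _ in range(20):
    f1 = -x**2 + x + 0.5 - y
    f2 = x**2 - 5*x*y - y
    j11, j12 = -2*x + 1, -1.0          # df1/dx, df1/dy
    j21, j22 = 2*x - 5*y, -1 - 5*x     # df2/dx, df2/dy
    det = j11 * j22 - j12 * j21
    x, y = x - (f1 * j22 - f2 * j12) / det, y - (f2 * j11 - f1 * j21) / det
print(x, y)
```

The first pass reproduces (1.26129, 0.174194); continued iteration drives both residuals to zero.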
y = x² − 1
x² + y² = 5
(a) [Plot of the two curves, which intersect at approximately x = y = 1.6]
(b) The equations can be solved in a number of different ways. For example, the second
equation can be solved for x and the first solved for y. For this case, successive
substitution does not work:
First iteration:
x = √(5 − y²) = √(5 − (1.5)²) = 1.658312
y = (1.658312)² − 1 = 1.75
Second iteration:
x = √(5 − (1.75)²) = 1.391941
y = (1.391941)² − 1 = 0.9375
Third iteration:
x = √(5 − (0.9375)²) = 2.030048
y = (2.030048)² − 1 = 3.12094
Thus, the solution is moving away from the solution that lies at approximately x = y = 1.6.
An alternative solution involves solving the first equation for x and the second for y.
For this case, successive substitution does work:
First iteration:
x = √(y + 1) = √(1.5 + 1) = 1.581139
y = √(5 − x²) = √(5 − (1.581139)²) = 1.581139
Second iteration:
x = √(1.581139 + 1) = 1.606592
y = √(5 − (1.606592)²) = 1.555269
Third iteration:
x = √(1.555269 + 1) = 1.598521
y = √(5 − (1.598521)²) = 1.563564
After several more iterations, the calculation converges on the solution of x = 1.600485 and
y = 1.561553.
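The convergent ordering can be verified directly; a minimal Python sketch of the successive substitution above:

```python
from math import sqrt

# Successive substitution in the convergent ordering: x = sqrt(y + 1),
# y = sqrt(5 - x^2), starting from x = y = 1.5.
x, y = 1.5, 1.5
for _ in range(100):
    x = sqrt(y + 1)
    y = sqrt(5 - x * x)
print(round(x, 6), round(y, 6))  # 1.600485 1.561553
```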
f1(x, y) = x² − y − 1
f2(x, y) = 5 − y² − x²
The partial derivatives can be computed and evaluated at the initial guesses (x = y = 1.5):
∂f1,0/∂x = 2x = 3      ∂f1,0/∂y = −1
∂f2,0/∂x = −2x = −3      ∂f2,0/∂y = −2y = −3
They can then be used to compute the determinant of the Jacobian for the first iteration:
|J| = 3(−3) − (−1)(−3) = −12
The function values at the initial guesses are f1,0 = (1.5)² − 1.5 − 1 = −0.25 and f2,0 = 5 − (1.5)² − (1.5)² = 0.5. The first iteration then gives
x = 1.5 − (−0.25(−3) − 0.5(−1))/(−12) = 1.604167
y = 1.5 − (0.5(3) − (−0.25)(−3))/(−12) = 1.5625
The computation can be repeated until an acceptable accuracy is obtained. The results are
summarized as
CHAPTER 12
12.1 The data can be tabulated as
i      yi       (yi − ȳ)²
1 8.8 0.725904
2 9.4 0.063504
3 10 0.121104
4 9.8 0.021904
5 10.1 0.200704
6 9.5 0.023104
7 10.1 0.200704
8 10.4 0.559504
9 9.5 0.023104
10 9.5 0.023104
11 9.8 0.021904
12 9.2 0.204304
13 7.9 3.069504
14 8.9 0.565504
15 9.6 0.002704
16 9.4 0.063504
17 11.3 2.715904
18 10.4 0.559504
19 8.8 0.725904
20 10.2 0.300304
21 10 0.121104
22 9.4 0.063504
23 9.8 0.021904
24 10.6 0.898704
25 8.9 0.565504
Σ 241.3 11.8624
ȳ = 241.3/25 = 9.652
sy = √(11.8624/(25 − 1)) = 0.703041
s²y = (0.703041)² = 0.494267
c.v. = (0.703041/9.652) × 100% = 7.28%
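The statistics of Prob. 12.1 can be cross-checked with a short Python sketch (the 25 readings are taken from the table above):

```python
# Descriptive statistics for the 25 readings of Prob. 12.1.
data = [8.8, 9.4, 10, 9.8, 10.1, 9.5, 10.1, 10.4, 9.5, 9.5, 9.8, 9.2,
        7.9, 8.9, 9.6, 9.4, 11.3, 10.4, 8.8, 10.2, 10, 9.4, 9.8, 10.6, 8.9]
n = len(data)
mean = sum(data) / n
var = sum((y - mean) ** 2 for y in data) / (n - 1)   # sample variance
std = var ** 0.5                                     # standard deviation
cv = std / mean * 100                                # coefficient of variation, %
print(mean, std, cv)
```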
12.2 The data can be sorted and then grouped. We assume that if a number falls on the border
between bins, it is placed in the lower bin.
lower    upper    frequency
8        8.5      0
8.5      9        4
9        9.5      7
9.5      10       6
10       10.5     5
10.5     11       1
11       11.5     1
[Histogram of frequency versus bin]
i      yi       (yi − ȳ)²
1 28.65 0.390625
2 28.65 0.390625
3 27.65 0.140625
4 29.25 1.500625
5 26.55 2.175625
6 29.65 2.640625
7 28.45 0.180625
8 27.65 0.140625
9 26.65 1.890625
10 27.85 0.030625
11 28.65 0.390625
12 28.65 0.390625
13 27.65 0.140625
14 27.05 0.950625
15 28.45 0.180625
16 27.65 0.140625
17 27.35 0.455625
18 28.25 0.050625
19 31.65 13.14063
20 28.55 0.275625
21 28.35 0.105625
22 28.85 0.680625
23 26.35 2.805625
24 27.65 0.140625
25 26.85 1.380625
26 26.75 1.625625
27 27.75 0.075625
28 27.25 0.600625
Σ 784.7 33.0125
(a) ȳ = 784.7/28 = 28.025
(b) sy = √(33.0125/(28 − 1)) = 1.105751
(c) s²y = (1.105751)² = 1.222685
(d) c.v. = (1.105751/28.025) × 100% = 3.95%
[Histogram of frequency versus bin]
(f) 68% of the readings should fall between ȳ − sy and ȳ + sy; that is, between 28.025 −
1.10575096 = 26.919249 and 28.025 + 1.10575096 = 29.130751. Twenty values fall
between these bounds, or 20/28 = 71.4% of the values, which is not far from 68%.
12.4 The sum of the squares of the residuals for this case can be written as
Sr = Σ(i=1→n) (yi − a1xi)²
The partial derivative of this function with respect to the single parameter a1 can be
determined as
∂Sr/∂a1 = −2 Σ[(yi − a1xi)xi]
Setting the derivative equal to zero and evaluating the summations gives
Σxiyi − a1Σxi² = 0
which can be solved for
a1 = Σxiyi / Σxi²
So the slope that minimizes the sum of the squares of the residuals for a straight line with a
zero intercept is the ratio of the sum of the products xiyi to the sum of the squares of
the independent variable (x).
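The zero-intercept result can be checked numerically; a minimal sketch using the gravity data of Prob. 12.5 below as the illustration (the variable names are mine):

```python
# Zero-intercept least squares: a1 = sum(x*y)/sum(x^2), illustrated with
# the altitude-gravity data of Prob. 12.5.
x = [0, 20000, 40000, 60000, 80000]
y = [9.81, 9.7487, 9.6879, 9.6278, 9.5682]
a1 = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
# The gradient of Sr at the optimum should vanish (up to roundoff).
grad = -2 * sum((yi - a1 * xi) * xi for xi, yi in zip(x, y))
print(a1, grad)
```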
12.5
i      xi        yi        xi²        xiyi
1 0 9.8100 0 0
2 20000 9.7487 4.0E+08 194974
3 40000 9.6879 1.6E+09 387516
4 60000 9.6278 3.6E+09 577668
5 80000 9.5682 6.4E+09 765456
Σ 200000 48.4426 1.2E+10 1925614
a1 = (5(1,925,614) − 200,000(48.4426)) / (5(1.2 × 10¹⁰) − (200,000)²) = −3.0225 × 10⁻⁶
a0 = 48.4426/5 − (−3.0225 × 10⁻⁶)(200,000/5) = 9.80942
Therefore, the line of best fit is (using the nomenclature of the problem)
g = 9.80942 − 3.0225 × 10 −6 y
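The normal-equation arithmetic of Prob. 12.5 can be verified with a short Python sketch:

```python
# Ordinary least-squares slope and intercept for the gravity data of Prob. 12.5.
x = [0, 20000, 40000, 60000, 80000]
y = [9.81, 9.7487, 9.6879, 9.6278, 9.5682]
n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))
a1 = (n * sxy - sx * sy) / (n * sxx - sx ** 2)   # slope
a0 = sy / n - a1 * sx / n                        # intercept
print(a1, a0)
```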
R = (p/T)(V/n)
p/T = 30.3164
n = 1 kg/(28 g/mole) = 10³/28 mole
R = 30.3164(10/(10³/28)) = 8.487
[Plot of the data with a linear fit]
Forcing a zero intercept yields
y = 0.061x      (r² = 0.8387)
A power model fit to the same data yields
y = 0.1827x^0.4069      (r² = 0.9024)
However, this seems to represent a poor compromise since it misses the linear trend in the
data. An alternative approach would be to assume that the physically unrealistic nonzero
intercept is an artifact of the measurement method. Therefore, if the linear slope is valid,
we might try y = 0.0454x.
12.8 The function can be linearized by dividing it by x and taking the natural logarithm to yield
ln(y/x) = ln α4 + β4x
Therefore, if the model holds, a plot of ln(y/x) versus x should yield a straight line with a
slope of β4 and an intercept of ln α4.
x y ln(y/x)
0.1 0.75 2.014903
0.2 1.25 1.832581
0.4 1.45 1.287854
0.6 1.25 0.733969
0.9 0.85 -0.05716
1.3 0.55 -0.8602
1.5 0.35 -1.45529
1.7 0.28 -1.80359
1.8 0.18 -2.30259
The least-squares fit of the transformed data is
ln(y/x) = −2.4733x + 2.2682      (r² = 0.9974)
Therefore, β4 = −2.4733 and α4 = e^2.2682 = 9.661786, and the fit is
y = 9.661786xe^(−2.4733x)
[Plot of the fit along with the original data]
12.9 The data can be transformed, plotted and fit with a straight line
v, m/s F, N ln v ln F
10 25 2.302585 3.218876
20 70 2.995732 4.248495
30 380 3.401197 5.940171
40 550 3.688879 6.309918
50 610 3.912023 6.413459
60 1220 4.094345 7.106606
70 830 4.248495 6.721426
80 1450 4.382027 7.279319
[Plot of ln F versus ln v with the best-fit line y = 1.9842x − 1.2941 (r² = 0.9481)]
The least-squares fit is
ln F = 1.9842 ln v − 1.2941
The exponent is 1.9842 and the leading coefficient is e^(−1.2941) = 0.274137. Therefore, the
result is the same as when we used common or base-10 logarithms:
F = 0.274137v^1.9842
[Plot of the data]
The plot indicates that the data are somewhat curvilinear. An exponential model (i.e., a semi-
log plot) is the best choice to linearize the data. This conclusion is based on the semi-log fit
ln c = −0.0532t + 7.5902      (r² = 0.9887)
Therefore, the coefficient of the exponent (β1) is −0.0532 and the lead coefficient (α1) is
e7.5902 = 1978.63, and the fit is
c = 1978.63e −0.0532t
Consequently the concentration at t = 0 is 1978.63 CFU/100 ml. Here is a plot of the fit
along with the original data:
(b) The time at which the concentration will reach 200 CFU/100 mL can be computed as
ln(200/1978.63) = −0.0532t
t = ln(200/1978.63)/(−0.0532) = 43.08 d
12.11 (a) The exponential fit can be determined with the base-10 logarithm as
t      c      log10 c
20     650    2.812913
24     560    2.748188
Therefore, the coefficient of the exponent (β5) is −0.0231 and the lead coefficient (α5) is
103.2964 = 1978.63, and the fit is
c = 1978.63(10) −0.0231t
(b) The time at which the concentration will reach 200 CFU/100 mL can be computed as
log10(200/1978.63) = −0.0231t
t = log10(200/1978.63)/(−0.0231) = 43.08 d
Thus, the results are identical to those obtained with the base-e model.
e^(−β1t) = 10^(−β5t)
−β1t = −β5t ln 10
or
β1 = 2.302585β5
12.12 The power fit can be determined as
log A = 0.3799 log W − 0.3821      (r² = 0.9711)
[Log-log plot of A versus W with the best-fit line]
Therefore, the power is b = 0.3799 and the lead coefficient is a = 10−0.3821 = 0.4149, and the
fit is
A = 0.4149W 0.3799
[Plot of the power fit along with the data]
The value of the surface area for a 95-kg person can be estimated as
A = 0.4149(95)^0.3799 = 2.34 m²
12.13 The power fit can be determined as
Mass Metabolism
(kg) (kCal/day) log Mass log Met
300 5600 2.477121 3.748188
70 1700 1.845098 3.230449
60 1100 1.778151 3.041393
2 100 0.30103 2
0.3 30 -0.52288 1.477121
log Met = 0.7497 log Mass + 1.818      (r² = 0.9935)
Therefore, the power is b = 0.7497 and the lead coefficient is a = 10^1.818 = 65.768, and the
fit is
Met = 65.768(Mass)^0.7497
[Plot of the power fit along with the data]
[Log-log plot of the transformed data with the best-fit line]
Therefore,
B = 10^(−5.41) = 3.88975 × 10⁻⁶
m = 2.6363
[Plots of the fit along with the data]
12.16 The data can be transformed
[Log-log plot of the transformed data with the best-fit line]
τ = 0.72765γ& 0.54298
A plot of the power model along with the data can be created as
CHAPTER 13
13.1 The data can be tabulated and the sums computed as
i x y x2 x3 x4 xy x2y
1 10 25 100 1000 10000 250 2500
2 20 70 400 8000 160000 1400 28000
3 30 380 900 27000 810000 11400 342000
4 40 550 1600 64000 2560000 22000 880000
5 50 610 2500 125000 6250000 30500 1525000
6 60 1220 3600 216000 12960000 73200 4392000
7 70 830 4900 343000 24010000 58100 4067000
8 80 1450 6400 512000 40960000 116000 9280000
Σ 360 5135 20400 1296000 87720000 312850 20516500
Normal equations:
⎡    8       360      20400⎤ ⎧a0⎫   ⎧    5135⎫
⎢  360     20400    1296000⎥ ⎨a1⎬ = ⎨  312850⎬
⎣20400   1296000   87720000⎦ ⎩a2⎭   ⎩20516500⎭
which can be solved for the coefficients yielding the following best-fit polynomial
y = −178.4821 + 16.122x + 0.037202x²
[Plot of the fit along with the data]
The predicted values can be used to determine the sum of the squares. Note that the mean
of the y values is 641.875.
i      x      y       ypred       (yi − ȳ)²       (y − ypred)²
1 10 25 -13.5417 380535 1485
2 20 70 158.8393 327041 7892
3 30 380 338.6607 68579 1709
4 40 550 525.9226 8441 580
5 50 610 720.625 1016 12238
6 60 1220 922.7679 334229 88347
7 70 830 1132.351 35391 91416
8 80 1450 1349.375 653066 10125
Σ 1808297 213793
r² = (1808297 − 213793)/1808297 = 0.88177
The model fits the trend of the data nicely, but it has the deficiency that it yields physically
unrealistic negative forces at low velocities.
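The quadratic fit and its r² can be reproduced with a dependency-free Python sketch that builds and solves the normal equations (Cramer's rule is used only because the system is 3 × 3):

```python
# Quadratic least squares y = a0 + a1*x + a2*x^2 for the force-velocity data.
x = [10, 20, 30, 40, 50, 60, 70, 80]
y = [25, 70, 380, 550, 610, 1220, 830, 1450]
n = len(x)
S = lambda p, q=0: sum(xi**p * yi**q for xi, yi in zip(x, y))
A = [[n, S(1), S(2)], [S(1), S(2), S(3)], [S(2), S(3), S(4)]]
b = [S(0, 1), S(1, 1), S(2, 1)]
def det3(M):
    return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))
D = det3(A)
coef = []
for j in range(3):                      # Cramer's rule, column by column
    Mj = [row[:] for row in A]
    for i in range(3):
        Mj[i][j] = b[i]
    coef.append(det3(Mj) / D)
a0, a1, a2 = coef
ybar = sum(y) / n
St = sum((yi - ybar)**2 for yi in y)
Sr = sum((yi - (a0 + a1*xi + a2*xi**2))**2 for xi, yi in zip(x, y))
r2 = (St - Sr) / St
print(a0, a1, a2, r2)
```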
13.2 The sum of the squares of the residuals for this case can be written as
Sr = Σ(i=1→n) (yi − a1xi − a2xi²)²
The partial derivatives of this function with respect to the unknown parameters can be
determined as
∂Sr/∂a1 = −2 Σ[(yi − a1xi − a2xi²)xi]
∂Sr/∂a2 = −2 Σ[(yi − a1xi − a2xi²)xi²]
Setting the derivatives equal to zero and evaluating the summations gives
(Σxi²)a1 + (Σxi³)a2 = Σxiyi
(Σxi³)a1 + (Σxi⁴)a2 = Σxi²yi
which can be solved simultaneously for
a1 = (Σxiyi Σxi⁴ − Σxi²yi Σxi³) / (Σxi² Σxi⁴ − (Σxi³)²)
a2 = (Σxi² Σxi²yi − Σxiyi Σxi³) / (Σxi² Σxi⁴ − (Σxi³)²)
The model can be tested for the data from Table 12.1.
x      y      x²      x³      x⁴      xy      x²y
10 25 100 1000 10000 250 2500
20 70 400 8000 160000 1400 28000
30 380 900 27000 810000 11400 342000
40 550 1600 64000 2560000 22000 880000
50 610 2500 125000 6250000 30500 1525000
60 1220 3600 216000 12960000 73200 4392000
70 830 4900 343000 24010000 58100 4067000
80 1450 6400 512000 40960000 116000 9280000
Σ 20400 1296000 87720000 312850 20516500
a1 = (312,850(87,720,000) − 20,516,500(1,296,000)) / (20,400(87,720,000) − (1,296,000)²) = 7.771024
a2 = (20,400(20,516,500) − 312,850(1,296,000)) / (20,400(87,720,000) − (1,296,000)²) = 0.119075
y = 7.771024x + 0.119075x²
[Plot of the fit along with the data]
i      x      y      x²      x³      x⁴      x⁵      x⁶      xy      x²y      x³y
1 3 1.6 9 27 81 243 729 4.8 14.4 43.2
2 4 3.6 16 64 256 1024 4096 14.4 57.6 230.4
3 5 4.4 25 125 625 3125 15625 22 110 550
4 7 3.4 49 343 2401 16807 117649 23.8 166.6 1166.2
5 8 2.2 64 512 4096 32768 262144 17.6 140.8 1126.4
6 9 2.8 81 729 6561 59049 531441 25.2 226.8 2041.2
7 11 3.8 121 1331 14641 161051 1771561 41.8 459.8 5057.8
8 12 4.6 144 1728 20736 248832 2985984 55.2 662.4 7948.8
Σ 59 26.4 509 4859 49397 522899 5689229 204.8 1838.4 18164
Normal equations:
⎡   8      59     509     4859⎤ ⎧a0⎫   ⎧  26.4⎫
⎢  59     509    4859    49397⎥ ⎪a1⎪   ⎪ 204.8⎪
⎢ 509    4859   49397   522899⎥ ⎨a2⎬ = ⎨1838.4⎬
⎣4859   49397  522899  5689229⎦ ⎩a3⎭   ⎩ 18164⎭
which can be solved for the coefficients yielding the following best-fit polynomial
y = −11.4887 + 7.1438x − 1.0412x² + 0.0467x³
[Plot of the cubic fit along with the data]
The predicted values can be used to determine the sum of the squares. Note that the mean
of the y values is 3.3.
i      x      y       ypred       (yi − ȳ)²       (y − ypred)²
1 3 1.6 1.83213 2.8900 0.0539
2 4 3.6 3.41452 0.0900 0.0344
3 5 4.4 4.03471 1.2100 0.1334
4 7 3.4 3.50875 0.0100 0.0118
5 8 2.2 2.92271 1.2100 0.5223
6 9 2.8 2.4947 0.2500 0.0932
7 11 3.8 3.23302 0.2500 0.3215
8 12 4.6 4.95946 1.6900 0.1292
Σ 7.6000 1.2997
r² = (7.6 − 1.2997)/7.6 = 0.829
13.4
function p = polyreg(x,y,m)
% polyreg(x,y,m):
%   Polynomial regression.
% input:
%   x = independent variable
%   y = dependent variable
%   m = order of polynomial
% output:
%   p = vector of coefficients
n = length(x);
if length(y)~=n, error('x and y must be same length'); end
for i = 1:m+1
  for j = 1:i
    k = i+j-2;
    s = 0;
    for l = 1:n
      s = s + x(l)^k;
    end
    A(i,j) = s;
    A(j,i) = s;
  end
  s = 0;
  for l = 1:n
    s = s + y(l)*x(l)^(i-1);
  end
  b(i) = s;
end
p = A\b';
>> x = [3 4 5 7 8 9 11 12];
>> y = [1.6 3.6 4.4 3.4 2.2 2.8 3.8 4.6];
>> polyreg(x,y,3)
ans =
-11.4887
7.1438
-1.0412
0.0467
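A Python analogue of the polyreg M-file can serve as a cross-check; it builds the same normal equations and solves them with a small Gaussian elimination (the function name and structure mirror the MATLAB version, but this is a sketch, not the author's code):

```python
def polyreg(x, y, m):
    """Least-squares polynomial a0 + a1*x + ... + am*x^m via normal equations."""
    n = len(x)
    A = [[sum(xi ** (i + j) for xi in x) for j in range(m + 1)] for i in range(m + 1)]
    b = [sum((xi ** i) * yi for xi, yi in zip(x, y)) for i in range(m + 1)]
    # Gaussian elimination with partial pivoting
    for k in range(m + 1):
        p = max(range(k, m + 1), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, m + 1):
            f = A[i][k] / A[k][k]
            for j in range(k, m + 1):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    a = [0.0] * (m + 1)
    for i in range(m, -1, -1):
        a[i] = (b[i] - sum(A[i][j] * a[j] for j in range(i + 1, m + 1))) / A[i][i]
    return a

x = [3, 4, 5, 7, 8, 9, 11, 12]
y = [1.6, 3.6, 4.4, 3.4, 2.2, 2.8, 3.8, 4.6]
res = polyreg(x, y, 3)
print(res)  # ~[-11.4887, 7.1438, -1.0412, 0.0467]
```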
13.5 Because the data is curved, a linear regression will undoubtedly have too much error.
Therefore, as a first try, fit a parabola,
p =
0.00439523809524 -0.36335714285714 14.55190476190477
[Plot of the parabolic fit along with the data]
We can use this equation to generate predictions corresponding to the data. When these
values are rounded to the same number of significant digits the results are
Thus, although the plot looks good, discrepancies occur in the third significant digit.
>> p = polyfit(T,c,3)
p =
-0.00006444444444 0.00729523809524 -0.39557936507936 14.60023809523810
We can use this equation to generate predictions corresponding to the data. When these
values are rounded to the same number of significant digits the results are
13.6 The multiple linear regression model to evaluate is
o = a 0 + a1T + a 2 c
The [Z] and y matrices can be set up using MATLAB commands in a fashion similar to
Example 13.4,
>> a = Z\y
a =
13.52214285714286
-0.20123809523810
-0.10492857142857
We can evaluate the prediction at T = 12 and c = 15 and evaluate the percent relative error
as
>> cp = a(1)+a(2)*12+a(3)*15
cp =
9.53335714285714
>> ea = abs((9.09-cp)/9.09)*100
ea =
4.87741631305987
Thus, the error is considerable. This can be seen even better by generating predictions for
all the data and then generating a plot of the predictions versus the data. A one-to-one line
is included to show how the predictions diverge from a perfect fit.
[Plot of predicted versus measured oxygen concentrations with a 1:1 line]
The discrepancy occurs because the dependence of oxygen concentration on the
unknowns is significantly nonlinear. It should be noted that this is particularly the case for
the dependency on temperature.
y = a 0 + a1T + a 2 T 2 + a 3T 3 + a 4 c
>> T = 0:5:30;
>> T = [T T T]';
>> c = [0 0 0 0 0 0 0 10 10 10 10 10 10 10 20 20 20 20 20 20 20]';
>> o = [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]';
>> y = [14.6 12.8 11.3 10.1 9.09 8.26 7.56 12.9 11.3 10.1 9.03 8.17
7.46 6.85 11.4 10.3 8.96 8.08 7.35 6.73 6.2]';
>> Z = [o T T.^2 T.^3 c];
a =
14.02714285714287
-0.33642328042328
0.00574444444444
-0.00004370370370
-0.10492857142857
The model can then be used to predict values of oxygen at the same values as the data.
These predictions can be plotted against the data to depict the goodness of fit.
>> yp = Z*a
>> plot(y,yp,'o')
>> a(1)+a(2)*12+a(3)*12^2+a(4)*12^3+a(5)*15
ans =
9.16781492063485
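The improved multiple regression with the cubic temperature terms can be reproduced in Python by forming and solving the normal equations ZᵀZa = Zᵀy (a sketch mirroring the MATLAB session; the data arrays are copied from above):

```python
# Fit o = a0 + a1*T + a2*T^2 + a3*T^3 + a4*c via the normal equations.
T = [t for _ in range(3) for t in range(0, 31, 5)]
c = [0]*7 + [10]*7 + [20]*7
y = [14.6, 12.8, 11.3, 10.1, 9.09, 8.26, 7.56,
     12.9, 11.3, 10.1, 9.03, 8.17, 7.46, 6.85,
     11.4, 10.3, 8.96, 8.08, 7.35, 6.73, 6.2]
Z = [[1.0, t, t**2, t**3, ci] for t, ci in zip(T, c)]
m = 5
A = [[sum(Z[r][i]*Z[r][j] for r in range(len(Z))) for j in range(m)] for i in range(m)]
b = [sum(Z[r][i]*y[r] for r in range(len(Z))) for i in range(m)]
# Gaussian elimination with partial pivoting
for k in range(m):
    p = max(range(k, m), key=lambda r: abs(A[r][k]))
    A[k], A[p], b[k], b[p] = A[p], A[k], b[p], b[k]
    for i in range(k + 1, m):
        f = A[i][k] / A[k][k]
        A[i] = [A[i][j] - f*A[k][j] for j in range(m)]
        b[i] -= f*b[k]
a = [0.0]*m
for i in range(m - 1, -1, -1):
    a[i] = (b[i] - sum(A[i][j]*a[j] for j in range(i + 1, m))) / A[i][i]
pred = a[0] + a[1]*12 + a[2]*144 + a[3]*1728 + a[4]*15
print(pred)  # ~9.1678
```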
y = a 0 + a1 x1 + a 2 x 2
>> x1 = [0 1 1 2 2 3 3 4 4]';
>> x2 = [0 1 2 1 2 1 2 1 2]';
>> y = [15.1 17.9 12.7 25.6 20.5 35.1 29.7 45.4 40.2]';
>> o = [1 1 1 1 1 1 1 1 1]';
>> Z = [o x1 x2];
>> a = (Z'*Z)\[Z'*y]
a =
14.4609
9.0252
-5.7043
The model can then be used to predict values of the unknown at the same values as the
data. These predictions can be used to determine the correlation coefficient and the standard
error of the estimate.
>> yp = Z*a
>> r = sqrt(r2)
r =
0.9978
>> a = (Z'*Z)\[Z'*log10(Q)]
a =
1.5609
2.6279
0.5320
Q = 10^1.5609 D^2.6279 S^0.5320 = 36.3813D^2.6279 S^0.5320
The unknowns can be entered and the [Z] matrix can be set up as in
>> p = [7 5.2 3.8 3.2 2.5 2.1 1.8 1.5 1.2 1.1]';
>> t = [0.5 1 2 3 4 5 6 7 8 9]';
>> Z = [exp(-1.5*t) exp(-0.3*t) exp(-0.05*t)];
>> pp = Z*a
>> plot(t,p,'o',t,pp)
13.11 First, an M-file function must be created to compute the sum of the squares,
function f = fSSR(a,Im,Pm)
Pp = a(1)*Im/a(2).*exp(-Im/a(2)+1);
f = sum((Pm-Pp).^2);
The data can then be entered as
a =
238.7124 221.8239
P = 238.7124 (I/221.8239) e^(−I/221.8239 + 1)
>> Pp = a(1)*I/a(2).*exp(-I/a(2)+1);
>> plot(I,P,'o',I,Pp)
13.12 First, an M-file function must be created to compute the sum of the squares,
function f = fSSR(a,xm,ym)
yp = a(1)*xm.*exp(a(2)*xm);
f = sum((ym-yp).^2);
The minimization of the function is then implemented by
a =
9.8545 -2.5217
y = 9.8545 xe −2.5217 x
>> yp = a(1)*x.*exp(a(2)*x);
>> plot(x,y,'o',x,yp)
1/v0 = (K/km)(1/[S]³) + 1/km
If this model is valid, a plot of 1/v0 versus 1/[S]3 should yield a straight line with a slope of
K/km and an intercept of 1/km. The slope and intercept can be implemented in MATLAB
using the M-file function linregr (Fig. 12.12),
a =
1.0e+004 *
1.64527391375701 4.13997346408367
These results can then be used to compute km and K,
>> km=1/a(2)
km =
2.415474419523452e-005
>> K=km*a(1)
K =
0.39741170517893
v0 = 2.415474 × 10⁻⁵[S]³ / (0.39741 + [S]³)
The fit along with the data can be displayed graphically. We will use a log-log plot because
of the wide variation of the magnitudes of the values being displayed,
(b) An M-file function must be created to compute the sum of the squares,
function f = fSSR(a,Sm,v0m)
v0p = a(1)*Sm.^3./(a(2)+Sm.^3);
f = sum((v0m-v0p).^2);
The minimization of the function is then implemented by
a =
0.00002430998303 0.39976314533880
v0 = 2.431 × 10⁻⁵[S]³ / (0.399763 + [S]³)
The fit along with the data can be displayed graphically. We will use a log-log plot because
of the wide variation of the magnitudes of the values being displayed,
CHAPTER 14
14.1 (a) Newton’s polynomial. Ordering of points:
x1 = 3 f(x1) = 6.5
x2 = 4 f(x2) = 2
x3 = 2.5 f(x3) = 7
x4 = 5 f(x4) = 0
Note that based purely on the distance from the unknown, the fourth point would be (2, 5).
However, because it provides better balance and is located only a little bit farther from the
unknown, the point at (5, 0) is chosen.
First order:
f1(3.4) = 6.5 + [(2 − 6.5)/(4 − 3)](3.4 − 3) = 6.5 + (−4.5)(0.4) = 4.7
Second order:
f2(3.4) = 4.7 + {[(7 − 2)/(2.5 − 4) − (−4.5)]/(2.5 − 3)}(3.4 − 3)(3.4 − 4)
= 4.7 + [(−3.333333 + 4.5)/(−0.5)](3.4 − 3)(3.4 − 4)
= 4.7 + (−2.333333)(0.4)(−0.6) = 5.26
Third order:
The third divided difference is
{[(0 − 7)/(5 − 2.5) − (−3.333333)]/(5 − 4) − (−2.333333)}/(5 − 3) = (0.533333 + 2.333333)/2 = 1.433333
so
f3(3.4) = 5.26 + 1.433333(3.4 − 3)(3.4 − 4)(3.4 − 2.5) = 5.26 − 0.3096 = 4.9504
(b) Lagrange polynomial. First order:
f1(3.4) = 6.5(3.4 − 4)/(3 − 4) + 2(3.4 − 3)/(4 − 3) = 4.7
Second order:
f2(3.4) = 6.5[(3.4 − 4)(3.4 − 2.5)]/[(3 − 4)(3 − 2.5)] + 2[(3.4 − 3)(3.4 − 2.5)]/[(4 − 3)(4 − 2.5)]
+ 7[(3.4 − 3)(3.4 − 4)]/[(2.5 − 3)(2.5 − 4)] = 7.02 + 0.48 − 2.24 = 5.26
Third order:
f3(3.4) = 5.616 + 0.768 − 1.4336 + 0 = 4.9504
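The Newton interpolation above can be sketched in Python with an in-place divided-difference table and nested multiplication:

```python
# Newton divided-difference interpolation for the points of Prob. 14.1,
# evaluated at x = 3.4 (same point ordering as the hand calculation).
xs = [3, 4, 2.5, 5]
ys = [6.5, 2, 7, 0]
n = len(xs)
dd = ys[:]                      # dd[i] becomes f[x0, ..., xi]
for j in range(1, n):
    for i in range(n - 1, j - 1, -1):
        dd[i] = (dd[i] - dd[i - 1]) / (xs[i] - xs[i - j])
xe, p = 3.4, dd[-1]             # evaluate with nested multiplication
for i in range(n - 2, -1, -1):
    p = dd[i] + (xe - xs[i]) * p
print(p)  # ~4.95
```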
14.2 The points can be ordered so that they are close to and centered around the unknown. A
divided-difference table can then be developed as
Note that the fact that the fourth divided difference is zero means that the data was
generated with a third-order polynomial.
First order:
Second order:
Third order:
Fourth order:
First order:
f1(4) = 5.25(4 − 5)/(3 − 5) + 19.75(4 − 3)/(5 − 3) = 12.5
Second order:
Third order:
14.4 (a) The points can be ordered so that they are close to and centered around the unknown. A
divided-difference table can then be developed as
Second order:
Third order:
(b) First, linear interpolation can be used to generate values for T = 10 and 15 at c = 15,
f1(T = 10, c = 15) = 10.1 + [(8.96 − 10.1)/(20 − 10)](15 − 10) = 9.53
f1(T = 15, c = 15) = 9.03 + [(8.08 − 9.03)/(20 − 10)](15 − 10) = 8.555
These values can then be used to interpolate to T = 12,
f1(T = 12, c = 15) = 9.53 + [(8.555 − 9.53)/(15 − 10)](12 − 10) = 9.14
(c) First, quadratic interpolation can be used to generate values for T = 5, 10 and 15 at c =
15,
f 2 (T = 10, c = 15) = 11.3 − 0.12(15 − 0) + 0.0003(15 − 0)(15 − 10) = 9.5225
14.5 MATLAB can be used to generate a cubic polynomial through the first 4 points in the table,
>> x = [1 2 3 4];
>> fx = [3.6 1.8 1.2 0.9];
>> p = polyfit(x,fx,3)
p =
-0.1500 1.5000 -5.2500 7.5000
or
Bisection can be employed to determine the root of this polynomial. Using initial guesses of xl = 2 and xu
= 3, a value of 2.2156 is obtained with εa = 0.00069% after 16 iterations.
0.93 = x²/(1 + x²)
0.93 + 0.93x² = x²
0.07x² = 0.93
x = √(0.93/0.07) = 3.644957
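As a quick numerical check of the analytical inversion:

```python
from math import sqrt

# Analytical inversion of f = x^2/(1 + x^2) at f = 0.93.
x = sqrt(0.93 / 0.07)
print(round(x, 6))  # 3.644957
```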
(b) A quadratic interpolating polynomial can be fit to the last three points using the
polyfit function,
Thus, the best fit quadratic is
or
(c) A cubic interpolating polynomial can be fit to the last four points using the polyfit
function,
or
Bisection can be employed to determine the root of this polynomial. Using initial guesses of xl = 3 and xu
= 4, a value of 3.61883 is obtained.
14.7 (a) Because they bracket the unknown, the two last points are used for linear interpolation,
f1(0.118) = 6.5453 + [(6.7664 − 6.5453)/(0.12547 − 0.11144)](0.118 − 0.11144) = 6.6487
Therefore, to the level of significance reported in the table the estimated entropy is 6.6077
(c) The inverse interpolation can be implemented in MATLAB. First, as in part (b), we can
fit a quadratic polynomial to the data to yield,
p =
354.2358 -64.9976 9.3450
or
In MATLAB, we can generate this polynomial by subtracting 6.45 from the constant
coefficient of the polynomial
>> p(3)=p(3)-6.45
p =
354.2358 -64.9976 2.8950
>> roots(p)
ans =
0.1074
0.0761
Thus, the value of the specific volume corresponding to an entropy of 6.45 is 0.1074.
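The root computation can be verified with the quadratic formula (the coefficients are those of the adjusted polynomial above):

```python
from math import sqrt

# Roots of 354.2358*v^2 - 64.9976*v + 2.8950 = 0, i.e. the interpolating
# quadratic minus the target entropy of 6.45.
a, b, c = 354.2358, -64.9976, 2.8950
disc = sqrt(b * b - 4 * a * c)
r1, r2 = (-b + disc) / (2 * a), (-b - disc) / (2 * a)
print(r1, r2)  # ~0.1074 and ~0.0761
```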
14.8 This problem is nicely suited for the Newton interpolating polynomial. First, we can order
the data so that the points are closest to and centered around the unknown,
T D
300 1.139
350 0.967
400 0.854
250 1.367
450 0.759
200 1.708
Thus, the linear estimate is 1.036 to the level of significant digits provided in the original
data.
The quadratic estimate is 1.029 to the level of significant digits provided in the original
data.
f4(330) = 1.0289 − 2.93333 × 10⁻¹⁰(330 − 300)(330 − 350)(330 − 400)(330 − 250) = 1.0279
The quartic estimate now seems to be diverging slightly by moving to a value of 1.028.
This may be an initial indication that the higher-order terms are beginning to induce slight
oscillations.
f5(330) = 1.0279 − 2.77333 × 10⁻¹²(330 − 300)(330 − 350)(330 − 400)(330 − 250)(330 − 450) = 1.02902
Oscillations are now evidently occurring as the fifth-order estimate now jumps back up to
slightly above a value of 1.029.
On the basis of the foregoing, I would conclude that the cubic equation provides the best
approximation and that a value of 1.029 is a sound estimate to the level of significant digits
provided in the original data.
Inverse interpolation can be now used to determine the temperature corresponding to the
value of density of 1.029. First, MATLAB can be used to fit a cubic polynomial through
the four points that bracket this value. Interestingly, because of the large values of the
temperatures, we get an error message,
p =
0.0000 0.0000 -0.0097 3.2420
Let’s disregard this warning and proceed to adjust the polynomial so that it can be used to
solve the inverse interpolation problem. To do this, we subtract the specified value of the
density from the polynomial’s constant coefficient
>> p(4)=p(4)-1.029
p =
0.0000 0.0000 -0.0097 2.2130
Then we can use the roots function to determine the temperature that corresponds to this
value
>> roots(p)
ans =
1.0e+003 *
-2.8237
0.5938
0.3300
Thus, even though the polynomial is badly conditioned one of the roots corresponds to T =
330 as expected.
Now let’s perform the inverse interpolation, but with scaling. To do this, we will merely
subtract the value at the midpoint of the temperature range (325) from all the temperatures.
This acts to both reduce the magnitudes of the temperatures and centers them on zero,
>> T = [250 300 350 400];
>> T = T - 325;
>> p = polyfit(T,D,3)
p =
0.00000000400000 0.00001150000000 -0.00344250000000 1.04581250000000
>> p(4)=p(4)-1.029
p =
0.00000000400000 0.00001150000000 -0.00344250000000 0.01681250000000
We can then use the roots function to determine the temperature that corresponds to the
given density
>> r = roots(p)
ans =
1.0e+003 *
-3.14874694489127
0.26878060289231
0.00496634199799
By adding back the offset of 325, we arrive at the expected result of 330,
V = 148i³ + 45i
>> polyval(p,0.10)
ans =
4.6480
14.10 Third-order case: The MATLAB polyfit function can be used to generate the cubic
polynomial and perform the interpolation,
>> p = polyfit(x,J,3)
p =
0.0670 -0.3705 0.1014 0.9673
>> Jpred = polyval(p,1.82)
Jpred =
0.3284
The built-in function besselj can be used to determine the true value which can then be
used to determine the percent relative error
Fourth-order case:
Fifth-order case:
14.11 In the same fashion as Example 14.6, MATLAB can be used to evaluate each of the cases,
First order:
>> polyval(p,(2000-1955)/35)
ans =
271.6900
Second order:
>> t = [t 1970];
>> pop = [pop 205.05];
>> ts = (t - 1955)/35;
>> p = polyfit(ts,pop,2);
>> polyval(p,(2000-1955)/35)
ans =
271.7400
Third order:
>> t = [t 1960];
>> pop = [pop 180.67];
>> ts = (t - 1955)/35;
>> p = polyfit(ts,pop,3);
>> polyval(p,(2000-1955)/35)
ans =
273.9900
Fourth order:
>> t = [t 1950];
>> pop = [pop 152.27];
>> ts = (t - 1955)/35;
>> p = polyfit(ts,pop,4);
>> polyval(p,(2000-1955)/35)
ans =
274.4200
Although the improvement is not great, the addition of each term causes the prediction for
2000 to increase. Thus, using higher-order approximations is moving the prediction closer
to the actual value of 281.42 that occurred in 2000.
CHAPTER 15
15.1 (a) The simultaneous equations for the natural spline can be set up as
⎡1                            ⎤ ⎧c1⎫   ⎧  0⎫
⎢1    3    0.5                ⎥ ⎪c2⎪   ⎪  0⎪
⎢     0.5  2    0.5           ⎥ ⎪c3⎪   ⎪ −6⎪
⎢          0.5  3    1        ⎥ ⎨c4⎬ = ⎨−24⎬
⎢               1    4    1   ⎥ ⎪c5⎪   ⎪ 15⎪
⎣                         1   ⎦ ⎩c6⎭   ⎩  0⎭
These equations can be solved for the c’s and then Eqs. (15.21) and (15.18) can be used to
solve for the b’s and the d’s. The coefficients for the intervals can be summarized as
interval a b c d
1 1 3.970954 0 0.029046
2 5 4.058091 0.087137 -0.40664
3 7 3.840249 -0.52282 -6.31535
4 8 -1.41909 -9.99585 5.414938
5 2 -5.16598 6.248963 -2.08299
These can be used to generate the following plot of the natural spline:
10
0
0 2 4 6
(b) The not-a-knot spline and its plot can be generated with MATLAB as
Notice how the not-a-knot version exhibits much more curvature, particularly between the
last points.
(c) The piecewise cubic Hermite polynomial and its plot can be generated with MATLAB
as
15.2 The simultaneous equations for the clamped spline with zero end slopes can be set up as
⎡1    0.5                                 ⎤ ⎧c1⎫   ⎧   0⎫
⎢0.5  2    0.5                            ⎥ ⎪c2⎪   ⎪ −90⎪
⎢     0.5  2    0.5                       ⎥ ⎪c3⎪   ⎪−108⎪
⎢          0.5  2    0.5                  ⎥ ⎨c4⎬ = ⎨ 144⎬
⎢               0.5  2    0.5             ⎥ ⎪c5⎪   ⎪  36⎪
⎢                    0.5  2    0.5        ⎥ ⎪c6⎪   ⎪  18⎪
⎣                         0.5  1          ⎦ ⎩c7⎭   ⎩   0⎭
These equations can be solved for the c’s and then Eqs. (15.21) and (15.18) can be used to
solve for the b’s and the d’s. The coefficients for the intervals can be summarized as
interval a b c d
1 70 0 15.87692 -31.7538
2 70 -7.93846 -31.7538 -24.7385
3 55 -58.2462 -68.8615 106.7077
4 22 -47.0769 91.2 -66.0923
5 13 -5.44615 -7.93846 13.66154
6 10 -3.13846 12.55385 -12.5538
The fit can be displayed in graphical form. Note that we are plotting the points as depth
versus temperature so that the graph depicts how the temperature changes down through the
tank.
0 50 100
0
Inspection of the plot indicates that the inflection point occurs in the 3rd interval. The cubic
equation for this interval is
T3(d) = 55 − 58.2462(d − 1) − 68.8615(d − 1)² + 106.7077(d − 1)³
where T = temperature and d = depth. This equation can be differentiated twice to yield the
second derivative
d²T3/dd² = −137.723 + 640.2462(d − 1)
This can be set equal to zero and solved for the depth of the thermocline as d = 1.21511 m.
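The thermocline depth follows directly from the interval-3 coefficients; a minimal check:

```python
# Depth of the thermocline: zero of the second derivative of the
# interval-3 cubic, T3''(d) = 2*c3 + 6*d3*(d - 1).
c3, d3 = -68.8615, 106.7077
d = 1 + (-2 * c3) / (6 * d3)
print(round(d, 5))  # 1.21511
```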
>> x = linspace(0,1,11);
>> y = 1./((x-0.3).^2+0.01)+1./((x-0.9).^2+0.04)-6;
>> xx = linspace(0,1);
>> yy = spline(x,y,xx);
>> yh = 1./((xx-0.3).^2+0.01)+1./((xx-0.9).^2+0.04)-6;
>> plot(x,y,'o',xx,yy,xx,yh,'--')
(b) The piecewise cubic Hermite polynomial fit can be set up in MATLAB as
>> x = linspace(0,1,11);
>> y = 1./((x-0.3).^2+0.01)+1./((x-0.9).^2+0.04)-6;
>> xx = linspace(0,1);
>> yy = interp1(x,y,xx,'pchip');
>> yh = 1./((xx-0.3).^2+0.01)+1./((xx-0.9).^2+0.04)-6;
>> plot(x,y,'o',xx,yy,xx,yh,'--')
15.4 The simultaneous equations for the natural spline can be set up as
\begin{bmatrix}
1 & & & & & & \\
100 & 400 & 100 & & & & \\
 & 100 & 600 & 200 & & & \\
 & & 200 & 800 & 200 & & \\
 & & & 200 & 800 & 200 & \\
 & & & & 200 & 800 & 200 \\
 & & & & & & 1
\end{bmatrix}
\begin{Bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \\ c_6 \\ c_7 \end{Bmatrix}
=
\begin{Bmatrix} 0 \\ -0.01946 \\ -0.00923 \\ -0.00098 \\ 0.001843 \\ 0.001489 \\ 0 \end{Bmatrix}
These equations can be solved for the c’s and then Eqs. (15.21) and (15.18) can be used to
solve for the b’s and the d’s. The coefficients for the intervals can be summarized as
interval a b c d
1 0 0.009801 0 -1.6E-07
2 0.824361 0.005128 -4.7E-05 1.3E-07
3 1 -0.00031 -7.7E-06 1.31E-08
4 0.735759 -0.0018 2.13E-07 2.82E-09
5 0.406006 -0.00138 1.9E-06 -8.7E-10
6 0.199148 -0.00072 1.39E-06 -2.3E-09
(c) The piecewise cubic Hermite polynomial fit can be set up in MATLAB as
(b) The clamped spline with zero end slopes can be set up in MATLAB as
(c) The piecewise cubic Hermite polynomial fit can be set up in MATLAB as
15.6 An M-file function to implement the natural spline can be written as
function yy = natspline(x,y,xx)
% natspline(x,y,xx):
% uses a natural cubic spline interpolation to find yy, the values
% of the underlying function y at the points in the vector xx.
% The vector x specifies the points at which the data y is given.
n = length(x);
m = length(xx);
aa(1,1) = 1; aa(n,n) = 1;
bb(1) = 0; bb(n) = 0;
for i = 2:n-1
aa(i,i-1) = h(x, i - 1);
aa(i,i) = 2 * (h(x, i - 1) + h(x, i));
aa(i,i+1) = h(x, i);
bb(i) = 3 * (fd(i + 1, i, x, y) - fd(i, i - 1, x, y));
end
c = aa\bb';
for i = 1:n - 1
a(i) = y(i);
b(i) = fd(i + 1, i, x, y) - h(x, i) / 3 * (2 * c(i) + c(i + 1));
d(i) = (c(i + 1) - c(i)) / 3 / h(x, i);
end
for i = 1:m
yy(i) = SplineInterp(x, n, a, b, c, d, xx(i));
end
function hh = h(x, i)
hh = x(i + 1) - x(i);

function fdd = fd(i1, i2, x, y)
% first finite divided difference
fdd = (y(i1) - y(i2)) / (x(i1) - x(i2));

function yyi = SplineInterp(x, n, a, b, c, d, xi)
% locate the interval containing xi and evaluate its cubic
for ii = 1:n - 1
  if xi >= x(ii) & xi <= x(ii + 1)
    dx = xi - x(ii);
    yyi = a(ii) + b(ii) * dx + c(ii) * dx ^ 2 + d(ii) * dx ^ 3;
    break
  end
end
The program can be used to duplicate Example 15.3:
>> x = [1 3 5 6 7 9];
>> y = 0.0185*x.^5-0.444*x.^4+3.9125*x.^3-15.456*x.^2+27.069*x-14.1;
>> xx = linspace(1,9);
>> yy = natspline(x,y,xx);
>> yc = 0.0185*xx.^5-0.444*xx.^4+3.9125*xx.^3-15.456*xx.^2+27.069*xx-14.1;
>> plot(x,y,'o',xx,yy,xx,yc,'--')
This function can be evaluated at the end nodes to give f'(1) = 6.211 and f'(9) = 11.787.
These values can then be added to the y vector and the spline function invoked to develop
the clamped fit:
>> yd = [6.211 y 11.787];
>> yy = spline(x,yd,xx);
>> plot(x,y,'o',xx,yy,xx,yc,'--')
CHAPTER 16
16.1 A table of integrals can be consulted to determine

\int \tanh ax \, dx = \frac{1}{a} \ln\cosh ax

Therefore,

\int_0^t \sqrt{\frac{gm}{c_d}} \tanh\left(\sqrt{\frac{gc_d}{m}}\,t\right) dt = \sqrt{\frac{gm}{c_d}}\sqrt{\frac{m}{gc_d}}\left[\ln\cosh\left(\sqrt{\frac{gc_d}{m}}\,t\right)\right]_0^t

= \sqrt{\frac{gm^2}{gc_d^2}}\left[\ln\cosh\left(\sqrt{\frac{gc_d}{m}}\,t\right) - \ln\cosh(0)\right]

Since cosh(0) = 1, this reduces to

\frac{m}{c_d}\ln\cosh\left(\sqrt{\frac{gc_d}{m}}\,t\right)
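As a check on this closed-form result, the distance integral can be compared against a numerical quadrature of the velocity function. The following Python sketch assumes illustrative parameter values (g = 9.81 m/s², m = 68.1 kg, c_d = 0.25 kg/m); any consistent set would do:

```python
import math

g, m, cd = 9.81, 68.1, 0.25          # assumed example parameters
a = math.sqrt(g * cd / m)

def v(t):
    # velocity from Prob. 1.1: v = sqrt(g*m/cd) * tanh(sqrt(g*cd/m) * t)
    return math.sqrt(g * m / cd) * math.tanh(a * t)

def simpson(f, lo, hi, n=1000):
    # composite Simpson's 1/3 rule (n must be even)
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    s += 4 * sum(f(lo + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(lo + i * h) for i in range(2, n, 2))
    return s * h / 3

numeric = simpson(v, 0, 10)
closed = m / cd * math.log(math.cosh(a * 10))
```

The two results agree to well within the quadrature error.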
16.2 The analytical solution can be evaluated as

\int_0^4 (1 - e^{-2x})\,dx = \left[x + 0.5e^{-2x}\right]_0^4 = 4 + 0.5e^{-2(4)} - 0 - 0.5e^{-2(0)} = 3.500167731

Trapezoidal rule (n = 1):

\frac{0 + 0.999665}{2}(4 - 0) = 1.99933 \quad (\varepsilon_t = 42.88\%)

Trapezoidal rule (n = 2):

\frac{0 + 2(0.981684) + 0.999665}{4}(4 - 0) = 2.96303 \quad (\varepsilon_t = 15.35\%)

Single application of Simpson's 1/3 rule:

\frac{0 + 4(0.981684) + 0.999665}{6}(4 - 0) = 3.28427 \quad (\varepsilon_t = 6.17\%)
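The three estimates above can be reproduced with short implementations of the two rules; a Python sketch:

```python
import math

f = lambda x: 1 - math.exp(-2 * x)

def trap(f, a, b, n):
    # composite trapezoidal rule with n segments
    h = (b - a) / n
    return h * (f(a) + 2 * sum(f(a + i * h) for i in range(1, n)) + f(b)) / 2

def simp13(f, a, b):
    # single application of Simpson's 1/3 rule
    return (b - a) * (f(a) + 4 * f((a + b) / 2) + f(b)) / 6

exact = 4 + 0.5 * math.exp(-8) - 0.5
```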
16.3 The analytical solution can be evaluated as

\int_0^{\pi/2} (6 + 3\cos x)\,dx = \left[6x + 3\sin x\right]_0^{\pi/2} = 6(\pi/2) + 3\sin(\pi/2) - 6(0) - 3\sin(0) = 12.424778

Trapezoidal rule (n = 1):

\left(\frac{\pi}{2} - 0\right)\frac{9 + 6}{2} = 11.78097 \quad (\varepsilon_t = 5.18\%)

Trapezoidal rule (n = 2):

\left(\frac{\pi}{2} - 0\right)\frac{9 + 2(8.12132) + 6}{4} = 12.26896 \quad (\varepsilon_t = 1.25\%)

Single application of Simpson's 1/3 rule:

\left(\frac{\pi}{2} - 0\right)\frac{9 + 4(8.12132) + 6}{6} = 12.4316 \quad (\varepsilon_t = 0.0550\%)

Single application of Simpson's 3/8 rule:

\left(\frac{\pi}{2} - 0\right)\frac{9 + 3(8.59808 + 7.5) + 6}{8} = 12.42779 \quad (\varepsilon_t = 0.0243\%)
16.4 The analytical solution can be evaluated as

\int_{-2}^{4} (1 - x - 4x^3 + 2x^5)\,dx = \left[x - \frac{x^2}{2} - x^4 + \frac{x^6}{3}\right]_{-2}^{4}

= 4 - \frac{4^2}{2} - 4^4 + \frac{4^6}{3} - (-2) + \frac{(-2)^2}{2} + (-2)^4 - \frac{(-2)^6}{3} = 1104

Trapezoidal rule (n = 1):

\frac{-29 + 1789}{2}(4 - (-2)) = 5280 \quad (\varepsilon_t = 378.3\%)

Trapezoidal rule (n = 2):

\frac{-29 + 2(-2) + 1789}{4}(4 - (-2)) = 2634 \quad (\varepsilon_t = 138.6\%)

Single application of Simpson's 1/3 rule:

\frac{-29 + 4(-2) + 1789}{6}(4 - (-2)) = 1752 \quad (\varepsilon_t = 58.7\%)
16.5 (a) The analytical solution can be evaluated as
\int_0^{1.2} e^{-x}\,dx = \left[-e^{-x}\right]_0^{1.2} = -e^{-1.2} - (-e^{0}) = 0.69880579
16.6 (a) The analytical solution can be evaluated as

\int_{-2}^{2}\left[\frac{x^3}{3} - 3y^2x + \frac{x^2y^3}{2}\right]_0^4 dy = \int_{-2}^{2}\left(\frac{(4)^3}{3} - 3y^2(4) + \frac{(4)^2}{2}y^3\right) dy

= \int_{-2}^{2}\left(21.33333 - 12y^2 + 8y^3\right) dy = \left[21.33333y - 4y^3 + 2y^4\right]_{-2}^{2} = 21.33333
(b) The composite trapezoidal rule with n = 2 can be used to evaluate the inner integral at the three equispaced values of y,

y = -2: \frac{-12 + 2(-24) - 28}{4}(4 - 0) = -88

y = 0: \frac{0 + 2(4) + 16}{4}(4 - 0) = 24

y = 2: \frac{-12 + 2(8) + 36}{4}(4 - 0) = 40

These values can then be integrated over y with the same rule,

\frac{-88 + 2(24) + 40}{4}(2 - (-2)) = 0

\varepsilon_t = \left|\frac{21.33333 - 0}{21.33333}\right| \times 100\% = 100\%
(c) Single applications of Simpson's 1/3 rule can be used to evaluate the inner integral at the three equispaced values of y,

y = -2: \frac{-12 + 4(-24) - 28}{6}(4 - 0) = -90.66667

y = 0: \frac{0 + 4(4) + 16}{6}(4 - 0) = 21.33333

y = 2: \frac{-12 + 4(8) + 36}{6}(4 - 0) = 37.33333

These values can then be integrated over y with a single application of Simpson's 1/3 rule,

\frac{-90.66667 + 4(21.33333) + 37.33333}{6}(2 - (-2)) = 21.33333

\varepsilon_t = \left|\frac{21.33333 - 21.33333}{21.33333}\right| \times 100\% = 0\%

which is exact.
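The two-level application of Simpson's 1/3 rule can be sketched in Python to confirm the inner values and the exact outer result:

```python
def f(x, y):
    return x**2 - 3 * y**2 + x * y**3

def simp13(vals, width):
    # Simpson's 1/3 rule from three equally spaced samples
    return width * (vals[0] + 4 * vals[1] + vals[2]) / 6

# inner integrals over x in [0, 4] at y = -2, 0, 2
inner = [simp13([f(x, y) for x in (0, 2, 4)], 4) for y in (-2, 0, 2)]
# outer integral over y in [-2, 2]
outer = simp13(inner, 4)
```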
16.7 (a) The analytical solution can be evaluated as

\int_{-4}^{4}\int_0^6 \left[\frac{x^4}{4} - 2yzx\right]_{-1}^{3} dy\,dz = \int_{-4}^{4}\int_0^6 (20 - 8yz)\,dy\,dz

\int_{-4}^{4}\left[20y - 4zy^2\right]_0^6 dz = \int_{-4}^{4}(120 - 144z)\,dz

= \left[120z - 72z^2\right]_{-4}^{4} = 120(4) - 72(4)^2 - 120(-4) + 72(-4)^2 = 960
(b) Single applications of Simpson's 1/3 rule can be used to evaluate the inner integral (over x) at the three equispaced values of y for each value of z,
z = −4:
y = 0: \frac{-1 + 4(1) + 27}{6}(3 - (-1)) = 20

y = 3: \frac{23 + 4(25) + 51}{6}(3 - (-1)) = 116

y = 6: \frac{47 + 4(49) + 75}{6}(3 - (-1)) = 212

\frac{20 + 4(116) + 212}{6}(6 - 0) = 696
z = 0:
y = 0: \frac{-1 + 4(1) + 27}{6}(3 - (-1)) = 20

y = 3: \frac{-1 + 4(1) + 27}{6}(3 - (-1)) = 20

y = 6: \frac{-1 + 4(1) + 27}{6}(3 - (-1)) = 20

\frac{20 + 4(20) + 20}{6}(6 - 0) = 120
z = 4:
y = 0: \frac{-1 + 4(1) + 27}{6}(3 - (-1)) = 20

y = 3: \frac{-25 + 4(-23) + 3}{6}(3 - (-1)) = -76

y = 6: \frac{-49 + 4(-47) - 21}{6}(3 - (-1)) = -172

\frac{20 + 4(-76) - 172}{6}(6 - 0) = -456

The three results in z can then be integrated with a single application of Simpson's 1/3 rule,

\frac{696 + 4(120) + (-456)}{6}(4 - (-4)) = 960

\varepsilon_t = \left|\frac{960 - 960}{960}\right| \times 100\% = 0\%
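The nested Simpson's 1/3 applications can be verified with a Python sketch of the same three-level procedure:

```python
def f(x, y, z):
    return x**3 - 2 * y * z

def simp13(vals, width):
    # Simpson's 1/3 rule from three equally spaced samples
    return width * (vals[0] + 4 * vals[1] + vals[2]) / 6

def inner_x(y, z):
    # integrate over x in [-1, 3]
    return simp13([f(x, y, z) for x in (-1, 1, 3)], 4)

def middle_y(z):
    # integrate over y in [0, 6]
    return simp13([inner_x(y, z) for y in (0, 3, 6)], 6)

# integrate over z in [-4, 4]
outer = simp13([middle_y(z) for z in (-4, 0, 4)], 8)
```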
p =
   -0.00657842294444   0.01874733808337   0.56859435273356   4.46645555949356
>> tt = linspace(1,10);
>> vv = polyval(p,tt);
>> plot(tt,vv,t,v,'o')
The cubic can then be integrated to estimate the distance traveled,

d = \int_1^{10}\left(-0.006578t^3 + 0.018747t^2 + 0.568594t + 4.46646\right)dt

= \left[-0.001645t^4 + 0.006249t^3 + 0.284297t^2 + 4.46646t\right]_1^{10} = 58.14199
16.9

d = \frac{5.5982 \times 10^{10}}{2.5480 \times 10^{9}} = 21.971
\frac{30}{2(6)}\left[0 + 2(274.684 + 511.292 + 586.033 + 559.631 + 486.385) + 399.332\right] = 13{,}088.45

f = \frac{13{,}088.45}{996.1363} = 13.139 \text{ m}
CHAPTER 17
17.1 The integral can be evaluated analytically as,
2
2 ⎛ 3⎞ 2
I= ∫ 1
⎜ 2 x + ⎟ dx =
⎝ x⎠ ∫1
4 x 2 + 12 + 9 x − 2 dx
2
⎡ 4x 3 9⎤ 4( 2) 3 9 4(1) 3 9
I =⎢ + 12 x − ⎥ = + 12 ( 2) − − − 12(1) + = 25.8333
⎣ 3 x ⎦1 3 2 3 1
iteration → 1 2 3
εt → 6.9355% 0.1613% 0.0048%
εa → 1.6908% 0.0098%
1 27.62500000 25.87500000 25.83456463
2 26.31250000 25.83709184
4 25.95594388
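The tableau can also be generated programmatically; a Python sketch of Romberg integration built on the composite trapezoidal rule (run to a fixed 8 levels here rather than the εs stopping test):

```python
def f(x):
    return (2 * x + 3 / x)**2

def trap(f, a, b, n):
    # composite trapezoidal rule with n segments
    h = (b - a) / n
    return h * (f(a) + 2 * sum(f(a + i * h) for i in range(1, n)) + f(b)) / 2

def romberg(f, a, b, levels=8):
    # Romberg tableau via Richardson extrapolation of trapezoidal estimates
    I = [[0.0] * levels for _ in range(levels)]
    for k in range(levels):
        I[k][0] = trap(f, a, b, 2**k)
        for j in range(1, k + 1):
            I[k][j] = (4**j * I[k][j - 1] - I[k - 1][j - 1]) / (4**j - 1)
    return I[levels - 1][levels - 1]

exact = 4 * 2**3 / 3 + 12 * 2 - 9 / 2 - (4 / 3 + 12 - 9)
```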
17.2 (a) The integral can be evaluated analytically as

I = \left[-0.01094x^5 + 0.21615x^4 - 1.3854x^3 + 3.14585x^2 + 2x\right]_0^8 = 34.87808
iteration → 1 2 3 4
εt → 20.1699% 42.8256% 0.0000% 0.0000%
εa → 9.9064% 2.6766% 0.000000%
1 27.84320000 19.94133333 34.87808000 34.87808000
2 21.91680000 33.94453333 34.87808000
4 30.93760000 34.81973333
8 33.84920000
x = \frac{(8 + 0) + (8 - 0)x_d}{2} = 4 + 4x_d \qquad dx = \frac{8 - 0}{2}dx_d = 4\,dx_d

I = \int_{-1}^{1}\left[-0.0547(4 + 4x_d)^4 + 0.8646(4 + 4x_d)^3 - 4.1562(4 + 4x_d)^2 + 6.2917(4 + 4x_d) + 2\right]4\,dx_d
The transformed function can be evaluated using the values from Table 17.1
which is exact.
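The exactness can be confirmed with a three-point Gauss-Legendre formula, which integrates polynomials up to degree 5 exactly; a Python sketch:

```python
import math

def f(x):
    return -0.0547*x**4 + 0.8646*x**3 - 4.1562*x**2 + 6.2917*x + 2

# three-point Gauss-Legendre nodes and weights on [-1, 1]
nodes = (-math.sqrt(3 / 5), 0.0, math.sqrt(3 / 5))
weights = (5 / 9, 8 / 9, 5 / 9)

a, b = 0, 8
# change of variable x = (a+b)/2 + (b-a)/2 * xd
I = sum(w * f((a + b) / 2 + (b - a) / 2 * xd)
        for xd, w in zip(nodes, weights)) * (b - a) / 2
```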
(d)
>> format long
>> y = inline('-0.0547*x.^4+0.8646*x.^3-4.1562*x.^2+6.2917*x+2');
>> I = quad(y,0,8)
I =
34.87808000000000
17.3 Although it’s not required, the analytical solution can be evaluated simply as
I = \int_0^3 xe^x dx = \left[e^x(x - 1)\right]_0^3 = 41.17107385
iteration → 1 2 3
εt → 119.5350% 5.8349% 0.1020%
εa → 26.8579% 0.3579%
1 90.38491615 43.57337260 41.21305531
2 55.27625849 41.36057514
4 44.83949598
x = \frac{(3 + 0) + (3 - 0)x_d}{2} = 1.5 + 1.5x_d \qquad dx = \frac{3 - 0}{2}dx_d = 1.5\,dx_d

I = \int_{-1}^{1}\left[(1.5 + 1.5x_d)e^{1.5 + 1.5x_d}\right]1.5\,dx_d
The transformed function can be evaluated using the values from Table 17.1
I =
41.17107385090233
>> I = quadl(inline('x.*exp(x)'),0,3)
I =
41.17107466800178
ans =
0.96610514647531
I = \frac{2}{\sqrt{\pi}}\int_{-1}^{1}\left[e^{-(0.75 + 0.75x_d)^2}\right]0.75\,dx_d
The transformed function can be evaluated using the values from Table 17.1
(b) The transformed function can be evaluated using the values from Table 17.1
17.5 (a) The tableau depicting the implementation of Romberg integration to εs = 0.5% is
iteration → 1 2 3 4
εa → 19.1131% 1.0922% 0.035826%
1 199.66621287 847.93212300 1027.49455856 1051.60670352
2 685.86564547 1016.27190634 1051.22995126
4 933.67034112 1049.04507345
8 1020.20139037
Note that if 8 iterations are implemented, the method converges on a value of
1053.38523686. This result is also obtained if you use the composite Simpson’s 1/3 rule
with 1024 segments.
x = \frac{(30 + 0) + (30 - 0)x_d}{2} = 15 + 15x_d \qquad dx = \frac{30 - 0}{2}dx_d = 15\,dx_d

I = 200\int_{-1}^{1}\left[\frac{15 + 15x_d}{22 + 15x_d}e^{-2.5(15 + 15x_d)/30}\right]15\,dx_d
The transformed function can be evaluated using the values from Table 17.1
(c) Interestingly, the quad function encounters a problem and exceeds the maximum
number of iterations
I =
1.085280043451920e+003
The quadl function converges rapidly, but does not yield a very accurate result:
>> I = quadl(inline('200*x/(7+x)*exp(-2.5*x/30)'),0,30)
I =
1.055900924411335e+003
17.6 The integral to be evaluated is

I = \int_0^{1/2}\left(10e^{-t}\sin 2\pi t\right)^2 dt
iteration → 1 2 3 4
εa → 25.0000% 2.0824% 0.025340%
1 0.00000000 20.21768866 15.16502516 15.41501768
2 15.16326649 15.48081663 15.41111155
4 15.40142910 15.41546811
8 15.41195836
I = \int_{-1}^{1}\left[10e^{-(0.25 + 0.25x_d)}\sin 2\pi(0.25 + 0.25x_d)\right]^2 0.25\,dx_d
For the two-point application, the transformed function can be evaluated using the values
from Table 17.1
For the three-point application, the transformed function can be evaluated using the values
from Table 17.1
(c)
>> format long
>> I = quad(inline('(10*exp(-x).*sin(2*pi*x)).^2'),0,0.5)
I =
15.41260804934509
17.7 The integral to be evaluated is

I = \int_0^{0.75} 10\left(1 - \frac{r}{0.75}\right)^{1/7} 2\pi r\,dr
iteration → 1 2 3 4
εa → 25.0000% 1.0725% 0.098313%
1 0.00000000 10.67030554 12.88063803 13.74550712
2 8.00272915 12.74249225 13.73199355
4 11.55755148 13.67014971
8 13.14200015
These can be substituted to yield

I = \int_{-1}^{1}\left[10\left(1 - \frac{0.375 + 0.375x_d}{0.75}\right)^{1/7} 2\pi(0.375 + 0.375x_d)\right]0.375\,dx_d
For the two-point application, the transformed function can be evaluated using the values
from Table 17.1
(c)
>> format long
>> I = quad(inline('10*(1-x/0.75).^(1/7)*2*pi.*x'),0,0.75)
I =
14.43168560836254
17.8 The integral to be evaluated is

I = \int_2^8 (9 + 4\cos^2 0.4t)(5e^{-0.5t} + 2e^{0.15t})\,dt
iteration → 1 2 3 4
εa → 7.4179% 0.1054% 0.001212%
1 411.26095167 317.15529472 322.59571622 322.34570788
2 340.68170896 322.25568988 322.34961426
4 326.86219465 322.34374398
8 323.47335665
(b)
>> format long
>> y = inline('(9+4*cos(0.4*x).^2).*(5*exp(-0.5*x)+2*exp(0.15*x))')
>> I = quadl(y,2,8)
I =
3.223483672542467e+002
(a) As in Prob. 16.6, the analytical solution can be evaluated as

\int_{-2}^{2}\left[\frac{x^3}{3} - 3y^2x + \frac{x^2y^3}{2}\right]_0^4 dy = \int_{-2}^{2}\left(\frac{2(4)^3}{3}\cdot\frac{1}{2} - 3y^2(4) + \frac{(4)^2}{2}y^3\right) dy

= \int_{-2}^{2}\left(21.33333 - 12y^2 + 8y^3\right) dy = \left[21.33333y - 4y^3 + 2y^4\right]_{-2}^{2} = 21.33333
(b) The operation of the dblquad function can be understood by invoking help,
A session to use the function to perform the double integral can be implemented as,
>> dblquad(inline('x.^2-3*y.^2+x*y.^3'),0,4,-2,2)
ans =
21.3333
CHAPTER 18
18.1 (a) The analytical solution can be derived by the separation of variables,
\int\frac{dy}{y} = \int (t^3 - 1.5)\,dt

\ln y = \frac{t^4}{4} - 1.5t + C

Substituting the initial conditions yields C = 0. Substituting this value and taking the exponential gives

y = e^{t^4/4 - 1.5t}
t y dy/dt
0 1 -1.5
0.5 0.25 -0.34375
1 0.078125 -0.03906
1.5 0.058594 0.109863
2 0.113525
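The Euler table for h = 0.5 can be reproduced with a few lines of Python:

```python
def f(t, y):
    # dy/dt = y*t^3 - 1.5*y
    return y * t**3 - 1.5 * y

t, y, h = 0.0, 1.0, 0.5
vals = [y]
for _ in range(4):          # step from t = 0 to t = 2
    y += f(t, y) * h
    t += h
    vals.append(y)
```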
t y dy/dt
0 1 -1.5
0.25 0.625 -0.92773
0.5 0.393066 -0.54047
0.75 0.25795 -0.2781
1 0.188424 -0.09421
1.25 0.164871 0.074707
1.5 0.183548 0.344153
1.75 0.269586 1.040434
2 0.529695
t y dy/dt tm ym dym/dt
0 1 -1.5 0.25 0.625 -0.92773
0.5 0.536133 -0.73718 0.75 0.351837 -0.37932
1 0.346471 -0.17324 1.25 0.303162 0.13737
1.5 0.415156 0.778417 1.75 0.60976 2.353292
2 1.591802
(d) RK4 (h = 0.5)
t y k1 tm ym k2 tm ym k3 te ye k4 φ
0 1.0000 -1.5000 0.25 0.6250 -0.9277 0.25 0.7681 -1.1401 0.5 0.4300 -0.5912 -1.0378
0.5 0.4811 -0.6615 0.75 0.3157 -0.3404 0.75 0.3960 -0.4269 1 0.2676 -0.1338 -0.3883
1 0.2869 -0.1435 1.25 0.2511 0.1138 1.25 0.3154 0.1429 1.5 0.3584 0.6720 0.1736
1.5 0.3738 0.7008 1.75 0.5489 2.1186 1.75 0.9034 3.4866 2 2.1170 13.7607 4.2786
2 2.5131
[Plot comparing Euler (h = 0.5), Euler (h = 0.25), midpoint, RK4, and the analytical solution]
18.2 (a) The analytical solution can be derived by the separation of variables,
\int\frac{dy}{\sqrt{y}} = \int (1 + 2x)\,dx

2\sqrt{y} = x + x^2 + C

Substituting the initial conditions yields C = 2. Substituting this value and rearranging gives

y = \left(\frac{x^2 + x + 2}{2}\right)^2
x y
0 1
0.25 1.336914
0.5 1.890625
0.75 2.743164
1 4
f(0, 1) = (1 + 2(0))\sqrt{1} = 1
x y dy/dx
0 1 1
0.25 1.25 1.67705
0.5 1.66926 2.584
0.75 2.31526 3.804
1 3.26626 5.42184
Predictor:
k_1 = (1 + 2(0))\sqrt{1} = 1
Corrector:
y(0.25) = 1 + \frac{1 + 1.6771}{2}(0.25) = 1.33463
x y k1 xe ye k2 dy/dx
0 1 1.0000 0.25 1.25 1.6771 1.3385
0.25 1.33463 1.7329 0.5 1.76785 2.6592 2.1961
0.5 1.88364 2.7449 0.75 2.56987 4.0077 3.3763
0.75 2.72772 4.1290 1 3.75996 5.8172 4.9731
1 3.97099
(d) Ralston’s method:
Predictor:
k_1 = (1 + 2(0))\sqrt{1} = 1
Corrector:
y(0.25) = 1 + \frac{1 + 2(1.49837)}{3}(0.25) = 1.33306
(e) RK4
x y k1 xm ym k2 xm ym k3 xe ye k4 φ
0 1.0000 1 0.125 1.1250 1.32583 0.125 1.1657 1.34961 0.25 1.3374 1.73469 1.3476
0.25 1.3369 1.73436 0.375 1.5537 2.18133 0.375 1.6096 2.2202 0.5 1.8919 2.75096 2.2147
0.5 1.8906 2.74997 0.625 2.2343 3.36322 0.625 2.3110 3.42043 0.75 2.7457 4.14253 3.4100
0.75 2.7431 4.14056 0.875 3.2606 4.96574 0.875 3.3638 5.04368 1 4.0040 6.00299 5.0271
1 3.9998
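The RK4 column above can be cross-checked in Python; the h = 0.25 result at x = 1 lands very close to the analytical value of 4:

```python
import math

def f(x, y):
    # dy/dx = (1 + 2x) * sqrt(y)
    return (1 + 2 * x) * math.sqrt(y)

x, y, h = 0.0, 1.0, 0.25
for _ in range(4):
    k1 = f(x, y)
    k2 = f(x + h / 2, y + k1 * h / 2)
    k3 = f(x + h / 2, y + k2 * h / 2)
    k4 = f(x + h, y + k3 * h)
    y += (k1 + 2 * (k2 + k3) + k4) / 6 * h
    x += h

exact = ((x**2 + x + 2) / 2)**2   # analytical solution at x = 1
```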
[Plot comparing Euler, Heun, Ralston, RK4, and the analytical solution]
18.3 (a) Heun’s method:
Predictor:
k1 = −2(1) + (0) 2 = −2
y (0.5) = 1 + (−2)(0.5) = 0
Corrector:
y(0.5) = 1 + \frac{-2 + 0.25}{2}(0.5) = 0.5625
y_{i+1}^1 = 1 + \frac{-2 + (-2(0) + 0.5^2)}{2}(0.5) = 0.5625

y_{i+1}^2 = 1 + \frac{-2 + (-2(0.5625) + 0.5^2)}{2}(0.5) = 0.28125

y_{i+1}^3 = 1 + \frac{-2 + (-2(0.28125) + 0.5^2)}{2}(0.5) = 0.421875
The iterations can be continued until the percent relative error falls below 0.1%. This
occurs after 12 iterations with the result that y(0.5) = 0.37491 with εa = 0.073%. The
remaining values can be computed in a like fashion to give
t y
0 1.0000000
0.5 0.3749084
1 0.3334045
1.5 0.6526523
2 1.2594796
(b) The midpoint method:

k_1 = -2(1) + (0)^2 = -2
The remainder of the computations can be implemented in a similar fashion as listed below:
t y dy/dt tm ym dym/dt
0 1 -2.0000 0.25 0.5 -0.9375
0.5 0.53125 -0.8125 0.75 0.328125 -0.0938
1 0.48438 0.0313 1.25 0.492188 0.57813
1.5 0.77344 0.7031 1.75 0.949219 1.16406
2 1.35547
(c) Ralston's method:

k_1 = -2(1) + (0)^2 = -2

y(0.5) = 1 + \frac{-2 + 2(-0.3594)}{3}(0.5) = 0.54688
18.4 (a) Taking the natural logarithm of the exponential growth model p = p_0 e^{k_g t} gives

\ln p = \ln p_0 + k_g t

Therefore, a semi-log plot (ln p versus t) should yield a straight line with a slope of k_g. The plot, along with the linear regression best-fit line, is shown below. The estimate of the population growth rate is k_g = 0.0178/yr.
(b) The ODE can be integrated with the fourth-order RK method with the results tabulated
and plotted below:
1985 4759.54 84.60 4971.03 88.36 4980.43 88.52 5202.15 92.46 88.47
1990 5201.89 92.46 5433.04 96.57 5443.31 96.75 5685.64 101.06 96.69
1995 5685.35 101.05 5937.98 105.54 5949.21 105.74 6214.06 110.45 105.68
2000 6213.75 110.44 6489.86 115.35 6502.13 115.57 6791.60 120.72 115.50
2005 6791.25 120.71 7093.02 126.07 7106.43 126.31 7422.81 131.93 126.24
2010 7422.43 131.93 7752.25 137.79 7766.90 138.05 8112.68 144.20 137.97
2015 8112.27 144.19 8472.74 150.60 8488.76 150.88 8866.67 157.60 150.79
2020 8866.22 157.59 9260.20 164.59 9277.70 164.90 9690.74 172.25 164.80
2025 9690.24 172.24 10120.84 179.89 10139.97 180.23 10591.40 188.25 180.12
2030 10590.85 188.24 11061.47 196.61 11082.38 196.98 11575.76 205.75 196.86
2035 11575.17 205.74 12089.52 214.88 12112.37 215.29 12651.61 224.87 215.16
2040 12650.96 224.86 13213.11 234.85 13238.09 235.30 13827.45 245.77 235.16
2045 13826.74 245.76 14441.14 256.68 14468.44 257.17 15112.57 268.61 257.01
2050 15111.79
18.5 (a) The analytical solution can be used to compute values at times over the range. For
example, the value at t = 1955 can be computed as
p = \frac{2555(12{,}000)}{2555 + (12{,}000 - 2555)e^{-0.026(1955 - 1950)}} = 2826.2
Values at the other times can be computed and displayed along with the data in the plot
below.
(b) The ODE can be integrated with the fourth-order RK method with the results tabulated
and plotted below:
t p-rk4 k1 tm ym k2 tm ym k3 te ye k4 φ
1950 2555.0 52.29 1952.5 2685.7 54.20 1952.5 2690.5 54.27 1955.0 2826.3 56.18 54.23
1955 2826.2 56.17 1957.5 2966.6 58.06 1957.5 2971.3 58.13 1960.0 3116.8 59.99 58.09
1960 3116.6 59.99 1962.5 3266.6 61.81 1962.5 3271.1 61.87 1965.0 3425.9 63.64 61.83
1965 3425.8 63.64 1967.5 3584.9 65.36 1967.5 3589.2 65.41 1970.0 3752.8 67.06 65.37
1970 3752.6 67.06 1972.5 3920.3 68.63 1972.5 3924.2 68.66 1975.0 4096.0 70.15 68.63
1975 4095.8 70.14 1977.5 4271.2 71.52 1977.5 4274.6 71.55 1980.0 4453.5 72.82 71.52
1980 4453.4 72.82 1982.5 4635.4 73.97 1982.5 4638.3 73.98 1985.0 4823.3 75.00 73.95
1985 4823.1 75.00 1987.5 5010.6 75.88 1987.5 5012.8 75.89 1990.0 5202.6 76.62 75.86
1990 5202.4 76.62 1992.5 5394.0 77.20 1992.5 5395.5 77.21 1995.0 5588.5 77.63 77.18
1995 5588.3 77.63 1997.5 5782.4 77.90 1997.5 5783.1 77.90 2000.0 5977.8 78.00 77.87
2000 5977.7 78.00 2002.5 6172.7 77.94 2002.5 6172.5 77.94 2005.0 6367.4 77.71 77.91
2005 6367.2 77.71 2007.5 6561.5 77.32 2007.5 6560.5 77.32 2010.0 6753.8 76.77 77.29
2010 6753.7 76.77 2012.5 6945.6 76.06 2012.5 6943.9 76.07 2015.0 7134.0 75.21 76.04
2015 7133.9 75.21 2017.5 7321.9 74.21 2017.5 7319.4 74.23 2020.0 7505.0 73.09 74.20
2020 7504.9 73.09 2022.5 7687.6 71.83 2022.5 7684.5 71.85 2025.0 7864.2 70.47 71.82
2025 7864.0 70.47 2027.5 8040.2 68.98 2027.5 8036.5 69.01 2030.0 8209.1 67.43 68.98
2030 8208.9 67.43 2032.5 8377.5 65.75 2032.5 8373.3 65.80 2035.0 8537.9 64.04 65.76
2035 8537.7 64.05 2037.5 8697.8 62.23 2037.5 8693.3 62.28 2040.0 8849.1 60.41 62.25
2040 8849.0 60.41 2042.5 9000.0 58.50 2042.5 8995.2 58.56 2045.0 9141.8 56.61 58.53
2045 9141.6 56.62 2047.5 9283.1 54.65 2047.5 9278.2 54.72 2050.0 9415.2 52.73 54.68
2050 9415.0
Thus, the RK4 results are so close to the analytical solution that the two results are
indistinguishable graphically.
18.6 We can solve this problem with the M-file Eulode (Fig. 18.3). First, we develop a function
to compute the derivative
function dv = dvdt(t, v)
if t < 10
% chute is unopened
dv = 9.81 - 0.25/80*v^2;
else
% chute is opened
dv = 9.81 - 5/80*v^2;
end
Notice how we have used an if statement to apply a higher drag coefficient for times after the cord is pulled. The Eulode function can then be used to generate results and display them graphically.
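The same logic can be sketched outside MATLAB; a Python version of the derivative function with a simple Euler loop (step size h = 0.1 assumed) settles on the post-chute terminal velocity:

```python
def dvdt(t, v):
    # drag parameter jumps when the chute opens at t = 10 s
    if t < 10:
        return 9.81 - 0.25 / 80 * v**2
    return 9.81 - 5 / 80 * v**2

t, v, h = 0.0, 0.0, 0.1
while t < 30 - 1e-9:
    v += dvdt(t, v) * h
    t += h
```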
18.7 (a) Euler’s method:
t y z dy/dt dz/dt
0 2 4 16 -16
0.1 3.6 2.4 3.658049 -10.368
0.2 3.965805 1.3632 -2.35114 -3.68486
0.3 3.730691 0.994714 -3.77687 -1.84568
0.4 3.353004 0.810147 -3.99072 -1.10035
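The first two Euler steps can be verified with a Python sketch of the system:

```python
import math

def f1(t, y, z):
    return -2 * y + 5 * z * math.exp(-t)

def f2(t, y, z):
    return -y * z**2 / 2

t, y, z, h = 0.0, 2.0, 4.0, 0.1
for _ in range(2):
    # both slopes use the old (y, z) values
    y, z = y + f1(t, y, z) * h, z + f2(t, y, z) * h
    t += h
```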
[Plot of y and z versus t]
k_{2,1} = f_1(0.05, 2.8, 3.2) = -2(2.8) + 5(3.2)e^{-0.05} = 9.619671

k_{2,2} = f_2(0.05, 2.8, 3.2) = -\frac{2.8(3.2)^2}{2} = -14.336

k_{3,2} = f_2(0.05, 2.480984, 3.2832) = -\frac{2.480984(3.2832)^2}{2} = -13.3718

k_{4,2} = f_2(0.1, 3.065342, 2.662824) = -\frac{3.065342(2.662824)^2}{2} = -10.8676
These slope estimates can then be used to make the prediction for the first step
The remaining steps can be taken in a similar fashion and the results summarized as
t y z
0 2 4
0.1 3.041043 2.628615
0.2 3.342571 1.845308
0.3 3.301983 1.410581
0.4 3.107758 1.149986
18.8 The second-order van der Pol equation can be reexpressed as a system of 2 first-order
ODEs,
dy
=z
dt
dz
= (1 − y 2 ) z − y
dt
(a) Euler (h = 0.2). Here are the first few steps. The remainder of the computation would be
implemented in a similar fashion and the results displayed in the plot below.
(b) Euler (h = 0.1). Here are the first few steps. The remainder of the computation would be
implemented in a similar fashion and the results displayed in the plot below.
[Plot of y and z versus t for h = 0.1 and h = 0.2]
18.9 The second-order equation can be reexpressed as a system of two first-order ODEs,
dy
=z
dt
dz
= −9 y
dt
(a) Euler. Here are the first few steps along with the analytical solution. The remainder of
the computation would be implemented in a similar fashion and the results displayed in the
plot below.
[Plot of the Euler solution versus the analytical solution]
(b) RK4. Here are the first few steps along with the analytical solution. The remainder of
the computation would be implemented in a similar fashion and the results displayed in the
plot below.
k1,1 = f 1 (0,1,0) = z = 0
y (0.05) = 1 + 0(0.05) = 1
k 2, 2 = f 2 (0.05,1,−0.45) = −9(1) = −9
These slope estimates can then be used to make the prediction for the first step
The remaining steps can be taken in a similar fashion and the first few results summarized
as
t y z yanal
0 1.0000 0.0000 1.00000
0.1 0.9553 -0.8865 0.95534
0.2 0.8253 -1.6938 0.82534
0.3 0.6216 -2.3498 0.62161
0.4 0.3624 -2.7960 0.36236
0.5 0.0708 -2.9924 0.07074
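The RK4 system solution can be cross-checked against the analytical solution cos(3t) with a Python sketch (y(0) = 1, z(0) = 0, h = 0.1):

```python
import math

def f(t, s):
    # dy/dt = z, dz/dt = -9y
    y, z = s
    return (z, -9 * y)

t, s, h = 0.0, (1.0, 0.0), 0.1
for _ in range(5):          # step from t = 0 to t = 0.5
    k1 = f(t, s)
    k2 = f(t + h / 2, (s[0] + k1[0] * h / 2, s[1] + k1[1] * h / 2))
    k3 = f(t + h / 2, (s[0] + k2[0] * h / 2, s[1] + k2[1] * h / 2))
    k4 = f(t + h, (s[0] + k3[0] * h, s[1] + k3[1] * h))
    s = tuple(si + (a + 2 * (b + c) + d) / 6 * h
              for si, a, b, c, d in zip(s, k1, k2, k3, k4))
    t += h
```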
As can be seen, the results agree with the analytical solution closely. A plot of all the values
can be developed and indicates the same close agreement.
[Plot of the RK4 solution versus the analytical solution]
18.10 A MATLAB M-file for Heun’s method with iteration can be developed as
% so that range goes from t = ti to tf
if t(n)<tf
t(n+1) = tf;
n = n+1;
end
y = y0*ones(n,1); %preallocate y to improve efficiency
iter = 0;
for i = 1:n-1
hh = t(i+1) - t(i);
k1 = feval(dydt,t(i),y(i));
y(i+1) = y(i) + k1*hh;
while (1)
yold = y(i+1);
k2 = feval(dydt,t(i)+hh,y(i+1));
y(i+1) = y(i) + (k1+k2)/2*hh;
iter = iter + 1;
if y(i+1) ~= 0, ea = abs((y(i+1) - yold)/y(i+1)) * 100; end
if ea <= es | iter >= maxit, break, end
end
end
plot(t,y)
Here is the test of the solution of Prob. 18.5. First, an M-file holding the differential
equation is written as
function dp = dpdt(t, p)
dp = 0.026*(1-p/12000)*p;
1.0e+003 *
    1.9500    2.5550
    1.9550    2.8261
    1.9600    3.1165
    1.9650    3.4256
    1.9700    3.7523
    1.9750    4.0953
    1.9800    4.4527
    1.9850    4.8222
    1.9900    5.2012
    1.9950    5.5868
    2.0000    5.9759
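The logic of Heun's method with corrector iteration can be sketched in Python for the same logistic model (a fixed 15 corrector iterations is used here instead of the εs stopping test):

```python
def dpdt(t, p):
    return 0.026 * (1 - p / 12000) * p

t, p, h = 1950.0, 2555.0, 5.0
while t < 2000 - 1e-9:
    k1 = dpdt(t, p)
    p_new = p + k1 * h                       # Euler predictor
    for _ in range(15):                      # iterate the corrector
        p_new = p + (k1 + dpdt(t + h, p_new)) / 2 * h
    p, t = p_new, t + h
```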
18.11 A MATLAB M-file for the midpoint method can be developed as
ti = tspan(1);
tf = tspan(2);
t = (ti:h:tf)';
n = length(t);
% if necessary, add an additional value of t
% so that range goes from t = ti to tf
if t(n)<tf
t(n+1) = tf;
n = n+1;
end
y = y0*ones(n,1); %preallocate y to improve efficiency
for i = 1:n-1
hh = t(i+1) - t(i);
k1 = feval(dydt,t(i),y(i));
ymid = y(i) + k1*hh/2;
k2 = feval(dydt,t(i)+hh/2,ymid);
y(i+1) = y(i) + k2*hh;
end
plot(t,y)
Here is the test of the solution of Prob. 18.5. First, an M-file holding the differential
equation is written as
function dp = dpdt(t, p)
dp = 0.026*(1-p/12000)*p;
Then the M-file can be invoked as in
18.12 A MATLAB M-file for the fourth-order RK method can be developed as
ti = tspan(1);
tf = tspan(2);
t = (ti:h:tf)';
n = length(t);
% if necessary, add an additional value of t
% so that range goes from t = ti to tf
if t(n)<tf
t(n+1) = tf;
n = n+1;
end
y = y0*ones(n,1); %preallocate y to improve efficiency
for i = 1:n-1
hh = t(i+1) - t(i);
k1 = feval(dydt,t(i),y(i));
ymid = y(i) + k1*hh/2;
k2 = feval(dydt,t(i)+hh/2,ymid);
ymid = y(i) + k2*hh/2;
k3 = feval(dydt,t(i)+hh/2,ymid);
yend = y(i) + k3*hh;
k4 = feval(dydt,t(i)+hh,yend);
phi = (k1+2*(k2+k3)+k4)/6;
y(i+1) = y(i) + phi*hh;
end
plot(t,y)
Here is the test of the solution of Prob. 18.2. First, an M-file holding the differential
equation is written as
function dy = dydx(x, y)
dy = (1+2*x)*sqrt(y);
18.13 Note that students can take two approaches to developing this M-file. The first program
shown below is strictly developed to solve 2 equations.
ti = tspan(1);
tf = tspan(2);
t = (ti:h:tf)';
n = length(t);
% if necessary, add an additional value of t
% so that range goes from t = ti to tf
if t(n)<tf
t(n+1) = tf;
n = n+1;
end
y1 = y10*ones(n,1); %preallocate y's to improve efficiency
y2 = y20*ones(n,1);
for i = 1:n-1
hh = t(i+1) - t(i);
k11 = feval(dy1dt,t(i),y1(i),y2(i));
k12 = feval(dy2dt,t(i),y1(i),y2(i));
ymid1 = y1(i) + k11*hh/2;
ymid2 = y2(i) + k12*hh/2;
k21 = feval(dy1dt,t(i)+hh/2,ymid1,ymid2);
k22 = feval(dy2dt,t(i)+hh/2,ymid1,ymid2);
ymid1 = y1(i) + k21*hh/2;
ymid2 = y2(i) + k22*hh/2;
k31 = feval(dy1dt,t(i)+hh/2,ymid1,ymid2);
k32 = feval(dy2dt,t(i)+hh/2,ymid1,ymid2);
yend1 = y1(i) + k31*hh;
yend2 = y2(i) + k32*hh;
k41 = feval(dy1dt,t(i)+hh,yend1,yend2);
k42 = feval(dy2dt,t(i)+hh,yend1,yend2);
phi1 = (k11+2*(k21+k31)+k41)/6;
phi2 = (k12+2*(k22+k32)+k42)/6;
y1(i+1) = y1(i) + phi1*hh;
y2(i+1) = y2(i) + phi2*hh;
end
plot(t,y1,t,y2,'--')
Here is the test of the solution of Prob. 18.7. First, M-files holding the differential equations
are written as
A better approach is to develop an M-file that can be used for any number of simultaneous
first-order ODEs as in the following code:
% [t,y] = rk4sys(dydt,tspan,y0,h):
% uses the fourth-order RK method to integrate a pair of ODEs
% input:
% dydt = name of the M-file that evaluates the ODEs
% tspan = [ti, tf] where ti and tf = initial and
% final values of independent variable
% y0 = initial values of dependent variables
% h = step size
% output:
% t = vector of independent variable
% y = vector of solution for dependent variables
ti = tspan(1);
tf = tspan(2);
t = (ti:h:tf)';
n = length(t);
% if necessary, add an additional value of t
% so that range goes from t = ti to tf
if t(n)<tf
t(n+1) = tf;
n = n+1;
end
y(1,:) = y0;
for i = 1:n-1
hh = t(i+1) - t(i);
k1 = feval(dydt,t(i),y(i,:))';
ymid = y(i,:) + k1*hh/2;
k2 = feval(dydt,t(i)+hh/2,ymid)';
ymid = y(i,:) + k2*hh/2;
k3 = feval(dydt,t(i)+hh/2,ymid)';
yend = y(i,:) + k3*hh;
k4 = feval(dydt,t(i)+hh,yend)';
phi = (k1+2*(k2+k3)+k4)/6;
y(i+1,:) = y(i,:) + phi*hh;
end
plot(t,y(:,1),t,y(:,2),'--')
This code solves as many ODEs as are specified. Here is the test of the solution of Prob.
18.7. First, a single M-file holding the differential equations can be written as
function dy = dydtsys(t, y)
dy = [-2*y(1) + 5*y(2)*exp(-t);-y(1)*y(2)^2/2];
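The same general-purpose RK4 system solver can be sketched in Python and checked against the RK4 values obtained for Prob. 18.7:

```python
import math

def dydt(t, y):
    # ODEs from Prob. 18.7
    return [-2 * y[0] + 5 * y[1] * math.exp(-t), -y[0] * y[1]**2 / 2]

def rk4sys(f, t, y, h, n):
    # classical fourth-order RK for a system of ODEs (y is a list)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, [yi + k * h / 2 for yi, k in zip(y, k1)])
        k3 = f(t + h / 2, [yi + k * h / 2 for yi, k in zip(y, k2)])
        k4 = f(t + h, [yi + k * h for yi, k in zip(y, k3)])
        y = [yi + (a + 2 * (b + c) + d) / 6 * h
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

y = rk4sys(dydt, 0.0, [2.0, 4.0], 0.1, 1)
```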
CHAPTER 19
19.1 (a) Euler’s method. Here are the first few steps
t x y dx/dt dy/dt
0 2.0000 1.0000 1.2000 -0.2000
0.1 2.1200 0.9800 1.2974 -0.1607
0.2 2.2497 0.9639 1.3985 -0.1206
0.3 2.3896 0.9519 1.5028 -0.0791
0.4 2.5399 0.9440 1.6093 -0.0359
0.5 2.7008 0.9404 1.7171 0.0096
The computation can be continued and the results plotted versus time:
[Plot of x and y versus t]
Notice that the amplitudes of the oscillations are expanding. This is also illustrated by a
state-space plot (y versus x):
x(0.05) = 2 + 1.6(0.05) = 2.08
k 2, 2 = f 2 (0.05,2.08,0.995) = −0.06766
k 3, 2 = f 2 (0.05,2.083564,0.996617) = −0.06635
k 4, 2 = f 2 (0.1,2.167179,0.993365) = −0.03291
These slope estimates can then be used to make the prediction for the first step
The remaining steps can be taken in a similar fashion and the first few results summarized
as
t x y
0 2 1
0.1 2.167166 0.993318
0.2 2.348838 0.993588
0.3 2.545029 1.001398
0.4 2.755314 1.017509
0.5 2.978663 1.042891
A plot of all the values can be developed. Note that in contrast to Euler’s method, the
cycles do not amplify as time proceeds.
[Plot of x and y versus t]
This periodic nature is also evident from the state-space plot. Because this is the expected
behavior we can see that the RK4 is far superior to Euler’s method for this particular
problem.
(c) To implement ode45, first a function is developed to evaluate the predator-prey ODEs,
function yp = predprey(t,y)
yp = [1.5*y(1)-0.7*y(1)*y(2);-0.9*y(2)+0.4*y(1)*y(2)];
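Before turning to ode45, the RK4 values in (b) can be verified with a Python sketch of one classical RK4 step applied to this system:

```python
def predprey(t, s):
    # predator-prey ODEs used in parts (b) and (c)
    x, y = s
    return (1.5 * x - 0.7 * x * y, -0.9 * y + 0.4 * x * y)

t, s, h = 0.0, (2.0, 1.0), 0.1
k1 = predprey(t, s)
k2 = predprey(t + h / 2, tuple(si + k * h / 2 for si, k in zip(s, k1)))
k3 = predprey(t + h / 2, tuple(si + k * h / 2 for si, k in zip(s, k2)))
k4 = predprey(t + h, tuple(si + k * h for si, k in zip(s, k3)))
s = tuple(si + (a + 2 * (b + c) + d) / 6 * h
          for si, a, b, c, d in zip(s, k1, k2, k3, k4))
```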
19.2 (a) Here are the results for the first few steps as computed with the classical RK4 technique
t x y z
0 5 5 5
0.1 9.78147 17.07946 10.43947
0.2 17.70297 20.8741 35.89688
0.3 10.81088 -2.52924 39.30744
0.4 0.549578 -5.54419 28.07462
0.5 -3.1646 -5.84128 22.36888
0.6 -5.57588 -8.42037 19.92312
0.7 -8.88719 -12.6789 22.14148
0.8 -11.9142 -13.43 29.80001
0.9 -10.6668 -7.21784 33.39903
1 -6.84678 -3.43018 29.30717
[Time-series plot of x, y, and z]
The solution appears chaotic bouncing around from negative to positive values. Although
the pattern might appear random, an underlying pattern emerges when we look at the state-
space plots. For example, here is the plot of y versus x.
(b) To implement any of the MATLAB functions, first a function is developed to evaluate
the Lorenz ODEs,
function yp = lorenz(t,y)
yp = [-10*y(1)+10*y(2);28*y(1)-y(2)-y(1)*y(3);-2.666667*y(3)+y(1)*y(2)];
Then, the solution and plots for the ode23 function can be obtained:
Notice how this plot, although qualitatively similar to the constant-step RK4 result in (a), differs considerably in its details. However, the state-space representation looks much more consistent.
>> plot(y(:,1),y(:,2))
(c) The ode45 again differs in the details of the time-series plot,
(d) The ode23tb also differs in the details of the time-series plot,
Close inspection of all the above results indicates that they all yield identical results for a
period of time. Thereafter, they abruptly begin to diverge. The reason for this behavior is
that these equations are highly sensitive to their initial conditions. After a number of steps,
because they all employ different algorithms, they begin to diverge slightly. When the
discrepancy becomes large enough (which for these equations is not that much), the
solution will tend to make a large jump. Thus, after a while, the various solutions become
uncorrelated. Such solutions are said to be chaotic. It was this characteristic of these
particular equations that led Lorenz to suggest that long-range weather forecasts might not
be possible.
Predictor:
Corrector:
j yi+1j ε a ,%
1 3.269562
2 3.271558 0.061
Second step:
Predictor:
Predictor Modifier:
Corrector:
j yi+1j ε a ,%
1 2.573205
2 2.573931 0.0282
19.4 Before solving, for comparative purposes, we can develop the analytical solution as

y = e^{t^3/3 - t}
t y
0 1
0.25 0.782868
0.5 0.632337
The first step is taken with the fourth-order RK:
k1 = f (0,1) = 1(0) 2 − 1 = −1
k 2 = f (0.125,0.875) = −0.861328
k 3 = f (0.125,0.89233) = −0.87839
k 4 = f (0.25,0.78040) = −0.73163
The second step can then be implemented with the non-self-starting Heun method:
Predictor:
The iterative process can be continued with the final result converging on 0.63188975.
19.5 The implicit Euler can be written for this problem as

y_{i+1} = y_i + \left(-100{,}000\,y_{i+1} + 100{,}000 + 99{,}999e^{-t_{i+1}}\right)h

which can be solved for

y_{i+1} = \frac{y_i + 100{,}000h + 99{,}999e^{-t_{i+1}}h}{1 + 100{,}000h}
The results of applying this formula for the first few steps are shown below. A plot of the
entire solution is also displayed
t y
0 0
0.1 1.904638
0.2 1.818731
0.3 1.740819
0.4 1.67032
0.5 1.606531
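The recursion can be sketched in Python; note that the forcing term 100,000 + 99,999e^(-t) is assumed here because it reproduces the tabulated values:

```python
import math

h = 0.1
t, y = 0.0, 0.0
vals = []
for _ in range(5):
    t += h
    # implicit Euler solved algebraically for y_{i+1}
    y = (y + 100000 * h + 99999 * math.exp(-t) * h) / (1 + 100000 * h)
    vals.append(y)
```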
19.6 The implicit Euler can be written for this problem as

y_{i+1} = y_i + \left(30(\sin t_{i+1} - y_{i+1}) + 3\cos t_{i+1}\right)h

which can be solved for

y_{i+1} = \frac{y_i + 30\sin(t_{i+1})h + 3\cos(t_{i+1})h}{1 + 30h}
The results of applying this formula are tabulated and graphed below.
t y t y t y t y
0 0 1.2 0.952306 2.4 0.622925 3.6 -0.50089
0.4 0.444484 1.6 0.993242 2.8 0.270163 4 -0.79745
0.8 0.760677 2 0.877341 3.2 -0.12525
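The implicit formula can be sketched in Python (h = 0.4):

```python
import math

h = 0.4
t, y = 0.0, 0.0
vals = [y]
for _ in range(10):
    t += h
    # implicit Euler solved algebraically for y_{i+1}
    y = (y + 30 * math.sin(t) * h + 3 * math.cos(t) * h) / (1 + 30 * h)
    vals.append(y)
```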
19.7 (a) The explicit Euler can be written for this problem as

x_{1,i+1} = x_{1,i} + (999x_{1,i} + 1999x_{2,i})h

x_{2,i+1} = x_{2,i} + (-1000x_{1,i} - 2000x_{2,i})h

Because the step size is much too large for the stability requirements, the solution is unstable,
t x1 x2 dx1/dt dx2/dt
0 1 1 2998 -3000
0.05 150.9 -149 -147102 147100
0.1 -7204.2 7206 7207803 -7207805
0.15 353186 -353184 -3.5E+08 3.53E+08
0.2 -1.7E+07 17305943 1.73E+10 -1.7E+10
(b) The implicit Euler can be written for this problem as

x_{1,i+1} = x_{1,i} + (999x_{1,i+1} + 1999x_{2,i+1})h

x_{2,i+1} = x_{2,i} + (-1000x_{1,i+1} - 2000x_{2,i+1})h

or collecting terms

(1 - 999h)x_{1,i+1} - 1999h\,x_{2,i+1} = x_{1,i}

1000h\,x_{1,i+1} + (1 + 2000h)x_{2,i+1} = x_{2,i}

Thus, to solve for the first time step, we substitute the initial conditions for the right-hand side and solve the 2×2 system of equations. The best way to do this is with LU decomposition since we will have to solve the system repeatedly. For the present case, because it's easier to display, we will use the matrix inverse to obtain the solution. Thus, if the matrix is inverted, the solution for the first step amounts to the matrix multiplication,

\begin{Bmatrix} x_{1,i+1} \\ x_{2,i+1} \end{Bmatrix} = \begin{bmatrix} 1.886088 & 1.86648 \\ -0.93371 & -0.9141 \end{bmatrix}\begin{Bmatrix} 1 \\ 1 \end{Bmatrix} = \begin{Bmatrix} 3.752568 \\ -1.84781 \end{Bmatrix}

For the second step (from t = 0.05 to 0.1),

\begin{Bmatrix} x_{1,i+1} \\ x_{2,i+1} \end{Bmatrix} = \begin{bmatrix} 1.886088 & 1.86648 \\ -0.93371 & -0.9141 \end{bmatrix}\begin{Bmatrix} 3.752568 \\ -1.84781 \end{Bmatrix} = \begin{Bmatrix} 3.62878 \\ -1.81472 \end{Bmatrix}
The remaining steps can be implemented in a similar fashion to give
t x1 x2
0 1 1
0.05 3.752568 -1.84781
0.1 3.62878 -1.81472
0.15 3.457057 -1.72938
0.2 3.292457 -1.64705
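A sketch of the implicit Euler iteration that generates this table, using the same coefficient matrix inferred in part (a); at each step (I − hA)x_{i+1} = x_i is solved via the precomputed inverse:

```matlab
% Implicit Euler: (I - h*A)*x(i+1) = x(i)
A = [999 1999; -1000 -2000];  % coefficients inferred from part (a)
h = 0.05;
M = inv(eye(2) - h*A);        % = [1.886088 1.86648; -0.93371 -0.9141]
x = [1; 1];
for i = 1:4
    x = M*x;                  % advance one implicit Euler step
    fprintf('%6.2f %10.6f %10.6f\n', i*h, x(1), x(2))
end
```

In practice one would LU-factor (I − hA) rather than invert it, as the text notes; the inverse is used here only because it is easier to display.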
The results are plotted below, along with a solution with the explicit Euler using a step of
0.0005.
19.8 (a) The exact solution is
y = Ae^(5t) + t² + 0.4t + 0.08
If the initial condition y(0) = 0.08 is used, A = 0 and the solution reduces to
y = t² + 0.4t + 0.08
Note that even though this choice of initial condition removes the positive exponential
term, it still lurks in the background. Very tiny round-off errors in the numerical solutions
bring it to the fore. Hence, all of the following solutions eventually diverge from the
analytical solution.
(b) 4th order RK. The plot shows the numerical solution (bold line) along with the exact
solution (fine line).
(c)
function yp = dy(t,y)
yp = 5*(y-t^2);
(d)
>> [t,y] = ode23s(@dy,tspan,y0);
(e)
>> [t,y] = ode23tb(@dy,tspan,y0);
19.9 (a) As in Example 17.5, the humps function can be integrated with the quad function as in
>> format long
>> quad(@humps,0,1)
ans =
  29.85832612842764
(b) Using ode45 is based on recognizing that the evaluation of the definite integral
I = ∫[a to b] f(x) dx
is equivalent to solving the differential equation
dy/dx = f(x)
for y(b) given the initial condition y(a) = 0. Thus, we must solve the following initial-value
problem:
dy/dx = 1/((x − 0.3)² + 0.01) + 1/((x − 0.9)² + 0.04) − 6
where y(0) = 0. To do this with ode45, we must first set up an M-file to evaluate the right-
hand side of the differential equation,
function dy = humpsODE(x,y)
dy = 1./((x-0.3).^2 + 0.01) + 1./((x-0.9).^2+0.04) - 6;
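With the M-file in place, the initial-value problem can be integrated across the interval (the limits [0, 1] are assumed from the quad result in part (a)):

```matlab
>> [x,y] = ode45(@humpsODE, [0 1], 0);
>> y(end)                           % integral estimate from ode45

>> options = odeset('RelTol', 1e-8);
>> [x,y] = ode45(@humpsODE, [0 1], 0, options);
>> y(end)                           % improved estimate, tighter tolerance
```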
Thus, the integral estimate is within 0.01% of the estimate obtained with the quad function.
Note that a better estimate can be obtained by using the odeset function to set a smaller
relative tolerance as in
19.10 The nonlinear model can be expressed as the following set of ODEs,
dθ/dt = v
dv/dt = −(g/l) sin θ
where v = the angular velocity. A function can be developed to compute the right-hand-side
of this pair of ODEs for the case where g = 9.81 and l = 0.6 m,
function dy = dpnon(t, y)
dy = [y(2);-9.81/0.6*sin(y(1))];
The linear model can be expressed as the following set of ODEs,
dθ/dt = v
dv/dt = −(g/l) θ
function dy = dplin(t, y)
dy = [y(2);-9.81/0.6*y(1)];
Then, the solution and plot can be obtained for the case where θ(0) = π/8. Note that we
depict only the displacement (θ or y(1)) in the plot.
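A sketch of those commands (the time span is an assumption):

```matlab
>> tspan = [0 10];
>> y0 = [pi/8 0];                        % theta(0) = pi/8, v(0) = 0
>> [tn,yn] = ode45(@dpnon, tspan, y0);   % nonlinear model
>> [tl,yl] = ode45(@dplin, tspan, y0);   % linear model
>> plot(tn, yn(:,1), tl, yl(:,1), '--')  % displacement only
>> legend('nonlinear', 'linear')
```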
You should notice two aspects of this plot. First, because the displacement is small, the
linear solution provides a decent approximation of the more physically realistic nonlinear
case. Second, the two solutions diverge as the computation progresses.
For a larger initial displacement, the solution and plot can be obtained in the same fashion.
Because the linear approximation is only valid at small displacements, there are now clear
and significant discrepancies between the nonlinear and linear cases that are exacerbated as
the solution progresses.
The logistic growth model, dp/dt = 0.026(1 − p/12,000)p, can be coded as
function yp = dpdt(t, p)
yp = 0.026*(1-p/12000)*p;
The function ode45 can be used to integrate this equation and generate results
corresponding to the dates for the measured population data. A plot can also be generated
of the solution and the data.
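A sketch, assuming the measured dates and populations are already stored in vectors tdata and pdata (they are not reproduced here):

```matlab
% tdata, pdata = measurement dates and populations (assumed in workspace)
[t,p] = ode45(@dpdt, tdata, pdata(1));   % solution at the measured dates
plot(t, p, '-', tdata, pdata, 'o')       % model versus data
```

Passing tdata as the time span makes ode45 return the solution at exactly those dates, so p lines up element-by-element with pdata for the residual computation that follows.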
>> SSR = sum((p - pdata).^2)
SSR =
4.2365e+004
CHAPTER 20
20.1 The matrix inverse can be evaluated and the power method expressed as
Iteration 1:
Iteration 2:
εa = |(0.0875 − 0.1) / 0.0875| × 100% = 14.29%
Iteration 3:
εa = |(0.085714 − 0.0875) / 0.085714| × 100% = 2.08%
The iterations can be continued. After 10 iterations, the relative error falls to 0.00000884%
with the result
           ⎧0.70710678⎫
0.085355   ⎨    1     ⎬
           ⎩0.70710678⎭
20.2 (a) Expanding the determinant of [A] − λ[I] yields the characteristic polynomial
−λ³ + 10λ² + 101λ + 18 = 0
(b) The eigenvalues can be determined by finding the roots of the characteristic polynomial
determined in (a). This can be done in MATLAB,
>> a = [1.0000 -10.0000 -101.0000 -18.0000];
>> roots(a)
ans =
16.2741
-6.0926
-0.1815
(c) The power method for the highest eigenvalue can be implemented with MATLAB
commands,
First iteration:
>> x = A*x
x =
14
15
19
>> e = max(x)
e =
19
>> x = x/e
x =
0.7368
0.7895
1.0000
Second iteration:
>> x = A*x
x =
13.0526
12.2632
15.5263
>> e = max(x)
e =
15.5263
>> x = x/e
x =
0.8407
0.7898
1.0000
Third iteration:
>> x = A*x
x =
13.2610
13.0949
16.5661
>> e = max(x)
e =
16.5661
>> x = x/e
x =
0.8005
0.7905
1.0000
Fourth iteration:
>> x = A*x
x =
13.1819
12.7753
16.1668
>> e = max(x)
e =
16.1668
>> x = x/e
x =
0.8154
0.7902
1.0000
Thus, after four iterations, the result is converging on a highest eigenvalue of 16.2741 with
a corresponding eigenvector of [0.811 0.790 1].
(d) The power method for the lowest eigenvalue can be implemented with MATLAB
commands,
>> AI = inv(A)
AI =
-0.0556 1.6667 -1.2222
-0.0000 -5.0000 4.0000
0.1111 0.6667 -0.5556
First iteration:
>> x = AI*x
x =
0.3889
-1.0000
0.2222
>> x = x/x(i)
x =
-0.3889
1.0000
-0.2222
Second iteration:
>> x = AI*x
x =
1.9599
-5.8889
0.7469
>> x = x/x(i)
x =
-0.3328
1.0000
-0.1268
Third iteration:
>> x = AI*x
x =
1.8402
-5.5073
0.7002
>> x = x/x(i)
x =
-0.3341
1.0000
-0.1271
Thus, after three iterations, the estimate of the lowest eigenvalue is converging on the
correct value of 1/(−5.5085) = −0.1815 with an eigenvector of [−0.3341 1 -0.1271].
20.3 MATLAB can be used to solve for the eigenvalues with the polynomial method. First, the
matrix can be put into the proper form for an eigenvalue analysis by bringing all terms to
the left-hand-side of the equation.
⎡4 − 9λ     7        3   ⎤ ⎧x1⎫
⎢  7      8 − 4λ     2   ⎥ ⎨x2⎬ = 0
⎣  3        2      1 − 2λ⎦ ⎩x3⎭
MATLAB can then be used to determine the eigenvalues as the roots of the characteristic
polynomial,
>> e=roots(p)
e =
2.9954
-0.3386
0.2876
20.4 (a) MATLAB can be used to solve for the eigenvalues with the polynomial method. First,
the matrix can be put into the proper form by dividing each row by 0.36.
A =
5.5556 -2.7778 0 0
-2.7778 5.5556 -2.7778 0
0 -2.7778 5.5556 -2.7778
0 0 -2.7778 5.5556
Then, the poly function can be used to generate the characteristic polynomial,
>> p = poly(A)
p =
1.0000 -22.2222 162.0370 -428.6694 297.6871
>> e = roots(p)
e =
10.0501
7.2723
3.8388
1.0610
(b) The power method can be used to determine the highest eigenvalue:
First iteration:
>> x = A*x
x =
2.7778
0
0
2.7778
>> e = max(x)
e =
2.7778
>> x = x/e
x =
1
0
0
1
Second iteration:
>> x = A*x
x =
5.5556
-2.7778
-2.7778
5.5556
>> e = max(x)
e =
5.5556
>> x = x/e
x =
1.0000
-0.5000
-0.5000
1.0000
Third iteration:
>> x = A*x
x =
6.9444
-4.1667
-4.1667
6.9444
>> e = max(x)
e =
6.9444
>> x = x/e
x =
1.0000
-0.6000
-0.6000
1.0000
The process can be continued. After 9 iterations, the method does not converge on the
highest eigenvalue. Rather, it converges on the second highest eigenvalue of 7.2723 with a
corresponding eigenvector of [1 −0.6180 −0.6180 1].
(c) The power method can be used to determine the lowest eigenvalue by first determining
the matrix inverse:
>> AI = inv(A);
First iteration:
>> x = AI*x
x =
0.7200
1.0800
1.0800
0.7200
>> e = max(x)
e =
1.0800
>> x = x/e
x =
0.6667
1.0000
1.0000
0.6667
Second iteration:
>> x = AI*x
x =
0.6000
0.9600
0.9600
0.6000
>> e = max(x)
e =
0.9600
>> x = x/e
x =
0.6250
1.0000
1.0000
0.6250
Third iteration:
>> x = AI*x
x =
0.5850
0.9450
0.9450
0.5850
>> e = max(x)
e =
0.9450
>> x = x/e
x =
0.6190
1.0000
1.0000
0.6190
The process can be continued. After 9 iterations, the method converges on the lowest
eigenvalue of 1/0.9450 = 1.0610 with a corresponding eigenvector of [0.6180 1 1 0.6180].
20.5 The parameters can be substituted into force balance equations to give
(0.45 − ω²)X1 − 0.2X2 = 0
−0.24X1 + (0.42 − ω²)X2 − 0.18X3 = 0
−0.225X2 + (0.225 − ω²)X3 = 0
A MATLAB session can be conducted to evaluate the eigenvalues and eigenvectors as
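A sketch of that session, with the matrix assembled from the force-balance coefficients above:

```matlab
>> A = [0.45 -0.2 0; -0.24 0.42 -0.18; 0 -0.225 0.225];
>> [v,d] = eig(A)   % columns of v = eigenvectors, diag(d) = eigenvalues
```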
v =
-0.5879 -0.6344 0.2913
0.7307 -0.3506 0.5725
-0.3471 0.6890 0.7664
d =
0.6986 0 0
0 0.3395 0
0 0 0.0569
Therefore, the eigenvalues are 0.6986, 0.3395 and 0.0569. The corresponding eigenvectors
are (normalizing so that the amplitude for the third floor is one),
20.6 As was done in Section 20.2, assume that the solution is ij = Ij sin(ωt). Therefore, the second
derivative is
d²ij/dt² = −ω²Ij sin(ωt)
Substituting this result into the loop equations gives
−L1ω²I1 sin(ωt) + (1/C1)(I1 sin(ωt) − I2 sin(ωt)) = 0
−L2ω²I2 sin(ωt) + (1/0.001)(I2 sin(ωt) − I3 sin(ωt)) − (1/0.001)(I1 sin(ωt) − I2 sin(ωt)) = 0
−L3ω²I3 sin(ωt) + (1/0.001)I3 sin(ωt) − (1/0.001)(I2 sin(ωt) − I3 sin(ωt)) = 0
All the sin(ωt) terms can be cancelled. In addition, the L’s and C’s are constant. Therefore,
the system simplifies to
⎡1 − λ    −1      0  ⎤ ⎧I1⎫
⎢ −1    2 − λ    −1  ⎥ ⎨I2⎬ = 0
⎣  0     −1    2 − λ ⎦ ⎩I3⎭
where λ = LCω2. The following MATLAB session can then be used to evaluate the
eigenvalues and eigenvectors
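A sketch of that session:

```matlab
>> A = [1 -1 0; -1 2 -1; 0 -1 2];   % dimensionless matrix, lambda = L*C*w^2
>> [v,d] = eig(A)
```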
d =
    0.1981         0         0
         0    1.5550         0
         0         0    3.2470
The matrix v consists of the system's three eigenvectors (arranged as columns), and d is a
matrix with the corresponding eigenvalues on the diagonal. Thus, MATLAB computes that
the eigenvalues are λ = 0.1981, 1.5550, and 3.2470. These values in turn can be used to
compute the natural frequencies for the system
    ⎧0.4450/√(LC)⎫
ω = ⎨1.2470/√(LC)⎬
    ⎩1.8019/√(LC)⎭
20.7 The equations of motion can be written in matrix form as
⎡m1   0    0 ⎤ ⎧ẍ1⎫   ⎡ 2k  −k  −k⎤ ⎧x1⎫
⎢ 0   m2   0 ⎥ ⎨ẍ2⎬ + ⎢−k  2k  −k⎥ ⎨x2⎬ = 0
⎣ 0   0   m3 ⎦ ⎩ẍ3⎭   ⎣−k  −k  2k⎦ ⎩x3⎭
Assuming sinusoidal solutions xj = Xj sin(ωt), as in Section 20.2, gives
⎡2k − m1ω²     −k         −k    ⎤ ⎧X1⎫
⎢   −k      2k − m2ω²     −k    ⎥ ⎨X2⎬ = 0
⎣   −k         −k      2k − m3ω²⎦ ⎩X3⎭
Using MATLAB,
>> k = 1;
>> kmw2 = [2*k,-k,-k;-k,2*k,-k;-k,-k,2*k];
>> [v,d] = eig(kmw2)
v =
0.8034 0.1456 0.5774
-0.2757 -0.7686 0.5774
-0.5278 0.6230 0.5774
d =
3.0000 0 0
0 3.0000 0
0 0 0.0000
Therefore, the eigenvalues are 0, 3, and 3. Setting these eigenvalues equal to mω², the
three frequencies can be obtained.
mω3² = 0 ⇒ ω3 = 0 (Hz)   3rd mode
20.8 The pair of second-order differential equations can be reexpressed as a system of four first-
order ODE’s,
dx1/dt = x3
dx2/dt = x4
dx3/dt = −5x1 + 5(x2 − x1)
dx4/dt = −5(x2 − x1) − 5x2
function dx = dxdt(t, x)
dx = [x(3);x(4);-5*x(1)+5*(x(2)-x(1));-5*(x(2)-x(1))-5*x(2)];
(a) x1 = x2 = 1
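Mirroring the commands shown for part (b), the run for this case would be:

```matlab
>> tspan = [0,10];
>> y0 = [1,1,0,0];                  % x1(0) = x2(0) = 1, zero velocities
>> [t,y] = ode45('dxdt',tspan,y0);
>> plot(t,y(:,1),t,y(:,2),'--')
>> legend('x1','x2')
```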
Because we have set the initial conditions consistent with one of the eigenvectors, the two
masses oscillate in unison.
(b) x1 = 1, x2 = –0.6
>> tspan=[0,10];
>> y0=[1,-0.6,0,0];
>> [t,y]=ode45('dxdt',tspan,y0);
>> plot(t,y(:,1),t,y(:,2),'--')
>> legend('x1','x2')
Now, because the initial conditions do not correspond to one of the eigenvectors, the
motion involves the superposition of both modes.
20.9
es = 0.0001;
maxit = 100;
n = length(A);
for i=1:n
v(i)=1;
end
v = v';
e = 1;
iter = 0;
while (1)
eold = e;
x = A*v;
[e,i] = max(abs(x));
e = sign(x(i))*e;
v = x/e;
iter = iter + 1;
ea = abs((e - eold)/e) * 100;
if ea <= es | iter >= maxit, break, end
end
Application to solve Prob. 20.2,
20.10
es = 0.0001;
maxit = 100;
n = length(A);
for i=1:n
v(i)=1;
end
v = v';
e = 1;
Ai = inv(A);
iter = 0;
while (1)
eold = e;
x = Ai*v;
[e,i] = max(abs(x));
e = sign(x(i))*e;
v = x/e;
iter = iter + 1;
ea = abs((e - eold)/e) * 100;
if ea <= es | iter >= maxit, break, end
end
e = 1./e;