
TMA4130/35 Mathematics 4N/D, November 2020

Problem 1 [15 points]

There are four versions of this exercise with different constants c = −1, −2, 2, 3.

The function f is given as

    f(t) = \begin{cases} 1, & 0 \le t \le 1, \\ c, & t > 1. \end{cases}    (1)

a) Compute the Laplace transform of f .

Solution: By the definition of the step function u(t − a), we can write

    f(t) = 1 + (c − 1) u(t − 1),

and thus the Laplace transform formula for the step function gives

    F(s) = \frac{1}{s} + (c − 1) \frac{e^{-s}}{s}.

Inserting c = −1, −2, 2, 3 gives the final results for the respective versions.

b) Show that

    L\left( \int_0^t e^{-x} y(x)\,dx \right) = \frac{Y(s + 1)}{s},

where L denotes the Laplace transform and Y := L(y).

Solution: Notice that

    \int_0^t e^{-x} y(x)\,dx = e^{-t} \int_0^t e^{t - x} y(x)\,dx = e^{-t} (e^t ∗ y)(t),

where e^t ∗ y denotes the convolution of e^t and y. Using the s-shifting theorem, we get

    L\big( e^{-t} (e^t ∗ y) \big)(s) = L(e^t ∗ y)(s + 1).

By the Laplace convolution theorem, we have

    L(e^t ∗ y)(s) = L(e^t)(s) \cdot Y(s) = \frac{Y(s)}{s − 1},

which gives

    L(e^t ∗ y)(s + 1) = \frac{Y(s + 1)}{(s + 1) − 1} = \frac{Y(s + 1)}{s}.

The proof is complete.

An alternative approach is to note that the function

    z(t) := \int_0^t e^{-x} y(x)\,dx

solves the initial value problem

    z'(t) = e^{-t} y(t),   z(0) = 0.

By the s-shifting theorem, the Laplace transform of e^{-t} y(t) is Y(s + 1). Taking the Laplace
transform of the ODE and using z(0) = 0 therefore yields

    s L(z)(s) = Y(s + 1).

Dividing by s, we arrive at the desired result.

c) Use the formula in part b) in order to find the solution y(x) of

    \int_0^t e^{-x} y(x)\,dx = f(t).

Here f is the function defined in (1).

Solution:

Applying the Laplace transform to both sides and using a) and b), we get

    \frac{Y(s + 1)}{s} = \frac{1}{s} + (c − 1) \frac{e^{-s}}{s},

hence

    Y(s + 1) = 1 + (c − 1) e^{-s}.

By the change of variable s ↦ s − 1, we get

    Y(s) = 1 + (c − 1) e^{-(s - 1)} = 1 + (c − 1) e \cdot e^{-s},

which gives y(x) = δ(x) + (c − 1) e δ(x − 1). Inserting c = −1, −2, 2, 3 gives the final results for the respective versions.
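As a quick cross-check (not part of the exam solution), the transform in part a) can be verified symbolically, for instance with sympy. The snippet below is a minimal sketch assuming the version c = 2 and a sympy release whose laplace_transform handles the Heaviside function.

import sympy as sp

t, s = sp.symbols("t s", positive=True)
c = 2                                         # one of the four versions
f = 1 + (c - 1) * sp.Heaviside(t - 1)         # f written via the unit step function
F = sp.laplace_transform(f, t, s, noconds=True)
# The difference to the claimed transform 1/s + (c - 1)*exp(-s)/s should simplify to 0.
print(sp.simplify(F - (1 / s + (c - 1) * sp.exp(-s) / s)))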

Problem 2 [5 points]

There are four versions of this exercise with different constants α = −1, −2, 2, 3.

Find the complex Fourier series of the function

    f(x) = \begin{cases} e^{ix} + α, & 0 \le x \le π, \\ e^{ix} − 1, & −π \le x < 0. \end{cases}

Solution:

Notice that we can write f as e^{ix} + g, where

    g(x) = \begin{cases} α, & 0 \le x \le π, \\ −1, & −π \le x < 0. \end{cases}

Hence it suffices to compute the Fourier series of g and add e^{ix} to the final result.

For c_0 we get

    c_0 = \frac{1}{2π} \left( \int_{-π}^0 (−1)\,dx + \int_0^π α\,dx \right) = \frac{α − 1}{2}.

For the other c_n, we have

    c_n = \frac{1}{2π} \left( \int_{-π}^0 −e^{-inx}\,dx + α \int_0^π e^{-inx}\,dx \right).

Since \int_{-π}^{π} e^{-inx}\,dx = 0, we get \int_{-π}^0 −e^{-inx}\,dx = \int_0^π e^{-inx}\,dx, which gives

    c_n = \frac{α + 1}{2π} \int_0^π e^{-inx}\,dx = \frac{α + 1}{2π} \cdot \frac{1 − (−1)^n}{in}.

Notice that 1 − (−1)^n = 0 when n is even and 1 − (−1)^n = 2 when n is odd, thus

    f(x) = e^{ix} + \frac{α − 1}{2} + \sum_{n\ \mathrm{odd}} \frac{1 + α}{iπn} e^{inx}.

Inserting α = −1, −2, 2, 3 gives the final results for the respective versions.
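A small numerical sanity check (not required for the exam): the sketch below, assuming the version α = 2, compares a simple trapezoidal quadrature of c_n = \frac{1}{2π} \int_{-π}^{π} g(x) e^{-inx}\,dx with the closed form derived above.

import numpy as np

alpha = 2                                     # one of the four versions
x = np.linspace(-np.pi, np.pi, 200001)
g = np.where(x >= 0, float(alpha), -1.0)      # the piecewise function g

for n in range(-3, 4):
    cn_num = np.trapz(g * np.exp(-1j * n * x), x) / (2 * np.pi)
    if n == 0:
        cn_exact = (alpha - 1) / 2
    elif n % 2 != 0:                          # odd n
        cn_exact = (alpha + 1) / (1j * np.pi * n)
    else:                                     # even n, n != 0
        cn_exact = 0.0
    print(n, np.round(cn_num, 6), cn_exact)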



Problem 3 [10 points]

Use the convolution theorem for the Fourier transform in order to find the function f that
solves the equation

    \int_{-∞}^{+∞} e^{-(x - t)^2} f(t)\,dt = \sqrt{2π}\, x e^{-x^2/2}.

Solution: Take the Fourier transform of both sides. The Fourier convolution theorem gives

    \sqrt{2π}\, \hat{f}(w) \cdot \frac{1}{\sqrt{2}} e^{-w^2/4} = \sqrt{2π}\, \widehat{x e^{-x^2/2}}(w).

Now we use that

    x e^{-x^2/2} = −\big( e^{-x^2/2} \big)',

which, together with \widehat{g'}(w) = i w \hat{g}(w), implies that

    \widehat{x e^{-x^2/2}}(w) = −i w\, \widehat{e^{-x^2/2}}(w) = −i w\, e^{-w^2/2}.

Now the first identity reduces to

    \sqrt{2π}\, \hat{f}(w) \cdot \frac{1}{\sqrt{2}} e^{-w^2/4} = \sqrt{2π} \big( −i w e^{-w^2/2} \big),

which gives

    \hat{f}(w) = −i w \sqrt{2}\, e^{-w^2/4} = −2 i w\, \widehat{e^{-t^2}}(w) = \widehat{(−2)\big(e^{-t^2}\big)'}(w)

(the final identity uses again that \widehat{g'} = i w \hat{g}). Hence the Fourier transform of f
is equal to the Fourier transform of (−2)(e^{-t^2})'. Since the Fourier transform is invertible (use,
for example, the Fourier inversion formula), f must be equal to (−2)(e^{-t^2})', and we get

    f(t) = (−2)\big(e^{-t^2}\big)' = 4 t e^{-t^2}.
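As a numerical spot check (not part of the solution), one can insert f(t) = 4 t e^{-t^2} back into the convolution integral and compare with the right-hand side. The sketch below uses a plain trapezoidal quadrature on a truncated domain.

import numpy as np

t = np.linspace(-20.0, 20.0, 400001)          # the integrand decays fast, so [-20, 20] suffices
f = 4 * t * np.exp(-t**2)

for x in (-1.0, 0.5, 2.0):
    lhs = np.trapz(np.exp(-(x - t)**2) * f, t)            # convolution integral at x
    rhs = np.sqrt(2 * np.pi) * x * np.exp(-x**2 / 2)      # right-hand side of the equation
    print(x, lhs, rhs)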

Problem 4 TMA4135 Mathematics 4D: [5 points]

Let f, g be two smooth functions and let c > 0 be a constant. Show that the function

    u(x, t) := f(cx + t) + g(2cx − 2t) + \sin(cx) \cos(t)

satisfies the wave equation u_{xx} = c^2 u_{tt}.

Solution:

Both u_{xx} and c^2 u_{tt} equal

    c^2 f''(cx + t) + 4c^2 g''(2cx − 2t) − c^2 \sin(cx) \cos(t).
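A short symbolic check (not required by the exam) of this computation with sympy, keeping f and g generic:

import sympy as sp

x, t, c = sp.symbols("x t c", positive=True)
f = sp.Function("f")
g = sp.Function("g")
u = f(c * x + t) + g(2 * c * x - 2 * t) + sp.sin(c * x) * sp.cos(t)
# u_xx - c^2 u_tt should simplify to 0.
print(sp.simplify(sp.diff(u, x, 2) - c**2 * sp.diff(u, t, 2)))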

Problem 4 TMA4130 Mathematics 4N: [5 points]

Show that the Fourier transform of

    f(x) = \begin{cases} x^2, & |x| < 1, \\ 0, & |x| \ge 1, \end{cases}

is the function

    \hat{f}(w) = \begin{cases} \frac{1}{3} \sqrt{\frac{2}{π}}, & w = 0, \\ \sqrt{\frac{2}{π}} \left( \frac{\sin w}{w} + \frac{2 \cos w}{w^2} − \frac{2 \sin w}{w^3} \right), & w \ne 0. \end{cases}

Solution:

By definition,

    \hat{f}(0) = \frac{1}{\sqrt{2π}} \int_{-1}^1 x^2\,dx = \frac{1}{3} \sqrt{\frac{2}{π}},

and when w ≠ 0 we have

    \hat{f}(w) = \frac{1}{\sqrt{2π}} \int_{-1}^1 x^2 e^{-ixw}\,dx.

Integration by parts gives

    \int_{-1}^1 x^2 e^{-ixw}\,dx = \left[ x^2 \frac{e^{-ixw}}{-iw} \right]_{-1}^1 − \int_{-1}^1 2x \frac{e^{-ixw}}{-iw}\,dx = \frac{2 \sin w}{w} + \int_{-1}^1 2x \frac{e^{-ixw}}{iw}\,dx

and

    \int_{-1}^1 2x \frac{e^{-ixw}}{iw}\,dx = \left[ 2x \frac{e^{-ixw}}{w^2} \right]_{-1}^1 − \int_{-1}^1 \frac{2 e^{-ixw}}{w^2}\,dx = \frac{4 \cos w}{w^2} − \frac{4 \sin w}{w^3}.

Hence

    \hat{f}(w) = \sqrt{\frac{2}{π}} \left( \frac{\sin w}{w} + \frac{2 \cos w}{w^2} − \frac{2 \sin w}{w^3} \right).
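A numerical cross-check (not part of the solution): the sketch below compares the claimed formula for \hat{f}(w) with a direct trapezoidal quadrature of \frac{1}{\sqrt{2π}} \int_{-1}^{1} x^2 e^{-ixw}\,dx.

import numpy as np

x = np.linspace(-1.0, 1.0, 200001)

def fhat_formula(w):
    if w == 0:
        return np.sqrt(2 / np.pi) / 3
    return np.sqrt(2 / np.pi) * (np.sin(w) / w + 2 * np.cos(w) / w**2 - 2 * np.sin(w) / w**3)

for w in (0.0, 1.0, 3.5):
    fhat_num = np.trapz(x**2 * np.exp(-1j * w * x), x) / np.sqrt(2 * np.pi)
    print(w, np.round(fhat_num, 6), fhat_formula(w))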

Problem 5 [20 points]

Consider the heat equation

    u_t = c^2 u_{xx} + α    (2)

where c > 0 and α ∈ R are given constants.

a) Show that the function

    w(x, t) = \frac{α}{2c^2} x(π − x)

satisfies the equation w_t = c^2 w_{xx} + α.

b) Find the solution of equation (2) for x ∈ [0, π] and t > 0 with the boundary conditions

    u(0, t) = u(π, t) = 0,   t > 0,

and the initial condition

    u(x, 0) = \frac{α}{2c^2} x(π − x) + \begin{cases} 0, & 0 \le x < \frac{π}{2}, \\ x − π, & \frac{π}{2} < x \le π. \end{cases}

c) Find \lim_{t→∞} u(x, t).

Solution:

a) We have w_t = 0 and w_{xx} = −α/c^2, which implies that w_t − c^2 w_{xx} = α.

b) Consider the function v = u − w. Linearity (or superposition) implies that v satisfies

    v_t = c^2 v_{xx},   v(0, t) = v(π, t) = 0,   t > 0,

(note that w vanishes at x = 0 and x = π), with initial data

    v(x, 0) = u(x, 0) − w(x, 0) = \begin{cases} 0, & 0 \le x < \frac{π}{2}, \\ x − π, & \frac{π}{2} < x \le π. \end{cases}

Standard separation of variables (Kreyszig, pp. 558ff) gives that

    v(x, t) = \sum_{n=1}^{∞} B_n \sin(nx) e^{-(cn)^2 t},

where the B_n are given as the Fourier sine coefficients of the initial data, thus

    v(x, 0) = \sum_{n=1}^{∞} B_n \sin(nx) = \begin{cases} 0, & 0 \le x < \frac{π}{2}, \\ x − π, & \frac{π}{2} < x \le π. \end{cases}

Standard formulas for Fourier series yield

    B_n = \frac{2}{π} \int_0^π \sin(nx)\, v(x, 0)\,dx
        = \frac{2}{π} \int_{π/2}^π \sin(nx)(x − π)\,dx
        = \frac{2}{π} \left[ −\frac{1}{n} \cos(nx)(x − π) \Big|_{π/2}^π + \frac{1}{n} \int_{π/2}^π \cos(nx)\,dx \right]
        = −\frac{1}{n} \cos\!\left(\frac{nπ}{2}\right) − \frac{2}{πn^2} \sin\!\left(\frac{nπ}{2}\right).

Thus the answer reads

    u(x, t) = w(x, t) + v(x, t)
            = \frac{α}{2c^2} x(π − x) − \sum_{n=1}^{∞} \left( \frac{1}{n} \cos\!\left(\frac{nπ}{2}\right) + \frac{2}{πn^2} \sin\!\left(\frac{nπ}{2}\right) \right) \sin(nx)\, e^{-(cn)^2 t}.

c) We see that each term in the infinite sum contains the exponentially decaying factor e^{-(cn)^2 t}. Thus
these terms will all vanish in the limit when t → ∞. Hence

    u(x, t) → \frac{α}{2c^2} x(π − x)   as   t → ∞.
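A quick numerical check (not part of the solution) of the coefficients B_n: the sketch below compares a trapezoidal quadrature of \frac{2}{π} \int_{π/2}^{π} \sin(nx)(x − π)\,dx with the closed form above.

import numpy as np

x = np.linspace(np.pi / 2, np.pi, 200001)
for n in range(1, 7):
    Bn_num = 2 / np.pi * np.trapz(np.sin(n * x) * (x - np.pi), x)
    Bn_exact = -np.cos(n * np.pi / 2) / n - 2 * np.sin(n * np.pi / 2) / (np.pi * n**2)
    print(n, np.round(Bn_num, 8), np.round(Bn_exact, 8))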

Problem 6 [10 points]

There are four versions of this exercise with slightly different assumptions.

We are given a continuously differentiable function g : [0, 1] → R with the following properties:

Version I:
• g(0) = 0.2 and g(1) = 0.7.

• 0.1 ≤ g′(x) ≤ 0.9 for all 0 ≤ x ≤ 1.

Version II:
• g(0) = 0.7 and g(1) = 0.2.

• −0.9 ≤ g′(x) ≤ −0.1 for all 0 ≤ x ≤ 1.

Version III:
• g(0) = 0.4 and g(1) = 0.8.

• 0.2 ≤ g′(x) ≤ 0.8 for all 0 ≤ x ≤ 1.

Version IV:
• g(0) = 0.8 and g(1) = 0.4.

• −0.8 ≤ g′(x) ≤ −0.2 for all 0 ≤ x ≤ 1.

The remaining part of the exercise is the same for all versions.

We consider now the fixed point iteration

    x_{k+1} = g(x_k)

with x_0 = 0.

a) Show that the function g has a unique fixed point r in the interval [0, 1] and that the
fixed point iteration converges to r.

b) Provide an upper bound for the number of iterations that are required until |x_k − r| ≤ 10^{-6}.

Solution:

Part a), Version I: Since g′(x) ≥ 0.1, the function g is (strictly) increasing. Since moreover g(0) ≥ 0
and g(1) ≤ 1, it follows that 0 ≤ g(x) ≤ 1 for all x ∈ [0, 1]. That is, the function g maps the interval
[0, 1] to itself. Moreover, the derivative of g satisfies the condition |g′(x)| ≤ L := 0.9 < 1 for all
x ∈ [0, 1]. Thus all the conditions of the fixed point theorem are satisfied, and therefore g has a
unique fixed point r in the interval [0, 1] and the fixed point iteration converges to r.

Version II: Here the function is strictly decreasing, and g(0) ≤ 1, g(1) ≥ 0. Else the argumentation
is the same as for version I.

Version III: Here we have that |g′(x)| ≤ L := 0.8 < 1. Else the argumentation is the same as for
version I.

Version IV: Here the function is strictly decreasing, and g(0) ≤ 1, g(1) ≥ 0. Moreover, |g′(x)| ≤
L := 0.8 < 1. Else the argumentation is the same as for version I.

Part b) For the number of iterations, we have the a-priori error estimate

    |x_k − r| \le \frac{L^k}{1 − L} |g(x_0) − x_0|.

Thus we need that

    \frac{L^k}{1 − L} |g(x_0) − x_0| \le 10^{-6}

or, as x_0 = 0,

    L^k \le 10^{-6}\, \frac{1 − L}{|g(x_0)|}.

Taking logarithms on each side and dividing by log(L) (note that log(L) < 0, which explains why the
inequality is reversed!), we obtain the condition

    k \ge \frac{1}{\log(L)} \log\!\left( 10^{-6}\, \frac{1 − L}{|g(x_0)|} \right).

Version I: Here we have L = 0.9 and g(x_0) = 0.2. This results in the estimate

    k \ge \frac{\log(0.5 \cdot 10^{-6})}{\log 0.9} ≈ 137.7.

That is, we need at most 138 iterations.

Version II: Here we have L = 0.9 and g(x_0) = 0.7. This results in the estimate

    k \ge \frac{\log((1/7) \cdot 10^{-6})}{\log 0.9} ≈ 149.6.

That is, we need at most 150 iterations.

Version III: Here we have L = 0.8 and g(x_0) = 0.4. This results in the estimate

    k \ge \frac{\log(0.5 \cdot 10^{-6})}{\log 0.8} ≈ 65.01.

That is, we need at most 66 iterations.

Version IV: Here we have L = 0.8 and g(x_0) = 0.8. This results in the estimate

    k \ge \frac{\log(0.25 \cdot 10^{-6})}{\log 0.8} ≈ 68.1.

That is, we need at most 69 iterations.

Alternatives:

Replacing |g(x_0) − x_0| by 1 (which is the length of the interval) in the a-priori error estimate also
yields a valid solution, though the estimate is weaker. For versions I and II, this results in 153
iterations; for versions III and IV, in 70 iterations.

Finally, an alternative is the estimate

    |x_k − r| \le L^k |x_0 − r| \le L^k,

which yields the condition

    k \ge \frac{\log(10^{-6})}{\log(L)}.

For versions I and II, this yields at most 132 iterations; for versions III and IV, at most 62 iterations.
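For illustration only (not part of the solution), the iteration can be run on one concrete g satisfying the Version I assumptions; the choice g(x) = 0.2 + 0.5x below is a hypothetical example whose fixed point r = 0.4 is known exactly, so the actual iteration count can be compared with the worst-case bound of 138.

def g(x):
    # hypothetical example: g(0) = 0.2, g(1) = 0.7, g'(x) = 0.5 on [0, 1]
    return 0.2 + 0.5 * x

r = 0.4          # exact fixed point of this particular g
x, k = 0.0, 0
while abs(x - r) > 1e-6:
    x = g(x)
    k += 1
print(k, x)      # 19 iterations for this g, far below the worst-case bound of 138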

Problem 7 [10 points]

Consider the data points

    x_i    | −2 | −1 | 1 | 2
    f(x_i) | −5 |  0 | 1 | 4

a) Use Lagrange interpolation to find the polynomial of minimal degree interpolating these
points. Express the polynomial in the form

    p_n(x) = a_n x^n + · · · + a_1 x + a_0.

b) Determine the Newton form of the interpolating polynomial.

c) Verify that the solutions in (a) and (b) are the same.

d) Use your result to find an approximation to f (0).

Solution:

a) With x_0 = −2, x_1 = −1, x_2 = 1, x_3 = 2, we get the following cardinal functions:

    \ell_0(x) = \prod_{j \ne 0} \frac{x − x_j}{x_0 − x_j} = \frac{(x + 1)(x − 1)(x − 2)}{(−2 − (−1))(−2 − 1)(−2 − 2)} = \frac{x^3 − 2x^2 − x + 2}{−12}

    \ell_2(x) = \prod_{j \ne 2} \frac{x − x_j}{x_2 − x_j} = \frac{(x + 2)(x + 1)(x − 2)}{(1 + 2)(1 + 1)(1 − 2)} = \frac{x^3 + x^2 − 4x − 4}{−6}

    \ell_3(x) = \prod_{j \ne 3} \frac{x − x_j}{x_3 − x_j} = \frac{(x + 2)(x + 1)(x − 1)}{(2 + 2)(2 + 1)(2 − 1)} = \frac{x^3 + 2x^2 − x − 2}{12}

There is no need to compute \ell_1(x) because the corresponding function value is zero.

The interpolating polynomial in Lagrange form is

    p(x) = −5 \ell_0(x) + \ell_2(x) + 4 \ell_3(x)
         = \left( \frac{5}{12}x^3 − \frac{10}{12}x^2 − \frac{5}{12}x + \frac{10}{12} \right) + \frac{x^3 + x^2 − 4x − 4}{−6} + \left( \frac{4}{12}x^3 + \frac{8}{12}x^2 − \frac{4}{12}x − \frac{8}{12} \right)
         = \frac{7}{12} x^3 − \frac{1}{3} x^2 − \frac{1}{12} x + \frac{10}{12}.

b) Newton form. The divided difference table is

    x_i   f[·]   f[·,·]   f[·,·,·]   f[·,·,·,·]
    −2    −5
                   5
    −1     0              −3/2
                  1/2                  7/12
     1     1               5/6
                   3
     2     4

Polynomial:

    p(x) = −5 + (x − (−2)) \left( 5 + (x − (−1)) \left( −\frac{3}{2} + \frac{7}{12}(x − 1) \right) \right)

c) Simplify one of the forms, for example:

    p(x) = −5 + (x + 2) \left( 5 + (x + 1) \left( −\frac{3}{2} + \frac{7}{12}(x − 1) \right) \right)
         = −5 + (x + 2) \left( 5 + (x + 1) \left( \frac{7}{12} x − \frac{25}{12} \right) \right)
         = −5 + (x + 2) \left( 5 + \frac{7}{12} x^2 − \frac{18}{12} x − \frac{25}{12} \right)
         = −5 + (x + 2) \left( \frac{7}{12} x^2 − \frac{18}{12} x + \frac{35}{12} \right)
         = −5 + \frac{7}{12} x^3 − \frac{18}{12} x^2 + \frac{35}{12} x + \frac{14}{12} x^2 − \frac{36}{12} x + \frac{70}{12}
         = \frac{7}{12} x^3 − \frac{1}{3} x^2 − \frac{1}{12} x + \frac{10}{12},

which agrees with the result from a).

d) Use the expanded form to get f(0) ≈ p(0) = \frac{7}{12} \cdot 0^3 − \frac{1}{3} \cdot 0^2 − \frac{1}{12} \cdot 0 + \frac{10}{12} = \frac{10}{12} = \frac{5}{6}.
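As a cross-check (not part of the exam solution), numpy recovers the same coefficients, since the degree-3 fit through four distinct points is exactly the interpolating polynomial.

import numpy as np

xi = np.array([-2.0, -1.0, 1.0, 2.0])
fi = np.array([-5.0, 0.0, 1.0, 4.0])
coeffs = np.polyfit(xi, fi, 3)        # highest degree first
print(coeffs)                         # approx [0.5833, -0.3333, -0.0833, 0.8333] = [7/12, -1/3, -1/12, 10/12]
print(np.polyval(coeffs, 0.0))        # approx 0.8333 = 10/12, the estimate of f(0)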

Problem 8 [5 points]

Let

    f(x) = \begin{cases} 1/(x + 1)^2, & \text{if } x > 0, \\ x + 1, & \text{if } x \le 0. \end{cases}

Find an approximation to \int_{-1}^{1} f(x)\,dx using Simpson's rule, and compute the error.

Solution:

Approximate solution (Simpson's rule with the nodes −1, 0, 1 and h = 1):

    \int_{-1}^1 f(x)\,dx ≈ \frac{1}{3} \big( f(−1) + 4 f(0) + f(1) \big) = \frac{1}{3} \left( 0 + 4 \cdot 1 + \frac{1}{4} \right) = \frac{4}{3} + \frac{1}{12} = \frac{17}{12}.

Exact solution:

    \int_{-1}^1 f(x)\,dx = \int_{-1}^0 (x + 1)\,dx + \int_0^1 \frac{1}{(x + 1)^2}\,dx
                         = \left[ \frac{1}{2} x^2 + x \right]_{-1}^0 + \left[ −\frac{1}{x + 1} \right]_0^1
                         = \left( 0 − \left( −\frac{1}{2} \right) \right) + \left( −\frac{1}{2} − (−1) \right)
                         = \frac{1}{2} + \frac{1}{2} = 1.

Error: e = \frac{17}{12} − 1 = \frac{5}{12}.
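A short script (not part of the solution) that reproduces both numbers:

def f(x):
    return 1.0 / (x + 1.0)**2 if x > 0 else x + 1.0

h = 1.0                                             # node spacing for the nodes -1, 0, 1
simpson = h / 3 * (f(-1.0) + 4 * f(0.0) + f(1.0))   # Simpson's rule
exact = 0.5 + 0.5                                   # the two exact pieces computed above
print(simpson, exact, simpson - exact)              # 17/12, 1, 5/12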

Problem 9 [8 points]

There are four versions of this exercise, each with a different RK-method.

We are given the following python code, in which one step of a Runge–Kutta method is
implemented.

Version I:

def onestep(f, x, y, h):
    k1 = f(x, y)
    k2 = f(x+h/4, y+h*k1/4)
    k3 = f(x+h, y+h*(k1+k2)/2)
    y_next = y + h*(2*k2/3+k3/3)
    x_next = x + h
    return x_next, y_next

Version II:

def onestep(f, x, y, h):
    k1 = f(x, y)
    k2 = f(x+h/2, y+h*k1/2)
    k3 = f(x+h, y+h*(k1+k2)/2)
    y_next = y + h*(k1/3+k2/3+k3/3)
    x_next = x + h
    return x_next, y_next

Version III:

def onestep(f, x, y, h):
    k1 = f(x, y)
    k2 = f(x+2*h/3, y+2*h*k1/3)
    k3 = f(x+h, y+h*(k1+k2)/2)
    y_next = y + h*(5*k1/12+k2/4+k3/3)
    x_next = x + h
    return x_next, y_next

Version IV:

def onestep(f, x, y, h):
    k1 = f(x, y)
    k2 = f(x+3*h/4, y+3*h*k1/4)
    k3 = f(x+h, y+h*(k1+k2)/2)
    y_next = y + h*(4*k1/9+2*k2/9+k3/3)
    x_next = x + h
    return x_next, y_next

Write down the Butcher tableau of the method, and determine the method’s order.

Solution:

The Butcher tableaux are:

Version I:
  0   |  0    0    0
  1/4 |  1/4  0    0
  1   |  1/2  1/2  0
  ----+---------------
      |  0    2/3  1/3

Version II:
  0   |  0    0    0
  1/2 |  1/2  0    0
  1   |  1/2  1/2  0
  ----+---------------
      |  1/3  1/3  1/3

Version III:
  0   |  0    0    0
  2/3 |  2/3  0    0
  1   |  1/2  1/2  0
  ----+---------------
      |  5/12 1/4  1/3

Version IV:
  0   |  0    0    0
  3/4 |  3/4  0    0
  1   |  1/2  1/2  0
  ----+---------------
      |  4/9  2/9  1/3

The order conditions are:

Version I:
• p = 1: The condition \sum_i b_i = 1 yields 0 + 2/3 + 1/3 = 1, which is satisfied.

• p = 2: The condition \sum_i b_i c_i = 1/2 yields 0 + (2/3)(1/4) + (1/3)(1) = 1/6 + 1/3 = 1/2, which is satisfied.

• p = 3: The condition \sum_i b_i c_i^2 = 1/3 yields 0 + (2/3)(1/16) + (1/3)(1) = 3/8 ≠ 1/3, which is violated.

Version II:
• p = 1: The condition \sum_i b_i = 1 yields 1/3 + 1/3 + 1/3 = 1, which is satisfied.

• p = 2: The condition \sum_i b_i c_i = 1/2 yields 0 + (1/3)(1/2) + (1/3)(1) = 1/6 + 1/3 = 1/2, which is satisfied.

• p = 3: The condition \sum_i b_i c_i^2 = 1/3 yields 0 + (1/3)(1/4) + (1/3)(1) = 5/12 ≠ 1/3, which is violated.

Version III:
• p = 1: The condition \sum_i b_i = 1 yields 5/12 + 1/4 + 1/3 = 1, which is satisfied.

• p = 2: The condition \sum_i b_i c_i = 1/2 yields 0 + (1/4)(2/3) + (1/3)(1) = 1/6 + 1/3 = 1/2, which is satisfied.

• p = 3: The condition \sum_i b_i c_i^2 = 1/3 yields 0 + (1/4)(4/9) + (1/3)(1) = 4/9 ≠ 1/3, which is violated.

Version IV:
• p = 1: The condition \sum_i b_i = 1 yields 4/9 + 2/9 + 1/3 = 1, which is satisfied.

• p = 2: The condition \sum_i b_i c_i = 1/2 yields 0 + (2/9)(3/4) + (1/3)(1) = 1/6 + 1/3 = 1/2, which is satisfied.

• p = 3: The condition \sum_i b_i c_i^2 = 1/3 yields 0 + (2/9)(9/16) + (1/3)(1) = 11/24 ≠ 1/3, which is violated.

Thus all the methods are of order 2.
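The order can also be checked empirically (not asked for in the exam): the sketch below applies the Version I step to the test problem y' = y, y(0) = 1 on [0, 1]; halving the step size should reduce the global error by roughly a factor 2^2 = 4, consistent with order 2.

import math

def onestep(f, x, y, h):                 # Version I from above
    k1 = f(x, y)
    k2 = f(x + h/4, y + h*k1/4)
    k3 = f(x + h, y + h*(k1 + k2)/2)
    return x + h, y + h*(2*k2/3 + k3/3)

def global_error(N):
    x, y = 0.0, 1.0
    h = 1.0 / N
    for _ in range(N):
        x, y = onestep(lambda x, y: y, x, y, h)
    return abs(y - math.e)               # exact solution of y' = y, y(0) = 1 at x = 1 is e

for N in (20, 40, 80):
    print(N, global_error(N), global_error(N) / global_error(2 * N))   # ratios close to 4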



Problem 10 [12 points]

We consider the time-dependent PDE

    u_t = u_{xx} + x u_x

with initial conditions

    u(x, 0) = x^2   for 0 < x < 1

and boundary conditions

    u(0, t) = t   and   u(1, t) = 1   for t > 0.

a) Perform a semi-discretisation of the PDE using central differences for the approximations
of the x-derivatives. Use equidistant grid points xi = i∆x with a grid size ∆x = 1/M .

b) We now want to use the trapezoidal rule for ODEs in order to compute a numerical
solution of the system obtained in part a). Set up the linear system that has to be
solved in each step for an arbitrary time step ∆t > 0.
Set up specifically the system for M = 2 and ∆t = 1/2, and compute a numerical
approximation of u(1/2, 1).

Solution:

a) We start with choosing M ∈ N and setting ∆x = 1/M and x_i = i∆x for i = 0, . . . , M. Using
central differences for the approximation of the derivatives on the right hand side, we then
obtain at the interior grid points the equations

    \frac{∂u}{∂t}(x_i, t) = \frac{u(x_i − ∆x, t) − 2u(x_i, t) + u(x_i + ∆x, t)}{∆x^2} + x_i \frac{u(x_i + ∆x, t) − u(x_i − ∆x, t)}{2∆x} + O(∆x^2).

Approximating U_i(t) ≈ u(x_i, t) and ignoring the error term yields

    U_i'(t) = \frac{U_{i−1}(t) − 2U_i(t) + U_{i+1}(t)}{∆x^2} + x_i \frac{U_{i+1}(t) − U_{i−1}(t)}{2∆x}

for i = 1, . . . , M − 1. In addition, we have the initial condition

    U_i(0) = x_i^2   for i = 1, . . . , M − 1,

and the boundary values

    U_0(t) = t   and   U_M(t) = 1.

b) We now choose a time step ∆t and approximate U_i^n ≈ u(x_i, t_n) with t_n = n∆t. Then the
trapezoidal rule (or the Crank–Nicolson method) reads as

    U_i^{n+1} = U_i^n + \frac{∆t}{2} \left( \frac{U_{i−1}^n − 2U_i^n + U_{i+1}^n}{∆x^2} + x_i \frac{U_{i+1}^n − U_{i−1}^n}{2∆x} + \frac{U_{i−1}^{n+1} − 2U_i^{n+1} + U_{i+1}^{n+1}}{∆x^2} + x_i \frac{U_{i+1}^{n+1} − U_{i−1}^{n+1}}{2∆x} \right)

for i = 1, . . . , M − 1 and n ≥ 0. In addition, we have

    U_0^n = t_n = n∆t   and   U_M^n = 1,

and

    U_i^0 = x_i^2   for i = 1, . . . , M − 1.

For the specific case of M = 2 and ∆x = 1/2, the only unknown is U_1^{n+1}. If we insert the
boundary values U_0^n = t_n and U_2^n = 1, as well as x_1 = 1/2, we end up with the equation

    U_1^{n+1} = U_1^n + \frac{∆t}{2} \left( 4(t_n − 2U_1^n + 1) + \frac{1}{2}(1 − t_n) + 4(t_{n+1} − 2U_1^{n+1} + 1) + \frac{1}{2}(1 − t_{n+1}) \right).

This can be solved explicitly for U_1^{n+1} and we obtain

    U_1^{n+1} = \frac{1}{1 + 4∆t} \left( (1 − 4∆t) U_1^n + \frac{9∆t}{2} + \frac{7∆t}{4}(t_n + t_{n+1}) \right).

For the specific case ∆t = 1/2 and t_n = n/2, this results in

    U_1^{n+1} = \frac{1}{3} \left( −U_1^n + \frac{9}{4} + \frac{7}{8}\left( n + \frac{1}{2} \right) \right) = \frac{1}{3} \left( −U_1^n + \frac{43}{16} + \frac{7n}{8} \right).

With the initial value U_1^0 = 1/4, we thus obtain

    U_1^1 = \frac{1}{3} \left( −\frac{1}{4} + \frac{43}{16} \right) = \frac{13}{16}

and

    U_1^2 = \frac{1}{3} \left( −\frac{13}{16} + \frac{43}{16} + \frac{7}{8} \right) = \frac{1}{3} \cdot \frac{44}{16} = \frac{11}{12},

which is the numerical approximation of u(1/2, 1).
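A few lines of Python (not part of the solution) reproduce the recursion for the specific case M = 2, ∆t = 1/2:

dt = 0.5
U1 = 0.25                           # U_1^0 = x_1^2 = (1/2)^2
for n in range(2):                  # two steps of size 1/2 reach t = 1
    tn, tn1 = n * dt, (n + 1) * dt
    U1 = ((1 - 4 * dt) * U1 + 4.5 * dt + 1.75 * dt * (tn + tn1)) / (1 + 4 * dt)
    print(n + 1, U1)                # prints 0.8125 = 13/16 and 0.91666... = 11/12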
