

MODEL 5
Non-Newtonian Flow in Ducts

Mathematical Aspect: Numerical solution of boundary-value problems

Flow in ducts
The problem to be considered in this model is the developed flow of a fluid in a duct with a
rectangular cross section, 2a × 2b (Figure 1). The fluid moves in the z direction. The flow is
driven by a pressure drop along the z direction. In the formulation of the problem, the following
assumptions and conditions will be considered:

Figure 1. Duct with a rectangular cross section, 2a × 2b.

(1) The flow is steady, incompressible and isothermal.


(2) The fluid only moves in the z direction, so that the velocity vector has only one nonzero
component (vx = vy = 0, vz ≠ 0). This assumes that the flow is fully developed; i.e. we are analyzing a
portion of duct that is far from entrances and exits. Under these conditions, application of the
continuity equation (conservation of mass) for incompressible flows is

∇·v = 0  ⇒  ∂vx/∂x + ∂vy/∂y + ∂vz/∂z = 0  ⇒  ∂vz/∂z = 0          (1)

which indicates that vz=vz(x,y).


(3) We will consider a type of fluid behavior that can be described by a viscosity (µ) that
depends on the local shear rate. This behavior is modeled by a constitutive equation for the stress
tensor that corresponds to the Generalized Newtonian Fluid, which is different from the more
common Newtonian fluid for which the viscosity is a constant.
Conservation of linear momentum leads to the equations of motion. The simplified form of
the equation of motion in the z direction for this flow is
0 = −∂P/∂z + ρgz + ∂/∂x(µ ∂vz/∂x) + ∂/∂y(µ ∂vz/∂y)          (2)

where P is pressure and gz is the z-component of the gravitational acceleration vector. As is
customary in the treatment of internal flows, the pressure drop and gravitational terms will be
lumped together by using the definition of the modified pressure, 𝒫, so that

−∂𝒫/∂z = −∂P/∂z + ρgz          (3)

Momentum balances in the x and y directions can be used to show that the modified pressure
only varies in the direction of motion: 𝒫 = 𝒫(z). Therefore, equation (2) can be rewritten as

d𝒫/dz = ∂/∂x(µ ∂vz/∂x) + ∂/∂y(µ ∂vz/∂y)          (4)

The left-hand side of this equation is only a function of z, while the right-hand side only depends
on x and y. This means that both sides must be equal to a constant, which in turn implies that the
modified pressure varies linearly with z, so that the pressure gradient can be expressed as

d𝒫/dz = −Δ𝒫/L          (5)

where Δ𝒫 is the modified pressure drop over a length of duct L. The modified pressure drop per
unit length is a constant and is physically the driving force for the motion of the fluid. Here, we
will take it to be a known parameter. The velocity field is then governed by the PDE

∂/∂x(µ ∂vz/∂x) + ∂/∂y(µ ∂vz/∂y) = −Δ𝒫/L          (6)

For a generalized Newtonian fluid, the viscosity is an exclusive function of the shear rate,
µ = µ(γ̇), where

γ̇ = √[(∂vz/∂x)² + (∂vz/∂y)²]          (7)

The shear rate γ̇ quantifies the average rate at which a fluid element deforms.
In principle, boundary conditions for this problem are no-slip conditions at the walls of the
duct:

vz = 0,  x = ±a          (8)

vz = 0,  y = ±b          (9)

However, an analysis of this problem reveals that the velocity field is symmetric about both x
and y axes. For instance, to prove symmetry about the x axis, consider the following change of
variables:

x'=−x, y'=y (10)

When this change of variables is introduced into equations (6), (8) and (9), we find that vz(x',y')
satisfies exactly the same problem as vz(x,y) (note that the shear rate and, consequently, the
viscosity, are not affected by this transformation). Therefore, we conclude:

vz(x',y')=vz(x,y) ⇒ vz(x,y)=vz(−x,y) (11)

Taking the partial derivative of this equation with respect to x yields

∂vz/∂x (x,y) = −∂vz/∂x (−x,y)          (12)

Evaluating this equation at x=0 yields

∂vz/∂x (0,y) = 0          (13)

Similarly, we can prove that vz(x,y)=vz(x,−y), which leads to

∂vz/∂y (x,0) = 0          (14)

Because of these symmetry considerations, the PDE (6) can be solved only in a quarter of the
rectangular domain, imposing the boundary conditions shown in Figure 2.

Figure 2. Solution domain and boundary conditions for the PDE (6): by symmetry, ∂vz/∂x = 0 on
x = 0 and ∂vz/∂y = 0 on y = 0; no-slip gives vz = 0 on the walls x = a and y = b.

Next, we will formulate the problem in dimensionless form. Let

ξ = x/a,  η = y/b,  σ = µ/µ0          (15)

where µ0 is a reference value of the viscosity (e.g. the viscosity at a specific shear rate). If the
fluid is Newtonian, then σ=1. If we define the dimensionless velocity by

u = vz µ0 L/(a² Δ𝒫)          (16)

then the PDE (6) becomes

∂/∂ξ(σ ∂u/∂ξ) + α² ∂/∂η(σ ∂u/∂η) = −1,  0<ξ<1, 0<η<1          (17)

where

α = a/b          (18)

and the boundary conditions are

∂u/∂η = 0,  η=0          (19)

u=0, η=1 (20)

∂u/∂ξ = 0,  ξ=0          (21)

u=0, ξ=1 (22)

Let us consider first the case of a Newtonian fluid, for which equation (17) simplifies to

∂²u/∂ξ² + α² ∂²u/∂η² = −1          (23)

This PDE, along with boundary conditions (19) to (22), can be solved by separation of variables.
The solution is
u(ξ,η) = (16/π³) Σn=0→∞ [(−1)ⁿ/(2n+1)³] {1 − cosh[(2n+1)πη/(2α)]/cosh[(2n+1)π/(2α)]} cos[(2n+1)πξ/2]          (24)

For the non-Newtonian case, it is evident from the formulation that the dimensionless
viscosity, σ, is a nonlinear function of derivatives of u, since it depends on shear rate (equation
7). This would make the PDE (17) nonlinear, and its solution would require the use of a
numerical method. Later in this chapter we will consider the more general non-Newtonian case
but first, for simplicity, we will establish a finite difference method to solve equation (23), which
yields results that can be compared to the analytical solution (24).
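For the comparisons that follow it is convenient to have the series (24) in computable form. The
short Python sketch below is our own illustration (the function name and the truncation parameter
nterms are not part of the text); the terms decay like 1/(2n+1)³, so a modest number of terms
suffices:

import numpy as np

def u_exact(xi, eta, alpha=1.0, nterms=50):
    # truncated series (24)
    u = 0.0
    for n in range(nterms):
        q = (2 * n + 1) * np.pi / 2.0            # (2n+1)*pi/2
        term = (-1) ** n / (2 * n + 1) ** 3
        term *= 1.0 - np.cosh(q * eta / alpha) / np.cosh(q / alpha)
        term *= np.cos(q * xi)
        u += term
    return 16.0 / np.pi ** 3 * u

# dimensionless velocity at the center of a square duct (alpha = 1)
print(u_exact(0.0, 0.0))   # approx. 0.2947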

Finite-difference discretization
We start by defining the discretization grid. We will use equidistant intervals of size h in ξ
and size k in η. To define N interior nodes in ξ and M interior nodes in η, since both independent
variables span the interval (0,1), we have:

h = 1/(N+1)          (25)

k = 1/(M+1)          (26)

so that nodal coordinates are given by

ξi=ih, i=0,1,…N+1 (27)

ηj=jk, j=0,1,…M+1 (28)

An example of a discretization grid is shown in Figure 3.


Next, we evaluate the PDE (23) at interior nodes,

∂²u/∂ξ² (ξi,ηj) + α² ∂²u/∂η² (ξi,ηj) = −1,  i=1,…N, j=1,…M          (29)

To approximate the derivatives, we use second-order centered differences (equation 3-31):

∂²u/∂ξ² (ξi,ηj) ≈ (ûi+1,j − 2ûi,j + ûi−1,j)/h²          (30)

∂²u/∂η² (ξi,ηj) ≈ (ûi,j+1 − 2ûi,j + ûi,j−1)/k²          (31)

Figure 3. Example of a discretization grid (N=M=3). There are N×M=9 interior nodes and
2N+2M+4=16 boundary nodes.

Substituting equations (30) and (31) into equation (29) and solving for ûi,j yields

ûi,j = h²/[2(1+λ²)] + [ûi+1,j + ûi−1,j + λ²(ûi,j+1 + ûi,j−1)]/[2(1+λ²)]          (32)

where

λ = αh/k          (33)

The truncation error for this approximation is O(h²+k²). It is worthwhile to note that equation
(32) relates the value of ûi,j to the values of û at the four adjacent nodes that surround node (i,j)
(Figure 4). An interesting particular case of equation (32) occurs when the cross section of the
duct is square (a=b). In this case α=1 and, if we set h=k so that λ=1, equation (32) becomes

ûi,j = h²/4 + (ûi+1,j + ûi−1,j + ûi,j+1 + ûi,j−1)/4,  i=1,…N, j=1,…M          (34)

The second term on the right-hand side of this equation is the arithmetic average of the
dependent variable at the four adjacent nodes. If we were solving Laplace's equation (the
homogeneous version of equation (23)) with α=1, then the nodal value at (i,j) would be equal to
the arithmetic average of the dependent variable at the four adjacent nodes.

Figure 4. Nodes involved in the discretized form of the PDE evaluated at node (i,j). This type of
graphical representation is called "computational cell" or "computational molecule".

When equation (32) is evaluated at all interior nodes, a linear system of N×M equations is
generated. The N×M nodal values of ûi,j are the unknowns of this system after we have
incorporated the boundary conditions into the formulation, as follows (see Figure 3):

Equation (19)  ⇒  ûi,0 = ûi,1,  i=0…N          (35)

Equation (22)  ⇒  ûN+1,j = 0,  j=0…M          (36)

Equation (21)  ⇒  û0,j = û1,j,  j=1…M          (37)

Equation (20)  ⇒  ûi,M+1 = 0,  i=0…N+1          (38)

These equations provide appropriate relations for the 2N+2M+4 boundary values. Note that the first
derivatives in equations (19) and (21) have been discretized by a first-order forward-difference
formula.
To solve the linear system of equations represented by equation (32), a direct solution
procedure such as Gaussian elimination can be used. This requires forming a coefficient matrix.
Let p=MN be the number of unknowns (and equations). The coefficient matrix will have
p²=(NM)² coefficients. However, most of the coefficients will be zero, since each equation has, at
most, only 5 nonzero coefficients. The percentage of coefficients that are nonzero in a matrix is
referred to as the matrix density, d. In this case, we have

d(%) = [5NM/(NM)²] × 100 = 500/(NM)          (39)

For example, for N=M=100, d=0.05%. If the matrix were defined as a two-dimensional array in
a computer program, 99.95% of the memory allocated to store the matrix would be wasted, since
99.95% of the coefficients would be zero. For this reason, standard elimination procedures are
only rarely used for this type of system. There are other elimination methods that avoid defining
the full matrix, but they require sophisticated bookkeeping algorithms. An alternative to solving
the system directly is to use iterative methods. In what follows, we explore the most common
iterative methods used in this type of problem.

The Jacobi method


A common characteristic of iterative methods is the fact that an initial estimate must be
provided for all the unknowns. In this case, we need initial estimates for the nodal values of the
dimensionless velocity at the MN interior nodes. Let these estimates be û(0)i,j. The Jacobi
iterative method consists of the direct application of equation (32) iteratively, using for the terms
on the right-hand side the values of the dependent variable calculated in the previous iteration (or
the initial estimates for the first iteration). Let û(n)i,j be the nodal values calculated at iteration
number n. The basic equation for the Jacobi method is (see equation 32)

h2 1 û ( n −1) + û ( n −1) + λ2 (û ( n −1) + û ( n −1) )


û ( n ) = + (40)
i, j 2(1 + λ2 ) 2(1 + λ2 )  i +1, j i −1, j i, j +1 i, j−1 

This equation is applied successively for n=1,2… until all nodal values in two consecutive
iterations differ by less than a specified tolerance. Let t be this tolerance. We will continue the
iterative process until

|û(n)i,j − û(n−1)i,j| ≤ t,  for all i and j          (41)

Application of the Jacobi method does not require forming the matrix of coefficients: the
memory requirements are only the storage of the NM nodal values of the current (n) and
previous (n−1) iterations. However, a disadvantage of the Jacobi method is that the effect of the
boundary conditions propagates into the solution domain at a rate of one line of nodes per
iteration, because each nodal value is only affected by nodal values in adjacent rows and
columns (Figure 4). This means that at least max(M,N) iterations must be performed before the
boundary conditions have the opportunity to affect the whole domain, which is a requirement for
a converged solution. From this standpoint, it is convenient to explore schemes that accelerate
the propagation of boundary conditions. One such scheme is Gauss-Seidel iteration.
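Before turning to that scheme, it is worth seeing the Jacobi method in code. The short Python
sketch below is our own illustration (the array layout, function and variable names are not part of
the formulation): it applies equation (40) for the square duct (α=1, h=k, so λ=1), imposes the
boundary relations (35) to (38) after each sweep, and stops with criterion (41).

import numpy as np

def jacobi(N=20, M=20, tol=1e-6, max_iter=100000):
    h = 1.0 / (N + 1)
    u = np.zeros((N + 2, M + 2))   # nodes i=0..N+1, j=0..M+1; walls (36), (38) stay at zero
    for n in range(1, max_iter + 1):
        u_old = u.copy()
        # equation (40): every right-hand-side value taken from the previous iteration
        u[1:N+1, 1:M+1] = h**2 / 4.0 + 0.25 * (u_old[2:N+2, 1:M+1] + u_old[0:N, 1:M+1]
                                               + u_old[1:N+1, 2:M+2] + u_old[1:N+1, 0:M])
        u[0, :] = u[1, :]          # symmetry condition (37)
        u[:, 0] = u[:, 1]          # symmetry condition (35)
        if np.max(np.abs(u - u_old)) <= tol:   # stopping criterion (41)
            return u, n
    return u, max_iter

u, iters = jacobi()
print(iters, u[0, 0])   # iteration count and velocity at the duct center, cf. Table 1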

The Gauss-Seidel method


One way to improve on the previous method is to recognize that, during the calculations
performed in a particular iteration, the values of û that have already been calculated in that
iteration should be a better approximation to the solution than the values from the previous
iteration. That is, some of the values on the right-hand side of equation (40) have already been
calculated at iteration n. For instance, consider the calculation sequence illustrated in Figure 5:
the calculation is performed by starting at i=1, j=1 and increasing j from 1 to M, and then
moving on to i=2 (second column of nodes), and so on. It can be seen that, when û(n)i,j is to be
calculated, û(n)i−1,j and û(n)i,j−1 have already been calculated, and there is no need to use the
values from iteration n−1 in equation (40). If these new values are used, equation (40) is
modified to

h2 1 û ( n −1) + û ( n ) + λ2 (û ( n −1) + û ( n ) )


û ( n ) = + (42)
i, j 2(1 + λ2 ) 2(1 + λ2 )  i +1, j i −1, j i, j +1 i, j−1 

This scheme is termed Gauss-Seidel iteration. Note that the calculation procedure in this case
may be affected by the way that the grid is swept in a given iteration.

Figure 5. A possible calculation sequence during the nth iteration: when a specific nodal
value is to be calculated, values of nodes to the left and bottom of that node already have been
calculated for iteration n.

The use of updated values of the dependent variable in a given iteration makes the Gauss-
Seidel scheme more efficient (i.e. with a faster rate of convergence) than the Jacobi method in
most cases. In addition, it reduces memory requirements, since values from the previous
iteration do not have to be stored. In a computer program, it would be possible to use equation
(34) directly, since the values on the right-hand side will then be the most current nodal values of
the dependent variable.
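That observation translates directly into code: the Gauss-Seidel sweep of equation (42) updates a
single array in place. The sketch below follows the same conventions as the Jacobi listing above
and, like it, is an illustration rather than a prescribed implementation; the copy of the array is
kept only for the convergence test.

import numpy as np

def gauss_seidel(N=20, M=20, tol=1e-6, max_iter=100000):
    h = 1.0 / (N + 1)
    u = np.zeros((N + 2, M + 2))
    for n in range(1, max_iter + 1):
        u_old = u.copy()                 # needed only for criterion (41)
        for i in range(1, N + 1):        # sweep order of Figure 5
            for j in range(1, M + 1):
                # equation (42): u[i-1,j] and u[i,j-1] already hold current values
                u[i, j] = h**2 / 4.0 + 0.25 * (u[i+1, j] + u[i-1, j]
                                               + u[i, j+1] + u[i, j-1])
        u[0, :] = u[1, :]
        u[:, 0] = u[:, 1]
        if np.max(np.abs(u - u_old)) <= tol:
            return u, n
    return u, max_iter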
It is clear that the Gauss-Seidel method following the calculation sequence in Figure 5 will
propagate the effect of the boundary condition at the line of nodes i=1 throughout the whole
domain in a single iteration. However, the boundary conditions at the lines of nodes j=M+1 and
i=N+1 will not propagate throughout the whole domain. A more efficient method in this respect
is the line-by-line method.

The line-by-line method


A different way to look at the potential efficiency of an iterative method for the solution of the
system of equations (34) is to consider how many of the nodal values on the right-hand side are
evaluated from the previous iteration. The ideal case would be if all values were from the
current iteration, but that would imply solving the full system of NM equations with NM
unknowns (iteration would not be necessary in this case). In the Jacobi method, all four nodal
values on the right-hand side of equation (34) are evaluated at the previous iteration (equation
40), while in the Gauss-Seidel method, only two nodal values are evaluated at the previous
iteration (equation 42). What if it were possible to evaluate only one nodal value at the previous
iteration? For example, say that we would like a scheme that looks as follows (compare with
equation 42):

h2 1 û ( n ) + û ( n ) + λ2 (û ( n −1) + û ( n ) )


û ( n ) = + (43)
i, j 2(1 + λ ) 2(1 + λ )  i +1, j
2 2 i −1, j i, j +1 i, j−1 

where we have changed the value of ûi+1,j to be evaluated at the current iteration. It would not
be possible, of course, to apply this equation directly as is, because that value is not known when
the equation is applied. However, for a given j, equation (43) can be interpreted as a linear
system of equations whose unknowns are û(n)i,j, for i=1,2…N. We can rewrite equation (43) as
follows

−[1/(2(1+λ²))] û(n)i+1,j + û(n)i,j − [1/(2(1+λ²))] û(n)i−1,j = h²/[2(1+λ²)] + [λ²/(2(1+λ²))](û(n−1)i,j+1 + û(n)i,j−1)          (44)

For a given j, and assuming that the calculation procedure starts at j=1 and proceeds by
increasing j sequentially, writing equation (44) for i=1,2…N gives a tri-diagonal system whose
solution (by the Thomas algorithm, for example) yields the values of the dependent variable for
the horizontal line of nodes j. This procedure is more efficient than the Gauss-Seidel method
because it propagates the effect of the boundary conditions at i=0 and i=N+1 throughout the
whole domain in a single iteration.
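The following sketch of the line-by-line method includes a small Thomas-algorithm routine so
that the listing is self-contained. Folding the symmetry condition (37), û0,j = û1,j, into the
diagonal of the first equation of each line is our own implementation choice, not something
prescribed by the text.

import numpy as np

def thomas(a, b, c, d):
    # solves a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i]  (a[0] and c[-1] unused)
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i-1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i-1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i+1]
    return x

def line_by_line(N=20, M=20, tol=1e-6, max_iter=100000):
    h = 1.0 / (N + 1)                   # square duct: alpha=1, h=k, so lambda=1
    u = np.zeros((N + 2, M + 2))
    lo = np.full(N, -0.25)
    up = np.full(N, -0.25)
    for n in range(1, max_iter + 1):
        u_old = u.copy()
        for j in range(1, M + 1):       # one tri-diagonal solve (44) per line of nodes
            b = np.ones(N)
            b[0] -= 0.25                # symmetry at xi=0 folded into the diagonal
            # line above from iteration n-1, line below already current
            d = h**2 / 4.0 + 0.25 * (u_old[1:N+1, j+1] + u[1:N+1, j-1])
            u[1:N+1, j] = thomas(lo, b, up, d)
        u[0, :] = u[1, :]
        u[:, 0] = u[:, 1]
        if np.max(np.abs(u - u_old)) <= tol:
            return u, n
    return u, max_iter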
Even though the Gauss-Seidel and line-by-line methods seem more efficient than the Jacobi
method and usually lead to faster rates of convergence, sometimes, especially for nonlinear
systems, they may not be convergent (there is no assurance that any of these methods will
eventually converge). For these cases, it is convenient to have a way of controlling the rate of
change of the calculated nodal values from iteration to iteration. This may be achieved by the use
of relaxation methods.

Relaxation methods
Relaxation methods prescribe that the nodal value calculated at a given iteration be a weighted
average between the value available from the previous iteration and the value that would be
calculated during the current iteration. For example, the Gauss-Seidel method prescribes that the
value of û(n)i,j be calculated from equation (42). The relaxation method instead makes this value
a weighted average between the value from the previous iteration, û(n−1)i,j, and the Gauss-Seidel
value; i.e. (see equation 42)

 h2 û ( n −1) + û ( n ) + λ2 (û ( n −1) + û ( n ) ) 


û ( n ) = (1 − ω)û ( n −1) + ω
1
+   (45)
i, j i, j  2(1 + λ ) 2(1 + λ )  i +1, j
2 2 i −1, j i, j+1 i, j−1  

where ω is a weighting factor. Note that ω=0 implies no progress in the iterative procedure (i.e.
nodal values would never change), whereas ω=1 leads to Gauss-Seidel iteration. If 0<ω<1 the
new value is truly a weighted average between the previous value and the Gauss-Seidel value.
Choices of ω in this range are used to "decelerate" the change prescribed by Gauss-Seidel
iteration and are therefore used when the Gauss-Seidel method diverges. The technique is then
called successive under-relaxation and is usually limited to highly nonlinear problems.
A variation of the relaxation technique is to use ω>1. In this case, equation (45) can be
interpreted as an extrapolation in which the calculated value goes beyond the value prescribed by
Gauss-Seidel iteration. This generally results in accelerated convergence. The method then is
called successive over-relaxation (SOR) and it is the method most commonly used in practice to
solve systems of equations iteratively.
For linear systems of equations that can be formulated in such a way that the system matrix is
made up of tri-diagonal blocks, it is possible to show that the optimum value of ω, the one that
leads to the best improvement in the calculated values at a given iteration (here, "improvement"
refers to an average error with respect to the exact solution), is given by

ωopt = 2/(1 + √(1 − ρ²))          (46)

where ρ is the spectral radius of the iteration matrix of the Jacobi method, defined as its
maximum eigenvalue in absolute value. For the linear system of equations considered here
(equation 34), it can be shown that

ρ = [cos(π/(M+1)) + cos(π/(N+1))]/2          (47)

Note that this makes 1<ωopt<2 for all possible grids (M and N), with the limit for a very fine grid
(M,N→∞) of ωopt=2. In general, however, the value of ω is chosen by trial and error.
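The sketch below implements point-SOR, equation (45), computing the optimum relaxation
parameter from equations (46) and (47) when none is supplied; for N=M=20 it gives ω ≈ 1.74, the
value quoted in the next section. As with the previous listings, this is an illustration, not a
prescribed implementation.

import numpy as np

def sor(N=20, M=20, omega=None, tol=1e-6, max_iter=100000):
    if omega is None:
        rho = 0.5 * (np.cos(np.pi / (M + 1)) + np.cos(np.pi / (N + 1)))  # equation (47)
        omega = 2.0 / (1.0 + np.sqrt(1.0 - rho**2))                      # equation (46)
    h = 1.0 / (N + 1)
    u = np.zeros((N + 2, M + 2))
    for n in range(1, max_iter + 1):
        u_old = u.copy()
        for i in range(1, N + 1):
            for j in range(1, M + 1):
                gs = h**2 / 4.0 + 0.25 * (u[i+1, j] + u[i-1, j]
                                          + u[i, j+1] + u[i, j-1])   # Gauss-Seidel value
                u[i, j] = (1.0 - omega) * u_old[i, j] + omega * gs   # equation (45)
        u[0, :] = u[1, :]
        u[:, 0] = u[:, 1]
        if np.max(np.abs(u - u_old)) <= tol:
            return u, n, omega
    return u, max_iter, omega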
Relaxation can be applied also to the line-by-line method. Using equation (43), direct
application of relaxation leads to
û(n)i,j = (1−ω)û(n−1)i,j + ω{h²/[2(1+λ²)] + [û(n)i+1,j + û(n)i−1,j + λ²(û(n−1)i,j+1 + û(n)i,j−1)]/[2(1+λ²)]}          (48)

which can be rearranged into the tri-diagonal system

−[ω/(2(1+λ²))] û(n)i+1,j + û(n)i,j − [ω/(2(1+λ²))] û(n)i−1,j = (1−ω)û(n−1)i,j + ω{h²/[2(1+λ²)] + [λ²/(2(1+λ²))](û(n−1)i,j+1 + û(n)i,j−1)}          (49)

This equation replaces equation (44).

Model 5 solution: Newtonian fluid


To validate the methods presented, we will apply them in this section to the flow of a
Newtonian fluid in a square duct (α=1). First, we present quantitative results regarding the
convergence characteristics of the methods. For simplicity, the initial estimate of the solution
was taken as û(0)i,j = 0 for all the interior nodes.
Figure 6 shows the values of û0,0 (dimensionless velocity at the center of the duct) obtained
in successive iterations, using the methods of Jacobi and Gauss-Seidel. Even though it will
eventually converge, we can see that the Jacobi method has not achieved convergence after 800
iterations. The results show that the Gauss-Seidel method is much better, especially when
applied with successive over-relaxation with relaxation parameter given by equation (46) (in this
case ωopt=1.74.) For this particular point in the domain, convergence is even faster for higher
values of the relaxation parameter (Figure 7). Note that the solution for ω=2 oscillates around the
value towards which it finally converges. Values of the relaxation parameter ω>2 lead to
divergent solutions. The number of iterations required by each method to converge, using a
tolerance t=1×10⁻⁶ (the criterion established by equation 41), is shown in Table 1.

Method Total n
Jacobi 2326
Gauss-Seidel 1345
SOR, ω=1.74 313
SOR, ω=1.9 168
SOR, ω=2.0 275

Table 1. Total number of iterations required by each method to reach the converged solution,
with a tolerance t=1×10⁻⁶ and N=M=20.

Figures 8 and 9 show the velocity profiles obtained for a grid with N=M=20. These solutions
are almost indistinguishable from the exact solution at the scale of the plots presented. The
maximum velocity is obtained at the center of the duct, as expected. Note the symmetry of the
profiles about the line ξ=η.
Figure 6. Calculated dimensionless velocity at the center of the duct (ξ=0, η=0) as a function of
the iteration number for the Jacobi, Gauss-Seidel and SOR (ω=1.74) methods, for a grid with
N=M=20. The SOR method is applied with the optimum relaxation parameter given by equation
(46), and it converges to a tolerance t=1×10⁻⁶ (as defined by equation 41) in 275 iterations.

Figure 7. Convergence behavior of the SOR method for various values of the relaxation
parameter (ω = 1.74, 1.9 and 2). Conditions are the same as in Figure 6.


Figure 8. Three dimensional representation of the calculated velocity field for Newtonian flow
in a square duct. Each point corresponds to a nodal value of the grid with N=M=20.

Model 5: Non-Newtonian fluid


In this section we will find the velocity field for a non-Newtonian fluid with a viscosity that
depends on shear rate. We will consider once again the case of a square duct (α=1). The PDE is
equation (17) with α=1:

∂/∂ξ(σ ∂u/∂ξ) + ∂/∂η(σ ∂u/∂η) = −1          (50)

The dimensionless viscosity (σ) depends on the shear rate, given by equation (7), which can be
expressed in terms of dimensionless variables using equations (15) and (16) to get (note that a=b
in this case)

γ̇ = [aΔ𝒫/(Lµ0)] √[(∂u/∂ξ)² + (∂u/∂η)²]          (51)


Figure 9. Contour plot of the velocity field for Newtonian flow in a square duct (generated from
calculations with N=M=20). Legend on curves is the value of normalized dimensionless
velocity, u/umax, where umax=0.2906 is the maximum velocity (ξ=η=0).

We will assume that the relation between viscosity and shear rate corresponds to the power-law
model, given by

µ = m γ̇^(n−1)          (52)

where m and n are known constants (for most non-Newtonian fluids that obey this model, n<1,
so that viscosity decreases as shear rate increases – these are called "shear thinning" fluids.)
Substituting equation (51) into equation (52) and using equation (15) yields

σ = β[(∂u/∂ξ)² + (∂u/∂η)²]^((n−1)/2)          (53)
where

β = (m/µ0) [aΔ𝒫/(Lµ0)]^(n−1)          (54)

To discretize equation (50), first we expand the derivatives as follows

σ(∂²u/∂ξ² + ∂²u/∂η²) + (∂u/∂ξ)(∂σ/∂ξ) + (∂u/∂η)(∂σ/∂η) = −1          (55)

Now we evaluate this equation at each interior node of the grid (Figure 3) using centered
differences of O(h²) and O(k²). The formulas are

∂²u/∂ξ² (ξi,ηj) ≈ (ûi+1,j − 2ûi,j + ûi−1,j)/h²          (56)

∂²u/∂η² (ξi,ηj) ≈ (ûi,j+1 − 2ûi,j + ûi,j−1)/k²          (57)

∂u/∂ξ (ξi,ηj) ≈ (ûi+1,j − ûi−1,j)/(2h)          (58)

∂u/∂η (ξi,ηj) ≈ (ûi,j+1 − ûi,j−1)/(2k)          (59)

∂σ/∂ξ (ξi,ηj) ≈ (σ̂i+1,j − σ̂i−1,j)/(2h)          (60)

∂σ/∂η (ξi,ηj) ≈ (σ̂i,j+1 − σ̂i,j−1)/(2k)          (61)

where the nodal values of the dimensionless viscosity, σ̂i,j, are calculated from the discretized
form of equation (53):

σ̂i,j = β{[(ûi+1,j − ûi−1,j)/(2h)]² + [(ûi,j+1 − ûi,j−1)/(2k)]²}^((n−1)/2)          (62)

The next step is to substitute equations (56) to (61) into equation (55) and to solve the
resulting equation for ûi,j. This yields

ûi,j = [ûi+1,j + ûi−1,j + λ²(ûi,j+1 + ûi,j−1)]/[2(1+λ²)]
       + [(ûi+1,j − ûi−1,j)(σ̂i+1,j − σ̂i−1,j) + λ²(ûi,j+1 − ûi,j−1)(σ̂i,j+1 − σ̂i,j−1)]/[8σ̂i,j(1+λ²)]
       + h²/[2σ̂i,j(1+λ²)]          (63)

where

λ = h/k          (64)

The application of successive over-relaxation to this problem leads to the following iteration
equation

û(n)i,j = (1−ω)û(n−1)i,j + ω û(calc)i,j,  n=1,2…          (65)

starting with assumed initial values û(0)i,j, where û(calc)i,j is the calculated (Gauss-Seidel) value of
the nodal variable from equation (63), using on the right-hand side the current (updated) nodal
values. This equation is applied to all interior nodes (i=1,2…N, j=1,2…M), using as boundary
values those given by equations (35) to (38). In addition, application of equation (63) requires
the evaluation of the dimensionless viscosity at boundary nodes. Along the boundaries ξ=0 (i=0)
and η=0 (j=0) the boundary conditions (19) and (21) can be combined directly with equation
(53) evaluated at boundary nodes to find expressions to calculate the boundary viscosities, as
follows:
(1) On the boundary ξ=0, combination of equations (21) and (53) leads to

σ = β|∂u/∂η|^(n−1),  ξ=0          (66)

Note that the absolute value of the derivative must be used here; it arises from combining the
square root in the shear rate with the exponent, since [(∂u/∂η)²]^((n−1)/2) = |∂u/∂η|^(n−1). Using
centered differences to discretize the derivative in equation (66) and evaluating it at nodes along
the ξ=0 boundary leads to

σ̂0,j = β|(û0,j+1 − û0,j−1)/(2k)|^(n−1),  j=1,2…M          (67)

(2) On the boundary η=0, combination of equations (19) and (53) leads to

σ = β|∂u/∂ξ|^(n−1),  η=0          (68)

Using centered differences to discretize the derivative in this equation and evaluating it at nodes
along the η=0 boundary leads to

σ̂i,0 = β|(ûi+1,0 − ûi−1,0)/(2h)|^(n−1),  i=1,2…N          (69)

Along the boundaries ξ=1 (i=N+1) and η=1 (j=M+1), the evaluation of the dimensionless
viscosity requires a different strategy:
(1) Along the boundary ξ=1, the boundary condition (22) (u=0) implies that u does not vary
with η, which means

∂u/∂η = 0,  ξ=1          (70)

From equation (53), this implies that

σ = β|∂u/∂ξ|^(n−1),  ξ=1          (71)

To evaluate the derivative in this equation, we cannot use a centered-difference formula, since it
would require nodal values beyond the boundary (ξ>1). Therefore, we will use a first-order
backward difference approximation

∂u/∂ξ ≈ (ûN+1,j − ûN,j)/h          (72)

But ûN+1,j = 0 (from boundary condition 22), so substitution of this equation into equation (71)
leads to

σ̂N+1,j = β(ûN,j/h)^(n−1),  j=1,2…M          (73)

(2) Along the boundary η=1, the boundary condition (20) (u=0) implies that u does not vary
with ξ, which means

∂u/∂ξ = 0,  η=1          (74)

From equation (53), this implies that

σ = β|∂u/∂η|^(n−1),  η=1          (75)

Once again, to evaluate the derivative in this equation we cannot use a centered-difference
formula, so we use a first-order backward difference approximation

∂u/∂η ≈ (ûi,M+1 − ûi,M)/k          (76)

But ûi,M+1 = 0 (from boundary condition 20), so substitution of this equation into equation (75)
leads to

σ̂i,M+1 = β(ûi,M/k)^(n−1),  i=1,2…N          (77)

Note that the previous equations for the boundary viscosities (67, 69, 73 and 77) leave the
viscosities undefined at the corner nodes of the domain. This does not represent a problem,
because the viscosity at those nodes never appears in the formulation.
An interesting aspect of the solution described here, based on the iterative procedure defined
by equations (63) and (65), is that the procedure is not that different from that applied to the case
of Newtonian flow, for which the PDE was linear. That is, the nonlinear nature of the problem
does not add complexity to the algorithm except for the slightly more complicated algebraic
equations.
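To make this point concrete, here is a sketch of the complete nonlinear procedure. Relative to the
Newtonian SOR listing, the only additions are a viscosity array updated from equation (62)
together with the boundary formulas (67), (69), (73) and (77), and the use of equation (63) inside
the relaxation update (65). The small floor eps placed under the shear rate, which prevents an
infinite viscosity while the velocity field is still uniformly zero, is our own implementation detail
and is not part of the original formulation (the grid is kept small here; the calculations reported
below use N=M=99):

import numpy as np

def power_law_duct(N=20, M=20, n_exp=0.65, beta=0.5, omega=1.74,
                   tol=1e-6, max_iter=100000, eps=1e-12):
    h = 1.0 / (N + 1)                    # square duct: alpha=1, h=k, lambda=1 (equation 64)
    u = np.zeros((N + 2, M + 2))
    sig = np.ones((N + 2, M + 2))        # dimensionless viscosity at every node

    def update_viscosity():
        p = (n_exp - 1.0) / 2.0
        # interior nodes, equation (62)
        dudx = (u[2:N+2, 1:M+1] - u[0:N, 1:M+1]) / (2 * h)
        dudy = (u[1:N+1, 2:M+2] - u[1:N+1, 0:M]) / (2 * h)
        sig[1:N+1, 1:M+1] = beta * np.maximum(dudx**2 + dudy**2, eps) ** p
        # boundary nodes, equations (67), (69), (73) and (77)
        sig[0, 1:M+1] = beta * np.maximum(np.abs(u[0, 2:M+2] - u[0, 0:M]) / (2 * h), eps) ** (n_exp - 1)
        sig[1:N+1, 0] = beta * np.maximum(np.abs(u[2:N+2, 0] - u[0:N, 0]) / (2 * h), eps) ** (n_exp - 1)
        sig[N+1, 1:M+1] = beta * np.maximum(u[N, 1:M+1] / h, eps) ** (n_exp - 1)
        sig[1:N+1, M+1] = beta * np.maximum(u[1:N+1, M] / h, eps) ** (n_exp - 1)

    for it in range(1, max_iter + 1):
        u_old = u.copy()
        update_viscosity()
        for i in range(1, N + 1):
            for j in range(1, M + 1):
                s = sig[i, j]
                calc = ((u[i+1, j] + u[i-1, j] + u[i, j+1] + u[i, j-1]) / 4.0
                        + ((u[i+1, j] - u[i-1, j]) * (sig[i+1, j] - sig[i-1, j])
                           + (u[i, j+1] - u[i, j-1]) * (sig[i, j+1] - sig[i, j-1])) / (16.0 * s)
                        + h**2 / (4.0 * s))                          # equation (63), lambda=1
                u[i, j] = (1 - omega) * u_old[i, j] + omega * calc   # equation (65)
        u[0, :] = u[1, :]                                            # relations (35)-(38)
        u[:, 0] = u[:, 1]
        if np.max(np.abs(u - u_old)) <= tol:
            return u, it
    return u, max_iter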
The procedure outlined above was applied to a non-Newtonian fluid with n=0.65 and β=0.5.
A grid with N=M=99 and SOR with ω=1.74 were used, with a tolerance t=1×10⁻⁶. For such a
fine grid and the same conditions, the Newtonian flow algorithm required 4103 iterations to
converge, while the non-Newtonian case required 18079 iterations, reflecting the additional
effort required to solve the nonlinear problem. The results are presented in Figures 10 and 11.
For purposes of the graphical representation, only 22×22 equally-spaced points were used in the
plots.
The most interesting aspect of the solution (compare with Figures 8 and 9) is that the velocity
profile for the non-Newtonian case tends to be steeper close to the walls of the duct and flatter
close to the center of the duct. This is a consequence of the decrease of the viscosity with shear
rate: close to the walls the velocity gradient (and, consequently, the shear rate) is high, so the
viscosity is lower, which makes the material easier to deform and thus able to sustain a high
velocity gradient. Close to the center of the duct, the shear rate is lower and the viscosity
becomes so high that the fluid tends to flow like a plug of uniform velocity.

Figure 10. Three dimensional representation of the calculated velocity field for non-Newtonian
flow in a square duct with n=0.65, β=0.5. A grid with N=M=99 was used to generate the
solution.


Figure 11. Contour plot of the velocity field for non-Newtonian flow in a square duct (generated
from calculations with N=M=99). Legend on curves is the value of normalized dimensionless
velocity, u/umax, where umax=0.4791 is the maximum velocity (ξ=η=0).

Introduction to finite element methods – the method of weighted residuals


The numerical methods used so far have all been finite difference methods, in which Taylor
series are used to approximate derivatives in the differential equation and boundary conditions,
and the differential equation is then discretized by evaluating its approximate form only at nodal
points. In this section we will introduce a completely different approach to finding a numerical
solution of a differential equation, based on the approximation of an integral form of the
differential equation. To illustrate the approach and the numerical method, consider the ODE that
describes one-dimensional, steady-state transport of a species subject to consumption by a first-
order reaction and a general source term:

d  dc  dc
 D  − v − kc + s = 0 , a<x<b (78)
dx  dx  dx

where D is the dispersion coefficient, v is the velocity of the flow in the x direction, k is the first
order rate constant, and s(x) is a general source term, representing the moles of species added to
the main flow per unit time and unit volume. This equation could represent, for example, the
transport of a reactive contaminant in a portion of a river, in which case c would be the species
concentration averaged over the cross section of the river. The source term, s(x), could include
contributions from point discharges of the contaminant to the river or from distributed sources,
such as dissolution of contaminant from the river sediment. For illustration purposes, we will
consider the following boundary conditions: (1) the species concentration is known at the
upstream boundary:

c=c0, x=a (79)

and (2) at the downstream end, the concentration becomes uniform:

dc/dx = 0,  x = b          (80)

The objective of the problem is to find a function c(x) that satisfies the ODE (78) and the
boundary conditions (79) and (80). The problem can be formulated in an alternate way as
follows. We can reinterpret the ODE in terms of the definition of the residual:

d  dc  dc
r(x ) =  D  − v − kc + s (81)
dx  dx  dx

The original objective of the problem would be to find the function c(x) that makes r(x)≡0 in the
interval a<x<b, while satisfying the boundary conditions.
An alternate way of formulating the problem stated above is based on the definition of
weighted residual:

R = ∫[a,b] w(x) r(x) dx          (82)

where r(x) is given by equation (81) and w(x) is a weighting function. It is clear that the exact
solution would make the weighted residual be zero, regardless of the weighting function used.
The basic problem can instead be formulated by requiring that R=0 for every function w(x)
belonging to a complete set of functions: if R=0 for all possible functions w(x), then the function
c(x) that satisfies this requirement (plus the boundary conditions) must be the exact solution of
the ODE. This way of formulating the problem is called the variational formulation.
To understand why the variational and differential formulations are equivalent, consider the
following argument. If the weighted residual is zero for any function w(x), let us select the
following function

ψn(x) = (n/√π) exp[−n²(x − x0)²]          (83)

where n and x0 are specified constants, with a≤x0≤b. We know that this function becomes the
delta function as n becomes large (see Chapter 4, pages 18 and 19 for more details about this
particular function):

lim(n→∞) ψn = δ(x − x0)          (84)

Letting w(x)=ψn(x) in the limit as n→∞, the weighted residual is

R = ∫[a,b] δ(x − x0) r(x) dx          (85)

From equation (3-102), the integral becomes

R = r(x0)          (86)

It is clear that by choosing x0's as all real numbers in the interval considered, making R=0 is
equivalent to satisfying the ODE (78) everywhere.
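As a quick numerical illustration (entirely our own; the residual function chosen below is
arbitrary), the weighted residual (82) computed with w=ψn indeed approaches r(x0) as n grows:

import numpy as np

r = lambda x: np.sin(3 * x) + x**2     # an arbitrary smooth "residual", for illustration only
x0, a, b = 0.4, 0.0, 1.0
x = np.linspace(a, b, 200001)
dx = x[1] - x[0]
for n in (5, 20, 80):
    psi = n / np.sqrt(np.pi) * np.exp(-n**2 * (x - x0)**2)   # equation (83)
    R = np.sum(psi * r(x)) * dx                              # weighted residual (82)
    print(n, R)
print("r(x0) =", r(x0))                                      # the limit predicted by (86)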
The method of weighted residuals is based on proposing an approximate solution for c(x) and
then forcing this approximate solution to minimize or make zero the weighted residual calculated
for a discrete number of weighting functions. The most common way of doing this is to propose
an approximate solution in terms of a finite series of known functions,

ĉ(x) = Σj=0→M αj φj(x)          (87)

where the αj's are constants, and the functions φj(x) are called basis functions. The idea behind
this representation is for the set of basis functions to become a complete basis for the space of
functions that are solutions of the ODE, as M→∞. This would ensure that the approximate
solution (87) converges to the exact solution as M→∞. An advantage of the representation given

by equation (87) is that the approximate solution will be defined as a function of x, and not
merely at a finite number of discretization points, as is the case for finite difference methods.
The approximate solution is forced to satisfy the boundary conditions and it is required that a
finite number of weighted residuals be equal to zero:

Ri = ∫[a,b] wi(x) r̂(x) dx          (88)

where r̂(x) is the residual calculated from equation (81) using the approximate solution (87).
The selection of the weighting functions to use, wi(x), determines the type of method, of which
there are many variations. Here, because of the form selected for the approximate solution
(equation 87), we will choose as many weighting functions as unknown coefficients to ensure
that equation (88) generates a system of equations with equal number of equations as unknowns.
Since it is expected that the set of basis functions be a complete set as M→∞, and there are as
many basis functions as unknown coefficients, it makes sense to select the weighting functions to
be the same as the basis functions,

wi(x) = φi(x)          (89)

The requirement used will be that the weighted residuals be zero,

∫[a,b] φi(x) r̂(x) dx = 0          (90)

This particular variational method is called Galerkin's method. Equation (90) will eventually
form a system of equations from which the unknown coefficients in equation (87) can be found.
Substituting equation (81) in terms of the approximate concentration into equation (90) leads to

∫[a,b] φi [d/dx(D dĉ/dx) − v dĉ/dx − kĉ + s] dx = 0          (91)

It will be convenient to eliminate second derivatives in this expression. The first term in equation
(91) can be integrated by parts, as follows

∫[a,b] φi d/dx(D dĉ/dx) dx = [φi D dĉ/dx]ₐᵇ − ∫[a,b] D (dφi/dx)(dĉ/dx) dx          (92)

so that equation (91) can be written as


24

[φi D dĉ/dx]ₐᵇ + ∫[a,b] {−D (dφi/dx)(dĉ/dx) + φi (−v dĉ/dx − kĉ + s)} dx = 0          (93)

The only thing that remains to be specified is the set of basis functions. We will use basis
functions that make the coefficients (αj) in the approximation (87) be nodal values of the
concentration. To do this, first we discretize the solution domain, a≤x≤b into subintervals
delimited by nodes, in a similar manner to what we did for the generation of finite difference
grids. For example, for a fixed step size h, we use

h = (b − a)/(N + 1)          (94)

which defines nodes with coordinates

xi = a + ih,  i=0,1…N+1          (95)

where x0=a and xN+1=b are the boundary nodes and there are N interior nodes. Now we make the
constants in the series (87) be nodal values of the approximate concentration. Since there will be
N+2 nodal values of the concentration, equation (87) requires the use of N+2 basis functions, so
that M=N+1,

ĉ(x) = Σj=0→N+1 ĉj φj(x)          (96)

This choice of coefficients implies that the basis functions must satisfy the following constraint
at the nodes:

φj(x) = { 1,  x = xj
        { 0,  x = xk, k ≠ j          (97)

The simplest set of basis functions is the one that results in an approximation for the
concentration that is linear in x within each subinterval. A basis function that fulfills this
requirement is shown in Figure 12; it consists of two linear branches that go to zero at the
adjacent nodes. The basis function associated with node j is given by

φj(x) = { 0,                         x < xj−1
        { (x − xj−1)/(xj − xj−1),    xj−1 ≤ x < xj
        { (xj+1 − x)/(xj+1 − xj),    xj ≤ x < xj+1
        { 0,                         xj+1 ≤ x          (98)

With this set of basis functions, equation (96) results in a piecewise linear approximation for
ĉ(x), as illustrated in Figure 13.
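The hat functions (98) and the piecewise linear interpolant (96) are simple to program. In the
sketch below (our own notation; the dummy neighbors placed beyond the boundary nodes are an
implementation convenience), each basis function is evaluated as the positive part of the smaller
of its two linear branches:

import numpy as np

def phi(j, x, nodes):
    # hat basis function (98) for node j; zero outside (x_{j-1}, x_{j+1})
    xj = nodes[j]
    left = nodes[j-1] if j > 0 else xj - 1.0
    right = nodes[j+1] if j < len(nodes) - 1 else xj + 1.0
    x = np.asarray(x, dtype=float)
    rising = (x - left) / (xj - left)
    falling = (right - x) / (right - xj)
    return np.maximum(np.minimum(rising, falling), 0.0)

def c_hat(x, c_nodes, nodes):
    # piecewise linear approximation (96)
    return sum(cj * phi(j, x, nodes) for j, cj in enumerate(c_nodes))

By property (97), evaluating c_hat at a node returns that node's coefficient, which is why the
coefficients can be interpreted as nodal concentrations.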

Figure 12. A linear basis function that satisfies equation (97) and leads to the approximation
given by equation (96).

Figure 13. Approximate concentration obtained using the basis functions given by equation (98)
for 4 adjacent nodes.

The special form of Galerkin's method represented by the approximation described above
constitutes what is known as the finite element method. The next step in the procedure is to
substitute the approximation (96) into equation (93). Doing this and interchanging summation
with integration leads to

[φi D dĉ/dx]ₐᵇ + Σj=0→N+1 ĉj {∫[a,b] [−D (dφi/dx)(dφj/dx) + φi (−v dφj/dx − kφj)] dx} + ∫[a,b] φi s dx = 0          (99)

This equation is applied for i=1,2…N to generate as many equations as unknown internal nodal
values (the nodal values at i=0 and N+1 will be set by the boundary conditions). Consider the
first term,

[φi D dĉ/dx]ₐᵇ = φi(b) D dĉ/dx|x=b − φi(a) D dĉ/dx|x=a          (100)

Both terms on the right-hand side of this equation are always zero, because the value of the basis
functions for interior nodes i=1,2…N at x=a or b will all be zero. Therefore, equation (99)
simplifies to

N +1  dφ i dφ j dφ j   b
 
b

∑ ĉ j ∫ − D dx dx + φi  − v dx − kφ j dx  + ∫ φisdx = 0 (101)
j= 0 
 a    a

All terms in this equation contain the basis function φi or its derivative. Since this function is
nonzero only in the interval xi-1<x<xi+1, the integrands will be nonzero in that interval only. The
result is

Σj=0→N+1 ĉj ∫[xi−1,xi+1] [−D (dφi/dx)(dφj/dx) + φi (−v dφj/dx − kφj)] dx + ∫[xi−1,xi+1] φi s dx = 0,  i=1,2…N          (102)

In this equation, the only basis functions φj that are nonzero in the interval xi-1<x<xi+1 are those
for j=i−1, j=i and j=i+1 (Figure 14). Therefore, the only nonzero terms in the summation will be
those that correspond to those three nodes:

Σj=i−1→i+1 ĉj ∫[xi−1,xi+1] [−D (dφi/dx)(dφj/dx) + φi (−v dφj/dx − kφj)] dx + ∫[xi−1,xi+1] φi s dx = 0,  i=1,2…N          (103)

Each of these equations has at most three nonzero coefficients associated with unknown nodal
values and will lead to a system of linear equations with a tri-diagonal matrix. The nodal values
ĉ0 and ĉN+1 appear in these equations and will be determined by the boundary conditions.
Boundary condition (79) leads directly to

ĉ0 = c0          (104)

To evaluate boundary condition (80), we substitute the approximate concentration (equation 96)
into equation (80) to get

Σj=0→N+1 ĉj (dφj/dx)|x=b = 0          (105)

Figure 14. Nonzero basis functions in the interval xi-1<x<xi+1.

Note that the only basis functions that will be nonzero in the vicinity of x=b (node i=N+1) are
those corresponding to j=N and j=N+1, so that equation (105) simplifies to

ĉN (dφN/dx)|x=b + ĉN+1 (dφN+1/dx)|x=b = 0          (106)

From equation (98), noticing that xN+1−xN=h, we get

dφN/dx|x=b = −1/h          (107)

dφN+1/dx|x=b = 1/h          (108)

so that equation (106) yields

ĉN+1 = ĉN          (109)

This is exactly the same result that would be obtained if the boundary condition (80) were
discretized using a first-order finite difference approximation.
Using equations (104) and (109), the system of equations (103) can be formulated as follows:

| b1  c1  0   ⋯  0     0     0    | | ĉ1   |   | d1   |
| a2  b2  c2  ⋯  0     0     0    | | ĉ2   |   | d2   |
| 0   a3  b3  ⋯  0     0     0    | | ĉ3   | = | d3   |          (110)
| ⋮   ⋮   ⋮      ⋮     ⋮     ⋮    | | ⋮    |   | ⋮    |
| 0   0   0   ⋯  aN−1  bN−1  cN−1 | | ĉN−1 |   | dN−1 |
| 0   0   0   ⋯  0     aN    bN   | | ĉN   |   | dN   |

where

ai = ∫[xi−1,xi] [−D (dφi/dx)(dφi−1/dx) + φi (−v dφi−1/dx − kφi−1)] dx,  i=2…N          (111)

bi = ∫[xi−1,xi+1] [−D (dφi/dx)² + φi (−v dφi/dx − kφi)] dx,  i=1…N−1          (112)

bN = ∫[xN−1,xN+1] [−D (dφN/dx)² + φN (−v dφN/dx − kφN)] dx
   + ∫[xN−1,xN+1] [−D (dφN/dx)(dφN+1/dx) + φN (−v dφN+1/dx − kφN+1)] dx          (113)

ci = ∫[xi,xi+1] [−D (dφi/dx)(dφi+1/dx) + φi (−v dφi+1/dx − kφi+1)] dx,  i=1…N−1          (114)

d1 = −c0 ∫[x0,x1] [−D (dφ1/dx)(dφ0/dx) + φ1 (−v dφ0/dx − kφ0)] dx − ∫[x0,x2] φ1 s dx          (115)

di = −∫[xi−1,xi+1] φi s dx,  i=2…N          (116)

An interesting aspect of this formulation is that we have not assumed that any of the coefficients
in the ODE (D, v and k) are constant. The previous equations can be evaluated directly even
when these coefficients and the source term depend on x. Here, we will consider the case s≡0,
and constant D, v and k. Using the definition of the basis functions (equation 98), all integrals in
equations (111) to (116) can be evaluated in closed form (recall that h=xi−xi-1). The results are:

ai = D/h + v/2 − kh/6,  i=2…N          (117)

bi = −2D/h − 2kh/3,  i=1…N−1          (118)

bN = −D/h − v/2 − 5kh/6          (119)

ci = D/h − v/2 − kh/6,  i=1…N−1          (120)

d1 = −c0 (D/h + v/2 − kh/6)          (121)

di = 0,  i=2…N          (122)

For comparison purposes, consider the case in which all parameters (c0, D, v and k) are unity.
The exact solution of the ODE (78) for this case, subject to the boundary conditions (79) and
(80) and taking a=0, b=1, is

c = [r− e^(r+ x) − r+ e^(r+ − r−(1−x))] / [r− − r+ e^(r+ − r−)]          (123)

where the roots of the characteristic equation are

r± = (1 ± √5)/2          (124)
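Since the coefficients (117) to (122) are available in closed form, the entire method reduces to
assembling and solving one tri-diagonal system. The sketch below is our own: it takes a=0 and
b=1 so that the unit-parameter case applies, inlines the same Thomas algorithm used for the
line-by-line method, and computes the error at x=1 against the exact solution (123):

import numpy as np

def fem_transport(N, a=0.0, b=1.0, c0=1.0, D=1.0, v=1.0, k=1.0):
    h = (b - a) / (N + 1)
    lo = np.full(N, D/h + v/2 - k*h/6)      # a_i, equation (117)
    di = np.full(N, -2*D/h - 2*k*h/3)       # b_i, equation (118)
    up = np.full(N, D/h - v/2 - k*h/6)      # c_i, equation (120)
    di[-1] = -D/h - v/2 - 5*k*h/6           # b_N, equation (119)
    d = np.zeros(N)
    d[0] = -c0 * (D/h + v/2 - k*h/6)        # d_1, equation (121); s = 0 elsewhere (122)
    # Thomas algorithm for the tri-diagonal system (110)
    cp, dp = np.zeros(N), np.zeros(N)
    cp[0], dp[0] = up[0] / di[0], d[0] / di[0]
    for i in range(1, N):
        m = di[i] - lo[i] * cp[i-1]
        cp[i] = up[i] / m
        dp[i] = (d[i] - lo[i] * dp[i-1]) / m
    c = np.zeros(N)
    c[-1] = dp[-1]
    for i in range(N - 2, -1, -1):
        c[i] = dp[i] - cp[i] * c[i+1]
    # append boundary values from equations (104) and (109)
    return np.concatenate(([c0], c, [c[-1]]))

rp, rm = (1 + np.sqrt(5)) / 2, (1 - np.sqrt(5)) / 2           # roots (124)
c1_exact = (rm - rp) * np.exp(rp) / (rm - rp * np.exp(rp - rm))   # c(1) from (123)
print(abs(fem_transport(N=19)[-1] - c1_exact))                # error at x=1 for h=0.05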

Figure 15 shows a comparison between the concentration profile obtained by finite elements
for a given step size and the exact solution. Step sizes smaller than h=0.05 yield approximate
solutions that are almost indistinguishable from the exact solution at the scale of the plot. To get
an idea about the convergence of this algorithm to the exact solution, Figure 16 shows how the
error at x=1 varies with step size. It should not be surprising that the trend is e=O(h) since
essentially the method developed here leads to a linear approximation of the concentration
(Figure 13), which would be the equivalent of a first-order finite difference scheme.
Figure 15. Comparison between the finite element solution with h=0.05 and the exact solution of
the ODE (78) subject to the boundary conditions (79) and (80).

Figure 16. Error at x=1 as a function of the step size for the finite element algorithm (log-log
scale). The solid line has a slope of one.
