HYDRA
A Finite Element Computational
Fluid Dynamics Code
User Manual
Mark A. Christon
Mechanical Engineering
University of California
Lawrence Livermore National Laboratory
June 1995
DISCLAIMER
This document was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor the University of California nor any of their employees makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or the University of California, and shall not be used for advertising or product endorsement purposes.

Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-ENG-48.
HYDRA:
A Finite Element Computational Fluid Dynamics Code

Mark A. Christon
Lawrence Livermore National Laboratory
Livermore, California 94551 USA

June 1995
Table of Contents
Preface
Chapter 1: Introduction
  1.1 History of HYDRA Development
  1.2 HYDRA Capabilities
    1.2.1 Pre-Processor Interfaces
    1.2.2 Post-Processor Interfaces
    1.2.3 Material Models
  1.3 Guide to the HYDRA User's Manual
Chapter 2: Theoretical Overview
  2.1 The Navier-Stokes Equations
  2.2 The Advection-Diffusion Equation
  2.3 Spatial Discretization
  2.4 Element Technology
  2.5 Grid Parameters
  2.6 Explicit Time Integration
  2.7 The Semi-Implicit Projection
  2.8 Start-up Procedures
  2.9 The Pressure Poisson Equation
  2.10 Turbulence Models
  2.11 Derived Variables
  2.12 Vectorization, Parallelization, and Performance
Chapter 3: Running HYDRA
  3.1 Execution
  3.2 Restarts
Chapter 4: HYDRA Input Data
  4.1 HYDRA Control File
    4.1.1 Mesh Parameters
    4.1.2 Analysis Parameters
    4.1.3 Momentum Equation-Solver Parameters
    4.1.4 Pressure Poisson Equation-Solver Parameters
    4.1.5 Time History Blocks
Preface
HYDRA is a finite element code which has been developed specifically to attack the
class of transient, incompressible, viscous, computational fluid dynamics problems which
are predominant in the world which surrounds us. The goal for HYDRA has been to
achieve high performance across a spectrum of supercomputer architectures without
sacrificing any of the aspects of the finite element method which make it so flexible and
permit application to a broad class of problems. As supercomputer algorithms evolve, the
continuing development of HYDRA will strive to achieve optimal mappings of the most
advanced flow solution algorithms onto supercomputer architectures.
The development of HYDRA has drawn, in part, upon over ten years of research in computational fluid dynamics by Phil Gresho, Stevens Chan and their colleagues. Certainly, the work by Helmut Daniels with the PASTIS code (a research version of the Projection-II algorithm) has proven valuable, if only in identifying the obvious memory and performance issues to consider. HYDRA has also drawn upon the many years of finite element expertise embodied in DYNA3D [1] and NIKE3D [2]. Certain key architectural ideas from both DYNA3D and NIKE3D have been adopted and further improved to fit the advanced dynamic memory management and data structures implemented in HYDRA. HYDRA, in its implementation, reflects, to a certain degree, my training and experience with supercomputers beginning with the CYBER 205 and progressing through the CRAY UNICOS "friendly user" period at both the National Center for Supercomputing Applications and the Pittsburgh Supercomputing Center to the ongoing parallel efforts with the Meiko CS-2 at LLNL. The philosophy for HYDRA is to focus on mapping flow algorithms to computer architectures to try and achieve a high level of performance, rather than just performing a port, a philosophy I adopted from Dan Pryor and Pat Burns during my days as a graduate student at Colorado State University.
I wish to thank Jerry Goudreau and Phil Gresho for their help, encouragement and
periodic moral support during the initial development of HYDRA. I would also like to
express my appreciation for the help which Brad Maker provided through his discussions
with me regarding the Gibbs-Poole-Stockmeyer bandwidth minimization algorithm, the
construction of dual grids for the pressure Poisson equation, and the many algorithmic
similarities and differences between solid and fluid mechanics.
I wish to give special thanks also to Lourdes Placeres and Cathe Forte for their time,
patience and expertise in typesetting the draft version of this document and for their advice
in defining the manual style. For the efforts of the early HYDRA collaborators, I wish to
thank Barb Kornblum, Rose McCallen and Stevens Chan for their input on this document.
Chapter 1
Introduction
The simulation of flow fields about vehicles and in turbomachinery remains one of the computational grand challenges [3] for the 1990's. An example of this class of computational fluid dynamics problem is the transient simulation of flow around a submarine or an automobile. In order to simulate the flow around a vehicle, it is anticipated that more than one million elements will be required to resolve important flow-field features such as shed vortices from regions of separated flow. In addition to the high degree of spatial discretization, the temporal resolution for this class of problem is also demanding, ultimately requiring the optimal mapping of flow-solution algorithms to modern supercomputer architectures.
HYDRA is a finite element code which solves the transient, incompressible, viscous, Navier-Stokes equations, and is based, in part, upon the work of Gresho, et al. [4,5,6,7]. HYDRA makes use of advanced solution algorithms for both implicit and explicit time integration. The explicit solution algorithm [4] introduces lagging phase error, but decouples the momentum equations and minimizes the memory requirements. While both the diffusive and Courant-Friedrichs-Lewy (CFL) stability limits must be respected in the explicit algorithm, balancing tensor diffusivity (BTD) somewhat ameliorates the restrictive diffusive stability limit and raises the order of accuracy of the advective time integration scheme. The explicit algorithm, in combination with single point integration and hourglass stabilization, has proven to be both simple and computationally efficient. Because of this, the explicit algorithm has been the focus of early parallelization efforts with HYDRA.
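To make the structure of the explicit update concrete, the following sketch (in Python rather than HYDRA's FORTRAN, and with hypothetical operator routines standing in for HYDRA's element-level gather/scatter) shows a single lumped-mass forward Euler step; since the lumped mass matrix is diagonal, no linear solve is required.

import numpy as np

# Minimal sketch of one explicit, lumped-mass momentum step. The callables
# advect, diffuse, and grad are hypothetical stand-ins for the assembled
# element-level operators A(u)u, Ku, and CP.
def explicit_step(u, p, dt, ml_diag, advect, diffuse, grad, body_force):
    rhs = body_force - advect(u) - diffuse(u) - grad(p)
    return u + dt * rhs / ml_diag   # M_L is diagonal, so the "solve" is a division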
In the second-order projection algorithm (P-II) [6,7], a consistent-mass predictor in conjunction with a lumped-mass corrector legitimately decouples the velocity and pressure fields, thereby reducing both memory and CPU requirements relative to more traditional fully coupled solution strategies for the Navier-Stokes equations. The consistent-mass predictor retains high-order phase speed accuracy, while the lumped-mass corrector (a projection to a divergence-free subspace) maintains a divergence-free velocity field. Both the predictor and the corrector steps are amenable to solution via direct or preconditioned iterative techniques, making it possible to tune the algorithm to the computing platform, i.e., parallel, vector or super-scalar. The second-order projection algorithm can accurately track shed vortices, and is amenable to the incorporation of either simple or complex (multi-equation) turbulence submodels appropriate for a broad spectrum of applications. HYDRA provides several turbulence models which vary in complexity from a simple algebraic form [8,9] to more traditional multi-equation models [10]. However, because there is no single turbulence model which can solve all flow problems, HYDRA development efforts will continue to evaluate new turbulence models which are appropriate for large scale, complex geometry problems.
Over the past two years, HYDRA has been under continuous development, and has acted as a test bed not only for investigating the issues involved in mapping finite element codes to parallel architectures, but also for the study of optimal solution methods for the pressure Poisson equation, and for the study of advanced Navier-Stokes solution algorithms.
HYDRA was developed using standard UNIX tools for source configuration and source code control. HYDRA was originally constructed to permit the source code to be configured for compilation using either FORTRAN-77 or FORTRAN-90 compilers in an attempt to span multiple supercomputer architectures, e.g., traditional CRAY vector computers and machines like the Thinking Machines CM-5. FORTRAN-77 was used for the vectorized (CRAY) version of HYDRA, while FORTRAN-90 source configuration permitted coexistence and concurrent development of a data parallel version of HYDRA for the Thinking Machines CM-200 and CM-5.
While partially successful, the very long vector characteristics of the CM-200 and CM-5 have pushed HYDRA development away from a data parallel implementation and towards a more portable domain decomposition message passing (DDMP) model which relies upon the FORTRAN-77 part of HYDRA. Current algorithm mapping efforts with HYDRA are being directed towards the Meiko CS-2 because of its superior network bandwidth, fast scalar speed, vector processing abilities, and local disk. However, the general purpose nature of the DDMP approach will also permit future efforts to consider machines such as the CRAY T3D and the INTEL Paragon.
The primary mesh generator for HYDRA is currently INGRID [14]. At this time, there is no direct mesh generation support for HYDRA in INGRID. However, the input data for HYDRA has been designed to enable the use of the dn3d INGRID output option. One difficulty with this approach is that the user must do some manual editing of the HYDRA input files, which can be a bit unwieldy where large grids are concerned. The use of the UNIX utility awk can simplify the conversion of DYNA3D boundary conditions to HYDRA boundary conditions. It is advisable to generate all 2-D HYDRA meshes in the x-y plane to simplify the task of converting the DYNA3D nodal coordinates and boundary conditions for HYDRA.
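As an illustration of the kind of scripted conversion meant here, the sketch below remaps fixed-column boundary condition records; the column positions are hypothetical, since the actual DYNA3D and HYDRA record layouts are not reproduced in this paragraph, and an equivalent awk one-liner would do the same job.

# Hypothetical record layouts, for illustration only.
def convert_bc_line(dyna_line):
    node = int(dyna_line[0:8])        # assumed: node number in columns 1-8
    code = int(dyna_line[8:16])       # assumed: constraint code in columns 9-16
    return "%10d%10d%20.6e" % (node, code, 0.0)

with open("dyna.bc") as fin, open("hydra.bc", "w") as fout:
    for line in fin:
        fout.write(convert_bc_line(line) + "\n")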
While INGRID is currently the primary mesh generator being used with HYDRA, the input data for HYDRA is quite straightforward, and nearly any finite element mesh generator could be used in place of INGRID. Alternative mesh generation tools such as those from the National Grid Project at Mississippi State University, the CUBIT project at Sandia National Laboratories, or from the PMESH project at LLNL will hopefully provide an adequate interface for HYDRA in the near future.
1.2.2 Post-Processor Interfaces

HYDRA can output several forms of graphics files, but the primary file format is the Methods Development Group's binary graphics data files and time history files which are compatible with GRIZ [15] and THUG [16]. GRIZ is used for visualizing snapshots of the entire flow-field (state data) or generating animations of the time varying flow-field data, while THUG is used for interrogating time history data at a moderate number of mesh points. Typically, the state data are written at relatively large time intervals while the time history data are recorded at each time step.

GRIZ and THUG are general purpose scientific visualization tools for finite element codes, and they support analysis codes for both computational fluid dynamics and computational solid and structural mechanics. The use of these general purpose data visualizers requires translation from the primitive variables which HYDRA writes to the graphics data files to variables which can be displayed in GRIZ and THUG. Table 1.1 shows the mapping from HYDRA's 2-D primitive variables to GRIZ and THUG variables. Table 1.2 shows the mapping from 3-D HYDRA variables to GRIZ variables.
In GRIZ, the character strings associated with certain variables may have to be reset to
reflect the correct HYDRA variable, e.g., the x-acceleration variable in GRIZ is actually the
x-component of vorticity in a 3-D HYDRA database. For 2-D HYDRA state databases, the
z-velocity and x-acceleration are omitted, and the z-vorticity and stream function are
computed and output at the nodes instead.
THUG provides direct support for HYDRA and the variable mapping is handled
automatically. However, THUG currently requires the use of the default variable names for
the display of HYDRA variables. Table 1.3 shows the relationship between HYDRA
global time history variables and THUG global time history variables. THUG currently
provides the divu and ke global variable commands for the display of the rms divergence
and total kinetic energy for HYDRA time history data.
Future work on interfaces to alternative visualization tools will be based, in part, upon
user requirements and the functionality provided by alternative visualization tools for large
scale CFD problems using unstructured grids.
Table 1.3: HYDRA global time history variables and the corresponding THUG global time history commands.
1.2.3 Material Models

Material models in HYDRA may be broadly classified into two groups. The first class of material model consists of the definition of a fluid density, kinematic viscosity, and thermal diffusivity for a problem involving only fluid flow. For flow problems which require only one material definition, HYDRA provides a simplified input format for specification of the fluid properties.

Included in this class of material definition is the Smagorinsky [8] model, which currently is treated as just an added viscosity, albeit a turbulent viscosity based upon the local strain-rate tensor. Because the input data for this model are minimal, the data format is presented in a simplified form in the HYDRA control file (see Chapter 4).
The second class of material model involves the definition of a relation between material properties and dependent variables such as velocity and temperature. The internal architecture of HYDRA permits the use of this class of material models, but at this time user access to this type of material model in HYDRA is restricted. In the future, user access to alternative constitutive models will be provided.
Chapter 2
Theoretical Overview

This chapter presents a brief overview of the theoretical foundation for HYDRA. As an overview, this chapter is not intended to be a complete technical reference. Instead, it simply presents the basic forms of the partial differential equations which HYDRA treats, and a general description of the methodologies employed in their solution. The interested reader may pursue the references included in this chapter for details on the algorithms used in HYDRA and their implementation.

To begin, a brief introduction to the incompressible Navier-Stokes equations is presented. This is followed by the advection-diffusion equation, the semi-discrete form of the conservation equations, and a discussion of the pressure Poisson equation. Finally, a brief description of the currently implemented turbulence models is presented.
2.1 The Navier-Stokes Equations

The equations of interest are the time-dependent, incompressible Navier-Stokes equations:

$$\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} = -\nabla P + \nu \nabla^2 \mathbf{u} + \mathbf{f} \qquad (2.1)$$

$$\nabla \cdot \mathbf{u} = 0 \qquad (2.2)$$

where $\mathbf{u} = (u, v, w)$ is the velocity, $P = p/\rho$, $p$ is the pressure, $\rho$ is the mass density, $\nu$ is the kinematic viscosity, and $\mathbf{f}$ is the body force.
The system of equations above is subject to boundary conditions which consist of specified velocity as in Eq. (2.3), or pseudo-traction boundary conditions on $\Gamma_2$ as in Eqs. (2.4)-(2.5) and shown in Fig. (2.1).

$$\mathbf{u} = \hat{\mathbf{u}} \quad \text{on } \Gamma_1 \qquad (2.3)$$

$$-P + \nu \frac{\partial u_n}{\partial n} = F_n \quad \text{on } \Gamma_2 \qquad (2.4)$$

$$\nu \frac{\partial u_\tau}{\partial n} = F_\tau \quad \text{on } \Gamma_2 \qquad (2.5)$$

Here, $\Gamma_1 \cup \Gamma_2 = \Gamma$ is the boundary of the domain, $n$ represents the outward normal direction at the boundary, $u_n = \mathbf{u} \cdot \mathbf{n}$, $u_\tau = \mathbf{u} \cdot \boldsymbol{\tau}$, and $F_n$ and $F_\tau$ are the normal and tangential components of the boundary traction, respectively. Homogeneous traction boundary conditions correspond to the well known natural boundary conditions in the finite element formulation which are typically applied at outflow boundaries.
The initial and boundary conditions must be compatible. In particular, the initial velocity field must be divergence-free,

$$\nabla \cdot \mathbf{u}(\mathbf{x}, 0) = 0 \qquad (2.7)$$

and its normal component must match the specified boundary velocity,

$$\mathbf{n} \cdot \mathbf{u}(\mathbf{x}, 0) = \mathbf{n} \cdot \hat{\mathbf{u}}(\mathbf{x}, 0) \qquad (2.8)$$

Equations (2.7)-(2.8) pose a solvability constraint on the flow problem [17]. That is, if either Eq. (2.7) or Eq. (2.8) is violated, then the flow problem is ill-posed. If $\Gamma_2 = \emptyset$ (the null set), (e.g., enclosure flows with $\mathbf{u} \cdot \mathbf{n}$ specified on all surfaces), then global mass conservation enters as an additional solvability constraint:

$$\oint_{\Gamma} \mathbf{n} \cdot \hat{\mathbf{u}} \, d\Gamma = 0 \qquad (2.9)$$

Remark

HYDRA always checks the initial conditions and boundary conditions and, if necessary, performs a divergence-free projection on the initial velocity field before the first time step is taken. This guarantees that the solvability constraints will always be met, even if the user-supplied initial velocity field does not satisfy them exactly.
2.2 The Advection-Diffusion Equation

The transport of a scalar such as temperature is governed by the advection-diffusion equation:

$$\frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T = \frac{k}{\rho c_p} \nabla^2 T + \frac{\dot{q}}{\rho c_p} \qquad (2.10)$$

2.3 Spatial Discretization

Spatial discretization of the conservation equations with the Galerkin finite element method yields the semi-discrete system:

$$M \dot{\mathbf{u}} + A(\mathbf{u})\mathbf{u} + K \mathbf{u} + C P = \mathbf{f} \qquad (2.11)$$

$$C^T \mathbf{u} = 0 \qquad (2.12)$$

$$M_T \dot{T} + A(\mathbf{u}) T + K_T T = Q \qquad (2.13)$$
Here, $M$ is the unit mass matrix, $A(\mathbf{u})$ is the advection operator, $K$ the viscous diffusion operator, and $\mathbf{f}$ the body force. $C$ is the gradient operator, and $C^T$ is the divergence operator. In the energy conservation equation, Eq. (2.13), $M_T$ is the mass matrix corresponding to the scalar valued advection-diffusion problem, with $Q$ representing the discrete volumetric heat sources, and $K_T$ being the thermal diffusivity operator.
The advection operator at the element level is

$$A^e(\mathbf{u}^h) = \int_{\Omega_e} N_a \left( \mathbf{u}^h \cdot \nabla N_b \right) d\Omega \qquad (2.14)$$

where $\mathbf{u}^h = \sum_{a=1}^{nnpe} N_a \mathbf{u}_a$. Here, $N_a$ is the element shape function, and $nnpe$ is the number of nodes per element. In the evaluation of Eq. (2.14), an integral of triple products is required to generate the advection matrix, which makes this operator very computationally intensive to form.
Equations (2.12)-(2.17) form the basis for a description of the time integration
methods available in HYDRA. The following sections will discuss some of the
modifications made to the basic element formulation for the sake of computational
efficiency.
2.4 Element Technology

Figures 2.2 and 2.3: Canonical element-local node numbering for the 2-D and 3-D elements.

Table 2.1: Integration rules for the element-level operators $M^e$, $K^e$, $A^e$, and $C^e$.
Reduced Integration
Several of the solution options in HYDRA make use of reduced numerical integration
for the sake of minimizing memory and floating point operations (FLOPS). Reduced
integration has typically been employed in explicit time integration algorithms where every
attempt is made to minimize the number of floating point operations per time step. The
benefits of one-point integration are tremendous in nonlinear computational fluid dynamics
problems because of the requisite sizes of meshes for interesting problems and the
associated operation counts for performing constitutive evaluations and the concomitant
operator formation and assembly.
Although there are practical benefits to using one-point integration, there are also some drawbacks. The primary difficulty with one-point integration is the possibility of a mesh instability referred to as hourglassing (sometimes referred to as keystoning in the finite difference community).
The reduced numerical quadrature of the diffusion term, $K$, in Eq. (2.11) leads to rank deficiency of the element level operator. The presence of an improper singular mode in the element level operator can also lead to singularity of the assembled global operators. In 2-D, there is only one improper singular mode, as shown in Fig. 2.4. In 3-D, there are four improper singular modes, which are shown in Fig. 2.5. When the hourglass modes are excited in a numerical solution, they remain undamped and can pollute the entire field. In 2-D, the presence of hourglass modes is most easily detected in surface or contour plots. In 3-D, the presence of hourglass modes is much more difficult to detect because the four modes rarely occur individually in a pure form.

In order to eliminate the singular modes, a stabilization operator is added to the element diffusion operator, as shown in Eq. (2.18):

$$\tilde{K}^e = K^e + \epsilon_{hg} \sum_i \mathbf{h}_i \mathbf{h}_i^T \qquad (2.18)$$

where the $\mathbf{h}_i$ are the element hourglass vectors and $\epsilon_{hg}$ is a stabilization coefficient.
The specifics of the stabilization operator in Eq. (2.18) may be found in references 20-24. In HYDRA, the default hourglass stabilization is based upon the work of Goudreau and Hallquist [24] and Gresho et al. [4]. This stabilization method is the so-called "h-stabilization" because the stabilization operator is formed from the outer-product of the hourglass vectors at the element level. This form of stabilization is sometimes referred to as trace stabilization.
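The following sketch illustrates the outer-product form of h-stabilization for a one-point-integrated 2-D bilinear quad, using the classic hourglass base vector for that element; the scaling coefficient eps is left as a free parameter, and the details of HYDRA's actual coefficient may differ.

import numpy as np

# h-stabilization sketch for a 2-D one-point quad: add a rank-one outer
# product of the hourglass vector to the rank-deficient element operator.
h = np.array([1.0, -1.0, 1.0, -1.0])    # hourglass pattern for the bilinear quad

def stabilize(K_1pt, eps):
    return K_1pt + eps * np.outer(h, h)  # removes the improper singular mode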
Figure 2.4: 2-D hourglass mode shape.
In the 2-D, explicit time integration of the Navier-Stokes equations, there is an optional hourglass stabilization based upon the work of Belytschko and his colleagues [20,21,22,23]. This stabilization method is often referred to as γ-stabilization. γ-stabilization is perhaps more robust in structural mechanics applications, but also requires more operations and storage than h-stabilization.

It is the author's experience that it is relatively more difficult to excite the hourglass modes in an Eulerian computation than in a corresponding Lagrangian computation, e.g., a DYNA3D [1] computation. However, γ-stabilization still requires fewer operations and less storage than the fully integrated element in 2-D. In 3-D, this is not the case. Table 2.2 shows the memory requirements and operation counts for a matrix-vector multiply ($K\mathbf{u}$) for a variety of element formulations. In 2-D, γ-stabilization requires nearly the same storage as the fully integrated element, but takes 9 more operations to achieve the matrix-vector multiply. In 3-D, γ-stabilization is about 3 times more expensive to perform than the corresponding row-compressed global matrix-vector multiply. Thus, in both 2-D and in 3-D, h-stabilization is the default hourglass stabilization option.
Table 2.2: Memory requirements and operation counts for a matrix-vector multiply for various element integration rules, stabilization operators, and storage schemes. (Nel = number of elements, Nnp = number of nodes, Nbw = 1/2 bandwidth)
2.5 Grid Parameters

The grid Re and CFL are defined in the canonical element-local coordinate system as shown in Figures 2.2 and 2.3. This coordinate system corresponds approximately to the element-local coordinates shown in Figures 2.6 and 2.7. In these figures, the location of the element-local coordinates is based upon the intersection of the vectors connecting the mid-sides of the elements (Fig. 2.6).

Figure 2.6: Element parameters for the grid Re and CFL estimation.
Figure 2.7: Element parameters for the grid Re and CFL estimation in 3-D.
The grid Reynolds number is defined as

$$Re_i = \frac{|u_i| \, h_i}{\nu} \qquad (2.19)$$

and the grid CFL number as

$$CFL_i = \frac{|u_i| \, \Delta t}{h_i} \qquad (2.20)$$

where $u_i$ is the component of the element centroid velocity along the element-local coordinate direction $i$, and $h_i$ is the element dimension in that direction. Thus, Re and CFL rely upon the projection of the centroid velocity onto the element-local coordinate directions. These quantities are used in the stability computations for the explicit time integration algorithms described below.
Figure 2.8 shows sample HYDRA output for the 2-D grid parameter estimation. The xi-grid, eta-grid, and psi-grid parameters correspond to the element-local ξ, η, and ζ coordinate directions based upon the canonical local node numbering system shown in Figures 2.2 and 2.3. Thus, interpretation of the element-local grid parameters requires a knowledge of element orientation, which is normally available in a mesh plot.
2.6 Explicit Time Integration

In the explicit algorithm, the momentum equations are advanced with a forward Euler scheme using a lumped mass matrix:

$$M_L \frac{\mathbf{u}^{n+1} - \mathbf{u}^n}{\Delta t} = \mathbf{f}^n - A(\mathbf{u}^n)\mathbf{u}^n - K \mathbf{u}^n - C P^n \qquad (2.21)$$

The explicit algorithm must respect both the diffusive and the convective stability limits. The use of BTD is recommended for all simulations despite some of its drawbacks [7], because of its beneficial effects upon stability. While the analytical stability limits for the explicit time integration of the Navier-Stokes equations in multiple dimensions remain intractable [4], the stability computations in HYDRA rely upon the estimation of element grid size and the grid parameters described in section 2.5.
In order to make use of Equations (2.19) and (2.20), a unit vector and an element dimension for each element-local coordinate direction are defined from the vectors connecting the element mid-sides (mid-faces in 3-D):

$$\hat{\mathbf{e}}_i = \frac{\Delta \mathbf{x}_i}{\| \Delta \mathbf{x}_i \|} \qquad (2.25)$$

$$h_i = \| \Delta \mathbf{x}_i \| \qquad (2.26)$$

where $i = \xi, \eta, \zeta$. A minimum is taken over all elements and all element-local coordinates to establish a global mesh stability requirement:

$$\Delta t = CFL \cdot \min_{e,\,i} \Delta t_i^e \qquad (2.27)$$

where CFL is the user-specified acceptable CFL number, which must always be less than 1.0 for the explicit algorithm.
By default, the minimum constraining time step is used in the time integration scheme.
However, the user may request that the grid parameters and stable time step be re-evaluated
at a fixed interval using the dtchk command described in Chapter 4. In a rapidly changing
flow field, this may avoid over-estimating or under-estimating the stable time step based
upon the initial divergence-free velocity field.
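A sketch of the grid-parameter-based time step estimate follows; the element arrays (centroid velocities, local unit vectors, and element dimensions) are assumed inputs rather than HYDRA's internal data structures, and the diffusive limit is written in one common form.

import numpy as np

# Stable time step sketch following Eqs. (2.19)-(2.20): project the centroid
# velocity onto the element-local directions, then take the global minimum of
# the convective and diffusive limits, scaled by the user CFL number.
def stable_dt(u_c, e_loc, h, nu, cfl_user):
    # u_c: (Nel, ndim) centroid velocities; e_loc: (Nel, ndim, ndim) local unit
    # vectors; h: (Nel, ndim) element dimensions per local direction.
    u_i = np.abs(np.einsum('eij,ej->ei', e_loc, u_c))
    dt_adv = h / np.maximum(u_i, 1.0e-30)      # convective (CFL) limit
    dt_diff = h**2 / (2.0 * nu)                # diffusive limit (one common form)
    return cfl_user * min(dt_adv.min(), dt_diff.min())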
2.7 The Semi-Implicit Projection

The P-II algorithm is a second-order accurate fractional step method [6,7] which uses a consistent-mass predictor in conjunction with a lumped-mass corrector. In the P-II algorithm, the velocity and pressure fields are legitimately decoupled. This reduces both memory and CPU requirements relative to traditional fully coupled solution strategies. The consistent-mass predictor retains phase speed accuracy, while the lumped-mass corrector (projection) maintains a divergence-free velocity field. Both the predictor and the corrector steps are amenable to solution via direct or preconditioned iterative techniques, making it possible to tune the algorithm to the computing platform, i.e., parallel, vector or super-scalar. The P-II projection algorithm with consistent mass matrix delivers superconvergent fourth-order phase accuracy when a uniform mesh is used.
Given an initial divergence-free velocity field, $\mathbf{u}^0$, which satisfies the essential boundary conditions, and the associated pressure field, $P^0$, the P-II algorithm proceeds as follows. First, an intermediate velocity field is predicted by solving Eq. (2.28) for $\tilde{\mathbf{u}}^{n+1}$. The divergence of the intermediate velocity field is then used as a right-hand-side for the computation of a Lagrange multiplier as in Eq. (2.29). Finally, the predicted velocity field is projected to a divergence-free velocity field using Eq. (2.30), and the pressure field is updated using Eq. (2.31).
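Schematically, one P-II step can be written as below; solve_momentum and solve_ppe stand for the direct or preconditioned-iterative solves described in this chapter, and the pressure update coefficient is shown in one common form rather than as HYDRA's exact expression.

# One P-II step in outline (cf. Eqs. 2.28-2.31). C is the discrete gradient,
# Ml_inv the inverse lumped mass; the solver callables are placeholders.
def pii_step(u, p, dt, solve_momentum, solve_ppe, C, Ml_inv):
    u_tilde = solve_momentum(u, p, dt)        # consistent-mass predictor (2.28)
    lam = solve_ppe(C.T @ u_tilde)            # Lagrange multiplier from div (2.29)
    u_new = u_tilde - Ml_inv @ (C @ lam)      # projection to div-free field (2.30)
    p_new = p + 2.0 * lam / dt                # pressure update, one common form (2.31)
    return u_new, p_new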
In the iterative solution of the momentum equations, two convergence criteria must be met, as outlined in Gresho et al. Viewing the system to be solved as $Ax = b$, Equations (2.32) and (2.33) define the stopping criteria. That is, both the normalized residual and the change in the solution vector during the last iteration must be less than the user-specified tolerance:

$$\frac{\| b - A x^{(k)} \|}{\| b \|} \le \epsilon \qquad (2.32)$$

$$\frac{\| x^{(k)} - x^{(k-1)} \|}{\| x^{(k)} \|} \le \epsilon \qquad (2.33)$$
For the momentum equations, the user may specify the maximum number of iterations permitted, the interval at which to check the residual norms, and the allowable tolerance. Typically, $\epsilon = 1.0\mathrm{e}{-5}$ produces acceptable solutions in 20-50 iterations on most problems. Exceptionally tough problems may require more iterations, but the number of iterations should never exceed the number of nodal points in the mesh. For more detailed information on the iterative methods used in HYDRA, see references 25, 32, 33, 34, 38, 39, 41.
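In code, the dual stopping test of Eqs. (2.32)-(2.33) amounts to the following check, evaluated at the user-specified interval inside the iterative solver (a sketch, not HYDRA source):

import numpy as np

def converged(A, b, x, x_prev, eps):
    # Both the normalized residual and the relative change in the iterate
    # must fall below the user tolerance (Eqs. 2.32 and 2.33).
    res_ok = np.linalg.norm(b - A @ x) <= eps * np.linalg.norm(b)
    dx_ok = np.linalg.norm(x - x_prev) <= eps * np.linalg.norm(x)
    return res_ok and dx_ok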
2.8 Start-up Procedures

If the divergence of the initial velocity field is greater than the specified tolerance, a mass-consistent projection to a divergence-free subspace is performed. For an initial velocity field, $\mathbf{u}^0$, Eq. (2.34) is solved for $\lambda$, and the orthogonal projection in Eq. (2.35) is performed:

$$\left[ C^T M_L^{-1} C \right] \lambda = C^T \mathbf{u}^0 \qquad (2.34)$$

$$\mathbf{u}^0 \leftarrow \mathbf{u}^0 - M_L^{-1} C \lambda \qquad (2.35)$$
Given initial conditions which satisfy the solvability constraint, an initial partial acceleration is computed according to Eq. (2.36). In the case of the explicit algorithm, a lumped mass matrix ($M_L$) is used, while for P-II, a consistent mass matrix is employed. A PPE is then solved for the pressure at $t = 0$, using $\dot{\mathbf{u}}^0$ from Eq. (2.36):

$$M \dot{\mathbf{u}}^0 = \mathbf{f} - A(\mathbf{u}^0)\mathbf{u}^0 - K \mathbf{u}^0 \qquad (2.36)$$

$$\left[ C^T M_L^{-1} C \right] P^0 = C^T \dot{\mathbf{u}}^0 \qquad (2.37)$$
While the user is given control over the allowable divergence, it is not advisable to relax
the divergence of the initial velocity field. This is particularly so with the explicit algorithm
which relies heavily upon the initial velocity field being divergence-free.
2.9 The Pressure Poisson Equation

Many codes rely upon a pre-processor in addition to a mesh generator for the construction of the PPE dual grid. HYDRA follows the algorithm in Christon [43] to construct the dual grid during the initialization phase. The simplest algorithms for constructing the dual grid are $O(Nel^2)$ algorithms, while the dual grid construction algorithm in HYDRA requires only $O(Nel)$ operations, which permits the dual grid computation to be performed in HYDRA during the initialization phase.
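The linear-time construction can be sketched as two passes over the connectivity: one to invert the node-to-element map, and one to collect the elements sharing nodes. This is only the general idea; the specific dual-grid algorithm of reference 43 may differ in detail.

from collections import defaultdict

def dual_grid(connectivity):
    # connectivity: one tuple of node numbers per element.
    node2elem = defaultdict(list)
    for e, nodes in enumerate(connectivity):      # pass 1: invert connectivity
        for n in nodes:
            node2elem[n].append(e)
    dual = [set() for _ in connectivity]
    for e, nodes in enumerate(connectivity):      # pass 2: elements sharing a node
        for n in nodes:
            dual[e].update(ne for ne in node2elem[n] if ne != e)
    return dual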
Figure 2.9: Mesh and PPE dual grid.
The efficient solution of the PPE has been an ongoing research issue during the development of HYDRA, and is not a closed issue today. HYDRA provides a wide variety of solution strategies for the PPE, and the overall performance of the code is sensitive to the solution option chosen. In fact, the performance of the solution algorithms varies across computer architectures, and not all of the algorithms require the explicit construction of the dual grid.

For the iterative techniques, the convergence criterion applied to stop the iteration is identical to that used for the momentum equations (Eqs. 2.32-2.33). In general, the conjugate gradient based PPE solvers typically require (0.01-0.1)·Nel iterations to converge with a tolerance of $\epsilon = 1.0\mathrm{e}{-5}$. However, the iterative solvers require less memory than the direct solvers.
Of the direct solvers, the PVS [28,29] solver is highly recommended when memory is not an issue. However, as is the case with all direct solvers, the PVS solver does not scale to very large problems because of round-off and memory limitations. Direct solvers are also very dependent on having minimum bandwidth (profile). All of the direct solvers in HYDRA automatically invoke the Gibbs-Poole-Stockmeyer bandwidth minimization routines.
While there are no hard and fast rules for picking an optimal PPE solver, in general, the direct methods are robust and fast, requiring only a resolve of the pre-factored PPE at each time step. Table 2.3 shows the relative costs in memory and CPU time of a conjugate gradient solver, the PVS solver, and a fixed bandwidth Gaussian elimination solver.

The band Gaussian solver was used to normalize the memory requirements and CPU usage for a three dimensional problem with approximately 30,000 elements. The PVS solver is the clear winner with regards to the element cycle time of the explicit algorithm, but the EBE/JPCG solver requires no additional storage over the basic operators for the momentum equations. The algebraic multigrid solver (AMG) performed only marginally better than the conjugate gradient solver and required a good deal of memory to hold the coarse grids. While these findings are not absolute, hopefully they can guide the user in the selection of an appropriate solver for each problem. The following paragraphs provide a brief description of each PPE solver.
Table 2.3: PPE solver comparison (all timings on a single CRAY Y-MP processor).
EBE/JPCG

The element-by-element Jacobi preconditioned conjugate gradient (EBE/JPCG) solver was designed to make use of the $C$ and $M_L^{-1}$ operators without requiring any additional storage for the PPE matrix. Thus, this is a "matrix-free" method, because it performs the necessary matrix-vector multiplies for conjugate gradient in an element-by-element, right-to-left order rather than using the traditional left-to-right ordering.

EBE/JPCG is highly vectorized, and is also a very parallel algorithm. The primary drawback in its current implementation is the relatively slow convergence rate. Future research will focus on better preconditioners for this solver because of its potential for parallel scalability.
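The right-to-left, matrix-free product at the heart of EBE/JPCG can be sketched as follows; the element gradient blocks and dof maps are assumed inputs, and the global PPE matrix is never formed.

import numpy as np

# y = C^T M_L^-1 C x, applied element by element (right to left).
def ppe_matvec(x, Ce, edofs, ml_inv):
    # Ce[e]: element gradient block; edofs[e] = (velocity dofs, pressure dofs).
    w = np.zeros(ml_inv.shape[0])
    for e, (vdofs, pdofs) in enumerate(edofs):
        w[vdofs] += Ce[e] @ x[pdofs]            # w = C x (scatter)
    w *= ml_inv                                  # w = M_L^-1 C x (diagonal mass)
    y = np.zeros_like(x)
    for e, (vdofs, pdofs) in enumerate(edofs):
        y[pdofs] += Ce[e].T @ w[vdofs]          # y = C^T w (gather)
    return y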
UDU Band Gaussian

The UDU solver was written by Erik Thompson at Colorado State University, and is included as a reference solver for benchmarking purposes. The storage format is based upon storing the upper symmetric bands of the coefficient matrix in the columns of the PPE array. This solver requires bandwidth minimization, but is still relatively costly because of wasted storage and unnecessary operations associated with zero entries in the PPE matrix.
PVS solver

The PVS [28] solver also requires bandwidth minimization, but uses a compact row storage scheme to avoid storing entries which are zero. The PVS solver is highly vectorized, and makes extensive use of loop unrolling to achieve a high level of performance on vector and cache based architectures.
AMG solver

The algebraic multigrid solver (AMG) [27] is a black-box multigrid solver which is capable of treating the linear systems generated for unstructured grids. This is an experimental solver in HYDRA, and will not be supported. For the brave users who wish to experiment with AMG, additional information on the multigrid method may be found in Briggs [26,40].
JPCG

The Jacobi preconditioned conjugate gradient solver is also used for the momentum equations in the P-II algorithm, and may also be invoked for the PPE. Details of the data structures used may be found in the ITPACK User's Guide [35] and the NSPCG User's Guide [36].
SSOR-PCG

The symmetric successive over-relaxation preconditioned conjugate gradient algorithm requires no additional storage with respect to the JPCG solver for the PPE. Currently, this solver may only be invoked for the solution of the PPE. HYDRA's SSOR-PCG solver is based in part upon the work in references 30, 31, 42.
2.10 Turbulence Models

Wilcox also suggests that the complexity of a turbulence model is dictated by the level of detail required in a simulation. There is a plethora of turbulence models available in the literature which range in complexity from algebraic models which employ the Boussinesq eddy-viscosity approximation to full second-moment closures which require seven additional transport equations. Turbulence research using HYDRA as a vehicle is an ongoing effort with two major concentrations. The first involves the use of a simple Smagorinsky model [8,9] for performing Large Eddy Simulations (LES). The second effort is focused on implementing multi-equation models which can capture the effects of the anisotropic Reynolds-stress-induced corner vortices.
Large Eddy Simulation

The basic idea behind LES is to resolve the larger eddies in a turbulent flow while employing a model for the smallest eddies which cannot be resolved by the grid. The LES model implemented in HYDRA follows the development in Wilcox [9], and makes use of a simple Smagorinsky sub-grid scale (SGS) model. Currently, this model is available only in the 3-D, explicit, time-integration option of HYDRA. The Smagorinsky eddy viscosity is based upon the element level strain-rate tensor as defined in Eqs. (2.38)-(2.39).
$$\nu_T = \left( C_s \Delta \right)^2 \sqrt{2 S_{ij} S_{ij}} \qquad (2.38)$$

$$S_{ij} = \frac{1}{2} \left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right) \qquad (2.39)$$

In Eq. (2.38), $\Delta$ is based upon the local element diameter corresponding to the element volume. The Smagorinsky constant, $C_s$, varies from flow to flow, but its typical values lie in the range 0.1 to 0.2.
The Smagorinsky model presented here represents the simplest type of turbulence model which can be employed. While the implementation in HYDRA will permit the use of a dynamic sub-grid scale model, it does not employ any type of explicit wall model. It should be noted that one of the primary reasons for the relative success of the Smagorinsky model is that it provides enough additional diffusion to stabilize most simulations. Further, the large (grid-resolvable) eddies yield statistics which are not influenced by the SGS model.
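For one element, the eddy viscosity of Eqs. (2.38)-(2.39) reduces to a few lines; the element-centroid velocity gradient and the volume-based length scale are assumed available.

import numpy as np

def smagorinsky_nu_t(grad_u, delta, c_s=0.1):
    # Eq. (2.39): symmetric part of the velocity gradient.
    S = 0.5 * (grad_u + grad_u.T)
    # Eq. (2.38): nu_T = (C_s * Delta)^2 * sqrt(2 S:S).
    return (c_s * delta) ** 2 * np.sqrt(2.0 * np.sum(S * S))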
The scalability of this turbulence model is an issue. For a direct numerical simulation, the number of grid points for a simple channel flow scales approximately as $Re^{9/4}$. The application of an LES model scales approximately as $Re^2$, which is still extremely demanding.
2.12 Vectorization, Parallelization, and Performance

Dynamic Memory Management

HYDRA performs dynamic memory management (rather than relying upon the dated strategy of an expandable blank common). This means that HYDRA can both allocate and free memory as needed during the execution cycle of a problem. This is a must for providing a scalable computational tool, i.e., one that can accommodate very large meshes.
Element Groups

One of the key vectorization ideas in DYNA3D and NIKE3D is the element group. HYDRA has enhanced this basic concept, making it a flexible, hierarchical mechanism which is used in configuring the code for a target architecture. The basic idea behind the element groups is to group elements according to a certain criterion. For register-to-register vector supercomputers, the criterion is that all elements in a group and their associated data structures should be completely independent to permit full vectorization. Thus, the goal is to avoid any vectorization-inhibiting data dependencies.

In HYDRA, the size of the element groups is a configurable parameter, and is set based upon the size and number of vector registers on a vector computer, or according to the size of cache on a RISC machine. The topological domain decomposition used to group elements for vectorization is a well known algorithm and is described in Hughes [18].
In the case of most parallel machines, the element groups form a second level in a hierarchical topological domain decomposition scheme. The coarse grained decomposition is achieved via a technique such as recursive spectral bisection (RSB), and then the elements are grouped in data independent clusters. This is the technique applied to machines such as the Meiko CS-2, a parallel vector architecture.

The element grouping idea has been an essential tool for mapping HYDRA to very-long-vector SIMD architectures such as the CM-200 and CM-5, where the element groups need not be data independent. The element groups need not be data independent because of the capability of these machines to perform data routing with data reduction to gracefully handle data dependencies. Given the size of the machine (number of physical processors), the element groups may be quite large. For the CM-5 with vector units, the typical group size is 8192.
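A greedy version of the data-independent grouping can be sketched as follows: elements are swept repeatedly, and an element joins the current group only if it shares no nodes with the elements already in it; the group-size cap stands in for the vector-register or cache criterion described above.

def group_elements(connectivity, max_group=64):
    groups, remaining = [], list(range(len(connectivity)))
    while remaining:
        used_nodes, group, deferred = set(), [], []
        for e in remaining:
            nodes = connectivity[e]
            if len(group) < max_group and used_nodes.isdisjoint(nodes):
                group.append(e)                 # no shared nodes: safe to vectorize
                used_nodes.update(nodes)
            else:
                deferred.append(e)
        groups.append(group)
        remaining = deferred
    return groups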
Vectorization

All of the algorithms in HYDRA have been vectorized. This is a common starting point even for parallel architectures because the process of vectorization is well understood, can be easily tested, and usually reveals the key aspects of finite element based algorithms. Some of the concepts used in DYNA3D for achieving its high level of vector performance have been studied and enhanced for HYDRA. Where it has been possible, standard techniques such as loop-unrolling, scalar promotion, and vector reuse have been taken into account in the implementation of HYDRA. However, there is always room for improvement.
Parallelization

The early development of HYDRA began on the CM-200 at the Advanced Computing Laboratory at Los Alamos National Laboratory. The work on the CM-200 then moved to the CM-5 at the Army High Performance Computing Research Center at the University of Minnesota. The early versions of HYDRA could be configured as FORTRAN-90 or FORTRAN-77 to permit the rapid transition from a CRAY environment to the Connection Machine.

While this was successful, the delivered performance of the CM-5 on unstructured grids was quite disappointing. The best performance was approximately eight times slower than the best timings on the CRAY Y-MP. The ideas of Tezduyar, et al. seem to lead to very good performance on the CM-5, but require a code architecture which is inflexible. This fact, in conjunction with the shortcomings of FORTRAN-90, has led the author away from this architecture and away from FORTRAN-90 for the time being.
The movement back to FORTRAN-77 was also driven by the availability of machines such as the Meiko CS-2, and workstation clusters as well as symmetric multiprocessors. The flexibility afforded by FORTRAN-77 in combination with the DDMP model has led to a more portable version of HYDRA than FORTRAN-90 could provide.

The current parallelization efforts are focused upon the DDMP model not only for the Meiko, but for workstation clusters and the CRAY T3D. It is anticipated that future versions of HYDRA will even be available for heterogeneous clusters of workstations.
Chapter 3
Running HYDRA

HYDRA has been exercised on computers ranging from UNIX workstations to traditional CRAY supercomputers, Thinking Machines CM-200 and CM-5 computers, and the vector-parallel Meiko CS-2. The common thread for all of these machines is a UNIX (or UNIX-like) environment. HYDRA provides a single command line interface which functions in the fashion in which most common UNIX commands operate, i.e., a single command followed by a list of command line arguments.
3.1 Execution
HYDRA may be executed with the following command line options:

hydra -i in -c cntl -o out -p plot -h hist -g glob -d dump

All of the file names may include a path name as well. For example, the following command line makes use of the automatic expansion of the user's login directory and both absolute and relative paths for files.

hydra -i ~/plate/flow1.msh -c ../cntl1 -o /home/joe/plate.out
As the graphics files are generated during a simulation, the root file name and subsequent family members will be written to disk, with the family member file names being the root file name concatenated with a two digit family number which changes to a three digit family number after 100 files have been generated.
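The familying convention amounts to the following name rule (a sketch; the exact digit handling in HYDRA for very large families may differ):

def family_name(root, member):
    # Two-digit member numbers, widening to three digits after 100 files.
    return "%s%02d" % (root, member) if member < 100 else "%s%03d" % (root, member)

# e.g. family_name("plot", 7) -> "plot07"; family_name("plot", 123) -> "plot123"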
3.2 Restarts
HYDRA will automatically write a binary check-point file which contains all of the data necessary to restart a computation. If the dump file exists when HYDRA starts execution, then HYDRA will allow the user to choose between a restart and starting a new calculation. HYDRA re-writes the check-point file in place each time a simulation is terminated normally. Therefore, the user must first save a copy of the check-point file before restarting if the original check-point file is to be preserved. This mode of operation derives from the fact that most supercomputer installations provide archival storage which may be used to preserve the check-point file during a restart run and subsequent check-point operations.
The state and time history plot files are preserved when a restart is performed. Upon restart, HYDRA will begin writing new state and time history data in the next family member of the plot files. Similarly, the global output data is simply concatenated to the existing glob file when a restart is performed. However, the human-readable output file is re-written when a restart is performed.
HYDRA will permit the user to change only a limited number of analysis parameters
when a check-point file is used to restart a computation. For example, changing mesh
parameters such as the number of nodes and elements is not possible at this time.
However, changing material properties, the number of time steps, plot intervals, etc. is
acceptable.
Chapter 4
HYDRA Input Data

4.1 HYDRA Control File

4.1.1 Mesh Parameters

Keyword   Variable & Meaning
mesh      Mesh parameter starting delimiter.
nnp       Number of nodes (no default).
nel       Number of elements (no default).
nnpe      Number of nodes per element (no default).
nmat      Number of materials (default: nmat=1).
ndim      Number of dimensions (default: ndim=3).
end       Terminate mesh parameters.
ndhist
...
end
end
4.1.2 Analysis Parameters

Keyword   Variable & Meaning
analyze   Analysis parameter definition starting delimiter.
solve     1: Transient, incompressible, Navier-Stokes using 1-point
          quadrature, lumped mass, and Forward Euler time integration
          (default: 1).
          3: Transient, incompressible, Navier-Stokes using full
          quadrature and P-II.
          101: Transient advection-diffusion using Forward Euler with
          a prescribed velocity field. This option makes use of 1-point
          integration.
          102: Transient advection-diffusion using semi-implicit time
          integration with a prescribed velocity field. This option
          makes use of fully integrated elements.
          0/1: Solve scalar advection-diffusion with momentum equations
          (default: 0).
term      Simulation termination time (default: 1.0).
nstep     Number of time steps (default: 10).
deltat    Time step size (default: 0.01).
dtscal    Time step scale factor (default: 1.0).
dtchk     Interval to check the stable time step size and report grid
          parameters. A negative value for the interval causes the time
          step to be checked but not changed, i.e., forces the grid
          parameters to be reported (default: 10).
mass      1: Lumped mass (default: mass=1 for solve=1,101).
          2: Consistent mass (default: mass=2 for solve=2,3,102).
Analysis Parameters (cont.)

Keyword   Variable & Meaning
nu        Fluid kinematic viscosity (default: 1.0).
          Fluid thermal diffusivity (default: 1.0).
          Time weight for viscous terms (default: 0.5).
          Time weight for BTD (default: 0.5).
          Time weight for advection (default: 0.0).
          Time weight for BC's (default: 0.5).
          1: Write the final velocity field to a file which may later
          be used to specify initial conditions for HYDRA. The name of
          the initial conditions file is the name of the output file
          with "ics" appended.
          0: Don't write an initial condition file (default: 0).
4.1.3 Momentum Equation-Solver Parameters

Keyword   Variable & Meaning
itmax     Iteration limit (default: itmax=10).
itchk     Interval to check convergence (default: itchk=5).
eps       Convergence tolerance (default: eps=1.0e-10).
wrt       0: Suppress diagnostic information from solver (default).
          1: Write diagnostic information from solver.
hist      0: Suppress writing an ASCII convergence history file (default).
          1: Write an ASCII convergence history file.
end       Terminate momentum equation solver parameters.
4.1.4 Pressure Poisson Equation-Solver Parameters

ppesol 61 - JPCG solver options:

Keyword   Variable & Meaning
itmax     Iteration limit (default: itmax=10).
itchk     Interval to check convergence (default: itchk=5).
eps       Convergence tolerance (default: eps=1.0e-10).
wrt       0: Suppress diagnostic information from solver (default).
          1: Write diagnostic information from solver.
hist      0: Suppress writing an ASCII convergence history file (default).
          1: Write an ASCII convergence history file.
4.1.6 Turbulence Models

Turbulence models and their associated parameters are specified in the HYDRA control file. For all two-equation models and second-order closure models, the default for ndof is automatically set when the user activates the turbulence model.

Keyword   Variable & Meaning
turb n    Activate turbulence model #n (default: n=0).
          1: Smagorinsky subgrid scale model without dynamic subgrid
          scale.
          102: Lien and Leschziner [13] k-ε model - Cartesian grids only.
          103: Suga, Craft, Launder [10] k-ε-A2 model (not available yet).

n=1 Smagorinsky Model:
smagc     Set the value of the Smagorinsky constant (default: smagc=0.1).
4.2 HYDRA Mesh File

The HYDRA mesh file contains an 80 character comment line, the spatial coordinates of the nodes, the nodal connectivity, all specified initial conditions, and the boundary conditions. The 80 character comment line must be the first line in the input file and is echoed in the human readable output file. Lines may be commented out by using a "C" followed by a blank space, or by using #, *, $, or by enclosing a region of the input file in braces, { }. Because the data in the mesh file is usually generated by an automatic mesh generator, the data in this file cannot be input in a format-free style as in the case of the HYDRA control file. However, all input in the mesh file is case insensitive.
For many problems, load curves for essential boundary conditions are not necessary, e.g., no-slip boundaries or boundaries with prescribed velocity are the most common. Therefore, a blank field for the load curve number associated with a boundary condition indicates that the amplitude of the boundary condition should be used as the boundary condition, i.e., the boundary condition does not vary with time. This applies to all prescribed boundary conditions in HYDRA.
Connectivity: Q1P0 Elements

Columns   Format   Description
9-13      I5       Material number.
14-21     I8       Local node #1.
22-29     I8       Local node #2.
30-37     I8       Local node #3.
38-45     I8       Local node #4.
...
70-77     I8       Local node #8.
(Nodes 5-8 are ignored for 2-D.)
The order of the required essential and natural boundary conditions in the mesh file is shown in the table below with the corresponding analysis parameter from the control file and its associated mesh input requirements. In the table which follows, EBC refers to an essential boundary condition, e.g., a specified value of velocity. In contrast, NBC refers to a natural boundary condition, e.g., specified tractions.
Columns   Format   Description
1-10      I10      Node number.
11-30     E20.0    Prescribed initial temperature.
ntbc values of specified temperature must be input for the advection diffusion equation.
See the table in section 4.2.7 for additional information.
Chapter 5
Example Problems

This chapter presents several 2-D and 3-D HYDRA calculations. These computations are provided for the first-time user who wishes to perform several benchmark computations for comparison before embarking on a detailed analysis. For this reason, the control files are replicated here with representative results which can be compared against for a rough validation of the local HYDRA installation. These problems are not intended to be a comprehensive benchmark suite. Instead, the calculations presented here are simply intended to provide the first-time user with a set of test cases which can be used for code familiarization.

For this reason, most of the sample problems use relatively coarse meshes to minimize run times and provide a starting point for the user who wishes to experiment with code options before attempting any significant calculations. All of the sample calculations are isothermal, but they would require minimal changes to activate the scalar advection-diffusion equation.
D i v e r g e n c e   E r r o r

Allowable Divergence ......................... 1.0000E-10
Initial Divergence ........................... 1.1379E-08
Projected Divergence ......................... 1.0474E-17

G r i d   R e y n o l d s   &   C F L   N u m b e r s

xi-grid Reynolds Numbers:
--------------------------
Min. Element no. ............................. 1
Min. xi-element dimension .................... 8.7298E-02
Minimum xi-grid Reynolds Number .............. 4.3649E+00
Max. Element no. ............................. 140
Max. xi-element dimension .................... 5.3391E-01
Maximum xi-grid Reynolds Number .............. 5.3391E+01

eta-grid Reynolds Numbers:
--------------------------
Min. Element no. ............................. 158
Min. eta-element dimension ................... 1.0000E-01
Minimum eta-grid Reynolds Number ............. 2.1019E-09
Max. Element no. ............................. 79
Max. eta-element dimension ................... 1.0000E-01
Maximum eta-grid Reynolds Number ............. 1.1933E-04

xi-grid CFL Numbers:
--------------------
Min. Element no. ............................. 200
Min. xi-element dimension .................... 5.3391E-01
Minimum xi-grid CFL Number ................... 4.6825E-02
Max. Element no. ............................. 161
Max. xi-element dimension .................... 8.7298E-02
Maximum xi-grid CFL Number ................... 5.7275E-01

eta-grid CFL Numbers:
---------------------
Min. Element no. ............................. 158
Min. eta-element dimension ................... 1.0000E-01
Minimum eta-grid CFL Number .................. 1.0510E-10
Max. Element no. ............................. 79
Max. eta-element dimension ................... 1.0000E-01
Maximum eta-grid CFL Number .................. 5.9665E-06
The stream function contours for the velocity field are shown in Fig. 5.4 at 100 time units. The corresponding pressure field contours are shown in Fig. 5.5. Velocity time history plots are shown in Fig. 5.6 for the x- and y-velocity components along the duct centerline (nodes 17, 149 and 226). The outlet x-velocity profile in Fig. 5.7 shows the expected parabolic velocity distribution for hydrodynamically fully developed flow. The kinetic energy for this calculation is also shown in Fig. 5.8.
Figure 5.5: Pressure field contours.

Figure 5.6: Velocity time histories along the duct centerline (nodes 17, 149, 226): a) x-velocity time history; b) y-velocity time history.

Figure 5.7: Outlet x-velocity profile.

Figure 5.8: Kinetic energy time history.
The non-leaky lid driven cavity problem results in both a hydrostatic and a checkerboard mode in the pressure field. Thus, npbc 2 is specified in the control file, and the two pressures at the bottom center of the cavity are pegged at 0.0.

The pressure, stream function, and vorticity contours at t=25 units are shown in Figures 5.11-5.13. In comparison to the results of Hughes, et al., the results for these quantities are qualitatively similar. Figure 5.14 shows the velocity vectors for the problem after 25 time units. Examination of the kinetic energy time history in Fig. 5.15 shows that the problem is essentially steady-state after only 10 time units.
mesh
nnp 441
nel 400
nnpe 4
nmat 1
ndim 2
end
analyze
solve 3
nubc 84
nvbc 84
npbc 2
nstep 100
plti 20
prti 100
pltype 1
icset 2220000
divu 1.0e-10
deltat 0.25
dtchk -1
term 1000.00
rho 1.0
nu 1.0e-2
end
ndhist 4
nstep 1
st 25 en 25
st 149 en 149
st 419 en 419
st 311 en 311
end
ppesol 41 end
momsol 1
itmax 400
itchk 10
eps 1.0e-5
wrt 0
end
Figure 5.11: Lid driven cavity pressure contours.
Figure 5.15: Lid driven cavity kinetic energy time history.
Figure 5.18: Stream function contours. psi = +/-0.05, 0.1, 0.15, 0.2, 0.4, 0.6, 1.0, 2.0, 3.0 at t = 250 time units.
(Vorticity contours; x- and y-velocity time histories at nodes 441, 447, 653, 844, and 899.)
For Re=800, a steady-state solution results. The stream function and vorticity contours at t=250 time units are shown in Figs. 5.24 and 5.25, respectively. The reattachment point on the lower boundary is approximately 5.40 units downstream of the step. In comparison to the results of Gartling, this is an 11.48 percent error, but with comparatively few elements. The velocity time history plots in Fig. 5.26 show that the velocity field is still changing, although the kinetic energy time history in Fig. 5.27 appears to be steady. (The time history nodes were selected at x=5.866 and y=0.1, 0.5, 0.9.)
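Gartling's benchmark reattachment length for this case is approximately 6.10 step heights; assuming that reference value, the quoted error follows directly:

$$ \text{error} = \frac{6.10 - 5.40}{6.10} \approx 0.1148 = 11.48\%. $$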
BFS #4 - Re=800, parabolic inflow
mesh
nnp 2121
nel 2000
nnpe 4
nmat 1
ndim 2
end
analyze
solve 3
nubc 222
nvbc 224
nstep 2500
plti 50
prti 250
pltype 1
btd 1
icset 0000000 ;
divu 1.0e-10
deltat 0.100
dtchk -1
dtscal 1
term 500.00
rho 1.0
nu 1.25e-3
end
ndhist 8
nstep 1
st 902 en 902
st 896 en 896
st 1927 en 1927
st 1050 en 1050
st 1056 en 1056
st 2067 en 2067
st 528 en 528
st 672 en 672
end
momsol 1
itmax 250
itchk 20
eps 1.0e-6
wrt 0
end
ppesol 41 end
end
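This deck's mesh entries are likewise consistent with a structured 100x20 element channel (an illustrative check only, assuming a logically rectangular mesh):

# 100 x 20 quadrilateral elements imply 101 x 21 nodes.
assert 100 * 20 == 2000   # nel: number of elements
assert 101 * 21 == 2121   # nnp: number of nodal points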
Figure 5.24: Stream function contours. psi = 0.0, +/-0.1, +/-0.2, 0.225, -0.275, -0.285, -0.295 at t = 250 time units.
Figure 5.25: Vorticity contours at t = 250 time units.
Figure 5.26: a) x-velocity and b) y-velocity time histories at nodes 902, 896, and 1927.
(Pressure contours; y- and z-velocity time histories at nodes 36, 72, 108, 144, 180, 936, and 972.)
Figure 5.33: Kinetic energy time history plot.
mesh
nnp 3704
nel 1760
nnpe 8
nmat 1
ndim 3
end
analyze
solve 3
nubc 350
nvbc 3704
nwbc 350
nstep 1000
plti 50
prti 500
pltype 1
icset 2220000 ;
divu 1.0e-10
term 500.0
deltat 0.25
dtscal 1.0
dtchk -1
rho 1.000000E+00
nu 1.000000E-02
end
ppesol 41 end
momsol 1
itmax 100
itchk 10
eps 1.0e-5
wrt 0
end
ndhist 5
nstep 1
st 671 en 671
st 678 en 678
st 1135 en 1135
st 1327 en 1327
st 1760 en 1760
end
end
(Pressure contours; x- and z-velocity time histories at nodes 671, 678, 1135, 1327, and 1760.)
Figure 5.39: Kinetic energy time history.
The interaction of the plate boundary layer and the post results in longitudinal vortices being shed from the root of the post in the downstream direction. Figure 5.42 shows a snapshot of pressure isosurfaces during a vortex shedding cycle. Figure 5.43 shows isosurfaces of the z-vorticity, also at 380 time units. The helicity ($\vec{u}\cdot\vec{\omega}$) is shown in Fig. 5.44.
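Here the helicity is the standard pointwise scalar product of velocity and vorticity (stated for reference):

$$ H = \vec{u}\cdot\vec{\omega} = u\,\omega_x + v\,\omega_y + w\,\omega_z, \qquad \vec{\omega} = \nabla\times\vec{u}, $$

which is largest in magnitude where the velocity is aligned with the vortex cores shed from the post.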
Several velocity time history plots are shown in Fig. 5.45. The time history nodes were placed in the symmetry plane, directly downstream of the post. Figure 5.46 shows the kinetic energy time history for the calculation.
Post w/ Re=100
mesh
nnp 16096
nel 13800
nnpe 8
nmat 1
ndim 3
end
analyze
solve 3
npbc 0
nubc 2747
nvbc 3643
nwbc 3814
nstep 1
plti 1
prti 10
pltype 1
icset 2220000 ;
divu 1.0e-10
term 1.000000E+03
deltat 0.500000E+00
dtscal 1.0
dtchk -1
rho 1.000000E+00
nu 1.000000E-02
end
ppesol 41 end
ndhist 1
st 1 en 10
nstep 1
end
momsol 1
itmax 100
itchk 10
eps 1.0e-5
wrt 1
end
end
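With nu = 1.0e-2 from the deck and, by assumption, a unit free-stream velocity and unit post diameter as the reference scales, the Reynolds number in the deck title is recovered:

$$ Re = \frac{U\,D}{\nu} = \frac{(1.0)(1.0)}{1.0\times10^{-2}} = 100. $$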
Figure 5.44: Helicity isosurfaces at t = 380 time units.
Figure 5.45: Velocity component time histories at the symmetry-plane nodes downstream of the post.
Figure 5.46: Kinetic energy time history.
4. P.M. Gresho, S.T. Chan, R.L. Lee, and C.D. Upson, “A Modified Finite Element Method for Solving the Time-Dependent, Incompressible Navier-Stokes Equations. Part 1: Theory,” Int. Journal for Numerical Methods in Fluids, Vol. 4, pp. 557-598, 1984.
5. P.M. Gresho, S.T. Chan, R.L. Lee, and C.D. Upson, “A Modified Finite Element Method for Solving the Time-Dependent, Incompressible Navier-Stokes Equations. Part 2: Applications,” Int. Journal for Numerical Methods in Fluids, Vol. 4, pp. 619-640, 1984.
6. P.M. Gresho, “On the Theory of Semi-implicit Projection Methods for Viscous
Incompressible Flow and Its Implementation via a Finite Element Method that also
Introduces a Nearly Consistent Mass Matrix. Part 1: Theory,” Int. Journal for
Numerical Methods in Fluids, Vol. 11, pp. 587-620, 1990.
7. P.M. Gresho, and S.T. Chan, “On the Theory of Semi-implicit Projection Methods
for Viscous Incompressible Flow and Its Implementation via a Finite Element
Method that also Introduces a Nearly Consistent Mass Matrix. Part 2: Applications,”
Int. Journal for Numerical Methods in Fluids, Vol. 11, pp. 587-620, 1990.
14. M.A. Christon, D. Dovey, and J.O. Hallquist, “INGRID: A 3-D Mesh Generator for Modeling Nonlinear Systems,” UCRL-MA-10970, September, 1992.
15. D.J. Dovey, and T.E. Spelce, “GRIZ: Finite Element Analysis Results Visualization for Unstructured Grids,” Draft manual, version 2, February, 1993.
22. W.K. Liu, J.S. Ong, and R.A. Uras, “Finite Element Stabilization Matrices,” Computer Methods in Applied Mechanics and Engineering, Vol. 53, pp. 13-46, 1985.
23. T. Belytschko and W.K. Liu, “On Reduced Matrix Inversion for Operator Splitting Methods,” International Journal for Numerical Methods in Engineering, Vol. 20, pp. 385-390, 1984.
24. G.L. Goudreau, and J.O. Hallquist, “Recent Developments in Large-Scale Finite Element Lagrangian Hydrocode Technology,” Computer Methods in Applied Mechanics and Engineering, Vol. 33, pp. 725, 1982.
25. F.S. Beckman, “The Solution of Linear Equations by the Conjugate Gradient Method,” Mathematical Methods for Digital Computers, Vol. 1, pp. 62-72, 1965.
26. W.L. Briggs, A Multigrid Tutorial, Lancaster Press, Lancaster, PA, 1987.
28. O.O. Storaasli, D.T. Nguyen, and T.K. Agarwal, “A Parallel-Vector Algorithm for Rapid Structural Analysis on High-Performance Computers,” NASA Technical Memorandum 102614, April, 1990.
29. T.K. Agarwal, O.O. Storaasli, D.T. Nguyen, “A Parallel-Vector Algorithm for Rapid Structural Analysis on High-Performance Computers,” AIAA 90-1149, April, 1990.
30. S.C. Eisenstat, “Efficient Implementation of a Class of Preconditioned Conjugate Gradient Methods,” SIAM J. Sci. Stat. Comput., Vol. 2, No. 1, pp. 1-4, March, 1981.
31. S.C. Eisenstat, J.M. Ortega, and C.T. Vaughan, “Efficient Polynomial Preconditioning for the Conjugate Gradient Method,” SIAM J. Sci. Stat. Comput., Vol. 11, No. 5, pp. 859-872, Sept. 1990.
32. I. Fried, “A Gradient Computational Procedure for the Solution of Large Problems
Arising from the Finite Element Discretization Method,” International Journal for
Numerical Methods in Engineering, Vol. 2, pp. 477-494, 1970.
36. T.C. Oppe, W.D. Joubert, D.R. Kincaid, NSPCG User’s Guide, Version 1.0, Center for Numerical Analysis, The University of Texas at Austin, April, 1988.
37. D.J. Tritton, Physical Fluid Dynamics, Second Edition, Oxford University Press,
New York, 1988.
38. J.J. Dongarra, I.S. Duff, D.C. Sorensen, H.A. van der Vorst, Solving Linear
Systems on Vector and Shared Memory Computers, SIAM, Philadelphia,
Pennsylvania, 1991.
39. 0. Axelsson, Iterative Solution Methods, Cambridge University Press, New York,
1994.
40. S.F. McCormick, Multilevel Projection Methods for Partial Differential Equations,
SIAM, Philadelphia, Pennsylvania, 1992.
41. R. Barrett, M. Berry, T.F. Chan, J. Demmel, J. Donato, J. Dongarra, V. Eijkhout, R. Pozo, C. Romine, H. van der Vorst, Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM, Philadelphia, Pennsylvania, 1994.
42. O. Axelsson, “A Generalized SSOR Method,” BIT, Vol. 13, pp. 443-467, 1972.
43. Christon, M., Some Useful Algorithms for Treating Unstructured Grids, in
preparation for IJNME, 1994.
44. Gresho, P.M., Time Integration and Conjugate Gradient Methods for the
Incompressible Navier-Stokes Equations, LLNL, UCRL-9400, January 1986.
45. N.E. Gibbs, W.G. Poole, Jr., and P.K. Stockmeyer, “An Algorithm for Reducing the Bandwidth and Profile of a Sparse Matrix,” SIAM J. Numer. Anal., Vol. 13, No. 2, pp. 236-250, 1976.
46. Storaasli, O.O., D.T. Nguyen, and T.K. Agarwal, “A Parallel-Vector Algorithm for Rapid Structural Analysis on High-Performance Computers,” NASA Technical Memorandum 102614, April, 1990.