Development of a
Scientific Visualization System
Thesis submitted in fulfillment of the requirements for the award of the degree of
Doctor in de ingenieurswetenschappen (Doctor in Engineering) by
Dean Vučinić
July 2007
Advisor(s):
PhD Committee
President:
Prof. Dr. Jacques Tiberghien
Vrije Universiteit Brussel (VUB)
Department of Electronics and Informatics (ETRO)
Vice President:
Prof. Dr. Ir. Rik Pintelon
Vrije Universiteit Brussel (VUB)
Department of Fundamental Electricity and Instrumentation (ELEC)
Secretary:
Prof. Dr. Ir. Jacques De Ruyck
Vrije Universiteit Brussel (VUB)
Head of the Mechanical Engineering Department (MECH)
Advisors:
Prof. Dr. Ir. Chris Lacor
Vrije Universiteit Brussel (VUB)
Department of Mechanical Engineering (MECH)
Head of the Fluid Mechanics and Thermodynamics Research Group
Prof. em. Dr. Ir. Charles Hirsch
Vrije Universiteit Brussel (VUB)
President of NUMECA International
Members:
Prof. Dr. Ir. Jan Cornelis
Vrije Universiteit Brussel (VUB)
Vice-rector for Research and Development
Head of the Electronics and Informatics Department (ETRO)
Prof. Dr. Ir. Herman Deconinck
von Karman Institute for Fluid Mechanics (VKI)
Head of Aeronautics and Aerospace Department
Dr. François-Xavier Josset
Thales Research & Technology (TRT) France
Department of Cognitive Solutions
Prof. Dr. Arthur Rizzi
Royal Institute of Technology (KTH) Sweden
Head of Aerodynamics Division
TABLE OF CONTENTS
ABSTRACT........................................................................................................................................................... III
ACKNOWLEDGMENTS .......................................................................................................................................... IV
NOMENCLATURE ................................................................................................................................................. VI
LIST OF FIGURES ............................................................................................................................................... VIII
LIST OF TABLES ................................................................................................................................................. XII
Introduction ........................................................................................................ 1
SCIENTIFIC VISUALIZATION MODEL..................................................................................................................... 4
SCIENTIFIC VISUALIZATION ENVIRONMENT ......................................................................................................... 6
SCIENTIFIC VISUALIZATION SOFTWARE - STATE OF THE ART ............................................................................... 8
OBJECT ORIENTED METHODOLOGY ................................................................................................................... 13
R&D PROJECTS HISTORY .................................................................................................................................... 16
The flow downstream of a bluff body in a double annular confined jet......................................................... 18
In-cylinder axisymmetric flows...................................................................................................................... 19
Flow pattern of milk between 2 corrugated plates ........................................................................................ 20
TOWARDS AN INTEGRATED MODELING ENVIRONMENT ..................................................................................... 22
THESIS ORGANIZATION ...................................................................................................................................... 24
Abstract
This thesis describes a novel approach to create, develop and utilize software tools for visualization in scientific
and engineering applications. These Scientific Visualization (SV) tools are highly interactive visual aids which
allow analysis and inspection of complex numerical data generated from high-bandwidth data sources such as
simulation software, experimental rigs, satellites, scanners, etc. The data of interest typically represent physical
variables -- 2- and 3-dimensional scalar, vector and tensor fields on structured and unstructured meshes in
multidomain (multiblock) configurations. The advanced SV tools designed during the course of this work permit
data extraction, visualization, interpretation and analysis at a degree of interaction and effectiveness that was not
available with previous visualization techniques.
The Object-Oriented Methodology (OOM), which is the software technology at the basis of the approach
advocated in this thesis, is very well adapted for large-scale software development: OOM makes SV tools
possible and makes them a usable, innovative investigation instrument for the engineer and the researcher in all
areas of pure and applied research. The advanced SV tools that we have developed allow the investigator to
examine qualitatively and quantitatively the details of a phenomenon of interest, in a unified and transparent
way. Our SV tools integrate several well-known algorithms -- such as the cutting plane, iso-surface and particle
trace algorithms -- and enhance them with an ergonomic graphical user interface. The resulting SV system
implements the reusability and encapsulation principles in its software components, which support both space
discretization (unstructured and structured meshes) and continuum (scalar and vector fields) unconstrained by
the grid topology. New implementation mechanisms applied to the class hierarchies have been developed
beyond existing object-oriented programming methods to cover a broader range of interactive techniques. A
solution was found to the problem of developing, selecting and combining classes as reusable components. The
object-oriented software development life-cycle was mastered in the development of these classes, which were
finally packaged in a set of original class libraries.
A main outcome of our approach was to deliver one of the first frameworks that integrates 3D graphics and
windowing behavior based on software components implemented in C++ only. This framework ensures
maximal portability across hardware platforms and establishes the basis for reusable software in industrial
applications, such as an integrated Computational Fluid Dynamics (CFD) environment (pre/post-processing and
the solver). Another important outcome of this work is an integrated set of SV tools -- the Computational Field
Visualization System (CFView) -- available to investigators in field physics in general, and specifically to CFD
researchers and engineers. Several CFD examples are presented and discussed to illustrate the new development
techniques for scientific visualization tools.
Acknowledgments
I would like to express my gratitude to my supervisor, Prof. Charles Hirsch, for taking the risk of introducing an
innovative software-construction methodology to the CFD community and for giving me the opportunity to
master it during all these years. His probing remarks, his continuous support and advice spurred many thoughts
and were always an incentive for me to pursue excellence in this work. Without Prof. Chris Lacor, this thesis
would not have come to completion, and I deeply thank him for his support and encouragement in the
finalization phase of this thesis.
Thanks go to all my colleagues, former or present members of the Department of Mechanical Engineering, the
former Department of Fluid Mechanics of the Vrije Universiteit Brussel, who contributed to the development of
CFView: Michel Pottiez, Vincent Sotiaux, Cem Dener, Marc Tombroff, Jo Decuyper, Didier Keymeulen, Jan
Torreele, Chris Verret, as well as to the developers at NUMECA: Jorge Leal Portela, Etienne Robin, Guy
Stroobant and Alpesh Patel, who pursued the development of CFView and made it an advanced scientific
visualization product for turbo-machinery applications.
Special thanks go to the many researchers and engineers who used CFView: Shun Kang, Peter Segaert, Andreas
Sturmayr, Marco Mulas, Prasad Alavilli, Zhongwen Zhu, Peter Van Ransbeeck, Erbing Shang, Nouredine
Hakimi, Eric Lorrain, Benoît Léonard, François Schmitt, Evgeni Smirnov, Andrei Khodak, Martin Aube, Eric
Grandry, for their valuable comments and suggestions. Since 1988, when I joined the Department, I have had
many useful and challenging interactions with Francisco Alcrudo, Antoine van Marcke de Lummen, Guido Van
Dyck, Peter Grognard, Luc Dewilde, Rudy Derdelinckx, Steven Haesaerts, Famakan Kamissoko, Hugo Van
Bockryck, Karl Pottie, Wim Teugels, Wim Collaer, Kris Sollie and Pascal Herbosch; these exchanges of views
contributed to making this work richer and better. I happily acknowledge the inputs of the younger group of
scientists at our Department and their ideas on novel visualization aspects in acoustics, combustion, medicine
and environment domains: Stephan Geerts, Jan Ramboer, Tim Broeckhoven, Ghader Ghorbaniasl, Santhosh
Jayaraju, Mark Brouns, Mahdi Zakyani and Patryk Widera. I am indebted to Jenny Dhaes, Alain Wery and
Michel Desees, without whom my daily work at the Department would not be possible.
I am pleased to mention the contribution of Prof. Jacques de Ruyck on PLOT83, elements of which were applied
in CFView. I also thank Prof. Herman Deconinck for offering me the opportunity to give a lecture on
object-oriented programming in computer graphics at the von Karman Institute in 1991, which gave momentum to my
work in this exciting research area. For our interactions on unstructured modeling, I thank Jean-Marie Marchal
and gratefully remember the late Yves Rubin, whose suggestions prompted the development of CFView in the
finite-element-method domain. Special thanks go to Patrick Vankeirsbilck for many enlightening discussions on
object-oriented programming practice.
I would like to thank Koen Grijspeerdt for our teamwork in the IWT-funded LCLMS project, and for his effort
in introducing CFD and visualization in food-production applications.
Special thanks go to Birinchi Kumar Hazarika for the work in the EC-funded ALICE project and to Cristian
Dinescu, John Favaro, Bernhard Snder, Ian Jenkinson, Giordano Tanzini, Renato Campo, Gilles Gruez,
Pasquale Schiano and Petar Brajak.
I take the opportunity to thank my collaborators Danny Deen, Emil Oanta and Zvonimir Batarilo for successful
development of the Java visualization components in the SERKET project. I would like to thank Prof. Jan
Cornelis and his collaborators Hichem Sahli, Rudi Deklerk and Peter Schelkens for their contributions when
extending scientific visualization to general-purpose information systems. Special thanks also go to Claude
Mayer, François-Xavier Josset, Tomasz Luniewski, Jef Vanbockryck, Claude Desmet, Karel Buijsse, Christophe
Herreman, Luc Desimpelaere and Richard Aked for their help in shaping the visualization development in ITEA
projects.
Two EU-funded TEMPUS projects made it possible for me to interact again with Croatia and Macedonia, and
my thanks go to Prof. Zdravko Terze and Prof. Milan Kosevski for encouraging me to complete this thesis.
Special thanks go to Bernard Delcourt for his thorough proof-reading and improving the written English, and
sharing with me the last moments before the publishing of this thesis.
The funding of the European Commission (EC) and the Flemish Institute for Innovation and Technology (IWT)
is gratefully acknowledged; the LCLMS, ALICE, LASCOT, QNET-CFD and SERKET projects have been
instrumental in allowing me to carry out my research. I am grateful to Vrije Universiteit Brussel for providing
the necessary research and computer facilities, not only to accomplish this work, but also to complete the
engaged projects.
I would like to thank my parents for their moral and financial support, which made it possible for me to come to
Belgium; I will always remember my mother for her unfailing enthusiasm for research work, an attitude she has
passed on to me and for which I will be forever grateful. I thank my father and brother, who kept urging me on
and pushing me to complete this work.
Finally and most importantly, I wish to thank my wife and children for their love, patience, support and
encouragement during the many years, months, weekends, days and evening hours that went into this longer-than-expected undertaking.
Dean Vučinić
Brussels, July 2007
Nomenclature

∀ : for all
∃ : there exists
= : equals
Σ : summation
Π : product

Expressions

g_ij : metric tensor components (i, j = 1, 2, 3)

Symbols

i, j, k : structured grid indices
I, J, K : structured grid dimensions
J : Jacobian matrix
G : grid
n : normal vector
p : pressure
r, x : position vector
S : structured grid
A : transformation matrix
u, v, w : parametric curvilinear coordinates
U : unstructured grid
x, y, z : Cartesian coordinates
X : hybrid grid

Greek symbols

δ : Kronecker delta; search radius
∂ : partial differential operator; boundary
ε : error of numerical solution
∇ : gradient operator
Δ : Laplacian operator; forward difference operator
Δu, Δv, Δw : spacing in parametric coordinates
Δx, Δy, Δz : spacing in Cartesian coordinates
∇· : divergence operator
∇× : curl of a vector

Abbreviations
ADT : Abstract Data Type
AFX : Animation Framework eXtension
AI : Artificial Intelligence
ANC : Automatic Naming Convention
ATDC : After Top Dead Centre
AVS : Advanced Visual Systems
BC : Boundary Condition
BREP : Boundary Representation
CA : Crank Angle
CAD : Computer Aided Design
CAE : Computer Aided Engineering
CFD : Computational Fluid Dynamics
CG : Computer Graphics
CFView : Computational Flow Field Visualization
CON : Connection BC
CPU : Central Processing Unit
DFD : Data Flow Diagram
DNS : Direct Numerical Simulation
DPIV : Digital Particle Image Velocimetry
EFD : Experimental Fluid Dynamics
ERM : Entity Relationship Model
EXT : External BC
FD : Finite Difference
FE : Finite Element
FEA : Finite Element Analysis
FO : Function Oriented
FV : Finite Volume
GPU : Graphics Processing Unit
GUI : Graphical User Interface
HOOPS : Hierarchical Object Oriented Picture System
HWA : Hot-Wire Anemometry
IME : Integrated Modeling Environment
INL : Inlet BC
J2EE : Java 2 Platform, Enterprise Edition
JOnAS : Java Open Source J2EE Application Server
KEE : Knowledge Engineering Environment
LAN : Local Area Network
LG : Local to Global index mapping
LDV : Laser Doppler Velocimetry
LES : Large Eddy Simulation
LSV : Light Sheet Visualization
MB : Mega Bytes
MFLOPS : Millions of Floating Point Operations per Second ("MegaFlops")
MIPS : Millions of Instructions per Second
MIMD : Multiple Instruction, Multiple Data
MPEG : Moving Picture Experts Group
MVC : Model View Controller
MVE : Modular Visualization Environment
OO : Object Oriented
OOM : Object Oriented Methodology
OOP : Object Oriented Programming
OOPL : Object-Oriented Programming Language
OUT : Outlet BC
PC : Personal Computer
PDE : Partial Differential Equation
PER : Periodic BC
PIV : Particle Image Velocimetry
PHIGS : Programmer's Hierarchical Interactive Graphics System
PVM : Parallel Virtual Machine
QFView : Quantitative Flow Field Visualization
QoS : Quality of Service
RAM : Random Access Memory
RANS : Reynolds-Averaged Navier-Stokes
RG : Raster Graphics
RMI : Remote Method Invocation
RMS : Root Mean Square
ROI : Region of Interest
SDK : Software Development Kit
SGS : Sub-Grid Scale
SIMD : Single Instruction, Multiple Data
SISD : Single Instruction, Single Data
SNG : Singularity BC
SOL : Solid wall BC
SOAP : Simple Object Access Protocol
STD : State Transition Diagram
SYM : Symmetry BC
SV : Scientific Visualization
SVS : Scientific Visualization System
TKE : Turbulent Kinetic Energy
VG : Vector Graphics
VisAD : VISualization for Algorithm Development
VTK : Visualization ToolKit
VUB : Vrije Universiteit Brussel
WWW : World Wide Web
WS : Web Services
List of Figures
Figure 1: The scientific visualization role _______________________________________________________2
Figure 2: The Scientific Visualization Model _____________________________________________________4
Figure 3: The Visualization Data Sets __________________________________________________________5
Figure 4: Integrated Computational Environment _________________________________________________6
Figure 5: CFView the scientific visualization system_______________________________________________9
Figure 6: The OpenDX Application Builder_____________________________________________________10
Figure 7: VisAD application example _________________________________________________________11
Figure 8: The integrated modeling environment from Dassault Systèmes and ANSYS, Inc _________________12
Figure 9: The comparison of Hardware/Software productivity ______________________________________13
Figure 10: Graphics Engine as combine software/hardware solution _________________________________15
Figure 11: Software Components Distribution___________________________________________________15
Figure 12: QFView Web Interface ____________________________________________________________17
Figure 13: Use of EFD and CFD tools ________________________________________________________18
Figure 14: Laminar flow at Re_t = 60: DPIV of nearly axisymmetric flow, LSV of vortex shedding and CFD at
various degrees of non-axisymmetric flow ______________________________________________________18
Figure 15: PIV system at VUB _______________________________________________________________19
Figure 16: Flow pattern at 90° ATDC: (a) visualization at 20 rev/min, valve lift = 10 mm (b) Average velocity
field at 5 rev/min, valve lift = 10 mm __________________________________________________________19
Figure 17: Turbulent kinetic energy field at 90° ATDC at 5 rev/min, valve lift = 10 mm ____________________20
Figure 18: Average vorticity field at 90° ATDC at 5 rev/min, valve lift = 10 mm__________________________20
Figure 19: Experimental and CFD model for flow analysis between corrugated plates ___________________21
Figure 20: Workflow for the integration of EFD and CFD simulations _______________________________21
Figure 21: Example of an Integrated Modeling Environment [57] ___________________________________22
Figure 22: Example of the 3D virtual car model testing ___________________________________________23
Figure 23: Software model as communication media in the software development process ________________25
Figure 24: Entity-Relationship Model _________________________________________________________26
Figure 25: Data model decomposition _________________________________________________________28
Figure 26: Cell classification ________________________________________________________________30
Figure 27: Developed cell ERM ______________________________________________________________31
Figure 28: 1D & 2D Cell topologies __________________________________________________________32
Figure 151: Surface and 3D Streamlines generation from a cutting plane surface ______ 147
Figure 152: Vector lines representations from structured surface points, with the required toolbox in action ______ 147
Figure 153: ERM of the interaction process ______ 151
Figure 154: The menu structure ______ 152
Figure 155: CFView GUI layout ______ 152
Figure 156: Different view types ______ 153
Figure 157: Evolution of GUI ______ 154
Figure 158: Reminders for different interactive components ______ 155
Figure 159: Cube model for sizing the viewing space ______ 156
Figure 160: Clipping planes in viewing space ______ 156
Figure 161: Coordinates system and 3D mouse-cursor input ______ 157
Figure 162: View projection types ______ 158
Figure 163: Viewing buttons ______ 158
Figure 164: Camera model and its viewing space ______ 158
Figure 165: Camera parameters and virtual sphere used for camera rotation ______ 159
Figure 166: Symbolic calculator for the definition of new field quantities ______ 160
Figure 167: EUROVAL visualization scenario for the airfoil test case ______ 161
Figure 168: EUROVAL visualization scenario for the Delery and ONERA bump ______ 162
Figure 169: Setting of graphical primitives ______ 163
Figure 170: Superposing different views ______ 163
Figure 171: Different graphical primitives showing the same scalar field ______ 164
Figure 172: Different colormaps of the same scalar field ______ 165
Figure 173: Analytical surfaces generation for comparison purposes ______ 166
Figure 174: Comparison of the traditional and object-oriented software development life-cycle ______ 169
Figure 175: Object concept ______ 172
Figure 176: Abstract data type structure ______ 173
Figure 177: Point object ______ 174
Figure 178: Single and multiple inheritance ______ 175
Figure 179: Polymorphism ______ 176
Figure 180: DFD of the streamline example ______ 180
Figure 181: The partial ERD of the streamline example ______ 181
Figure 182: STD of the streamline example ______ 182
Figure 183: Class hierarchy diagram ______ 183
Figure 184: Class attribute diagram ______ 184
Figure 185: The MVC model with six basic relationships ______ 193
Figure 186: MVC framework for Surface manipulation ______ 195
Figure 187: Visualization system architecture ______ 197
Figure 188: Hierarchy of Geometry classes ______ 199
Figure 189: 3D View Layer ______ 200
Figure 190: Class hierarchy of the controller classes ______ 203
Figure 191: Event/Action coupling ______ 204
Figure 192: Eclipse Integrated Development Environment ______ 207
Figure 193: Knowledge domains involved in interactive visualization ______ 208
Figure 194: Conceptual overview of the SIMD/MIMD Parallel CFView system ______ 210
Figure 195: QFView, an Internet-based archiving and visualization environment ______ 212
Figure 196: The QFView framework ______ 213
Figure 197: VUB Burner Experiment ______ 214
Figure 198: The eight QNET-CFD newsletters ______ 215
Figure 200: The LASCOT scenario ______ 217
Figure 201: The security SERKET scenario ______ 218
Figure 202: The SERKET application ______ 219
Figure 203: Visualization of 3D Model ______ 221
Figure 204: Components of a 3D Model ______ 221
Figure 205: Graphical and Textual Annotations ______ 221
Figure 206: Representation of a Measurement ______ 221
Figure 207: Cone Trees ______ 222
Figure 208: Reconfigurable Disc Trees ______ 222
Figure 209: Mobile Device Controlling Virtual Worlds ______ 223
Figure 210: Mobile Application Over Internet ______ 223
Figure 211: Alternative User Interaction Devices ______ 223
Figure 212: Handheld Devices ______ 223
Figure 213: New generation of miniature computers and multi touch-screen inputs ____________________223
Figure 214: 3D Model of Machine on Display Wall _____________________________________________224
Figure 215: Scientific Visualization with Chromium ____________________________________________224
Figure 216: Example of Augmented Reality ____________________________________________________224
Figure 217: NASA Space Station on Display Wall _______________________________________________224
Figure 218: Collaborative Visualization ______________________________________________________224
Figure 219: 6xLCD Based Display Unit ______________________________________________________224
Figure 220: Parallel Rendering _____________________________________________________________224
Figure 221: 3D Model of Visualization Lab____________________________________________________224
Figure 222: Overview of the heterogeneous and distributed environment used for the theoretical benchmarks 248
Figure 223: The theoretical random-base meshes (a) 20x20x20 (b) 200x200x250 ______________________249
Figure 224: Mesh size 200x200x250 (a) Cutting plane and Particle traces (b) Isosurface ________________249
Figure 225: Average execution times in seconds for the algorithms on the different machines (with caching
mechanism enabled for the parallel implementations). ___________________________________________252
Figure 226: Average execution times in seconds for the SIMD and MIMD implementations of the isosurface
algorithm, with respect to the number of triangles generated (caching mechanism on) __________________253
Figure 227: Execution times in seconds for particle tracing with respect to the number of particles ________254
List of Tables
Table 1: Layered Software Architecture________________________________________________________14
Table 2: SEGMENT skeleton table____________________________________________________________32
Table 3: TRIANGLE skeleton table ___________________________________________________________32
Table 4: QUADRILATERAL skeleton table _____________________________________________________32
Table 5: TETRAHEDRON skeleton table _______________________________________________________34
Table 6: PYRAMID skeleton table ____________________________________________________________34
Table 7: PENTAHEDRON skeleton table_______________________________________________________34
Table 8: HEXAHEDRON skeleton table________________________________________________________35
Table 9: Structured zone parameterization _____________________________________________________47
Table 10: C++ implementation of the hashing value ______________________________________________53
Table 11: Boundary indexing for 2D and 3D structured grids_______________________________________57
Table 12: Domain connectivity specification in 2D _______________________________________________58
Table 13: Domain connectivity specification in 3D _______________________________________________59
Table 14: The WEB model record ____________________________________________________________61
Table 15: WEB model of the tetrahedron _______________________________________________________63
Table 16: The lookup table for the tetrahedron __________________________________________________64
Table 17: Polygon Subdivision _______________________________________________________________68
Table 18: Records from hexahedron lookup table with polygon partitions and multiple connected regions____69
Table 19: Lookup table for the triangle ________________________________________________________70
Table 20: Lookup table for the quadrilateral ____________________________________________________71
Table 21: Shape function for 3D isoparametric mapping __________________________________________78
Table 22: Reduction of multiplication operations ________________________________________________80
Table 23: Triangle truth table ______________________________________________________________104
Table 24: Triangle constraints path __________________________________________________________106
Table 25: Quadrilateral truth table __________________________________________________________106
Table 26: Quadrilateral constraints path ______________________________________________________107
Table 27: The mapping procedure of a cell boundary point between connected cells in 3D _______________119
Table 29: Standard notation for boundary conditions ____________________________________________128
Table 30: Comparison of classical and user-centered approach ___________________________________148
Table 31: Software quality factors ___________________________________________________________169
Table 32: Graphics primitives for different geometries and text ____________________________________201
Table 33: Average times (s) for Sequential, SIMD and MIMD implementations of Cutting Plane and Isosurface
algorithms (wall-clock time)________________________________________________________________211
Table 34: The lookup table for the pentahedron_________________________________________________234
Table 35: Research Projects Timeline ________________________________________________________239
Table 36: Average times for Cutting Plane (wall-clock time in seconds)______________________________251
Table 37: Average times for Isosurface (wall-clock time in seconds) ________________________________251
Table 38: Average times for Particle Trace (wall-clock time in seconds) _____________________________251
Table 39: Evolution of the execution times in seconds with the number of particles used _________________252
Table 40: Execution times in seconds for Isosurface on MIMD for different machine configurations (wall-clock
time) with varying number of processors ______________________________________________________252
Introduction
Fluid motion is studied in Fluid Dynamics [1, 2] by performing experimental and computational simulations that
researchers analyze in order to understand and predict fluid flow behaviors. This scientific process yields large
data sets, resulting from measurements or numerical computations. Scientific visualization comes naturally in
this process as the methodology that enhances comprehension and deepens insight in such large data sets. The
term Scientific Visualization was officially introduced and defined as a scientific discipline in 1987 at
SIGGRAPH [3]; it contributes to the role of computing captured in Richard Hamming's well-known remark that
"the purpose of computing is insight, not numbers".

Figure 1: The scientific visualization role
As shown in Figure 1, numerically generated data are the main input to the visualization system. The data
sources are experimental tests and computational models which yield high-resolution, multi-dimensional data
sets. Such large and complex data sets may consist of several scalar, vector and/or tensor quantities defined on
2D or 3D geometries; they become even larger when time-dependency or other specialized parameters are added
to the solution. Fast and selective extraction of qualitative and quantitative information is important for
interpreting the visualized phenomena. The performance (response time) of the visualization system becomes
critical when applying an iterative optimization procedure to the model under analysis. An adequate computer
system performance (response loop) must be in place, in order to match the activity of the human visual system
and the computer display of the extracted flow features to enable the user to enjoy the benefits of a truly
interactive visualization experience. The visualization system must also provide extensive interactive tools for
manipulating and comparing the computational and experimental models. For example, to lower the costs of a
new product development, we can reduce the range of experimental testing and increase the number of numerical
simulations, provided we are able to effectively exploit the existing experimental database.
The main role of SV is to present the data in a meaningful and easily understandable digital format. Visualization
can be defined as a set of transformations that convert raw data into a displayable image; the goal is to convert
the raw information into a format understandable by the human visual system, while maintaining the
completeness of the presented information expected by the end user. In our work, we used Vector Graphics (VG)
for applying colors and plotting functionality on geometrical primitives such as points, lines, curves, and
polygons. This is in contrast to the Raster Graphics (RG) approach, which represents images as a collection of
pixels. Other scientific visualization techniques rely on comparison and verification methods based on
high-resolution RG images. Visualization systems that analyze such RG images, for example those coming from
satellites and scanners, are not developed in this work; their results, however, are integrated as part of the
physical experiments performed.
The present work contributed to the development of Scientific Visualization by addressing the following
questions:
the design of search and extraction algorithms that can identify, compute and extract the
geometrical and quantitative data from selected data sets.

The visualization task is organized as a chain of data transformations (Figure 2): simulated data are reduced by
extraction and refinement into derived data, which are enhanced and enriched into graphical data, and finally
rendered into a displayable image.

Figure 2: The Scientific Visualization Model
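This transformation chain can be made concrete with a small sketch. The following C++ fragment is purely illustrative and is not CFView code; all type and function names are invented for the example, and the rendering stage is reduced to printing, standing in for the underlying graphics engine.

#include <cstddef>
#include <cstdio>
#include <vector>

// Stage 1: simulated data, as produced by a solver or an experiment.
struct SimulatedData { std::vector<double> scalars; };

// Stage 2: derived data, after extraction/refinement (here: a sub-sampled slice).
struct DerivedData { std::vector<double> slice; };

// Stage 3: graphical data, after enhancement/enrichment (scalar -> grey level).
struct GraphicalData { std::vector<float> grey; };

DerivedData extract(const SimulatedData& in) {
    DerivedData out;
    for (std::size_t i = 0; i < in.scalars.size(); i += 2)   // keep every second sample
        out.slice.push_back(in.scalars[i]);
    return out;
}

GraphicalData enhance(const DerivedData& in) {
    GraphicalData out;
    double lo = 1e300, hi = -1e300;
    for (double v : in.slice) { if (v < lo) lo = v; if (v > hi) hi = v; }
    for (double v : in.slice)                                 // linear "colormap" to [0,1]
        out.grey.push_back(hi > lo ? float((v - lo) / (hi - lo)) : 0.0f);
    return out;
}

void render(const GraphicalData& g) {                         // stands in for the display
    for (float v : g.grey) std::printf("%.2f ", v);
    std::printf("\n");
}

int main() {
    SimulatedData sim{{1.0, 4.0, 2.0, 8.0, 3.0, 6.0}};
    render(enhance(extract(sim)));                            // the full transformation chain
    return 0;
}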
The SV model needs to be well understood by the user -- the investigator -- in order for him/her to correctly
interpret the displayed data. The user must be able to fully control the amount of visualized information in order
to extract meaningful information from the data without being overloaded with graphical content.
Figure 3 shows examples of results of the visualization process. The solid model, mesh grid and computed
physical quantities are initial simulation data. Applying the next extracting transformation permits the interactive
generation of user-defined surface or curves. The mirror surface extracted from the double ellipsoid test case
represents the reduced data set. In the third step, the image is enriched and enhanced by a color mapping.
Computer-graphics primitives, such as polygonal meshes, store the data in a format that the underlying
computer-graphics engine can process. The final rendering transformation processes the 3D graphical objects with
viewing and lighting transformations.
store intermediate results. If the system's rendering module is capable of real-time animation, the accumulated
data can be visualized while the main numerical computation continues in parallel.
Interactive visualization accelerates the CFD design cycle by allowing the user to jump at will between the
various phases so as to optimize his/her CFD analysis. The user conducts the investigation in a highly interactive
manner, can easily compare variants of a simulation/analysis and may intuitively develop a deep understanding
of the simulation and of the calculation details. An example of an integrated environment application is the
Virtual Wind Tunnel [9], which reproduces a laboratory experiment in a virtual reality environment, where a
virtual model can be created and put to test with dramatic cost and time savings compared to what is done in the
real laboratory.
Such programs are appropriate for users who need off-the-shelf visualization functionality. Such software
implements the event-driven programming paradigm, which is suitable where all functions are launched by
the user interacting with the Graphical User Interface (GUI). This is the case for CFView [21], see Figure
5, a scientific visualization application developed by the author over the 1988-98 period. CFView started as
an academic application in 1988 and was continuously upgraded in the following years. In the mid-1990s,
CFView was taken over by the VUB spin-off company NUMECA and integrated in FINE, NUMECA's
engineering environment. FINE is an environment that nicely illustrates the variety of visualization tasks that
need to be performed to solve an engineering problem, especially for turbomachinery applications.
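The event-driven coupling just described can be sketched in a few lines of C++. The sketch is purely illustrative: the event names and handler bodies are invented and do not correspond to CFView's actual action set.

#include <cstdio>
#include <functional>
#include <map>
#include <string>

int main() {
    // Each GUI event is bound to the function it launches.
    std::map<std::string, std::function<void()>> handlers;
    handlers["CuttingPlane"] = [] { std::puts("computing cutting plane..."); };
    handlers["IsoSurface"]   = [] { std::puts("extracting iso-surface..."); };

    // The application idles until the user acts and then dispatches the event;
    // here a fixed script stands in for the user's menu and mouse input.
    const char* script[] = {"IsoSurface", "CuttingPlane"};
    for (const char* event : script) {
        auto it = handlers.find(event);
        if (it != handlers.end())
            it->second();   // launch the function bound to the event
    }
    return 0;
}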
2. Modular Visualization Environments (MVE) are programs often known as visualization programming
environments; examples are [22]:
AVS from Advanced Visual Systems [23],
IRIS Explorer from Silicon Graphics [22, 24],
OpenDX, IBM's Data Explorer [25],
PV-WAVE from Visual Numerics [26].
Their most significant characteristic is the visual programming paradigm. Visual programming intends to give
users an intuitive GUI with which to build customized visualization applications. The user graphically
manipulates programming modules displayed as boxes, which encapsulate the available functionality. By
inter-connecting boxes, the user defines the data stream from one module to another, thereby creating the
application. The MVE can be viewed as a visualization network with predefined building blocks, one which
often needs to be quite elaborate in order to be useful. The freedom given to users to design
their own visualization applications is the strength of these so-called application builders. This class of software
implements the data-flow paradigm, with the drawback that iterative and conditional constructs are difficult
to implement. For example, PV-WAVE uses an interactive fourth-generation programming language (4GL) for
application development, which supports conditional logic, data sub-setting and advanced numerical
functionality in an attempt to simplify the use of such constructs in a visual programming environment. The
interactive approach is usually combined with a script-oriented interface, and such products are not easy to
use right out of the box: they have a longer learning curve than stand-alone applications.
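To make the data-flow paradigm concrete, the following C++ sketch models an MVE-style network in miniature; the Module, Source, SqrtFilter and Printer classes are invented for this example and do not correspond to any particular product. Each box pulls data from its upstream connection when asked to execute, which also shows why iteration and conditional branching are awkward to express in such a network.

#include <cmath>
#include <cstdio>
#include <vector>

// A "box" in the visual-programming network: it produces data on request,
// pulling from whatever upstream module it is connected to.
class Module {
public:
    virtual ~Module() {}
    void connect(Module* upstream) { in = upstream; }
    virtual std::vector<double> execute() = 0;
protected:
    Module* in = nullptr;
};

class Source : public Module {                 // e.g. a file-reader module
public:
    std::vector<double> execute() override { return {0.0, 1.0, 4.0, 9.0}; }
};

class SqrtFilter : public Module {             // e.g. a derived-quantity module
public:
    std::vector<double> execute() override {
        std::vector<double> d = in->execute(); // pull data from upstream
        for (double& v : d) v = std::sqrt(v);
        return d;
    }
};

class Printer : public Module {                // e.g. a viewer/renderer module
public:
    std::vector<double> execute() override {
        std::vector<double> d = in->execute();
        for (double v : d) std::printf("%g ", v);
        std::printf("\n");
        return d;
    }
};

int main() {
    Source src;
    SqrtFilter flt;
    Printer out;
    flt.connect(&src);   // wiring the boxes defines the data stream,
    out.connect(&flt);   // just as inter-connecting modules does in an MVE
    out.execute();       // prints: 0 1 2 3
    return 0;
}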
There is an ongoing debate on whether the best way to procure visualization software is to use stand-alone
applications or to build applications using MVEs. Time has shown that both approaches are equally accepted,
as neither has displaced the other. The approach that we chose to follow in our work is a compromise between the two
options. The GUI of our CFView software looks very much like that of a stand-alone visualization
application; internally, though, CFView is an object-oriented system which has the flexible, modular
architecture of an application builder. This means that a new component can be integrated in the core
application structure with a minimum coding effort, and that the propagation effects resulting from a
modification remain limited.
3. Visualization Toolkits are general-purpose object-oriented visualization libraries, usually present as
background components of SV applications. They emerged in the mid 1990s, and the two representative
examples are VTK[27] and VisAD[28]:
The Visualization ToolKit (VTK) is an open-source software system for 3D computer graphics,
image processing and visualization, now used by thousands of researchers and developers around the
world. VTK consists of a C++ class library and several interpreted interface layers, including Tcl/Tk,
Java and Python. VTK supports a wide variety of visualization algorithms (including scalar, vector,
tensor, texture and volumetric methods) and advanced modeling techniques (such as implicit modeling,
polygon reduction, mesh smoothing, cutting, contouring and Delaunay triangulation). In
addition, dozens of imaging algorithms have been directly integrated to allow the user to mix 2D
imaging and 3D graphics algorithms and data; a minimal pipeline sketch is given at the end of this list.
The VISualization for Algorithm Development (VisAD) is a Java component library for interactive
and collaborative visualization and analysis of numerical data. VisAD is implemented in Java and
supports distributed computing at the lowest system levels using Java RMI distributed objects.
VisAD's general mathematical data model can be adapted to virtually any numerical data; it
supports data sharing among different users, different data sources and different scientific
disciplines, and it provides transparent access to data independent of storage format and location
(i.e., memory, disk or remote). The general display model supports interactive 3D (Figure 7), data
fusion, multiple data views, direct manipulation, collaboration, and virtual reality. The display
model has been adapted to Java3D and Java2D, and virtual reality displays.
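As an illustration of how such a toolkit serves as a background component, the following minimal VTK pipeline in C++ connects a reader, a contour filter, a mapper and a renderer. It is a sketch of the toolkit's public API, assuming a recent VTK build (8.1 or later) with the rendering modules linked; the input file name is hypothetical.

#include <vtkActor.h>
#include <vtkContourFilter.h>
#include <vtkNew.h>
#include <vtkPolyDataMapper.h>
#include <vtkRenderWindow.h>
#include <vtkRenderWindowInteractor.h>
#include <vtkRenderer.h>
#include <vtkStructuredGridReader.h>

int main() {
    // Read a structured grid carrying a scalar field (hypothetical file name).
    vtkNew<vtkStructuredGridReader> reader;
    reader->SetFileName("solution.vtk");

    // Extract the iso-surface of the active scalar at value 0.5.
    vtkNew<vtkContourFilter> iso;
    iso->SetInputConnection(reader->GetOutputPort());
    iso->SetValue(0, 0.5);

    // Map the resulting polygons and render them interactively.
    vtkNew<vtkPolyDataMapper> mapper;
    mapper->SetInputConnection(iso->GetOutputPort());
    vtkNew<vtkActor> actor;
    actor->SetMapper(mapper);

    vtkNew<vtkRenderer> renderer;
    renderer->AddActor(actor);
    vtkNew<vtkRenderWindow> window;
    window->AddRenderer(renderer);
    vtkNew<vtkRenderWindowInteractor> interactor;
    interactor->SetRenderWindow(window);

    window->Render();
    interactor->Start();
    return 0;
}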
4. Integrated Modeling Environments (IME) are software systems that combine two or more engineering applications
and visualization systems to solve a multi-disciplinary problem. For example, the naval architect shapes the
ship hull in order to reduce the ship's hydrodynamic drag, while the stress engineer calculates the ship's steel
structure. Both use visualization to analyze the data generated by the hydrodynamics and stress calculation
solvers. The visualization software may be able to process the CFD flow-field solver data and the FEA
stress-field solver data in a unified manner, giving the two engineers the possibility to work on compatible,
interfacing 3D representations of the hydrodynamic and structural problems. An example of such integration
is the Product Life-cycle Modeling (PLM) developed by Dassault Systèmes and the CFD solver technology
developed by ANSYS, Inc., where the FLUENT CFD flow modeling approach is integrated in the CATIA CAD
tools throughout the whole product lifecycle [29].
Figure 8: The integrated modeling environment from Dassault Systèmes and ANSYS, Inc
Figure 9: The comparison of Hardware/Software productivity: complexity plotted over the years 1980-2020,
showing the growing gap between hardware and software capabilities
Reusability is an intrinsic feature of all OO software, and its efficient exploitation promotes the computer
network, i.e. the Internet, into a commercial marketplace in which such general-purpose and specialized
software components need to be available, validated and marketed [30].
The OO approach has led to the emergence of Object Oriented Programming (OOP) with specialized OO
programming languages -- such as Smalltalk[31], CLOS, Eiffel[32-34], Objective C [35], C++ [36], Java, C#
and other derivatives -- which apply encapsulation and inheritance mechanisms to enhance software modularity
and improve component reusability. It is important to stress that the highest benefit of OOM is obtained when
OOM covers the full software life-cycle, from the requirements specification phase to the software delivery
phase. When an application is created applying OOM, reusability in different development phases can be
expected. First, OOP brings in object-oriented libraries, which provide components validated in previously
developed applications. Second, the software design of previously modeled software can be reused through
established design patterns. Previously developed components may be reused in the new application, which then
does not need to be designed from scratch, an obvious advantage. Improvements brought to the existing,
re-used objects also benefit the older applications that use the same objects.
It is interesting to note that the OOP paradigm has shifted the emphasis of software design from algorithms to
data (object, class) definitions [37-39]. The object-oriented approach can be summarized in three steps. The
system is first decomposed into a number of objects that characterize the problem space. The properties of each
object are then defined by a set of methods. Possibly, the commonality between objects is established through
inheritance. Actions on these objects and access to encapsulated data can be done randomly rather than in a
sequential order. Moreover, reusable and extensible class libraries can be created for general use. These are the
features which make OOP very attractive for the development of software, in particular for interactive software.
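The three steps above can be shown in a few lines of C++, the language in which CFView is implemented. The classes below are a toy illustration invented for this paragraph, not an excerpt of the CFView class hierarchy: the problem space is decomposed into objects (surface variants), their properties are expressed as methods, commonality is factored out through inheritance, and clients act on the objects polymorphically through the base-class interface.

#include <cstdio>
#include <memory>
#include <vector>

// Step 1: decompose the problem space into objects; Step 3: factor out
// their commonality into an abstract base class.
class Surface {
public:
    virtual ~Surface() {}
    virtual double area() const = 0;   // Step 2: properties defined as methods
};

class Triangle : public Surface {
public:
    Triangle(double base, double height) : b(base), h(height) {}
    double area() const override { return 0.5 * b * h; }
private:
    double b, h;                       // encapsulated: reachable only via methods
};

class Quadrilateral : public Surface {
public:
    Quadrilateral(double width, double height) : w(width), h(height) {}
    double area() const override { return w * h; }
private:
    double w, h;
};

int main() {
    // Clients manipulate heterogeneous objects through a single interface;
    // the correct area() is dispatched at run time (polymorphism).
    std::vector<std::unique_ptr<Surface>> cells;
    cells.push_back(std::make_unique<Triangle>(2.0, 3.0));
    cells.push_back(std::make_unique<Quadrilateral>(2.0, 3.0));
    double total = 0.0;
    for (const auto& c : cells) total += c->area();
    std::printf("total area = %g\n", total);   // prints: total area = 9
    return 0;
}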
It should be mentioned that OOM does not directly reduce the cost of software development; however, it
markedly improves the quality of the code by assuring consistent object interfaces across different applications.
Estimated software construction times are often incorrect: time and resource allocation tend to be largely
underestimated in software projects, not uncommonly by factors of 2 to 5, especially where innovative features
are to be developed. Unfortunately, software construction planning has no underlying engineering, scientific or
mathematical model from which to calculate the development time required when a new software development
process is started; a theoretical basis for how best to construct software does not exist. The ability to plan
project costs, schedule milestones and diagnose risk is ultimately based on experience, and is only valid for a
very similar application done in the past with the same development environment.
Table 1: Layered Software Architecture

RESPONSIBILITY   LAYER                    TASK
DEVELOPER        Application - Database   simulation data
                 Visualization System     graphics data / displayable data
VENDOR           Device Drivers
                 Operating System
                 Hardware Platform
To develop the visualization software, our approach must be multi-disciplinary in the sense that it puts together
an application engineer and a computer specialist in order to develop the different application layers shown in
Figure 11. The software development environment needs to enable the evolution of the software under
development and has to provide a framework for porting applications across different hardware, operating
systems and windowing systems. It also has to simplify the creation of interactive graphical applications,
enabling the application engineer to keep the application software layer under control while hiding the lower
software layers of the system, as depicted in Figure 11. Thus, the object-oriented approach was selected to
introduce the abstraction levels necessary for organizing the inherent complexity of scientific visualization
software development.
the archival and retrieval of data from a unified (experimental and numerical) flow-field database.
Based on proven Internet and World Wide Web (WWW) standard technologies, QFView provides an integrated
information system for fluid dynamics researchers (see Figure 12). QFView is a web-based archival, analysis and
visualization system, which enables the manipulation and extraction of data resulting from laboratory
measurements or computational simulations. The system is suited for combining experimental and computational
activities in a single operational context. This results in an increase of productivity, since the system facilitates
the exchange of information between investigators who conduct the same or similar simulations/experiments
in different geographical locations, whether working in collaboration or independently.
The rapid progress in all facets of fluid dynamics research has made it essential that research activities are
conducted in close cooperation between experimentalists, numerical analysts and theoreticians. In the early
stages of CFD development, progress was so rapid that CFD was expected to eliminate the role of EFD
altogether. However, experience showed that experiments are still the most economical approach for studying
new phenomena. CFD codes, once validated against experimental data, are the most effective tool for producing
the data needed to build a comprehensive flow database [48]. The strengths of the various EFD and CFD tools
should be used judiciously to extract the significant quantities required for problem solving, as shown in
Figure 13.
Figure 13: Use of EFD and CFD tools. EFD techniques (PIV, HWA, LDV) provide validation data, statistical
data (Reynolds stresses, empirical parameters), velocity and vorticity fields, length scales and fluctuation
measurements; CFD techniques (RANS, LES with SGS models, DNS) use them as input and for validation, with
resolution requirements growing towards the smaller eddies and time scales.
1. The flow downstream of a bluff body in a double annular confined jet [49-51]
2. In-cylinder axisymmetric flows
3. Flow pattern of milk between 2 corrugated plates

The experiments generated large data sets -- measured and calculated values of physical quantities -- which were
visualized and compared in order to support the validation procedure of the obtained results.

Figure 14: Laminar flow at Re_t = 60: DPIV of nearly axisymmetric flow, LSV of vortex shedding and CFD at
various degrees of non-axisymmetric flow
2. In-cylinder axisymmetric flows

Figure 16: Flow pattern at 90° ATDC: (a) visualization at 20 rev/min, valve lift = 10 mm; (b) average velocity
field at 5 rev/min, valve lift = 10 mm
The influence of speed and valve lift on the in-cylinder flow was investigated in the motored mono-cylinder,
while in the steady-state flow test rig only the effect of the valve lift on the intake jet and the stability of the
tumble motion was retained. The similarities and differences between the two types of flows were analyzed in
order to understand the relevance of the steady-state experimental results for modeling the unsteady flow in the
motored cylinder. Figure 16 (a) shows an example of visualization of the flow pattern in the mono-cylinder at
90° CA after top dead centre (ATDC). The details of the flow structure were presented in terms of maps of
average and instantaneous variables such as velocity (Figure 16 (b)), turbulent kinetic energy (Figure 17),
vorticity (Figure 18), and normal and shear strain rates. The main findings were:
the integral length scales are the same for the investigated stationary and unsteady flows, a finding
which supports the hypothesis of the isotropy of the small-scale structures
the footprint of the flow is given by the root-mean-square velocity and turbulent kinetic energy fields
the flow is a complex collection of local jets and recirculation flows, shear layers and boundary layers
Figure 17: Turbulent kinetic energy field at 90° ATDC at 5 rev/min, valve lift = 10 mm

Figure 18: Average vorticity field at 90° ATDC at 5 rev/min, valve lift = 10 mm

3. Flow pattern of milk between 2 corrugated plates
Figure 19: Experimental and CFD model for flow analysis between corrugated plates
The development of the QFView environment highlighted the need for computer specialists and engineers to
collaborate. Our work demonstrated the indispensable role of multi-disciplinary teamwork in advancing
scientific visualization systems and developing next-generation engineering environments, which need to
combine CFD codes and EFD experimental facilities in an integrated information framework applicable to
different sectors of industry, research and academia.
Figure 20: Workflow for the integration of EFD and CFD simulations. The experiment combines flow seeding,
laser illumination and video recording of the flow-visualization field; a computer records and post-processes the
images, quantifying the moving images into a velocity field; experimental and computational results feed a
common database for administration and post-analysis.
After the September 11, 2001 events, when the ALICE project ended, the author's research shifted towards
general-purpose (as opposed to scientific) visualization systems. This work was carried out in the LASCOT
project [55] funded by IWT, as part of the Information Technology for European Advancement (ITEA) program
for R&D of software middleware. The LASCOT visualization system demonstrated the possibilities of 3D
graphics to support collaborative, distributed knowledge-management and decision-making applications. The
research challenge was to deliver a visualization system capable of enhancing situation awareness (i.e. the
information that the actor-user has to manipulate to solve a crisis situation), which was done by providing a 3D
interface that gave the user the possibility to navigate in and interrogate an information database in a highly
intuitive manner. It was a main requirement that the visualization technology should be capable of assisting the
user in performing decision-making and knowledge-management tasks. The LASCOT visualization system was
built upon the JOnAS [56] application server (developed on the European J2EE architecture). This technology
and architecture could easily be reused for developing new integrated engineering environments.
The work presently carried out by the author in the ITEA SERKET project addresses security issues and
protection against threats; it involves the development of 3D graphical models capable of integrating and
rendering data acquired from heterogeneous sensors such as cameras, radars, etc.
Over the last 20 years, the author has initiated and worked on many research projects with a clear focus on
developing scientific visualization software and advancing the state of the art in this area. Potential avenues for
future research are discussed in the next chapter.
The Integrated Modeling Environment (IME) [58] concept is quite recent, yet its roots can be found in
1st-generation CAD-CAM tools. An IME system attempts to offer the engineer a homogeneous working
environment with a single interface from which various simulation codes and data sets can be accessed and used.
In the fluid mechanics application area, an IME system needs to integrate the latest CFD and EFD good
working practice; the system must be constantly updated so that, at any time, it runs on the most recent
software/hardware platform (see Figure 21).
An IME system consists of an Internet portal from which the investigator is able to access
information/knowledge/databases and processing functions, at any time and wherever they are located/stored.
He/she has access to accurate and efficient simulation services, for example to several CFD solvers. Calculations
can be performed extremely fast and cheaply where solvers are implemented as parallel code, and grid
computing resources are available. Results obtained can be compared with separate experimental results and
other computations; this can be done efficiently by accessing databases that manage large collections of archived
results. The possibilities for benchmarking and for exchanging knowledge and opinions between investigators
are virtually infinite in an IME environment. Clearly though, a pre-requisite for an IME environment to work is
its adoption by its user community, which agrees on a specific codex that enables and guarantees openness and
collaboration. Typically, an IME system will open Web-access to:
Computational Services: selection of simulation software and access to processing and storage
resources
Experimental Services: access to experimental databases with possibility to request new measurements
Collaborative Services: chat and video-conferencing, with usage of shared viewers (3D interactive
collaboration)
Visualization is required to support many tasks in IME software. This poses the problem of building/selecting
data models that can be used by the visualization components to present the information correctly to the users,
whilst offering them tools for real-time interaction in a natural, intuitive manner. The IME can include wall
displays connected to high-performance, networked computing resources. Such systems and architectures are no
longer a mere vision: they are becoming reality, which opens new challenges for scientific visualization software
researchers and developers.
The usefulness of IME can be illustrated by considering how it would help the car designer. Assume that the
present goal is to find out how a new car performs under various weather conditions and to assess its safety level
before manufacturing begins. Assume that test-drive simulations are performed digitally, and that they are run in
parallel. Clearly, car behavior problems can be readily identified, and design can be modified rapidly and at little
cost; the expected savings in time and money are clearly substantial.
The QFView system that we have developed provides certain features of IME systems, such as web access and
data sharing, but it does not yet integrate distributed computational resources; this might be addressed in the
future.
Thesis Organization
The motivation for the research work presented in this thesis, and the objectives that were pursued, are described
in the Introduction Chapter, which also covers the state of the art and outlines the author's research and
development work in relation to object-oriented visualization software.
The main body of the thesis is then subdivided into three Chapters. The first Chapter is dedicated to discussing
modeling concepts and fundamental algorithms, including discretization, i.e. the modeling of the continuum in
cells and zones. This Chapter also presents the theoretical foundations of the basic algorithms applied to
geometries, scalar and vector fields.
In Chapter 2, the visualization tools are explored to show different possibilities of extracting and displaying
analyzed quantities. Improved and adapted visualization techniques are illustrated in several examples.
Interactivity is then considered by explaining the Graphical User Interface design model. The Chapter ends with
a discussion of Visualization Scenarios which allow the standardization of the visualization process.
Chapter 3 is devoted to discussing the relevance and appropriateness of the object-oriented methodology (OOM)
for visualization systems, and the associated implementation issues. Object-oriented programming concepts and
object-oriented data model are reviewed. The design and implementation of the visualization system is presented
-- architecture, description of important system classes, management of input/output files, and development of
portable and reusable GUI code.
The last Chapter explains how OOM permitted the development of Parallel CFView, an extension of the basic
CFView visualization system that takes advantage of distributed and parallel processing. The Chapter covers the
upgrading of the QFView system to distributed data storage/archiving for scientific applications, then its
application for visualization in general-purpose information systems as prototyped in the LASCOT and
SERKET ITEA projects. We conclude this Chapter with suggestions for future research.
During the course of the work presented in this thesis, our CFView visualization system has evolved from an
academic laboratory prototype to a versatile and reliable visualization product providing a highly-interactive
environment capable of supporting the most demanding fluid flow investigations. Researchers can reliably use
and control CFView to extract, examine, probe and analyze the physics of flow fields, by simple commands
through an intuitive graphical user interface.
One of the objectives of this work was to explore object-oriented programming and design a new development
methodology for scientific visualization systems. This objective was achieved with the production of CFView, a
visualization system fully implemented in C++ (one of the OOP languages) [36].
Figure 23: Software model as communication media in the software development process
The software model is a fundamental element in OOM software development. The model describes the
knowledge mapped in the software in a formal, unambiguously defined manner. Such a precise specification is
both the documentation and the communication tool between the developers and the users; recall that the term
developer includes application analysts, software designers and coding programmers (see Figure 23).
In the software development process, the analyst creates an abstract model that will be partially or fully
implemented. The designer uses that model as a basis to add specific classes and attributes to be mapped onto
one or more OOP languages. The designer specifies the detailed data structure and functional
operations/processes, which are required by the application specification. Finally, the programmer receives the
the analyst's and the designer's models for implementation into source code. The source code is compiled to
produce the executable software. Software modeling is then the iterative and incremental process which maps
abstract concepts into formal constructs that eventually become reusable software entities. In OOM, the object
model comprises a data model and a functional model; see section 3.2 Object Oriented Concepts. The
specification of an object includes a description of its behavior and of the data necessary and sufficient to
support its expected functionality. The data model describes the pertinent data structures, the relations between
the objects and the constraints imposed on the objects. The functional model describes the object's behavior in
terms of operations. From the data model point of view, the primary concern is to represent the structures of the
data items that are important to the scientific visualization process and the associated relationships. The
modeling tool that was applied is known as the Entity-Relationship Model (ERM) [59]; it is well-adapted for
constructing static data aspects and ensures the completeness of and the consistency between the specified data
types. Figure 24 depicts an ERM with Surface, Section and Vertex entities (shown together with their
relationships). Modeling entities define the direct association between problem entities (visualization data) and
software objects (manipulative data), which establishes the reference basis for the incremental software
development. This is important, because software modeling is not a one-shot process but a continuous or
repetitive one, since software must be changed to account for the evolution of the user requirements and/or of the
technology platforms.
It is important to mention that the terminology introduced by OOM standardizes the names of the objects which
will constitute the system in all development phases, so that the Naming Convention must be strictly preserved
and complied with in the software model. ERM consists of three major components:
1. Entities, which represent a collection, a set or an object; they are shown as rectangular boxes (Surface, Vertex). They are uniquely identified, and may be described by one or more attributes.
2. Relationships, which represent a set of connections or associations between entities; they are shown as diamond-shaped boxes.
3. Attributes, which are properties attached to entities and relationships; they are shown in oval call-outs.
In Figure 24, the integer numbers on each side of a relationship-type denote the number of entities linked by the
relationship. Two numbers specified on one side of a relationship indicate min-max values. For the consist of
relationship, the Vertex can be part of one and only one Surface and a Surface consists of M Vertices. A
dependency constraint on a relationship is shown as an arrow; for example, a Section can exist only if there is a
Surface for which an Intersection exists. The attributes can be of atomic or composite type. An example of a relationship attribute is found in the Intersection relationship, which carries the additional data needed to perform the intersection operation by interfacing the Surface and Section entities. Composite attributes are shown in double-lined ovals. Underlined attributes are key attributes, whose values are unique over all entities of the given type. If an entity type does not have a complete key attribute (like the Vertex, because two vertices constructing two different surfaces can have the same index), it is called a weak entity type and is shown in a double-lined box. The attribute index of a Vertex is only a partial key and is denoted by a dotted underline. Entity types can be related by an is-a relationship, which defines specialization or generalization of related entities. The entity types connected by a small diamond determine a hierarchy (or lattice); for example, possible surfaces can be structured or unstructured. All the attributes, relationships and constraints on an entity type are inherited by its subclass entity types.
Figure 24: ERM diagram of the Surface, Section and Vertex entities with their attributes (name, description, type, parameters, index, data, coordinates), the Intersection and consist-of relationships, and the Structured/Unstructured specializations of Surface.
The entities in the ERM diagram are the basis for the Class decomposition because of their one-to-one
correspondence with the classes in the OOM Class Diagram. OOM preserves the same class decomposition
throughout the software development process; see Section 3.7, where a detailed ERM diagram of the scientific visualization model is shown, applying the ERM modeling elements to objects, relations and constraints.
For example, Mesh or Surface objects are entities, qualified by specific attributes (for example color or
coordinates). Their relation includes functional dependencies between them, while constraints are applied on
attribute values.
ERM naturally fits in OOM, because it improves the specification of the object data model. ERM is one of the
semantic data models that support a rich set of modeling constructs for representing the semantics of entities,
their relationships, and constraints. The entity types are mapped to classes and their operations. Constraints that
cannot be specified declaratively in the model are coded in class methods. Additional methods are identified in
the classes to support queries and application functionality. ERM makes the software model more tangible to users who have only CFD application knowledge. It is advisable to use ERM as a communication tool between users and developers, because it improves the specification of the software model and its analysis.
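As an illustration of this entity-to-class mapping, the Surface and Vertex entities of Figure 24 and their consist-of relationship might be expressed in C++ roughly as sketched below; the class and member names are hypothetical and do not reproduce the actual CFView classes.

    // Illustrative mapping of the Figure 24 ERM entities to C++ classes.
    // All names are hypothetical; they are not the CFView classes.
    #include <string>
    #include <utility>
    #include <vector>

    class Vertex {                     // weak entity: index is only a partial key
    public:
        Vertex(int index, double x, double y, double z)
            : index_(index), coordinates_{x, y, z} {}
        int index() const { return index_; }
    private:
        int index_;                    // partial key attribute
        double coordinates_[3];        // composite attribute
    };

    class Surface {                    // entity with key attribute "name"
    public:
        explicit Surface(std::string name) : name_(std::move(name)) {}
        // "consist of" relationship: a Surface consists of M Vertices,
        // and each Vertex belongs to exactly one Surface.
        void addVertex(const Vertex& v) { vertices_.push_back(v); }
    private:
        std::string name_;
        std::vector<Vertex> vertices_;
    };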
The modeling concepts are not simply data-oriented: the object data model comes together with the fundamental
algorithms which will generate the inputs to the computer graphics algorithms. Algorithms represent a suitable
abstraction of the system's behavioral characteristics; their modeling is done in conjunction with the data structures, and puts together all the information required to make them work. The algorithmic models define the computational aspects of the system; these models can be analyzed, tested and validated independently of the full system's implementation. The algorithmic solutions directly influence the performance of the implementation;
hence, they should be modeled as simple, efficient and traceable components. To achieve clarity, effectiveness of
expression, and conciseness in describing algorithms, we will use a pseudo-language which can be routinely
mapped to any formal high-level programming language. An algorithm is written as a sequential order of statements; a statement can be expressed in natural-language syntax or in mathematical notation with formal descriptors and lists of variables. The important statements are:

    assignment:   a = b;
    conditional:  if (condition) statement else statement;
    loop:         for (each element of a set) statement;
In OOM, the data structures are encapsulated in the objects together with the algorithms. Normally, the
algorithms operate just on the internal data structure of the object they are associated with. More complex
algorithms could involve several objects; in this case, the data structure supporting their interaction needs to
relate all of them. For example, the cutting-plane algorithm involves the geometrical data structure of the domain
and of the surface which is created as a result of applying the algorithm. In what follows, we have grouped the
algorithms according to the data structures involved in the visualization processes, i.e. in terms of: combinatorial
topology (cell connectivity, node normal), computational geometry (cutting plane, section, local value) and
quantity representations (iso-lines, iso-surfaces, thresholds and particle traces).
Computer graphics algorithms which perform operations such as coloring, rendering and shading are commonly
implemented in hardware (and are of no interest here). What is important is the way in which we configure their
set-up and inputs in order to create the desired representations (this is partially discussed in Chapter 2 Adaptation
of Visualization Tools dealing with Representations).
This Chapter describes the algorithms in the following order: starting with topology, extending to geometry, and ending with the quantity-related algorithms. This order was chosen because it incrementally introduces the new data structures which are sufficient and necessary for the algorithms to be operational. For example, the cutting-plane algorithm cannot be efficient without the topology information.
Figure 25: The grouping principle with the vertical and horizontal decomposition of the Zone.
The visualization data model covers the objects needed for storing and manipulating data; it includes two
decomposition lines:
vertical decomposition, based on sup-sub relationship, and
horizontal decomposition, based on part of relationship.
Both decomposition approaches describe the Cell and/or Zone geometry and topology with the grouping
principle (see Figure 25). The vertical decomposition describes the sup-sub relationship as the principal concept
in the boundary representation (B-rep) model [64-66], which defines geometrical shapes using their limits. The
sup-objects are here described by the group of sub-objects which are defined in the parametric space of lower
dimension. The horizontal decomposition implies that the grouping principle puts together objects from the same
parametric dimension, so that the resulting object remains in the same parametric space. The geometric model is
fully described with the parametric and modeling coordinates, for example a point in a curve or surface is
completely defined by its coordinates. The geometric model is enriched with the topology where boundary
relationships and connectivity can define multiple regions. The multiple region objects with connected parts are
defined by maintaining information about the common boundaries between them.
The Cell and Zone models are fundamental to the discretized geometry and topology models; according to the
vertical and horizontal decomposition principle:
the cell is explained in the light of the vertex-based boundary model, and
the zone in terms of its horizontal decomposition as a collection of cells.
Figure 26: The cell described by (a) its topology (Solid, Face, Edge, Node) and (b) its parametric dimensions (point, curve, surface and body cells), together with the skeleton, the modeling and parametric spaces, and the mapping and interpolation functions.
The cell is called 0D, 1D, 2D or 3D in accordance with the cell parametric dimension, which describes the region of a point, curve, surface or body (see Figure 26(b)). The cell boundaries are defined as cells of lower dimension with respect to the cell they bound (see Figure 26(a)): they delimit the cell region from the underlying Map. For example, three edges of parametric dimension 1D bound a face of topological dimension 2D; thus, 1D curve cells bound 2D surface cells. The edges are defined as regions with infinite line maps bounded by the respective nodes.
Figure: The cell hierarchy with the composed-of/part-of relationships between the Skeleton and the cell types: Cell 3D (Solid: Hexahedron, Pentahedron, Pyramid, Tetrahedron), Cell 2D (Face: Quadrilateral, Triangle), Cell 1D (Edge: Segment) and Cell 0D (Node: Vertex).
Figure: Local node and edge numbering of the segment, triangle and quadrilateral cells in their parametric coordinate systems.
1D: segment
Mesh type     : unstructured & structured
Cell topology : SEGMENT 1D, 2D & 3D

  Cell (1, Nodes 2, T1N2): 0-1
  Coordinate System, Axis (1): u: 0-1

2D: triangle
Mesh type     : unstructured & structured
Cell topology : TRIANGLE 2D & 3D

  Cell (1, Nodes 3, T2N3): 0-1-2
  Edges (3): 0-1, 1-2, 2-0
  Coordinate System, Axis (2): u: 0-1, v: 0-2

Table 3: TRIANGLE skeleton table
2D: quadrilateral
Mesh type     : unstructured & structured
Cell topology : QUADRILATERAL 2D & 3D

  Cell (1, Nodes 4, T2N4): 0-1-2-3
  Edges (4): 0-1, 1-2, 2-3, 3-0
  Coordinate System, Axis (2): u: 0-1, v: 0-3

Table 4: QUADRILATERAL skeleton table
Figure: Local node, edge and face numbering of the tetrahedron, pyramid, pentahedron and hexahedron cells in their parametric coordinate systems (u, v, w).
Mesh type     : unstructured
Cell topology : TETRAHEDRON 3D

  Cell (1, Nodes 4, T3N4): 0-1-2-3
  Edges (6): 0-1, 1-2, 2-0, 0-3, 1-3, 2-3
  Faces (4): 0-2-1, 2-0-3, 0-1-3, 1-2-3
  Coordinate System, Axis (3): u: 0-1, v: 0-2, w: 0-3

Table 5: TETRAHEDRON skeleton table
Mesh type     : unstructured
Cell topology : PYRAMID 3D

  Cell (1, Nodes 5, T3N5): 0-1-2-3-4
  Edges (8): 0-1, 1-2, 2-3, 3-0, 0-4, 1-4, 2-4, 3-4
  Faces (5): 0-1-2-3, 0-4-1, 1-4-2, 2-4-3, 3-4-0
  Coordinate System, Axis (3): u: 0-1, v: 0-3, w: 0-4

Table 6: PYRAMID skeleton table
Mesh type     : unstructured
Cell topology : PENTAHEDRON 3D

  Cell (1, Nodes 6, T3N6): 0-1-2-3-4-5
  Edges (9): 0-1, 1-2, 2-0, 0-3, 1-4, 2-5, 3-4, 4-5, 5-3
  Faces (5): 0-2-1, 2-0-3-5, 0-1-4-3, 1-2-5-4, 3-4-5
  Coordinate System, Axis (3): u: 0-1, v: 0-2, w: 0-3

Table 7: PENTAHEDRON skeleton table
Mesh type     : unstructured & structured
Cell topology : HEXAHEDRON 3D

  Cell (1, Nodes 8, T3N8): 0-1-2-3-4-5-6-7
  Edges (12): 0-1, 1-2, 2-3, 3-0, 0-4, 1-5, 2-6, 3-7, 4-5, 5-6, 6-7, 7-4
  Faces (6): 0-3-2-1, 3-0-4-7, 0-1-5-4, 1-2-6-5, 2-3-7-6, 4-5-6-7
  Coordinate System, Axis (3): u: 0-1, v: 0-3, w: 0-4

Table 8: HEXAHEDRON skeleton table
Figure: The cell model with its Map (Step order 0, Linear order 1, Quadratic order 2, Cubic order 3), its modeling and parametric spaces, the cell-node indexing, and the Bridge relationship between a cell's Boundary (sub-cell) and Star (sup-cell), grouped by Frames, for the dimensions 0D to 3D.
By convention, the local coordinate system of a cell is in parametric space: its origin is at the first cell node, and the coordinate axes are defined following the local node indexing within the cell, as described under the Coordinate System header in each skeleton table. The orientation of the cell boundary is considered positive if its face normal points from the interior of the cell to the exterior. A cell is always embedded in a geometrical space of equal or higher dimension than the cell's intrinsic dimension, and can be composed of cells of intrinsic dimension lesser than its own. The cell with the lowest dimension (0D) is a node. Any cell can be described by an ordered collection of nodes. The topology for nodes, edges, faces and cells is predefined for each cell type, so that some properties are implicitly defined. For example, the order of the nodes of a triangular cell enables us to compute the line that is normal to that triangle in 3D space. Another example is the extraction of faces from a 3D cell applied in a marching-cell algorithm, see Section 1.4.1.
B-rep defines the lower dimensional cells as sub-cells, while the cell itself is defined (bounded) by the mentioned sub-cells. As shown in Figure 30, each cell can have more than one sup-cell (named Stars), and a cell's boundary can be composed of more than one Boundary sub-cell. The sup-sub relationship is defined by the Bridge classes. The Bridge class defines the relationship between a cell and one of its boundaries; it specifies the direction when moving from the boundary into the cell interior. Each boundary cell is a bridge; for example, a triangle has 3 bounding bridges (edges), which determine the boundary sub-cells. Boundary cells can be connected in a prescribed way describing the boundary topology, named the Frame. The group of Bridges (edges) forms the Frame, which defines their sup-cell (face).
The geometry of a zone, described by the node coordinates and the topology, is a composition of cells. The zone
topology is defined as the union of cells according to the B-rep cell model extended to support:
the intersection of every pair of the zone cells,
the union of any number of the zone cells.
The finite space z is discretized into a finite number of regions. The zone Z is expressed as the union of the finite space z and its boundary ∂z:

Z = z ∪ ∂z        (1-4)

A cell C is expressed as the union of the finite region c and its boundary ∂c:

C = c ∪ ∂c        (1-5)

The boundaries of the cells represent their connections with the neighboring cells:

∂c1 ∩ ∂c2 = c1 ∩ c2        (1-6)

The common boundary c1 ∩ c2 is also a zone cell. The zone is defined as the union of all its cells, whatever their type:

Z = z ∪ ∂z = ∪ (i = 1 .. N) Ci        (1-7)
The Zone model is defined with a boundary representation model, B-rep, because we always treat a finite region
in the generalized modeling space. A zone encapsulates the relationship with its boundary. As the cell is the
atomic entity in the modeling and parametric spaces, it represents a Zone unit space. The only difference is that
the Zone B-rep has in addition the cell connectivity to depict the zone topology. The zone topology incorporates
schemes that specify the geometry of the zone boundaries and their inter-connectivity. It explicitly maintains cell
connectivity by ensuring that adjacent cells share their common boundaries. The zone topology depends solely
on the cells' structure, not on their geometry. If the geometry of a zone is modified, its topological structure
remains intact. The invariant nature of the zone topology allows its separate treatment (see Section 1.2). The
combined use of geometry and topology is present in the algorithms described in Section 1.3 and Section 1.4.
Grids can be discretized using different types of zones, so one can establish sup-sub zone relationships and
define a zone as a region bounded by zones of lower dimension -- the sub-zones. A sup-zone is a zone bounded
by sub-zones. The sup-sub relationship yields an elegant recursive structure called the zone topology. The B-rep
modeling naturally supports aggregation, thus it can be extended to model the set of zones through composition
based on the recursive sub-zone relationship. B-rep describes the topology of a zone modeled with all its
boundary zones and their connectivity.
The presented topological structure and its general functionality is defined in the Geom class, thus applicable to
Zone and ZoneGroup sub-classes, which are responsible for modeling the collection of zones as a Set. The
additional topological classes are: Bridge, Frame and Skeleton.
Figure: The Geom class with its ZoneGroup and Zone sub-classes and the topological classes Skeleton, Bridge and Frame; a zone's Boundary (sub-zone) and Star (sup-zone) are related through Bridges, which are grouped into one Outer and possibly several Inner Frames.

Figure: A D1-bridge between the boundaries b1, b2 of the sub-zones S1, S2, with the oriented tangents t1, t2 and the normals n1, n2.
1. The algebraic normal is computed from the natural orientation of the sub-zone and the sup-zone:
   D0-bridge: a point bounds a curve; the normal is simply the oriented tangent of the curve.
   D1-bridge: a curve bounds a surface; the algebraic normal is defined as n × t, where t is the oriented tangent to the curve and n is the oriented normal to the surface.
2. The topological normal is the algebraic normal, but its orientation can be inverted if needed. It represents the inward direction from the sub-zone to the sup-zone. The inward normal is closely related to the relative orientation of the bridge.
The interior of a zone is the region of the zone bounded by lower-dimensional zones. A node n of Z is said to be an interior point of Z if there exists at least one sub-zone about n all of whose points belong to Z. A node n is said to be exterior to a zone Z if none of the points of the sub-zone in which the node is located belongs to Z. The boundary of a zone is the zone of topological dimension-1 composed of the nodes that are not interior to Z. A Frame combines bridges into a connected enclosure. Enclosures can have one Outer frame and several Inner ones (see Figure 33). The dimension of the Frame is that of the sub-zone whose boundaries it represents. Frames can be active or inactive, depending on whether they have been assigned to the sup-zone or not. The Geoms can be constructed from Frames and Bridges.
Figure: The zone data model: the Geometry (point coordinates, modeling space, Map of order 0 to 3: Step, Linear, Quadratic, Cubic) and the Topology (local and cell node indexing, node index) of the Zone class; its sub-classes PointSet, Curve, Surface and Body (0D to 3D, structured or unstructured) are composed of the embedded cell types Vertex T0N1, Segment T1N2, Triangle T2N3, Quadrilateral T2N4, Tetrahedron T3N4, Pyramid T3N5, Pentahedron T3N6 and Hexahedron T3N8, organized in the Cell 0D (Node) to Cell 3D (Solid) hierarchy with composed-of/part-of relationships.
The Zone, the base geometry class, is defined with different sub-classes, each representing a collection of homogeneous cells. The PointSet, Curve, Surface and Body are defined with the parametric dimension, respectively 0D, 1D, 2D and 3D. Recall that a zone is both a collection of nodes and a collection of cells. The highest parametric dimension of the cells in a zone determines the dimension of the zone and its parametric space. The parametric dimension of the zone (topology) should not be confused with the modeling dimension of the zone (geometry); the same holds for cells. For example, a surface with a 2D topology could have a 2D or 3D geometry definition, and its node coordinates would be (x, y) or (x, y, z) in the respective geometrical space. As explained, different topologies can exist for the same geometry, and different geometries can exist for the same topology. Geometry and topology are defined in the Zone class.
Figure: Coherent and non-coherent node indexing of adjacent cells.
The tolerance model defines the unit tolerance that specifies the distance between two points: two points are considered one point if they are centered at the same origin when scaled down to the unit box of size 1.
Obviously, in practice, computation must be carried out at a given precision level, for example using double-precision floating-point computation, which yields an accuracy of about 10^-16. The unit tolerance values are several orders of magnitude larger than this precision limit, so that inaccurate input and the accumulation of errors in the computation remain insignificant.
The effective tolerance for the problem is defined by multiplying the problem size by the input unit tolerance. Points are considered as coinciding when located within the effective tolerance distance. The cell has nodes that define modeling and parametric coordinates for points on the map. The cell node coordinates are created within the specified tolerances.
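A minimal sketch of this tolerance test, assuming a simple point type and an illustrative unit tolerance value (the actual tolerance handling in CFView may differ):

    #include <cmath>

    struct Point { double x, y, z; };

    // Two points are considered as coinciding when their distance is below
    // the effective tolerance: the unit tolerance scaled by the problem size.
    // The default unit tolerance here is only an illustrative assumption.
    bool coincide(const Point& a, const Point& b,
                  double problemSize, double unitTolerance = 1e-9) {
        const double effectiveTolerance = problemSize * unitTolerance;
        const double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz) <= effectiveTolerance;
    }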
Figure 37: Surfaces with (a) one boundary and (b) two boundaries.
The analysis of a quantity field is usually performed on interactively-selected zones of different types which represent discretized regions of space. The most important Zone class is the Surface class; it is defined by a set of nodes and 2D cells. The surface's finite geometry is in 2D or 3D space. A surface lies in a bounded region of space and has two sides. To pass from one side of the surface to the other, one must either cross the surface or cross the curve that bounds the surface area. A surface without a boundary curve (for example a sphere, or any surface that can be transformed continuously into a sphere) is called a closed surface, as opposed to an open surface, which has at least one boundary curve (see Figure 37). The boundary curve is always a closed curve; if a boundary curve can be continuously deformed and shrunk to a point without leaving the surface space, then the surface is characterized as simply-connected.
Figure 38: Simply and multiply connected and disconnected surface regions.
The visualization system makes extensive use of the surface as the main identification and manipulation object; most visual representations are related to surfaces, as surfaces are the most appropriate containers for the data extracted from volume data sets.
The main surface types are:
mesh surfaces
mesh boundary surfaces
slicing or cutting planes
isosurfaces
The mesh surfaces are part of structured grids (constant I, J, K surfaces or surfaces defined by faces), and other surfaces can be created for structured and unstructured meshes. The mesh and boundary surfaces are given as input, while cutting planes and isosurfaces are computed during an interactive session (surfaces and the associated visualization tools are described in Chapter 2).
The cell connectivity defines the relationships between the nodes and the various cells. A Surface can be queried for (a sketch of such a query interface follows the list):
which cells are adjacent to a given cell,
which cells share a given node,
which edges are defined with a given node,
which two nodes define the edge connecting two faces.
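Such a query interface might look as follows; the signatures are hypothetical and do not reproduce the actual CFView Surface class:

    #include <utility>
    #include <vector>

    // Hypothetical connectivity-query interface for the list above.
    class SurfaceQueries {
    public:
        using Id = int;
        std::vector<Id> adjacentCells(Id cell) const;         // cells sharing a side
        std::vector<Id> cellsAroundNode(Id node) const;       // cells sharing the node
        std::vector<Id> edgesOfNode(Id node) const;           // edges defined with the node
        std::pair<Id, Id> edgeBetweenFaces(Id a, Id b) const; // nodes of the shared edge
    };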
Geometry algorithms rely on the mathematical form of the underlying maps and on the topology that defines the boundaries. They are:
the bounding cube, which computes the range (cube) containing the geometric objects; it is used as a rough bound of objects before proceeding to more precise computations,
the closest point, which computes the minimum (geometrical) distance between objects,
the extent, which computes quantities such as the length, area or volume of objects,
the center of extent, which defines the center of the object's area, volume, etc.
This topology information enables the different visualization tools of the SV system to operate on interactively-created surfaces. The first algorithm is the cell connectivity algorithm, which pre-processes the cell connectivity information as part of the input to the visualization system. The second algorithm is the node topology algorithm, including the node reduction algorithm; it identifies the cells around a node and calculates the unique node normal, which is required to determine surface normals, and so provides the necessary input to the surface-shading algorithm. Both algorithms perform the topological mapping of nodes and cells from the local to the global indexing space.
The pair (Z, C) is called a topological space. In addition, each cell in C defines its own topological space, because it is defined as a node-set and as a cell-set containing its boundary cells. If the cell boundaries are excluded, the open cell represents the open set of points which contains only interior points. The closed cell refers to a closed set of points and contains, in addition, its boundary points. A set is closed if its complement is open. All cell points that are bounded by its boundary are called the cell interior. A point p is an interior point of c, where c ∈ C, if there exists an open set that is contained in c and contains p. If a point is classified as an interior point of a cell c, it is in addition an interior point of the cell-set C, as c ∈ C.
Figure 40: The neighborhood of an interior point of a curve, surface and body cell.
The cell interior is always a manifold, because every interior point of the cell has an infinitesimal neighborhood which reflects the similar cell shape. The shapes of the curve, surface and body cell topology are shown in Figure 40, where the neighborhood of the interior point looks like the primary cell. Boundary points are not interior points, nor part of the manifold, because their neighborhood is not complete, see Figure 41. The concept of neighborhood is important in order to understand the definition of the boundary cell. The neighborhood is defined as an infinitesimally small region around an arbitrary point. It is a set of points inside the cell interior having the same parametric dimension as the analyzed cell. The neighborhood which completely belongs to the cell interior is defined as ε-full, and can topologically deform up to the analyzed point. The ε-full neighborhood classifies the point as a cell interior point.
Figure 41: Point classification by its neighborhood: zero (exterior point), half (boundary point) and full (interior point).
Figure: Manifold and non-manifold configurations of the cell topology at a point x.
Figure: Global and local node indexing of the cells of a zone.
dimension   zone      parameters   number of nodes   number of cells
1D          curve     (i)          I                 I-1
2D          surface   (i, j)       I·J               (I-1)·(J-1)
3D          body      (i, j, k)    I·J·K             (I-1)·(J-1)·(K-1)
Figure: (a) structured node indexing for curve, surface and body zones, with the neighbors of nodes (i), (i,j) and (i,j,k); (b) unstructured node indexing.
Figure 47: Cell topology and cell connectivity stored as index vectors; the cell topology lists the nodes of each cell, while the cell connectivity links each cell side to its neighboring cell.
CT = Σ (i = 0 .. CN) ni        CC = Σ (j = 0 .. CN) sj        (1-8)

where n is the number of nodes per cell, s is the number of sides per cell and CN is the total number of zone cells which precede the one for which the index is calculated. From the cell index vector CT, the number of cell nodes is defined as the difference between the starting indices of the next and the current cell. The equivalent mechanism is applied to the cell index vector CC, associated with the cell connectivity, to define the number of cell sides, see Figure 47. Thus, the number of cell nodes and cell sides are calculated as:

(number of cell nodes)i = CT(i+1) - CT(i)        (1-9)
(number of cell sides)i = CC(i+1) - CC(i)        (1-10)
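These index vectors can be stored compactly; a sketch with hypothetical names, equivalent to a compressed-row layout of the cell topology:

    #include <vector>

    // Cell topology stored as one flat node list plus the index vector CT:
    // cell i owns the nodes CT[i] .. CT[i+1]-1. The connectivity vector CC
    // is handled in exactly the same way for the cell sides.
    struct CellTopology {
        std::vector<int> nodes;  // node indices of all cells, concatenated
        std::vector<int> CT;     // CT[i] = start of cell i; size = nCells + 1

        int numCellNodes(int i) const { return CT[i + 1] - CT[i]; }  // eq. 1-9
        const int* cellNodes(int i) const { return &nodes[CT[i]]; }
    };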
Figure: The Cell Connectivity model with its parameters: parametric dimension, number of nodes, number of cells, cell topology and cell connectivity.
Dynamic memory storage is appropriate for such a heterogeneous data model, as the size of the complete zone topology can only be defined once all the input topology has been read in. The input data set consists of the first four parameters of the Cell Connectivity model. The cell connectivity algorithm calculates the additional output parameters, the cell connectivity and the LG maps, for the following tasks:
to define zone topology,
to define boundary topology,
to define patches (segments) topology given as boundary conditions.
The cell connectivity information is required to make the marching inside a zone efficient when passing from one cell to another. This concept is applied in the cutting plane and vector line algorithms. The cell connectivity is calculated from the cell topology (the nodes defining each cell) in order to keep the input data smaller and, in addition, to avoid the unnecessary errors which could be introduced if the input data contained inconsistent topology information. There are two types of cell connectivity calculations performed during the visualization process:
The first one is done during the preparation phase, when the input files are generated. Such cell connectivity information is static in nature for the input zones and doesn't change during the visualization process.
The second type of cell connectivity calculation is done when a surface is interactively created and, more precisely, when it is saved for additional investigation. These are the cases when a cutting plane or an iso-surface is created, as the cell connectivity is generated for each newly created surface. This calculation is computationally intensive and affects real-time interaction. Such aspects impose further optimization and improvements on the cell connectivity calculation.
Figure: (a) an example grid and the resulting cell connectivity for 1D, 2D and 3D cells.
At the beginning, each node-side set is empty. The algorithm iterates over all the zone cells and constructs the sides of each cell, which are grouped according to their minimum node index. When a side is created, its minimum node index is found. The side nodes are ordered from the minimum to the maximum node index; by definition, the minimum node index is the first one in the array defining the side. This node order simplifies the comparison between two sides. As the node set is of fixed size and contains all the zone nodes, the appropriate data type to treat such a set is an indexed array with the node index as entry. The node index makes it possible to find the set of all processed sides around the identified node.
The newly created side is compared with all the sides in the node-side set. If such a side is not found, the created side is added to the node-side set. If there is a side in the node-side set which matches the newly created side, the connected cells of the newly created side and of the matching side are set accordingly in the cell connectivity table.
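The matching step might be sketched as follows; the types and names are hypothetical, and the side nodes are kept sorted so that two sides can be compared directly:

    #include <algorithm>
    #include <vector>

    struct Side {
        std::vector<int> nodes;  // sorted, minimum node index first
        int cell;                // cell that created the side
        int localSide;           // side index within that cell
    };

    // nodeSides[n] holds the processed sides whose minimum node index is n;
    // connectivity[c][s] receives the neighbor of cell c across its side s.
    void matchSide(std::vector<std::vector<Side>>& nodeSides,
                   std::vector<std::vector<int>>& connectivity,
                   std::vector<int> sideNodes, int cell, int localSide) {
        std::sort(sideNodes.begin(), sideNodes.end());
        auto& set = nodeSides[sideNodes.front()];
        for (auto it = set.begin(); it != set.end(); ++it) {
            if (it->nodes == sideNodes) {
                // matching side found: connect the two cells both ways
                connectivity[cell][localSide] = it->cell;
                connectivity[it->cell][it->localSide] = cell;
                set.erase(it);  // a side connects at most two cells
                return;
            }
        }
        set.push_back({std::move(sideNodes), cell, localSide});  // no match yet
    }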
When the empty node-side sets are discarded, the resulting node set contains the node-side sets with unconnected sides. Such sides represent the cells forming the boundary sub-zone. The sub-zone of parametric dimension-1 requires the complete definition of the zone topology. After the first traversal of all sides, the side nodes are grouped in a unique node set, which by definition doesn't allow duplicate nodes. The sub-zone nodes, when sorted in monotonically increasing order, represent the LG map of the boundary nodes. Simultaneously, the cell topology is created with the global node indexing of the sup-zone. Such a sub-zone topology is completely constructed when the node and cell indexing are done in the local index space. The related LG maps define the local indexing, which is applied to the cell topology and cell connectivity.
The result of the first iteration over boundary cells is:
the LG map of nodes,
the cell topology in global index space,
the LG map of cells.
The following step is the calculation of:
the cell topology in local index space,
the cell connectivity.
The cell topology in local index space is created from the cell topology in global index space, where for each global index the corresponding local index is found from the LG map of nodes. This algorithm is tuned with an appropriate hashing value to improve the searching performance.
Figure 52: Surface parts (a) multiple-connected, (b) disconnected and (c) mixed
The algorithm includes the traversal of all the sides on the front, allowing each cell to be considered only once. Practically, a cell is added only once to, and removed only once from, the front set of cells. The cell inclusion is controlled with the cell-done vector. When the front reaches the surface/curve boundary, the remaining sides are not connected to any cell. The traversal of cells then continues with the remaining cells which have not been searched; the next such cell becomes the starting cell for the creation of a new curve/surface zone. This treatment splits the zone into multiple connected or disconnected regions, named ZoneParts, see Figure 51.
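A minimal sketch of this front traversal, assuming the cell connectivity table of the previous section (illustrative names only):

    #include <vector>

    // Splits a zone into its ZoneParts by traversing the cell connectivity;
    // connectivity[c] lists the neighbors of cell c (-1 on a boundary side).
    // Returns, for each cell, the index of the part it belongs to.
    std::vector<int> zoneParts(const std::vector<std::vector<int>>& connectivity) {
        const int nCells = static_cast<int>(connectivity.size());
        std::vector<int> part(nCells, -1);    // doubles as the cell-done vector
        int nParts = 0;
        for (int seed = 0; seed < nCells; ++seed) {
            if (part[seed] != -1) continue;   // already reached by a front
            std::vector<int> front{seed};     // start a new ZonePart
            part[seed] = nParts;
            while (!front.empty()) {
                const int c = front.back();
                front.pop_back();
                for (int n : connectivity[c])
                    if (n != -1 && part[n] == -1) {
                        part[n] = nParts;     // each cell enters the front once
                        front.push_back(n);
                    }
            }
            ++nParts;
        }
        return part;
    }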
If the surface boundary exists, it is always closed. For example, when a surface has three boundaries, it can form different surface part arrangements, see Figure 52. Obviously, it is not sufficient to know that the surface boundary is a group of three curves, as in the example shown in Figure 52; the question whether the zone is multiply-connected or disconnected still remains. The traversal of the cells starts with an arbitrarily chosen cell and follows the cell connectivity information. The surface is recognized as a connected region when all the cells are reached. In addition, if there are no boundaries, the surface is closed. The algorithm is performed at the surface/curve level, as the created and visualized objects are in 2D/3D space.
When all the sides are traversed, the remaining entries in the node-side set represent the boundary cells of the zone: for a body these are surfaces, and for a surface they are curves. The boundaries are later used in the definition of boundary condition patches. For the internal and boundary surfaces, the topology has to be defined from the global node index space; thus, from every cell inside the zone it is possible to reach the sup-zones without leaving the zone itself.
Figure: The node topology, node connectivity, cell topology and cell connectivity derived from the original grid.
The node topology algorithm performs a traversal of all the zone cells, where for each node the connected edges are found and the surrounding cells are identified. The algorithm manipulates dynamic information, which implies that the exact length of the arrays storing the topological information can only be defined when the complete cell/node traversal has been performed. The required intermediate step is the creation of temporary sets for the processed state of the nodes and of the two disjoint sets storing the surrounding cells. The two disjoint sets avoid the recursive invocation of cells which were previously processed.
Initialization of the node_processed state
Initialization of the node_topology set
for each cell:
    for each node in cell:
        if (node_processed(node)) continue
        else:
            node_processed(node) = processed
            initialize the topology set
            initialize the cell_not_processed set
            insert cell in cell_not_processed set
            while (cell from cell_not_processed):
                insert cell in topology set
                for node, find local_node in cell
                for cell, find the sides set containing local_node
                for the sides set, find the connected_cells set
                if (connected_cells not in topology set):
                    insert connected_cells in cell_not_processed
            insert topology set in node_topology set

Metacode 2: The node topology algorithm
The logic of the algorithm is outlined in Metacode 2. The node topology set has all the necessary information to recreate the node-centered topological information for a zone, see Figure 47 in the Structured and Unstructured Topology section.
The node reduction algorithm is a minor modification of the node topology algorithm, where the cell connectivity information is available and a unique node indexing has to be calculated for the given zone. The problem is illustrated in Figure 54. The elimination of duplicate nodes is based on the coherence and tolerance model, see Section 1.1.3. Nodes are assumed to be the same if they are within the range defined by the tolerance model. The local node identification is not based on the local index, as in the case of the node topology algorithm, but on the tolerance model. In addition, a new node index space is created.
We assume that the connectivity information allows navigation around the node. A particular case, when a node builds two cells which are not connected through one of the cell sides, results in two nodes at the same place, see Figure 55. In such a case the boundary curve would intersect itself. This is avoided by applying the node reduction algorithm, which eliminates the creation of non-manifold geometry.
Figure 55: Navigation around the node: (a) one node (b) two nodes
The second variation of the node topology algorithm is the calculation of a node value based on an averaging mechanism. The problem is to find the cells surrounding the node and to calculate the node value from the cell values. The calculation of the node value reuses the navigation part of the node topology algorithm in a simplified form for the requested node. In addition, the cell can be customized to contain the information necessary to apply the desired averaging mechanism.
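As a sketch, with a plain arithmetic mean as the averaging mechanism (one possible choice; the text leaves the mechanism customizable):

    #include <vector>

    // Averages cell-centered values to a node, given the cells surrounding
    // the node as produced by the node topology algorithm.
    double nodeValue(const std::vector<int>& cellsAroundNode,
                     const std::vector<double>& cellValues) {
        if (cellsAroundNode.empty()) return 0.0;
        double sum = 0.0;
        for (int c : cellsAroundNode) sum += cellValues[c];
        return sum / static_cast<double>(cellsAroundNode.size());
    }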
Figure 56: Domain with boundary indexing in 2D & 3D and boundary identification
The domain can be analyzed as a hexahedron cell type, see Section 1.1.1, with the difference that the cell's faces are domain boundaries, see Figure 56. The domain itself can be arbitrarily oriented, but it must be right-handed. Because of the structured nature of the domain node indexing (i, j, k), the boundary indexing logic is different from the face indexing of the hexahedron cell. The rule defining the indexing is based on the permutation of the i, j, k planes, in increasing order for the minimum index value and in decreasing order for the maximum index value. Applying this rule, the following Table 11 is defined:
2D domain:

Boundary   Index value   Curve indices
B1         min           (i)
B2         min           (j)
B3         max           (j)
B4         max           (i)

3D domain:

Boundary   Index value   Surface indices
B1         min           (j, k)
B2         min           (i, k)
B3         min           (i, j)
B4         max           (i, j)
B5         max           (i, k)
B6         max           (j, k)

Table 11: Boundary indexing for 2D and 3D domains
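The 3D part of Table 11 might be encoded directly as data; a sketch with hypothetical names:

    // Boundary B1..B6 of a structured 3D domain: whether it lies at the
    // minimum or maximum of its index, and the two surface indices that
    // parameterize it (0 = i, 1 = j, 2 = k), following Table 11.
    struct Boundary3D { bool atMax; int surfaceIndices[2]; };

    constexpr Boundary3D kBoundaries3D[6] = {
        {false, {1, 2}},  // B1: min, (j, k)
        {false, {0, 2}},  // B2: min, (i, k)
        {false, {0, 1}},  // B3: min, (i, j)
        {true,  {0, 1}},  // B4: max, (i, j)
        {true,  {0, 2}},  // B5: max, (i, k)
        {true,  {1, 2}},  // B6: max, (j, k)
    };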
Figure: 2D example with three connected regions (I, II, III) between the domains D1, D2 and D3 and their segments S1, S2.
Connected region   Segment     Segment     Orientation
I                  D1.B4.S2    D2.B4.S2    R (reverse)
II                 D1.B3.S2    D3.B2.S1    E (equal)
III                D2.B4.S1    D3.B4.S2    R (reverse)

Table 12: Segment identification for the 2D example
Figure: Segment orientation and node reference conventions.
In the following 3D example, see Figure 59, the 3 connected surface areas are identified. The segment orientation and the node reference are added to the 3D input. Applying the same symbolic path for the segment identification as the one used in 2D, the following Table 13 is constructed:
Connected region   Segment     Segment     Orientation
I                  D1.B6.S4    D2.B6.S3    R (reverse)
II                 D2.B3.S2    D3.B5.S2    R (reverse)
III                D1.B6.S3    D3.B3.S2    E (equal)

Table 13: Segment identification for the 3D example
Figure 59: 3D example with three domains; the connected segments D1.B6.S3, D1.B6.S4, D2.B3.S2, D2.B6.S3, D3.B3.S2 and D3.B5.S2 are identified on the domain boundaries.
Figure: The winged-edge B-rep model: the sub/sup relationships between the Solid, Face, Edge and Vertex entities, and the Edge record with its Start and End nodes, its adjacent faces A and B, and the Next and Prev edges on each of those faces.
Figure: (a) local and (b) global edge orientation on the tetrahedron.
The global edge orientation is given as the direction from its start to its end node. This information is available from the node-based B-rep, where edges are defined in terms of nodes. In Table 15, the face f3 of the tetrahedron is defined with the edges e1, e5 and e4. The face orientation determines the local orientation of the edges when defining the face itself. The WEB representation of a face fA is based on the global edge orientation, which is aligned with the counter-clockwise orientation.
First Node-Edge        First Edge-Face
Node   Edge            Face   Edge
n0     +e0             f0     -e0
n1     +e1             f1     +e2
n2     +e2             f2     -e3
n3     -e3             f3     +e1

Edge   Start   End   Face A   Face B   Next A   Prev A   Next B   Prev B
e0     n0      n1    f2       f0       e4       e3       e2       e1
e1     n1      n2    f3       f0       e5       e4       e0       e2
e2     n2      n0    f1       f0       e3       e5       e1       e0
e3     n0      n3    f1       f2       e5       e2       e0       e4
e4     n1      n3    f2       f3       e3       e0       e1       e5
e5     n2      n3    f3       f1       e4       e1       e2       e3

Table 15: WEB representation of the tetrahedron
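The WEB records of Table 15 map naturally onto a winged-edge record; a hypothetical sketch:

    // Winged-edge record matching the columns of Table 15: an edge stores
    // its start/end nodes, its two adjacent faces A and B, and the next and
    // previous edges around each of those faces.
    struct WingedEdge {
        int start, end;      // node indices
        int faceA, faceB;    // adjacent faces
        int nextA, prevA;    // next/previous edge on face A
        int nextB, prevB;    // next/previous edge on face B
    };

    // Edge e0 of the tetrahedron from Table 15.
    constexpr WingedEdge e0{0, 1, 2, 0, 4, 3, 2, 1};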
Label   Node mask (n3 n2 n1 n0)   n   Edges intersected   Faces connected
0       0 0 0 0                   0   -                   -
1       0 0 0 1                   3   0-3-2               1-0-2
2       0 0 1 0                   3   0-1-4               2-3-1
3       0 0 1 1                   4   1-4-3-2             3-1-0-2
4       0 1 0 0                   3   1-2-5               2-0-3
5       0 1 0 1                   4   0-3-5-1             1-0-3-2
6       0 1 1 0                   4   0-2-5-4             2-0-3-1
7       0 1 1 1                   3   3-5-4               0-3-1
8       1 0 0 0                   3   3-4-5               1-3-0
9       1 0 0 1                   4   0-4-5-2             1-3-0-2
10      1 0 1 0                   4   0-1-5-3             2-3-0-1
11      1 0 1 1                   3   1-5-2               3-0-2
12      1 1 0 0                   4   1-2-3-4             2-0-1-3
13      1 1 0 1                   3   0-4-1               1-3-2
14      1 1 1 0                   3   0-2-3               2-0-1
15      1 1 1 1                   0   -                   -

Table 16: TETRAHEDRON intersection lookup table
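The entry point of the marching-cell algorithm is the computation of the intersection label from the node values; a sketch for the tetrahedron (the lookup-table access itself is omitted):

    #include <cstdint>

    // Builds the 4-bit intersection label of a tetrahedron: bit i is set when
    // the value at node i lies above the iso-value. The label selects a record
    // (intersected edges, connected faces) in Table 16; labels 0 and 15 mean
    // that there is no intersection.
    std::uint8_t intersectionLabel(const double nodeValues[4], double isoValue) {
        std::uint8_t label = 0;
        for (int i = 0; i < 4; ++i)
            if (nodeValues[i] > isoValue) label |= std::uint8_t(1u << i);
        return label;
    }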
Figure: Intersection patterns with the created points P0-P3 and the positive (+) nodes marked.
Figure 66: Edge traversal direction for nodes labeled as (a) FALSE and (b) TRUE.
Figure: Navigation graphs for the tetrahedron, pyramid and pentahedron cells, with intersection labels such as 2^5 = 32, 2^6 = 64 and 2^8 = 256 marked.
For readability, the pentahedron lookup tables are given in the appendix. The navigation graphs for the other cell types follow; see Figure 68, Figure 69 and Figure 70.
Figure: Navigation graph for the hexahedron cell.
Figure 71: Some intersection patterns for the hexahedron cell with the polygon partitions
The underlying cell map for the general n-polygon cell type is not defined. An unstructured surface can be composed only of triangles and quadrilaterals, because they are the only cell types defined for 2D topologies. In order to subdivide the n-polygon cell type, a very simple subdivision logic is applied, which minimizes the number of created cells by generating quadrilaterals whenever possible. To eliminate a random approach to the local subdivision (geometry orientation and connectivity), Table 17 was defined as one possibility to uniquely define the polygon partitions:
N-polygon   Subdivision
3           triangle
4           quadrilateral
5           quadrilateral + triangle
6           2 x quadrilateral
7           2 x quadrilateral + triangle

Table 17: N-polygon subdivision
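One way to realize the partitions of Table 17 is a fan from node 0 that emits quadrilaterals while at least four nodes remain; this is a sketch, not necessarily the exact partition fixed by the lookup tables:

    #include <array>
    #include <vector>

    // Fans an n-polygon (nodes 0 .. n-1) into quadrilaterals plus at most one
    // triangle, minimizing the number of created cells as in Table 17.
    void subdividePolygon(int n,
                          std::vector<std::array<int, 4>>& quads,
                          std::vector<std::array<int, 3>>& triangles) {
        int i = 1;
        while (n - i >= 3) {                    // enough nodes left for a quad
            quads.push_back({0, i, i + 1, i + 2});
            i += 2;
        }
        if (n - i == 2)                         // three nodes left: one triangle
            triangles.push_back({0, i, i + 1});
    }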
Label   nT   Triangles (edges)               Triangles (faces)              nQ   Quadrilaterals (edges)    Quadrilaterals (faces)
0       0    -                               -                              0    -                         -
1       1    0-4-3                           2-1-0                          0    -                         -
65      2    0-4-3, 6-10-9                   2-1-0, 4-5-3                   0    -                         -
173     3    0-5-1, 4-11-8, 6-9-10           2-3-0, 1-5-2, 3-5-4            0    -                         -
165     4    0-5-1, 4-11-8, 3-2-7, 6-9-10    2-3-0, 1-5-2, 0-4-1, 3-5-4     0    -                         -
2       0    -                               -                              1    1-4-3-2                   3-2-1-0
170     0    -                               -                              2    0-3-11-8, 1-9-10-2        0-1-5-2, 3-5-4-0
188     1    6-9-10                          3-5-4                          1    1-3-4-5                   0-1-2-3
225     2    0-7-3, 4-11-8                   6-1-0, 1-5-2                   1    0-5-6-7                   2-3-4-6
110     1    0-10-8                          6-5-2                          1    0-3-7-10                  0-1-4-6
77      0    -                               -                              2    0-4-7-10, 0-10-9-1        2-1-4-6, 6-5-3-0
52      1    1-4-5                           7-2-3                          2    1-2-6-9, 1-9-11-4         0-4-3-6, 6-5-1-7

Table 18: Records from the hexahedron lookup table with polygon partitions and multiple connected regions
The new lookup table consists of a header indicating the intersection label, the created triangles/quadrilaterals and the connected faces for the marching-cube algorithm. Obviously, the internal subdivision of a polygon into possible quadrilaterals and triangles requires cell connectivity indexing between the internal cells. In the hexahedral example, see Table 18, faces are defined which do not exist for the hexahedral cell; these faces are used for the internal cell connectivity indexing. It is assumed that any face with an index greater than the maximum face index of the intersected cell is an internal polygon cell.
Figure 74: Pathological case of the star-shaped polygon, and its polygon partition
The intersection pattern algorithm treats the intersected cell only topologically. Such a method is only guaranteed to produce sensible results when convex polygons are created. The pathological case, see Figure 74, shows a star-shaped polygon which could occur if the algorithm generated overlapping cells. However, such cases are not expected, because good numerical simulation grids are made of convex cells.
Integer   Bit representation   Edges intersected   Polygon created
0         F F F                -                   -
1         F F T                e1, e3              l1
2         F T F                e1, e2              l1
3         F T T                e2, e3              l1
4         T F F                e2, e3              l1
5         T F T                e1, e2              l1
6         T T F                e1, e3              l1
7         T T T                -                   -

Table 19: TRIANGLE intersection lookup table
Integer   Bit representation   Edges intersected   Polygon created
0         F F F F              -                   -
1         F F F T              e1, e4              l1
2         F F T F              e1, e2              l1
3         F F T T              e2, e4              l1
4         F T F F              e2, e3              l1
5         F T F T              e1, e2, e3, e4      l1, l2
6         F T T F              e1, e3              l1
7         F T T T              e3, e4              l1
8         T F F F              e3, e4              l1
9         T F F T              e1, e3              l1
10        T F T F              e1, e2, e3, e4      l1, l2
11        T F T T              e3, e2              l1
12        T T F F              e2, e4              l1
13        T T F T              e1, e2              l1
14        T T T F              e1, e4              l1
15        T T T T              -                   -

Table 20: QUADRILATERAL intersection lookup table
Q(x + Δx) = Q(x) + (∂Q/∂x) Δx + (∂²Q/∂x²) (Δx)²/2! + ...

Only the first two terms are required to indicate the behavior when Δx approaches zero. Thus, one has a fictitious continuum supported by mathematical techniques, especially the interpolation explained in the following section, which maps the numerically generated data as a discrete model into the continuum model.
In this section, the focus is on the development of the techniques necessary to support the continuity assumption, using numerically generated data as field quantities on a defined geometry. These data represent the approximation of the given continuum by the set of points defined in the discretized space, called the computational grid. The assumption of continuity has to be kept in mind even though each grid point stores the physical solution in discretized form. An interpolation method that expands the values of the discretized solution into a continuous solution over the whole domain models the concept of the continuum throughout the computational grid. Thus, the basis for the continuum field analysis is formed, and we can extract quantities related to different geometries such as point, curve or surface fields.
1D segment:                  U1 in E1, E2, E3
2D triangle, quadrilateral:  U2 in E2, E3
3D cell types:               U3 in E3
The interpolation is based on the parametric definition of the point location, including the mapping between the modeling and the parametric space. Each cell has the following two important parameters:
the Euclidian space En, with dimension n = 1, 2 or 3, defined with the variables x, y and z used as coordinates in the modeling space;
the parametric space Uk, with dimension k = 1, 2 or 3, defined with the variables u, v and w.
Figure 75: The mapping x = A(u) between the parametric space U3, with point P(u) and coordinates u(u, v, w), and the modeling space E3, with point P(x) and coordinates x(x, y, z); the Jacobian matrix is J = ∂A/∂u and the metric tensor is G = J^T J.

x = A(u)        (1.3.1-1)

and in vector notation:

x(x, y, z) = A[u(u, v, w)]        (1.3.1-2)
There are several geometrical variables, see Figure 75, which have to be defined for the mapping A; this mapping represents the basis for the coordinate transformation and, in addition, is applied in the definition of the Jacobian matrix J and the metric tensor G.
It is assumed that the cell is simply connected, thus topologically uniquely describing a single portion of the Euclidian space [62]. It is important to note that the topological structure of the cell, named the cell topology, is preserved by the mapping A. The cell is the closure of an open connected set, and it is assumed to be closed, including its boundary ∂Uk. This means in particular that the cell nodes are included in the mapping. The cell boundaries in the parametric cell space Uk are aligned with the constant coordinate axes or with lines/planes with unit normal n(1,1,1). Before the mapping A is derived, a cell in modeling space must be numerically specified with all its constituting nodes. The cell boundary ∂Uk must be given in order to define the boundary mapping as:

Akn : ∂Uk → En        (1.3.1-3)

before the extension to the cell interior is done. Each point in the parametric space U is mapped to a unique point in the modeling space E. Thus, for every point u ∈ Uk there exists a unique point x ∈ En and, for every point x ∈ En, there exists a unique point u ∈ Uk. Such a mapping is smooth and non-singular within the cell and, in addition, preserves the cell orientation.
As may be noted, the superscript and subscript indexing notation was used in this section to indicate the contravariant and covariant nature of the respective coordinate transformations, to keep the presentation general [77]. From now on, we assume that the applied coordinate systems are orthogonal, which implies that the respective contravariant and covariant coordinates are equal, and we use only subscript indexing.
1D:   f = Σ (i = 0 .. 1) ai u^i = a0 + a1 u

2D:   f = Σ (i = 0 .. 3) ai (u^r v^s)i ,   r + s ≤ 2
        = a0 + a1 u + a2 v + a3 u v

3D:   f = Σ (i = 0 .. 7) ai (u^r v^s w^t)i ,   r + s + t ≤ 3
        = a0 + a1 u + a2 v + a3 w + a4 u v + a5 u w + a6 v w + a7 u v w

(1.3.1-4)
For example, in the case of a quadrilateral cell, the 2D equation 1.3.1 - 4 can be satisfied for each cell node by
forming a set of simultaneous equations, for which we assume that the function value fN is known at each cell
node, as follows:
[ f0 ]   [ 1  u0  v0  u0 v0 ]   [ a0 ]
[ f1 ] = [ 1  u1  v1  u1 v1 ] · [ a1 ]
[ f2 ]   [ 1  u2  v2  u2 v2 ]   [ a2 ]
[ f3 ]   [ 1  u3  v3  u3 v3 ]   [ a3 ]        (1.3.1-5)
The unknown coefficients a can be calculated by finding the inverse of C, with fN defined at the cell nodes, as:

a = C^-1 fN        (1.3.1-6)

Once the coefficients a are computed, they can be substituted in equation 1.3.1-4 and written as

f = p a        (1.3.1-7)

where p(u) = [1, u, v, ..., u v, ...], resulting in

f(u) = p(u) C^-1 fN        (1.3.1-8)

The shape function h is defined as

h(u) = p(u) C^-1        (1.3.1-9)
and, when combined with the node solution fN, gives the polynomial form

f(u) = Σ (i = 0 .. M-1) hi(u) (fN)i        (1.3.1-10)

where M indicates the number of cell nodes. This equation reveals important constraints on the shape function h, expressed as:

hi = 1 at node i, and hj = 0 for every node j ≠ i        (1.3.1-11)
The shape function h has a value of unity at an arbitrary node i and is equal to zero at all other nodes. The variation of the solution along all boundaries is retained in order to satisfy the continuity condition. The outlined formulation has two known disadvantages [69]:
1. the inverse of C may not exist, and
2. the evaluation of C in general terms for all cell types involves considerable algebraic difficulties.
As the definition of the shape function h is necessary for all the elaborated cell types, it is appropriate to define C^-1 more explicitly for the code implementation. This is accomplished by applying the interpolation polynomial in the Lagrange form [78, 79], a numerical interpolation method well known for its systematic definition of shape functions. Lagrange's polynomials L satisfy the constraint given in equation 1.3.1-11 and the cell-to-cell continuity condition. Their definition is given in explicit form, in 1D coordinate space, as:
hi ≡ hu ≡ Li^n(u) = Π (j = 0 .. n, j ≠ i) (u - uj) / (ui - uj)        (1.3.1-12)
In the term Li^n, n stands for the number of subdivisions of the cell along the parametric coordinate axis. As the analyzed cell types are linear, the number of subdivisions is 1, and it is constant for all of them. In the following equations such indices are removed and replaced with the node number associated with the parametric coordinate (u, v). The explicit form of the 2D shape function is defined in analogy to 1D as follows:

hi ≡ huv ≡ Lu(u) Lv(v)        (1.3.1-13)
and the 3D shape function as:

hi ≡ huvw = Lu Lv Lw        (1.3.1-14)

The following relations satisfy the shape function conditions 1.3.1-11 in the different dimension spaces:

1D: hi = 1        2D: hi hj = δij        3D: hi hj hk = δijk        (1.3.1-15)
Lagrange polynomials also provide the means to define non-linear cell types. As the development of the scientific visualization system could be diversified towards non-linear cells, the presented structure represents a good basis for future interpolation methods, which will better approximate the results imported from numerical simulations into the visualization system.
As explained, the isoparametric mapping A is based on simple products of Lagrange polynomials in the parametric cell space U, supported by the values defined at the cell nodes:

f = A(u) = Σ (i = 1 .. M) hi(u) (fN)i        (1.3.1-16)
where M is the number of cell nodes and (fN)i are the node coordinates or solution values. The mapping is called isoparametric because the coordinates and the solution are treated with the same shape functions. The simplest shape functions, of order 1, are defined for 1D as:
L0(u) = (u - u1) / (u0 - u1) = (u - 1) / (0 - 1) = 1 - u

L1(u) = (u - u0) / (u1 - u0) = (u - 0) / (1 - 0) = u        (1.3.1-17)
Substituting 1.3.1-17 into equations 1.3.1-13, 1.3.1-14 and 1.3.1-15, the following results are obtained, respectively, for 1D, 2D and 3D. The fully developed mappings for the linear cell types follow:
1D: segment
$$ A = \sum_{i=0}^{1} h_i f_i^N = \sum_{i=0}^{1} a_i u^i = a_0 + a_1 u $$

A = h_0 f_0 + h_1 f_1, where h_0 ≡ L_0(u) = 1 - u and h_1 ≡ L_1(u) = u, so that

a_0 = f_0
a_1 = f_1 - f_0

A = a_0 + a_1 u

1.3.1 - 18
This is the isoparametric mapping for the segment cell. The derivation of the isoparametric mapping for the 2D cell types follows:
2D: quadrilateral

$$ A = \sum_{i=0}^{3} h_i(u) f_i = \sum_{i=0}^{3} a_i (u^r v^s)_i, \qquad r + s \le 2 $$

h_0 ≡ h_00 = L_0(u) L_0(v) = (1 - u)(1 - v)

a_0 = f_0
a_1 = f_1 - f_0
a_2 = f_3 - f_0
a_3 = f_0 - f_1 + f_2 - f_3 = -a_1 + f_2 - f_3

A = a_0 + a_1 u + a_2 v + a_3 u v = a_0 + u (a_1 + a_3 v) + a_2 v

1.3.1 - 19
The isoparametric mapping A for triangles is a degenerated case of the quadrilateral one, where the shape function of the dummy node, h_D ≡ h_11 = 0, annihilates the influence of the a_3 coefficient. The coefficients for the 2D triangle follow:
a_0 = f_0
a_1 = f_1 - f_0
a_2 = f_2 - f_0
h_D ≡ h_11 = 0  ⟹  a_3 = 0

A = a_0 + a_1 u + a_2 v

1.3.1 - 20
Note that the node indices are shifted according to the triangle node connectivity; see Table 3: TRIANGLE skeleton table on page 32. The most complex isoparametric mapping in the context of this thesis is the 3D mapping of the hexahedron:
$$ A = \sum_{i=0}^{7} h_i(u) f_i = \sum_{i=0}^{7} a_i (u^r v^s w^t)_i, \qquad r + s + t \le 3 $$
This mapping is fully developed before the specific mappings for the tetrahedron, pyramid and pentahedron cells, as they are degenerated cases of the hexahedron one. The shape functions for each node are defined in Table 21, and the mapping A is:

A = a_0 + a_1 u + a_2 v + a_3 w + a_4 u v + a_5 u w + a_6 v w + a_7 u v w
Table 21: Hexahedron shape functions

h_0 ≡ h_000 = (1 - u)(1 - v)(1 - w)
h_1 ≡ h_100 = u (1 - v)(1 - w)
h_2 ≡ h_110 = u v (1 - w)
h_3 ≡ h_010 = (1 - u) v (1 - w)
h_4 ≡ h_001 = (1 - u)(1 - v) w
h_5 ≡ h_101 = u (1 - v) w
h_6 ≡ h_111 = u v w
h_7 ≡ h_011 = (1 - u) v w
a_0 = f_0
a_1 = f_1 - f_0
a_2 = f_3 - f_0
a_3 = f_4 - f_0
a_4 = f_0 - f_1 + f_2 - f_3 = -a_1 + f_2 - f_3
a_5 = f_0 - f_1 - f_4 + f_5 = -a_1 - f_4 + f_5
a_6 = f_0 - f_3 - f_4 + f_7 = -a_2 - f_4 + f_7
a_7 = -f_0 + f_1 - f_2 + f_3 + f_4 - f_5 + f_6 - f_7 = -a_4 + f_4 - f_5 + f_6 - f_7
The coefficients are grouped to reduce the number of multiplication operations as:
A = a0 + u [a1 + v (a4 + a7 w) + a5 w] + v (a2 + a6 w) + a3 w
1.3.1 - 21
The 3D cells are always embedded in the hexahedron. As mentioned, the degenerated cases of the hexahedron are the tetrahedron, pyramid and pentahedron cells. The shape functions h_i containing the non-existing nodes are removed from the isoparametric mapping A by setting them equal to zero in equation 1.3.1-21. For the tetrahedron the existing nodes are 0, 1, 3 and 4 of the hexahedron, thus the shape functions h_2, h_5, h_6 and h_7 are zero, see Table 21:
h_2 = u v - u v w = 0; with h_6 = u v w = 0 this gives u v = 0
h_5 = u w - u v w = 0; with h_6 = u v w = 0 this gives u w = 0
h_6 = u v w = 0 gives u v w = 0
h_7 = v w - u v w = 0; with h_6 = u v w = 0 this gives v w = 0
In the hexahedron node indexing the coefficients are:

a_0 = f_0
a_1 = f_1 - f_0
a_2 = f_3 - f_0
a_3 = f_4 - f_0

and in the tetrahedron node indexing:

a_0 = f_0
a_1 = f_1 - f_0
a_2 = f_2 - f_0
a_3 = f_3 - f_0

A = a_0 + a_1 u + a_2 v + a_3 w

1.3.1 - 22
For a pyramid the complete base of the hexahedron is included, and in this way the hexahedron node indexing is equal to the pyramid one. The last three shape functions h_5, h_6 and h_7 are zero, see the tetrahedron derivation, and the existing coefficients are:

a_0 = f_0
a_1 = f_1 - f_0
a_2 = f_3 - f_0
a_3 = f_4 - f_0
a_4 = f_0 - f_1 + f_2 - f_3 = f_2 - f_3 - a_1

A = a_0 + u (a_1 + v a_4) + v a_2 + w a_3

1.3.1 - 23
The last 3D cell is the pentahedron, also called the prism. It is a half of the hexahedron, as it does not include hexahedron nodes 2 and 6. Thus, the shape functions h_2 and h_6 are zero, see the tetrahedron derivation, and the coefficients are:

a_0 = f_0
a_1 = f_1 - f_0
a_2 = f_2 - f_0
a_3 = f_3 - f_0
a_4 = -a_1 - f_3 + f_4
a_5 = -a_2 - f_3 + f_5

The mapping is

A = a_0 + u (a_1 + a_4 w) + v (a_2 + a_5 w) + a_3 w

1.3.1 - 24
The number of multiplication operations is reduced with adequate grouping of the mapping coefficients, as shown in Table 22.

Table 22: Number of multiplication operations for the simple and the grouped form of the mapping A, per cell type (T2N3, T2N4, T3N4, T3N5, T3N6, T3N8); for T3N8 the simple form takes 12 multiplications.
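To make the grouped evaluation concrete, a minimal C++ sketch follows; the function names and the plain std::array types are illustrative assumptions, not taken from the thesis implementation. It computes the hexahedron coefficients of equation 1.3.1-21 from the eight nodal values and evaluates A in the grouped form, which under this grouping takes 7 multiplications against 12 for the simple monomial form of T3N8.

```cpp
#include <array>

// Coefficients a0..a7 of the trilinear mapping, computed from the eight
// nodal values f0..f7 of a hexahedron (equation 1.3.1-21).
std::array<double, 8> hexaCoefficients(const std::array<double, 8>& f) {
    std::array<double, 8> a{};
    a[0] = f[0];
    a[1] = f[1] - f[0];
    a[2] = f[3] - f[0];
    a[3] = f[4] - f[0];
    a[4] = -a[1] + f[2] - f[3];
    a[5] = -a[1] - f[4] + f[5];
    a[6] = -a[2] - f[4] + f[7];
    a[7] = -a[4] + f[4] - f[5] + f[6] - f[7];
    return a;
}

// Grouped (Horner-like) evaluation of A(u,v,w): 7 multiplications here
// instead of the 12 of the simple monomial form.
double evalHexa(const std::array<double, 8>& a, double u, double v, double w) {
    return a[0]
         + u * (a[1] + v * (a[4] + a[7] * w) + a[5] * w)
         + v * (a[2] + a[6] * w)
         + a[3] * w;
}
```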
Jacobian matrix
The modeling space E is discretized with an arbitrary set of cells and it is global to all of them. The parametric
space U is local to every cell. The isoparametric mapping A, applied to a point P, transforms its parametric
coordinates u to modeling coordinates x.
The cell coordinates are used as mapping support, equation 1.3.1 - 16, and the mapping A becomes the
coordinates transformation x(u), see Figure 76.
x ≡ x_i(u, v, w),  i = 1, 2, 3

1.3.1 - 25
Such a mapping has little value if the inverse mapping A⁻¹, denoted u(x), does not exist. The mapping u(x) allows the backward coordinate transformation to the parametric space, as shown in Figure 76.

u ≡ u_i(x, y, z),  i = 1, 2, 3

1.3.1 - 26
Figure 76: The isoparametric mapping x(u) and its inverse u(x) between the parametric space U3, with base vectors (e_u, e_v, e_w), and the modeling space E3, with base vectors (e_x, e_y, e_z), for a point P
The inverse mapping can be found if x is single-valued and continuously differentiable in the neighborhood of a point P, provided that the Jacobian matrix

$$ \mathbf{J} = \frac{\partial \mathbf{x}}{\partial \mathbf{u}} \tag{1.3.1-27} $$

has a non-vanishing determinant:
$$ J = |\mathbf{J}| = \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} & \dfrac{\partial x}{\partial w} \\[4pt] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} & \dfrac{\partial y}{\partial w} \\[4pt] \dfrac{\partial z}{\partial u} & \dfrac{\partial z}{\partial v} & \dfrac{\partial z}{\partial w} \end{vmatrix} \neq 0 \tag{1.3.1-28} $$
This implies the existence of the inverse J⁻¹. J is calculated from A, see equation 1.3.1-16, as
$$ \mathbf{J} = \frac{\partial \mathbf{x}}{\partial \mathbf{u}} = \frac{\partial A(\mathbf{u})}{\partial \mathbf{u}} \tag{1.3.1-29} $$
For the cell origin, J is defined with the cell base vectors [e_u, e_v, e_w], see Figure 76, as

$$ \mathbf{J} = [\mathbf{e}_u, \mathbf{e}_v, \mathbf{e}_w] = \begin{bmatrix} e_{ux} & e_{vx} & e_{wx} \\ e_{uy} & e_{vy} & e_{wy} \\ e_{uz} & e_{vz} & e_{wz} \end{bmatrix} $$
In E3 the triad [e_u, e_v, e_w] serves as a basis for U3 provided that the vectors are not coplanar:

e_u · (e_v × e_w) ≠ 0
The three unit vectors each have only one non-vanishing component in U3:

e_u ≡ e(1) = (1, 0, 0)
e_v ≡ e(2) = (0, 1, 0)
e_w ≡ e(3) = (0, 0, 1)

The suffixes of e are enclosed in parentheses to show that they do not denote components. The j-th component of e(i) is denoted by e(i)_j and satisfies the relation

e(i)_j = δ_ij
In U3 any vector a can be expressed in the form
a = a_i e(i)
and the summation convention is also applied to suffixes enclosed in parentheses.
The isoparametric mapping for quantities is important, as it allows the combined treatment of quantity interpolation and mapping between the parametric and modeling coordinate spaces. When the modeling quantities have to be manipulated, and therefore defined in both coordinate spaces, the knowledge of J is required to define the quantity components in both spaces, while the quantity itself is invariant. These characteristics are used in the vector line algorithm, see section 1.4.5, when the integration of the particle path is performed through the parametric vector field. Another application of the Jacobian matrix is the calculation of derived quantities, for example vorticity. The derivation of the Jacobian matrices for the predefined cell types follows:
Jacobian 1D matrix:

$$ \mathbf{J}_{1D} = \left[ \frac{\partial A}{\partial u} \right] \tag{1.3.1-30} $$

1D segment: A = a_0 + a_1 u

$$ \mathbf{J}_{T1N2} = \frac{\partial A}{\partial u} = a_1 \tag{1.3.1-31} $$
Jacobian 2D matrix:

$$ \mathbf{J}_{2D} = \left[ \frac{\partial A}{\partial u}, \frac{\partial A}{\partial v} \right] \tag{1.3.1-32} $$

2D triangle: A = a_0 + a_1 u + a_2 v

$$ \mathbf{J}_{T2N3}: \quad \frac{\partial A}{\partial u} = a_1, \qquad \frac{\partial A}{\partial v} = a_2 \tag{1.3.1-33} $$

2D quadrilateral: A = a_0 + a_1 u + a_2 v + a_3 u v

$$ \mathbf{J}_{T2N4}: \quad \frac{\partial A}{\partial u} = a_1 + a_3 v, \qquad \frac{\partial A}{\partial v} = a_2 + a_3 u \tag{1.3.1-34} $$
Jacobian 3D matrix:

$$ \mathbf{J}_{3D} = \left[ \frac{\partial A}{\partial u}, \frac{\partial A}{\partial v}, \frac{\partial A}{\partial w} \right] \tag{1.3.1-35} $$

3D hexahedron: A = a_0 + u [a_1 + a_5 w + v (a_4 + a_7 w)] + v (a_2 + a_6 w) + a_3 w

$$ \mathbf{J}_{T3N8}: \quad \frac{\partial A}{\partial u} = a_1 + a_5 w + v (a_4 + a_7 w), \quad \frac{\partial A}{\partial v} = a_2 + u (a_4 + a_7 w) + a_6 w, \quad \frac{\partial A}{\partial w} = a_3 + a_6 v + u (a_5 + a_7 v) \tag{1.3.1-36} $$

3D tetrahedron: A = a_0 + a_1 u + a_2 v + a_3 w

$$ \mathbf{J}_{T3N4}: \quad \frac{\partial A}{\partial u} = a_1, \qquad \frac{\partial A}{\partial v} = a_2, \qquad \frac{\partial A}{\partial w} = a_3 \tag{1.3.1-37} $$

3D pyramid: A = a_0 + (a_1 + a_4 v) u + a_2 v + a_3 w

$$ \mathbf{J}_{T3N5}: \quad \frac{\partial A}{\partial u} = a_1 + a_4 v, \qquad \frac{\partial A}{\partial v} = a_2 + a_4 u, \qquad \frac{\partial A}{\partial w} = a_3 \tag{1.3.1-38} $$

3D prism: A = a_0 + u (a_1 + a_4 w) + v (a_2 + a_5 w) + a_3 w

$$ \mathbf{J}_{T3N6}: \quad \frac{\partial A}{\partial u} = a_1 + a_4 w, \qquad \frac{\partial A}{\partial v} = a_2 + a_5 w, \qquad \frac{\partial A}{\partial w} = a_3 + a_4 u + a_5 v \tag{1.3.1-39} $$
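A corresponding sketch for assembling the hexahedron Jacobian of equation 1.3.1-36 is shown below; the Vec3 alias and the column-wise storage of J are assumptions made for the example.

```cpp
#include <array>

// A 3-component vector; each trilinear coefficient a_i is such a vector
// when the mapping support is the cell coordinates.
using Vec3 = std::array<double, 3>;

// Columns of J = [dA/du, dA/dv, dA/dw] for a hexahedron, following
// equation 1.3.1-36, evaluated component by component.
std::array<Vec3, 3> hexaJacobian(const std::array<Vec3, 8>& a,
                                 double u, double v, double w) {
    std::array<Vec3, 3> J{};
    for (int c = 0; c < 3; ++c) {
        J[0][c] = a[1][c] + a[5][c] * w + v * (a[4][c] + a[7][c] * w); // dA/du
        J[1][c] = a[2][c] + u * (a[4][c] + a[7][c] * w) + a[6][c] * w; // dA/dv
        J[2][c] = a[3][c] + a[6][c] * v + u * (a[5][c] + a[7][c] * v); // dA/dw
    }
    return J;
}
```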
Metric tensor
The metric tensor is one of the basic objects in differential geometry [77, 80] and is related to geometrical properties such as length, area or volume, respectively in 1D, 2D or 3D coordinate space. It relates the two coordinate systems for which the isoparametric mapping A and the Jacobian matrix J are defined. The metric tensor G can be written as

$$ \mathbf{G} = \mathbf{J}^{T} \mathbf{J} \tag{1.3.1-40} $$
This relation is important in the case when parametric and modeling spaces differ in dimension. For example,
when a quadrilateral cell is placed in the 3D modeling space the Jacobian matrix is not of the square type:
$$ \mathbf{J} = \begin{bmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} \\[4pt] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} \\[4pt] \dfrac{\partial z}{\partial u} & \dfrac{\partial z}{\partial v} \end{bmatrix} \tag{1.3.1-41} $$
This is a significant inconvenience for the calculation of the inverse Jacobian matrix J⁻¹, as in general J is a rectangular matrix. Let us consider its Moore-Penrose generalized inverse J†; see [81] for equalities on generalized inverses. The metric tensor G = JᵀJ is square and regular, thus its generalized inverse is equal to its ordinary inverse, so that

G⁻¹ = (Jᵀ J)†
G⁻¹ = J† (J†)ᵀ
G⁻¹ Jᵀ = J† (J†)ᵀ Jᵀ
G⁻¹ Jᵀ = J† (J J†)ᵀ
G⁻¹ Jᵀ = J† J J†
G⁻¹ Jᵀ = J†

1.3.1 - 42
The inverse of the metric tensor G⁻¹ can be explicitly calculated, G being by definition a square matrix:

G = {g_ij}

1.3.1 - 43

where

$$ g_{ij} = \sum_{k=1}^{n} \frac{\partial x_k}{\partial u_i} \frac{\partial x_k}{\partial u_j} = \mathbf{x}_{u_i} \cdot \mathbf{x}_{u_j} \tag{1.3.1-44} $$
The definition in vector notation shows that g_ij is the dot product of the tangent vector of the i-th coordinate axis with the tangent vector of the j-th coordinate axis, viewed from the modeling coordinate space.
Figure 77: The quadrilateral surface cell with tangent vectors x_u and x_v

$$ \mathbf{n} = \frac{\mathbf{x}_u \times \mathbf{x}_v}{|\mathbf{x}_u \times \mathbf{x}_v|} \tag{1.3.1-45} $$
The four nodes of the quadrilateral define the surface, see Figure 77. A surface point for which the normal exists is a regular surface point and has to satisfy the following condition:

x_u × x_v ≠ 0

1.3.1 - 46
If the above condition is satisfied, the vectors xu and xv are not collinear and they define the tangential plane. If
the condition is not satisfied the surface point is singular and the Jacobian of such coordinates transformation is
zero. Following the equation 1.3.1-41 the transpose Jacobian matrix is:
$$ \mathbf{J}^T = \begin{bmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial y}{\partial u} & \dfrac{\partial z}{\partial u} \\[4pt] \dfrac{\partial x}{\partial v} & \dfrac{\partial y}{\partial v} & \dfrac{\partial z}{\partial v} \end{bmatrix} \tag{1.3.1-47} $$
and applying it to the equation 1.3.1-40 , we obtain the following expanded form:
$$ \mathbf{G} = \mathbf{J}^T \mathbf{J} = \begin{bmatrix} \left(\dfrac{\partial x}{\partial u}\right)^2 + \left(\dfrac{\partial y}{\partial u}\right)^2 + \left(\dfrac{\partial z}{\partial u}\right)^2 & \dfrac{\partial x}{\partial u}\dfrac{\partial x}{\partial v} + \dfrac{\partial y}{\partial u}\dfrac{\partial y}{\partial v} + \dfrac{\partial z}{\partial u}\dfrac{\partial z}{\partial v} \\[6pt] \dfrac{\partial x}{\partial u}\dfrac{\partial x}{\partial v} + \dfrac{\partial y}{\partial u}\dfrac{\partial y}{\partial v} + \dfrac{\partial z}{\partial u}\dfrac{\partial z}{\partial v} & \left(\dfrac{\partial x}{\partial v}\right)^2 + \left(\dfrac{\partial y}{\partial v}\right)^2 + \left(\dfrac{\partial z}{\partial v}\right)^2 \end{bmatrix} \tag{1.3.1-48} $$
from which G⁻¹ can be calculated; thus we have defined all the elements for calculating J† in equation 1.3.1-42.
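The chain of equalities above translates directly into code. The following C++ sketch, with hypothetical names, computes J† = G⁻¹Jᵀ for a 3×2 Jacobian of a surface cell embedded in 3D, using the closed-form inverse of the 2×2 metric tensor.

```cpp
#include <array>

// J is a 3x2 Jacobian of a surface cell in 3D: J[i][j] = dx_i/du_j.
// Its Moore-Penrose inverse is Jdag = G^-1 * J^T with the 2x2 metric
// tensor G = J^T J (equations 1.3.1-40 and 1.3.1-42).
std::array<std::array<double, 3>, 2>
pseudoInverse(const std::array<std::array<double, 2>, 3>& J) {
    // G = J^T J (2x2, symmetric)
    double g00 = 0, g01 = 0, g11 = 0;
    for (int k = 0; k < 3; ++k) {
        g00 += J[k][0] * J[k][0];
        g01 += J[k][0] * J[k][1];
        g11 += J[k][1] * J[k][1];
    }
    const double det = g00 * g11 - g01 * g01;  // |G| > 0 at a regular point
    // G^-1 in closed form for a 2x2 matrix
    const double i00 = g11 / det, i01 = -g01 / det, i11 = g00 / det;
    // Jdag = G^-1 J^T (2x3)
    std::array<std::array<double, 3>, 2> Jdag{};
    for (int k = 0; k < 3; ++k) {
        Jdag[0][k] = i00 * J[k][0] + i01 * J[k][1];
        Jdag[1][k] = i01 * J[k][0] + i11 * J[k][1];
    }
    return Jdag;
}
```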
Some additional characteristics relate the determinants of the Jacobian matrix and the metric tensor, defined as

G = |G|

1.3.1 - 49

and, if the Jacobian matrix is of the square type, the metric determinant G is expressed explicitly with the Jacobian determinant J as

J = |J|,  G = J²

1.3.1 - 50
For example, the calculation of cell length L, area A and volume V is possible by knowing the Jacobian of the isoparametric mapping:

volume: J = dV / dV₀,  area: J = dA / dA₀,  length: J = dL / dL₀
The Jacobian matrix is also applied when the gradient, divergence and curl operators are computed for the related scalar or vector quantity field.
When the gradient operator grad( ) is applied to a scalar function of position s, it produces a vector ∇s:

$$ \mathbf{v} \equiv \nabla s = \operatorname{grad} s = \frac{\partial s}{\partial x}\mathbf{e}_x + \frac{\partial s}{\partial y}\mathbf{e}_y + \frac{\partial s}{\partial z}\mathbf{e}_z \tag{1.3.2-1} $$
The divergence of a vector field v is the scalar

$$ \operatorname{div} \mathbf{v} = \frac{\partial v_x}{\partial x} + \frac{\partial v_y}{\partial y} + \frac{\partial v_z}{\partial z} \tag{1.3.2-2} $$
The important physical meaning of the divergence of the velocity field v is that it represents the relative rate of space dilatation when computed along the particle trace. Consider the cell around the point P. By the fluid motion this cell is moved and distorted. As the fluid motion cannot break up, by the continuity law its volume is dV₀ = J dV and hence

J = dV(t₀) / dV(t)

1.3.2 - 3

defines the ratio of the cell volume at the beginning and at time t, called the dilatation or expansion [77].
Suppose a velocity vector field v(x) is defined in a 3D Euclidean (modeling) space. The vorticity field ω(x) is the circulation of the velocity field v around a cell area taken perpendicularly to the direction of ω, and is obtained by computing at each point the curl of the velocity field v:

$$ \boldsymbol{\omega} = \operatorname{curl} \mathbf{v}, \qquad \omega_x = \frac{\partial v_z}{\partial y} - \frac{\partial v_y}{\partial z}, \quad \omega_y = \frac{\partial v_x}{\partial z} - \frac{\partial v_z}{\partial x}, \quad \omega_z = \frac{\partial v_y}{\partial x} - \frac{\partial v_x}{\partial y} \tag{1.3.2-4} $$
Analytical Model
Suppose v is given as a function in the parametric coordinate system u(u,v,w), with a well-known link to the modeling space x(x,y,z) through the isoparametric mapping, see section 1.3.1 on page 74. The components of the velocity vector v can be written as:
v( u ) = [v x ( u , v , w ), v y ( u , v , w ), v z ( u , v , w )]
1.3.2 - 5
and by the law of partial derivation, applied to the Jacobian matrix, see section 1.3.1 on page 80:

$$ \frac{d}{dt}\mathbf{J}^{-1} = \frac{d}{dt}\left(\frac{\partial u_i}{\partial x_j}\right) = \frac{\partial}{\partial x_j}\left(\frac{d u_i}{dt}\right) = \frac{\partial v_i}{\partial x_j}, \qquad i, j = 1, 2, 3 \tag{1.3.2-6} $$
and defining the partial derivative of the vector component along the modeling coordinate axis:
$$ \frac{\partial v_i}{\partial x_j} = \frac{\partial v_i}{\partial u_1}\frac{\partial u_1}{\partial x_j} + \frac{\partial v_i}{\partial u_2}\frac{\partial u_2}{\partial x_j} + \frac{\partial v_i}{\partial u_3}\frac{\partial u_3}{\partial x_j} = \frac{\partial v_i}{\partial u_k}\frac{\partial u_k}{\partial x_j} \tag{1.3.2-7} $$
where u(u₁, u₂, u₃) ≡ u(u, v, w), and for the specific case of ∂v_y/∂z:
$$ \frac{\partial v_y}{\partial z} = \frac{\partial v_y}{\partial u}\frac{\partial u}{\partial z} + \frac{\partial v_y}{\partial v}\frac{\partial v}{\partial z} + \frac{\partial v_y}{\partial w}\frac{\partial w}{\partial z} \tag{1.3.2-8} $$
or putting it into the form of matrix multiplication
$$ \frac{\partial \mathbf{v}}{\partial \mathbf{x}} = \frac{\partial \mathbf{v}}{\partial \mathbf{u}}\, \mathbf{J}^{-1} \tag{1.3.2-9} $$
and, expanding the notation:

$$ \begin{bmatrix} \dfrac{\partial v_x}{\partial x} & \dfrac{\partial v_x}{\partial y} & \dfrac{\partial v_x}{\partial z} \\[4pt] \dfrac{\partial v_y}{\partial x} & \dfrac{\partial v_y}{\partial y} & \dfrac{\partial v_y}{\partial z} \\[4pt] \dfrac{\partial v_z}{\partial x} & \dfrac{\partial v_z}{\partial y} & \dfrac{\partial v_z}{\partial z} \end{bmatrix} = \begin{bmatrix} \dfrac{\partial v_x}{\partial u} & \dfrac{\partial v_x}{\partial v} & \dfrac{\partial v_x}{\partial w} \\[4pt] \dfrac{\partial v_y}{\partial u} & \dfrac{\partial v_y}{\partial v} & \dfrac{\partial v_y}{\partial w} \\[4pt] \dfrac{\partial v_z}{\partial u} & \dfrac{\partial v_z}{\partial v} & \dfrac{\partial v_z}{\partial w} \end{bmatrix} \begin{bmatrix} \dfrac{\partial u}{\partial x} & \dfrac{\partial u}{\partial y} & \dfrac{\partial u}{\partial z} \\[4pt] \dfrac{\partial v}{\partial x} & \dfrac{\partial v}{\partial y} & \dfrac{\partial v}{\partial z} \\[4pt] \dfrac{\partial w}{\partial x} & \dfrac{\partial w}{\partial y} & \dfrac{\partial w}{\partial z} \end{bmatrix} $$

The components of the ∂v/∂x matrix are applied in the equations for gradient, divergence and vorticity. The parametric derivative ∂v/∂u is calculated locally for each cell node when the inverse of the Jacobian J exists.
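A small C++ sketch of this chain-rule evaluation is given below; the Mat3 alias and the index conventions (dvdu[i][j] = ∂v_i/∂u_j, Jinv[i][j] = ∂u_i/∂x_j) are assumptions of the example, not the thesis implementation.

```cpp
#include <array>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Velocity gradient in modeling space from the parametric derivatives
// (equation 1.3.2-9): dv/dx = (dv/du) * J^-1.
Mat3 velocityGradient(const Mat3& dvdu, const Mat3& Jinv) {
    Mat3 dvdx{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                dvdx[i][j] += dvdu[i][k] * Jinv[k][j];
    return dvdx;
}

// Divergence and vorticity then follow directly (equations 1.3.2-2 and
// 1.3.2-4): div v = trace(dvdx); omega_x = dvdx[2][1] - dvdx[1][2], etc.
```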
Numerical Model
The velocity field is given at the cell nodes, and for each node the parametric derivatives are approximated by differences of the nodal values; the shorthand p-m below stands for (v_i)_p - (v_i)_m.

1D segment:

(∂v/∂u)_{1,0} = v_1 - v_0

2D triangle (the derivatives are constant over the linear cell):

∂v_i/∂u = (v_i)_1 - (v_i)_0
∂v_i/∂v = (v_i)_2 - (v_i)_0

2D quadrilateral:

(∂v_i/∂u)_{1,0} = (v_i)_1 - (v_i)_0
(∂v_i/∂u)_{2,3} = (v_i)_2 - (v_i)_3
(∂v_i/∂v)_{3,0} = (v_i)_3 - (v_i)_0
(∂v_i/∂v)_{2,1} = (v_i)_2 - (v_i)_1

node | ∂v_i/∂u | ∂v_i/∂v
0 | 1-0 | 3-0
1 | 1-0 | 2-1
2 | 2-3 | 2-1
3 | 2-3 | 3-0

3D tetrahedron (identical at all four nodes):

node | ∂v_i/∂u | ∂v_i/∂v | ∂v_i/∂w
0-3 | 1-0 | 2-0 | 3-0

3D pyramid:

node | ∂v_i/∂u | ∂v_i/∂v | ∂v_i/∂w
0 | 1-0 | 3-0 | 4-0
1 | 1-0 | 2-1 | 4-0
2 | 2-3 | 2-1 | 4-0
3 | 2-3 | 3-0 | 4-0
4 | 1-0 | 3-0 | 4-0

3D pentahedron:

node | ∂v_i/∂u | ∂v_i/∂v | ∂v_i/∂w
0 | 1-0 | 2-0 | 3-0
1 | 1-0 | 2-0 | 4-1
2 | 1-0 | 2-0 | 5-2
3 | 4-3 | 5-3 | 3-0
4 | 4-3 | 5-3 | 4-1
5 | 4-3 | 5-3 | 5-2

3D hexahedron:

node | ∂v_i/∂u | ∂v_i/∂v | ∂v_i/∂w
0 | 1-0 | 3-0 | 4-0
1 | 1-0 | 2-1 | 5-1
2 | 2-3 | 2-1 | 6-2
3 | 2-3 | 3-0 | 7-3
4 | 5-4 | 7-4 | 4-0
5 | 5-4 | 6-5 | 5-1
6 | 6-7 | 6-5 | 6-2
7 | 6-7 | 7-4 | 7-3
The linear derivatives for the considered cell types are introduced into the node connectivity algorithm for the calculation of the uniquely defined quantity at each cell node, as sketched below.
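Assuming the reconstructed index pairs of the hexahedron table above, the nodal differences can be encoded compactly, as in the following hypothetical C++ sketch.

```cpp
#include <array>

// Node-difference approximation of the parametric derivatives at the
// eight hexahedron nodes: each pair {p, m} means the derivative at node
// n is approximated by v[p] - v[m] (entries follow the table above).
constexpr int DU[8][2] = {{1,0},{1,0},{2,3},{2,3},{5,4},{5,4},{6,7},{6,7}};
constexpr int DV[8][2] = {{3,0},{2,1},{2,1},{3,0},{7,4},{6,5},{6,5},{7,4}};
constexpr int DW[8][2] = {{4,0},{5,1},{6,2},{7,3},{4,0},{5,1},{6,2},{7,3}};

// (dv/du, dv/dv, dv/dw) of a nodal scalar v at hexahedron node n.
std::array<double, 3> nodeDerivatives(const std::array<double, 8>& v, int n) {
    return { v[DU[n][0]] - v[DU[n][1]],
             v[DV[n][0]] - v[DV[n][1]],
             v[DW[n][0]] - v[DW[n][1]] };
}
```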
The cutting plane is defined by the plane equation

A · x + D = 0

1.4.1 - 1

where the vector A(A, B, C) is along the normal to the plane. The distance d₀ between the point x₀(x₀, y₀, z₀) and the plane is:

$$ d_0 = \frac{A x_0 + B y_0 + C z_0 + D}{|\mathbf{A}|} = \frac{\mathbf{A} \cdot \mathbf{x}_0 + D}{\sqrt{A^2 + B^2 + C^2}} $$

$$ d_0 = \mathbf{n} \cdot \mathbf{x}_0 - p, \qquad \mathbf{n} = \frac{\mathbf{A}}{|\mathbf{A}|}, \qquad p = \frac{D}{|\mathbf{A}|} \tag{1.4.1-2} $$
where the sign of the square root is chosen to be opposite to that of D. The sign of the distance is defined as:

sign(d₀) = sign(A · x₀ + D)

1.4.1 - 3

for all cell nodes; sign(d₀) defines on which side of the plane the node x₀ is located. Each node position vector is projected on the normal n of the cutting plane in order to determine its sign(d₀) (positive or negative). The set of the cell nodes' sign(d₀) describes the cell intersection pattern, as shown in the lookup table where all the possible intersection patterns are labeled and defined, see Section 1.2.6.
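A minimal sketch of the sign test and the resulting intersection mask follows; the bit-per-node encoding of the mask is an assumption chosen here to index a lookup table, not something prescribed by the text.

```cpp
#include <cstdint>

struct Vec3 { double x, y, z; };

// Signed distance of a node to the cutting plane (equations 1.4.1-2 and
// 1.4.1-3); n is the unit normal A/|A| and p = D/|A| with the sign
// convention of the text.
inline double signedDistance(const Vec3& n, double p, const Vec3& x0) {
    return n.x * x0.x + n.y * x0.y + n.z * x0.z - p;
}

// Intersection pattern of a cell: one bit per node, set when the node
// lies on the positive side of the plane; the mask then indexes the
// intersection lookup table of the cell type (Section 1.2.6).
uint8_t intersectionMask(const Vec3& n, double p,
                         const Vec3* nodes, int numNodes) {
    uint8_t mask = 0;
    for (int i = 0; i < numNodes; ++i)
        if (signedDistance(n, p, nodes[i]) > 0.0)
            mask |= uint8_t(1u << i);
    return mask;
}
```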
Figure 78: Possible singular cases of intersections with a node, an edge and a face
(flowchart: decomposition of the cutting algorithm over surfaces, lines and points; its first step is to find all intersected cells on the volume boundary and create the seeds set)
The algorithm selects the appropriate cell type based on the parametric cell dimension and the number of cell nodes. The node mask defines the cell intersection pattern, from which the intersection can be calculated. To keep track in the marching-cell algorithm, the intersected cells must be removed from the seed set in order to avoid duplicated processing of intersected cells. For each intersected cell, the intersections of its edges with the cutting plane are computed. The nodes on the intersected edges define polygons, see Section 1.2.6, which all together define the cutting plane. In principle, these polygons are used for defining the graphical output primitive, see Section 3.7.2. The intersection of edge and plane is calculated in parametric form using a linear interpolation between the two edge nodes; the resulting ratio is applied to compute the quantity value at the point of intersection. When the surface is kept for further processing, the created parametric field (nodes and the quantity parameter) is saved in order to speed up the recalculation of other available quantities.
The orientation of all created cells must be consistent, because the graphics rendering techniques require the normals as input to the shading and contouring algorithms. The orientation is implicitly given in the lookup table, where the order of triangle and quadrilateral nodes defines the local coordinate system and thus the normal to the generated cells (see Section 1.2.6). The same approach is used for pyramids, prisms and hexahedrons. Intersection patterns with several internal pieces may be more complex and result in larger lookup tables, but processing remains extremely rapid with no need for conditional branching code.
(figure: correspondence between the parametric coordinates (u,v,w), (u+1,v,w), (u,v+1,w), (u,v,w+1) and the structured indices (i,j,k), (i+1,j,k), (i,j+1,k), (i,j,k+1); the extracted surface is handled either in normal (display) mode or in save mode)
The created polygons (triangles and quadrilaterals) are handled in two modes. The display mode manipulates polygons only, so as to speed up interactive processing. The saving process calculates the needed cell-connectivity information, surface boundary and quantity parameters, taking into account the original 3D domain from which the surface is extracted. The parametric field allows recalculating the surface field for an arbitrarily chosen quantity, thus reducing the processing and memory requirements of such an operation. The save mode algorithm is decomposed into the following steps (a code sketch follows the list):
find all intersected cells on the boundary and create the seeds set
while (seeds not empty)
    initialize the set of cells to be intersected, xcells, and add one seed cell to it
    add connected cells to xcells if not yet intersected, and remove them from seeds if present
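The steps above translate into a compact traversal loop; in this sketch the mesh queries are passed in as callables, since their real interfaces are not given in the text.

```cpp
#include <functional>
#include <set>
#include <vector>

using CellId = int;

// Seed-driven traversal of the intersected cells, a sketch of the
// save-mode steps above; the three callables stand in for the real
// mesh queries.
void traverseIntersectedCells(
    std::set<CellId> seeds,
    const std::function<bool(CellId)>& intersects,
    const std::function<std::vector<CellId>(CellId)>& neighborsOf,
    const std::function<void(CellId)>& processIntersection) {
    std::set<CellId> done;                          // already intersected cells
    while (!seeds.empty()) {
        std::vector<CellId> xcells{*seeds.begin()}; // one seed starts a region
        seeds.erase(seeds.begin());
        while (!xcells.empty()) {
            CellId c = xcells.back();
            xcells.pop_back();
            if (!done.insert(c).second) continue;   // skip duplicated cells
            processIntersection(c);                 // edge/plane intersections
            for (CellId nb : neighborsOf(c)) {
                seeds.erase(nb);                    // nb is no longer a fresh seed
                if (intersects(nb) && !done.count(nb)) xcells.push_back(nb);
            }
        }
    }
}
```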
After calculating edge intersections and polygon partitions, cell connectivity is examined. At the time one computes the intersections in a given cell, its connectivity is not necessarily locally definable, since the adjacent cells may not yet be constructed. Overall connectivity is preserved through face indexing, an indexing convention that describes (internal and external) cell connectivity. Face indexing consists of indexing the virtual faces between polygonal cells with integers that are bigger than the number of faces of the cell. Each additional cell in the polygon is given an incremental index that is bigger than the number of cells in the zone. The resulting collection of all intersected cells must be further processed by topology algorithms that establish the topological description of the cutting plane as an unstructured surface. The connectivity of the intersected cells is defined by applying the global SupZone indexing, to which internal indexing is added. The collection of cells is sorted using the global indexing. This characteristic allows a fast transformation where, for each cell with a greater index than the current one, the cell connectivity is calculated while preserving local indexing.
The node-reduction algorithm takes into account the relation between node and cell, where each node can build more than one cell (1.2.4 Node Topology). The alternative is to eliminate the multiple intersections of the same edge. The detection of an intersected edge and the addition of a new one in an ordered set imposes local node indexing (see Section 1.2.3 Cell Connectivity). The intersected supZone edges are stored in the set based on an edge-nodes key. If an edge is already intersected, the result of the intersection is reused and the unique node indexing is preserved. For a convex domain, there is one single connected set of cells, defining the arbitrary cutting-plane surface. If the domain is not convex, multiple cutting-plane regions may exist, depending on the position and orientation of the cutting plane (see Figure 81).
The 3D surface generated by this algorithm is unstructured; it is constructed in exactly the same manner as in the case of the cutting plane. The marching-cell algorithm cannot be readily applied if the iso-surface has disconnected regions, as shown in Figure 82. The disconnected components may be open- or closed-type surfaces; indeed, any number of disconnected iso-surface parts can be grouped into a set, which the user still refers to as a single surface. An iso-surface which has been saved has the functionality to recreate any other quantity at its nodes; it is classified as a parametric field. The nodes of the original domain determine the intersection edges, together with the linear interpolation coefficient.
Distinguishing between structured and unstructured meshes is important when considering memory requirements. Unstructured meshes require more storage than structured ones, since the cell-connectivity needs to be defined explicitly. There is no significant difference between structured and unstructured meshes from the algorithmic point of view, since intersection calculations are based on the cell lookup tables in both cases. The output of the cutting-plane algorithm is necessarily an unstructured surface, whether the base mesh is structured or not. An algorithmic difference exists, however, in how connectivity information is accessed. For structured meshes, cell-connectivity is implicit and deduced from the cell-ordering sequence; this is in contrast to the case of unstructured meshes, where cell-connectivity is provided as input data.
Isolines
The isoline representation is the primary technique for displaying scalar field information. The isoline algorithm
calculates the curves that connect the surface points where the value of the scalar quantity is equal to a given
input value; in effect, it is the 2D version of the iso-surface algorithm.
The scalar field, known by its values at the nodes of the cell, is filtered against the calculated intersection mask so as to detect a sign change; if this occurs, the cell is known to contain the isoline. The intersection is calculated by applying a linear interpolation between the nodes of the edge:

$$ \frac{q_{iso} - q_1}{q_2 - q_1} = \frac{x_{iso} - x_1}{x_2 - x_1} $$
When a cell is investigated, all its nodes are tested against the criterion q_n - q_iso ≷ 0, and the calculated sign is applied in the definition of the cell intersection mask. The intersection mask is used to find the intersection pattern in a cell lookup table. For surface cells, two cell tables are defined, the first one for triangles and the other one for quadrilaterals. There can be 0, 2 or 4 intersections for a quadrilateral. The ambiguous case occurs when 4 intersections are detected; in such a case, the 4 possibilities are drawn (see Figure 85).
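The linear interpolation above amounts to a one-line ratio; a hypothetical C++ helper is shown below.

```cpp
struct Vec3 { double x, y, z; };

// Isoline crossing on a cell edge: given the quantity values q1, q2 at
// the edge nodes x1, x2 and the requested level qiso, the crossing
// ratio follows the relation above. Valid only when q1 and q2 straddle
// qiso, i.e. the edge carries a sign change.
Vec3 isolinePoint(const Vec3& x1, const Vec3& x2,
                  double q1, double q2, double qiso) {
    const double t = (qiso - q1) / (q2 - q1);
    return { x1.x + t * (x2.x - x1.x),
             x1.y + t * (x2.y - x1.y),
             x1.z + t * (x2.z - x1.z) };
}
```

The same ratio t is reused to interpolate any other nodal quantity, for example the isoline color against the color map.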
There is no ambiguity case for the triangle. The marching algorithm generates open- and closed-curve types.
Topologically, open-curves are always connected to the surface boundary, whilst closed-curves can always be
transformed to a circle. A linear interpolation between edge node values is also used for defining the isoline
colors (in relation to the color map).
There are several ways of applying the isoline algorithm. The straightforward method consists of traversing all surface cells, and the resulting isoline is created by connecting the intersected cells. When displayed, the resulting line segments are perceived as a map of open- and closed-type isolines. In this approach, the isoline connectivity table is not created, which simplifies the algorithm but increases the calculation overhead, since every isoline point is shared by two adjacent cells and needs to be calculated twice. Hence the isolines are drawn with a duplicated set of points, and this increases the display time, which can be problematic when interactive viewing actions like rotation and zooming are performed.
Figure 86: Node traversal in the isoline algorithm for (a) unstructured surfaces (5 times duplicated test) and (b) structured surfaces (4 times duplicated test)
To generate curves with no duplicated or disconnected points, the following algorithm is applied:
1. generation of a collection which contains at least one point on each closed isoline;
2. marching through the surface grid starting from the boundary (case of open isolines);
3. marching through the surface grid starting from the points in the list of remaining isoline points, applied to define closed isolines.
1. Calculation of the threshold sign, q - q_iso → + or -, for all the nodes;
2. Identification of boundary and internal cells;
3. Traversal of internal cell-edges.
Starting from the intersected boundary cells, one follows the isoline by the existing cell-edge combination in the internal cell-edge list, from which the combination is removed when the isoline point becomes part of the isoline curve. The marching algorithm continues until the boundary-cell edge is reached. Each isoline point is added to the isoline curve. The next point defining the isoline is found from the cell intersection pattern following the cell connectivity. The ambiguous cases (if any) are handled without special attention: the ambiguity is checked locally whenever such a cell pattern is met. When all the located boundary cells are removed from the list, all open isolines of the traversed surface are found.
The remaining internal cell-edges are used in the same way as the boundary ones that have been intersected during the first cell-edge test. If that combination is found, the isoline is closed. When the collection of internal cell-edges is empty, all the closed isolines have been found.
(figure: section plane construction: the surface normal n_s and the view-plane normal n_v define the auxiliary normal n_a = n_s × n_v, the minimum projected area and the surface/line intersection point)
The algorithm is decomposed into an outer part, which controls the traversal of the cells, and an inner part, which deals with line/plane intersections and the point-inclusion test. The line/plane intersection algorithm is applied for:
the boundary curve, when curve cells are tested against a possible section plane intersection,
and during the surface cells traversal, where the cell edges are intersected.
The intersection point at the cell boundary (set of edges) is identified through the set of constraints applied in the point-inclusion test. This part of the algorithm includes a combination of boundary constraint checks and line/plane intersections with surrounding cell edges. If the point is found to be in the cell, it is added to the set of intersection points, and the traversal is pursued till all the seed cells are visited.
Inside the located cell, a Newton-Raphson method is applied to define the surface point more precisely (see page 111). The point-location algorithm is decomposed into:
line-plane intersections,
point inclusion test,
Newton-Raphson method.
The ray L is given in parametric form:

L: p(t) = p₀ + t (p₁ - p₀) = p₀ + t dp

1.4.4 - 1

The plane is defined with the normal n and the point r_p lying in the plane P. The plane equation in the point-normal form is:

P: (r - r_p) · n = 0

1.4.4 - 2
In Figure 91, the line/plane P divides the cell plane into the outside and the inside half plane. The outside is indicated by the positive direction of the normal n. If the point p of the ray L is introduced in equation 1.4.4-2, we define the half-space in which the line point is located:

n · (r_p - p) < 0 (outside half plane),  n · (r_p - p) > 0 (inside half plane)

1.4.4 - 3
When equation 1.4.4-1 is introduced in equation 1.4.4-2, the parameter t of the edge/face intersection point p_x is:

$$ t = \frac{\mathbf{n} \cdot (\mathbf{r}_p - \mathbf{p}_0)}{\mathbf{n} \cdot \underbrace{(\mathbf{p}_1 - \mathbf{p}_0)}_{\mathbf{dp}}}, \qquad t > 0 \tag{1.4.4-4} $$
The valid value of t can be computed only if the denominator is non-zero. The algorithm therefore checks that

n ≠ 0,  dp ≠ 0,  n · dp ≠ 0

1.4.4 - 5
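The parameter of equation 1.4.4-4, guarded by the denominator check of equation 1.4.4-5, can be sketched as follows; the epsilon tolerance is an assumption of the example.

```cpp
#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Parameter t of the intersection of the ray p(t) = p0 + t*dp with the
// plane (r - rp).n = 0 (equations 1.4.4-1 to 1.4.4-5); returns nothing
// when the ray is (nearly) parallel to the plane.
std::optional<double> rayPlaneParameter(const Vec3& p0, const Vec3& dp,
                                        const Vec3& rp, const Vec3& n,
                                        double eps = 1e-12) {
    const double denom = dot(n, dp);
    if (std::fabs(denom) < eps) return std::nullopt;  // n . dp == 0
    const Vec3 d{rp.x - p0.x, rp.y - p0.y, rp.z - p0.z};
    return dot(n, d) / denom;
}
```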
The ray ending point is checked against the cell boundary, which is further refined with the intersection of the ray with the boundaries of the cells. As an edge is a part of the boundary of a 2D cell and a face is part of the boundary of a 3D cell, the plane/line intersection algorithm, see page 101, is applied to find the point where the ray leaves the cell, see Figure 95.
1D segment: the point can lie to the left of the cell (u < 0), inside it, or to the right of it (u > 1). A point can be located in only one of these regions. In the case of a triangle cell T2N3, there are 6 external regions, see Figure 96, with the following constraints:

Edge e_i | Constraint c_i
e0 | c0: v < 0
e1 | c1: 1 - u - v < 0
e2 | c2: u < 0
Each constraint reduces a possible point location to a half plane. As shown in Figure 96, there are 8 possible locations. The following proposition is a combination of constraints:

(c0 ∧ c2) ⟹ ¬c1

1.4.4 - 7

and when applied, it reduces the number of possible point locations to 7. The complete number of locations is shown in the truth Table 23, indicating true (T) or false (F) when tested against the boundary constraints of the prescribed regions. The eliminated region does not exist, because the point would need to be located in R1 and R2 at the same time, which is impossible because the two regions do not overlap, see Figure 96.
Table 23: Truth table of the constraints c0, c1 and c2 for the interior region IN and the regions surrounding the triangle cell
Figure 96: Possible positions of the point leaving the cell in the neighborhood of the triangle cell
The constraints in Table 23 can define a point location inside or outside of the investigated cell. This is not sufficient for the evaluation of the ray exit edge/face and the calculation of the intersection with the exit edge/face. The orientation of the ray helps to identify the intersection more rapidly, by the introduction of the ray intersection constraints as follows:
Constraints r_i:  r0: u > 1,  r1: v < 0,  r2: v > 1

(decision table: for each region, the combination of the constraints c0, c1, c2 with the ray constraints r0, r1, r2 selects the exit edge e0, e1 or e2, or classifies the point as IN)
For the quadrilateral cell the edge constraints are:

Edge e_i | Constraint c_i
e0 | c0: v < 0
e1 | c1: u > 1
e2 | c2: v > 1
e3 | c3: u < 0
Each of these constraints reduces the possible point location to a half plane, as indicated in Figure 97. There are 16 possible situations in the truth table, and the following propositions reduce this number to 9:

¬(c0 ∧ c2),  ¬(c1 ∧ c3)

(truth Table 25: the combinations of c0, c1, c2 and c3 defining the interior region IN and the external regions of the quadrilateral cell)

The ray constraints r0: v < 0 and r1: v > 1 complete Table 26, obtained after the elimination of impossible cases and the identification of the external cell regions defined in the truth Table 25.
Table 26: Decision table combining the constraints c_i and the ray constraints r_i to select the exit edge (e0...e3) of the quadrilateral cell, or to classify the point as IN
Figure 97: Possible positions of the point leaving the cell in the neighborhood of the quadrilateral cell
The next cell types are the 3D cells. The simplest case is the tetrahedron, which can be associated with the quadrilateral because they have the same number of constraints.

Figure 98: Possible positions of the point leaving the cell in the neighborhood of the tetrahedron cell (T3N4): (a) node, (b) edge, (c) face

Figure 99: Possible positions of the point leaving the cell in the neighborhood of the prism

Figure 100: Possible positions of the point leaving the cell in the neighborhood of the pyramid

Figure 101: Possible positions of the point leaving the cell in the neighborhood of the hexahedron
Newton-Raphson method
The Newton-Raphson method is used in the completion phase of point extraction, when a candidate cell and a point in the neighborhood of the point to be found are already available. The point extraction algorithm in its first phase finds the candidate points in modeling coordinates x. The Newton-Raphson method then computes, in an iterative loop, a sequence of positions which lie increasingly close to the ultimate location. This can be presented as a multidimensional root-finding problem on a distance function d that measures the error between the specified constant location x and the guess solution A(u), see section 1.3.1:

d(u) = A(u) - x = 0

1.4.4 - 8
The prerequisite of finding the candidate point in the neighborhood of the proper minimum ensures that the subsequent refinement phase doesn't inadvertently fall into an erroneous local minimum, as the field can have several non-zero minima. The iteration is initialized from an arbitrary location u₀ inside the cell and is repeatedly shifted through a sequence of new locations u_{i+1} which approach the unknown location u. The Newton-Raphson method is defined by the recursive equation

$$ u_{i+1} = u_i - \frac{d(u_i)}{d'(u_i)}, \qquad d'(u_i) = \frac{\partial A(u_i)}{\partial u_i} \equiv J(u_i) \tag{1.4.4-9} $$
The denominator of the equation contains the partial derivatives of the interpolating function; the Jacobian matrices and their inverses are defined for each cell type, see section 1.3.1 on page 80. The inverse of the Jacobian matrix maps the modeling-space error into parametric form:

$$ u_{i+1} = u_i - J^{-1}(u_i)\, d(u_i) \tag{1.4.4-10} $$
The iterative algorithm applying equation 1.4.4-10 has to result in a steadily decreasing distance d between successive points u_i. If the distance is increasing, the point parametric coordinates fall outside the range [0, 1] and the analyzed point is not located in the investigated cell. For the calculation of the starting point (seed) in the parametric space, the applied Newton-Raphson method uses the cell connectivity and the point location tests, which in addition define an exit cell side. Thus, the Newton-Raphson algorithm can be restarted with a new cell guess.
The identification of the starting cell is done by applying the local value algorithm, see section 1.4.4, for finding the point on the surfaces and the cell containing the point. Because the surface is part of a 3D domain, the surface knows about the volumetric cell and the edges defining that surface cell. Once the surface cell is identified, the volumetric cell is defined based on the surface cell index mapping, through which each surface cell has a link to its volume cell of origin. This algorithm utilizes the knowledge of the normalized cell boundaries aligned with the parametric coordinate axes, so the points are checked against simple boundary conditions, for example whether the (u,v,w) values lie between 0 and 1.
An application of the Newton-Raphson point location algorithm is related to the usage of numerical probe tools, when for example the user provides input for one of the following Representations: Local Value, Local Isoline or Vector Line, discussed later in the thesis, see section 2.4.3. All of them have in common that the point location in the parametric cell space needs to be computed. Given a point in physical space (x,y,z), the problem is to find out whether an investigated cell contains this point. The algorithm has to perform the in-out checks applying the cell normalized coordinates (u,v,w), and it turns out that this computation is one of the most computationally intensive tasks when the aforementioned representations are constructed. The objective is that such an algorithm is efficient. Conceptually, the task involves the application of the Jacobian matrix of the isoparametric mapping, which provides only local information for each cell. The marching-cell algorithm can be applied for getting the correct cell. The optimization of the point location algorithm, to define the starting seed cell for the marching-cell algorithm, is done by the traversal of the domain boundaries. After that, just the selected cells are taken into account. When these cells are analyzed, the Jacobian matrix for the analyzed cell is calculated and the center of the cell represents the initial point location guess.
$$ \begin{bmatrix} u \\ v \\ w \end{bmatrix} = \mathbf{J}^{-1} \begin{bmatrix} x \\ y \\ z \end{bmatrix}, \quad \Delta\mathbf{u} = \mathbf{J}^{-1} \Delta\mathbf{x}; \qquad \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \mathbf{J} \begin{bmatrix} u \\ v \\ w \end{bmatrix}, \quad \mathbf{J}\, \Delta\mathbf{u} = \Delta\mathbf{x} \tag{1.4.4-11} $$
Note that even within the current cell the metric terms vary, so that the algorithm is of iterative nature, proceeding until Δu = Δv = Δw ≈ 0. If (u,v,w) are outside the range [0,1], or violate the boundary constraint of the cell type, the target point is outside the current cell and another, neighboring cell is analyzed.
Given an arbitrary point x and a good guess of the cell where the point is located, a Newton-Raphson procedure is used to calculate the corresponding u. The initial guess u⁽⁰⁾ is taken to be the center of the cell, and then the previously mentioned equation is applied in an iterative algorithm as follows:

u⁽ⁱ⁺¹⁾ = u⁽ⁱ⁾ + Δu⁽ⁱ⁾
u⁽ⁱ⁺¹⁾ = u⁽ⁱ⁾ + J⁻¹(u) Δx⁽ⁱ⁾

1.4.4 - 12
If the correct cell is being searched, the algorithm converges quadratically to the current point. If the cell being searched doesn't contain the point, the new value of u will exceed the cell normalized space (the [0,1] interval). The search is then switched to the neighbor cell for u⁽ⁱ⁺¹⁾. With the possibility to find the connected cell, the algorithm is repeated for the next connected cell, till the solution or the boundary of the domain is reached. In the case that the point is not found, and there are no other connected cells because the boundary of the domain is reached, the algorithm looks for other seed (boundary) cells and the marching-cell algorithm continues on the cells which are still available for traversal. Here the problem is to identify the cell boundary through which the point moved out of the cell. The whole algorithm is situated in the normalized cell space, where the boundaries are simplified and nicely aligned with the coordinate axis planes. It is advantageous to reuse the found point and cell information for the next point inclusion calculation, because the algorithm is often repeated for a point in the neighborhood of the previous one.
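A sketch of the iteration of equation 1.4.4-12 is given below; the abstract Cell interface is hypothetical and stands in for the per-cell-type mappings of section 1.3.1.

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

// Hypothetical per-cell interface: A(u) evaluates the isoparametric
// mapping, invJacobian(u) returns J^-1 at u; implementations per cell
// type are assumed to exist elsewhere.
struct Cell {
    virtual Vec3 A(const Vec3& u) const = 0;
    virtual Mat3 invJacobian(const Vec3& u) const = 0;
    virtual ~Cell() = default;
};

// Newton-Raphson point location: starting from the cell center, iterate
// u <- u + J^-1 (x - A(u)). Returns true on convergence; the caller
// still checks 0 <= u,v,w <= 1 to decide whether x lies in this cell or
// the search must move to a neighbor cell.
bool locatePoint(const Cell& cell, const Vec3& x, Vec3& u,
                 int maxIter = 20, double tol = 1e-10) {
    u = {0.5, 0.5, 0.5};                            // cell-center guess
    for (int it = 0; it < maxIter; ++it) {
        const Vec3 r = cell.A(u);
        const Vec3 dx{x[0] - r[0], x[1] - r[1], x[2] - r[2]};
        const Mat3 Jinv = cell.invJacobian(u);
        double step = 0.0;
        for (int i = 0; i < 3; ++i) {
            const double du = Jinv[i][0] * dx[0] + Jinv[i][1] * dx[1]
                            + Jinv[i][2] * dx[2];
            u[i] += du;
            step += du * du;
        }
        if (std::sqrt(step) < tol) return true;     // du = dv = dw ~ 0
    }
    return false;
}
```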
Steadiness means that the vector field is independent of time; the trajectories of the particles are then called streamlines. Continuity means that streamlines do not break up. Uniqueness means that a particle can neither split and occupy two places, nor can two distinct particles occupy the same place; streamlines never intersect.
Figure 102: Tangency condition for the vector line, analytical and numerical treatment
The mathematical concept of the particle motion is described by a point transformation of the particle position x during time t, see Figure 102. Consider the point P in a vector field v. The vector line through the point is a general 2D/3D curve. The position vector x gives the location of P as a function of some parameter t that varies along the vector line. The tangent to the curve at P determines the vector line:

$$ \frac{d\mathbf{x}}{dt} = \mathbf{v}[\mathbf{x}(t)] \tag{1.4.5-1} $$
The numerical integration takes into account the discrete points of the computational grid which defines the vector field v. Each vector line is a solution of the initial value problem governed by a vector field v and an initial seed point x₀. The curve is depicted by an ordered set of points (x₀, x₁, x₂ ... x_n) defined by

$$ \mathbf{x}_{i+1} = \mathbf{x}_i + \int_{t_i}^{t_{i+1}} \mathbf{v}[\mathbf{x}(t)]\, dt \tag{1.4.5-2} $$
Adjacent points are connected and define the curve geometry which is displayed. The Vector Line representation can be applied to surface and volume vector fields defined in 3D space. Thus, there are two different vector line algorithms: one for the treatment of volume vector fields, and another for the treatment of surface vector fields. The latter can for example be applied to cutting plane and isosurface vector fields; the flow in the surface takes into account the projection of the tangential component of the velocity field. The following two sections treat these two aspects.
In the parametric cell space the governing equation becomes

$$ \frac{d\mathbf{u}}{dt} = \mathbf{J}^{-1}(\mathbf{u})\, \mathbf{v}[A(\mathbf{u})] \tag{1.4.5-4} $$

which is equivalent to:

$$ \frac{d\mathbf{u}}{dt} = \mathbf{g}[\mathbf{u}(t)] \tag{1.4.5-5} $$

with

$$ \mathbf{g}(\mathbf{u}) = \mathbf{J}^{-1}(\mathbf{u})\, \mathbf{v}[A(\mathbf{u})] \tag{1.4.5-6} $$

$$ \mathbf{J}(\mathbf{u}) = \frac{\partial A}{\partial \mathbf{u}} \tag{1.4.5-7} $$
The equation 1.4.5-1 is computed with interpolated values from g(u). The parametric vector field g(u) is computed for each cell node and applied for the definition of the inverse mapping J⁻¹. The isoparametric algorithm is efficient when processing:
the vector value at a point inside the cell, which requires J⁻¹,
the point inclusion test that defines whether a point is located inside the cell, which requires A.
The point inclusion test is efficient because the cell boundaries are planes or lines aligned with the main coordinate axes of the cell parametric space, see section 1.1.1. The conditions for the mappings A and J⁻¹ to exist are:
the vector field v should be single-valued, i.e., when the mesh contains singular points (several different grid points occupy the same location in space) a numerical solution has to ensure that these points have identical values for v;
the mapping for the right-hand side of equation 1.4.5-6 calculating g(u) must guarantee enough continuity for the solution of u(t) throughout the approximation of J;
continuity should be ensured across two cells; this is satisfied with the piecewise (cell-by-cell) isoparametric mapping. It must be noted that too distorted cells must be avoided.
Let x_s be the cell nodes associated with the vector field v_s in modeling space, and let U_i be the cell parametric space (u,v,w), oriented according to the right-hand rule so that the cell boundary normals point outwards from the cell. The vector line algorithm consists of the following steps:
1.-4. define the Jacobian J_s for each of the cell nodes for the mapped cell vector field g_s = (J⁻¹)_s v_s;
5.-6. integrate the vector field equation g applying the isoparametric mapping A⁻¹; the integration is performed by a fourth-order Runge-Kutta method in the parametric cell space U_i and is continued until the vector line crosses a boundary of U_i;
7. find the intersection of the vector line u with the cell boundary U_i; the intersection point becomes the last point of the vector line u local to the cell;
8. map the vector line u to the modeling space x = A(u) and find the neighboring connected cell U_j;
9. if the connected cell U_j is found, reuse the intersection found in step 7 as the first point of the vector line, replace U_i = U_j and repeat steps 3-9; if the connected cell U_j is not found, the mesh boundary is reached and the vector line algorithm stops.
The algorithm described above computes a vector line starting from some initial position x₀ in modeling space. However, it is also common to compute the vector line which reaches a given point in modeling space. Usually, a vector line is computed over the complete mesh; for example, a trace consists of the vector line that reaches point x₀ together with the field line that starts from x₀. To compute the field line that reaches x₀, one simply computes a vector field line starting from x₀ but with a minus sign in equation 1.4.5-1:

$$ \frac{d\mathbf{x}}{dt} = -\mathbf{v}[\mathbf{x}(t)] \tag{1.4.5-8} $$

with solutions x(t) = (x(t), y(t), z(t)). The overall result of the vector line algorithm consists of two distinct segments of the vector field line, representing the forward and backward sweep respectively.
The vector line algorithm raises the following important issue of step-size control. The magnitude of the parametric vector field, estimated from the M cell-node values,

$$ \bar{g} = \frac{1}{M} \sum_{i=1}^{M} |\mathbf{g}_i| \tag{1.4.5-9} $$

is used to size the integration step. The integration step Δt is calculated for each cell so that approximately M steps are taken in the cell. The M parameter can be interactively adjusted by the user; the step size Δt is then given by:

$$ \Delta t = \frac{1}{M \bar{g}} \tag{1.4.5-10} $$
Figure 103: The imposed local orientation of the edges e_i and parametric coordinates for the 2D cell types
Figure 104: The imposed local orientation of the faces f_i and parametric coordinates for the 3D cell types
Figure 105: The map of a cell boundary point between connected cells in 3D
Experiments have confirmed good results with M = 4 or M = 5. The different reasons for which the vector field line algorithm (page 114) has to be stopped are called integration breaks, and they are activated when:
the Jacobian matrix J is singular (step 5), i.e., the mesh contains a degenerate cell.
The point location and the moving of the parametric point from one cell to another are accomplished by applying the cell connectivity and the imposed orientation of the cell faces defined for each cell type, see Figure 103 and Figure 104. The objective is to avoid the computation of the point parametric coordinates once the vector line points are mapped to the modeling space. As the vector line exit point is known from the point location algorithm, the parametric coordinates of the point on the cell boundary can be viewed from both connected cells. The mapping is done in the following steps, as shown in Figure 105 and Figure 106:
map the cell parametric coordinates to the boundary, which lowers the parametric dimension;
map the cell boundary parametric coordinates to the cell, which raises the parametric dimension.
The important precondition is that the cell topology for the whole domain is completely defined, as described in section 1.2.2. The local parametric cell coordinates are transformed when passing the interface between two cells, without the necessity of finding the interface point in the global grid coordinates, thus avoiding a Newton-Raphson search for the point location in (x,y,z).
Figure 106: The map of a cell boundary point between connected cells in 2D
The generic algorithm is based on the A, B, C mappings, which are defined separately for each cell type involved: A maps the point T(u₁, v₁) of the first cell to the face-local coordinates, B maps the face-local point T(u) between the two connected cells, and C maps it back to the cell coordinates T(u₂, v₂, w₂) of the second cell. The solution provides the required flexibility to support heterogeneous cells without compromising the generality of the algorithm: A, B and C can be changed at run time without any change to the general algorithm, and the program can be fine-tuned after it is up and running (interactive tuning).

Table 27: The mapping procedure of a cell boundary point between connected cells in 3D
In the first phase the point must be defined in the (u, v) coordinates local to the face. This is handled with the A set of functions, see Table 27, where the w coordinates are always zero as the points are part of the cell faces. If that is fulfilled, the intersection of the vector line with the cell edge and face is found. This algorithm includes the identification of the cell edge, which is used to identify the next cell and thus maintain the C⁰ continuity of the vector line. Each cell which is holding the vector line has at least one interior and one exit point; this is only untrue for cells in which the vector field vanishes. For each cell type all the possible cases of the vector line entering and exiting are considered, as detailed in section 1.4.4 related to the point location algorithm.
Runge-Kutta Method
The integration of the ordinary differential equations is performed by the Runge-Kutta method [93, 94]. The vector field v is the right-hand side of the following equation:

$$ \frac{d\mathbf{x}}{dt} = \mathbf{v}(\mathbf{x}, t) \tag{1.4.5-11} $$
The idea is to rewrite dx and dt as finite steps Δx and Δt and to multiply the equation by Δt:

$$ \Delta \mathbf{x} = \mathbf{v}(\mathbf{x}, t)\, \Delta t \tag{1.4.5-12} $$

This is an algebraic equation for the change in x when the independent variable is stepped by one step size Δt. In the limit of a very small step size, a good approximation of equation 1.4.5-11 is achieved. This reasoning results in the Euler method:

$$ \mathbf{x}_{i+1} = \mathbf{x}_i + \Delta t\, \mathbf{v}(\mathbf{x}_i, t_i) + O(\Delta t^2) \tag{1.4.5-13} $$

The Euler method uses derivative information only at the beginning of the interval; a better result is obtained with a trial step to the midpoint of the interval:

$$ k_1 = \Delta t\, \mathbf{v}(\mathbf{x}_i, t_i), \qquad k_2 = \Delta t\, \mathbf{v}\!\left(\mathbf{x}_i + \tfrac{k_1}{2},\; t_i + \tfrac{\Delta t}{2}\right), \qquad \mathbf{x}_{i+1} = \mathbf{x}_i + k_2 + O(\Delta t^3) \tag{1.4.5-14} $$
This is called the second-order Runge-Kutta method. There are many ways to evaluate the right-hand side of equation 1.4.5-12 that all agree to first order but have different coefficients of the higher-order error terms. With the Runge-Kutta method, adding up the right combination of these evaluations eliminates the error terms order by order. The fourth-order Runge-Kutta method is defined as follows:
$$ k_1 = \Delta t\, \mathbf{v}(\mathbf{x}_i, t_i) $$
$$ k_2 = \Delta t\, \mathbf{v}\!\left(\mathbf{x}_i + \tfrac{k_1}{2},\; t_i + \tfrac{\Delta t}{2}\right) $$
$$ k_3 = \Delta t\, \mathbf{v}\!\left(\mathbf{x}_i + \tfrac{k_2}{2},\; t_i + \tfrac{\Delta t}{2}\right) $$
$$ k_4 = \Delta t\, \mathbf{v}(\mathbf{x}_i + k_3,\; t_i + \Delta t) $$

It requires four evaluations of v per step Δt, which is more efficient than equation 1.4.5-14 if at least twice as large a step is possible with

$$ \mathbf{x}_{i+1} = \mathbf{x}_i + \tfrac{1}{6}\left[ k_1 + 2(k_2 + k_3) + k_4 \right] + O(\Delta t^5) \tag{1.4.5-15} $$
A higher-order integration method is not always superior to a lower-order one; but when we consider the number of arithmetic operations involved and the possibility of an adjustable integration step to achieve a desired accuracy, the fourth-order Runge-Kutta method is well known to represent an optimal choice. This is the reason why the fourth-order Runge-Kutta method is used for the calculation of the volume and surface particle trace algorithms described in this section 1.4.5.
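The fourth-order step translates directly into code. In the following sketch the field v is passed as a callable and the k_i hold v rather than Δt·v, with Δt applied in the final combination, which is algebraically equivalent to equation 1.4.5-15.

```cpp
#include <array>
#include <functional>

using Vec3 = std::array<double, 3>;

// One fourth-order Runge-Kutta step for dx/dt = v(x, t); v is the
// interpolated (parametric or modeling) vector field.
Vec3 rk4Step(const std::function<Vec3(const Vec3&, double)>& v,
             const Vec3& x, double t, double dt) {
    // helper: a + s*b, component-wise
    auto axpy = [](const Vec3& a, double s, const Vec3& b) {
        return Vec3{a[0] + s * b[0], a[1] + s * b[1], a[2] + s * b[2]};
    };
    const Vec3 k1 = v(x, t);
    const Vec3 k2 = v(axpy(x, dt / 2, k1), t + dt / 2);
    const Vec3 k3 = v(axpy(x, dt / 2, k2), t + dt / 2);
    const Vec3 k4 = v(axpy(x, dt, k3), t + dt);
    Vec3 xn{};
    for (int i = 0; i < 3; ++i)
        xn[i] = x[i] + dt / 6.0 * (k1[i] + 2.0 * (k2[i] + k3[i]) + k4[i]);
    return xn;
}
```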
Numerical probes are displayed as point, line or plane objects, which are interactively controlled to investigate the displayed Zones. The objective is to restrict the analysis to the user-selected zones, which usually form only a part of the complete calculated domain. For example, an isoline is created on a surface and displayed; thus, only the isoline appears on the screen, while the entire domain and the reference surfaces involved in the isoline generation can be hidden by the user. In this way the user interactively defines the visualized scene content, and obtains a focused, filtered and reduced set of displayed graphical information.
(table: classification of the representations as qualitative or quantitative: stream lines, vector arrows, isosurfaces, isolines, contours, plots)
Because the output of each numerical simulation software is specific to its computational method (cell-centered or cell-vertex), it is appropriate to have a unified input model for a scientific visualization system. In this thesis, the cell-vertex model is chosen as the input data model, as it is closer to the data model of the applied graphics model. An additional reason for selecting the cell-vertex model was that it was not obvious how to extrapolate the input data to the domain boundaries when treating the cell-centered input model. The applied mapping for converting the cell-centered data to cell vertices is defined by averaging the quantity values of the neighbor cells surrounding each node, as explained in node topology, see section 1.2.4.
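A minimal sketch of this averaging, under the assumption that a node-to-cells adjacency list is available from the node topology, could look as follows.

```cpp
#include <vector>

// Cell-centered to cell-vertex conversion by averaging the values of
// the cells surrounding each node (Section 1.2.4); nodeCells[n] lists
// the cells that share node n.
std::vector<double> cellToVertex(const std::vector<double>& cellValues,
                                 const std::vector<std::vector<int>>& nodeCells) {
    std::vector<double> nodeValues(nodeCells.size(), 0.0);
    for (std::size_t n = 0; n < nodeCells.size(); ++n) {
        if (nodeCells[n].empty()) continue;     // isolated node: keep 0
        double sum = 0.0;
        for (int c : nodeCells[n]) sum += cellValues[c];
        nodeValues[n] = sum / nodeCells[n].size();
    }
    return nodeValues;
}
```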
The designed input data model supports the multi-block decomposition, which is especially useful for the
treatment of complex geometry.
The quantity input data types are organized as:
field data
solid data
validation data
plot data
Field data are quantities given for the complete computational domain. Solid data are quantities confined to
solid boundaries, such as the heat transfer and skin friction coefficient. Validation data are coming from
experiments or other computations to facilitate the data comparison. Plot data are related to the arbitrary x-y
plots, such as convergence history. An arbitrary number of quantities are allowed for each data type.
The results of the computational analysis can be mapped to the designed input file system, which consists of a combination of ASCII and binary files responsible for storing the different input data sets. The input file system is organized through a small navigation file, which groups all the files related to a specific project. Different navigation files can be created for the same data sets, giving flexibility in organizing the visualization session. The navigation file defines the geometry dimension, topology type, number of domains and number of quantities for each data type. It also defines global transformations of the overall geometry, like mirroring and repetition. For each domain, the number of nodes and cells is given, together with the domain connectivity file indicating the types of the defined boundaries, as specified in section 1.2.5.
The quantity files are related to the defined quantity data sets specified per domain or per solid surface. Therefore, they are separated into two categories: field and solid data files. The field data files are defined for the complete domain; for example, the pressure or the velocity field around the space shuttle is defined for each grid node. The solid data are restricted to the solid boundary segments only; for example, the skin friction on the solid surfaces of the space shuttle can represent a valid solid data input. Another distinction is made between the computational data and the validation data; for example, comparisons have to be made between experiments and the computed data, see Figure 109, where the pull-down menu reflects the mentioned distinction. In Figure 110, for the airfoil example, the field data pull-right menu contains a list of scalar and vector quantities defined in the computational space; the solid data are also present under similar pull-right menus, defined only on the solid boundaries.
When initializing the visualization session, by default the solid surfaces are displayed, as they naturally represent the investigated problem (airfoil, airplane); they can be identified and selected in this starting phase to get a first insight into the available data. The periodic and connected boundary conditions are applied when the particle trace algorithm is performed for the multiblock data model.
The standard notation for the description of boundary conditions, see Table 29, is applied on each segment separately. Since the types of boundary conditions may vary according to the used flow model, a common set of BC types was defined for the input data model, as follows:

BC type | Input abbreviation
Connection | CON
Periodic | PER
External | EXT
Singularity | SNG
Inlet | INL
Solid | SOL
Outlet | OUT
Mirror | MIR

The initial representations are:
for 2D: all the boundaries defined in the boundary conditions data set;
for 3D: only the solid boundaries; if they don't exist, all the boundaries defined in the boundary conditions data set.

Figure 116: Initial representations: boundaries in 2D and solid surface boundaries in 3D.
The dialog-box allows the traversal of the computational domains by scrolling through constant I, J or K surfaces and, if desired, saving these for further analysis. For a multidomain configuration, only the selected domains are traversed. The traversal consists of displaying the surfaces in two modes: animated or step-by-step. The displayed representations are:
Quantity: as shaded color contour, which is appropriate for quick localization of interesting 3D regions
Figure 120: Cutting plane and isosurface examples for structured and unstructured meshes
Figure 119 shows on the left side the Ahmed body with multiple patches of the structured grid surfaces and the respective scalar field contours, and on the right side the Hermes space vehicle discretized with the structured 2-block grid. The interesting aspect in both cases is the space traversal, which is performed by displaying complete or partial surfaces, in addition indicating the multidomain grid structure for geometry and quantity representations. The possibility to create surface patches is defined within a Min-Max range of the local I, J surface indices. For example, the surface extraction can be reduced to cover a limited display area in order to make other representations visible, as shown in Figure 119, where the surface patch was adapted to show the particle traces in the background. The minimum surface patch size can be reduced to one cell. When the surface or surface patch is created, the surface boundaries are displayed, as part of the interactive system feedback, to indicate that the surface creation was successful. A counterpart of such an interactive visualization tool does not exist for unstructured meshes.
The cutting plane and isosurface tools are the interactive tools available for both structured and unstructured topologies. It is important to mention that the related surface extraction process involves all the domains of a multidomain (multiblock) input. The created surfaces are automatically named with the domain index and a prefix, such as ISO for isosurfaces and CUT for cutting planes, thus preserving the relationship with the domain from which they were extracted. In Figure 120 an example of the simultaneous application of both tools is shown on structured and unstructured meshes.
The Selection Surface dialog-box contains the surfaces associated with the view. As shown in Figure 121, the created isosurface and cutting plane are highlighted as active, and the automatic naming convention is applied, as described in the previous section. The interactive process to remove or destroy a surface is identical to the surface selection process. When the surface is destroyed, all associated representations are removed in all the views. If the surfaces are visible on the screen, the interactive surface selection can be performed with the mouse point-and-click operation. Every visible surface in the view can be made active or inactive depending on its previous state. The interactively selected surfaces are highlighted, and a repeated action on the same surface acts as an active/inactive toggle. To improve the interaction feedback, the surface name and the active state are displayed in the monitoring area of the main window. In addition, the Selection Surface dialog-box is helpful when reusing the same surfaces in a multi-view visualization environment.
Figure 121: Surface dialog-box showing the cutting plane and isosurface instances
The different geometry representations are depicted in the Geometry pull-down menu, see Figure 122, as follows: Full mesh shows the surface grid; Mesh boundary shows the surface boundary in 2D and the boundary surfaces in 3D; Solid boundary automatically displays only the boundaries/surfaces with the imposed solid boundary condition. The geometry representations are unified for structured and unstructured grids, as the basic graphics for both topologies is defined by their nodes, edges and cells, see Figure 123.
The Render pull-down menu allows the display of the active surfaces with Hidden line, Flat, Gouraud or Phong lighting interpolation, see Figure 125. The hidden line representation assigns a uniform color to the whole surface area, and the algorithm removes the invisible parts of the surfaces.
2.4.1 Isolines
The Isolines interactive tool offers several options to create isolines:
computing the isolines within a specified minimum-maximum range,
giving the increment between two adjoining isolines,
specifying the number of isolines.
In Figure 128 the Isolines are displayed together with Local Value representations, in order to enhance the relationship of the displayed graphics with the numerical values.
The Isolines can be generated as a representation group using the described dialog-box, or individual isolines can be created with the Local Isoline tool. This is performed by specifying a scalar value or by locating the surface point through which the isoline is requested to pass. For both invocations the parameters can be set interactively or numerically. The first approach enables interactive input, while the numerical one assures that the exact numerical input of an isoline value or of surface point coordinates is given. For the interactive approach, the creation of an isoline by the mouse-click operation is done by pointing with the screen cursor inside the colormap scale or by pointing the cursor over an active surface. The mouse click triggers the interpolation of the selected value or of the selected point location, followed by the isoline generation algorithm for the identified scalar value.
The Threshold representation is an extension of the surface Color Contour representation, which restricts the coloring of the surface to the region delimited by the Threshold range, see Figure 132. In the case of an airfoil, only the supersonic scalar values with a Mach number above 1 are displayed with the Threshold Color Contour representation. In the subsonic region the Isolines representation is applied, and in the entire field the quantitative information is displayed with some Local Values representations.
In Figure 134, examples of unstructured and structured vector field representations are presented. It can be noted that the resolution of the structured field can be controlled by limiting the number of displayed vectors, thus making the presentation of the examined vector field more comprehensible.
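The control of the display resolution on a structured surface amounts to drawing only every n-th node in each index direction; a minimal sketch, with drawVectorAtNode standing in for the actual rendering call:

void drawVectorAtNode(int i, int j);  // assumed rendering routine

// Draw a reduced set of vectors on an ni x nj structured surface.
void drawSampledVectors(int ni, int nj, int strideI, int strideJ)
{
    for (int i = 0; i < ni; i += strideI)
        for (int j = 0; j < nj; j += strideJ)
            drawVectorAtNode(i, j);
}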
Figure 136: Local isolines, scalars and vectors assisted with coordinate axis tool
In Figure 136, for the velocity vector field, the quantity range is adjusted using the Range menu, see Figure 137, between 350 and 430 m/s, and we generate some Local Isolines representations with precise numerical input. In addition, the isolines are validated with the Local Scalar values corresponding to the velocity magnitudes. Finally, we have added the Local Vector representations to show the correspondence with the newly created Colormap.
The Range menu makes it possible to modify the current range of the quantity or to reset it to the default state. This option is useful for comparison purposes between different projects, in order to synchronize the applied ranges. The range limits can be interactively changed using string input or the mouse point-and-click operation within the Colormap display area. The user interaction is identical to the threshold range input described in the previous section. The new setup of the quantity range affects the representations associated with the related colormap, and consequently all the dependent representations are updated in line with the new color mapping. In Figure 136, it is visible that the colors of the isolines and vectors are in accordance with the range indicated by the displayed Colormap on the right side of the figure.
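The dependence of all representations on the colormap can be seen from the mapping of a scalar value to a colormap entry; a minimal sketch, with clamping of out-of-range values:

// Map a scalar value to a colormap index for the current quantity range.
int colorIndex(float value, float qMin, float qMax, int nColors)
{
    float t = (value - qMin) / (qMax - qMin); // normalized position in range
    if (t < 0.0f) t = 0.0f;                   // clamp below the range
    if (t > 1.0f) t = 1.0f;                   // clamp above the range
    return int(t * (nColors - 1) + 0.5f);     // nearest colormap entry
}

Changing qMin and qMax and re-evaluating this mapping recolors every dependent representation consistently.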
1. predefined curves
2. sections (in addition, Cartesian plot for scalar fields)
3. Isolines, Particle paths
When the calculation of the scalar distribution along the extracted curves is invoked, a view called the Cartesian plot view is opened. The limits of the plot axes are automatically sized to the min-max limits of the quantity values calculated from the sections present in the Cartesian plot. The Cartesian plot layout can be modified with the Coordinate Axis Editor. For the Cartesian plot view, the view manipulation buttons can be used to adjust the displayed space of the curve geometry and the scalar quantity range. For example, the zoom area operation can be used to blow up a selected region of a Cartesian plot.
Figure 139 shows 2D Solid Boundary and Section representations for three different scalar quantities. For the two plots, from left to right, the abscissa represents the x coordinate, while the third plot is related to Section representations where the quantity distribution is presented as a function of the arc length starting from the solid boundary. A Section representation displays the geometry of the curve resulting from a plane/surface intersection, see section 1.4.1. The Section input requests two points for defining the section location and is based on mouse-cursor visual feedback. After the input of the first section point is made, a red rubber-band line attached to the cursor visually aids the user to define the section. The second mouse click triggers the section creation. In addition to this input type, a numerical input with precise section coordinates is possible through a string-based field using the keyboard. A special option is the input of a vertical section, which is defined by one point and the view-up vector. Its generation is triggered by a mouse double-click at the cursor location. This approach avoids the input error of two consecutive clicks at the same location: in such a case the section would not be defined, so by default the vertical section is made at that place.
When the two points are inside the active surface, the position of the origin is defined by the first selected point.
If one of the points lies outside the surface boundary, it is reset to the intersection point with the surface boundary. The intersection point becomes the origin of the coordinate system.
If both points lie outside the boundary while the section passes through the surface, the intersection point with the lowest y coordinate becomes the origin.
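These three rules can be summarized in a small routine; the Point2 type and the intersection helpers are assumptions introduced for the sketch:

struct Point2 { float x, y; };

bool   insideSurface(const Point2& p);                          // assumed query
Point2 boundaryIntersection(const Point2& a, const Point2& b);  // assumed helper

// Select the origin of the local profile coordinate system.
Point2 profileOrigin(const Point2& p1, const Point2& p2)
{
    bool in1 = insideSurface(p1);
    bool in2 = insideSurface(p2);
    if (in1 && in2) return p1;               // rule 1: first selected point
    if (in1 != in2)                          // rule 2: one point outside
        return boundaryIntersection(p1, p2); // reset to the boundary intersection
    Point2 a = boundaryIntersection(p1, p2); // rule 3: both outside, take the
    Point2 b = boundaryIntersection(p2, p1); // intersection with the lowest
    return (a.y < b.y) ? a : b;              // y coordinate
}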
The quantity axis in the analyzed region is associated with the abscissa and is drawn perpendicularly to the y axis. The quantity axis parameters are modified through the Quantity Axis editor shown on the left side of Figure 143. The analyzed distance is associated with the ordinate axis; it starts from the origin of the coordinate system and ends at the selected points. The distance axis is modified through the Analyzed Distance editor shown on the right side of Figure 143, and it is divided into three sections for setting the Analyzed distance, the Axis length and the Plotting parameters. The Analyzed distance section modifies the real analyzed distance as follows:
The Full Section option extends the analyzed region along the whole mesh. It is similar to the Cartesian plot representation, but is locally plotted on the view.
The Between selected points option analyzes the region between the two selected points, resetting them to the mesh limits if necessary. This is the default representation.
The On a given distance option requests the analyzed distance in user units. This item is essential as it allows inspecting very small regions inside boundary layers. If the given distance is larger than the full section distance, it is reset to the maximum limit.
The Not magnified option draws the axis from the origin and ends it at the analyzed distance. In this case no blow-up is performed and the quantity distribution is drawn in real scale.
The Between selected points option draws the distance axis between the two selected points. If the first point lies outside the boundary limits, it is reset to the closest boundary intersection with the section line.
The In centimeters option requests the y axis length in cm, for example for printing on an A4 page.
The Analyzed x d option requests the magnification value. The y axis length is given by the product of the analyzed distance and the magnification value. In this way the user can specify the number of times the analyzed distance is magnified.
The plotting options section customizes the Cartesian plot scales and appearance. The Vector local profile is used in the same way as the scalar local profile, with the exception that the quantity axis does not exist in this representation, see Figure 144. It is interesting to note that the particle paths are not tangential to the vectors, which is correct, as the geometrical spaces differ.
Figure 148: Several isosurface representations of the Temperature field around the airplane
The Iso-Surface probe is the second tool for the inspection of 3D volumetric scalar fields. The isosurface calculation is based on the marching cubes algorithm, see section 1.4.1, whose result is an unstructured surface on which the prescribed scalar value is constant. Figure 148 shows an example of Temperature field isosurfaces around an airplane, where the temperature maxima are located in the region of the engine inlet and outlet. The interesting aspect in this figure is the cut-out of the isosurfaces at the mirror plane, in order to see the development of the temperature field. On the right side of the picture, with the use of transparency and the gradual removal of the airplane solid body, the complete shape of the temperature distribution can be analyzed. The interaction with the Isosurface probe is supported by two menu items: one for the creation and the other for the saving of the generated isosurface. The isosurface scalar value can be entered through the string input field, while the interactive mouse-based input is to move the cursor inside a colormap and select the desired scalar value with a mouse click, which triggers the creation of the isosurface. The isosurface algorithm involves the traversal of the complete computational data input, thus it is computationally intensive for large 3D meshes. Once an interesting isosurface is found, it can be saved for further manipulations, similar to the cutting plane, and it becomes one of the active surfaces available for interactive manipulation. In Figure 149 the combined usage of different probes is presented.
Figure 149: Combined use of Particle trace, Cutting plane and Isosurface probes
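The cost of the full traversal can be reduced by rejecting the cells whose scalar range does not contain the prescribed value; a skeleton sketch, where polygonizeCell stands for the marching-cubes case tables:

#include <vector>

struct Triangle { float vertex[3][3]; };
struct Cell     { float scalar[8]; float corner[8][3]; };

void polygonizeCell(const Cell& c, float isoValue,
                    std::vector<Triangle>& out);   // assumed marching-cubes kernel

// Traverse all cells of all domains and polygonize the crossed ones.
std::vector<Triangle> extractIsosurface(const std::vector<Cell>& cells,
                                        float isoValue)
{
    std::vector<Triangle> surface;
    for (std::size_t n = 0; n < cells.size(); ++n)
    {
        const Cell& c = cells[n];
        float lo = c.scalar[0], hi = c.scalar[0];
        for (int k = 1; k < 8; ++k)              // cell min-max of the scalar
        {
            if (c.scalar[k] < lo) lo = c.scalar[k];
            if (c.scalar[k] > hi) hi = c.scalar[k];
        }
        if (lo <= isoValue && isoValue <= hi)    // only crossed cells are polygonized
            polygonizeCell(c, isoValue, surface);
    }
    return surface;
}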
Figure 151: Surface and 3D Streamlines generation from a cutting plane surface
In Figure 151, both kinds of streamlines are generated for the airflow around the airplane. It is interesting to notice that the combination of the Local Vectors and Surface Streamlines representations clearly shows the region of the wing tip vortex, in a cross-section plane perpendicular to the airplane flight direction. In the right-side figure, apart from the standard 3D streamline strips, their intersection with the cutting plane instance from which they were started is visible. The vector line strips were created with the Mono setup, where a unique color is assigned to the group of vector lines. The Variable color option allows the automatic drawing of vector lines, each with a different color. Vector lines applying such a coloring scheme improve the visibility of the swirling flow behavior, as shown in Figure 150 for the rear part of the car. The parameters for the control of the particle path computation are introduced because the vector field can generate, for example, cyclic streamlines whose computation would never end. To avoid such problems, the maximum number of points to be calculated inside each cell is set by the Cell field. To control the number of integration steps inside each cell, the Cell average parameter estimates the number of particle movements needed to traverse a cell. It is assumed that the cell traversal will be done if the particle performs the prescribed number of steps with the average cell velocity. The integration direction is shown in the Vector Parameters dialog-box, see Figure 150. In Figure 152, the Tk/Tcl CFView GUI from Numeca is shown. The red-blue vector lines are created with the downstream integration, while the yellow and violet ones with the backward integration from the structured surface nodes. The related dialog-boxes, which aid in the interactive setup for the generation of the Vector lines representations, are shown.
Figure 152: Vector lines representations from structured surface points, with the required toolbox in action
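The role of the two safeguards can be sketched as follows; the cell queries are assumptions, and a simple Euler step replaces the actual integration scheme:

struct Vec3 { float x, y, z; };

int   locateCell(const Vec3& p);                    // assumed point location
Vec3  interpolateVelocity(int cell, const Vec3& p); // assumed interpolation
float cellSize(int cell);                           // assumed characteristic length
float cellAvgSpeed(int cell);                       // assumed average velocity magnitude

// Integrate a particle path with the Cell and Cell average safeguards.
void tracePath(Vec3 p, int maxPointsPerCell, int cellAverage, int maxPoints)
{
    int cell = locateCell(p);
    int pointsInCell = 0;
    for (int n = 0; n < maxPoints && cell >= 0; ++n)
    {
        Vec3 v = interpolateVelocity(cell, p);
        // step chosen so that about cellAverage steps traverse the cell
        float dt = cellSize(cell) / (cellAverage * cellAvgSpeed(cell));
        p.x += v.x * dt; p.y += v.y * dt; p.z += v.z * dt;
        int next = locateCell(p);
        pointsInCell = (next == cell) ? pointsInCell + 1 : 0;
        if (pointsInCell > maxPointsPerCell) break; // cap: possibly cyclic streamline
        cell = next;
    }
}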
The visualization system has to provide efficient handling of large and heterogeneous data sets in a multi-window environment.
It has to be 3D graphics enabled in order to provide interactive and efficient cursor control, visual feedback and status monitoring.
The visualization tools and numerical probes have to perform fast in order to assure an acceptable interaction response for the variety of visualization tasks, see chapter 2, for example when Isosurfaces or Streamlines need to be calculated and visualized.
Transparency in manipulating data: 2D and/or 3D, structured and/or unstructured, scalars and/or vectors, original input and/or derived quantities.
A variety of input data formats: 2D and/or 3D, structured and/or unstructured, multiblock decomposition and connectivity specification, cell-vertex and cell-centered solutions with an arbitrary number of input quantities.
A variety of output data formats for generating images: bitmap, postscript and other graphics file formats.
The designed interactive model supports the user's active involvement in visual processing tasks. The main idea underlying the interactive visualization activity consists in letting the system react in a concrete and flexible manner to the user's interactive requests. The requirement is that the user controls the visualization experiment in a quick and productive manner, and that the graphical output of the investigated data takes advantage of the graphics acceleration hardware, adapted to the interactive manipulation of 3D models. The graphical presentations have to be compatible with the user's mental representations and have to allow the communication and exchange of the results with minimum effort. If the system functions are not transparent to the user, the user will either reject the system or use it incorrectly. This is an often underestimated objective in interactive system development planning, as commonly the software designers apply the procedure-oriented approach, where the major role is the identification of a well-defined problem around a set of supporting algorithms.
The interactivity model is the central principle in the user-centered approach, as it establishes the system dynamics and offers the possibility to support a variety of user scenarios by combining a finite set of available interactive components. The objective is to create intuitive visual human-computer interactions, which follow the specified concepts and operations given to the user and, at the same time, are adapted to the specific user's knowledge and experience in treating the simulated problem. The interactive environment has to support effective man-machine cooperation, mainly based on the software capability to complement the user activity with suitable feedback, as follows:
the recalculation mechanism, which operates on the functional relationship between data introduced
by the user and the application itself,
the model consistency check, which guarantees system robustness at every stage of interaction,
the immediate visual monitoring of the user actions, with useful confirmation indications.
When using an interactive visualization system, the end user can perceive the existing software objects. A clear example is the visualization system GUI composed of many different interaction tools, where each of these components can be invoked and manipulated separately. Thus each of them might be identified as an Object: for the end user a thing to employ, and for the developer a thing to develop. This relationship makes possible a focused interaction between the two expert groups, as the Object represents a tangible reality for both of them, users and
developers. Both groups work on the same object, but from two different points of view. The added value in this process is that both groups contribute to the development of the same object. Such an approach is essential to the Object-Oriented Methodology (OOM) and tightly relates the user-centered GUI design with Object-Oriented Programming (OOP).
Interaction modeling is an important activity in the design process of a scientific visualization system. In the following section the applied modeling principles are described, followed by sections which specify the details of the interaction behavior (dynamic) and interaction data (static) aspects. The GUI design addresses three types of characteristics:
the cognitive characteristics must improve the user's intuition for a specific activity,
the perceptual characteristics apply color, depth, perspective and motion to improve the visual feedback,
the ergonomic characteristics are concerned with the system usability (ease of use) and the learning phase, through feedback monitoring of user actions and help facilities.
The effective use of a GUI relies on the intellectual, perceptual and sensorial activities of the user. The GUI design requires cognitive scientists, psychologists, ergonomic experts, graphics designers, artists and application experts to work together in order to understand the complexity of the user's tasks and how the user reacts when performing them. The interaction process needs to be translated into a specification describing the requested interaction: what the user has to do, and how the system has to respond.
The first phase in GUI design is to learn how the user thinks about the task and how he expects the work to be done (cognitive issue). The understanding of the user's thinking process is the most difficult part of this first phase. It consists of several trial-and-error cycles, which are usually carried out with a set of small prototypes.
In the second phase (perceptual issue) the integrated prototype is constructed in order to find an appropriate GUI that supports the user-centered model, as the integrated GUI invokes functions, gives instructions, allows control and presents results without intruding in the visualization process.
The adoption of the system is linked to the third phase (ergonomic issue), which enhances the user-system interaction space. Its elements are the on-line context-sensitive help and good hard-copy output possibilities. The main objective is to aid the user in developing the workflow for his/her visualization task and to keep him/her aware of the involved data sets and applied algorithms.
The entities which model the GUI architecture are: Object, Event, Place, Time, State and Action. See Figure 153, where the ERM diagram of the interaction process is shown, together with its three important relationships.
The user commands can be invoked through menus (mouse selection), direct manipulation (mouse picking), key bindings (keyboard) and macros (file).
In Figure 154, the menu structure is presented hierarchically in order to keep the actions organized around similar content. The menu organization follows, from left to right, the visualization process: the Project menu is concerned with the data input and output formats; the Geometry menu contains the mesh representations, followed by the different Rendering possibilities; then the Quantity menu selects the scalar or vector field quantity, and the Representation menu offers the creation of different representations. The View and Update menus are related to the setup of the viewing and presentation parameters. However, the menu structure is flat, which means that the respective menu items can be invoked at any level of interaction. The pull-down and pull-right menus, together with dialog-boxes, enable the user to localize quickly the specific input.
However, the command sequence has a certain amount of intelligence built into it: only the commands that are consistent with the previous one will be executed. In addition, some commands can be triggered through the mouse-cursor point-and-click interaction, as it is sometimes easier for the user to perform a direct mouse manipulation than to traverse the whole menu hierarchy to find the envisaged command. For example, it is easier to select a surface by mouse picking than to remember its textual name, which would need to be selected through the dialog-box interface. The user's control of the application by menus or by mouse has to result in consistent visual feedback through self-explanatory input and output status monitoring. The GUI should provide a user-friendly point-and-click interface based on the use of mouse and keyboard, with access to a variety of sliders, buttons, text entries, dialog-boxes and menus for the variety of user inputs.
The developed interface of CFView, see Figure 155, contains GUI components organized with menus, icons and dialog-boxes. Viewing operations and interactive interrogations of the computational fields are ensured by the cursor and the view manipulation buttons. The general GUI layout is subdivided into different areas, as shown in Figure 155:
TOOLBAR area
QUICK ACCESS area
the middle GRAPHICS AREA and
the bottom area, subdivided into the following regions:
1. message area
2. string input
3. viewing buttons
4. view monitor
5. cursor monitor
All the mentioned areas are 2D GUI components, except the graphics area, which is the part of the screen where the graphics objects appear. The graphics area displays and manipulates the 3D graphics content, which is managed through specialized graphics objects called Views. One or more views may appear simultaneously within the graphics area. These views can be positioned arbitrarily; thus, they can be moved, sized and can overlap each other, see Figure 156. The generation of more than one type of view was analyzed, and three types of views were adopted:
Plot views that are used to display data in a Cartesian plot form.
The Rotation Buttons allow rotating the camera about the principal coordinate directions X, Y or Z for 3D views, but are deactivated for Cartesian plot views. The Roll Button allows rolling the camera around the view normal and affects the view-up vector direction. The Zoom In/Out Button allows interactive zooming in and out, affecting the camera width and height parameters. The Zoom Area Button allows specifying a rectangular area of the active view to fit to the whole view display area and affects the position, target, width and height of the camera parameters, as shown in Figure 165.
Figure 165: Camera parameters and virtual sphere used for camera rotation
Figure 166: Symbolic calculator for the definition of new field quantities
For such purposes, the Symbolic Calculator is an important element of the visualization system, which can involve the available quantities in the definition of an algebraic expression, as shown in Figure 166. The Derived quantities are calculated from the user-defined symbolic expression, and they can be field derived quantities (defined in the whole computational domain) or surface derived quantities (defined only on surfaces). Based on a standard set of computed quantities (static pressure, static temperature, density, absolute velocity vector field or relative velocity vector field), different common Thermodynamic Quantities can be computed, like the Mach number, the Total Pressure or the Internal Energy. In addition, differential operators, such as Gradient, Divergence or Curl, can be applied to the existing field quantities, thus resulting in the creation of a new scalar or vector quantity field, depending on the quantity type resulting from each of the available differential operators, see section 1.3.2.
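The evaluation of such a symbolic expression reduces to a node-by-node computation over the field arrays; a minimal sketch for the Mach number, assuming the local speed of sound is formed from the static pressure and the density:

#include <cmath>
#include <vector>

// Derived field quantity: Mach number from the standard input quantities.
std::vector<float> machNumber(const std::vector<float>& p,   // static pressure
                              const std::vector<float>& rho, // density
                              const std::vector<float>& vx,
                              const std::vector<float>& vy,
                              const std::vector<float>& vz,
                              float gamma)                   // specific heat ratio
{
    std::vector<float> mach(p.size());
    for (std::size_t i = 0; i < p.size(); ++i)
    {
        float speed = std::sqrt(vx[i]*vx[i] + vy[i]*vy[i] + vz[i]*vz[i]);
        float c = std::sqrt(gamma * p[i] / rho[i]);  // local speed of sound
        mach[i] = speed / c;
    }
    return mach;
}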
This process can be automated with a set of macro instructions, which can be invoked through the Macro subsystem. The Macro subsystem allows the user to record the interactive actions he/she is performing, in order to automatically replay them later on, or the next time the file is accessed. This ability increases the user's efficiency in investigating similar cases without repeating the same set of actions all over again. Thus, a user actually becomes a high-level programmer, and the macro script is a visualization program, which can be used to define different visualization scenarios.
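The recording idea can be sketched with a small class that appends each interactive action, as a textual command, to a replayable script; the class and method names are illustrative, not the CFView macro format:

#include <fstream>
#include <string>
#include <vector>

class MacroRecorder
{
    std::vector<std::string> commands;
public:
    // Called by the interactive layer after each completed action.
    void record(const std::string& command) { commands.push_back(command); }

    // Write the recorded scenario to a macro script for later replay.
    void save(const std::string& fileName) const
    {
        std::ofstream out(fileName.c_str());
        for (std::size_t i = 0; i < commands.size(); ++i)
            out << commands[i] << '\n';
    }
};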
In the following two examples, see Figure 167, of the standardized output created for the EUROVAL project [97], the two compared contributions are from Deutsche Airbus and Dornier. The airfoil computations were validated against experimental data. The grid mesh and pressure field are presented for the whole 2D flow field around the airfoil, while the pressure coefficient, skin friction and displacement thickness distributions along the solid boundary are presented as three Cartesian plots, validated against experimental data. As there were many contributing European partners, the generated visualization scenario macro automated the generation of such presentations. The second example is the visualization scenario for the 2D bump test case, as shown in Figure 168, where the vector profiles in the boundary layer and the shock wave details with isolines and local values are presented. The two contributed computations are from the University of Manchester (UMIST) and the Vrije Universiteit Brussel (VUB).
Figure 167: EUROVAL visualization scenario for the airfoil test case
Figure 168: EUROVAL visualization scenario for the Delery and ONERA bump
As can be seen, the variety of presentation possibilities offered by the visualization system is large, and it is necessary to carefully experiment with the available possibilities to come up with an appropriate setup. As explained, once the visualization scenario is found, the user interaction with the system can be quite fast, straightforward and even automated for fast outputs, as shown for the two EUROVAL examples.
The appearance of the graphical primitives influences the user's visual perception of the displayed data. In Figure 169, the setting of the Node primitive in blue, the Edge primitive in red and the Face primitive as transparent, with gray grid lines for reference, is shown. For comparison purposes, the use of these attributes can help to distinguish different data sets when presented in the same view.
An additional possibility is the superposition of different views in a transparent mode, which gives the impression of one view. Such a layering mechanism is interesting, as it allows generating independent views and then, like putting transparencies one over the other, performing the comparison, as shown in Figure 170.
Modifying the colors associated with the scalar quantity can sometimes reveal new insight when analyzing scalar fields. Together with the graphical primitives, each of which can have a different colormap associated, this represents a powerful way to present data. It makes it possible to manipulate different quantities in the same view, while keeping the necessary visual distinction between them.
Figure 171: Different graphical primitives showing the same scalar field
A powerful element of the Symbolic Calculator is the generation of new geometries, which are analytical shapes like the sphere and others. In Figure 173, some examples are generated and the available quantity fields can be analyzed in these curvilinear spaces, as a superset of the cutting plane mechanism.
ANALYSIS
The analysis begins with the problem statement expressed by the user.
Problem Statement:
The analysis model is a concise, precise abstraction of what the desired system
must do, not of how it will do it. The analysis model is to be understood, reviewed
and agreed upon by application domain experts (who are not computer-science
experts/programmers). This process leads to the extension of the user model data.
DESIGN
System design:
Object design:
Object design augments the content of the analysis model. Design decisions
include: specifying algorithms, assigning functionality to objects, introducing
internal objects to avoid re-computation, and optimization. The emphasis is on
essential object properties, so as to force the developer to construct cleaner, more
generic and re-usable objects.
IMPLEMENTATION
The overall system architecture defined by the design model is a tradeoff between the analytical model and the target computer platform. Some classes whose properties do not derive from the real world -- for instance Set or Vector, which support specific algorithms -- are introduced as an auxiliary part of the design model. The implementation style must enhance the readability, reusability and maintainability of the source code.
The models and the source code constitute together the software solution; they are the answer to the question
WHY is the software system created? When developing a visualization system, several types of OOM models
and diagrams are used to address three basic questions:
1.
WHAT?
The static model describes objects and their relationships by entity-relationship diagrams, a visual
representation of the objects in a system: their identity, their relationships to other objects, their attributes,
and their operations. The static model provides a reference framework into which to place the dynamic
model and the functional model. The object model describes classes arranged into hierarchies that share
common structures and behaviors. Classes define the attributes and the operations which each object has and
performs/undergoes.
2.
WHEN?
The dynamic model describes the interactive and control aspects of the system by state diagrams. This
model describes the time-dependent system characteristics and the sequences of operations regardless of the
nature or mechanism of the operation. Actions in the state diagram correspond to functions in the functional
model. Events in the state diagram become class operations in the object model.
3.
HOW?
The functional model describes the data transformations in the system by data flow diagrams. The functional model captures the system's functionality. Functions are invoked as actions in the dynamic model and are shown as operations on objects in the object model.
The static model represents the reference base, because it describes what is changing or transforming before describing when and/or how the changes are done. Successful software engineering requires a number of very different technical skills to satisfy research and industry needs. They include the ability to do analysis, system design, program design, coding, integration, testing and maintenance. OOM requires self-consistency and a sense of purpose. The experience with OOM is based not only on a set of techniques but also on their interactions: all of them have to work well together, much akin to the mathematical identification of simplicity and beauty. In order to use the various notations and techniques in a balanced way, OOM focuses on achieving an elegant design and implementation, whose outcome is expected to be comprehensive, implementable, efficient, maintainable and extendable code.
Software engineering is the combination of different activities, which have to follow and overlap each other in a cyclic and iterative manner. The class design should be a separate activity from the class implementation. Thinking of a class design only in an abstract manner can lead to a design that is impossible to implement. The balancing of these two thoughts is the responsibility of an OO software engineer. The iterative process of methodology improvement and the iterative process of software development advance in parallel. Consequently, the developed methodology extends beyond a single software project. Therefore it is not always possible to determine the effects of methodology crafting at the start of a project, as the methodology evolves over time in response to the acquired experience, and this process is usually accompanied by development problems.
Figure 174: Comparison of the traditional and object-oriented software development life-cycle
The software engineering methodology is an integrated combination of concepts, guidelines, steps and deliverables in the context of an underlying process description; it includes not only graphical notations but also textual descriptions and documentation standards. The methodology encompasses a large integrated set of development techniques and tools, resulting in procedures such as:
Debugging codes and algorithms to support development,
Simulation results for industrial applications,
New visualization patterns to accomplish research.
Techniques are developed to improve the software analysis and design possibilities:
Management techniques comprise planning, organizational structure (hierarchy of abstraction levels), deliverables (their description and timing) and quality control.
Design techniques provide the means for verifying the design before coding.
The library management provides support to the OO approach by search and query possibilities over existing reusable classes when developing new classes. Tools supporting the OO methodology provide notation, browsing and annotation capabilities. Ideally, the OO design tools should provide mechanisms for navigating from the higher-level OO diagrams to the code and back to the design. This functionality is called forward and reverse software engineering, relating the developed code with the design diagrams. Software engineering must improve both software quality (the product) and software production (the process). As B. Meyer states [4], there exist different quality factors, as follows:
CORRECTNESS is the ability of software products to perform their tasks exactly as defined by their specification.
ROBUSTNESS is the ability of software systems to react appropriately to abnormal conditions.
EXTENSIBILITY is the ease with which software products may be adapted to changes of specification.
REUSABILITY is the ability of software products to be reused, in whole or in part, for a new application.
COMPATIBILITY is the ease with which software products may be combined with others.
EFFICIENCY is the skilful use of hardware resources, processors, external and internal memories and communication devices, minimizing the resources and improving the performance.
PORTABILITY is the ease with which products may be transferred to various hardware and software platforms.
VERIFIABILITY is the ease of preparing acceptance procedures, particularly test data, and procedures for detecting failures and tracing them to errors during the validation and operation phases.
EASE OF USE is the ease of learning how to use software systems, operating them, preparing input data, interpreting results and recovering from usage errors.
Software that has been tested under a large variety of operating conditions is expected (very likely) to work correctly in a new situation/application.
To summarize, OOM was selected as the appropriate methodology for the development of our VS system.
The identity test checks whether two objects are the same one. The equality test checks whether the contents of the two examined objects are equal.
The state of the object is represented by the set of values assigned to its attributes, also called instance variables. For example, a point can have the coordinate values (5, 7); these values represent the state of the point object. The behavior of the object is represented by the set of methods (procedures, operations, functions) which operate on its state. For example, the point can be moved: the method "move" represents the point behavior and affects the point state (the coordinate values).
Figure 176 decomposes an ADT into its specification (the interface and messages, with their syntax and semantics) and its implementation (the representation: attributes and state; the algorithms: methods).
Even an integer variable in FORTRAN is an ADT: for example, to modify its integer value, an assignment operator (message) has to be invoked (e.g. I=5).
The message/method separation is present in the ADT specification and implementation, see Figure 176. The message syntax specifies the rules for invoking the class methods. The message semantics specifies the actions which simulate the class behavior. The detailed definition of the message semantics is essential for the method implementation of the class (ADT) [100]. The application of a message to an object is called sending the message. The object class must find an appropriate method to handle the message. The message passing mechanism allows method invocations consisting of the object identification followed by the message. The message can include any other parameter needed in the method execution (e.g. aircraft-fly, point-move(x,y)); therefore the message passing mechanism provides a consistent way of communication between objects, see Figure 177.
Encapsulation or Information Hiding consists of separating the external aspects of an object, which are accessible to other objects, from the internal implementation details, which are hidden from other objects. The implementation of an object can be modified without the need to modify the applications that use it; thus, encapsulation restricts the propagation of the side effects of small modifications. An OOPL makes encapsulation more powerful and cleaner than conventional languages that separate data structure and behavior. Information hiding, or encapsulation, is the principle of hiding the internal data representation and the implementation details of an ADT, allowing access only through a predefined interface. The interface, represented by a limited number of messages, reduces the interdependences between objects, as each one can only access other objects through their interfaces (messages). The interface ensures that some of the class attributes and methods cannot be corrupted from outside. An object has three types of properties:
STATE
BEHAVIOUR
INTERFACE
3.2.3 Inheritance
Inheritance is the sharing of attributes and operations among classes based on a hierarchical relationship. Each subclass incorporates, or inherits, the properties of its super-class and adds its own unique properties. The ability to factor out the common properties of several classes into a common super-class can greatly reduce repetition within designs and programs and is one of the main advantages of an OO system. The sharing of code using inheritance is one of the main advantages of OOPL. In the procedural approach we would have two separate hierarchies: one for the data structure and another for the procedure structure. In the OO approach we have one unified hierarchy.
174
More important than eliminating the code redundancy is the conceptual clarity in recognizing that different methods are related to the same data structure, which largely reduces the number of cases that need to be specified and implemented.
Inheritance is a partial ordering of classes in which the relationship of inclusion is applied among some of their properties. The ordering is usually achieved hierarchically, from generic abstract super-classes at the root to subclasses of greater specialization and tangibility, see Figure 178.
A new subclass is constructed from an existing, conceptually related super-class by inheritance, specifying only the difference between them. Inheritance can extend or restrict the features of the super-class in order to create the specialized subclass. Inheritance therefore represents the realization of the generalization/specialization principle used to create new subclasses incrementally from existing, less specialized super-classes.
A class can have multiple subclasses and super-classes. If a class has only one super-class we speak of single inheritance, see Figure 178a. The more complex form is multiple inheritance, see Figure 178b, where a class is allowed to have several super-classes. Two different approaches to applying inheritance are based on the ADT decomposition, see Figure 176:
specification inheritance, which inherits the class interface (messages) and is the basis of subtyping,
implementation inheritance, which reuses the class representation and methods.
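A minimal C++ sketch of single and multiple inheritance, with illustrative class names:

class GraphicsObject           // generic abstract super-class
{
public:
    virtual void draw() = 0;   // common property factored out
    virtual ~GraphicsObject() {}
};

class Curve : public GraphicsObject   // single inheritance
{
public:
    void draw() {}                    // specialized behavior
};

class Named
{
public:
    const char* name;
};

class NamedCurve : public Curve, public Named  // multiple inheritance
{
};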
175
The run-time identification together with inheritance provides a form of polymorphism that gives designers and programmers a flexible construction methodology for creating software components, reflecting the Object-Oriented Analysis and Design concepts with straightforward counterparts in OOP with C++.
Good software design is the result of the well-trained and insightful thought process of its architects, and inheritance is one of the well-adapted mechanisms to promote software reuse.
3.2.4 Polymorphism
Another, equally important OO concept is Polymorphism, which is much less understood and appreciated than Inheritance. The term Polymorphism means to have many forms, and in software construction Polymorphism means that the same message can be sent to instances of different classes. As mentioned previously, a specific implementation of a message is a method; if a message is polymorphic, it may have more than one method implementing it. An operation is an action or a transformation that an object performs or is subjected to. Polymorphism is the ability to define different kinds of objects (classes) which support a common interface (messages). Thus, objects with quite different behavior may expose the same interface (e.g. aircraft-move, point-move). There are two types of polymorphism in the OO approach, see Figure 179:
the polymorphic object, where the receivers of the message can be objects from different classes,
the polymorphic message, where the same message can be invoked with a different number and/or types of arguments.
Example of a polymorphic object: different classes such as Aircraft and Point have the same message move inherited from the super-class Object. Both classes Point and Aircraft redefine the method move, which is executed when the message move is sent to them. Here the specification inheritance (dynamic binding) mechanism is related to polymorphism. Polymorphism is essential for the implementation of a loosely coupled collection of objects whose classes are not known until they are identified by the program at run-time. Thus, the message move can be applied to a collection of objects without knowing whether the object is a Point or a whole Aircraft.
Example of a polymorphic message: the class Point can have two messages: move(x), a one-argument message, and move(x,y), a two-argument message. In the first case the point is moved just in the x direction, while in the second case the point is moved in the x-y direction. Thus the application of the same message has two different behaviors.
Figure 179: the polymorphic object (Aircraft and Point both answering move) and the polymorphic message (move(x) and move(x,y) on Point)
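Both types can be illustrated with a compact C++ sketch following the Aircraft/Point example; the printed messages are illustrative:

#include <iostream>

class Object
{
public:
    virtual void move() { std::cout << "Object::move\n"; }
    virtual ~Object() {}
};

class Aircraft : public Object
{
public:
    void move() { std::cout << "Aircraft::move\n"; }   // redefined method
};

class Point : public Object
{
    float xx, yy;
public:
    Point() : xx(0), yy(0) {}
    void move() { std::cout << "Point::move\n"; }      // redefined method
    void move(float x) { xx += x; }                    // polymorphic message: move(x)
    void move(float x, float y) { xx += x; yy += y; }  // polymorphic message: move(x,y)
};

int main()
{
    Object* objects[2] = { new Aircraft, new Point };
    for (int i = 0; i < 2; ++i)
        objects[i]->move();   // polymorphic object: the class is found at run-time
    delete objects[0];
    delete objects[1];
    return 0;
}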
176
177
<name>
2) <object involved>: <name>
3) <functionality descriptor>: <verb>
The second part, <object involved>, identifies an object which is manipulated and sometimes returned; for example, the field Find member function represents a use of such a naming convention. The object name is omitted in the member functions, as the function operates on the instance itself. The order of arguments in the member function calls requires the input arguments to appear before the output arguments. The <functionality descriptor> describes operators like Find, Insert and Make. As stated earlier, these rules are used to facilitate the reuse of the designed classes.
OO programming did not begin with the C++ language: it started in the 1960s with the development of SIMULA, a language developed at the Norwegian Computing Center for the purpose of simulating real-world processes. SIMULA pioneered the concepts of classes, objects and abstract data types. This was followed by LISP/Flavors and SMALLTALK in the 1970s, and by several other object-oriented implementations. However, these languages were mainly used in research environments. It was not until the late 80s that OOP began to gain momentum and C++ started to be recognized as the powerful language it still is today.
The ancestor of OOP languages, SIMULA 67, was conceived for developing simulation software. The first complete, stand-alone object-oriented development systems were built in the early 70s by XEROX at its Palo Alto Research Center. The main aim of Xerox's first research on SMALLTALK systems was to improve the communication between human beings and computers in the Dynabook project [31].
Artificial intelligence research -- especially expert systems -- also had a strong influence on object orientation. The developers of the time were guided by concepts like those of Marvin Minsky, first described in his frame paper and later summarized in his The Society of Mind [105]. Minsky and other authors explained the basic mechanisms of human problem-solving using frames, a concept well-suited to computer implementation. This explains the similarities between classes in systems like SMALLTALK [31] and LOOPS [106], and units in KEE [107].
Programming methodologies are used to structure the implementation process. The current state of the art suggests that OOP brings greater benefits in implementation than other methodologies (e.g. structured programming), namely:
simplification of the programming process,
increase in software productivity,
delivery of software code of higher quality.
OOP is a set of techniques for designing applications that are reusable, extensible, scalable and portable across different computer platforms. The purpose of designing reusable code is to lower the time (and cost) of producing software. Experience shows that, no matter how well specified the application requirements are, some requirements are bound to change, and this results in the need for further development. The requirements change frequently for the user interface and for the system environment/platform, while they tend to remain quite stable for the underlying algorithms. Designing extensible code facilitates the changes that are necessary to meet modified requirements. Designing reusable code ensures that code used in today's applications can be applied in tomorrow's applications with minimum modifications.
To implement an abstract solution well specified in terms of OO concepts and constructs, we need a programming language with rich semantics, i.e. one which can directly express the solution in a simple syntax. OOP is a methodology for writing and packaging software using an OO programming language. C++ is an extension of the C language which includes OOP support features; it was developed at AT&T in the early 1980s [30, 108, 109]. OO concepts have been well integrated into C++, and very little new syntax has to be learned when moving from C to C++.
The fundamental OOP features supported by C++ are:
data abstraction and encapsulation,
message passing,
inheritance.
In software engineering, and particularly in OOP, it is important to manage the complexity of implementation;
OOP requires a software-development environment which comprises the following components:
text editor,
class, data and file browser,
compiler and linker,
debugger,
class libraries.
The programmer's creativity is best enabled when the development environment is accessible through an integrated user interface giving total control over all aspects of the implementation process. Such an environment also constrains the developer's moves, as code must be produced and changed in a disciplined, consistent way. OOP requires browsing techniques in order to identify the classes and their interactions. Software productivity and quality are significantly improved in a development environment that can display class relationship information and is able to present just enough necessary and sufficient data for the developer to manage the complexity of the code and of the coding process. The complexity is usually layered through class categories and hidden through the inheritance mechanism. The real benefit to the programmer is that code changes are bounded to a class when a single functional change is made. To locate the code influenced by a change, the class including the functionality is the basis for the scope of the change. If the class is a derived one, the tool must be able to show all the functions that it has access to, not just the ones that it defines. An integrated development environment must help to direct the developer towards what he/she wants to do rather than how to do it.
A class library is composed of header files and a library. The header files, with extension .h, can be located in the include directory; they contain all the information needed by the programmer for using the class. The files that contain the implementation of the class methods (member functions) are archived in the library in compiled form. The library can be located in the lib directory with the .a extension.
Data flow diagram (DFD), for functional modeling (see Figure 180),
Entity-relationships diagram (ERD), for data modeling (see Figure 181),
State-transition diagram (STD), for the modeling of interactive system behavior (see Figure 182);
These modeling tools identify different characteristics of the same object; these must uniquely represent the object and prove the necessity of the object's existence. The DFD is a graphical representation of the functional decomposition of the system in terms of processes and data; it consists of (see Figure 180):
processes, shown as "bubbles" (e.g. calculate streamline),
data flows, shown as curved lines that interconnect the processes (e.g. streamline geometry),
data stores, shown as parallel lines, which exist as files or databases (e.g. velocity field),
terminators, shown as rectangular boxes; they show the external devices with which the system communicates (e.g. screen, mouse).
The data flow diagram identifies the major functional components but does not provide any details on these
components. The textual modeling tools -- the data dictionary and the process specification -- respectively
provide details on the data structures and data transformations. For example, the Point data dictionary and the
Process 3 specification are as follows:
Point:
point = coordinate x + coordinate y
coordinate x = real number
coordinate y = real number
real number = [-10^6, +10^6]
Process 3:
1. Find the cell of the first streamline point.
2. Interpolate the pressure.
3. Search for the next streamline point inside the cell.
4. If the point is found, interpolate the pressure, else continue the search through the neighbor cells.
5. Repeat actions 3-4 for all streamline points.
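The process specification translates almost directly into code; a sketch under the assumption of hypothetical cell search and interpolation helpers:

#include <vector>

struct Point3 { float x, y, z; };

int   findCell(const Point3& p);                       // assumed global search
bool  cellContains(int cell, const Point3& p);         // assumed geometric test
int   searchNeighbors(int cell, const Point3& p);      // assumed neighbor search
float interpolatePressure(int cell, const Point3& p);  // assumed interpolation

// Pressure distribution along the streamline points (Process 3).
std::vector<float> pressureAlongStreamline(const std::vector<Point3>& pts)
{
    std::vector<float> pressure;
    if (pts.empty()) return pressure;
    int cell = findCell(pts[0]);                           // 1. cell of the first point
    pressure.push_back(interpolatePressure(cell, pts[0])); // 2. interpolate the pressure
    for (std::size_t i = 1; i < pts.size(); ++i)           // 5. repeat for all points
    {
        if (!cellContains(cell, pts[i]))                   // 3. search inside the cell
            cell = searchNeighbors(cell, pts[i]);          // 4. else neighbor cells
        pressure.push_back(interpolatePressure(cell, pts[i]));
    }
    return pressure;
}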
The DFD is a useful tool for modeling the functions, but it says little or nothing about the data relationships. The data stores and the terminators in the DFD show the existence of one or more groups of data. One needs to know in detail what data is contained in the data stores and terminators, and what relationships exist between them. The ERD tool is used to model these aspects of the system and is well suited to performing an OO analysis of the data and of their relationships.
An ERD comprises as main components (see Figure 181):
entities, shown as rectangular boxes, each representing one or more attributes (e.g. Point, Curve),
attributes, shown as ellipses; they cannot contain entities (e.g. Point coordinates),
relationships between entities, with the multiplicities one-to-one 1<>1, one-to-many 1<>M and many-to-many M<>M.
In Figure 181, one Streamline has many Nodes. The relationship is HAS, and the multiplicity of the relationship
is one-to-many, 1<>M.
A third aspect of the system that needs to be described is its time-dependent (real-time, interactive) behavior. This behavior can be modeled by a state-transition diagram using sequences which show the order in which the data will be accessed and the functions performed. To model the streamline example with an STD (see Figure 182), we need to add the conditions that cause the changes of state and the actions that the system takes in response to the changes of state (e.g. click of mouse button -> get coordinates). A condition is an event in the external environment that the system can detect (e.g. mouse movement, clicking of a mouse button).
Generally, the objects are identified as NOUNS in the process specification, and also as data in the data dictionary (e.g. Point). The most appropriate way to find the objects is from the ERD, because of the (usual) one-to-one correspondence between entities and objects. In this case, the entities also have to be present in the process specification and the data dictionary. In the ERD of the streamline example, we can extract the objects Point, Quantity, Node and Streamline.
The object behaviors can be identified in the DFD. For example, the streamline can be calculated from the velocity field; hence, the Velocity Field must have a method for calculating the streamline when it receives the Streamline message with the Point argument. The object behavior can also be identified in the STD. For example, the Mouse object must return the Point object when the mouse button is pressed.
Three basic relationships may be identified between objects:
has - one object is composed of others (e.g. Streamline has Nodes),
is a kind of - indicates that one object is a specialization of another one (e.g. Node is a kind of Point),
uses - one object interacts with another (e.g. Velocity Field uses Point to calculate Streamline).
CLASS           ATTRIBUTES:         MESSAGE:                MESSAGE RETURN:
Mouse           point               buttonDownEvent()       Point
VelocityField   velocities, cells   streamline(Point)       Streamline
Streamline      points              geometryStructure()     Structure
PressureField   nodes, cells        streamlineStructure()   Structure
Screen          -                   display(Structure)      -
The essential part of the OO analysis phase is the documentation, which includes the specification of the objects with their attributes and messages. The methods have to be described in sufficient detail to ensure that the application requirements are complete and consistent. These classes represent the problem space, the first layer of abstraction.
Design can start as soon as we have the analysis of the problem. It is important to ensure that the analysis specification can realistically be implemented with the available software development tools. OO design is a bottom-up design: the lower-level classes can be designed before the high-level classes. To take advantage of reusability, it is natural to separate the classes identified during the analysis into two groups:
new components,
components reused from existing class libraries.
It is difficult to design an entire workable system without prototyping some parts of it and exploring alternative solutions based on existing class libraries. The class libraries relate to distinct computer-system areas -- e.g. graphics, user interface, mathematical, etc. -- and constitute frameworks that can be integrated to form a workable solution. If we assume that the class libraries Continuum, InterViews and PHIGS are available, the classes that must be designed are Streamline, Velocity Field and Pressure Field. OO design relies on graphical modeling tools for identifying the new classes.
With the C++ syntax, a class is specified as a set of data (variables) together with a set of functions. The data
describes the object state and the functions describe its behavior. A class is a user-defined data type, giving the
user the capability to define new data types. For example, the following is the definition of the Point class:
class Point
{
private:                 // REPRESENTATION - object state
    float xx;            // x coordinate
    float yy;            // y coordinate
    float zz;            // z coordinate
public:                  // INTERFACE - messages
    void move(float dx,  // increase coordinates: x direction
              float dy,  //                       y direction
              float dz); //                       z direction
    float x();           // return x coordinate
    void x(float v)      // modify x coordinate, v = new x coordinate
        { xx = v; }
};
In C++, the class definition mechanism allows the declaration of variables called data members (e.g. float xx) and functions called member functions (e.g. move). The class definition is slightly different from the ADT decomposition, because it puts together the class representation and interface (see Figure 176).
The access to the actual data is controlled by admitting only the use of the messages associated with the object; this prevents indiscriminate and unstructured changes to the state of the data. The data members are said to be encapsulated, since the only way to get at them is through the member functions (invoked by messages) associated with the method implementations (e.g. move).
void Point::move(float dx, float dy, float dz)
    { xx += dx; yy += dy; zz += dz; };
The message passing mechanism forms a sort of software shell around the class. The C++ keywords private,
protected and public support the technique of encapsulation by allowing the programmer to control access
to the class members. In the class definition, member data and functions can be specified as public, protected or
private:
- public members, typically functions, define the class interface,
- protected members, typically functions, define the class interface when inheritance is applied,
- private members, typically data, can only be accessed inside the member function implementations (methods). They reflect the state of the object.
The distinction between public and private members separates the implementation of classes from their use. As long as a class interface remains unchanged, the implementation of a class may be modified without affecting the client code that uses it. The client programmer, the user of the class, is prohibited from accessing the private part. By default, class members are private to that class. If we have to change the point coordinates, we have to use the appropriate member functions, for example x to update the x coordinate. There are three ways to identify an object in C++:
Point  a;        a.x(10);    // by name
Point& b = a;    b.x(10);    // by reference
Point* c = &b;   c->x(10);   // by pointer
In this example, the three names a, b and c identify the same object. In the Point class definition, the implementation of the x member function is also present. By default, such a function implementation is expanded as an inline function; thus, there is no function call overhead associated with its use.
The following definition extends the Point class with two new member functions, read and write, allowing input and output of the Point instance coordinates:
class Point
{
private:                    // REPRESENTATION - object state
    float xx;               // x coordinate
    float yy;               // y coordinate
    float zz;               // z coordinate
public:                     // INTERFACE - messages
    void move(float dx,     // increase coordinates: x direction
              float dy,     //                       y direction
              float dz);    //                       z direction
    float x();              // return x coordinate
    void x(float v)         // modify x coordinate to new value v
        { xx = v; };
    ostream& write(ostream&);   // output
    istream& read(istream&);    // input
    ...
};
In the definition of the Point class we have introduced two new classes, istream and ostream, which are standard classes of the C++ I/O stream library. The output stream instance cout and the input stream instance cin are defined by default; we can use their << and >> operators without caring about their implementation.
The implementation of the read and write member functions is as follows:
istream& Point::read(istream& s)
{return s>>xx>>yy>>zz;};
ostream& Point::write(ostream& s)
{return s<<"("<<xx<<","<<yy<<","<<zz<<")";};
In the body of the member function implementation, any undeclared variables or functions are taken to be
member variables or functions of the class (e.g. xx,yy,zz).
No default implementations of the << and >> operators exist for the Point class, so they have to be implemented in terms of the read and write member functions:
ostream& operator<<(ostream& s, Point & p)
{return p.write(s);};
istream& operator>>(istream& s, Point & p)
{return p.read(s);};
In order to improve performance in the use of such functions, they may be defined inline.
In the following example member functions are called using the normal member selection operator (e.g. -> or .).
An object or pointer to an object must always be specified when one of the Point member functions is called.
Point a;
Point *b=&a;
a.read(cin);
b->write(cout);
These input and output member functions may be called from anywhere within the scope of the declaration of
the Point instance. The same functionality can be achieved with the new definition of the << and >> operators.
Point a;
Point *b=&a;
cin>>a;
cout<<*b;
Operator functions may be members of a class. For example, a += operator could be declared as a member
function in the class definition by:
Point& operator += (Point );
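A minimal sketch of how such an operator could be implemented, assuming component-wise addition of the coordinates (the data members xx, yy and zz are those of the Point class defined above):

Point& Point::operator += (Point p)
{
    xx += p.xx;     // add the coordinate increments
    yy += p.yy;
    zz += p.zz;
    return *this;   // return the updated object to allow chained use
};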
Often, member functions require state information to perform their operation. Since every instance may be in a
different state, the state information is stored in the data members. Thus, the member functions provide access to
the data members, whilst these support the functionalities of the member functions.
Some of the most important member functions are constructors and destructors (see the class Curve). One can imagine that a class prescribes the layout of allocated memory as a set of contiguous data fields. A class may define several constructors but only one destructor; these member functions allocate and de-allocate memory during the lifetime of the object.
Memory allocation during initialization can be:
automatic
dynamic
static
In the following example, the three types of initialization are shown:
main(){
...
{
};
...
// start of block
Point a;
Point *b=new Point;
static Point c;
...
//end of block
// automatic
// dynamic
// static
};
- automatic objects are allocated on the program stack. Their lifetime is the execution time of the smallest enclosing block,
- dynamic objects are allocated in free storage. The programmer explicitly controls the lifetime of dynamic objects by applying the new and delete operators,
- static objects are allocated statically: storage is reserved for them during the whole program execution.
The dynamic control of memory allocation allows the programmer to write a parameterized program. Such a program, when running, allocates in free storage just the amount of memory required by the data being processed. This is in contrast with FORTRAN code, where memory must be allocated in advance to handle the maximum data size (FORTRAN always allocates the maximum specified memory).
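As an illustration of this contrast, the following sketch (assuming the Point class above is in scope) allocates exactly the amount of free storage that the input data requires, a size known only at run time:

int n;
cin >> n;                    // problem size read from the input data
Point* pts = new Point[n];   // allocate exactly n points in free storage
...                          // process the points
delete [] pts;               // explicit end of the dynamic objects' lifetime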
The following example defines the class Curve representation in C++:
class Curve
{
private:
    Point* store;    // the points defining the curve
    ...
};
Instances of base and derived classes are objects of different sizes because they are defined with different class
members. A derived object can be converted into a base class without the use of an explicit cast, i.e. without an
explicit request for conversion by the programmer. This standard conversion is applied in initialization,
assignment, comparison, etc. For example, a Node object can always be used as a Point object.
Node b;
Point& a=b;
Standard conversions allow the base classes to be used to implement general-purpose member functions, which may be invoked through base class references without being aware of the object's exact derived class.
Any derived class inherits from its base class its representation (including the external interface messages) and
methods implementation. The derived class can modify the methods implementation and add new members to
the class definition, therefore:
the internal representation can be extended, or
the public interface can be extended (or restricted).
The same member function name can be used for a base class and one (or more) derived classes. These
polymorphic member functions can be individually tailored for each derived class by defining a function as
virtual in the base class. Virtual functions provide a form of dynamic binding (runtime type-checking).
This mechanism works together with the derivation mechanism and allows programmers to derive classes without the need to modify any member function of the base class. Virtual functions allow the flexibility of dynamic binding while keeping the type checking of the member function signatures.
class Point
{
private:                    // REPRESENTATION - object state
    float xx;               // x coordinate
    float yy;               // y coordinate
    float zz;               // z coordinate
public:                     // INTERFACE - messages
    ...
    virtual ostream& write(ostream&);   // output
    ...
};

class Node : public Point
{
private:                    // REPRESENTATION - object state
    float val;              // quantity value
public:                     // INTERFACE - messages
    ...
    ostream& write(ostream&);           // output
    ...
};

main(){
    Point* store[2];                    // set of objects
    store[0] = new Point(...);          // Point initialization
    store[1] = new Node(...);           // Node initialization
    for (int i = 0; i < 2; i++)         // output for both objects
        store[i]->write(cout);
    ...
};
In the above example, the member function (message) write invokes two different write implementations (methods), those of the Point and Node classes respectively. When a member function is declared virtual in a class, all the objects of that class are labeled with type information as they are created; this virtual declaration adds some extra storage to the class. Any member function may be virtual, except constructors; operator member functions, including destructors, may be virtual.
From the behavior point of view it is frequently useful to identify the common features provided by more than one class, and most classes need to be categorized in more than one way. The mechanism of multiple inheritance allows the features of a derived class to be extended using more than one base class. For example, the class Node could be implemented differently if a Scalar class such as the following existed:
class Scalar {
private:
float val;
...
};
This version of the class Node is composed of the Point and Scalar classes, giving the features of both classes to the Node class without any additional coding, as the sketch below illustrates.
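A possible sketch of this composition (the exact form in the implemented class may differ):

class Node : public Point, public Scalar
{
    // inherits the coordinates from Point and the quantity value from Scalar
public:
    ...
};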
Besides virtual member functions (polymorphic objects), C++ supports function and operator overloading (polymorphic messages). Several member functions and operators can coexist with the same function name or operator symbol; the compiler uses the member function argument types to determine which implementation of a function or operator to use.
class Point
{
    ...
    float x();          // return x coordinate
    void x(float v)     // modify x coordinate to new value v
        { xx = v; };
    ...
};
Exception handling provides a consistent method to define what happens, and to prescribe a suitable system response, when a class client misuses an object. This is a central issue when developing interactive visualization systems, since one cannot avoid (unwanted) situations of implementation or coding anomalies. Exceptions refer to program states corresponding to run-time errors such as hardware failures, operating system failures, input errors and system resource shortages (e.g. failure to allocate a request for memory). Exceptions are control structures that provide a mechanism for handling such errors. Not surprisingly, OOM handles exceptions by adding them to the class methods where they naturally belong; this requires the class designer to consider all possible exception situations and systematically define the methods for error handling.
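A minimal sketch of this mechanism, using a hypothetical helper buildField and the standard C++ exception (std::bad_alloc, declared in <new>) raised on a failed memory request:

void buildField(int n)          // hypothetical helper, for illustration
{
    try {
        Point* nodes = new Point[n];   // may fail on a resource shortage
        ...                            // use the nodes
        delete [] nodes;
    }
    catch (std::bad_alloc&) {
        ...                            // report the failure to the user
    }
};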
Boundaries are defined as shared objects, since more than one region (cell-zone, segment) can be defined from existing boundaries (shared objects), which must be retained until all zone cells (segments) which depend on them exist. Special types of shared objects are:
pointers to functions
exception objects
error states
library signals and long jump facilities
The solution applied to this shared-object referencing problem is implemented in a base class which manages the reference counter and, through a virtual destructor, redirects allocation/deallocation of object memory to the derived classes. The derived classes implement the appropriate memory allocation algorithms when the related derived class instance is created or deleted.
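A sketch of such a reference-counting base class (the names are illustrative, not those of the implementation):

class Shared
{
    int count;                  // number of clients referencing this object
public:
    Shared() : count(0) {};
    virtual ~Shared() {};       // virtual destructor: deallocation is
                                // redirected to the derived classes
    void ref()   { ++count; };
    void unref() { if (--count == 0) delete this; };  // retained until unused
};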
OOM includes a few other concepts like concurrency, persistence and garbage collection. They are mentioned
here for completeness:
Concurrency refers to the concept of operations carried out in parallel as opposed to sequentially
(computers naturally operate sequentially). Concurrent systems consist of processors that operate,
communicate and synchronize so as to perform a task as efficiently and rapidly as possible. For
example, computational processes can be distributed and executed on several processors (distributed
environment) that run on different platforms (parallel/vector/grid clusters); hence, a Process class can
be designed to encapsulate the sequence of dispatch actions that supports the execution of parallel
protocols.
Persistence refers to the activating/deactivating mechanism which permits (arbitrary) objects to be automatically converted to symbolic representations. The object representations are stored as files, and the objects can be regenerated from these files by the inverse transformation. Thus, persistence provides the methods for saving and restoring objects between different sessions.
Garbage collection refers to the automatic management of certain types of objects. A time comes when (previously created and used) objects are no longer needed. The system must ensure that un-needed objects are deleted once (and once only) and can never be accessed afterwards. This is the purpose of garbage collection.
In defining the attribute types, two choices were considered:
1. To define both a verb and a noun for each attribute type, describing the association as data or process depending on the context. Such an approach introduces flexibility, but also increases the complexity of the model, as it does not uniquely define the related attribute type.
2. To define only a noun or only a verb to uniquely define the attribute type. This is a more restrictive approach, but it results in a more precise software specification.
The second choice was applied: attribute types are uniquely defined with a verb or a noun, the entity type being associated with a noun and the relationship with a verb, while the attributes of an entity or a relationship are either nouns or verbs. By applying ERM, we grouped the relationships and attributes within the defined entities in order to progress in the design of the OO classes. In this modeling process, entities are mapped to classes. A side effect of this process is the creation of auxiliary classes, which were not directly identified in ERM but were found necessary to support the envisaged software functionality. The primary goal of the analysis model is to put in evidence the end-user concepts and prepare a foundation for the class design.
An Attribute Type (AT) defines a collection of identifiable associations relating objects of one type to objects of another type. For example, in ERM, Boundary is an AT of the Zone entity, modeled as an attribute of the Zone class called Boundary. This Boundary object has an attribute that associates it with the one or more Zone objects bounded by this boundary.
Scenarios provide outlines of the user activities that define the system behavior. They provide more focused information related to the system semantics and serve as the validation elements of the software design. Scenarios provide the means to elaborate the system functionality, usually modeled through the collaboration of the respective classes. For example, the class View often participates in several scenarios: rendering an object on the screen, interacting with the windowing system when resizing, making icons and updating the display. The most complex scenarios cut across large parts of the software system architecture, touching most of the classes constituting the application. The scientific visualization system is an interactive application, with a variety of such scenarios driven by external events. Each defined visualization tool is constructed around a specific scenario which is sufficiently independent that it can be developed independently, although in practice it may have semantic connections with other scenarios.
ERM defines the static data structure consisting of entities, their relationships and the related constraints. This modeling technique was extensively used in Chapter 1 to define the Visualization System data model. In OOM, the data structure is defined within classes. Entities and relationships are modeled as classes, and there is no need for the specification of specific entity key attributes. The entity attributes are also defined as classes, modeled either as individual classes or as collections of classes. The following distinctions need to be considered before the mapping is made:
- Relationships are represented explicitly in ERM, but in OOM they are designed as class references. In ERM, the relationship model expresses the semantics, cardinalities and dependencies between entity types.
- Key attributes uniquely distinguish entities in ERM, which is not the case in OOM, as class instances have their own run-time identifier. However, these key attributes can be of use if persistence, or searching over such objects, has to be implemented based on an association with an indexing or labeling scheme.
- Methods are not present in the ERM notation. When there are constraints that cannot be specified declaratively within a class, we model such constraints with methods in order to support their verification; the methods become an integrated part of the class design.
- An entity type is mapped one-to-one to a class. The entity type hierarchy is mapped to the OO class hierarchy. Common entities are mapped to a specific subclass applying the multiple inheritance mechanism. A composite attribute is mapped to its component attributes as a collection of attributes.
- A binary relationship is modeled as an object reference within the designed class. A bi-directional reference facilitates bi-directional navigation to support inverse referencing. If the cardinality of a relationship is greater than one, it is appropriate to model it as a collection of object references (see the sketch after this list). If a binary relationship type has one or more attributes, it is appropriate to design a new class to handle these attributes and keep track of the object references of the two related classes for which bi-directional communication is required.
- Constraints can be specified for entity and relationship types. The multiple inheritance mechanism is one possibility to enforce the constraint of overlapping or disjoint entities in an entity type hierarchy. Cardinality constraints in a relationship type are implemented in the methods for constructing or destroying objects and the methods for inserting/deleting members into/from a collection of referenced objects. If a dependency constraint exists for the relationship type alongside these cardinality constraints, it should be enforced when constructing and destroying such objects.
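As an illustration of the relationship mapping, a one-to-many relationship such as Zone-Boundary could be mapped to a collection of object references, with the cardinality and dependency constraints enforced in the insertion/deletion methods (a sketch with assumed names, not the actual class design):

class Boundary;                          // forward declaration

class Zone
{
    Boundary** boundaries;               // relationship as object references
    int        count;                    // current cardinality
public:
    void insert(Boundary* b);            // enforces the cardinality constraint
    void remove(Boundary* b);            // checks the dependency constraint
};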
1. MODEL
Supports the functionality, which consists of the visualization algorithms and the CFD data management elements. It handles the internal data and operations, which are isolated from screen display, keyboard input and mouse actions.
2. VIEW
Represents the graphics display presentation, responsible for the displayed graphics as the feedback to all user interactions. It includes sub-layers with direct procedure calls to the window management and graphics kernel functionality.
3. CONTROLLER
The mediating component between the CFD data model and the viewing interface. It controls the user interaction with the model, including the system limitations that constrain the system operations in particular circumstances. It provides the initial creation of the interactive environment and maps the user input into the application.
[Figure: the MVC triad, with the numbered interaction links between the Model, View and Controller components]
The MVC framework can be applied to individual classes, but also to a group of collaborating classes. The envisaged
architecture organizes the software in interchangeable layers, which are perfectly in line with the applied MVC
framework. Such separation allows software updates to be performed without need for re-implementing the main
application structure, and allows for the easier integration of components -- for example, when creating different
layouts of the GUI. This separation also permits the creation of specialized layers only applicable to specific
hardware platforms.
We distinguish the following layers in the VS architecture:
1. The layer that handles the input data (and uncoupled from any graphical output), for example a
display screen or printer device.
2. The layer which is tightly linked to the graphics engine, i.e., the explicit invocation of graphics
kernel functionality.
3. The layer which coordinates the application model and the GUI.
These features make object-oriented systems different from traditional ones, in which these layers are mixed together. The possibility of independently changing the View layer or the Controller layer makes software maintenance easier, including adaptations to different display or input devices. For example, keyboard (or mouse) input devices can be changed without affecting the application structure.
Experienced designers solve problems by re-using proven, successful solutions or solution patterns: they can be
re-applied and need not be rediscovered. This is why the visualization system reuses the MVC solution and its
mechanism of dependency, which allows a change in a model to be broadcast to all objects concerned and to be
reflected in the multiple views.
The MVC pattern consists of three kinds of objects:
1. the model, which is the application object,
2. the view giving screen presentation, and
3. the controller which defines the way the user interface reacts to user input and model output.
Model and View are coupled through the subscribe/notify protocol. A view reflects the appearance of the model.
Whenever model data change, the model notifies its dependent views; in response, each view must update itself
by accessing the modified values and updating its appearance on the screen. This approach enables the creation
of multiple views of a model. The model contains data that can describe several representations. Views are nested
and MVC defines the way a view responds to a user command given via the associated input device. The viewing
control panel in CFView is designed with a set of buttons modeled as a complex view, which contains the views
of the related buttons. The views are contained in, and managed by the Window.
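A minimal sketch of this subscribe/notify coupling (the class names, the MAXVIEWS bound and the update message are illustrative):

const int MAXVIEWS = 16;                 // illustrative upper bound

class Model;

class View
{
public:
    virtual void update(Model* m) = 0;   // query the model and redraw
};

class Model
{
    View* dependents[MAXVIEWS];          // registered views
    int   count;
public:
    Model() : count(0) {};
    void attach(View* v) { dependents[count++] = v; };   // subscribe
    void notify()                        // broadcast a change to all views
    {
        for (int i = 0; i < count; i++) dependents[i]->update(this);
    };
};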
The interaction mechanism is encapsulated in a Controller object. The Controller class hierarchy supports the design of a new Controller as a variant of an existing one. A View encapsulates the interaction mechanism through the interface of the controller subclass; the implementation can be modified by replacing the controller instance with another one, and it is possible to change a View and a Controller at run time. The sharing of a mouse, keyboard or monitor by several visualization tools demands communication and cooperation: controllers must cooperate to ensure that the proper controller is selected to interpret an event generated via an interaction component, namely the one to which the user-mouse-cursor interaction is attached.
The Model has a communication link to the View because the latter depends upon the Model's state. Each Model has its own set of dependent Views and notifies them upon a change. In principle, Views can recreate themselves
from the Model. Each View registers itself as a dependent component of the Model, sets its controller and sets its
View instance variables. When a View is destroyed, it removes itself from the model, controller and sub-views.
Views are designed to be nested. The top View is the root of its sub-Views. Inside a top-View, are the sub-Views
and their associated Controllers. A single control-thread is maintained by the cooperation of the Controllers
attached to the various Views. Control has to result in only one Controller selected to interpret the user input: the
aim is to identify the one that contains the cursor. The identification of the cursor position is computed through
the traversal of the associated Views.
The Model is an instance of the class MForm and consists of application data. A change to a Model is a situation which requires a modification of the View, an instance of VForm. MForm messages enable VForm to update itself by querying the Model data. The VForm::update is selective, as determined by several parameters of the model, and the VForm::display controls the graphical appearance of the Model. It is important to note that no element of the interface (view/controller) is coded in the application (model); if the interface is modified, the model is not influenced by this change. The MVC framework is established using the three basic classes MForm, VForm and CForm (see Figure 186), as the following example of the three specialized classes for the treatment of the Surface model illustrates.
A change in the Model is broadcast to the model's dependents. VForm and CForm request from the Model the necessary data; in the surface example, the color variable is accessible to them, and MForm receives the message color sent by the VForm and CForm (interface components). The result of the user interaction is a new surface appearance and a change of the internal state of MForm. CForm sends the message for the thickness update to the MForm, which provides methods for updating the model data. CFSurface is the direct subclass of CForm, specialized for controlling the surface appearance; as a subclass of the Controller, it handles the user input of the dialog box that accepts the new thickness parameter. The message passing mechanism requires a link from the controller to the model; thus, the Controller cannot be predefined by the MVC framework, except for the generic methods Model and View. The controller concept is slightly extended with call-back messages, which are sent directly by a View to the Model once the Controller has set the View reference. The design components of the MVC model are shown in Figure 186 for the Surface example and consist of the following three parts:
1. The surface is the application object, and it represents the Model.
2. The displayed surface on the screen is the View of the surface.
3. The input handling for the surface parameters is done through the Controller.
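One possible shape of the three specialized classes, using the message names mentioned in the text (the member details are illustrative assumptions, not the actual design):

class MSurface : public MForm           // Model: the surface data
{
    float thick;
public:
    float thickness() { return thick; };       // queried by the interface
    void  thickness(float t) { thick = t; };   // called back by the controller,
                                               // followed by a notification
};

class VSurface : public VForm           // View: the displayed surface
{
public:
    void update();                      // re-reads the model data and redraws
};

class CFSurface : public CForm          // Controller: the thickness dialog
{
public:
    void accept(float t);               // sends the thickness update to MSurface
};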
Input Data category: consists of the input data classes derived from the base class ObjectCFD, composed of the Project, Mesh, Domain, Boundary and Segment classes.
3D Model category: consists of classes that filter the input data model and define different run-time models for 3D graphics, derived from the base class MForm, for 3D scene building and 3D window manipulation.
3D View category: consists of classes that define the appearance of the 3D graphics models and windows, derived from the base class VForm. This category is enriched with the AForm classes, which are responsible for controlling the appearance parameters of the displayed models.
3D Controller category: consists of classes, derived from the base class CForm, that handle the user interaction with the 3D models and views.
2D GUI category: consists of the base classes Button, Dialog and Menu, which build the 2D GUI layout. They are applied as standalone components, as they are reused from existing libraries such as InterViews and Tcl/Tk.
2D & 3D Event handling: an important category of classes which synchronizes the Events that the system generates from user interactions, received from the 2D GUI or 3D Controller categories.
The layers are described in the following Sections.
Each component of the input data model is identified with an identification string. The most complex coding involves the specialized parts of the geometrical space decomposition, modeled with the Segment class defining the boundary condition applied to that geometry. The input model is designed to support flexible processing of multiple components of each of the mentioned classes. For example, the label of a segment could look as follows:
M1.D2.B3.S4
which identifies segment 4 of boundary 3 in domain 2 of mesh 1. Each boundary consists of different BC segments, which may or may not be connected. In order to model such a cascading relationship, the base ObjectCFD class has a Set attribute to handle the parent-child relationship. These classes also reference the Representations created during the interactive session, such as cutting plane and isosurface instances. For these instances, a specific quantity field is created only if the user selects that quantity for investigation, which triggers the creation of the field. The regeneration mechanism is implemented through the inheritance mechanism, which supports the invocation of a specific regeneration algorithm at the requested level, where the parameters for the creation are known. A recursive search is possible through the mentioned parent-child relationship.
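A hypothetical helper illustrating how such a label could be decoded (the function name and the use of the standard C library sscanf are assumptions, not the actual implementation):

int decodeLabel(const char* label, int& m, int& d, int& b, int& s)
{
    // mesh, domain, boundary and segment indices, in cascading order
    return sscanf(label, "M%d.D%d.B%d.S%d", &m, &d, &b, &s) == 4;
}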
The appearance parameters of the selected graphics mode are provided through the AForm classes (part of the View Layer), which are manipulated through the Controller layer. The View Layer provides the visualization system with functionality adapted to building 3D scenes, navigation and windowing operations. It supports 3D graphics
content composition, display, and interaction. The graphical primitives for building the Representations are
characters, lines, polygons or other graphics shapes available from the underlying 3D graphics engine. The 3D
graphics classes for the modeling of an interactive visualization system are expected to support the following
capabilities:
- geometric and raster primitives,
- RGBA or color index mode,
- display list or immediate mode (trade-off between editing and performance),
- 3D rendering:
  - lighting,
  - shading,
  - hidden surface, hidden line removal (HLHSR) (depth buffer, z-buffer),
  - transparency (alpha-blending),
- special effects:
  - anti-aliasing,
  - texture mapping,
  - atmospheric effects (fog, smoke, haze),
- feedback and selection,
- stencil planes,
- accumulation buffer,
- compatibility, interoperability and conformance across different OS platforms: DEC, IBM, SGI, HP, SUN, PC-Windows, PC-Mac, etc.
The graphics API accesses the graphics hardware to render 2D and 3D objects directly into a frame buffer. These
objects are defined as sequences of vertices (geometric objects) or pixels (images).
[Figure: stages of the graphics pipeline, including display lists, clipping, rasterization and fog]
Transparency is best implemented using the blend function. Incoming (source) alpha is best thought of as material opacity, ranging from 1.0 (complete opacity) to 0.0 (complete transparency). Blending is appropriate in RGBA mode, while in color index mode it is ignored.
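Assuming an OpenGL-style graphics engine, the blend function would be configured as follows (a sketch, not necessarily the calls used in the implementation):

glEnable(GL_BLEND);                                 // turn blending on
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // source alpha acts as opacity
// alpha = 1.0 draws fully opaque, alpha = 0.0 fully transparent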
Graphics primitives can be used to create Forms, as shown in Table 32. The graphics attributes of such Forms consist of a variety of colors and rendering styles; they are applied to rapidly localize and analyze geometric entities.

Geometry     Graphics Primitive
text         string
points       marker
curves       polyline
surfaces     polygons, fill area facet
The View operations are: part, unpost, display and flush. The Display operations, each of which is also supported by the Window, are: resize, clear, destroy, pan, zoom, reset, update and conform.
The GUI components required several iterations before the final design was reached. They control the data input and provide feedback on the user interaction, including error reporting. The feedback information is extremely important, as it keeps the user informed about the system activities. The GUI toolkit contains general components for designing the user interface, which cover the following functionality:
- presenting menus,
- parsing commands,
- reading free-format numeric input,
- handling text input,
- presenting alerts and dialog boxes,
- displaying windows,
- presenting help, and
- reporting user-provoked errors.
The GUI combines all three kinds of menu components: menubar, pulldown and pullright menus. The menu items can have shortcut commands associated with keyboard input to speed up the interactions of experienced users. The visualization system is a command-driven system and operates by sequentially processing the triggered commands and their associated parameters, executed through the Action hierarchy of classes. The GUI classes have been designed to support a common interface to the 2D GUI elements, which implicitly takes care of the window system platform by wrapping the needed functionality.
The System is the main controller manager that synchronizes the input/output control of the application window, consisting of three parts:
1. menu bar and time header,
2. working area for placing controllers,
3. monitor area with toolbox controlling the current active view and automatically assigning view
manipulation tools.
The System class coordinates these two different event handling mechanisms in a unified form by taking care of synchronization and event processing order. This activity is tightly linked with the update mechanism and is especially important for 3D graphics updates. The System delegates to the Display class the responsibility to activate updates on the specified views. A user operation is not associated directly with a particular user interface, because it can be triggered from multiple user interfaces. The idea behind this is that the interface will change in the future, while the operations are expected to change at a slower rate. The objective is to access the required functionality without creating many dependencies between the designed operations and the user interface classes, an important element in modeling the undo and redo functionality that eases the interactive work. The Menu Item is responsible for handling the user-generated event, which contains the user request. The Event is captured by the event Dispatcher, which processes events sequentially. The Menu Item is associated with an Action, which is executed when the user selects the Menu Item and is responsible for carrying out the user request. The user request involves an operation associated with different combinations of objects and operations. The parameterization of the Menu Item is done through the Action object. The Action abstract class provides an interface for issuing a request; the basic interface consists of a unified execute message, which is propagated through the inheritance mechanism until the requested specialization is found. The Pulldown menu is an example of a class that triggers an Action in response to the button-down event. The buttons and other visual interaction components are derived from the Controller - Input Handler classes, which associate Actions in the same way as done for the Menu Item class.
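A minimal sketch of this parameterization (the class names are taken from the text, the member details are assumed):

class Action
{
public:
    virtual void execute() = 0;        // the unified execute message
};

class MenuItem
{
    Action* action;                    // parameterization of the menu item
public:
    MenuItem(Action* a) : action(a) {};
    void select() { action->execute(); };  // carry out the user request
};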
Supported output devices include terminals, plotters and laser printers.
Portability is an important requirement so that the visualization system can run on different UNIX-based workstations and PCs (Windows, Mac). Therefore, when CFView was designed to run on a variety of different graphics systems, the following elements had to be considered:
- Number of available colors,
- Line width support and availability of line type extensions,
- Polygon fill: maximum number of vertices, availability of pattern and hatch fill, ability to fill certain classes of polygons,
- Text (hardware character sizes and characteristics),
- Selective erase,
- Size of display (input echo area),
- Interactive input support,
- Double buffering,
- Z-buffering,
- Lighting & shading.
The development environment organizes an application as a set of files and directory structures. Some files are source code (headers *.h and source *.c), while others are object files (*.o) produced by the compiler, and executables produced by the linker; the latter depend directly on the source code.
An OO development environment supports this sense of design. An application is decomposed in terms of classes and their interactions. These are coded in C++ and spread over several directories containing many files, which require a careful, disciplined organization and naming convention. The source code is divided into two major parts. The include files are named <class name>.H and define the most abstract layer of the application; they describe the class interface protocol: the class declaration defined in terms of messages. The source files, which contain the implementation of the messages, are named <class name>.C.
In addition, the development environment must have a class-hierarchy browser which ties together all the C++ classes and possibly some native C or FORTRAN code. The visual presentation of the class hierarchy is an important aid to the developer, making it easier to comprehend the applied (sometimes complex) inheritance and polymorphism model. A class browser allows the class files and the executables to be accessed and edited in a highly structured way. It needs to provide menu-driven operations to import existing classes into a project, create new classes and delete existing classes from an application. The facility to visually create new member functions, data members and friends, to examine, modify and delete existing members, to change the visibility of any member, and to change a member function to be static, inline or friend represents an indispensable tool for modern software development. The makefile manages the mechanics of building the application by keeping track of the many include files required in a C++ program. The automatic construction and maintenance of the makefile, which controls the dependencies of the compile and link process, is essential and should be integrated in the IDE. The benefits from the use of such a development environment are:
- significantly reduced design and development time, due to a high degree of reusability, extensibility and ready-to-use components,
- significantly reduced program size,
- enforced consistency of the GUI, which has look-and-feel characteristics built in,
- enforced portability, because all system and device dependencies are eliminated through encapsulation and unified protocols.
A computer program is a complex artifact usually presented as text. Program visualization is a method of presenting program structure visually and is concerned with the following aspects:
- the techniques used to display a program visually,
- how visualization affects software development and maintenance,
- the general features of program visualization tools.
The graph representation is the standard visual technique for presenting program structure. The graph nodes represent program entities such as classes and functions, while the graph edges represent the relationships between these entities, for example inheritance and function calls. Using different line styles or colors, several representations can be overlaid on the same display, window or image. The C++ language provides a rich set of syntactic elements that can be visualized as graph nodes: classes, class templates, class members, functions, function templates, variables (both local and global) and files, together with the relationships between them.
[Figure 193: the knowledge domains of interactive visualization - the application data model (quantities, operators), continuum mechanics, computational geometry and combinatorial topology (geometrical and topological operations on the geometry and continuum), computer graphics and the user interface]
In order to provide for the interactive use of many different functionalities, a visualization system needs to
integrate several types of components. The graphical user interface, for example, which provides user
interactivity, relies on a dynamic windowing environment together with several I/O facilities and sets of
graphical primitives; the visualization operators are another example of software built upon code originating
from computational geometry [125] and combinatorial topology [126]. The number and the diversity of the
components/routines that must be handled and their tight couplings add considerable complexity to the task of
developing and maintaining visualization software over an extended period of time. To deal with this
complexity, our software was developed using object-oriented programming, allowing different data types and
routines to be encapsulated in objects. These objects are grouped in classes covering different application
domains. The classes themselves are organized into class hierarchies, which match the internal knowledge domains and the links between them, see Figure 193. Using OOP, one can systematically and naturally decompose complex software into manageable modules.
As shown in Figure 193, interactive visualization calls upon various knowledge areas and conceptual models,
namely: application data model, computational geometry, combinatorial topology and computer graphics. The
application model is meaningful to the scientists who want to be able to identify, extract and view regions of
interest in the application model. Computational geometry and combinatorial topology provide the set of
operators needed to carry out the visualization process; these operations are specialized mathematical
transformations and mappings that operate on the application data sets. Finally, computer graphics provide for
the concepts that relate to the user interface and for the graphical primitives that display data on the monitor
screens.
Scientific Visualization has become intrinsic to the scientific investigation and discovery process, indeed a methodological instrument of scientific research. SV instruments help users to gain novel, often
revealing insights into complex phenomena. The investigator controls SV tools to jump from phenomenon
overview to detail analysis then back from details to overview, with as many iterations as required to gain a
better understanding of the phenomenon under study. Interactive visualization, with its ability to integrate and
synthesize many rendering/viewing techniques and manipulations, improves the perceptual capabilities of the
user in a simple, intuitive and self-explanatory manner.
The main features of the software we have developed include:
Integrated structured and unstructured geometry treatment for efficient visualization of numerically-generated data around complex 2D/3D geometries,
Visualization environment with interactive user control,
Development of a highly portable graphical user interface (for easy access to integrated system
components),
Creation of class libraries as reusable and extensible components for different software products.
We have investigated visualization methods covering:
Transparent 2D/3D interactive visualization tools with user-controlled parameters.
Interactive 2D/3D qualitative tools, local values, sections, arbitrary cutting planes.
Interactive techniques simulating smoke injection (particle paths).
The interactive techniques comprise:
Interactive creation of non-computational grid surfaces (cut planes, iso-surfaces) where analysis
tools can be applied (e.g. sections, isolines),
Interactive identification of block topology and connectivity with boundary conditions,
Analysis of multiple quantities related to the same geometry in the same or side-by-side view,
Data comparison in a multi-view, multi-project environment.
The visualization tools were integrated in the GUI in order to enable seamless communication and control
between different geometries and quantity representations.
In the following section, we describe the impact that our visualization system has had on five EC and IWT R&D projects.
[Figure: the Parallel CFView architecture - the graphical workstation communicates over the network with the Parallel Server, which connects (via SCSI, S-bus or other buses) to the parallel machines; the Interface Framework and PVM support the Parallel Applications]
1. CFView, the visualization application running on the graphical workstation.
2. The Parallel Applications that run on the parallel machines. These applications can be viewed as a collection of stand-alone parallel programs performing mapping operations on CFD data.
3. The Interface Framework, which governs the communication between CFView and the Parallel Applications. The Interface Framework is a distributed program which for its largest part runs on the Parallel Server machine, but with extensions to the graphical workstation and the parallel computers. Among its tasks are communication management, parallel server control, networking and synchronization. It provides generic functionalities for transparent access to the parallel applications in a heterogeneous and distributed environment.
The four algorithms in the SIMD (as well as the MIMD) implementation of the Parallel CFView system [127] are:
1. The parallel cutting plane algorithm uses a geometrical mesh, a scalar quantity and the equation of a plane to calculate a collection of triangles and scalar values. The triangles represent the triangulated intersections of the plane with the mesh; the scalar data are the interpolated values of the scalar quantity at the vertices of the triangles.
2. The parallel isosurface algorithm uses a geometrical mesh, a scalar quantity and an isovalue. It calculates a collection of triangles representing the triangulated intersection of the isosurface at the given isovalue with the mesh.
3. The parallel particle-tracing algorithm uses a geometrical mesh, a vector quantity and the initial positions of a number of particles in the mesh. It calculates the particle paths of the given particles. The paths are represented as a sequence of particle positions and associated time steps.
4. The parallel vorticity-tracing algorithm uses a geometrical mesh, a vector quantity and the coordinates of a number of points in the mesh. It calculates the vorticity vector lines associated with the vector quantity that pass through the given points. The lines are represented as a sequence of positions.
Algorithm       #triangles   System       Send Data   Load Data   Execute
Cutting Plane   5189         Sequential      ---         ---        6.00
                             SIMD            3.66        5.82       3.63
                             MIMD            3.66       28.86       4.03
Isosurface      10150        Sequential      ---         ---       27.47
                             SIMD            3.66        5.82       6.62
                             MIMD            3.66       28.86       4.96

Table 33: Average times (s) for the Sequential, SIMD and MIMD implementations of the
Cutting Plane and Isosurface algorithms (wall-clock time)
In order to evaluate the performance of the heterogeneous and distributed approach for Parallel CFView, a
limited benchmarking analysis was conducted. We compared the performances of the SIMD and the MIMD
implementations with the stand-alone, sequential CFView system. The SIMD parallel machine used was a CPP DAP 510C-16 with 1,024 one-bit processors and 16 MB of shared RAM. The MIMD parallel computer was a Parsytec GCel-1/32 with 32 T-805 processors, each having 4 MB of RAM. Both parallel machines were connected
(through SCSI and S-bus respectively) to the Parallel Server machine which is a SUN SparcStation10. For the
graphical workstation, we used an HP9000/735 which communicates with the Parallel Server over an Ethernet
LAN. The results of the measurements are given in Table 33. All times are in seconds, averaged over 20 runs.
The table shows the average execution times for both algorithms on the different systems.
For the SIMD and MIMD Parallel CFView implementations, the average time needed to send the data (mesh and
scalar quantity) from the graphical workstation to the Parallel Server is given (Send Data), as well as the average
time needed for loading the data from the Parallel Server onto the parallel machines (Load Data).
The average execution time (Execute) for the parallel implementations includes (i) the sending from the
graphical workstation to the parallel machines of the algorithmic parameters (i.e. the equation of the cutting
plane or the isovalue); (ii) the execution of the algorithm on the parallel machines; (iii) the retrieval of the result
data from the parallel machines to the Parallel Server; and (iv) the sending of the result data from the Parallel
Server to the graphical workstation. For the stand-alone, sequential implementation of CFView, only the average
execution time is shown. The number of triangles (averaged over the runs) generated by the algorithms is given.
The total execution time for the parallel implementations, given in Table 33, is the sum of the three timings (Send, Load and Execute). However, by making use of the caching mechanism in the Interface Framework, the data send and load need be done only once, at the beginning. After that, only the new equation of the cutting plane or the new isovalue for the isosurface has to be transmitted to the parallel machines, after which execution can start. Hence, in a realistic situation, only the times listed in the last column (Execute) are relevant for comparison.
The results revealed how massively-parallel computers (SIMD as well as MIMD) can be used as powerful computational back-ends in a heterogeneous and distributed environment. A performance analysis of Parallel CFView showed that both types of parallel machines are about equally fast. The total execution times of the SIMD implementation are sensitive to the amount of computation required, whereas the execution times of the MIMD implementation depend on the amount of data routed between the processors. The overheads induced by the Interface Framework are seen to require only a minor fraction of the global execution times. This indicates that the heterogeneous and distributed approach of Parallel CFView is indeed viable, since it performs significantly better than its stand-alone, sequential counterpart. This is especially true for computationally-expensive operations, such as isosurface calculations on problems with large data volumes, and the heterogeneous and distributed nature of the system allows the transparent use of remote parallel machines on various hardware platforms.
An EJB container with all the metadata management rules to manipulate metadata, and the relational database used to store the metadata information. The EJB container acts as a security proxy for the data in the relational database.
[Figure: the QFView database links measurement data and CFD simulation data (produced by the CFD code) to comparison tools (experiment vs. experiment, experiment vs. simulation, simulation vs. simulation) and analysis tools (statistical analysis, quantification tools, image visualization, animation)]

(3) A Thin GUI Java client is used for remote data entry, data organization and plug-in visualization.
GUI clients must be installed at the end-user location, either at application installation time or by
automatic download (Zero Administration Client).
URL-accessed data (images, video, data files, etc.) can be placed at any URL site.
QFView organizes, stores, retrieves and classifies the data generated by experiments and simulations with an
easy-to-use GUI. The data management component, see Figure 195 b, offers a user-friendly web-enabled frontend to populate and maintain the metadata repository. The user can accomplish the following tasks using this
thin GUI Java client:
- Start the GUI Java client application, create a new folder (measurement) and define metadata information for the folder, such as keywords, physical characteristics, etc. It is important to emphasize that the GUI Java client is connected to the EJB server using the HTTP protocol, and all the information entered by the user is automatically stored in the relational database.
- Organize the data into a hierarchy of predefined or newly-created tree-like nodes; the user can also execute a data search procedure, combine documents, and perform several other operations on the input folder.
- Create and define new raw data, such as XML files, images, input files, etc., for a particular folder by specifying either a local or a remote location of the data.
- Define old raw data (XML files, text files, videos, etc.) by specifying either a local or a remote location of the data.
[Figure: data management front-end functions - data entry, folder and document insertion, data classification, full-text search, meta-model and data organization]
The QFView environment has enabled:
- the users to reduce the time and effort they put into setting up their experiments and validating the results of their simulations,
- the technology providers to develop new products [133] capable of meeting evolving and increasingly demanding industry requirements.
The users have observed that QFView provided them with a means not only for archiving and manipulating
datasets, but also for organizing their entire work-flow. The impact of visualization systems like QFView on the
investigative process itself opens the way to entirely novel ways of working for researchers in experimental and
computational fluid dynamics. Integrated, distributed, collaborative visualization environments offer the possibility of, and indeed point to the need for, reorganizing research methods and workflows; the various experiments conducted with QFView in ALICE have given glimpses of the exciting advances that one may expect from such systems in the coming years.
- To create the Knowledge Base, collecting Application Challenges and Underlying Flow Regimes from trusted sources and making them available to the Network members;
- To maintain an open Web site publishing the Network's work progress and achievements;
- To organize four Annual Workshops, a key instrument for disseminating material on advances and achievements in Quality & Trust, validation techniques and uncertainty analysis.
[Figure: LASCOT system overview - actors/stakeholders holding partial information in their own information systems (city hall, city, hospital, fire brigade, boats, river administration, factories, meteo service) provide input to the LASCOT system as XML Information Objects via SOAP/XML middleware; the LASCOT Dynamic Business Process Engine runs the business process workflows created and driven by Decision Scenarios; the common LASCOT XML Information Objects are translated into a format usable by the Visualization Engine, which presents the information as 2D/3D visual objects and alerts the crisis manager and other stakeholders to problem areas]
[Figure: the data model layers - persistence, exchange and viewers - connected through the Graphical Middleware]
[Figure: the surveillance GUI overview - smart touring, PTZ snap-shots, alarm list, 3D model, camera calibrations, zones, detailed event view, viewpoints and alarms]

ShowCamera(...)               Render a transparent cone to indicate from where a
                              (virtual) camera is viewing.
ShowSource(...)               Switch to and render a live video stream in a
                              predefined window (LiveVideo).
GotoViewPoint("myViewPoint")  Zoom in on the 3D scene and show the scene in a
                              helicopter view.
ShowCEPEventPositions(T/F)    Mark the positions of the CEP events in the event
                              database in the alarm-level colors (blue, yellow,
                              orange, red).
ShowRadarImages(...)          Render the raw radar images in the scene.
ShowZone(T/F)                 Render the zones and display a label with status
                              info (recording, armed, scenario, number of
                              events, etc.).
ShowSensorEvents(...)         Request a service to stream to a metadata stream
                              and render targets (person, car, bicycle or
                              unknown) and their tracks.
Future developments
Web3D for Collaborative Analysis and Visualization
Today's computer graphics technology enables users to view spatial data in a true-to-life spatial representation, to the point that non-technical people can comprehend complex data with ease and comfort. Visualization tools and data are now the most common media for people -- both technical and non-technical -- to exchange information. Yet, not everyone has access to visualization tools, although access to the Internet keeps increasing as the public becomes more at ease with using Internet-based services. To meet all demands for visualization and make the best use of existing and upcoming ICT, we need methodologies and tools that can be applied collaboratively to collect, inventory, process and visualize 3D data provided by possibly thousands of servers in the world. A main requirement is that such Web3D-enabled visualization will occur interactively, on demand and in real time (seeing is believing).
The challenges are enormous: research will be about creating new visualization tools and combining them with existing 3D computer graphics and Web-based technologies, and about developing collaborative, interactive 3D analysis and visualization methodologies for building real-time virtual reality (VR) worlds. We need visualization aids capable of combining and interpreting Web-based spatial and temporal data to assist us in the analysis of shared information (supporting collaboration) over the Internet. The two main user activities on the Web -- content retrieval and content viewing -- will be profoundly modified in this Web3D approach by the Client Spatial Interpreter (CSI), a new component which will enable users to perform spatial analyses of contents retrieved and viewed with the help of Graphical Middleware (GM). A first step will be to reconstruct spatial data with a conventional PC and interactive 3D visualization using object-oriented computer graphics. The next step will be to extend this application to collaborative analysis and visualization, using VR technologies: Web3D, VRML/X3D and Java3D, the Internet Map Server (IMS), Server Pages (ASP and JSP), the Spatial Database Engine (SDE) and Graphical Middleware (GM). These technologies need to be coupled with application knowledge, using the 3D Web for viewing, so that 3D spatial analysis can be carried out by an Internet user in real time with data coming from different servers.
Problem Statement: Although Spatial Analysis and Modeling can be achieved on the desktop using
Visualization Software tools, the development of a Methodology for Web3D Collaborative Analysis and
Visualization promises to improve the way we carry out analysis by providing real-time support for visualization
of spatial features and quantities.
Research Themes: Web3D, Object-Oriented Computer Graphics, Collaborative Analysis and Visualization,
Client Spatial Interpreter, Graphical Middleware and Software Engineering.
Integrated Modeling Environments (IMEs): Scientists largely use commercial, homemade or open-source Integrated Modeling Environments (Matlab, Scilab, Tent, Salome). Providing new functionalities to IMEs and creating critical masses of users and professional services through technology providers is a key to success for the European scientific computing software industry.
Scientific visualization: 2D/3D data acquisition and treatment are day-to-day business for many scientists. Yet there is still a lack of local computer resources for the pre- or post-processing of large data sets, a situation that could be solved by remote interaction. Collaboration with other scientists to compare models or results (a significant part of a scientist's job) could also be made easier.
Scientific software-packaging tools: To ensure that an application reaches a critical mass of users, it is mandatory for the developer to build and package it on a multi-platform basis. Preparing packaging and testing environments on several operating systems (Linux, Windows, UNIX, MacOS) is competence- and resource-consuming. Effective tools for packaging should be provided to developers.
system. The goal is to allow several/many users to simultaneously visualize information from various data
sources on a large display wall, in multiple high-resolution 3D windows, in a situation that permits collaborative
decision-making. The emerging augmented reality systems provide several advantages over conventional
desktops [138]. Virtual reality environments may provide features such as true stereoscopy, 3D interaction, and
individual/customized viewpoints for multiple users, enabling complete natural collaboration at an affordable
cost.
Allow interactive analyses of high-resolution, time-varying data sets (of theoretically unlimited size, although limited in practice by available computational resources).
Collaboration functionalities include annotation, measuring, symbols and metaphors, version management and
hierarchical classification, possibly through the use of ontology [142]. The implementation of new types of
visualization techniques, such as disc trees, may overcome difficulties associated with the traditional
representations of decision trees, such as visual clutter and occlusion by elements in the foreground (cone tree
example) [143].
Without associating a degree of certainty with the information, analysis of the visualization would be incomplete and could lead to inaccurate or incorrect conclusions. 3D reconfigurable disc trees could be used to provide users with the information visualization together with its uncertainty (e.g. expressed with different color attributes).
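As a small illustration, such a color attribute could be derived from a certainty value as sketched below; the linear red-to-green ramp and the names are assumptions, one possible encoding among many.

struct RGB { double r, g, b; };

// Hypothetical encoding: map certainty in [0,1] to a red (uncertain) to
// green (certain) ramp, to be attached to a disc-tree node as its color.
RGB certaintyColor(double certainty)
{
    if (certainty < 0.0) certainty = 0.0;   // clamp to the valid range
    if (certainty > 1.0) certainty = 1.0;
    return RGB{ 1.0 - certainty, certainty, 0.0 };
}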
supported by the system. If this list is compiled, it can be structured hierarchically to provide several depths of intervention. This presumes that an analysis of the tasks required to accomplish goals has been conducted. A third issue lies with the need to handle different user roles -- such as facilitator or mediator; the system could be designed to support and facilitate the tasks of various roles. One of the typical participant roles, for example, is to understand a problem so that it can be defined and alternative solutions generated and evaluated.
Input can be provided by (remote) sensors and other special equipment. The system could use sensor models and
3D scenic models to integrate video and image data from different sources [145]. Dynamic multi-texture
projections enable real-time updating and painting of scenes to reflect the latest scenic data. Dynamic controls,
including viewpoint as well as image inclusion, blending, and projection parameters, would permit interactive,
real-time visualization of events. Mobile devices (PDA) could be used, as an alternative to mouse and keyboard,
as user input devices.
A large multi-tiled display wall, driven by a system for parallel rendering running on clusters of workstations (e.g. Chromium [146]), can adequately satisfy the requirements of an output device for an advanced visualization system. Several examples are given in Figure 214 to Figure 221.
Figure 213: New generation of miniature computers and multi touch-screen inputs
References
[1]
[2]
[3]
M. Göbel, H. Müller, and B. Urban, Visualization in scientific computing. Vienna; New York: Springer-Verlag, 1995.
[5]
K. Gaither, "Visualization's role in analyzing computational fluid dynamics data," Computer Graphics
[6]
J. D. Foley, Computer graphics: principles and practice, 3. ed.: Addison-Wesley Publ., 2006.
[7]
J. D. Foley and A. Van Dam, Fundamentals of interactive computer graphics. Reading: Addison-Wesley, 1982.
[8]
P. Wenisch, A. Borrmann, E. Rank, C. v. Treeck, and O. Wenisch, "Collaborative and Interactive CFD
Simulation using High Performance Computers," 2006.
[9]
overview.html: Computational Engineering International (CEI) develops, markets and supports software
for visualizing engineering and scientific data, 2007.
[11]
E. Duque, S. Legensky, C. Stone, and R. Carter, "Post-Processing Techniques for Large-Scale Unsteady
CFD Datasets " in 45th AIAA Aerospace Sciences Meeting and Exhibit Reno, Nevada, 2007.
[12]
S. M. Legensky, "Recent advances in unsteady flow visualization," in 13th AIAA Computational Fluid
D. E. Taflin, "TECTOOLS/CFD - A graphical interface toolkit for network-based CFD " in 36th
[15]
P. P. Walatka, P. G. Buning, L. Pierce, and P. A. Elson, "PLOT3D User's Manual," NASA TM-101067, March 1990.
[16]
[17]
R. Haimes and M. Giles, "Visual3 - Interactive unsteady unstructured 3D visualization " in 29th
H.-G. Pagendarm, "HIGHEND, A Visualization System for 3d Data with Special Support for
Postprocessing of Fluid Dynamics Data," in Visualization in Scientific Computing, 1994, pp. 87-98.
[19]
[20]
[21]
[22]
[23]
C. Upson, "Scientific visualization environments for the computational sciences," in COMPCON Spring
'89. Thirty-Fourth IEEE Computer Society International Conference: Intellectual Leverage, Digest of
Papers., 1989, pp. 322-327.
[24]
D. Foulser, "IRIS Explorer: A Framework for Investigation," Computer Graphics, vol. 29(2), pp. 13-16,
1995.
[25]
"OpenDX is the open source software version of IBM's Visualization Data Explorer,"
http://www.opendx.org/, 2007.
[26]
"PV-WAVE, GUI Application Developer's Guide," USA: Visual Numerics Inc., 1996.
[27]
W. Schroeder, K. W. Martin, and B. Lorensen, The visualization toolkit, 2nd ed. Upper Saddle River,
NJ: Prentice Hall PTR, 1998.
[28]
W. Hibbard, "VisAD: Connecting people to computations and people to people " in Computer Graphics
"Fluent for Catia V5, Rapid Flow Modeling for PLM," http://www.fluentforcatia.com/ffc_brochure.pdf,
2006.
[30]
[31]
A. Goldberg and D. Robson, Smalltalk-80 : the language. Reading, Mass.: Addison-Wesley, 1989.
[32]
B. Meyer, Reusable software : the Base object-oriented component libraries. Hemel Hempstead:
Prentice Hall, 1994.
[33]
[34]
[35]
[36]
B. Stroustrup, The C++ Programming Language, Special Edition ed.: Addison Wesley, 1997.
[37]
G. D. Reis and B. Stroustrup, "Specifying C++ concepts," in Conference record of the 33rd ACM
B. Stroustrup, "Why C++ is not just an object-oriented programming language," in Addendum to the
proceedings of the 10th annual conference on Object-oriented programming systems, languages, and
applications (Addendum) Austin, Texas, United States: ACM Press, 1995.
[39]
R. Wiener, "Watch your language!," Software, IEEE, vol. 15, pp. 55-56, 1998.
[40]
D. Vucinic and C. Hirsch, "Computational Flow Visualization System at VUB (CFView 1.0)," in VKI
Lecture Series on Computer Graphics and Flow Visualization in CFD, Brussels, Belgium, 1989.
[41]
D. Vucinic, "Object Oriented Programming for Computer Graphics and Flow Visualization," in VKI
Lecture Series on Computer Graphics and Flow Visualization in CFD, von Karman Institute for Fluid
Dynamics, Brussels, Belgium, 1991.
[42]
J.-A. Désidéri, R. Glowinski, and J. Périaux, Hypersonic Flows for Reentry Problems: Survey Lectures and Test Cases for Analysis, vol. 1. Antibes, France, 22-25 January 1990: Springer-Verlag, Heidelberg, 1990.
[43]
J. Torreele, D. Keymeulen, D. Vucinic, C. S. van den Berghe, J. Graat, and Ch. Hirsch, "Parallel CFView: a SIMD/MIMD CFD Visualisation System in a Heterogeneous and Distributed Environment," in
[45]
"Europe: Building Confidence in Parallel HPC," IEEE Computational Science and Engineering, vol.
vol. 01, p. p. 75, Winter, 1994. 1994.
[46]
[48]
M. Gharib, "Perspective: the experimentalist and the problem of turbulence in the age of supercomputers," Journal of Fluids Engineering, vol. 118, no. 2, pp. 233-242, 1996.
[49]
B. K. Hazarika, D. Vucinic, F. Schmitt, and C. Hirsch, "Analysis of Toroidal Vortex Unsteadiness and
Turbulence in a Confined Double Annular Jet," in AIAA 39th Aerospace Sciences Meeting & Exhibit
Reno, Nevada, 2001.
[50]
[51]
F. G. Schmitt, D. Vucinic, and C. Hirsch, "The Confined Double Annular Jet Application Challenge,"
in 3rd QNET-CFD Newsletter, 2002.
[52]
[53]
[54]
Food Processing, January 2007 ed., D.-W. Sun, Ed.: CRC Press, 2007, 25 pages.
[55]
[56]
[57]
M.-J. Jeong, K. W. Cho, and K.-Y. Kim, "e-AIRS: Aerospace Integrated Research Systems," in The
2007 International Symposium on Collaborative Technologies and Systems (CTS07) Orlando, Florida,
USA, 2007.
[58]
C. M. Stone and C. Holtery, "The JWST integrated modeling environment," 2004, pp. 4041-4047
Vol.6.
[59]
[60]
C. Hirsch, Numerical computation of internal and external flows. Vol. 1, Fundamentals of numerical
J. H. Gallier, Curves and surfaces in geometric modeling : theory and algorithms. San Francisco, Calif.:
Morgan Kaufmann Publishers, 2000.
[62]
[63]
[64]
F. Michael and S. Vadim, "B-rep SE: simplicially enhanced boundary representation," in Proceedings
of the ninth ACM symposium on Solid modeling and applications Genoa, Italy: Eurographics
Association, 2004.
[65]
M. Gopi and D. Manocha, "A unified approach for simplifying polygonal and spline models," in
Proceedings of the conference on Visualization '98 Research Triangle Park, North Carolina, United
States: IEEE Computer Society Press, 1998.
[66]
F. Helaman, R. Alyn, and C. Jordan, "Topological design of sculptured surfaces," in Proceedings of the
19th annual conference on Computer graphics and interactive techniques: ACM Press, 1992.
[67]
[68]
O. C. Zienkiewicz, R. L. Taylor, and J. Z. Zhu, The finite element method : its basis and fundamentals,
6. ed. Oxford: Elsevier Butterworth-Heinemann, 2005.
[69]
O. C. Zienkiewicz and R. L. Taylor, The finite element method, 4. ed. London: McGraw-Hill, 1989.
[70]
C. Hirsch, Numerical computation of internal and external flows. Vol. 2, Computational methods for
[72]
A. N. S. Emad and K. K. Ali, "A new methodology for extracting manufacturing features from CAD
system," Comput. Ind. Eng., vol. 51, pp. 389-415, 2006.
[73]
K. Lutz, "Designing a data structure for polyhedral surfaces," in Proceedings of the fourteenth annual
symposium on Computational geometry Minneapolis, Minnesota, United States: ACM Press, 1998.
[74]
S. R. Ala, "Design methodology of boundary data structures," in Proceedings of the first ACM
symposium on Solid modeling foundations and CAD/CAM applications Austin, Texas, United States:
ACM Press, 1991.
[75]
[77]
R. Aris, Vectors, tensors, and the basic equations of fluid mechanics. New York: Dover Publications,
1989.
[78]
W. H. Press, Numerical recipes in C++ : the art of scientific computing, 2. ed. Cambridge: Cambridge
Univ. Press, 2002.
[79]
W. T. Vetterling, Numerical recipes example book (C++), 2. ed ed. Cambridge: Cambridge University
Press, 2002.
[80]
C. T. J. Dodson and T. Poston, Tensor geometry: the geometric viewpoint and its uses, 2. ed. Berlin; New York: Springer-Verlag, 1991.
[81]
G. H. Golub and C. F. Van Loan, Matrix computations, 3rd ed. Baltimore: Johns Hopkins University
Press, 1996.
[82]
[83]
M. d. Berg, Computational geometry : algorithms and applications, 2., rev. ed. Berlin: Springer, 2000.
[84]
J. E. Goodman and J. O'Rourke, Handbook of discrete and computational geometry. Boca Raton:
Chapman & Hall, 2004.
[85]
J. D. Foley, Computer graphics : principles and practice, 2. ed. Reading: Addison-Wesley, 1990.
[86]
P. Eliasson, J. Oppelstrup, and A. Rizzi, "STREAM 3D: Computer Graphics Program For Streamline Visualisation," Adv. Eng. Software, vol. 11, no. 4, pp. 162-168, 1989.
[87]
S. Shirayama, "Visualization of vector fields in flow analysis," in 29th Aerospace Sciences Meeting, Reno, NV: AIAA-1991-801, 8 p., 1991.
[88]
P. G. Buning and J. L. Steger, "Graphics and flow visualization in computational fluid dynamics," in 7th Computational Fluid Dynamics Conference, Cincinnati, OH, 1985, pp. 162-170.
[89]
C. S. Yih, "Stream Functions in 3-Dimensional Flows," La Houille Blanche, no. 3, 1957.
[90]
D. N. Kenwright and G. D. Mallinson, "A 3-D streamline tracking algorithm using dual stream
functions," 1992, pp. 62-68.
[91]
R. Haimes, "pV3 - A distributed system for large-scale unsteady CFD visualization " in 32nd Aerospace
Sciences Meeting and Exhibit, , Reno, NV, Jan 10-13, : AIAA-1994-321, 1994
[92]
T. Strid, A. Rizzi, and J. Oppelstrup, "Development and use of some flow visualization algorithms,"
von Karman Institute for Fluid Dynamics, Brussels, Belgium, 1989.
[93]
W. H. Press, Numerical recipes in C : the art of scientific computing, 2. ed. Cambridge: Cambridge
Univ. Press, 1992.
[94]
W. H. Press, Numerical recipes : example Book (C), 2. ed. Cambridge: Cambridge Univ. Press, 1993.
[95]
C. Dener, "Interactive Grid Generation System," in Department of Fluid Mechanics. vol. PhD Brussels:
Vrije Universiteit Brussel, 1992.
[96]
P. M. Vucinic D., Sotiaux V., Hirsch Ch., "CFView - An Advanced Interactive Visualization System
based on Object-Oriented Approach," in AIAA 30th Aerospace Sciences Meeting Reno, Nevada, 1992.
[97]
initiative on validation of CFD codes (results of the EC/BRITE-EURAM project EUROVAL, 1990-1992), Vieweg, Braunschweig, Germany, 1993.
[98]
program and systems design. Englewood Cliffs, N.J.: Prentice Hall, 1979.
[99]
[100]
B. Liskov and J. Guttag, Abstraction and Specification in Program Development: McGraw-Hill, 1986.
[101]
I. Jacobson and S. Bylund, The road to the unified software development process. Cambridge, New
York: Cambridge University Press, SIGS Books, 2000.
[102]
F. L. Friedman and E. B. Koffman, Problem solving, abstraction, and design using C++, 5th ed.
Boston: Pearson Addison-Wesley, 2007.
[103]
A. Koenig and B. E. Moo, Ruminations on C++ : a decade of programming insight and experience.
Reading, Mass.: Addison-Wesley, 1997.
[104]
S. B. Lippman, J. Lajoie, and B. E. Moo, C++ primer, 4th ed. Upper Saddle River, NJ: AddisonWesley, 2005.
[105]
M. L. Minsky, The society of mind. New York, N.Y.: Simon and Schuster, 1986.
[106]
Bobrow and Stefik, LOOPS (Xerox) Lisp Object-Oriented Programming System: "The LOOPS
Manual", Xerox Corp, 1983.
[107]
C. V. Ramamoorthy and P. C. Sheu, "Object-oriented systems," Expert, IEEE [see also IEEE Intelligent
[108]
K. Ponnambalam and T. Alguindigue, A C++ primer for engineers: an object-oriented approach. New York: McGraw-Hill Co., 1997.
[109]
D. Silver, "Object-oriented visualization," Computer Graphics and Applications, IEEE, vol. 15, pp. 5462, 1995.
[110]
S. Kang, "Investigation on the three-dimensional flow within a compressor cascade with and without clearance," in Department of Fluid Mechanics, PhD Thesis: Vrije Universiteit Brussel, 1993.
[112]
Z. W. Zhu, "Multigrid operations and analysis for complex aerodynamics," in Department of Fluid
[113]
P. Alavilli, "Numerical simulations of hypersonic flows and associated systems in chemical and thermal nonequilibrium," in Department of Fluid Mechanics, PhD Thesis: Vrije Universiteit Brussel, 1997.
[114]
[116]
[117]
B. Lessani, "Large Eddy Simulation of Turbulent Flows," in Department of Fluid Mechanics, PhD
[122]
S. Smirnov, "A finite volume formulation of compact schemes with application to time-dependent Navier-Stokes equations," in Department of Fluid Mechanics, PhD Thesis: Vrije Universiteit Brussel, 2006.
[123]
T. Broeckhoven, "Large Eddy simulations of turbulent combustion: numerical study and applications,"
in Department of Fluid Mechanics, PhD Thesis: Vrije Universiteit Brussel, 2006.
[124]
M. Mulas and C. Hirsch, "Contribution in: EUROVAL a European Initiative on Validation of CFD Codes," in EUROVAL an European Initiative on Validation of CFD Codes, Notes on Numerical Fluid
[125]
F. P. Preparata and M. I. Shamos, Computational geometry: an introduction, Corr. and expanded 2nd printing ed. New York: Springer-Verlag, 1988.
[126]
[127]
J. Torreele, D. Keymeulen, D. Vucinic, C. S. van den Berghe, J. Graat, and C. Hirsch, "Parallel CFView: a SIMD/MIMD CFD Visualisation System in a Heterogeneous and Distributed Environment," in
B. Shannon, Java 2 platform, enterprise edition: platform and component specifications. Boston: Addison-Wesley, 2000.
[130]
S. Purba, High-performance Web databases : design, development, and deployment. Boca Raton, Fla.:
Auerbach, 2001.
[131]
A. Eberhart and S. Fischer, Java tools : using XML, EJB, CORBA, Servlets and SOAP. New York:
Wiley, 2002.
[132]
K. Akselvoll and P. Moin, "Large-eddy simulation of turbulent confined coannular jets " Journal of
[135]
[137]
[139]
D. Santos, C. L. N. Cunha, and L. G. G. Landau, "Use of VRML in collaborative simulations for the
petroleum industry," in Simulation Symposium Proceedings, pages: 319-324, 2001.
[140]
G. Johnson, "Collaborative Visualization 101," ACM SIGGRAPH - Computer Graphics, pp. 8-11,
[141]
Q. Shen, S. Uselton, and A. Pang, "Comparison of Wind Tunnel Experiments and Computational Fluid Dynamics Simulations," Journal of Visualization, vol. 6, no. 1, pp. 31-39, 2003.
[142]
C.-S. Jeong and A. Pang, "Reconfigurable Disc Trees for Visualizing Large Hierarchical Information
Space," IEEE Symposium on Information Visualization, pages 19-25. IEEE Visualization, 1998.
[144]
M. Kreuseler, N. Lopez, and H. Schumann, "A Scalable Framework for Information Visualization," in
IEEE Symposium on Information Visualization, INFOVIS. IEEE Computer Society, Washington, DC,
2000.
[146]
[147]
[148]
[149]
C. Hirsch, J. Torreele, D. Keymeulen, D. Vucinic, and J. Decuyper, "Distributed Visualization in CFD "
D. Vucinic, J. Torreele, D. Keymeulen, and C. Hirsch, "Interactive Fluid Flow Visualization with
CFView in a Distributed Environment," in 6th Eurographics Workshop on Visualization in Scientific
M. Brouns, "Numerical and experimental study of flows and deposition of aerosols in the upper human airways," in Department of Fluid Mechanics, PhD Thesis: Vrije Universiteit Brussel, to be completed in 2007.
[153]
[156]
[158]
[159]
A. Markova, R. Deklerck, D. Cernea, A. Salomie, A. Munteanu, and P. Schelkens, "Addressing view-dependent decoding scenarios with MeshGrid," in 2nd Annual IEEE Benelux/DSP Valley Signal
Transactions on Circuits and Systems for Video Technology, vol. 14, no. 7, pp. 950-966, 2004.
[161]
[162]
Appendixes
Lookup table and its C++ implementation for the pentahedron cell
For each of the 64 node-sign configurations of the pentahedron cell (labels 0 to 63), the lookup table gives the 6-bit node mask (the binary representation of the label), the number of extracted facets (nF), the number of intersection nodes (nN) and the intersected edges:

Label  Node Mask      nF  nN  Edges Intersected
0      0 0 0 0 0 0    0   0   -
1      0 0 0 0 0 1    1   3   0-3-2
2      0 0 0 0 1 0    1   3   0-1-4
3      0 0 0 0 1 1    1   4   1-4-3-2
4      0 0 0 1 0 0    1   3   1-2-5
5      0 0 0 1 0 1    1   4   0-3-5-1
6      0 0 0 1 1 0    1   4   0-2-5-4
7      0 0 0 1 1 1    1   3   3-5-4
8      0 0 1 0 0 0    1   3   3-6-8
9      0 0 1 0 0 1    1   4   0-6-8-2
10     0 0 1 0 1 0    2   6   0-4-1, 3-6-8
11     0 0 1 0 1 1    1   5   1-4-6-8-2
12     0 0 1 1 0 0    1   6   1-2-3-6-8-5
13     0 0 1 1 0 1    2   7   4-7-6, 0-3-5-1
14     0 0 1 1 1 0    1   4   4-6-8-5
15     0 0 1 1 1 1    1   3   4-7-6
16     0 1 0 0 0 0    2   6   0-3-2, 4-7-6
17     0 1 0 0 0 1    1   4   0-1-7-6
18     0 1 0 0 1 0    1   4   1-0-3-6
19     0 1 0 0 1 1    1   5   0-6-7-5-2
20     0 1 0 1 0 0    2   7   3-5-7-6, 0-4-1
...
51     1 1 0 0 1 1    2   6   1-5-2, 3-8-6
52     1 1 0 1 0 0    1   5   1-2-8-6-4
53     1 1 0 1 0 1    2   6   0-1-4, 3-8-6
54     1 1 0 1 1 0    1   4   0-2-8-6
55     1 1 0 1 1 1    1   3   3-8-6
56     1 1 1 0 0 0    1   3   3-4-5 *
57     1 1 1 0 0 1    1   4   0-4-5-2
58     1 1 1 0 1 0    1   4   0-1-5-3
59     1 1 1 0 1 1    1   3   1-5-2
60     1 1 1 1 0 0    1   4   0-2-3-4
61     1 1 1 1 0 1    1   3   0-4-1
62     1 1 1 1 1 0    1   3   0-2-3
63     1 1 1 1 1 1    0   0   -
/*---------------------------------------------------------------------------*/
/*                         CLASS Cell DEFINITION                             */
/*---------------------------------------------------------------------------*/
/*                                V U B                                      */
/*                 Department of Fluid Mechanics               Oct 1993      */
/*                            Dean Vucinic                                   */
/*---------------------------------------------------------------------------*/
/*                            HEADER FILES                                   */
/*---------------------------------------------------------------------------*/
// ----------------------- Pentahedron: -------------------------------
// nodes(edge)
UCharVec Cell3N6::NodesE_[9] = {UCharVec("[0 1]"), UCharVec("[1 2]"), UCharVec("[2 0]"),
                                UCharVec("[0 3]"), UCharVec("[1 4]"), UCharVec("[2 5]"),
                                UCharVec("[3 4]"), UCharVec("[4 5]"), UCharVec("[5 3]")};
// nodes(face)
UCharVec Cell3N6::NodesF_[5] = {UCharVec("[0 2 1]"), UCharVec("[2 0 3 5]"), UCharVec("[0 1 4 3]"),
                                UCharVec("[1 2 5 4]"), UCharVec("[3 4 5]")};
const Cell3N6::TP2T4 Cell3N6::X_[64]=
{
{ 0,0,{{0,{-1,-1,-1,-1,-1,-1},{-1,-1,-1,-1,-1,-1}},
{0,{-1,-1,-1,-1,-1,-1},{-1,-1,-1,-1,-1,-1}}},
0,{{{-1,-1,-1},{-1,-1,-1}},{{-1,-1,-1},{-1,-1,-1}},
{{-1,-1,-1},{-1,-1,-1}},{{-1,-1,-1},{-1,-1,-1}}}},
{ 1,1,{{3,{ 0, 3, 2,-1,-1,-1},{ 2, 1, 0,-1,-1,-1}},
{0,{-1,-1,-1,-1,-1,-1},{-1,-1,-1,-1,-1,-1}}},
1,{{{ 0, 3, 2},{ 2, 1, 0}},{{-1,-1,-1},{-1,-1,-1}},
{{-1,-1,-1},{-1,-1,-1}},{{-1,-1,-1},{-1,-1,-1}}}},
{ 2,1,{{3,{ 0, 1, 4,-1,-1,-1},{ 0, 3, 2,-1,-1,-1}},
{0,{-1,-1,-1,-1,-1,-1},{-1,-1,-1,-1,-1,-1}}},
1,{{{ 0, 1, 4},{ 0, 3, 2}},{{-1,-1,-1},{-1,-1,-1}},
{{-1,-1,-1},{-1,-1,-1}},{{-1,-1,-1},{-1,-1,-1}}}},
{ 3,1,{{4,{ 1, 4, 3, 2,-1,-1},{ 3, 2, 1, 0,-1,-1}},
{0,{-1,-1,-1,-1,-1,-1},{-1,-1,-1,-1,-1,-1}}},
2,{{{ 1, 4, 3},{ 3, 2, 6}},{{ 1, 3, 2},{ 5, 1, 0}},
{{-1,-1,-1},{-1,-1,-1}},{{-1,-1,-1},{-1,-1,-1}}}},
{ 4,1,{{3,{ 1, 2, 5,-1,-1,-1},{ 0, 1, 3,-1,-1,-1}},
{0,{-1,-1,-1,-1,-1,-1},{-1,-1,-1,-1,-1,-1}}},
1,{{{ 1, 2, 5},{ 0, 1, 3}},{{-1,-1,-1},{-1,-1,-1}},
{{-1,-1,-1},{-1,-1,-1}},{{-1,-1,-1},{-1,-1,-1}}}},
{ 5,1,{{4,{ 0, 3, 5, 1,-1,-1},{ 2, 1, 3, 0,-1,-1}},
{0,{-1,-1,-1,-1,-1,-1},{-1,-1,-1,-1,-1,-1}}},
2,{{{ 0, 3, 5},{ 2, 1, 6}},{{ 0, 5, 1},{ 5, 3, 0}},
{{-1,-1,-1},{-1,-1,-1}},{{-1,-1,-1},{-1,-1,-1}}}},
{ 6,1,{{4,{ 0, 2, 5, 4,-1,-1},{ 0, 1, 3, 2,-1,-1}},
{0,{-1,-1,-1,-1,-1,-1},{-1,-1,-1,-1,-1,-1}}},
2,{{{ 0, 2, 5},{ 0, 1, 6}},{{ 0, 5, 4},{ 5, 3, 2}},
{{-1,-1,-1},{-1,-1,-1}},{{-1,-1,-1},{-1,-1,-1}}}},
{ 7,1,{{3,{ 3, 5, 4,-1,-1,-1},{ 1, 3, 2,-1,-1,-1}},
{0,{-1,-1,-1,-1,-1,-1},{-1,-1,-1,-1,-1,-1}}},
1,{{{ 3, 5, 4},{ 1, 3, 2}},{{-1,-1,-1},{-1,-1,-1}},
{{-1,-1,-1},{-1,-1,-1}},{{-1,-1,-1},{-1,-1,-1}}}},
{ 8,1,{{3,{ 3, 6, 8,-1,-1,-1},{ 2, 4, 1,-1,-1,-1}},
{0,{-1,-1,-1,-1,-1,-1},{-1,-1,-1,-1,-1,-1}}},
1,{{{ 3, 6, 8},{ 2, 4, 1}},{{-1,-1,-1},{-1,-1,-1}},
{{-1,-1,-1},{-1,-1,-1}},{{-1,-1,-1},{-1,-1,-1}}}},
{ 9,1,{{4,{ 0, 6, 8, 2,-1,-1},{ 2, 4, 1, 0,-1,-1}},
{0,{-1,-1,-1,-1,-1,-1},{-1,-1,-1,-1,-1,-1}}},
2,{{{ 0, 6, 8},{ 2, 4, 6}},{{ 0, 8, 2},{ 5, 1, 0}},
{{-1,-1,-1},{-1,-1,-1}},{{-1,-1,-1},{-1,-1,-1}}}},
{ 10,1,{{6,{ 0, 1, 4, 6, 8, 3},{ 0, 3, 2, 4, 1, 2}},
{0,{-1,-1,-1,-1,-1,-1},{-1,-1,-1,-1,-1,-1}}},
4,{{{ 0, 1, 4},{ 0, 3, 6}},{{ 0, 4, 6},{ 5, 2, 7}},
{{ 0, 6, 8},{ 6, 4, 8}},{{ 0, 8, 3},{ 7, 1, 2}}}},
{ 11,1,{{5,{ 1, 4, 6, 8, 2,-1},{ 3, 2, 4, 1, 0,-1}},
{0,{-1,-1,-1,-1,-1,-1},{-1,-1,-1,-1,-1,-1}}},
3,{{{ 1, 4, 6},{ 3, 2, 6}},{{ 1, 6, 8},{ 5, 4, 7}},
{{ 1, 8, 2},{ 6, 1, 0}},{{-1,-1,-1},{-1,-1,-1}}}},
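To illustrate how such a table is indexed, the sketch below computes the 6-bit configuration label from the nodal scalar values; the function name, and which node maps to which bit, are assumptions, not the original CFView code.

// Hypothetical lookup sketch: classify the 6 pentahedron nodes against the
// iso-value and use the resulting bit mask as the label (0..63) selecting an
// entry of the 64-element table (X_ in the excerpt above).
int pentahedronLabel(const double value[6], double iso)
{
    int label = 0;
    for (int n = 0; n < 6; ++n)
        if (value[n] > iso)       // node lies on the "inside" of the iso-surface
            label |= (1 << n);    // set this node's bit in the 6-bit mask
    return label;
}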
1994
Torreele J., Keymeulen D., Vucinic D., van den Berghe C.S., Graat J., Hirsch Ch. (1994). Parallel CFView : a
SIMD/MIMD CFD Visualization System in a Heterogeneous and Distributed Environment. Published in
Proceedings of the International Conference on Massively Parallel Processing, Delft, The Netherlands, June 1994.
Vucinic A., Hirsch Ch., Vucinic D., Dener C., Dejhalle R. (1994). Blade Geometry and Pressure Distribution Visualization by CFView Method. Strojarstvo vol. 36, No. 1-2, pp. 45-48.
1995
Vucinic D., Torreele J., Keymeulen D. and Hirsch Ch., Interactive Fluid Flow Visualization with CFView in a
Distributed Environment, 6th Eurographics Workshop on Visualization in Scientific Computing, Chia, Italy,
1995.
2000
Vucinic D., Favaro J., Snder B., Jenkinson I., Tanzini G., Hazarika B. K., Ribera d'Alcalà M., Vicinanza D., Greco R. and Pasanisi A., Fast and convenient access to fluid dynamics data via the World Wide Web, European Congress on Computational Methods in Applied Sciences and Engineering ECCOMAS 2000, (2000).
Vucinic D., PIV measurements and CFD Computations of the double annular confined jet experiment, Pivnet T5/ERCOFTAC SIG 32 2nd Workshop on PIV, Lisbon, July 7-8, 2000.
B. K. Hazarika and D. Vucinic, Integrated Approach to Computational and Experimental Flow Visualization of
a Double Annular Confined Jet, 9th International symposium on flow visualization, Edinburgh 2000.
2001
Hazarika B.K., Vucinic D., Schmitt F. and Hirsch Ch. (2001). Analysis of Toroidal Vortex Unsteadiness and
Turbulence in a Confined Double Annular Jet. AIAA paper No. 2001-0146, AIAA 39th Aerospace Sciences
Meeting & Exhibit, 8-11 January 2001, Reno, Nevada.
Vucinic D., Barone M.R., Snder B., Hazarika B.K. and Tanzini G. (2001). QFView, an Internet-Based Archiving and Visualization System. AIAA paper No. 2001-0917, 39th Aerospace Sciences Meeting & Exhibit, 8-11 January 2001, Reno, Nevada.
D. Vucinic and B. K. Hazarika, Integrated Approach to Computational and Experimental Flow Visualization of
a Double Annular Confined Jet, Journal of Visualization, Vol.4, No. 3, 2001.
D. Vucinic, B. K. Hazarika and C. Dinescu, Visualization and PIV Measurements of the Axisymmetric In-Cylinder Flows, ATT Congress and Exhibition, Paper No. 2001-01-3273, Barcelona, 2001.
Charles Hirsch, Dean Vucinic, editors of the QNET-CFD Newsletter:
1st QNET-CFD Newsletter, Vol. 1, No. 1, January 2001, published in 1600 copies.
2nd QNET-CFD Newsletter, Vol. 1, No. 2, July 2001, published in 1600 copies.
2002
F. G. Schmitt, D. Vucinic and Ch. Hirsch, The Confined Double Annular Jet Application Challenge, 3rd
QNET-CFD Newsletter, January 2002.
Charles Hirsch, Dean Vucinic, editors of the QNET-CFD Newsletter:
3rd QNET-CFD Newsletter, Vol. 1, No. 3, January 2002 published in 1600 copies.
4th QNET-CFD Newsletter, Vol. 1, No. 4, November 2002 published in 1600 copies.
2003
Grijspeerdt K., Hazarika B. and Vucinic D. (2003). Application of computational fluid dynamics to model the hydrodynamics of plate heat exchangers for milk processing. Journal of Food Engineering 57 (2003), pp. 237-242.
Charles Hirsch, Dean Vucinic, editors of the QNET-CFD Newsletter:
5th QNET-CFD Newsletter, Vol. 2, No. 1, April 2003, published in 1600 copies.
6th QNET-CFD Newsletter, Vol. 2, No. 2, July 2003, published in 1600 copies.
7th QNET-CFD Newsletter, Vol. 2, No. 3, December 2003, published in 1600 copies.
2004
Charles Hirsch, Dean Vucinic, editors of the QNET-CFD Newsletter:
8th QNET-CFD Newsletter, Vol. 2, No. 4, July 2004, published in 1600 copies.
2006
Dean Vucinic, Danny Deen, Emil Oanta, Zvonimir Batarilo, Chris Lacor, Distributed 3D Information
Visualization, Towards Integration of the dynamic 3D graphics and Web Services, 9 pages, 1st International
Conference on Computer Graphics Theory and Applications, Setúbal, Portugal, February 2006.
2007
Koen Grijspeerdt and Dean Vucinic, Chapter 20. Computational fluid dynamics modeling of the hydrodynamics
of plate heat exchangers for milk processing in the book "Computational Fluid Dynamics in Food Processing",
25 pages, edited by Professor Da-Wen Sun and published by CRC Press, January 2007.
Dean Vucinic, Danny Deen, Emil Oanta, Zvonimir Batarilo, Chris Lacor, "Distributed 3D Information
Visualization, Towards Integration of the dynamic 3D graphics and Web Services," in VISAPP and GRAPP
2006, CCIS 4, Springer-Verlag Berlin Heidelberg, 2007, pp. 155-168.
1988-1997
is the period when the scientific visualization (SV) system was created and further evolved towards industrial application.
1998-2007
is the period when different parts of the developed methodology and software were applied to the IWT and EC projects.
1988-1997 period
1988
The SV system software development started with a very immature C++ programming language: a preprocessor outputting C code, which needed to be further compiled in order to be executed. The available hardware was APOLLO workstations running a proprietary PHIGS graphics library (a completely vendor-dependent environment). The initial object-oriented software design was done for the 2D structured and unstructured mono-block data model.
1989
In the summer of 1989 the first implementation of the CFView software was accomplished, and it was presented in September at the VKI Computer Graphics Lecture Series [40]. To the author's knowledge, it was the first time ever that object-oriented interactive SV software for fluid flow analysis had been developed. More powerful implementations obviously existed, but they were not built with an object-oriented approach. That summer, Prof. Hirsch made the firm decision to allow the author to continue working on his object-oriented methodology for developing CFView.
1990
As OOM was starting to gain acceptance, the C++ compiler came out. It immediately showed that C++ performance was becoming comparable to that of C and FORTRAN. The CFView architecture became more elaborate; the GUI and Graphics categories of classes were modeled to enable the upgrade of CFView to the X-Windows C++ InterViews library from Stanford [147] and to the vendor-independent graphics library Figaro/PHIGS. The 3D structured mono-block data model was developed.
1991
As mentioned in the Introduction, this year was crucial in the CFView development: the main visualization
system architecture was defined and the applied methodology was presented at the VKI Computer Graphics
Lecture Series [41]. The 3D structured data model was extended to the multi-block data sets. 1991 was also the
year when we received the first Silicon Graphics workstation running a specialized graphics library. It was the
first time that 3D user interaction was performed with an acceptable system response time.
1992
In January 1992, CFView was presented at the AIAA meeting [21]; to the author's knowledge, it was the first-ever OO application created for interactive fluid flow analysis. CFView was compared with visualization systems from NASA [16], MIT [17] and DLR [18], which had been implemented in C and developed with structured programming methodologies. Another system similar in design to CFView was Visage [148], which came out later in 1992 at the IEEE meeting; Visage was implemented in C and developed at General Electric by the group of researchers that later, in the late '90s, developed the OO VTK library [27]. In 1992, the CFView data model was extended to unstructured meshes based on tetrahedra only. An upgrade to the new version of Figaro/PHIGS was done, which improved the CFView graphics performance.
1993
As the CFView application became more complex, the development of a data model, as presented in this thesis, was conceived to support transparent interactivity with structured and unstructured data sets. In this year, the marching-cubes algorithm [82, 149] was extended into a marching-cell algorithm for the treatment of unstructured meshes with heterogeneous cell types [75], enhanced with an unambiguous topology for the resulting extracted surfaces. The InterViews library was upgraded to version 3.1 in order to support the CFView Macro capability. Parallel CFView was under ongoing development.
1994
The 3D structured and unstructured multi-block data model was completed, including the particle trace algorithm
for all cell types. The Parallel CFView was released and presented [127, 150]. As there were portability
problems with Figaro/PHIGS, the HOOPS graphics library was selected as the new 3D graphics platform for
CFView.
1995
The development of the HOOPS graphics layer was ongoing and the Model-View-Controller design was applied
to replace the interactive parts of CFView [151]. This process resulted in a cleaner implementation which was
then prepared for porting onto the emerging PC Windows platforms enhanced with 3D graphics hardware
running OpenGL. The PC software porting and upgrade were done later on in NUMECA.
1996
Under the LCLMS project, the symbolic calculator for CFView was developed and investigations were carried out to find an appropriate GUI platform, which had to be portable across PC-Windows and UNIX. The result was the Tcl/Tk library, which is still today the GUI platform for CFView.
1997
The CFView multimedia learning material was prototyped in the IWT LCLMS projects; the author continued to advocate the use of his development methodology in the EU R&D-project arena, which attracted the interest of R&D partners and succeeded in bringing new EU-funded projects to the VUB.
1998-2007 period
1998
The development of QFView in the EC ALICE project extended the author's research towards applying the World Wide Web concept to designing and building distributed, collaborative scientific environments [47, 128]. The CFView data model was used for the development of QFView, which enabled the combined visualization of CFD and EFD data sets. As described in the Introduction, 3 test cases were performed using the developed approach. It is important to mention that it was in the same project that the first PIV measurement system was established at the VUB (a première among Belgian universities). This influenced the Department's research towards the integrated application of CFD and EFD in fluid flow analysis; today, the PIV measurements at VUB are performed with a pulse laser [152].
2001
QNET-CFD was a Thematic Network on Quality and Trust for the industrial applications of CFD [134]. The author's contributions were more of a coordination and management nature, as he was entrusted with presenting and reporting on project activity, as well as preparing the publishing and dissemination material for the fluid flow knowledge base, involving web-site software development and maintenance.
2004
The LASCOT project [55] was about applying the Model-View-Controller (MVC) paradigm to enhance the interactivity of our 3D software components for visualizing, monitoring and exchanging dynamic information, including space- and time-dependent data. The software development included the integration and customization of different visualization components based on 3D Computer Graphics (Java3D) and Web (X3D, SOAP) technologies, and applying the object-oriented approach based on Xj3D to improve situational awareness in decision-making [153].
2006
The SERKET project -- currently in progress -- focuses on the development of more-realistic X3D models for
information visualization of security applications.
Interactive Visualization seems to continuously gain research interest [154], as there is a need to empower users with tools for extracting and visualizing important patterns in very large data sets. Unfortunately, for many application domains it is not yet clear which features are of interest or how to define them, let alone how they can be detected. There is a continuous need to develop new ways of enabling more intuitive user interaction, a crucial element for further enhancement and exploration work. The interactive visualization tools for fluid flow analysis were discussed in Chapter 2, Adaptation of Visualization Tools; no doubt ongoing research will lead to the expansion and improvement of the capabilities of the tools that were developed in the context of the author's work.
Several current visualization development frameworks focus on providing customized graphical components for high-end desktops [155], compatible with the Ajax [156] and Eclipse [157] Open Source development environments. These components are based on the MVC paradigm; they deliver point-and-click interaction tools based on specialized SDKs, which allow the design of intuitive graphical displays for new applications. As discussed in Chapter 3, Object-Oriented Software Development will remain an integral part of tomorrow's software development process.
Animation has become an important element of modern SV systems, and new video coding technologies will
need to be taken into account in future developments. An example of this new technology is MESHGRID,
developed at the VUB/ETRO [158-160] and adopted by MPEG-4 AFX [161]. The model-based representation
provided by the MESHGRID technology could be used for modeling time-dependent surfaces (in association with
the iso-surface algorithm discussed in this thesis) including video compression. This representation combines a
regular 3D grid of points, called the reference-grid, with the wire-frame model of a surface. The extended
functionality of MESHGRID could provide a hierarchical, multi-resolution structure for animation purposes,
which allows the highly compact coding of mesh data; it could be considered for integration in a new generation
of SV software for fluid flow time-dependent analysis. Other MESHGRID elements that could be considered for
SV software are view dependency (not considered in this thesis) and Region of Interest (ROI) -- a concept which can be associated with the multi-block (multi-domain) model of fluid flow data; it offers the possibility of restricting visualization to a limited part of the whole data set. The MESHGRID data model seems appropriate for implementation in distributed environments, since it provides advanced data compression and data transfer techniques that are important for the quality of service in animation-enhanced software.
Performance concerns are at the core of the development of SV systems distributed over desktops. Such systems are capable of very large-scale parallel computation and of distributed rendering on large display walls or in immersive virtual environments. Today, modern graphics hardware (Graphics Processing Units -- GPU) performs complex arithmetic at increasingly high speed. It can be envisaged that GPUs will be used to execute non-graphics SV algorithms, which offers a potential for increasing SV performance for large and complex data sets without sacrificing interactivity.
Current trends in SV research and development help advance the state-of-the-art of computational science and
engineering by:
applying distributed software with visualization and virtual reality components that improve the
ergonomics of human-machine interfaces.
These aspects are further covered in the concluding Chapter Development trends in Interactive Visualization
systems.
The 2007 Advanced Scientific Computing Research program of the US Department of Energy [162] includes SV research and development work in relation to advances in computer hardware (such as high-speed disk storage systems and archival data storage systems) and high-performance visualization hardware. An example of a high-performance computer network is the UltraScienceNet (USNET) Testbed, a 20 gigabit-per-second, highly reconfigurable optical network that supports petabyte data transfer, remote computational steering and collaborative high-end visualization. USNET provides capacities that range from 50 megabits-per-second to 20 gigabits-per-second. Such capability is in complete contrast with the Internet, where shared connections are provided statically, with a resulting bandwidth that is neither guaranteed nor stable.
Ongoing research in visualization tools for scientific simulation is exploring hardware configurations with
thousands of processors and developing data management software capable of handling terabyte-large data sets
extracted from petabyte-large data archives. It is predicted that large-scale, distributed, real-time scientific visualization and collaboration tools will provide new ways of designing and carrying out scientific work, with distributed R&D teams in geographically distant institutes, universities and industries accessing and sharing extremely powerful computational and knowledge resources in real time, with yet-to-be-measured gains in efficiency and productivity.
Performance Analysis
In order to assess the capability of the CFView system, a performance analysis was conducted in the EC PASHA
project [127, 150]. The main goal was to see how two parallel SIMD and MIMD implementations of CFView
would perform compared to its sequential implementation on a SISD stand-alone machine. The benchmarks
consisted of test cases specifically designed to provide a meaningful, reliable basis for comparing the
performances of the implementations. The test cases were theoretically-minded, not necessarily representative of
practical applications. These test cases were retained because they offered significant advantages over real-world
examples, as follows:
1. The test cases were not biased towards any particular algorithms, computer systems or applications. This property makes them good candidates for comparisons between different systems.
2. The test cases were based on simple, algorithmic definitions. This made them readily available to anyone wanting to perform similar experiments.
3. The test cases were all different, so as to avoid focusing the testing process on the singularities of one particular data set.
4. Single test cases were devised to explore a wide spectrum of difficult, uncommon, challenging or otherwise interesting characteristics. This enabled us to considerably speed up the testing, debugging and optimization procedures.
5. The test cases were designed to offer the user the possibility to control the complexity of the problem (e.g. control over the size and complexity of iso-surfaces).
The test cases that were developed and used are described below. An overview of the testing environment is
then given, including a description of the hardware used, of the algorithms tested and of the measurements taken.
The results of the experiments are reported and summarized in tables. A discussion of the results is provided
showing the performance and characteristics of the systems and algorithms under test. Finally, some conclusions
are presented.
{
    FILE *geo_file, *scal_file;   /* output files: geometry and scalar field */
    int i, j, k;                  /* vertex indices */
    int max_i, max_j, max_k;      /* mesh dimensions */

    /* data for the low data volume case: 20*20*20 vertices */
    max_i = 20;
    max_j = 20;
    max_k = 20;
This code generates structured meshes with a random distortion, and scalar fields whose values vary randomly between -1 and 1. The mesh vertex coordinates are the (i,j,k) indices of the corresponding mesh cell. Each coordinate is incremented by a positive random number strictly smaller than 1, which leads to the definition of a structured mesh with strongly deformed cells.
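A minimal C++ sketch of such a generator is given below; the output format, the names and the random-number recipe are assumptions, not the original PASHA benchmark code.

#include <cstdlib>
#include <iostream>

int main()
{
    const int maxI = 20, maxJ = 20, maxK = 20;  // low data volume case: 20*20*20 vertices
    std::srand(42);                             // fixed seed, so runs are repeatable
    auto rnd01 = []() { return std::rand() / (RAND_MAX + 1.0); };  // in [0,1)

    for (int k = 0; k < maxK; ++k)
        for (int j = 0; j < maxJ; ++j)
            for (int i = 0; i < maxI; ++i) {
                // vertex = (i,j,k) index plus a random offset strictly smaller than 1
                double x = i + rnd01(), y = j + rnd01(), z = k + rnd01();
                double s = 2.0 * rnd01() - 1.0;  // scalar value between -1 and 1
                std::cout << x << ' ' << y << ' ' << z << ' ' << s << '\n';
            }
    return 0;
}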
The particle-tracing benchmarks were run with a helical vector-field data set defined over a regular (not distorted) structured mesh. Here also, a low- and a medium-volume test case were considered, with 10*10*10 (1000 vertices) and 30*30*30 (27000 vertices) respectively. The helical field (u,v,w) was computed for every vertex (x,y,z) using a spiral equation.
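One typical helical field of this kind combines a rotation around the mesh axis with a constant axial advance; the sketch below assumes that form, and the axis position and constants are assumptions, not the exact benchmark values.

#include <iostream>

int main()
{
    const int n = 10;                         // low-volume case: 10*10*10 vertices
    const double cx = n / 2.0, cy = n / 2.0;  // assumed axis through the mesh center
    for (int k = 0; k < n; ++k)
        for (int j = 0; j < n; ++j)
            for (int i = 0; i < n; ++i) {
                double u = -(j - cy);         // tangential component in x
                double v =  (i - cx);         // tangential component in y
                double w =  1.0;              // constant axial advance
                std::cout << u << ' ' << v << ' ' << w << '\n';
            }
    return 0;
}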
Hardware Environment
SISD computer:
HP 9000/735 (under HP-UX 9.0.1)
99 MHz clock speed
96 MB RAM
CRX-24Z graphics board
124 MIPS peak performance
40 MFLOPS peak performance
SIMD computer:
CPP DAP 510C-16
1024 bit-processors each with a 16-bit floating point coprocessor
16 MB RAM (shared)
140 MFLOPS peak performance
SCSI connection with front end
MIMD computer:
Parsytec GC-1/32 (under Parix)
32 processors (Inmos T8, RISC)
128 MB RAM (4 MB local memory per processor)
140 MFLOPS peak performance
S-bus connection with front end.
To be able to visualize the results of the computations, the parallel computers were linked to a front-end workstation running the CFView interface. This workstation was the same computer used separately for the SISD benchmarks (see above). The SISD machine was connected to a Parallel Server via an Ethernet LAN (10 Mbit/s). The Parallel Server was running the largest part of the Interface Framework, built on top of PVM (see Figure 194). It communicates with the SIMD and MIMD computers through its SCSI bus and through its internal S-bus, respectively. For the Parallel Server machine, a SUN SparcStation 10 workstation was used.
A schematic view of the environment used for the benchmarking is shown in Figure 222.
Figure 222: Overview of the heterogeneous and distributed environment used for the theoretical benchmarks
Algorithm Parameters
The performance tests were done for 3 extraction algorithms: cutting-plane, iso-surface and particle-tracing. For each algorithm, the tests were run with parameter ranges chosen to obtain data from very low computational load (e.g. a plane that intersects only a very small part of the mesh) to very high load (e.g. a plane that intersects a very large part of the mesh).
For the cutting-plane algorithm, the intersection plane is defined by the normal (1,1,1) and an intersection with the X-axis varying linearly between 0 and 2 times the x-dimension of the mesh. Given the characteristics of the meshes used, this means that the largest intersection will be found at x = 1.5*x-dim.
For the iso-surface computations, a range of iso-values was chosen to generate an (approximately) linearly varying number of triangles (out of which the iso-surface is built), keeping their number in a range which the computers could handle.
For the particle-tracing benchmarks, the number of particles launched (at some fixed time t) was varied. In a parallel configuration, multiple vector-field lines may be computed at the same time from a single data set: this is achieved, essentially, by allowing the parallel processors to work simultaneously on different field lines. For the CFView user, launching many particles at the same time (usually by distributing them evenly along a line segment) is a common procedure. Typically, the user first looks for interesting regions of the model by studying the traces of individually placed particles. Then, the user positions a suite of particles in a region of interest.
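Distributing seeds evenly along a segment can be sketched as follows; the structure and function names are illustrative, not the CFView API.

#include <vector>

struct Point3 { double x, y, z; };

// Hypothetical helper: place n particle seeds evenly along the segment [a,b],
// as a user would when launching many particles at once.
std::vector<Point3> seedAlongSegment(const Point3& a, const Point3& b, int n)
{
    std::vector<Point3> seeds;
    seeds.reserve(n);
    for (int s = 0; s < n; ++s) {
        double t = (n > 1) ? double(s) / (n - 1) : 0.5;  // parameter in [0,1]
        seeds.push_back(Point3{ a.x + t * (b.x - a.x),
                                a.y + t * (b.y - a.y),
                                a.z + t * (b.z - a.z) });
    }
    return seeds;
}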
Figure 223: The theoretical random-base meshes (a) 20x20x20 (b) 200x200x250
Being located fairly close to each other, the computed particle traces form a ribbon whose motion shows the particularities of the flow field. Clearly, one expects the (speed-up) effects of parallel computation to be comparatively stronger at high particle numbers. By varying the number of particles from a few to very many, one was able to explore the effects of parallelism with respect to sequential computation.
Figure 223 (a) illustrates the benchmarking mesh data (20x20x20 vertices) used in the PASHA project. Figure 223 (b) shows the 10-million-point mesh (~300 MB of data on disk), which was created (in the very week of publication of this thesis) to demonstrate that CFView is capable of handling data sets of higher orders of magnitude. The 3 visualization algorithms were executed on such large data sets. Figure 224 (a) shows a cutting plane with particle traces illustrating the benchmarked helical vector field. Figure 224 (b) shows the iso-surface extracted from the complex- and unconventional-geometry test case. The apparently perfect spherical shape of the iso-surface demonstrates the correctness of the extracted data. The clean helical geometry of the particle traces is a visual indicator of the correctness of the algorithm, applied here in trying conditions, namely to a vector field defined on a distorted mesh with 200x200x250 vertices. For the record, these computations were run on a Dell XPS M2010 with an Intel Core 2 Duo CPU @ 2.33 GHz and 2 GB RAM.
Figure 224: Mesh size 200x200x250 (a) Cutting plane and Particle traces (b) Isosurface
Measurements: Timing
When analyzing the performance of distributed computation, one must account for the (unavoidable) overhead due to network background tasks and the multiple processes that govern the calculations (e.g. their synchronization). This is why time in the tests was measured using wall-clock time, as pure CPU time was not available. The time figures used include overhead due to the activity of various UNIX processes and to the network load caused by system activities (swapping, saving, etc.).
Time was measured using the C function ftime with a resolution of about 1 millisecond. Measurements were
made on the individual workstations involved. The network times were either inferred from the measured times
(when possible) or computed through explicit handshaking between the communicating processes.
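A wall-clock timestamp of this kind can be sketched with ftime as follows; wrapping it in a helper function is an assumption, not the original measurement code.

#include <sys/timeb.h>   // ftime() and struct timeb

// Return wall-clock time in seconds, with roughly millisecond resolution.
static double wallClockSeconds()
{
    struct timeb tb;
    ftime(&tb);
    return tb.time + tb.millitm / 1000.0;  // whole seconds + milliseconds
}

// Usage sketch: bracket a measured stage with two timestamps.
// double t0 = wallClockSeconds();
// ... run one stage (e.g. PVMRcvGeo) ...
// double elapsed = wallClockSeconds() - t0;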
In order to obtain realistic performance figures, the measurements were performed in a typical usage situation, i.e. with a normal network load, normal computer load, etc. In this approach, the overhead included in the timing can be considered normal -- and indeed unavoidable for the user.
Because of the heterogeneous and distributed nature of the test implementation (SIMD/MIMD), timing a test run requires combining several measurements obtained on sub-parts of the system. Every single run of a particular algorithm was divided into a number of consecutive stages, and the timing was measured for each stage, namely:
PVMRcvGeo: the time it takes for the geometry data to go over the LAN from CFView to the parallel server.
GCSndGeo or DAPSndGeo: the time it takes for the geometry data to go from the parallel server to the parallel machines (Parsytec GC and DAP respectively).
PVMRcvScal: the time it takes for the scalar quantity data to go over the LAN from CFView to the parallel server.
GCSndScal or DAPSndScal: the time it takes for the scalar quantity data to travel from the parallel server to the parallel machines (Parsytec GC and DAP respectively).
PVMRcvParam: the time it takes for the parameter values (equation of the plane, the particular iso-value, the initial positions of the particles, etc.) to travel over the LAN from CFView to the parallel server.
GCExe or DAPExe: the time it takes for the respective parallel machines to execute the given algorithm using the data stored in their own memory.
GCRcvResults or DAPRcvResults: the time it takes for the respective parallel machines to return the results of their computation (collections of triangles or streamlines) to the parallel server.
PVMSndResults: the time it takes for the computation results to travel over the LAN, from the parallel server to CFView (running on a workstation).
Render: the time it takes for CFView to render the results on the screen of the workstation it is running on.
Measurements: Results
We present below the characteristic results of the benchmarks. A result table is given for each algorithm; it shows averaged algorithm-execution times on the different systems for several test cases. Averages are computed over 20 runs for each test case. All time values in the tables below are in seconds.
For the CFView SIMD and MIMD implementations, the following average time values are given:
(Send Data): time needed to send the data (mesh and scalar quantity) from the Workstation to the Parallel Server;
(Load Data): time needed for loading the data from the Parallel Server onto the parallel machines;
(Execute): execution time on the parallel implementations, which includes: (i) sending the algorithmic parameters from the Workstation to the Parallel Machines; (ii) executing the algorithm on the parallel machines; (iii) sending the result data from the parallel machines to the Parallel Server; and (iv) sending the result data from the Parallel Server to the Workstation.
For the CFView SISD implementation, only the averaged execution time is meaningful and shown. The averaged number of triangles generated by the algorithm and the number of particles traced are given where relevant. All times shown are averaged over 20 different runs, with varying parameters (see Algorithm Parameters above).
The total execution time for the parallel implementations is the sum of the three timings: Send, Load and Execute. However, by making use of a caching mechanism in the Interface Framework, the sending and the loading of the data need to be done only once (at the beginning). Thereafter, the data to be transmitted to the parallel machines include only a (new) cutting-plane equation, an iso-value, or a set of initial particle positions, after which the execution can take off. Hence, in a realistic situation, only the times in the (Execute) column are relevant for comparison.
Mesh size  #Triangles  System  Send Data  Load Data  Execute
low        915         SISD    -          -          1.04
low        915         SIMD    0.26       0.48       1.16
low        915         MIMD    0.26       1.96       1.63
medium     10150       SISD    -          -          6.00
medium     10150       SIMD    3.66       5.82       3.63
medium     10150       MIMD    3.66       28.86      4.44
Table 36: Average times for Cutting Plane (wall-clock time in seconds)
Mesh size  #Triangles  System  Send Data  Load Data  Execute
low        11145       SISD    -          -          9.96
low        11145       SIMD    0.26       0.48       6.05
low        11145       MIMD    0.26       1.96       5.13
medium     10150       SISD    -          -          27.47
medium     10150       SIMD    3.66       5.82       6.62
medium     10150       MIMD    3.66       28.86      4.96
Table 37: Average times for Isosurface (wall-clock time in seconds)
Mesh size  #Particles  System  Send Data  Load Data  Execute
low        25          SISD    -          -          67.04
low        25          SIMD    0.34       0.35       17.14
medium     25          SISD    -          -          85.27
medium     25          SIMD    1.79       1.90       27.18
Table 38: Average times for Particle Trace (wall-clock time in seconds)
#Particles: 2  5  10  15  20  25  30  35  40  45  50
Table 39: Evolution of the execution times in seconds with the number of particles used
#Processors     2     4     8     12    16
MIMD Calculate  1.90  1.49  1.33  1.26  1.21
MIMD Retrieve   1.47  1.47  1.49  1.51  1.54
MIMD Send       0.39  0.40  0.39  0.39  0.41
Table 40: Execution times in seconds for Isosurface on MIMD for different machine configurations (wall-clock time) with varying number of processors
Since the cutting-plane and iso-surface algorithms are conceptually very similar [151], comparing their performance results makes sense from a user's point of view. The timing values as seen by the user are consolidated in Figure 225. The chart shows the averaged execution times for the cutting-plane and iso-surface algorithms on the different machines. For the SIMD and MIMD implementations, the execution times are subdivided into three parts: (i) the actual algorithm-execution time on the given machine (Calculate); (ii) the time it takes to transfer the computation result data from the parallel machine to the Parallel Server (Retrieve Results); and (iii) the time needed to send the result data from the Parallel Server to Parallel CFView running on the Workstation (Send Results).
Figure 225: Average execution times in seconds for the algorithms on the different machines (with caching
mechanism enabled for the parallel implementations).
Figure 226: Average execution times in seconds for the SIMD and MIMD implementations of the isosurface
algorithm, with respect to the number of triangles generated (caching mechanism on)
As can be seen in the chart, the execution times for the parallel implementations are significantly shorter than for the sequential one. This is especially visible for the iso-surface algorithm, because it requires a sweep through the complete mesh. Note that the machine running the sequential CFView was a state-of-the-art HP 9000/735 workstation, at the time amongst the fastest in its category. This demonstrated that the parallel CFView SIMD/MIMD implementations offered intrinsically more power than the sequential CFView.
A second remarkable observation relates to the distribution of computational load on the parallel machines. Although both implementations are approximately equally fast, it was found that the MIMD machine spent most of its CPU time transferring result data to the parallel server, whereas the SIMD machine was busier with the actual computation. The reason for this is, of course, the overhead caused by having to route the data between the different processors of the MIMD machine. Figure 226 shows this difference more clearly.
As seen from the chart above, the total execution time (as seen by the user) varies with the number of triangles generated, that is, with the complexity of the computation. On the SIMD machine, the largest contributing factor is the calculation itself, i.e. the time for computing the iso-surface. On the MIMD implementation, however, the calculation times (MIMD Calculate) are fairly independent of the number of triangles generated. The time required to move the results from the MIMD machine to the Parallel Server (MIMD Retrieve Results) is, by contrast, largely dependent on the number of triangles. As mentioned earlier, this behavior results from the need to route the results computed by the different processors to the master processor (which governs the communication with the Parallel Server), a time-consuming process.
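The routing pattern responsible for this behavior can be sketched as follows. The original implementation used its own message-passing layer; the MPI-style C++ below is only an illustration of the principle (extractLocalTriangles and sendToParallelServer are hypothetical), showing why the retrieve time grows with the number of triangles: every worker's triangle list is funnelled through the master processor.

    #include <mpi.h>
    #include <vector>

    // Each processor extracts its share of iso-surface triangles locally,
    // then all triangle data is gathered on the master (rank 0), which alone
    // communicates with the Parallel Server.
    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // Hypothetical local result: 9 floats per triangle (3 vertices x 3 coords).
        std::vector<float> localTris = /* extractLocalTriangles() */ {};

        int localCount = static_cast<int>(localTris.size());
        std::vector<int> counts(size), displs(size);
        MPI_Gather(&localCount, 1, MPI_INT, counts.data(), 1, MPI_INT, 0, MPI_COMM_WORLD);

        std::vector<float> allTris;
        if (rank == 0) {
            int total = 0;
            for (int i = 0; i < size; ++i) { displs[i] = total; total += counts[i]; }
            allTris.resize(total);
        }
        // Communication volume, and hence time, is proportional to the
        // total number of triangles generated (MIMD Retrieve Results).
        MPI_Gatherv(localTris.data(), localCount, MPI_FLOAT,
                    allTris.data(), counts.data(), displs.data(), MPI_FLOAT,
                    0, MPI_COMM_WORLD);

        // if (rank == 0) sendToParallelServer(allTris);  // hypothetical
        MPI_Finalize();
    }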
An interesting observation can be made concerning the difference in performance between the two parallel machines. As long as the number of triangles generated is small, the SIMD machine outperforms the MIMD machine. But when the task requires more and more computation (more intersections need to be calculated and more triangles need to be constructed), the situation reverses. This phenomenon can be explained as follows: implementing the iso-surface algorithm in a fully parallel mode requires the ability to index the data set in parallel, a feature which is not available on the SIMD machine. Therefore, part of the computation is implemented as a serial loop over the edges of the intersected mesh cells (see the sketch below). As the number of intersections (and hence the number of triangles) increases, this serial part of the algorithm tends to dominate the computation and slows down the overall execution. One can also see in the charts that the local networking tasks (SIMD/MIMD Send Results), which ensure the communication in the heterogeneous distributed environment, account for only a small fraction of the total time. This suggests that a heterogeneous, distributed approach is feasible, since it does not impose unacceptable overhead on the overall performance of the system.
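To make the serial bottleneck concrete, the following sketch shows the structure just described (this is not the original SIMD code, whose data-parallel primitives were machine-specific; all names are illustrative): classifying cells against the iso-value is an element-wise operation that maps naturally onto the SIMD processors, while triangle construction over the intersected edges falls back to a serial loop whose cost grows with the number of intersections.

    #include <vector>
    #include <array>

    struct Cell { std::array<int, 8> vertexIds; };  // hexahedral cell (illustrative)

    // Phase 1 (data-parallel on SIMD hardware): classify every cell against
    // the iso-value; each cell can be processed independently.
    std::vector<char> classifyCells(const std::vector<Cell>& cells,
                                    const std::vector<double>& field,
                                    double isoValue) {
        std::vector<char> intersected(cells.size(), 0);
        for (std::size_t c = 0; c < cells.size(); ++c) {   // conceptually parallel
            bool above = false, below = false;
            for (int v : cells[c].vertexIds)
                (field[v] >= isoValue ? above : below) = true;
            intersected[c] = (above && below);
        }
        return intersected;
    }

    // Phase 2 (serial on the SIMD machine): without parallel indexing of the
    // irregularly scattered intersected cells, edge interpolation and triangle
    // construction run as an ordinary serial loop; its cost grows with the
    // number of intersections, which is why the SIMD implementation slows
    // down once many triangles must be generated.
    void buildTriangles(const std::vector<Cell>& cells,
                        const std::vector<char>& intersected) {
        for (std::size_t c = 0; c < cells.size(); ++c) {
            if (!intersected[c]) continue;
            // interpolate intersection points along the cell's edges and
            // emit triangles (details omitted)
        }
    }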
A comparison of the execution times for particle-tracing on the SIMD and SISD machines can be found in Figure 227. As one can infer from the chart, the execution times on the SIMD machine rise linearly with the number of particles traced. The performance of the SISD machine quickly degrades as the number of particles increases; mesh size does not seem to influence this behavior. The chart shows that the parallel implementation succeeds in keeping execution times at an acceptable level, even for very large numbers of particles.
The SIMD implementation seems able to fully exploit the computing power that results from distributing the particle-tracing calculations onto several independent parallel processors (sketched below).
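A minimal sketch makes clear why particle tracing parallelizes so well (the Euler integration and the names below are illustrative assumptions, not the original tracer): each trajectory is computed independently, so the particles can simply be dealt out across the processors with no communication during integration.

    #include <vector>
    #include <cstddef>

    struct Vec3 { double x, y, z; };

    // Hypothetical velocity-field lookup; a real tracer would interpolate
    // in the CFD mesh. Placeholder: uniform flow.
    Vec3 sampleVelocity(const Vec3&) { return {1.0, 0.0, 0.0}; }

    // Integrate one particle independently with simple Euler steps.
    std::vector<Vec3> tracePath(Vec3 p, double dt, int nSteps) {
        std::vector<Vec3> path{p};
        for (int s = 0; s < nSteps; ++s) {
            Vec3 v = sampleVelocity(p);
            p = {p.x + dt * v.x, p.y + dt * v.y, p.z + dt * v.z};
            path.push_back(p);
        }
        return path;
    }

    // Deal particles out round-robin: processor `rank` of `nProcs` traces
    // every nProcs-th particle. No inter-processor communication is needed
    // during integration.
    void traceMyParticles(const std::vector<Vec3>& seeds, int rank, int nProcs,
                          double dt, int nSteps) {
        for (std::size_t i = rank; i < seeds.size(); i += nProcs) {
            std::vector<Vec3> path = tracePath(seeds[i], dt, nSteps);
            // ... store or send `path` back for display
            (void)path;
        }
    }

Since each processor handles roughly #particles/nProcs trajectories, the wall-clock time grows only gently with the particle count, consistent with the near-linear SIMD curves in Figure 227.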
The benchmarking experiment on the SIMD and MIMD CFView systems has shown comparable performance for the two parallel machines. The execution times were found to be comparatively more sensitive to the amount of computation required on the SIMD machine, and relatively more dependent on the amount of data to be routed between the different processors on the MIMD implementation. The overhead induced by the Interface Framework (developed for communication between the different machines) was seen to contribute only a small fraction of the overall execution times.
It was also observed that the SIMD implementation really takes advantage of its multi-processor structure for particle-tracing. Indeed, the SIMD machine was demonstrated to be the only usable machine for tracing moderately large numbers of particles.
The overall performance of the SIMD and MIMD Parallel CFView implementations was shown to be significantly better than that of the sequential version, especially for computationally intensive operations such as iso-surface construction on problems with large data volumes, or particle-tracing with large numbers of particles.
Overall, the benchmarking of CFView has demonstrated that heterogeneous and distributed SV system configurations are indeed a viable proposition. This opens the interesting prospect of transparently using SV systems capable of harnessing the computing power of different computing platforms and taking advantage of geographically distant parallel machines.
[Figure 227 (line chart): time in seconds (vertical axis, 0 to 150) versus number of particles (10 to 50); curves: Low SISD, Low SIMD, Medium SISD, Medium SIMD.]
Figure 227: Execution times in seconds for particle tracing with respect to the number of particles