IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS,
VOL. 10,
NO. 1,
JANUARY/FEBRUARY 2004
Selective Refinement Queries for
Volume Visualization of
Unstructured Tetrahedral Meshes
Paolo Cignoni, Leila De Floriani, Member, IEEE Computer Society,
Paola Magillo, Enrico Puppo, Member, IEEE Computer Society, and
Roberto Scopigno, Member, IEEE Computer Society
Abstract—In this paper, we address the problem of the efficient visualization of large irregular volume data sets by exploiting a
multiresolution model based on tetrahedral meshes. Multiresolution models, also called Level-Of-Detail (LOD) models, allow encoding
the whole data set at a virtually continuous range of different resolutions. We have identified a set of queries for extracting meshes at
variable resolution from a multiresolution model, based on field values, domain location, or opacity of the transfer function. Such
queries allow trading off between resolution and speed in visualization. We define a new compact data structure for encoding a
multiresolution tetrahedral mesh built through edge collapses to support selective refinement efficiently and show that such a structure
has a storage cost from 3 to 5.5 times lower than standard data structures used for tetrahedral meshes. The data structures and
variable resolution queries have been implemented together with state-of-the-art visualization techniques in a system for the interactive
visualization of three-dimensional scalar fields defined on tetrahedral meshes. Experimental results show that selective refinement
queries can support interactive visualization of large data sets.
Index Terms—Unstructured tetrahedral meshes, volume data visualization, multiresolution geometric modeling, selective refinement.
1 INTRODUCTION
A volume data set consists of a set of points in three-dimensional space, where one or more field values are
associated with each point, and of a mesh (either structured
or unstructured) spanning the domain, formed by cells
having their vertices at the data points. In this paper, we
focus on the case of irregularly distributed volumetric data
defined in a three-dimensional domain with a nonconvex
boundary, which are represented by unstructured tetrahedral meshes. Unstructured tetrahedral meshes arise in a
variety of applications, which include computational fluid
dynamics, thermodynamics, structural mechanics, etc.
Recently, research has been focused on visualization
techniques that work directly on these kinds of meshes,
thus making it possible to avoid resampling the field onto a
regular grid [2], [15], [25]. Resampling is often not feasible
because of the large variation of cell size and of the
nonconvex shape of the domain. Resampling is sometimes
not even desirable since the user may want to zoom in on
small portions of the mesh and analyze details and the
. P. Cignoni and R. Scopigno are with the Istituto di Scienza e Tecnologie
dell’Informazione (ISTI), Consiglio Nazionale delle Ricerche, Via G.
Moruzzi, 1, 56124 Pisa, Italy.
E-mail: cignoni@iei.pi.cnr.it, roberto.scopigno@cnuce.cnr.it.
. L. De Floriani, P. Magillo, and E. Puppo are with the Dipartimento di
Informatica e Scienze dell’Informazione, Università di Genova, Via
Dodecaneso, 35, 16146 Genova, Italy.
E-mail: {deflo, magillo, puppo}@disi.unige.it.
Manuscript received 9 Mar. 2001; revised 2 Nov. 2001; accepted 13 June
2002.
For information on obtaining reprints of this article, please send e-mail to:
tvcg@computer.org, and reference IEEECS Log Number 113775.
1077-2626/04/$17.00 © 2004 IEEE. Published by the IEEE Computer Society.
shape of the tetrahedral elements directly in the original
decomposition.
The approach proposed in this paper is specific for
multiresolution unstructured tetrahedral meshes and is not
intended to be an alternative to multiresolution techniques
for regular grids. The problem we tackle is the efficient online simplification of unstructured tetrahedral meshes
when such meshes can be loaded into main memory, but
are too large to be interactively visualized.
Data simplification is a form of lossy compression that
became popular in the last few years for solving similar
problems in surface and scene rendering and has been more
recently extended to volume data. The general idea is to
reduce the resolution of the input mesh while preserving its
ability to approximate the original mesh at high resolution
in the best possible way. Unfortunately, producing very
accurate simplified representations is hard. Even slow and
accurate simplification algorithms often fail to obtain high
simplification rates with a low error [3]. On the other hand,
accuracy is a critical requirement, and lossy compression is often not accepted in the scientific visualization
community.
Our solution consists of using a multiresolution technique.
The original data can be easily and efficiently recovered
from a multiresolution model, when and where needed. In
this way, the user can exploit the advantages of simplification—less data to be rendered and processed—in a safe
way. Original (lossless) data can be used on a selected focus
region, while simplification (lossy compression) is applied
on less relevant regions of the domain and/or in the
proximity of less relevant field values. This process is
usually called selective refinement.
In two-dimensional applications, it is now customary to
decouple the simplification phase, which is usually slow if
performed accurately, from the selective refinement phase,
which is performed online. A multiresolution model is built
in a preprocessing phase, which encodes the modification
performed by the simplification algorithm as a partial
order. A virtually continuous set of meshes at different
Levels-Of-Detail (LODs) can be interactively and dynamically obtained from such a model. The extension of these
concepts, data structures, and algorithms to volume data is
the basis of the work described here.
Here, we exploit the ideas underlying a general multiresolution model we defined in [9], called the Multi-Tessellation (MT), to define a specific three-dimensional multiresolution model based on unstructured tetrahedral meshes and on an edge-collapse simplification strategy.
The major novel contributions of this paper can be
summarized as follows:
1. A set of queries is defined for extracting meshes at
variable resolution from a multiresolution model,
based on field values (to enhance isosurface visualization), domain location (to enhance visualization
inside a focus volume), or opacity of the transfer
function (to enhance direct volume rendering). Such
queries allow trading off between resolution and
speed in visualization, according to user needs and
hardware constraints. We show that these queries
can be expressed as instances of a general selective
refinement query and answered through an efficient
incremental algorithm.
2. We define and implement a compact data structure for a three-dimensional Multi-Tessellation that we show requires three times less storage space than a simple indexed data structure encoding the original mesh at full resolution, and 5.5 times less space than a data structure for the original mesh encoding both connectivity and adjacency information (as required, e.g., by direct volume rendering algorithms based on cell projection).
3. We describe a modular system for rendering
unstructured volumetric meshes, that we call Tetrahedra Analyzer 2 (TAn2), obtained by integrating a
multiresolution engine, based on the above data
structure and on an incremental selective refinement
strategy, with state-of-the-art components for
volume rendering. To our knowledge, this is the
first system for interactive volume visualization
based on unstructured tetrahedral meshes that
performs online selective refinement. We report
experimental results on a few data sets which show
how selective refinement queries can support interactive visualization.
The remainder of this paper is organized as follows: In
Section 2, we review some related work. In Section 3, we
introduce some notions about volume data sets, threedimensional meshes, and mesh approximation. In Section 4,
we present the edge-based Multi-Tessellation (MT) and
briefly describe an algorithm to build it and an incremental
algorithm for online selective refinement. In Section 5, we
present a new compact data structure to encode the MT and
to support selective refinement. In Section 6, we introduce a
set of variable resolution queries relevant for volume
visualization and show how the latter can be expressed
in terms of selective refinement. In Section 7, we describe
the architecture of the TAn2 system. In Section 8, we present
results and images obtained with TAn2, including execution
times. Finally, in Section 9, we draw some concluding
remarks and discuss future work.
2 RELATED WORK
In the past few years, several research efforts have been
devoted to the development of multiresolution geometric
models with contributions from the fields of computer
graphics, computational geometry, and finite element
analysis. Topics somewhat related to the work presented
here are tetrahedral mesh simplification, multiresolution
models for triangle and tetrahedral meshes, and multiresolution rendering of large tetrahedral meshes.
2.1 Mesh Simplification
The problem of mesh simplification has been extensively
studied for triangle meshes [17]. Several methods are based
on incremental techniques, which perform a sequence of
atomic modifications on a given mesh by either removing
details from a mesh at high resolution or adding details to a
coarse mesh. This approach is especially interesting for
constructing multiresolution models since the intermediate
steps of the process produce meshes at decreasing (or
increasing) LODs.
Some incremental techniques have been proposed for
simplification of tetrahedral meshes [3], [5], [19], [35]. All
such techniques are based on edge collapse and differ in the
way they control the error for producing a simplified mesh.
In this paper, we use a variant of the method described in
[3] to build the multiresolution model proposed here.
2.2 Multiresolution Tetrahedral Models
There is a large body of literature on multiresolution
models for surfaces based either on unstructured triangle
meshes or on nested regular grids (see [16] for a survey).
Here, we focus on the three-dimensional case, i.e., on
multiresolution models based on tetrahedral meshes. The
simplest multiresolution models encode a coarse mesh plus
a linear sequence of updates that can be applied in order to
progressively refine it [19], [31]. Such models support the
extraction of a mesh only at those intermediate resolutions
which can be obtained by truncating the sequence of
refinements at some point.
The ability to support selective refinement, i.e., to extract
meshes at a variable resolution, derives from organizing
updates according to a partial order based on a dependency
relation. This means that a virtually continuous set of
representations in which the resolution may vary in different
parts of the domain can be extracted from such models.
Dependencies between updates may be represented directly
through a directed acyclic graph (DAG) [9], [21] or with
similar structures [39] or through forests of binary trees of
vertices [26]. The former solution could lead to data structures
with high storage costs when dealing with large meshes,
while vertex hierarchies may result in extraction of inconsistent meshes [16]. For the special case of edge collapse to an
interior point in triangle meshes, El-Sana and Varshney [13]
have proposed a vertex enumeration scheme which can
represent the actual dependency relations by storing just a
binary forest of vertices. In this work, we use a similar
structure for encoding the partial order in the edge-based
Multi-Tessellation.
Much research has been devoted to multiresolution
models for regular data sets. Such models are based on
hierarchical space subdivision, in particular, nested tetrahedral meshes generated by recursive bisection of tetrahedra [18], [24], [28], [41] and red/green tetrahedra
refinement [20], which have been introduced for domain
decomposition in finite element analysis. A nested decomposition of the underlying space is also the basis of methods
based on octrees [32], [37], [36]. An important issue in using
nested meshes is that, if the domain is refined selectively,
the field associated with the extracted mesh (and, thus, the
resulting isosurfaces) may present discontinuities in areas
of transition. Different authors have proposed different solutions to this problem, including error saturation [18], [41], projection
[28], remeshing [20], insertion of points [32], and efficient
neighbor-finding [24]. Hierarchical tetrahedral meshes
based on tetrahedron bisection also allow an easy and
space/time-efficient progressive encoding of isosurfaces at
variable resolution [29]. Various hierarchical techniques
have been proposed specifically for efficiently rendering
regular volumetric data sets. The regular structure of the
data allows the use of wavelets that provide an effective
approach to multiresolution representation and analysis
[22], [23], [27]. Other approaches exploit the graphics
hardware and efficiently represent a regular data set by
using hierarchical 3D textures [12].
However, methods for regular data sets are difficult to
generalize to irregular data sets. Resampling an irregular
data set on a regular grid is not acceptable for a number of
reasons. First of all, the size of cells in irregular data sets can
be extremely variable. It is common that the ratio between
the sizes of the smallest and the largest cell in such a data
set is about 1:10,000, and large cells may lie in any position
across the domain. This would require an adaptive
resampling strategy that often makes it hard to use
standard techniques for regular data sets. Second, most
irregular data sets have nonconvex domains and the regular
grid should properly contain the original data domain. In
this case, the field will necessarily have a sharp discontinuity across the boundary of such a domain since it is
unknown outside that boundary. Methods based on
hierarchical space subdivision, including those based on
wavelets, cannot be adopted because of the need for
discriminating between the interior and the exterior of a
domain: Only cells at the highest levels of resolution would
accurately model the boundary, while cells at coarser levels
would cross it. This problem can be partially hidden at the
rendering stage by adopting a solution based on alpha
masking to eliminate nonexistent domain regions when using
a 3D texture-based selective-refinement renderer [34].
However, this approach does not ensure that interpolation
and RGB mapping are correct.
Multiresolution models based on irregular tetrahedral
meshes are highly adaptive and can capture the geometric
shape of the field domain accurately even at low resolution.
2.3 Multiresolution Visualization
Only a few papers discuss the design and implementation
of multiresolution visualization systems for irregular data
sets. To our knowledge, previous systems often present an
incomplete set of data management functions (data simplification, uniform and selective refinement) and of rendering
approaches (usually, either isosurfaces or Direct Volume
Rendering (DVR)).
A very efficient management of isosurfaces at variable
resolution is proposed in [29], but this approach has been
developed only for tetrahedral data sets defined on a
regular mesh built through recursive tetrahedra bisection. A
system for time-critical DVR on irregular meshes is
proposed in [14], which supports only DVR of irregular
grids. Such a system adopts a ray tracing approach and
renders data under a time budget either by undersampling
image space or by using decimated representations. A
system for rendering very large tetrahedral meshes at high
resolution without any underlying multiresolution data
structure is described in [40]. The rendering of the mesh is
performed directly from a compressed representation. A
lossless compressed tetrahedral mesh is rendered incrementally while being decompressed, thus cutting down
both runtime memory footprint and disk I/O bandwidth
requirements.
The TAn (Tetrahedral Analyzer) system [4], [5] is the first
example of a system for volume visualization based on
unstructured tetrahedral meshes with multiresolution capabilities. Its multiresolution functions, however, are limited
to extracting uniform resolution representations and the
data structure implementing the multiresolution model has a significant overhead with respect to encoding the original mesh at full resolution.
In the case of very large data sets, out-of-core solutions
can be very effective. Even when a multiresolution
approach is adopted, out-of-core approaches can be useful
to process the data at the maximum resolution, e.g., to fit
isosurfaces [1] or to run a DVR renderer [15]. Moreover, the
multiresolution representations of a huge data set can
exceed the available in-core memory and, therefore, the
extraction of LOD models should be implemented out-of-core. Out-of-core issues are beyond the scope of this paper,
but the approach we present here is open to extensions in
this direction.
3 APPROXIMATED TETRAHEDRAL DATA SETS
A volume data set is a geometric mesh decomposing a
three-dimensional domain such that one or more field
values are given at each vertex of the mesh. We assume that
the domain is a manifold. In particular, a tetrahedral data set
is defined as a collection of tetrahedra Σ = {σ_1, ..., σ_m} having their vertices at a set of points V = {v_1, ..., v_n} in 3D space, together with a collection of one or more scalar fields F = (f_1, ..., f_k), each defined at all points of V. The size of
Fig. 1. Range error (a) and space error (b) in an approximated representation of a one-dimensional scalar data set. The original data set is drawn
with a thin line, the approximated data set with a bold line, and the error at some relevant points is represented by a dotted red line.
a mesh Σ is the number m of its tetrahedra. For data sets used in practical applications, we can assume that m ≈ 6n.
Each field f_j of F is extended by linear interpolation on the cells of Σ, and thus on the whole domain. The gradient of each field f_j is, by definition, constant inside each cell of Σ. We extend the gradient to the vertices of Σ by computing, at each vertex v, the weighted average of the gradients of f_j at all tetrahedra incident at v, where the weights are the amplitudes of the solid angles formed by such tetrahedra at v. The direction of the gradient at each vertex v, called a vertex normal, is maintained together with the corresponding field, for shading purposes in isosurface visualization.
Without loss of generality, we will assume that all fields of F are rescaled to span the interval [0, 1]. The original field values can easily be recovered by maintaining just two numbers (a scale factor and an offset) for each field. We will occasionally treat F as a single vector field in a metric space with the max norm, i.e., the distance between two vectors is the maximum of the differences between their components. In the following, we will refer to Σ as the reference mesh and to F as the reference field; Σ and F together define the reference data set.
An approximated data set is defined by a tetrahedral mesh Σ′ with m′ (< m) tetrahedra and n′ (< n) vertices at a set of points V′, together with a collection of scalar fields F′ = (f′_1, ..., f′_k) defined piecewise on Σ′, similarly to F. The idea is that the approximated data set should not differ much from the reference data set.
A common approach to measuring the error associated with such an approximation is range-oriented: for every point P in the data domain, the difference between the field values computed at P using the reference data set and the approximated data set is computed. This gives an estimate of how much the approximated data set differs from the reference data set (error in the input), but it is not always meaningful for predicting how this will affect visualization (error in the output). For instance, an approximated isosurface can be arbitrarily far in space from the true isosurface, even with a small error in range, in areas where the field has a low gradient.
For the above reason, we also adopt a space-oriented error, which is orthogonal to the range-oriented error and more closely related to error in the output. For every point P in the
reference mesh, we measure its distance to the closest
point P 0 in the approximated mesh that has the same field
value as P , where the field values at P and P 0 are computed
by using the reference and the approximated field,
respectively.
Fig. 1 illustrates the difference between range-oriented
and space-oriented error metrics in a simple one-dimensional case. Such errors will actually be computed at each
cell of the approximated data set, according to the
definitions given below.
We first introduce some notation. Each tetrahedron σ′ of Σ′ approximates a given portion ω of the domain of Σ, and the field F′ computed on σ′ approximates the corresponding field F computed on ω. We assume that F and F′ span the same range of values over ω and σ′, respectively; we will guarantee later that this condition is always fulfilled while building our approximated data set. Given a point P ∈ ω, the point in σ′ corresponding to it is the point P′ that minimizes the Euclidean distance from P. Thus, P′ coincides with P if P is internal to σ′. Usually, P and P′ are distinct only if σ′ is close to the boundary of Σ; in this latter case, P′ lies on the boundary of σ′.

The range error at σ′ is defined as:

    ε_F(σ′) = max_{P ∈ ω} ‖F(P) − F′(P′)‖.
In order to define the space error, we first define the distance between a point P ∈ ω and its closest point having the same field value in the approximated data set:

    d_I(P, σ′) = max_{f_j ∈ F} ( min_{P′ ∈ σ′ : f′_j(P′) = f_j(P)} ‖P − P′‖ ).

Note that d_I is the largest Hausdorff distance between P and an isosurface of value f_j(P), for all j, extracted from the approximated data set. The space error at σ′ is defined as:

    ε_I(σ′) = max_{P ∈ ω} d_I(P, σ′).
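Exact evaluation of these errors over a continuous region is impractical; in practice they are estimated at a discrete set of sample points (as done during construction, Section 4.2). The brute-force sketch below is our own simplification, substituting a tolerance for the exact iso-value constraint, and is meant only to illustrate the two definitions:

```python
import numpy as np

def range_error(F_ref, F_approx):
    """Discrete range error: max over sample points of the max-norm
    difference between reference and approximated field vectors."""
    return max(np.max(np.abs(np.asarray(fr) - np.asarray(fa)))
               for fr, fa in zip(F_ref, F_approx))

def space_error(samples_P, f_ref, approx_pts, f_approx, tol=1e-3):
    """Discrete space error for one scalar field: for each sample point P,
    the distance to the nearest point of the approximated mesh carrying
    (almost) the same field value; the error is the worst such distance."""
    worst = 0.0
    for P, fP in zip(samples_P, f_ref):
        # candidate points on (approximately) the same isosurface as P
        same_iso = [q for q, fq in zip(approx_pts, f_approx)
                    if abs(fq - fP) <= tol]
        if same_iso:
            worst = max(worst, min(np.linalg.norm(np.asarray(P) - np.asarray(q))
                                   for q in same_iso))
    return worst
```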
An approximated mesh Σ′ obtained using edge-collapse simplification usually covers a domain slightly different from that of the initial mesh because simplification may affect the boundary of the mesh. However, the space error can also be used to measure the warping induced by Σ′ on the domain of Σ. This is simply done by adding one more dummy field f_0 = f′_0 to both F and F′, where the value of f_0 is one on the boundary vertices of Σ and zero elsewhere. The contribution of f_0 to ε_I(σ′) corresponds to the (nonsymmetric) one-sided Hausdorff distance between the boundaries of ω and σ′. Note that the choice of direction in the one-sided Hausdorff distance is critical.
There is a differential relation between range error and space error, given by the gradient of the field. In our implementation, we decided to maintain only the space error at each tetrahedron in the data structure encoding an approximated data set, and to estimate the corresponding range error as

    ε_F(σ′) ≈ ε_I(σ′) ∇(σ′),

where ∇(σ′) is the magnitude of the largest gradient of a component of F′ on σ′. Another possibility would be to maintain just the range error but, in this case, the estimate of the space error would diverge where the gradient is very low. Since regions with low gradient are more common than regions with very high gradient, we prefer the former option.
Giving an accurate measure of the error in the output of DVR techniques may be more complicated. In principle, an approach similar to that used for defining the range error can be adopted. Assuming a Euclidean distance in the RGB color space, for a given field f_j and a given transfer function, the difference between the colors obtained at each point P of σ′ through f_j and f′_j, respectively, can be computed. Such a difference is weighted with the alpha value computed at f_j(P), and the result is integrated over σ′ along the view direction. However, this definition of error involves cumbersome computations that would have to be performed at each change of view, with a computational cost possibly higher than that of straight DVR of the original mesh. On the other hand, in our experiments we found that the visual differences in DVR images of approximated data sets are closely related to the range errors occurring in areas of high opacity. Therefore, we have adopted a simpler estimate, the transfer function error, obtained by just weighting the
range error with the alpha value. Given a tetrahedron σ′, the interval spanned by any component of F over ω is contained in

    I_F(σ′) = [ min_{f′_j ∈ F′} min_{P′ ∈ σ′} f′_j(P′) − ε_F(σ′),  max_{f′_j ∈ F′} max_{P′ ∈ σ′} f′_j(P′) + ε_F(σ′) ].

The interval I_F(σ′) is easily obtained from the values of F′ at the vertices of σ′. The maximum alpha value for a point inside σ′ is thus given by

    α_max(F, σ′) = max_{x ∈ I_F(σ′)} α(x),

where α(·) is the opacity component of the transfer function. This value is easy to compute since α(·) is usually piecewise linear. Then, we define the transfer function error as:

    ε_DVR(σ′) = α_max(F, σ′) ε_F(σ′).
In Section 6, we will use the range error, the space error, and the transfer function error to define multiresolution visualization queries. Note that only the space error is maintained explicitly; the range error and the transfer function error are estimated, at each tetrahedron σ′, on the basis of the space error, the geometry of σ′, and the values of the field at its vertices.
Fig. 2. Modification of a tetrahedral mesh through an edge collapse and a vertex split (exploded view). On the left, the tetrahedra that degenerate into triangles after the edge collapse are colored. On the right, the marked tetrahedra result from the deformation of the tetrahedra incident at v′ and at v″.
4 THE MULTIRESOLUTION MODEL
The multiresolution model at the heart of our approach is a specific three-dimensional version of the general, dimension-independent model introduced in [9], called the Multi-Tessellation (MT). The algorithm for building such a model
is based on the mesh decimation method described in [3].
The algorithm for selective refinement on the MT is similar
to the one proposed in [8], improved with ideas from [11].
4.1 The Edge-Based Multi-Tessellation
Edge collapse [19] is a local update that acts on a tetrahedral mesh by contracting an edge e, with endpoints v′ and v″, to its midpoint v. The mesh around e is deformed by replacing vertices v′ and v″ with v; the tetrahedra incident at both v′ and v″ collapse into triangles (see Fig. 2, left to right). The reverse operation of an edge collapse, called a vertex split, expands a vertex v into an edge e having its endpoints at v′ and v″. A vertex split partitions the tetrahedra incident at v into two subsets, which are separated by a fan of triangles incident at v. The tetrahedra of the two subsets are deformed to become incident at v′ and v″, respectively. Triangles belonging to the fan become tetrahedra incident at both v′ and v″ (see Fig. 2, right to left).
Alternatively, an edge collapse can be seen as a modification that removes the tetrahedra incident at v′ or v″ and replaces them with new tetrahedra incident at v. A vertex split can be seen as a modification that removes the tetrahedra incident at v and replaces them with new tetrahedra incident at v′ and v″.
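On an indexed representation (tetrahedra as 4-tuples of vertex indices), this removal-and-replacement view of an edge collapse can be sketched as below. This is our own minimal version, reusing the index of one endpoint for the new midpoint vertex and ignoring the error bookkeeping and legality tests discussed in Section 4.2:

```python
def edge_collapse(tets, points, v1, v2):
    """Collapse edge (v1, v2) to its midpoint, reusing index v1 for the
    new vertex. Tetrahedra containing both endpoints degenerate into
    triangles and are removed; the rest are deformed by relabeling v2
    as v1. Returns a new tetrahedra list and point dictionary."""
    points = dict(points)
    points[v1] = tuple((a + b) / 2.0
                       for a, b in zip(points[v1], points[v2]))
    del points[v2]
    new_tets = []
    for t in tets:
        if v1 in t and v2 in t:
            continue                  # degenerates into a triangle
        new_tets.append(tuple(v1 if v == v2 else v for v in t))
    return new_tets, points
```

The inverse vertex split would restore v2, move v1 back, and reinsert the removed tetrahedra, which is exactly the information an MT node has to store.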
A sequence of edge collapses transforms a reference
mesh into a simplified mesh, while the corresponding
sequence of vertex splits recovers the reference mesh
starting at the simplified mesh. Fig. 3 shows a sequence of
vertex splits that progressively refine a triangle mesh (the
example is given in two dimensions, for the sake of clarity).
In the rest of the paper, we will use the following notation. A node u represents both an edge collapse u⁻ (coarsening update) and the corresponding vertex split u⁺ (refinement update). The edge collapsed by applying u⁻ and the vertex split by applying u⁺ are denoted e_u and v_u, respectively. Fig. 4a shows the nodes corresponding to the sequence of updates of Fig. 3.
Fig. 3. A sequence of vertex splits progressively refining a triangulation. The mesh portion affected by each update is shaded and the split vertex is
circled. The new edge introduced by a split is thickened.
Fig. 4. (a) The nodes corresponding to the example of Fig. 3. (b) The partial order among such updates, depicted as a DAG where each node
represents an update and each arc represents a dependency. Nodes are labeled with the indices of the corresponding split vertices.
An edge-based Multi-Tessellation is composed of a base mesh at coarse resolution plus a set of nodes defined as above. A partial order among the nodes is induced by the following dependency relation: a node u depends on another node w if u⁺ removes some cell that has been introduced by w⁺. This means that u⁺ cannot be applied unless w⁺ has been applied first and, conversely, w⁻ cannot be applied unless u⁻ has been applied first. Fig. 4b shows the partial order encoded in the MT describing the sequence of updates depicted in Fig. 3.
We say that a subset S of the nodes of an MT is consistent if, for every node u ∈ S, all nodes w such that w precedes u in the partial order are also in S. The nodes belonging to a consistent subset S can be applied to the base mesh, in any total order extending the partial order, thus producing a mesh Σ_S.
The base mesh and the reference mesh are the coarsest
and the most refined mesh that can be obtained from an
MT, respectively. The reference mesh can be obtained from
the base mesh by applying all vertex splits corresponding to
nodes of the MT. Other meshes, at a virtually continuous set
of levels of detail, can be extracted from an MT by
performing different consistent subsets of the updates
corresponding to its nodes. Consistency ensures that such
meshes will not contain cracks or overlapping tetrahedra.
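In code, consistency is a closure condition on the dependency DAG: a selective-refinement engine must expand any requested set of updates with all of their (transitive) predecessors. A small sketch under our own naming, where parents[u] lists the nodes u directly depends on:

```python
def close_consistent(wanted, parents):
    """Smallest consistent superset of `wanted`: add every node that a
    requested node transitively depends on, per the MT partial order."""
    S, stack = set(), list(wanted)
    while stack:
        u = stack.pop()
        if u not in S:
            S.add(u)
            stack.extend(parents.get(u, ()))   # pull in direct predecessors
    return S

def is_consistent(S, parents):
    """True if S already contains all predecessors of its own nodes."""
    return all(w in S for u in S for w in parents.get(u, ()))
```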
4.2 Construction of an Edge-Based MT
The construction of an edge-based MT is performed offline
through a variant of the simplification algorithm for
tetrahedral meshes presented in [3]. The algorithm follows
a classical iterative approach: It starts with the reference
mesh and builds the MT by performing one edge collapse at
a time until a mesh of given size is obtained. An edge can be
collapsed only if the following conditions hold:
1. It does not introduce nonmanifold conditions;
2. The total number of vertices adjacent to its two endpoints is not larger than 32;
3. Its two endpoints are not both local extrema (one a maximum and the other a minimum) of any data field in the current mesh.
The algorithm collapses an edge to its midpoint, computing the value(s) of the field(s) and the vertex normal(s) at the new vertex by linear interpolation. If one of the endpoints of the edge is a local extremum of some data field in the current mesh, we assign that field value to the new vertex instead. This fact, together with Condition 3 above, guarantees that the range of the field in the influence region of an edge collapse remains the same before and after the collapse, as required in Section 3. Condition 2 is imposed essentially to guarantee a bound on the size of the data structure that we use to encode collapses (see Section 5.1). The experiments we have performed confirm that this limit works well in practice.
Each edge collapse generates a new node of the MT.
Dependencies on other MT nodes are detected and stored in
a data structure representing the partial order. The
simplified mesh obtained at the end of the simplification
process becomes the base mesh of the MT.
Simplification is driven by a priority queue. A cost is
assigned to each edge which is proportional to the increase
in space error caused by its collapse. The edge of lowest cost
has the highest priority. The priority queue is initialized
with all edges of the reference mesh which can be collapsed
without violating mesh consistency; the queue is updated after
each collapse by recomputing the cost of only those edges whose
cost may have changed (i.e., only edges in the proximity of the
collapsed edge).
The cost c(e_u) of collapsing an edge e_u is defined as the
maximum of the space errors evaluated at the tetrahedra σ′
generated by the corresponding edge collapse. In order to
get a correct estimation of the error, each vertex of the
original mesh that was eliminated because of a collapse is
assigned to the tetrahedron containing it in the current
mesh (if a vertex lies outside the current mesh, then it is
assigned to its closest tetrahedron in the mesh). The error at
a tetrahedron σ′ is computed by using the set of vertices
assigned to σ′ as a discrete approximation of σ′. The
assignment of vertices to tetrahedra is updated locally
during the simplification process. Once an edge e_u is
collapsed and a corresponding node u is generated for the
MT, we define and store the node error ε(u) = c(e_u).
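The priority-driven simplification loop described above can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: the `cost`, `collapsible`, and `collapse` callbacks are assumed to be supplied by the mesh layer, and stale queue entries are skipped lazily instead of being removed from the heap.

```python
import heapq

# Sketch of priority-queue-driven edge-collapse simplification.
# cost(e): increase in space error caused by collapsing edge e.
# collapsible(e): tests the preconditions of Section 4.2.
# collapse(e): performs the collapse and returns the edges whose
#              cost may have changed (those near the collapsed edge).
# size(): current mesh size; simplification stops at target_size.
def simplify(edges, cost, collapsible, collapse, target_size, size):
    heap = [(cost(e), e) for e in edges if collapsible(e)]
    heapq.heapify(heap)
    stamp = {e: c for c, e in heap}   # current cost, for lazy invalidation
    while heap and size() > target_size:
        c, e = heapq.heappop(heap)
        if stamp.get(e) != c or not collapsible(e):
            continue                  # stale entry or no longer legal
        for e2 in collapse(e):        # re-queue only nearby edges
            if collapsible(e2):
                c2 = cost(e2)
                stamp[e2] = c2
                heapq.heappush(heap, (c2, e2))
        stamp.pop(e, None)
```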
The construction algorithm assumes that the mesh has a
manifold domain and makes sure that edge collapses do not
introduce nonmanifold features. The consistency of the
multiresolution model guarantees that the same property is
maintained during selective refinement.
The current implementation of the MT construction
algorithm works with just one field value per data set.
However, the extension to deal with more field values is
straightforward.
Although the specific implementation presented in this
paper complies with the above description, we point out
that the mechanism for selective refinement and the
visualization system presented in the following sections are
completely parametric with respect to the specific construction
technique and error metric adopted, as long as a
sequence of edge collapses is generated which preserves
the manifold property of the mesh during simplification,
and an error can be evaluated at each tetrahedron of an
approximated mesh.
4.3 An Incremental Algorithm for Selective Refinement
Selective refinement consists of extracting a tetrahedral
mesh from an MT that fulfills some resolution requirements
and has a size smaller than or equal to a size threshold.
The selective refinement algorithm is based on an
incremental approach, similar to the one proposed in [8],
and improved with ideas from [11] (the use of two priority
queues). The algorithm starts with a consistent set S of
nodes of an MT and its associated mesh Σ_S (called the
current set and the current mesh, respectively), which have
been generated as the solution of a previous query. At
startup, set S is empty and mesh Σ_S is initialized to be the
base mesh. The algorithm modifies the current set (and the
current mesh, as a consequence) by iteratively adding and
deleting nodes to and from S. A node can be added to S, or
deleted from it, provided that the resulting set is still
consistent (i.e., if and only if the corresponding update can
be made on Σ_S). An update that can be performed without
violating consistency is called feasible. At each step, nodes
corresponding to feasible edge collapses are considered for
deletion and nodes corresponding to feasible vertex splits
are considered for addition. Nodes to be inserted/deleted
are selected on the basis of their priorities, which depend on
resolution requirements.
Specific instances of resolution requirements, which are
application dependent, will be defined in Section 6 for
various queries related to volume visualization. In general,
we assume that a resolution function ρ(u) can be evaluated
at each node u of an MT, which defines the level of
resolution of node u with respect to the desired requirements.
A value ρ(u) > 0 means that the portion of mesh
covered by u is underrefined (with respect to the current
requirements) if vertex split u⁺ is not performed. A value
ρ(u) < 0 means that the portion of mesh covered by u is
overrefined (with respect to the current requirements) if edge
collapse u⁻ is not performed. The larger the positive
(negative) value of ρ(u), the more underrefined (overrefined)
the portion of current mesh covered by u.
On this basis, a selective refinement on an MT produces a
mesh Σ_S, associated with a consistent set S, such that: 1) the
size of Σ_S is ≤ b, where b is the size threshold of the query;
2) max{ρ(u) | u can be added to S} is minimized;
3) min{ρ(u) | u can be deleted from S} ≥ 0. Requirement 1)
can always be achieved, provided that threshold b is not
smaller than the size of the base mesh. Requirement 2) means
that at least the most relevant vertex splits must be in S.
Ideally, we would like to have no underrefined portions of a
mesh, but this can be achieved only if the size bound b is large
enough. Requirement 3) means that S must not contain any
overrefined node, i.e., mesh Σ_S must be the coarsest mesh that
can be extracted based on requirements 1) and 2).
In order to perform priority-based insertions and
deletions, the selective refinement algorithm maintains
two priority queues of nodes:
- A queue Q_add of candidate nodes to be added to set S
(i.e., of candidate vertices to be split in Σ_S). Q_add
contains the nodes u ∉ S such that vertex v_u belongs
to Σ_S, sorted by decreasing values of ρ(u).
- A queue Q_del of candidate nodes to be deleted from set
S (i.e., of candidate edges to be collapsed in Σ_S). Q_del
contains the nodes u ∈ S such that edge e_u is in Σ_S and
u⁻ is feasible, sorted by increasing values of ρ(u).
Note that Q_del only contains feasible nodes, unlike Q_add.
The algorithm consists of a loop in which the generic step
considers the values of ρ for the first nodes u_add in queue
Q_add and u_del in queue Q_del, and the size #Σ_S of mesh Σ_S.
The relevant cases and the actions taken by the algorithm in
each case are summarized below:
1. ρ(u_del) ≤ 0, while ρ(u_add) and #Σ_S may have any value:
delete u_del from S and perform u⁻_del on Σ_S.
2. ρ(u_del) and ρ(u_add) may have any value, #Σ_S > b: delete
u_del from S and perform u⁻_del on Σ_S.
3. ρ(u_del) > 0, ρ(u_add) ≤ 0, and #Σ_S ≤ b: exit and output
Σ_S as the solution.
4. ρ(u_del) > 0, ρ(u_add) > 0, and #Σ_S ≤ b: consider the set U
of all nodes u ∉ S which are ancestors of u_add, and
test whether the size of the mesh associated with S ∪ U ∪
{u_add} is ≤ b:
   a. If true: add all nodes of U and node u_add to S; for
   each node u added to S, perform u⁺ on Σ_S.
   b. If false and ρ(u_del) < ρ(u_add): delete u_del from S
   and perform u⁻_del on Σ_S.
   c. If false and ρ(u_add) ≤ ρ(u_del): exit and output Σ_S
   as the solution.
One or more applications of Case 4b will eventually turn
Case 4 into Case 4a and, thus, will allow adding a node to S.
After adding a node to S or deleting a node from it, the
two queues are updated by taking into account the new
situation in the neighborhood of the mesh portion that has
been modified.
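The case analysis of the selective refinement loop can be sketched schematically in Python. The `mt` object and its methods (`rho`, `missing_ancestors`, `size_after_splits`, and so on) are hypothetical placeholders for the MT data structure, not an API from the paper, and the two queues are assumed nonempty for brevity.

```python
# Schematic sketch of one iteration of the two-queue selective
# refinement loop (Section 4.3). Returns True if the mesh changed,
# False when the current mesh can be output as the solution.
def refine_step(mt, S, Q_add, Q_del, b):
    u_add, u_del = Q_add.top(), Q_del.top()
    if mt.rho(u_del) <= 0 or mt.size() > b:        # Cases 1 and 2: coarsen
        S.remove(u_del)
        mt.do_collapse(u_del)
        return True
    if mt.rho(u_add) <= 0:                          # Case 3: done
        return False
    U = mt.missing_ancestors(u_add, S)              # Case 4
    if mt.size_after_splits(U | {u_add}) <= b:      # Case 4a: refine
        for u in mt.topological(U | {u_add}):
            S.add(u)
            mt.do_split(u)
        return True
    if mt.rho(u_del) < mt.rho(u_add):               # Case 4b: make room
        S.remove(u_del)
        mt.do_collapse(u_del)
        return True
    return False                                    # Case 4c: done
```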
5 A COMPACT DATA STRUCTURE FOR AN EDGE-BASED MT
In order to encode an edge-based MT and to implement the
selective refinement algorithm described in Section 4, we
must maintain:
- A data structure for encoding the base mesh and the
current mesh, which supports edge collapses and
vertex splits efficiently.
- Information about the partial order of the MT
nodes, which allows testing whether an update
u⁻ or u⁺ is feasible, and retrieving the direct
ancestors of a node u.
- Information about the updates u⁻ and u⁺ corresponding
to a node u of the MT, which allows applying such
updates on the current mesh.
In order to manage large data sets, we have designed a
compact data structure which represents both the MT nodes
and the partial order in an implicit way. For encoding the
partial order, we elaborate on a result obtained by El-Sana
and Varshney [13] for the two-dimensional case. For
encoding nodes, we propose an original method based on
a bit vector.
5.1 Encoding the Base and the Current Meshes
Both the base mesh and the current mesh Σ_S are encoded in
an adjacency-based data structure, implemented as an array
of vertices and an array of tetrahedra. For each vertex v, we
store:
v1. its three coordinates, the field value associated with v,
and the normal (computed as explained in Section 3)
at v;
v2. the index of one tetrahedron incident at v;
v3. the index of the node corresponding to v in the MT
data structure.
For each tetrahedron t, we store:
t1. the indexes of the four vertices of t;
t2. the indexes of the four tetrahedra adjacent to t along
a face;
t3. the reverse indexes giving the position of t with
respect to each of its adjacent tetrahedra.
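A minimal sketch of these two record types might look as follows in Python; the field names are ours, not from the paper's code, and -1 is used here to mark a boundary face with no adjacent tetrahedron.

```python
from dataclasses import dataclass

# Sketch of the adjacency-based structure of Section 5.1.
# Indices refer to positions in the vertex and tetrahedron arrays.
@dataclass
class Vertex:
    xyz: tuple          # v1: three coordinates
    field: float        # v1: field value at the vertex
    normal: tuple       # v1: vertex normal (Section 3)
    tet: int            # v2: one tetrahedron incident at the vertex
    node: int           # v3: corresponding node in the MT forest

@dataclass
class Tetrahedron:
    verts: tuple        # t1: indexes of the four vertices
    adj: tuple          # t2: the four face-adjacent tetrahedra
    rev: tuple          # t3: reverse indexes: position of this tet
                        #     inside each adjacent tet's `adj` tuple
```

The reverse indexes make face-to-face traversal constant time: if `t.adj[i]` is a neighbor, then `tets[t.adj[i]].adj[t.rev[i]]` points back to `t`.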
Fig. 5. The forest of binary trees encoding the MT of Fig. 4. Circles
represent vertices corresponding to MT nodes, squares represent
vertices of the original mesh.
The dimension of the arrays used for the base mesh is
fixed. The dimension of arrays for the working structure
representing the current mesh is defined based on the given
size threshold b in such a way as to contain the largest
admissible mesh. Since the content of the structure is
dynamic, a simple mechanism of garbage collection (based
on reference counting) is used.
We store four bytes per coordinate/value/index, four
bytes for the normal, and one byte for the four reverse
indexes of a tetrahedron. If m and n denote the number of
tetrahedra and vertices in a mesh, respectively (m ≃ 6n),
this data structure requires 33m + 28n ≃ 226n bytes.
The space required for the base mesh is, in general,
negligible because its size is typically about two orders of
magnitude smaller than that of the reference mesh. The space
required by the current mesh depends on the size threshold.
Note that this data structure, except for field v3 (4 bytes
per vertex), is the same as the standard adjacency-based data
structure that is necessary to support DVR through cell
projection, which would therefore cost 222n bytes. For
visualization through isosurfaces or cross sections, a data
structure storing just fields v1 and t1 is sufficient. This
structure, which is usually called an indexed data structure,
would require 16m + 20n ≃ 116n bytes.
5.2 Encoding the Partial Order
Since an explicit encoding of the DAG describing the partial
order could be too expensive, we encode it implicitly
through a mechanism proposed by El-Sana and Varshney
[13] for triangle meshes. This mechanism is based on a
binary forest of vertices and on a vertex enumeration
scheme.
The forest contains one node for each vertex created by an
edge collapse in the MT and one node for each vertex of the
reference mesh. The
leaves of the forest correspond to the vertices of the
reference mesh, while the other nodes are in a one-to-one
correspondence with the MT nodes. In particular, the roots
correspond to vertices of the base mesh. The two children of
each internal node v_u correspond to the endpoints v′_u and v″_u
of the edge e_u created when splitting v_u. Fig. 5 shows the
forest for the MT of Fig. 4.
The n vertices of the reference mesh are labeled
arbitrarily with numbers from 1 to n. The remaining
vertices are assigned consecutive numbers in a total order
that extends the partial order of the MT. An example of
enumeration can be seen in Fig. 3 by reading the figure from
right to left as a sequence of edge collapses. In this way, if a
node u depends on a node w in the MT, then the label of
vertex v_u must be lower than that of v_w.
The forest can be implemented as an array in which
every node u is stored at the position corresponding to its
label. The entry for u stores:
- an index u.WIDE, which points either to the parent
u_p of u, if u is the second child of u_p (by convention,
the child having the smallest label), or to the sibling
of u, if u is the first child of u_p;
- an index u.DEEP, pointing to the first child of u (by
convention, the child having the largest label).
The parent of a node u is determined as u.WIDE, if
u.WIDE > u, or as u.WIDE.WIDE otherwise. The children
of u are found as u.DEEP and u.DEEP.WIDE.
Note that a leaf u of the forest requires just index
u.WIDE. Therefore, our implementation consists of two
separate arrays, one for the leaves and one for the internal
nodes. The storage cost of maintaining the forest is equal to
4n + 8n_in ≃ 12n bytes, where n is the number of vertices of
the reference mesh (i.e., of leaves in the forest) and n_in is the
number of internal nodes (n_in ≃ n).
Given a mesh Σ_S corresponding to a consistent set S, the
following properties hold [13]:
1. A vertex split u⁺ is feasible on Σ_S if and only if
vertex v_u belongs to Σ_S and all the labels of vertices
adjacent to v_u are lower than that of v_u. If u⁺ is not
feasible, then it can be made feasible by first splitting
all the adjacent vertices of v_u which have a label
greater than that of v_u. If any of such adjacent splits is
not feasible, the rule is applied recursively.
2. An edge collapse u⁻ is feasible on Σ_S if and only if
edge e_u belongs to Σ_S and, for each vertex w adjacent
to the endpoints of e_u in Σ_S, either w is one of the
roots in the forest or the label of the parent of w is
greater than that of v_u.

5.3 Encoding MT Nodes
Information stored for a node u of the MT must be sufficient
to perform both operations u⁻ (edge collapse) and u⁺
(vertex split) on the current mesh Σ_S. All geometric and
attribute information about each vertex of the current mesh
is maintained in the working data structure for Σ_S, which
is dynamically linked to the corresponding node of the
forest. In particular, each entry corresponding to a vertex of
the current mesh has a pointer to the node in the forest
corresponding to the same vertex (see Section 5.1).
Let u⁻ be a feasible edge collapse for Σ_S, which should
contract an edge e_u, having as endpoints a pair of vertices v′_u
and v″_u, to a vertex v_u. Since we have built the MT by
collapsing edges to their midpoints, the position, field
value(s), and vertex normal(s) of vertex v_u are obtained by
linear interpolation. We modify Σ_S by replacing v′_u or v″_u
with v_u in all tetrahedra containing either of such two
vertices and deleting those tetrahedra which become
degenerate. The current mesh, together with the forest,
provides sufficient information to support edge collapse.
Conversely, let u⁺ be a feasible vertex split for Σ_S, which
should expand vertex v_u into edge e_u with endpoints v′_u and
v″_u. In order to perform u⁺, we also need to know the
positions, field values, and normals of v′_u and v″_u, and a
partition of the tetrahedra incident at v_u into tetrahedra
that must replace v_u with v′_u and tetrahedra that must
replace v_u with v″_u. For this purpose, we store the following
additional information at a node u:
- an offset vector, which is used to find the positions
of vertices v′_u and v″_u from that of v_u;
- an offset value, which is used to obtain the field
values at v′_u and v″_u from that of v_u;
- the normal at v′_u (the normal at v″_u can be found as
2n(v_u) − n(v′_u), where n(v_u) is the normal at v_u);
- a bit mask, which is used to partition the star of v_u,
i.e., the set of tetrahedra incident at v_u.
The bit mask contains one bit for each tetrahedron incident at
v_u. Incident tetrahedra are sorted lexicographically according
to the triplets formed by their vertices different from v_u,
where the vertices are sorted according to their index. Then,
the following rule is applied: Tetrahedra marked with 0 must
replace v_u with v′_u; tetrahedra marked with 1 must replace v_u
with v″_u; each triangular face shared by two differently
marked tetrahedra must be expanded into a tetrahedron
incident at both v′_u and v″_u (see Fig. 2, right side).
In order to resolve ambiguities in boundary configurations,
we assume the existence of dummy tetrahedra
attached to the faces lying on the mesh boundary and
incident at a dummy vertex. In the case of a boundary
update u, the bit mask includes one bit for each dummy
tetrahedron.
In selective refinement, the evaluation of the priority
function ρ(u) for an update u may depend on the node
error, as described in Section 4.2. Therefore, we also store
such a value at each node u.
Since we collapse only edges such that the total number of
vertices adjacent to their two endpoints is not larger than 32
(see Section 4.2), the number of tetrahedra incident at a
vertex is bounded by 64 and, thus, the bit mask can be
stored in 8 bytes. Moreover, we compress the normal and
the error into 4 bytes: We pack the normal into the first 18 bits
by using the compression scheme described in [10] and we
quantize the node error on the remaining 14 bits, using a
logarithmic scale to obtain a finer-grained quantization at
small errors.
The information associated with a single node thus
contributes a cost of 28 bytes. Since this cost is added to all
n_in ≃ n entries of the array of internal nodes, the total cost of
the MT data structure is 12n + 28n = 40n bytes, plus the cost
of the base mesh, which is negligible. Thus, our data
structure for the MT achieves a compression factor of
about 5.5 or 3, compared with the mesh at full resolution
encoded in an adjacency-based data structure or in an
indexed one, respectively, as reported in Section 5.1.
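The parent and children rules of the WIDE/DEEP forest encoding in Section 5.2 can be sketched as follows; the `wide` and `deep` maps are our stand-ins for the two index arrays, with node labels as keys.

```python
# Sketch of implicit forest navigation (Section 5.2).
# wide[u] plays the role of u.WIDE: it points to the parent of u if
# u is the second child, or to the sibling of u if u is the first
# child. deep[u] plays the role of u.DEEP: the first child of u
# (by convention, the child having the largest label).
def parent(u, wide):
    """Parent of u: WIDE if it points upward (larger label), else WIDE.WIDE."""
    w = wide[u]
    return w if w > u else wide[w]

def children(u, wide, deep):
    """Children of u: DEEP (first child) and that child's WIDE (its sibling)."""
    first = deep[u]
    return first, wide[first]
```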
6 SELECTIVE REFINEMENT QUERIES
An interactive response is crucial for the effective use of a
visualization system. In a context where volume data are
analyzed and rendered through different techniques, such
as isosurfaces, planar cross sections, and DVR, the
following requirements should be fulfilled: Standard
viewing operations, such as data rotation, panning, and
zooming, must be performed at a rate of at least 10 fps; a
semi-interactive response, e.g., a latency of a few seconds, can
be tolerated for less frequent operations, such as modification
of the visualization modality and parameters (e.g., isosurface
extraction and transfer function editing).
With state-of-the-art architectures and visualization
techniques, the throughput of the visualization pipeline is
not always sufficient to support an interactive response on a
large mesh, even if the latter can be loaded in main
memory. Multiresolution is then used to support interactivity by reducing the size of the input mesh and trading
some accuracy for speed. A system based on this approach
can let the user operate on smaller, more manageable,
meshes during highly interactive operations and, at the
same time, offer the full accuracy of the reference mesh
when the user can wait longer. An approximated mesh,
having a size small enough to allow interactive response
during viewing operations, is extracted online through
selective refinement. Extraction is performed only when
visualization parameters change; thus, a semi-interactive
response is acceptable for this operation.
In the following, we define some basic queries which can
be used to drive selective refinement in order to obtain
meshes with different characteristics, depending on the
specific visualization operation to be performed. All queries
are expressed in terms of a size threshold and of a resolution
function, as defined in Section 4.3.
The size threshold can be either adjusted by the user or
automatically set by the visualization system, depending on
its throughput characteristics. As a general rule, the time to
render an extracted mesh is proportional to the size
threshold. Consider also that the time to render a mesh
of a given size is highly dependent on the selected
visualization modality: For instance, the time necessary
for DVR through cell projection may be two orders of
magnitude higher than the time needed for rendering
isosurfaces.
A resolution function allows us to decide whether a node
is over or underrefined with respect to the needs of the
query: The higher the negative (positive) value of the
resolution function at a given node of the MT, the more
over(under)refined the corresponding portion of mesh.
Specific resolution functions characterize specific queries
which take into account the approximation error corresponding to nodes of an MT. In order to simplify notation,
we consider only one scalar field f, defined on the input
data set. The extension to multiple scalar fields is
straightforward on the basis of the definitions given in
Sections 3 and 4.2.
6.1 Uniform LOD
This query corresponds to a standard simplification of the
input data set. It is generally intended more as a way of
obtaining a rough visualization of the whole input data set
when entering a session or during tasks like isosurface
selection and transfer function editing. Accuracy of the
extracted mesh is measured through either the range or the
space error, depending on user needs. The result of the
query is a mesh having, at each tetrahedron, a field error as
close as possible to a constant value E, called the accuracy
threshold. If E is set to zero, the best approximated mesh
with respect to the error metric considered and within the
given size threshold is extracted. The resolution function, in
case the space error is considered, is simply given by:

    ρ(u) = ε(u) − E,

where ε(u) is the node error. The definition is analogous in
case the range error is considered, except that the node error
is multiplied by the maximum gradient of the field at the
tetrahedra generated by u⁻.
6.2 Variable LOD Based on Field Value
This query is generally useful when the user is interested in
one specific value, a set of values, or a range of values of the
field, called the set of active values AV.
Typically, the user sets an active range of field values
while searching for the best value for isosurface rendering
within a candidate range of values. Once the best isovalue
has been identified, the corresponding isosurface will be
computed on a variable LOD mesh extracted through the
more appropriate isovalue-based LOD criterion (see below).
Accuracy is measured through the range error and should
be close to a threshold E_in in cells that contain active values,
while it should be close to a coarser threshold E_out
elsewhere. If E_in is set to zero and E_out is set to infinity,
the mesh within the given size threshold is extracted which
best approximates the reference mesh in areas close to the
active values. The resolution function is given by:
"ðuÞ Ein if If ðuÞ \ AV 6¼ ;
ðuÞ ¼
"ðuÞ Eout otherwise;
where If ðuÞ is the range of field f spanned by tetrahedra of
u , as defined in Section 3.
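The resolution functions of Sections 6.1 and 6.2 are straightforward to express in code. The following sketch assumes the node error and the field range spanned by a collapse are available from the MT; the function and parameter names are ours.

```python
# Sketch of the resolution functions for the Uniform LOD (6.1) and
# field-value (6.2) queries. eps_u is the node error, range_u the
# (lo, hi) field range spanned by the tetrahedra of the collapse,
# and AV a list of active (lo, hi) ranges.
def rho_uniform(eps_u, E):
    return eps_u - E

def rho_field(eps_u, range_u, AV, E_in, E_out):
    lo, hi = range_u
    # The node is "active" if its field range overlaps any active range.
    active = any(lo <= b and a <= hi for (a, b) in AV)
    return eps_u - (E_in if active else E_out)
```

A positive result flags the node as underrefined (split wanted), a negative one as overrefined (collapse wanted), matching the convention of Section 4.3.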
6.3 Variable LOD Based on a Spatial Location
This query allows a user to focus on a certain portion of the
domain. It is controlled from a 3D spatial widget (e.g., an
axis-aligned box) that the user can drag interactively. A
focus volume F V is defined by such a widget. Accuracy is
measured either by the space error or by the range error,
depending on user needs. Accuracy should be close to a
threshold Ein in portions of domain that intersect F V , while
it should be close to a coarser threshold Eout elsewhere. If
Ein is set to zero and Eout is set to infinity, the mesh within
the given size threshold is extracted which best approximates the reference mesh in the focus volume. In order to
test which threshold should be used at a node u, we must
decide whether the portion of reference mesh approximated
by u intersects FV or not. This is done by a conservative
test, which considers the bounding box of u enlarged by the
node error ε(u) along each axis, which we denote by
BB(u). The resolution function, in case the space error is
considered, is given by:

    ρ(u) = ε(u) − E_in,  if BB(u) ∩ FV ≠ ∅;
    ρ(u) = ε(u) − E_out, otherwise.

The definition is analogous in case the range error is
considered, as for the Uniform LOD query.
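The conservative focus-volume test of Section 6.3 amounts to inflating an axis-aligned bounding box by the node error before a standard box-box overlap test; a sketch under that assumption (names are ours):

```python
# Boxes are ((xmin, ymin, zmin), (xmax, ymax, zmax)).
def enlarged_box(box, eps):
    """BB(u): the node's bounding box grown by eps along each axis."""
    lo, hi = box
    return tuple(c - eps for c in lo), tuple(c + eps for c in hi)

def boxes_intersect(a, b):
    """Standard axis-aligned box overlap test."""
    return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(3))

def rho_spatial(eps_u, box_u, FV, E_in, E_out):
    """Resolution function of Section 6.3 for the space error."""
    inside = boxes_intersect(enlarged_box(box_u, eps_u), FV)
    return eps_u - (E_in if inside else E_out)
```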
6.4 Variable LOD Based on Isovalues
For isosurface rendering, accuracy of the current mesh is
relevant only at those cells that intersect the active
isovalues. This query is intended as a means to display an
accurate isosurface once an interesting isovalue has been
identified. Given a collection of isovalues IV, accuracy,
measured by the space error, should be close to a threshold
E_in in portions of the domain that intersect IV, while it can be
arbitrarily large elsewhere. In case E_in is set to zero, the
extracted isosurfaces coincide with those obtained from the
reference mesh. The resolution function is given by:

    ρ(u) = ε(u) − E_in, if I_f(u) ∩ IV ≠ ∅;
    ρ(u) = −∞,          otherwise;

where I_f(u) is the range of field f spanned by the tetrahedra
of u⁻, as defined in Section 3.
6.5 Variable LOD Based on the Transfer Function
This query is reserved for the DVR modality. It provides an
accuracy proportional to the opacity of the field with
respect to the current transfer function. In this case, the
extracted mesh is the mesh within the given size threshold
having the best accuracy, which is measured by the transfer
function error. The resolution function is simply given by:

    ρ(u) = ε_DVRf(u),

where ε_DVRf(u) is the transfer function error at a node u, as
defined in Section 3.
Besides reducing the size of the current mesh, this
approach may produce visual results that are even better
than those obtained by using the reference mesh. In fact, the
use of high resolution everywhere may introduce considerable quantization error in regions with low opacity because
most current graphics subsystems support only one byte
per RGBA channel. Thus, small cells with very low
opacity would be rendered as completely transparent. On
the contrary, the contribution of low opacity regions is not
lost if larger cells are used over them.
Our approach could support other LOD criteria as well.
In particular, view-dependent criteria could be defined
simply by using the projected value of the space error on the
screen, rather than the space error itself, in defining the various
resolution functions. However, in our opinion, view-dependent
LOD is of marginal importance in volume
visualization since we typically deal with a single volume
data set that has a small extension in space compared to its
distance from the viewer. Moreover, view-dependent LOD
would force us to perform a new selective refinement each
time a view operation is performed, i.e., during interactive
viewing, which is not supported in the current implementation.
For these reasons, we did not implement view-dependent
criteria in our prototype system.
7 THE TAN2 SYSTEM
TAn2 is an interactive visualization system that uses an
edge-based MT to efficiently generate isosurfaces, planar
cross sections, and direct volume rendering.
The architecture of TAn2 consists of six modules, as shown
in Fig. 6. It is mainly based on a Multiresolution Engine which,
according to the user’s needs, performs selective refinement
operations on the MT data structure and extracts tetrahedral
meshes. The most recently generated tetrahedral model is
maintained in the Current Model Manager and is subsequently
used by all the other rendering and interaction modules.
Fig. 6. The architecture of the TAn2 system.
The algorithms used for isosurface extraction and DVR are
standard algorithms, which require short setup times. In
this prototype system, we have favored the simplicity of
such algorithms. On the other hand, given the modular
structure of the system, they can be easily replaced with
more advanced and efficient ones. The six modules and
their interactions are described below:
1. The GUI Manager, which manages the user interface
(except for rendering) and controls user interaction.
It provides GUI devices for three main classes of
operations: viewing operations (rotation, zooming,
and panning of the visualized object), visualization
operations (editing of the transfer function, selection
of isovalues and cross planes), and multiresolution
operations (selection of a LOD query, selection of
size and accuracy thresholds, and manual requests to
perform a mesh extraction).
2. The Transfer Function Manager, which stores all
resolution and visualization parameters received
from the user through the GUI Manager. Such
parameters are the field-to-color mapping (defined
by piecewise-linear functions on the RGBA components),
used to assign colors to the vertices of the
current mesh to be rendered; the active isovalues,
used to generate the isosurfaces; the active cross plane
parameters, used to generate cross sections; the
current size threshold; and the accuracy thresholds.
3. The Multiresolution Engine, which performs selective
refinement on the MT data structure each time it
receives new user-defined thresholds from the
Transfer Function Manager.
4. The Current Model Manager, which stores the current
mesh enriched with additional information required
for rendering, including colors (received from the
Transfer Function Manager), isosurfaces, and cross
sections (computed by the Isosurface Engine, see
below).
Fig. 7. Two images for the Plasma64 data set: (a) Four isosurfaces extracted from the mesh at full resolution to show the complexity of the field.
(b) Two isosurfaces from a mesh at variable LOD based on isovalue with threshold Ein ¼ 0:0%. The size of the selectively refined mesh is only about
7 percent of the size of the reference mesh.
5. The Isosurface Engine, which extracts isosurfaces and
cross sections from the current mesh, based on
parameters received from the Transfer Function
Manager. Both isosurfaces and cross sections are
computed through a standard Marching Tetrahedra
algorithm. The normals of an isosurface at its
vertices are computed according to the vertex
normals stored in the tetrahedral model, as defined
in Section 3.
6. The Rendering Engine, which performs rendering
through isosurfaces, cross sections, and DVR. It can
also render a data set by drawing its points, the edges of
the tetrahedral mesh, or the triangle mesh forming
its boundary. The DVR approach adopted is based
on depth sorting and composition of the contributions
of the tetrahedral cells [33], [38]. The depth
sorting of the tetrahedral mesh is based on an
approximate centroid sorting of boundary cells
followed by a topological sort of internal cells. We
have included in the system a more sophisticated
rendering mode that allows rendering the isosurfaces
together with the DVR cell contributions
according to the current transfer function (see Fig. 11). A more
correct depth sorting algorithm, like the one of [7],
could also be added.

8 RESULTS
In this section, we show images and performance statistics
obtained from TAn2 on four sample data sets:
- Fighter (13,832 vertices, 70,125 tetrahedra): An
irregular tetrahedral mesh which is the result of an
air flow simulation over a jet fighter from a wind
tunnel model developed at NASA Langley Research
Center (courtesy of R.W. Neely and J.T. Batina,
NASA, 1992);
- BluntFin (40,960 vertices, 222,528 tetrahedra): Airflow
over a flat plate with a blunt fin rising from the
plate (courtesy of C.M. Hung and P.G. Buning,
NASA, 1990);
- Turbine Blade (106,795 vertices, 576,576 tetrahedra):
A curvilinear mesh (courtesy of AVS Inc., tetrahedralized
by O.G. Staadt);
- Plasma64 (262,144 vertices, 1,500,282 tetrahedra): A
large regular synthetic data set of three-dimensional
Perlin noise [30].
The first two data sets have multiple fields: We have used
just one of them, namely, the density field, for our experiments. The initial tetrahedral mesh of Plasma64 has been
obtained from the regular grid by decomposing each cubic
cell into six tetrahedra. Curvilinear grids (BluntFin and
Turbine Blade) are cubic grids to which deformations have
been applied and have been tetrahedralized in a similar way.
We needed to start from regular data sets because few
irregular data sets are available in the public domain due to
the security policies of research agencies and industries.
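The six-tetrahedra split of a cubic cell used to build the Plasma64 mesh (and, after deformation, the curvilinear grids) can be sketched as the standard Kuhn decomposition; the corner indexing and helper names below are illustrative, not taken from the paper:

```python
# A minimal sketch of the six-tetrahedra (Kuhn) decomposition of a cubic
# cell. Cube corners are indexed by bits (x + 2y + 4z); every tetrahedron
# shares the main diagonal from corner 0 to corner 7. Each path 000 -> 111
# that sets one bit at a time yields one tetrahedron.

CUBE_TETS = [
    (0, 1, 3, 7), (0, 1, 5, 7),
    (0, 2, 3, 7), (0, 2, 6, 7),
    (0, 4, 5, 7), (0, 4, 6, 7),
]

def corner(i):
    """Coordinates of cube corner i in the unit cube."""
    return (i & 1, (i >> 1) & 1, (i >> 2) & 1)

def tet_volume(a, b, c, d):
    """Unsigned tetrahedron volume via the scalar triple product / 6."""
    u = [b[k] - a[k] for k in range(3)]
    v = [c[k] - a[k] for k in range(3)]
    w = [d[k] - a[k] for k in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
           - u[1] * (v[0] * w[2] - v[2] * w[0])
           + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(det) / 6.0
```

The six tetrahedra tile the cube exactly, each with volume 1/6; the shared diagonal makes the decomposition consistent across neighboring cells.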
The above data sets have been chosen to present a set of
different characteristics. Fighter has a very complex boundary with small scale features (such as thin wings). BluntFin
has an extremely uneven cell distribution. Turbine Blade is a
rather large data set, with many degenerate cells (i.e., cells
with zero volume). Plasma64 is the largest one and its field
distribution contains strong variations of gradient; therefore, it cannot be approximated well with linear patches,
i.e., concise representations are hard to obtain for it. This
feature of the Plasma64 data set is exemplified by the four
isosurfaces depicted in Fig. 7a. All experiments have been
performed on a Pentium III at 800 MHz.
Table 1 shows the statistical parameters characterizing
the four meshes. The average size of an isosurface on these
data sets has been computed by considering nine field
values equally distributed in the data set field range. Note
that if we use standard rendering techniques and low cost
PC hardware, then isosurface visualization of the two
largest data sets often runs at noninteractive frame rates
(especially if displaying several isosurfaces at a time).
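For reference, the per-cell isosurface size on a tetrahedral mesh with a linear field follows the marching-tetrahedra case analysis: a cell crossed by the isovalue contributes one triangle (3-vs-1 vertex split) or two (2-vs-2 split). A small hypothetical helper, not part of TAn2:

```python
# Count the isosurface triangles produced inside one tetrahedron with a
# linear field: the surface crosses the cell iff the isovalue separates
# its vertex values, giving a triangle (3-vs-1) or a quad, i.e., two
# triangles (2-vs-2).

def isosurface_triangles(tet_field_values, isovalue):
    """Number of isosurface triangles contributed by one tetrahedron."""
    below = sum(1 for v in tet_field_values if v < isovalue)
    if below in (0, 4):
        return 0                          # isosurface misses the cell
    return 1 if below in (1, 3) else 2    # marching-tetrahedra cases
```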
TABLE 1
Parameters Characterizing the Meshes Used in the Experiments
Number of vertices and of tetrahedra in the mesh, size (number of triangles) of the triangulated surface bounding the mesh, field range, average size
(number of triangles) of an isosurface.
TABLE 2
Meshes Extracted at a Uniform LOD from an MT for Plasma64 and Turbine
Number of vertices and tetrahedra of meshes are shown, together with time to extract them starting at the base mesh.
TABLE 3
Results of Isosurface Generation from Meshes at a Variable LOD Based on Isovalue from the Plasma64 Data Set
Times are in milliseconds.
Moreover, DVR rendering usually runs at noninteractive
frame rates on all the data sets considered in the
experiments.
The simplification process used to build the MTs (as
described in Section 4.2) is accurate but far from being
interactive. For instance, simplifying the Plasma64 and the
Turbine data sets required about 35 minutes and 13 minutes,
respectively.
Table 2 reports results about the extraction of meshes at a
uniform LOD based on the field error from the MTs of
Plasma64 and Turbine data sets. Meshes are extracted with
the selective refinement algorithm, starting at the base mesh
of the MT, thus in the worst possible situation. Indeed, the
number of updates (vertex splits) necessary in this case to
extract a mesh is roughly equal to the number of its vertices.
Better performances are obtained in a more dynamic
context, where a mesh is extracted starting from another
mesh, larger than the base mesh, thus requiring a smaller
number of updates. On average, we have found that the
selective refinement algorithm can process about 60K
updates per second. This is sufficient to guarantee extraction in semi-interactive time for even the largest mesh we
have considered.
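The update-driven extraction described above can be summarized by a schematic refinement loop. The data model below (a per-update error value and a map of child updates) is a simplification with hypothetical names; the actual MT traversal also resolves mutual dependencies among updates:

```python
# Schematic selective refinement over an MT-like hierarchy: starting from
# a coarse front of updates, apply a vertex split wherever the local
# approximation error exceeds the (possibly region-dependent) threshold,
# then consider the updates it exposes.

from collections import deque

def selective_refine(front, error, children, threshold):
    """front: update ids on the current mesh front.
    error[u]: approximation error if update u is NOT applied.
    children[u]: updates exposed once u is applied.
    threshold(u): accuracy required in the region of update u."""
    queue = deque(front)
    applied = []
    while queue:
        u = queue.popleft()
        if error.get(u, 0.0) > threshold(u):
            applied.append(u)              # perform the vertex split
            queue.extend(children.get(u, ()))
    return applied
```

Starting the loop from an already refined front rather than from the base mesh is what makes the dynamic setting mentioned above cheaper: far fewer updates pass through the queue.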
Table 3 reports results of isosurface generation on
meshes extracted with variable LOD based on isovalue from
the Plasma64 data set. Note that, with an accuracy threshold
Ein ¼ 0:0, we obtain fully accurate isosurfaces, while the
size of meshes from which they are extracted spans between
3 percent (best case) and 58 percent (worst case) of the size
of the reference mesh. Thus, when the change in isosurface
is small, computing isosurfaces with Ein ¼ 0:0 can be faster
than computing them on the reference mesh. Fig. 7b shows
two isosurfaces, together with wire frame of the boundary
of the selectively refined tetrahedral mesh. Note how only
the portion of mesh close to isosurfaces contains cells at
high resolution.
Also, in this case, times for selective refinement are taken
in the worst case, i.e., each refinement is performed starting
at the base mesh. In practical applications, the user
selectively refines a mesh by focusing on a range of
interesting values, then she/he repeatedly extracts and
visualizes isosurfaces from that mesh. Moreover, if the user
decides next to focus on a different, but close, range of
values, then the time necessary for selective refinement will
be much shorter.
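A minimal sketch of the predicate behind variable LOD based on isovalue: a cell needs full resolution only if its field interval, widened by the accuracy threshold, can contain the query isovalue (with Ein = 0.0 the widening vanishes and the extracted isosurface is exact). The function name is illustrative:

```python
# Refinement predicate for variable LOD based on isovalue: refine a cell
# only if the isosurface may cross it once the field error band
# [min - eps, max + eps] is taken into account (eps plays the role of the
# accuracy threshold Ein).

def needs_refinement(cell_min, cell_max, isovalue, eps):
    """True if the isosurface may cross this cell within the error band."""
    return cell_min - eps <= isovalue <= cell_max + eps
```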
If an isosurface intersects a large fraction of tetrahedra,
even a mesh extracted at a variable LOD cannot be very
concise. In this case, variable LOD based on spatial location can
be combined with variable LOD based on isovalue in order to
increase data reduction further, by focusing on a subvolume
Fig. 8. BluntFin data set. Two isosurfaces from a mesh at variable LOD based on spatial location, obtained by using a box as a focus volume (full and
zoomed view).
Fig. 9. The two images show how well the multiresolution model preserves the fine-grain detail on the mesh boundary: No self-intersections or strong
variations from the original surface are introduced on the mesh boundary.
of interest. A focus box, such as those depicted in Fig. 8 and
Fig. 9, can be dragged interactively since only a few
thousand updates are necessary to selectively refine a mesh
after moving the box.
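Combining spatial and field-based criteria amounts to making the error threshold a function of cell position, e.g., a tight bound inside the focus box and a loose one elsewhere. A hedged sketch with hypothetical names:

```python
# Variable LOD based on spatial location: the accuracy threshold depends
# on whether a cell falls inside an axis-aligned focus box, so only the
# subvolume of interest is refined to full resolution.

def in_focus_box(point, box_min, box_max):
    """True if a point (e.g., a cell centroid) lies inside the box."""
    return all(lo <= c <= hi for c, lo, hi in zip(point, box_min, box_max))

def combined_threshold(cell_centroid, box_min, box_max, tight, loose):
    """Tight error bound inside the focus volume, loose elsewhere."""
    return tight if in_focus_box(cell_centroid, box_min, box_max) else loose
```

Dragging the box only changes the threshold field locally, which is why just a few thousand updates suffice to re-refine the mesh.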
The comparison of results presented in Tables 2 and 3
confirms that the availability of tools for refining the mesh
representation only in selected regions, according to criteria
based on field and space, permits us to generate concise
meshes that also give an accurate representation of the data
set. The multiresolution approach shows its benefit especially on large data sets, such as Plasma64 and Turbine, and
at high resolutions.
Fig. 8 shows two isosurfaces extracted from BluntFin by
using variable LOD based on spatial location. The size of the
tetrahedral mesh is about 1/3 of the size of the reference
mesh (64K cells instead of 222K cells) and the error bound
inside the box is 0.01 percent of the field range. Images in
Fig. 9 also refer to variable LOD based on spatial location, but
they just show the boundary mesh of Fighter, with zero
error inside the box. The two meshes are of approximately
the same size, which is about 1/4 of the size of the reference
mesh (16K cells instead of 70K cells) and correspond to
different positions of the focus volume.
In Figs. 10 and 11, more complex examples of rendering
from variable LOD meshes are shown. Fig. 10 shows an
image of the Turbine data set obtained by extracting a mesh
at variable LOD based on spatial location by focusing on a cross
plane. The size threshold is 40K tetrahedra, and the error
threshold is 2.0 percent of field range. The cross plane is
visualized together with one isosurface and the wire frame
of the mesh boundary. In Fig. 11, the same mesh is rendered
with DVR and hybrid DVR and isosurface rendering.
Fig. 10. An image of the Turbine data set produced from a mesh at a
variable LOD based on spatial location focusing on a cross plane.
9 CONCLUDING REMARKS
In this paper, we have addressed the problem of efficiently
visualizing large irregular volume data sets by using a
multiresolution model. The model is a specialization of the
Multi-Tessellation (MT) [9] for tetrahedral meshes simplified through edge-collapse operations. We have defined
LOD queries on an MT suitable to support volume
visualization operations. A new compact data structure
has been provided for a three-dimensional MT built based
on a sequence of general edge collapses, which is about 3 to
5.5 times smaller than a standard data structure encoding
just the reference mesh. We have briefly described TAn2
(Tetrahedra Analyzer 2), an interactive system for visualization of three-dimensional scalar fields that we have
implemented based on the concepts and methods presented
in this paper. The system supports common rendering
techniques for 3D data (isosurface fitting, cross planes and
DVR) and can deal with large irregular data sets thanks to
the multiresolution approach. To our knowledge, TAn2 is
the only volume visualization system supporting online
selective refinement on unstructured meshes and able to
deal with irregular and nonconvex data sets efficiently.
Further possible extensions of the work presented here
are concerned with the support of more sophisticated
rendering features, including the use of more effective
visualization-oriented error metrics, and with the out-of-core management of the multiresolution model.
In particular, out-of-core techniques would allow us to
deal with data sets that are not just too large to be rendered,
but even too large to be stored in main memory. Our future
research work will be devoted to the design and development of data structures for encoding an edge-based MT on
secondary storage and of out-of-core algorithms [6] for its
construction and selective refinement. If such techniques
were available, it would be easy to integrate them in TAn2,
due to the modular architecture of the system, by simply
replacing its Multiresolution Engine.
ACKNOWLEDGMENTS
The authors wish to thank Raffaele Dell’Aversana and
Davide Sobrero for implementing parts of the TAn2 system.
Fig. 11. A DVR image (a) and a hybrid image, which integrates isosurfaces plus DVR (b) from the same mesh used in Fig. 10.
This work has been partially supported by the Research
Training Network EC Project on Multiresolution in Geometric
Modeling (MINGLE), under contract HPRN-CT-1999-00117,
by the EC Project Virtual Planet within the Fifth Framework
Program for Information Society Technologies, under
contract IST-2000-28095, and by the National Project funded
by the Italian Ministry of Education, University and
Research (MIUR) on Representation and Processing of Spatial
Data in Geographic Information Systems.
REFERENCES
[1] Y. Chiang and C.T. Silva, “I/O Optimal Isosurface Extraction,” Proc. IEEE Visualization ’97, R. Yagel and H. Hagen, eds., 1997.
[2] Y. Chiang and C.T. Silva, “Interactive Out-of-Core Isosurface Extraction,” Proc. IEEE Visualization ’98, pp. 167-174, 1998.
[3] P. Cignoni, D. Costanza, C. Montani, C. Rocchini, and R. Scopigno, “Simplification of Tetrahedral Volume with Accurate Error Evaluation,” Proc. IEEE Visualization 2000, pp. 85-92, 2000.
[4] P. Cignoni, L. De Floriani, C. Montani, E. Puppo, and R. Scopigno, “Multiresolution Modeling and Rendering of Volume Data Based on Simplicial Complexes,” Proc. 1994 Symp. Volume Visualization, pp. 19-26, 1994.
[5] P. Cignoni, C. Montani, E. Puppo, and R. Scopigno, “Multiresolution Modeling and Visualization of Volume Data,” IEEE Trans. Visualization and Computer Graphics, vol. 3, no. 4, pp. 352-369, Oct.-Dec. 1997.
[6] P. Cignoni, C. Montani, C. Rocchini, and R. Scopigno, “External Memory Management and Simplification of Huge Meshes,” IEEE Trans. Visualization and Computer Graphics, to appear.
[7] J. Comba, J.T. Klosowski, N. Max, J.S.B. Mitchell, C.T. Silva, and P.L. Williams, “Fast Polyhedral Cell Sorting for Interactive Rendering of Unstructured Grids,” Computer Graphics Forum (Eurographics ’99 Conf. Issue), vol. 18, no. 3, pp. C369-C376, 1999.
[8] L. De Floriani, P. Magillo, F. Morando, and E. Puppo, “Dynamic View-Dependent Multiresolution on a Client-Server Architecture,” CAD J., vol. 32, no. 13, pp. 805-823, 2000.
[9] L. De Floriani, E. Puppo, and P. Magillo, “A Formal Approach to Multiresolution Modeling,” Geometric Modeling: Theory and Practice, R. Klein, W. Straßer, and R. Rau, eds., Springer-Verlag, 1997.
[10] M. Deering, “Geometry Compression,” Computer Graphics Proc., Ann. Conf. Series (SIGGRAPH ’95), pp. 13-20, 1995.
[11] M. Duchaineau, M. Wolinsky, D.E. Sigeti, M.C. Miller, C. Aldrich, and M.B. Mineev-Weinstein, “ROAMing Terrain: Real-Time Optimally Adapting Meshes,” Proc. IEEE Visualization ’97, pp. 81-88, 1997.
[12] B. Hamann, E. LaMar, and K.I. Joy, “High-Quality Rendering of Smooth Isosurfaces,” J. Visualization and Computer Animation, vol. 10, pp. 79-90, 1999.
[13] J. El-Sana and A. Varshney, “Generalized View-Dependent Simplification,” Computer Graphics Forum, vol. 18, no. 3, pp. C83-C94, 1999.
[14] R. Farias, J.S. Mitchell, C.T. Silva, and B. Wylie, “Time-Critical Rendering of Irregular Grids,” Proc. Sibgrapi 2000 (XIII Brazilian Symp. Computer Graphics and Image Processing), pp. 243-250, 2000.
[15] R. Farias and C.T. Silva, “Out-of-Core Rendering of Large, Unstructured Grids,” IEEE Computer Graphics and Applications, vol. 21, no. 4, pp. 42-50, July/Aug. 2001.
[16] L. De Floriani and P. Magillo, “Multiresolution Meshes,” Principles of Multiresolution in Geometric Modeling—PRIMUS Summer School, Munich, Aug. 2001.
[17] M. Garland, “Multiresolution Modeling: Survey & Future Opportunities,” Eurographics ’99—State of the Art Reports, pp. 111-131, 1999.
[18] T. Gerstner and R. Pajarola, “Topology Preserving and Controlled Topology Simplifying Multiresolution Isosurface Extraction,” Proc. IEEE Visualization 2000, pp. 259-266, 2000.
[19] M.H. Gross and O.G. Staadt, “Progressive Tetrahedralizations,” Proc. IEEE Visualization ’98, pp. 397-402, 1998.
[20] R. Grosso, C. Luerig, and T. Ertl, “The Multilevel Finite Element Method for Adaptive Mesh Optimization and Visualization of Volume Data,” Proc. IEEE Visualization ’97, R. Yagel and H. Hagen, eds., pp. 387-394, 1997.
[21] A. Guéziec, G. Taubin, F. Lazarus, and W. Horn, “Simplicial Maps
for Progressive Transmission of Polygonal Surfaces,” Proc. ACM
Int’l Conf. Virtual Reality and Modeling Language (VRML ’98), pp.
25-31, 1998.
[22] I. Ihm and S. Park, “Wavelet-Based 3D Compression Scheme for
Very Large Volume Data,” Proc. Graphics Interface, pp. 107-116,
1998.
[23] T. Kim and Y. Shin, “An Efficient Wavelet-Based Compression Method for Volume Rendering,” Proc. Pacific Graphics 1999, pp. 147-157, 1999.
[24] M. Lee, L. De Floriani, and H. Samet, “Constant-Time Neighbor
Finding in Hierarchical Meshes,” Proc. Int’l Conf. Shape Modeling,
2001.
[25] P. Lindstrom, “Out-of-Core Simplification of Large Polygonal
Models,” Computer Graphics Proc., Ann. Conf. Series (SIGGRAPH
2000), pp. 259-262, 2000.
[26] D. Luebke and C. Erikson, “View-Dependent Simplification of
Arbitrary Polygonal Environments,” ACM Computer Graphics
Proc., Ann. Conf. Series (SIGGRAPH ’97), pp. 199-207, 1997.
[27] S. Muraki, “Volume Data and Wavelet Transforms,” IEEE
Computer Graphics and Applications, vol. 13, no. 4, pp. 50-56, July
1993.
[28] M. Ohlberger and M. Rumpf, “Adaptive Projection Operators in
Multiresolution Scientific Visualization,” IEEE Trans. Visualization
and Computer Graphics, vol. 5, no. 1, pp. 74-93, Jan.-Mar. 1999.
[29] V. Pascucci and C.L. Bajaj, “Time Critical Isosurface Refinement
and Smoothing,” Proc. 2000 IEEE Symp. Volume Visualization,
pp. 33-42, 2000.
[30] K. Perlin and E.M. Hoffert, “Hypertexture,” Computer Graphics
(SIGGRAPH ’89 Proc.), vol. 23, no. 3, pp. 253-262, 1989.
[31] J. Popovic and H. Hoppe, “Progressive Simplicial Complexes,”
ACM Computer Graphics Proc., Ann. Conf. Series (SIGGRAPH ’97),
pp. 217-224, 1997.
[32] R. Shekhar, E. Fayyad, R. Yagel, and J. Cornhill, “Octree-Based
Decimation of Marching Cubes Surfaces,” Proc. IEEE Visualization
’96, R. Yagel and G. Nielson, eds., pp. 335-344, 1996.
[33] P. Shirley and A. Tuchman, “A Polygonal Approximation to
Direct Scalar Volume Rendering,” Computer Graphics (San Diego
Workshop Volume Visualization), vol. 24, no. 5, pp. 63-70, 1990.
[34] D.E. Sigeti, B.F. Gregorski, J. Ambrosiano, G. Graham, M.A.
Duchaineau, B. Haman, and K.I. Joy, “Approximating Material
Interfaces in Two- and Three-Dimensional Meshes during Data
Simplification,” Proc. NSF/DoE Lake Tahoe Workshop Hierarchical
Approximation and Geometrical Methods for Scientific Visualization,
2000.
[35] I.J. Trotts, B. Hamann, and K.I. Joy, “Simplification of Tetrahedral
Meshes with Error Bounds,” IEEE Trans. Visualization and
Computer Graphics, vol. 5, no. 3, pp. 224-237, 1999.
[36] R. Westermann, L. Kobbelt, and T. Ertl, “Real-Time Exploration of
Regular Volume Data by Adaptive Reconstruction of Isosurfaces,”
The Visual Computer, pp. 100-111, 1999.
[37] J. Wilhelms and A. van Gelder, “Multi-Dimensional Trees for
Controlled Volume Rendering and Compression,” Proc. 1994
Symp. Volume Visualization, pp. 27-34, 1994.
[38] P.L. Williams, N.L. Max, and C.M. Stein, “A High Accuracy
Volume Renderer for Unstructured Data,” IEEE Trans. Visualization and Computer Graphics, vol. 4, no. 1, pp. 37-54, Jan.-Mar. 1998.
[39] J.C. Xia, J. El-Sana, and A. Varshney, “Adaptive Real-Time Level-of-Detail-Based Rendering for Polygonal Models,” IEEE Trans. Visualization and Computer Graphics, vol. 3, no. 2, pp. 171-183, 1997.
[40] C. Yang, T. Mitra, and T. Chiueh, “On-the-Fly Rendering of
Losslessly Compressed Irregular Volume Data,” Proc. IEEE
Visualization 2000, T. Ertl, B. Hamann, and A. Varshney, eds.,
pp. 101-108, 2000.
[41] Y. Zhou, B. Chen, and A. Kaufman, “Multiresolution Tetrahedral
Framework for Visualizing Regular Volume Data,” Proc. IEEE
Visualization ’97, R. Yagel and H. Hagen, eds., pp. 135-142, 1997.
Paolo Cignoni received the advanced degree
(Laurea) in 1992 and the PhD degree in
computer science in 1998 from the University
of Pisa. He is a research scientist at the
Istituto Scienza e Tecnologie dell’Informazione
(formerly I.E.I.-CNR) of the National Research
Council in Pisa, Italy. His research interests
include computational geometry and its interaction with computer graphics, scientific visualization, 3D scanning, simplification and
multiresolution.
Leila De Floriani received the Laurea degree in
mathematics from the University of Genova in
1977. She has been a professor of computer
science at the University of Genova since 1990,
where she leads the Geometric Modeling and
Computer Graphics Group. She is currently a
visiting professor in the Department of Computer
Science of the University of Maryland. She has
written more than 140 publications in the fields of
solid and geometric modeling, computer graphics, geographic data processing, and image analysis. Her current
research interests include multiresolution geometric modeling, shape
modeling, geometric algorithms and data structures for applications in
scientific visualization, virtual reality, and Geographical Information
Systems. She is a member of the ACM and of the IEEE Computer
Society and a fellow of the International Association for Pattern
Recognition (IAPR).
Paola Magillo received the advanced degree in
computer science from the University of Genova, Italy, in 1992, and the PhD degree in
computer science from the same university in
1999. Since 1994, she has been working with
the research group in Geometric Modeling and
Computer Graphics at the Department of Computer and Information Sciences (DISI) of the
University of Genova, where she became an
assistant professor in 1996. Her research interests include computational geometry, geometric modeling, geographic
information systems, and computer graphics.
Enrico Puppo received the Laurea degree in
mathematics from the University of Genova,
Italy, in March 1986. He is a professor of
computer science in the Department of Computer and Information Sciences (DISI) of the
University of Genova. He has written more than
70 technical publications on the subjects of
algorithms and data structures for spatial data
handling, geometric modeling, computational
geometry, parallel algorithms, and image processing. His current research interests are in multiresolution modeling,
geometric algorithms and data structures, and object reconstruction,
with applications to virtual reality, Geographical Information Systems,
and scientific visualization. He is a member of the ACM, IEEE Computer
Society, and International Association for Pattern Recognition (IAPR).
Roberto Scopigno received the advanced
degree (Laurea) in computer science from the
University of Pisa in 1984. He is a senior
research scientist at the Istituto Scienza e
Tecnologie dell’Informazione (formerly CNUCE-CNR) of the National Research Council in Pisa,
Italy. He is currently engaged in research
projects concerned with scientific visualization,
volume rendering, web-based graphics, multiresolution data modeling and rendering, 3D
scanning, and applications of 3D computer graphics to cultural heritage.
He is a member of the IEEE Computer Society, Eurographics, and
Siggraph. Since 2001, he has been the co-editor-in-chief of Computer
Graphics Forum.