Lawrence Livermore National Laboratory
U.S. Department of Energy
Preprint
UCRL-JRNL-204661
A Topological Hierarchy
for Functions on
Triangulated Surfaces
Peer-Timo Bremer, Herbert Edelsbrunner, Bernd
Hamann, Valerio Pascucci
This article was submitted to IEEE Transactions on Visualization
and Computer Graphics
May 4, 2004
Approved for public release; further dissemination unlimited
DISCLAIMER
This document was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor the University of California nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or the University of California, and shall not be used for advertising or product endorsement purposes.
This is a preprint of a paper intended for publication in a journal or proceedings. Since changes may be
made before publication, this preprint is made available with the understanding that it will not be cited or
reproduced without the permission of the author.
A Topological Hierarchy for Functions on
Triangulated Surfaces
Peer-Timo Bremer, Member, IEEE, Herbert Edelsbrunner,
Bernd Hamann, Member, IEEE, and Valerio Pascucci, Member, IEEE
Abstract—We combine topological and geometric methods to construct a multiresolution representation for a function over a two-
dimensional domain. In a preprocessing stage, we create the Morse-Smale complex of the function and progressively simplify its
topology by cancelling pairs of critical points. Based on a simple notion of dependency among these cancellations, we construct a
hierarchical data structure supporting traversal and reconstruction operations similarly to traditional geometry-based representations.
We use this data structure to extract topologically valid approximations that satisfy error bounds provided at runtime.
Index Terms—Critical point theory, Morse-Smale complex, terrain data, simplification, multiresolution data structure.
1 INTRODUCTION
The efficient construction of topologically and geometrically simplified models is a central problem in
visualization. This paper describes a hierarchical data
structure representing the topology of a continuous func-
tion on a triangulated surface. Examples of such data are
the distribution of the electrostatic potential on a molecular
surface or elevation data on a sphere (e.g., the Earth). The
complete topology of the function is computed and encoded
in a hierarchy that provides fast and consistent access to
adaptive topological simplifications. Additionally, the
hierarchy includes geometrically consistent approximations
of the function corresponding to any topological refine-
ment. In the special case of a planar domain, the function
can be thought of as elevation and the graph of the function
as a surface in three-dimensional space. In this case, our
framework creates a topology-based hierarchy of the
geometry of this surface.
1.1 Motivation
Scientific data often consists of measurements over a
geometric domain or space. We can think of it as a discrete
sample of a continuous function over the space. We are
interested in the case in which the space is a triangulated
surface (with or without boundary).
A hierarchical representation is crucial for efficient and
preferably interactive exploration of scientific data. The
traditional approach to constructing such a representation is
based on progressive data simplification driven by a
numerical error measurement. Alternatively, we may drive
the simplification process with measurements of topological
features. Such an approach is appropriate if topological
features and their spatial relationships are more essential
than geometric error bounds to understand the phenomena
under investigation. An example is water flow over a terrain,
which is influenced by possibly subtle slopes. Small but
critical changes in elevation may result in catastrophic
changes in water flow and accumulation. Thus, our approach
is distinctly different from one that is purely driven by
numerical approximation error. It ensures that the topology
of a function is preserved as long as possible during a
simplification process, which is not necessarily the case with
simplification methods driven by approximation error.
1.2 Related Work
The topological analysis of scalar valued scientific data has
been a long-standing research focus. Morse-theory-related
methods had already been developed in the 19th century
[1], [2], long before Morse theory itself was formulated, and
hierarchical representations have been proposed [3], [4]
without making use of the mathematical framework
developed by Morse and others [5], [6]. However, most of
this research was lost and has been rediscovered only
recently. Most modern research in the area of multi-
resolution structures is geometric and many techniques
have been developed during the last decade. The most
successful algorithms developed in that era are based on
edge contraction as the fundamental simplifying operation
[7], [8] and accumulated square distances to plane con-
straints as the error measure [9], [10]. This work focused on
triangulated surfaces embedded in three-dimensional
Euclidean space, which we denote as IR³. We find a similar
focus in the successive attempts to include the capability to
change the topological type [11], [12].
In the field of flow visualization, topological analysis and
topology-based simplification originate with the work of
Helman and Hesselink [13]. They showed how to find and
classify critical points in flow fields and proposed a
structure similar to the Morse-Smale complex to analyze
IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, VOL. 10, NO. 4, JULY/AUGUST 2004 385
. P.-T. Bremer and B. Hamann are with the Center for Image Processing and
Integrated Computing, Department of Computer Science, University of
California, Davis, One Shields Ave., Davis, CA 95616.
E-mail: tbremer@ucdavis.edu.
. H. Edelsbrunner is with the Computer Science Department, Duke
University, Box 90129, Durham, NC 27708.
. V. Pascucci is with the Center for Applied Scientific Computing, Lawrence
Livermore National Laboratory, Box 808, L-560, Livermore, CA 94551.
Manuscript received 30 Sept. 2003; revised 16 Jan. 2004; accepted 11 Feb.
2004.
For information on obtaining reprints of this article, please send e-mail to:
tvcg@computer.org, and reference IEEECS Log Number TVCGSI-0093-0903.
1077-2626/04/$20.00 © 2004 IEEE Published by the IEEE Computer Society
vector fields. Later methods to simplify this complex based
on different error bounds have been developed [14], [15],
[16]. Unfortunately, computing such a complex relies on
numerical integration along inherently unstable regions of
the vector field and is therefore limited to relatively small and
clean data sets. For the simpler case of piecewise linear scalar
valued functions (whose gradients define a piecewise
constant flow-field), we compute the topology in a symbolic
manner which is robust even in degenerate cases. Therefore,
we can compute Morse-Smale complexes for data sets with
tens of thousands of critical points compared to hundreds of
critical points in commonly used vector fields. Unlike the
method in [14], we maintain a consistent geometric approx-
imation of the topology. Furthermore, we avoid creating
higher-order criticalities as is done in [15]. Additionally, our
error bound is directly linked to the approximation error, see
Section 5.1, and we provide a multiresolution hierarchy
rather than a simplification strategy.
To remove (spurious) topological features from all level
sets simultaneously, we interpret the critical points of the
function as the culprits responsible for topological features
that appear in the level sets [17], [18]. While sweeping
through the level sets we see that critical points indeed start
and end such features and we use the length of the interval
over which a feature exists as a measure of its importance.
For the special case of two-dimensional height fields, this
measure was first proposed by Horman [19] and later
adopted by Mark [20]. We use the more general concept of
persistence introduced in [21], where the Morse-Smale
complex of the function domain occupies a central position.
Its construction and simplification is studied for 2-mani-
folds in [22] and for 3-manifolds in [23].
1.3 Results
We follow the approach taken in [22], with some crucial
differences and extensions. Given a piecewise linear
function over a triangulated domain, we:
1. Construct a decomposition of the domain into
monotonic quadrangular regions by connecting
critical points with lines of steepest descent,
2. Simplify the decomposition by performing a se-
quence of cancellations ordered by persistence, and
3. Turn the simplification into a hierarchical multi-
resolution data structure whose levels correspond to
simplified versions of the function.
The first two steps are discussed in [22], but the third step is
new. Nevertheless, this paper makes original contributions
to all three steps and in the application of the data structure
to concrete scientific problems. These contributions are:
1. a modification of the algorithm of [22] that con-
structs the Morse-Smale complex without handle
slides,
2. the simplification of the complex by simultaneous
application of independent cancellations,
3. a numerical algorithm to approximate the simplified
function,
4. a shallow multiresolution data structure combining
the simplified functions into a single hierarchy,
5. an algorithm for traversing the data structure that
combines different levels of the hierarchy to con-
struct adaptive simplifications, and
6. the application of our method to various data sets.
The hallmark of our method is the fusion of the geometric
and topological approaches to multiresolution representa-
tions. The entire process is controlled by topological
considerations and geometric methods are used to realize
monotonic paths and patches. The latter play a crucial but
subordinate role in the overall algorithm.
2 BACKGROUND
We describe an essentially combinatorial algorithm based on
intuitions provided by investigations of smooth maps. In this
section, we describe the necessary background, in Morse
theory [6], [24] and in combinatorial topology [25], [26].
2.1 Morse Functions
Throughout this paper, IM denotes a compact 2-manifold without boundary and f : IM → IR denotes a real-valued smooth function over IM. Assuming a local coordinate system at a point a ∈ IM, we compute two partial derivatives and call a critical when both are zero and regular otherwise. Examples of critical points are maxima (f
decreases in all directions), minima (f increases in all
directions), and saddles (f switches between decreasing
and increasing four times around the point).
Using the local coordinates at a, we compute the Hessian of
f, which is the matrix of second partial derivatives. A critical
point is nondegenerate when the Hessian is nonsingular, which
is a property that is independent of the coordinate system.
According to the Morse Lemma, it is possible to construct a local coordinate system such that f has the form f(x_1, x_2) = f(a) ± x_1² ± x_2² in a neighborhood of a nondegenerate critical point a. The number of minus signs is the index of a and distinguishes the different types of critical points: minima have index 0, saddles have index 1, and maxima have index 2.
Technically, f is a Morse function when all its critical points are
nondegenerate and have pairwise different function values.
Most of the challenges in our method are rooted in the need to
enforce these conditions for given functions that do not satisfy
them originally.
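For illustration only (not part of the paper's method), the index classification of a nondegenerate critical point can be computed numerically from the Hessian; the helper `hessian_2x2`, the finite-difference step `h`, and the degeneracy tolerance are our own choices:

```python
import math

def hessian_2x2(f, a, h=1e-4):
    """Central finite-difference Hessian of a bivariate f at point a = (x, y)."""
    x, y = a
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return fxx, fxy, fyy

def morse_index(f, a, h=1e-4, tol=1e-6):
    """Number of negative Hessian eigenvalues at a critical point a:
    0 = minimum, 1 = saddle, 2 = maximum.  Returns None when an eigenvalue
    is numerically zero, i.e. the point is degenerate and the Morse Lemma
    does not apply."""
    fxx, fxy, fyy = hessian_2x2(f, a, h)
    # closed-form eigenvalues of the symmetric 2x2 matrix [[fxx, fxy], [fxy, fyy]]
    mean = (fxx + fyy) / 2.0
    spread = math.sqrt(((fxx - fyy) / 2.0) ** 2 + fxy**2)
    eigs = (mean - spread, mean + spread)
    if any(abs(e) < tol for e in eigs):
        return None  # degenerate critical point
    return sum(1 for e in eigs if e < 0)
```

For example, x² − y² has index 1 at the origin (a saddle), while the monkey saddle x³ − 3xy² is reported as degenerate.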
2.2 Morse-Smale Complexes
Assuming a Riemannian metric and an orthonormal local
coordinate system, the gradient at a point a of the manifold
is the vector of partial derivatives. The gradient of f forms a
smooth vector field on IM , with zeros at the critical points.
At any regular point, we have a nonzero gradient vector
and, when we follow that vector, we trace out an integral
line which starts at a critical point and ends at a critical
point while technically not containing either of them. Since
integral lines ascend monotonically, the two endpoints
cannot be the same. Because f is smooth, two integral lines
are either disjoint or the same. The set of integral lines
covers the entire manifold, except for the critical points. The
descending manifold DðaÞ of a critical point a is the set of
points that flow toward a. More formally, it is the union of a
and all integral lines that end at a. For example, the
descending manifold of a maximum is an open disk, that of
a saddle is an open interval, and that of a minimum is the
point itself. The collection of descending manifolds is a complex
in the sense that the boundary of a cell is the union of lower-
dimensional cells. Symmetrically, we define the ascending
manifold AðaÞ of a as the union of a and all integral lines that
start at a.
For the next definition, we need an additional non-
degeneracy condition, namely, that ascending and descend-
ing manifolds that intersect do so transversally. For
example, if an ascending 1-manifold intersects a descending
one, then they cross. Due to the disjointness of integral lines,
this implies that the crossing is a single point, namely, the
saddle common to both. Assuming that this transversality
property is satisfied, we overlay the two complexes and
obtain what we call the Morse-Smale complex, or MS
complex, of f. Its cells are the connected components of
the intersections between ascending and descending mani-
folds. Its vertices are the vertices of the two overlayed
complexes, which are the minima and maxima of f,
together with the crossing points of ascending and
descending 1-manifolds, which are the saddles. Each
1-manifold is split at its saddle, thus contributing two arcs
to the MS complex. Each saddle is the endpoint of four arcs,
which alternately ascend and descend around the saddle.
Finally, each region has four sides, namely, two arcs
emanating from a minimum and ending at two saddles
and two additional arcs continuing from the saddles to a
common maximum. It is generically possible that the two
saddles are the same, in which case, two of the four arcs
merge into one. The region lies on both sides of the merged
arc, so it makes sense to double-count and to maintain that
the region has four sides. An example is shown in the center
of Fig. 1.
2.3 Piecewise Linear Functions
Functions occurring in scientific applications are rarely
smooth and mostly known only at a finite set of points
spread out over a manifold. It is convenient to assume that
the function has pairwise different values at these points.
We assume that the points are the vertices of a triangulation
K of IM and we extend the function values by piecewise
linear interpolation applied to the edges and triangles of K.
The star of a vertex u consists of all simplices (vertices,
edges, and triangles) that contain u and the link consists of
all faces of simplices in the star that are disjoint from u.
Since the surface defined by K is a 2-manifold, the link of
every vertex is a topological circle. The lower star contains
all simplices in the star for which u is the highest vertex,
and the lower link contains all simplices in the link whose
endpoints are lower than u. Note that the lower link is the
subset of simplices in the link that are faces of simplices in
the lower star. Topologically, the lower link is a subset of a
circle. Following [27], we define what we mean by a critical
point of a piecewise linear function based on the lower link.
As illustrated in Fig. 2, the lower link of a maximum is the entire link and that of a minimum is empty. In all other cases, the lower link of u consists of k + 1 ≥ 1 connected pieces, each being an arc or, possibly, a single vertex. The vertex u is regular if k = 0 and a k-fold saddle if k ≥ 1. As illustrated in Fig. 2 for k = 2, a k-fold saddle can be split into k simple or 1-fold saddles.
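This lower-link classification can be sketched as follows; the sketch is our illustrative reading of the rule above, with `classify_vertex` and its input convention (link heights listed in cyclic order) our own:

```python
def classify_vertex(height_u, link_heights):
    """Classify a vertex of a PL function on a triangulated 2-manifold
    from the heights of its link vertices, given in cyclic order.
    Assumes pairwise distinct height values (the PL Morse condition)."""
    lower = [h < height_u for h in link_heights]
    if not any(lower):
        return "maximum"   # lower link is empty
    if all(lower):
        return "minimum"   # lower link is the entire link
    # count the connected arcs of the lower link around the circle:
    # an arc starts wherever a lower vertex follows a non-lower one
    n = len(lower)
    arcs = sum(1 for i in range(n) if lower[i] and not lower[i - 1])
    if arcs == 1:
        return "regular"
    return f"{arcs - 1}-fold saddle"   # k + 1 arcs -> k-fold saddle
```

Counting arc starts with a wraparound comparison (`lower[i - 1]` with `i = 0` reads the last entry) handles the cyclic structure of the link without special cases.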
2.4 Persistence
We require a numerical measure of the importance of
critical points that can be used to drive the simplification of
an MS complex. For this purpose, we pair up critical points
and use the absolute difference between their heights as
importance measure. To construct the critical point pairs,
we imagine sweeping the 2-manifold IM in the direction of
increasing height. This view is equivalent to sorting the
vertices by height and incrementally constructing the
triangulation K of IM one lower star at a time. The topology
of the partial triangulation changes whenever we add a
critical vertex and it remains unchanged whenever we add
a regular vertex. Except for some exceptional cases that
have to do with the surface type of IM , each change either
creates a component or an annulus or it destroys a component
(by merging two) or an annulus (by filling the hole). We
pair a vertex v that destroys with the vertex u that created what v destroys. The persistence of u and of v is the delay between the two events: p = f(v) − f(u). An algebraic justification of this definition and a fast algorithm for constructing the pairs can be found in [21].
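The sweep-and-pair construction can be illustrated for component events with a standard union-find sketch. This covers only the minimum-saddle pairs (component creation and merging), not the annulus events, and the function name and graph input format are our own, not the algorithm of [21]:

```python
def persistence_pairs(heights, edges):
    """0-dimensional persistence of the sublevel-set filtration of a
    function sampled on graph vertices, via union-find and the elder rule.
    Returns (creator, destroyer) vertex pairs: each destroyer merges the
    component created at the paired minimum into an older component.  The
    global minimum of each connected component remains unpaired."""
    n = len(heights)
    order = sorted(range(n), key=lambda v: heights[v])
    adj = {v: [] for v in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    parent = {}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    pairs = []
    for v in order:
        roots = {find(w) for w in adj[v] if w in parent}
        parent[v] = v
        if not roots:
            continue  # v is a local minimum: it creates a component
        oldest = min(roots, key=lambda r: heights[r])
        parent[v] = oldest
        for r in roots:
            if r != oldest:  # the younger component dies at v (elder rule)
                pairs.append((r, v))
                parent[r] = oldest
    return pairs
```

On the height sequence 0, 2, 1, 3 along a path graph, the minimum at index 2 is paired with the merge vertex at index 1, giving persistence 2 − 1 = 1.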
3 MORSE-SMALE COMPLEX
We introduce an algorithm for computing the MS complex
of a function f defined over a triangulation K. In particular,
we compute the ascending and descending 1-manifolds
(paths) of f starting from the saddles and use them to
partition K into quadrangular regions which define the MS
complex.
3.1 Path Construction
Starting from each saddle, we construct two lines of steepest
ascent and two lines of steepest descent. We do not adopt the
original algorithm proposed in [22] and follow actual lines of
maximal slope instead of edges of K. In particular, we split
triangles to create new edges in the direction of the gradient.
We modify this basic strategy to avoid regions with
disconnected interior and regions whose interior does not
touch both saddles. Without the modification, such regions
Fig. 1. The folded quadrangle in the middle of this MS complex has two
boundary arcs glued to each other.
Fig. 2. Classification of a vertex based on relative height of vertices in its
link. The lower link is marked black.
may be created because f is not smooth and integral lines can
merge. Fig. 3a shows one such case, where paths merge at
junctions and disconnect the interior of a region into two. The
modification that eliminates the two undesired configura-
tions consists of disallowing two paths to merge if they are of
different type; see Fig. 3b. Two paths are still allowed to
merge if they are both ascending or both descending. If two
paths are not allowed to merge, we split one edge of the
triangulation and introduce a new sample with function
value that preserves the structure of the MS complex, but
locally avoids the junction. Fig. 4 shows the repeated
application of this strategy to avoid a junction. Note that,
once two paths have merged, they never separate.
After computing all paths, we partition K into quad-
rangular regions forming the cells of the MS complex.
Specifically, we grow each quadrangle from a triangle
incident to a saddle without ever crossing a path.
In degenerate areas of IM , where several vertices may
have the same function value, the greedy choices of local
steepest ascent/descent may not work consistently. We
address this problem using the simulation of simplicity (SoS) [28]. We orient each edge of K in the direction of
ascending function value. Vertex indices are used to break
ties on flat edges such that the resulting directed graph has
no cycles. This simulates a set of arbitrarily small perturba-
tions resolving all degeneracies. Using these orientations,
the search for the steepest path is transformed to a
weighted-graph search and function values are only used
as preferences. Thus, our algorithm is robust even for highly degenerate data sets such as the one shown in Fig. 5.
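A minimal sketch of this tie-breaking rule, assuming vertices carry integer indices and 2D positions; the helper names are ours, and the real implementation operates on the triangulation (splitting triangles as needed) rather than on a generic graph:

```python
import math

def is_higher(u, v, values):
    """Simulated-perturbation comparison: u counts as higher than v when
    (f(u), u) > (f(v), v) lexicographically.  Ties in function value are
    broken by vertex index, so the induced edge orientation is acyclic
    even on completely flat regions."""
    return (values[u], u) > (values[v], v)

def steepest_ascent_step(u, neighbors, values, positions):
    """One step of a path of locally steepest ascent from vertex u.
    Among 'higher' neighbors, pick the one of maximal slope; index order
    breaks exact slope ties.  Returns None when u is a (simulated) maximum."""
    best, best_key = None, None
    for v in neighbors[u]:
        if not is_higher(v, u, values):
            continue
        dist = math.dist(positions[u], positions[v])
        slope = (values[v] - values[u]) / dist if dist > 0 else math.inf
        key = (slope, -v)  # deterministic tie-break on equal slopes
        if best_key is None or key > best_key:
            best, best_key = v, key
    return best
```

On a flat edge the slope is zero but the step is still well defined, which is exactly what makes the graph search robust on degenerate data.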
3.2 Diagonals and Diamonds
The central element of our data structure for the
MS complex is the neighborhood of a simple saddle or,
equivalently, the halves of the quadrangles that share the
saddle as one of their vertices. To be more specific about the
halves, recall that, in the smooth case, each quadrangle
consists of integral lines that emanate from its minimum
and end at its maximum. Any one of these integral lines can
be chosen as diagonal to decompose the quadrangle into two
triangles. The triangles sharing a given saddle form the
diamond centered at the saddle. As illustrated in Fig. 8a,
each diamond is a quadrangle whose vertices alternate
between minima and maxima around the saddle in its
center. It is possible that two vertices are the same and the
boundary of the diamond is glued to itself along two
consecutive diagonals.
3.3 The Algorithm
We compute the descending paths starting from the highest
saddle and the ascending paths starting from the lowest
saddle. Thus, when two paths aim for the same extremum,
the one with higher persistence (importance) is computed
first. The boundary of the data set is artificially tagged as a
path. The complete algorithm is summarized in Fig. 6.
4 HIERARCHY
Our main objective is the design of a hierarchical data
structure that supports adaptive coarsening and refinement
of the data. In this section, we describe such a data structure
and discuss how to use it.
4.1 Cancellations
We use only one atomic operation to simplify the MS
complex of a function, namely, a cancellation that eliminates
two critical points. The inverse operation that creates two
critical points is referred to as an anticancellation. In order to
cancel two critical points, they must be adjacent in the
MS complex. Only two possible combinations arise: a
minimum and a saddle or a saddle and a maximum. The
two configurations are symmetric and we can limit the
discussion to the second case, which is illustrated in Fig. 7.
Fig. 3. Portion of the MS complex of a piecewise linear function. Since
the gradient is not continuous, ascending (solid) and descending
(dotted) paths can meet in junctions and share segments. (a) Complex
with no restrictions on sharing segments. The green region touches only
one saddle and the red one is disconnected. (b) Only paths of the same
type can meet. The interior of each region is connected and touches
both saddles.
Fig. 4. Triangle split to keep two paths separated. Solid red lines indicate
two portions of paths already computed. (a) The red circle is the current
extremum of a path that would follow the red dotted line. (b) The path is
extended splitting a first triangle. (c) Since the two paths would still
intersect, a second triangle is split.
Fig. 5. MS complex of degenerate data set. The “volcano” is created by
the rotational sweep of a function that is flat both inside the “crater” and
at the foot of the mountain. (a) Originally computed MS complex. A large
number of critical points are created by eliminating flat regions using
simulation of simplicity. (b) The same complex after removing symbolic
features with zero persistence.
Let u be the saddle and v the maximum of the cancelled
pair and let w be the other maximum connected to u. We require w ≠ v and f(w) > f(v); otherwise, we prohibit the
cancellation of u and v. We view the cancellation as merging
three critical points into one, namely, u, v, w into w. All
paths ending at either u, v, or w are removed and we adapt
the local geometry to the new topology, as described in
Section 5. Subsequently, all paths that were connected to
either maximum are recomputed. In other words, we
connect every saddle on the boundary of the geometrically
adapted region to the unique maximum within the region.
To avoid excessive splitting of the triangulation, we restrict
the recomputed paths to share edges of the triangulation.
There are several reasons for requiring f(w) > f(v): It
implies that all recomputed paths remain monotonic and
ensures that we do not eliminate any level sets, except that
the ones between f(u) and f(v) are simplified. We may
think of a cancellation as deleting the two descending paths
of u and contracting the two ascending paths of u.
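The legality test and the merge of u, v, w into w can be sketched combinatorially. The dictionary representation below (each saddle mapped to its two connected maxima) is our simplification of the actual diamond structure, and path recomputation is omitted:

```python
def can_cancel(u, v, saddle_maxima, values):
    """Check whether saddle u may be cancelled with maximum v.
    saddle_maxima[u] lists the two maxima joined to u by ascending paths.
    The cancellation is prohibited when both paths reach the same maximum
    (w == v, a folded diamond on the maximum side) or when the surviving
    maximum w is not higher than v, i.e. f(w) <= f(v)."""
    a, b = saddle_maxima[u]
    w = b if a == v else a
    return w != v and values[w] > values[v]

def cancel(u, v, saddle_maxima, values):
    """Merge u, v, and the surviving maximum w into w: the saddle u drops
    out of the complex and every other saddle that reached v is
    reconnected to w.  Returns the surviving maximum."""
    assert can_cancel(u, v, saddle_maxima, values)
    a, b = saddle_maxima.pop(u)
    w = b if a == v else a
    for s, maxima in saddle_maxima.items():
        saddle_maxima[s] = [w if m == v else m for m in maxima]
    return w
```

After the merge a remaining saddle may list the same maximum twice, which is exactly the folded-diamond configuration discussed in Section 3.2.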
4.2 Node Removal
We construct the multiresolution data structure from
bottom to top. The bottom layer stores the MS complex of
the function f or, to be more precise, the corresponding
decomposition of the 2-manifold into diamonds. Fig. 8b
illustrates this layer by showing each diamond as a node
with arcs connecting it to neighboring diamonds. Each node
has degree four, but there can be loops starting and ending
at the same node. A cancellation corresponds to removing a
node and reconnecting its neighbors. When this node is
shared by four different arcs, we can connect the neighbors
in two different ways. As illustrated in Fig. 9, this operation
corresponds to the two different cancellations merging the
saddle with the two adjacent maxima or the two adjacent
minima. There is only one way to remove a node shared by
a loop and two other arcs, namely, to delete the loop and
connect the two neighbors.
To construct the hierarchy by repeated cancellations, we
use the algorithm in [21] to match critical points in pairs
Fig. 6. Sequence of high-level operations used to create an MS complex. When we grow a cell from a triangle f, we encounter four boundary paths, p_0 to p_3, which we incorporate into a half-edge representation of the cell.
Fig. 7. Portion of the graph of a function before (a) and after (b)
cancellation of a maximum (red) and a saddle (green).
Fig. 8. (a) The portion of an MS complex (dotted) and the portion of the
corresponding decomposition into diamonds (solid). (b) Portion of the
data structure (solid) representing the piece of the decomposition into
diamonds (dotted). (c) Cancellation graph (solid) of the decomposition
into diamonds (dotted).
Fig. 9. The four-sided diamond (a) can be zipped up in two ways: from
top to bottom (b) or from left to right (c). A folded diamond (A) can be
zipped up in only one way (B).
(s_1, v_1), (s_2, v_2), ..., (s_k, v_k), with persistence increasing from left to right. Let Q_j be the MS complex obtained after the first j cancellations, for 0 ≤ j ≤ k. We obtain Q_{j+1} by modifying Q_j and storing sufficient information so we can recover Q_j from Q_{j+1}. The hierarchy is complete when we reach Q_k. We call each Q_j a layer in the hierarchy and represent it by activating its diamonds as well as neighbor and vertex pointers and deactivating all other diamonds and pointers. To ascend in the hierarchy (coarsen the quadrangulation), we deactivate the diamond of s_{j+1}; to descend in the hierarchy (refine the quadrangulation), we activate the diamond of s_{j-1}. Activating and deactivating a diamond requires updating only a constant number of pointers.
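The layer mechanics can be sketched as follows. This toy class tracks only the set of active diamonds, omitting the neighbor and vertex pointers of the real structure, and its indexing convention (refining reactivates the diamond of the most recently applied cancellation) is our own reading of the scheme:

```python
class MSHierarchy:
    """Layers Q_0, ..., Q_k of a progressively simplified MS complex.
    Cancellations are stored in order of increasing persistence; at layer
    j, the diamonds of the first j cancelled saddles are deactivated.
    Moving between adjacent layers toggles a single diamond, i.e. O(1)
    updates in the full pointer structure."""

    def __init__(self, cancellations):
        self.cancellations = cancellations  # [(saddle, extremum), ...]
        self.j = 0                          # current layer index: Q_j
        self.active = {s for s, _ in cancellations}

    def coarsen(self):
        """Q_j -> Q_{j+1}: deactivate the diamond of the next cancellation."""
        if self.j < len(self.cancellations):
            self.active.discard(self.cancellations[self.j][0])
            self.j += 1

    def refine(self):
        """Q_j -> Q_{j-1}: reactivate the most recently deactivated diamond."""
        if self.j > 0:
            self.j -= 1
            self.active.add(self.cancellations[self.j][0])
```

Because each move touches one diamond, a traversal from Q_0 to any Q_j costs j toggles, never a full rebuild.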
4.3 Independent Cancellations
We generalize the notion of a layer in the hierarchy to
permit view-dependent simplifications. The key concept
here is the possibility of interchanging two cancellations.
The most severe limitation to interchanging cancellations
derives from the assignment of extrema as vertices of the
diamonds and from redrawing the paths ending at these
extrema. To understand this limitation, we introduce the
cancellation graph, whose vertices are the minima and
maxima. Fig. 8c shows an example of such a graph. For
each diamond, there exists an edge connecting the two
minima and another edge connecting the two maxima.
There are no loops and, therefore, sometimes only one edge
per diamond. Zipping up a diamond corresponds to
contracting one of the edges and deleting the other, if it
exists. One endpoint of the edge remains as a vertex and the
other disappears, implying that the diamonds that share the
second endpoint receive a new vertex. A special case arises
when a diamond shares both endpoints: The connecting
edge that would turn into a loop is deleted.
Two cancellations in a (possibly simplified) MS complex
are interchangeable when it is irrelevant in which order the
two operations are applied to the data structure. For
example, the two cancellations zipping up the same
diamond are not interchangeable since one preempts the
other. In general, two cancellations are interchangeable
when their diamonds share no vertex, a condition we refer
to as being independent. This notion of dependencies is
similar to methods used in view-dependent refinements of
polygon meshes [29], [30] applied to the MS complex. Note
that two interchangeable cancellations are not necessarily
independent. Even though independence is the more
limiting of the two concepts, it offers sufficient flexibility
in choosing layers to support the adaptation of the
representation to external constraints, such as the biased
view of the data.
When we can perform a relatively large number of
independent cancellations, we have more freedom generat-
ing layers in the multiresolution data structure. Ideally, we
would like to identify a large independent set and iterate to
construct a shallow hierarchy. However, in the worst case,
every pair of cancellations is dependent, which makes the
construction of a shallow hierarchy impossible. As illustrated
in Fig. 11a, such a configuration exists even for the sphere and
for any arbitrary number of vertices. Nevertheless, worst-
case situations are unlikely to arise as they require a large
number of folded diamonds. Specifically, it is possible to
prove that every MS complex without folded diamonds
implies a linear number of independent cancellations.
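A greedy pass over the persistence-ordered cancellations yields one such independent set; this sketch is ours and assumes each cancellation is annotated with the vertex set (extrema) of its diamond:

```python
def independent_batch(cancellations, diamond_vertices):
    """Greedily select pairwise independent cancellations, scanning in
    order of increasing persistence.  Two cancellations are independent
    when their diamonds share no vertex, so every cancellation in the
    returned batch can be applied in any order, forming one shallow
    layer of the hierarchy."""
    used, batch = set(), []
    for c in cancellations:  # assumed sorted by persistence
        verts = diamond_vertices[c]
        if used.isdisjoint(verts):
            batch.append(c)
            used.update(verts)
    return batch
```

Iterating this selection on the remaining cancellations produces the shallow hierarchy described above whenever the complex admits enough independent pairs.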
5 GEOMETRIC APPROXIMATION
After each cancellation, we create or change the geometry that
locally defines f. We pursue three objectives: The approx-
imation must agree with the given topology, the error should
be small, and the approximation should be smooth.
5.1 Error Bounds
We measure the error as the difference between function
values at a point. It is convenient to think of the graph of f
as the geometry and this difference as the (vertical) distance
between the original and the simplified geometry at the
location of the point. The persistence of the critical points
involved in a cancellation implies a lower bound on the
local error. We illustrate this connection for the one-
dimensional case in Fig. 10a. Recall that the persistence p
of the maximum-minimum pair is the difference in their
function values. Any monotonic approximation of the curve
between the two critical points has an error of at least p/2.
We can achieve an error of p/2, but only if we accept a flat
segment for this portion of the curve; see the red curve in
Fig. 10a. When the error is allowed to exceed p/2, smoother
approximations without flat segments are possible, such as
the green curve in the same figure. Note that the above
describes only the error between the two functions before
and after the one cancellation. The error caused by the
composition of two or more cancellations is more difficult to
analyze and will not be discussed in this paper.
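To make the p/2 bound concrete, the following sketch (our own illustration; the function `monotone_flatten` and the sample values are not from the paper) cancels a one-dimensional maximum-minimum pair by flattening the span between the two extrema to the midpoint of their values, realizing the error p/2 exactly:

```python
def monotone_flatten(values, i, j):
    """Cancel the extremum pair at indices i < j by replacing the span
    with the midpoint of the two extreme values, i.e., the flat segment
    of Fig. 10a.  The result is monotonic across [i, j]."""
    mid = 0.5 * (values[i] + values[j])
    return values[:i] + [mid] * (j - i + 1) + values[j + 1:]

def max_error(a, b):
    """Maximum vertical distance between two sampled functions."""
    return max(abs(x - y) for x, y in zip(a, b))

# A rising curve with one maximum-minimum pair of persistence p = 4:
f = [0.0, 1.0, 5.0, 1.0, 6.0]   # maximum 5 at index 2, minimum 1 at index 3
p = f[2] - f[3]                 # persistence of the pair

g = monotone_flatten(f, 2, 3)   # flat segment at (5 + 1) / 2 = 3
assert all(x <= y for x, y in zip(g, g[1:]))   # g is monotonic
assert max_error(f, g) == p / 2                # the bound p/2 is attained
```

Any monotonic replacement that avoids the flat segment must exceed the error p/2 somewhere in the span, which is exactly the trade-off between the red and green curves of Fig. 10a.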
5.2 Data Fitting
We know that monotonic patches exist, provided we are
tolerant to errors. Our goal is therefore to find monotonic
patches that minimize some error measure. A large body of
literature deals with the more general topic of shape-
constrained approximation [31], [32]. The general problem
is to construct the smoothest interpolant to a set of input
data while observing some shape constraints (e.g., con-
vexity, monotonicity, and boundary conditions). However,
most published work uses penalty functions instead of tight
error bounds. Additionally, the techniques are typically
described for the tensor-product setting, and the definitions of
monotonicity for the bivariate case vary and differ from the
one we use.
390 IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, VOL. 10, NO. 4, JULY/AUGUST 2004
Fig. 10. Geometry fitting for paths: (a) One-dimensional cancellation and
several monotonic approximations. (b) Local averaging used to
construct smoothly varying monotonic approximations. Slopes of
neighboring edges are combined with the original slope and the function
values are adjusted accordingly (edge normals are shown).
We therefore did not adapt standard techniques
for our purposes and instead decided to use a multistage
iterative approach to construct the geometry that specifies
the simplified representation of f. It provides a smooth
C^1-continuous approximation within a specified error
bound along the boundaries of the quadrangular patches
and a similar approximation but without observing an error
bound in the interior of the patches. The paths are
constructed iteratively by smoothing the gradients along
the edges and postfitting the function values, as illustrated
in Fig. 10b. During each iteration, we first compute the new
gradient of an edge as a convex combination of its gradient
and the gradients of the adjacent edges. We then adjust the
function values at the vertices to realize the new gradients.
During an iteration, we maintain the error bound at the
vertices and make sure that the completed path is
monotonic. In addition, the gradient at the critical points
is set to zero.
The technique performs well in practice although it
converges slowly. Sample results are shown in Fig. 11b. The
interior of the quadrangular patches is modified by
applying standard Laplacian smoothing to the function
values [33]. During each iteration, the value at a vertex is
averaged with those of its neighbors. Since the boundaries
are monotonic, this procedure converges to a monotonic
solution for the patch interior. We summarize the steps of
the geometry fitting process:
1. Find all paths affected by a cancellation,
2. Use the gradient smoothing to geometrically remove
the canceled critical points,
3. Smooth the old regions until they are monotonic,
4. Erase the paths and recompute new paths using the
new geometry,
5. Use one-dimensional gradient smoothing to force
the new paths to comply with the constraints, and
6. Smooth the new regions until all points are regular.
The paths constructed in Step 4 are not guaranteed to satisfy
the claimed error bounds, which is the reason for the
repeated use of gradient smoothing in Step 5. We iterate
until the paths are monotone and satisfy the error bounds.
Experimentally, it takes only a constant number of itera-
tions to achieve both goals. In our software, the fitting of the
surface is currently the bottleneck due to the slow
convergence of the iterative Laplacian solve.
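A one-dimensional sketch of the fitting loop in Steps 2 and 5 is given below. It is a simplified reading of the procedure, not the paper's implementation: the convex-combination weight, the clamping of each vertex into an error corridor around the original data, and all names are our own illustrative choices, and the monotonicity test and the zero gradients at critical endpoints are omitted:

```python
def smooth_path(values, x, error_bound, original, weight=0.5, iterations=50):
    """Sketch of the one-dimensional fitting loop: edge slopes are
    replaced by a convex combination with the neighboring slopes, the
    vertices are refit to the new slopes, and every interior vertex is
    clamped back into the error corridor around the original data.
    `values`/`original` hold function values at the path vertices and
    `x` their arc-length positions."""
    v = list(values)
    n = len(v)
    for _ in range(iterations):
        # Current edge slopes along the path.
        slopes = [(v[i + 1] - v[i]) / (x[i + 1] - x[i]) for i in range(n - 1)]
        # Convex combination of each slope with its neighbors' mean.
        new = [(1 - weight) * slopes[i]
               + weight * 0.5 * (slopes[max(i - 1, 0)] + slopes[min(i + 1, n - 2)])
               for i in range(n - 1)]
        # Refit interior vertices to the new slopes, maintaining the bound.
        for i in range(1, n - 1):
            target = v[i - 1] + new[i - 1] * (x[i] - x[i - 1])
            lo, hi = original[i] - error_bound, original[i] + error_bound
            v[i] = min(max(target, lo), hi)
    return v
```

Because every interior vertex is clamped into [original − ε, original + ε] after each pass, the error bound at the vertices holds by construction, while repeated passes smooth the edge slopes.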
6 REMESHING
While traversing the hierarchy we want to interactively
display geometry that agrees with the current topology of
the graph of f. Thus, we must determine a triangular mesh
within each quadrangular region. For maximal flexibility,
the triangulation for each region should not depend on
neighboring regions.
6.1 Path Smoothing
Without modifications, the algorithms used to compute
paths tend to create jagged paths on the 2-manifold, as in
Fig. 12a. These are visually not pleasing and difficult to
approximate. We therefore slightly modify the data to
obtain smoother paths, again using Laplacian smoothing.
Special care has to be taken at junctions, where we
separately average the predecessor and the successor
vertices before updating the junction. This strategy reduces
the change in direction between the incoming and outgoing
edges rather than minimizing the change of directions
between all edges. The result is a more “flow-like”
structure, as shown in Fig. 12c. No vertex can leave its
original triangle strip and, assuming a sufficiently dense
base mesh, the overall change in position is minor and
critical points are never moved. In practice, one or two
iterations are sufficient to significantly improve the layout
of the paths.
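The two smoothing rules of this subsection can be sketched as follows (helper names and uniform weights are our own; the constraint that no vertex leaves its original triangle strip is omitted):

```python
def laplacian_smooth_polyline(pts, iterations=2):
    """Laplacian smoothing of a path: every interior vertex moves to the
    average of itself and its two neighbors; the endpoints, which are
    critical points, never move.  Points are 2-tuples."""
    p = [tuple(q) for q in pts]
    for _ in range(iterations):
        p = [p[0]] + [tuple((p[i - 1][k] + p[i][k] + p[i + 1][k]) / 3.0
                            for k in range(2))
                      for i in range(1, len(p) - 1)] + [p[-1]]
    return p

def smooth_junction(junction, predecessors, successors):
    """Junction update: average the predecessor vertices and the
    successor vertices separately, then move the junction toward the two
    averages.  This straightens the incoming-to-outgoing direction
    rather than minimizing the angle between all incident edges."""
    pre = tuple(sum(q[k] for q in predecessors) / len(predecessors) for k in range(2))
    suc = tuple(sum(q[k] for q in successors) / len(successors) for k in range(2))
    return tuple((junction[k] + pre[k] + suc[k]) / 3.0 for k in range(2))
```

One or two calls to these routines already yield the "flow-like" structure of Fig. 12c on our toy inputs.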
6.2 Parametrization
To enable fast and versatile rendering of the data and
reduce memory requirements, we remesh each quadran-
gular region using a regular grid structure. First, we
compute a mapping of the boundary of the region to the
boundary of one or more unit squares, see below. Then, we
use standard parametrization techniques such as [34], [35]
to extend the mapping to the interior. Next, we sample the
parameter space on a uniform grid and use its preimage on
IM as a new mesh for the region. The boundary parame-
trization is chosen such that the meshes of neighboring
regions agree geometrically along their boundary.
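As a minimal stand-in for the interior-extension stage, the sketch below fixes the boundary parameters and relaxes the interior by uniform neighbor averaging, a Tutte-style iteration standing in for the mean-value or intrinsic weights of [34], [35]; all names are illustrative:

```python
def extend_to_interior(param, neighbors, interior, iterations=200):
    """Extend a boundary parametrization to the interior by fixed-point
    averaging: boundary vertices already carry (u, v) parameters in
    `param`, and each interior vertex is repeatedly set to the mean of
    its neighbors.  Uniform weights stand in for mean-value or
    intrinsic weights."""
    uv = dict(param)
    for v in interior:
        uv.setdefault(v, (0.5, 0.5))   # arbitrary start value
    for _ in range(iterations):
        for v in interior:
            ns = neighbors[v]
            uv[v] = (sum(uv[n][0] for n in ns) / len(ns),
                     sum(uv[n][1] for n in ns) / len(ns))
    return uv
```

Sampling the resulting (u, v) field on a uniform grid and mapping the samples back to the surface yields the regular remesh of the region.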
6.3 Boundary Parametrization
The boundary of a region consists of critical points,
junctions, and standard path vertices. Independently of
the current approximation, the triangulation of a region
always contains its critical points and junctions. The critical
points represent the extremal function values of a region.
Junctions are created when two paths flowing toward the
same extremum merge.
BREMER ET AL.: A TOPOLOGICAL HIERARCHY FOR FUNCTIONS ON TRIANGULATED SURFACES 391
Fig. 11. (a) MS complex on the sphere with pairwise dependent
cancellations. (b) One-dimensional gradient smoothing with (blue) error
constraints and prescribed endpoint derivatives: initial configuration and
constructed solution.
Fig. 12. Path smoothing: (a) A typical path structure without smoothing.
(b) Smoothing applied at junctions. (c) The path structure of (a) after two
smoothing steps.
Therefore, each junction replaces a critical
point for the region sharing both these paths. To avoid
cracks in the mesh, all adjacent regions must contain the
junction as well. As base-shape in parameter space, we use
one or more unit squares, and we choose the number
depending on the ratio of the eigenvalues of the principal
component analysis of all boundary vertices. Once the base-
shape is known, the critical points and junctions are fitted
recursively using arc-length parametrization. The complete
process is illustrated in Fig. 13.
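The base-shape choice can be sketched as follows; the closed-form eigenvalues of the 2x2 covariance matrix play the role of the principal component analysis, and the `stretch` threshold is an illustrative parameter of our own, not taken from the paper:

```python
def base_shape_count(boundary, stretch=2.0):
    """Choose the number of unit squares for the base-shape from the
    eigenvalue ratio of the 2x2 covariance matrix of the boundary
    vertices; an elongated region (large ratio) is mapped to a chain of
    squares.  `boundary` is a list of 2D points."""
    n = len(boundary)
    cx = sum(p[0] for p in boundary) / n
    cy = sum(p[1] for p in boundary) / n
    sxx = sum((p[0] - cx) ** 2 for p in boundary) / n
    syy = sum((p[1] - cy) ** 2 for p in boundary) / n
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in boundary) / n
    # Closed-form eigenvalues of the symmetric 2x2 covariance matrix.
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = max(tr * tr / 4.0 - det, 0.0) ** 0.5
    lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc
    aspect = (lam1 / max(lam2, 1e-12)) ** 0.5   # sqrt of eigenvalue ratio
    return max(1, round(aspect / stretch))
```

A square boundary yields a single unit square, while a strongly elongated one is split into several.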
To remesh the path segments between critical points and
junctions, we apply midpoint subdivision based on
arclength. We permit T-junctions (hanging nodes) along
boundaries. In other words, our representation is not a
globally conforming triangulation of IM but rather a
collection of patches. Each patch is triangulated with a
regular, conforming mesh. We call the collection crack-free
when the meshes agree geometrically along boundaries.
Nevertheless, pixel-wide cracks may appear during render-
ing as polygons are rasterized at fixed precision. A possible
solution is to “fill in” the cracks during rendering as
described by Balázs et al. [36].
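The arc-length midpoint subdivision used for the path segments can be sketched as follows (our own formulation): given the cumulative arc lengths of a segment's vertices, the vertex nearest the arc-length midpoint receives the parameter halfway between its endpoints, and both halves are subdivided recursively:

```python
def bisect_parameters(lengths):
    """Recursive arc-length midpoint subdivision for one boundary path
    segment: `lengths` are cumulative arc lengths of the path vertices
    (lengths[0] == 0).  Returns a parameter in [0, 1] for every vertex;
    the vertex nearest the arc-length midpoint gets the parameter
    halfway between the endpoints, and both halves recurse."""
    params = [0.0] * len(lengths)
    params[-1] = 1.0

    def recurse(lo, hi):
        if hi - lo < 2:
            return
        target = 0.5 * (lengths[lo] + lengths[hi])
        mid = min(range(lo + 1, hi), key=lambda i: abs(lengths[i] - target))
        params[mid] = 0.5 * (params[lo] + params[hi])
        recurse(lo, mid)
        recurse(mid, hi)

    recurse(0, len(lengths) - 1)
    return params
```

Because the parameters are assigned per segment, neighboring patches that share the segment obtain the same boundary samples, which is what keeps the patch collection crack-free.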
7 RESULTS
We have applied our algorithm to terrain data converted
from digital elevation models¹ and to four scientific data
sets, all listed in Table 1. The Combustion data set originates
from the simulation of the autoignition of a spatially
nonhomogeneous hydrogen-air mixture courtesy of Echekki
and Chen [37]. The Methane data set in Fig. 18 represents
the electrostatic potential and the corresponding van der
Waals energy for a methane molecule. The Glucose-Ethane
data set in Fig. 19 describes the interaction energy between
a ligand (glucose) and a receptor (ethane) under the three
translational degrees of freedom. In both figures, the
domain is an isosurface of the electrostatic potential and
the function is the van der Waals energy. The Oil Spill data
set in Fig. 20 shows a ground remediation process after an
oil spill contamination. The domain is an isosurface of the
oil concentration, reaching from the ground level at the top
down into the soil. The superimposed pseudocolored
function shows the concentration of microbes consuming
the oil and performing the remediation process. In both
cases, the hierarchical MS complex highlights regions of
interest such as good candidate bonding sites for molecular
interaction or regions of high microbe activity in the ground
remediation process.
The most basic application of our algorithm is removal of
topological noise without smoothing. This functionality
does not depend on the hierarchy and is implemented by
repeated cancellation of critical points with lowest persis-
tence. Our experience suggests that this step should always
be applied, even if only to remove the artifacts caused by
symbolic perturbation. We classify all features with
persistence below 0.1 percent of the total function range
as noise. Fig. 14 illustrates this procedure for the Dalles data
set. Removing the noise reduces the number of critical
points from 24,617 to 2,144. Since one of the main problems in
topological data analysis is the large number of spurious
topological features, this cleanup is a valuable preprocessing
step for many techniques proposed in recent years.
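A one-dimensional analogue of this noise-removal pass illustrates the idea (our own toy example; on the surface, the same role is played by cancelling critical-point pairs of the MS complex in order of increasing persistence):

```python
def extrema(v):
    """Indices of the strict interior extrema of a 1D sequence."""
    return [i for i in range(1, len(v) - 1)
            if (v[i] - v[i - 1]) * (v[i + 1] - v[i]) < 0]

def cancel_noise(values, threshold):
    """Repeatedly find the pair of adjacent extrema with the smallest
    persistence and, while it stays below `threshold`, flatten the pair
    to its midpoint value.  Flattening produces a plateau, which is no
    longer reported by extrema(), so the loop terminates."""
    v = list(values)
    while True:
        ext = extrema(v)
        pairs = [(abs(v[a] - v[b]), a, b) for a, b in zip(ext, ext[1:])]
        if not pairs:
            return v
        p, a, b = min(pairs)
        if p >= threshold:
            return v
        mid = 0.5 * (v[a] + v[b])
        for i in range(a, b + 1):
            v[i] = mid
```

Applied with a threshold of 0.1 percent of the function range, this is the one-dimensional counterpart of the cleanup shown in Fig. 14.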
We have tested three strategies for creating the hier-
archy: sequential, batched, and hybrid. The sequential
strategy performs the cancellations in the order of increas-
ing persistence. The batched strategy removes a maximal
independent set of cancellations in one step, collecting a
batch greedily in the order of increasing persistence.
1. http://www.webgis.com.
Fig. 13. Creating a parametrization for the boundary. Top-left: Original
region and local coordinate system defined by the principal component
analysis. Top-right: Region after transformation into the new coordinate
system. We use a single unit square as base-shape in parameter space.
The points with extreme projections onto the two diagonals are mapped
to the corners of the base-shape. Bottom-left: Regular mesh after the
first level of recursively fitting the junctions. Bottom-right: Final mesh.
TABLE 1
Data Sets Used for Testing
The first three data sets are terrains.
TABLE 2
Statistics on the Hierarchies for Comparing
the Three Cancellation Strategies
Finally, the hybrid strategy is the batched strategy with the
added restriction that the largest persistence be at most
twice the smallest persistence in the same batch. We limit
each strategy to critical points whose persistence does not
exceed 20 percent of the function range.
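The batched and hybrid strategies can be sketched as a greedy pass over the candidate cancellations (our own formulation; `conflicts` abstracts the dependency test between cancellations):

```python
def batch_levels(cancellations, conflicts, hybrid=False):
    """Greedy sketch of the batched and hybrid strategies: candidates
    are visited in order of increasing persistence and added to the
    current batch unless they depend on a cancellation already in it;
    the hybrid variant also defers a candidate whose persistence
    exceeds twice the smallest persistence in the batch.
    `cancellations` maps a name to its persistence; `conflicts` is a
    set of unordered name pairs that are dependent."""
    remaining = sorted(cancellations, key=cancellations.get)
    levels = []
    while remaining:
        batch, deferred = [], []
        for c in remaining:
            dependent = any(frozenset((c, b)) in conflicts for b in batch)
            too_wide = (hybrid and batch
                        and cancellations[c] > 2.0 * cancellations[batch[0]])
            (deferred if dependent or too_wide else batch).append(c)
        levels.append(batch)
        remaining = deferred
    return levels
```

The sequential strategy corresponds to batches of size one processed in the same persistence order; the hybrid cap on the persistence spread keeps each level geometrically coherent at the cost of more levels.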
Table 2 summarizes the collected statistics for the
hierarchies constructed from the three terrain data sets. For
each combination of data set and cancellation strategy, it
lists the maximum and average depth of the leaves, the
maximum number of parents and children of the nodes,
and the average degree, defined as the combined number of
parents and children of a node. As expected, the batched
cancellation of critical points typically creates shallower
hierarchies than the sequential strategy. However, this is not
always the case: in the Needles data set, the average depth
created by batching exceeds that created by sequential
cancellation. This observation can be explained by the
existence of high-degree nodes illustrated in Fig. 14, which
shows the highest-resolution MS complex of the Needles
data, drawing each path as a straight line from the saddle to
the extremum. There are very few minima in the data,
forcing a large average degree and an uneven distribution
of nodes over the levels of the hierarchy.
The graphs in Fig. 15 show the number of nodes per level
for the Puget Sound and Dalles data sets. The batched
strategies clearly produce superior results in terms of
overall shape of the hierarchy. However, this does not
necessarily translate into better performance in practice.
Fig. 16 shows the number of critical points in the
MS complex depending on a uniform error. Even though
the hierarchy created by batched cancellation is the most
shallow, it also contains significantly denser meshes. The
hybrid strategy combines the advantages of the other two
strategies and is therefore our method of choice.
Fig. 14. (a) Highest-resolution MS complex of the Needles data set. (b) Original Dalles data set containing 24,617 critical points. (c) Same data with
2,144 critical points after removing all critical points with persistence less than 0.1 percent of the height range.
Fig. 15. Node distribution over the levels for the three cancellation strategies for the three terrain data sets.
Fig. 16. Number of critical points in the MS complex for the three terrain data sets.
The hierarchy also supports adaptive refinements, two
examples of which are shown in Fig. 17. Fig. 17a shows a
view-dependent refinement of the Puget Sound data. The
full resolution is preserved inside the view frustum,
yielding a total of 1,070 critical points. Outside the view
frustum, we have simplified the data to the extent possible.
One can observe a quick drop in resolution away from the
view frustum. Reducing the topology outside the frustum
naturally reduces the number of quadrangular regions that
must be rendered (and, therefore, the number of triangles),
and frustum culling can be performed directly on the
regions, culling large parts of the mesh without traversal.
Fig. 17. (a) View-dependent refinement of the Puget Sound data (purple: view frustum). (b) Middle: Combustion data after topological noise removal.
(c) Adaptive refinement of the combustion data based on function value. All maxima above 90 percent of the maximal function value are preserved.
Fig. 18. Electrostatic and van der Waals potentials for a methane molecule. (a) Isosurface of the electrostatic potential pseudocolored with the
corresponding van der Waals potential. (b) Full MS complex with 30 critical points. (c) Simplified MS complex with 14 critical points highlighting the
hydrogen atoms near the maxima and the carbon atom near the minimum at the center.
Fig. 19. Interaction energy between glucose and ethane under the three translational degrees of freedom. (a) Isosurface of the electrostatic
interaction pseudocolored with the corresponding van der Waals potential. (b) Full MS complex with 564 critical points. (c) Simplified MS complex
with 166 critical points highlighting good candidate binding sites.
Fig. 17c shows the combustion data adaptively refined to
preserve only maxima with a high function value. Only
maxima above 90 percent of the maximal function value
and their ancestors in the hierarchy are preserved. Notice
that several lower maxima have completely disappeared.
8 CONCLUSIONS
We have described a new topology-based multiresolution
data structure for real-valued functions over two-dimen-
sional domains and demonstrated its use for terrains. The
hierarchy allows for the adaptive extraction of geometry
depending on a given topological error. Due to its robustness
in the presence of noise and its well-defined simplification
procedures, the approach is appealing for applications that
rely on topological analysis. Examples are data segmenta-
tion and feature detection and tracking in medical imaging
or simulated flow field data sets. Future work will be
concerned with fitting the complete geometry within a
given error bound and the extension to volumetric data.
ACKNOWLEDGMENTS
This work was performed under the auspices of the US
Department of Energy by the University of California
Lawrence Livermore National Laboratory under contract
No. W-7405-Eng-48. Herbert Edelsbrunner is partially
supported by the US National Science Foundation (NSF)
under grants EIA-99-72879 and CCR-00-86013. Bernd
Hamann is supported by the NSF under contract
ACI 9624034, through the LSSDSV program under contract
ACI 9982251, and through the NPACI; the US National
Institute of Mental Health and the NSF under contract
NIMH 2 P20 MH60975-06A2; the Lawrence Livermore
National Laboratory under ASCI ASAP Level-2 Memor-
andum Agreement B347878 and under Memorandum
Agreement B503159.
REFERENCES
[1] A. Cayley, “On Contour and Slope Lines,” London, Edinburgh and
Dublin Phil. Mag. J. Sci., vol. XVIII, pp. 264-268, 1859.
[2] J.C. Maxwell, “On Hills and Dales,” London, Edinburgh and Dublin
Phil. Mag. J. Sci., vol. XL, pp. 421-427, 1870.
[3] J. Pfaltz, “Surface Networks,” Geographical Analysis, vol. 8, pp. 77-
93, 1976.
[4] J. Pfaltz, A Graph Grammar that Describes the Set of Two-Dimensional
Surface Networks, 1979.
[5] M. Morse, “Relations between the Critical Points of a Real
Function of n Independent Variables,” Trans. Am. Math. Soc.,
vol. 27, pp. 345-396, 1925.
[6] J. Milnor, Morse Theory. Princeton Univ. Press, 1963.
[7] H. Hoppe, “Progressive Meshes,” Computer Graphics (Proc.
SIGGRAPH), vol. 30, pp. 99-108, 1996.
[8] J. Popovic and H. Hoppe, “Progressive Simplicial Complexes,”
Computer Graphics (Proc. SIGGRAPH), vol. 31, pp. 209-216, 1997.
[9] M. Garland and P.S. Heckbert, “Surface Simplification Using
Quadric Error Metrics,” Computer Graphics (Proc. SIGGRAPH),
vol. 31, pp. 209-216, 1997.
[10] P. Lindstrom and G. Turk, “Fast and Memory Efficient Polygonal
Simplification,” Proc. IEEE Visualization, pp. 279-286, 1998.
[11] T. He, L. Hong, A. Varshney, and S.W. Wang, “Controlled
Topology Simplification,” IEEE Trans. Visualization and Computer
Graphics, vol. 2, pp. 171-184, 1996.
[12] J. El-Sana and A. Varshney, “Topology Simplification for Poly-
gonal Virtual Environments,” IEEE Trans. Visualization and
Computer Graphics, vol. 4, pp. 133-144, 1998.
[13] J.L. Helman and L. Hesselink, “Visualizing Vector Field Topology
in Fluid Flows,” IEEE Computer Graphics and Applications, vol. 11,
pp. 36-46, 1991.
[14] W. de Leeuw and R. van Liere, “Collapsing Flow Topology Using
Area Metrics,” Proc. IEEE Visualization, pp. 349-354, 1999.
[15] X. Tricoche, G. Scheuermann, and H. Hagen, “A Topology
Simplification Method for 2D Vector Fields,” Proc. IEEE Visualiza-
tion, pp. 359-366, 2000.
[16] X. Tricoche, G. Scheuermann, and H. Hagen, “Continuous
Topology Simplification of Planar Vector Fields,” Proc. IEEE
Visualization, pp. 159-166, 2001.
[17] Topological Modeling for Visualization, A.T. Fomenko and T.L. Kunii,
eds. Springer-Verlag, 1997.
[18] C.L. Bajaj and D.R. Schikore, “Topology Preserving Data
Simplification with Error Bounds,” Computers and Graphics,
vol. 22, pp. 3-12, 1998.
[19] K. Hormann, “Morphometrie der Erdoberfläche,” Schrift. Univ.
Kiel, 1971.
[20] D.M. Mark, “Topological Properties of Geographic Surfaces,”
Proc. Advanced Study Symp. Topological Data Structures, 1977.
[21] H. Edelsbrunner, D. Letscher, and A. Zomorodian, “Topological
Persistence and Simplification,” Discrete and Computational Geometry,
vol. 28, pp. 511-533, 2002.
Fig. 20. Remediation process of contaminated ground. (a) The isosurface of the oil concentration in soil with a function in pseudocolor measuring the
density of microbes consuming the oil. (b) Full MS complex with 232 critical points. (c) Simplified MS complex with 67 critical points highlighting the
main activity sites of the microbes.
[22] H. Edelsbrunner, J. Harer, and A. Zomorodian, “Hierarchical
Morse-Smale Complexes for Piecewise Linear 2-Manifolds,”
Discrete and Computational Geometry, vol. 30, pp. 87-107, 2003.
[23] H. Edelsbrunner, J. Harer, V. Natarajan, and V. Pascucci, “Morse-
Smale Complexes for Piecewise Linear 3-Manifolds,” Proc. 19th
Ann. Symp. Computational Geometry, pp. 361-370, 2003.
[24] Y. Matsumoto, An Introduction to Morse Theory. Am. Math. Soc.,
2002.
[25] J.R. Munkres, Elements of Algebraic Topology. Redwood City, Calif.:
Addison-Wesley, 1984.
[26] P.S. Alexandrov, Combinatorial Topology. New York: Dover, 1998.
[27] T.F. Banchoff, “Critical Points for Embedded Polyhedral Sur-
faces,” Am. Math. Monthly, vol. 77, pp. 457-485, 1970.
[28] H. Edelsbrunner and E.P. Mücke, “Simulation of Simplicity: A
Technique to Cope with Degenerate Cases in Geometric Algo-
rithms,” ACM Trans. Graphics, vol. 9, pp. 66-104, 1990.
[29] J.C. Xia and A. Varshney, “Dynamic View-Dependent Simplifica-
tion for Polygonal Models,” Proc. IEEE Visualization, pp. 335-344,
1996.
[30] H. Hoppe, “View-Dependent Refinement of Progressive Meshes,”
Computer Graphics (Proc. SIGGRAPH), vol. 31, pp. 189-198, 1997.
[31] R. Carlson and F.N. Fritsch, “Monotone Piecewise Bicubic
Interpolation,” SIAM J. Numerical Analysis, vol. 22, pp. 386-400, 1985.
[32] H. Greiner, “A Survey on Univariate Data Interpolation and
Approximation by Splines of Given Shape,” Math. Computer
Modeling, vol. 15, pp. 97-106, 1991.
[33] G. Taubin, “A Signal Processing Approach to Fair Surface
Design,” Computer Graphics (Proc. SIGGRAPH), pp. 351-358, 1995.
[34] M.S. Floater, “Mean Value Coordinates,” Computer Aided Geometric
Design, vol. 20, pp. 19-27, 2003.
[35] M. Desbrun, M. Meyer, and P. Alliez, “Intrinsic Parameterizations
of Surface Meshes,” Computer Graphics Forum (Proc. Eurographics),
vol. 21, 2002.
[36] A. Balázs, M. Guthe, and R. Klein, “Fat Borders: Gap Filling for
Efficient View-Dependent LOD Rendering,” Technical Report CG-
2003-2, Univ. Bonn, Germany, 2003.
[37] E. Echekki and J.H. Chen, “Direct Numerical Simulation of
Autoignition in Non-Homogeneous Hydrogen-Air Mixtures,”
Combustion Flame, 2003.
Peer-Timo Bremer received the Diplom (MS) in
mathematics with a second major in computer
science from the University of Hannover, Ger-
many, in 2000. He is currently pursuing the PhD
degree in computer science at the University of
California at Davis. He holds a student employee
graduate research fellowship at the Lawrence
Livermore National Laboratory. He is a member
of the ACM, the IEEE, and the IEEE Computer
Society.
Herbert Edelsbrunner received the Dipl.-Ing.
and PhD degrees from the Graz University of
Technology in Austria. He is currently Arts and
Sciences Professor of Computer Science and
Mathematics at Duke University. He is also an
adjunct professor at the University of North
Carolina at Chapel Hill and director at Raindrop
Geomagic, a reverse engineering and geometric
modeling company he cofounded with Ping Fu in
1996. He has published two books in the general
area of geometric algorithms, the first in 1987 on Algorithms in
Combinatorial Geometry (Springer-Verlag) and the second in 2001 on
Geometry and Topology for Mesh Generation (Cambridge University
Press). He received the Alan T. Waterman Award from the US National
Science Foundation in 1991.
Bernd Hamann received the BS degree in
computer science, the BS degree in mathe-
matics, and the MS degree in computer science
from the Technical University of Braunschweig,
Germany. He received the PhD degree in
computer science from Arizona State University
in 1991. He serves as associate vice chancellor
for research, is codirector of the Center for Image
Processing and Integrated Computing (CIPIC),
and a full professor of computer science at the
University of California, Davis. He was awarded a 1992 Research
Initiation Award by Mississippi State University, a 1992 Research
Initiation Award by the US National Science Foundation (NSF), and a
1996 CAREER Award by the NSF. In 1995, he received a Hearin-Hess
Distinguished Professorship in Engineering from the College of
Engineering at Mississippi State University. He is a member of the
ACM, the IEEE, SIAM, and the IEEE Technical Committee on
Visualization and Graphics.
Valerio Pascucci received the PhD degree in
computer science from Purdue University in May
2000 and the EE Laurea (Master’s), from the
University “La Sapienza” in Rome in December
1993, as a member of the Geometric Computing
Group. He has been a computer scientist and
project leader at the Lawrence Livermore Na-
tional Laboratory, Center for Applied Scientific
Computing (CASC) since May 2000. Prior to his
CASC tenure, he was a senior research associ-
ate at the University of Texas at Austin, Center for Computational
Visualization, CS and TICAM Departments. He is a member of the IEEE.
... This means that they do not follow the gradient path. Other approaches such as [7] follow the line of steepest descent or ascent by inserting new nodes and splitting the triangles. However, such an approach was developed firstly for triangulated irregular networks. ...
... In a surface network, two ridges or two thalwegs can overlap and join the same peak or pit. Topological consistency is preserved either by considering that the two lines run side by side to the same node or by introducing a junction node and initiating a new line after the junction [7]. However, intersections outside a saddle can also occur between a ridge and a thalweg. ...
... The method computes thalwegs first and makes sure that no intersection with thalwegs occurs when computing ridges. As in [7], the method introduces nodes when critical lines merge. They are confluences where two thalwegs merge and junctions where two ridges merge. ...
Article
Full-text available
The surface network is an application of the Morse-Smale complex to digital terrain models connecting ridges and thalwegs of the terrain in a planar, undirected graph. Although it provides a topological structure embedding critical elements of the terrain, its application to morphological analysis and hydrology remains limited mainly because the drainage network is the most relevant structure for analysis and it cannot be derived from the surface network. The drainage network is a directed, hierarchical graph formed by streams. Ridges of the surface network are not equivalent to drainage divides, which are not contained in the drainage network, and there is no direct association between thalwegs and streams. Therefore, this paper proposes to extend the surface network into a new structure that also embeds the drainage network. This is done by (1) revising the definition of ridges so that they include drainage divides and (2) assigning a flow direction to each thalweg, taking into account spurious depressions to avoid flow interruption. We show that this extended surface network can be used to compute the flow accumulation and different hydrographic features such as drainage basins and the Strahler order. The drainage network extracted from the extended surface network is compared to drainage networks computed with the traditional D8 approach in three case studies. Differences remain minor and are mainly due to the elevation inaccuracy in flat or slightly convex areas. Hence, the extended surface network provides a richer data structure allowing the use of a common topological data structure in both terrain analysis and hydrology.
... One representative of boundary-based algorithms is Edelsbrunner et al. [15] that first introduced the MS complex for piecewise linear 2-manifolds, recording paths of steepest ascent and descent. They also introduced the notion of the quasi MS complex that was extended to 3-manifolds [16] and later improved in geometric accuracy by Bremer et al. [17]. Concerning region-growing algorithms, Danovaro et al. [18] started growing regions by using triangles incident on maxima at vertices, adding edge incident triangles iteratively. ...
... Subhash et al. [30] then accomplished computing all steps of the MS complex computation on the GPU. Even though some algorithms improved the steepest descent line tracing [17,20] by allowing the traversal to use edges and triangles, still all presented algorithms often produce incorrect connectivity and inaccurate geometry due to the refinement of the underlying discrete domain [31]. Here, Gyulassy et al. [32] implemented a probabilistic algorithm to extract the correct geometry and connectivity. ...
Preprint
Full-text available
This paper presents a well-scaling parallel algorithm for the computation of Morse-Smale (MS) segmentations, including the region separators and region boundaries. The segmentation of the domain into ascending and descending manifolds, solely defined on the vertices, improves the computational time using path compression and fully segments the border region. Region boundaries and region separators are generated using a multi-label marching tetrahedra algorithm. This enables a fast and simple solution to find optimal parameter settings in preliminary exploration steps by generating an MS complex preview. It also poses a rapid option to generate a fast visual representation of the region geometries for immediate utilization. Two experiments demonstrate the performance of our approach with speedups of over an order of magnitude in comparison to two publicly available implementations. The example section shows the similarity to the MS complex, the useability of the approach, and the benefits of this method with respect to the presented datasets. We provide our implementation with the paper.
... One representative of boundary-based algorithms is Edelsbrunner et al. [15] that first introduced the MS complex for piecewise linear 2-manifolds, recording paths of steepest ascent and descent. They also introduced the notion of the quasi MS complex that was extended to 3-manifolds [16] and later improved in geometric accuracy by Bremer et al. [17]. Concerning region-growing algorithms, Danovaro et al. [18] started growing regions by using triangles incident on maxima at vertices, adding edge incident triangles iteratively. ...
... Subhash et al. [30] then accomplished computing all steps of the MS complex computation on the GPU. Even though some algorithms improved the steepest descent line tracing [17,20] by allowing the traversal to use edges and triangles, still all presented algorithms often produce incorrect connectivity and inaccurate geometry due to the refinement of the underlying discrete domain [31]. Here, Gyulassy et al. [32] implemented a probabilistic algorithm to extract the correct geometry and connectivity. ...
Article
Full-text available
This paper presents a well-scaling parallel algorithm for the computation of Morse-Smale (MS) segmentations, including the region separators and region boundaries. The segmentation of the domain into ascending and descending manifolds, solely defined on the vertices, improves the computational time using path compression and fully segments the border region. Region boundaries and region separators are generated using a multi-label marching tetrahedra algorithm. This enables a fast and simple solution to find optimal parameter settings in preliminary exploration steps by generating an MS complex preview. It also poses a rapid option to generate a fast visual representation of the region geometries for immediate utilization. Two experiments demonstrate the performance of our approach with speedups of over an order of magnitude in comparison to two publicly available implementations. The example section shows the similarity to the MS complex, the useability of the approach, and the benefits of this method with respect to the presented datasets. We provide our implementation with the paper.
... Pioneering contributors to TDA include Frosini (1992), Robins (Sidney, 2012), and Edelsbrunner et al. (2002), who established the notion of how features persist as the data are modified. Nevertheless, the term TDA itself appears not to have surfaced until the contributions of De Silva & Carlson (2004) and Bremer (2004). Thereafter, Carlsson (2014) was instrumental in popularizing TDA, establishing how topological techniques can remedy the challenges encountered when applying topology to the analysis of big data. ...
Article
Full-text available
In this paper, we carry out an in-depth topological data analysis (TDA) of the COVID-19 pandemic using artificial intelligence (AI) and machine learning (ML) techniques. We show the worldwide distribution patterns of the pandemic at its peak with respect to big data sets in Hausdorff spaces. The results show that the areas of the world that experience long cold seasons were affected most.
... In geometric modeling, Forman [49] introduced the discretized version of Morse theory and related algorithms, and Edelsbrunner et al. [50] demonstrated an application of the theory to piecewise linear 2-manifolds (e.g., triangular meshes). Beyond geometry processing on triangular meshes [51], Morse theory has also been applied to image processing [52,53], visualization of cosmic objects [54], molecular analysis [55], and mesh quadrangulation [56,57]. For further applications, we refer readers to the comprehensive surveys [58,59]. ...
Article
Full-text available
X-ray CT scanners, due to the transmissive nature of X-rays, have enabled the non-destructive evaluation of industrial products, even inside their bodies. In light of its effectiveness, this study introduces a new approach to accelerate the inspection of many mechanical parts with the same shape in a bin. The input to this problem is a volumetric image (i.e., CT volume) of many parts obtained by a single CT scan. We need to segment the parts in the volume to inspect each of them; however, random postures and dense contacts of the parts prohibit part segmentation using traditional template matching. To address this problem, we convert both the scanned volumetric images of the template and the binned parts to simpler graph structures and solve a subgraph matching problem to segment the parts. We perform a distance transform to convert the CT volume into a distance field. Then, we construct a graph based on Morse theory, in which graph nodes are located at the extremum points of the distance field. The experimental evaluation demonstrates that our fully automatic approach can detect target parts appropriately, even for a heap of 50 parts. Moreover, the overall computation can be performed in approximately 30 min for a large CT volume of approximately 2000×2000×1000 voxels.
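The graph-construction step described above, placing nodes at extremum points of the distance field, can be illustrated in 2D. This toy sketch only collects strict local maxima on a grid; it is not the authors' pipeline, and the function name is an assumption:

```python
def extremum_nodes(field):
    """Collect strict local maxima of a 2-D scalar field (e.g. a
    distance transform) as candidate Morse-graph nodes."""
    rows, cols = len(field), len(field[0])
    nodes = []
    for r in range(rows):
        for c in range(cols):
            # values of the (up to 8) in-bounds neighbours
            nbrs = [field[r + dr][c + dc]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr or dc) and 0 <= r + dr < rows and 0 <= c + dc < cols]
            if all(field[r][c] > v for v in nbrs):
                nodes.append((r, c))
    return nodes
```

In the paper's 3D setting these extrema become graph nodes and are then linked along the field's integral structure, so that template and bin reduce to a subgraph-matching problem.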
... Translating the benefits of the theoretical structure to practical settings presents a number of complexities, particularly for the Morse-Smale complex. While computing these complexes in 2D was achieved by Bremer et al. [5,6] with an approach that subdivides triangles to follow the numerically computed PL gradient, their efficient extraction in three dimensions has been quite challenging [16,17]. Instead, the visualization community has employed approximations based on a discrete encoding of the gradient field [13,23,24,55,58,59], building on the pioneering work of Gyulassy et al. [24] that extends Forman's discrete Morse theory [19]. ...
Preprint
Full-text available
Though analyzing a single scalar field using Morse complexes is well studied, there are few techniques for visualizing a collection of Morse complexes. We focus on analyses that are enabled by looking at a Morse complex as an embedded domain decomposition. Specifically, we target 2D scalar fields, and we encode the Morse complex through binary images of the boundaries of decomposition. Then we use image-based autoencoders to create a feature space for the Morse complexes. We apply additional dimensionality reduction methods to construct a scatterplot as a visual interface of the feature space. This allows us to investigate individual Morse complexes, as they relate to the collection, through interaction with the scatterplot. We demonstrate our approach using a synthetic data set, microscopy images, and time-varying vorticity magnitude fields of flow. Through these, we show that our method can produce insights about structures within the collection of Morse complexes.
Article
Full-text available
The extremum graph is a succinct representation of the Morse decomposition of a scalar field. It has increasingly become a useful data structure that supports topological feature‐directed visualization of 2D/3D scalar fields, and enables dimensionality reduction together with exploratory analysis of high‐dimensional scalar fields. Current methods that employ the extremum graph compute it either using a simple sequential algorithm for computing the Morse decomposition or by computing the more detailed Morse–Smale complex. Both approaches are typically limited to two and three‐dimensional scalar fields. We describe a GPU–CPU hybrid parallel algorithm for computing the extremum graph of scalar fields in all dimensions. The proposed shared memory algorithm utilizes both fine‐grained parallelism and task parallelism to achieve efficiency. An open source software library, tachyon, that implements the algorithm exhibits superior performance and good scaling behaviour.
Article
This paper introduces an efficient algorithm for persistence diagram computation, given an input piecewise linear scalar field $f$ defined on a $d$-dimensional simplicial complex $\mathcal{K}$, with $d \leq 3$. Our work revisits the seminal algorithm “PairSimplices” [31], [103] with discrete Morse theory (DMT) [34], [80], which greatly reduces the number of input simplices to consider. Further, we also extend to DMT and accelerate the stratification strategy described in “PairSimplices” [31], [103] for the fast computation of the $0^{th}$ and $(d-1)^{th}$ diagrams, denoted $\mathcal{D}_{0}(f)$ and $\mathcal{D}_{d-1}(f)$. Minima-saddle persistence pairs ($\mathcal{D}_{0}(f)$) and saddle-maximum persistence pairs ($\mathcal{D}_{d-1}(f)$) are efficiently computed by processing, with a Union-Find data structure, the unstable sets of 1-saddles and the stable sets of $(d-1)$-saddles. We provide a detailed description of the (optional) handling of the boundary component of $\mathcal{K}$ when processing $(d-1)$-saddles. This fast pre-computation for the dimensions 0 and $(d-1)$ enables an aggressive specialization of [4] to the 3D case, which results in a drastic reduction of the number of input simplices for the computation of $\mathcal{D}_{1}(f)$, the intermediate layer of the sandwich. Finally, we document several performance improvements via shared-memory parallelism. We provide an open-source implementation of our algorithm for reproducibility purposes. We also contribute a reproducible benchmark package, which exploits three-dimensional data from a public repository and compares our algorithm to a variety of publicly available implementations. Extensive experiments indicate that our algorithm improves by two orders of magnitude the time performance of the seminal “PairSimplices” algorithm it extends. Moreover, it also improves memory footprint and time performance over a selection of 14 competing approaches, with a substantial gain over the fastest available approaches, while producing a strictly identical output. We illustrate the utility of our contributions with an application to the fast and robust extraction of persistent 1-dimensional generators on surfaces, volume data, and high-dimensional point clouds.
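The Union-Find processing of minimum-saddle pairs can be illustrated in the simplest setting, a 1-D sequence with distinct values: sweeping values in increasing order, a component is born at each minimum, and when a value merges two components, the elder rule pairs the younger minimum with the merging value. Function name and data layout here are assumptions, not the paper's implementation:

```python
def persistence_pairs_0d(values):
    """Minimum-saddle persistence pairs (0-th diagram) of a 1-D sequence
    with distinct values, via the elder rule and a tiny union-find."""
    n = len(values)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    pairs = []
    alive = {}   # component root -> index of its oldest (lowest) minimum
    for i in sorted(range(n), key=lambda k: values[k]):
        lower = [j for j in (i - 1, i + 1) if 0 <= j < n and values[j] < values[i]]
        roots = {find(j) for j in lower}
        if not roots:
            alive[i] = i                     # new component born at a minimum
            continue
        # i merges the neighbouring components; younger minima die here
        mins = sorted((alive.pop(r) for r in roots), key=lambda m: values[m])
        for dead in mins[1:]:
            pairs.append((values[dead], values[i]))   # (birth, death)
        for r in roots:
            parent[r] = i
        alive[find(i)] = mins[0]             # the oldest minimum survives
    return pairs
```

The global minimum never dies (it is an essential class), so it produces no pair. The paper's algorithm runs the symmetric sweep on the stable sets of $(d-1)$-saddles for the saddle-maximum pairs.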
Article
Full-text available
Abstract. We present algorithms for constructing a hierarchy of increasingly coarse Morse-Smale complexes that decompose a piecewise linear 2-manifold. While these complexes are defined only in the smooth category, we extend the construction to the piecewise linear category by ensuring structural integrity and simulating differentiability. We then simplify Morse-Smale complexes by canceling pairs of critical points in order of increasing persistence.
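In its simplest form, simplification by persistence reduces to sorting the critical-point pairs by their persistence and discarding those below a threshold; the sketch below shows only this ordering and omits the structural updates to the complex that an actual cancellation performs. The function name and the (birth, death) pair encoding are illustrative assumptions:

```python
def simplify_by_persistence(pairs, threshold):
    """Keep only critical-point pairs whose persistence exceeds the
    threshold, visiting pairs in order of increasing persistence.

    pairs: list of (birth, death) function values, death >= birth
    """
    ordered = sorted(pairs, key=lambda p: p[1] - p[0])   # persistence = death - birth
    return [p for p in ordered if p[1] - p[0] > threshold]
```

In the hierarchy of the paper, each cancellation additionally merges cells of the Morse-Smale complex and records a dependency, so that coarser complexes can be refined back at runtime.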
Article
In a 1980 paper the authors developed a univariate piecewise cubic interpolation algorithm which produces a monotone interpolant to monotone data. This paper extends those results to monotone C¹ piecewise bicubic interpolation of data on a rectangular mesh. Such an interpolant is determined by the first partial derivatives and the first mixed partial derivative (twist) at the mesh points. Necessary and sufficient conditions on these derivatives are derived such that the resulting bicubic polynomial is monotone on a single rectangular element. These conditions are then simplified to a set of sufficient conditions for monotonicity, which are translated into a system of linear inequalities that form the basis for a monotone piecewise bicubic interpolation algorithm.
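In the univariate case that the bicubic construction builds on, the sufficient monotonicity condition amounts to limiting the Hermite slopes relative to the data secants. The following is a rough Fritsch-Carlson-style sketch; the function name and the exact limiting rule used here are illustrative assumptions, not the paper's bicubic algorithm:

```python
def monotone_cubic_slopes(x, y):
    """Slope limiting for a monotone piecewise cubic Hermite interpolant
    in one variable (the bicubic case applies analogous conditions in
    each direction and to the twist terms)."""
    n = len(x)
    d = [(y[i + 1] - y[i]) / (x[i + 1] - x[i]) for i in range(n - 1)]  # secants
    m = [0.0] * n
    m[0], m[-1] = d[0], d[-1]
    for i in range(1, n - 1):
        # zero slope at local extrema, otherwise average of the secants
        m[i] = 0.0 if d[i - 1] * d[i] <= 0 else (d[i - 1] + d[i]) / 2
    for i in range(n - 1):
        if d[i] == 0:
            m[i] = m[i + 1] = 0.0            # flat data stays flat
            continue
        a, b = m[i] / d[i], m[i + 1] / d[i]
        # sufficient condition: keep (a, b) inside the disk of radius 3
        s = (a * a + b * b) ** 0.5
        if s > 3:
            m[i], m[i + 1] = 3 * a / s * d[i], 3 * b / s * d[i]
    return m
```

A production implementation would use an established monotone scheme such as PCHIP; the point here is only that monotonicity is enforced by clamping derivative data, which is exactly what the paper's linear inequalities do for the bicubic patch.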
Article
Abstract. We formalize a notion of topological simplification within the framework of a filtration, which is the history of a growing complex. We classify a topological change that happens during growth as either a feature or noise depending on its lifetime or persistence within the filtration. We give fast algorithms for computing persistence and experimental evidence for their speed and utility.