Data Structures For Unstructured Mesh Generation
14.1 Introduction
14.2 Some Basic Data Structures
Linear Lists • A Simple Hash Table
14.3 Tree Structures
Binary Trees • Heaps • Binary Search Tree • Digital Trees
14.4 Multidimensional Search
Searching Point Data • Quadtrees • Binary Trees for
Multidimensional Search • Intersection Problems
14.5 Final Remarks
Luca Formaggia
14.1 Introduction
The term data structures, or information structures, signifies the structural relationships between the
data items used by a computer program. An algorithm needs to perform a variety of operations on the
data stored in computer memory and disk; consequently, the way the data is organized may greatly
influence the overall code efficiency.
For example, in mesh generation there is often the need to answer queries of the following
kind: give the list of mesh sides connected to a given node, or find all the mesh nodes lying inside a
certain portion of physical space, for instance a sphere in 3D. The latter is an example of a range search
operation, and an inefficient organization of the node coordinate data will force a loop over all
mesh nodes to arrive at the answer. The time for this search operation would then be proportional to
the number of nodes n, a situation usually described by saying that the algorithm is of order
n, or more simply O(n). We will see later in this chapter that a better data organization may reduce the
number of operations for that type of query to O(log2 n), with considerable CPU time savings when n
is large.
The final decision to use a certain data organization may depend on many factors; the
most relevant are the type of operations we wish to perform on the data and the amount of computer
memory available. Moreover, the best data organization for a certain type of operation, for instance
searching whether an item is present in a table, is not necessarily the most efficient one for other operations,
such as deleting that item from the table. As a consequence, the final choice is often a compromise. The
fact that an efficient data organization strongly depends on the kind of problem at hand is probably the
major reason that such a large number of information structures are described in the literature. In this chapter,
we will describe only a few of them: the ones that, in the author’s opinion, are most relevant to
unstructured mesh generation. The reader interested in a more ample survey may consult specialized texts.
Delete from the front:
1. if n = 0 then UNDERFLOW;
2. start = (start + 1) mod max;
3. n--.

Delete from the end:
1. if n = 0 then UNDERFLOW;
2. end = (end + max – 1) mod max;
3. n--.
Insert record R after record Q:
1. p = Q.next;
2. R.next = p;
3. (*p).prev = &R;
4. Q.next = &R;
5. R.prev = &Q.

Delete record R:
1. r = R.prev;
2. p = R.next;
3. (*r).next = p;
4. (*p).prev = r.
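The pointer manipulations of Table 14.2 can be sketched in Python, where object references stand in for the pointers (&R, *p) of the table; the class and function names here are illustrative, not notation from the chapter:

```python
class Record:
    """A record in a doubly linked (circular) list."""
    def __init__(self, cont=None):
        self.cont = cont      # the stored data
        self.next = self      # successor (a fresh record points to itself)
        self.prev = self      # predecessor

def insert_after(Q, R):
    """Insert record R after record Q (Table 14.2, left column)."""
    p = Q.next
    R.next = p
    p.prev = R
    Q.next = R
    R.prev = Q

def delete(R):
    """Unlink record R from its list (Table 14.2, right column)."""
    r = R.prev
    p = R.next
    r.next = p
    p.prev = r
```

With a header record, the same two routines serve the circular variant discussed next, since the header gives every record a well-defined predecessor and successor.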
It is often convenient to use a variant of the linked list, called a circular linked list. In a circular (singly
or doubly) linked list every record has a successor and a predecessor and the basic addition/deletion
operation has a simpler implementation. There is also usually a special record called header that contains
the link to the first record in the list, and it is pointed to by the last one. Table (14.2) shows a possible
algorithm for the implementation of the basic addition/deletion operations on a circular doubly linked
list L. The memory location for a new record could be dynamically allocated from the operating system,
where we would also free the ones deleted from the list. However, this type of memory management
may not be efficient if we expect frequent insertions and deletions, as the operations of allocating
and deallocating dynamic memory have a computational overhead. Moreover, it cannot be implemented
in programming languages that do not support dynamic memory management. It is then often
preferable to keep an auxiliary list, called the list of available space (LAS), or free list, which acts as a pool
where records can be dumped and retrieved. At start-up the LAS will contain all the initial memory
resources available for the linked list(s). The LAS is used as a stack and is often singly linked. Here, for
the sake of simplicity, we assume that the LAS too is stored as a doubly linked circular list. Figure 14.4 shows
graphically an example of a doubly linked circular list and the corresponding LAS, plus the operation
required for the addition of a record. In the implementation shown in the table we have two attributes
associated with a list L, namely L.head and L.n, which give the location of the header and the number
of records currently stored in the list, respectively. Consequently, LAS.n indicates the number of free
records currently available for the linked list(s). In Table (14.3) we illustrate the use of the LAS for the
insert and delete operation. We have indicated with R.cont the field where the actual data associated with
R is kept.
It remains to decide what to do when an overflow occurs. Letting the list grow dynamically is easy:
we need to allocate memory for a certain number of records and join them to the LAS. The details are
left to the reader. If we want to shrink a linked list we can always eliminate some records from the LAS
by releasing them to the operating system. Again, we should take into account that many fine grain
allocations/deallocations could cause a considerable computational overhead, and a compromise should
be found between memory usage and efficiency. We have mentioned the possibility that the list of available
storage could be shared among many lists. The only limitation is that the records in all those lists should
be of equal size. Linked lists may be implemented in languages, such as Fortran, that do not provide
pointer data type. Pointers would be substituted by array indices, and both the linked list and the LAS
could be stored on the same array. The interested reader may consult [2] for some implementation details.
TABLE 14.3 Record Addition and Deletion from a Doubly Linked Circular
List, Using a List of Available Space for Record Memory Management.
(Columns: insert data x in list L in a record placed after record Q;
delete R from list L.)
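A hedged Python sketch of the LAS-based insert and delete operations of Table 14.3 follows; the attribute names L.head, L.n, and R.cont come from the text, while CircularList, make_las, and the OverflowError signal are assumptions of this sketch:

```python
class Record:
    def __init__(self):
        self.cont = None
        self.next = self
        self.prev = self

class CircularList:
    """Doubly linked circular list with a header record, as in Figure 14.4."""
    def __init__(self):
        self.head = Record()   # header record
        self.n = 0             # number of records currently stored

def _link_after(Q, R):
    p = Q.next
    R.next = p; p.prev = R
    Q.next = R; R.prev = Q

def _unlink(R):
    R.prev.next = R.next
    R.next.prev = R.prev

def make_las(size):
    """Pre-allocate `size` free records into a LAS (itself a circular list)."""
    las = CircularList()
    for _ in range(size):
        _link_after(las.head, Record())
        las.n += 1
    return las

def insert(L, las, Q, x):
    """Table 14.3: insert data x into L after record Q, drawing storage from the LAS."""
    if las.n == 0:
        raise OverflowError("LAS empty: OVERFLOW")
    R = las.head.next
    _unlink(R); las.n -= 1
    R.cont = x
    _link_after(Q, R); L.n += 1
    return R

def delete(L, las, R):
    """Table 14.3: delete record R from L, returning its storage to the LAS."""
    _unlink(R); L.n -= 1
    _link_after(las.head, R); las.n += 1
```

Note that the LAS is used purely as a stack: records are always taken from, and returned to, the position right after its header.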
Since k is an integer number, and we expect that the k’s will be almost uniformly distributed, a simple
and effective choice for h(k) is the identity function h(k) = k. A hash table H may then be formed by an
array H.A[m], where m is the maximum node numbering allowed in the mesh. The array will be addressed
directly using the key k. Each table entry H.A[k] will either contain a null pointer, indicating that the
corresponding key is not present in the table, or a pointer to the beginning of a linked list whose records
contain the remaining keys for each face with principal key k, plus possible ancillary information. If we
use doubly linked lists, each entry in the table may store two pointers, pointing to the head and the tail
of the corresponding list, respectively. In practice, each array element acts as a header for the list.
Figure 14.5 illustrates this information structure, where, for simplicity a 2D mesh has been considered.
If we use doubly linked lists, add and delete operations are O(1), while a simple search is an O(1 + n/m)
operation, where n indicates the number of records actually stored in the table. Since in many practical
situations, such as the one in the example, the average value of n/m is relatively small (≈ 6 for a 2D
mesh), the searching is quite efficient. As for memory usage, if we assume that no ancillary data is stored,
we need approximately 2mP + nmax[2P + (l – 1)I] memory locations, where P and I are the storage
required for a pointer and an integer value, respectively, while l is the number of keys (3 in our case),
and nmax is the maximum number of records that could be stored at the same time in the linked lists. In
this case, nmax is at most the maximum number of faces in the mesh. All chained lists have records of the
same size; therefore, a common LAS is normally used to store the available records. Some memory savings
may be obtained by storing the first record of each chained list directly on the corresponding array
element, at the expense of a more complicated bookkeeping.
The structure just presented is an example of a hash table with collision resolved by chaining. The
term collision means the event caused by two or more records that hash to the same slot in the table.
Here, the event is resolved by storing all records with the same hash value in a linked list. This is not
the only possibility, however, and many other hashing techniques are present in the literature, whose
description is here omitted. In the previous example we have assumed that we know the maximum
number of different keys. What can be done if we do not know this beforehand, or if we would like to
save some memory by having a smaller table? We need to use a different hash function. There are many
choices for hash functions that may be found in the literature. However, for the type of problems just
described, the division method, i.e.,
h(k) = k mod m
*In general, h may be a function of all keys, i.e., h = h(K). For sake of simplicity, we neglect the general case.
is simple and effective. Going back to our example, if we choose m = 103, then faces {104, 505, 670} and
{342, 207, 849} have the same hash value h = 1, even if their principal key is different (104 and 207,
respectively). In order to distinguish them, we need to store also the principal key in the chained linked
list records, changing the memory requirement to approximately 2mmax P + nmax [2P + lI]. Comparing
with the previous expression, it is clear that this method is convenient when nmax < < mmax. In which
particular situations would a hash table like the one presented in the example be useful? Let us assume
that somebody has given you a tetrahedral grid, without any indication of the boundary faces. How do
you find the boundary faces? You may exploit the fact that each mesh face, apart from the ones at the
boundary, belongs to two tetrahedra, set up a hash table H of the type just described, and run the
following algorithm.
1. Loop over the elements e of the mesh
1.1. Loop over element faces
1.1.1. Compute the keys K for the face and the principal key k
1.1.2. Search K in H
• If K is present then delete the corresponding record
• Otherwise add to H the record containing the face keys
2. Traverse all items in the hash table and push them onto stack F
The stack F will now contain all boundary faces. A similar structure may also be used to dynamically store
the list of nodes surrounding each mesh node, or the list of all mesh sides and many other grid data. We
have found this hash table structure very useful and rather easy to program.
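The boundary-face algorithm above can be sketched in Python; here the language's built-in dict plays the role of the chained hash table H, with the sorted node numbers of each face as the key K, so the hashing details discussed in the text are delegated to the interpreter:

```python
def boundary_faces(elements):
    """Find the boundary faces of a tetrahedral mesh: every interior face
    is shared by exactly two tetrahedra, so it is inserted and then deleted,
    while boundary faces are seen exactly once and survive in the table."""
    table = {}                                # stands in for the hash table H
    for a, b, c, d in elements:
        for face in ((a, b, c), (a, b, d), (a, c, d), (b, c, d)):
            K = tuple(sorted(face))           # keys for the face
            if K in table:
                del table[K]                  # second occurrence: interior face
            else:
                table[K] = face               # first occurrence: keep it
    return list(table.values())               # the "stack" F of boundary faces
```

For two tetrahedra sharing one face, the routine returns the six remaining outer faces, as the shared face is inserted by the first element and deleted by the second.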
The implementation just described is useful in a dynamic setting, when add and delete operations are
required. In a static problem, when the grid is not changing, we may devise more compact representations
based on sequential storage and direct addressing. Again, let’s consider a practical problem, such as storing
a table with the nodes surrounding each given mesh node, when the mesh, formed by n nodes and ne
elements, is not changing. We use a structure K with the following attributes:
K.n = n, the number of entries in the table;
K.IA[n + 1], the array containing the pointers into array JA;
K.JA[3ne], the array containing the lists of nodes.
Figure 14.6 graphically shows how the structure works. The indices {IA[i], …, IA[i + 1] – 1} are used to
directly address the entries in array JA that contain the numbering of the nodes surrounding node i
(here, we have assumed that the smallest node number is 0). The use of the structure is straightforward.
The problem remains of how to build it in the first place. A possible technique consists of a two-sweep
algorithm. We assume that we have the list of mesh sides.* In the first sweep we loop over the sides and
we count the number of nodes surrounding each node, preparing the layout for the second pass:
1. For (i = 0; i ≤ n; i++) IA[i] = 0
2. Begin sweep 1: loop over the mesh sides i1, i2
2.1. For i ∈ {i1, i2}: IA[i]++
3. For (i = 1; i ≤ n; i++) IA[i] += IA[i – 1]
4. For (i = n – 1; i ≥ 1; i--) IA[i] = IA[i – 1]
5. IA[0] = 0
6. Begin sweep 2: loop over the mesh sides i1, i2
6.1. For i ∈ {i1, i2}:
6.1.1. JA[IA[i]] = i1 + i2 – i
6.1.2. IA[i]++
7. For (i = n; i ≥ 1; i--) IA[i] = IA[i – 1]
8. IA[0] = 0
It is worth mentioning that this structure is also the basis of the compressed sparse row format, used to
efficiently store sparse matrices.
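The two-sweep algorithm transcribes almost literally into Python; the function name and the convention that sides are pairs of 0-based node numbers are assumptions of this sketch:

```python
def nodes_around_nodes(n, sides):
    """Two-sweep construction of the (IA, JA) structure of Figure 14.6
    from the list of mesh sides (pairs of node numbers, numbered from 0)."""
    IA = [0] * (n + 1)
    # sweep 1: count the nodes surrounding each node
    for i1, i2 in sides:
        IA[i1] += 1
        IA[i2] += 1
    # turn the counts into starting offsets (steps 3-5)
    for i in range(1, n + 1):
        IA[i] += IA[i - 1]
    for i in range(n - 1, 0, -1):
        IA[i] = IA[i - 1]
    IA[0] = 0
    JA = [0] * (2 * len(sides))
    # sweep 2: fill JA, using IA[i] as a moving insertion cursor
    for i1, i2 in sides:
        for i in (i1, i2):
            JA[IA[i]] = i1 + i2 - i    # the other endpoint of the side
            IA[i] += 1
    # restore the starting offsets (steps 7-8)
    for i in range(n, 0, -1):
        IA[i] = IA[i - 1]
    IA[0] = 0
    return IA, JA
```

For the three sides of a single triangle, each node ends up with the other two as neighbors, and IA = [0, 2, 4, 6].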
*The algorithm that works using the element connectivity list, i.e., the list of the nodes on each element, is only
a bit more complicated, and it is left to the reader.
14.3.2 Heaps
Often, there is the necessity to keep track of the record in a certain set that contains the maximum (or
minimum) value of a key. For example, in a 2D mesh generation procedure based on the advancing front
method [14] we need to keep track of the front side with the minimum length, while the front is changing.
An information structure that answers this type of query is a priority queue, and a particular data
organization which could be used for this purpose is the heap. A heap is normally used for sorting
purposes and indeed the heap-sort algorithm exploits the properties of a heap to sort a set of N keys in
O(Nlog2N) time, with no need of additional storage. We will illustrate how a heap may also be useful as
a priority queue.
We will indicate in the following with > the ordering relation. A heap is formally defined as a binary
tree with the following characteristics: if k is the key associated with a heap node, and kl and kr are the
keys associated with the roots of its non-empty left and right subtrees, respectively, the following relation
holds: neither kl > k nor kr > k.
As a consequence, the key associated with each node is not “smaller” than the key of any node of its subtrees,
and the heap root is associated with the “largest” key in the set. We have placed in quotes the words “largest”
and “smaller” because the ordering relation > may in fact be arbitrary (as long as it satisfies the definition
of an ordering relation), and it does not necessarily correspond to the usual meaning of “greater than.”
An interesting feature of the heap is that, by employing the correct insertion and deletion algorithms,
the heap can be kept complete, and the addition, deletion, and simple query operations are, even in the
worst case, O(log2 n), while accessing the “largest” node is clearly O(1).
A heap T may be stored using sequential allocation. We will indicate by T.n and T.max the current
and maximum number of records stored in T, respectively, while T.H[max] is the array that will hold
the records. The left and right subtrees of the heap node stored at H[i] are rooted at H[2i + 1] and H[2i
+ 2], respectively, as illustrated in Figure 14.8. Therefore, by using the integer division operation, the
parent of the node stored in H[j] is H[(j – 1)/2]. The heap definition may then be rewritten as: neither
H[2i + 1] > H[i] nor H[2i + 2] > H[i], whenever those entries exist.
When inserting a new node, we provisionally place it in the next available position in the array and
we then climb up the heap until the appropriate location is found. Deletion could be done employing a
top-down procedure, as shown in Figure 14.9. We consider the heap rooted at the node to be deleted,
and we recursively move the “greatest” subtree root to the parent location, until we reach a leaf, into
which we move the node stored in the last array location. Finally, a bottom-up procedure analogous to that
used for node insertion is performed.*
Since the number of operations is clearly proportional to the height of the tree, we can deduce that,
even in the worst case, insertion and deletion are O(log2 n). Simple and range searches could be easily
implemented with a heap as well. However, a heap is not optimal for operations of this type.
*The terms top-down and bottom-up refer to the way a tree is normally drawn. So, by climbing up a tree we
reach the root!
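A minimal Python sketch of a sequentially allocated heap follows; insertion climbs up from the next free array position as described above, while deletion of the “largest” record uses the common move-the-last-record-and-sift-down variant rather than the hole-pulling procedure of Figure 14.9 (the two are equivalent in cost):

```python
class Heap:
    """Sequentially allocated max-heap (Figure 14.8): the children of H[i]
    are H[2i+1] and H[2i+2]; the 'largest' key sits at H[0]."""
    def __init__(self):
        self.H = []

    def insert(self, key):
        # place the new node in the next available position, then climb up
        H = self.H
        H.append(key)
        j = len(H) - 1
        while j > 0 and H[(j - 1) // 2] < H[j]:
            H[(j - 1) // 2], H[j] = H[j], H[(j - 1) // 2]
            j = (j - 1) // 2

    def delete_max(self):
        # remove the root; move the last record there and sift it down
        H = self.H
        top = H[0]
        last = H.pop()
        if H:
            H[0] = last
            i = 0
            while True:
                c = 2 * i + 1                       # left child
                if c >= len(H):
                    break
                if c + 1 < len(H) and H[c + 1] > H[c]:
                    c += 1                          # pick the "greater" child
                if H[i] >= H[c]:
                    break
                H[i], H[c] = H[c], H[i]
                i = c
        return top
```

For the advancing front application mentioned above, where the *minimum* side length is wanted, it suffices to invert the comparison (or to insert negated lengths).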
As before, > indicates an ordering relation. It should be noted that we must disambiguate the case of
equal keys, so that the comparison may be used to discriminate the records that would follow the left
branch from the ones that would go to the right. Inorder traversal of a binary search tree returns the
records in “ascending” order.
The simple search operation is obvious. We recursively compare the given key with the one stored in
the root, and we choose the right or left branch according to the comparison, until we reach either the
desired record or a leaf node. In the latter case, the search ends unsuccessfully. In the worst case, the
number of operations for a simple search is proportional to the height of the tree. For a complete tree
the search is then O(log2 n). However, the shape of a binary tree depends on the order in which the
records are inserted and, in the worst case (which, for example, happens when a set of ordered records
is inserted), the tree degenerates and the search becomes O(n). Fortunately, if the keys are inserted in
random order, it may be proved that the search is, on average, still O(log2 n) [10].
Node addition is trivial and follows the same lines of the simple search algorithm. We continue the
procedure until we reach a leaf node, of which the newly inserted node will become a left or right child,
according to the value of the key comparison. Node deletion is only slightly more complicated. If the
deleted node has fewer than two children, the deletion is straightforward: if the node is a leaf we simply
remove it, while if it has a single child, we can splice it out by connecting its parent with its child.
In the case that the deleted node has two children, we have to find its successor S in the inorder
traversal of the tree, which necessarily has at most one child. We then splice S out of the tree and put it
at the deleted node location. The resulting structure is still a binary search tree; Figure 14.10 illustrates
the procedure. It can be proved that both insert and delete operations are O(log2 n) for complete binary
search trees. Other algorithmic details may be found, for example, in [2].
A binary search tree may be kept almost complete by various techniques, for example, by adopting
the red-black tree data structure [7] or the AVL tree, whose description may be found in [22].
change. As a consequence the structure resulting from this operation will, in general, not be a digital
search tree anymore.
The name of this data structure derives from the fact that it is in principle possible to transform a key
into an ordered set of binary digits d0, d1, …, dm so that, at tree level l, the decision to follow the right
or left path could be made by examining the lth digit. In particular, for the example just shown, the
decision could be made by considering the lth significant digit of the binary representation of the number
(k – a)/(b – a), and following the left path if dl = 0.
Directly related to the digital search tree is the trie structure [11], which differs from the digital search
tree mainly because the actual data is stored only at the leaf nodes. The trie shape is completely independent
of the order in which the nodes are inserted. The shape of a digital tree, instead, still depends
on the insertion order. On the other hand, a trie uses up more memory, and the algorithms for adding
and deleting nodes are a little more complex. Unlike a binary search tree, both structures require
knowledge of the maximum and minimum values allowed for the key.
Figure 14.11 shows an example of the search structures seen so far for a given set of keys k ∈ [0, 1). For
the digital and binary trie we have indicated beside each node the associated interval.
In addition to those operations, we may want to be able to efficiently add and delete nodes to the set.
For the case d = 1, it was shown in the previous section that a binary search tree could efficiently answer
these queries. It would be natural to ask whether it can be used also for multidimensional searches.
Unfortunately, binary search is based on the existence of an ordering relation between the stored data,
and there is normally no way of determining an ordering relation between multidimensional points. In
fact, we will see that in principle an ordering relation may be found, for instance using a technique called
bit interleaving, but in practice this procedure is not feasible, as it would require costly operations, both
in terms of computation and memory.
The most popular procedures for handling multidimensional searches are either based on hierarchical
data structure or on grid methods [20]. We will illustrate some of the former, and in particular data
structures based on binary trees, quadtrees, or octrees. For the sake of simplicity, we will consider only
a Cartesian coordinate system and the two-dimensional case, the extension to 3D being obvious.
14.4.2 Quadtrees
The quadtree is a “4-ary” tree whose construction is based on the recursive decomposition of the Cartesian
plane. Its three-dimensional counterpart is the octree. There are basically two types of quadtrees, depending
on whether the space decomposition is driven by the stored point data (point-based quadtrees) or it is
determined a priori (region-based quadtrees). Broadly speaking, this subdivision is analogous to the one
existing between a binary and a digital search tree. We will in the following indicate with B the domain
bounding box defined as the smallest interval in R2 enclosing the portion of space where all points in P
will lie. Normally, it can be determined because we usually know beforehand the extension of the domain
that has to be meshed. For the sake of simplicity, we will often assume in the following that B is unitary, that
is B ≡ [ 0, 1 ) × [ 0, 1 ) . There is no loss of generality when using this assumption, as an affine transfor-
mation can always be found that maps our point data set into a domain enclosed by the unitary interval.
14.4.2.1 Region-Based Quadtrees
A region-based quadtree is based on the recursive partitioning of B into four equally sized parts, along
lines parallel to the coordinate axes. We can associate to each quadtree node N an interval N.I = [a1, b1)
× [a2, b2) where all the points stored in the tree rooted at N will lie. Each node N has four links, often
denoted by SW, SE, NW, NE, that point to the root of its subtrees, which have associated the intervals
obtained by the partitioning. Point data are usually stored only at leaf nodes, though it is also possible
to create variants where point data can be stored on any node. Figure (14.12) illustrates an example of
a region quadtree. The particular implementation shown is usually called PR-quadtree [20,19].
Point searching is done by starting from the root and recursively following the path to the subtree
root whose associated interval encloses the point, until we reach a leaf. Then the comparison is made
between the given point and the one stored in the leaf. Range searching could be performed by examining
only the points stored in the subtrees whose associated interval has a non-empty intersection with the
given range. Details for point addition/deletion procedures may be found in the cited reference. The
shape of the quadtree here presented, and consequently both search algorithm efficiency and memory
requirement, is independent of the point data insertion order, but it depends on the current set of points
stored. If the points are clustered, as often happens in mesh generation, this quadtree can use a great
deal of memory because of many empty nodes. Compression techniques have been developed to overcome
this problem: details may be found in [15]. In unstructured mesh generation the region quadtree, and
the octree in 3D, is often used not just for search purposes [12], but also as a region decomposition tool
(see also Chapter 22). To illustrate the idea behind this, let us consider the example shown in Figure 14.13,
where the line at the border with the shaded area represents a portion of the domain boundary.
FIGURE 14.13 Domain partitioning by a region quadtree. The quadtree contains the boundary points. The partitions associated with the quadtree nodes are shown with dotted lines.
A region
quadtree of the boundary nodes has been built, and we are showing the hierarchy of partitions associated
to the tree nodes. It is evident that the size of the partitions is related to the distance between boundary
points, and that the partitioning is finer near the boundary. Therefore, structures of this type may be
used as the basis for algorithms for the generation of a mesh inside the domain, in various ways. For
instance, a grid may be generated by appropriately splitting the quad/octree partitions into triangle/tet-
rahedra [23]. Alternatively, the structure may be used to create points to be triangulated by a Delaunay
type procedure [21] (cf. Chapter 16). Finally, it can be adopted for defining the mesh spacing distribution
function in an advancing front type mesh generation algorithm [9] (cf. Chapter 17).
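A minimal PR-quadtree over the unit square can be sketched as follows; points are stored only at the leaves, one per leaf, and inserting into an occupied leaf triggers the recursive split described above (duplicate points are assumed absent, since a duplicate would force an unbounded split):

```python
class PRQuadtree:
    """PR-quadtree node over B = [0,1) x [0,1); each internal node splits
    its interval N.I into four equal quadrants SW, SE, NW, NE."""
    def __init__(self, x=0.0, y=0.0, size=1.0):
        self.x, self.y, self.size = x, y, size   # SW corner and side of N.I
        self.point = None        # point stored (leaves only)
        self.children = None     # the four subtrees, or None for a leaf

    def _quadrant(self, p):
        half = self.size / 2
        col = p[0] >= self.x + half
        row = p[1] >= self.y + half
        return row * 2 + col     # 0 = SW, 1 = SE, 2 = NW, 3 = NE

    def _child_origin(self, q):
        half = self.size / 2
        return (self.x + (q % 2) * half, self.y + (q // 2) * half, half)

    def insert(self, p):
        if self.children is not None:             # internal node: recurse
            self.children[self._quadrant(p)].insert(p)
        elif self.point is None:                  # empty leaf: store here
            self.point = p
        else:                                     # occupied leaf: split it
            old, self.point = self.point, None
            self.children = [PRQuadtree(*self._child_origin(q))
                             for q in range(4)]
            self.insert(old)
            self.insert(p)

    def search(self, p):
        node = self
        while node.children is not None:          # descend to the leaf
            node = node.children[node._quadrant(p)]
        return node.point == p
```

Note that, as stated in the text, the final shape depends only on the set of points stored, not on their insertion order.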
14.4.2.2 Point-Based Quadtrees
A point quadtree is a multidimensional extension of the binary search tree. Here the branching
is determined by the stored point, as shown in Figure 14.14. It has a more compact representation than
the region quadtree, since point data is stored also at non-leaf nodes. However, the point quadtree shape
strongly depends on the order in which the data is inserted, and node deletion is rather complex.
Therefore, it is not well suited for a dynamic setting. However, for a static situation, where the points
are known a priori, a simple technique has been devised [4] to generate an optimized point quadtree;
this makes the structure very interesting in such settings, since simple search operations become
O(log4 n), n being the total number of points stored. It should be mentioned that a procedure that allows
for dynamic quadtree optimization has also been devised [16]. Its description is beyond the scope of this
chapter.
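The branching rule of the point quadtree may be sketched as follows; each inserted point becomes a node whose four links partition the plane around that point (the names are illustrative):

```python
class PQNode:
    """Point quadtree node: branching is determined by the stored point."""
    def __init__(self, point):
        self.point = point
        self.children = [None] * 4   # SW, SE, NW, NE relative to self.point

    def _quadrant(self, p):
        # compare both coordinates against the stored point
        return (p[1] >= self.point[1]) * 2 + (p[0] >= self.point[0])

def pq_insert(root, p):
    """Descend as in a search; the new point becomes a child of the node reached."""
    if root is None:
        return PQNode(p)
    q = root._quadrant(p)
    root.children[q] = pq_insert(root.children[q], p)
    return root

def pq_search(root, p):
    while root is not None and root.point != p:
        root = root.children[root._quadrant(p)]
    return root is not None
```

The dependence on insertion order is apparent: inserting the points in a poorly chosen sequence degenerates the tree, which is what the optimization techniques cited above are designed to avoid.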
insertion order is more efficient. Range searches in an ADT are made by traversing the subtrees associated
with intervals which intersect the given range.
A region-decomposition-based structure similar to the ADT, where the data points are stored only at leaf
nodes, is the bintree [20]; it has not been considered here because of its higher memory requirement
compared with the ADT.
14.4.3.1 Bit Interleaving
For sake of completeness, we mention how, at least theoretically, a binary search tree may be used also
for multi-dimensional searching using a technique called bit interleaving. Let us assume that B is unitary.
Then, given a point P = (x, y) we may consider the binary representation of its coordinates, which we
will indicate as x0, x1, x2, …, xd and y0, y1, y2, …, yd. We may now build a code by interleaving the binary
digits, obtaining x0, y0, …, xd, yd, and define an ordering relation by treating the code as the binary
representation of a number. The code is unique for each point in B, and we can use it as a key for the
construction of a binary search tree. This technique, however, is not practical because it would require
storing at each node a code that has a number of significant digits twice as large as the one required for
the normal representation of a float (three times as large for 3D cases!). It may be noted, however, that
the ADT may indeed be interpreted as a digital tree where the discrimination between left and right
branching at level l is made on the base of the lth digit of the code built by a bit interleaving procedure
(without actually constructing the code!).
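The bit-interleaving construction can be sketched directly; the routine below extracts a fixed number of binary digits from each coordinate of a point in the unitary B and merges them into a single integer key:

```python
def interleave(x, y, digits=8):
    """Build the interleaved code x0 y0 x1 y1 ... for a point (x, y) in
    [0,1) x [0,1), using `digits` binary digits per coordinate."""
    code = 0
    for _ in range(digits):
        x *= 2
        dx, x = int(x), x - int(x)   # next binary digit of x, and remainder
        y *= 2
        dy, y = int(y), y - int(y)   # next binary digit of y, and remainder
        code = (code << 2) | (dx << 1) | dy
    return code
```

The codes can then serve as keys of an ordinary binary search tree, which is exactly the scheme the text deems impractical: for full precision the code needs twice as many significant bits as a coordinate.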
procedure. In the first step we associate with each geometrical entity of interest G its smallest enclosing
interval I_G ≡ [x1^G, x2^G] × [y1^G, y2^G], and we then build a specialized data structure which enables one to
efficiently solve the following problem.
Given a set I of intervals (rectangles in 2D or hexahedra in 3D), find the subset H ⊂ I of all
elements of I which intersect a given arbitrary interval.
In the second phase, the actual geometrical intersection test will be made, restricted only to those
geometrical entities associated to the elements of H.
Data structures that enable solving efficiently this type of problem may be subdivided into two
categories: the ones that represent an interval in Rn as a point in R2n, and those that directly store the
intervals. An example of the latter technique is the R-tree [20], which has been recently exploited in a
visualization procedure for three-dimensional unstructured grid for storing sub-volumes so that they
can be quickly retrieved from disk [13]. We will here concentrate on the first technique: i.e., how to
represent an interval as a point living in a greater dimensional space.
14.4.4.1 Representing an Interval as a Point
Let us consider the 2D case, where intervals are rectangles with edges parallel to the coordinate axes. A
rectangle R ≡ [x1, x2] × [y1, y2] may be represented by a point P ∈ R4. There are different representations
possible, two of which are listed in the following:
1. P ≡ [x1, y1, x2, y2]
2. P ≡ [xc, yc, dx, dy], where xc = (x1 + x2)/2, dx = x2 – xc, etc.
In the following we will consider the first representation. Once the rectangle has been converted into a
point, we can adopt either a k-d or an ADT data structure both for searching and geometrical intersection
problems. If we use an ADT tree, the problem of finding the possible intersections can be solved by
traversing the tree in preorder, excluding those subtrees whose associated interval in R4 cannot intersect
the given rectangle. Figure 14.16 shows a simple example of this technique.
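Representation 1 and the ensuing overlap test may be sketched as follows; for brevity a linear scan stands in for the ADT traversal, which would report the same subset H while pruning entire subtrees:

```python
def rect_to_point(r):
    """Representation 1: rectangle [x1,x2] x [y1,y2] -> point (x1, y1, x2, y2) in R^4."""
    (x1, y1), (x2, y2) = r
    return (x1, y1, x2, y2)

def intersects(p, q):
    """Two closed rectangles, stored as points in R^4, overlap iff their
    projections overlap on each axis: p.x1 <= q.x2 and q.x1 <= p.x2,
    and likewise in y."""
    return p[0] <= q[2] and q[0] <= p[2] and p[1] <= q[3] and q[1] <= p[3]

def candidates(points, query):
    """Report all stored rectangles passing the overlap test against `query`;
    only these need the actual (more expensive) geometric intersection test."""
    return [p for p in points if intersects(p, query)]
```

In the tree-based version, the per-axis inequalities above are exactly the bounds used to exclude subtrees whose associated interval in R^4 cannot intersect the given rectangle.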
References
1. Bonet, J. and Peraire, J., An alternating digital tree (ADT) algorithm for 3D geometric searching
and intersection problems, Int. J. Num. Meths. Eng., 31, pp. 1–17, 1991.
2. Cormen, T. H., Leiserson, C. E., and Rivest, R. L., Introduction to Algorithms, The MIT Electrical
Engineering and Computer Science Series, McGraw-Hill, 1990.
3. Friedman, J. H., Bentley, J. L., and Finkel, R. A., An algorithm for finding best matches in loga-
rithmic expected time, ACM Transactions on Mathematical Software, 3(3), pp. 209–226, September
1977.
4. Finkel, R. A. and Bentley, J. L., Quad trees: a data structure for retrieval on composite keys, Acta
Inform. 1974, 4: pp 1–9.
5. Gaither, A., A topology model for numerical grid generation, Weatherill, N., Eiseman, P. R., Hauser,
J., Thompson, J. F., (Eds.), Proceedings of the 4th International Conference on Numerical Grid
Generation and Related Fields, Swansea, Pineridge Press, 1994.
6. Gaither, A., An efficient block detection algorithm for structured grid generation, Soni, B. K.,
Thompson, J. F., Hauser, J., Eiseman, P. R., (Eds.), Numerical Grid Generation in Computational
Field Simulations, Vol. 1, 1996.
7. Guibas, L. J. and Sedgewick, R., A dichromatic framework for balanced trees, Proceedings of
the 19th Annual Symposium on Foundations of Computer Science, IEEE Computer Society, 1978,
pp. 8–21.
8. Guibas, L. J. and Stolfi, J., Primitives for the manipulation of general subdivisions and the com-
putation of Voronoï diagrams, ACM Transaction on Graphics, April 1985, 4(2).
9. Kallinderis, Y., Prismatic/tetrahedral grid generation for complex geometries, Computational Fluid
Dynamics, Lecture Series 1996-06. von Karman Institute for Fluid Dynamics, Belgium, March 1996.
10. Knuth, D. E., The Art of Computer Programming. Vol. 1, Fundamental Algorithms of Addison-Wesley
Series in Computer Science and Information Processing. Addison–Wesley, 2nd ed., 1973.
11. Knuth, D. E., The Art of Computer Programming, Vol. 3, Sorting and Searching of Addison–Wesley
Series in Computer Science and Information Processing. Addison-Wesley, 1973.
12. Löhner, R., Generation of three-dimensional unstructured grids by the advancing front method, AIAA
Paper 88-0515, 1988.
13. Ma, K. L., Leutenegger, S., and Mavriplis, D., Interactive exploration of large 3D unstructured-
grid data, Technical Report 96-63, ICASE, 1996.
14. Morgan, K., Peraire, J., and Peiró, J., Unstructured grid methods for compressible flows, Special
Course on Unstructured Grid Methods for Advection Dominated Flows, 1992, AGARD-R-787.
15. Ohsawa, Y. and Sakauchi, M., The BD-tree, a new n-dimensional data structure with highly
efficient dynamic characteristics, Mason, R.E.A., (Ed.), Information Processing 83, North-Holland,
Amsterdam, 1983, pp. 539–544.
16. Overmars, M. H. and van Leeuwen, J., Dynamic multi-dimensional data structures based on quad-
and k-d trees, Acta Informatica, 17(3), pp. 267–285, 1982.
17. Preparata F. P. and Shamos, M. I., Computational Geometry: An Introduction, Springer–Verlag, 1985.
18. Rivara, M. C., Design and data structures of fully adaptive, multigrid, finite-element software,
ACM Trans. Math. Soft., 10, 1984.