Comput Methods Appl Sci. Author manuscript; available in PMC 2010 September 22.
Published in final edited form as:
Comput Methods Appl Sci. 2009 January 1; 13: 1–28.
doi: 10.1007/978-1-4020-9086-8_1
PMCID: PMC2943643
NIHMSID: NIHMS155076

Modeling Cardiovascular Anatomy from Patient-Specific Imaging Data

1 Introduction

The importance of modern imaging techniques for capturing detailed structural information of a biological system cannot be overstated. Unfortunately, images do not reveal the “full functional story,” and a spatially realistic computer model is often necessary for a comprehensive understanding of the complicated structural and physiological properties of the biological system’s entities under investigation [1]. Deeper insights into structure-to-function relationships of different entities are achieved via finite element simulations of the modeled biomedical process. A 3D (three dimensional) finite element meshed computer model of the biological system is therefore the first step in performing such simulations.

The behavioral attributes of a biological entity, or the physiological interactions between different participating components of a biological system, are often modeled mathematically via a coupled set of differential and integral equations, and quite often numerically evaluated using finite element (or boundary element) simulations. To further emphasize the premise of cardiac modeling from imaging data, we state a few computational biomedical modeling and simulation examples: 3D computational modeling of the human heart for a quantitative analysis of cyclical electrical conductance on the heart membrane [2, 3, 4, 5, 6]; the biomechanical properties (stress-strain, elasticity) of the heart ventricular walls [7, 8, 9, 10, 11, 12]; and 3D modeling and simulation of pulsatile blood flow through human arteries/veins for vascular bypass surgery pre-planning on a patient-specific basis [13, 14, 15, 16, 17, 18]. A finite element decomposition of the geometric domain, capturing the detailed spatial features that can be gleaned from the imaging, is therefore the essential first step toward performing the necessary numerical simulations [19, 20, 21, 22].

Modeling of human vasculature from three-dimensional (3D) Computed Tomography (CT) images of the thorax is a critical step for computer-aided diagnosis (CAD) in disease domains such as lung nodules [25], coronary artery disease [26], and pulmonary embolism (PE) [27]. Even though there are many published approaches, the problem is still unsolved. Surveys of various techniques on this topic can be found in [28, 29]. Thoracic CT angiography (CTA) imaging is often performed for patients suspected of having a PE, which is a thrombus (a clot of blood) lodged in the pulmonary vasculature [30]. Therefore, a model of the vasculature that distinguishes between the arterial and venous blood vessel trees is a crucial step toward detecting pulmonary embolism.

In the subsequent subsections we highlight the computational pipeline, the main algorithmic components, and a few representative results of our Cardiovascular Modeling from Imaging software.

2 Data Processing

The Imaging-to-Modeling software system for cardiovascular data employs both Image Processing and Geometry Processing functionalities to produce a suitable linear or higher order meshed model of the anatomy. Figure 2 describes the data-flow layout. We describe the major algorithmic components of each of the processing modules in Sections 2.1 and 2.2. Note that the modules are used selectively depending on the nature and quality of the input imaging data (Section 3).

Fig. 2
Data Flow of Cardio-Vascular Modeling from 3D Imaging Data: There are two major data processing algorithmic modules - Image Processing and Geometry Processing. The Image Processing module consists of sub-modules for Contrast Enhancement, Classification/Segmentation, ...

2.1 Image Processing

[A] Contrast Enhancement

The three dimensional intensity data often possesses low contrast between structural features and the background, making further processing all the more difficult. Image contrast enhancement is a process used to improve the image quality, both for better visual appearance and for subsequent operations. The most commonly used methods rely on global [31, 32] or local histogram equalization [31, 32, 33, 34], the retinex model [35, 36], and wavelet decomposition [37, 38].

We have developed a fast method for image contrast enhancement [39] based on a localized version of the classical contrast manipulation [31, 32].

In this method, we design an adaptive transfer function for each individual voxel, based on the intensities in a suitable local neighborhood. First, we compute the local statistics (local average, local minimum, and local maximum) for each voxel using a fast propagation scheme [40, 41]. Then a transfer function is designed based upon the calculated local statistics. Various linear or nonlinear functions can be used here to stretch the contrast profile. We build a transfer function which consists of two pieces: a convex curve in the dark-intensity range and a concave curve in the bright-intensity range. The overall function is C1 continuous. Finally, we map the intensity of each voxel to a new one using the calculated transfer function. The performance of this algorithm is shown in Figure 3.
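
As an illustration, the following Python sketch implements this kind of per-voxel adaptive contrast stretching. It is a minimal sketch, not our production code: the box-neighborhood local statistics and the two power-curve pieces (convex below the local average, concave above it, meeting with matching value and slope so the mapping is C1) are assumptions chosen to mirror the description above.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, uniform_filter

def enhance_contrast(vol, radius=5, gamma=2.0):
    """Per-voxel adaptive contrast stretching (illustrative sketch)."""
    size = 2 * radius + 1
    vol = vol.astype(np.float64)
    lo = minimum_filter(vol, size=size)     # local minimum
    hi = maximum_filter(vol, size=size)     # local maximum
    avg = uniform_filter(vol, size=size)    # local average
    rng = np.maximum(hi - lo, 1e-6)
    t = np.clip((vol - lo) / rng, 0.0, 1.0)          # position in local range
    m = np.clip((avg - lo) / rng, 1e-3, 1.0 - 1e-3)  # joint at local average
    # Convex power curve below the local average, concave above it; the two
    # pieces meet with equal value and slope, so the mapping is C1.
    out = np.where(t < m,
                   m * (t / m) ** gamma,
                   1.0 - (1.0 - m) * ((1.0 - t) / (1.0 - m)) ** gamma)
    return lo + out * rng
```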

Fig. 3
The performance of the contrast enhancement algorithm is shown on one slice of the CTA data.

[B] Filtering

The input images are often contaminated with noise and therefore need to be filtered. Traditional image filters include Gaussian filtering, median filtering, and frequency domain filtering [31]. Compared to these, anisotropic filters are preferred as they tend to preserve features better. Bilateral filtering [42, 43, 44, 45] is a straightforward extension of Gaussian filtering obtained by multiplying an additional term into the weighting function. Partial differential equation (PDE) based techniques, known as anisotropic geometric diffusion, have also been studied [46, 47]. Another popular technique for anisotropic filtering is wavelet transformation [48]. By carefully designing the filter, one can smooth image noise while maintaining the sharpness of the edges in an image [49]. Finally, the development of nonlinear median-based filters in recent years has also produced promising results [50, 51]. Among the aforementioned techniques, two methods for noise reduction have been suggested for tomographic data sets, namely wavelet filtering [52] and non-linear anisotropic diffusion [53].

Our approach, utilizing bilateral pre-filtering coupled with an evolution driven anisotropic geometric diffusion PDE (partial differential equation), has proven effective in enhancing the features of intensity maps. The PDE model is:

$$\partial_t \phi \;-\; \|\nabla\phi\|\,\operatorname{div}\!\left( D_\sigma \frac{\nabla\phi}{\|\nabla\phi\|} \right) \;=\; 0$$

The efficacy of our method is based on a careful selection of the anisotropic diffusion tensor Dσ based on estimates of the normal and two principal curvatures and curvature directions of a feature isosurface (level-set) in three dimensions [54, 55, 56]. The diffusivities along the three independent directions of the feature isosurface are determined by the local second order variation of the intensity function at each voxel. In order to estimate continuous first and second order partial derivatives, a tricubic B-spline basis is used to locally approximate the original intensity.
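
The full method builds the tensor D_σ from curvature estimates; as a much simplified stand-in, the sketch below runs a scalar edge-stopping (Perona-Malik style [46]) diffusion in 3D. It conveys the idea of diffusing within smooth regions while slowing across edges, but it lacks the directional behavior of the tensor model, so it should be read only as an illustration.

```python
import numpy as np

def diffuse(vol, n_iter=10, kappa=30.0, dt=0.1):
    """Scalar edge-stopping diffusion in 3D (explicit updates).

    The diffusivity shrinks where the local gradient is large, so edges are
    preserved while flat regions are smoothed. np.roll gives periodic
    boundaries, which is acceptable for a sketch.
    """
    u = vol.astype(np.float64)
    for _ in range(n_iter):
        flux = np.zeros_like(u)
        for ax in range(3):
            fwd = np.roll(u, -1, axis=ax) - u     # forward difference
            bwd = np.roll(u, 1, axis=ax) - u      # backward difference
            flux += np.exp(-(fwd / kappa) ** 2) * fwd \
                  + np.exp(-(bwd / kappa) ** 2) * bwd
        u += dt * flux
    return u
```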

[C] Classification/Segmentation

Voxel Classification

The Fuzzy C-Means (FCM) algorithm [57] and the Expectation Maximization (EM) algorithm [58] have been used for soft clustering in data-mining and image classification. Pham et al. proposed an Adaptive Fuzzy C-Means (AFCM) algorithm to classify inhomogeneous medical images and volume datasets [59, 60]. Ahmed et al. proposed a bias corrected FCM algorithm to compensate for inhomogeneities in images and volume datasets [61, 62]. Each of these algorithms minimizes an objective function through iterative methods. Gopal et al. proposed a maximum likelihood estimation algorithm with a Spatially Variant Finite Mixture model (SVFMM) for image classification [63]. Laidlaw suggested a partial-volume Bayesian classification algorithm based on Bayes’ theorem [64].

Our goal in 3D map segmentation is to partition the map into a number of connected regions of interest. We have implemented and compared several classification algorithms [64, 65, 66, 67]. We first locate the seed points by gradient vector diffusion [68]. We then compute the min-max intensity range of every seed point’s neighborhood and cluster seed points into the same region if their min-max ranges overlap. We then apply a GVF snake [69, 70] to cluster the voxels falling into the separate regions. In addition, the contour spectrum is used to identify the number of materials in the image and to select critical isovalues based on the volume fraction of each material [71]. We have also developed a multi-dimensional signature based voxel classification scheme [72] appropriate for medical imaging data. Figure 5 shows the results of the classification on a single slice of the patient scan, where the voxels have been classified into the background, lungs, and vasculature.
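
A minimal sketch of the min-max clustering step is given below. The neighborhood radius, the union-find merging, and the assumption that seeds lie away from the volume border are illustrative choices, not the exact implementation.

```python
import numpy as np

def cluster_seeds(vol, seeds, radius=2):
    """Merge seeds whose neighborhood min-max intensity ranges overlap.

    `seeds` is a list of (x, y, z) voxel coordinates assumed to lie at least
    `radius` voxels from the volume border. Returns one region label per
    seed, computed with a small union-find.
    """
    ranges = []
    for (x, y, z) in seeds:
        nb = vol[x-radius:x+radius+1, y-radius:y+radius+1, z-radius:z+radius+1]
        ranges.append((nb.min(), nb.max()))
    parent = list(range(len(seeds)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    for i in range(len(seeds)):
        for j in range(i + 1, len(seeds)):
            (lo_i, hi_i), (lo_j, hi_j) = ranges[i], ranges[j]
            if lo_i <= hi_j and lo_j <= hi_i:   # the two ranges overlap
                parent[find(i)] = find(j)
    return [find(i) for i in range(len(seeds))]
```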

Fig. 5
Voxel Classification: A single slice of CTA image of a patient (a) is classified to tag voxels belonging to different anatomical regions (b). The intensity values are also marked (b). The final classified voxelized image is shown with most important regions ...

Segmentation via Fast Marching Method

Segmentation is a way to electronically dissect the significant biological components, and thereby obtain a clear view into the machinery’s architectural organization [73]. Segmentation is usually carried out either manually [74, 75, 76, 77, 78] or semi-automatically [79, 80]. Current efforts on the decomposition still largely rely on manual work with the assistance of a graphical user interface [81, 82]. Manual segmentation can be tedious and often subjective [76, 83]. Automated segmentation is still recognized as one of the hardest tasks in the field of image processing, although various techniques have been proposed for automated or semi-automated segmentation. Commonly used methods include segmentation based on edge detection, region growing and/or region merging, active curve/surface motion, and model based segmentation. In particular, two techniques were discussed in detail in the electron tomography community. One is the watershed immersion method [79] and the other is based on normalized graph cut and eigenvector analysis [80].

In [84, 85, 86] we adopted a variant of the fast marching method [87, 88, 89]. In this method a contour is initialized from a pre-chosen seed point and allowed to grow until a certain stopping condition is reached. Traditionally this method is designed for single-object segmentation. We present an approach based on the idea of “re-initialization”, in which the critical points are regarded and classified as seeds. Every seed initiates one contour and all contours start to grow simultaneously and independently. We further classify the critical points into clusters and merge the growing contours initiated by critical points in the same cluster. This multi-label idea has been used elsewhere (e.g., [90]), but the detection and classification of seeds are different in our approach. Figure 6 shows the process of segmentation, namely the seed point classification and region growing on a single slice of the image, followed by the final segmented three dimensional model of the human heart from patient imaging data.
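
The sketch below illustrates the multi-seeded front propagation idea with a simple priority-queue region growing. The actual fast marching variant of [84, 85, 86] solves an Eikonal equation and uses different speed and stopping rules, so the cost model and threshold here are assumptions.

```python
import heapq
import numpy as np

def multi_seed_grow(cost, seeds, max_cost):
    """Grow one labeled front per seed; fronts advance in order of increasing
    accumulated cost and stop where a cheaper front already claimed a voxel."""
    labels = np.full(cost.shape, -1, dtype=np.int32)
    heap = [(0.0, tuple(s), k) for k, s in enumerate(seeds)]
    heapq.heapify(heap)
    while heap:
        c, (x, y, z), k = heapq.heappop(heap)
        if labels[x, y, z] != -1 or c > max_cost:  # claimed, or stop condition
            continue
        labels[x, y, z] = k
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0),
                           (0,-1,0), (0,0,1), (0,0,-1)):
            nx, ny, nz = x + dx, y + dy, z + dz
            if (0 <= nx < cost.shape[0] and 0 <= ny < cost.shape[1]
                    and 0 <= nz < cost.shape[2] and labels[nx, ny, nz] == -1):
                heapq.heappush(heap, (c + cost[nx, ny, nz], (nx, ny, nz), k))
    return labels
```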

Fig. 6
Segmentation via Fast Marching Method: Leftmost subfigure shows a single slice of the CTA image of a patient along with manually placed seeds for each of the main compartments of the human heart to be identified. The middle subfigure shows the segmentation ...

[D] Skeleton Extraction

Extraction of a skeletal description often leads to a complexity-reducing, better understanding of the image, as it amplifies the lower dimensional key features present in the data. Image based skeletonization algorithms abound in the field of image processing. The previous efforts on this topic by other authors can be categorized into three approaches - one based on isotropic diffusion, governed by a set of linear PDEs [91, 92], one using scale-space theory [93, 94, 95], and one based on a pseudo-distance map [96]. We have developed two distinct approaches to extract skeletons from imaging data which offer robustness and efficiency. The first approach is based on the critical point structure of the imaging data. We first compute the gradient vector field at every voxel of the imaging data. Because of noise, this vector field is somewhat arbitrary and does not carry much useful information. To bring out the underlying structure, we then apply gradient vector diffusion (GVF). The resulting vector field is analyzed to detect the critical points, and these critical points are joined to form a skeletal structure of the foreground of the image. The details of the process are given in [56].

The second approach for skeletonization starts with an isosurface and the distance function induced by it [28]. It then places a sequence of inner medial balls in a greedy fashion, capturing as much inner volume as possible. A neighborhood graph, based on the intersection pattern of these inner medial balls, is then constructed, which provides the one dimensional skeletal structure. Figure 7 shows an example result of this algorithm applied to the CTA data of a human heart.
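
The following is a rough sketch of the greedy inner medial ball idea on a voxelized shape. The exact ball placement and graph construction of [28] differ in detail; the minimum radius threshold and the coverage test here are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def medial_ball_skeleton(mask, min_radius=1.0):
    """Greedily place inner medial balls and link overlapping ones."""
    dist = distance_transform_edt(mask)       # distance to the shape boundary
    free = mask.astype(bool).copy()           # interior volume not yet covered
    grid = np.indices(mask.shape)
    balls = []
    while True:
        d = np.where(free, dist, 0.0)
        r = d.max()
        if r < min_radius:                    # remaining balls are too small
            break
        c = np.unravel_index(np.argmax(d), d.shape)
        balls.append((c, r))
        covered = ((grid[0] - c[0])**2 + (grid[1] - c[1])**2
                   + (grid[2] - c[2])**2) <= r * r
        free &= ~covered                      # remove the captured volume
    # Neighborhood graph: an edge wherever two balls intersect.
    edges = [(i, j) for i in range(len(balls)) for j in range(i + 1, len(balls))
             if np.linalg.norm(np.subtract(balls[i][0], balls[j][0]))
             < balls[i][1] + balls[j][1]]
    return balls, edges
```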

Fig. 7
Skeletonization: A slice of the imaging data, its distance map and the extracted skeletal graph using the algorithm in [28] are shown.

[E] Flexible Alignment

Image registration is commonly employed to flexibly match two different instances of a biological structure. In the context of cardiovascular modeling, the problem is to fit one image of the anatomy in question to its counterpart observed at a different time, or imaged from a different patient. Based on the transformation model that is applied to one instance (source) to fit the other (target), image registration is primarily of two types - Affine and Elastic/Deformable.

In the case of affine registration, the relationship between the source and target image is established via a set of transformation parameters, and those parameters are estimated by minimizing a quadratic error function. Algorithms of this type can be found in [97, 98, 99]. We have developed an algorithm [100] which exploits non-equispaced Fourier transformation techniques [101, 102] to speed up the affine image registration process.
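
A brute-force sketch of the underlying optimization is shown below: a 12-parameter affine transform is applied to the source volume and the quadratic intensity error against the target is minimized. The non-equispaced FFT acceleration of [100] is not reproduced here; only the objective is illustrated, and the optimizer choice is an assumption.

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize

def register_affine(source, target):
    """Estimate 12 affine parameters by minimizing the quadratic error."""
    def error(p):
        A = np.eye(3) + p[:9].reshape(3, 3)                # linear part
        warped = affine_transform(source, A, offset=p[9:], order=1)
        return np.mean((warped - target) ** 2)             # quadratic error
    res = minimize(error, np.zeros(12), method="Powell")   # start at identity
    return res.x
```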

The task is far more challenging when the deformation of one image needs to be taken into account to properly fit it to the other image. For a nice survey of affine and deformable image registration algorithms, see [103]. However this survey is old, and since its publication many other algorithms have been published. Bajcsy and Kovacic modeled the elastic image registration problem by the deformation of elastic plates [104]. Christensen et al. considered the deforming image to be embedded in a viscous fluid whose motion is governed by the Navier-Stokes equation [105]. Based on a similar viscous fluid registration scheme, Yanovsky et al. designed a new energy function introducing Jacobian maps [106], and this method was shown to perform better than [105] in terms of convergence and stability. There are other methods based on level sets, e.g., by Clarenz et al. [107], and on the edge matching technique of Mumford and Shah [108].

2.2 Geometry Processing

[A] Surface Extraction

Geometry extraction from three dimensional volumetric data is a primary step toward further analysis. There are several approaches for accomplishing this task.

Contouring

Isosurfacing is a popular method to extract surface geometry from a scalar volume containing the intensity values of the scanned anatomy. There are typically two types of contouring methods frequently used in the literature - Primal Contouring and Dual Contouring. The most widely used primal contouring technique is the Marching Cubes method [109], which extracts the geometry in a piecewise fashion by visiting every voxel of the volume data. Several improvements of this technique have been reported since the first appearance of the algorithm [110]. The dual contouring technique is similar to primal contouring in the sense that it also extracts the isosurface in a piecewise fashion. However, it samples every voxel, instead of every edge of the voxel, to better approximate possible sharp features in the extracted geometry. We have experimented with both techniques for extracting geometry from cardiac CT data and have found that they perform similarly, with neither showing a significant advantage over the other. Figure 8 shows sample isocontours extracted from CTA imaging data of a human heart.
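
For reference, primal contouring is essentially a one-liner with the marching cubes implementation in scikit-image; the choice of isovalue (here a parameter) would in practice come, e.g., from the contour spectrum mentioned earlier.

```python
from skimage import measure

def extract_isosurface(vol, isovalue):
    """Primal contouring of a volume at the given isovalue."""
    verts, faces, normals, values = measure.marching_cubes(vol, level=isovalue)
    return verts, faces
```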

Fig. 8
Contouring: A single annotated slice of human cardio-vasculature and the geometry extracted from the imaging data via contouring (isosurfacing) are shown from left to right. In the rightmost subfigure, the heart of the patient is shown in green while ...

Other than isocontouring, one can also apply level set based methods where a seed is grown to capture the boundary of a region in the image based on the intensity values. In the literature, such techniques are commonly known as snakes [69, 70]. Note that this is similar to the image segmentation technique described earlier.

Point Cloud Reconstruction

Both of these approaches are very sensitive to the noise present in the data, and isosurfacing techniques especially suffer when the imaging data is inhomogeneous. To circumvent these problems we adopt a third approach which is based on scattered data interpolation. After performing image segmentation on the CT image, we obtain a set of voxels belonging to every region of interest. We then apply point based reconstruction to extract the geometry from the cloud of boundary voxels. Point based reconstruction has been researched extensively in the last decade; we refer the reader to recent surveys for prior work in this area, e.g., [111]. We have adopted two recent techniques for our purpose of reconstruction - TightCocone and RobustCocone. The TightCocone algorithm by Dey and Goswami [112] reconstructs a watertight triangulated surface from a possibly undersampled input point cloud. A variant of this algorithm, called RobustCocone, was also developed in 2004 and is particularly suitable for noisy data. In our case, we often encounter noise in the segmented image even after applying image segmentation techniques, and to tackle such cases we use RobustCocone for reconstructing the geometry. Figure 9 shows the results of surface reconstruction on the point sets sampling the compartments of a human heart.

Fig. 9
Surface Reconstruction from Point Cloud Data: Major components of the human heart are reconstructed using the voxels surrounding the boundaries of the individual regions.

Surface reconstruction from scattered data has also been approached using variational methods. These techniques typically formulate an energy function based on the input data points and extract a surface that minimizes that energy. Such an approach was first advocated by Zhao et al. [113], who formulated the energy as the integral of the distance function weighted by the area element over the input set of primitives to which a surface needs to be fit. They then evolved an initial guess using a convection based approach.

We adopt an approach based on the higher order level set spline (HLS) method. Given a non-negative energy function g(x), the surface Γ is defined to be the one that minimizes the energy $E(\Gamma) = \int_\Gamma g(x)\,dx + \epsilon \int_\Gamma h(x, \mathbf{n})\,dx$. Given an input set of points, one then formulates this energy using the distance function and evolves an initial approximation, guiding the evolution so that the resulting deformation minimizes the energy. Details of this method can be found in [114].

[B] Curation/Filtering

The cardiovascular geometry extracted from imaging data typically has topological anomalies, such as small disconnected components and spurious noisy features. Therefore a careful investigation and subsequent removal of the spurious features present in the data is essential. The following are typical scenarios.

Regularization

Geometry from volume data is often reconstructed via image segmentation. However, the set of voxels segregated from the imaging data does not always conform to the true surface topology. As a result, we encounter subsets of voxels which do not sample a two dimensional manifold. It is therefore important to recognize the dimension of the space underlying the voxels marked by the segmentation process and remove the spurious ones. To perform this task, we use the technique described in [115]. A similar Voronoi-based approach was also reported in [116]. Following [115], we first construct a k-neighborhood graph. Then for every point we collect the neighboring points and perform Principal Component Analysis (PCA) on that subset. The eigenvalues of the covariance matrix determine the underlying dimension of the manifold. More precisely, if all the eigenvalues are almost equal, the voxel is inside the segmented region and the point samples a three dimensional manifold. If two eigenvalues are almost equal and the third is much smaller, the voxel lies on the boundary surface of the segmented region and is a true candidate for subsequent geometry reconstruction. Finally, if two of the eigenvalues are much smaller than the third one, then the voxel samples a dangling one dimensional strand, and such voxels must be removed.
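
A compact sketch of this PCA test is given below; the neighborhood size k and the eigenvalue ratio threshold are assumptions, and real data usually needs some tuning of both.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_dimension(points, k=20, thin=0.1):
    """Classify each point by the eigenvalue spread of its k-NN covariance:
    3 = interior (volume), 2 = boundary surface, 1 = dangling strand."""
    tree = cKDTree(points)
    dims = np.empty(len(points), dtype=np.int32)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        nb = points[idx] - points[idx].mean(axis=0)
        w = np.linalg.eigvalsh(nb.T @ nb)      # eigenvalues, ascending
        if w[0] > thin * w[2]:                 # all three comparable
            dims[i] = 3
        elif w[1] > thin * w[2]:               # one small, two comparable
            dims[i] = 2
        else:                                  # two small, one dominant
            dims[i] = 1
    return dims
```

Only the points classified as surface samples (dims == 2) are passed on to the reconstruction stage.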

Volumetric Feature Quantification

Given a set of points P sampling the entire shape, possibly contaminated with topological artifacts like small connected components and thin tunnels, we synthesize a distance function h_P which assigns to every point in R^3 the distance to the nearest sample point in P. There are four types of critical points of h_P, namely maxima, index 2 saddles, index 1 saddles, and minima. It was shown that these critical points can be detected efficiently using the duality of the Voronoi and Delaunay diagrams of the original point set P [117]. It was further shown that the stable manifolds of the maxima interior and exterior to the shape contain useful information connecting to the primal and complementary feature space of the shape [118, 119]. Figure 10 shows how the bone, ribs, and thin blood vessels are removed via the volumetric feature quantification process, since they are not essential for creating a suitable model of the human heart.
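
As a rough illustration, h_P can be sampled on a grid with a k-d tree and its local maxima located with a neighborhood filter, as sketched below; the actual algorithm of [117] detects the critical points exactly via Voronoi-Delaunay duality rather than on a grid.

```python
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.spatial import cKDTree

def distance_maxima(points, shape):
    """Sample h_P on a voxel grid and return its local maxima (approximate)."""
    tree = cKDTree(points)
    grid = np.stack(np.indices(shape), axis=-1).reshape(-1, 3)
    h, _ = tree.query(grid)                  # h_P: distance to nearest sample
    h = h.reshape(shape)
    is_max = (h == maximum_filter(h, size=3)) & (h > 0)
    return h, np.argwhere(is_max)
```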

Fig. 10
Geometry Curation: Geometry of human heart extracted from the imaging data, which is cluttered with bone and other unnecessary parts, is cleaned up using the curation process.

[C] Segmentation

Based on the critical points of the distance function induced by the input geometry, we perform geometry segmentation as follows. The details of the algorithm are given in [118].

The geometric shape is given by a set of points P sampling the shape. A feature of the shape is then defined in terms of the stable manifold of a maximum of the distance function h_P. The maxima are first computed by identifying the Voronoi vertices which lie inside their dual tetrahedra. Applying a Delaunay-based reconstruction technique to the point set, one can further classify the tetrahedra holding the maxima as inside or outside. For our purpose we use only the interior maxima. We compute the stable manifolds of these maxima using the algorithm in [118]. Adjacent stable manifolds are then merged if the generating maxima have almost the same value of h_P, as measured by a parameter δ. Figure 11 shows the performance of this algorithm on the cardiovascular geometries. In this figure, the six main components of a human heart, namely the aorta, pulmonary artery, left and right atria, and left and right ventricles, are segmented out from a set of points sampling the boundary of the heart.

Fig. 11
Geometry Segmentation: Different components - Aorta (A.1), Pulmonary Artery (A.2), Left Atrium (A.3), Right Atrium (A.4), Left Ventricle (A.5) and Right Ventricle (A.6) are extracted from the point sample of the whole heart.

[D] Skeletonization

Computing skeletons of a geometric shape is a long-standing research problem. The medial axis, introduced by Blum, is considered a standard skeletal description of a shape, and there are algorithms [120, 121] and publicly available software to compute the Medial Axis Transform (MAT) of a shape from its point sample [122]. However, the medial axis is composed of planar (two dimensional) and linear (one dimensional) parts. In order to compute such a dimension-dependent decomposition of the medial axis, as well as to prune away hairy branches from it, one requires some extra machinery, which we describe in the next paragraph. There are some previous works which focused on computing a linear skeleton of an arbitrary shape. These include topological thinning [123], distance field based methods [124, 125, 126, 127], potential field based methods [128], thinning via the medial geodesic function [129], and others [130, 131, 132]. Cornea et al. [133] give a comprehensive survey of these techniques.

Once a suitable description of the geometry is obtained, either by reconstruction or by contouring, the distance function based approach is used to compute a skeletal structure of the shape. As in segmentation, the critical points of the function h_P are computed for a set P of input points sampling the shape. The index 1 and index 2 saddle points are then detected using the Voronoi-Delaunay duality [117]. To generate the skeletal structure, we then compute the unstable manifolds of these critical points. The unstable manifold of an index 2 saddle point (U2) is one dimensional and the unstable manifold of an index 1 saddle point (U1) is two dimensional. Moreover, every U1 is bounded by some U2’s. The details of the computation of U1 and U2 are given in [134]. Figure 12 shows two instances where the skeletal structures have been constructed using this method.

Fig. 12
Skeletonization: A. Skeleton of the template geometry is extracted. B. The skeletal structure of the abdominal aorta is extracted. Note, compared to the medial axis (B.1), the unstable manifold of index 2 saddles is a much cleaner one dimensional skeletal ...

[E] Alignment

Alignment of two similar but not identical geometric objects is a difficult problem, and relatively few papers in the geometric modeling community address it. Recently, an interesting technique was reported by Eckstein et al. [135] in which generalized surface flow was used for non-rigid alignment of a template geometry onto the patient data. The authors design an energy function based on the pseudo-Hausdorff distance between the two geometries and evolve the template geometry to fit the patient geometry following the gradient of the energy function.

We experimented with a different approach for non-rigid fitting of the segments of the template heart to the patient data, in order to inherit the correct topological structure from the template geometry as well as to retrieve the missing information in the patient data. We construct a skeletal description of the different parts of the template geometry as described earlier. Every segment is then described as a union of balls centered on its one dimensional skeleton, following a popular approach due to [136]. A mass-spring network is then built where each ball’s mass is proportional to its radius and a uniform spring constant is used. Normal mode analysis (NMA), which is a popular way to depict the vibrational nature of a molecule [137], is then applied to the network, producing a spectrum of possible deformed shapes. We choose the deformed shape from the spectrum that best fits the patient geometry.
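
A toy version of this normal mode computation is sketched below. For brevity it uses a scalar (one degree of freedom per ball) model rather than the full 3N-dimensional one, so it only conveys the structure of the generalized eigenproblem; the overlap-based connectivity rule and the spring constant are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def normal_modes(centers, radii, k_spring=1.0):
    """Modes of a ball-and-spring network (one scalar DOF per ball).

    Balls whose volumes overlap are joined by a spring of uniform constant;
    the mass of each ball is proportional to its radius. The low-frequency
    eigenvectors give the large-scale deformation spectrum.
    """
    n = len(centers)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(centers[i] - centers[j]) < radii[i] + radii[j]:
                K[i, i] += k_spring; K[j, j] += k_spring
                K[i, j] -= k_spring; K[j, i] -= k_spring
    M = np.diag(radii)                    # mass proportional to radius
    freqs, modes = eigh(K, M)             # generalized problem K v = w M v
    return freqs, modes
```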

[F] Quality Meshing

The final goal of this Imaging-to-Modeling software system is to produce a suitably discretized meshed model of the imaged biological entity. The task of meshing is primarily divided into two parts - Boundary Element and Finite Element. Each of these again has three sub-parts.

Boundary Elements

Boundary Element meshing refers to the meshing of the surface geometry. Depending on the smoothness and shape of the patches forming the surface mesh, three types of boundary elements are produced.

  1. Triangle/Quadrilateral Elements: Given a CT image, the task is to compute a triangulated or quadrangulated discretization of the boundary of the cardio-vascular anatomy. In a sense the task is similar to contouring the zero-set of the original intensity function or of a distance function induced by the reconstructed geometry. There are numerous algorithms to accomplish such tasks, as mentioned in the previous subsection. However, in this step our goal is to build a boundary element mesh of higher quality than what is typically output by the contouring routines. The mesh quality metrics are described in [138].
     We approach the problem in two steps. First we apply the dual contouring method proposed by Ju et al. [139], and then apply geometric flow to improve the quality of the surface mesh (a smoothing sketch appears after this list). Figure 13 shows the performance of the boundary element meshing algorithm.
    Fig. 13
    Meshing: Triangle-Tetrahedral, A-Patch, NURBS meshes.
  2. B-Spline Elements: Building B-Spline models for free-form geometric objects has been an active area of research. Given a triangulated or quadrilateral surface mesh, there have been numerous approaches to build a smooth network of B-Spline patches to model the underlying surface [140]. Subdivision based methods, namely Catmull-Clark [141], Doo-Sabin [142, 143], and Loop subdivision [144], for producing quadratic and cubic B-Spline models from triangulated and quadrilateral control meshes have also gained popularity due to their simplicity. However, in most cases it is not straightforward to generate the initial control mesh for free-form shapes, especially when the input is a scattered set of points or a set of voxels representing the boundary of a segmented region inside a three dimensional volume.
     Until recently there were very few works that dealt with the problem of building a control quadrilateral mesh from a free-form geometry of arbitrary topology [145, 146]. Recently there has been substantial research in the computer graphics community tackling the problem of quadrangulating a surface mesh following the intrinsic anisotropy of the geometry [147, 148, 149]. Following these approaches, one can build a quadrilateral base surface mesh on which any of the standard subdivision schemes can be applied to build the desired B-Spline model.
  3. A-Patch Elements: Linear meshes do not always provide adequate smoothness for effective further use, which is why a higher order meshed description of the geometry is often desired. Bajaj et al. presented a solution to this problem in 1995, devising a scheme that takes a triangulated surface mesh and builds a higher order (cubic) patch complex describing the same surface. Under certain conditions, they also showed that these patches meet in a C1 continuous manner. They called this patched description of a two dimensional geometry an algebraic surface patch, or A-patch [150].
     The construction of A-patches proceeds as follows. First a triangle mesh is created that linearly approximates the given geometry. A tetrahedral scaffolding is then built around it so that the triangle mesh lies inside it. By assigning a suitable weight at every node of the scaffold, a polynomial inside each tetrahedron is constructed in such a way that its zero set satisfies certain properties that keep the higher order patch inside the tetrahedron free of singularity. Moreover, the patches are derivative-continuous across the tetrahedra. The patch complex then provides a higher order smooth description of the geometry. Details of this method can be found in [150]. This method of constructing surface A-patches has also been generalized to hexahedral elements, as well as to the multiresolution construction of shell finite elements [151, 152]. Figure 13 shows a higher order geometric model of a human heart.
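
The smoothing step referenced in item 1 above can be illustrated with a simple Laplacian-style relaxation, sketched below; the actual geometric flow of [155] moves vertices along curvature-driven directions and has better volume- and feature-preserving behavior than this stand-in.

```python
import numpy as np

def smooth_mesh(verts, faces, n_iter=10, step=0.5):
    """Move each vertex toward the centroid of its neighbors (relaxation)."""
    verts = verts.astype(np.float64).copy()
    nbrs = [set() for _ in range(len(verts))]
    for a, b, c in faces:                    # collect 1-ring neighborhoods
        nbrs[a] |= {b, c}; nbrs[b] |= {a, c}; nbrs[c] |= {a, b}
    for _ in range(n_iter):
        centroids = np.array([verts[list(nb)].mean(axis=0) if nb else verts[i]
                              for i, nb in enumerate(nbrs)])
        verts += step * (centroids - verts)  # partial step toward centroid
    return verts
```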

3D Finite Elements

Finite Element meshing refers to the techniques of producing the discretization of the volume enclosed by the surface geometry.

  1. Tetrahedral/Hexahedral Elements: Prior work exists on building volumetric (finite element) tetrahedral and hexahedral meshes from imaging data; for a detailed survey see [153]. We have developed two algorithms for this purpose - TetMesh and HexMesh. The approach is as follows. First we extract a correct boundary using boundary element meshing: for tetrahedral mesh generation we use the triangulated boundary, and for hexahedral mesh generation we use the quadrangulated boundary meshing scheme described above. Next, we use a series of templates to build a solid tetrahedral or hexahedral mesh that conforms to the boundary mesh. The details of the templates are given in [154, 138]. As with the boundary element mesh, we further improve the quality of the finite element mesh by applying geometric flow [155]. Figure 13 shows the tetrahedral mesh cut-away for a template human heart.
  2. B-Spline Elements: As the most highly developed and widely utilized technique, NURBS (Non-Uniform Rational B-Splines) [156, 157, 158] has evolved into an essential tool for the semi-analytical representation of geometric entities. Sometimes NURBS solid models are taken as input for finite element mesh generation [159]. Anderson et al. proposed a fast method for generating NURBS surfaces from polygonal mesh models of human anatomy [160]. An enhanced algorithm was developed for NURBS evaluation and utilization in grid generation [161]. In isogeometric analysis [162], NURBS basis functions are used to construct the exact geometry, as well as the corresponding solution space.
     We have developed a skeleton-based approach for building NURBS models of vasculature. Using the skeletonization approach described earlier, a one dimensional polylinear skeletal structure is first extracted from the tubular geometry of the vasculature. We then use a set of templates to build a hexahedral mesh around the skeleton (a minimal sweeping sketch follows this list). The details of the templates can be found in [24]. The hexahedral mesh is used as the control mesh for subsequent NURBS mesh generation. Figure 13 shows the NURBS model of the inner blood volume of a human heart.
  3. Shell A-Patch Elements: Shell structures appear frequently in biological entities; the muscle walls of the heart and blood vessels are perfect examples. It is desirable to model such thick, shell-like surfaces with the required smoothness. Bajaj et al. presented algorithms that model smooth shell structures using shell A-patch finite elements [151, 152].
     The algorithm takes a pair of triangle meshes as input, where the correspondence between the two meshes is implicit. It then decimates both meshes simultaneously to build a coarser representation, occasionally merging adjacent triangles to form quadrilaterals wherever possible.
     Using the correspondence between the triangles and quadrilaterals in the two-sheeted surface, the interval volume is then filled with 3-sided (triangular) and 4-sided (quadrilateral) prisms. A C1 piecewise trivariate function is then constructed over this collection of prisms. The range of the function varies from −1 to 1. For any scalar α ∈ [−1, 1], the corresponding level set gives a higher order approximation of an intermediate surface within the shell element. When α equals −1 or 1, the resulting patch gives a higher order approximation of the inner or outer boundary of the shell.
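
The sweeping idea referenced in item 2 above is illustrated by the toy sketch below, which sweeps a square cross-section along a polyline skeleton and joins consecutive sections into hexahedra; the template design and frame propagation of [24] are considerably more careful than this.

```python
import numpy as np

def sweep_hex_mesh(skeleton, radius):
    """Sweep a square section along a polyline skeleton into hexahedra.

    `skeleton` is an (n, 3) array of points along the vessel centerline.
    Returns the control vertices and the 8-vertex connectivity of each hex.
    """
    template = np.array([[-1, -1], [1, -1], [1, 1], [-1, 1]]) * radius
    verts, hexes = [], []
    for k, p in enumerate(skeleton):
        t = skeleton[min(k + 1, len(skeleton) - 1)] - skeleton[max(k - 1, 0)]
        t = t / np.linalg.norm(t)                       # local tangent
        a = np.array([1.0, 0, 0]) if abs(t[0]) < 0.9 else np.array([0, 1.0, 0])
        u = np.cross(t, a); u /= np.linalg.norm(u)      # section frame (u, v)
        v = np.cross(t, u)
        verts.extend(p + x * u + y * v for x, y in template)
        if k > 0:                                       # join to previous ring
            b0, b1 = 4 * (k - 1), 4 * k
            hexes.append([b0, b0+1, b0+2, b0+3, b1, b1+1, b1+2, b1+3])
    return np.array(verts), hexes
```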

3 Implementation Results

We exhibit the implementation results of our image and geometry processing algorithms on modeling the Heart and Vasculature (Coronary Artery, Pulmonary Artery, Abdominal Aorta, and Thoracic Aorta).

3.1 Heart

We have experimented with images of patient hearts. To illustrate the performance of the algorithms, we have selected two datasets, one of which is a low quality image while the other is of relatively better quality.

The first heart dataset is of dimension 512 × 512 × 432, and the spacings in the x, y, z directions are respectively 0.390625mm, 0.390625mm, 0.3mm. We first applied contrast enhancement to the original image and then applied the fast marching based segmentation on the contrast-enhanced image to separate the subvolumes corresponding to the aorta, pulmonary artery, right atrium, and left atrium. Because of the poor quality of the image, the ventricles could not be recovered well. The result of the image processing on this dataset is shown in Figure 14(B). After the four components of the patient heart were segmented from the volume, we took the boundary voxels of each of these four regions and applied surface reconstruction from scattered points to build the initial triangulated geometric models. As one can see, these models have a considerable number of spurious parts, as well as some missing information that could not be retrieved from the patient data. We analyzed each of the recovered geometries using the critical point structure of the distance function described in Section 2.2. Using curation and geometry segmentation, we identified the portions which most prominently correspond to the portions of a template heart model while pruning away the undesired portions. Figure 14(C) shows the relevant portions that have high correlation with the corresponding portions of the template geometry. In parallel with the processing of the patient heart, we also performed geometric analysis of the template heart model. We constructed the one dimensional skeletal structures of all six components of the template, and segmented the template into the aorta, pulmonary artery, right and left ventricles, and right and left atria (Figure 14(D)).

Fig. 14
Result: Heart I. (A) One slice of the original data and isocontour-enhanced volume rendering of the input. (B) Due to relatively poor quality, the imaging data is first passed through the image processing unit that enhances the contrast and segments the ...

From there we could draw a correspondence between the segmented portions of the patient heart with the template heart (Figure 14(E)).

The second dataset was of better quality in terms of the contrast and noise present. We first extracted the geometry via isocontouring. However, using a single isovalue could not entirely serve the purpose, as it also extracts the surrounding vasculature, as shown in Figure 15(A). We therefore resorted to curation to extract just the geometry of the heart by pruning away the surrounding thin blood vessels (Figure 15(B)). After this step we were left with the geometry of the heart, which we further segmented using the stable manifolds of the maxima of the distance function induced by the geometry. The segmentation step was crucial, as it was able to separate the six main components of the patient heart, namely the aorta, pulmonary artery, right and left ventricles, and right and left atria, as shown in Figure 15(C). We were then able to draw the correspondence between each of the segmented parts and the corresponding ones from the template geometry, as shown in Figure 15(D).

Fig. 15
Result: Heart II. Starting with an isocontour extracted from the raw imaging data (A), the model is first curated to extract a clean geometry without the nearby bone structure or the thin blood vessels which are otherwise irrelevant for the specific task ...

3.2 Vasculature

Pulmonary Artery

The primary objective behind modeling the pulmonary artery was to detect pulmonary embolism (PE) automatically [29]. This required an initial artery-vein separation from the CT scans of the vasculature, which was performed directly on the input imaging data by the image skeletonization technique described in Section 2.1. The skeletons are traced from the ends of the branches and traversed upward toward the heart. Once the trace reached the patient heart, through a series of disambiguations needed because of the poor image quality, one of the branches was tagged arterial while the other was tagged venous. At the same time, the rest of the volume was classified using the voxel classification technique described in Section 2.1 (Figure 16(b,c)). This led to a complete characterization of the CT data into the major components, with the artery and vein separated (Figure 16(d-f)).

Fig. 16
Result: Pulmonary Artery. The imaging data (one slice is shown in (a)), is first classified to identify the voxels belonging to lungs and the vasculature (b,c). Using the skeletonization technique from Section 2.1, the arterial (red) and venous (blue) ...

Abdominal Aorta

Starting with the CT scan of the abdominal section of the patient, we first extracted the geometry using an isovalue that best captures the geometry of the abdominal aorta, along with the surrounding bone structure and other anatomical parts that are irrelevant for the modeling of the aorta itself. To separate the aorta, we performed geometry segmentation using the stable manifolds of the maxima of the distance function. The segmented aorta is shown in green in Figure 17 (a,b). We then performed geometry skeletonization on this segmented geometry, producing a one-dimensional skeletal structure (Figure 17 (c,d)). This skeleton was then used to build a swept hexahedral volume that best represented the geometry of the aorta, and also served as a control mesh which was then used to build a solid NURBS model of the abdominal aorta (Figure 17 (e,f)).

Fig. 17
Result: Abdominal Aorta. Volume rendering of the original imaging data is shown in (a). An isocontour is extracted from the imaging data from which the abdominal aorta (green) is segmented (b). The medial axis and the linear skeleton extracted from it ...

Thoracic Aorta

Starting with the scan of the patient heart, we first segmented out and extracted the geometry of the thoracic aorta via isocontouring. We then performed skeletonization of this geometry in order to obtain a one-dimensional polylinear skeleton (Figure 18 (A.1)). An external pathway was added to the skeletal structure in order to simulate the LVAD (left ventricular assist device) used during open-heart surgery. Using the skeleton as a sweeping path, we then built a hexahedral control mesh that best represented the aorta along with its inner volume (Figure 18 (A.2)). This control mesh was then used to construct a solid NURBS mesh of the thoracic aorta, as shown in Figure 18 (A.3).

Fig. 18
Result: A: Thoracic Aorta. Skeleton is extracted from the geometric model (A.1). An artificial pathway is added to simulate the LVAD. The hexahedral control mesh and the resulting NURBS model are shown in (A.2,3). B: Coronary Artery. The coronary artery ...

Coronary Artery

Starting with a CT scan of the patient heart, the coronary artery was first extracted as shown in Figure 18 (B.1). The coronary artery has two main branches - right and left - which were then geometrically segmented from the whole vascular structure. Figure 18 (B.2) shows the two vascular trees colored in red and blue. We then computed the skeletal structure of each of these substructures using the skeletonization method described earlier (Figure 18 (B.3)), and subsequently a NURBS mesh model was constructed using the swept volume technique described in the meshing subsection earlier (Figure 18 (B.4, B.5)).

4 Conclusions

We have developed a comprehensive software collection (http://cvcweb.ices.utexas.edu/software/) of data processing tools (http://cvcweb.ices.utexas.edu/cvc/projects/medx/pipeline.php) that can be utilized to produce patient-specific, spatially realistic, linear and curved, boundary and finite element models of the human cardio-vasculature. Such a combination of geometry extraction and geometric modeling is a necessary enabler for quantitative and interrogative querying, analysis, and visualization. These boundary and finite elements are additionally useful for a variety of physiological and biochemical modeling and simulation tasks under normal or diseased conditions. They are also useful for virtual surgical training, treatment planning, and drug or dosage delivery. Current imaging modalities we have successfully processed through our software include Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), and their blood perfused variations, for cardio-vasculature modeling, and micro-CT for osteoporotic bone modeling.

Fig. 1
Cardiovascular Modeling from Imaging Data. Top row illustrates the modeling of patient heart from imaging data. From left to right: An illustration of internal structure of human heart (courtesy [23]), a typical Computed Tomography (CT) imaging data and ...
Fig. 4
The performance of the bilateral pre-filtering and anisotropic diffusion filtering algorithm is shown on one slice of the CTA data.

Acknowledgments

The research was supported by NSF grant CNS-0540033 and NIH contracts P20-RR020647, R01-EB00487, R01-GM074258, and R01-GM07308. We thank Dr. Charlie Walvaert of the Austin Heart Hospital, USA for providing us with the CT64 thoracic scan. Thanks are also due to Joe Rivera of CVC for his immense help with data processing. We would like to additionally thank Dr. Tom Hughes and Dr. Yuri Bazilevs for helpful discussions, and Dr. Sangmin Park and Dr. Yongjie Zhang for their help with the image processing and mesh generation parts of this paper.

References

1. Winslow RL, Scollan DF, Greenstein JL, Yung CK, Baumgartner W Jr, Bhanot G, Gresh DL, Rogowitz BE. Mapping, modeling, and visual exploration of structure-function relationships in the heart. IBM Systems Journal (Deep Computing for the Life Sciences). 2001;40(2)
2. Luo C, Rudy Y. A model of the ventricular cardiac action potential: Depolarization, repolarization and their interaction. Circulation Research. 1991;68(6):1501–1526. [PubMed]
3. Luo C, Rudy Y. A dynamic model of the cardiac ventricular action potential: I. simulations of ionic currents and concentration changes. Circulation Research. 1994;74(6):1071–1096. [PubMed]
4. Hodgkin AL, Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. Journal of Physiology. 1952;117:500–544. [PubMed]
5. Hille B. Ionic Channels of Excitable Membranes. Second edn. Sunderland, MA: Sinauer Associates, Inc.; 1992.
6. Winslow RL, Scollan DF, Holmes A, Yung CK, Zhang J, Jafri MS. Electrophysiological modeling of cardiac ventricular function: From cell to organ. Annual Reviews in Biomedical Engineering. 2000;2:119–155. [PMC free article] [PubMed]
7. Vetter F, McCulloch A, Rogers J. A finite element model of passive mechanics and electrical propagation in the rabbit ventricles. Computers in Cardiology. 1998:705–708.
8. Rogers JM, McCulloch AD. A collocation-galerkin finite element model of cardiac action potential propagation. IEEE Trans Biomed. Eng. 1994;41:743–757. [PubMed]
9. Rudy Y, Plonsey R. A comparison of volume conductor and source geometry effects on body surface and epicardial potentials. Circ. Res. 1980;46:283–291. [PubMed]
10. Costa KD, Hunter PJ, Wayne JS, Waldmann LK, Guccione JM, McCulloch AD. A three-dimensional finite element method for large elastic deformations of ventricular myocardium: II - prolate spheroidal coordinates. J. Biomedical Engineering. 1996;118(4):464–472. [PubMed]
11. Hunter P, McCulloch A, Nielsen P, Smaill B. A finite element model of passive ventricular mechanics. ASME BED. 1988;9:387–397.
12. Sachse FB. Computational Cardiology: Modeling of Anatomy, Electrophysiology, and Mechanics. Berlin, Heidelberg, New York: Springer; 2004. LNCS 2966.
13. Taylor C, Hughes T, Zarins C. Finite element modeling of blood flow in arteries. Computer Methods in Applied Mechanics and Engineering. 1998;158(1–2):155–196.
14. Taylor C, Hughes T, Zarins C. Finite element modeling of 3-dimensional pulsatile flow in the abdominal aorta: Relevance to atherosclerosis. Annals of Biomedical Engineering. 1998;26(6):1–13. [PubMed]
15. Taylor C, Hughes T, Zarins C. Effect of exercise on hemodynamic conditions in the abdominal aorta. Journal of Vascular Surgery. 1999;29:1077–1089. [PubMed]
16. Sahni O. PhD thesis. RPI; 2005. Adaptive procedure for efficient blood-flow simulations.
17. Sahni O, Mueller J, Jansen KE, Shephard MS, Taylor CA. Efficient anisotropic adaptive discretization of the cardiovascular system. Technical report, RPI; 2005.
18. Yin L, Luo X, Shephard MS. Identifying and meshing thin sections of 3-d curved domains. Technical report, RPI; 2005.
19. Hackbusch W. Multi-Grid Methods and Applications. Berlin, Heidelberg, New York, Tokyo: Springer Verlag; 1985.
20. Braess D. Towards algebraic multigrid for elliptic problems of second order. Computing. 1995;55:379–393.
21. Brown P, Byrne G, Hindmarsh A. VODE: A variable-coefficient ode solver. SIAM Journal on Scientific Computation. 1989;10:1038–1057.
22. de Munck J. A linear discretization of the volume conductor boundary integral equation using analytically integrated elements. IEEE Trans Biomed. Eng. 1992;39(9):986–990. [PubMed]
23. Team PE. Essential Atlas of Anatomy. English edn. Barcelona, Spain: Parramon Ediciones; 2001.
24. Zhang Y, Bazilevs Y, Goswami S, Bajaj CL, Hughes TJR. Patient-specific vascular nurbs modeling for isogeometric analysis of blood flow. Computer Methods in Applied Mechanics and Engineering (CMAME) 2007;196(29–30):2943–2959. [PMC free article] [PubMed]
25. Agam G, Armato SG III, Wu C. Vessel tree reconstruction in thoracic CT scans with application to nodule detection. IEEE Transactions on Medical Imaging. 2005;24(4):486–499. [PubMed]
26. Dehmeshki J, Ye X, Wang F, Lin XY, Abaei M, Siddique M, Qanadli S. An accurate and reproducible scheme for quantification of coronary artery calcification in ct scans. Proceedings of the 26th Annual International Conference of IEEE EMBS, IEEE, The International Society for Optical Engineering; 2004. pp. 1918–1921. [PubMed]
27. Masutani Y, MacMahon H, Doi K. Computerized detection of pulmonary embolism in spiral CT angiography based on volumetric image analysis. IEEE Transactions on Medical Imaging. 2002;21(12):1517–1523. [PubMed]
28. Park SM, Gladish GW, Bajaj CL. Artery-vein separation from thoracic CTA scans. IEEE Transactions on Medical Imaging. (Submitted)
29. Park SM, Gladish GW, Bajaj CL. Automatic pulmonary embolism detection from thoracic CTA scans. IEEE Transactions on Medical Imaging. (Submitted)
30. Schoepf UJ, Costello P. Ct angiography for diagnosis of pulmonary embolism: State of the art. Radiology. 2004;230(2):329–337. [PubMed]
31. Gonzalez R, Woods R. Digital image processing. Addison-Wesley. 1992
32. Pratt W. Digital Image Processing. 2nd Ed. A Wiley-Interscience Publication; 1991.
33. Caselles V, Lisani J, Morel J, Sapiro G. Shape preserving local histogram modification. IEEE Trans. Image Processing. 1998;8(2):220–230. [PubMed]
34. Stark J. Adaptive contrast enhancement using generalization of histogram equalization. IEEE Trans. Image Processing. 2000;9(5):889–906. [PubMed]
35. Jobson D, Rahman Z, Woodell G. Properties and performance of a center/surround retinex. IEEE Trans. Image Processing. 1997;6(3):451–462. [PubMed]
36. Jobson D, Rahman Z, Woodell G. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Processing. 1997;6(7):965–976. [PubMed]
37. Lu J, Healy D, Weaver J. Contrast enhancement of medical images using multiscale edge representation. Optical Engineering. 1994;33(7):2151–2161.
38. Laine A, Schuler S, Fan J, Huda W. Mammographic feature enhancement by multiscale analysis. IEEE Trans. Medical Imaging. 1994;13(4):725–738. [PubMed]
39. Yu Z, Bajaj C. A fast and adaptive algorithm for image contrast enhancement. Proc of IEEE International Conference on Image Processing; 2004. pp. 1001–1004.
40. Deriche R. Fast algorithm for low-level vision. IEEE Trans. on Pattern Recognition and Machine Intelligence. 1990;12(1):78–87.
41. Young I, Vliet L. Recursive implementation of the gaussian filter. Signal Processing. 1995;44:139–151.
42. Barash D. A fundamental relationship between bilateral filtering, adaptive smoothing and the nonlinear diffusion equation. IEEE Trans. on Pattern Analysis and Machine Intelligence. 2002;24(6):844–847.
43. Durand F, Dorsey J. Fast bilateral filtering for the display of high-dynamic-range images. ACM Conference on Computer Graphics (SIGGRAPH); 2002. pp. 257–266.
44. Elad M. On the bilateral filter and ways to improve it. IEEE Transactions On Image Processing. 2002;11(10):1141–1151. [PubMed]
45. Tomasi C, Manduchi R. Bilateral filtering for gray and color images. IEEE International Conference on Computer Vision; 1998. pp. 836–846.
46. Perona P, Malik J. Scale-space and edge detection using anisotropic diffusion. IEEE Trans. on Pattern Analysis and Machine Intelligence. 1990;12(7):629–639.
47. Weickert J. Anisotropic Diffusion In Image Processing. ECMI Series, Teubner, Stuttgart. 1998 ISBN 3-519-02606-6.
48. Donoho D, Johnson I. Ideal spatial adaptation via wavelet shrinkage. Biometrika. 1994;81:425–455.
49. Xu Y, Weaver JB, Healy DM, Lu J. Wavelet transform domain filters: A spatially selective noise filtration technique. IEEE Trans. Image Processing. 1994;3(6):747–758. [PubMed]
50. Hamza AB, Luque P, Martinez J, Roman R. Removing noise and preserving details with relaxed median filters. Journal of Mathematical Imaging and Vision. 1999;11(2):161–177.
51. Hamza AB, Krim H. Image denoising: A nonlinear robust statistical approach. IEEE Transactions on Signal Processing. 2001;49(12):3045–3054.
52. Stoschek A, Hegerl R. Denoising of electron tomographic reconstructions using multiscale transformations. J. Struct Biol. 1997;120:257–265. [PubMed]
53. Frangakis A, Hegerl R. Noise reduction in electron tomographic reconstructions using nonlinear anisotropic diffusion. J. Struct. Biol. 2001;135:239–250. [PubMed]
54. Bajaj C, Wu Q, Xu G. ICES Technical Report 301. The Univ. of Texas at Austin; 2003. Level set based volumetric anisotropic diffusion.
55. Jiang W, Baker M, Wu Q, Bajaj C, Chiu W. Applications of Bilateral Denoising Filter in Biological Electron Microscopy. Journal of Structural Biology. 2003;144:114–122. [PubMed]
56. Yu Z, Bajaj C. A segmentation-free approach for skeletonization of gray-scale images via anisotropic vector diffusion. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR’04); 2004. pp. 415–420.
57. Bezdek J. A convergence theorem for the fuzzy ISODATA clustering algorithm. IEEE Trans. Pattern Anal. Machine Intell. 1980;2(1):1–8. [PubMed]
58. Titterington DM, Smith AFM, Makov UE. Statistical Analysis of Finite Mixture Distributions. Chichester, UK: John Wiley and Sons; 1985.
59. Pham DL, Prince JL. Adaptive fuzzy segmentation of magnetic resonance images. IEEE Transactions on Medical Imaging. 1998;18(9):737–752. [PubMed]
60. Pham DL, Prince JL. An adaptive fuzzy c-means algorithm for image segmentation in the presence of intensity inhomogeneities. Pattern Recognition Letters. 1999;20(1):57–68.
61. Ahmed MN, Yamany SM, Farag AA, Moriarty T. A bias field estimation and adaptive segmentation of MRI data using a modified Fuzzy C-Means algorithm. Proc. of 13th Int. Conf. on Computer Assisted Radiology and Surgery.1999.
62. Ahmed MN, Yamany SM, Mohamed N, Farag AA. A modified fuzzy c-means algorithm for bias field estimation and segmentation of MRI data. IEEE Transactions on Medical Imaging. 2002;21(3):193–199. [PubMed]
63. Gopal SS, Hebert TJ. Maximum likelihood pixel labeling using a spatially variant finite mixture model. IEEE Transactions on Nuclear Science. 1999;44(4):1578–1582.
64. Laidlaw DH, Fleischer KW, Barr AH. Bayesian mixture classification of MRI data for geometric modeling and visualization. Poster presented at the First International Workshop on Statistical Mixture Modeling; Aussois, France; 1995.
65. Ahmed MN, Yamany SM, Mohamed N, Farag AA, Moriarty T. A modified fuzzy c-means algorithm for bias field estimation and segmentation of mri data. IEEE Transactions on Medical Imaging. 2002;21(3) [PubMed]
66. Kindlmann G, Darkin JW. Semi-automatic generation of transfer functions for direct volume rendering. Proceedings of 1998 Symposium on Volume Visualization; 1998. pp. 79–86.
67. Laidlaw D. PhD thesis. CalTech Univ.; 1995. Geometric Model Extraction from Magnetic Resonance Volume Data.
68. Tomasi C, Madcuchi R. Bilateral filtering for gray and color images. Proceedings of the 1998 IEEE International Conference on Computer Vision; Bombay, India. 1998. pp. 839–846.
69. Xu C, Prince JL. Snakes, shapes, and gradient vector flow. IEEE Trans. Image Processing. 1998;7(3):359–369. [PubMed]
70. Yu Z, Bajaj C. Normalized gradient vector diffusion and image segmentation. Proceedings of European Conference on Computer Vision; 2002. pp. 517–530.
71. Bajaj C, Pascucci V, Schikore D. The contour spectrum. Proc. of IEEE Visualization Conference; 1997. pp. 167–173.
72. Park S, Bajaj C. Feature selection of 3d volume data through multi-dimensional transfer functions. Pattern Recognition Letters. 2007;28(3):367–374. [PMC free article] [PubMed]
73. Ellis R. Macromolecular crowding: obvious but underappreciated. Trends Biochem. Sci. 2001;26(10):597–604. [PubMed]
74. Hessler D, Young SJ, Ellisman MH. A flexible environment for the visualization of three-dimensional biological structures. Journal of Structural Biology. 1996;116(1):113–119. [PubMed]
75. Kremer J, Mastronarde D, McIntosh J. Computer visualization of three-dimensional image data using IMOD. J Struct Biol. 1996;116:71–76. [PubMed]
76. Li Y, Leith A, Frank J. Tinkerbell: a tool for interactive segmentation of 3D data. Journal of Structural Biology. 1997;120(3):266–275. [PubMed]
77. McEwen B, Marko M. Three-dimensional electron microscopy and its application to mitosis research. Methods Cell Biol. 1999;61:81–111. [PubMed]
78. Harlow M, Ress D, Stoschek A, Marshall R, McMahan U. The architecture of active zone material at the frog’s neuromuscular junction. Nature. 2001;409:479–484. [PubMed]
79. Volkmann N. A novel three-dimensional variant of the watershed transform for segmentation of electron density maps. Journal of Structural Biology. 2002;138(1–2):123–129. [PubMed]
80. Frangakis AS, Hegerl R. Segmentation of two- and three-dimensional data from electron microscopy using eigenvector analysis. Journal of Structural Biology. 2002;138(1–2):105–113. [PubMed]
81. Zhou ZH, Baker ML, Jiang W, Dougherty M, Jakana J, Dong G, Lu G, Chiu W. Electron cryomicroscopy and bioinformatics suggest protein fold models for rice dwarf virus. Nature Structural Biology. 2001;8(10):868–873. [PubMed]
82. Jiang W, Li Z, Baker ML, Prevelige PE, Chiu W. Coat protein fold and maturation transition of bacteriophage p22 seen at subnanometer resolution. Nature Structural Biology. 2003;10(2):131–135. [PubMed]
83. Marko M, Leith A. Sterecon: three-dimensional reconstructions from stereoscopic contouring. Journal of Structural Biology. 1996;116(1):93–98. [PubMed]
84. Bajaj C, Yu Z, Auer M. Volumetric feature extraction and visualization of tomographic molecular imaging. Journal of Structural Biology. 2004;145(1):168–180. [PubMed]
85. Yu Z, Bajaj C. Automatic ultrastructure segmentation of reconstructed cryoem maps of icosahedral viruses. IEEE Transactions on Image Processing: Special Issue on Molecular and Cellular Bioimaging. 2005;14(9):1324–1337. [PubMed]
86. Baker M, Yu Z, Chiu W, Bajaj C. Automated Segmentation of Molecular Subunits in Electron Cryomicroscopy Density Maps. Journal of Structural Biology. 2006 (online version). [PubMed]
87. Malladi R, Sethian J. A real-time algorithm for medical shape recovery. IEEE International Conference on Computer Vision; 1998. pp. 304–310.
88. Sethian J. A fast marching level set method for monotonically advancing fronts. Proc. Natl. Acad. Sci. 1996;93(4):1591–1595. [PubMed]
89. Sethian J. Level Set Methods and Fast Marching Methods. 2nd edition. Cambridge University Press; 1999.
90. Sifakis E, Tziritas G. Moving object localization using a multi-label fast marching algorithm. Signal Processing: Image Communication. 2001;16(10):963–976.
91. Tari S, Shah J, Pien H. Extraction of shape skeletons from gray-scale images. Computer Vision and Image Understanding. 1997;66(2):133–146.
92. Chung DH, Sapiro G. Segmentation-free skeletonization of gray-scale images via PDEs. Intl. Conf. Image Processing; 2000. pp. 927–930.
93. Lindeberg T. Scale-Space Theory in Computer Vision. The Kluwer International Series in Engineering and Computer Science. Netherlands: Kluwer Academic Publishers; 1994.
94. Pizer SM, Eberly D, Fritsch DS, Morse BS. Zoom invariant vision of figural shape: The mathematics of cores. Computer Vision and Image Understanding. 1998;69(1):55–71.
95. Morse BS, Pizer SM, Puff DT, Gu C. Zoom-invariant vision of figural shape: Effects on cores of image disturbances. Computer Vision and Image Understanding. 1998;69(1):72–86.
96. Jang JH, Hong KS. A pseudo-distance map for the segmentation-free skeletonization of gray-scale images. Proc. Intl Conf. Computer Vision; 2001. pp. 18–23.
97. Castro ED, Morandi C. Registration of translated and rotated images using finite Fourier transforms. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1987;9(5):700–703. [PubMed]
98. Kuglin CD, Hines DC. The phase correlation image alignment method. Proc. IEEE International Conference on Cybernetics and Society; 1975. pp. 163–165.
99. Lehmann TM. A two stage algorithm for model-based registration of medical images. Proceedings of the International Conference on Pattern Recognition ICPR’98; 1998. pp. 344–352.
100. Araiza R, Averill M, Keller G, Starks S, Bajaj C. 3D image registration using Fast Fourier Transform, with potential applications to geoinformatics and bioinformatics. Proc. Int. Conf. on Inform. Proc. and Management of Uncertainty in Knowledge-Based Systems IPMU06; 2006. pp. 817–824.
101. Dutt A, Rokhlin V. Fast Fourier transforms for nonequispaced data. SIAM J. Sci. Computing. 1993;14:1368–1393.
102. Dutt A, Rokhlin V. Fast Fourier transforms for nonequispaced data II. Appl. Comput. Harmon. Anal. 1995;2:85–100.
103. Brown LG. A survey of image registration techniques. ACM Computing Surveys. 1992;24(4):325–376.
104. Bajcsy R, Kovacic S. Multiresolution elastic matching. Computer Vision, Graphics, and Image Processing. 1989;46(1):1–21.
105. Christensen GE, Rabbitt RD, Miller MI. Deformable templates using large deformation kinematics. IEEE Transactions on Image Processing. 1996;5(10):1435–1447. [PubMed]
106. Yanovsky I, Thompson P, Osher S, Leow A. Large deformation unbiased diffeomorphic nonlinear image registration: Theory and implementation. Technical report, UCLA CAM. 2006
107. Clarenz U, Droske M, Rumpf M. Towards fast non-rigid registration. Inverse Problems, Image Analysis and Medical Imaging, Contemporary Mathematics (AMS). 2002;313:67–84.
108. Mumford D, Shah J. Optimal approximations by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math. 1989;42(4)
109. Lorensen W, Cline H. Marching Cubes: A high resolution 3D surface construction algorithm. SIGGRAPH; 1987. pp. 163–169.
110. Lopes A, Brodlie K. Improving the robustness and accuracy of the marching cubes algorithm for isosurfacing. IEEE Transactions on Visualization and Computer Graphics. 2003;9:16–29.
111. Dey TK. Curve and Surface Reconstruction: Algorithms with Mathematical Analysis. Cambridge Monographs on Applied and Computational Mathematics. Cambridge University Press; 2006.
112. Dey TK, Goswami S. Tight cocone: A water-tight surface reconstructor. Proc. 8th ACM Sympos. Solid Modeling and Applications; 2003. pp. 127–134.
113. Zhao H, Osher S, Fedkiw R. Fast surface reconstruction using the level set method. 1st IEEE Workshop on Variational and Level Set Methods; 2001. pp. 194–202.
114. Bajaj C, Xu G, Zhang X. Smooth surface constructions via a higher-order level-set method. Proc. CAD/GRAPHICS 2007; 2007. to appear.
115. Cheng SW, Wang Y, Wu Z. Provable dimension detection using principal component analysis. Proc. Sympos. on Comput. Geom.; 2005. pp. 208–217.
116. Dey TK, Giesen J, Goswami S, Zhao W. Shape dimension and approximation from samples. Discrete and Computational Geometry. 2003;29:419–434.
117. Siersma D. Voronoi diagrams and Morse theory of the distance function. 1999.
118. Dey TK, Giesen J, Goswami S. Shape segmentation and matching with flow discretization. In: Dehne F, Sack JR, Smid M, editors. Proc. Workshop on Algorithms and Data Structures (WADS 03); Berlin, Germany: 2003. pp. 25–36. LNCS 2748.
119. Bajaj C, Gillette A, Goswami S. Topology based selection and curation of level sets. TopoInVis; 2007. (Accepted) [PMC free article] [PubMed]
120. Dey TK, Zhao W. Approximate medial axis as a Voronoi subcomplex. Computer Aided Design. 2004;36(2):195–202.
121. Chazal F, Lieutier A. The λ-medial axis. Graphical Models. 2005;67(4):304–331.
122. Cocone. Tight Cocone Software for surface reconstruction and medial axis approximation. 2003. http://www.cse.ohio-state.edu/~tamaldey/cocone.html.
123. Borgefors G, Nystrom I, Sanniti di Baja G. Computing skeletons in three dimensions. Pattern Recognition. 1999;32(7)
124. Bitter I, Kaufman A, Sato M. Penalized-distance volumetric skeleton algorithm. IEEE Transactions on Visualization and Computer Graphics. 2001;7(3)
125. Bouix S, Siddiqi K, Tannenbaum A. Flux driven fly throughs. IEEE Conf. Computer Vision and Pattern Recognition; 2003. pp. 449–454.
126. Hassouna MS, Farag AA. Robust centerline extraction framework using level sets. IEEE Conf. Computer Vision and Pattern Recognition; 2005. pp. 458–465.
127. Zhou Y, Toga A. Efficient skeletonization of volumetric objects. IEEE Trans. Visualization and Comp. Graphics. 1999;5(3):196–209. [PMC free article] [PubMed]
128. Cornea N, Silver D, Yuan X, Balasubramaniam R. Computing hierarchical curve-skeletons of 3D objects. The Visual Computer. 2005;21(11):945–955.
129. Dey TK, Sun J. Defining and computing curve-skeletons with medial geodesic functions. Sympos. Geom. Processing; 2006. pp. 143–152.
130. Costa L. Multidimensional scale space shape analysis. IWSNHC3DI; 1999. pp. 214–217.
131. Ogniewicz RL, Kubler O. Hierarchic Voronoi skeletons. Pattern Recognition. 1995;28(3):343–359.
132. Verroust A, Lazarus F. Extracting skeletal curves from 3D scattered data. The Visual Computer. 2000;16:15–25.
133. Cornea N, Silver D, Min P. Curve skeleton applications. IEEE Visualization; 2005. pp. 95–102.
134. Goswami S, Dey TK, Bajaj CL. Identifying flat and tubular regions of a shape by unstable manifolds. Proc. 11th Sympos. Solid and Physical Modeling; 2006. pp. 27–37.
135. Eckstein I, Joshi AA, Kuo CJ, Leahy R, Desbrun M. Generalized surface flows for deformable registration and cortical matching. MICCAI; 2007. to appear. [PMC free article] [PubMed]
136. Amenta N, Choi S, Kolluri R. The power crust, unions of balls, and the medial axis transform. Computational Geometry: Theory and Applications. 2001;19(2–3):127–153.
137. Tama F, Miyashita O, Brooks C. Flexible multi-scale fitting of atomic structures into low-resolution electron density maps with elastic network normal mode analysis. J. Mol. Biol. 2004;337:985–999. [PubMed]
138. Zhang Y, Bajaj C. Adaptive and quality quadrilateral/hexahedral meshing from volumetric data. Computer Methods in Applied Mechanics and Engineering (CMAME) 2006;195(9–12):942–960. [PMC free article] [PubMed]
139. Ju T, Losasso F, Schaefer S, Warren J. Dual contouring of Hermite data. SIGGRAPH 2002, Computer Graphics Proceedings. ACM Press/ACM SIGGRAPH; 2002.
140. Farin G. Curves and Surfaces for CAGD: A Practical Guide. 5th ed. Morgan-Kaufmann; 2002.
141. Catmull E, Clark J. Recursively generated B-spline surfaces on arbitrary topological meshes. Computer-Aided Design. 1978;10(6):350–355.
142. Doo D. A subdivision algorithm for smoothing down irregularly shaped polyhedrons. Proceedings on Interactive Techniques in Computer Aided Design; 1978. pp. 157–165.
143. Doo D, Sabin M. Behavior of recursive division surfaces near extraordinary points. Computer-Aided Design. 1978;10(6):356–360.
144. Loop C. A G1 triangular spline surface of arbitrary topological type. Computer Aided Geometric Design. 1994;11(3):303–330.
145. Krishnamurthy V, Levoy M. Fitting smooth surfaces to dense polygon meshes. Proceedings of SIGGRAPH; 1996. pp. 313–324.
146. Eck M, Hoppe H. Automatic reconstruction of B-spline surfaces of arbitrary topological type. Proceedings of SIGGRAPH; 1996. pp. 325–334.
147. Cohen-Steiner D, Alliez P, Desbrun M. Variational shape approximation. Proc. SIGGRAPH; 2004. pp. 905–914.
148. Dong S, Bremer PT, Garland M, Pascucci V, Hart J. Spectral surface quadrangulation. ACM Transactions on Graphics. 2006;25(3):1057–1066.
149. Ying L, Zorin D. A simple manifold-based construction of surfaces of arbitrary smoothness. ACM Trans. Graph. 2004;23(3):271–275.
150. Bajaj C, Chen J, Xu G. Modeling with cubic A-patches. ACM Transactions on Graphics. 1995;14(2):103–133.
151. Bajaj C, Xu G. Smooth shell construction with mixed prism fat surfaces. In: Brunnett G, Bieri H, Farin G, editors. Geometric Modeling, Computing Supplement. Vol. 14. 2001. pp. 19–35.
152. Bajaj C, Xu G, Holt R, Netravali A. Hierarchical multiresolution reconstruction of shell surfaces. Computer Aided Geometric Design. 2002;19:89–112.
153. Goswami S, Gillette A, Bajaj C. Efficient Delaunay mesh generation from sampled scalar function. Proc. 16th Int. Meshing Roundtable; 2007. to appear.
154. Zhang Y, Bajaj C, Sohn BS. 3D finite element meshing from imaging data. Computer Methods in Applied Mechanics and Engineering (CMAME), special issue on Unstructured Mesh Generation. 2005;194(48–49):5083–5106. [PMC free article] [PubMed]
155. Zhang Y, Bajaj C, Xu G. Surface smoothing and quality improvement of quadrilateral/hexahedral meshes with geometric flow. Proceedings of 14th International Meshing Roundtable; 2005. pp. 449–468. [PMC free article] [PubMed]
156. Rogers DF. An Introduction to NURBS With Historical Perspective. San Diego, CA: Academic Press; 2001.
157. Piegl L, Tiller W. The NURBS Book (Monographs in Visual Communication) 2nd ed. New York: Springer-Verlag; 1997.
158. Thompson JF, Soni BK, Weatherill NP. Grid Generation. CRC Press LLC; 1999.
159. Gursoy HN. Tetrahedral finite element mesh generation from NURBS solid models. Engineering with Computers. 1996;12:211–223.
160. Anderson CW, Crawford-Hines S. Fast generation of NURBS surfaces from polygonal mesh models of human anatomy. Technical Report CS-99-101. Colorado State University; 2000.
161. Yu TY, Soni BK. NURBS evaluation and utilization for grid generation. 5th International Conference on Numerical Grid Generation in Computational Field Simulations; 1996. pp. 323–332.
162. Hughes TJ, Cottrell JA, Bazilevs Y. Isogeometric analysis: CAD, finite elements, NURBS, exact geometry, and mesh refinement. Computer Methods in Applied Mechanics and Engineering (CMAME). 2005;194:4135–4195.