The importance of modern imaging techniques for capturing detailed structural information of a biological system cannot be overstated. Unfortunately, images do not reveal the “full functional story,” and a spatially realistic computer model is often necessary for a comprehensive understanding of the complicated structural and physiological properties of the biological system’s entities under investigation. Deeper insight into the structure-to-function relationships of different entities is achieved via finite element simulations of the modeled biomedical process. A 3D (three dimensional) finite element meshed computer model of the biological system is therefore the first step in performing such simulations.
The behavioral attributes of a biological entity, or the physiological interaction between different participating components of a biological system, are often modeled mathematically via a coupled set of differential and integral equations, and quite often numerically evaluated using finite element (or boundary element) simulations. To further emphasize the premise of cardiac modeling from imaging data, we cite a few computational biomedical modeling and simulation examples: 3D computational modeling of the human heart for a quantitative analysis of cyclical electrical conductance on the heart membrane [2, 3, 4, 5, 6]; the biomechanical properties (stress-strain, elasticity) of the heart ventricular walls [7, 8, 9, 10, 11, 12]; and 3D modeling and simulation of pulsatile blood flow through human arteries/veins for vascular bypass surgery pre-planning on a patient-specific basis [13, 14, 15, 16, 17, 18]. A finite element decomposition of the geometric domain, capturing the detailed spatial features that can be gleaned from the imaging, is therefore the essential first step toward performing the necessary numerical simulations [19, 20, 21, 22].
Modeling of human vasculature from three-dimensional (3D) Computed Tomography (CT) images of the thorax is a critical step for computer-aided diagnosis (CAD) in disease domains such as lung nodules, coronary artery disease, and pulmonary embolism (PE). Even though there are many published approaches, the problem is still unsolved. Surveys of various techniques on this topic can be found in [28, 29]. Thoracic CT angiography (CTA) imaging is often performed for patients suspected of having a PE, which is defined as a thrombus (a clot of blood) lodged in a pulmonary artery. Therefore, in order to detect pulmonary embolism, a suitable model of the vasculature distinguishing between the arterial and venous blood vessel trees is a crucial step.
In the subsequent subsections we highlight the computational pipeline, the main algorithmic components, and a few representative results of our Cardiovascular Modeling from Imaging software.
The Imaging-to-Modeling software system for cardiovascular data employs both Image Processing and Geometry Processing functionalities to produce a suitable linear or higher-order meshed model of the anatomy. Figure 2 describes the data-flow layout. We describe the major algorithmic components of each of the processing modules in Sections 2.1 and 2.2. Note that the modules are used selectively, depending on the nature and quality of the input imaging data (Section 3).
The three dimensional intensity data often possesses low contrast between structural features and the background, making further processing all the more difficult. Image contrast enhancement is used to improve image quality, both for better visual appearance and for subsequent operations. The most commonly used methods manipulate contrast based on global [31, 32] or local histogram equalization [31, 32, 33, 34], the retinex model [35, 36], or wavelet decomposition [37, 38].
In this method, we design an adaptive transfer function for each individual voxel, based on the intensities in a suitable local neighborhood. First, we compute the local statistics (local average, local minimum, and local maximum) for each voxel using a fast propagation scheme [40, 41]. Then a transfer function is designed based upon the calculated local statistics. Various linear or nonlinear functions can be used here to stretch the contrast profile. We build a transfer function which consists of two pieces: a convex curve in the dark-intensity range and a concave curve in the bright-intensity range. The overall function is C1 continuous. Finally, we map the intensity of each voxel to a new one using the calculated transfer function. The performance of this algorithm is shown in Figure 3.
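A minimal sketch of this kind of per-voxel contrast stretching is given below in Python, assuming scipy is available. The local statistics are gathered with standard neighborhood filters rather than the fast propagation scheme of [40, 41], and the particular convex/concave pieces chosen are illustrative, not the published transfer function.

import numpy as np
from scipy.ndimage import uniform_filter, minimum_filter, maximum_filter

def enhance_contrast(vol, radius=5):
    # Per-voxel adaptive contrast stretching (illustrative sketch).
    vol = vol.astype(np.float64)
    size = 2 * radius + 1
    lo = minimum_filter(vol, size=size)    # local minimum
    hi = maximum_filter(vol, size=size)    # local maximum
    avg = uniform_filter(vol, size=size)   # local average
    span = np.maximum(hi - lo, 1e-6)
    t = (vol - lo) / span                  # voxel position within its local range
    m = (avg - lo) / span                  # normalized local average
    # Convex (dark-range) branch below the local average and concave
    # (bright-range) branch above it; both pieces meet at t = m with value m
    # and slope 2, so the overall transfer function is C1 continuous.
    out = np.where(t < m,
                   m * (t / np.maximum(m, 1e-6)) ** 2,
                   1.0 - (1.0 - t) ** 2 / np.maximum(1.0 - m, 1e-6))
    return lo + out * span                 # map back to the local intensity range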
The input images are often contaminated with noise and therefore need to be filtered. Traditional image filters include Gaussian filtering, median filtering, and frequency-domain filtering. Compared to these, anisotropic filters are preferred as they tend to preserve features better. Bilateral filtering [42, 43, 44, 45] is a straightforward extension of Gaussian filtering obtained by multiplying an additional term into the weighting function. Partial differential equation (PDE) based techniques, known as anisotropic geometric diffusion, have also been studied [46, 47]. Another popular technique for anisotropic filtering is wavelet transformation. By carefully designing the filter, one can smooth image noise while maintaining the sharpness of the edges in an image. Finally, the development of nonlinear median-based filters in recent years has also produced promising results [50, 51]. Among the aforementioned techniques, two methods for noise reduction have been suggested for tomographic data sets, namely wavelet filtering and nonlinear anisotropic diffusion.
Our approach, utilizing bilateral pre-filtering coupled with an evolution-driven anisotropic geometric diffusion PDE (partial differential equation), has shown significant results in enhancing the features of intensity maps.
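A representative form of such level-set anisotropic geometric diffusion, stated here for concreteness, evolves the intensity map φ from the bilaterally pre-filtered input φ0 according to

∂φ/∂t − ‖∇φ‖ div( Dσ ∇φ / ‖∇φ‖ ) = 0,  φ(·, 0) = φ0,

where Dσ is the anisotropic diffusion tensor whose construction is described next.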
The efficacy of our method rests on a careful selection of the anisotropic diffusion tensor Dσ, based on estimates of the normal, the two principal curvatures, and the principal curvature directions of a feature isosurface (level set) in three dimensions [54, 55, 56]. The diffusivities along the three independent directions of the feature isosurface are determined by the local second-order variation of the intensity function at each voxel. In order to estimate continuous first- and second-order partial derivatives, a tricubic B-spline basis is used to locally approximate the original intensity.
The Fuzzy C-Means (FCM) algorithm and the Expectation Maximization (EM) algorithm have been used for soft clustering in data mining and image classification. Pham et al. proposed an Adaptive Fuzzy C-Means (AFCM) algorithm to classify inhomogeneous medical images and volume datasets [59, 60]. Ahmed et al. proposed a bias-corrected FCM algorithm to compensate for intensity inhomogeneities in images and volume datasets [61, 62]. Each of these algorithms minimizes an objective function through iterative methods. Gopal et al. proposed a maximum likelihood estimation algorithm with a Spatially Variant Finite Mixture Model (SVFMM) for image classification. Laidlaw suggested a partial-volume Bayesian classification algorithm based on Bayes’ theorem.
Our goal in 3D map segmentation is to partition the map into a number of connected regions of interest. We have implemented and compared several classification algorithms [64, 65, 66, 67]. We first locate seed points by gradient vector diffusion. We then compute the min-max intensity range of every seed point’s neighbors, and cluster seed points into the same region if their min-max ranges overlap. We then apply a GVF snake [69, 70] to assign the remaining voxels to the separate regions. In addition, the contour spectrum is used to identify the number of materials in the image and to select critical isovalues based on the volume fraction of each material. We have also developed a multi-dimensional signature based voxel classification scheme appropriate for medical imaging data. Figure 5 shows the results of the classification on a single slice of the patient scan, where the voxels have been classified into background, lungs, and vasculature.
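As an illustration of the seed-clustering step, the following Python sketch groups seed voxels whose local min-max intensity ranges overlap, using a simple union-find; the neighborhood size and the helper name are hypothetical, and the actual implementation may differ.

import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def cluster_seeds(vol, seeds, radius=2):
    # Group seed voxels whose local min-max intensity ranges overlap.
    size = 2 * radius + 1
    lo = minimum_filter(vol, size=size)
    hi = maximum_filter(vol, size=size)
    ranges = [(lo[tuple(s)], hi[tuple(s)]) for s in seeds]
    parent = list(range(len(seeds)))        # union-find forest
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(seeds)):
        for j in range(i + 1, len(seeds)):
            (a0, a1), (b0, b1) = ranges[i], ranges[j]
            if a0 <= b1 and b0 <= a1:       # min-max ranges overlap
                parent[find(i)] = find(j)   # merge the two seeds' regions
    return [find(i) for i in range(len(seeds))]   # region label per seed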
Segmentation is a way to electronically dissect the significant biological components, and thereby obtain a clear view of the machinery’s architectural organization. Segmentation is usually carried out either manually [74, 75, 76, 77, 78] or semi-automatically [79, 80]. Current efforts at decomposition still largely rely on manual work with the assistance of a graphical user interface [81, 82]. Manual segmentation can be tedious and often subjective [76, 83]. Automated segmentation is still recognized as one of the hardest tasks in the field of image processing, although various techniques have been proposed for automated or semi-automated segmentation. Commonly used methods include segmentation based on edge detection, region growing and/or region merging, active curve/surface motion, and model-based segmentation. In particular, two techniques have been discussed in detail in the electron tomography community: the watershed immersion method and a method based on normalized graph cut and eigenvector analysis.
In [84, 85, 86] we adopted a variant of the fast marching method [87, 88, 89]. In this method a contour is initialized from a pre-chosen seed point, and the contour is allowed to grow until a certain stopping condition is reached. Traditionally this method is designed for single-object segmentation. Our approach is based on the idea of “re-initialization”: the critical points of the image are regarded and classified as seeds, every seed initiates one contour, and all contours grow simultaneously and independently. We further classify the critical points into clusters and merge the growing contours that are initiated by critical points in the same cluster. This multi-label idea has been used elsewhere, but the detection and classification of seeds are different in our approach. Figure 6 shows the process of segmentation, namely the seed point classification and region growing on a single slice of the image, followed by the final segmented three dimensional model of a human heart from patient imaging data.
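The following Python sketch captures the flavor of this multi-seed growth: every seed starts its own front, fronts advance in order of increasing intensity difference from their seed, and growth stops at a threshold. It is a simplified stand-in (a plain priority-queue flood, with hypothetical parameter names), not the full fast marching solver used in the cited work.

import heapq
import numpy as np

def grow_regions(vol, seeds, threshold):
    # Multi-label region growing from classified seed points.
    labels = np.zeros(vol.shape, dtype=np.int32)      # 0 = unlabeled
    heap = []
    for lab, s in enumerate(seeds, start=1):
        heapq.heappush(heap, (0.0, tuple(s), lab, float(vol[tuple(s)])))
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while heap:
        cost, v, lab, seed_val = heapq.heappop(heap)
        if labels[v] or cost > threshold:
            continue
        labels[v] = lab                               # freeze this voxel
        for d in offsets:
            n = tuple(int(v[k] + d[k]) for k in range(3))
            if all(0 <= n[k] < vol.shape[k] for k in range(3)) and not labels[n]:
                heapq.heappush(heap, (abs(float(vol[n]) - seed_val), n, lab, seed_val))
    return labels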
Extraction of a skeletal description often leads to a complexity-reducing, better understanding of the image, as it amplifies the lower dimensional key features present in the data. Image-based skeletonization algorithms abound in the field of image processing. Previous efforts on this topic by other authors can be categorized into three approaches: one based on isotropic diffusion governed by a set of linear PDEs [91, 92], one using scale-space theory [93, 94, 95], and one based on a pseudo-distance map. We have developed two distinct approaches to extract skeletons from imaging data which offer robustness and efficiency. The first approach is based on the critical point structure of the imaging data. We first compute the gradient vector field at every voxel of the imaging data. Because of noise, this vector field is somewhat arbitrary and does not carry much useful information. To bring out the underlying structure, we then apply gradient vector diffusion (GVF). The resulting vector field is analyzed to detect the critical points, and these critical points are joined to form a skeletal structure of the foreground of the image. The details of the process are given in .
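A minimal sketch of the vector-field diffusion step, written in the spirit of Xu and Prince's gradient vector flow (which may differ from the precise diffusion scheme used in our pipeline), is given below; parameter values are illustrative.

import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def gradient_vector_flow(vol, mu=0.2, iters=100, sigma=1.0):
    # Diffuse the (smoothed) image gradient field; the critical points of the
    # resulting field, where all three components vanish, can then be detected
    # and linked into a skeletal structure of the foreground.
    f = gaussian_filter(vol.astype(np.float64), sigma)
    fx, fy, fz = np.gradient(f)
    mag2 = fx**2 + fy**2 + fz**2
    u, v, w = fx.copy(), fy.copy(), fz.copy()
    for _ in range(iters):
        u += mu * laplace(u) - mag2 * (u - fx)
        v += mu * laplace(v) - mag2 * (v - fy)
        w += mu * laplace(w) - mag2 * (w - fz)
    return u, v, w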
The second approach for skeletonization starts with an isosurface and the distance function induced by it . It then places a sequence of inner medial balls in a greedy fashion, capturing as much inner volume as possible. A neighborhood graph, based on the intersection pattern of these inner medial balls, is then constructed, which provides the one dimensional skeletal structure. Figure 7 shows an example result of this algorithm applied to the CTA data of a human heart.
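The greedy ball-placement idea can be sketched as follows in Python, starting from a binary segmentation mask; the stopping radius is an illustrative parameter, and the published algorithm operates on the isosurface-induced distance function rather than a voxel mask.

import numpy as np
from scipy.ndimage import distance_transform_edt

def greedy_medial_balls(mask, min_radius=2.0):
    # Repeatedly place an inner ball at the uncovered voxel deepest inside the
    # shape, mark the volume it covers, and record its center and radius.
    # Linking balls that intersect yields the neighborhood graph whose spine
    # serves as a one dimensional skeleton.
    dist = distance_transform_edt(mask)
    covered = np.zeros(mask.shape, dtype=bool)
    zz, yy, xx = np.indices(mask.shape)
    balls = []
    while True:
        d = np.where(covered, 0.0, dist)
        c = np.unravel_index(np.argmax(d), d.shape)
        r = float(d[c])
        if r < min_radius:                  # remaining volume is essentially covered
            break
        inside = (zz - c[0])**2 + (yy - c[1])**2 + (xx - c[2])**2 <= r * r
        covered |= inside
        balls.append((tuple(int(i) for i in c), r))
    return balls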
Image registration is commonly employed to flexibly match two different instances of a biological structure. In the context of cardiovascular modeling, the problem is to fit one image of the anatomy in question to its counterpart observed at a different time, or imaged from a different patient. Based on the transformation model that is applied to one instance (the source) to fit the other (the target), image registration is primarily of two types: affine and elastic/deformable.
In the case of affine registration, the relationship between the source and target images is established via a set of transformation parameters, and those parameters are estimated by minimizing a quadratic error function. Algorithms taking such approaches can be found in [97, 98, 99]. We have developed an algorithm in  which exploits non-equispaced Fourier transformation techniques [101, 102] to speed up the affine image registration process.
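A spatial-domain sketch of such parameter estimation is shown below in Python; it minimizes the sum of squared intensity differences over the 12 affine parameters and stands in for, rather than reproduces, the Fourier-accelerated method cited above.

import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize

def register_affine(source, target):
    # Estimate a 3x3 matrix A and translation t so that the warped source
    # matches the target in the least-squares (quadratic error) sense.
    def sse(p):
        A, t = p[:9].reshape(3, 3), p[9:]
        warped = affine_transform(source, A, offset=t, order=1)
        return float(np.sum((warped - target) ** 2))
    p0 = np.concatenate([np.eye(3).ravel(), np.zeros(3)])   # start from identity
    res = minimize(sse, p0, method='Powell')
    return res.x[:9].reshape(3, 3), res.x[9:]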
The task is far more challenging when the deformation of one image needs to be taken into account to properly fit it to the other image. For a survey of affine and deformable image registration algorithms, see . That survey is dated, however, and many other algorithms have been published since. Bajcsy and Kovacic modeled the elastic image registration problem via the deformation of elastic plates . Christensen et al. considered the deforming image to be embedded in a viscous fluid whose motion is governed by the Navier-Stokes equation . Based on a similar viscous fluid registration scheme, Yanovsky et al. designed a new energy function introducing Jacobian maps , and this method was shown to perform better than  in terms of convergence and stability. There are other methods as well, e.g., a level-set based method by Clarenz et al. and an edge matching technique by Mumford and Shah .
Geometry extraction from three dimensional volumetric data is a primary step toward further analysis. There are several approaches for accomplishing this task.
Isosurfacing is a popular method to extract surface geometry from a scalar volume containing intensity values of the scanned anatomy. Two types of contouring methods are frequently used in the literature: primal contouring and dual contouring. The most widely used primal contouring technique is the Marching Cubes method , which extracts the geometry in a piecewise fashion by visiting every voxel of the volume data. Several improvements of this technique have been reported since the first appearance of the algorithm . The dual contouring technique is similar to primal contouring in the sense that it also extracts the isosurface in a piecewise fashion; however, it places a sample point inside every voxel, instead of on every edge of the voxel, to better approximate possible sharp features in the extracted geometry. We have experimented with both techniques for extracting geometry from cardiac CT data and have found that they perform similarly, with neither showing a significant advantage over the other. Figure 8 shows sample isocontours extracted from CTA imaging data of a human heart.
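For readers who wish to reproduce the primal contouring step, the scikit-image implementation of Marching Cubes (not the code used in our system) suffices; the spacing values shown correspond to the first patient dataset described later in the results.

import numpy as np
from skimage import measure

def extract_isosurface(vol, isovalue, spacing=(0.390625, 0.390625, 0.3)):
    # Extract a triangulated isosurface from the CT volume; the isovalue is
    # typically chosen from the contour spectrum or by visual inspection.
    verts, faces, normals, values = measure.marching_cubes(vol, level=isovalue,
                                                           spacing=spacing)
    return verts, faces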
Other than isocontouring, one can also apply level-set based methods, where a seed is grown to capture the boundary of a region in the image based on the intensity values. In the literature, such a technique is commonly known as a snake [69, 70]. Note that this is similar to the image segmentation technique described earlier.
Both of these approaches are very sensitive to the noise present in the data, and isosurfacing techniques in particular suffer when the imaging data is inhomogeneous. To circumvent these problems we adopt a third approach, based on scattered data interpolation. After performing image segmentation on the CT image, we obtain a set of voxels belonging to every region of interest. We then apply point-based reconstruction to extract the geometry from the cloud of boundary voxels. Point-based reconstruction has been researched extensively in the last decade; we refer the reader to recent surveys for prior work in this area. We have adopted two recent techniques for our reconstruction purposes: TightCocone and RobustCocone. The TightCocone algorithm by Dey and Goswami  reconstructs a watertight triangulated surface from a possibly undersampled input point cloud. A variant of this algorithm, called RobustCocone, which is particularly suitable for noisy data, was developed in 2004. In our case, we often encounter noise in the segmented image even after applying image segmentation techniques, and to handle such cases we use RobustCocone for the reconstruction of the geometry. Figure 9 shows the results of surface reconstruction on point sets sampling the compartments of a human heart.
Surface reconstruction from scattered data has also been approached variationally. These techniques typically formulate an energy function based on the input data points and extract a surface that minimizes that energy. Such an approach was first advocated by Zhao et al. in , who formulated the energy function as the integral of the distance function, weighted by the area element, over the input set of primitives to which a surface needs to be fit, and then evolved an initial guess using a convection-based approach.
We adopt an approach based on the higher order level set spline (HLS) method. Given non-negative energy functions g(x) and h(x, n), the surface Γ is defined to be the one that minimizes the energy E(Γ) = ∫Γ g(x) dx + ∫Γ h(x, n) dx. Given an input set of points, one formulates this energy using the distance function and evolves an initial approximation so that the resulting deformation minimizes the energy. Details of this method can be found in .
The cardiovascular geometry extracted from imaging data typically has topological anomalies, such as small spurious components and noisy features. A careful investigation, and subsequent removal, of the spurious features present in the data is therefore essential. The following are typical scenarios.
Geometry from volume data is often reconstructed via image segmentation. However, the set of voxels segregated from the imaging data does not always conform to the true surface topology. As a result, we encounter subsets of voxels which do not sample a two dimensional manifold. It is therefore important to recognize the dimension of the underlying space of the voxels marked by the segmentation process and remove the spurious ones. To perform this task, we use the technique described in . A similar Voronoi-based approach was also reported in . Following , we first construct a k-neighborhood graph. Then, for every point, we collect the neighboring points and perform Principal Component Analysis (PCA) on that subset. The eigenvalues of the covariance matrix determine the underlying dimension of the manifold. More precisely, if all three eigenvalues are almost equal, the voxel is inside the segmented region and samples a three dimensional volume. If two eigenvalues are almost equal and the third is much smaller, the voxel lies on the boundary surface of the segmented region and is a true candidate for subsequent geometry reconstruction. Finally, if two of the eigenvalues are much smaller than the third, the voxel samples a dangling one dimensional strand, and such voxels must be removed.
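The PCA-based dimension test can be written compactly in Python; the neighborhood size and eigenvalue-ratio thresholds below are illustrative choices, not the values used in the cited technique.

import numpy as np
from scipy.spatial import cKDTree

def classify_by_pca(points, k=20):
    # Label each candidate point by the intrinsic dimension of its k-neighborhood:
    # three comparable covariance eigenvalues suggest an interior (volume) sample,
    # one small eigenvalue a surface sample, and two small eigenvalues a dangling
    # one dimensional strand that should be removed.
    pts = np.asarray(points, dtype=np.float64)
    tree = cKDTree(pts)
    labels = []
    for p in pts:
        _, idx = tree.query(p, k=k)
        nbrs = pts[idx]
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        w = np.sort(np.linalg.eigvalsh(cov))      # ascending eigenvalues
        if w[0] > 0.5 * w[2]:
            labels.append('interior')
        elif w[1] > 0.5 * w[2]:
            labels.append('surface')
        else:
            labels.append('strand')
    return labels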
Given a set of points P sampling the entire shape, possibly contaminated with topological artifacts like small connected components and thin tunnels, we synthesize a distance function hP which assigns to every point in ℝ3 the distance to the nearest sample point in P. There are four types of critical points of hP, namely maxima, index 2 saddles, index 1 saddles, and minima. It has been shown that these critical points can be detected efficiently using the duality of the Voronoi and Delaunay diagrams of the original point set P . It has further been shown that the stable manifolds of the maxima interior and exterior to the shape contain useful information connecting to the primal and complementary feature space of the shape [118, 119]. Figure 10 shows how the bone, ribs and thin blood vessels, which are not essential for creating a suitable model of the human heart, are removed via this volumetric feature quantification process.
Based on the critical points of the distance function induced by the input geometry, we perform geometry segmentation as follows; the details of the algorithm are given in .
The geometric shape is given by a set of points P sampling the shape. A feature of the shape is then defined in terms of the stable manifold of a maximum of the distance function hP. The maxima are first computed by identifying the Voronoi vertices which lie inside their dual tetrahedra. Applying a Delaunay-based reconstruction technique to the point set, one can further classify the tetrahedra holding the maxima as inside or outside; for our purpose we use only the interior maxima. We compute the stable manifolds of these maxima using the algorithm in . Adjacent stable manifolds are then merged if their generating maxima have almost the same value of hP, as measured by a parameter δ. Figure 11 shows the performance of this algorithm on cardiovascular geometries. In this figure, the six main components of a human heart, namely the aorta, pulmonary artery, and left and right atria and ventricles, are segmented out from a set of points sampling the boundary of the heart.
Computing skeletons of a geometric shape is a research issue that has been around for a long time. The medial axis [Blum] is considered a standard skeletal description of a shape, and there are algorithms [120, 121] and publicly available software to compute the Medial Axis Transform (MAT) of a shape from its point sample . However, the medial axis is composed of planar (two dimensional) and linear (one dimensional) parts. Computing such a dimension-dependent decomposition of the medial axis, as well as pruning away its hair-like branches, requires some extra machinery, which we describe in the next paragraph. There are previous works that focused on computing a linear skeleton of an arbitrary shape; these include topological thinning , distance field based methods [124, 125, 126, 127], potential field based methods , thinning via the medial geodesic function , and others [130, 131, 132]. Cornea et al. give a comprehensive survey of these techniques.
Once a suitable description of the geometry is obtained, either by reconstruction or by contouring, the distance function based approach is used to compute a skeletal structure of the shape. As in segmentation, the critical points of the function hP are computed for the set P of input points sampling the shape. The index 1 and index 2 saddle points are then detected using the Voronoi-Delaunay duality . To generate the skeletal structure, we compute the unstable manifolds of these critical points. The unstable manifold of an index 2 saddle point (U2) is one dimensional, and the unstable manifold of an index 1 saddle point (U1) is two dimensional; moreover, every U1 is bounded by some U2’s. The details of the computation of U1 and U2 are given in . Figure 12 shows two instances where skeletal structures have been constructed using this method.
Alignment of two similar but not identical geometric objects is a difficult problem, and relatively few papers in the geometric modeling community address it. Recently, an interesting technique was reported by Eckstein et al., where generalized surface flow is used for non-rigid alignment of a template geometry to the patient data. The authors design an energy function based on the pseudo-Hausdorff distance between the two geometries and evolve the template geometry to fit the patient geometry following the gradient of the energy function.
We experimented with a different approach for non-rigid fitting of the segments of the template heart to the patient data, in order to inherit the correct topological structure from the template geometry as well as retrieve information missing from the patient data. We construct a skeletal description of the different parts of the template geometry as described earlier. Every segment is then described as a union of balls centered on its one dimensional skeleton, following a popular approach due to . A mass-spring network is then built, where each ball’s mass is proportional to its radius and the spring stiffnesses are taken to be uniform. Normal mode analysis (NMA), a popular way to depict the vibrational behavior of a molecule , is then applied to the network, producing a spectrum of possible deformed shapes. We choose the deformed shape from this spectrum that best fits the patient geometry.
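A compact sketch of the normal mode computation on the ball-and-spring network is given below; the cutoff distance, the uniform stiffness value, and the helper name are illustrative assumptions.

import numpy as np
from scipy.linalg import eigh

def normal_modes(centers, radii, cutoff, k_spring=1.0):
    # Connect balls whose centers lie within 'cutoff' by springs of a single
    # stiffness; masses are proportional to the ball radii. The low-frequency
    # eigenvectors of K x = w^2 M x span the coarse deformations from which the
    # best fit to the patient geometry is selected.
    centers = np.asarray(centers, dtype=np.float64)
    n = len(centers)
    K = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = centers[j] - centers[i]
            L = np.linalg.norm(d)
            if 0 < L < cutoff:
                u = d / L
                blk = k_spring * np.outer(u, u)          # elastic network block
                K[3*i:3*i+3, 3*i:3*i+3] += blk
                K[3*j:3*j+3, 3*j:3*j+3] += blk
                K[3*i:3*i+3, 3*j:3*j+3] -= blk
                K[3*j:3*j+3, 3*i:3*i+3] -= blk
    M = np.diag(np.repeat(np.asarray(radii, dtype=np.float64), 3))
    eigvals, modes = eigh(K, M)          # generalized symmetric eigenproblem
    return eigvals, modes                # columns of 'modes' are the normal modes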
The final goal of this Imaging-to-Modeling software system is to produce a suitably discretized meshed model of the imaged biological entity. The meshing task is primarily divided into two parts, Boundary Element and Finite Element meshing, each of which again has three sub-parts.
Boundary Element meshing refers to the meshing of the surface geometry; the appropriate type of boundary element mesh depends on the required smoothness and shape of the patches forming the surface mesh.
Finite Element meshing refers to the techniques of producing the discretization of the volume enclosed by the surface geometry.
We exhibit the implementation results of our image and geometry processing algorithms on modeling the Heart and Vasculature (Coronary Artery, Pulmonary Artery, Abdominal Aorta and Thoracic Aorta).
We have experimented with images of patient hearts. To illustrate the performance of the algorithms, we have selected two datasets, one of which is a low quality image while the other is of relatively better quality.
The first heart dataset has dimensions 512 × 512 × 432, with voxel spacing in the x, y, z directions of 0.390625 mm, 0.390625 mm, and 0.3 mm respectively. We first applied contrast enhancement to the original image and then applied the fast marching based segmentation to the contrast-enhanced image to separate the subvolumes corresponding to the aorta, pulmonary artery, right atrium and left atrium. Because of the poor quality of the image, the ventricles could not be recovered well. The result of the image processing on this dataset is shown in Figure 14(B). After the four components of the patient heart were segmented from the volume, we took the boundary voxels of each of these four regions and applied surface reconstruction from scattered points to build the initial triangulated geometric models. As one can see, these models contain a fair number of spurious parts as well as some missing information that could not be retrieved from the patient data. We analyzed each of the recovered geometries using the critical point structure of the distance function described in Section 2.2. Using curation and geometry segmentation, we identified the portions which most prominently correspond to the portions of a template heart model, while pruning away the undesired portions. Figure 14(C) shows the relevant portions that have high correlation with the corresponding portions of the template geometry. In parallel with the processing of the patient heart, we also performed a geometric analysis of the template heart model: we constructed the one dimensional skeletal structure of all six components of the template, and segmented the template into aorta, pulmonary artery, and right and left ventricles and atria (Figure 14(D)).
From there we could draw a correspondence between the segmented portions of the patient heart and the template heart (Figure 14(E)).
The second dataset was of better quality in terms of contrast and the noise present. We first extracted the geometry via isocontouring. However, using a single isovalue could not entirely serve the purpose, as it also extracts the surrounding vasculature, as shown in Figure 15(A). We therefore resorted to curation to extract just the geometry of the heart by pruning away the surrounding thin blood vessels (Figure 15(B)). After this step we were left with the geometry of the heart, which we further segmented using the stable manifolds of the maxima of the distance function induced by the geometry. The segmentation step was crucial, as it was able to separate the six main components of the patient heart, namely the aorta, pulmonary artery, and right and left ventricles and atria, as shown in Figure 15(C). We were then able to draw the correspondence between each of the segmented parts and the corresponding ones from the template geometry, as shown in Figure 15(D).
The primary objective behind modeling the pulmonary artery was to detect pulmonary embolism (PE) automatically . This required an initial artery-vein separation from the CT scans of the vasculature, which was performed directly on the input imaging data by the image skeletonization technique described in Section 2.1. The skeletons are traced from the ends of the branches and traversed upward toward the heart. Once a trace reached the patient heart, through a series of disambiguations needed because of the poor image quality, one of the branches was tagged arterial while the other was tagged venous. At the same time, the rest of the volume was classified using the voxel classification technique described in Section 2.1 (Figure 16(b,c)). This led to a complete characterization of the CT data into the major components, with the artery and vein separated (Figure 16(d–f)).
Starting with the CT scan of the abdominal section of the patient, we first extracted the geometry using an isovalue that best captures the geometry of the abdominal aorta, along with the surrounding bone structure and other anatomical parts which are irrelevant for the modeling of the aorta itself. To separate the aorta, we performed geometry segmentation using the stable manifolds of the maxima of the distance function. The segmented aorta is shown in green in Figure 17(a,b). We then performed geometry skeletonization on this segmented geometry, producing a one-dimensional skeletal structure (Figure 17(c,d)). This skeleton was then used to build a swept hexahedral volume that best represented the geometry of the aorta, and also served as a control mesh which was then used to build a solid NURBS model of the abdominal aorta (Figure 17(e,f)).
Starting with the scan of the patient heart, we first segmented out and extracted the geometry of the thoracic aorta via isocontouring. We then performed skeletonization of this geometry in order to obtain a one-dimensional polylinear skeleton (Figure 18(A.1)). An external pathway was added to the skeletal structure in order to simulate the LVAD (left ventricular assist device) used during open-heart surgery. Using the skeleton as a sweeping path, we then built a hexahedral control mesh that best represented the aorta along with its inner volume (Figure 18(A.2)). This control mesh was then used to construct a solid NURBS mesh of the thoracic aorta, as shown in Figure 18(A.3).
Starting with a CT scan of the patient heart, the coronary artery was first extracted, as shown in Figure 18(B.1). The coronary artery has two main branches, right and left, which were then geometrically segmented from the whole vascular structure. Figure 18(B.2) shows the two vascular trees colored in red and blue. We then computed the skeletal structure for each of these substructures using the skeletonization method described earlier (Figure 18(B.3)), and subsequently a NURBS mesh model was constructed using the swept-volume technique described in the meshing subsection earlier (Figure 18(B.4, B.5)).
We have developed a comprehensive software collection (http://cvcweb.ices.utexas.edu/software/) of data processing tools (http://cvcweb.ices.utexas.edu/cvc/projects/medx/pipeline.php) that can be utilized to produce patient-specific, spatially realistic, linear and curved, boundary and finite element models of the human cardio-vasculature. Such a combination of geometry extraction and geometric modeling is a necessary enabler for quantitative and interrogative querying, analysis, and visualization. These boundary and finite element models are additionally useful for a variety of physiological and biochemical modeling and simulation of normal or diseased conditions, as well as for virtual surgical training, treatment planning, and drug or dosage delivery. Current imaging modalities we have successfully processed through our software include Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), and their blood-perfused variations, for cardio-vasculature modeling, and micro-CT for osteoporotic bone modeling.
This research was supported in part by NSF grant CNS-0540033 and NIH contracts P20-RR020647, R01-EB00487, R01-GM074258, and R01-GM07308. We thank Dr. Charlie Walvaert of the Austin Heart Hospital, USA for providing us with the CT64 thoracic scan. Thanks are also due to Joe Rivera of CVC for his immense help with data processing. We would additionally like to thank Dr. Tom Hughes and Dr. Yuri Bazilevs for helpful discussions, and Dr. Sangmin Park and Dr. Yongjie Zhang for their help with the image processing and mesh generation parts of this paper.