IEEE Trans Biomed Eng. Author manuscript; available in PMC 2010 September 20.

Published in final edited form in IEEE Trans Biomed Eng. Published online 2008 December 2. doi: 10.1109/TBME.2008.2009882

PMCID: PMC2942026

NIHMSID: NIHMS231269

J. Samuel Preston, Tolga Tasdizen, Member, IEEE, Christi M. Terry, Alfred K. Cheung, and Robert M. Kirby, Member, IEEE

J. Samuel Preston, Scientific Computing and Imaging Institute and the School of Computing, University of Utah, Salt Lake City, UT 84112 USA (Email: jsam@sci.utah.edu)


Abstract

Numerical simulations entail modeling assumptions that impact outcomes. Therefore, characterizing, in a probabilistic sense, the relationship between the variability of model selection and the variability of outcomes is important. Under certain assumptions, the stochastic collocation method offers a computationally feasible alternative to traditional Monte Carlo approaches for assessing the impact of model and parameter variability. We propose a framework that combines component shape parameterization with the stochastic collocation method to study the effect of drug depot shape variability on the outcome of drug diffusion simulations in a porcine model. We use realistic geometries segmented from MR images and employ level-set techniques to create two alternative univariate shape parameterizations. We demonstrate that once the underlying stochastic process is characterized, quantification of the introduced variability is quite straightforward and provides an important step in the validation and verification process.

I. INTRODUCTION

In any numerical simulation, assumptions are made that do not precisely reflect the physical reality of the problem. These simplifications yield tractable problems and also introduce error. As part of the traditional simulation validation and verification (V&V) process, it is important to understand the impact of a given assumption on the final result. Assuming that a given modeling decision represents a specific choice from a set of possible decisions, it is possible to quantify the impact of that choice on the simulation. Knowing the variability introduced by the range of possible choices allows confidence in the final result to be adjusted accordingly. Probabilistic characterization of the effect of modeling decisions on simulation results is a powerful tool for validation, but is generally prohibitively costly due to the number of simulations that must be run to obtain a reasonable confidence level for the statistics. To quantify the impact of a given modeling decision using a standard Monte Carlo approach, individual samples (each representing a specific modeling choice) are randomly taken from the population of possible choices according to the underlying probability distribution. However, this method converges quite slowly (as $1/\sqrt{N}$ [1]), often requiring thousands of samples (and therefore thousands of simulations), which generally proves a prohibitive requirement. If instead we can model this population as arising from an underlying stochastic process, it is possible to apply the generalized polynomial chaos–stochastic collocation (gPC-SC) method [2], [3] to efficiently sample the probability space, yielding accurate statistics from a nearly minimal number of samples. The stochastic collocation method can be seen as a “sampling” extension to generalized polynomial chaos, which represents the stochastic process as a linear combination of orthogonal polynomials of random variables.
This gPC representation has been used to solve stochastic differential equations in a number of fields [4]–[7]. Stochastic collocation has been successfully used to quantify the effects of heart position in ECG forward simulation [8] and in modeling uncertainty in diffusion simulation due to microstructure variability [9]. Since stochastic collocation builds statistics based on deterministic solutions for sampled stochastic parameter values, it only requires a standard discrete solver for the problem of interest. This makes for easy implementation and allows its use on problems with complicated governing equations for which a nonsampling gPC formulation would be difficult or impossible.
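The convergence gap between the two approaches can be illustrated with a toy example (a sketch, not the paper's simulation): estimating the mean of a smooth function of a uniformly distributed parameter by Monte Carlo sampling versus Gauss–Legendre quadrature, the simplest one-dimensional collocation rule. The function `np.exp` here is an illustrative stand-in for a smooth simulation outcome.

```python
import numpy as np

# Toy "simulation outcome": a smooth function of a uniform random
# parameter xi on [-1, 1]; its exact mean is sinh(1).
f = np.exp
exact_mean = np.sinh(1.0)

# Monte Carlo estimate: error decays as 1/sqrt(N).
rng = np.random.default_rng(0)
mc_mean = f(rng.uniform(-1.0, 1.0, size=1000)).mean()

# Gauss-Legendre quadrature ("collocation"): for smooth f the error
# decays exponentially in the number of points q.
xi, w = np.polynomial.legendre.leggauss(5)   # q = 5 points
quad_mean = 0.5 * np.sum(w * f(xi))          # factor 1/2 is the uniform pdf
```

With only five deterministic evaluations the quadrature estimate is accurate to roughly ten digits, while a thousand Monte Carlo samples give about two.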

The specific experiment we are simulating in this paper attempts to test the efficacy of the antiproliferative drug rapamycin delivered via an injected triblock copolymer gel in limiting neointimal hyperplasia growth at the venous anastomosis of an implanted arteriovenous expanded polytetrafluoroethylene (ePTFE) graft. This experiment is part of a study analyzing the efficacy of such a procedure for possible future use in hemodialysis patients with implanted ePTFE grafts. The current study uses a porcine model with grafts implanted between the common carotid artery and ipsilateral external jugular vein [10].

We are specifically interested in drug transport via diffusion from the injected gel deposit containing drug (the “drug depot”) to the venous anastomosis (the “target site”). The geometries of the vein, artery, graft, and depot are modeled from MR images, and the diffusion coefficients of each are experimentally determined [11] (see Table I). All other structures in the test region are combined and labeled as homogeneous “nonvascular tissue.” Additionally, the vein, artery, and graft wall thicknesses cannot be measured reliably from MR images. Therefore, we use measurements obtained from histological studies.

One of the modeling decisions involved in this simulation is the geometric shape of the drug depot. While the volume of gel and amount of drug are known and constant from experiment to experiment, each injection is a nonrepeatable process that yields a different depot shape based on the properties of the local tissue and variability in the injection technique. For an individual simulation, a single depot shape must be chosen from the range of possible shapes. This choice introduces variability to the experimental outcome; hence, it is important to study. In this paper, we propose a method for studying the effect of depot shape on the outcome of the experiment. In order to determine the variability in drug concentrations at some simulation time due to the choice of depot shape, we develop a shape parameterization to represent the underlying population of shapes. We then use the gPC-SC method to sample this population at well-chosen parameter values and derive statistics of the effect of shape on drug concentration at our chosen simulation time. While the specific shape models developed in this paper are characterized by a single parameter and are too simple to capture the entire variability observed from all injections, the contribution of this paper is the development of a method for studying the effect of shape variability on simulation outcomes. More complex shape models with multiple parameters can be easily inserted into the proposed framework in future studies.

The remainder of the paper is organized as follows. Section II introduces the notion of probabilistic characterization of shape variability and the use of the stochastic collocation method that takes advantage of this characterization to efficiently compute simulation outcome statistics. Section III outlines the image segmentation pipeline for obtaining realistic geometries (Section III-A), the specific transformations used in characterizing shape variability (Section III-B), the discretization of segmentation results (Section III-C), the simulation model (Section III-D), and its numerical approximation (Section III-E). Results of experiments with two different shape variability characterizations are presented in Section IV, followed by a concluding discussion in Section V.

II. MODELING UNCERTAINTY IN DEPOT SHAPE

To begin our study, we must first understand and clearly articulate the mathematical problem we are attempting to solve. We accomplish this by first describing our “experiment” (in the sense that a probabilist would), then by articulating a probability space that captures the salient features of the experiment about which we want to reason, and finally, by describing how the stochastic collocation method can be employed to allow computation of statistics of interest. Our “experiment” of interest is as follows. We assume that we have been given a test volume containing the static structures of interest—in our case, the vein, artery, and graft. These objects are modeled as remaining fixed in both position and material properties (see Table I) throughout the course of all experiments. We assume that we have also been given a parameterization of drug depot shapes such that a specific parameter value corresponds to a specific geometric depot model. Given both the static test volume and a specific depot model, diffusion simulations can be run, yielding drug concentrations for positions within the test volume at any future time, subject to the assumptions of our simulation method.

For our probabilistic experiment, we limit the set of admissible depot shapes to those represented by a univariate parameterization. In Section III-B, we will formulate two specific parameterizations: one that generates shapes with varying regularity in surface curvature and another that generates shapes by morphing between two realistic depot shapes as segmented from a magnetic resonance image. The initial concentration of drug in the depot is assumed to be uniform, and the volume of all depot geometries generated by this parameterization is constant. The volume, position, and gross shape of the depot were chosen to represent a “standard” injection result as seen by postinjection MRI. Variation in depot shape can now be expressed by assuming that any specific shape is drawn from a set consisting of shapes resulting from our parameterization. We assume that there is an equal probability of obtaining any specific depot shape from our parameterization, i.e., the probability distribution is uniform. Using this parameterization, we aim to compute statistics of drug concentration at some time *t _{f}* induced by the variations in depot shape expressed by this simple (but quantifiable) probabilistic experiment. To formalize this process, let (χ, 𝒜, µ) be a complete continuous probability space that expresses variation in the depot shape, where χ is the event space consisting of outcomes corresponding to depot shapes, 𝒜 ⊆ 2^χ is the σ-algebra of events, and µ is the associated probability measure. We represent each outcome by a random variable ξ, uniformly distributed over the parameter domain Γ.

Let the distribution of drug concentration due to ξ be denoted by *f*(** x**, *t _{f}*, ξ). The mean and variance of *f* are then given by

$$\begin{array}{cc}\text{mean}(f)\phantom{\rule{thinmathspace}{0ex}}(\mathit{x},{t}_{f})\hfill & =\mathbb{E}[f(\mathit{x},{t}_{f},\xi )]\hfill \\ \hfill & ={\displaystyle {\int}_{\mathrm{\Gamma}}\phantom{\rule{thinmathspace}{0ex}}f(\mathit{x},{t}_{f},\xi )d\mu}\hfill \end{array}$$

(1)

$$\begin{array}{cc}\text{var}(f)\phantom{\rule{thinmathspace}{0ex}}(\mathit{x},{t}_{f})\hfill & =\mathbb{E}[{(f(\mathit{x},{t}_{f},\xi )-\text{mean}(f))}^{2}]\hfill \\ \hfill & ={\displaystyle {\int}_{\mathrm{\Gamma}}{(f(\mathit{x},{t}_{f},\xi )-\text{mean}(f))}^{2}d\mu}\hfill \end{array}$$

(2)

or more generally for the *p*th moment

$$\begin{array}{c}\mathbb{E}[{(f(\mathit{x},{t}_{f},\xi )-\text{mean}(f))}^{p}]\hfill \\ \hfill \text{\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}}={\displaystyle {\int}_{\mathrm{\Gamma}}{(f(\mathit{x},{t}_{f},\xi )-\text{mean}(f))}^{p}d\mu}\end{array}$$

(3)

where mean(*f*) and var(*f*) (and other higher order moments) are themselves fields defined over our test volume, giving the mean and variance of drug concentration at any ** x** due to depot shape variability, and Γ is the domain over which the random variable varies. Also note that mean(*f*)(** x**, *t _{f}*) is the expected drug concentration at position ** x** and time *t _{f}* over the population of admissible depot shapes.

We note that the process of moving from the uniform shape distribution to the distribution of *f* (** x**, *t _{f}*, ξ) is, in general, a nonlinear transformation: although depot shapes are drawn with uniform probability, the induced distribution of drug concentrations need not itself be uniform.

The gPC-SC approach takes advantage of quadrature rules to integrate the stochastic process of interest over the stochastic domain, thus allowing the efficient computation of quantities such as the mean and variance. Under assumptions of smoothness, which, in this case, equate to the assumption that drug concentration varies smoothly with changes to our parameterized depot shape model, we can gain exponential convergence in the accuracy of our computed statistics as a function of the number of diffusion simulations we are required to run. In the discussion of our experiment, we will refer to the simplified diagram in Fig. 1. In the stochastic collocation approach, a collection of points at which the random field is to be sampled is specified, and a set of corresponding weights is computed, which account for the probability density function (pdf) characteristics underlying the set from which the points (or outcomes) are drawn. In the context of our study, each collocation point ξ_{j} represents a particular gel shape selected from our outcome set. Fig. 1(a) shows an example of a univariate smoothing parameterization sampled at three collocation points, and Fig. 1(b) shows the resulting depot shapes. Since the embedding process is deterministic given a depot shape, the entire volume [shown in Fig. 1(c)] is a consequence of the chosen collocation point. At each collocation point, we can then compute the drug concentration *f* (** x**, *t _{f}*, ξ_{j}) by running a single deterministic diffusion simulation [Fig. 1(d)]. The mean and variance of *f* are then approximated by the weighted sums

$$\begin{array}{cc}\text{mean}(f)\phantom{\rule{thinmathspace}{0ex}}(\mathit{x},{t}_{f})\hfill & =\mathbb{E}[f(\mathit{x},{t}_{f},\xi )]\hfill \\ \hfill & \approx {\displaystyle \sum _{j=1}^{q}{w}_{j}f(\mathit{x},{t}_{f},{\xi}_{j})}\hfill \end{array}$$

(4)

$$\begin{array}{cc}\text{var}(f)(\mathit{x},{t}_{f})\hfill & =\mathbb{E}[{(f(\mathit{x},{t}_{f},\xi )-\text{mean}(f))}^{2}]\hfill \\ \hfill & \approx {\displaystyle \sum _{j=1}^{q}{w}_{j}{(f(\mathit{x},{t}_{f},{\xi}_{j})-\text{mean}(f))}^{2}}\hfill \end{array}$$

(5)

where *q* is the number of points used and *w _{j}* denotes the weights. As discussed further in Section IV-A, calculation of these statistics over the distinct diffusion solutions requires sampling all solutions at a common set of spatial points, as shown in Fig. 1(e). The means and variances can then be computed at these points [see Fig. 1(f)].
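Once every simulation has been sampled at the shared set of points, the weighted sums (4) and (5) are a few lines of array arithmetic. A minimal sketch follows; the array shapes and the exponential-decay "solutions" are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Mean and variance fields from q deterministic solutions f(x, t_f, xi_j),
# each sampled at the same spatial points and combined with quadrature
# weights w_j, as in eqs. (4) and (5). solutions has shape (q, n_points).
def collocation_stats(solutions, weights):
    weights = np.asarray(weights)[:, None]       # broadcast over spatial points
    mean = np.sum(weights * solutions, axis=0)
    var = np.sum(weights * (solutions - mean) ** 2, axis=0)
    return mean, var

# Hypothetical example: 3 collocation points for a uniform parameter on
# [-1, 1] (Gauss-Legendre nodes, weights normalized to sum to 1).
xi, w = np.polynomial.legendre.leggauss(3)
w = w / 2.0
x = np.linspace(0.0, 1.0, 5)                     # shared spatial sample points
solutions = np.array([np.exp(-x * (1.5 + 0.5 * xi_j)) for xi_j in xi])
mean, var = collocation_stats(solutions, w)
```

The same routine extends to the higher moments of (6) by raising the centered deviations to the *p*th power.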

Calculation of the mean drug concentration for a shape parameterization using first-order Smolyak points for collocation. (a) Univariate shape parameterization and three collocation points. (b) Three gel shapes corresponding to the collocation points.

Similarly, higher order central moments can be approximated by

$$\begin{array}{c}\mathbb{E}[{(f(\mathit{x},{t}_{f},\xi )-\mathrm{\text{mean}}(f))}^{p}]\hfill \\ \hfill \text{\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}}\approx {\displaystyle \sum _{j=1}^{q}{w}_{j}{(f(\mathit{x},{t}_{f},{\xi}_{j})-\mathrm{\text{mean}}(f))}^{p}.}\end{array}$$

(6)

Discussion of the formulation and computation of further statistics based upon the stochastic collocation method can be found in [3]. Of note is that increasing the order of the moments under examination may lead to an increase in the number of collocation points required.

III. METHODS

In order to run our diffusion simulation, we must first have a discretized representation of the tissue in which we are interested. For our statistical experiment, the model will be a combination of “static” components not changed from simulation to simulation and a model of the injected drug depot generated from our parameterization that changes based on the parameter value chosen. The depot shape is assumed to be unchanged within a single diffusion simulation. While this ignores degradation of the gel over time, changes to the shape or diffusive properties of the depot would be difficult to determine experimentally and would greatly increase the complexity of the simulation. Both the models of the static components and the depot shape parameterization rely on segmentations of MRI volumes. Once specific structures are segmented, geometric models of these objects are generated. These models are used to create a finite-element diffusion simulation for each parameter value. This process is described in greater detail in the following sections.

Lumen and depot geometries were constructed from MR images taken at weekly intervals following surgery. Gel injection was performed at these times, with pre- and postinjection scans acquired. Lumen geometry was obtained via a “black blood” 3-D turbo spin echo pulse sequence. This sequence uses a nonselective inversion pulse followed by a selective reinversion pulse in preparation for imaging in order to suppress signal from blood flowing into the imaging plane. A single lumen geometry was selected as the “standard” and used for all simulations; this is the static part of the model. Fig. 2 depicts the processing pipeline for image segmentation and mesh generation. A rough segmentation of the lumen was performed using user-defined seed points within the lumen as inputs to a connected component/thresholding algorithm [see Fig. 2(b)]. Due to the extreme narrowing of the vessels near the anastomosis and the common presence of flow and pulsatile motion artifacts in this region, a minimum cost path (MCP) algorithm is necessary to complete the lumen connection through this area [see Fig. 2(c)]. This technique is similar to the stenosis-robust segmentation technique described in [13]. Starting from the initial segmentation, this algorithm defines a graph in which voxels are considered nodes and each voxel has an edge to each immediate neighbor (six-connected). Since lumen should be the darkest region in this black blood scan, the cost associated with each edge is based on the intensity of the neighboring pixel. A search is then performed to find the MCP connecting the disconnected segmentation regions. An active-contour-based region growing step (implemented in the ITK-SNAP [14] software package) was then used to refine and “fill in” the segmentation [Fig. 2(d)].
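The MCP search described above is essentially Dijkstra's algorithm on the voxel graph. The following sketch illustrates the idea on a toy 2-D image with four-connectivity and hypothetical intensities, where the cost of stepping into a voxel is its intensity (the lumen is dark, so the path follows the dark channel).

```python
import heapq
import numpy as np

# Minimum-cost path between two seeds on a toy 2-D "black blood" image:
# voxels are nodes, 4-connected edges, edge cost = intensity of the
# voxel being entered (lumen is darkest, so the path stays in it).
def min_cost_path(img, start, goal):
    h, w = img.shape
    dist = np.full((h, w), np.inf)
    dist[start] = img[start]
    prev = {}
    pq = [(img[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue                              # stale queue entry
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and d + img[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + img[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(pq, (dist[nr, nc], (nr, nc)))
    path, node = [goal], goal                     # walk back from the goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Bright background with a dark channel the path should follow.
img = np.full((5, 5), 10.0)
img[2, :] = 1.0                                   # dark "lumen" row
path = min_cost_path(img, (2, 0), (2, 4))
```

The real pipeline runs the same search in 3-D with six-connectivity, seeded by the disconnected components of the initial segmentation.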

Segmentation and discretization pipeline for lumen geometry. (a) Black blood MR image. (b) Initial segmentation from user-defined seed points. (c) Disconnected segments of lumen (see artery in the left of previous image) are connected using an MCP algorithm.

Gel images were taken immediately following injection using a specially developed pulse sequence for depot imaging [15]. Due to the improved contrast introduced by this sequence, a simple thresholding algorithm was sufficient for segmentation. The segmented volume agreed well with the known injected volume—varying the threshold between minimum and maximum “reasonable” levels changed the segmented volume by approximately ±20% of the known volume. The threshold level used was chosen such that the segmented volume agreed with the known injected volume.
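The volume-matching criterion stated above can be automated by bisection on the segmented-volume curve, which decreases monotonically with the threshold. A sketch follows; the synthetic image and the bisection routine itself are our illustrative assumptions, while the matching criterion is from the text.

```python
import numpy as np

# Choose a threshold so the segmented depot volume matches the known
# injected volume (the depot is bright in the gel-contrast scan, so
# raising the threshold shrinks the segmentation).
def match_volume_threshold(img, voxel_volume, target_volume, lo, hi, iters=50):
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        seg_volume = np.count_nonzero(img > mid) * voxel_volume
        if seg_volume > target_volume:
            lo = mid          # too much included: raise the threshold
        else:
            hi = mid          # too little included: lower it
    return 0.5 * (lo + hi)

# Synthetic "image" of 101 intensities; target = half the voxels.
img = np.linspace(0.0, 1.0, 101)
th = match_volume_threshold(img, voxel_volume=1.0, target_volume=50.0,
                            lo=0.0, hi=1.0)
```

Because the voxel count is a step function of the threshold, the match is exact only up to one voxel.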

To apply the stochastic collocation method, we require a parameterization of depot shape—the variable component of the geometric model. To physically model the injection process would require a complex, high-dimensional parameterization based on properties of surgically damaged tissue, gel properties, and injection techniques, and would result in a more complex simulation than the diffusion problem we are attempting to solve. Instead, our parameterization is based on the results we have—namely segmentations of the injected depot from postinjection MRI. In this way, we can develop a simple, low-dimensional parameterization that still respects the available data.

We explore two parameterizations, both based on the evolution of existing depot segmentations using level-set methods [16]. Since level-set methods operate directly on a volume of data, they are an obvious choice for working with images acquired from MRI and have the advantages of easily allowing deformations based on differential properties of a surface and automatically handling changes in surface topology. For both parameterizations, we require that the volume of the depot be constant for any parameter value in order to keep the amount of drug uniform for all simulations while still using the known value for drug concentration in the injected gel.

Our parameterization will be formed by deforming an initial depot surface over time. Note that “time” in this section does not refer to the diffusion simulation time, but to an arbitrary unit describing the amount of evolution our level-set surface has undergone. In order to set up our level-set method, assume that we start with some initial surface ${\mathcal{S}}_{0}$ at time *t* = 0. We will find a scalar function θ such that ${\mathcal{S}}_{0,k}$ is an isosurface of θ with value *k*, i.e.

$${\mathcal{S}}_{0,k}=\{\mathit{x}|\theta (\mathit{x},{t}_{0})=k\}.$$

(7)

We can then modify this surface by changing the function θ based on properties of ${\mathcal{S}}_{t}$. More generally, any point ** x** in the volume at time *t* lies on some isosurface of θ, and the unit normal of the local isosurface is given by

$$\mathit{n}=\frac{\nabla \theta}{\Vert \nabla \theta \Vert}$$

(8)

by modifying the values of θ over a series of time steps

$${\mathcal{S}}_{t+1}(\mathit{x})={\mathcal{S}}_{t}(\mathit{x})+V(\mathit{x})\phantom{\rule{thinmathspace}{0ex}}\mathit{n}$$

(9)

where *V*(** x**) is the “speed function,” a function of local and global properties of the local isosurface and independent forcing functions. As described in Section II, this parameterization will be sampled at specified collocation points (“times” in our current formulation). In order to ensure stability, we use a variable time step for our level-set evolution based on the fastest moving point on our surface. We do not attempt to sample at the exact collocation point specified, but use a “nearest neighbor” approach, ensuring that our surface is sampled within one time step of the prescribed sample time. Since the purpose of the variable time step is to limit the movement of any point on the surface to one voxel length in a single time step, the maximum error introduced by this approximation is one grid spacing, and for most of the surface, the actual error is much less.
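The "nearest neighbor" sampling of collocation times can be sketched as follows. The CFL-limited step sizes are given here as an input sequence (an illustrative assumption; in the actual pipeline each step is derived from the fastest-moving surface point), and each prescribed sample time is assigned to whichever adjacent evolution time is closer.

```python
# Nearest-neighbor sampling of prescribed collocation "times" during a
# variable-time-step evolution: each target time is matched to the
# closer of the two evolution times that bracket it.
def sample_nearest(step_sizes, sample_times):
    t, samples = 0.0, {}
    pending = sorted(sample_times)
    for dt in step_sizes:
        t_next = t + dt
        while pending and pending[0] <= t_next:
            target = pending.pop(0)
            # record whichever of t, t_next is closer to the target time
            samples[target] = t if target - t <= t_next - target else t_next
        t = t_next
    return samples

samples = sample_nearest([0.4, 0.4, 0.4], [0.5, 1.0])
```

Since each step moves the surface by at most one voxel, the recorded surface is within one grid spacing of the shape at the exact collocation time, as noted in the text.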

Since we are only interested in a single isosurface of θ, it is unnecessary to update the entire volume at each time step. Only voxels near ${\mathcal{S}}_{t}$ at time step *t* are needed to calculate ${\mathcal{S}}_{t+1}$. Therefore, we use a sparse-field method [17] for calculating updates. This greatly reduces the computational burden while giving subvoxel accuracy for surface position.

The first parameterization takes an existing gel shape chosen as an example of a particularly nonregular surface and smooths it using a volume-preserving mean curvature flow (MCF) transformation. MCF smooths a surface at each time step by moving each point on the surface in the surface normal direction by an amount proportional to the mean curvature. In its original, nonvolume-preserving formulation, MCF is the gradient descent for the surface area. The mean curvature *H* of the local isosurface is proportional to the divergence of the local isosurface normal

$$H=\frac{1}{2}(\nabla \cdot \mathit{n}).$$

(10)

This technique, however, reduces the interior volume of the surface, generally resulting in a singularity as the surface collapses to a point. In order to enforce volume preservation, the movement of the surface is adjusted by the average mean curvature so that the total offset of the surface in the normal direction is zero

$$V(\mathit{x})=(H(\mathit{x})-\overline{H})$$

(11)

where

$$\overline{H}=\frac{{\displaystyle {\int}_{S}\phantom{\rule{thinmathspace}{0ex}}H(\mathit{x})\mathit{\text{dx}}}}{{\displaystyle {\int}_{S}\phantom{\rule{thinmathspace}{0ex}}\mathit{\text{dx}}}}.$$

(12)

The effect of volume-preserving MCF is to create progressively smoother shapes that approach a sphere, as shown in Fig. 3. For a more rigorous treatment, see [18]. Using this parameterization in conjunction with stochastic collocation, we can study the effect of depot shape smoothness on experimental outcomes of drug diffusion. Equivalently, for a fixed depot volume, this can be seen as studying the effect of surface area.
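A two-dimensional analogue (area-preserving curvature flow on a grid) sketches the level-set update of eqs. (10)–(12). This is our simplified illustration, not the paper's sparse-field implementation: the narrow-band average here stands in for the surface integral in (12), and the elliptical front is an arbitrary test shape.

```python
import numpy as np

# 2-D analogue of volume-preserving MCF: the speed is the level-set
# curvature minus its average over a narrow band around the front, so
# the enclosed area is (approximately) conserved.
def curvature(theta):
    gy, gx = np.gradient(theta)              # np.gradient returns (d/dy, d/dx)
    mag = np.sqrt(gx**2 + gy**2) + 1e-12
    dny_dy, _ = np.gradient(gy / mag)
    _, dnx_dx = np.gradient(gx / mag)
    return dnx_dx + dny_dy                   # divergence of the unit normal

def mcf_step(theta, band_width=2.0):
    k = curvature(theta)
    band = np.abs(theta) < band_width        # voxels near the front
    v = k - k[band].mean()                   # volume-preserving speed, eq. (11)
    gy, gx = np.gradient(theta)
    dt = 0.5 / (np.abs(v).max() + 1e-12)     # limit motion to half a voxel
    return theta - dt * v * np.sqrt(gx**2 + gy**2)

# An elliptical initial front on a 64x64 grid (illustrative, not a depot).
y, x = np.mgrid[-32:32, -32:32].astype(float)
theta = np.sqrt((x / 1.5) ** 2 + y ** 2) - 13.0
area_before = np.count_nonzero(theta < 0)
theta = mcf_step(theta)
area_after = np.count_nonzero(theta < 0)
```

Repeated steps round the ellipse toward a circle while the enclosed area stays nearly fixed, mirroring Fig. 3 in three dimensions.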

The second parameterization performs a volume-preserving metamorphosis between two depot segmentations chosen to represent the extremes of surface regularity for the existing segmentations. The less-regular shape was taken as the “source” surface and the more-regular shape as the “target” surface for the morph. In order to allow volume preservation, the target surface was scaled and resampled to have the same interior volume as the source surface.

The morphing technique used is based on Breen and Whitaker’s work on level-set shape metamorphosis [19], where transitioning between the two surface models occurs via a level-set formulation with the speed function given by

$$V(\mathit{x})=\gamma (\mathit{x})$$

(13)

where γ(** x**) is the distance transform of the target image. In this way, points on the source surface will move toward the target surface whether they are inside or outside the target surface. This shape metamorphosis, however, is not volume preserving. While we know that the interior volumes at the beginning and end of the transformation are equal, the volume of intermediate surfaces is not constrained. As Breen and Whitaker noted, the volume inside an intermediate surface can therefore drift during the morph. To compensate, the speed function is rescaled separately in the region Ω_{i} inside the target surface and the region Ω_{o} outside it: letting υ_{i} and υ_{o} denote the volume changes contributed by the surface motion in these two regions, the larger contribution is scaled down to match the smaller

$$\begin{array}{cc}{\rho}_{i}=\{\begin{array}{cc}\frac{{\upsilon}_{o}}{{\upsilon}_{i}},\hfill & \text{if}\phantom{\rule{thinmathspace}{0ex}}{\upsilon}_{i}>{\upsilon}_{o}\hfill \\ 1.0,\hfill & \text{otherwise}\hfill \end{array}\hfill & \text{\hspace{1em}\hspace{1em}}{\rho}_{o}=\{\begin{array}{cc}1.0,\hfill & \text{if}\phantom{\rule{thinmathspace}{0ex}}{\upsilon}_{i}>{\upsilon}_{o}\hfill \\ \frac{{\upsilon}_{i}}{{\upsilon}_{o}},\hfill & \text{otherwise}\hfill \end{array}\hfill \end{array}$$

and the level-set formulation now becomes

$${\mathcal{S}}_{t+1}(\mathit{x})=\{\begin{array}{cc}{\mathcal{S}}_{t}(\mathit{x})+{\rho}_{i}V(\mathit{x})\phantom{\rule{thinmathspace}{0ex}}\mathit{n},\hfill & \text{\hspace{1em}}\mathit{x}\in {\mathrm{\Omega}}_{i}\hfill \\ {\mathcal{S}}_{t}(\mathit{x})+{\rho}_{o}V(\mathit{x})\phantom{\rule{thinmathspace}{0ex}}\mathit{n},\hfill & \text{\hspace{1em}}\mathit{x}\in {\mathrm{\Omega}}_{o}.\hfill \end{array}$$

Fig. 4 shows the intermediate shapes obtained by the metamorphosis method between two segmented gel shapes. One of the gel shapes is less regular than the other, providing a second way of studying the effect of surface area for fixed-volume depot shapes.
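The rescaling rule for ρ_{i} and ρ_{o} is compact in code. This sketch follows our reading of the factors above, with υ_{i} and υ_{o} taken as the volume changes contributed by the regions inside and outside the target surface.

```python
# Speed-rescaling factors for the volume-preserving morph: whichever
# region (inside or outside the target surface) contributes the larger
# volume change is scaled down so the two contributions balance.
def morph_speed_factors(v_i, v_o):
    if v_i > v_o:
        return v_o / v_i, 1.0      # (rho_i, rho_o)
    return 1.0, v_i / v_o

rho_i, rho_o = morph_speed_factors(2.0, 1.0)
```

Both factors lie in (0, 1], and at least one is always exactly 1, so the morph is never artificially slowed on both sides at once.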

Once segmented, geometric surface models of the structures of interest were created and embedded in a control volume. This multisurface structure is used to define material regions and voids in a surface-conforming tetrahedralization of the control volume.

Since the imaged region is smaller than the desired control volume, the vessels are extended to the boundaries of the control volume. The vessels travel in roughly the *z*-direction within the imaged region, so this extension is accomplished by simply repeating the boundary *z* slices of the segmentation to the extents of the control volume.
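The slice repetition is a short operation on the label volume. A sketch, assuming the *z*-axis is the first array axis (an assumption; the actual axis ordering depends on the image format):

```python
import numpy as np

# Extend the segmented vessels to the control-volume boundary by
# repeating the first and last z slices (axis 0 is z here).
def extend_z(seg, pad_before, pad_after):
    return np.concatenate(
        [np.repeat(seg[:1], pad_before, axis=0),
         seg,
         np.repeat(seg[-1:], pad_after, axis=0)],
        axis=0)

seg = np.arange(24).reshape(2, 3, 4)          # toy 2-slice label volume
extended = extend_z(seg, pad_before=2, pad_after=3)
```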

A surface mesh of the lumen segmentation was created by slightly blurring the segmentation with a Gaussian kernel, then using this gray-level image as input to an isosurface meshing algorithm. The algorithm used is part of the Computational Geometry Algorithms Library (CGAL) [20] and uses iterative point insertion on the isosurface to create a triangle surface mesh with constraints on the maximum size and minimum angle of any member triangle.

Each of the vascular structures (vein wall, artery wall, and graft) has a different thickness and different diffusive properties. The portions of the lumen contained in the different structures cannot be clearly distinguished in the available MR images, so the generated surface mesh of the lumen was hand-annotated with this information based on knowledge of the anatomical structures. The resolution of the MRI sequence used to image the lumen was not high enough to directly image the vessel walls or graft. In order to create the geometry of the graft and vessel walls, thickness measurements from histology were used. Using the labeled surface dataset, a tetrahedral model of the vascular structures was generated by duplicating the surface mesh and offsetting each duplicate mesh point in the surface normal direction by a distance equal to the thickness of the vascular structure (see Table I for thickness values). Connecting each vertex to its offset duplicate produces a mesh of triangular prisms. The maximum triangle size of the surface mesh was chosen with respect to the thickness of the vascular structure in order to produce well-shaped prisms. This mesh is then subdivided such that each prism becomes three tetrahedra and all tetrahedra have coherent faces. Coherent prism splits are computed using the “rippling” algorithm described in [21]. The tetrahedra generated by this process then have maximum edge lengths roughly equal to the thickness of the structure they represent, 0.3–0.7 mm. While the simple offsetting technique used has the possibility of producing intersecting or incorrectly oriented tetrahedra, experimental results show that the curvature of our surface is low enough at all points relative to the offset distance to avoid producing any invalid elements. The surface mesh of a specific drug depot shape is generated from the output of our parameterized depot shape generation algorithm by the same method as used for the lumen surface.

The depot surface and the outer (offset) lumen surface are embedded in a cube representing the extent of our test volume. These surfaces are combined and used as the input piecewise linear complex (PLC) to TetGen [22]. The cube and the offset lumen surface define the boundaries of the PLC (i.e., the offset lumen surface represents a “hole” in the volume), and the elements of the depot surface mesh are used as internal constraining faces of the generated constrained Delaunay tetrahedralization (CDT) defining the drug depot region. The area outside of the depot in this CDT is considered nonvascular tissue. Elements generated are constrained to have a maximum radius–edge ratio of 2.0, allowing element size to grow away from the structures of interest. Finally, the previously computed tetrahedral mesh of the vascular structures is merged with this tetrahedral volume to create our final test domain containing five regions—the graft, arterial wall, venous wall, drug depot, and nonvascular tissue. The tetrahedralization step is constrained to preserve boundary faces along the offset lumen surface, so this merging process requires only these duplicate points be merged. This process yields a discretization containing 210 000–270 000 elements depending on the specific depot shape.

For the purposes of our simulation, we assume that drug transport is only due to diffusion. Furthermore, we assume that diffusion rates are isotropic and constant for each material; let *D* be a piecewise-constant function, discontinuous across material boundaries, consisting of the diffusion coefficients *D _{m}* associated with each material region. The drug concentration *c*(** x**, *t*) then evolves according to the diffusion equation

$$\frac{\partial c(\mathit{x},t)}{\partial t}=D{\mathbf{\nabla}}^{2}c(\mathit{x},t),\text{\hspace{1em}}\mathit{x}\in \mathrm{\Omega}$$

(14)

where appropriate boundary conditions are applied. We assume that any drug diffusing into the interior of the vascular structures would be immediately removed by blood flow, leading to a known (zero) concentration at the lumen boundary (zero Dirichlet conditions). We also require that no diffusion occur across the boundary of the test volume (zero Neumann conditions).

Solving (14) for any but the simplest geometries requires a numerical approximation, typically based on methods such as boundary elements or finite elements. For our simulation, we use the standard finite-element formulation with piecewise linear basis functions.

For the domain Ω representing our control volume, let *T*(Ω) be the tetrahedralization of the volume, as described in Section III-C. Also, let the set 𝒩 contain the indexes of the vertices of *T*(Ω), which are the nodes of our finite-element mesh. We then decompose 𝒩 into two nonintersecting sets, ℬ and ℐ, representing nodal indexes that lie on the boundary and nodal indexes of the interior, respectively. Let us furthermore decompose ℬ into ℬ_{q}, the outer boundary of the test volume, and ℬ_{c}, the lumen boundary. To satisfy our requirements from Section III-D, we impose zero Neumann boundary conditions at ℬ_{q} and zero Dirichlet boundary conditions at ℬ_{c}.

Let *ϕ _{i}*(** x**) denote the piecewise linear basis function associated with node *i* ∈ 𝒩. We then approximate the solution as

$$u(\mathit{x})={\displaystyle \sum _{k\in \mathcal{N}}{\widehat{u}}_{k}{\varphi}_{k}(\mathit{x})}$$

(15)

$$={\displaystyle \sum _{k\in \mathcal{U}}{\widehat{u}}_{k}{\varphi}_{k}(\mathit{x})+{\displaystyle \sum _{k\in {\mathcal{B}}_{c}}{\widehat{u}}_{k}{\varphi}_{k}(\mathit{x})}}$$

(16)

where 𝒰 = ℐ ∪ ℬ_{q} indexes the unknown values of the problem. The first term of (16) then denotes the sum over the degrees of freedom (the values at the unknown vertices weighted by the basis functions) and the second term denotes the same sum for the (known) Dirichlet boundary values of the solution. Substituting the expansion (16) into the differential equation (14), multiplying by each function *ϕ _{j}*(** x**), *j* ∈ 𝒰, from the test space, and integrating over the domain (applying integration by parts to the diffusion term) yields the semidiscrete system

$$\mathbf{M}\frac{\partial \overrightarrow{u}}{\partial t}+\mathbf{K}\overrightarrow{u}=\overrightarrow{f}$$

(17)

where ${\overrightarrow{u}}$ denotes a vector containing the solution of the system (i.e., concentration values at the nodal positions in 𝒰) and **M**, **K**, and ${\overrightarrow{f}}$ denote the mass matrix, stiffness matrix, and right-hand-side vector, respectively, given by the following expressions:

$$\mathbf{M}={M}_{\mathit{\text{jk}}}=({\varphi}_{j},{\varphi}_{k})$$

(18)

$$\mathbf{K}={K}_{\mathit{\text{jk}}}=(\mathbf{\nabla}{\varphi}_{j},D\mathbf{\nabla}{\varphi}_{k}).$$

(19)

In the previous expressions, *j*, *k* ∈ 𝒰, and (·, ·) denotes the inner product taken over the entire spatial domain Ω. In our case, all *û _{k}* for *k* ∈ ℬ_{c} are zero (zero Dirichlet conditions), so the Dirichlet contribution to the right-hand side vanishes and ${\overrightarrow{f}}={\overrightarrow{0}}$.
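As an illustration of how (18) and (19) are realized in practice, the following NumPy sketch (our own; names are hypothetical) computes the local mass and stiffness matrices of a single linear tetrahedral element. A full assembly loop would scatter these 4×4 blocks into the global sparse **M** and **K** at the rows and columns given by the element's node indexes:

```python
import numpy as np

def local_matrices(verts, D):
    """Element mass/stiffness matrices for a linear tetrahedron.

    verts: (4, 3) vertex coordinates; D: the (isotropic, per-material)
    diffusion coefficient for the element's material.
    """
    a = verts[0]
    J = (verts[1:] - a).T                # Jacobian of the reference map
    vol = abs(np.linalg.det(J)) / 6.0    # element volume
    # Gradients of the four barycentric (hat) basis functions on the
    # reference element; they are constant over the element.
    grads_ref = np.array([[-1., -1., -1.],
                          [1., 0., 0.],
                          [0., 1., 0.],
                          [0., 0., 1.]])
    grads = grads_ref @ np.linalg.inv(J)            # physical gradients
    K_e = D * vol * grads @ grads.T                 # (19), elementwise
    M_e = vol / 20.0 * (np.ones((4, 4)) + np.eye(4))  # (18), exact for linear basis
    return M_e, K_e
```

Two sanity checks follow from the definitions: the entries of M_e sum to the element volume (the basis functions sum to one), and each row of K_e sums to zero (the gradients sum to zero).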

The time-dependent equation (17) can be discretized using the Crank–Nicolson scheme. Arranging the unknown (future time step) values on the left-hand side yields the time stepping equation

$$\left(\mathbf{M}+\frac{\mathrm{\Delta}t}{2}\mathbf{K}\right)\phantom{\rule{thinmathspace}{0ex}}{\overrightarrow{u}}^{n+1}=\left(\mathbf{M}-\frac{\mathrm{\Delta}t}{2}\mathbf{K}\right)\phantom{\rule{thinmathspace}{0ex}}{\overrightarrow{u}}^{n}$$

(20)

where Δ*t* is our time step and ${\overrightarrow{u}}^{n}$ represents our solution at time step *n*.
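A minimal sketch of the resulting time loop, assuming SciPy sparse matrices with the Dirichlet nodes already eliminated (an illustration, not the paper's code): because **M** and **K** are constant, the left-hand operator is factored once and reused at every step.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def crank_nicolson(M, K, u0, dt, n_steps):
    """Advance M du/dt + K u = 0 with the Crank-Nicolson scheme.

    M, K: sparse mass and stiffness matrices over the unknown nodes;
    u0: initial nodal concentrations; dt: time step.
    """
    solve = spla.factorized((M + dt / 2.0 * K).tocsc())  # factor once
    rhs_op = (M - dt / 2.0 * K).tocsc()
    u = u0.copy()
    for _ in range(n_steps):
        u = solve(rhs_op @ u)   # one implicit step
    return u
```

In the scalar test case M = K = 1, the scheme reproduces the exact decay e^{−t} to second-order accuracy in Δt.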

The data presented in this section are computed from simulations run on discretizations of the domain for the depot shapes obtained by sampling our shape parameterizations at the nine Smolyak collocation points specified for third-order convergence.

Note that while these results give accurate characterization of the variability based on our modeling and simulation assumptions, validation based on *in vivo* experimental studies has not yet been performed.

We will describe and present data for three methods of obtaining statistics based on these simulation results. Section IV-A deals with obtaining statistics over an area for which the location and number of nodes of the solution *u*(** x**) differ for different shape parameters. Section IV-B shows the mean and standard deviation of total drug delivery to a region of interest over time, as well as an example analysis of higher order statistics on this data. Section IV-C shows the simpler case of computing statistics over a static region of the discretized domain.

Since a different finite-element mesh is generated for each depot shape (as described in Section III-C), the solution vectors ** u** from different discretizations do not represent weights for the same basis functions, or even the same number of basis functions. However, the solutions *u*(** x**) are continuous fields defined over the same spatial domain Ω, so pointwise statistics can be computed by evaluating each solution at a common set of sample points—locating the containing element of each point and interpolating with the linear basis functions—and applying the collocation quadrature at each point.
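The per-element step of that resampling is a point-in-tetrahedron test plus barycentric interpolation. A sketch (hypothetical helper; a full implementation would pair it with a spatial search structure over the mesh):

```python
import numpy as np

def interp_in_tet(verts, nodal_vals, x):
    """Evaluate a linear FE field at point x within one tetrahedron.

    verts: (4, 3) vertex coordinates; nodal_vals: (4,) nodal values.
    Returns the barycentric-weighted value, or None if x lies outside
    the element (so the search can move on to the next candidate).
    """
    T = np.column_stack([verts[1] - verts[0],
                         verts[2] - verts[0],
                         verts[3] - verts[0]])
    lam = np.linalg.solve(T, x - verts[0])
    bary = np.concatenate([[1.0 - lam.sum()], lam])
    if np.any(bary < -1e-12):   # outside (allow small numerical slack)
        return None
    return bary @ nodal_vals
```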

Since we are interested in the amount of drug delivered to the venous anastomosis over time, we also compute the mean and standard deviation of the total drug in the vein wall at each time point *t* ∈ 𝒯. Each element of our tetrahedralization *e* ∈ *T*(Ω) represents one of the materials in our domain. In particular, we define ℰ_{υ}, the set of elements representing the tissue of the vein wall (see Fig. 6). From this subset of elements, the total amount of drug in the vein wall *V _{j}*(*t*) is computed for each collocation point *j* by integrating the concentration field over ℰ_{υ}. The mean and variance over the shape distribution then follow from the collocation quadrature weights *w _{j}*

$$\text{mean}(V(t))={\displaystyle \sum _{j=1}^{q}{w}_{j}{V}_{j}(t)}$$

(21)

$$\text{var}(V(t))={\displaystyle \sum _{j=1}^{q}{w}_{j}\phantom{\rule{thinmathspace}{0ex}}{({V}_{j}(t)-\text{mean}\phantom{\rule{thinmathspace}{0ex}}(V(t)))}^{2}}$$

(22)

As previously noted, higher order moments can be computed in a manner similar to (22).
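Equations (21) and (22), together with the analogous higher moments, amount to weighted sums over the collocation samples. A sketch (our own, assuming the quadrature weights sum to one):

```python
import numpy as np

def collocation_stats(V, w):
    """Moments of a quantity of interest from collocation samples.

    V: (q,) values V_j(t) at the q collocation points;
    w: (q,) quadrature weights, assumed to sum to one.
    Returns mean, variance, skewness, and kurtosis, as in (21)-(22)
    and the normalized higher moments.
    """
    mean = w @ V                          # (21)
    var = w @ (V - mean) ** 2             # (22)
    std = np.sqrt(var)
    skew = w @ ((V - mean) / std) ** 3    # normalized third moment
    kurt = w @ ((V - mean) / std) ** 4    # normalized fourth moment
    return mean, var, skew, kurt
```

A symmetric two-point check: with equal weights on the values {0, 2}, the mean and variance are both 1 and the skewness vanishes.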

Total drug deposited in vein wall [opaque geometry in (a)] over time. (b) MCF parameterization. (c) Morph parameterization. Results for the nine individual collocation points (depot shapes) are shown in blue to green, with the mean and variance shown.

Fig. 6(b) and (c) shows the amount of drug over time for individual shape parameter simulations as well as the mean and standard deviation as calculated earlier for our two shape parameterizations.

To look more specifically at the amount of drug reaching our target site, we define a spherical region of interest surrounding the venous anastomosis [see Fig. 7(a)]. We then define the set of elements ℰ_{s} as those falling completely within this sphere [see Fig. 7(b)]. Of interest to us are the elements ℰ_{sυ} = ℰ_{s} ∩ ℰ_{υ}, the venous tissue within the region of interest; Fig. 7(c) and (d) shows the total drug delivered to these elements over time for our two parameterizations.

(a) and (b) Region of interest around venous anastomosis. Total drug deposited in the vein wall over time. (c) MCF parameterization. (d) Morph parameterization. Results for the nine individual collocation points (depot shapes) are shown in blue to green.

As noted in Section II, the resulting drug concentration distribution due to our shape parameterization is not completely specified by a mean and variance. Further statistics can be calculated, as shown in Fig. 8. This shows the skewness and kurtosis of the distributions of total drug delivered to the venous tissue at our sampled time points under the MCF shape parameterization. The skewness and kurtosis are calculated from the normalized third and fourth moments of the distribution as given in [23]. The pdf (whose domain is normalized to [0, 1]) of the distribution at day 10 is also shown. The means and variances for this simulation are shown in Fig. 6. Comparing these figures, we see that the skewness and kurtosis correspond to the increased “lopsidedness” of the distributions from the mean, as we would expect. The pdf is also as we would expect from the given time point’s distribution, which has several samples grouped at the lower end of the concentration range—note that the mean is near the lowest concentration at the time point.

While, in general, the discretization *T*(Ω) is different for different parameter values of ξ, using the discretization method described in Section III-C, the elements comprising the static geometry are always the same. In particular, the elements ℰ_{υ} comprising the tissue we are interested in are the same for any value of ξ. Since the concentration field within a linear tetrahedral finite element is given by a simple linear weighting of the values at the element’s nodes, the mean and standard deviation fields within ℰ_{υ} can be completely described by computing the means and standard deviations at the nodes of ℰ_{υ} and using these as the weights of the linear basis functions. Since sampling, as described in Section IV-A, can be computationally expensive at high sampling resolutions or fine discretizations of the test volume due to the search for the containing cell of each sample point, this method can improve performance when applicable. Fig. 9 shows the mean and standard deviation on the surface of ℰ_{sυ} as computed by this method.

The results of Section IV explore both the variability introduced by changes in depot shape and the effect of a specific parameterization of gel shape on this variability. From the results of Section IV-B, we see that the total amount of drug delivered to the venous tissue is quite similar for both shape parameterizations. We also see that the variance due to shape variation within each parameterization is fairly small, and that the amount of drug delivered decreases for “smoother” depot shapes (see Fig. 6). This agrees with our notion that increased smoothness corresponds to decreased surface area, which should lead to lower total diffusion rates. Fig. 10 shows how depot surface area changes over our two parameterizations. While the surface area decreases for both parameterizations, the MCF parameterization has a smaller final surface area. From this, we would expect a lower diffusion rate; however, as we can see in Fig. 6, the curves for total delivery over time are nearly identical for the later collocation points of the two parameterizations. This may be due to changes in the initial location and distribution of drug relative to the tissue in which we are interested. This effect is more pronounced if we restrict our interest to a more specific target site. Fig. 7 shows that if we look at only drug delivered to the venous tissue near the anastomosis, the two parameterizations are not equivalent. The MCF parameterization now has higher peak delivery as well as higher variance compared to the morph parameterization. A side effect of the “smoothing” caused by MCF is the reduction of the convex hull of the depot, concentrating more drug near the center of mass. We believe that this explains the increased delivery seen here, as more drug is located near the target site. As can be seen in Fig. 4, the depot retains a larger positional “spread” during our metamorphosis parameterization, and therefore, this effect is not apparent there. 
The results from Section IV-A show that the variance at any given point is closely related to the local mean concentration. This agrees with the expectation that a smooth process such as diffusion, driven by a smooth variation such as our shape parameterization, should itself vary smoothly. These results are also quite helpful in determining how drug concentration varies across the target site and how the “front” of diffusing drug moves through the test volume.

While the parameterization of properties such as shape can be cumbersome and still requires judgment in determining that the appropriate range of variation is expressed, we have demonstrated that through methodologies such as the stochastic collocation method, the problem of studying the impact of shape variability on a particular forward simulation is tractable. The basic requirements are that:

- the variation can be expressed by a parameterized model;
- the resulting stochastic field varies smoothly with changes to model parameters.

Once the underlying stochastic process is characterized, quantification of the introduced variability is quite straightforward and provides an important step in the V&V process, without which modeling (geometric, parameter, and mathematical) and simulation error bounds on the final solution must be viewed with some skepticism.

While our univariate shape parameterization is simple, in many cases, such a simple model will suffice. As noted earlier, development of this model will require judgment on the part of the modeler and affect the accuracy of the computed statistics. At the very least, however, a formal statement of confidence contingent on experimental realities respecting assumptions of the parameterization can be made.

It is worth noting that we are not limited to a univariate shape parameterization. If such a simple parameterization is not adequate, a multidimensional shape representation can be created. In this case, we will have a multidimensional random variable ξ and will use multidimensional sample points as described in [3]. The primary restriction on this method is that the entries of ξ must be independent. If this cannot be shown directly, the covariance matrix of ξ can be estimated in a similar manner to the statistics previously described, and independence obtained through a Gram–Schmidt-like procedure. Although requiring more sample points than the univariate case, this extension scales well to a limited number of dimensions and is quite straightforward, as the simulation process is identical regardless of the dimensionality of the underlying parameterization.
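One concrete realization of such a decorrelation step (our illustration, not the paper's implementation) is a whitening transform built from the Cholesky factor of the estimated covariance, which plays the role of the Gram–Schmidt-like procedure: it maps the parameter vector to coordinates with identity covariance.

```python
import numpy as np

def decorrelate(samples):
    """Map correlated parameter samples to uncorrelated coordinates.

    samples: (n, d) draws of the d-dimensional parameter vector.
    Subtracts the sample mean and applies the inverse Cholesky factor
    of the sample covariance, so the transformed variables have
    identity covariance (uncorrelated; independent under a Gaussian
    assumption).
    """
    mu = samples.mean(axis=0)
    C = np.cov(samples, rowvar=False)
    L = np.linalg.cholesky(C)                     # C = L L^T
    xi = np.linalg.solve(L, (samples - mu).T).T   # xi = L^{-1}(x - mu)
    return xi
```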

The authors would like to thank R. Whitaker for his implementation of level-set metamorphosis and acknowledge the computational support and resources provided by the Scientific Computing and Imaging Institute and the work of Dr. S.-E. Kim and Dr. E. Kholmovski of the Utah Center for Advanced Imaging Research for their help in the development of the pulse sequences for imaging the gel depot.

The work of A. K. Cheung was supported by the National Institutes of Health (NIH) under Grant 5R01HL067646. The work of R. M. Kirby was supported by the National Science Foundation (NSF) Career Award NSF-CCF0347791. The work of C. M. Terry was supported by the Dialysis Research Foundation Grant.

**J. Samuel Preston** received the B.S. degree in computer engineering from the University of Florida, Gainesville, in 2003. He is currently working toward the M.S. degree in computer science at the University of Utah, Salt Lake City.

From 2004 to 2007, he was a Research Associate at the Center for Advanced Engineering Environments, Old Dominion University. His current research interests include image analysis and scientific computing.

**Tolga Tasdizen** (S’98–M’00) received the B.S. degree in electrical engineering from Bogazici University, Istanbul, in 1995. He received the M.S. and Ph.D. degrees in engineering from Brown University, Providence, RI, in 1997 and 2001, respectively.

From 2001 to 2004, he was a Postdoctoral Research Scientist in the Scientific Computing and Imaging Institute, University of Utah. From 2004 to 2008, he was a Research Assistant Professor in the School of Computing at the University of Utah. Since 2008, he has been an Assistant Professor in the Electrical and Computer Engineering Department and the Scientific Computing and Imaging Institute at the University of Utah, Salt Lake City. He also holds adjunct faculty positions in the School of Computing and the Department of Neurology.

Prof. Tasdizen is a member of the IEEE Computer Society.

**Christi M. Terry** received a B.S. degree in biology from Mesa State College, Grand Junction, CO, in 1988, and received the Ph.D. degree in pharmacology and toxicology from the University of Utah, Salt Lake City, in 1997.

She was with the biotechnology industry for approximately three years. She did a one-year Postdoctoral Fellowship on iron homeostasis under the University of Utah Human Molecular Biology and Genetics Program. She completed a three-year fellowship as a National Research Service Award Fellow in 2004 at the University of Utah, where she was engaged in research on the effect of polymorphisms on the expression and activity of the clotting protein, tissue factor. She is currently a Research Assistant Professor in the School of Medicine, Division of Nephrology and Hypertension, and an Adjunct Assistant Professor in the Department of Pharmaceutics and Pharmaceutical Chemistry, both at the University of Utah. Her current research interests include *in vivo* imaging of hemodialysis arteriovenous graft hyperplasia, the role inflammatory cells play in stenosis development, and characterization of drug pharmacokinetics and dynamics in an *in vivo* model of arteriovenous graft stenosis.

**Alfred K. Cheung** received the Graduate degree from Albany Medical College, Albany, NY, in 1977, and completed the Nephrology Fellowship from the University of California at San Diego, San Diego, in 1983.

He is a Board-Certified Clinical Nephrologist and a Professor of Medicine at the University of Utah, Salt Lake City. He holds the Dialysis Research Foundation Endowed Chair for Excellence in Clinical and Translational Research in kidney dialysis. His current research interests include laboratory and clinical research efforts focusing on the medical problems of patients with chronic kidney disease and the technical aspects of hemodialysis treatments. Other interests include the development of local pharmacological strategies for the prevention of vascular access stenosis.

**Robert M. Kirby** (M’04) received the M.S. degree in applied mathematics, the M.S. degree in computer science, and the Ph.D. degree in applied mathematics from Brown University, Providence, RI, in 1999, 2001, and 2002, respectively.

He is currently an Associate Professor of Computer Science at the School of Computing, an Adjunct Associate Professor in the Departments of Bioengineering and of Mathematics, and a member of the Scientific Computing and Imaging Institute, all at the University of Utah, Salt Lake City. His current research interests include scientific computing and visualization.

J. Samuel Preston, Scientific Computing and Imaging Institute and the School of Computing, University of Utah, Salt Lake City, UT 84112 USA (Email: ude.hatu.ics@masj)

Tolga Tasdizen, Scientific Computing and Imaging Institute and the Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT 84112 USA (Email: ude.hatu.ics@aglot)

Christi M. Terry, Department of Medicine, University of Utah, Salt Lake City, UT 84112 USA (Email: ude.hatu.csh@yrret.itsirhc)

Alfred K. Cheung, Department of Medicine, University of Utah, Salt Lake City, UT 84112 USA, and also with the Medical Service, Veterans Affairs Salt Lake City Healthcare System, Salt Lake City, UT 84112 USA (Email: ude.hatu.csh@gnuehc.derfla)

Robert M. Kirby, Scientific Computing and Imaging Institute and the Department of Mathematics and Bioengineering, University of Utah, Salt Lake City, UT 84112 USA (Email: ude.hatu.sc@ybrik)

1. Robert CP, Casella G. Monte Carlo Statistical Methods. New York: Springer-Verlag; 2004.

2. Xiu D, Hesthaven JS. High-order collocation methods for differential equations with random inputs. SIAM J. Sci. Comput. 2005;vol. 27(no. 3):1118–1139.

3. Xiu D. Efficient collocational approach for parametric uncertainty analysis. Commun. Comput. Phys. 2007;vol. 2(no. 2):293–309.

4. Xiu D, Karniadakis GE. Modeling uncertainty in steady state diffusion problems via generalized polynomial chaos. Comput. Methods Appl. Mech. Eng. 2002;vol. 191(no. 43):4927–4948.

5. Xiu D, Karniadakis GE. Modeling uncertainty in flow simulations via generalized polynomial chaos. J. Comput. Phys. 2003;vol. 187(no. 1):137–167.

6. Narayanan VAB, Zabaras N. Stochastic inverse heat conduction using a spectral approach. Int. J. Numer. Methods Eng. 2004;vol. 60(no. 9):1569–1593.

7. Geneser SE, Kirby RM, MacLeod RS. Application of stochastic finite element methods to study the sensitivity of ECG forward modeling to organ conductivity. IEEE Trans. Biomed. Eng. 2008 Jan;vol. 55(no. 1):31–40. [PubMed]

8. Geneser SE, Stinstra JG, Kirby RM, MacLeod RS. Cardiac position sensitivity study in the electrocardiographic forward problem using stochastic collocation and BEM. to be submitted for publication. [PMC free article] [PubMed]

9. Ganapathysubramanian B, Zabaras N. Modeling diffusion in random heterogeneous media: Data-driven models, stochastic collocation and the variational multiscale method. J. Comput. Phys. 2007;vol. 226(no. 1):326–353.

10. Kuji T, Masaki T, Goteti K, Li L, Zhuplatov S, Terry CM, Zhu W, Leypoldt JK, Rathi R, Blumenthal DK, Kern SE, Cheung AK. Efficacy of local dipyridamole therapy in a porcine model of arteriovenous graft stenosis. Kidney Int. 2006;vol. 69(no. 12):2179–2185. [PubMed]

11. Christopherson RJ, Terry CM, Kirby RM, Cheung AK, Shiu Y-T. Finite element modeling of the continuum pharmacokinetics of perivascular drug delivery at the anastomoses of an arterio-venous graft. submitted for publication.

12. Trefethen LN. Spectral Methods in MATLAB. Philadelphia, PA: SIAM; 2000.

13. Wink O, Frangi AF, Verdonck B, Viergever MA, Niessen WJ. 3D MRA coronary axis determination using a minimum cost path approach. Magn. Reson. Med. 2002;vol. 47(no. 6):1169–1175. [PubMed]

14. Yushkevich PA, Piven J, Hazlett HC, Smith RG, Ho S, Gee JC, Gerig G. User-guided 3D active contour segmentation of anatomical structures: Significantly improved efficiency and reliability. Neuroimage. 2006;vol. 31(no. 3):1116–1128. [PubMed]

15. Terry CM, Li H, Kholmovski E, Kim S, Zhuplatov I. Serial magnetic resonance imaging (MRI) to monitor the precise location of a biodegradable thermal gelling copolymer for drug delivery after ultrasound-guided injection in a porcine model of arteriovenous (AV) hemodialysis graft stenosis. submitted for publication.

16. Sethian JA. Level Set Methods and Fast Marching Methods. Cambridge, U.K.: Cambridge Univ. Press; 1999.

17. Whitaker RT. A level-set approach to 3D reconstruction from range data. Int. J. Comput. Vis. 1998;vol. 29(no. 3):203–231.

18. Escher J, Simonett G. The volume preserving mean curvature flow near spheres. Proc. Amer. Math. Soc. 1998;126(9):2789–2796. [Online]. Available: http://www.jstor.org/stable/119205.

19. Breen DE, Whitaker RT. A level-set approach for the metamorphosis of solid models. IEEE Trans. Vis. Comput. Graphics. 2001 Apr;vol. 7(no. 2):173–192.

20. Boissonnat J-D, Oudot S. Provably good sampling and meshing of surfaces. Graph. Models. 2005 Sep;vol. 67(no. 5):405–451.

21. Porumbescu SD, Budge B, Feng L, Joy KI. SIGGRAPH ’05: ACM SIGGRAPH 2005 Papers. New York, NY, USA: ACM; 2005. Shell maps; pp. 626–633.

22. Si H, Gartner K. Proc. 14th Int. Meshing Roundtable. Berlin, Germany: Springer-Verlag; 2005. Meshing piecewise linear complexes by constrained Delaunay tetrahedralizations; pp. 147–164.

23. Snedecor GW, Cochran WG. Statistical Methods. Oxford, U.K.: Blackwell; 1989.
