
Med Image Anal. Author manuscript; available in PMC 2010 October 1.


Published online 2009 July 9. doi: 10.1016/j.media.2009.07.002

PMCID: PMC2749709

NIHMSID: NIHMS131598


Biomechanical models that simulate brain deformation are gaining attention as alternatives for brain shift compensation. One approach, known as the “forced-displacement method”, constrains the model to exactly match the measured data through boundary condition (BC) assignment. Although it improves model estimates and is computationally attractive, the method generates fictitious forces and may be ill-advised due to measurement uncertainty. Previously, we have shown that by assimilating intraoperatively acquired brain displacements in an inversion scheme, the Representer algorithm (REP) is able to maintain stress-free BCs and improve model estimates by 33% over those without data guidance in a controlled environment. However, REP is computationally efficient only when a few data points are used for model guidance because its costs scale linearly with the number of data points assimilated, thereby limiting its utility (and accuracy) in clinical settings. In this paper, we present a steepest gradient descent algorithm (SGD) whose computational complexity scales nearly invariantly with the number of measurements assimilated; it iteratively adjusts the forcing conditions to minimize the difference between measured and model-estimated displacements (the model-data misfit). Solutions of full linear systems of equations are achieved with a parallelized direct solver on a shared-memory, eight-processor Linux cluster. We summarize the error contributions from the entire process of model-updated image registration compensation, and we show that SGD is able to attain model estimates comparable to or better than those obtained with REP, capturing about 74% to 82% of tumor displacement, but with a computational effort that is significantly less (a 4-fold or greater reduction relative to REP) and nearly invariant to the amount of sparse data involved when the number of points assimilated is large.
Based on five patient cases, an average computational cost of approximately 2 minutes for estimating whole-brain deformation has been achieved with SGD using 100 sparse data points, suggesting the new algorithm is sufficiently fast with adequate accuracy for routine use in the operating room (OR).

Modern neuronavigational systems rely on preoperative magnetic resonance images (pMR) and fiducial-based registration schemes to establish a rigid-body transformation between anatomical structures in the operating room (OR) and their counterparts in pMR. During the course of open cranial neurosurgery, however, brain shift induced by gravitational effects and surgical interventions often exceeds 20 mm at the cortical surface after dural opening and can reach more than 3 mm at the tumor boundary (Nimsky et al., 2000). These magnitudes, combined with the complex, dynamic and seemingly unpredictable nature of brain motion (Hartkens et al., 2003; Nabavi et al., 2001; Sun et al., 2005a), pose a significant challenge to the accuracy of neuronavigational systems.

To compensate for the brain deformation, various intraoperative imaging techniques have been devised. Intraoperative stereovision (Skrinjar et al., 2002; Sun et al., 2005a,b) and laser range scanning (Miga et al., 2003) acquire images of the parenchymal surface at the craniotomy site, but not internal anatomical structures. The dominant schemes are intraoperative magnetic resonance (iMR; e.g., Black et al., 1997; Ferrant et al., 2002; Hall et al., 2000; Nimsky et al., 2001; Wirtz et al., 1997) and ultrasound (iUS; e.g., Bonsanto et al., 2001; Bucholz et al., 2001; Comeau et al., 2000; Unsgaard et al., 2005), because they supply full or partial volume sampling of the anatomical features of interest deeper in the brain. Intraoperative MR provides unparalleled delineation of parenchymal tissue and is able to scan the full volume of the head (e.g., Maurer et al., 1998; Nabavi et al., 2001; Nimsky et al., 2000). However, its capital cost and impact on neurosurgical workflow in the OR, as well as its relatively long image acquisition time, are potential barriers to wide acceptance. By comparison, iUS is easy to implement, low in cost and achieves real-time image acquisition relative to iMR. However, iUS image noise, artifacts and limited feature localization accuracy make anatomical structures more difficult to delineate. In addition, tumor motion may only be partially measured when two-dimensional (2D) iUS is applied because of discontinuous and limited imaging plane sampling. These respective characteristics of MR and US make their combination particularly attractive in image-guided neurosurgery, where anatomical features (e.g., tumor) deep in the brain captured in iUS (i.e., in the deformed state) can be compared with their counterparts in pMR (i.e., in the undeformed, original state) to generate local brain displacement maps to compensate for intraoperative motion.

When incomplete brain motion is measured, biomechanical models are effective alternatives to estimate whole brain deformation for brain shift compensation (e.g., Clatz et al., 2005; Skrinjar et al., 2002; Sun et al., 2005b; Lunn et al., 2005, 2006; Ferrant et al., 2001; Wittek et al., 2005). These models assimilate intraoperative displacement data measured from the cortical surface using stereo vision (Skrinjar et al., 2002; Sun et al. 2005b) or laser range scanning (Dumpuri et al., 2007), or from deep brain anatomical structures (e.g., tumor, ventricle) using iMR (e.g., Clatz et al., 2005; Ferrant et al., 2001; Hu et al., 2007; Wittek et al. 2005) or iUS (e.g., Lunn et al., 2005, 2006), and estimate brain deformation subject to loading and boundary conditions in the OR. One approach, known as the “forced-displacement method”, constrains the model to exactly match the measured data through boundary condition assignment (Clatz et al., 2005; Ferrant et al., 2001; Skrinjar et al., 2002; Sun et al., 2005b; Wittek et al., 2005). Although this method improves model displacement estimates and is computationally attractive, fictitious forces at otherwise stress-free locations are generated that may degrade its performance (Liu et al., 2006). Further, matching the measured data exactly may be ill-advised because of measurement errors that are always present (Carter et al., 2005; Lunn et al., 2005).

Alternatively, a least squares inversion scheme using an adjoint equations method (AEM) is able to maintain stress-free boundary conditions while fitting the model to the measured data approximately by adjusting the forcing conditions (Lynch, 2004; Lunn et al., 2005, 2006). Using the Representer (REP) algorithm, the model response subject to a unit misfit is computed, and the overall model deformation is then the weighted superposition of these individual responses determined through a least squares fitting scheme. As shown in Lunn et al. (2005), this algorithm solves the set of adjoint equations directly, at a cost of 6×*N*_{sprs}+1 full linear system matrix solutions in total for a 3D model, where *N*_{sprs} is the number of sparse data points assimilated.

However, the REP algorithm applied in these studies is computationally efficient only when the number of sparse data points is small (Lynch, 2004); e.g., *N*_{sprs} = 10 and 4 in the simulated and clinical 2D models, respectively, of Lunn et al. (2005).

Therefore, developing and implementing a computationally efficient, yet sufficiently accurate, algorithm for estimating whole-brain deformation is *critical* for application of the model-based scheme for brain shift compensation in the OR. The major steps involved in such a scheme typically include patient registration, intraoperative imaging to sample the exposed cortical surface and/or features deep in the brain (e.g., tumor), measurement of local brain displacements in the sampled regions, data assimilation using a biomechanical model, and visualization feedback based on the computed whole-brain deformation. The most computationally intensive steps are the measurement of local brain displacement and the data assimilation to estimate whole-brain deformation (model-updated image visualizations are fast and not a time-limiting step). Given that we have recently developed a completely automatic and efficient (1–2 min) image-based re-registration scheme that can be used to measure tumor displacement by comparing tumor boundaries captured in iUS and pMR (Ji et al., 2008a), the efficiency of the data assimilation to compute whole-brain deformation is of critical importance to the overall procedure for application in the OR (e.g., to keep the computational cost of the overall procedure within 5 min).

For this purpose, we have incorporated an algorithm using the steepest gradient descent (SGD) to dramatically reduce the computational burden relative to the REP algorithm without sacrificing accuracy in the model estimate. The SGD algorithm solves the identical set of adjoint equations by iteratively adjusting the forcing conditions according to the gradient of the objective function with respect to the forcing vector. In contrast to REP, the computational complexity of SGD scales nearly invariantly with the number of sparse data points, *N*_{sprs}, used for model guidance, making it computationally much more efficient than REP when *N*_{sprs} is large.

To demonstrate that the SGD algorithm is able to produce model estimates of comparable or improved accuracy relative to REP but with significantly less computational effort that is nearly invariant to the amount of data assimilated, we applied the technique first to a clinical brain model with simulated tumor displacement data and then to five (N=5) clinical datasets with measured intraoperative tumor displacement after dural opening, and compared the accuracy and computational efficiency with the results obtained from REP.

Biot’s consolidation theory (Biot, 1941) was employed to model the brain as a porous deformable medium to incorporate a pressure gradient from incompressible interstitial fluid as a hydrodynamic driving force. The detailed mathematical formulation has been developed elsewhere (Paulsen et al., 1999), and is summarized by the following coupled equations:

$$\nabla \cdot G\nabla \mathbf{u}+\nabla \frac{G}{1-2\nu}(\nabla \cdot \mathbf{u})-\alpha \nabla p=({\rho}_{t}-{\rho}_{f})\mathbf{g}$$

(1)

$$\alpha \frac{\partial}{\partial t}(\nabla \cdot \mathbf{u})+\frac{1}{S}\frac{\partial p}{\partial t}-\nabla \cdot k\nabla p=\mathrm{\Psi}$$

(2)

where **u** and *p* are the displacement and pore fluid pressure to be computed, respectively. Other material properties are summarized in Table 1. The values of the parameters used in this study were obtained from the literature (Miga, 1998), and were identical to those in Lunn et al. (2005). The utility of these model parameters has been demonstrated in a previous porcine brain retraction study (Platenik et al., 2002). Using the finite element discretization method, the above equations are reduced to a linear algebraic system expressed in matrix form (Paulsen et al., 1999)

$$\mathbf{K}\mathbf{x}=\mathbf{b}$$

(3)

where **K** is the stiffness matrix that embeds the discretized model equations, **x** is the displacement and pore fluid pressure vector (4 degrees-of-freedom (DOF) per node), and **b** includes the surface forces and displacement boundary conditions due to surgical intervention, interior forces due to the pressure gradient of the interstitial fluid and tissue buoyancy forces caused by gravitational effects. In addition, when brain-skull contact boundary conditions are incorporated, forces at the points of contact between the brain and skull surfaces are also included (Ji et al., 2007).

When incomplete displacement measurements are available, a generalized least squares inversion scheme is suitable for minimizing the difference (ε) between the model estimate (**x**) and measured data (**d**) through a minimization process constrained by the model equations (Eqn. 3; Lynch, 2004). This is achieved by using the well-known Lagrange multiplier method that embeds the model equations into the minimizing objective function. Specifically, a data error covariance model, a forcing condition correlation covariance model and a set of Lagrange multipliers (λ) are introduced to construct an augmented quadratic objective function, Ω (Lynch, 2004):

$$\mathrm{\Omega}={(\mathbf{d}-\mathbf{A}\mathbf{x})}^{T}{\mathbf{W}}_{\epsilon}(\mathbf{d}-\mathbf{A}\mathbf{x})+{\mathbf{b}}^{T}{\mathbf{W}}_{b}\mathbf{b}+{\lambda}^{T}(\mathbf{K}\mathbf{x}-\mathbf{b})$$

(4)

where **A** is the sampling matrix producing values at given locations of the data. The matrices **W**_{ε} and **W**_{b} (the inverses of the covariances of ε and **b**, respectively) describe the covariances of the misfit between any two data points and the forcing conditions between any two locations, and are controlled by the standard deviation (std) in the measurement error and the correlation length (*L*), respectively (Lunn et al., 2005; Lynch, 2004). Since the optimal covariance structures may be case-dependent, the influence of these values was explored in a parametric study using synthetic data to investigate their effect on model estimation accuracy (see Section 2.6). By embedding the model equations (Eqn. 3) directly into the objective function, we have converted the constrained optimization problem into an unconstrained one. Eqn. 4 includes the unknown control variables **b** and **x** that must be constrained through the model equations. The Lagrange multipliers, λ, are additional scalar variables that serve as the constraint weighting factors, adding to the list of unknowns. The value of each Lagrange multiplier denotes the resistance of the objective function minimum to the violation of the constraint as determined by the corresponding model equation. Using the first-order conditions to determine the minimum of the objective function, the derivatives of Ω with respect to **x**, **b** and λ are set to zero, resulting in the following set of adjoint equations after some simplification (Eqn. 8 rewrites the definition of the model-data misfit; Lunn et al., 2005, 2006; Lynch, 2004):

$$\frac{\partial \mathrm{\Omega}}{\partial \mathbf{x}}={\mathbf{K}}^{T}\lambda -2{\mathbf{A}}^{T}{\mathbf{W}}_{\epsilon}\epsilon =0$$

(5)

$$\frac{\partial \mathrm{\Omega}}{\partial \mathbf{b}}=2{\mathbf{W}}_{b}\mathbf{b}-\lambda =0$$

(6)

$$\frac{\partial \mathrm{\Omega}}{\partial \lambda}=\mathbf{K}\mathbf{x}-\mathbf{b}=0$$

(7)

$$\epsilon -(\mathbf{d}-\mathbf{A}\mathbf{x})=0$$

(8)
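For intuition, the stationarity conditions (Eqns. 5–8) can be assembled and solved directly as one block linear system on a small dense problem. The sizes, stand-in "stiffness" and sampling matrices, weights, and data below are illustrative assumptions, not the finite element quantities of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 10, 3                      # toy model DOFs / measurement count (illustrative)
K = np.eye(n) + 0.1 * rng.standard_normal((n, n))       # stand-in "stiffness" matrix
A = np.zeros((m, n)); A[np.arange(m), [1, 4, 7]] = 1.0  # sampling matrix
W_eps, W_b = np.eye(m), 0.1 * np.eye(n)                 # inverse covariances
d = rng.standard_normal(m)                              # "measured" data

# Stack Eqns. 5-7 (with eps = d - A x from Eqn. 8 substituted) as one
# linear system in the unknowns (x, b, lambda):
#   K x - b = 0                                    (Eqn. 7)
#   2 W_b b - lambda = 0                           (Eqn. 6)
#   2 A^T W_eps A x + K^T lambda = 2 A^T W_eps d   (Eqn. 5)
Z = np.zeros((n, n))
M = np.block([
    [K,                   -np.eye(n),  Z],
    [Z,                    2 * W_b,   -np.eye(n)],
    [2 * A.T @ W_eps @ A,  Z,          K.T],
])
rhs = np.concatenate([np.zeros(2 * n), 2 * A.T @ W_eps @ d])
x, b, lam = np.split(np.linalg.solve(M, rhs), 3)
eps = d - A @ x
```

Because **W**_{b} is positive definite and **K** is nonsingular, this block system has a unique solution, which is the minimizer of Ω; the iterative SGD scheme described next reaches the same point without forming the block system.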

Unlike REP, which solves these equations directly for each component of displacement misfit, the iterative method adjusts the forcing conditions to search for the minimum of the scalar function (Ω) in the multidimensional space, **b**. Suppose that the forcing vector at the *k*^{th} iteration is **b**_{k}, which is adjusted at the next iteration according to:

$${\mathbf{b}}_{k+1}={\mathbf{b}}_{k}+\mathrm{\Delta}\mathbf{b}$$

(9)

where the change in the forcing conditions, Δ**b**, is divided into its direction (∂**b**) and magnitude (α). Using the steepest gradient, the directional component is determined through:

$$\partial \mathbf{b}=-\partial \mathrm{\Omega}/\partial \mathbf{b}$$

(10)

which specifies a line direction in the *n*-dimensional space (*n* is the total number of model constraint equations in Eqn. 3, i.e., 4×*m*, where *m* is the number of nodes in the model) along which the minimum of the scalar function, Ω, is sought. The magnitude of the forcing vector adjustment, α, is computed next. First, the effect of a change in the forcing vector, ∂**b**, is defined with the following equations:

$$\partial \mathbf{x}=[{\mathbf{K}}^{-1}]\partial \mathbf{b},\phantom{\rule{thinmathspace}{0ex}}\text{and}\phantom{\rule{thinmathspace}{0ex}}\partial \epsilon =-[\mathbf{A}]\partial \mathbf{x}$$

(11)

which are linearly related through the magnitude, α:

$$\mathrm{\Delta}\mathbf{b}=\alpha \phantom{\rule{thinmathspace}{0ex}}\times \phantom{\rule{thinmathspace}{0ex}}\partial \mathbf{b}$$

(12)

$$\mathrm{\Delta}\mathbf{x}=[{\mathbf{K}}^{-1}]\mathrm{\Delta}\mathbf{b}=[{\mathbf{K}}^{-1}]\alpha \phantom{\rule{thinmathspace}{0ex}}\times \phantom{\rule{thinmathspace}{0ex}}\partial \mathbf{b}$$

(13)

$$\mathrm{\Delta}\epsilon =\alpha \phantom{\rule{thinmathspace}{0ex}}\times \phantom{\rule{thinmathspace}{0ex}}\partial \epsilon $$

(14)

Consequently, the value of the objective function, Ω, at the (*k*+1)^{st} iteration is given by:

$$\begin{array}{c}\hfill {\mathrm{\Omega}}_{k+1}={({\epsilon}_{k}+\mathrm{\Delta}\epsilon )}^{T}{\mathbf{W}}_{\epsilon}({\epsilon}_{k}+\mathrm{\Delta}\epsilon )+{({\mathbf{b}}_{k}+\mathrm{\Delta}\mathbf{b})}^{T}{\mathbf{W}}_{b}({\mathbf{b}}_{k}+\mathrm{\Delta}\mathbf{b})\hfill \\ \hfill ={\mathrm{\Omega}}_{k}+\mathrm{\Delta}{\epsilon}^{T}{\mathbf{W}}_{\epsilon}{\epsilon}_{k}+{{\epsilon}_{k}}^{T}{\mathbf{W}}_{\epsilon}\mathrm{\Delta}\epsilon +\mathrm{\Delta}{\epsilon}^{T}{\mathbf{W}}_{\epsilon}\mathrm{\Delta}\epsilon \hfill \\ \hfill +\mathrm{\Delta}{\mathbf{b}}^{T}{\mathbf{W}}_{b}{\mathbf{b}}_{k}+{{\mathbf{b}}_{k}}^{T}{\mathbf{W}}_{b}\mathrm{\Delta}\mathbf{b}+\mathrm{\Delta}{\mathbf{b}}^{T}{\mathbf{W}}_{b}\mathrm{\Delta}\mathbf{b}\hfill \end{array}$$

(15)

Assuming that [**W**_{b}] and [**W**_{ε}] are symmetric, the above expression is further simplified:

$$\begin{array}{ll}\mathrm{\Delta}\mathrm{\Omega}=\hfill & 2\alpha \cdot (\partial {\epsilon}^{T}{\mathbf{W}}_{\epsilon}{\epsilon}_{k})+{\alpha}^{2}\cdot (\partial {\epsilon}^{T}{\mathbf{W}}_{\epsilon}\partial \epsilon )\hfill \\ \hfill & +2\alpha \cdot (\partial {\mathbf{b}}^{T}{\mathbf{W}}_{b}{\mathbf{b}}_{k})+{\alpha}^{2}\cdot (\partial {\mathbf{b}}^{T}{\mathbf{W}}_{b}\partial \mathbf{b})\hfill \end{array}$$

(16)

The optimal scalar magnitude is then determined by setting ∂(ΔΩ)/∂α = 0, leading to:

$$\alpha =-\frac{{\{\partial \mathbf{b}\}}^{T}[{\mathbf{W}}_{b}]\{\mathbf{b}\}+{\{\partial \epsilon \}}^{T}[{\mathbf{W}}_{\epsilon}]\{\epsilon \}}{{\{\partial \mathbf{b}\}}^{T}[{\mathbf{W}}_{b}]\{\partial \mathbf{b}\}+{\{\partial \epsilon \}}^{T}[{\mathbf{W}}_{\epsilon}]\{\partial \epsilon \}}$$

(17)

which dictates the amount of forcing vector advancement along the line parallel to ∂**b**. The overall solution algorithm becomes:

1. Initialize the stiffness, covariance and sampling matrices ([**K**], [**W**_{ε}]^{−1}, [**W**_{b}]^{−1}, [**A**]).
2. Compute the best prior estimate of **b**; set to **0** if unavailable.
3. Solve the forward model for **x**: [**K**]**x** = **b**.
4. Sample the model-data misfit: ε = **d** − [**A**]**x**.
5. Solve the adjoint model for λ: [**K**]^{T}λ = 2[**A**]^{T}[**W**_{ε}]ε.
6. Evaluate the gradient of Ω with respect to **b**: **g** = 2[**W**_{b}]**b** − λ; STOP if ‖**g**‖ ≈ 0.
7. Set the direction of descent (steepest gradient descent): ∂**b** = −**g**.
8. Project in the resulting direction: (a) ∂**x** = [**K**]^{−1}∂**b**; (b) ∂ε = −[**A**]∂**x**.
9. Set the step size (Eqn. 17): $\alpha =-\frac{{\{\partial \mathbf{b}\}}^{T}[{\mathbf{W}}_{b}]\{\mathbf{b}\}+{\{\partial \epsilon \}}^{T}[{\mathbf{W}}_{\epsilon}]\{\epsilon \}}{{\{\partial \mathbf{b}\}}^{T}[{\mathbf{W}}_{b}]\{\partial \mathbf{b}\}+{\{\partial \epsilon \}}^{T}[{\mathbf{W}}_{\epsilon}]\{\partial \epsilon \}}$.
10. Assemble the increment: **b** = **b** + α × ∂**b**, and return to step 3.

The most computationally intensive steps (3, 5 and 8a) involve the solution of full linear systems of equations (3×*Iter* equations in total, where *Iter* is the number of iterations). The number of sparse data points used to construct the estimate of **b** only affects the computation of the model-data misfit (step 4) and the calculation of the effect of ∂**x** on the misfit (step 8b). Thus, the computational costs increase only marginally with *N*_{sprs} because the rank of the full linear systems of equations does not change. Therefore, the overall solution computation with SGD does not scale directly with *N*_{sprs}.

To further reduce the cost of the linear system solutions that dominate both the SGD and REP algorithms, parallelization was exploited. A direct solver, the PARDISO package (Schenk and Gärtner, 2004), was employed for this purpose; it explicitly factorizes the system matrix and uses forward elimination and backward substitution to complete the solution process (Gould et al., 2007), thereby eliminating the iterations required by otherwise iterative solvers (e.g., SPLIB; Bramley and Wang, 1997). All model computations in this work were executed with the PARDISO direct solver on a shared-memory eight-processor Linux cluster (8 GB of RAM in total; 2.33 GHz per processor), a computer platform to which we have ready access in the OR.
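The factor-once/solve-many idea behind a direct solver can be illustrated with SciPy's SuperLU interface, standing in here for PARDISO (which is not used in this sketch); the sparse matrix below is an arbitrary diagonally dominant stand-in for the stiffness matrix:

```python
import numpy as np
from scipy.sparse import eye, random as sprandom
from scipy.sparse.linalg import splu

# Arbitrary sparse, diagonally dominant stand-in for the stiffness matrix K.
n = 200
K = (eye(n) + 0.01 * sprandom(n, n, density=0.02, random_state=4)).tocsc()

lu = splu(K)                       # factorize K = LU once (the expensive step)
rng = np.random.default_rng(4)
for rhs in rng.standard_normal((3, n)):
    x = lu.solve(rhs)              # each solve is forward/back substitution only
    xt = lu.solve(rhs, trans='T')  # adjoint solves reuse the same factorization
```

This matters for SGD in particular: steps 3, 5 and 8a all reuse the same **K** (or its transpose), so a single factorization amortizes over every iteration.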

To demonstrate the applicability of the SGD algorithm for minimizing the misfit between the model-estimated and measured displacement data, five (N=5) patients undergoing open cranial tumor resection were recruited at the Mary Hitchcock Memorial Hospital, under approval by the Institutional Review Board at the Dartmouth-Hitchcock Medical Center. The only criterion for patient inclusion was the presence of well-defined tumor (or cyst) boundaries in both iUS and pMR, which allows for accurate feature segmentation to determine intraoperative displacement. Patient age, gender, and type and location of tumor (cyst) are summarized in Table 2. For each patient, preoperative T1-weighted axial MRIs (pMR; N=124; field of view: 240 mm; voxel size: 1.016 mm × 1.016 mm × 1.5 mm) were obtained from a 1.5 Tesla GE MR scanner with the use of a head coil. The brain was segmented through a two-step level set scheme (Wu et al., 2005) and meshed into tetrahedral elements (Sullivan et al., 1997), from which the triangular boundary elements were extracted. The average mesh contained 17.18±3.05 thousand nodes, 94.98±17.48 thousand volume elements and 8.58±0.59 thousand boundary elements. Depending on the image contrast, either the tumor (patients *1*, *3* and *5*) or cyst (patients *2* and *4*) in each pMR image was segmented semi-automatically using isointensity contours at a level determined by an expert (typical intensity values ranged from 4000 to 6000), and a triangulated surface was generated to represent the undeformed tumor (or cyst; same as below) configuration.

Patient *3* was randomly chosen to benchmark the performance of the SGD algorithm using a noise-free synthetic displacement field generated from an imposed forced-displacement solution. This was achieved by assigning the boundary nodes at the craniotomy site (N=68) to be free to move while those at the brainstem were stress free. All remaining boundary nodes were fixed in the direction normal to the skull, but allowed to freely move tangentially. Fluid pressure at the craniotomy nodes was fixed (zero pressure), while free at all remaining positions. A fluid plane passing through the “lowest” craniotomy node but perpendicular to gravity was defined. Elements below/above the fluid plane were saturated/unsaturated, and the surrounding fluid density was assigned accordingly (1000 *kg/m*^{3} and 1 *kg/m*^{3} for saturated and unsaturated regions, respectively; Ji et al., 2007; Lunn et al., 2005), which allowed the brain to deform under a gravitational load.

To simulate the measurements, displacements at the triangulated tumor surface (generated from pMR) boundary nodes were interpolated from the noise-free displacement field (average displacement of 0.75±0.17 mm for all tumor boundary nodes). These nodes and their displacements were used as a pooled sample (*N*_{tot}=2890), from which a subset was randomly selected for model guidance. The influence of the covariance in measurement error and of the correlation length on model performance was then studied parametrically using these synthetic data.

Finally, we investigated the influence of *N*_{sprs} on the accuracy of the model estimate. This was accomplished by using the optimal covariance structures and a fixed level of measurement noise (normally distributed with zero mean and a standard deviation of 0.5 mm in the magnitude of displacement error, and uniformly distributed in direction) but different numbers of sparse data points (*N*_{sprs}).
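The sampling-and-noise scheme just described can be sketched as follows; the pooled displacement field itself is a synthetic placeholder (only the sample sizes and noise model come from the text):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical pooled sample of tumor-boundary displacements (values illustrative).
N_tot, N_sprs = 2890, 100
u = 0.75 * rng.standard_normal((N_tot, 3)) / np.sqrt(3.0)   # "true" displacements (mm)

# Randomly draw the sparse subset used for model guidance ...
idx = rng.choice(N_tot, size=N_sprs, replace=False)

# ... and corrupt it: error magnitude ~ N(0, 0.5 mm), direction uniform on the sphere.
dirs = rng.standard_normal((N_sprs, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
u_meas = u[idx] + 0.5 * rng.standard_normal((N_sprs, 1)) * dirs
```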

Prior to surgery, the patient’s head was rigidly registered with its pMR through a group of MR-compatible fiducial markers (typically N=10–15) glued externally to the scalp. The markers were manually identified in the pMR image space through a dedicated interactive software tool that allows the user to move a cursor until the identified marker is visually symmetrical in all orthogonal cross-sectional images. The centers of the fiducials were subsequently digitized in the OR using a stylus tracked by a Polaris system (Northern Digital Inc., London, ON, Canada). Although a rigid-body transformation can be determined from two sets of paired homologous points through a closed-form solution (e.g., singular value decomposition (Arun et al., 1987)), it is not always practical or convenient to pair the points in the surgical setting. The closed-form approach is prone to errors and inefficiencies even for experienced operators: the precise order of fiducial identification matters, and some fiducials are often not visible to the tracking system because of the relative orientation of the patient’s head, making the number successfully identified in the OR (typically N=7–10) unpredictable and generally smaller than the number found in pMR. Keeping track of the order of fiducial identification in the OR therefore requires extra surgeon/operator interaction and communication at the time of the procedure. The additional task of documenting order, which was prone to error because of the lack of a natural or well-defined order for localizing fiducials on the patient's head coupled with the uncertainty of which fiducials would be visible in any given surgery, motivated us to develop an alternative patient registration method that does *not* require the same degree of manual record-keeping and verbal interaction with the surgeon.
Specifically, instead of manually pairing the points to be used for registration, we applied a genetic search (Hartov et al., 2007) to find the 3D locations in the OR that best matched those in pMR by computing the closed-form solution for rigid-body registration (through SVD) for each of the candidate pairings presented during the search and keeping the one that produced the minimum registration error. Because a mis-paired set of points can occasionally yield the minimum registration error (e.g., when the fiducials possess a particularly high degree of medial-lateral and/or anterior-posterior spatial symmetry), the resulting preoperative patient registration was always visually verified by comparing the actual head orientation in the OR with the relative orientation of the gravitational vector (obtained by placing the stylus at two vertical locations in the OR) transformed into the pMR image space with respect to the patient’s scalp surface (rendered from the pMR images).
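The closed-form SVD registration evaluated inside this search (Arun et al., 1987) can be sketched as follows; the point sets here are synthetic stand-ins for the fiducial centers:

```python
import numpy as np

def rigid_fit(P, Q):
    """Closed-form least-squares rigid transform (Arun et al., 1987):
    finds R, t such that Q ~= R @ P + t for paired 3xN point sets."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T                            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

# usage: recover a known pose from noiseless paired "fiducials"
rng = np.random.default_rng(1)
P = rng.standard_normal((3, 8))                # e.g., fiducial centers in pMR space
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([[1.0], [-2.0], [0.5]])
Q = R_true @ P + t_true                        # the same fiducials digitized in the OR
R, t = rigid_fit(P, Q)
```

With real (noisy) fiducials, the residual of this fit is the registration error that the genetic search minimizes over candidate pairings.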

For each patient, a series of 2D B-mode iUS images before and after dural opening (typically N=20–200; Siemens Sonoline Sienna ultrasound system with C8-5 transducer model; image size: 640×480; 8-bit grayscale) were digitized through a frame grabber (DT 3155, Data Translations Inc., Marlboro, MA; image acquisition time: 50–60 ms). The depth scale of the iUS images in general could vary from 30 mm to 160 mm, and was chosen depending on the optimal imaging window. All images used in this study had an iUS scan depth of 60 mm, resulting in a pixel size of 0.15 mm × 0.16 mm. The US transducer was rigidly attached to a tracker that was visible to the Polaris system through a group of infrared light emitting diodes, and the position and orientation of the transducer were continuously monitored. The US image coordinate system has previously been calibrated against the “world” coordinates determined by the Polaris (Hartov et al., 1999). This calibration, together with the rigid-body registration between the patient head in the OR and pMR allowed anatomical features in the iUS images to be spatially merged into the pMR image space.
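The chain of transformations implied above (US image coordinates → tracked transducer → Polaris "world" coordinates → pMR via the fiducial-based registration) amounts to composing homogeneous 4×4 matrices; the specific matrices below are arbitrary placeholders for the calibration and registration results:

```python
import numpy as np

def homogeneous(R, t):
    """Pack a 3x3 rotation R and length-3 translation t into a 4x4 matrix."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Placeholder transforms (in practice: scan-head calibration, tracker pose,
# and the fiducial-based patient registration, respectively).
c, s = np.cos(0.2), np.sin(0.2)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
T_img_to_tracker = homogeneous(np.eye(3), np.array([10.0, 0.0, 0.0]))
T_tracker_to_world = homogeneous(Rz, np.array([0.0, 5.0, 0.0]))
T_world_to_pmr = homogeneous(Rz.T, np.array([-2.0, 0.0, 1.0]))

# Compose once, then map any pixel (scaled to mm) into the pMR image space.
T_img_to_pmr = T_world_to_pmr @ T_tracker_to_world @ T_img_to_tracker
pixel_mm = np.array([0.15 * 320, 0.16 * 240, 0.0, 1.0])  # e.g., the image center
point_pmr = T_img_to_pmr @ pixel_mm
```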

However, tumor boundaries in iUS images before dural opening do not necessarily coincide with their pMR counterparts when the rigid fiducial-based registration is used alone, due to errors from the fiducial registration itself, ultrasound scan-head calibration and even brain shift at this early stage of surgery, thereby degrading the accuracy of local tumor alignment in the two image sets. The mean fiducial registration error (FRE; West et al., 1997) was 3.7 mm for the pooled patients (range of 2.6–5.1 mm). A similar magnitude of tumor boundary misalignment (average of 2.8 mm, range of 1.8–4.9 mm) was present in terms of closest point projection distance (CPPD; Ji et al., 2008a; similar to that used in Maurer et al., 1998). To account for these errors, a mutual-information-based re-registration scheme (Ji et al., 2008a) was employed that adjusted the transformation between iUS and pMR to minimize the misalignment of the tumor before dural opening, leading to an improved or a last-known-correct local tumor registration (Hartov et al., 2007). With the pooled patient cases, tumor boundary misalignment was reduced to an average of 1.0 mm (0.7–1.3 mm) in terms of CPPD. Following the re-registration scheme, manually segmented tumor boundaries in iUS images after dural opening were spatially merged with the undeformed pMR tumor surface. An iterative closest point procedure (ICP; Besl and McKay, 1992) was employed to determine a rigid-body transformation such that the overall distance from the transformed tumor boundary points in iUS relative to the pMR tumor surface was minimized. Tumor displacement after dural opening was essentially the difference between the tumor boundary points at this stage of surgery and their counterparts after applying the transformation. The single transformation generated locally rigid-body motion of the tumor (both translation and rotation), which was used to approximate the nonrigid motion in this region (Lunn et al., 2005).
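A minimal version of the rigid ICP alignment described here might look like the following; the point clouds, transform and convergence settings are illustrative assumptions, and the brute-force nearest-neighbor search would be replaced by a k-d tree for real boundary point sets:

```python
import numpy as np

def rigid_fit(P, Q):
    # Closed-form rigid fit (Arun et al., 1987) for paired 3xN point sets.
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((P - cp) @ (Q - cq).T)
    R = Vt.T @ np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)]) @ U.T
    return R, cq - R @ cp

def icp(src, dst, iters=50):
    """Minimal rigid ICP (Besl and McKay, 1992): alternate closest-point
    pairing with closed-form rigid fits until the transform stabilizes."""
    R, t = np.eye(3), np.zeros((3, 1))
    for _ in range(iters):
        moved = R @ src + t
        # pair each moved source point with its closest destination point
        d2 = ((moved.T[:, None, :] - dst.T[None, :, :]) ** 2).sum(axis=2)
        R_new, t_new = rigid_fit(src, dst[:, d2.argmin(axis=1)])
        if np.allclose(R_new, R) and np.allclose(t_new, t):
            break
        R, t = R_new, t_new
    return R, t

# usage: align a small "iUS tumor boundary" cloud to its "pMR" counterpart
rng = np.random.default_rng(6)
src = rng.standard_normal((3, 40))
c, s = np.cos(0.1), np.sin(0.1)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
dst = R_true @ src + np.array([[0.05], [-0.03], [0.02]])
R, t = icp(src, dst)
```

Each iteration's rigid fit cannot increase the closest-point cost, so the procedure converges monotonically, although (as with any ICP) only to a local minimum.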
To further minimize the adverse effect of segmentation errors, especially from iUS, only points in regions of well-defined tumor boundaries were included. In addition, cyst boundaries were used for patients *2* and *4* due to their high contrast. The average tumor misalignment as a result of ICP in terms of CPPD was 1.4 mm after dural opening for the pooled sample of 5 patient cases. The total number of data points with measured displacement for each patient (*N*_{tot}; i.e., the pooled sample) and the resulting initial model-data misfit before model correction were recorded for each case.

The ultimate goal of estimating brain deformation intraoperatively is to compensate for brain shift in the registered images displayed in the OR. Because multiple steps are involved in model-based image compensation, the accuracies of the individual components of the approach contribute to the overall fidelity of the system and need to be evaluated. The steps that influence the overall accuracy of the method include patient registration, calibration of intraoperative imaging techniques (e.g., US scan-head and stereovision), intraoperative data acquisition of features either deep in the brain (e.g., tumor) or on the exposed cortical surface, measurement of local brain displacements, and computational data assimilation and motion estimation. The accuracies of these components of our compensation scheme have been evaluated in detail in previously published studies, and are summarized here as a frame of reference for judging the computational results reported in Section 3.

The typical accuracy of skin-affixed fiducial-based patient registration has been well documented to be 2–5 mm (Helm and Eckel, 1998; Ammirati et al., 2002; Wolfsberger et al., 2002), and we have found similar results in our experience (Roberts et al., 1998; Roberts et al., 1999; Hartov et al., 2008; Ji et al., 2008a). US scan-head calibration error can be as high as 2–3 mm (Hartov et al., 1999). When US images are spatially transformed into the pMR image space, these errors combine and potentially compromise the accuracy of the inter-modality image registration. Controlled studies in the porcine brain (Lunn et al., 2003) showed that features in iUS could be transformed into coregistered preoperative scans with an accuracy of 1–2 mm. However, in clinical cases, we have found this degree of fidelity to degrade to 2–3 mm or more (Hartov et al., 2008). To improve the coregistration accuracy, we have recently developed an image-based re-registration technique that collectively minimizes errors due to fiducial-based registration as well as US scan-head calibration at the start of surgery. In six patient cases, we showed that this technique was able to minimize tumor misalignment in US and pMR to achieve an accuracy of approximately 1 mm with 1–2 minutes of additional computation (Ji et al., 2008a). The resulting "last-known-correct" registration at the start of surgery is critical for maintaining the accuracy of subsequent measurements of tumor displacements, the details of which have been described in Section 2.5 as well as in (Ji et al., 2009a).

Another important intraoperative imaging technique frequently employed by our group is stereovision, which captures the profile of the exposed cortical surface during surgery. The accuracy of the stereopsis system was found to be within 1 mm in a brain-shaped phantom (Sun et al., 2005b) and between 1–2 mm in patient cases where the stereopsis was compared to gold-standard validation measurements obtained with fiducial markers implanted in the bone comprising the rim of the craniotomy (Sun et al., 2005a). This intraoperative imaging scheme has enriched the amount of displacement data available, which can be used either to drive the model deformation or to verify the accuracy of the model estimates (Ji et al., 2009b).

The accuracy of the computational data assimilation has also been previously evaluated in animals in which stainless-steel beads were implanted in the brain (Lunn et al., 2006). In a simulated 10-mm retraction, the mean model-data misfit between the measured bead locations (N=21) in actual CT scans and those calculated from the model deformation was only 0.9 mm. The mean model-data misfit was 1.6 mm when optimal values of the correlation length, *L*, and the covariance in measurement error, σ, were used to simulate balloon inflation (maximum displacement at the balloon surface was 14 mm). In both experiments, the inverse data assimilation scheme was able to recover approximately 90% of the induced brain deformation, demonstrating the effectiveness of the process.

Although a separate phantom study replicating the full clinical procedure investigated here has not been performed, an overall system accuracy of 1–2 mm can be assumed based on the individual accuracies of the major steps involved (e.g., error in tumor displacements after dural opening of 1.4 mm in terms of CPPD; Section 2.5). Consequently, we expect the model-based brain shift compensation scheme, which estimates whole-brain deformation by assimilating intraoperative displacement data, to be well worth the effort: the accuracy and efficiency of the system components are 1–2 mm, whereas the uncompensated brain shift is typically greater than 5 mm and often reaches 10 mm or more in many surgeries. A detailed analysis of the composite system performance, in which the errors in its individual components are allowed to fully interact in order to determine the overall accuracy and computational efficiency of the approach, remains to be completed and is important work to be reported in the future.

For both the synthetic and clinical displacement data, the remainder of the pooled sample that was not selected for model guidance was defined as the cross-validation locations (*N _{cross} = N_{tot} − N_{sprs}*), whose simulated or “ground-truth” (clinical) displacements were used to evaluate the accuracy of model estimates generated by SGD and REP. The boundary conditions used in both inverse algorithms were identical to those in the forced-displacement method described in Section 2.4, in which a gravitational load induced the model deformation.
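The random partition of the pooled sample into guidance and cross-validation sets can be sketched as follows (a minimal illustration with hypothetical sample sizes; not the actual selection code used in the study):

```python
import numpy as np

def split_pooled(N_tot, N_sprs, seed=0):
    """Randomly partition the pooled sample indices into a sparse set used
    for model guidance and the remaining cross-validation set, so that
    N_cross = N_tot - N_sprs."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(N_tot)
    return perm[:N_sprs], perm[N_sprs:]     # sparse, cross-validation

# Hypothetical sample sizes (not a patient's actual N_tot)
sprs_idx, cross_idx = split_pooled(N_tot=500, N_sprs=200)
```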

The synthetic data were used to parametrically investigate the influence of the covariance in measurement error and the correlation length on model estimation accuracy, from which the optimal covariance structures were obtained. Using these optimal covariance structures, the influence of measurement noise on model performance was also investigated, which was achieved by varying the level of noise (i.e., standard deviation of the magnitude of displacement error) and comparing the resulting model-data misfits at sparse and cross-validation locations (defined as *ε*_{cross} for the latter). Finally, a fixed level of measurement noise was used to investigate the influence of the number of sparse data points on model performance by randomly selecting 10, 100, and 200 sparse data points from the pooled sample to guide the model. Statistical comparisons were performed using a one-tailed paired *t*-test, with a null hypothesis that the mean values of the two samples being compared were equal.
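The statistical comparison described above can be sketched with a hand-rolled one-tailed paired *t*-test (the misfit values below are hypothetical, chosen only to illustrate the computation):

```python
import numpy as np

def paired_t_one_tailed(a, b):
    """One-tailed paired t-test statistic for H0: mean(a) == mean(b)
    against H1: mean(a) > mean(b). Returns the t statistic and degrees
    of freedom; compare t against the one-tailed critical value."""
    d = np.asarray(a) - np.asarray(b)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Hypothetical misfit values (mm) for the same cases under two methods
misfit_a = np.array([1.2, 0.9, 1.5, 1.1, 1.3])
misfit_b = np.array([0.8, 0.7, 1.0, 0.9, 1.0])
t_stat, dof = paired_t_one_tailed(misfit_a, misfit_b)
# t_stat ≈ 5.49 with 4 dof exceeds the one-tailed 5% critical value (2.132),
# so the null hypothesis of equal means would be rejected in this example.
```

With SciPy available, `scipy.stats.ttest_rel(misfit_a, misfit_b, alternative='greater')` yields the same statistic together with a p-value.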

To determine *Iter _{min}* (the minimum iterations required for SGD to achieve a statistically smaller

REP and SGD solutions were computed on an Intel Xeon shared-memory (8 GB RAM), eight-processor (2.33 GHz per processor) Linux cluster running Ubuntu 6.10, and all data analysis was performed in Matlab (Matlab 7.3; The MathWorks, Natick, MA).

The mean model-data misfit obtained from SGD and REP by parametrically varying the covariance in measurement error (σ) and the correlation length (*L*; Fig. 1) demonstrates that model performance depends on σ across the tested values (similar to that reported in Lunn et al., 2006), regardless of which algorithm was applied. However, model performance depended to a lesser degree on *L*, except when σ was large (>2 mm). The optimal values of σ and *L* from the previous investigation (0.2 mm and 5 mm, respectively) appear to be equally applicable in this study, as the mean model-data misfit was approximately at its minimum for both algorithms at these values; this is reassuring since SGD is only expected to influence the overall computational efficiency.

Mean model-data misfit at the sparse data points as a function of the standard deviation in measurement error (σ) and correlation length (*L*). The sparse data used for model guidance were noise-free.

Using these optimal values of σ and *L* but varying the level of measurement noise (i.e., the standard deviation of the magnitude of the imposed displacement error, which followed a normal distribution), *ε*_{sprs}, the model-data misfit at the sparse data locations, increased with the noise level, as expected (Fig. 2a). In addition, SGD and REP generated similar *ε*_{sprs} across the range of tested noise levels, although *ε*_{cross}, the misfit at the cross-validation locations, generated from SGD was significantly smaller than that from REP regardless of the level of noise, indicating an improved tolerance to measurement noise with the SGD algorithm. This was also evident when *ε*_{sprs} and *ε*_{cross} from SGD were plotted against the number of SGD iterations and compared with those obtained from REP (Fig. 2b; noise level of 0.5 mm). Despite the relatively high level of noise (considering the average displacement around the tumor was only 0.75 mm), which led to large values of *ε*_{sprs} (0.41 mm), much smaller values of *ε*_{cross} were generated from SGD (0.05 mm) and REP (0.13 mm; Fig. 2b). Both *ε*_{sprs} and *ε*_{cross} from SGD decreased negligibly after *Iter*_{min} (38 in this case).

Using the optimal values of σ and *L* (0.2 mm and 5 mm, respectively), the model-data misfit as a function of the level of measurement noise (i.e., the standard deviation of the magnitude of displacement error, which followed a normal distribution; a). Model-data **...**

The difference in model-estimation accuracy was also apparent when the whole-brain deformations generated from SGD and REP were compared using the identical set of sparse data (with a standard deviation of 0.5 mm in the magnitude of measurement noise) and the optimal covariance structures for model guidance. The average difference between the ground truth and the model estimate was 0.10±0.06 mm for SGD and 0.32±0.12 mm for REP. The difference was smallest near the tumor boundary for both algorithms (0.04±0.03 mm and 0.13±0.08 mm for SGD and REP, respectively) and larger in regions farther away (0.15±0.07 mm and 0.53±0.23 mm, respectively), which is not surprising because only sparse data from the tumor boundary were generated and used for model guidance. Nevertheless, SGD achieved model-data misfit at the sparse data points similar to REP's but improved the overall model estimation accuracy, especially at points more distant from the guiding data, when measurement noise was present, indicating it is well suited to clinical applications.

Using different numbers of sparse data points to guide the brain model, *ε*_{sprs} showed a statistically significant decrease with increasing *N _{sprs}* for both algorithms, as expected (

*Iter _{min}* as well as the computational costs of SGD and REP increased with

With clinical data assimilated for model guidance, the residual *ε*_{sprs} and *ε*_{cross} for each *N _{sprs}* were normalized with respect to

The residual model-data misfit at the sparse data points (a; *ε*_{sprs}) and cross-validation locations (b; *ε*_{cross}), normalized with respect to *ε*_{init}, as a function of *N*_{sprs} for the pooled clinical datasets. Both normalized *ε*_{sprs} and *ε*_{cross} decreased with the increase in **...**

*Iter _{min}* for each dataset with each

Summary of computational cost for REP (*t*_{REP}) and SGD (*t*_{SGD}) in seconds, and *Iter*_{min} for SGD with different *N*_{sprs} for each individual patient. Also shown is the number of nodes (in thousands) of the corresponding brain mesh (the DOF is then four times **...**

Summary of the computational cost for REP (*t*_{REP}) and SGD (*t*_{SGD}) in seconds, and *Iter*_{min} for SGD with different *N*_{sprs} for the pooled patient cases using clinical data for model guidance. The SGD computational costs are also normalized by those produced **...**

The purpose of data assimilation is to generate a deformation field in which the model-updated location and shape of the tumor match the iUS. Patient *3* was used to illustrate the procedures involved in the model-based brain shift compensation employed in this study. To begin with the last-known-correct registration between iUS and pMR, the tumor in iUS before dural opening was rigidly displaced to match the pMR through maximization of the normalized mutual information. iUS images after dural opening were subsequently transformed into the pMR image space, and the misalignment between tumor boundaries in the two image volumes demonstrated the gross tumor displacement resulting from dural opening (Fig. 5a). The resulting displacement data were randomly selected for model guidance using SGD and REP (*N _{sprs}*=200). Based on the computed whole-brain deformation, the pMR image was modified (at a computational cost of approximately 6 sec) and overlaid with iUS images after dural opening (Fig. 5cd). Clearly, both SGD and REP generated comparable deformations near the tumor, as evidenced by the alignment of the model-updated tumor boundaries and their counterparts in iUS. However, model solutions from SGD and REP differed in regions farther away from the tumor, as indicated by the varying misalignment of the tentorium (Fig. 5cd). Prominent non-rigid tumor deformation was clearly demonstrated by the large discrepancy between the model-updated and pMR tumor cross-sections (solid and dashed lines in Fig. 5cd), the latter of which was

A typical iUS image after dural opening transformed into the pMR space (patient *3*) using the last-known-correct registration obtained between iUS before dural opening and pMR (a). Tumor displacement was achieved by first rigidly transforming pMR to match **...**

For all other patients, a typical iUS image after dural opening was overlaid against its corresponding pMR oblique slice (Fig. 6). The undeformed tumor (dashed thin lines in Fig. 6) is represented by the cross-section of the pMR tumor surface intersected by the iUS imaging plane. When the tumor was displaced according to the model estimated deformation field generated by SGD or REP with 200 sparse data points (*N _{sprs}*=200), the resulting tumor surface cross-section (solid/dashed thick lines as generated by SGD/REP) represented the model updated tumor location and shape. Both algorithms produced similar magnitudes of residual

Computational efficiency is important for effective deployment of biomechanical brain models for intraoperative brain shift compensation, especially for inverse methods in which the deformation is obtained through an estimation scheme that matches the model response with measurements at sparse data points rather than being computed directly. The computational inefficiency of the REP algorithm with large numbers of sparse data points (more than an hour with *N _{sprs}*=7; Lunn et al., 2006) prompted us to investigate and evaluate an alternative algorithm, SGD, to solve the identical set of adjoint equations and to increase the amount of sparse data used to improve model performance. Using a simulated dataset and five actual clinical cases, we have demonstrated that increasing the amount of sparse data used for model guidance significantly improves the accuracy of the model deformation estimates, and that SGD is able to achieve an estimate comparable to or better than that generated by REP but with far fewer computations, especially when the amount of sparse data is large. With sufficient sparse data (
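The essence of the SGD scheme, iteratively adjusting the forcing conditions to minimize the model-data misfit at the sparse data points, can be sketched on a toy dense linear system (an illustrative stand-in, not the paper's adjoint finite-element formulation; the stiffness matrix, sampling operator, and data below are synthetic):

```python
import numpy as np

def sgd_misfit(K, S, d, iters=200, tol=1e-12):
    """Steepest-descent adjustment of the forcing vector f to minimize the
    model-data misfit ||S @ u - d||^2, where u solves K @ u = f and S
    samples the model solution at the sparse measurement points.
    (Toy dense sketch; the paper's algorithm operates on the adjoint
    equations of the finite-element brain model.)"""
    f = np.zeros(K.shape[0])
    for _ in range(iters):
        u = np.linalg.solve(K, f)          # forward model solve
        r = S @ u - d                       # model-data misfit residual
        if np.linalg.norm(r) < tol:
            break
        g = np.linalg.solve(K.T, S.T @ r)   # adjoint solve -> gradient
        Sg = S @ np.linalg.solve(K, g)
        f -= (r @ Sg) / (Sg @ Sg) * g       # exact line-search step
    return f, np.linalg.solve(K, f)

rng = np.random.default_rng(1)
K = np.eye(8) + 0.1 * rng.standard_normal((8, 8))  # synthetic stiffness matrix
S = np.eye(3, 8)                                    # observe first 3 dofs
d = rng.standard_normal(3)                          # synthetic sparse "measurements"
f, u = sgd_misfit(K, S, d)
```

Each iteration costs a fixed number of linear solves regardless of how many measurement points are assimilated, which is the property that makes the cost nearly invariant with the amount of sparse data.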

Because the most time-consuming part of both inversion algorithms is the solution of systems of equations, which contributes over 90% of the total computational cost when *N _{sprs}* is large, it is advantageous to exploit parallelization. In this work, the parallelized direct solver of the PARDISO package (Schenk and Gartner, 2004) was employed for this purpose. When a sequential, uni-processor iterative sparse solver (e.g., SPLIB; Bramley and Wang, 1997) that does not take advantage of multiprocessor parallelization was used, the run times were prohibitively long (e.g., up to 3.6 hours for SGD and 18 hours for REP with

The relative merits of using methods that enforce displacements as a type of boundary condition or invoke inversion processes that approximate the measured displacements remain an open question. Forced-displacement methods are usually simple to code and are computationally efficient, while inversion schemes are more difficult to implement and involve more extensive computations. However, forced-displacement methods are sensitive to measurement noise because measurement errors, which are always present, are directly encoded into the computed whole-brain deformation. When ICP or other rigid-body registration methods (e.g., mutual-information-based rigid registration; Ji et al., 2008a; Ji et al., 2009a) are used to measure tumor displacement, non-rigid tumor deformation cannot be recovered through forced-displacement methods (because the rigidly-acquired displacements are directly imposed in the model solution), which potentially degrades the accuracy of the model-updated pMR images, especially when non-rigid deformation is prominent. In these cases, a non-rigid registration must be applied instead to generate more faithful tumor displacements in order to correctly assign the data constraints (e.g., see Fig. 5). Because the computational cost of non-rigid registration (Holden, 2008) can easily exceed that of the SGD inversion algorithm used in this study (the cost of the latter is similar to that of mutual-information-based *rigid* registration between pMR and iUS, i.e., 1–2 min; Ji et al., 2008a), the computational advantage of forced-displacement methods may be compromised. By contrast, inversion schemes are less prone to measurement errors, and tumor displacements generated from rigid (as opposed to non-rigid) registrations can be directly applied to *guide* the model in order to achieve non-rigid tumor deformation (see Fig. 5).
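The contrast drawn above can be made concrete: a forced-displacement method encodes each measured value directly into the solution through boundary-condition assignment, so measurement noise propagates unchecked. A toy sketch of that row-replacement style of Dirichlet assignment (synthetic matrix and values; not the paper's finite-element code):

```python
import numpy as np

def impose_displacements(K, f, nodes, values):
    """Forced-displacement-style boundary condition assignment: replace
    the equations of the measured nodes so the solution reproduces the
    measured values exactly, noise included (toy dense sketch)."""
    K, f = K.copy(), f.copy()
    for n, v in zip(nodes, values):
        K[n, :] = 0.0
        K[n, n] = 1.0
        f[n] = v          # the measured displacement, errors and all
    return np.linalg.solve(K, f)

# Synthetic 1D "stiffness" system with two measured nodes
K = 4 * np.eye(5) - np.eye(5, k=1) - np.eye(5, k=-1)
f = np.zeros(5)
u = impose_displacements(K, f, nodes=[0, 4], values=[1.0, -0.5])
```

An inversion scheme, by contrast, keeps the original equations and instead searches for forcing terms whose solution only approximates the measurements, leaving room to discount noise.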

Our inversion methods actively construct the right-hand-side vector and potentially introduce non-zero forcing terms for nodes that we have assigned stress-free boundary conditions (e.g., surface nodes exposed at the craniotomy or in the brainstem region). However, the magnitude and directionality of these terms are determined by the approximation scheme in order to minimize the model-data misfit at the sparse data points, and are, therefore, *a posteriori.* For example, the magnitudes of the force vectors estimated to exist on the surface nodes exposed at the craniotomy typically range from 10^{−5} to 10^{−3} Newtons after SGD convergence, whereas the range is 10^{−14} to 10^{−13} Newtons for nodes in the brainstem region where no or minimal displacements are present. Evidently, the values of the reconstructed right-hand-side forcing terms depend on the exact nature of the brain motion (e.g., sagging, bulging, or both). Forced-displacement methods essentially construct the right-hand-side terms corresponding to the nodes where displacement is measured through boundary condition assignment, and the nature of these terms is, therefore, *a priori.* Finally, it is important to note that the right-hand-side forcing vectors produced by the inversion process are expected to be realistic under the presumption that the underlying model equations and mechanical properties are realistic representations of brain tissue, because the computed displacement fields arise from the model system driven by the same inferred forcing conditions that matched the model response to the measurement data. We do not currently constrain nodes associated with the exposed parenchyma resulting from craniotomy to remain stress-free, but it would be relatively easy to introduce such a constraint during the data assimilation and displacement estimation process, an idea well worth exploring in the future.

The distinction between forced-displacement and inversion methods can be minimized when both are applied for brain shift compensation. Specifically, using displacements generated from a forced-displacement method as the initial starting point for SGD solution may reduce the number of SGD iterations at a cost of one additional forward model solution (both methods share essentially identical pre-processing, e.g., global stiffness matrix assembly), leading to further reduction in the overall computational cost in this case.

Our inversion approach does not rely on heavy pre-computations as in the case of the very promising technique reported in Dumpuri et al. 2007 (10 hours required to compute an atlas with 320 basis solutions with 64 different gravitational vectors and 5 levels of fluid saturation). Although pre-computations are often desirable (if they significantly reduce the computational burden in the OR as in Dumpuri et al., 2007), the cost of these computations grows exponentially when more variables (e.g., administration of mannitol, size of craniotomy) and denser variations are considered, which may ultimately become prohibitive. In addition, the Dumpuri et al. technique essentially interpolates motion of the brain from pre-computed basis solutions. It is important, therefore, that the intraoperative values of the variables (e.g., gravitational direction) lie within the ranges of the quantities considered in the construction of the basis solutions. When the pre-computed basis solutions incompletely represent the true motion of the brain, *extrapolation* (as opposed to interpolation) is otherwise needed to generate brain deformation, the accuracy of which remains to be explored.

The accuracy and efficiency of the overall procedure of brain shift compensation depends on the composite accuracy and efficiency of all of the steps involved (e.g., brain segmentation and mesh generation, measurement of local brain displacements at the tumor boundary and/or cortical surface, fidelity of the computational data assimilation as well as the brain-skull boundary conditions). We are currently integrating the overall model-based scheme for brain shift compensation in the OR, and we have planned a controlled study to validate the accuracy of the overall approach. Results from these investigations will be reported in the future.

We have introduced an iterative SGD algorithm to solve the set of adjoint equations for recovering whole-brain deformation and have compared the model performance and computational efficiency of SGD relative to the REP algorithm. With five clinical datasets, we show that the average initial model-data misfit of 5.2 mm was reduced to 1.1 mm and 1.4 mm by applying SGD and REP, respectively, suggesting that SGD is able to recover whole-brain deformation with accuracy comparable to or better than that determined with REP. Importantly, the computational cost of SGD is significantly less than that of REP (a 4-fold reduction) when the amount of sparse data is large. The average run time for the pooled patient cases was approximately 2 min with SGD when 100 sparse data points were assimilated. In addition, the accuracy of the model estimates increases as the amount of measurement data used to guide the model increases, and SGD has enabled the incorporation of hundreds of data points, which was not previously practical or possible with REP. Together with further code parallelization, local mesh refinement, and a practical stopping criterion for SGD, as well as the availability of volumetric 3D ultrasound that will generate more complete volumetric data on intraoperative tumor displacement, we expect SGD to play an important role in intraoperative brain shift compensation in the OR in the near future.

Funding from the National Institutes of Health grant R01 EB002082-11 is acknowledged. The authors are grateful to Dr. Andrea Borsic at Dartmouth College for helpful discussions on utilizing the PARDISO software package.


- Ammirati M, Gross JD, Ammirati G, Dugan S. Comparison of Registration Accuracy of Skin-and Bone-Implanted Fiducials for Frameless Stereotaxis of the Brain: A Prospective Study. Skull Base. 2002;129(3):125–130. [PMC free article] [PubMed]
- Arun KS, Huang TS, Blostein SD. Least squares fitting of two 3D point sets. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1987;9:698–700. [PubMed]
- Balay S, Buschelman K, Eijkhout V, Gropp WD, Kaushik D, Knepley MG, McInnes LC, Smith BF, Zhang H. 2001. Petsc webpage: http://www.mcs.anl.gov/petsc.
- Besl PJ, McKay ND. A method for registration of 3-d shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1992;14(2):239–256.
- Biot M. General theory of three-dimensional consolidation. J. Appl. Phys. 1941;12:155–164.
- Black PM, Moriarty T, Alexander E, III, et al. Development and implementation of intraoperative magnetic resonance imaging and its neurosurgical applications. Neurosurgery. 1997;41:831–845. [PubMed]
- Bonsanto MM, Staubert A, Wirtz CR, Tronnier V, Kunze S. Initial experience with an ultrasound-integrated single-rack neuronavigation system. Acta Neurochirurgica. 2001;143(11):1127–1132. [PubMed]
- Bramley R, Wang X. SPLIB: a library of iterative methods for sparse linear systems. Bloomington, IN: Department of Computer Science, Indiana University; 1997. ( http://www.cs.indiana.edu/ftp/bramley/splib.tar.gz)
- Bucholz RD, Smith KR, Laycock KA, McDurmont LL. Three-dimensional localization: From image-guided surgery to information-guided therapy. Methods (Duluth) 2001;25(2):186–200. [PubMed]
- Carter TJ, Sermesant M, Cash DM, Barratt DC, Tanner C, Hawkes DJ. Application of soft tissue modeling to image-guided surgery. Med. Eng. & Physics. 2005;27:893–909. [PubMed]
- Clatz O, Delingette H, Talos IF, Golby AJ, Kikinis R, Jolesz FA, Ayache N, Warfield SK. Robust nonrigid registration to capture brain shift from intraoperative MRI. IEEE Trans Med Imaging. 2005;24(11):1417–1427. [PMC free article] [PubMed]
- Comeau RM, Sadikot AF, Fenster A, Peters TM. Intraoperative ultrasound for guidance and tissue shift correction in image-guided neurosurgery. Medical Physics. 2000;27:787–800. [PubMed]
- Dumpuri P, Thompson RC, Dawant BM, Cao A, Miga MI. An atlas-based method to compensate for brain shift: Preliminary results. Medical Image Analysis. 2007;11:128–145. [PMC free article] [PubMed]
- Ferrant M, Nabavi A, Macq B, Jolesz F, et al. Registration of 3D intraoperative MR images of the brain using a finite element biomechanical model. IEEE Trans Medical Imaging. 2001;20:1384–1397. [PubMed]
- Ferrant M, Nabavi A, Macq B, Black PM, Jolesz FA, Kikinis R, Warfield SK. Serial registration of intraoperative MR images of the brain. Medical Image Analysis. 2002;6:337–359. [PubMed]
- Gould NIM, Scott JA, Hu Y. A numerical evaluation of sparse direct solvers for the solution of large sparse symmetric linear systems of equations. ACM Trans. Math. Softw. 2007;33(2) Article 10.
- Hall WA, Liu H, Martin AJ, Pozza CH, Maxwell RE, Truwit CL. Safety, efficacy, and functionality of high-field strength interventional magnetic resonance imaging for neurosurgery. Neurosurgery. 2000;46:632–642. [PubMed]
- Hajnal JV, Hill DLG, Hawkes DJ, editors. Medical Image Registration. Biomedical Engineering Series. M. N. Neuman Ed. CRC Press; 2001.
- Hartkens T, Hill DLG, Castellano-Smith AD, Hawkes DJ, Jr, Maurer CR, Martin MJ, Hall WA, Liu H, Truwit CL. Measurement and analysis of brain deformation during neurosurgery. IEEE Transactions on Medical Imaging. 2003;22(1):82–92. [PubMed]
- Hartov A, Roberts DW, Paulsen KD. A comparative analysis of co-registered ultrasound and MRI in neurosurgery. Neurosurgery. 2008;62(3 Suppl 1):91–101. [PubMed]
- Hartov A, Eisner SD, Roberts DW, Paulsen KD, Platenik LA, Miga MI. Error analysis for a free-hand three-dimensional ultrasound system for neuronavigation. Neurosurg Focus. 1999;6(3) Article 5.
- Helm PA, Eckel TS. Accuracy of Registration Methods in Frameless Stereotaxis. Computer Aided Surgery. 1998;3:51–56. [PubMed]
- Holden M. A Review of Geometric Transformations for Nonrigid Body Registration. IEEE Trans. Med. Imaging. 2008;27(1):111–128. [PubMed]
- Hu J, Jin X, Lee JB, Zhang L, Chaudhary V, Guthikonda M, Yang KH, King AI. Intraoperative brain shift prediction using a 3D inhomogeneous patient-specific finite element model. J. Neurosurg. 2007;106:164–169. [PubMed]
- Liu F, Paulsen K, Lunn K, Sun H, Hartov A, Wu Z, Roberts D. Comparative Study of Brain Deformation Estimation Methods. In: Cleary Kevin R, Galloway Robert L., Jr, editors. Medical Imaging 2006: Visualization, Image-Guided Procedures, and Display; Proceedings of SPIE Vol. 6141 (SPIE, Bellingham, WA 2006) 61411D: 2006.
- Lunn KE, Paulsen KD, Roberts DW, Kennedy FE, Hartov A, West JD. Displacement Estimation with Co-Registered Ultrasound for Image-Guided Neurosurgery: A Quantitative In Vivo Porcine Study. IEEE Trans. Med. Imaging. 2003;22(11):1358–1368. [PubMed]
- Lunn KE, Paulsen KD, Lynch DR, Roberts DW, Kennedy FE, Hartov A. Assimilating intraoperative data with brain shift modeling using the adjoint equations. Medical Image Analysis. 2005;9:281–293. [PubMed]
- Lunn KE, Paulsen KD, Liu F, Kennedy FE, Hartov A, Roberts DW. Data-guided brain deformation modeling: Evaluation of a 3-D adjoint inversion method in porcine studies. IEEE Transactions on Biomedical Engineering. 2006;53(10):1893–1900. [PubMed]
- Lynch D. Numerical Partial Differential Equations for Environmental Scientists and Engineers. Springer: Berlin; 2004.
- Ji S, Liu F, Hartov A, Roberts DW, Paulsen KD. Brain-Skull Boundary Conditions in a Computational Deformation Model; Medical Imaging 2007: Visualization, Display and Image-Guided Procedures; Proceedings of SPIE, Vol. 6509, 65092J. San Diego, USA: 2007.
- Ji S, Wu Z, Hartov A, Roberts DW, Paulsen KD. Mutual-Information-Based Image to Patient Re-Registration Using Intraoperative Ultrasound in Image-Guided Neurosurgery. Medical Physics. 2008a;35(10):4612–4624. [PubMed]
- Ji S, Hartov A, Fontaine K, Borsic A, Roberts D, Paulsen K. Coregistered Volumetric True 3D Ultrasonography in Image-Guided Neurosurgery; Medical Imaging 2008: Visualization, Display and Image-Guided Procedures; Proceedings of SPIE, Vol. 6918, 69180F. San Diego, USA: 2008b.
- Ji S, Roberts DW, Hartov A, Paulsen KD. Intraoperative ultrasound for brain shift compensation in image-guided neurosurgery. 2009a (under review)
- Ji S, Roberts DW, Hartov A, Paulsen KD. Brain-skull contact boundary conditions in an inverse computational deformation model. Medical Image Analysis. 2009b (in press) [PMC free article] [PubMed]
- Maurer CR, Jr, Hill DLG, Maciunas RJ, Barwise JA, Fitzpatrick JM, Wang MY. Measurement of intraoperative brain surface deformation under a craniotomy. MICCAI. 1998:51–62. LNCS 1496. [PubMed]
- Miga MI. Development and Quantification of a 3D Brain Deformation Model for Model-Updated Image-Guided Stereotactic Neurosurgery. Hanover, NH: Dartmouth College, Thayer School of Engineering; 1998.
- Miga MI, Paulsen KD, Kennedy FE, Hartov A, Roberts DW. Model-updated image-guided neurosurgery using the finite element method: incorporation of the falx cerebri. MICCAI. 1999:900–909. [PMC free article] [PubMed]
- Miga MI, Sinha TK, Cash DM, Galloway RL, Weil RJ. Cortical surface registration for image- guided neurosurgery using laser-range scanning. IEEE Trans. Med. Imag. 2003;22(8):973–985. [PMC free article] [PubMed]
- Nabavi A, Black PM, Gering DT, Westin CF, Mehta V, Pergolizzi RS, Jr, Ferrant M, Warfield SK, Hata N, Schwartz RB, Wells WM, III, Kikinis R, Jolesz FA. Serial intraoperative magnetic resonance imaging of brain shift. Neurosurgery. 2001;48(4):787–798. [PubMed]
- Nimsky C, Ganslandt O, Cerny S, Hastreiter P, Greiner G, Fahlbusch R. Quantification of, visualization of and compensation for brain shift using intraoperative magnetic resonance imaging. Neurosurgery. 2000;47(5):1070–1080. [PubMed]
- Nimsky C, Ganslandt O, Hastreiter P, Fahlbusch R. Intraoperative compensation for brain shift. Surgical Neurology. 2001;56:357–365. [PubMed]
- Paulsen KD, Miga MI, Kennedy FE, Hoopes PJ, Hartov A, Roberts DW. A computational model for tracking subsurface tissue deformation during stereotactic neurosurgery. IEEE Trans. Biomed. Eng. 1999;46:213–225. [PubMed]
- Platenik LA, Miga MI, Roberts DW, Lunn KE, Kennedy FE, Hartov A, Paulsen KD. In vivo quantification of retraction deformation modeling for updated image-guidance during neurosurgery. IEEE Transactions on Biomedical Engineering. 2002;49(8):823–835.
- Roberts DW, Hartov A, Kennedy FE, Miga MI, Paulsen KD. Intraoperative brain shift and deformation: A quantitative analysis of cortical displacement in 28 cases. Neurosurgery. 1998;43(4):749–760.
- Roberts DW, Miga MI, Kennedy FE, Hartov A, Paulsen KD. Intraoperatively updated neuroimaging using brain modeling and sparse data. Neurosurgery. 1999;45:1199–1207.
- Schenk O, Gärtner K. Solving unsymmetric sparse systems of linear equations with PARDISO. Future Generation Computer Systems. 2004;20(3):475–487.
- Skrinjar O, Nabavi A, Duncan J. Model-driven brain shift compensation. Medical Image Analysis. 2002;6:361–373.
- Sun H, Roberts DW, Farid H, Wu Z, Hartov A, Paulsen KD. Cortical surface tracking using an operating microscope. Neurosurgery. 2005a;56(1):86–97.
- Sun H, Lunn KE, Farid H, Wu Z, Roberts DW, Hartov A, Paulsen KD. Stereopsis-guided brain shift compensation. IEEE Transactions on Medical Imaging. 2005b;24(8):1039–1052.
- Sullivan JM, Jr, Charron G, Paulsen KD. A three dimensional mesh generator for arbitrary multiple material domains. Finite Elements in Analysis and Design. 1997;25(3):219–241.
- Unsgaard G, Selbekk T, Muller TB, Ommedal S, Torp SH, Myhr G, Bang J, Hernes TAN. Ability of navigated 3D ultrasound to delineate gliomas and metastases – comparison of image interpretation with histopathology. Acta Neurochir (Wien). 2005;147:1259–1269.
- West J, Fitzpatrick JM, Wang MY, Dawant BM, Maurer CR, et al. Comparison and evaluation of retrospective intermodality brain image registration techniques. J. of Computer Assisted Tomography. 1997;21(4):554–566.
- Wirtz CR, Bonsanto MM, Knauth M, Tronnier VM, Albert FK, Staubert A, Kunze S. Intraoperative magnetic resonance imaging to update interactive navigation in neurosurgery: Method and preliminary experience. Computer Aided Surgery. 1997;2:172–179.
- Wittek A, Kikinis R, Warfield SK, Miller K. Brain shift computation using a fully nonlinear biomechanical model. Med Image Comput Comput Assist Interv. 2005;8(Pt 2):583–590.
- Wolfsberger S, Rössler K, Regatschnig R, Ungersböck K. Anatomical landmarks for image registration in frameless stereotactic neuronavigation. Neurosurg Rev. 2002;25:68–72.
- Wu Z, Paulsen KD, Sullivan JM, Jr. Model initialization and deformation for automatic segmentation of T1-weighted brain MRI data. IEEE Transactions on Biomedical Engineering. 2005;52:1128–1131.