Fluorescence-guided resection (FGR) of brain tumors is an intuitive, practical and emerging technology for visually delineating neoplastic tissue exposed intraoperatively. Image guidance is the standard technique for producing 3-dimensional spatially coregistered information for surgical decision making. Both technologies together are synergistic: the former detects surface fluorescence as a biomarker of the current surgical margin while the latter shows coregistered volumetric neuroanatomy but can be degraded by intraoperative brain shift. We present the implementation of deformation modeling for brain shift compensation in protoporphyrin IX FGR, integrating these two sources of information for maximum surgical benefit.
Two patients underwent FGR coregistered with conventional image guidance. Histopathological analysis, intraoperative fluorescence and image space coordinates were recorded for biopsy specimens acquired during surgery. A biomechanical brain deformation model driven by intraoperative ultrasound data was used to generate updated MR images.
Combined use of fluorescence signatures and updated MR image information showed substantially improved accuracy compared to fluorescence or the original (i.e., nonupdated) MR images, detecting only true positives and true negatives, and no instances of false positives or false negatives.
Implementation of brain deformation modeling in FGR shows promise for increasing the accuracy of neurosurgical guidance in the delineation and resection of brain tumors.
Fluorescence-guided resection (FGR) for intraoperative, real-time delineation of tumors has been gaining wider acceptance within the neurosurgical community [1, 2, 3, 4, 5, 6, 7, 8, 9]. Prior to surgery, δ-aminolevulinic acid (ALA) is administered orally, leading to accumulation of protoporphyrin IX (PpIX) in neoplastic tissue. When excited with blue light, tumor tissue accumulating sufficient levels of PpIX displays a red fluorescence [2, 3, 4]. FGR provides the neurosurgeon with a surface guidance technology that is not susceptible to intraoperative brain shift, in contrast to conventional image guidance systems, where navigational accuracy is degraded [10, 11, 12, 13, 14]. However, not all neoplastic tissue fluoresces to levels observable intraoperatively, leaving some tumor tissue undetected [7, 9, 15, 16], and subsurface fluorescent tissue is not visible with current surgical microscope systems, creating the need for complementary 3-dimensional volumetric guidance in locating abnormal tissue.
To account for image-guided registration degradation secondary to brain shift, biomechanical models that compensate for brain deformation are currently under development [10, 11, 17]. In one form, intraoperative data drive a biomechanical model to estimate the 3-dimensional displacement field and subsequently deform the preoperative MR (pMR) images used for navigation to produce updated MR (uMR) images. The uMR provides the neurosurgeon with more accurate MR image correspondence with the current surgical field. A dual-modality, neurosurgical guidance system that incorporates both FGR and uMR images would offer the neurosurgeon a biomarker of the surgical margin as currently exposed along with coregistered volumetric neuroanatomy that is not degraded by intraoperative brain shift. In this study, we present the implementation of a biomechanical deformation model for brain shift compensation in the setting of PpIX FGR of brain tumors.
Two patients, one with a gliosarcoma (GS) and one with a glioblastoma multiforme (GBM), provided written informed consent for this investigational study and were enrolled in an institutional review board-approved study of coregistered fluorescence-enhanced tumor resection. pMR, T1-weighted images (matrix: 256 × 256, 1.5 mm slice thickness) with gadolinium enhancement (0.2 ml/kg gadolinium-diethylenetriamine pentaacetic acid) and scalp-based registration fiducials were acquired for both patients and used for image guidance. Three hours prior to induction of anesthesia, both patients received a 20 mg/kg body weight oral dose of ALA (DUSA Pharmaceuticals, Tarrytown, N.Y., USA) dissolved in 100 ml of water.
For FGR, a commercial image guidance system (Treon® StealthStation®; Medtronic, Louisville, Colo., USA) was coregistered with a surgical microscope offering fluorescence capabilities (Zeiss OPMI Pentero®; Carl Zeiss Surgical GmbH, Oberkochen, Germany). The surgical microscope excites with blue light and collects the filtered red fluorescence emission of PpIX. During the procedure, the surgeon intermittently switched from the white light to the blue light illumination mode (fig. 1). Eighteen biopsy specimens were collected (9 from patient 1: GS; 9 from patient 2: GBM) at the beginning, middle, and end of resection in regions within the bulk of the tumor and tumor margins, and were divided into 3 groups: one was placed in 10% buffered formalin, one was frozen in a cryogenic vial, and one was frozen in mounting medium (OCT). The navigation system was used to determine the image space coordinates of each specimen, which was subsequently matched to its corresponding white and blue light digital images.
Prior to the start of surgery, pMR images were rigidly registered to the patient's head in the physical space coordinate system through a fiducial-based registration. At the time of surgery and before craniotomy, the locations of the fiducial markers were digitized using a stylus probe whose location was identified by a 3-dimensional tracking system (Polaris System; Northern Digital Inc., London, Ont., Canada) coupled to a workstation dedicated to the deformation modeling. The registration process was used to match the fiducials’ physical space coordinates with the corresponding image space coordinates. The stylus probe was then placed at two vertical locations to estimate the direction of gravity that was used in the model computations.
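The fiducial-based rigid registration described above can be sketched as a least-squares point-matching problem. The following is a minimal illustration in Python/NumPy using the standard SVD (Kabsch) solution; it is not the commercial navigation system's implementation, and the function names are ours:

```python
import numpy as np

def rigid_register(fixed, moving):
    """Estimate the rotation R and translation t that map `moving` points
    onto `fixed` points in the least-squares sense (SVD/Kabsch solution).
    Both inputs are (N, 3) arrays of corresponding fiducial coordinates."""
    fixed = np.asarray(fixed, dtype=float)
    moving = np.asarray(moving, dtype=float)
    cf, cm = fixed.mean(axis=0), moving.mean(axis=0)   # centroids
    H = (moving - cm).T @ (fixed - cf)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cf - R @ cm
    return R, t

def fre(fixed, moving, R, t):
    """Fiducial registration error: RMS distance after applying (R, t)."""
    mapped = (R @ np.asarray(moving, dtype=float).T).T + t
    return float(np.sqrt(np.mean(np.sum((mapped - np.asarray(fixed)) ** 2, axis=1))))
```

In practice, the residual fiducial registration error computed this way is what navigation systems report as registration accuracy before craniotomy.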
After craniotomy, a Philips (Philips Medical Systems, Bothell, Wash., USA) iU22 3-dimensional ultrasound system (US) was used to collect predurotomy and postdurotomy images. The US transducer was tracked by the Polaris System continuously through a rigidly attached infrared light-emitting tracker to allow for the 3-dimensional US images to be registered to the pMR stack [18, 19].
Formalin-fixed paraffin-embedded tissue was processed for hematoxylin and eosin staining and analyzed by a single neuropathologist (B.T.H.). The neuropathologist was blinded to the pathological diagnoses from the main surgical specimen and from the intraoperative fluorescence or imaging data. Each biopsy tissue section was analyzed for the presence or absence of tumor cells and was evaluated based on the WHO classification schema.
The biomechanical brain deformation model implemented simulates the amount and direction of brain shift and deformation (i.e., displacement) that occurs intraoperatively, by incorporating standard surgical conditions (e.g. direction of gravity) and intraoperative data. Predurotomy and immediate postdurotomy US images were used to measure the displacement that occurs in a subvolume of the brain at the beginning of surgery. They provided sufficient patient-specific intraoperative data, in combination with standard surgical conditions, to drive the biomechanical brain deformation model to estimate displacement for the whole-brain volume which was used to deform the pMR images and generate uMR images. Additional details on the brain deformation model are provided in the Appendix.
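The final image-updating step, resampling the pMR volume through a dense whole-brain displacement field to produce uMR images, can be illustrated as below. This is a simplified sketch with trilinear interpolation; the study's actual resampling pipeline is not specified at this level of detail, and the function name is ours:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(pmr, disp):
    """Resample a volume through a dense displacement field.
    pmr:  (Z, Y, X) image array.
    disp: (3, Z, Y, X) displacement in voxel units, giving for each output
          voxel the offset to its source location in pmr, i.e. a pull-back
          interpolation uMR(x) = pMR(x + disp(x))."""
    grid = np.indices(pmr.shape).astype(float)   # voxel coordinate grid
    coords = grid + disp                         # where to sample pmr
    return map_coordinates(pmr, coords, order=1, mode="nearest")
```

Displacements estimated at finite element nodes would first be interpolated onto the voxel grid before such a resampling step.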
Two-by-two tables were used to calculate the following values and statistical measures for FGR alone, image guidance with (IGW) and without brain shift compensation (IGWO) alone, and FGR coupled with brain shift compensated image guidance (FGR-IGW): (1) true negatives; (2) false negatives; (3) true positives; (4) false positives; (5) sensitivities; (6) specificities; (7) negative predictive values, and (8) positive predictive values. To calculate these measures for FGR, intraoperative red fluorescence was considered a positive test result, whereas absence of fluorescence was recorded as a negative test result. Similarly, to calculate the same values and measures for image guidance, tissue judged radiologically abnormal from T1-weighted pMR images (i.e., preoperative MRI without brain shift compensation) or uMR images (i.e., preoperative MRI with brain shift compensation) was considered a positive test result, whereas tissue judged radiologically normal from pMR or uMR images was scored as a negative test result.
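The statistical measures derived from the two-by-two tables can be computed directly from the per-specimen test and histology outcomes. A minimal illustrative sketch (the function name is ours):

```python
def diagnostic_metrics(results):
    """Compute sensitivity, specificity, PPV and NPV for one guidance
    modality from a list of (test_positive, histology_positive) pairs,
    one pair per biopsy specimen."""
    tp = sum(1 for t, h in results if t and h)          # true positives
    fp = sum(1 for t, h in results if t and not h)      # false positives
    tn = sum(1 for t, h in results if not t and not h)  # true negatives
    fn = sum(1 for t, h in results if not t and h)      # false negatives
    safe = lambda num, den: num / den if den else float("nan")
    return {
        "sensitivity": safe(tp, tp + fn),
        "specificity": safe(tn, tn + fp),
        "ppv": safe(tp, tp + fp),
        "npv": safe(tn, tn + fn),
    }
```

With 18 specimens per modality, counts such as 11 true positives, 5 false negatives, 2 true negatives, and 0 false positives reproduce a sensitivity of 0.69 and a negative predictive value of 0.29.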
The 2 patients in this study displayed distinct patterns of PpIX fluorescence and different degrees of brain shift (table 1). The GS and GBM (both WHO grade IV) cases displayed areas of strong fluorescence as well as gadolinium-enhancing regions on MRI. The GS surgery (brain shift displacement values: mean = 2.9 mm, max = 16.0 mm) had the largest degree of brain shift (table 2), whereas the GBM surgery (brain shift displacement values: mean = 0.1 mm, max = 1.4 mm) showed much less brain shift. The deformation model was run on a Linux computer (2.33 GHz, 8 GB RAM) with 500 iterations for patient 1 (GS) and 51 iterations for patient 2 (GBM). The computational cost was less than 30 min for patient 1 and less than 6 min for patient 2. Figure 1 shows representative pMR, uMR, and white and blue light images of the coregistered focal point of the operating microscope after dural opening during the surgery of patient 1, from which it is evident that the fluorescing tumor had distended. The uMR views correctly compensate for the tumor movement after dural opening whereas their pMR counterparts clearly do not.
Sensitivity, specificity, negative predictive values, and positive predictive values for both cases are shown in table 2. An increase in the sensitivity and negative predictive value of image guidance was noted after brain shift compensation (sensitivity: IGWO = 0.69 vs. IGW = 0.94; negative predictive value: IGWO = 0.29 vs. IGW = 0.67). FGR-IGW showed the best statistical measures for accurately identifying abnormal tissue (sensitivity = 1.00, negative predictive value = 1.00, specificity = 1.00, positive predictive value = 1.00). True negative, false negative, and true positive values from both cases are graphically displayed in figure 2. No false positives were recorded in this study and two biopsies were true negatives (nonenhancing, nonfluorescent tissue without the presence of tumor cells on histology). The percentage of true positive samples increased (IGWO = 61.1% vs. FGR-IGW = 88.9%) and false negatives decreased (IGWO = 27.8% vs. FGR-IGW = 0.00%) in IGWO versus FGR-IGW. Figure 3 shows representative pre- and postoperative gadolinium-enhanced, T1-weighted images of both cases.
With evidence that the extent of tumor resection correlates with patient survival and quality of life [5, 6, 8, 21, 22, 23], intraoperative brain shift poses a major challenge in brain tumor resections using conventional image guidance technologies. FGR based on ALA-PpIX has recently gained acceptance as a safe and easy way to aid the neurosurgeon in delineating neoplastic tissues during tumor resection. FGR is not limited by degradation of registration accuracy due to the intraoperative brain shift that is known to compromise conventional image guidance systems. However, FGR only provides the neurosurgeon with surface information on the location of neoplastic tissue, and as such, accurate 3-dimensional navigation that accounts for intraoperative brain shift is still needed to complement FGR.
Here, we present the implementation of a brain deformation biomechanical model to compensate for intraoperative brain shift in PpIX FGR of brain tumors. Two surgical cases with different levels of PpIX fluorescence and intraoperative brain shift were studied. FGR-delineated tumor tissue appeared in both the GBM and GS instances. In both surgeries, observable intraoperative fluorescence was mapped only to areas of contrast-enhancing tissue on the uMR images, whereas the pMR images incorrectly mapped fluorescing tissue, demonstrated on pathological analysis to be neoplastic, to radiologically normal regions (e.g. scalp). Some biopsy specimens of highly enhancing tissue in patient 1 (n = 3) did not display observable fluorescence. In these specimens, a heavy infiltration of mostly the sarcomatous element of the tumor was observed. Meanwhile, tissue sections containing mostly astrocytic elements of the tumor showed intraoperative fluorescence and contrast enhancement on uMR images, suggesting that high-grade glioma tumor cells accumulate more PpIX than high-grade sarcomas.
The uMR images provided more accurate anatomical information increasing both the sensitivity (pMR = 0.69 vs. uMR = 0.94) and negative predictive value (pMR = 0.29 vs. uMR = 0.67) of T1-weighted images for surgical correspondence with histologically abnormal tissue (i.e., tissue with the presence of tumor cells). The dual-modality approach using fluorescence signatures with uMR offers a complementary approach that increased the accurate determination of abnormal tissue in the two cases (fig. 2). Image guidance provides the neurosurgeon with accurate 3-dimensional volumetric information for navigation based on MR-specific image signatures that are widely used to identify abnormal and resectable tissue. PpIX FGR provides the neurosurgeon with an intuitive guidance tool that delineates tumor tissue in real time from which the current resection margin can be evaluated. Thus, the dual-modality approach described here takes advantage of pathophysiological changes in tumor cell metabolism (e.g. increased PpIX production and accumulation), blood-brain barrier breakdown (e.g. gadolinium enhancement on MRI), and tumor-specific biological signatures (e.g. astrocytic tumors) to improve the accuracy of tumor identification in the operating room.
Current technologies for intraoperative updating during brain surgery include intraoperative MRI, CT, and US (table 3). These technologies provide intraoperative feedback to the neurosurgeon, counteracting error in navigational accuracy due to brain deformation [24, 25, 26, 27, 28, 29, 30, 31, 32]. Each technology has pros and cons with respect to its costs, limitations and capabilities for tissue contrast, and effects on surgical procedures and workflow that remain to be fully understood and evaluated in the operative setting. In this study, FGR-IGW combines intraoperative feedback from surface fluorescence with a biomechanical model enabled through intraoperative US to update the pMR images, providing the neurosurgeon with real-time feedback on tumor fluorescence coregistered with periodically compensated 3-dimensional uMR views for enhanced neuronavigation.
Although this dual-modality system provides a platform for integrating two powerful guidance technologies, some limitations remain. FGR currently detects only the visually apparent surface levels of fluorescence, since thin layers of blood and/or intervening tissue can obscure subsurface fluorescence emissions. As noted with the GS case, not all tumor tissue produces observable PpIX fluorescence (i.e., sarcomatous element of the tumor). As such, the optimal brain tumor population for efficient use of PpIX FGR needs to be determined. We are currently studying various brain tumor histologies in patients undergoing FGR to determine the most appropriate and relevant biologies for efficient accumulation of PpIX to observable levels. In addition, our group is developing intraoperative fluorescent probes to target non-PpIX-fluorescent tumors. We continue our ongoing efforts to develop a software platform to execute the deformation model intraoperatively as efficiently as possible. Our approach exploits intraoperative US to create a sparse data set to drive the model, where one US image is acquired before durotomy, followed by a series of intraoperative US images at different stages during the surgery in order to generate corresponding uMR images. We have also utilized a stereovision system that can provide surface sparse data to improve the generation of uMR images with minimal disruption to the surgical workflow [10, 11].
We present an implementation of a deformation model guided by intraoperatively acquired data in the setting of PpIX FGR of brain tumors. FGR was used to delineate and guide the resection of fluorescent tissue visually evident within the surgical field. A biomechanical brain deformation model provided uMR images by assimilating intraoperative data acquired with 3-dimensional intraoperative US that improved the correspondence between the volumetric representation of tumor and the fluorescing biomarker of tumor associated with the surgeon's visual field. Postoperative analysis of fluorescence signatures and the uMR images were complementary in more accurately identifying tumor tissue confirmed histopathologically from resected specimens. We believe that this dual-modality approach which uses a deformation model that compensates for brain shift coregistered with fluorescence imaging can provide the neurosurgeon with an accurate, intuitive platform that improves intraoperative guidance during tumor resection.
This research was supported by National Institutes of Health grants R01NS052274-02 and R01EB002082-13. We acknowledge DUSA Pharmaceuticals (Tarrytown, N.Y., USA), Carl Zeiss (Carl Zeiss Surgical GmbH, Oberkochen, Germany), Medtronic Navigation (Louisville, Colo., USA), and Philips Medical Systems (Bothell, Wash., USA) for the supply of ALA, and the use of the OPMI Pentero® surgical microscope, StealthStation® Treon® navigation system, and iU22 3D ultrasound system, respectively.
A biomechanical brain deformation model computed with the finite element method was used to estimate displacement vectors within the whole-brain volume. This model can be represented by the following coupled equations, where u is the displacement, p is the pore fluid pressure, and the other parameters symbolize the tissue mechanical properties:
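Assuming the standard biphasic (Biot) consolidation formulation commonly used for this class of brain deformation models (the exact coefficient conventions shown here are an assumption, not taken from the text), the coupled equations take the form:

```latex
\nabla \cdot G \nabla \mathbf{u}
  + \nabla \left( \frac{G}{1 - 2\nu}\, \nabla \cdot \mathbf{u} \right)
  - \alpha \nabla p
  + (\rho_t - \rho_f)\, \mathbf{g} = 0

\alpha \frac{\partial}{\partial t} \left( \nabla \cdot \mathbf{u} \right)
  + \frac{1}{S} \frac{\partial p}{\partial t}
  - \nabla \cdot k \nabla p = 0
```

In this form, G is the shear modulus, ν is Poisson's ratio, α is the pressure-strain coupling coefficient, k is the hydraulic conductivity, 1/S is a fluid storage (saturation/compressibility) coefficient, ρ_t and ρ_f are tissue and fluid densities, and g is the gravitational acceleration vector, consistent with the model's use of the measured direction of gravity.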
These equations are discretized into a matrix form, Kx = b, where K is the stiffness matrix, x is the displacement and pore fluid pressure to be computed, and b contains the forcing conditions.
The patient's parenchymal volume was isolated within the pMR images using a level set segmentation algorithm. A triangular surface mesh and its corresponding tetrahedral volume mesh were generated using the segmented brain. After patient registration, boundary conditions were assigned to different types of surface nodes as follows: (1) craniotomy nodes were identified using the contour line drawn by the surgeon and were allowed to move freely; (2) brainstem nodes were also allowed to move unconstrained; (3) fluid drainage was defined by a plane passing through the lowest craniotomy node which was perpendicular to the direction of gravity, with elements below and above being assigned with different parameters (e.g. saturated with fluid or not); (4) a second plane was determined by moving the fluid plane along the direction of gravity by 20 mm; all boundary nodes above this plane except the craniotomy nodes were assigned as contact nodes that are free to move towards or away from the inner surface of the skull and were constrained to motion tangential to the skull, if and when, they moved into direct contact during the displacement computations; (5) other boundary nodes were assigned as fixed, meaning that they were only allowed to move tangentially with respect to the skull.
Subsequently, a master surface was generated by projecting the brain boundary nodes along the average nodal normal by a specified distance to simulate the inner surface of the skull. The data used to drive the model estimates were the displacements between predurotomy and immediately postdurotomy ultrasound images. A mutual information-based rigid registration followed by a B-spline nonrigid registration was performed to align the predurotomy and postdurotomy ultrasound images and compute displacement vectors. Figure 4 shows typical US images from patient 1 before and after durotomy as well as their un- and re-registered overlays which were used to extract intraoperative displacement data. Displacement vectors were mapped from US coordinates to pMR coordinates through a series of transformations and used as data to drive the biomechanical model. To generate a displacement map for model assimilation, the tumor does not have to be well defined on US per se as long as features exist in both US and MR image volumes which correspond sufficiently well to allow the mutual information registration to occur successfully. Once the displacement map has been generated, the model will assimilate the measured data to produce whole-brain deformation in order to create the model uMR images.
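The mutual information similarity metric underlying the rigid registration step can be illustrated with a simple joint-histogram estimate. This sketch computes only the metric itself; the actual registration additionally optimizes a spatial transform over this metric and is followed by a B-spline nonrigid stage:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information (in nats) between two equally
    sized images; higher values indicate stronger statistical dependence
    between the intensity distributions, i.e. better alignment."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint probability
    px = pxy.sum(axis=1, keepdims=True)          # marginal of a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of b
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

A registration loop would repeatedly transform one image and keep the transform that maximizes this quantity; mutual information is attractive here because US and MR intensities are related but not linearly correlated.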
A generalized least-squares inverse method was used to solve the estimation problem by minimizing the difference between measured data, d, and the model estimate, x. The model constraint was embedded in the objective function through Lagrange multipliers to form an augmented quadratic expression
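Assuming the usual generalized least-squares construction (the precise weighting of the terms is our assumption), the augmented objective can be written as:

```latex
G(\mathbf{x}, \boldsymbol{\lambda}) =
  (A\mathbf{x} - \mathbf{d})^{T} W (A\mathbf{x} - \mathbf{d})
  + \mathbf{b}^{T} W_{b}\, \mathbf{b}
  + \boldsymbol{\lambda}^{T} (K\mathbf{x} - \mathbf{b})
```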
where A is the sampling matrix which computes the model estimate at locations where the measurements were made, W is the inverse of the covariance matrix of the misfit, d − Ax, between measured data and model estimates, and Wb is the inverse of the covariance of forcing conditions, b.
The objective function is minimized when all derivatives are zero, and the resulting set of equations was solved using the steepest gradient descent algorithm with displacement vectors throughout the whole-brain volume as output files [17, 35]. The pMR image was then deformed using these displacement vectors and uMR images were generated. A schema of the biomechanical brain deformation modeling process is shown in figure 5.
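The steepest gradient descent solver can be sketched on a generic quadratic minimization problem of the same structure. This is illustrative only; the study's solver operates on the much larger set of stationarity equations of the augmented objective:

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=500):
    """Minimize f(x) = 0.5 x^T A x - b^T x for symmetric positive definite A
    by steepest descent with exact line search; the minimizer solves Ax = b,
    the setting where all derivatives of a quadratic objective vanish."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        r = b - A @ x                        # negative gradient (residual)
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (r @ (A @ r))      # exact step along -gradient
        x = x + alpha * r
    return x
```

The iteration counts reported earlier (500 for patient 1, 51 for patient 2) are consistent with this style of first-order iterative solve, where convergence speed depends on the conditioning of the system.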
This study was presented in part at the 77th Annual Meeting of the American Association of Neurological Surgeons, San Diego, Calif., USA, May 5, 2009.
Conflict of Interest
Dr. David W. Roberts serves on the data monitoring committee for a Medtronic deep brain stimulation study.