Skull Base. 2006 May;16(2):59–66.
Prepublished online 2006 February 14. doi: 10.1055/s-2006-931620.
PMCID: PMC1502039

Virtual Reality Augmentation in Skull Base Surgery

Steffen K. Rosahl, M.D.,1 Alireza Gharabaghi, M.D.,2 Ulrich Hubbe, M.D.,1 Ramin Shahidi, Ph.D.,3 and Madjid Samii, M.D.4


Objective: Skull base anatomy is complex and subject to individual variation. Understanding the complexity of surgical anatomy is faster and easier with virtual models created from primary imaging data of the patient. This study was designed to investigate the usefulness of virtual reality in image guidance for skull base procedures. Design: Primary volumetric image data from 110 patients were acquired using magnetic resonance, computed tomography (CT), and CT angiography. Pathologies included lesions in the anterior, middle, and posterior skull base. The data were transferred to an infrared-based image-guidance system for creation of a virtual operating field (VOF) with translucent surface modulation and optional “fly-through” video mode. During surgery, the target registration error for anatomical landmarks was assessed and the VOF was compared with the patient's anatomy in the operative field. Results: Complex structures like the course of the sigmoid sinus, the carotid artery, and the outline of the paranasal sinuses were well visualized in the VOF and were recognized by the surgeon instantly. Perception was greatly facilitated compared with routine mental reconstruction of triaxial images. Accurate assessment of the depth of field and very small objects was not possible in VOF images. Conclusion: Supported by sound anatomical knowledge, creation of a virtual operating field for a surgical approach in an individual patient offers a déjà vu experience that can enhance the capabilities of a surgical team in skull base approaches. In addition, application of this technique in image-guided procedures assists in targeting or avoiding hidden anatomical structures.

Keywords: Three-dimensional image guidance, virtual reality, skull base approaches

The interdisciplinary subspecialty of skull base surgery has recognized the potential benefits of surgical image guidance for some time.1,2,3,4 Because most of our surgical targets are fixed to the skull base, they are less susceptible than unfixed targets to intraoperative shifts that could make image guidance inaccurate. At the same time, surgical targets, vital structures, and anatomical landmarks are often obscured from the surgeon's vision in the early stages of an operation. These considerations alone make a strong case for the use of image guidance during most operations. Today, surgical image guidance based on three-dimensional (3D) volumetric data has become part of the routine in most skull base centers worldwide.

At the same time, in medical disciplines there is a trend toward augmenting interventions by virtual reality. The ideal situation would be a virtual stereoscopic view of the surgical field together with everything contained within it.5,6 Image-based, stereoscopic virtual reality models are already in use for planning a specific surgical procedure and for teaching purposes in neurosurgery and temporal bone dissection.7,8 Stereoscopic guidance in 3D has also been developed to allow the surgeon to explore radiological imaging information during surgery.6,9,10,11 Stereoscopic guidance using image overlay on the operative field and viewed through the microscope will probably be the method of choice, but it still has a way to go in terms of computation speed, capacity, accuracy,12 and visual perception of the depth in 3D images.13

However, with increasing image quality, coregistration of various image modalities, and 3D volumetric image rendering in real time, image guidance is on the verge of providing nonstereoscopic, color-coded models that closely match the predicted anatomy of the real surgical field. This will relieve the surgeon of having to reconstruct triaxial images in his or her own mind for each individual intervention.14,15,16

We have created such models, or virtual operating fields (VOF), from individual radiological data to reconstruct virtual images of the surgical field in more than 100 patients with lesions of the skull base.


METHODS

Magnetic resonance (MR) imaging (1.5 and 3 Tesla), computed tomography (CT), and CT angiography were acquired for primary imaging to create volumetric datasets.

All CT examinations were performed in axial orientation with a slice thickness of 1 mm and an ultra-high-resolution reconstruction algorithm using a Somatom Plus 4® scanner (Siemens, Erlangen, Germany). For arterial and venous CT angiography, an intravenous bolus of 40 mL of contrast medium (Ultravist®, Schering, Berlin, Germany) was injected to visualize enhancing tumors, followed by 60 mL administered at a fast rate after an individual delay determined by a bolus test.17

MR imaging was performed on either a 1.5-Tesla Magnetom (Sonata®, Siemens) or a 3-Tesla Magnetom (Allegra®, Siemens).

Axial T1-weighted 3D magnetization-prepared rapid gradient echo scans (MP-RAGE) (TR/TE/TI 11.08/4.3/300 ms; flip angle 15 degrees; bandwidth 130 Hz/pixel; effective slice thickness 1 mm; pixel size 1.2 × 0.9 mm) without and with IV contrast medium (0.1 mmol per kg body weight gadolinium-DTPA, Magnevist®, Schering) were usually sufficient for MR image guidance at the skull base. This sequence could be acquired in just over 7 minutes.

For improved imaging of the temporal bone and the cortical surface of the temporal fossa, an axial T2-weighted constructive interference in steady state (CISS) sequence (TR/TE/flip angle 17 ms/8.08 ms/70 degrees; effective slice thickness 0.7 mm; pixel size 0.6 × 0.45 mm; acquisition time 7 min 51 s) was obtained for selected cases.

Data from 110 patients were transferred to an image-guidance system via a local area network and processed with software that is capable of rendering high-resolution, 3D images that can be interactively rotated in real time (Image Guidance Laboratories, Stanford University, CA, USA).

Pathologies included sellar and parasellar tumors, lesions in the paranasal sinuses and the temporal bone, acoustic neuromas and various other cerebellopontine angle tumors, epidermoids, brainstem lesions, glomus tumors, and craniocervical meningiomas.

After a surgical strategy had been designed, the neurosurgical team segmented the imaging data and created volumes of interest containing the landmark anatomical structures along the surgical path. A virtual “fly-through” 3D movie was then generated from each patient's individual image data to simulate the major steps of the planned procedure (“helicopter view”). The surgical strategy was adjusted whenever an improved surgical approach could be derived from these visualizations.

Several volumes of interest were selected and color-coded by the neurosurgical team to create a volumetric image that contained all the anatomical landmarks that would also appear in the real operating field. This image, the VOF, was interactively rotated to match the orientation of the real surgical field as seen through the microscope. Surfaces were gradually rendered translucent in the image to allow a view of the lesion in relation to more superficial morphologic landmarks (“driver's seat view”). The VOF was zoomed to the size of the real microscopic field and both views could be displayed on a video monitor simultaneously. Depending mainly on the quality of image data and the level of computer skills of the surgeon, image rendering took between 15 minutes and 2 hours.
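Orienting and zooming the VOF as described amounts geometrically to a rigid rotation plus a uniform scale applied to the rendered volume. The following numpy sketch is an illustration of that geometry only, not the guidance system's actual rendering code; the axis, angle, and zoom values are made up:

```python
import numpy as np

def rotation_matrix(axis, angle_rad):
    """Rodrigues' formula: rotation about a unit axis by angle_rad."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle_rad) * K + (1 - np.cos(angle_rad)) * (K @ K)

def orient_vof(points_mm, axis, angle_rad, zoom):
    """Rotate landmark coordinates into the microscope's viewing
    orientation and scale them to the size of the real field."""
    R = rotation_matrix(axis, angle_rad)
    return zoom * (points_mm @ R.T)

# Two hypothetical landmark positions (mm), rotated 90 degrees
# about the viewing axis and zoomed by a factor of 1.5.
landmarks = np.array([[10.0, 0.0, 0.0], [0.0, 20.0, 0.0]])
rotated = orient_vof(landmarks, axis=[0, 0, 1], angle_rad=np.pi / 2, zoom=1.5)
```

In an interactive system the same transform would be applied on the graphics hardware to the whole volume rather than to individual points.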

In the operating room, the patient's head was registered using 5 to 10 adhesive skin fiducials. Either a pointing probe or a surgical microscope with an infrared transmitter attached to it was tracked by a standard infrared-based camera (see Fig. 1). A digital reference frame (DRF) was attached to the Mayfield® head holder and kept visible to the camera by draping it with a transparent bag before surgery. Throughout the surgical procedure, both macroscopic and microscopic pictures together with videos were taken for off-line comparison with the virtual 3D models of the patient's anatomy. In addition, neurophysiological investigations, employing auditory and somatosensory evoked potentials as well as facial, trigeminal, and oculomotor nerve electromyographic recordings, were performed intraoperatively to detect the nerves early and chart their position in relation to real and virtual landmarks.
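Fiducial-based registration of this kind is commonly solved as a least-squares rigid alignment of the fiducial positions in image space with their probed positions on the patient. The sketch below shows the standard SVD (Kabsch) solution; it is a generic illustration of the technique, not the algorithm of the specific system used in this study:

```python
import numpy as np

def register_fiducials(image_pts, patient_pts):
    """Least-squares rigid registration (Kabsch/SVD) of matched
    fiducial positions; returns R, t such that
    patient ≈ R @ image + t for each fiducial."""
    ci, cp = image_pts.mean(axis=0), patient_pts.mean(axis=0)
    H = (image_pts - ci).T @ (patient_pts - cp)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T       # guard against reflection
    t = cp - R @ ci
    return R, t

# Four non-coplanar hypothetical "fiducials" (mm) in image coordinates.
image = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]])
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
patient = image @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = register_fiducials(image, patient)   # recovers R_true and the shift
```

With noiseless synthetic data the transform is recovered exactly; with real skin fiducials the residual of this fit is what surfaces clinically as registration error.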

Figure 1
Intraoperative setup of the infrared-based image-guidance system showing the probe pointing at the skin over the mastoid, the digital reference frame behind it, and the navigation monitor screen.

During surgery, the stereoscopic 3D views as seen through the operating microscope were compared with the VOF images at regular intervals. Depth was controlled by measurement along the z-axis in triaxial renderings on the monitor screen of the navigation system. Target registration error was assessed for major landmarks on the surface as well as at depth in most cases. Descriptive statistics were obtained with the SPSS PC+ statistical package (SPSS, Inc., Chicago, Illinois).


RESULTS

Mean target registration error (TRE) was 1.5 mm (SD 0.6 mm) for CT-based and 2.1 mm (SD 0.9 mm) for MR-based image guidance.
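TRE is the Euclidean distance between a landmark's position as predicted by the image guidance and its position as actually probed on the patient; the figures above are the mean and standard deviation of those distances. A small illustration with hypothetical coordinates (not data from this study):

```python
import numpy as np

def target_registration_error(predicted_mm, probed_mm):
    """Euclidean distance (mm) between each landmark's predicted
    and physically probed position."""
    return np.linalg.norm(predicted_mm - probed_mm, axis=1)

# Hypothetical landmark coordinates (mm), for illustration only.
predicted = np.array([[0.0, 0.0, 0.0], [10.0, 10.0, 10.0]])
probed    = np.array([[1.5, 0.0, 0.0], [10.0, 12.1, 10.0]])
tre = target_registration_error(predicted, probed)          # [1.5, 2.1]
mean_tre, sd_tre = tre.mean(), tre.std(ddof=1)              # mean, sample SD
```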

The quality of VOF images with respect to their closeness to reality depended largely on the quality of the primary dataset (movement artifacts, tissue contrast) and the anatomical location of the target structures.

Presurgical planning based on VOF images and movies facilitated surgical preparation, especially for complex lesions for which an alternative approach was possible.

There was no need for the surgeon to mentally reconstruct the surgical anatomy from two-dimensional scans (see Fig. 2). Compared with triaxial 2D images, orientation was significantly faster and more comprehensible with VOF images. This became most apparent when irregularly shaped structures like the dural sinuses (see Fig. 3), the basal arterial circulation, the bone of the skull base, tumor borders, and the cortical surface were involved or when the operating field was rotated to an unusual orientation.

Figure 2
Axial magnetic resonance (top row) and computed tomography (bottom row) images of the same lesion (epidermoid) that is also shown in subsequent VOF images. VOF, virtual operating field.
Figure 3
CT and CT angiography model of an epidermoid located in the left petrous bone adjacent to the sigmoid sinus. View from above and three views from the left side with gradual modification of the transparency of the skull bone. Interactive rotation of the ...

Zooming in the virtual images to enlarge the detail to the size of the surgical field reduced the amount of information presented at a time to the required minimum, steadying the surgeon's focus of attention.

Individual, patient-specific anatomy was visualized in VOF images down to a resolution of ~2 mm for well-delineated structures. In general, all structures that were readily discernible in the primary image data were also seen in virtual 3D models of the surgical field. Bony surfaces, embedded vessels, and different types of soft tissue along the surgical path, in front of and beyond the lesion, were identified in a single image that could be adjusted to the position of the surgical microscope in relation to the field (see Fig. 3).

Smaller and less discernible structures, such as most cranial nerves, could not be shown in routine images. Electrophysiological monitoring was far more effective for early identification of the cranial nerves.

Gradual modulation of the opacity of surfaces in VOF images allowed not only for visualization of hidden anatomical structures but also for relating them to more superficial landmarks that were already within the surgeon's view (see Fig. 3). By this means, intracranial soft tissue anatomy, including gyri and sulci, was well represented in MR-based VOF images (see Fig. 4).
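Such gradual opacity modulation is typically realized by front-to-back alpha compositing along each viewing ray, so that deeper structures contribute in proportion to the transparency of the surfaces in front of them. A simplified sketch, assuming colors and opacities have already been assigned to the samples along one ray by a transfer function (a generic rendering technique, not this system's implementation):

```python
import numpy as np

def composite_ray(colors, alphas):
    """Front-to-back alpha compositing: each sample's color is
    weighted by its own opacity and by the transparency accumulated
    in front of it, so deep structures show through translucent
    surfaces."""
    out = np.zeros(3)
    transparency = 1.0                    # fraction of light still unblocked
    for c, a in zip(colors, alphas):
        out += transparency * a * np.asarray(c, dtype=float)
        transparency *= (1.0 - a)
        if transparency < 1e-3:           # early ray termination
            break
    return out

# A half-transparent white "bone" surface in front of an opaque red "vessel":
# lowering the bone's alpha lets more of the vessel's color through.
ray = composite_ray([(1, 1, 1), (1, 0, 0)], [0.5, 1.0])
```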

Figure 4
Volumetrically rendered gadolinium-enhanced MR image of the lesion. The surface of the cerebellum and the temporal cortex, the epidermoid, and the venous structures are well delineated after the bone and outer tissue layers were rendered translucent. ...

Depth along the z-axis was visualized and could be measured on the navigation monitor screen in triaxial images (see Fig. 5).

Figure 5
Screen shot of the monitor of the image-guidance system during surgery. Note that the virtual probe corresponding to the tip of the real probe is clearly seen inside the lesion in multiaxial images only.


DISCUSSION

The common conception of 3D image guidance today involves 2D, triaxial images in which an entry and a target point as well as a trajectory between these points are defined.18,19

However, during skull base procedures, most surgeons rely on 3D, stereoscopic information from the surgical field delivered through the microscope. A “virtual microscope” operating in 3D space that uses radiographic images of the patient's anatomy would be an ideal instrument to plan these procedures and to obtain information beyond the field during surgery. While several problems remain to be solved,12,13,20 true 3D imaging on a routine basis will probably be realized in the future by taking advantage of stereoscopy.6,11

In the meantime, modeling of the surgical field with volumetric image data as shown in this study provides a method that is ready for use in skull base procedures now. Delivering an appropriate 3D image that virtually matches the surgical field certainly augments the surgeon's ability.

While anatomical knowledge and experience still remain the most crucial factors affecting the surgical result, this virtual reality technique offers the surgeon a déjà vu experience of the individual anatomy. It can also provide additional information on the operative field and beyond that is invisible through the operating microscope.

Of course, this method relies on the assumption that the images acquired prior to surgery, and upon which the surgical guidance is based, accurately represent the morphology of the tissue during the surgical procedure. Even though there is usually little systematic error due to shifting of structures in interventions at the skull base and the target registration error usually does not exceed 2 mm,21 in many instances this assumption will be invalid.20 Intraoperative real-time imaging, using interventional CT or MRI, ultrasound, electrophysiological recordings, and surface tracking, may be added to overcome this limitation.9,15,22,23,24,25

With the current technique, depth cannot be accurately estimated from the VOF. Triaxial images must additionally be used to overcome this limitation.

Although now in extensive clinical use, image guidance is still often perceived as an intrusion into the operating room.20 In this series we have experienced that it was accepted best when additional preparation time was minimal, when the VOF images were zoomed to the size of the real surgical field containing relevant landmarks without redundant information, and when the target and the approach were outlined in a single 3D image that could be rotated interactively and rendered translucent.


CONCLUSION

Image guidance on the basis of 3D images cannot substitute for anatomical knowledge and experience, because systematic and accidental technical errors occur and the depiction of anatomical detail is still limited by the resolution of current imaging techniques. With respect to the planning of skull base procedures, however, virtual images of the operative field offer the surgeon a déjà vu experience of the surgical field. They eliminate the task of mentally reconstructing and rotating triaxial images and provide additional information on the real operative field that is otherwise invisible and inaccessible.


REFERENCES

1. Schul C, Wassmann H, Skopp G B, et al. Surgical management of intraosseous skull base tumors with aid of Operating Arm System. Comput Aided Surg. 1998;3:312–319.
2. McEvoy A W, Porter D G, Bradford R, Wright A. Intra-operative localisation of skull base tumours. A case report using the ISG viewing wand in the management of trigeminal neuroma. Postgrad Med J. 1999;75:35–38.
3. Metson R, Cosenza M, Gliklich R E, Montgomery W W. The role of image-guidance systems for head and neck surgery. Arch Otolaryngol Head Neck Surg. 1999;125:1100–1104.
4. Wadley J, Dorward N, Kitchen N, Thomas D. Pre-operative planning and intra-operative guidance in modern neurosurgery: a review of 300 cases. Ann R Coll Surg Engl. 1999;81:217–225.
5. John N W. Using stereoscopy for medical virtual reality. Stud Health Technol Inform. 2002;85:214–220.
6. Miller A, Allen P, Fowler D. In-vivo stereoscopic imaging system with 5 degrees of freedom for minimal access surgery. Stud Health Technol Inform. 2004;98:234–240.
7. Kockro R A, Serra L, Tseng-Tsai Y, et al. Planning and simulation of neurosurgery in a virtual reality environment. Neurosurgery. 2000;46:118–135.
8. Stredney D, Wiet G J, Bryan J, et al. Temporal bone dissection simulation—an update. Stud Health Technol Inform. 2002;85:507–513.
9. Hernes T A, Ommedal S, Lie T, Lindseth F, Lango T, Unsgaard G. Stereoscopic navigation-controlled display of preoperative MRI and intraoperative 3D ultrasound in planning and guidance of neurosurgery: new technology for minimally invasive image-guided surgery approaches. Minim Invasive Neurosurg. 2003;46:129–137.
10. Serra L, Kockro R, Goh L C, Ng H, Lee E C. The DextroBeam: a stereoscopic presentation system for volumetric medical data. Stud Health Technol Inform. 2002;85:478–484.
11. Vogt S, Khamene A, Niemann H, Sauer F. An AR system with intuitive user interface for manipulation and visualization of 3D medical data. Stud Health Technol Inform. 2004;98:397–403.
12. Mitchell P, Wilkinson I D, Griffiths P D, Linsley K, Jakubowski J. A stereoscope for image-guided surgery. Br J Neurosurg. 2002;16:261–266.
13. Johnson L G, Edwards P, Hawkes D. Surface transparency makes stereo overlays unpredictable: the implications for augmented reality. Stud Health Technol Inform. 2003;94:131–136.
14. Gralla J, Guzman R, Brekenfeld C, Remonda L, Kiefer C. High-resolution three-dimensional T2-weighted sequence for neuronavigation: a new setup and clinical trial. J Neurosurg. 2005;102:658–663.
15. Dey D, Gobbi D G, Slomka P J, Surry K J, Peters T M. Automatic fusion of freehand endoscopic brain images to three-dimensional surfaces: creating stereoscopic panoramas. IEEE Trans Med Imaging. 2002;21:23–30.
16. Shahidi R, Bax M R, Maurer C R, Jr, et al. Implementation, calibration and accuracy testing of an image-enhanced endoscopy system. IEEE Trans Med Imaging. 2002;21:1524–1535.
17. Wilkinson E P, Shahidi R, Wang B, Martin D P, Adler J R, Jr, Steinberg G K. Remote-rendered 3D CT angiography (3DCTA) as an intraoperative aid in cerebrovascular neurosurgery. Comput Aided Surg. 1999;4:256–263.
18. Kurtsoy A, Menku A, Tucer B, Oktem I S, Akdemir H. Neuronavigation in skull base tumors. Minim Invasive Neurosurg. 2005;48:7–12.
19. Vannier M W, Marsh J L. Three-dimensional imaging, surgical planning, and image-guided therapy. Radiol Clin North Am. 1996;34:545–563.
20. Peters T M. Image-guided surgery: from X-rays to virtual reality. Comput Methods Biomech Biomed Engin. 2000;4:27–57.
21. Berry J, O'Malley B W, Jr, Humphries S, Staecker H. Making image guidance work: understanding control of accuracy. Ann Otol Rhinol Laryngol. 2003;112:689–692.
22. Gering D T, Nabavi A, Kikinis R, et al. An integrated visualization system for surgical planning and guidance using image fusion and an open MR. J Magn Reson Imaging. 2001;13:967–975.
23. Miga M I, Roberts D W, Kennedy F E, et al. Modeling of retraction and resection for intraoperative updating of images. Neurosurgery. 2001;49:75–84.
24. Roberts D W, Farid H, Wu Z, Hartov A, Paulsen K D. Cortical surface tracking using a stereoscopic operating microscope. Neurosurgery. 2005;56:86–97.
25. Unsgaard G, Ommedal S, Muller T, Gronningsaeter A, Nagelhus Hernes T A. Neuronavigation by intraoperative three-dimensional ultrasound: initial experience during brain tumor resection. Neurosurgery. 2002;50:804–812.

Articles from Skull Base are provided here courtesy of Thieme Medical Publishers