Med Image Comput Comput Assist Interv. Author manuscript; available in PMC 2010 June 14.
Published in final edited form as:
Med Image Comput Comput Assist Interv. 2009 October 1; 12(Pt. 1): 795–802.
doi:  10.1007/978-3-642-04268-3
PMCID: PMC2880830
NIHMSID: NIHMS198300

Combining Multiple True 3D Ultrasound Image Volumes through Re-registration and Rasterization

Abstract

We present an accurate and efficient technique to combine and rasterize multiple 3D ultrasound (3DUS) image volumes originally presented in spherical coordinates into a single, 3D Cartesian image that uniformly samples the total field of view. To ensure the consistency of merged image content in overlapping regions, image re-registration was performed by maximizing mutual information (MI). The technique was applied to 22 3DUS image volumes obtained during five neurosurgical patient cases. The computational cost of the approach increases linearly with the number of images involved (the average time to combine and rasterize one pair of 3DUS images was 1.5 sec). Interpolation was approximately 20% more accurate in overlapping regions when re-registration was performed before rasterization, which also minimized the feature loss and/or blurring that was evident without re-registration. In addition, we report the average translational (35.2 mm) and rotational (32.8°) capture ranges for the MI re-registration of two volumetric 3DUS images. The technique is applicable in any clinical application in which volumetric true 3DUS is acquired.

1 Introduction

Ultrasonography is an important imaging technique with a wide range of both diagnostic and intraoperative applications. Conventional 2D ultrasound (2DUS) is currently the most commonly used imaging scheme, in which multiple freehand sweeps are acquired to sample the region of interest. Recently, we have integrated volumetric true 3D ultrasound (3DUS) into image-guided neurosurgery, during which image volumes are generated from a dedicated ultrasound transducer without the need for freehand sweeps or 3D image reconstruction [1]. However, the 3DUS image space is represented in an unconventional spherical coordinate system, making it incompatible with most processing and visualization software that expects Cartesian coordinates. In addition, multiple acquisitions, as opposed to a single snapshot, are still often recorded (e.g., to increase the overall sampling of the region of interest). Therefore, a technique that combines multiple 3DUS image volumes into one Cartesian 3D dataset is needed to enable application of existing software as well as to improve the efficiency of any subsequent image processing of the 3DUS acquisitions.

This paper presents an accurate and efficient technique to combine arbitrarily-oriented 3DUS volumes into a single Cartesian coordinate system. The consistency of the combined image content in overlapping regions is ensured through re-registration with mutual information (MI), the importance of which is demonstrated by comparing rasterized images generated with and without the re-registration. We show that image features can become lost or significantly blurred without re-registration. In addition, the capture ranges of two true 3DUS image volumes registered with MI are quantitatively evaluated and found to be much larger than those of tracked 2DUS acquisitions.

2 Material and Methods

Five patients (3 male, 2 female; average age of 48) undergoing open cranial brain tumor resections (three low grade gliomas, one high grade glioma and one meningioma) with deployment of volumetric 3DUS were included in the study. A set of volumetric 3DUS images was acquired (3–9 image volumes for each patient and 22 in total; time interval between two consecutive volumes was approximately 10–15 sec) before dural opening using a dedicated transducer (X3-1 broad band matrix array) and ultrasound system (iU22, Philips Healthcare, N.A.; Bothell, WA). All image acquisitions were configured to cover the maximum angular ranges allowed [1]. The scan-depth was set to 140–160 mm to capture the parenchymal surface contralateral to the craniotomy.

2.1 Geometric Transformation and Interpolation for Volumetric 3DUS

Interpolation is essential for 3DUS image rasterization. We have recently developed an accurate and efficient trilinear interpolation scheme out of necessity because the vendor of the 3DUS scanner did not openly provide conversion functionality to transform scans into Cartesian coordinates. The interpolation was achieved by first converting physical points into an integer Cartesian grid space, similar to the approach reported in [2]. Briefly, the dimensions of the 3DUS image matrix and the ranges in depth (r, in mm), lateral angle (θ, in degrees), and medial angle (ϕ, in degrees) determine the step sizes in each direction (Fig. 1a). Therefore, the indices of the row, column, and slice of a voxel in a typical 3DUS image determine its physical location. Conversely, for any given point in physical space (Fig. 1b), its equivalent coordinates in grid space (i, j, k; not necessarily integers) can be uniquely determined (Fig. 1c; [1]).
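The grid-to-physical mapping described above can be sketched as follows. This is an illustrative Python reconstruction, not the authors' C implementation: the angular convention (tan θ = x/z, tan ϕ = y/z) and all geometry parameters (R0, DR, etc.) are assumptions chosen only so the mapping is simple and invertible.

```python
import numpy as np

# Assumed acquisition geometry (illustrative values, not from the paper):
# depth r in mm and two steering angles theta, phi in degrees, each sampled
# uniformly so that grid index -> physical location is a simple affine map.
R0, DR = 20.0, 1.0        # depth offset and step (mm)
TH0, DTH = -45.0, 1.0     # lateral angle offset and step (deg)
PH0, DPH = -30.0, 1.0     # medial angle offset and step (deg)

def grid_to_physical(i, j, k):
    """Map (possibly fractional) grid indices (i, j, k) to a Cartesian point,
    using the assumed convention tan(theta) = x/z, tan(phi) = y/z."""
    r = R0 + i * DR
    th = np.radians(TH0 + j * DTH)
    ph = np.radians(PH0 + k * DPH)
    z = r / np.sqrt(1.0 + np.tan(th) ** 2 + np.tan(ph) ** 2)
    return np.array([z * np.tan(th), z * np.tan(ph), z])

def physical_to_grid(p):
    """Invert grid_to_physical: recover fractional (i, j, k) for a point p."""
    x, y, z = p
    r = np.sqrt(x * x + y * y + z * z)
    th = np.degrees(np.arctan2(x, z))
    ph = np.degrees(np.arctan2(y, z))
    return np.array([(r - R0) / DR, (th - TH0) / DTH, (ph - PH0) / DPH])
```

Because the mapping is invertible, fractional grid coordinates (i, j, k) can be recovered for any physical point inside the field of view, which is exactly the input needed by the trilinear interpolation described next.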

Fig. 1
(a) A typical 3DUS voxel (B) in physical space. (b–d) Sequential transformations of a typical point from physical space (p) to grid space (p′), and subsequently to natural coordinates (p″) of a hexahedral element determined by ...

The intensity value at p or its equivalence, p′, can then be linearly interpolated using the standard finite element trilinear shape functions [3]. Specifically, the 8 neighboring voxels formulate an 8-node hexahedral element in grid space, which is further transformed into natural coordinates (Fig. 1d). The trilinear shape functions for a normalized hexahedral element are expressed as:

N_a = (1/8)(1 + ξξ_a)(1 + ηη_a)(1 + ζζ_a),
(1)

where a indexes the eight neighboring nodes of the element (a = 1, …, 8), and ξ, η, and ζ are the three normalized coordinates (the subscript a indicates the corresponding value at node a). The intensity at p is, therefore, calculated as the weighted sum of the intensities at the neighboring voxels according to the following equation (Fig. 1d):

I(p) = I(p′) = I(p″) = ∑_{a=1}^{8} N_a × I(a),
(2)

where I(a) is the intensity value at the corresponding voxel. The accuracy of the trilinear interpolation algorithm has been verified with clinical 3DUS images, and it is an improvement over both voxel nearest-neighbor and distance-weighting algorithms [1].
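As a sketch of Eqns. (1)–(2), the following Python snippet (an illustrative reconstruction, not the authors' C code) evaluates the eight trilinear shape functions in natural coordinates and uses them to interpolate within a regular grid:

```python
import numpy as np

# Node sign patterns (xi_a, eta_a, zeta_a) = +/-1 for the 8-node hexahedron.
NODES = np.array([(sx, sy, sz) for sx in (-1, 1)
                               for sy in (-1, 1)
                               for sz in (-1, 1)], dtype=float)

def shape_functions(xi, eta, zeta):
    """N_a = 1/8 (1 + xi*xi_a)(1 + eta*eta_a)(1 + zeta*zeta_a), a = 1..8."""
    return 0.125 * ((1 + xi * NODES[:, 0]) *
                    (1 + eta * NODES[:, 1]) *
                    (1 + zeta * NODES[:, 2]))

def trilinear_interp(vol, i, j, k):
    """Interpolate vol at fractional grid coordinates (i, j, k)."""
    i0, j0, k0 = int(np.floor(i)), int(np.floor(j)), int(np.floor(k))
    # Natural coordinates in [-1, 1] within the enclosing hexahedral cell.
    xi, eta, zeta = 2 * (i - i0) - 1, 2 * (j - j0) - 1, 2 * (k - k0) - 1
    N = shape_functions(xi, eta, zeta)
    # Neighbor intensities gathered in the same node order as NODES.
    vals = np.array([vol[i0 + (sx > 0), j0 + (sy > 0), k0 + (sz > 0)]
                     for sx, sy, sz in NODES])
    return float(N @ vals)
```

The shape functions sum to 1 for any (ξ, η, ζ), so any field that is linear in the grid indices is reproduced exactly, which is a convenient correctness check.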

2.2 Combination of Multiple Images

In order to combine multiple 3DUS images, a common coordinate system is required. This was established through optical tracking (Polaris; Northern Digital Inc., London ON, Canada) that continuously monitored two infra-red light-emitting sources rigidly coupled to the patient’s head (patient tracker) and the US scan-head (US tracker), respectively. The image transformation is illustrated in Fig. 2a, in which T(tracker←US), the transformation from US image to scan-head tracker coordinates, was obtained by scan-head calibration (accuracy of approximately 2 mm; [4]). An arbitrary 3DUS image was transformed into the coordinate system of a pre-selected reference volume (the first 3DUS image acquired for each patient) by:

T(US1←US2) = inv(T(tracker←US)) × inv(T(patient←tracker,1)) × T(patient←tracker,2) × T(tracker←US),
(3)
Fig. 2
(a) Illustration of image coordinates involved in transforming 3DUS images. (b) Illustration of image combination and rasterization. Image intensity values of voxels enclosed by multiple 3DUS images (n≥1) were obtained by averaging.

The accuracy of the image transformation depended on the accuracies of the Polaris tracker (error <1 mm) and the US scan-head calibration (error of 2–3 mm; [4]). To improve the accuracy of image transformation, an inter-image re-registration was performed to maximize image alignment (see Section 2.3 for details) before rasterization.
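Eqn. (3) is a standard chain of rigid-body homogeneous transformations. The sketch below (Python, with made-up tracker poses that are purely illustrative) shows the composition order; as a sanity check, if the patient-tracker poses at the two acquisitions are identical (no head or probe motion), the chain collapses to the identity.

```python
import numpy as np

def rigid(rz_deg=0.0, t=(0.0, 0.0, 0.0)):
    """Build a 4x4 homogeneous transform from a z-axis rotation and a
    translation (a simplified parameterization used only for illustration)."""
    c, s = np.cos(np.radians(rz_deg)), np.sin(np.radians(rz_deg))
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = t
    return T

inv = np.linalg.inv

# Hypothetical calibration and tracker poses (values are assumptions):
T_tracker_US = rigid(10.0, (5.0, -2.0, 40.0))   # scan-head calibration
T_pat_trk_1 = rigid(30.0, (100.0, 20.0, 0.0))   # tracker pose, acquisition 1
T_pat_trk_2 = rigid(32.0, (101.0, 21.0, 0.5))   # tracker pose, acquisition 2

# Eqn. (3): map the second US volume into the first US volume's coordinates.
T_US1_US2 = inv(T_tracker_US) @ inv(T_pat_trk_1) @ T_pat_trk_2 @ T_tracker_US
```

Reading right to left, a point in US2 coordinates is carried into scan-head tracker space, then into patient space at acquisition 2, back through patient space at acquisition 1, and finally into US1 coordinates.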

After all 3DUS images were transformed into a common coordinate system (the relative size of the overlapping region between two image volumes with the same scan-depth was 79.9% on average; range: 56.7–94.0%), a 3D bounding box was established with its major axes parallel to those of the first 3DUS acquisition (Fig. 2b). A set of regularly spaced voxels was generated to fill the bounding box. The spacing between voxels determined the voxel size and was chosen to be 1.0 mm along all three directions. The image intensities of these voxels were determined through interpolation (see Section 2.1 for details). In addition, an extra band of five zero-intensity voxels was padded along the boundaries of the bounding box to ensure complete sampling of the combined imaging volume.

Each Cartesian voxel may be physically enclosed by any number of the 3DUS acquisitions (n = 0 or n ≥ 1; Fig. 2b). When n = 0 (i.e., the voxel was not enclosed by any 3DUS image), a zero intensity was assigned. Otherwise (n ≥ 1), an averaging scheme was used to assign a unique intensity value by interpolating across all of the 3DUS images in which the voxel was enclosed.
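The n = 0 / n ≥ 1 voxel rule can be sketched as follows (illustrative Python; the toy volume "membership" functions here are stand-ins for the real spherical-coordinate interpolation of Section 2.1):

```python
import numpy as np

def rasterize(grid_points, volumes):
    """Assign each Cartesian voxel the mean of the interpolated intensities
    from every volume that encloses it; zero if no volume encloses it.
    `volumes` is a list of callables returning an intensity or None (outside)."""
    out = np.zeros(len(grid_points))
    for idx, p in enumerate(grid_points):
        vals = [v for v in (f(p) for f in volumes) if v is not None]
        if vals:                 # n >= 1: average across enclosing volumes
            out[idx] = np.mean(vals)
        # n == 0: keep the zero intensity assigned at initialization
    return out

# Two toy "volumes" with partial overlap along x (hypothetical):
vol_a = lambda p: 10.0 if p[0] < 1.0 else None
vol_b = lambda p: 20.0 if 0.0 < p[0] < 1.5 else None
```

A voxel inside only one volume takes that volume's interpolated intensity; a voxel in the overlap takes the average; a voxel outside both stays zero.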

2.3 Inter-image Re-registration through Maximization of MI

Inter-image re-registration through MI (Insight Segmentation and Registration Toolkit, ITK version 3.8 [5]) was used to correct errors in the transformations (i.e., Eqn. 3) required to place all 3DUS volumes into a common coordinate system. The first 3DUS acquisition was chosen as the fixed image and all remaining volumes were treated as moving images. In total, (N−1) re-registrations were performed, where N is the number of 3DUS acquisitions for a given patient.

Gaussian smoothing (5×5 kernel) of both the fixed and moving images, as well as thresholding of the moving images, was performed to improve the robustness of the registration. The initial transformation obtained from the tracking system (Eqn. 3) served as the starting point for re-registration, with the Mattes formulation of MI [5] as the image similarity measure. Multithreading was enabled, and a steepest gradient descent optimizer was employed to maximize the MI. Convergence was reached when either the net change in MI was less than 10⁻³ or the number of iterations reached a preset maximum of 200 [5]. With image re-registration, the adjusted version of Eqn. 3, which transforms an arbitrary 3DUS volume into the coordinate system of the first 3DUS acquisition, can be written as:

T(US1←US2)_adjusted = T_adjust × inv(T(tracker←US)) × inv(T(patient←tracker,1)) × T(patient←tracker,2) × T(tracker←US),
(4)

where T_adjust is the rigid-body correction obtained from the MI re-registration.
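The authors used ITK's Mattes MI estimator; as a simpler illustration of the similarity measure being maximized, the following Python snippet computes mutual information from a joint intensity histogram (a basic histogram estimator, not the Mattes formulation, and not part of the original implementation):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based MI between two equally sized images:
    MI = sum_xy p(x,y) * log( p(x,y) / (p(x) p(y)) )."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)      # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)      # marginal of image B
    nz = pxy > 0                             # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px * py)[nz])))
```

During registration, the optimizer perturbs the moving image's pose and accepts steps that increase this value; a well-aligned pair shares more intensity structure and hence scores higher MI than a misaligned one.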

2.4 Capture Range

To quantify the capture range of registration between two volumetric 3DUS acquisitions, the first two 3DUS volumes recorded in each patient case were selected, rasterized, and registered. The converged locations of all non-zero-intensity voxels in the thresholded moving image (defined as the “true” locations) were then obtained, and the centroid of these voxels was defined as the origin of a local coordinate system (O_local). The moving image was then transformed away from the “true” locations over a specified range (0–60 mm in translation and 0–60° in rotation about O_local). For each patient, 400 translational and 400 rotational perturbations were generated, with magnitudes increasing linearly over the specified range; the direction of the translation and the orientation of the rotational axis passing through O_local were drawn from a uniform distribution. The distances between the converged voxel locations and their corresponding “true” positions were calculated, and the average distance error was determined. A registration was deemed successful when the average distance error was less than 2 mm. Successful registrations were counted, and the capture range was defined as the largest misalignment at or below which the registration success rate was at least 95% across all patient cases.
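The perturbation scheme can be sketched as below (illustrative Python, not the authors' Matlab code): magnitudes are linearly spaced over the stated ranges, directions are drawn uniformly on the sphere via normalized Gaussian vectors, and rotations are built with Rodrigues' formula.

```python
import numpy as np

def random_unit_vector(rng):
    """Uniformly distributed direction on the unit sphere."""
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def rotation_about_axis(axis, angle_deg):
    """Rotation matrix about a unit axis via Rodrigues' formula:
    R = I + sin(th) K + (1 - cos(th)) K^2, with K the cross-product matrix."""
    ax, ay, az = axis
    K = np.array([[0.0, -az, ay], [az, 0.0, -ax], [-ay, ax, 0.0]])
    th = np.radians(angle_deg)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

rng = np.random.default_rng(42)
mags_t = np.linspace(0.0, 60.0, 400)   # translational magnitudes (mm)
mags_r = np.linspace(0.0, 60.0, 400)   # rotational magnitudes (deg)
translations = [m * random_unit_vector(rng) for m in mags_t]
rotations = [rotation_about_axis(random_unit_vector(rng), m) for m in mags_r]
```

Each rotation would then be applied about an axis through the centroid O_local, and each trial scored against the ≤2 mm average-distance-error criterion.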

2.5 Data Analysis

To demonstrate the importance of inter-image re-registration, representative images were qualitatively compared with those obtained when no re-registration was performed. We also quantified the differences by transforming the second 3DUS volume into the coordinates of the first, with and without re-registration, and interpolating intensities at the transformed voxel locations from the first image. The absolute differences in image intensity between the interpolated and “ground-truth” second images indicate the influence of re-registration on interpolation accuracy. Scatter plots of the average distance error versus the initial translational and rotational misalignments, as well as success-rate curves, were generated.

Image rasterization was implemented in C with multithreading enabled by OpenMP [6], and was compiled for use within Matlab (R2008b; The MathWorks, Natick, MA). Image rasterization and registration were executed on an 8-CPU Intel Xeon workstation (2.33 GHz, 8 GB RAM) running Ubuntu Linux 6.10. All data analyses were performed in Matlab. We report (i) the typical computational cost of registering one pair of 3DUS volumes and of combining and rasterizing multiple 3DUS acquisitions, (ii) the interpolation accuracy in overlapping regions with and without image re-registration, and (iii) the capture ranges of the intra-modality image registration.

3 Results

Seventeen image re-registrations (the 22 acquisitions minus the five fixed reference volumes) were performed, and all converged within 100 iterations (typically 30–70). The average computational cost per re-registration was 31 sec. The amount of transformation required to align the moving images with the fixed image was 2.0±0.5 mm (range: 1.3–3.5 mm). Image transformations were then adjusted according to Eqn. 4.

Subsequently, image rasterization was performed to uniformly sample the combined image volume. The computational cost of image combination and rasterization depended linearly on the number of 3DUS volumes involved; the average cost to combine and rasterize each 3DUS acquisition was 1.5 sec. Representative 2D cross-sectional images passing through the center of the acquisition volume are shown for a representative patient (Fig. 3a). For comparison, the 3DUS images were also combined and rasterized without applying the re-registrations (Fig. 3b). The combined 3DUS image was visibly sharper when re-registration was applied. This is not surprising, because tissues in the overlapping regions generally presented the same image intensities (the angular changes of the transducer were usually small due to the confines of the craniotomy); blurring therefore occurs whenever misalignment is present, demonstrating the importance of re-registration to minimize misalignment before rasterization.

Fig. 3
Combined and rasterized 3DUS images (a) with and (b) without inter-image re-registration for a representative patient. The combined image is significantly sharper when re-registration was applied before rasterization (see arrows and enlarged view of image ...

The significance of re-registration between 3DUS volumes was also apparent when two image acquisitions were overlaid in the same space: misalignment of features between acquisitions was markedly reduced with registration (Fig. 4). The effect was further quantified by transforming the second 3DUS image into the coordinates of the first and comparing the absolute differences in image intensity between the interpolated and the “ground-truth” second image. With re-registration, the mean absolute difference was 9.5±2.8 across the five patient cases, versus 11.7±4.1 without (all images were 8-bit grayscale).

Fig. 4
Significance of registration is apparent when comparing representative overlays of two 3D volumes in the same space generated (a) with or (b) without registration. Arrows in (b) indicate areas of significant misalignment (approx. 3.5 mm) corrected by ...

Scatter plots of the average distance error versus the initial translational and rotational misalignments away from the “true” locations are shown in Fig. 5a,b. With the results pooled across patients, the registration success rate was plotted against the initial misalignment (Fig. 5c,d). The intersection of each success-rate curve with the horizontal dashed line at 95% indicates overall translational and rotational capture ranges of 35.2 mm and 32.8°, respectively. For comparison, when a threshold of 4 mm (instead of 2 mm) on the average distance error was used to define a successful registration, the translational and rotational capture ranges differed by less than 2 mm and 1°, respectively. When a success rate of 90% was considered sufficient, the capture ranges increased to 39.2 mm and 38.9°.

Fig. 5
Scatter plots of average distance error vs. initial (a) translational and (b) rotational misalignment, along with the corresponding (c) translational and (d) rotational success rate curves

4 Discussion and Conclusion

We have presented an accurate and efficient image combination and rasterization technique for generating a single Cartesian 3D image volume from multiple 3DUS acquisitions, applied in five neurosurgical patient cases. The computational cost of the approach increased linearly with the number of images involved, with an average of 1.5 sec for combining and rasterizing each 3DUS acquisition. Re-registering 3DUS images before rasterization significantly increased the consistency of image content in overlapping regions. The average magnitude of transformation required to re-align the 3DUS volumes was 2.0 mm. The importance of re-registration in producing sharper images in overlapping regions was demonstrated by visual comparison with results generated without re-registration: the enlarged views in Fig. 3 clearly indicate that features become lost or significantly blurred without the MI re-registration. By comparing the absolute differences between the interpolated and “ground-truth” image intensities, the interpolation was found to be approximately 20% more accurate in overlapping regions when re-registration was performed. The residual interpolation error (9.5±2.8) was likely due to inherent noise in the 3DUS acquisitions (e.g., caused by varying acoustic coupling to the brain parenchyma).

The capture ranges of intra-modality registration between two volumetric 3DUS acquisitions (35.2 mm and 32.8°) are similar to those reported for reconstructed 3DUS images in the literature (25.5 mm [7] and 40° [8]), and much larger than those we have obtained when re-registering tracked 2DUS. Given these large capture ranges, it may be possible to register two 3DUS images without the need for optical tracking if sufficient overlap is present, which would further simplify the technique.

Although the approach presented in this study was demonstrated with neurosurgical cases, it is also applicable in other imaging contexts (e.g., imaging the abdomen, pelvis, etc.) in which volumetric true 3DUS is deployed. As long as the images are tracked or they have sufficient overlap to allow accurate inter-image transformation, multiple volumes can be combined and rasterized into a single 3D Cartesian coordinate system. Because only one image volume (instead of multiple volumes) is produced, the efficiency of subsequent image processing can be significantly improved.

Acknowledgments

Funding from the National Institutes of Health grant R01 EB002082-11 and support from Philips Medical Systems for the iU22 3D ultrasound system are acknowledged.

References

1. Ji S, Hartov A, Fontaine K, Borsic A, Roberts DW, Paulsen KD. In: Miga MI, Cleary KR, editors. Coregistered volumetric true 3D ultrasonography in image-guided neurosurgery; Proceedings of SPIE; Bellingham, WA: SPIE; 2008. p. 69180F.
2. Duan Q, Angelini E, Song T, Laine A. Fast interpolation algorithms for real-time three-dimensional cardiac ultrasound. Proceedings of the 25th Annual International Conference of the IEEE EMBS; 2003. pp. 1192–1195.
3. Zienkiewicz OC, Taylor RL, Zhu JZ. The finite element method: its basis and fundamentals. 6. Elsevier Butterworth-Heinemann; Amsterdam: 2005.
4. Pallatroni H, Hartov A, McInerney J, et al. Coregistered ultrasound as a neurosurgical guide. Stereotact Funct Neurosurg. 1999;73:1–4.
5. Insight Segmentation and Registration Toolkit (ITK), http://www.itk.org/
6. OpenMP, http://www.openmp.org/
7. Shekhar R, Zagrodsky V. Mutual information-based rigid and nonrigid registration of ultrasound volumes. IEEE Trans Med Imaging. 2002;21(1):9–22.
8. Slomka PJ, Mandel J, Downey D, Fenster A. Evaluation of voxel-based registration of 3-D power Doppler ultrasound and 3-D magnetic resonance angiographic images of carotid arteries. Ultrasound Med Biol. 2001;27(7):945–955.