J Digit Imaging. 2011 October; 24(5): 913–925.

Published online 2010 November 18. doi: 10.1007/s10278-010-9352-z

PMCID: PMC3180551

Panayiotis Kotsas, Phone: +30-231-0886048, Fax: +30-231-0825151, Email: p.kotsas@sheffield.ac.uk.

Copyright © Society for Imaging Informatics in Medicine 2010

Image registration is a necessary procedure in everyday clinical practice. Several techniques for rigid and non-rigid registration have been developed and tested, and the state of the art is evolving from the research setting toward the incorporation of image registration techniques into clinically useful tools. In this paper, we develop a novel rigid medical image registration technique which incorporates binary projections. This technique is tested and compared to the standard mutual information (MI) methods. Results show that the method is significantly more accurate and robust than the MI methods; the accuracy is well below 0.5° and 0.5 mm. The method introduces two further improvements over MI methods: (1) for 2D registration with the use of 1D binary projections, we use minimal interpolation; and (2) for 3D registration with the use of 2D binary projections, the method converges to stable final positions, independent of the initial misregistration.

The online version of this article (doi:10.1007/s10278-010-9352-z) contains supplementary material, which is available to authorized users.

Image registration is the process of geometrically aligning two images so that corresponding voxels/pixels can be superimposed on each other. There are several applications of image registration [1]. Examples include remote sensing, medicine, cartography, and computer vision.

In the medical field, image registration is used for diagnostic purposes when images of the same anatomical structure must be superimposed on each other. Registration methods are used for combining computed tomography (CT) and magnetic resonance imaging (MRI) data to obtain more complete information about the patient, for monitoring tumor growth, for treatment verification, and for comparison of the patient's data with anatomical atlases [1]. Image registration is often a necessary procedure: it merges images from different imaging modalities and different examination dates, and it is therefore useful for diagnosis and for assessment of disease progression or remission. Different imaging modalities provide complementary information, and when the images are aligned and merged this information combines to give a more clinically useful result. In another type of application, the progression of a disease over time can be assessed by registering images of the same patient from two different examination dates; for example, after registration, measurements of tumor growth can be made.

The majority of image registration methods are based on the use of a similarity/disparity criterion which, when the two images are brought to register, is maximized/minimized. Numerical analysis techniques are used to maximize/minimize the similarity/disparity criterion. There are many different criteria, with mutual information (MI) being the standard since it is quite accurate for rigid body registration and does not require any image segmentation prior to registration.

Image registration is an active research field, and in recent years image registration methods have evolved from the research setting to being incorporated into clinically useful software tools [2]. Image registration methods can in general be divided into rigid and non-rigid. Rigid registration techniques adjust for rotations and translations only (six parameters in the 3D case); this is, for example, the case for brain scans. Non-rigid techniques assume a nonlinear transformation model and can adjust for image warping, which usually occurs due to soft tissue deformations of the body organs between different scans [2]. Medical image registration techniques are also categorized according to the type of features they use for registration. Surface-based techniques rely on the characteristics of the surfaces of the objects to be registered, while volume-based techniques use the full volume information. West et al. [3] define as volume-based "any technique which performs registration by making use of a relationship between voxel intensities within the images and as surface-based, any technique which works by minimizing a distance measure between two corresponding surfaces in the images to be matched". According to Slomka et al. [2], volume- or voxel-based techniques are more robust and accurate because they do not depend on preprocessing of the images for their accuracy. This is especially the case for the MI methods, which rely on maximizing the amount of information shared between the two images to be registered. According to Bardera [4], "MI methods have become a standard reference due to their accuracy and robustness." In Liao et al. [5], surface matching and MI methods are compared, and the conclusion is that surface matching registration algorithms could be followed by a few iterations of an MI algorithm for better accuracy.
Improvement of the standard MI algorithms is an active research field; the effort is to use a combined approach that does not rely on voxel values alone but incorporates geometrical or regional features into the computation of MI [4, 6–9].

The type of problem which is solved by the registration algorithm is another categorization criterion. The methods may be suitable for image-to-image space registration (3D–3D, 2D–3D) or physical to image space registration. 3D–3D methods register image volumes to image volumes (MR-MR, CT-MR, (positron emission tomography) PET-MR, Ultrasound-MR) [2, 3, 10]. 2D–3D registration techniques register, for example, one or more intraoperative X-ray projections of the patient and the preoperative 3D volume [11, 12]. Physical to image space registration is similar to 2D–3D registration but may use interventional techniques like bone-implanted markers for patient to image registration [13].

In this paper, we follow a novel approach to the medical image registration problem. We propose, test and compare to the standard MI methods a method which uses binary projections of the 2D or 3D images for the computation of the registration function.

Several techniques for signal-intensity projection-based image registration have been developed [14–16]. The most relevant work to this report is the method presented in Khamene et al. [14]. In this work, the registration problem is analyzed into the sub-problems of registering, using signal-intensity-based algorithms and criteria, the rendering projections of the two volumes along the three axes and adjusting the two volumes according to the projection-based computed registration parameters. In this work, we use a different similarity/disparity measure and a different iteration loop which have been shown to be very accurate and robust for volume-based registration [17]. The similarity/disparity measure allows us to use binary (shadowing) projections and not renderings simplifying the hardware limitations for projection computation presented in Khamene et al. [14].

Another projection-based technique for 3D–3D vascular registration is presented in Chan et al. [15]. In this technique, the 3D–3D registration problem is transformed into multiple 2D–3D vascular registration problems. The 2D images are the maximum intensity projection (MIP) images (gray-scale signal-intensity images) which are first generated for the reference volume along the three axes. At each iteration, three binary projections from the segmented binary floating volume are compared and registered to the corresponding MIPs. The similarity measure used is the sum-of-squared-differences.

A projection-based 2D-2D image registration technique in the presence of fixed pattern noise is presented in Cain et al. [16]. In this method, the 1D projections along the two axes are computed by accumulating pixels along the two main axes of the 2D image. The horizontal and vertical components of the shift are then computed using 1D cross-correlation. They show that the method is very robust in the presence of temporal and spatial noise and computationally efficient compared to the 2D correlation based shift estimator.
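The projection-and-correlation idea from Cain et al. [16] can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function names and the integer-shift convention are ours.

```python
import numpy as np

def axis_projections(img):
    """Accumulate pixels along the two main axes to form 1D projections."""
    return img.sum(axis=0), img.sum(axis=1)  # column sums (x-profile), row sums (y-profile)

def estimate_shift_1d(p_ref, p_mov):
    """Estimate the integer shift between two 1D profiles via cross-correlation.

    Returns d such that p_mov[i] ~ p_ref[i - d]; lag n-1 of the 'full'
    cross-correlation corresponds to zero shift.
    """
    n = len(p_ref)
    xc = np.correlate(p_mov, p_ref, mode="full")
    return int(np.argmax(xc)) - (n - 1)
```

Estimating the horizontal and vertical shift components this way needs only two 1D correlations instead of one 2D correlation, which is the efficiency argument made in [16].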

The goal of this work is to develop and test a registration solution able to address different forms of the registration problem using a common registration logic: a simple registration criterion which utilizes minimal information. We also implement a novel and easy-to-understand iteration loop which, in comparison to other minimization techniques, makes it easier to register images using less information. In this context, the motivation is the need to produce a well-engineered system of registration methods for 3D–3D rigid body registration (volume- and projection-based), 2D–3D registration, and non-rigid body registration. By well-engineered, we mean that we can address the main registration algorithm problems, which are accuracy and convergence. We also want to investigate how well the registration algorithm's convergence criterion performs in relation to the desired accuracy and the data set used; for example, we want to find out how many iterations are needed for the registration algorithm to converge.

The rest of this paper presents 2D rigid registration of MR scans using 1D binary projections and 3D registration of MR volumes using 2D binary projections. The registration function used is the mean-squared value of the weighted ratio of the binary projections. In the next section the basic characteristics of the projection-based registration methods are given. In Section 3, a full set of results for 2D and 3D registration are presented together with detailed comparisons with MI methods. Finally, we discuss and draw conclusions on the proposed methods and indicate areas for future work.

The registration function used was first introduced for 3D rigid volume registration [17, 18] and was defined as follows. Given two superimposed non-registered images, two types of areas can be identified: (1) areas where signal voxels/pixels are superimposed on other signal voxels/pixels; and (2) areas where signal voxels/pixels are superimposed on background voxels/pixels. The registration function is the mean-squared value of the weighted ratio image. This ratio is computed on a voxel-per-voxel basis, and weighting is performed by setting the ratios between signal and background voxels to a standard high value. The mean value is computed over the union of the signal areas of the two images. For the evaluation of the accuracy of the method, 3D MR images from ten patients from the database of the Cleveland Clinic Foundation were used. The images were interleaved T1- and T2-weighted studies. The T2 MR study was transformed using ten arbitrary rigid 3D transformations and then registered back to the T1 MR study. The experiments were performed at half resolution of 1.8 mm. Three to five iterations per geometric transformation parameter are needed for registration. The similarity criterion is multi-resolution in nature: when the resolution is halved, both the high-value areas and the area over which they are averaged are equally divided. The average rotational error was found to be 0.36° and the average translational error 0.36 mm, giving sub-voxel accuracy. In no experiment did convergence to a local minimum occur. The method performed well in the presence of high-noise areas. The method was extended in [19] to 2D non-rigid binary volume-based registration using a local elastic geometric transformation model based on cubic B-splines. The mathematical formulation of the registration function may be found in the supplement provided with this paper.
The difference between this paper and the work described above [17] is that only the projections, rather than the full volumes/areas, are used for the computation of the registration function.
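The weighted-ratio criterion described above can be sketched as follows, assuming a zero-valued background and an illustrative "standard high value"; the exact formulation is given in the paper's supplement, and the names here are ours.

```python
import numpy as np

def weighted_ratio_cost(ref, res, high=100.0, eps=1e-6):
    """Mean-squared weighted ratio over the union of the signal areas.

    Signal-on-signal voxels contribute the intensity ratio; signal-on-
    background voxels contribute a standard high value, so non-overlap
    is penalized and the cost falls as the images come into register.
    """
    sig_ref = ref > 0
    sig_res = res > 0
    union = sig_ref | sig_res        # area over which the mean is taken
    overlap = sig_ref & sig_res      # signal superimposed on signal
    ratio = np.full(ref.shape, high, dtype=float)
    ratio[overlap] = ref[overlap] / (res[overlap] + eps)
    return float(np.mean(ratio[union] ** 2))
```

For two identical binary images the cost is close to 1 (all ratios are 1), while any misalignment introduces high-valued non-overlap voxels and raises the mean-squared value.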

The main steps of the method are:

- Preprocessing of the MR data
- Registration using projections

The method is applied for 2D and 3D registration and has a different form for each case.

The data used consist of pairs of MR scans of the head which are provided registered. The images are preprocessed in order to separate the head area from the background; using the head area, the outer contour of the head is identified. Five different pairs are used. Four of these pairs come from the Harvard Medical Atlas database, where they are provided carefully registered [20]. This data set has been used for method evaluation in other image registration research papers [21].

For 2D registration using 1D projections, the scans correspond to 3D studies and the ones used were randomly selected from the cases of acute stroke, multiple embolic infarctions, multiple sclerosis and vascular dementia. One additional scan pair consists of T1/T2 interleaved study scans from the database of the Cleveland Clinic Foundation.

The preprocessing of the 2D scans was performed using the Bioimage Suite Software of Yale University [22]. Preprocessing consists of the following steps:

- Median filtering with a window size of 3 to reduce the noise level (this step is necessary for the Cleveland Clinic Foundation scans, which are noisier).
- *K*-means segmentation with three clusters. The purpose of the *K*-means algorithm is to compute, for a given data set *x*[1…*n*], the optimal values of the centers *V*[1…*k*] of *k* clusters, using the memberships *u*[1…*n*, 1…*c*] assigned to each data element and iteratively minimizing a within-groups sum-of-squared-errors function; a presentation of *K*-means is given in [23]. It is included as a standard segmentation choice in the Bioimage Suite software. We found through experimentation that three clusters give a better threshold for separating signal from background, and outline the head area better, than two clusters.
- Region growing for background connection and hence head area and contour extraction. We use the method included in Bioimage Suite.

The effort in the preprocessing step was to introduce as few non-registrable areas as possible, since these affect the registration accuracy.
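The three-cluster *K*-means thresholding step can be sketched as follows. This is a plain Lloyd iteration with quantile initialization, not the Bioimage Suite implementation; all names are illustrative, and in practice one would subsample the intensities before clustering.

```python
import numpy as np

def kmeans_1d(values, k=3, iters=20):
    """Plain Lloyd's k-means on a 1D intensity sample (quantile init)."""
    centers = np.quantile(values, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # assign each intensity to its nearest center
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return np.sort(centers)

def head_mask(img, k=3):
    """Binary head mask: threshold midway between the darkest
    (background) cluster center and the next one."""
    c = kmeans_1d(img.ravel().astype(float), k=k)
    thresh = 0.5 * (c[0] + c[1])
    return img > thresh
```

With three clusters the middle center absorbs the dim tissue intensities, which is consistent with the paper's observation that three clusters separate signal from background better than two.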

Figure 1 shows the scan pairs for the five cases and the areas of non-overlap after preprocessing:

Scans used for the registration experiments and the non-overlap image prior to registration, showing, *left to right*, the reference image, the reslice image and the areas of non-overlap prior to the application of the initial misregistration

The 2D registration method works in the following way:

- After preprocessing, the contour pixels of the two images are projected along the *x*- and *y*-axes, giving two sets of *x*- and *y*-projections. They are then rotated by *θ* degrees and projected onto the *x*-axis, giving a set of *θ*-degree projections. The projection of the reslice image is part of the iteration loop, whereas the projection of the reference image is performed only once. Projections are incorporated into the geometric transformation function. The minimum and maximum values of the *x*- and *y*-coordinates of the non-zero pixels of the geometrically transformed data set are computed, and the 1D projections are created by padding the in-between ranges [*x*min, *x*max], [*y*min, *y*max], [*xθ*min, *xθ*max] with a standard non-zero value. The min and max values refer to the minimum and maximum projection along the *x*, *y*, and angular directions. The projections have double the dimension of the image in order to cope with rotations and translations that move the object out of the imaging area. For the registration of translations the sum of the *x*- and *y*-projections is used, whereas for the registration of the *xy*-plane rotation the *θ*-degree projections are used. The registration function is the 1D equivalent of the volume-based definition given above. The way we compute the projections allows us to avoid interpolation within the geometric transformations; instead of interpolation, a computation of minimum and maximum *x*- and *y*-dimensions is performed.
- One of the two images is defined as the reference image. The other image is aligned to the reference and is referred to as the reslice image because, in the 3D registration case, it has to be resliced after alignment.
- The main iteration loop is entered, and one of the *N* = 3 geometric transformation parameters is adjusted with each iteration.
- For this parameter, the reslice image is transformed at *n* Chebyshev points in the transformation-units interval [−18, +18], and for these points the registration function is computed explicitly. As reported in [24], a Chebyshev approximation may be sufficient when the function is analytic, and then no least-squares-based function approximation is necessary. The transformation units are degrees for rotations and pixels for translations. The approximated function has a point of minimum which is taken as the adjustment value of the geometric transformation parameter. Using this value, the reslice image is transformed.
- The adjustment values computed for each transformation parameter in different iterations are summed to give the final adjustment value. In a previous work [17], we presented curves of the registration function and showed that the point of minimum is unique. Convergence for a transformation parameter is achieved when two iterations which adjust this transformation parameter give adjustment values of less than one transformation unit.
- It is clear from the above that the value of *θ* which registers the 2D rotation is a parameter of the algorithm. Extensive experiments showed that this value is not fixed for all initial transformations; it should be varied and the registration results compared in order to obtain the best registration result. The range of variation of this angle used for the results in this report is 40° to 50° for the usual orientation of the reference image, which is parallel to the *y*-axis. If the reference image is significantly rotated relative to the *y*-axis, the angle of rotation of the image's axis of symmetry is measured and the *θ* range is adjusted accordingly.
- Eleven angles in the range 40–50°, separated by 1° (40, 41, 42, …, 50), are used to evaluate the best *θ* by performing an exhaustive search of the projection angles.
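The projection construction in the first step above (pad the [min, max] range of the transformed contour coordinates with a constant, on an axis of double the image dimension) might be sketched as follows. The names and the centering offset are our assumptions, not the paper's code.

```python
import numpy as np

def binary_projection(coords, size, fill=1.0):
    """1D binary projection: fill [min, max] of the projected coordinates
    with a constant value. The axis has double the image dimension to
    cope with out-of-field motion; coordinates are offset by size // 2."""
    proj = np.zeros(2 * size)
    if len(coords) == 0:
        return proj
    lo = int(np.floor(coords.min())) + size // 2
    hi = int(np.ceil(coords.max())) + size // 2
    proj[lo:hi + 1] = fill
    return proj

def contour_projections(contour_xy, size, theta_deg):
    """x-, y-, and theta-degree projections of the contour pixels."""
    t = np.deg2rad(theta_deg)
    x, y = contour_xy[:, 0], contour_xy[:, 1]
    x_rot = x * np.cos(t) - y * np.sin(t)  # rotate by theta, project onto x
    return (binary_projection(x, size),
            binary_projection(y, size),
            binary_projection(x_rot, size))
```

Note that only the coordinate extrema are needed, which is why this construction avoids the per-pixel interpolation that a resampled projection would require.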

Another form of the 2D registration method incorporates multiple projections for rotational adjustment into the iteration loop. To incorporate multiple projections, a decision has to be made at each iteration about the best projection. It was found that this decision cannot be projection-based, because the projection-based sequential execution of the full algorithm relies on selecting the best final result after visual inspection of all the final results. For the automated form of the algorithm [17], the full area-based criterion was found to be robust and accurate, and this criterion was used for the incorporation of multiple projections into the iteration loop.

The implementation of the incorporated projections method uses two sets of projections with each rotational iteration, one at 40–50° and one at −50 to −40°. The best result from these two sets is kept as the correct result.

The algorithms used are:

**Algorithm 1** 2D image registration using binary projections and repetitive execution

**Algorithm 2** 2D image registration using binary projections with inclusion of theta selection in the iteration loop

Figure 2 shows an example of one projection used for the computation of the registration function for the 2D registration case.

Example of one of the 40–50° projections used for the computation of the registration function showing high and low values of the function when the images are de-registered

For the 2D case, when the projections are not incorporated in the iteration loop, registration is performed using a two-step procedure. The first step aims at bringing the two scans rotationally close, as determined by visual inspection of the result; this step is considered successful if afterwards the scans are rotated relative to each other by less than 10°. It is performed as follows: an initial registration is run with *n* = 5 Chebyshev points in the interval [−18, +18] (*A* = 18) and projection angle *θ* = 45°. In most cases, this step brings the images sufficiently close. In the cases where it fails to register, a search in the [*A*, *θ*] space is started in order to find the range and angle which achieve this goal. First, *A* is increased to *A* = 36 or *A* = 50; if a failure still occurs, a search in the projection angle space starts, with *θ* scanning the [40°, 50°] interval (or, in one case, the [−50°, −40°] interval) in steps of 1°. Once the first step succeeds, its adjustment gives the initial misalignment for the second step. The second step uses the parameter *A* = 9 with *n* = 5 Chebyshev points and performs 11 repetitive registrations in the interval [40°, 50°] with a 1° projection-angle step, choosing at the end the result with the minimum rotational error. Characteristically, in all cases the second step reduces the rotational error produced by the first step to less than 1°; in fact, in most cases the error is close to zero. The calibration is based on the fact that when the first step fails, the images are misregistered by large angles; the errors are compared with the known initial misregistration of the image. The calibration step is not necessary when the projections are incorporated in the iteration loop, where no first step is needed since, overall, no local minima occur.
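One plausible reading of the Chebyshev-point minimization used in each iteration is sketched below: evaluate the cost at *n* Chebyshev points in [−*A*, +*A*], fit a degree *n* − 1 polynomial, and take its in-interval minimizer. The paper does not specify how the minimum of the approximated function is located, so the dense grid scan here is our assumption.

```python
import numpy as np

def chebyshev_points(n, a):
    """n Chebyshev points on the interval [-a, a]."""
    k = np.arange(1, n + 1)
    return a * np.cos((2 * k - 1) * np.pi / (2 * n))

def adjust_parameter(cost, n=5, a=18.0):
    """Evaluate the registration cost at n Chebyshev points, fit a
    degree n-1 polynomial through them, and return the point of
    minimum of the fitted model inside [-a, a]."""
    x = chebyshev_points(n, a)
    y = np.array([cost(xi) for xi in x])
    p = np.polynomial.Polynomial.fit(x, y, deg=n - 1)
    grid = np.linspace(-a, a, 2001)   # dense scan of the fitted model
    return grid[np.argmin(p(grid))]
```

Because the polynomial interpolates the cost exactly at the sample points, a smooth single-minimum cost (as shown for this registration function in [17]) yields an adjustment close to the true minimizer with only five cost evaluations.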

The method was also implemented for 3D–3D registration of MR volumes using 2D parallel projections. The data used for the 3D experiments also come from the Harvard Medical Atlas database (five patient cases).

The basic characteristics of the 3D registration method are:

- The two volumes to be registered are provided as a set of 2D scans with a non-cubic voxel size of 1×1×5 mm. To create cubic-voxel volumes, a tri-linear interpolation routine is therefore used; for example, for a volume of 25 scans, a cubic-voxel volume with dimensions 256×256×120 is created.
- The two volumes are then preprocessed in order to create the binary volumes. This is done by thresholding using *K*-means segmentation with three clusters. For the MR data used in this report, this procedure gives a threshold value of around 20 (the data are unsigned char).
- The main iteration loop is then entered. With each iteration, one of the six 3D transformation parameters (*xy*-plane rotation, *yz*-plane rotation, *zx*-plane rotation, *x*-axis translation, *y*-axis translation, *z*-axis translation) is adjusted. The adjustment is according to the variance of the weighted-ratio disparity measure as computed from the projections of the two volumes. The minimization method is again Chebyshev-polynomial based, with 5 Chebyshev points in the interval [−9, +9] for all transformation parameters.
- The full volume is transformed with all transformations, and tri-linear interpolation is used for the computations.
- The data used for testing the method are from the Harvard database, specifically from the cases of Alzheimer's, AIDS dementia, multiple infarctions, acute stroke, and multiple sclerosis. The basic results are given in the following section. For testing, one of the two volumes was initially de-registered using a standard set of 10 random 3D geometric transformations and then registered using the method, after which the errors were computed. The 3D registration algorithm is:
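The 2D binary (shadowing) projections of a thresholded volume can be sketched as follows; a projection pixel is set wherever any voxel along the ray exceeds the threshold. The (z, y, x) array layout is our assumption.

```python
import numpy as np

def shadow_projections(vol, threshold=20):
    """Binary (shadow) 2D projections of a volume along its three axes.

    Assumes a (z, y, x) array layout; the threshold of 20 matches the
    value the K-means step gives for the unsigned-char MR data here.
    """
    binary = vol > threshold
    return (binary.any(axis=0),   # project along z -> xy shadow
            binary.any(axis=1),   # project along y -> xz shadow
            binary.any(axis=2))   # project along x -> yz shadow
```

Unlike the rendering projections of Khamene et al. [14], these shadows need no intensity accumulation or dedicated graphics hardware, which is the simplification the paper points to.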

**Algorithm 3** 3D registration algorithm using 2D binary projections

The above methods were compared to the mutual information and normalized mutual information methods included in the Bioimage Suite software package. An analysis of mutual information is beyond the scope of this paper, since it has been studied widely; a good presentation can be found in Pluim et al. [25]. Mutual information is an information-theoretic measure which, when maximized, maximizes the amount of information image A contains about image B (or, interchangeably, image B about image A). This maximization of information brings the images into register.
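For reference, mutual information can be computed from the joint intensity histogram of the two images; the following is a standard textbook sketch (the bin count is illustrative), not the Bioimage Suite implementation.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI from the joint intensity histogram of two aligned images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint probability
    px = pxy.sum(axis=1, keepdims=True)        # marginal of a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of b
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

MI peaks when one image predicts the other well, so an optimizer (here, conjugate gradient in Bioimage Suite) maximizes it over the transformation parameters.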

Using data from the five scan pairs described above, a total of 100 2D experiments for the alignment of differently weighted axial MR scans were performed. These experiments were conducted according to the following rules (where all the experiments were performed at full resolution):

One of the two scans was used as the reference scan. The other scan was considered to be the reslice scan. The latter was rotated and translated using a standard set of 20 2D geometric transformations and then registered to the reference scan, giving 20 registration experiments per case. For this reason, these experiments are referred to as “20 displacements” experiments. The geometric transformations parameters were randomly selected using a random number generator in the range [−45, 45°] for the *xy* rotation and [−30, 30] mm for *x* and *y* translations. The 2D geometric transformation set is shown in Table 1.
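The experimental protocol, a fixed set of random rigid 2D transforms drawn from the stated ranges, can be sketched as follows; the seed and function name are illustrative, and the actual 20 transformations used are those listed in Table 1.

```python
import numpy as np

def random_transforms(n=20, rot=45.0, trans=30.0, seed=42):
    """n random 2D rigid transforms: (xy rotation in degrees,
    x translation in mm, y translation in mm)."""
    rng = np.random.default_rng(seed)
    return np.column_stack([
        rng.uniform(-rot, rot, n),      # rotation, [-45, 45] degrees
        rng.uniform(-trans, trans, n),  # x translation, [-30, 30] mm
        rng.uniform(-trans, trans, n),  # y translation, [-30, 30] mm
    ])
```

Fixing the seed makes the "20 displacements" set reproducible across the five cases, so every method sees the same initial misregistrations.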

Table 2 shows the average errors for the 20-displacements experiments for each of the five cases.

The processing time of the method is 1–2 s on an HP A6240 Intel Quad Core 2.4 GHz PC with 2 GB RAM. The method can, of course, be implemented in parallel, reducing the processing time to less than 1 s.

When the projections are incorporated in the iteration loop, the results are similar to those obtained for sequential execution of the full registration algorithm. The errors are below 1° for rotations and 1 voxel for translations. The advantage of this form of the method is that it does not have to be implemented in two steps, since the problem of local minima is reduced. The results for the 20 experiments per case are shown in Table 3.

For 3D registration, the results show the ability of the method to converge to the correct registration position independent of the initial misregistration. Table 4 gives the 10 transformations used for deregistering the images.

The total average absolute error is 0.19° for rotations and 0.14 voxels for translations.

For transformation number 9 of the acute stroke case, it was found that the convergence criterion of two adjustments of less than 1° per transformation parameter was not adequate (it gives a *yz* error of 3.57°), and for this reason it was increased to six adjustments of less than 1°. This gives a total number of iterations between 44 and 53 and a *yz* error for this specific case of 1.55°. If allowed to proceed further, the method advances slowly to the correct position. The value of six is a compromise imposed by the necessity of producing all the results in this paper using common registration parameters.
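The convergence criterion discussed here can be stated as a small check over the history of adjustments for one transformation parameter; this is our paraphrase of the rule, with illustrative names.

```python
def converged(adjustments, required=2, tol=1.0):
    """True when the last `required` adjustments of a transformation
    parameter are all smaller than one transformation unit
    (a degree for rotations, a voxel/pixel for translations)."""
    recent = adjustments[-required:]
    return len(recent) == required and all(abs(a) < tol for a in recent)
```

The projection-based 3D method uses `required=6` (the stricter criterion above), while two sub-unit adjustments suffice for the full-volume method.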

For the acute stroke case, the projection-based method was compared with the full-volume method [17] using the same registration parameters (*n* = 5, *A* = 9) but a different convergence criterion, since two iterations with adjustments of less than 1° or one voxel are sufficient for full-volume adjustment. The total absolute error is 0.19° for rotations and 0.4 voxels for translations. The iterations needed are between 20 and 22.

The number of iterations needed is between 43 and 51. The average error is 0.3° for rotations and 0.35 voxels for translations.

The number of iterations needed is between 42 and 50. The average error is 0.07° for rotations and 0.11 voxels for translations.

The number of iterations needed is between 42 and 52. The average error is 0.11° for rotations and 0.3 voxels for translations.

The number of iterations needed is between 43 and 50. The average error is 0.56° for rotations and 0.44 voxels for translations.

In order to evaluate the performance of the method in comparison with the state-of-the-art, we performed experiments with the MI and the Normalized MI (NMI) methods using the same data. These methods are included in the Bioimage Suite software and were chosen as they compare favorably to several other image registration methods.

For the 2D case, the main parameters of these experiments were the following:

- We use the conjugate gradient method for the iteration loop with the MI methods.
- We found that, when using the same initial misregistration as in the projections methods, both MI methods fail to converge to the correct registration position on several occasions. We therefore limited these methods to an initial misregistration within [−10, +10] units (degrees or mm).
- The projection-based methods are more accurate than the MI methods even when starting from a wider initial misregistration interval. Both sequential execution of the algorithm several times and incorporation of the projections in the iteration loop make the registration method more accurate than the state-of-the-art MI methods, even when those methods use a favorable initial misregistration. The MI method gives an average rotational error of 0.399° and an average translational error of 0.64 mm. The NMI method gives an average rotational error of 0.45° and an average translational error of 0.65 mm. The projection method with repetitive execution of the program gives an average rotational error of 0.32° and an average translational error of 0.46 mm. The projection-based method with inclusion of the projections within the basic iteration loop, using the area-based criterion for best-projection selection, gives an average rotational error of 0.36° and an average translational error of 0.49 mm. These results for all five 2D registration data sets are given in Table 5.
- The processing time is the same for all methods: 1–2 s on an HP A6240 Intel Quad Core 2.4 GHz PC with 2 GB RAM.

For 3D registration, we compared again the projections method with the MI and NMI methods. The main parameters and results for these experiments are:

- We did not need to reduce the initial misalignment for the 3D experiments: with the same initial misalignment intervals as in the projections method, the MI methods are able to converge close to the correct position. We performed a thorough analysis of the final registration error with respect to the initial rotational misregistration.

For the AIDS dementia case:

- For the NMI method the average rotational error is 0.57° and the average translational error is 0.92 mm. When the initial average rotational misregistration is greater than 5° the average rotational error is 0.76° and the final translational error is 1.35 mm. When the initial misregistration is lower than 5° the average rotational error is 0.33° and the average translational error is 0.4 mm.
- For the MI method, the average rotational error is 0.6° and the average translational error is 0.93 mm. When the initial average rotational misregistration is greater than 5° the average rotational error is 0.77° and the final translational error is 1.33 mm. When the initial misregistration is lower than 5° the average rotational error is 0.38° and the average translational error is 0.43 mm.
- Similar results were obtained for the other data cases. Table 6 summarizes the results for all cases.
- Figure 3 shows the scans of the AIDS dementia case and the areas of non-overlap after a 3D registration case.

From the above, it is clear that the accuracy of the MI methods is worse than that of the projection method, and that with MI the accuracy depends on the initial misalignment. This is not the case for the projection method, whose accuracy is independent of the initial misregistration.

The method converges to nearly stable final positions. For 2D registration, the standard deviation of the error is around 0.3 for both rotations and translations; for the 3D case, it is 0.05 for rotations and 0.2 for translations.

The processing time of the projection-based method at full resolution is 7–8 min on an HP A6240 Intel Quad Core 2.4 GHz PC with 2 Gbyte RAM. The MI methods, implemented hierarchically in Bioimage Suite, take about 2 min.

From the above, it can be seen that the 3D projection-based registration method is more robust and accurate than the MI methods.

A full comparison of mutual information and normalized mutual information with other image registration methods may be found in Holden et al. [26]. Based on that study, we chose these two methods for comparison.

Using disease data, the projection methods described in this paper achieve better accuracy than state-of-the-art methods, and convergence is unambiguous: regardless of the initial misalignment, the algorithm converges to stable final positions. Another contribution of this paper is that the proposed 2D–2D registration algorithm uses minimal voxel interpolation. With other registration functions (e.g., sum-of-squared-differences, cross-correlation, MI), interpolation introduces local minima, and special techniques such as high-speed B-spline interpolation, low-pass filtering, and stochastic integration must be used to reduce or remove them [27, 28]. Previous work [17] showed that, with full volumes, interpolation does not introduce local minima into our registration method. In this work, with binary projections we use less information for registration and minimize interpolation, yet still obtain 100% convergence to the correct registration position. The preprocessing step affects the projection-based method more heavily than the volume-based method, which was not affected even by strong noise. The method is generally fast. The preferred parameters were selected by experimentation, and an effort has been made to keep them unchanged since the first implementation of the algorithm.
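To make the idea of binary projections concrete, the following is a minimal sketch of how 1D binary projections of a 2D image might be computed; the threshold value and the simple row/column occupancy counts are illustrative assumptions, not the paper's exact preprocessing pipeline.

```python
import numpy as np

def binary_projections(image, threshold):
    """Project a 2D image onto its axes after thresholding.

    Binarize the image, then count foreground pixels per row and per
    column, yielding two 1D occupancy profiles of the binary shape.
    """
    binary = image > threshold
    row_proj = binary.sum(axis=1)  # one count per row
    col_proj = binary.sum(axis=0)  # one count per column
    return row_proj, col_proj

# A small binary rectangle: rows 2-3, columns 1-4 are foreground.
img = np.zeros((6, 6))
img[2:4, 1:5] = 1.0
rows, cols = binary_projections(img, 0.5)
```

Because an integer translation of the binary image simply shifts these profiles along their axes, comparing projections under integer shifts involves no intensity interpolation, which is consistent with the minimal-interpolation property discussed above.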

A new robust method for 2D and 3D rigid registration using binary projections was developed and tested using MR scans of the head.

For the 2D case, the accuracy of the method is better than 1° and 1 pixel; in most cases, the error is less than 0.5° and 0.5 pixels. Preprocessing must be performed carefully so as not to produce non-registrable areas in the contours to be registered (for example, avoid repetitive median filtering of only one of the two images). No interpolation is necessary for 2D registration with binary projections. The registration function does not depend on signal-intensity distributions, and the method is directly applicable to binary images and contours. The method is fast, with a typical time of 1–2 s running on an HP A6240 Intel Quad Core 2.4 GHz PC with 2 Gbyte RAM.
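As a toy illustration of why no interpolation is needed, an integer-shift search over two 1D projections can be written with only array shifts and a simple sum-of-absolute-differences cost. This is a hypothetical sketch of the translational part only, not the paper's full rotational-plus-translational optimization, and the SAD cost is an assumption for illustration.

```python
import numpy as np

def best_shift(proj_ref, proj_mov, max_shift=5):
    """Find the integer shift that best aligns two 1D projections.

    Every candidate shift keeps both profiles on the pixel grid, so
    the search requires no interpolation; the cost at each shift is
    the sum of absolute differences between the profiles.
    """
    best, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        cost = np.abs(np.roll(proj_mov, s) - proj_ref).sum()
        if cost < best_cost:
            best, best_cost = s, cost
    return best

ref = np.array([0, 0, 0, 4, 4, 0, 0, 0, 0, 0])
mov = np.roll(ref, -2)  # moving image displaced by 2 pixels
```

Calling `best_shift(ref, mov)` recovers the 2-pixel displacement exactly, since the cost reaches zero on the integer grid without any resampling of intensities.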

The first results of a 3D projection-based method have been given in this report. The accuracy of the method is better than 0.5° for rotations and 0.5 voxels for translations. Compared to the full volume method [17], the method takes more steps to converge towards the correct registration position but remains as accurate. Minimal preprocessing of the images prior to registration is needed. The method is more accurate and robust than standard MI methods. It converges to stable final positions independent of the initial misregistration.

The MI methods rely on minimizing the spread of the joint histogram. To register non-rigidly transformed images, the method presented in this report could be adapted to work with projections of the joint histogram instead of projections of the images, and tested accordingly. Future work will also include applying the method to 2D–3D registration.
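The joint-histogram view of mutual information can be sketched generically as follows; the bin count and the direct histogram estimator are illustrative choices, and this is not the Bioimage Suite implementation used in the experiments.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two equally sized images via their joint histogram.

    A tight (low-spread) joint histogram concentrates probability mass,
    yielding high MI; MI-based registration maximizes this value over
    candidate transformations.
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)       # marginal of a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of b
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
a = rng.random((64, 64))
```

An image compared with itself produces a diagonal (minimally spread) joint histogram and hence a high MI score, whereas an unrelated image produces a diffuse histogram and a score near zero.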

1. Zitova B, Flusser J. Image registration methods: a survey. Image Vis Comput. 2003;21:977–1000. doi: 10.1016/S0262-8856(03)00137-9. [Cross Ref]

2. Slomka PJ, Baum RP. Multimodality image registration with software: state-of-the-art. Eur J Nucl Med Mol Imaging. 2009;36(Suppl 1):S44–S55. doi: 10.1007/s00259-008-0941-8. [PubMed] [Cross Ref]

3. West J, Fitzpatrick JM, Wang MY, Dawant BM, Maurer CR, et al. Retrospective intermodality registration techniques for images of the head: surface-based versus volume-based. IEEE Trans Med Imag. 1999;18(2):144–150. doi: 10.1109/42.759119. [PubMed] [Cross Ref]

4. Bardera A, Feixas M, Boada I, Sbert M: High-dimensional normalized mutual information for image registration using random lines. Third International Workshop on Biomedical Image Registration (WBIR). LNCS 4057, pp 264–271, 2006

5. Liao YL, Sun YN, Guo WY, Chou YH, Hsieh JC et al: A Comparison between the surface-based and mutual-information-based methods for Co-registering SPECT and MR images. Proceedings of the 29th Annual International Conference of the IEEE EMBS, Cité Internationale, Lyon, France, August 23–26, 2007, pp 868–871 [PubMed]

6. Staring M, van der Heide UA, Klein S, Viergever MA, Pluim JPW: Registration of cervical MRI using multifeature mutual information. IEEE Trans Med Imaging, 2009 (electronic pre-publication) [PubMed]

7. Pluim JPW, Maintz JBA, Viergever MA: Image registration by maximization of combined mutual information and gradient information. MICCAI 452–461, 2000 [PubMed]

8. Loeckx D, Slagmolen P, Maes F, Vandermeulen D, Suetens P: Nonrigid image registration using conditional mutual information. IEEE Trans Med Imaging, April 2009 (electronic pre-publication)

9. Lu X, Zhang S, Su H, Chen Y. Mutual information-based multimodal image registration using a novel joint histogram estimation. Comput Med Imaging Graph. 2008;32:202–209. doi: 10.1016/j.compmedimag.2007.12.001. [PubMed] [Cross Ref]

10. Roche A, Pennec X, Malandain G, Ayache N. Rigid registration of 3-D ultrasound with MR images: a new approach combining intensity and gradient information. IEEE Trans Med Imag. 2001;20(10):1038–1049. doi: 10.1109/42.959301. [PubMed] [Cross Ref]

11. Kraats EB, Penney GP, Tomazevic D, Walsum T, Niessen WJ. Standardized evaluation methodology for 2D–3D registration. IEEE Trans Med Imag. 2005;24(9):1177–1189. doi: 10.1109/TMI.2005.853240. [PubMed] [Cross Ref]

12. McLaughlin RA, Hipwell J, Hawkes DJ, Noble JA, Byrne JV, et al. A comparison of a similarity-based and a feature-based 2D–3D registration method for neurointerventional use. IEEE Trans Med Imag. 2005;24(8):1058–1066. doi: 10.1109/TMI.2005.852067. [PubMed] [Cross Ref]

13. Maurer CR, Maciunas RJ, Fitzpatrick JM. Registration of head CT images to physical space using a weighted combination of points and surfaces. IEEE Trans Med Imag. 1998;17(5):753–761. doi: 10.1109/42.736031. [PubMed] [Cross Ref]

14. Khamene A, Chisu R, Wein W, Navab N, Sauer F: A novel projection-based approach for medical image registration. Third International Workshop on Biomedical Image Registration (WBIR), 2006, pp 247–256

15. Chan HY, Chung ACS: Efficient 3D-3D Vascular Registration Based on Multiple Orthogonal 2D Projections. Second International Workshop on Biomedical Image Registration (WBIR), 2003, pp 301–310

16. Cain SC, Hayat MM, Armstrong EE. Projection based image registration in the presence of fixed pattern noise. IEEE Trans Image Process. 2001;10(12):1860–1872. doi: 10.1109/83.974571. [PubMed] [Cross Ref]

17. Kotsas P, Malasiotis S, Strintzis M, Piraino DW, Cornhill JF. A fast and accurate method for registration of MR images of the head. International Journal of Medical Informatics. 1998;52:167–182. doi: 10.1016/S1386-5056(98)00136-1. [PubMed] [Cross Ref]

18. Kotsas P: A new automated method for three dimensional registration of MR images of the head. Master’s Thesis, Dept. of Biomedical Engineering, The Ohio State University

19. Kotsas P. Non-rigid registration of medical images using an automated method. Enformatika. 2005;7:199–201.

20. Harvard Medical Atlas Database: http://www.med.harvard.edu/AANLIB/home.html

21. Wong A, Orchard J. Robust multimodal registration using local phase coherence representations. Journal of Signal Processing Systems. 2009;54:89–100. doi: 10.1007/s11265-008-0202-x. [Cross Ref]

22. Bioimage Suite Software http://www.bioimagesuite.org

23. Bezdek JC, Hall LO, Clarke LP. Review of MR image segmentation techniques using pattern recognition. Med Phys. 1993;20(4):1033–1048. doi: 10.1118/1.597000. [PubMed] [Cross Ref]

24. Press WH, Teukolsky SA, Vetterling WT, Flannery BP. Numerical Recipes in C, The Art of Scientific Computing. 2. Cambridge: Cambridge University Press; 1992.

25. Pluim JPW, Maintz JBA, Viergever MA. Mutual-information-based registration of medical images: a survey. IEEE Trans Med Imag. 2003;22(8):986–1004. doi: 10.1109/TMI.2003.815867. [PubMed] [Cross Ref]

26. Holden M, Hill DLG, Denton E, Jarosz JM, Cox T, et al. Voxel similarity measures for 3D serial MR brain image registration. IEEE Trans Med Imag. 2000;19(2):94–102. doi: 10.1109/42.836369. [PubMed] [Cross Ref]

27. Rohde GK, Aldroubi A, Healy DM., Jr Interpolation artifacts in sub-pixel image registration. IEEE Trans Image Process. 2009;18(2):333–345. doi: 10.1109/TIP.2008.2008081. [PubMed] [Cross Ref]

28. Salvado O, Wilson DL: Removal of interpolation artifacts in similarity surfaces. Third International Workshop on Biomedical Image Registration (WBIR), 2006, LNCS 4057, pp 43–49

Articles from Journal of Digital Imaging are provided here courtesy of **Springer**
