IEEE Nucl Sci Symp Conf Rec (1997). Author manuscript; available in PMC 2010 July 1.
Published in final edited form as:
IEEE Nucl Sci Symp Conf Rec (1997). 2009 October 24; 2009: 3199–3202.
doi:  10.1109/NSSMIC.2009.5401706
PMCID: PMC2895273

Accuracy of Head Motion Compensation for the HRRT: Comparison of Methods


Abstract

Motion correction in PET has become more important as system resolution has improved. The purpose of this study was to evaluate the accuracy of three motion compensation methods: event-by-event motion compensation with list-mode reconstruction (MOLAR), frame-based motion correction, and post-reconstruction image registration. Motion-compensated image reconstructions were carried out with simulated HRRT data, using a range of motion information based on human motion data. ROI analyses in high-contrast regions were performed to evaluate the accuracy of all the motion compensation methods, with particular attention to within-frame motion.

Our study showed that MOLAR with list-mode-based motion correction using accurate motion data can reliably correct for all reasonable head motions. Over all motions, the average ROI count was within 0.1±4.2% and 0.7±0.9% of the reference, no-motion value for two different ROIs. The location of the ROI centroid was within 0.7±0.3mm of that of the reference image for the raphe nucleus. Frame-based motion compensation and post-reconstruction image registration were able to correct for small motions (<5mm), but the ROI intensity began to deteriorate for medium motions (5–10mm), especially for small brain structures such as the raphe nucleus. For large (>10mm) motions, the average centroid locations of the raphe nucleus ROI had offset errors of 1.5±1.8mm and 1.8±1.8mm for the two frame-based methods. The corresponding decreases in average ROI intensity were 16.9±4.3% and 20.2±9.9% for the raphe nucleus, and 5.5±2.2% and 7.4±0.2% for the putamen. Based on these data, we conclude that event-by-event motion correction works accurately for all reasonable motions, whereas frame-based motion correction is accurate only when the within-frame motion is less than 10mm.


I. Introduction

Motion correction in PET has become more important as system resolution has improved. The High Resolution Research Tomograph (HRRT) has a resolution of better than 3 mm. Current motion compensation methods include both software registration [1–5] and hardware motion tracking using an external measurement device [6, 7]. For frame-based image registration methods, motion that occurs within a frame is not accounted for; this reduces image resolution in a complex manner. Among hardware motion tracking methods, the Polaris optical tracking tool has been used at several PET centers for brain motion correction [6, 7]. In theory, event-by-event correction has the potential for the greatest accuracy, so we have implemented this approach in MOLAR, the Motion-compensation OSEM List-mode Algorithm for Resolution-recovery reconstruction [8]. The goal of this study is to quantitatively evaluate the accuracy of both event-by-event motion compensation and frame-based image registration for a wide variety of motion profiles, based on measured human motion data. In this study we used simulated data and assumed perfect knowledge of the subject motion.

II. Methods

A. Simulation of List-mode HRRT Data

3-D list-mode data (500M events) were simulated with MOLAR's forward projection model from a static human PET brain image, derived from a high-count image obtained by summing all frames of a 120-min PET scan with [11C]AFM (a serotonin transporter ligand). The simulation included detector resolution, normalization, attenuation, and decay correction, but ignored randoms and scatter. Each simulated list-mode file had the same five-minute duration, and motion was included in the simulation, as detailed below.
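The simulation strategy above can be pictured with a toy sketch: list-mode events are drawn with probability proportional to the forward-projected count rate on each LOR, with attenuation and normalization folded in and randoms and scatter omitted. Everything here (array sizes, the random system matrix) is illustrative, not the actual HRRT forward model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the forward model: a small "image" and a fixed random
# system matrix c[k, j] ~ P(coincidence on LOR k | decay in voxel j).
n_voxels, n_lors = 64, 256
activity = rng.random(n_voxels)              # static "true" brain image
c = rng.random((n_lors, n_voxels)) * 1e-3    # geometric sensitivities
atten = np.exp(-rng.random(n_lors))          # per-LOR attenuation factors
norm = 0.9 + 0.2 * rng.random(n_lors)        # per-LOR normalization factors

# Expected count rate on each LOR (randoms and scatter ignored, as in the paper).
rate = atten * norm * (c @ activity)

# Draw list-mode events: LOR indices sampled in proportion to the rates.
n_events = 10_000
events = rng.choice(n_lors, size=n_events, p=rate / rate.sum())
```

In a real simulation the event stream would also carry timestamps, so that the rigid transformation at each moment can be applied per event, as described in Section II.C.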

B. Motion Profiles

Motion data included a large range of possible head motions from four real human measurements, tracked by infrared markers at 20 Hz and recorded as quaternions by the Vicra system. A total of 50 five-minute motion profiles were used in this study. Figure 1 shows an example motion track of one point located at the corner of the brain during a 5-min frame.

Figure 1
Motion track of one point located at the corner of the brain during a 5-min frame duration. The origin is at the center of the scanner field of view (FOV). The magnitude of the within-frame motion is 7.6mm, determined by the algorithm detailed below.

Measured motions were grouped based on the displacement of one point in the center of the ROI. The magnitude of within-frame motion is defined as twice the standard deviation of the point's displacements in three dimensions, tracked at all times when motion data are available. In our study, a small motion is defined as less than 5mm, a medium motion as 5–10mm, and a large motion as greater than 10mm.
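The motion-magnitude metric can be sketched as follows: track one point through the recorded rigid poses (quaternion plus translation) and take twice the standard deviation of its displacements. How the three dimensions are pooled is not spelled out in the text, so the pooling below (2 × RMS deviation from the mean position) is one plausible reading, and all names are illustrative.

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def within_frame_motion(quats, trans, point):
    """Magnitude of within-frame motion for one tracked point: twice the
    RMS deviation of its position from the frame-mean position, pooled
    over x, y, z (one reading of the paper's definition)."""
    track = np.array([quat_to_rot(q) @ point + t for q, t in zip(quats, trans)])
    dev = track - track.mean(axis=0)
    return 2.0 * np.sqrt((dev ** 2).sum(axis=1).mean())
```

For a point oscillating ±1mm about its mean position, this metric reports a 2mm within-frame motion, placing the frame in the "small" category.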

C. Reconstruction

Three methods of motion correction were used in this study: event-by-event list-mode motion correction, frame-based motion registration, and post-reconstruction image registration, as illustrated in Figure 2. For the event-by-event method, the motion data used in the simulation were also used in the MOLAR reconstruction algorithm, as shown in Equation 1. In this algorithm, the position of each line of response (LOR) k is first corrected by the transformation matrix T returned by the motion-tracking tool at that moment. The sensitivity image Q is calculated uniquely for each frame, incorporating the effect of motion on sensitivity [8].

Figure 2
Schematics of motion correction methods
(Equation 1)
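The equation itself did not survive extraction. As a hedged sketch, a generic list-mode OSEM update with event-by-event motion compensation takes the form below; the notation (λ for the image estimate, c for the system matrix, Q for the sensitivity image) is standard list-mode usage rather than a quotation of the paper, and the exact MOLAR formulation, including randoms, scatter, attenuation, and normalization terms, is given in [8]:

```latex
\lambda_j^{n+1} \;=\; \frac{\lambda_j^{n}}{Q_j}
\sum_{k \in \text{events}} \frac{c_{kj}}{\sum_{i} c_{ki}\,\lambda_i^{n}}
```

Here the system-matrix elements $c_{kj}$ are evaluated on the motion-transformed LOR $T(k)$ for each event, and the sensitivity image $Q_j$ is recomputed for each frame to account for motion, as described in the text.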

For the frame-based method, images were first reconstructed without attenuation or motion correction, and were then registered to a reference image, which was itself reconstructed from motionless list-mode data without attenuation correction. Image registration was performed using a mutual-information algorithm [9] and the FLIRT software [2]. This registration generates a transformation matrix, which was then used to re-reconstruct the image with attenuation and motion correction. Effectively, the difference between the frame-based and list-mode methods was that, for the former, the motion data were determined by registration, and it was assumed that there was no motion during each frame. The reconstructed images did not require further reslicing, since the data were mapped into the reference frame.

For the post-reconstruction image registration method, images were reconstructed with attenuation correction (incorrectly aligned due to motion), and were then registered to the reference frame, which contained no motion in either the simulated list-mode data or the reconstruction process.

For the no-motion-correction case, motion was not corrected during the reconstruction and no post-reconstruction registration was performed.

D. Image Analysis

Quantitative image analyses were performed to evaluate the accuracy of each motion correction method. ROIs were defined with reference to an MR template, by first registering the subject's MR image to the MR template and then registering the summed PET image to the MR image. The ROIs evaluated here were the raphe nucleus (4×5×5 voxels) and the putamen (13×21×15 voxels), both of which have high [11C]AFM binding potentials. The average activity within each ROI was calculated for all images and normalized to that of the true image used in the simulation. We also determined the centroid location of the raphe ROI by defining a larger bounding box around the ROI. Within each box, a threshold was set at 70% of the maximum intensity. Voxels above the threshold were used to calculate the centroid location, defined as the coordinates of the center of mass of the ROI within the bounding box, which was compared to that of the reference image.
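The centroid computation can be sketched as below. Whether the paper's center of mass is intensity-weighted is not stated, so the weighting here is an assumption, and all names are illustrative.

```python
import numpy as np

def roi_centroid(image, box, threshold_frac=0.70):
    """Centroid of the hot voxels inside a bounding box.

    `box` is a tuple of slices delimiting the bounding box. Voxels at or
    above `threshold_frac` of the regional maximum (the paper's 70%
    threshold) contribute to an intensity-weighted center of mass,
    reported in whole-image voxel coordinates.
    """
    region = image[box]
    mask = region >= threshold_frac * region.max()
    coords = np.argwhere(mask)                 # voxel indices within the box
    weights = region[mask]
    com = (coords * weights[:, None]).sum(axis=0) / weights.sum()
    offset = np.array([s.start or 0 for s in box])
    return com + offset                        # shift back to image coordinates
```

Thresholding before taking the center of mass keeps low-intensity background voxels inside the bounding box from dragging the centroid away from the hot spot.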


III. Results

A. Images

Figure 3 shows the reference true image and the reconstructed images using different motion compensation methods for a medium-sized motion. Compared to event-by-event motion compensation (Fig. 3B), frame-based image registration gives a slightly blurred image (Fig. 3C and 3D), as indicated by the circled hot-spot ROIs. Across each row, there is a drop in ROI intensity due to within-frame motion (columns C and D) or uncorrected motion (column E). Also, in the third row, as denoted by the arrows, the top of the head shows an artifact for the frame-based and post-reconstruction registration methods, which indicates that the motion was not completely corrected and the attenuation map was incorrectly aligned with the emission image. For the no-motion-correction case (column E), the images appear both blurred and distorted.

Figure 3
Qualitative display of reconstructed images using each motion correction method in three dimensions. (A) True image (B) Event-by-event motion correction (C) Frame-based motion correction (D) Post-reconstruction image registration (E) No motion correction.

B. ROI quantification

Figures 4 and 5 show the average count within each ROI. For each ROI, the image intensity was normalized by that of the reference image used in the simulation, for each motion correction method. The standard deviation bar represents the variation across simulations and reconstructions using different motions within the same motion category. In Figure 4, for the raphe nucleus, the event-by-event method gives normalized ROI intensity comparable to the no-motion case regardless of the magnitude of the motion. For all motions, the average ROI intensity is within 0.1±4.2% of the true ROI intensity. For the frame-based method and the post-reconstruction registration method, the average ROI intensities begin to decrease as motion becomes larger than 5mm. For motions between 5mm and 10mm, the average ROI intensities decrease by 4.0±5.4% and 3.9±4.6%, respectively. For extremely large motions (>10mm), the drops are 16.9±4.3% and 20.2±9.9%, respectively. Without any motion correction, the drop in image intensity is substantial, as shown in Figure 4.

Figure 4
The ratio of raphe nucleus ROI intensity to that of the true image as a function of the magnitude of within-frame motion, for each motion correction method.
Figure 5
The ratio of putamen ROI intensity to that of the true image as a function of the magnitude of within-frame motion, for each motion correction method.

Similarly, Figure 5 shows the average ROI intensity for the putamen, a larger brain structure. For all types of motion, the event-by-event method gives image intensity within 0.7±0.9% of the true image. The frame-based motion correction and post-reconstruction registration methods work reliably for small motions, but the drops in image intensity increase to 2.3±2.3% and 2.6±1.8% for medium motions (between 5mm and 10mm), and to 5.5±2.2% and 7.4±0.2% for large motions (>10mm), indicating that the hot spots are blurred by the within-frame motion. Because the putamen is larger than the raphe nucleus, the effect of motion on its average ROI intensity is expected to be smaller, consistent with the more drastic drop in ROI intensity for the raphe nucleus in Figure 4.

C. Displacement of ROI centroid

Figure 6 shows the displacements of the ROI centroid for the raphe nucleus. For all types of motion, event-by-event motion correction generates a centroid shift of 0.7±0.3mm, comparable to the no-motion case. It should be noted that small displacements (~0.5mm) are estimated even with no motion, due to image noise. In contrast, the frame-based method and the post-reconstruction image registration method begin to show larger average displacements and wider standard deviations as motion gets larger. These two methods show centroid displacements of 1.5±1.8mm and 1.8±1.8mm, respectively, for large motions (>10mm).

Figure 6
The displacement of the centroid of the raphe nucleus ROI, for each motion correction method.

The centroid displacement complements the average-ROI measure in evaluating the effect of motion on the image. If motion only shifts the ROI slightly, the average intensity of the ROI may remain unchanged, but the error can still be detected from the centroid location. Conversely, if motion only blurs an image without a net shift, the ROI centroid will remain unchanged, but the average ROI intensity will be reduced by the blurring.
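This complementarity is easy to demonstrate with a toy 1-D hot spot (purely illustrative, not the paper's data): a pure shift moves the centroid while barely changing the mean intensity in a fixed ROI, whereas a pure blur with the same total activity leaves the centroid in place but lowers the ROI mean.

```python
import numpy as np

x = np.arange(100, dtype=float)
spot = np.exp(-0.5 * ((x - 50) / 3) ** 2)         # reference hot spot
shifted = np.exp(-0.5 * ((x - 52) / 3) ** 2)      # net 2-voxel shift, no blur
blurred = np.exp(-0.5 * ((x - 50) / 6) ** 2) / 2  # doubled width, same area

roi = slice(40, 61)                               # fixed ROI around the spot

def centroid(profile):
    """Intensity-weighted center of mass along x."""
    return (x * profile).sum() / profile.sum()
```

With these profiles, the shifted spot's centroid moves to ~52 while its ROI mean stays within a fraction of a percent of the reference, whereas the blurred spot's centroid stays at ~50 while its ROI mean drops by roughly 10%.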

IV. Discussion

The ROIs that we selected in this study are located mainly at the center of the brain, where the effect of uncorrected motion on image quality is expected to be less significant than in peripheral brain regions. As a next step, brain regions located off the center of the FOV will be evaluated. In addition, the effect of emission distributions from other tracers that have different hot spots and contrasts than [11C]AFM will be assessed.

In our study, we fixed the duration of the frames to be five minutes and the total number of simulated events to be ~90M for all frames; this is a high count level. Future work will involve simulating shorter frames and fewer events, which will alter the impact of within-frame motion on image quality as well as the accuracy of the software registration for the frame-based method, since lower counts will lead to noisier images and thus added uncertainty in software-based image registration. For the frame-based method, we eliminated the effects of possible image registration error by reslicing each frame to the reference frame, which had no motion in either the simulation or the reconstruction; this left within-frame motion as the major source of error for the frame-based method and gave this method an advantage. In practice, such a motionless reference frame is not available, and we expect additional errors in frame-based motion correction.

Another way of performing frame-based motion correction involves averaging the externally measured motion information over each frame and applying that transformation matrix to align the transmission images for attenuation correction. This process eliminates the potential bias of software image registration. The motion information may also be used to frame the list-mode data, such that whenever the motion exceeds a pre-determined threshold, a new frame is started, until the maximum frame duration is reached or the threshold is triggered again [10]. By doing so, within-frame motion is minimized, and this approach can be used with sinogram-based reconstructions. However, its drawback is that it depends on the nature of the motion: high-frequency motion of large magnitude will result in short frames and noisy reconstructed images.
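The motion-triggered framing scheme of [10] described above can be sketched as follows; the function name, the units (mm, s), and the scalar displacement summary are illustrative assumptions, not code from either paper.

```python
import numpy as np

def motion_adaptive_frames(times, displacement, threshold=5.0, max_duration=300.0):
    """Split a scan into frames from tracked motion: start a new frame
    whenever the displacement (mm) since the start of the current frame
    exceeds `threshold`, or when the frame reaches `max_duration` (s).
    Returns a list of (start_time, end_time) pairs."""
    frames = []
    start_i = 0
    for i in range(1, len(times)):
        moved = abs(displacement[i] - displacement[start_i])
        if moved > threshold or times[i] - times[start_i] >= max_duration:
            frames.append((times[start_i], times[i]))
            start_i = i
    if times[start_i] < times[-1]:
        frames.append((times[start_i], times[-1]))
    return frames
```

The trade-off noted in the text is visible here: frequent large excursions of `displacement` trigger many short frames, each reconstructed from few counts and therefore noisy.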

We will also evaluate these methods by simulating 4-D list-mode data that incorporate tracer kinetic information. Each frame will then contain temporal information on the tracer distribution, so that the time-activity curves (TACs) of various ROIs can be fitted with compartmental models, and tracer kinetic parameters such as the binding potential and the volume of distribution can be estimated and compared to the true values. This study will also serve to explore possible figures of merit for our future studies of motion correction in real human PET scans, where no true image is available.
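As an illustration of the planned kinetic analysis, the sketch below generates a noisy TAC from a one-tissue compartment model and recovers K1, k2, and the volume of distribution by least squares. The model choice, the toy plasma input function, and all numbers are assumptions for illustration; [11C]AFM quantification would use the models appropriate to that tracer.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 120.0, 121)            # minutes, 1-min grid
cp = t * np.exp(-t / 10.0)                  # toy plasma input function

def one_tissue(t, K1, k2):
    """One-tissue compartment model: C_T = K1 * exp(-k2*t) convolved with C_p."""
    dt = t[1] - t[0]
    return K1 * np.convolve(np.exp(-k2 * t), cp)[: len(t)] * dt

true_tac = one_tissue(t, 0.5, 0.1)
noisy_tac = true_tac + 0.01 * np.random.default_rng(1).standard_normal(len(t))

(K1, k2), _ = curve_fit(one_tissue, t, noisy_tac, p0=[0.3, 0.05])
VT = K1 / k2                                # volume of distribution
```

Comparing fitted parameters such as VT against their known true values, rather than comparing ROI intensities, would give a figure of merit tied directly to the quantitative endpoint of a PET study.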


V. Conclusion

Both quantitative and qualitative results demonstrate that event-by-event motion correction is superior to frame-based motion correction and post-reconstruction image registration, especially for large motions (>10mm). Naturally, these results depend on accurate measurement of head motion. These results matter because PET scans of long duration are divided into frames, and motion within a frame can limit the capability of frame-based motion compensation and degrade the resolution and accuracy of PET images. Although extremely large motions (>10mm) rarely occur (<3% of frames, based on a review of 20 studies, each with 33 frames), if the external measurement device correctly tracks the subject motion, any residual motion can be eliminated by software motion correction. Additional future work will explore the effect of dividing a 4-D scan into frames according to the detected motion magnitudes, and other figures of merit to evaluate the effect of motion on image quality and quantification.


Acknowledgment

We thank Zhongdong Sun for programming support and the staff of the Yale PET Center for the studies that formed the basis of this work. This work was supported by Grant Number R01NS058360 from the National Institute of Neurological Disorders and Stroke.

Contributor Information

Xiao Jin, PET Center, Yale University, New Haven, CT, USA.

Tim Mulnix, PET Center, Yale University, New Haven, CT, USA.

Beata Planeta-Wilson, PET Center, Yale University, New Haven, CT, USA.

Jean-Dominique Gallezot, PET Center, Yale University, New Haven, CT, USA.

Richard E. Carson, PET Center, Yale University, New Haven, CT, USA.


References

1. Ashburner J, Andersson JL, Friston KJ. Image registration using a symmetric prior--in three dimensions. Hum Brain Mapp. 2000;9(4):212–25.
2. Jenkinson M, Smith S. A global optimisation method for robust affine registration of brain images. Med Image Anal. 2001;5(2):143–56.
3. Maes F, et al. Multimodality image registration by maximization of mutual information. IEEE Trans Med Imaging. 1997;16(2):187–98.
4. Woods RP, Mazziotta JC, Cherry SR. MRI-PET registration with automated algorithm. J Comput Assist Tomogr. 1993;17(4):536–46.
5. Woods RP, Cherry SR, Mazziotta JC. Rapid automated algorithm for aligning and reslicing PET images. J Comput Assist Tomogr. 1992;16(4):620–33.
6. Bloomfield PM, et al. The design and implementation of a motion correction scheme for neurological PET. Phys Med Biol. 2003;48(8):959–78.
7. Herzog H, et al. Motion artifact reduction on parametric PET images of neuroreceptor binding. J Nucl Med. 2005;46(6):1059–65.
8. Carson R, et al. Design of a motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction of the HRRT. IEEE Nuclear Science Symposium and Medical Imaging Conference; 2003. M16-6.
9. Wells WM 3rd, et al. Multi-modal volume registration by maximization of mutual information. Med Image Anal. 1996;1(1):35–51.
10. Fulton RR, et al. Correction for head movements in positron emission tomography using an optical motion-tracking system. IEEE Trans Nucl Sci. 2002;49(1):116–123.