IEEE Trans Med Imaging. Author manuscript; available in PMC 2010 August 12.
Published in final edited form as:
PMCID: PMC2920454

Accurate Event-Driven Motion Compensation in High-Resolution PET Incorporating Scattered and Random Events


With continuing improvements in the spatial resolution of positron emission tomography (PET) scanners, small patient movements during PET imaging have become a significant source of resolution degradation. This work develops and investigates a comprehensive formalism for accurate motion-compensated reconstruction that at the same time remains computationally feasible for high-resolution PET. In particular, this paper proposes an effective method to incorporate the presence of scattered and random coincidences in the context of motion (which is similarly applicable to various other motion correction schemes). The overall reconstruction framework takes into consideration missing projection data which are not detected due to motion and, additionally, incorporates information from all detected events, including those which fall outside the field-of-view following motion correction. The proposed approach has been extensively validated using phantom experiments as well as realistic simulations of a new mathematical brain phantom developed in this work, and the results of a dynamic patient study are also presented.

Index Terms: Positron emission tomography (PET), iterative reconstruction, motion correction, system response

I. Introduction

Recent developments in 3-D positron emission tomography (PET) systems have enabled the spatial resolution to reach the 2–4 mm full-width at half-maximum (FWHM) range. With such improvements in spatial resolution, small patient movements during PET imaging become a significant source of resolution degradation.

Below, we first review correction methods in the literature that attempt to address the problem of rigid motion. Most of the existing work on this type of motion has been investigated and implemented in brain PET imaging, since nonrigid (cardiac and respiratory) motion, dominant in whole-body and cardiac PET imaging, is absent in this case.

A. Overview of Motion Compensation Techniques

As a typical PET brain imaging session can last hours, it is not reasonable to expect a patient to remain motionless during this time. A number of head restraints may be used, such as thermoplastic masks or neoprene caps, which lower the amount of motion but do not eliminate it. Even with head restraints, typical translations in the range of 5–20 mm and rotations of 1–4° are observed,1 depending on the type of mask and the duration of scan (e.g., see [1], [2], and [3] in which a study of various types of head movements has been presented).

Methods to correct for patient movements were in the past largely based on correction of interscan movements. These (software-based) methods involved the division of a scan into a number of frames, followed by spatial registration of the reconstructed images using mathematical algorithms (e.g., see [4] and [5]). However, motion correction (MC) strategies in emission computed tomography that rely exclusively on emission data itself are inadequate for robust clinical usage, since 1) they depend on the quality of the scan data including noise characteristics and 2) they assume the activity distribution does not significantly change within the frames, whereas the frames are chosen a priori. More successful methods, explained next, rely on motion information provided by an external motion-tracking device (e.g., the Polaris system [3], which was also used in this work as elaborated in Section III).

Assuming accurate measurement of patient movement during the scan, a number of approaches to MC have been proposed:

  1. One method [6], [2] involves dividing the detected events into multiple acquisition frames (MAFs). Every time the displacement of the patient is measured to be larger than a specified threshold, the PET data are saved in a new frame. This is then followed by correction of the individually-reconstructed images of the MAFs, via rotation and translation, to compensate for the measured amount of motion (i.e., this is an image-driven approach). The major limitation of the MAF approach is that by using a high motion threshold, motion within the frames is neglected, while lowering the motion threshold can instead result in the acquisition of an increasing number of low-statistics frames to be reconstructed. This is problematic because iterative reconstruction algorithms are typically prone to exhibit image bias in the case of poor statistics (due to the use of nonnegativity constraints) [7], [8].
  2. Another image-driven correction method proposed by Menke et al. [9] involves postprocessing of the motion-blurred reconstructed images using deconvolution operators (whose shape is determined by the measured motion). This method, however, has not attracted much attention because the deconvolution process amplifies the noise in the PET data, and furthermore, when the movements include significant rotation, spatially-variant deconvolution filters need to be employed, potentially introducing other artifacts [9].
  3. Another possible approach is to model the effect of motion in (the image-space component of) the system matrix of the EM algorithm, which has been proposed and implemented by Qiao et al. [10] for the case of cardiac- and/or respiratory-gated PET data, wherein the nonrigid motion needs to be somehow extracted, e.g., using 4-D CT ([12] can be consulted for a comparison of implementation methodologies). In the present context of motion-contaminated nongated data (particularly in brain imaging), the rigid motion can instead be directly measured using external tracking and incorporated2 in the system matrix, as investigated in [13]. In this context, however, this method is only ideal for scanners without list-mode acquisition capability (i.e., when the detected events cannot be individually compensated for motion), as it merely models the overall motion of the subject and would converge very slowly.
  4. Another approach consists of transforming the coordinates of the lines-of-response (LORs) along which individual events are measured to where they would have been measured if the object had not moved [14] (this is an event-driven approach). The method was elaborated and implemented by Menke et al. [9]. However, in that work, due to hardware limitations, the motion-corrected LORs were corrected by normalization factors for the transformed LORs (as opposed to those corresponding to the original detector-pairs along which the events were detected). This normalization mismatch has been shown to result in artifacts [15].

Alternatively, to solve this problem, one requires a PET scanner either 1) equipped with more specialized hardware to achieve accurate on-the-fly normalization correction followed by LOR-transformation; e.g., see [16], or 2) capable of acquiring data in list-mode format, so that LOR corrections can be accurately performed post-acquisition; e.g., see [1].
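To make the event-driven approach concrete, the following sketch (all names and numbers hypothetical, assuming list-mode access to individual events) applies the inverse of a measured rigid head transform to the endpoints of a detected LOR, recovering the LOR along which the event would have been detected had the head not moved:

```python
import numpy as np

def rigid_inverse(R, t):
    """Inverse of the rigid transform x -> R @ x + t."""
    Rinv = R.T
    return Rinv, -Rinv @ t

def motion_correct_lor(p1, p2, R, t):
    """Map the endpoints of a detected LOR back to where the event would
    have been detected in the absence of motion (event-driven MC).
    R, t describe the measured head motion at the event's time stamp."""
    Rinv, tinv = rigid_inverse(R, np.asarray(t, float))
    return Rinv @ p1 + tinv, Rinv @ p2 + tinv

# usage: a 2-degree rotation about z plus a 3 mm axial shift (invented)
theta = np.deg2rad(2.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.0, 0.0, 3.0])
p1, p2 = np.array([120.0, 0.0, 10.0]), np.array([-120.0, 5.0, 40.0])
# simulate detection of a moved event, then undo the motion
q1, q2 = motion_correct_lor(R @ p1 + t, R @ p2 + t, R, t)
assert np.allclose(q1, p1) and np.allclose(q2, p2)
```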

B. Beyond the Purely Event-Driven Approach

The above purely event-driven approach neglects two issues [17] which we shall refer to as issue-1 and issue-2.

Issue-1: An event that would otherwise have been detected can exit the field-of-view (FOV) because of motion. This results in a loss of data along the relevant LORs, an effect that is not modeled by regular reconstruction methods.

Issue-2: An event that is normally not detected, i.e., is not in the FOV, may fall within the FOV because of motion. Therefore, following MC, some detected events may correspond to no actual detector pairs.

These two effects can occur in two ways:

  1. Along the axial direction of the scanner, via translation (as shown in Fig. 1) or rotation (not shown).
    Fig. 1
    Axial motion can result in (issue-1) a detectable LOR i falling outside the FOV (shown as i′) and in (issue-2) an out-of-FOV LOR k falling within the FOV (shown as k′). The effect is shown for translation, but is equally valid for rotation.
  2. Along the transaxial direction (similarly via translation or rotation) for scanners with gaps in between the detectors (an example of this is the high-resolution research tomograph (HRRT) [18] which has an octagonal design with gaps in-between the heads). This effect is shown in Fig. 2 (for the case of translation).
    Fig. 2
    Transaxial motion, for scanners with gaps in between the detector heads, can result in the exact same issues as shown in Fig. 1. The effect is shown due to translation, but is equally valid for rotation.
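Both failure modes reduce to asking whether a (motion-corrected) LOR endpoint still lands on a physical detector. The following sketch evaluates such an indicator for a hypothetical octagonal, HRRT-like geometry; the axial extent and gap width are invented for illustration and are not the parameters of any real scanner:

```python
import numpy as np

AXIAL_EXTENT = 252.0               # mm; assumed axial FOV (hypothetical)
N_HEADS = 8                        # octagonal, HRRT-like geometry
GAP_HALF_ANGLE = np.deg2rad(4.0)   # assumed half-width of inter-head gaps

def endpoint_detected(p):
    """True if an LOR endpoint p = (x, y, z) lands on a physical detector:
    within the axial extent and not inside a transaxial inter-head gap."""
    if not (0.0 <= p[2] <= AXIAL_EXTENT):
        return False                          # exited the FOV axially
    sector = 2.0 * np.pi / N_HEADS
    phi = np.arctan2(p[1], p[0]) % sector     # fold angle into one sector
    return GAP_HALF_ANGLE <= phi <= sector - GAP_HALF_ANGLE

def delta(p1, p2):
    """1 iff both endpoints are detectable (the delta indicator of the text)."""
    return int(endpoint_detected(p1) and endpoint_detected(p2))

a = np.array([100.0, 30.0, 100.0])                       # hits a head
b = np.array([-100.0, -30.0, 150.0])
assert delta(a, b) == 1
assert delta(a, np.array([-100.0, -30.0, 300.0])) == 0   # axial exit
assert delta(a, np.array([100.0, 1.0, 100.0])) == 0      # falls in a gap
```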

In particular, Qi and Huesman [19] proposed a list-mode reconstruction approach that addressed both aforementioned issues via modeling of motion into the system matrix of the EM algorithm. In [17], a feasible framework (leading to very efficient computations of the motion-corrected sensitivity images) within the contexts of both histogram-mode and list-mode algorithms was developed; methods to additionally incorporate estimated randoms and scattered events are further developed in Section II.

Neglecting issue-1 can produce image artifacts, as demonstrated by simulation [20], [19], [15] or experiment [17]. On the other hand, one may note that neglecting issue-2 should not result in image artifacts, since the patient should still be sampled sufficiently by the existing detector pairs; nevertheless, 1) it can result in a loss of signal-to-noise ratio (SNR) in the images, since some of the measured signal with useful information is simply discarded;3 and 2) ironically, consideration of these additional events can be shown (see Section I-D) to yield a very useful mathematical property, resulting in a considerable reduction in the computational task of calculating the sensitivity correction factors in the EM algorithm (e.g., by a factor of ~30 for the HRRT scanner).

Below, we mention two other approaches that have been proposed in the literature (addressing issue-1 only).

  1. The method by Thielemans et al. [21] involves scaling of motion-corrected sinogram bins in order to correct for events that were lost due to motion, where the scale factors are computed from the measured motion information.
  2. The method by Buehler et al. [15] similarly involves scaling the motion-corrected sinogram bins for counts that were lost due to motion, with the difference that this method precorrects the detected events by their normalization factors prior to histogramming, whereas in the former, normalization correction is performed following motion-corrected histogramming (by a calculated overall factor).

The above two methods (further compared to our approach at the end of Section I-C) have a few potential difficulties.

  1. Consideration of noise enhancement issues may be required when dividing the sinogram bins by small scale factors [21].
  2. They are computationally intense (similar to the work in [19]) since they require calculating the effect of motion for the entire duration of the scan along all LORs (thus problematic in the context of high-resolution scanners).
  3. They explicitly address issue-1 but not issue-2 as they simply discard motion-corrected events which do not correspond to actual detector elements.

Next, following [17], an EM approach addressing the above issues is elaborated, with the inclusion of terms for randoms and scattered events; accurate methodologies to estimate these terms within the motion-compensation framework are developed in Section II.

C. Accurate Motion Corrected EM Reconstruction

Denoting $f_j^m$ as the activity (i.e., emission rate) in voxel j (j = 1,…, J) estimated at the mth iteration, and $p_{ij}$ as the probability of an emission from voxel j being detected along LOR i, the ordinary Poisson expectation maximization (OP-EM) algorithm is given by

$$f_j^{m+1} = \frac{f_j^m}{s_j} \sum_{i=1}^{I} p_{ij}\, \frac{n_i}{T\left(\sum_{k} p_{ik} f_k^m + \bar{R}_i + \bar{S}_i\right)} \quad (1)$$

where $n_i$ refers to the number of events detected along LOR i (i = 1,…, I), $\bar{R}_i$ and $\bar{S}_i$ are the estimated random and scatter count rates expected along the LOR i, and T represents the scan duration. The sensitivity correction factor $s_j = \sum_{i=1}^{I} p_{ij}$ is a summation over all possible measurable LORs (i = 1,…, I) and calculates the probability of an emission from voxel j being detected anywhere.

We first divide a given scan (of duration T) into Q motion-intervals (t = 1,…, Q), each with a duration ΔT_t within which movements remain below a very small threshold4 (e.g., as done in [6]): in the current work, the motion threshold was set to a tenth of the scanner resolution, which, as described in Section III-B, does not pose a noticeable increase in computation owing to the ease of computing the motion-corrected sensitivity factors. Following our notation in [17], we then introduce an invertible operator $\mathcal{L}_t(\cdot)$ which models the motion of the object by transforming the LOR i along which an event would have been detected in the absence of motion, to the LOR i′ along which the event is detected during interval t (Fig. 3). Motion-compensated sinograms are then obtained by binning each detected LOR i′ along $i = \mathcal{L}_t^{-1}(i')$.
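The division of a tracked scan into motion-intervals can be sketched as follows; `motion_intervals` and the scalar displacement stream are hypothetical stand-ins for the tracker interface, and the threshold value is invented:

```python
def motion_intervals(times, displacements, threshold):
    """Return (start, end) index pairs of intervals within which motion
    stays below `threshold` relative to the interval's reference pose
    (the text uses a tenth of the scanner resolution as threshold)."""
    intervals, start = [], 0
    ref = displacements[0]                 # reference pose of current interval
    for k in range(1, len(times)):
        if abs(displacements[k] - ref) > threshold:
            intervals.append((start, k))   # close the interval, open a new one
            start, ref = k, displacements[k]
    intervals.append((start, len(times)))
    return intervals

# hypothetical displacement magnitudes (mm) sampled once per second
times = list(range(10))
disp  = [0.0, 0.05, 0.1, 0.5, 0.55, 0.6, 1.2, 1.25, 1.2, 1.3]
ivals = motion_intervals(times, disp, threshold=0.25)
assert ivals == [(0, 3), (3, 6), (6, 10)]
```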

Fig. 3
An event that would have been detected along LOR i is detected along LOR $i' = \mathcal{L}_t\{i\}$ due to motion. In image-space, the effect of motion can be characterized by a transformation from voxel j to $j' = \mathcal{M}_t\{j\}$.

Next, referring to Fig. 3, we note that the time-varying (due to motion) probability $P_{i'j}^{t}$ of detecting an event generated during interval t from voxel j along LOR i′ (i.e., prior to MC; thus binned along LOR i following MC) is given by5

$$P_{i'j}^{t} = N_{i'}\, A_i\, \delta_{i'}\, g_{ij} \quad (2)$$

wherein the system matrix has been decomposed into geometric $g_{ij}$, as well as attenuation $A_i$ and normalization $N_{i'}$ factors,6 and

$$\delta_{i'} = \begin{cases} 1 & \text{if LOR } i' \text{ corresponds to a physical detector-pair} \\ 0 & \text{otherwise} \end{cases} \quad (3)$$

is used to denote whether or not the LOR i′ corresponds to physical detector-pairs. We next note that the overall probability $p_{ij}$ of detecting an event generated at voxel j anytime during the scan (t = 1,…, Q) and binned along an LOR i (after MC) can be written as

$$p_{ij} = \sum_{t=1}^{Q} \frac{\Delta T_t}{T}\, P_{i'j}^{t} \quad (4)$$

Combining (2) and (4), the overall system matrix can be written as

$$p_{ij} = A_i\, g_{ij} \sum_{t=1}^{Q} \frac{N_{i'}\,\delta_{i'}\,\Delta T_t}{T} \quad (5)$$

Next, we define

$$\bar{N}_i = \sum_{t=1}^{Q} \frac{N_{i'}\,\delta_{i'}\,\Delta T_t}{T} \quad (6)$$

which can be thought of as the time-averaged detection-efficiency of the LORs that contribute to an LOR i upon MC, and using the substitution of (5) into (1), we arrive at

$$f_j^{m+1} = \frac{f_j^m}{\bar{s}_j} \sum_{i=1}^{I} A_i\, g_{ij}\, \bar{N}_i\, \frac{n_i}{T\left(A_i \bar{N}_i \sum_{k} g_{ik} f_k^m + \bar{R}_i + \bar{S}_i\right)} \quad (7)$$

where

$$\bar{s}_j = \sum_{i=1}^{I} A_i\, g_{ij}\, \bar{N}_i \quad (8)$$
It is worth noting (as implied by the present framework) that ni in (7) is the number of events placed (and not necessarily detected) along LOR i following motion compensation.

Observation (in comparison with works [21] and [15])

It is worth noting that, in our notation, one is able to show that the method introduced by Thielemans et al. [21] actually corresponds to precorrecting each motion-corrected sinogram bin i by the overall normalization correction factor $\bar{N}_i = \sum_{t=1}^{Q} N_{i'}\,\delta_{i'}\,\Delta T_t / T$. By contrast, the method implemented by Buehler et al. [15] first precorrects each detected event along an LOR i′ by the corresponding normalization factor $N_{i'}$, followed by motion-corrected histogramming (i.e., into bin i). This is then followed by dividing the sinogram bins by the factor $\sum_{t=1}^{Q} \delta_{i'}\,\Delta T_t / T$, which simply represents the fraction of time an LOR i was in the FOV (i.e., passed through scanner detectors).

On the other hand, the method investigated in this work 1) does not have the potential problem of noise amplification encountered in the aforementioned two precorrection schemes, and 2) as seen next, can be accurately simplified to a form that is not burdened with the computationally-intense task of calculating the expressions $\sum_{t=1}^{Q} N_{i'}\,\delta_{i'}\,\Delta T_t / T$ or $\sum_{t=1}^{Q} \delta_{i'}\,\Delta T_t / T$.
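A toy numerical comparison of the two precorrection factors discussed above; all per-interval values are invented for illustration:

```python
# for one motion-corrected bin i: normalization factor N_i' of the LOR it
# maps to in each interval, whether that LOR is in the FOV (delta), and the
# interval durations (all values hypothetical)
N_iprime = [0.9, 1.1, 1.0, 0.0]   # N for the contributing LORs
delta    = [1,   1,   1,   0  ]   # delta_i': in FOV or not
dT       = [10., 20., 30., 40.]   # interval durations, s
T = sum(dT)

# Thielemans et al. [21]: overall normalization factor for bin i
N_bar = sum(n * d * t for n, d, t in zip(N_iprime, delta, dT)) / T

# Buehler et al. [15]: events are precorrected by N_i' individually, then
# bins are divided by the fraction of time the LOR was in the FOV
fov_fraction = sum(d * t for d, t in zip(delta, dT)) / T

assert abs(N_bar - 0.61) < 1e-12
assert abs(fov_fraction - 0.6) < 1e-12
```

Note that both factors become small for LORs that spent little time in the FOV, which is the source of the noise-amplification concern mentioned above.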

D. Calculation of Sensitivity Factors

A number of possible (though not necessarily accurate) methods may be considered in order to reduce the computational complexity of expressions (6) or (8).

  1. By increasing the motion threshold, below which movements are neglected, thus resulting in fewer motion frames. However, this approach is bound to introduce resolution degradation and is therefore not optimal.
  2. By application of the compression method in [15] in which neighboring LORs are grouped together. This method is also an approximation and can potentially result in inaccuracies.
  3. Via backprojection [in (8)] of only a randomized subset of the projection-space, as proposed by Carson et al. [22]. Qi and Huesman [23] have shown that the particular randomization method is critical to the accuracy of the estimated sensitivity factors, especially as inaccuracies will be amplified in subsequent iterations of the EM algorithm; more accurate (yet more time-consuming) Monte Carlo randomization techniques have been proposed [24].

Our approach

Alternatively, we are able to show [17] that an accurate, nonrandomized, and considerably faster method is possible if the data are precorrected for attenuation. This is obtained by first noting that instead of modeling motion in the LOR-domain using the operator $\mathcal{L}_t(\cdot)$, one can instead map the trajectory of the image voxels using an operator $\mathcal{M}_t(\cdot)$ such that $j' = \mathcal{M}_t(j)$ represents the position of a voxel j during interval t, as depicted in Fig. 3.

Next we note that, if attenuation precorrection of the sinogram bins is performed, the algorithm (7) is instead written as

$$f_j^{m+1} = \frac{f_j^m}{\bar{s}_j} \sum_{i=1}^{I} g_{ij}\, \frac{n_i/A_i}{T\left(\sum_{k} g_{ik} f_k^m + \frac{1}{A_i}\left[\frac{\bar{R}_i}{\bar{N}_i} + \frac{\bar{S}_i}{\bar{N}_i}\right]\right)} \quad (9)$$

where the sensitivity term is given by

$$\bar{s}_j = \sum_{i=1}^{I} g_{ij}\, \bar{N}_i \quad (10)$$

It can then be shown [17] that

$$\bar{s}_j = \sum_{t=1}^{Q} \frac{\Delta T_t}{T}\, s_{j'(t)} \quad (11)$$

where $s_j = \sum_{i=1}^{I} g_{ij} N_i$ is the conventional sensitivity correction factor. The important result is that for any voxel j, the term $\bar{s}_j$ may be evaluated by motion-interval weighting of conventional sensitivity factors evaluated along the trajectory $j'(t) = \mathcal{M}_t(j)$ of a voxel j as it undergoes motion. In other words, the time-consuming motion-averaging in the projection-space (10) can instead be performed in the image-space. This idea is illustrated in Fig. 4.

Fig. 4
The motion-compensated overall sensitivity correction factor $\bar{s}_j$ for a particular voxel j can be calculated in image-space by $(\Delta T_t/T)$-weighted evaluation of the conventional sensitivity term along the trajectory $j'(t) = \mathcal{M}_t(j)$.

As an example, for a typical HRRT-scanner acquisition (span 3, maximum ring difference 67), the proposed image-space approach can result in a factor of ~30 speedup in the calculation of the sensitivity factors (the sinogram-size for a single frame is ~470 M, compared to only ~14 M voxels in the image-space).
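The image-space evaluation above can be sketched as follows; nearest-voxel sampling is used for brevity (a real implementation would interpolate, e.g., trilinearly), and the transforms and image are invented:

```python
import numpy as np

def motion_averaged_sensitivity(s, transforms, dT):
    """Image-space evaluation of the motion-averaged sensitivity: for each
    voxel j, average the conventional sensitivity image s along the voxel
    trajectory j'(t), weighted by interval durations (cf. eq. (11))."""
    T = sum(dT)
    shape = s.shape
    idx = np.indices(shape).reshape(3, -1).T.astype(float)  # voxel coords j
    s_bar = np.zeros(s.size)
    for (R, t), dTt in zip(transforms, dT):
        moved = idx @ R.T + t                    # j'(t) for every voxel j
        nearest = np.clip(np.rint(moved), 0,
                          np.array(shape) - 1).astype(int)
        s_bar += (dTt / T) * s[nearest[:, 0], nearest[:, 1], nearest[:, 2]]
    return s_bar.reshape(shape)

# sanity check: with no motion (identity transforms) the weighted average
# reduces to the conventional sensitivity image itself
s = np.random.default_rng(0).random((4, 4, 4))
I3, z = np.eye(3), np.zeros(3)
s_bar = motion_averaged_sensitivity(s, [(I3, z), (I3, z)], [30.0, 70.0])
assert np.allclose(s_bar, s)
```

The speedup quoted above comes precisely from this swap of domains: the loop runs over ~14 M voxels per motion-interval instead of ~470 M sinogram bins.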

II. Accurate Corrections for Scattered Events and Randoms

In this section, we elaborate on a methodology to accurately incorporate corrections for scattered events and randoms in the presence of motion. It must be noted that the proposed approach is generally applicable for estimation of scattered events and randoms in other analytic and iterative reconstruction tasks and as such can be similarly applied within other MC schemes.

A. Calculation of the Scatter Term

We propose that the term $\bar{S}_i/\bar{N}_i$ in (9) can be simplified. In this work, we have used the Watson single scatter simulation (SSS) technique [25]. The method has the property that instead of calculating the expected rate of scattered events $S_i^{det}$ detected along each LOR i, it first calculates the term $S_i$ defined such that it relates to $S_i^{det}$ via

$$S_i^{det} = N_i\, S_i \quad (12)$$

wherein $S_i$ is thus effectively the expected rate of incident scattered events.7 As noted by Watson [25], calculation of $S_i = S_i^{det}/N_i$ involves ratios of detector efficiencies, which can be estimated more accurately than the efficiencies themselves.

We shall argue in this section that the term $\bar{S}_i/\bar{N}_i$ in (9) can be replaced by $S_i$. Let us first consider the case without motion. As shown in Fig. 5, for a coincidence event detected at detectors A and B (defining an LOR i), the expected rate of scattered events $S_i^{det}$ is given by an integral over the scattering volume $V_X$; i.e., over all positions X 1) at which Compton scattering could have occurred, and 2) leading to coincidence detection by detectors A and B. For the case when the scattered event is detected at B, the integral can be written as [25]

Fig. 5
A depiction of the effect of scattering at a position X resulting in a scattered event i detected along detectors A and B.


$$S_i^{det} = \int_{V_X} dV_X \left[\frac{\varepsilon_{AX}\,\sigma_{AX}}{R_{AX}^2}\right] \left[\frac{\hat{\varepsilon}_{BX}\,\sigma_{BX}}{R_{BX}^2}\right] C_{i,X}^{S}(\mu, f) \quad (13)$$

where $\varepsilon_{AX}$ and $\hat{\varepsilon}_{BX}$ denote crystal efficiencies for events incident along AX and BX (the hat indicating detector efficiency at the scattered photon's energy), $\sigma_{AX}$ and $\sigma_{BX}$ are the detector geometric cross sections (while the distances from the scattering point X to detectors A and B are denoted by $R_{AX}$ and $R_{BX}$), and finally $C_{i,X}^{S}(\mu, f)$ is used to denote (aside from the detection efficiency considerations modeled in the previous two bracketed terms) the expected rate of events generated along the path defined by AXB; it contains terms related to the distribution of density μ and activity f in the object as well as Compton scattering probabilities. An additional integral is also needed to model the scattered event being instead detected at detector A (which we drop from here on for convenience).

Next, we note that the normalization term $N_i$ for the nonscattered true events detected along i (corresponding to detectors A and B) has the relation [25]

$$N_i = \frac{\varepsilon_{AB}^2\,\sigma_{AB}^2}{R_{AB}^2} \quad (14)$$

where $\varepsilon_{AB}^2$ and $\sigma_{AB}^2$ represent the detector efficiencies and geometric cross sections for a true event detected along AB ($R_{AB}$ denotes the separation of the detectors). Defining

$$N_{i,X}^{S} = \left[\frac{\varepsilon_{AX}\,\sigma_{AX}}{R_{AX}^2}\right] \left[\frac{\hat{\varepsilon}_{BX}\,\sigma_{BX}}{R_{BX}^2}\right] \quad (15)$$

as the corresponding overall efficiency for detection of scattered events along LOR i (scattered at X), then referring to (13) it is easy to see that the term $S_i$ in (12) can be written as

$$S_i = \int_{V_X} dV_X\, \frac{N_{i,X}^{S}}{N_i}\, C_{i,X}^{S}(\mu, f) \quad (16)$$
Effect of motion

Defining $S_{i'}^{det}(t)$ as the expected rate of scattered events detected along LOR i′ at time interval t [therefore binned along LOR $i = \mathcal{L}_t^{-1}(i')$], we note that $\bar{S}_i$ in (9), which represents the overall rate of detected scattered events binned along LOR i, can be expressed as

$$\bar{S}_i = \sum_{t=1}^{Q} \frac{\Delta T_t}{T}\, S_{i'}^{det}(t) \quad (17)$$

Next, similar to the definition of $S_i$ in the expression (12), we define $S_{i'}^{inc}(t)$ such that

$$S_{i'}^{det}(t) = N_{i'}\,\delta_{i'}\, S_{i'}^{inc}(t) \quad (18)$$

wherein $S_{i'}^{inc}(t)$ is effectively the expected rate of scattered events incident along LOR i′ at time interval t, and wherein using $\delta_{i'}$ we have noted cases when $S_{i'}^{det}(t)$ will be zero since i′ is not in the FOV. Additionally, very similarly to (16), $S_{i'}^{inc}(t)$ can be expressed as

$$S_{i'}^{inc}(t) = \int_{V_{X'}} dV_{X'}\, \frac{N_{i',X'}^{S}}{N_{i'}}\, C_{i',X'}^{S}(\mu', f') \quad (19)$$

where the primed quantities indicate the effect of motion transformation of the scattering volume at time interval t.

At this stage, we shall argue that expression (19) for $S_{i'}^{inc}(t)$ is nearly equivalent to the conventionally computed scatter term $S_i$ as given by (16). To proceed, we will assume the following.

The motion operator $\mathcal{L}_t(\cdot)$ applied to true-coincidence LORs is similarly applicable to modeling the motion of scattered-event LORs. This assumption is not strictly true [26] in cases when motion results in a significant change in the relation between the object and the scanner (e.g., see Fig. 6). However, given realistic amounts of motion and the smooth nature of the scattering distribution, one may assume that this is the case, and that therefore the relation of i′ with respect to the attenuation μ′ and emission f′ distributions remains nearly the same, and therefore

$$C_{i',X'}^{S}(\mu', f') \approx C_{i,X}^{S}(\mu, f) \quad (20)$$
Fig. 6
A depiction of the potential effect of considerable motion on changing the relation of the scattering volume and the detection geometry: in this example, a large translation of the scattering volume has resulted in a non-similar transformation of the ...

Furthermore, while the absolute efficiency of detection $N_{i',X'}^{S}$ certainly varies with motion and is taken into account in this work, the relative detection efficiencies between the scattered and true events are nearly preserved with motion; i.e.,

$$\frac{N_{i',X'}^{S}}{N_{i'}} \approx \frac{N_{i,X}^{S}}{N_i} \quad (21)$$

This is effectively true given the wide range of angles/positions from which scattered events contribute to an LOR.

Combining (16), (19), (20), and (21), one arrives at the intuitively appealing result

$$S_{i'}^{inc}(t) \approx S_i \quad (22)$$

where $S_i$, in this work, is obtained using the standard Watson single scatter simulation (SSS) method [25] (on motion-corrected sinograms). Combining (17), (18), and (22), it then follows that

$$\bar{S}_i = \sum_{t=1}^{Q} \frac{\Delta T_t}{T}\, N_{i'}\,\delta_{i'}\, S_i = \bar{N}_i\, S_i \quad (23)$$

where we have used the definition (6) for $\bar{N}_i$. This gives a very intuitive picture: the overall rate $\bar{S}_i$ of detected scattered events binned along an LOR i is given by the rate $S_i$ of such events normally incident (i.e., in the absence of any motion) along the LOR, multiplied by the motion-weighted normalization factor $\bar{N}_i$. As a result, the expression $\bar{S}_i/\bar{N}_i$ in (9) can be simply replaced by the regularly calculated expression $S_i$.
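The relation derived above can be checked numerically on toy per-interval values (all numbers invented): summing detected scatter rates over motion-intervals reproduces the motion-weighted normalization times the incident rate.

```python
# numerical check: with the incident scatter rate taken as motion-invariant,
# interval-summing the detected rates equals N_bar * S_i
S_i      = 2.5                      # incident scatter rate (hypothetical)
N_iprime = [0.9, 1.1, 1.0, 0.8]     # per-interval normalization factors
delta    = [1,   1,   0,   1  ]     # in-FOV indicators
dT       = [10., 20., 30., 40.]     # interval durations, s
T = sum(dT)

S_bar = sum((t / T) * n * d * S_i for n, d, t in zip(N_iprime, delta, dT))
N_bar = sum((t / T) * n * d       for n, d, t in zip(N_iprime, delta, dT))
assert abs(S_bar - N_bar * S_i) < 1e-12
```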

B. Calculation of the Random Term

We begin by noting that, similar to (17), the overall expected rate $\bar{R}_i$ of random coincidences binned along LOR i can be written as

$$\bar{R}_i = \sum_{t=1}^{Q} \frac{\Delta T_t}{T}\, R_{i'}^{det}(t) \quad (24)$$

where $R_{i'}^{det}(t)$ denotes the expected rate of random events detected along LOR $i' = \mathcal{L}_t(i)$ at time interval t (therefore binned along LOR i).

Next, we note that ordinarily, unlike analytic calculations for expected scatter counts, the expected random counts can be more directly obtained using either the measured single count rates or smoothed delayed coincidences. As such, the term $\bar{R}_i$ can be directly obtained from motion-corrected singles or delayed coincidences. Therefore, the expression $\bar{R}_i/\bar{N}_i$ in (9) can in principle be calculated.

However, 1) the task of computing $\bar{N}_i$ remains a computationally intense one, and 2) MC binning of delayed coincidences, and especially singles counts (which arrive from different orientations), may not be feasible. As such, we have pursued an alternative, more practical approach.

First, similar to (18), we define $R_{i'}^{inc}(t)$ such that

$$R_{i'}^{det}(t) = N_{i'}\,\delta_{i'}\, R_{i'}^{inc}(t) \quad (25)$$

wherein $R_{i'}^{inc}(t)$ effectively8 denotes the rate of random events incident along an LOR i′ at time interval t.

At this stage, we make the following simplifying approximation: due to the very broad nature of random contributions compared to realistic amounts of motion, we assume that motion does not alter the rate of randoms incident along an LOR (i.e., $R_{i'}^{inc}(t)$ is a constant in time), and that incident randoms along any LOR i and its motion-transformed LOR $i' = \mathcal{L}_t(i)$ are nearly the same (i.e., $R_{i'}^{inc}(t) \approx R_i^{inc}(t)$). We summarize this by writing

$$R_{i'}^{inc}(t) \approx R_i^{inc}(t) \approx R_i \quad (26)$$

wherein $R_i$ is introduced to emphasize the time/motion-independence of $R_{i'}^{inc}(t)$. One can then rewrite (25) as

$$R_{i'}^{det}(t) = N_{i'}\,\delta_{i'}\, R_i \quad (27)$$

from which we conclude that $R_{i'}^{det} = R_{i'}^{det}(t)$ is nearly a constant in time, and is simply given by the overall mean rate of random coincidences detected along any LOR i, as ordinarily computed in PET imaging applications.

Furthermore, combining (24), (25), and (26), one arrives at

$$\bar{R}_i = R_i \sum_{t=1}^{Q} \frac{\Delta T_t}{T}\, N_{i'}\,\delta_{i'} = \bar{N}_i\, R_i \quad (28)$$

It then follows that, considering the above two equations, for LORs within the FOV (i.e., $\delta_i = 1$)

$$\frac{\bar{R}_i}{\bar{N}_i} = R_i = \frac{R_i^{det}}{N_i} \quad (29)$$

In other words, for binned LORs i that are inside the FOV ($\delta_i \neq 0$), the expression $\bar{R}_i/\bar{N}_i$ in the overall EM algorithm (9) may be replaced with $R_i^{det}/N_i$.

Nevertheless, for motion-compensated events that are outside the FOV (i.e., $\delta_i = 0$), it is not possible to extract the incident random rates (and therefore $\bar{R}_i/\bar{N}_i$) from (27). As such, for these LORs, given the broad nature of the randoms distribution, we have used the values obtained by extrapolating from nearby LORs in the FOV, which we shall refer to as $\mathrm{extrap}\{R_i^{det}/N_i\}$. Thus, redefining

$$R_i = \begin{cases} R_i^{det}/N_i & \delta_i = 1 \\ \mathrm{extrap}\{R_i^{det}/N_i\} & \delta_i = 0 \end{cases} \quad (30)$$

in relation to (29), we finally arrive at the estimation

$$\frac{\bar{R}_i}{\bar{N}_i} \approx R_i \quad (31)$$

Subsequently, we may replace the expression $\bar{R}_i/\bar{N}_i$ in (9) by $R_i$ as given by (30).
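A sketch of this randoms term with out-of-FOV extrapolation; the nearest-neighbour fill over a 1-D bin index is only one possible (and purely illustrative) choice, the text requiring only that values be borrowed from nearby in-FOV LORs:

```python
def randoms_term(R_det, N, delta):
    """Per-bin randoms term: R_det/N inside the FOV, nearest in-FOV value
    elsewhere (illustrative stand-in for the extrapolation in the text)."""
    base = [r / n if d else None for r, n, d in zip(R_det, N, delta)]
    out = list(base)
    for i, v in enumerate(base):
        if v is None:                       # out-of-FOV bin: borrow nearest
            j = min((k for k, b in enumerate(base) if b is not None),
                    key=lambda k: abs(k - i))
            out[i] = base[j]
    return out

# toy values: last two bins left the FOV after motion compensation
R_det = [2.0, 2.2, 0.0, 0.0]
N     = [1.0, 1.1, 1.0, 1.0]
delta = [1,   1,   0,   0  ]
out = randoms_term(R_det, N, delta)
assert all(abs(v - 2.0) < 1e-12 for v in out)
```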

Final Result

Combining the results of Sections I-D, II-A, and II-B [(11), (23), and (31), respectively], the motion-compensated EM algorithm (9) can be written in the final form

$$f_j^{m+1} = \frac{f_j^m}{\bar{s}_j} \sum_{i=1}^{I} g_{ij}\, \frac{n_i/A_i}{T\left(\sum_{k} g_{ik} f_k^m + \frac{1}{A_i}\left[R_i + S_i\right]\right)} \quad (32)$$

wherein $\bar{s}_j$ is obtained by motion-weighted evaluation of the conventional (normalization-weighted) sensitivity term along the trajectory $j' = \mathcal{M}_t(j)$ (Fig. 4), as given by (11); $S_i$ is the standard scatter estimate obtained using the single scatter simulation (SSS) method (on motion-corrected sinograms); and $R_i$ is given by (30). It is worth emphasizing that the summation $\sum_{i=1}^{I}$ in the above equation takes place over all LORs i along which events are binned (following MC), including those that exit the FOV.
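The structure of such an update can be exercised on a toy system; this is a generic OP-EM-style fixed-point sketch with a motion-weighted normalization term (attenuation assumed precorrected), with random matrices and noise-free data all invented for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
J, I = 6, 12
g = rng.random((I, J))               # geometric system matrix (toy)
N_bar = rng.uniform(0.5, 1.5, I)     # motion-weighted normalization (toy)
f_true = rng.uniform(1.0, 3.0, J)    # ground-truth activity
r = np.full(I, 0.1)                  # combined randoms + scatter background
T = 1.0

P = g * N_bar[:, None]               # overall system matrix
n = T * (P @ f_true + r)             # noise-free expected data
s_bar = P.sum(axis=0)                # sensitivity correction factors

f = np.ones(J)
for _ in range(500):                 # EM fixed-point iteration
    f = f / s_bar * (P.T @ (n / (T * (P @ f + r))))

# with consistent, noise-free data the fitted projections match the truth
assert np.allclose(P @ f, P @ f_true, rtol=1e-2)
```

Note that `f_true` is an exact fixed point of the update: substituting it makes the data ratio identically one, and the backprojection then cancels against `s_bar`.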

C. Scatter/Random Corrections in List-Mode Reconstruction

The direct list-mode reconstruction approach has a number of potential advantages over histogram-mode methods [29], including, in the present context, the inherent advantage of naturally incorporating all detected events, thus avoiding the burden of extending the sinogram-space in histogram-mode methods (as needed to allow binning of motion-corrected events that fall outside the FOV).

Building on the work of Parra and Barrett [27], we have elaborated the derivation of the motion-compensated list-mode EM algorithm elsewhere (Appendix I of [17]), aside from scatter and random coincidence considerations. Using $l_k$ to denote the LOR along which the kth list-mode event is detected (k = 1,…, K) and $P_{lj}^{t}$ to define the time-varying probability that an event generated from voxel j during interval t is detected along LOR l, the resulting algorithm can be written as

$$f_j^{m+1} = \frac{f_j^m}{\check{s}_j} \sum_{k=1}^{K} \frac{P_{l_k j}^{t}}{\sum_{b} P_{l_k b}^{t}\, f_b^m} \quad (33)$$

where

$$\check{s}_j = \sum_{t=1}^{Q} \frac{\Delta T_t}{T} \sum_{l=1}^{I} P_{lj}^{t} \quad (34)$$

is the overall probability for an event generated anytime at voxel j being detected anywhere. Next, we make the following notes and observations.

  1. In contrast to the histogram-mode EM algorithm (1), the list-mode algorithm (33) appears different in that the former employs overall (time/motion-weighted) probability elements pij whereas the latter includes time/motion-varying probability elements9 Pljt. Similarly, including time/motion-varying random and scatter terms, one can write (using l instead of lk for convenience)
    wherein Rldet(t) and Sldet(t) (Sections II-A and II-B) denote expected randoms and scattered coincidence events detected along an LOR l during interval t. A similar difference, as explained above, exists between these time-varying terms and the overall random and scatter coincidence terms An external file that holds a picture, illustration, etc.
Object name is nihms221342ig6.jpg and An external file that holds a picture, illustration, etc.
Object name is nihms221342ig5.jpg used in the histogram-mode approach,10 though as we shall see shortly, the two list-mode and histogram-mode algorithms will be seen to be very similar, except for the inherent difference by which the events are considered.
  2. If the data are precorrected for attenuation, the system matrix can be written as

     P_ljt = N_l G(i, j),  with i = L_t^{-1}(l)

     where i denotes the motion-corrected LOR, and G(i, j) denotes the geometric probability (very similar to g_ij in histogram-mode reconstruction), written differently to emphasize the ability to take into account continuous motion-compensated LOR coordinates i = L_t^{-1}(l) as the list-mode events are motion-corrected and projected one-by-one.
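The event-by-event pull-back i = L_t^{-1}(l) can be sketched as follows; this is a minimal illustration assuming rigid motion given as a rotation matrix and translation (the function and the numeric values are hypothetical, not the authors' implementation):

```python
import numpy as np

def motion_correct_lor(p1, p2, R, tr):
    """Pull a detected LOR (endpoints p1, p2, in mm) back to the
    reference frame: the object moved as x -> R @ x + tr, so the
    motion-corrected LOR i = L_t^{-1}(l) is obtained by applying
    the inverse rigid transform to both endpoints."""
    R_inv = R.T                           # inverse of a rotation matrix
    return R_inv @ (p1 - tr), R_inv @ (p2 - tr)

# Hypothetical motion state: 5 deg rotation about z plus a 4.6 mm x-shift.
th = np.deg2rad(5.0)
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])
tr = np.array([4.6, 0.0, 0.0])

p1 = np.array([-120.0,  30.0, 10.0])      # detected LOR endpoints (mm)
p2 = np.array([ 115.0, -40.0, 55.0])
q1, q2 = motion_correct_lor(p1, p2, R, tr)
```

Because the corrected endpoints are continuous coordinates, they can be fed directly to a geometric projector of the form G(i, j) without rebinning.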

Now, since no motion-corrected event is discarded in the natural list-mode approach, one is able to show [17], very similarly to the approach in Section I-D, that the sensitivity term reduces to

š_j = Σ_t w_t s_{j′}     (37)

as similarly given by (11), where w_t is the fraction of acquisition time spent in motion interval t, s_j = Σ_i g_ij N_i δ_i is the conventional sensitivity term, and j′ = T_t(j) follows the (image-based) trajectory of voxel j at each time t. Therefore, as depicted in Fig. 4, the sensitivity term may be motion-averaged in image-space, as opposed to requiring time-consuming LOR-space computations. We have therefore used the following list-mode algorithm:


λ_j^new = (λ_j / š_j) Σ_{k=1…K} G(i_k, j) / [Σ_{j′} G(i_k, j′) λ_{j′} + (S_l^det(t) + R_l^det(t)) / (N_l A_i)]     (38)

with i_k = L_{t_k}^{-1}(l_k) (where l_k is the LOR along which the kth event is detected), and the sensitivity term š_j given by (37). For simplicity of notation, in the attenuation, normalization, scatter, and random terms, the subscripts l and i, instead of l_k and i_k, have been used.
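The image-space motion-averaging of the sensitivity term (Fig. 4) can be sketched as below. This is a toy nearest-neighbour version with hypothetical motion states and weights, not the interpolation actually used on the HRRT:

```python
import numpy as np

def warp_nn(img, R, tr):
    """Sample img at the moved voxel locations, out[j] = img[T_t(j)],
    with T_t(x) = R @ x + tr and nearest-neighbour rounding."""
    shp = np.array(img.shape)
    idx = np.indices(img.shape).reshape(3, -1).astype(float)
    src = np.rint(R @ idx + tr[:, None]).astype(int)
    ok = np.all((src >= 0) & (src < shp[:, None]), axis=0)
    out = np.zeros(img.size)
    out[ok] = img[tuple(src[:, ok])]      # voxels mapped outside stay zero
    return out.reshape(img.shape)

def averaged_sensitivity(s, motions, weights):
    """Motion-averaged sensitivity: š = Σ_t w_t · s(T_t(j)),
    with w_t the fraction of scan time spent in interval t."""
    return sum(w * warp_nn(s, R, tr) for (R, tr), w in zip(motions, weights))

# Two hypothetical motion states: identity, and a 3-voxel axial shift.
s = np.random.default_rng(0).random((16, 16, 16))
I = np.eye(3)
motions = [(I, np.zeros(3)), (I, np.array([0.0, 0.0, 3.0]))]
s_bar = averaged_sensitivity(s, motions, [0.5, 0.5])
```

The cost scales with the number of motion intervals only through these cheap image-space warps, which is what makes near-continuous motion practical.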

Similarly to the term S_i/N_i in the histogram-mode algorithm (9), the term S_l^det(t)/N_l in the above list-mode algorithm (38) can be seen to essentially estimate the expected rate of scatter coincidences incident along an LOR l at time t (therefore, along LOR i = L_t^{-1}(l) following MC). More rigorously, for a detected LOR l (i.e., prior to MC, and with δ_l = 1), one may combine (18), (22), and (23) to arrive at

S_l^det(t)/N_l = S_i/N_i,  with i = L_t^{-1}(l)
Within the framework of Section II-B, a very similar relation exists for random coincidences; combining (25), (26), and (28), one arrives at

R_l^det(t)/N_l = R_i/N_i,  with i = L_t^{-1}(l)

wherein R_l^det(t)/N_l and R_i/N_i both estimate the average rate of random events incident along an LOR l (therefore, along i = L_t^{-1}(l) following MC). Using the aforementioned two relations, algorithm (38) can be written in the final form


λ_j^new = (λ_j / š_j) Σ_{k=1…K} G(i_k, j) / [Σ_{j′} G(i_k, j′) λ_{j′} + (S_i + R_i) / (N_i A_i)]     (41)

wherein, similarly to the histogram-mode counterpart (32), s_j is the conventional sensitivity term evaluated along the motion trajectory j′ = T_t(j) (Fig. 4), S_i is the standard scatter estimate obtained using the single-scatter simulation (SSS) method (on motion-corrected data), and R_i is given by (30).
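For concreteness, one ordinary-Poisson list-mode EM update of this general form can be sketched on a toy system. The 4-LOR, 3-voxel geometry, the event list, and the additive background values are all hypothetical; motion, normalization, and attenuation are taken as trivial so that the sensitivity reduces to the conventional term:

```python
import numpy as np

# Toy geometric system matrix G[l, j]: 4 LORs, 3 voxels (hypothetical).
G = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.7, 0.2],
              [0.0, 0.3, 0.7],
              [0.5, 0.0, 0.5]])
events = [0, 1, 1, 2, 3, 3, 3]         # list-mode data: LOR index per event
bkg = np.full(4, 0.05)                  # additive scatter + randoms term per LOR
sens = G.sum(axis=0)                    # sensitivity (all LORs in the FOV)

def em_update(lam):
    """One list-mode EM iteration:
    lam_j <- (lam_j / sens_j) * sum_k G[l_k, j] / (sum_j' G[l_k, j'] lam_j' + bkg)."""
    new = np.zeros_like(lam)
    for l in events:
        denom = G[l] @ lam + bkg[l]     # expected events along detected LOR l_k
        new += G[l] / denom
    return lam / sens * new

lam = np.ones(3)                        # uniform initial estimate
for _ in range(20):
    lam = em_update(lam)
```

The additive term in the denominator plays the role of (S_i + R_i)/(N_i A_i): it is never subtracted from the data, which preserves the nonnegativity of the update.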

III. Methods

A. Tomograph

Data were acquired on the second-generation high resolution research tomograph (HRRT) [31]. The detector heads in the octagonal design consist of a double 10-mm layer of LSO/LYSO, for a total of 119,808 detector crystals (crystal size 2.1 × 2.1 × 10 mm³). The total number of possible LORs is 4.486 × 10⁹.

B. Motion-Tracking/Calibration

Collection of motion data was carried out using a Polaris motion tracking system (Northern Digital Inc., Waterloo, ON, Canada) [3]. This system uses an infrared signal to track a small tool consisting of four retro-reflective spheres attached to a plastic plate. The Polaris was mounted approximately 1 m from the rear of the scanner. Calibration of the Polaris frame of reference to that of the tomograph was achieved via a series of simultaneous Polaris and transmission scan measurements [15]. The Polaris tool was placed in a static position within the scanner, and the coordinates of its center as well as the orientation of the tool (translation and quaternion) were determined using the Polaris, allowing one to extract the 3-D positions of the four individual spheres (given knowledge of their relative positions). During this time, a 10-min transmission scan of the tool was obtained. From the reconstructed transmission image, the coordinates of each tracking sphere were determined by first isolating (in the image) the spheres from the plate, and then fitting a 3-D Gaussian to the center of each sphere. This resulted in four sets of paired coordinates. The entire process was repeated five times to increase the accuracy of the calibration. To determine the transformation between the two frames, a direct least squares method was employed [32]. This transformation was later applied when calculating the corrections to be made to compensate for motion during image reconstruction.
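A direct least-squares alignment of the paired sphere coordinates can be sketched with the standard SVD-based closed-form solution below; this is a generic illustration ([32] should be consulted for the exact method used), and the synthetic points and transform are hypothetical:

```python
import numpy as np

def rigid_fit(A, B):
    """Least-squares rigid transform (R, t) minimizing ||(R @ A + t) - B||,
    for 3xN paired points A (e.g., Polaris sphere centres) and B (e.g.,
    transmission-image sphere centres). SVD-based closed form, with a
    determinant guard so that R is a proper rotation (no reflection)."""
    ca = A.mean(axis=1, keepdims=True)
    cb = B.mean(axis=1, keepdims=True)
    H = (A - ca) @ (B - cb).T                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# Synthetic check: recover a known transform from 4 paired points.
rng = np.random.default_rng(1)
A = rng.random((3, 4)) * 100.0
th = np.deg2rad(20.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([[10.0], [-5.0], [2.0]])
R_est, t_est = rigid_fit(A, R_true @ A + t_true)
```

With four non-coplanar markers the fit is fully determined; repeating the measurement, as done here, averages down the localization noise.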

Motion data were recorded using a modified version of the open-source code provided by the Polaris manufacturer. Each motion measurement (quaternion and translation) was output in binary format with a time stamp synchronized to the HRRT acquisition PC using a common time server.
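Each such record (unit quaternion plus translation) defines one rigid transform; a minimal conversion to a rotation matrix is shown below. This is the generic scalar-first quaternion formula, not the manufacturer's code:

```python
import numpy as np

def quat_to_matrix(q0, qx, qy, qz):
    """Unit quaternion (scalar-first) -> 3x3 rotation matrix."""
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - q0*qz),     2*(qx*qz + q0*qy)],
        [2*(qx*qy + q0*qz),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - q0*qx)],
        [2*(qx*qz - q0*qy),     2*(qy*qz + q0*qx),     1 - 2*(qx*qx + qy*qy)],
    ])

def apply_pose(q, tr, x):
    """Apply one tracked pose (q, tr) to a point x: x -> R @ x + tr."""
    return quat_to_matrix(*q) @ np.asarray(x) + np.asarray(tr)

# 90 deg about z: q = (cos 45, 0, 0, sin 45); maps the x-axis onto the y-axis.
s = np.sqrt(0.5)
y_axis = apply_pose((s, 0.0, 0.0, s), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```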

The motion threshold (below which motion is considered negligible) was set to a tenth of the 2.5-mm resolution of the HRRT scanner. In fact, having a very high number of motion intervals (Q) does not noticeably increase the computation, primarily because Q affects only the number of image-space motion-averages of the sensitivity image [see (41)], which are computed very efficiently, and only once per dataset (at 1.4 s per motion interval on a 2.6-GHz dual-processor computer). Therefore, it is fully practical to incorporate effectively continuous motion in the proposed formalism.

C. Phantom Study

An elliptical contrast phantom (3.2 L) was used, containing hot (31 mL) and cold (13 mL) spheres inserted in a background. The phantom was filled with a total F-18 activity of 1.047 mCi (resulting in initial prompt and random rates of 267 kcps and 27 kcps, respectively), while the hot-to-background ratio was 4.77. An elaborate motion study was performed, involving a number of distinct movements (summarized in Table I) with respect to the position and orientation of the phantom at the time origin. The movements were applied at the very beginning of each indicated time interval, and the applied movement protocol was monitored accurately using the motion tracking device.

Table I
Table of Applied Phantom Movements (With Respect To Time=0). Indicated Axes are With Respect to the Scanner

D. Simulations of a New Mathematical Brain Phantom

Our emphasis in this work has been to compare and validate MC methods under realistic imaging scenarios. The tools developed in this regard have been two-fold.

1) Mathematical Brain Phantom

Voxelized phantoms are problematic in that they are fixed to a particular spatial resolution and result in interpolation errors when modeling motion (e.g., the volume of a voxelized object may not be conserved after rotation). Instead, a mathematical brain phantom was developed, containing continuous structures and thus avoiding the need for interpolation when introducing motion, as depicted in Fig. 7. The brain phantom was constructed using subdivision surfaces [33], which can efficiently model structures of arbitrary topological type, such as the brain, skull, muscle tissue, and vasculature. The surfaces were modeled based on a segmented MRI dataset of a normal subject, consisting of 181 slices of the brain (pixel sizes/slice widths of 1.0 mm). One hundred structures in the brain were identified. A software application was written using the Visualization Toolkit (VTK) [34] to create 3-D polygon surfaces: the VTK marching-cubes algorithm was first used to create an initial polygon model for each structure, which was then optimized using the mesh optimization and smoothing routines of the VTK software.
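The volume-conservation problem mentioned above is easy to demonstrate: rotating a voxelized (binary) object with nearest-neighbour resampling generally changes its voxel count, whereas a continuous surface description is unaffected. A small 2-D illustration with a hypothetical grid size:

```python
import numpy as np

def rotate_nn(mask, deg):
    """Rotate a 2-D binary mask about its centre using
    nearest-neighbour sampling (what a voxelized phantom forces)."""
    th = np.deg2rad(deg)
    c = (np.array(mask.shape) - 1) / 2.0
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    idx = np.indices(mask.shape).reshape(2, -1) - c[:, None]
    src = np.rint(R @ idx + c[:, None]).astype(int)
    ok = np.all((src >= 0) & (src < np.array(mask.shape)[:, None]), axis=0)
    out = np.zeros(mask.size, dtype=bool)
    out[ok] = mask[tuple(src[:, ok])]
    return out.reshape(mask.shape)

yy, xx = np.indices((64, 64))
disk = (xx - 31.5) ** 2 + (yy - 31.5) ** 2 <= 20.0 ** 2
v0 = int(disk.sum())                       # "volume" before rotation
v30 = int(rotate_nn(disk, 30.0).sum())     # after a 30 deg rotation
```

Comparing v0 and v30 exposes the discretization error that a surface-based phantom avoids by construction.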

Fig. 7
(a) New Mathematical Brain Phantom. (b)–(d) Transaxial, coronal, and sagittal slices through a particular study simulated in this work.

2) PET Simulations

We used a novel simulation technique [35] involving the combination of two powerful and well-validated Monte Carlo codes, SimSET and GATE. The method takes advantage of the shorter simulation times for photon propagation inside a digital phantom using SimSET as compared to GATE. The histories of all photons, single and coincidence, escaping from the phantom are stored in a file which serves as the input to GATE. The new technique is about 12 times faster, with nearly similar accuracy [35]. We used the design parameters and the geometry of the second-generation HRRT scanner, as depicted in Fig. 8. In this work, movements of the patient between three positions were simulated, each position displaced with respect to the previous one by 4.6, 4.6, and 7.0 mm in the x (horizontal), y (vertical), and z (axial) directions, respectively, consistent with actually observed amounts of motion. Simulations involving 32, 77, and 209 M detected events were considered.

Fig. 8
Geometry of the second generation HRRT scanner used in the combined SimSET/GATE simulation approach in this work.

E. Reconstructions

Phantom and simulated data for the HRRT were reconstructed directly from the list-mode data ([28], [29] should be consulted for details) with a span11 of 9, a maximum ring difference of 67, and 32 subsets (which are time-based in list-mode reconstruction [29]). Throughout this work, resolution modeling was used in the reconstructions (via the inclusion of an experimental model of spatial resolution in the EM algorithm [36]). The scatter term S_i was computed using the single-scatter simulation technique [25] on motion-compensated sinograms, as elaborated in Section II, and the randoms term R_i^det [also used to obtain R_i in (30)] was obtained by variance reduction of the delayed-coincidence data12 [39].

Five reconstruction scenarios were considered: 1) no motion (with matched statistics); 2) motion with no MC; 3) purely LOR-driven compensation (MC Method 1), i.e., MC reconstruction via mere correction of the LOR coordinates of the detected events given the motion information; 4) the MC method proposed in [17], which additionally incorporates motion-averaging of the sensitivity factors but does not include scatter compensation and only performs simple random subtraction (MC Method 2); and 5) the proposed accurate MC algorithm (41), incorporating appropriate overall modeling of motion. In scenarios 1) and 2), non-MC scatter and random terms were included (as is commonly done in the OP-OSEM algorithm [29]), while scenarios 3) and 5) incorporated the scatter and random correction methods proposed in this work.

Additionally, we note that attenuation was modeled in the system matrix for scenarios 1)–3), as is commonly the case, while the data were precorrected in scenarios 4) and 5), as mathematically required by our methodology. It is predicted (and verified next) that MC Method 2 outperforms MC Method 1 qualitatively (as it models the sensitivity variation due to motion) but performs poorly quantitatively due to its lack of scatter correction.

F. Quantitative Metrics

1) Phantom Study

Hot and cold contrasts were estimated following approximately the NEMA NU 2-2001 protocol. Defining C_H, C_C, and C_B as the average measured counts in regions of interest (ROIs) placed in hot, cold, and background regions, respectively, the cold percent contrast Q_C was measured using

Q_C = (1 − C_C/C_B) × 100%

while the hot percent contrast Q_H was calculated using

Q_H = [(C_H/C_B − 1) / (A_H/A_B − 1)] × 100%

where A_H/A_B is the actual concentration ratio between the hot and background regions (measured to be 4.77). The percent noise (standard deviation/mean × 100%) was calculated for a large background ROI.
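These figures of merit can be computed directly from the ROI means (a minimal sketch; the ROI extraction itself is omitted):

```python
import numpy as np

def cold_contrast(CC, CB):
    """Q_C = (1 - C_C / C_B) * 100%."""
    return (1.0 - CC / CB) * 100.0

def hot_contrast(CH, CB, true_ratio=4.77):
    """Q_H = ((C_H / C_B - 1) / (A_H / A_B - 1)) * 100%."""
    return (CH / CB - 1.0) / (true_ratio - 1.0) * 100.0

def percent_noise(background_roi):
    """Percent noise = standard deviation / mean * 100%."""
    b = np.asarray(background_roi, dtype=float)
    return float(np.std(b) / np.mean(b) * 100.0)

# Perfect recovery would give 100% for both contrasts:
qc = cold_contrast(CC=0.0, CB=1.0)          # -> 100.0
qh = hot_contrast(CH=4.77, CB=1.0)          # -> 100.0
```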

2) Simulations

Different areas in the brain (caudate, putamen, grey matter, white matter, cerebellum, and brain stem) were quantitatively analyzed: ROIs were drawn on the exact boundaries of these regions, and the region bias (RB_r) for each ROI r was calculated using

RB_r = |λ̄_r − μ_r| / μ_r

where λ̄_r (r ranging from 1 to R = 6) denotes the mean reconstructed activity over ROI r, while μ_r denotes the (true) reference activity in each ROI as obtained using 20 iterations of a static, high-statistics (300 M events) simulation of the same brain phantom.13 The aforementioned RB values for each ROI were plotted against the normalized standard deviation (NSD) for each ROI, calculated using

NSD_r = [ Σ_{j∈r} (λ_j − λ̄_r)² / (n − 1) ]^(1/2) / λ̄_r

where λ_j denotes the reconstructed value at voxel j, and n is the number of voxels j ∈ r defining ROI r.

The overall bias was also measured using an ROI-based normalized mean squared error (NMSE) metric, given by

NMSE = Σ_{r=1…R} (λ̄_r − μ_r)² / Σ_{r=1…R} μ_r²

We have adopted such an ROI-based definition of NMSE over the common voxel-based definition, as we believe it minimizes the effect of voxel noise on this bias-measuring metric. The NMSE values were subsequently plotted against the average NSD values, i.e., ⟨NSD_r⟩_r.
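The three metrics can be written compactly as below; note that NMSE is given in an assumed ROI-based form (normalized by the summed squared reference activities), since that normalization is an assumption of this sketch:

```python
import numpy as np

def region_bias(lam_roi, mu_r):
    """RB_r = |mean(λ over ROI r) - μ_r| / μ_r."""
    return abs(float(np.mean(lam_roi)) - mu_r) / mu_r

def nsd(lam_roi):
    """NSD_r: sample standard deviation over the ROI / ROI mean."""
    lam_roi = np.asarray(lam_roi, dtype=float)
    return float(np.std(lam_roi, ddof=1) / np.mean(lam_roi))

def nmse(lam_bar, mu):
    """Assumed ROI-based NMSE: Σ_r (λ̄_r - μ_r)² / Σ_r μ_r²."""
    lam_bar, mu = np.asarray(lam_bar, float), np.asarray(mu, float)
    return float(np.sum((lam_bar - mu) ** 2) / np.sum(mu ** 2))
```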

G. Patient Study

We examined a patient data set acquired from a subject with Parkinson’s disease (age 67, male, disease duration of 14 years, moderate disease severity) on the Siemens HRRT. The Polaris tool (Section III-B) was attached to the subject using a thin neoprene surf cap [1]. Following injection of 11C-raclopride, the subject was scanned for 60 min, and 16 dynamic frames were considered (4 × 1 min, 3 × 2 min, 8 × 5 min, 1 × 10 min). The data were reconstructed using 1) no MC, 2) the purely LOR-driven method, and 3) the proposed accurate method. Time-activity curves (TACs) and binding potential (BP) values were derived from regions of interest (ROIs) in the striatum: eight circular ROIs were placed on the striatum bilaterally, i.e., right and left caudate (C), anterior putamen (P1), intermediate putamen (P2), and posterior putamen (P3). BPs were calculated using a tissue-input Logan analysis with the cerebellum as the reference region.
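A tissue-input Logan estimate of BP can be sketched as follows; this is a generic implementation with hypothetical frame times and a hypothetical t* cutoff (the slope of the late-time Logan plot estimates the distribution volume ratio DVR, and BP = DVR − 1):

```python
import numpy as np

def logan_bp(t, c_roi, c_ref, t_star=20.0):
    """Tissue-input Logan analysis: for T > t*, regress
    int_0^T c_roi dt / c_roi(T)  on  int_0^T c_ref dt / c_roi(T);
    the slope estimates DVR, and BP = DVR - 1."""
    def cumtrapz(y):
        return np.concatenate(([0.0],
                               np.cumsum(0.5 * np.diff(t) * (y[1:] + y[:-1]))))
    int_roi, int_ref = cumtrapz(c_roi), cumtrapz(c_ref)
    late = t > t_star                        # linear portion of the plot
    y = int_roi[late] / c_roi[late]
    x = int_ref[late] / c_roi[late]
    slope = np.polyfit(x, y, 1)[0]
    return slope - 1.0

# Synthetic consistency check: if c_roi = 3 * c_ref at all times,
# then DVR = 3 and BP = 2 (a hypothetical test case, not patient data).
t = np.linspace(0.0, 60.0, 61)               # frame mid-times (min)
c_ref = np.exp(-t / 30.0) + 0.1              # hypothetical reference TAC
bp = logan_bp(t, 3.0 * c_ref, c_ref)
```

Any resolution gain (e.g., from motion correction) reduces partial-volume dilution of the striatal TACs, which is why BP estimates tend to rise, as observed in Section IV.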

IV. Results and Discussion

Phantom Study

First, we explicitly investigated the accuracy of the approximations regarding scattered and random events (Section II); these results are best summarized by (22) and (26). In the case of scatter, a validation method for (22) would be to compute the scattered events incident along the LORs for various motion frames and compare these values for those LORs that correspond to one another given the measured motion. A simple approach to this (not requiring lengthy simulations in which scattered events are tagged) was to use the Watson method; i.e., the scatter term S_i for each motion frame was separately calculated, followed by motion correction to the first frame, and compared with the results of the first frame.14 A typical comparison is shown in Fig. 9, showing high correspondence (some artifacts were still detected, which we attribute to nearest-neighbor interpolation errors and which we intend to minimize in the future via better interpolation). We found that for voxels 5 cm away from the axial edges15 of the FOV, and as long as the applied motion did not exceed 15 mm, the sinograms agreed within 10% for 4 × 4 ROIs and within 2% for 20 × 20 ROIs (radial r-steps of 1.2 mm and angular φ-steps of 0.6°).

Fig. 9
Images of motion-corrected incident-scatter S_i sinograms shown for segment 0 (i.e., azimuthal angle θ = 0). Different pairs of radial r, transaxial angle φ, and axial z are shown. (top) Reference frame 0, (bottom) frame ...

With regards to the randoms terms, (26) predicts nearly identical R_i sinograms for the different motion frames, independent of motion (unlike above, the sinograms are not motion-corrected with respect to one another). Fig. 10 depicts images of the measured randoms R_i^det (depicting strong detector and gap streaks), as well as the incident randoms R_i as obtained using (30) for two separate frames. We thus compared such estimated incident-random sinograms for various motion frames. In the current work, the constrained Fourier-space method of Karp et al. [41] was used for the task of gap-filling. In our studies, this approach produced reasonable results, but we wish to emphasize that the method was originally developed for gap-filling of emission sinograms; ideally, dedicated gap-filling methods for randoms sinograms may have to be developed in the future. Similar to the case of scattered events, we found that as long as the applied motion did not exceed 15 mm, the sinograms agreed within 13% for 4 × 4 ROIs and within 3% for 20 × 20 ROIs.

Fig. 10
Images of (top) measured randoms R_i^det, compared with those of incident-random R_i sinograms shown for (middle) reference frame 0 and (bottom) frame 7 (see Table I). The strong detector and gap streaks are absent from the latter two images. ...

Fig. 11 shows typical transaxial and coronal slices for the reference (no motion) frame 1 (Table I) compared with reconstructions of another motion frame for the cases of 1) no MC, 2) MC Method 1 (purely LOR-driven correction), 3) MC Method 2 (method proposed in [17], not involving scatter correction or appropriate random correction), and 4) the proposed MC method. Unlike the proposed approach, clear visual artifacts are observed for the purely LOR-driven MC Method 1.

Fig. 11
Reconstructed images of (column 1) reference (no motion) frame 1 (see Table I) after 2 iterations (32 subsets), compared with reconstructions of frame 8 with (column 2) no MC, (column 3) MC Method 1, (column 4) MC Method 2, (column 5) proposed MC. Transaxial ...

To better understand this effect, a normalized sensitivity image (obtained by simple back-projection of projection bins set to 0 along detector gaps and 1 otherwise) is depicted in Fig. 12 for the entire FOV (boxes depict the region shown in Fig. 11). The visible line streaks are due to the presence of gaps in the octagonal design of the HRRT. This explains the artifacts seen in Fig. 11 as obtained by MC Method 1, which does not take the presence of gaps into account. Additionally, this explains why the effect has not been noted in some previous studies (e.g., see [1]), due to the absence of noticeable gaps in the scanners used. It must still be noted that even for a scanner without detector gaps, not considering the axial gradient in the sensitivity image (see the bottom plot in Fig. 12, which is caused by LORs exiting the FOV axially, as seen in Fig. 1) is expected to result in issues with quantitation.

Fig. 12
Typical (top) transaxial and (bottom) coronal slices through the normalized sensitivity image (see text) shown for the entire FOV (the area depicted in Fig. 11 is indicated by the solid boxes). Strong line streaks are readily seen, attributed to detectors ...

For a quantitative comparison of the phantom reconstructions, Fig. 13(a) and (b) shows plots of cold (Q_C) and hot (Q_H) percent contrasts versus percent noise for frame 1 (which was used as the reference “no motion” frame with respect to which the other frames were motion-corrected) along with motion frames 3, 8, and 10 (see Table I), for increasing numbers of iterations. In line with frame 1, the plots obtained using the proposed MC method clearly outperform the other approaches. Furthermore, MC Method 2, though resulting in qualitatively superior images compared to MC Method 1, yields inferior quantitative performance.

Fig. 13
Plots of (a) cold percent contrast QC and (b) hot percent contrast QH (versus percent noise) for frame 1 (reference: referred to as “no motion”) as well as frames 3, 8, and 10 (Table I) for Method 1 (dotted), Method 2 (- -) and proposed ...


Simulation Study

Figs. 14 and 15 depict typical transaxial and coronal slices for reconstructions in the cases of 1) no motion, 2) motion with no MC, 3) MC Method 1, 4) MC Method 2, and 5) the proposed MC method, for 209 M and 32 M simulated events. It is observed that, compared to the case of no motion, purely LOR-driven corrections (MC Method 1) can lead to some artifacts (see especially the transaxial slices in the figures).

Fig. 14
Simulation contained 209 M detected events. Reconstructed images after 3 iterations (32 subsets); (column 1) no motion, (column 2) no MC, (column 3) MC Method 1, (column 4) MC Method 2, (column 5) proposed MC. Transaxial and coronal slices are shown in ...
Fig. 15
Simulation contained 32 M detected events. Reconstructed images after 3 iterations (32 subsets); (column 1) no motion, (column 2) no MC, (column 3) MC Method 1, (column 4) MC Method 2, (column 5) proposed MC. Transaxial and coronal slices are shown in ...

In addition to the observed differences, and in order to study the proposed approach more quantitatively, Fig. 16 shows plots of NSD (noise) versus NMSE (bias), as defined in the previous section, for different statistics, with the points in each curve generated by varying the number of iterations (to better capture the noise-versus-bias trade-off curves, different numbers of iterations were used for the different simulated count levels). It is seen that the proposed MC method yields noticeable improvements compared to the other approaches (relative to the case with no motion). For various regions in the brain (caudate, putamen, grey matter, white matter, cerebellum, and brain stem), typical plots of NSD (noise) versus RB (region bias) are depicted in Fig. 17 (209 M events): the proposed algorithm typically performs favorably relative to MC Method 1 (shown) and MC Method 2 (not shown), with reference to simulations involving no motion.

Fig. 16
Plots of NSD (noise) versus NMSE (bias) for the cases of (a) 209 M, (b) 77 M, and (c) 32 M simulated events: points in each curve are generated by varying the number of iterations (to better capture noise versus bias trade-off curves, different number ...
Fig. 17
Plots of (y-axis) NSD versus (x-axis) RB for various regions in the brain with increasing iterations. 209 M events were simulated. The proposed method outperforms the purely LOR-driven approach.

Patient Study

Having established the accuracy of the proposed method using phantom and simulation studies, we then turned our attention to a patient study (elaborated in Section III-G). The measured movements of the particular patient under study are shown in Fig. 18. Fig. 19 depicts slices across the striatum (frames 7–13: 10–45 min) reconstructed using 1) no MC, 2) the purely LOR-driven method, and 3) the proposed accurate method (prop. MC). Data reconstructed using the purely LOR-driven method, as also observed before, exhibited some image artifacts (and were not considered further). BP values for the right and left caudate (C), anterior putamen (P1), intermediate putamen (P2), and posterior putamen (P3) are plotted in Fig. 20, and are seen to increase, relative to no MC, for the proposed MC method. This can be attributed to the increased effective resolution of the scanner due to motion correction, and is very similar in essence to the BP increases one observes with higher-resolution scanners [45], reconstruction methods with resolution recovery [36], and/or partial volume correction methods [43], [44].

Fig. 18
Measured motion information: six degrees of motion are depicted (shown for the striatum region). X, Y, and Z axes refer to the horizontal (transaxial), vertical (transaxial), and axial directions, respectively.
Fig. 19
Transaxial (first row), coronal (second row), and sagittal (third row) slices across the striatum (frames 7–13: 10–45 min), reconstructed with different MC schemes. Clear artifacts can be seen with the purely LOR-driven approach (MC Method ...
Fig. 20
BP values with no MC and with the proposed accurate MC for a number of ROIs: right and left caudate (C), anterior putamen (P1), intermediate putamen (P2) and posterior putamen (P3).

Ongoing work includes investigation of the effect of motion correction on BP as well as on dopamine release (DAR) values obtained from baseline/drug back-to-back studies across a wide range of patient data. Preliminary results indicate enhanced reproducibility in BP estimation, as also reported in [42], accompanied, according to our observations, by significant increases in BP values, especially in the intermediate and posterior putamen (P2 and P3). These results are even more significant for those patients (over 50% of the sample population) reaching movement amplitudes greater than the resolution of the scanner.

V. Conclusion

We have proposed and investigated a formalism for accurate motion-compensated EM reconstruction, including elaborate consideration of random and scattered events, which is particularly feasible in the context of high-resolution PET. The method takes into consideration the presence of motion-induced interactions between LORs within and outside the FOV, and accurately incorporates all detected events, including, for instance, those which exit the scanner axially or pass through detector gaps following motion correction. As an example, for a typical HRRT-scanner acquisition, the proposed image-space (versus projection-space) calculation of the sensitivity factors results in a speedup by a factor of ~30 for this task.

The method has furthermore been extensively tested using phantom measurements as well as simulation studies (involving a new mathematical brain phantom and a novel combination of the SimSET and GATE simulation packages). It has been demonstrated that, in comparison to reconstructions of data with matched statistics involving no motion, the proposed method removes the qualitative artifacts as well as the quantitative shortcomings of purely event-driven correction methods. Having established the accuracy of the proposed method using phantom and simulation data, we are currently investigating the effects of motion correction on actual patient data; an example was presented in this work.


The authors would like to thank A. Crabb, K. Raywood, and K. Buckley for computational and technical support.

This work was supported in part by the National Institutes of Health under Grant DA00412 and Grant S10RR017219, in part by the TRIUMF Life Science Grant, in part by the Natural Sciences and Engineering Research Council of Canada, and in part by the Michael Smith Foundation for Health Research.


1Largest translations typically occur along the transaxial (x) axis, and largest rotations around the axial (z) axis.

2This is in essence similar to how positron range has been modeled in the image-space component of the system matrix [37], [38].

3For instance, even small amounts of motion result in substantial interaction with detector gaps in the HRRT which occupy nearly 10% of the entire sinogram-space.

4Another downside of the MAF technique is that, unlike the present method, increasingly small motion thresholds can lead to poor statistics in the individual frames to be reconstructed, leading to bias as discussed in Section I-A.

5The notation to follow is valid element-by-element; however, the order of these components would be different in matrix notation.

6Note that the normalization factor for an LOR i′ along which an event is detected is given by the value of Ni for the LOR itself, whereas for attenuation correction (due to motion of the object relative to its initial position), the factor at time t along i’ is given by the measured attenuation factor at t = 0 along i; i.e., Ai (the reference time t = 0 is best taken to be the time of the transmission scan in order to correct for misalignments taking place between this time and the beginning of the emission scan).

7Note that S_i is not, strictly speaking, the rate of incident scattered coincidences, since a scattered coincidence consists of two gamma rays arriving from a wide range of potential angles, thus having different normalizations than the trues normalization N_i; nevertheless, in order to intuitively interpret S_i/N_i as the rate of incident scattered coincidences, while making no approximations, we imagine a scattered event along an LOR i in a very similar sense as a true event along i.

8See footnote 7, noting that a random coincidence, just as a scattered coincidence, consists of two gamma rays arriving from a wide range of potential angles.

9The two probability terms pij and Pljt are related as expressed in (4) and are not identical in the presence of motion.

10These terms are related via (17) and (24).

11In list mode, we still use the concept of spanning to reorient the detected LORs as one would do in histogramming in order to allow one to make use of histogram-based normalization sinograms [36].

12In the HRRT scanner, singles are measured only at the block level; thus, the standard method employed [39] involves a generalization of the Casey 3-D random-smoothing technique [40] in which the crystal singles rates are first estimated from the delayed-coincidence measurements, followed by a standard calculation of the random rates.

13The aforementioned reconstructed reference image is more appropriate than direct use of the true values used in the simulation, since the latter does not take into account the partial volume effect (PVE), and as such the measured bias values would not allow a direct/visible comparison of the various MC schemes.

14The scatter method does take the shape of the gantry into account [25] and as such could be used to validate both assumptions (20) and (21).

15We avoid edges of the FOV, since the Watson method performs planar scaling that can be particularly problematic near the edges.

Contributor Information

Arman Rahmim, Department of Radiology, Johns Hopkins University, 601 N. Caroline Street, Baltimore, MD 21205 USA.

Katie Dinelle, Department of Physics and Astronomy, University of British Columbia, Vancouver, BC, V6T-1Z1 Canada.

Ju-Chieh Cheng, Department of Physics and Astronomy, University of British Columbia, Vancouver, BC, V6T-1Z1 Canada.

Mikhail A. Shilov, Department of Radiology, Johns Hopkins University, Baltimore, MD 21205 USA.

William P. Segars, Department of Radiology, Johns Hopkins University, Baltimore, MD 21205 USA.

Sarah C. Lidstone, Pacific Parkinson’s Research Centre, Vancouver, BC, V6T-1Z1 Canada.

Stephan Blinder, Pacific Parkinson’s Research Centre, Vancouver, BC, V6T-1Z1 Canada.

Olivier G. Rousset, Department of Radiology, Johns Hopkins University, Baltimore, MD 21205 USA.

Hamid Vajihollahi, Department of Mechanical Engineering, McMaster University, Hamilton, ON, L8S-4L8 Canada.

Benjamin M. W. Tsui, Department of Radiology, Johns Hopkins University, Baltimore, MD 21205 USA.

Dean F. Wong, Department of Radiology, Johns Hopkins University, Baltimore, MD 21205 USA.

Vesna Sossi, Department of Physics and Astronomy, University of British Columbia, Vancouver, BC, V6T-1Z1 Canada.


1. Bloomfield PM, Spinks TJ, Reed J, Schnorr L, Westrip AM, Livieratos L, Fulton R, Jones T. The design and implementation of a motion correction scheme for neurological PET. Phys Med Biol. 2003 Apr 21;48(8):959–978. [PubMed]
2. Fulton RR, Meikle SR, Eberl S, Pfeiffer J, Constable RT, Fulham MJ. Correction for head movements in positron emission tomography using an optical motion-tracking system. IEEE Trans Nuc Sci. 2002 Feb;49(1):116–123.
3. Lopresti BJ, Russo A, Jones WF, Fisher T, Crouch DG, Altenburger DE, et al. Implementation and performance of an optical motion tracking system for high resolution brain PET imaging. IEEE Trans Nucl Sci. 1999 Dec;46(6):2059–2067.
4. Woods RP, Cherry SR, Maziotta J. A rapid automated algorithm for accurate aligning and reslicing positron emission tomography images. J Comput Assist Tomogr. 1992;16:620–634. [PubMed]
5. Friston KJ, Ashburner J, Frith CD, Poline JB, Heather JD, Frackowiak RSJ. Spatial registration and normalization of images. Hum Brain Mapp. 1995;3:165–189.
6. Picard Y, Thompson CJ. Motion correction of PET images using multiple acquisition frames. IEEE Trans Med Imag. 1997;16:137–144. [PubMed]
7. Qi J, Leahy RM. Resolution and noise properties of MAP reconstruction for fully 3-D PET. IEEE Trans Med Imag. 2000 May;19(5):493–506. [PubMed]
8. Boellaard R, van Lingen A, Lammertsma AA. Experimental and clinical evaluation of iterative reconstruction (OSEM) in dynamic PET: Quantitative characteristics and effects on kinetic modeling. J Nucl Med. 2001;42(5):808–817. [PubMed]
9. Menke M, Atkins MS, Buckley KR. Compensation methods for head motion detected during PET imaging. IEEE Trans Nucl Sci. 1996;43(1):310–317.
10. Qiao F, Pan T, Clark JW, Mawlawi OR. A motion-incorporated reconstruction method for gated PET studies. Phys Med Biol. 2006;51:3769–3783. [PubMed]
11. Li T, Thorndyke B, Schreibmann E, Yang Y, Xing L. Reconstruction for proton computed tomography by tracing proton trajectories: A Monte Carlo study. Med Phys. 2006;33(3):699–706. [PMC free article] [PubMed]
12. Lamare F, Ledesma Carbayo MJ, Cresson T, Kontaxakis G, Santos A, Cheze Le Rest C, Reader AJ, Visvikis D. List-mode-based reconstruction for respiratory motion correction in PET using non-rigid body transformations. Phys Med Biol. 2007;52(17):5187–5204. [PubMed]
13. Rahmim A, Cheng JC, Dinelle K, Shilov MA, Segars WP, Rousset OG, Tsui BMW, Wong DF, Sossi V. System matrix modeling of externally tracked motion. IEEE NSS/MIC Conf Rec. 2006;4:2137–2141.
14. Daube-Witherspoon ME, Yan YC, Green MV, Carson RE, Kempner KM, Herscovitch P. Correction for motion distortion in PET by dynamic monitoring of patient position. J Nucl Med. 1990;31:816.
15. Buehler P, Just U, Will E, Kotzerke J, van den Hoff J. An accurate method for correction of head movement in PET. IEEE Trans Med Imag. 2004 Sep;23(9):1176–1185. [PubMed]
16. Jones WF. Real-time event stream correction for patient motion in clinical 3-D PET. IEEE NSS/MIC Conf Rec. 2001;4:2062–2064.
17. Rahmim A, Bloomfield P, Houle S, Lenox M, Michel C, Buckley KR, Ruth TJ, Sossi V. Motion compensation in histogram-mode and list-mode EM reconstructions: Beyond the event-driven approach. IEEE Trans Nucl Sci. 2004;51:2588–2596.
18. Wienhard K, Shmand M, Casey ME, Baker K, Bao J, Eriksson L, Jones WF, Knoess C, Lenox M, Lercher M, Luk P, Michel C, Reed JH, Richerzhagen N, Treffert J, Vollmar S, Young JW, Heiss WD, Nutt R. The ECAT HRRT: Performance and first clinical application of the new high resolution research tomograph. IEEE Trans Nucl Sci. 2002;49:104–110.
19. Qi J, Huesman RH. List mode reconstruction for PET with motion compensation: A simulation study. Proc 2002 IEEE Int Symp Biomed Imag; Washington, DC; 2002. pp. 413–416.
20. Qi J, Huesman RH. Monte Carlo simulation of tracer-kinetic PET and SPECT studies using dynamic hybrid phantoms. J Nucl Med. 2002;43:146.
21. Thielemans K, Mustafovic S, Schnorr L. Image reconstruction of motion corrected sinograms. Proc IEEE Med Imag Conf. 2003;4:2401–2406.
22. Carson RE, Barker WC, Liow JS, Johnson CA. Design of a motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction for the HRRT. IEEE NSS/MIC Conf Rec. 2003;5:3281–3285.
23. Qi J, Huesman RH. Propagation of errors from the sensitivity image in list mode reconstruction. IEEE Trans Med Imag. 2004 Sep;23(9):1094–1099. [PubMed]
24. Qi J. Calculation of the sensitivity image in list-mode reconstruction. IEEE NSS/MIC Conf Rec. 2005;4:1924–1928.
25. Watson CC. New, faster, image-based scatter correction for 3-D PET. IEEE Trans Nucl Sci. 2000 Aug;47(4):1587–1594.
26. Thielemans K. Scatter estimation and motion correction in PET. IEEE NSS/MIC Conf Rec. 2005;3:1745–1747.
27. Parra L, Barrett HH. List-mode likelihood: EM algorithm and image quality estimation demonstrated on 2-D PET. IEEE Trans Med Imag. 1998;17(2):228–235. [PMC free article] [PubMed]
28. Rahmim A, Lenox M, Reader AJ, Michel C, Burbar Z, Ruth TJ, Sossi V. Statistical list-mode image reconstruction for the high resolution research tomograph. Phys Med Biol. 2004 Aug;49:4239–4258. [PubMed]
29. Rahmim A, Cheng JC, Blinder S, Camborde M-L, Sossi V. Statistical dynamic image reconstruction in state-of-the-art high resolution PET. Phys Med Biol. 2005;50:4887–4912. [PubMed]
30. Reader AJ, Sureau FC, Comtat C, Buvat I, Trebossen R. List-mode reconstruction with system modeling derived from projections. IEEE Nucl Sci Symp Conf Rec. 2005 Oct;3:23–29.
31. Sossi V, de Jong HWAM, Barker WC, Bloomfield P, Burbar Z, Camborde ML, Comtat C, Eriksson LA, Houle S, Keator D, Knob C, Krais R, Lammertsma AA, Rahmim A, Sibomana M, Teras M, Thompson CJ, Trebossen R, Votaw J, Walker M, Wienhard K, Wong DF. The second generation HRRT - a multi-centre scanner performance investigation. IEEE NSS/MIC Conf Rec. 2005;4:23–29.
32. Horn BK. Closed form solution of absolute orientation using unit quaternions. J Opt Soc Amer. 1987;4(4):629–642.
33. Hoppe H, DeRose T, Duchamp T, Halstead M, Jin H, McDonald J, Schweitzer J, Stuetzle W. Piecewise smooth surface reconstruction. Computer Graphics (Proc Annu Conf Series). 1994;28:295–302.
34. The Visualization Toolkit. [Online]. Available:
35. Rodriguez M, Barker WC, Liow JS, Thada S, Chelikani S, Mulnix T, Carson RE. Count rate dependent component-based normalization for the HRRT. J Nucl Med. 2006;47:197P.
36. Sureau FC, Comtat C, Reader AJ, Leroy C, Santiago-Ribeiro MJ, Buvat I, Trebossen R. Improved clinical parametric imaging using list-mode reconstruction via resolution modeling. IEEE NSS/MIC Conf Rec. 2006;6:3507–3510.
37. Mumcuoğlu EÜ, Leahy RM, Cherry SR, Zhou Z. Fast gradient-based methods for Bayesian reconstruction of transmission and emission PET images. IEEE Trans Med Imag. 1994 Dec;13(4):687–701. [PubMed]
38. Reader AJ, Ally S, Bakatselos F, Manavaki R, Walledge R, Jeavons AP, Julyan PJ, Zhao S, Hastings DL, Zweit J. One-pass list-mode EM algorithm for high-resolution 3-D PET image reconstruction into large arrays. IEEE Trans Nucl Sci. 2002;49(3):693–699.
39. Byars LG, Sibomana M, Burbar Z, Jones J, Panin V, Barker WC, Liow J-S, Carson RE, Michel C. Variance reduction on randoms from coincidence histograms for the HRRT. IEEE NSS/MIC Conf Rec. 2005;5:2622–2626.
40. Casey ME, Hoffman EJ. Quantitation in positron emission tomography: 7. A technique to reduce noise in accidental coincidence measurements and coincidence efficiency calibration. J Comput Assist Tomogr. 1986;10(5):845–850. [PubMed]
41. Karp JS, Muehllehner G, Lewitt RM. Constrained Fourier space method for compensation of missing data in emission computed tomography. IEEE Trans Med Imag. 1988;7:21–25. [PubMed]
42. Montgomery AJ, Thielemans K, Mehta MA, Turkheimer F, Mustafovic S, Grasby PM. Correction of head movement on PET studies: Comparison of methods. J Nucl Med. 2006;47:1936–1944. [PubMed]
43. Kuwabara H, Rousset O, Kumar A, Ye W, Brasic JR, Wong DF. Evaluation of tissue homogeneity assumptions of partial volume correction for [11C]raclopride binding potential volumes. J Nucl Med. 2006;47:192P.
44. Mawlawi O, Martinez D, Slifstein M, Broft A, Chatterjee R, Hwang D, Huang Y, Simpson N, Ngo K, Van Heertum R, Laruelle M. Imaging human mesolimbic dopamine transmission with positron emission tomography: I. Accuracy and precision of D2 receptor parameter measurements in ventral striatum. J Cereb Blood Flow Metab. 2001;21:1034–1057. [PubMed]
45. Zhou Y, Weed M, Chen MK, Rahmim A, Ye W, Brasic J, Alexander M, Crabb A, McGlothan J, Ali F, Guilarte T, Wong D. Quantitative dopamine transporter imaging studies in nonhuman primates with GE Advance and high resolution research tomograph (HRRT) PET scanners. J Nucl Med. 2007;48:158P. [PubMed]