
Article sections

- Abstract
- I. Introduction
- II. Accurate Corrections for Scattered Events and Randoms
- III. Methods
- IV. Results and Discussion
- V. Conclusion
- References


IEEE Trans Med Imaging. Author manuscript; available in PMC 2010 August 12.


PMCID: PMC2920454

NIHMSID: NIHMS221342

Arman Rahmim, Member, IEEE, Katie Dinelle, Ju-Chieh Cheng, Member, IEEE, Mikhail A. Shilov, William P. Segars, Member, IEEE, Sarah C. Lidstone, Stephan Blinder, Olivier G. Rousset, Hamid Vajihollahi, Benjamin M. W. Tsui, Member, IEEE, Dean F. Wong, and Vesna Sossi, Member, IEEE

Arman Rahmim, Department of Radiology, Johns Hopkins University, 601 N. Caroline Street, Baltimore, MD 21205 USA;

Arman Rahmim: arahmim1@jhmi.edu

The publisher's final edited version of this article is available at IEEE Trans Med Imaging


With continuing improvements in the spatial resolution of positron emission tomography (PET) scanners, small patient movements during PET imaging become a significant source of resolution degradation. This work develops and investigates a comprehensive formalism for accurate motion-compensated reconstruction that at the same time remains computationally feasible in the context of high-resolution PET. In particular, this paper proposes an effective method to incorporate the presence of scattered and random coincidences in the context of motion (a method similarly applicable to various other motion correction schemes). The overall reconstruction framework takes into consideration missing projection data which are not detected due to motion and, additionally, incorporates information from *all* detected events, including those which fall outside the field-of-view following motion correction. The proposed approach has been extensively validated using phantom experiments as well as realistic simulations of a new mathematical brain phantom developed in this work, and results for a dynamic patient study are also presented.

Recent developments in 3-D positron emission tomography (PET) systems have enabled the spatial resolution to reach the 2–4 mm full-width at half-maximum (FWHM) range. With such improvements in spatial resolution, small patient movements during PET imaging become a significant source of resolution degradation.

Below, we first review correction methods in the literature that attempt to address the problem of rigid motion. Most existing work on this type of motion has been developed and implemented in brain PET imaging, since nonrigid (cardiac and respiratory) motion, dominant in whole-body and cardiac PET imaging, is absent in this case.

As a typical PET brain imaging session can last hours, it is not reasonable to expect a patient to remain motionless during this time. A number of head restraints may be used, such as thermoplastic masks or neoprene caps, which lower the amount of motion but do not eliminate it. Even with head restraints, typical translations in the range of 5–20 mm and rotations of 1–4° are observed,^{1} depending on the type of mask and the duration of scan (e.g., see [1], [2], and [3] in which a study of various types of head movements has been presented).

Methods to correct for patient movements were in the past largely based on correction of interscan movements. These (software-based) methods involved the division of a scan into a number of frames, followed by spatial registration of the reconstructed images using mathematical algorithms (e.g., see [4] and [5]). However, motion correction (MC) strategies in emission computed tomography that rely exclusively on emission data itself are inadequate for robust clinical usage, since 1) they depend on the quality of the scan data including noise characteristics and 2) they assume the activity distribution does not significantly change within the frames, whereas the frames are chosen *a priori.* More successful methods, explained next, rely on motion information provided by an external motion-tracking device (e.g., the Polaris system [3], which was also used in this work as elaborated in Section III).

Assuming accurate measurement of patient movement during the scan, a number of approaches to MC have been proposed:

- One method [6], [2] involves dividing the detected events into multiple acquisition frames (MAFs). Every time the displacement of the patient is measured to be larger than a specified threshold, the PET data are saved in a new frame. This is then followed by correction of the individually-reconstructed images of the MAFs, via rotation and translation, to compensate for the measured amount of motion (i.e., this is an image-driven approach). The major limitation of the MAF approach is that with a high motion threshold, motion within the frames is neglected, while lowering the motion threshold instead results in the acquisition of an increasing number of low-statistics frames to be reconstructed. This is problematic because iterative reconstruction algorithms are typically prone to exhibit image bias at poor statistics (due to the use of nonnegativity constraints) [7], [8].
- Another image-driven correction method, proposed by Menke *et al.* [9], involves postprocessing of the motion-blurred reconstructed images using deconvolution operators (whose shape is determined by the measured motion). This method, however, has not attracted much attention because the deconvolution process amplifies the noise in the PET data; furthermore, when the movements include significant rotation, spatially-variant deconvolution filters need to be employed, potentially introducing other artifacts [9].
- Another possible approach is to model the effect of motion in (the image-space component of) the system matrix of the EM algorithm, as proposed and implemented by Qiao *et al.* [10] for cardiac- and/or respiratory-gated PET data, wherein the nonrigid motion needs to be somehow extracted, e.g., using 4-D CT ([12] can be consulted for a comparison of implementation methodologies). In the present context of motion-contaminated non-gated data (particularly in brain imaging), the rigid motion can instead be measured directly using external tracking and incorporated^{2} in the system matrix, as investigated in [13]. In this context, however, this method is mainly of interest for scanners without list-mode acquisition capability (i.e., when the detected events cannot be individually compensated for motion), as it merely models the overall motion of the subject and would converge very slowly.
- Another approach consists of transforming the coordinates of the lines-of-response (LORs) along which individual events are measured to where they would have been measured had the object not moved [14] (this is an event-driven approach). The method was elaborated and implemented by Menke *et al.* [9]. In that work, however, due to hardware limitations, the motion-corrected LORs were corrected by normalization factors for the transformed LORs (as opposed to those corresponding to the original detector-pairs along which the events were detected). This normalization mismatch has been shown to result in artifacts [15].

Alternatively, to solve this problem, one requires a PET scanner either 1) equipped with more specialized hardware to achieve accurate on-the-fly normalization correction followed by LOR-transformation; e.g., see [16], or 2) capable of acquiring data in list-mode format, so that LOR corrections can be accurately performed post-acquisition; e.g., see [1].
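As a concrete illustration of the event-driven idea, the following minimal numpy sketch (the helper name `correct_lor` and the toy transform are our own, not from the paper) applies the inverse of a measured rigid transform to both endpoints of a detected LOR; a list-mode implementation would do this per event before binning or projection.

```python
import numpy as np

def correct_lor(p1, p2, T_motion):
    """Map a detected LOR back to where it would have been measured had
    the head not moved: apply the inverse of the measured rigid
    transform (4x4 homogeneous matrix) to both LOR endpoints."""
    T_inv = np.linalg.inv(T_motion)     # undo the measured motion
    p1_h = np.append(p1, 1.0)           # homogeneous coordinates
    p2_h = np.append(p2, 1.0)
    return (T_inv @ p1_h)[:3], (T_inv @ p2_h)[:3]

# Toy example: the head translated +5 mm along z during acquisition
T = np.eye(4)
T[2, 3] = 5.0
a, b = correct_lor(np.array([0.0, 0.0, 10.0]), np.array([0.0, 0.0, -10.0]), T)
```

The corrected endpoints generally no longer coincide with physical crystal positions, which is precisely what gives rise to the normalization and FOV issues discussed next.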

The above purely event-driven approach neglects two issues [17] which we shall refer to as *issue-1* and *issue-2*.

*Issue-1:* An event that would otherwise have been detected can exit the field-of-view (FOV) because of motion. This results in a loss of data along the relevant LORs, an effect that is not modeled by regular reconstruction methods.

*Issue-2:* An event that would normally not be detected (i.e., is not in the FOV) may fall within the FOV because of motion. Therefore, following MC, some detected events may correspond to no actual detector pairs.

These two effects can occur in two ways:

- Along the axial direction of the scanner, via translation (as shown in Fig. 1) or rotation (not shown).
- Along the transaxial direction (similarly via translation or rotation) for scanners with gaps in between the detectors (an example of this is the high-resolution research tomograph (HRRT) [18] which has an octagonal design with gaps in-between the heads). This effect is shown in Fig. 2 (for the case of translation).

In particular, Qi and Huesman [19] proposed a list-mode reconstruction approach that addressed both aforementioned issues via modeling of motion into the system matrix of the EM algorithm. In [17], a feasible framework (leading to very efficient computations of the motion-corrected sensitivity images) within the contexts of both histogram-mode and list-mode algorithms was developed; methods to additionally incorporate estimated randoms and scattered events are further developed in Section II.

Neglecting *issue-1* can produce image artifacts, as demonstrated by simulation [20], [19], [15] or experimentally [17]. On the other hand, one may note that neglecting *issue-2* should not result in image artifacts, since the patient is still sufficiently sampled by the existing detector pairs; nevertheless, 1) this can result in a loss of signal-to-noise ratio (SNR) in the images, since some of the measured signal with useful information is simply discarded;^{3} furthermore, 2) consideration of these additional events can be shown (see Section I-D) to yield a very useful mathematical property, resulting in a considerable reduction in the computational task of calculating the sensitivity correction factors in the EM algorithm (e.g., by a factor of ~30 for the HRRT scanner).

Below, we mention two other approaches that have been proposed in the literature (addressing *issue-1* only).

- The method by Thielemans *et al.* [21] involves scaling of motion-corrected sinogram bins in order to correct for events that were lost due to motion, where the scale factors are computed from the measured motion information.
- The method by Buehler *et al.* [15] similarly involves scaling the motion-corrected sinogram bins for counts that were lost due to motion, with the difference that this method precorrects the detected events by their normalization factors prior to histogramming, whereas in the former, normalization correction is performed following motion-corrected histogramming (by a calculated overall factor).

The above two methods (further compared to our approach at the end of Section I-C) have a few potential difficulties.

- Consideration of noise enhancement issues may be required when dividing the sinogram bins by small scale factors [21].
- They are computationally intense (similar to the work in [19]) since they require calculating the effect of motion for the entire duration of the scan along all LORs (thus problematic in the context of high-resolution scanners).
- They explicitly address *issue-1* but not *issue-2*, as they simply discard motion-corrected events which do not correspond to actual detector elements.

Next, following [17], an EM approach addressing the above issues is elaborated, with the inclusion of terms for randoms and scattered events; accurate methodologies to estimate these terms within the motion-compensation framework are developed in Section II.

Denoting
${f}_{j}^{m}$ as the activity (i.e., emission *rate*) in voxel *j* (*j* = 1,…, *J*) estimated at the *m*th iteration, and *p _{ij}* as the probability of an emission from voxel *j* being detected along LOR *i* (*i* = 1,…, *I*), the EM algorithm can be written as

$${f}_{j}^{m+1}=\frac{{f}_{j}^{m}}{T{\sum}_{i=1}^{I}{p}_{ij}}\sum _{i=1}^{I}{p}_{ij}\frac{{n}_{i}}{{\sum}_{b=1}^{J}{p}_{ib}{f}_{b}^{m}+{\overline{\mathcal{R}}}_{i}+{\overline{\mathcal{S}}}_{i}}$$

(1)

where *n _{i}* refers to the number of events detected along LOR *i*, and ${\overline{\mathcal{R}}}_{i}$ and ${\overline{\mathcal{S}}}_{i}$ denote the expected rates of random and scattered coincidences along LOR *i*.
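For illustration only, here is a minimal numpy sketch of one update of (1), with a dense toy system matrix and the randoms/scatter terms entering the denominator as in the text (all names and values are hypothetical):

```python
import numpy as np

def em_update(f, P, n, R_bar, S_bar, T=1.0):
    """One iteration of the EM update in (1): P[i, j] is the system
    matrix, n the counts per LOR, R_bar/S_bar the expected randoms and
    scatter rates, T the scan duration."""
    sens = T * P.sum(axis=0)                 # sensitivity: T * sum_i p_ij
    proj = P @ f + R_bar + S_bar             # expected counts per LOR i
    return (f / sens) * (P.T @ (n / proj))   # multiplicative update

# Toy problem: 3 LORs, 2 voxels
P = np.array([[0.5, 0.1],
              [0.2, 0.6],
              [0.3, 0.3]])
f = np.ones(2)
n = np.array([10.0, 12.0, 9.0])
f = em_update(f, P, n, R_bar=np.full(3, 0.5), S_bar=np.full(3, 0.5))
```

The remainder of this section is concerned with how *p _{ij}* and the sensitivity denominator must change in the presence of measured motion.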

We first divide a given scan (of duration *T*) into *Q* motion-intervals (*t* = 1,…, *Q*), each with a duration Δ*T _{t}* within which movements remain below a very small threshold.

An event that would have been detected along LOR *i* is detected along LOR ${i}^{\prime}={\mathcal{L}}_{t}(i)$ due to motion. In image-space, the effect of motion can be characterized by a transformation from voxel *j* to ${j}^{\prime}={\mathcal{M}}_{t}(j)$.

Next, referring to Fig. 3, we note that the time-varying (due to motion) probability
${\mathcal{P}}_{{i}^{\prime}j}^{t}$ of detecting an event generated during interval *t* from voxel *j* along LOR *i*′ (i.e., prior to MC; thus binned along LOR *i* following MC) is given by^{5}

$${\mathcal{P}}_{{i}^{\prime}j}^{t}={g}_{ij}{A}_{i}{N}_{{i}^{\prime}}{\delta}_{{i}^{\prime}}$$

(2)

wherein the system matrix has been decomposed into geometric *g _{ij}*, attenuation *A _{i}*, and normalization ${N}_{{i}^{\prime}}$ components, and wherein the term

$${\delta}_{{i}^{\prime}}=\{\begin{array}{ll}1,\hfill & \text{if}\phantom{\rule{0.16667em}{0ex}}\text{LOR}\phantom{\rule{0.16667em}{0ex}}{i}^{\prime}\phantom{\rule{0.16667em}{0ex}}\text{corresponds}\phantom{\rule{0.16667em}{0ex}}\text{to}\phantom{\rule{0.16667em}{0ex}}\text{a}\phantom{\rule{0.16667em}{0ex}}\text{detector}\phantom{\rule{0.16667em}{0ex}}\text{pair}\hfill \\ 0,\hfill & \text{otherwise}\hfill \end{array}$$

(3)

is used to denote whether or not the LOR *i*′ corresponds to a physical detector-pair. We next note that the *overall* probability *p _{ij}* of detecting an event generated at voxel *j* along a motion-corrected LOR *i* is given by

$${p}_{ij}=\sum _{t=1}^{Q}{\mathcal{P}}_{{i}^{\prime}j}^{t}\frac{\mathrm{\Delta}{T}_{t}}{T}\text{with}\phantom{\rule{0.16667em}{0ex}}{i}^{\prime}={\mathcal{L}}_{t}(i).$$

(4)

Combining (2) and (4), the overall system matrix can be written as

$${p}_{ij}={g}_{ij}{A}_{i}\sum _{t=1}^{Q}{N}_{{i}^{\prime}}{\delta}_{{i}^{\prime}}\frac{\mathrm{\Delta}{T}_{t}}{T}\text{with}\phantom{\rule{0.16667em}{0ex}}{i}^{\prime}={\mathcal{L}}_{t}(i).$$

(5)

Next, we define

$${\overline{N}}_{i}=\sum _{t=1}^{Q}{N}_{{i}^{\prime}}{\delta}_{{i}^{\prime}}\frac{\mathrm{\Delta}{T}_{t}}{T}\text{with}\phantom{\rule{0.16667em}{0ex}}{i}^{\prime}={\mathcal{L}}_{t}(i)$$

(6)

which can be thought of as the time-averaged detection-efficiency of the LORs that contribute to an LOR *i* upon MC, and using the substitution of (5) into (1), we arrive at

$${f}_{j}^{m+1}=\frac{{f}_{j}^{m}}{T{\overline{s}}_{j}}\sum _{i=1}^{I}{g}_{ij}\frac{{n}_{i}}{{\sum}_{b=1}^{J}{g}_{ib}{f}_{b}^{m}+{\scriptstyle \frac{{\overline{\mathcal{R}}}_{i}}{{A}_{i}{\overline{N}}_{i}}}+{\scriptstyle \frac{{\overline{\mathcal{S}}}_{i}}{{A}_{i}{\overline{N}}_{i}}}}$$

(7)

where

$${\overline{s}}_{j}=\sum _{i=1}^{I}{g}_{ij}{A}_{i}\sum _{t=1}^{Q}{N}_{{i}^{\prime}}{\delta}_{{i}^{\prime}}\frac{\mathrm{\Delta}{T}_{t}}{T}=\sum _{i=1}^{I}{g}_{ij}{A}_{i}{\overline{N}}_{i}.$$

(8)

It is worth noting (as implied by the present framework) that *n _{i}* in (7) is the number of events placed (and not necessarily detected) along LOR *i* following motion correction.

It is worth noting that, in our notation, one can show that the method introduced by Thielemans *et al*. [21] actually corresponds to precorrecting each motion-corrected sinogram bin *i* by the overall normalization correction factor
${\overline{N}}_{i}={\sum}_{t=1}^{Q}{N}_{{i}^{\prime}}{\delta}_{{i}^{\prime}}(\mathrm{\Delta}{T}_{t})/(T)$. By contrast, the method implemented by Buehler et al. [15] first precorrects each detected event along an LOR *i*′ by the corresponding normalization factor *N*_{i′}, followed by motion-corrected histogramming (i.e., into bin *i*). This is then followed by dividing the sinogram bins by the factor
${\sum}_{t=1}^{Q}{\delta}_{{i}^{\prime}}\mathrm{\Delta}{T}_{t}/T$ which simply represents the fraction of time an LOR *i* was in the FOV (i.e., passed through scanner detectors).
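The time-averaged efficiency (6) can be sketched as follows; `lor_maps[t][i]` stands in for the mapping *i*′ = 𝓛_t(*i*) into an extended LOR space, with a boolean `in_fov` playing the role of δ_{i′} (a toy setup of our own, not the authors' implementation):

```python
import numpy as np

def time_averaged_norm(lor_maps, N_ext, in_fov, dT, T):
    """Eq. (6): N_bar[i] = sum_t N[i'] * delta[i'] * dT[t]/T with
    i' = L_t(i). lor_maps[t][i] indexes an extended LOR space; in_fov
    plays the role of delta_{i'}."""
    N_bar = np.zeros(lor_maps.shape[1])
    for m, w in zip(lor_maps, dT / T):
        N_bar += np.where(in_fov[m], N_ext[m], 0.0) * w
    return N_bar

# Two equal motion intervals; extended-space bin 3 lies outside the FOV
lor_maps = np.array([[0, 1, 2],     # i' = L_0(i)
                     [1, 2, 3]])    # i' = L_1(i)
N_ext = np.array([1.0, 0.8, 0.9, 0.0])
in_fov = np.array([True, True, True, False])
N_bar = time_averaged_norm(lor_maps, N_ext, in_fov, np.array([2.0, 2.0]), 4.0)
```

The third bin illustrates *issue-1*: its efficiency is halved because it was outside the FOV for half the scan.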

On the other hand, the method investigated in this work: 1) does not have the potential problem of noise amplification as encountered in the aforementioned two precorrection schemes; 2) furthermore, as seen next, it can be accurately simplified to a form that is not burdened with the computationally-intense tasks of calculating the expressions ${\sum}_{t=1}^{Q}{N}_{{i}^{\prime}}{\delta}_{{i}^{\prime}}\mathrm{\Delta}{T}_{t}/T$ or ${\sum}_{t=1}^{Q}{\delta}_{{i}^{\prime}}\mathrm{\Delta}{T}_{t}/T$.

A number of possible (though not necessarily accurate) methods may be considered in order to simplify the computational complexity of expressions (6) or (8).

- By increasing the motion threshold below which movements are neglected, thus resulting in fewer motion frames. However, this approach is bound to introduce resolution degradation and is therefore not optimal.
- By application of the compression method in [15] in which neighboring LORs are grouped together. This method is also an approximation and can potentially result in inaccuracies.
- Via backprojection [in (8)] of only a randomized subset of the projection-space, as proposed by Carson *et al.* [22]. Qi and Huesman [23] have shown that the particular randomization method is critical to the accuracy of the estimated sensitivity factors, especially as inaccuracies are amplified in subsequent iterations of the EM algorithm; more accurate (yet more time-consuming) Monte Carlo randomization techniques have been proposed [24].

Alternatively, we are able to show [17] that an accurate, nonrandomized *and* considerably fast method is possible, if the data are precorrected for attenuation. This is obtained by first noting that instead of modeling motion in the LOR-domain using the operator ${\mathcal{L}}_{t}(\cdot)$, one can map the trajectory of the *image voxels* using an operator ${\mathcal{M}}_{t}(\cdot)$ such that ${j}^{\prime}={\mathcal{M}}_{t}(j)$ represents the position of a voxel *j* during interval *t*, as depicted in Fig. 3.

Next we note that if performing attenuation precorrection of the sinogram bins, the algorithm (7) is instead written as

$${f}_{j}^{m+1}=\frac{{f}_{j}^{m}}{T{\overline{s}}_{j}}\sum _{i=1}^{I}{g}_{ij}\frac{{n}_{i}/{A}_{i}}{{\sum}_{b=1}^{J}{g}_{ib}{f}_{b}^{m}+{\scriptstyle \frac{{\overline{\mathcal{R}}}_{i}}{{A}_{i}{\overline{N}}_{i}}}+{\scriptstyle \frac{{\overline{\mathcal{S}}}_{i}}{{A}_{i}{\overline{N}}_{i}}}}$$

(9)

where the sensitivity term is given by

$${\overline{s}}_{j}=\sum _{i=1}^{I}{g}_{ij}\sum _{t=1}^{Q}{N}_{{i}^{\prime}}{\delta}_{{i}^{\prime}}\frac{\mathrm{\Delta}{T}_{t}}{T}\text{with}\phantom{\rule{0.16667em}{0ex}}{i}^{\prime}={\mathcal{L}}_{t}(i).$$

(10)

It can then be shown [17] that

$${\overline{s}}_{j}=\sum _{t=1}^{Q}{s}_{{j}^{\prime}}\frac{\mathrm{\Delta}{T}_{t}}{T}\text{with}\phantom{\rule{0.16667em}{0ex}}{j}^{\prime}={\mathcal{M}}_{t}(j)$$

(11)

where
${s}_{j}={\sum}_{i=1}^{I}{g}_{ij}{N}_{i}$ is the conventional sensitivity correction factor. The important result is that for any voxel *j*, the term ${\overline{s}}_{j}$ may be evaluated by motion-interval weighting of conventional sensitivity factors evaluated along the trajectory ${j}^{\prime}={\mathcal{M}}_{t}(j)$ of the voxel.

The motion-compensated overall sensitivity correction factor ${\overline{s}}_{j}$ for a particular voxel *j* can be calculated in *image-space* by (Δ*T*_{t})/(*T*)-weighted evaluation of the conventional sensitivity term along the trajectory ${j}^{\prime}={\mathcal{M}}_{t}(j)$.

As an example, for a typical HRRT-scanner acquisition (span 3, maximum ring difference 67), the proposed image-space approach can result in a factor of ~30 speedup in the calculation of the sensitivity factors (the sinogram-size for a single frame is ~470 M, compared to only ~14 M voxels in the image-space).
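A sketch of the image-space shortcut (11) on a 1-D toy image, using precomputed nearest-neighbour voxel maps *j*′ = 𝓜_t(*j*) (hypothetical names; a real implementation would interpolate over 3-D rigid transforms):

```python
import numpy as np

def motion_avg_sensitivity(s, voxel_maps, dT, T):
    """Eq. (11): s_bar[j] = sum_t s[j'] * dT[t]/T with j' = M_t(j),
    i.e., evaluate the conventional sensitivity image along each
    voxel's trajectory and time-weight, with no LOR-space sums."""
    s_bar = np.zeros_like(s, dtype=float)
    for m, w in zip(voxel_maps, dT / T):
        s_bar += s[m] * w      # nearest-neighbour lookup of s at j'
    return s_bar

# 1-D toy image; two intervals: no motion, then a one-voxel shift
s = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
ident = np.arange(5)
shift = np.clip(ident + 1, 0, 4)    # j' = j + 1, clamped at the edge
s_bar = motion_avg_sensitivity(s, np.stack([ident, shift]),
                               np.array([3.0, 1.0]), 4.0)
```

The work per update is proportional to the number of voxels rather than the number of sinogram bins, which is the source of the ~30-fold speedup quoted above.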

In this section, we elaborate on a methodology to accurately incorporate corrections for scattered events and randoms in the presence of motion. It must be noted that the proposed approach is generally applicable for estimation of scattered events and randoms in other analytic and iterative reconstruction tasks and as such can be similarly applied within other MC schemes.

We propose that the term ${\overline{\mathcal{S}}}_{i}/{\overline{N}}_{i}$ in (9) can be simplified. In this work, we have used the Watson single scatter simulation (SSS) technique [25]. The method has the property that, instead of calculating the expected rate ${S}_{i}$ of scattered events *detected* along LOR *i*, it effectively computes an estimate uncorrected for detector normalization; i.e.,

$${S}_{i}={\stackrel{~}{S}}_{i}\times {N}_{i}$$

(12)

wherein ${\stackrel{~}{S}}_{i}$ is thus effectively the expected rate of scattered events *incident* along LOR *i* (i.e., prior to detector normalization).

We shall argue in this section that the term ${\overline{\mathcal{S}}}_{i}/{\overline{N}}_{i}$ in (9) can be replaced by this conventionally computed estimate ${\stackrel{~}{S}}_{i}$.

A depiction of the effect of scattering at a position *X* resulting in a scattered event *i* detected along detectors *A* and *B*.

$${S}_{i}={\int}_{{V}_{X}}dX({\epsilon}_{AX}{\widehat{\epsilon}}_{BX})\left(\frac{{\sigma}_{AX}{\sigma}_{BX}}{4{\pi}^{2}{R}_{AX}^{2}{R}_{BX}^{2}}\right){C}_{i,X}^{S}(\mu ,f)$$

(13)

where ${\epsilon}_{AX}$ and ${\widehat{\epsilon}}_{BX}$ denote the efficiencies of detectors *A* and *B* for the unscattered and scattered photons, respectively, ${\sigma}_{AX}$ and ${\sigma}_{BX}$ the geometric cross sections presented to the photons, ${R}_{AX}$ and ${R}_{BX}$ the distances from the scatter point *X* to the detectors, and ${C}_{i,X}^{S}(\mu ,f)$ a term that depends on the attenuation (*μ*) and emission (*f*) distributions.

Next, noting that the normalization term *N _{i}* for the nonscattered true events detected along the same detector pair *AB* can be written as

$${N}_{i}=\frac{{\epsilon}_{AB}^{2}{\sigma}_{AB}^{2}}{4{\pi}^{2}{R}_{AB}^{2}}$$

(14)

where
${\epsilon}_{AB}^{2}$ and
${\sigma}_{AB}^{2}$ represent the detector efficiencies and geometric cross sections for a true event detected along *AB* (${R}_{AB}$ denotes the separation of the detectors). Defining

$${N}_{i,X}^{S}=\frac{{\epsilon}_{AX}{\widehat{\epsilon}}_{BX}{\sigma}_{AX}{\sigma}_{BX}}{4{\pi}^{2}{R}_{AX}^{2}{R}_{BX}^{2}}$$

(15)

as the corresponding overall efficiency for detection of scattered events along LOR *i* (scattered at *X*), and referring to (13), it is easy to see that the term ${\stackrel{~}{S}}_{i}$ in (12) can be written as

$${\stackrel{~}{S}}_{i}={\int}_{{V}_{X}}dX\frac{{N}_{i,X}^{S}}{{N}_{i}}{C}_{i,X}^{S}(\mu ,f).$$

(16)

Defining
${S}_{{i}^{\prime}}^{det}(t)$ as the expected rate of scattered events detected along LOR *i*′ at time interval *t* [therefore binned along LOR
$i={\mathcal{L}}_{t}^{-1}({i}^{\prime})$], we note that ${\overline{\mathcal{S}}}_{i}$ in (9), which represents the overall rate of detected scattered events binned along LOR *i*, can be expressed as

$${\overline{\mathcal{S}}}_{i}=\sum _{t=1}^{Q}{S}_{{i}^{\prime}}^{det}(t)\frac{\mathrm{\Delta}{T}_{t}}{T}\text{with}\phantom{\rule{0.16667em}{0ex}}{i}^{\prime}={\mathcal{L}}_{t}(i).$$

(17)

Next, similar to the definition of ${\stackrel{~}{S}}_{i}$ in the expression (12), we define
${S}_{{i}^{\prime}}^{\text{inc}}(t)$ such that

$${S}_{{i}^{\prime}}^{det}(t)={S}_{{i}^{\prime}}^{\text{inc}}(t)\times {N}_{{i}^{\prime}}{\delta}_{{i}^{\prime}}$$

(18)

wherein
${S}_{{i}^{\prime}}^{\text{inc}}(t)$ is effectively the expected rate of scattered events incident along LOR *i*′ at time interval *t*, and wherein ${\delta}_{{i}^{\prime}}$ accounts for LORs falling outside the FOV. Similar to (16), one can write

$${S}_{{i}^{\prime}}^{\text{inc}}(t)={\int}_{{V}_{X}}dX\frac{{N}_{{i}^{\prime},{X}^{\prime}}^{S}}{{N}_{{i}^{\prime}}}{C}_{{i}^{\prime},{X}^{\prime}}^{S}({\mu}^{\prime},{f}^{\prime})$$

(19)

where the primed quantities indicate the effect of motion transformation of the scattering volume at time interval *t*.

At this stage, we shall argue that expression (19) for
${S}_{{i}^{\prime}}^{\text{inc}}(t)$ is nearly equivalent to the conventionally computed scatter term ${\stackrel{~}{S}}_{i}$ as given by (16). To proceed, we will assume the following.

The motion operator ${\mathcal{L}}_{t}(\cdot)$ applied to true-coincidence LORs is similarly applicable to modeling the motion of scattered-event LORs. This assumption is not strictly true [26] in cases where motion results in a significant change in the relation between the object and the scanner (e.g., see Fig. 6). However, given realistic amounts of motion and the smooth nature of the scattering distribution, one may assume this to be the case, and that, therefore, the relation of *i*′ with respect to the attenuation *μ*′ and emission *f*′ distributions remains nearly the same, and therefore

A depiction of the potential effect of considerable motion on changing the relation of the scattering volume and the detection geometry: in this example, a large translation of the scattering volume has resulted in a non-similar transformation of the **...**

$${C}_{{i}^{\prime},{X}^{\prime}}^{S}({\mu}^{\prime},{f}^{\prime})\approx {C}_{i,X}^{S}(\mu ,f)$$

(20)

While the absolute efficiency of detection ${N}_{{i}^{\prime},{X}^{\prime}}^{S}$ certainly varies with motion and is taken into account in this work, the relative detection efficiencies between the scattered and true events are nearly preserved with motion; i.e.,

$$\frac{{N}_{{i}^{\prime},{X}^{\prime}}^{S}}{{N}_{{i}^{\prime}}}\approx \frac{{N}_{i,X}^{S}}{{N}_{i}}.$$

(21)

This is effectively true given the wide-range of angles/positions from which scattered events contribute to an LOR.

Combining (16), (19), (20), and (21), one arrives at the intuitively appealing result

$${S}_{{i}^{\prime}}^{\text{inc}}(t)\approx {\stackrel{~}{S}}_{i}\phantom{\rule{0.16667em}{0ex}}\text{with}\phantom{\rule{0.16667em}{0ex}}{i}^{\prime}={\mathcal{L}}_{t}(i)$$

(22)

where ${\stackrel{~}{S}}_{i}$, in this work, is obtained using the standard Watson single scatter simulation (SSS) method [25] (on motion-corrected sinograms). Combining (17), (18), and (22), it then follows that

$${\overline{\mathcal{S}}}_{i}\approx \sum _{t=1}^{Q}\left[{\stackrel{~}{S}}_{i}\times {N}_{{i}^{\prime}}{\delta}_{{i}^{\prime}}\frac{\mathrm{\Delta}{T}_{t}}{T}\right]={\stackrel{~}{S}}_{i}{\overline{N}}_{i}$$

(23)

where we have used the definition (6) for ${\overline{N}}_{i}$. This gives a very intuitive picture: the overall rate ${\overline{\mathcal{S}}}_{i}$ of detected scattered events binned along an LOR *i* is simply the conventional (incident) scatter estimate ${\stackrel{~}{S}}_{i}$ scaled by the time-averaged detection efficiency ${\overline{N}}_{i}$.
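The factorization (23) can be checked numerically on toy values: accumulating S̃_i N_{i′} δ_{i′} ΔT_t/T over intervals, as in (17) and (18), reproduces S̃_i N̄_i exactly (hypothetical names and values, our own sketch):

```python
import numpy as np

def S_bar_direct(S_tilde, lor_maps, N_ext, in_fov, dT, T):
    """Eqs. (17)-(18): accumulate S~_i * N_{i'} * delta_{i'} * dT_t/T
    over the motion intervals."""
    out = np.zeros_like(S_tilde)
    for m, w in zip(lor_maps, dT / T):
        out += S_tilde * np.where(in_fov[m], N_ext[m], 0.0) * w
    return out

S_tilde = np.array([2.0, 1.0, 0.5])          # incident scatter estimate
lor_maps = np.array([[0, 1, 2], [1, 2, 3]])  # i' = L_t(i), t = 0, 1
N_ext = np.array([1.0, 0.8, 0.9, 0.0])
in_fov = np.array([True, True, True, False])
dT, T = np.array([2.0, 2.0]), 4.0
S_bar = S_bar_direct(S_tilde, lor_maps, N_ext, in_fov, dT, T)
# N_bar of (6), for comparison with the factorized form (23)
N_bar = sum(np.where(in_fov[m], N_ext[m], 0.0) * w
            for m, w in zip(lor_maps, dT / T))
```

Here `S_bar` equals `S_tilde * N_bar` element-by-element, since the incident estimate does not vary across intervals under assumptions (20) and (21).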

We begin by noting that, similar to (17), the overall expected rate of random coincidences binned along LOR *i* can be written as

$${\overline{\mathcal{R}}}_{i}=\sum _{t=1}^{Q}{R}_{{i}^{\prime}}^{det}(t)\frac{\mathrm{\Delta}{T}_{t}}{T}\text{with}\phantom{\rule{0.16667em}{0ex}}{i}^{\prime}={\mathcal{L}}_{t}(i)$$

(24)

where
${R}_{{i}^{\prime}}^{det}(t)$ denotes the expected rate of random events detected along LOR ${i}^{\prime}={\mathcal{L}}_{t}(i)$ at time interval *t* (therefore binned along LOR *i*).

Next, we note that ordinarily, unlike analytic calculations for expected scatter counts, the expected random counts can be obtained more directly, using either the measured singles count rates or smoothed delayed coincidences. As such, the term ${R}_{{i}^{\prime}}^{det}(t)$ can be obtained directly from motion-corrected singles or delayed coincidences, and therefore the expression ${\overline{\mathcal{R}}}_{i}/{\overline{N}}_{i}$ in (9) can in principle be calculated.

However, 1) the task of computing ${\overline{\mathcal{R}}}_{i}$ remains a computationally intense one, and 2) MC binning of delayed coincidences, and especially of singles counts (which arrive from different orientations), may not be feasible. As such, we have pursued an alternative, more practical approach.

First, similar to (18), we define ${R}_{{i}^{\prime}}^{\text{inc}}(t)$ such that

$${R}_{{i}^{\prime}}^{det}(t)={R}_{{i}^{\prime}}^{\text{inc}}(t)\times {N}_{{i}^{\prime}}{\delta}_{{i}^{\prime}}$$

(25)

wherein
${R}_{{i}^{\prime}}^{\text{inc}}(t)$ effectively^{8} denotes the rate of random events incident along an LOR *i*′ at time interval *t*.

At this stage, we make the following simplifying approximation: due to the very broad nature of random contributions compared to realistic amounts of motion, we assume that motion does not alter the rate of randoms *incident* along an LOR (i.e.,
${R}_{i}^{\text{inc}}(t)$ is a constant in time), and that the incident randoms along any LOR *i* and its motion-transformed LOR ${i}^{\prime}={\mathcal{L}}_{t}(i)$ are nearly the same (i.e.,
${R}_{i}^{\text{inc}}(t)\approx {R}_{{i}^{\prime}}^{\text{inc}}(t)$). We summarize this by writing

$${\stackrel{~}{R}}_{i}\equiv {R}_{i}^{\text{inc}}(t)\approx {R}_{{i}^{\prime}}^{\text{inc}}(t)\phantom{\rule{0.16667em}{0ex}}\text{with}\phantom{\rule{0.16667em}{0ex}}{i}^{\prime}={\mathcal{L}}_{t}(i)$$

(26)

wherein ${\stackrel{~}{R}}_{i}$ is introduced to emphasize the time/motion-independence of
${R}_{i}^{\text{inc}}(t)$. One can then rewrite (25) as

$${R}_{i}^{det}(t)\approx {\stackrel{~}{R}}_{i}\times {N}_{i}{\delta}_{i}$$

(27)

from which we conclude that
${R}_{i}^{det}={R}_{i}^{det}(t)$ is nearly a constant in time, and is simply given by the overall mean rate of random coincidences detected along any LOR *i*, as ordinarily computed in PET imaging applications.

Furthermore, combining (24), (25), and (26), one arrives at

$${\overline{\mathcal{R}}}_{i}\approx \sum _{t=1}^{Q}\left[{\stackrel{~}{R}}_{i}\times {N}_{{i}^{\prime}}{\delta}_{{i}^{\prime}}\frac{\mathrm{\Delta}{T}_{t}}{T}\right]={\stackrel{~}{R}}_{i}{\overline{N}}_{i}$$

(28)

It then follows that, considering the above two equations, for LORs within the FOV (i.e., *δ _{i}* = 1)

$${\stackrel{~}{R}}_{i}\approx \frac{{\overline{\mathcal{R}}}_{i}}{{\overline{N}}_{i}}\approx \frac{{R}_{i}^{det}}{{N}_{i}}\text{for}\phantom{\rule{0.16667em}{0ex}}{\delta}_{i}=1.$$

(29)

In other words, for binned LORs *i* that are inside the FOV (${\delta}_{i}\ne 0$), the expression ${\overline{\mathcal{R}}}_{i}/{\overline{N}}_{i}$ can be estimated by the ordinary randoms term ${R}_{i}^{det}/{N}_{i}$, as computed in conventional PET imaging.

Nevertheless, for motion-compensated events that fall outside the FOV (i.e., ${\delta}_{i}=0$), it is not possible to extract the incident random rates (and therefore ${\overline{\mathcal{R}}}_{i}/{\overline{N}}_{i}$) directly from the measurements. Defining instead

$${\stackrel{~}{R}}_{i}=\{\begin{array}{ll}{\scriptstyle \frac{{R}_{i}^{det}}{{N}_{i}}}\hfill & {\delta}_{i}=1\hfill \\ \text{extrap}\left\{{\scriptstyle \frac{{R}_{i}^{det}}{{N}_{i}}}\right\}\hfill & {\delta}_{i}=0\hfill \end{array}$$

(30)

in relation to (29), we finally arrive at the estimation

$$\frac{{\overline{\mathcal{R}}}_{i}}{{\overline{N}}_{i}}\approx {\stackrel{~}{R}}_{i}\phantom{\rule{0.16667em}{0ex}}\text{for}\phantom{\rule{0.16667em}{0ex}}\text{all}\phantom{\rule{0.16667em}{0ex}}i.$$

(31)

Subsequently, we may replace the expression ${\overline{\mathcal{R}}}_{i}/{\overline{N}}_{i}$ in (9) by ${\stackrel{~}{R}}_{i}$.
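A toy sketch of (30): in-FOV bins use the ordinary estimate R_i^det/N_i, while out-of-FOV bins are filled by a simple stand-in extrapolation (nearest in-FOV neighbour here; the paper does not specify the actual extrap{·} scheme, so this choice is ours):

```python
import numpy as np

def randoms_incident_rate(R_det, N, delta):
    """Eq. (30): R~_i = R_det_i / N_i where delta_i = 1; where
    delta_i = 0 the value is filled by a stand-in extrapolation
    (nearest in-FOV neighbour along the bin index)."""
    R_tilde = np.zeros_like(R_det, dtype=float)
    inside = delta == 1
    R_tilde[inside] = R_det[inside] / N[inside]
    idx_in = np.flatnonzero(inside)
    for i in np.flatnonzero(~inside):
        j = idx_in[np.argmin(np.abs(idx_in - i))]
        R_tilde[i] = R_tilde[j]    # copy from nearest in-FOV bin
    return R_tilde

R_tilde = randoms_incident_rate(
    np.array([4.0, 3.0, 0.0, 0.0]),   # detected randoms rate per bin
    np.array([2.0, 1.5, 1.0, 1.0]),   # normalization factors
    np.array([1, 1, 0, 0]),           # delta_i: in-FOV indicator
)
```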

Combining results of Sections I-D, II-A, and II-B [(11), (23), and (31) respectively], the motion-compensated EM algorithm (9) can be written in the final form

$${f}_{j}^{m+1}=\frac{{f}_{j}^{m}}{T{\sum}_{t=1}^{Q}{s}_{{j}^{\prime}}{\scriptstyle \frac{\mathrm{\Delta}{T}_{t}}{T}}}\sum _{i=1}^{I}{g}_{ij}\frac{{n}_{i}/{A}_{i}}{{\sum}_{b=1}^{J}{g}_{ib}{f}_{b}^{m}+{\scriptstyle \frac{{\stackrel{~}{R}}_{i}}{{A}_{i}}}+{\scriptstyle \frac{{\stackrel{~}{S}}_{i}}{{A}_{i}}}}$$

(32)

wherein ${s}_{{j}^{\prime}}$ is the conventional sensitivity factor evaluated along the voxel trajectory ${j}^{\prime}={\mathcal{M}}_{t}(j)$, as given by (11).

The direct list-mode reconstruction approach has a number of potential advantages over histogram-mode methods [29], including, within the present context, the inherent advantage of naturally incorporating all detected events, thus avoiding the burden of extending the sinogram-space in histogram-mode methods (as needed to allow binning of motion-corrected events that are out of the FOV).

Building on the work of Parra and Barrett [27], we have elaborated the derivation of the motion-compensated list-mode EM algorithm elsewhere (Appendix I of [17]), aside from scatter and random coincidence considerations. Using *l _{k}* to denote the LOR along which the *k*th event (*k* = 1,…, *K*) is detected, the algorithm can be written as

$${f}_{j}^{m+1}=\frac{{f}_{j}^{m}}{T{\stackrel{\u02d8}{s}}_{j}}\sum _{k=1}^{K}{\mathcal{P}}_{{l}_{k}j}^{t}\frac{1}{{\sum}_{b=1}^{J}{\mathcal{P}}_{{l}_{k}b}^{t}{f}_{b}^{m}}$$

(33)

where

$${\stackrel{\u02d8}{s}}_{j}=\sum _{t=1}^{Q}\sum _{l}{\mathcal{P}}_{lj}^{t}\frac{\mathrm{\Delta}{T}_{t}}{T}$$

(34)

is the overall probability for an event generated anytime at voxel *j* being detected anywhere. Next, we make the following notes and observations.

- In contrast to the histogram-mode EM algorithm (1), the list-mode algorithm (33) appears different in that the former employs *overall* (time/motion-weighted) probability elements *p _{ij}* whereas the latter includes *time/motion-varying* probability elements^{9} ${\mathcal{P}}_{lj}^{t}$. Similarly, including time/motion-varying random and scatter terms, one can write (using *l* instead of *l _{k}* for convenience)

$${f}_{j}^{m+1}=\frac{{f}_{j}^{m}}{T{\stackrel{\u02d8}{s}}_{j}}\sum _{k=1}^{K}{\mathcal{P}}_{lj}^{t}\frac{1}{{\sum}_{b=1}^{J}{\mathcal{P}}_{lb}^{t}{f}_{b}^{m}+{R}_{l}^{det}(t)+{S}_{l}^{det}(t)}$$

(35)

wherein ${R}_{l}^{det}(t)$ and ${S}_{l}^{det}(t)$ (Sections II-A and II-B) denote the expected randoms and scattered coincidence events detected along an LOR *l* during interval *t*. A similar difference, as explained above, exists between these *time-varying* terms and the *overall* random and scatter coincidence terms ${\overline{\mathcal{R}}}_{i}$ and ${\overline{\mathcal{S}}}_{i}$ used in the histogram-mode approach,^{10} though, as we shall see shortly, the list-mode and histogram-mode algorithms are very similar, except for the inherent difference in the way the events are considered.
- If the data are precorrected for attenuation, the system matrix can be written as

$${\mathcal{P}}_{lj}^{t}=\mathcal{G}(\mathbf{i},j){N}_{l}{\delta}_{l}\phantom{\rule{0.16667em}{0ex}}\text{with}\phantom{\rule{0.16667em}{0ex}}\mathbf{i}={\mathcal{L}}_{t}^{-1}(l)$$

(36)

where **i** is used to denote the motion-corrected LOR, and $\mathcal{G}(\mathbf{i},j)$ denotes the geometric probability (very similar to *g _{ij}* in histogram-mode reconstruction), written differently to emphasize the ability to take into account *continuous* motion-compensated LOR coordinates $\mathbf{i}={\mathcal{L}}_{t}^{-1}(l)$ as the list-mode events are motion-corrected and projected one-by-one.

Now, since no motion-corrected event is discarded in the natural list-mode approach, one is able to show [17], very similarly to the approach in Section I-D, that the sensitivity term reduces to

$$\breve{s}_{j}=\sum_{t=1}^{Q}s_{j'}\frac{\Delta T_{t}}{T}\quad\text{with}\quad j'=\mathcal{M}_{t}(j)$$

(37)

as similarly given by (11), where *s_j* = Σ_{i} *p_{ij}* denotes the ordinary (motion-free) sensitivity of voxel *j*. Substituting (36) and (37) into (35) then yields

$$f_{j}^{m+1}=\frac{f_{j}^{m}}{T\breve{s}_{j}}\sum_{k=1}^{K}\frac{\mathcal{G}(\mathbf{i}_{k},j)\,1/A_{i}}{\sum_{b=1}^{J}\mathcal{G}(\mathbf{i}_{k},b)f_{b}^{m}+\frac{R_{l}^{det}(t)}{A_{i}N_{l}}+\frac{S_{l}^{det}(t)}{A_{i}N_{l}}}$$

(38)

with $\mathbf{i}_{k}=\mathcal{L}_{t}^{-1}(l_{k})$, where *l_k* is the LOR along which the *k*th event is detected.

Similar to the term $\tilde{S}_{i}$ in the histogram-mode algorithm (9), the term $S_{l}^{det}(t)/N_{l}$ in the above list-mode algorithm (38) can also be seen to essentially estimate the expected rate of scatter coincidences incident along the (motion-corrected) LOR:

$$\frac{S_{l}^{det}(t)}{N_{l}}\approx\frac{\overline{\mathcal{S}}_{i}}{\overline{N}_{i}}\approx\tilde{S}_{i}\quad\text{with}\quad i=\mathcal{L}_{t}^{-1}(l).$$

(39)

Within the framework of Section II-B, a very similar relation exists for random coincidences; combining (25), (26), and (28), one arrives at

$$\frac{R_{l}^{det}(t)}{N_{l}}\approx\frac{\overline{\mathcal{R}}_{i}}{\overline{N}_{i}}\approx\tilde{R}_{i}\quad\text{with}\quad i=\mathcal{L}_{t}^{-1}(l)$$

(40)

wherein $R_{l}^{det}(t)/N_{l}$ and $\tilde{R}_{i}$ both estimate the average rate of random events incident along an LOR. Substituting (39) and (40) into (38), one arrives at the final algorithm

$$f_{j}^{m+1}=\frac{f_{j}^{m}}{T\sum_{t=1}^{Q}s_{j'}\frac{\Delta T_{t}}{T}}\sum_{k=1}^{K}\frac{\mathcal{G}(\mathbf{i}_{k},j)\,1/A_{i}}{\sum_{b=1}^{J}\mathcal{G}(\mathbf{i}_{k},b)f_{b}^{m}+\frac{\tilde{R}_{i}}{A_{i}}+\frac{\tilde{S}_{i}}{A_{i}}}$$

(41)

wherein, similarly to the histogram-mode counterpart (32), *s_{j'}* denotes the sensitivity of the transformed voxel *j'* = $\mathcal{M}_{t}(j)$, as in (37).
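To make the image-space sensitivity computation in (37) concrete, the following minimal sketch forms the motion-averaged sensitivity by mapping each voxel through the motion of every interval and time-weighting the result. It is a hypothetical 1-D illustration: the rigid voxel transform $\mathcal{M}_t$ is reduced to an integer shift, and voxels mapped outside the FOV receive zero sensitivity.

```python
def motion_averaged_sensitivity(s, shifts, durations):
    """Motion-averaged sensitivity, cf. eq. (37):
    s_breve[j] = sum_t s[M_t(j)] * dT_t / T.
    Here M_t is modeled as an integer voxel shift (a stand-in for the
    full rigid transform); voxels mapped outside the FOV contribute 0."""
    T = sum(durations)
    n = len(s)
    s_breve = [0.0] * n
    for shift, dT in zip(shifts, durations):
        for j in range(n):
            jp = j + shift                       # j' = M_t(j)
            contrib = s[jp] if 0 <= jp < n else 0.0
            s_breve[j] += contrib * dT / T
    return s_breve
```

Only one such image-space average is needed per dataset, which is what makes a very large number of motion intervals *Q* practical.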

Data were acquired on the second generation high resolution research tomograph (HRRT) [31]. The detector heads in the octagonal design consist of a double 10-mm layer of LSO/LYSO, for a total of 119,808 detector crystals (crystal size 2.1×2.1×10 mm^{3}). The total number of possible LORs is 4.486×10^{9}.

Collection of motion data was carried out using a Polaris motion tracking system (Northern Digital Inc., Waterloo, ON, Canada) [3]. This system uses an infrared signal to track a small tool consisting of four retro-reflective spheres attached to a plastic plate. The Polaris was mounted approximately 1 m from the rear of the scanner. Calibration of the Polaris frame of reference to that of the tomograph was achieved via a series of simultaneous Polaris and transmission scan measurements [15]. The Polaris tool was placed in a static position within the scanner, and the coordinates of its center as well as the orientation of the tool (translation and quaternion) were determined using the Polaris, allowing one to extract 3-D positions of the four individual spheres (given knowledge of their relative positions). During this time, a 10-min transmission scan of the tool was obtained. From the reconstructed transmission image the coordinates of each tracking sphere were determined by first isolating (in the image) the spheres from the plate, and then fitting a 3-D Gaussian to the center of each sphere. This resulted in four sets of paired coordinates. The entire process was repeated five times to increase the accuracy of the calibration. To determine the transformation between the two frames a direct least squares method was employed [32]. This transformation was later applied when calculating the corrections to be made to compensate for motion during image reconstruction.
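The frame-to-frame calibration step above can be sketched as follows. The paper uses the closed-form quaternion solution of [32]; the sketch below instead uses an SVD-based (Kabsch) solution, which minimizes the same least-squares criterion. The point sets and their pairing are hypothetical.

```python
import numpy as np

def fit_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) such that Q ~ R @ P + t,
    given paired (N, 3) point sets P (e.g., Polaris sphere coordinates)
    and Q (e.g., the same spheres localized in the transmission image)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

Averaging over several repeated measurements, as done in the calibration described above, further stabilizes the estimated transform.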

Motion data was recorded using a modified version of the open source code provided by the Polaris manufacturer. Each motion measurement (quaternion and translation) was output in binary format with a time stamp that was synchronized to the HRRT acquisition PC using a common time server.
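Each tracker record is a quaternion plus a translation; applying such a pose to a point (e.g., an LOR endpoint during event-by-event motion correction) can be sketched as below. This is a generic rigid-transform sketch, not the HRRT pipeline itself, and the (w, x, y, z) component order is an assumption about the tracker output.

```python
import math

def quat_to_matrix(q):
    """Convert a unit quaternion q = (w, x, y, z) to a 3x3 rotation
    matrix (row-major nested lists)."""
    w, x, y, z = q
    n = math.sqrt(w*w + x*x + y*y + z*z)
    w, x, y, z = w/n, x/n, y/n, z/n   # guard against drift from unit norm
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def transform_point(q, t, p):
    """Apply the tracked pose (rotation q, translation t) to a point p."""
    R = quat_to_matrix(q)
    return [sum(R[i][k] * p[k] for k in range(3)) + t[i] for i in range(3)]
```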

The motion threshold (below which motion is considered negligible) was set to one tenth of the resolution (2.5 mm) of the HRRT scanner. In fact, a very high number of motion intervals (*Q*) does not noticeably increase the computation, primarily because *Q* only affects the number of image-space motion-averages of the sensitivity image [see (41)], which are computed very efficiently, and only once per dataset (at 1.4 s per motion interval on a 2.6 GHz dual-processor computer). It is therefore fully practical to incorporate effectively continuous motion in the proposed formalism.
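The thresholding of tracked motion into intervals can be sketched as follows, assuming (hypothetically) a stream of scalar displacements of a representative head point: a new interval begins whenever the displacement from the current interval's reference pose exceeds the threshold.

```python
def segment_motion(positions, threshold=0.25):
    """Split a tracked motion stream into intervals.  `positions` is a
    list of scalar displacements (mm) of a representative point; the
    default threshold of 0.25 mm is one tenth of the ~2.5-mm HRRT
    resolution.  Returns (start, end_exclusive) index pairs."""
    intervals = []
    ref, start = positions[0], 0
    for i, p in enumerate(positions[1:], 1):
        if abs(p - ref) > threshold:   # motion exceeded -> new interval
            intervals.append((start, i))
            ref, start = p, i
    intervals.append((start, len(positions)))
    return intervals
```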

An elliptical contrast phantom (3.2 L) was used, containing hot (31 mL) and cold (13 mL) spheres inserted in a background. The phantom was filled with a total F-18 activity of 1.047 mCi (resulting in initial prompts and randoms rates of 267 kcps and 27 kcps), while the hot/background ratio was 4.77. An elaborate motion study was performed, involving a number of distinct movements (summarized in Table I) with respect to the position and orientation of the phantom at the time origin. The movements were applied at the very beginning of each indicated time interval, and the applied movement protocol was monitored accurately using the motion tracking device.

Our emphasis in this work has been to compare and validate MC methods under realistic imaging scenarios. The tools developed in this regard have been two-fold.

Voxelized phantoms are problematic in that they are fixed to a particular spatial resolution, and also result in *interpolation errors* when modeling motion (e.g., the volume of a voxelized object may not be conserved after rotation). Alternatively, a mathematical brain phantom was developed, containing continuous structures and thus avoiding the need for interpolations when introducing motion, as depicted in Fig. 7. The brain phantom was constructed using subdivision surfaces [33]. Subdivision surfaces can be used to efficiently model structures with an arbitrary topological type, such as the brain, skull, muscle tissue, and vasculature. Surfaces were modeled based on a segmented MRI dataset of a normal subject. The dataset consisted of 181 slices of the brain (pixel-sizes/slice-widths of 1.0 mm). One-hundred structures in the brain were identified. A software application was written using the Visualization Toolkit (VTK) [34] to create 3-D polygon surfaces. The VTK marching cubes algorithm was first used to create an initial polygon model for each structure. The polygon model was then optimized using the mesh optimization and smoothing routines of the VTK software.

We used a novel simulation technique [35] combining two powerful and well-validated Monte-Carlo codes, SimSET and GATE. The method takes advantage of the shorter simulation times for photon propagation inside a digital phantom using SimSET as compared to GATE. The histories of all photons, single and coincidence, escaping from the phantom are stored in a file which serves as the input to GATE. The total simulation times using the new technique are about 12 times faster, with nearly similar accuracy [35]. We used the design parameters and the geometry of the second generation HRRT scanner, as depicted in Fig. 8. In this work, movement of the patient between three positions was simulated, each shifted with respect to the others by 4.6, 4.6, and 7.0 mm in the *x* (horizontal), *y* (vertical), and *z* (axial) directions, respectively, consistent with actually observed amounts of motion. Simulations involving 32, 77, and 209 M detected events were considered.

Phantom and simulated data for the HRRT were reconstructed directly from the list-mode data ([28], [29] should be consulted for details) with a span^{11} of 9, a maximum ring difference of 67, and using 32 subsets (which are time-based in list-mode reconstruction [29]). Throughout this work, resolution modeling has been used in the reconstructions (via the inclusion of an experimental model of spatial resolution in the EM algorithm [36]). The scatter term $\tilde{S}_{i}$ was computed using the single scatter simulation technique [25] on motion-compensated sinograms, as elaborated in Section II, and the randoms term $R_{i}^{det}$ [also used to obtain $\tilde{R}_{i}$] was estimated from the delayed-coincidence measurements using a variance-reduction technique^{12} [39].

Five reconstruction scenarios were considered: 1) no motion (with matched statistics); 2) motion with no MC; 3) purely LOR-driven compensation (MC Method 1), i.e., MC reconstruction via mere correction of the LOR coordinates of the detected events given the motion information; 4) the MC method proposed in [17], which additionally incorporates motion-averaging of the sensitivity factors but does not include scatter compensation and performs only simple random subtraction (MC Method 2); and 5) the proposed accurate MC algorithm (41), incorporating appropriate overall modeling of motion. In Methods 1 and 2, non-MC scatter and random terms were included (as is commonly done in the OP-OSEM algorithm [29]), while Methods 3 and 5 incorporated the scatter and random correction methods proposed in this work.

Additionally, we note that attenuation was modeled in the system matrix for Methods 1–3, as is commonly the case, while the data were precorrected in Methods 4 and 5 as mathematically required by our methodology. It is predicted (and verified next) that MC Method 2 outperforms MC Method 1 qualitatively (as it models sensitivity variation due to motion) but performs poorly quantitatively due to lack of consideration of scatter correction.

Hot and cold contrasts were estimated following approximately the NEMA NU 2001 protocol. Defining *C_H*, *C_C*, and *C_B* as the measured mean activity concentrations in the hot-sphere, cold-sphere, and background regions, respectively, the cold percent contrast *Q_C* was calculated as

$${Q}_{C}=\left(1-\frac{{C}_{C}}{{C}_{B}}\right)\times 100\%$$

(42)

while the hot percent contrast *Q _{H}* was calculated using

$${Q}_{H}=\frac{{C}_{H}/{C}_{B}-1}{{A}_{H}/{A}_{B}-1}\times 100\%$$

(43)

where *A _{H}/A_{B}* is the actual concentration ratio between the hot and background regions (measured to be 4.77). The percent noise (standard deviation/mean × 100%) was calculated for a large background ROI.
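The figures of merit (42) and (43) and the percent-noise definition can be sketched directly (the ROI values passed in are hypothetical):

```python
import statistics

def cold_contrast(C_C, C_B):
    """Cold percent contrast Q_C, eq. (42)."""
    return (1.0 - C_C / C_B) * 100.0

def hot_contrast(C_H, C_B, true_ratio=4.77):
    """Hot percent contrast Q_H, eq. (43); true_ratio is A_H/A_B."""
    return (C_H / C_B - 1.0) / (true_ratio - 1.0) * 100.0

def percent_noise(background_roi):
    """Percent noise: standard deviation / mean x 100% over a background ROI."""
    return statistics.stdev(background_roi) / statistics.fmean(background_roi) * 100.0
```

For instance, a hot ROI that fully recovers the true 4.77 concentration ratio yields *Q_H* = 100%.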

Different areas in the brain (caudate, putamen, grey matter, white matter, cerebellum, and brain stem) were quantitatively analyzed: ROIs were drawn on the exact boundaries of these regions, and the region bias (RB_{r}) for each ROI *r* was calculated as

$$\text{RB}_{r}=\frac{\overline{\lambda}^{r}-\mu^{r}}{\mu^{r}}$$

(44)

where $\overline{\lambda}^{r}$ denotes the mean reconstructed value in ROI *r* and *μ^r* the corresponding mean value in the reconstructed reference image.^{13} The normalized standard deviation (noise) for each region was calculated as

$$\text{NSD}_{r}=\frac{\sqrt{\frac{1}{n-1}\sum_{j\in r}(\lambda_{j}-\overline{\lambda}^{r})^{2}}}{\overline{\lambda}^{r}}$$

(45)

where *λ_j* denotes the reconstructed value at voxel *j*, and *n* is the number of voxels in ROI *r*.

The overall bias was also measured using an ROI-based normalized mean squared error (NMSE) metric given by

$$\text{NMSE}=\frac{1}{R}\sum _{r=1}^{R}{\left(\frac{{\overline{\lambda}}^{r}-{\mu}^{r}}{{\mu}^{r}}\right)}^{2}$$

(46)

We have adopted such an ROI-based definition of NMSE over the common voxel-based definition, as we believe it minimizes the effect of voxel noise on this bias-measuring metric. The NMSE values were subsequently plotted against the average NSD values; i.e., NSD_{r} averaged over all regions *r*.
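Equations (44)–(46) can be summarized in a short sketch (the ROI voxel values are hypothetical lists; *μ^r* comes from the reconstructed reference image):

```python
import statistics

def region_bias(recon_vals, mu_r):
    """Region bias RB_r, eq. (44): (mean(lambda) - mu) / mu."""
    return (statistics.fmean(recon_vals) - mu_r) / mu_r

def region_nsd(recon_vals):
    """Normalized standard deviation NSD_r, eq. (45): sample std / mean."""
    return statistics.stdev(recon_vals) / statistics.fmean(recon_vals)

def nmse(region_means, region_refs):
    """ROI-based NMSE, eq. (46): mean over regions of ((mean - mu) / mu)^2."""
    return statistics.fmean(
        ((lam - mu) / mu) ** 2 for lam, mu in zip(region_means, region_refs)
    )
```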

We examined a patient data set acquired on the Siemens HRRT from a subject with Parkinson's disease (age 67, male, disease duration of 14 years, moderate disease severity). The Polaris tool (Section III-B) was attached to the subject using a thin neoprene surf cap [1]. Following injection of ^{11}C-raclopride, the subject was scanned for 60 min and 16 dynamic frames were considered (4 × 1 min, 3 × 2 min, 8 × 5 min, 1 × 10 min). The data were reconstructed using 1) no MC, 2) the purely LOR-driven method, and 3) the proposed accurate method. Time-activity curves (TACs) and binding potential (BP) values were derived from regions of interest (ROIs) in the striatum. Eight circular ROIs were placed on the striatum bilaterally; i.e., right and left caudate (C), anterior putamen (P1), intermediate putamen (P2), and posterior putamen (P3). BPs were calculated using a tissue-input Logan analysis with the cerebellum as the reference region.
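The BP computation described above can be sketched as follows, under stated simplifications: a reference-tissue Logan fit neglecting the k2′ term (a common simplification for ^{11}C-raclopride), a hypothetical fit start time, and illustrative frame times and activities.

```python
def cumtrapz(t, c):
    """Cumulative trapezoidal integral of c(t) at each frame time."""
    out, acc = [0.0], 0.0
    for i in range(1, len(t)):
        acc += 0.5 * (c[i] + c[i - 1]) * (t[i] - t[i - 1])
        out.append(acc)
    return out

def logan_bp(t, roi, ref, t_start=20.0):
    """Reference-tissue Logan analysis: fit y = DVR*x + b over frames
    with t >= t_start, where y(T) = int_0^T roi dt / roi(T) and
    x(T) = int_0^T ref dt / roi(T); return BP = DVR - 1.
    (k2' is neglected; t_start is an illustrative choice.)"""
    int_roi, int_ref = cumtrapz(t, roi), cumtrapz(t, ref)
    pts = [(int_ref[i] / roi[i], int_roi[i] / roi[i])
           for i in range(len(t)) if t[i] >= t_start and roi[i] > 0]
    xs, ys = zip(*pts)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope - 1.0
```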

First, we explicitly investigated the accuracy of the approximations regarding scattered and random events (Section II); these results are best summarized by (22) and (26). In the case of scatter, a validation method for (22) is to compute the scattered events incident along the LORs for the various motion frames and to compare these values for those LORs that correspond to one another given the measured motion. A simple approach (not requiring lengthy simulations in which scattered events are tagged) was to use the Watson method; i.e., the scatter terms $\tilde{S}_{i}$ for the various motion frames were calculated separately, then motion-corrected to the first frame and compared with the results of the first frame.

Images of motion-corrected incident-scatter $\tilde{S}_{i}$ sinograms, shown for segment 0 (i.e., azimuthal angle *θ* = 0). Different pairs of radial *r*, transaxial angle, and axial *z* coordinates are shown. (top) Reference frame 0, (bottom) frame **...**

With regard to the randoms terms, (26) predicts nearly similar $\tilde{R}_{i}$ sinograms for the different motion frames, independent of motion (unlike above, these sinograms are not motion-corrected with respect to one another). Fig. 10 depicts images of the measured randoms $R_{i}^{det}$ (exhibiting strong detector and gap streaks), as well as the incident-randoms $\tilde{R}_{i}$ sinograms.

Fig. 11 shows typical transaxial and coronal slices for the reference (no motion) frame 1 (Table I), compared with reconstructions of another motion frame for the cases of 1) no MC, 2) MC Method 1 (purely LOR-driven correction), 3) MC Method 2 (the method proposed in [17], which involves neither scatter correction nor appropriate random correction), and 4) the proposed MC method. Unlike the proposed approach, clear visual artifacts are observed for the purely LOR-driven MC Method 1.

To better understand this effect, a normalized sensitivity image (obtained by simple back-projection of projection bins set to 0 along detector gaps and 1 otherwise) is depicted in Fig. 12 for the entire FOV (boxes depict the region shown in Fig. 11). The visible line streaks are due to the presence of gaps in the octagonal design of the HRRT. This explains the artifacts in Fig. 11 obtained with MC Method 1, which does not take the presence of gaps into account, and also explains why this effect has not been noted in some previous studies (e.g., see [1]): the scanners used had no noticeable gaps. It must still be noted that even for a scanner without detector gaps, neglecting the axial gradient in the sensitivity image (see the bottom plot in Fig. 12, caused by LORs exiting the FOV axially, as seen in Fig. 1) is expected to result in quantitation issues.
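The sensitivity loss due to gaps can be illustrated with a toy model (a hypothetical ring of opposing detector pairs, not the HRRT's actual octagonal geometry): the central voxel's sensitivity is simply the fraction of its LORs with both endpoints on live crystals.

```python
def central_sensitivity(n_det, gaps):
    """Toy ring of n_det crystals with opposing-pair LORs through the
    central voxel; bins with either endpoint in a gap contribute 0
    (cf. the back-projection of 0/1 bins used for Fig. 12)."""
    live = [d not in gaps for d in range(n_det)]
    lors = [(d, (d + n_det // 2) % n_det) for d in range(n_det // 2)]
    return sum(live[a] and live[b] for a, b in lors) / len(lors)
```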

For a quantitative comparison of the phantom reconstructions, Fig. 13(a) and (b) shows plots of the cold *Q_C* and hot *Q_H* percent contrasts.

Figs. 14 and 15 depict typical transaxial and coronal slices for reconstructions in the cases of 1) no motion, 2) motion with no MC, 3) MC Method 1, 4) MC Method 2, and 5) the proposed MC method, for 209 M and 32 M simulated events. It is observed that, compared to the case of no motion, purely LOR-driven corrections (MC Method 1) can lead to some artifacts (see especially the transaxial slices in the figures).

Simulation contained 209 M detected events. Reconstructed images after 3 iterations (32 subsets); (column 1) no motion, (column 2) no MC, (column 3) MC Method 1, (column 4) MC Method 2, (column 5) proposed MC. Transaxial and coronal slices are shown in **...**

Simulation contained 32 M detected events. Reconstructed images after 3 iterations (32 subsets); (column 1) no motion, (column 2) no MC, (column 3) MC Method 1, (column 4) MC Method 2, (column 5) proposed MC. Transaxial and coronal slices are shown in **...**

In addition to the observed differences, and in order to study the proposed approach more quantitatively, Fig. 16 shows plots of NSD (noise) versus NMSE (bias), as defined in the previous section, for different statistics, with the points in each curve generated by varying the number of iterations (to better capture the noise versus bias trade-off curves, different numbers of iterations were used for different simulated count levels). It is seen that the proposed MC method offers noticeable improvements over the other approaches (relative to the case with no motion). For various regions in the brain (caudate, putamen, grey matter, white matter, cerebellum, and brain stem), typical plots of NSD (noise) versus RB (region bias) are depicted in Fig. 17 (209 M events): the proposed algorithm typically performs favorably relative to MC Method 1 (shown) and MC Method 2 (not shown), with reference to simulations involving no motion.

Plots of NSD (noise) versus NMSE (bias) for the cases of (a) 209 M, (b) 77 M, and (c) 32 M simulated events: points in each curve are generated by varying the number **...**

Having established the accuracy of the proposed method using phantom and simulation studies, we then turned our attention to a patient study (elaborated in Section III-G). The measured movements of this patient are shown in Fig. 18. Fig. 19 depicts slices across the striatum (frames 7–13: 10–45 min) reconstructed using 1) no MC, 2) the purely LOR-driven method, and 3) the proposed accurate method (prop. MC). Data reconstructed using the purely LOR-driven method, as also observed before, exhibited some image artifacts (and were not considered further). BP values for the right and left caudate (C), anterior putamen (P1), intermediate putamen (P2), and posterior putamen (P3) are plotted in Fig. 20, and are seen to increase, relative to no MC, for the proposed MC method. This can be attributed to the increased effective resolution of the scanner due to motion correction, and is very similar in essence to the BP increases observed when higher-resolution scanners [45], reconstruction methods with resolution recovery [36], and/or partial volume correction methods [43], [44] are employed.

Measured motion information: six degrees of motion are depicted (shown for the striatum region). X, Y, and Z axes refer to the horizontal (transaxial), vertical (transaxial), and axial directions, respectively.

Transaxial (first row), coronal (second row), and sagittal (third row) slices across the striatum (frames 7–13: 10–45 min), reconstructed with different MC schemes. Clear artifacts can be seen with the purely LOR-driven approach (MC Method **...**

BP values with no MC and with the proposed accurate MC for a number of ROIs: right and left caudate (C), anterior putamen (P1), intermediate putamen (P2) and posterior putamen (P3).

Ongoing work includes investigation of the effect of motion correction on BP as well as dopamine release (DAR) values obtained from baseline/drug back-to-back studies across a wide range of patient data. Preliminary results indicate enhanced reproducibility in BP estimation, as also reported in [42], accompanied, according to our observations, by significant increases in BP values, especially in the intermediate and posterior putamen (P2 and P3). These results are even more pronounced for those patients (over 50% of the sample population) whose movement amplitudes exceed the resolution of the scanner.

We have proposed and investigated a formalism for accurate motion-compensated EM reconstruction, including elaborate consideration of random and scattered events, which is particularly feasible in the context of high-resolution PET. The method takes into consideration the presence of motion-induced interactions between LORs within and outside the FOV, and accurately incorporates *all* detected events, including those, for instance, which exit the scanner axially or pass through detector gaps following motion correction. As an example, for a typical HRRT-scanner acquisition, the proposed *image-space* (versus projection-space) accurate calculation of the sensitivity factors can result in a factor of ~30 speedup in this task.

The method has been furthermore extensively tested using phantom measurements as well as simulation studies (involving a new mathematical brain phantom and a novel combination of SimSET and GATE simulation packages). It has been demonstrated that, in comparison to reconstructions of data with matched statistics involving no motion, the proposed method removes qualitative artifacts as well as quantitative inferiorities of purely event-driven correction methods. Having established the accuracy of the proposed method using phantom and simulation data, we are currently investigating the effects of motion correction on actual patient data. An example of this was presented in this work.

The authors would like to thank A. Crabb, K. Raywood, and K. Buckley for computational and technical support.

This work was supported in part by the National Institutes of Health under Grant DA00412 and Grant S10RR017219, in part by the TRIUMF Life Science Grant, in part by the Natural Sciences and Engineering Research Council of Canada, and in part by the Michael Smith Foundation for Health Research.

^{1}Largest translation typically occurs along the transaxial-(*x*) axis, and largest rotation around the axial-(*z*) axis.

^{2}This is in essence similar to how positron range has been modeled in the image-space component of the system matrix [37], [38].

^{3}For instance, even small amounts of motion result in substantial interaction with detector gaps in the HRRT which occupy nearly 10% of the entire sinogram-space.

^{4}Another downside of the MAF technique is that, unlike the present method, increasingly small motion thresholds can lead to poor statistics in the individual frames to be reconstructed, leading to bias as discussed in Section I-A.

^{5}The notation to follow is valid element-by-element; however, the order of these components would be different in matrix notation.

^{6}Note that the normalization factor for an LOR *i*′ along which an event is detected is given by the value of *N _{i}*

^{7}Note that * _{i}* is not, strictly speaking, the rate of incident scattered coincidences, since a scattered coincidence consists of two gamma rays arriving from a wide range of potential angles, thus having different normalizations than the trues normalization

^{8}See footnote 7, noting that a random coincidence, just as a scattered coincidence, consists of two gamma rays arriving from a wide range of potential angles.

^{9}The two probability terms *p _{ij}* and
${\mathcal{P}}_{lj}^{t}$ are related as expressed in (4) and are

^{10}These terms are related via (17) and (24).

^{11}In list mode, we still use the concept of spanning to reorient the detected LORs as one would do in histogramming in order to allow one to make use of histogram-based normalization sinograms [36].

^{12}In the HRRT scanner, singles are measured only at the block level; thus, the standard method employed [39] involves a generalization of the Casey 3-D random-smoothing technique [40] in which the crystal singles rates are first estimated from the delayed-coincidence measurements, followed by a standard calculation of the random rates.

^{13}The aforementioned *reconstructed* reference image is more appropriate as compared to direct use of the true values used in the simulation, since the latter does not take into account the partial volume effect (PVE), and as such the measured bias values would not directly/visibly compare the various MC schemes.

^{14}The scatter method does take the shape of the gantry into account [25] and as such could be used to validate both assumptions (20) and (21).

^{15}We avoid edges of the FOV, since the Watson method performs planar scaling that can be particularly problematic near the edges.

Arman Rahmim, Department of Radiology, Johns Hopkins University, 601 N. Caroline Street, Baltimore, MD 21205 USA.

Katie Dinelle, Department of Physics and Astronomy, University of British Columbia, Vancouver, BC, V6T-1Z1 Canada.

Ju-Chieh Cheng, Department of Physics and Astronomy, University of British Columbia, Vancouver, BC, V6T-1Z1 Canada.

Mikhail A. Shilov, Department of Radiology, Johns Hopkins University, Baltimore, MD 21205 USA.

William P. Segars, Department of Radiology, Johns Hopkins University, Baltimore, MD 21205 USA.

Sarah C. Lidstone, Pacific Parkinson’s Research Centre, Vancouver, BC, V6T-1Z1 Canada.

Stephan Blinder, Pacific Parkinson’s Research Centre, Vancouver, BC, V6T-1Z1 Canada.

Olivier G. Rousset, Department of Radiology, Johns Hopkins University, Baltimore, MD 21205 USA.

Hamid Vajihollahi, Department of Mechanical Engineering, McMaster University, Hamilton, ON, L8S-4L8 Canada.

Benjamin M. W. Tsui, Department of Radiology, Johns Hopkins University, Baltimore, MD 21205 USA.

Dean F. Wong, Department of Radiology, Johns Hopkins University, Baltimore, MD 21205 USA.

Vesna Sossi, Department of Physics and Astronomy, University of British Columbia, Vancouver, BC, V6T-1Z1 Canada.

1. Bloomfield PM, Spinks TJ, Reed J, Schnorr L, Westrip AM, Livieratos L, Fulton R, Jones T. The design and implementation of a motion correction scheme for neurological PET. Phys Med Biol. 2003 Apr 21;48(8):959–978. [PubMed]

2. Fulton RR, Meikle SR, Eberl S, Pfeiffer J, Constable RT, Fulham MJ. Correction for head movements in positron emission tomography using an optical motion-tracking system. IEEE Trans Nuc Sci. 2002 Feb;49(1):116–123.

3. Lopresti BJ, Russo A, Jones WF, Fisher T, Crouch DG, Altenburger DE, et al. Implementation and performance of an optical motion tracking system for high resolution brain PET imaging. IEEE Trans Nucl Sci. 1999 Dec;46(6):2059–2067.

4. Woods RP, Cherry SR, Maziotta J. A rapid automated algorithm for accurate aligning and reslicing positron emission tomography images. J Comput Assist Tomogr. 1992;16:620–634. [PubMed]

5. Friston KJ, Ashburner J, Frith CD, Poline JB, Heather JD, Frackowiak RSJ. Spatial registration and normalization of images. Hum Brain Mapp. 1995;3:165–189.

6. Picard Y, Thompson CJ. Motion correction of PET images using multiple acquisition frames. IEEE Trans Med Imag. 1997;16:137–144. [PubMed]

7. Qi J, Leahy RM. Resolution and noise properties of MAP reconstruction for fully 3-D PET. IEEE Trans Med Imag. 2000 May;19(5):493–506. [PubMed]

8. Boellaard R, van Lingen A, Lammertsmaa AA. Experimental and clinical evaluation of iterative reconstruction (OSEM) in dynamic PET: Quantitative characteristics and effects on kinetic modeling. J Nucl Med. 2001;42(5):808–817. [PubMed]

9. Menke M, Atkins MS, Buckley KR. Compensation methods for head motion detected during PET imaging. IEEE Trans Nucl Sci. 1996;43(1):310–317.

10. Qiao F, Pan T, Clark JW, Mawlawi OR. A motion-incorporated reconstruction method for gated PET studies. Phys Med Biol. 2006;51:3769–3783. [PubMed]

11. Li T, Thorndyke B, Schreibmann E, Yang Y, Xing L. Reconstruction for proton computed tomography by tracing proton trajectories: A monte carlo study. Med Phys. 2006;33(3):699–1298. [PMC free article] [PubMed]

12. Lamare F, Ledesma Carbayo MJ, Cresson T, Kontaxakis G, Santos A, Cheze C, Rest L, Reader AJ, Visvikis D. List-mode-based reconstruction for respiratory motion correction in PET using non-rigid body transformations. Phys Med Biol. 2007;52(17):5187–5204. [PubMed]

13. Rahmim A, Cheng JC, Dinelle K, Shilov MA, Segars WP, Rousset OG, Tsui BMW, Wong DF, Sossi V. System matrix modeling of externally tracked motion. IEEE NSS/MIC Conf Rec. 2006;4:2137–2141.

14. Daube-Witherspoon ME, Yan YC, Green MV, Carson RE, Kempner KM, Herscovitch P. Correction for motion distortion in PET by dynamic monitoring of patient position. J Nucl Med. 1990;31:816.

15. Buehler P, Just U, Will E, Kotzerke J, van den Hoff J. An accurate method for correction of head movement in PET. IEEE Trans Med Imag. 2004 Sep;23(9):1176–1185. [PubMed]

16. Jones WF. Real-time event stream correction for patient motion in clinical 3-D PET. IEEE NSS/MIC Conf Rec. 2001;4:2062–2064.

17. Rahmim A, Bloomfield P, Houle S, Lenox M, Michel C, Buckley KR, Ruth TJ, Sossi V. Motion compensation in histogram-mode and list-mode EM reconstructions: Beyond the event-driven approach. IEEE Trans Nucl Sci. 2004;51:2588–2596.

18. Wienhard K, Shmand M, Casey ME, Baker K, Bao J, Eriksson L, Jones WF, Knoess C, Lenox M, Lercher M, Luk P, Michel C, Reed JH, Richerzhagen N, Treffert J, Vollmar S, Young JW, Heiss WD, Nutt R. The ECAT HRRT: Performance and first clinical application of the new high resolution research tomograph. IEEE Trans Nucl Sci. 2002;49:104–110.

19. Qi J, Huesman RH. List mode reconstruction for PET with motion compensation: A simulation study. 2002 IEEE Int Symp Biomed Imag, Washington, DC, 2002; pp. 413–416.

20. Qi J, Huesman RH. Monte carlo simulation of tracer-kinetic PET and SPECT studies using dynamic hybrid phantoms. J Nucl Med. 2002;43:146.

21. Thielemans K, Mustafovic S, Schnorr L. Image reconstruction of motion corrected sinograms. Proc IEEE Med Imag Conf. 2003;4:2401–2406.

22. Carson RE, Barker WC, Liow JS, Johnson CA. Design of a motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction for the HRRT. IEEE NSS/MIC Conf Rec. 2003;5:3281–3285.

23. Qi J, Huesman RH. Propagation of errors from the sensitivity image in list mode reconstruction. IEEE Trans Med Imag. 2004 Sep;23(9):1094–1099. [PubMed]

24. Qi J. Calculation of the sensitivity image in list-mode reconstruction. IEEE NSS/MIC Conf Rec. 2005;4:1924–1928.

25. Watson CC. New, faster, image-based scatter correction for 3-D PET. IEEE Trans Nucl Sci. 2000 Aug;47(4):1587–1594.

26. Thielemans K. Scatter estimation and motion correction in PET. IEEE NSS/MIC Conf Rec. 2005;3:1745–1747.

27. Parra L, Barrett HH. List-mode likelihood: EM algorithm and image quality estimation demonstrated on 2-D PET. IEEE Trans Med Imag. 1998;17(2):228–235. [PMC free article] [PubMed]

28. Rahmim A, Lenox M, Reader AJ, Michel C, Burbar Z, Ruth TJ, Sossi V. Statistical list-mode image reconstruction for the high resolution research tomograph. Phys Med Biol. 2004 Aug;49:4239–4258. [PubMed]

29. Rahmim A, Cheng JC, Blinder S, Camborde M-L, Sossi V. Statistical dynamic image reconstruction in state-of-the-art high resolution PET. Phys Med Biol. 2005;50:4887–4912. [PubMed]

30. Reader AJ, Sureau FC, Comtat C, Buvat I, Trebossen R. List-mode reconstruction with system modeling derived from projections. IEEE Nucl Sci Symp Conf Rec. 2005 Oct;3:23–29.

31. Sossi V, de Jong HWAM, Barker WC, Bloomfield P, Burbar Z, Camborde ML, Comtat C, Eriksson LA, Houle S, Keator D, Knob C, Krais R, Lammertsma AA, Rahmim A, Sibomana M, Teras M, Thompson CJ, Trebossen R, Votaw J, Walker M, Wienhard K, Wong DF. The second generation HRRT - a multi-centre scanner performance investigation. IEEE NSS/MIC Conf Rec. 2005;4:23–29.

32. Horn BK. Closed form solution of absolute orientation using unit quaternions. J Opt Soc Amer. 1987;4(4):629–642.

33. Hoppe H, DeRose T, Duchamp T, Halstead M, Jin H, McDonald J, Schweitzer J, Stuetzle W. Piecewise smooth surface reconstruction. Computer Graphics (Proc. Annu. Conf. Series). 1994;28:295–302.

34. The Visualization Toolkit. [Online]. Available: http://public.kitware.com/VTK.

35. Rodriguez M, Barker WC, Liow JS, Thada S, Chelikani S, Mulnix T, Carson RE. Count rate dependent component-based normalization for the HRRT. J Nucl Med. 2006;47:197P.

36. Sureau FC, Comtat C, Reader AJ, Leroy C, Santiago-Ribeiro MJ, Buvat I, Trebossen R. Improved clinical parametric imaging using list-mode reconstruction via resolution modeling. IEEE NSS/MIC Conf Rec. 2006;6:3507–3510.

37. Mumcuoğlu EÜ, Leahy RM, Cherry SR, Zhou Z. Fast gradient-based methods for Bayesian reconstruction of transmission and emission PET images. IEEE Trans Med Imag. 1994 Dec;13(4):687–701. [PubMed]

38. Reader AJ, Ally S, Bakatselos F, Manavaki R, Walledge R, Jeavons AP, Julyan PJ, Zhao S, Hastings DL, Zweit J. One-pass list-mode EM algorithm for high-resolution 3-D PET image reconstruction into large arrays. IEEE Trans Nucl Sci. 2002;49(3):693–699.

39. Byars LG, Sibomana M, Burbar Z, Jones J, Panin V, Barker WC, Liow J-S, Carson RE, Michel C. Variance reduction on randoms from coincidence histograms for the HRRT. IEEE NSS/MIC Conf Rec. 2005;5:2622–2626.

40. Casey ME, Hoffman EJ. Quantitation in positron emission tomography: 7. A technique to reduce noise in accidental coincidence measurements and coincidence efficiency calibration. J Comput Assist Tomogr. 1986;10(5):845–850. [PubMed]

41. Karp JS, Muehllehner G, Lewitt RM. Constrained Fourier space method for compensation of missing data in emission computed tomography. IEEE Trans Med Imag. 1988;7:21–25. [PubMed]

42. Montgomery AJ, Thielemans K, Mehta MA, Turkheimer F, Mustafovic S, Grasby PM. Correction of head movement on PET studies: Comparison of methods. J Nucl Med. 2006;47:1936–1944. [PubMed]

43. Kuwabara H, Rousset O, Kumar A, Ye W, Brasic JR, Wong DF. Evaluation of tissue homogeneity assumptions of partial volume correction for [11C]raclopride binding potential volumes. J Nucl Med. 2006;47:192P.

44. Mawlawi O, Martinez D, Slifstein M, Broft A, Chatterjee R, Hwang D, Huang Y, Simpson N, Ngo K, Van Heertum R, Laruelle M. Imaging human mesolimbic dopamine transmission with positron emission tomography: I. Accuracy and precision of D2 receptor parameter measurements in ventral striatum. J Cereb Blood Flow Metab. 2001;21:1034–1057. [PubMed]

45. Zhou Y, Weed M, Chen MK, Rahmim A, Ye W, Brasic J, Alexander M, Crabb A, McGlothan J, Ali F, Guilarte T, Wong D. Quantitative dopamine transporter imaging studies in nonhuman primates with GE Advance and High Resolution Research Tomograph (HRRT) PET scanners. J Nucl Med. 2007;48:158P. [PubMed]
