Nucl Med Commun. Author manuscript; available in PMC 2010 August 3.


PMCID: PMC2914313

NIHMSID: NIHMS135354

Arman Rahmim,^{a} Ju-Chieh Cheng,^{b} Katie Dinelle,^{b} Mikhail Shilov,^{a} W. Paul Segars,^{c} Olivier G. Rousset,^{a} Benjamin M.W. Tsui,^{a} Dean F. Wong,^{a} and Vesna Sossi^{b}

Correspondence to Dr A. Rahmim, JHOC Bldg, Room 3245, 601 N Caroline St., Baltimore, MD 21287, USA, Tel: + 1 410 955 8433; fax: + 1 410 955 0696; Email: arahmim1@jhmi.edu

The publisher's final edited version of this article is available at Nucl Med Commun


In high-resolution emission tomography imaging, even small patient movements can considerably degrade image quality. The aim of this work was to develop a general approach to motion-corrected reconstruction of motion-contaminated data in the case of rigid motion (particularly brain imaging) which would be applicable to any PET scanner in the field, without specialized data-acquisition requirements.

Assuming the ability to externally track subject motion during scanning (e.g., using the Polaris camera), we proposed to incorporate the measured rigid motion information into the system matrix of the expectation maximization reconstruction algorithm. Furthermore, we noted, and developed a framework to model, the additional effect of motion on the attenuation factors. A new mathematical brain phantom was developed and used along with elaborate combined SimSET/GATE simulations to compare the proposed framework with the case of no motion correction.

Clear qualitative and quantitative improvements were observed when incorporating the proposed framework. The method is very practical to implement for any scanner in the field, not requiring any hardware modifications or access to the list-mode acquisition capability.

As a typical PET brain imaging session can last hours, it is not reasonable to expect a patient to remain motionless during this time. A number of head restraints are in common use, such as thermoplastic masks or neoprene caps, which lower the amount of motion but do not eliminate it. Even with head restraints, typical translations in the range of 5–20mm and rotations of 1–4° are observed, depending on the type of mask and the duration of the scan (e.g., see Fulton *et al.* [1], Bloomfield *et al.* [2], and also Lopresti *et al.* [3], in which a study of various types of head movements, such as those caused by coughing and leg crossing, has been presented). Recall that the largest translation typically occurs along the transaxial axis (the *x*-axis), and the largest rotation around the axial axis (the *z*-axis).

With continuous improvements in spatial resolution of emission tomography scanners, small patient movements have become a significant source of resolution degradation. Methods to correct for patient movements have been recently reviewed by Rahmim *et al.* [4]. In the past these were largely based on correction of inter-scan movements. These (software-based) methods involved the division of a scan into a number of frames, followed by spatial registration of the reconstructed images using mathematical algorithms (e.g., see Woods *et al.* [5] and Friston *et al.* [6]).

There has been an increased tendency *not* to exclusively rely on the emission data itself for the estimation of patient movements. More successful approaches [4] instead make use of information provided by an external motion-tracking device. These include the following.

Multiple acquisition frames (MAFs) [1], which are individually reconstructed, motion-compensated and summed, can be used. The major limitation of the MAF approach is that lowering the motion threshold results in the acquisition of an increasing number of *low-statistic* frames to be reconstructed. Lack of an adequate number of acquired events in the individual frames can, in turn, adversely affect the quality of the final reconstructed images, and an increased number of frames also leads to increased reconstruction times.

Post-processing of the motion-blurred reconstructed images using deconvolution operators (whose shape is determined by the measured motion) [7] can be useful. This method, however, has not attracted much attention primarily because the deconvolution process amplifies the noise in the PET data, and furthermore, when the movements include significant rotation, spatially variant deconvolution filters need to be employed, potentially introducing other artifacts [7].

Individual lines of response (LORs) can be corrected for motion [8]. This is an *event-driven* approach; i.e., motion correction is performed by transforming the LORs along which the events are measured to where they would have been measured had the object not moved. It has been argued (e.g., see Rahmim *et al.* [9,10]) that this approach (in its purely event-driven form) may result in reconstructed image artifacts. This is because an LOR that is normally in the field of view (FoV) can fall outside the FoV due to motion, and therefore the reconstruction algorithm must be modified in order to yield accurately reconstructed images. However, this approach requires either (1) specialized hardware to achieve accurate on-the-fly motion correction or (2) list-mode acquisition capability. Furthermore, it is only accurately applicable to *rigid* motion, since a direct relation between motion in the image and the subsequent effect in the LOR coordinates is only possible in the rigid case. In this work, our primary interest has been the investigation of an approach to the reconstruction of motion-contaminated data that is applicable to any scanner in the field (e.g., without the list-mode acquisition capability).
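The event-driven transformation just described can be illustrated with a minimal NumPy sketch: each detected LOR's endpoints are mapped back to the reference position via the inverse of the measured rigid transform. The function names, and the restriction to a z-axis rotation, are our own illustrative assumptions (a real tracker such as the Polaris reports full six-degree-of-freedom poses):

```python
import numpy as np

def rigid_transform(translation, rotation_z_deg):
    """Build a 4x4 homogeneous rigid-body transform: a rotation about the
    axial (z) axis followed by a translation (illustrative only)."""
    th = np.deg2rad(rotation_z_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(th), -np.sin(th), 0.0],
                 [np.sin(th),  np.cos(th), 0.0],
                 [0.0,         0.0,        1.0]]
    T[:3, 3] = translation
    return T

def correct_lor(end_a, end_b, T):
    """Event-driven correction: map both endpoints of a detected LOR back
    to the reference (motionless) position via the inverse transform."""
    T_inv = np.linalg.inv(T)
    homog = lambda p: np.append(np.asarray(p, dtype=float), 1.0)
    return (T_inv @ homog(end_a))[:3], (T_inv @ homog(end_b))[:3]

# A 5 mm translation along x with a 2 degree rotation about the z-axis:
T = rigid_transform([5.0, 0.0, 0.0], 2.0)
a, b = correct_lor([100.0, 0.0, 0.0], [-100.0, 0.0, 0.0], T)
```

Note that, as the text argues, such a transformed LOR may land outside the scanner FoV, which is precisely why the purely event-driven form requires modification of the reconstruction algorithm.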

As a solution to the above, a method has been put forward [11] which models motion blurring in the forward-projection step of the expectation maximization algorithm, such that a better match between the estimated image and the motion-contaminated data is obtained. This method, however, does not accurately model the motion process; as shown in this work, it can result in artifacts and is not necessarily convergent. A more comprehensive approach avoiding the aforementioned difficulties is proposed and investigated.

We first denote
${\lambda}_{j}^{m}$ as the image intensity in voxel *j* (*j*=1…*J*) estimated at the *m*-th iteration, *n _{i}* as the number of detected events along a line of response (LOR) *i* (*i*=1…*I*), and *p _{ij}* as the probability of an emission from voxel *j* being detected along LOR *i*. The standard expectation maximization algorithm is then given by:

$${\lambda}_{j}^{m+1}=\frac{{\lambda}_{j}^{m}}{{s}_{j}}\sum _{i=1}^{I}{p}_{ij}\left\{\frac{{n}_{i}}{{\sum}_{k=1}^{J}{p}_{ik}{\lambda}_{k}^{m}}\right\}$$

(1)

where:

$${s}_{j}=\sum _{i=1}^{I}{p}_{ij}$$

(2)

computes the elements of the so-called image sensitivity (at each voxel *j* ). For compact representation [12], we next use
${\overrightarrow{\lambda}}^{m}={[{\lambda}_{1}^{m}\dots {\lambda}_{J}^{m}]}^{\text{tr}}$, *$\stackrel{\u20d7}{n}$* = [*n*_{1} … *n _{J}*]

$${\overrightarrow{\lambda}}^{m+1}=\frac{{\overrightarrow{\lambda}}^{m}}{\overrightarrow{s}}\times {P}^{\text{tr}}\left\{\frac{\overrightarrow{n}}{P{\overrightarrow{\lambda}}^{m}}\right\}$$

(3)

where vectorial multiplication and division operations are performed on an element-by-element basis.
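As a concrete illustration, the update of Equation 3 can be written in a few lines of NumPy for a toy dense system matrix. This is a sketch under our own illustrative assumptions: real implementations never store *P* densely and compute projections on the fly.

```python
import numpy as np

def mlem(P, n, n_iter=50, eps=1e-12):
    """Expectation maximization update of Equation 3:
    lambda^{m+1} = (lambda^m / s) x P^tr { n / (P lambda^m) },
    with sensitivity s_j = sum_i p_ij (Equation 2)."""
    lam = np.ones(P.shape[1])
    s = P.sum(axis=0)                          # Equation 2
    for _ in range(n_iter):
        ratio = n / np.maximum(P @ lam, eps)   # element-wise division
        lam = lam / np.maximum(s, eps) * (P.T @ ratio)
    return lam

# Toy scanner: 3 LORs viewing 2 voxels; noiseless data from a known image
P = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
n = P @ np.array([2.0, 4.0])
est = mlem(P, n, n_iter=200)   # converges towards [2, 4]
```

The element-by-element vector operations of Equation 3 map directly onto NumPy broadcasting, which is why the update is so compact.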

Next, we divide a given scan (of duration *T*) into *Q* motion-related time intervals (*t*=1…*Q*), each with a duration Δ*T _{t}* within which motion remains below a small, negligible threshold. We then define an overall motion matrix $\overline{M}$ to model the motion of the object in the image space:

$$\overline{M}=\sum _{t=1}^{Q}{M}_{t}\frac{\mathrm{\Delta}{T}_{t}}{T}$$

(4)

where *M _{t}* represents the motion transformation (in the image space) of the object during the time interval *t* (Fig. 1).

Fig. 1. (a) The operator *M _{t}* tracks movements of the image elements at each time/motion frame *t*. (b) The overall weighted motion matrix $\overline{M}$ models the overall motion-induced blurring (in image space).
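Under the stated assumptions, Equation 4 amounts to a simple duration-weighted average of the per-frame motion operators; for a toy one-dimensional image this can be sketched as follows (all variables are illustrative stand-ins):

```python
import numpy as np

def overall_motion_matrix(M_list, dT_list):
    """Equation 4: Mbar = sum_t M_t * (dT_t / T), the duration-weighted
    average of the per-frame motion matrices M_t (each mapping the
    reference image to its position during frame t)."""
    T = float(sum(dT_list))
    return sum(M * (dT / T) for M, dT in zip(M_list, dT_list))

# Toy 1-D image of 3 voxels: 60 s at rest, then 30 s shifted by one voxel
M1 = np.eye(3)
M2 = np.roll(np.eye(3), 1, axis=0)   # circular one-voxel shift
Mbar = overall_motion_matrix([M1, M2], [60.0, 30.0])
```

Because each *M _{t}* here merely relocates intensities, their weighted average is a blurring (count-conserving) operator, consistent with the role of $\overline{M}$ described below.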

Next, we decompose the system matrix *P* = (*p _{ij}*) as:

$$P=\mathit{WGB}\overline{M}$$

(5)

Here, the matrix *B* = (*b _{ij}*) models resolution blurring in the image space, *G* contains the geometric projection probabilities, and *W* is a diagonal matrix containing the normalization and attenuation correction factors.

Upon substitution of Equation 5 into the expectation maximization algorithm (Equation 3), one arrives at:

$${\overrightarrow{\lambda}}^{m+1}=\frac{{\overrightarrow{\lambda}}^{m}}{\overrightarrow{s}}\times {\overline{M}}^{\text{tr}}{B}^{\text{tr}}{G}^{\text{tr}}{W}^{\text{tr}}\left\{\frac{\overrightarrow{n}}{\mathit{WGB}\overline{M}{\overrightarrow{\lambda}}^{m}}\right\}$$

(6)

We next note that the matrix operators *G* and *G*^{tr}, which denote geometric forward-projections and back-projections, are not, in practice, actually stored and are instead implemented on-the-fly. To this end, we define *fp _{i}*{·} and *bp _{i}*{·} as the on-the-fly forward-projection and back-projection operations along an individual LOR *i*, so that Equation 6 can be written as:

$${\overrightarrow{\lambda}}^{m+1}=\frac{{\overrightarrow{\lambda}}^{m}}{\overrightarrow{s}}\times {\overline{M}}^{\text{tr}}{B}^{\text{tr}}\left[\sum _{i=1}^{I}{bp}_{i}\left\{\frac{\overrightarrow{n}}{f{p}_{i}\left\{B\overline{M}{\overrightarrow{\lambda}}^{m}\right\}}\right\}\right]$$

(7)

where the sensitivity vector *$\stackrel{\u20d7}{s}$* is now given by:

$$\overrightarrow{s}={\overline{M}}^{\text{tr}}{B}^{\text{tr}}\sum _{i=1}^{I}{bp}_{i}\{\overrightarrow{w}\}$$

(8)

where *$\stackrel{\u20d7}{w}$* denotes a list of diagonal elements of *W*.

Here, we note that one needs to apply $\overline{M}$ and *B* (and their transposes) only once in each (subset of an) iteration, and *not* for each and every LOR. Additionally, unlike the ‘motion deconvolution’ technique in Menke *et al.* [7], this method does not amplify noise since it does not make use of ${\overline{M}}^{-1}$ (which would be a deblurring operator), and instead only makes use of $\overline{M}$ and ${\overline{M}}^{\text{tr}}$, both of which are essentially blurring operators. It must also be noted that the use of $\overline{M}$ in the denominator merely models the effect of motion in the forward-projection step, as is done in the approach of Fulton and Meikle [11]; at the same time, we have shown that the term ${\overline{M}}^{\text{tr}}$ should also be included in the back-projection step to yield a correct expectation maximization algorithm, implying convergence to the maximum-likelihood solution.
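A dense toy sketch of the update of Equations 7 and 8 is given below. Note that, since *W* is diagonal, it cancels inside the ratio (which is why Equation 7 has *fp _{i}*{*B*$\overline{M}$λ⃗} in the denominator while the weights survive only in the sensitivity of Equation 8). All matrices here are small illustrative stand-ins for on-the-fly operators:

```python
import numpy as np

def motion_em(G, w, B, Mbar, n, n_iter=500, eps=1e-12):
    """Equations 7 and 8: EM for the system matrix P = W G B Mbar.
    The diagonal W cancels in the ratio; the weights w enter only the
    sensitivity s = Mbar^tr B^tr G^tr w (Equation 8)."""
    lam = np.ones(Mbar.shape[1])
    s = Mbar.T @ (B.T @ (G.T @ w))             # Equation 8
    for _ in range(n_iter):
        proj = G @ (B @ (Mbar @ lam))          # fp_i{ B Mbar lambda^m }
        back = G.T @ (n / np.maximum(proj, eps))
        lam = lam / np.maximum(s, eps) * (Mbar.T @ (B.T @ back))
    return lam

# Toy setup: 3 LORs, 2 voxels, mild blurring and motion averaging
G = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
B = np.array([[0.9, 0.1], [0.1, 0.9]])
Mbar = np.array([[0.8, 0.2], [0.2, 0.8]])
w = np.array([1.0, 0.9, 1.1])                  # normalization x attenuation
lam_true = np.array([3.0, 1.0])
n = w * (G @ (B @ (Mbar @ lam_true)))          # noiseless data
est = motion_em(G, w, B, Mbar, n)
```

As the text observes, $\overline{M}$ and *B* are each applied only once per update, not per LOR, which keeps the added cost modest.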

It must be noted that modelling of motion information in the system matrix of the expectation maximization algorithm, in the context of non-rigid motion, has also been proposed for the reconstruction of respiratory *non-gated* data by Reyes *et al.* [14], as well as in the case of respiratory/cardiac gated data [15–17], although the latter does not involve motion-contaminated data (the data are gated, with motion modelling applied to the series of gates). One must also note that estimation of respiratory/cardiac movements requires alternative approaches, including the possibility of extracting the motion information from (1) separately reconstructed gated PET data or (2) gated CT images in PET/CT applications.

The aforementioned modelling of motion in the system matrix of the expectation maximization algorithm, investigated in this work, is expected to improve reconstructed image quality. In fact, we have argued [18] that such modelling of image blurring effects not only improves image resolution, but can also improve noise propagation properties in the reconstruction process. This idea is somewhat paralleled in collimator–detector response (CDR) modelling in SPECT, wherein a similar incorporation in the reconstruction process has been shown to result in improvements in resolution [19], in task-based measures of image quality (see Frey and Tsui [20] for a review), as well as in noise properties [21].

At this stage, one must note that, in addition to the data, the attenuation coefficients for the LORs are also contaminated by motion. In other words, the probability of attenuation along an LOR is modified due to motion, as shown in Fig. 2. We thus introduce an operator *L _{t}*(·) which models the effect of motion in the LOR space by transforming each LOR according to the measured motion, such that the attenuation factors (diagonal elements of the matrix *A _{t}*) at time *t* are given by:

$${a}_{ii}^{(t)}={a}_{ll}^{(0)}\phantom{\rule{0.16667em}{0ex}}\text{with}\phantom{\rule{0.16667em}{0ex}}l={L}_{t}^{-1}(i)$$

(9)

where ${a}_{ll}^{(0)}$ represents the attenuation factors for the object scanned at initial position. The time/motion-averaged, overall system matrix is then accurately given by:

$$P=\sum _{t=1}^{Q}\frac{\mathrm{\Delta}{T}_{t}}{T}{A}_{t}\mathit{NGB}{M}_{t}$$

(10)

For high-resolution scanners, the above summation over all the motion frames is computationally intense: while *M _{t}* and $\overline{M}$ are readily applied in the image space, the summation requires a separate projection operation for each of the *Q* motion frames. We thus approximate Equation 10 by replacing the frame-by-frame attenuation matrices *A _{t}* with a single, motion-averaged attenuation matrix *Ā*:

$$P=\sum _{t=1}^{Q}\frac{\mathrm{\Delta}{T}_{t}}{T}{A}_{t}\mathit{NGB}{M}_{t}\approx \overline{A}\mathit{NGB}\overline{M}$$

(11)

In order to calculate *Ā*, we then note that the measured mu-map $\overrightarrow{\mu}(0)$ undergoes motion in the course of the scan as given by:

$$\overrightarrow{\mu}(t)={M}_{t}\overrightarrow{\mu}(0)$$

(12)

Thus, the effective, overall mu-map $\overline{\overrightarrow{\mu}}$ is given by:

$$\begin{array}{l}\overline{\overrightarrow{\mu}}=\sum _{t=1}^{Q}\overrightarrow{\mu}(t)\times \frac{\mathrm{\Delta}{T}_{t}}{T}\\ =\sum _{t=1}^{Q}{M}_{t}\overrightarrow{\mu}(0)\times \frac{\mathrm{\Delta}{T}_{t}}{T}\end{array}$$

(13)

This term is easily computed, and can be forward projected to yield the motion-weighted attenuation factors (i.e., elements of *Ā*).
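Equations 12 and 13 reduce to a duration-weighted average of transformed mu-maps. A toy one-dimensional sketch (the attenuation values and motion matrices are illustrative assumptions, not scanner data):

```python
import numpy as np

def averaged_mu_map(mu0, M_list, dT_list):
    """Equation 13: mubar = sum_t M_t mu(0) * (dT_t / T), the
    time-weighted average of the transformed mu-maps. Forward-projecting
    mubar then yields the motion-weighted attenuation factors (elements
    of Abar)."""
    T = float(sum(dT_list))
    return sum((M @ mu0) * (dT / T) for M, dT in zip(M_list, dT_list))

mu0 = np.array([0.0, 0.096, 0.096, 0.0])   # water-like mu (1/cm), 1-D "object"
M1 = np.eye(4)                              # 45 s at rest
M2 = np.roll(np.eye(4), 1, axis=0)          # 15 s shifted by one voxel
mubar = averaged_mu_map(mu0, [M1, M2], [45.0, 15.0])
```

Since only image-space operations are involved, this step is cheap relative to the per-frame projections it replaces.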

To obtain images of improved quality, we also considered the Bayesian maximum *a posteriori* (MAP) method (first utilized by Geman and McClure [22] in nuclear medicine). The approach introduces a potential function *V*(·), which decreases in value with fewer local variations in the image, and subtracts it from the standard log-likelihood function being maximized, so as to counter the noise-generating nature of iterative reconstruction algorithms. In particular, we considered the one-step-late OSL-MAP reconstruction algorithm of Green [23], which is obtained by modifying Equation 2 to:

$${s}_{j}=\sum _{i=1}^{I}{p}_{ij}+\beta {\left.\frac{\partial V(\overrightarrow{\lambda})}{\partial {\lambda}_{j}}\right|}_{\overrightarrow{\lambda}={\overrightarrow{\lambda}}^{m}}$$

(14)

and applied it directly to our proposed approach by modifying Equation 8 via addition of the above last term. The same potential function as proposed by Green [23] was also considered in this work, and the derivative of the potential function used was:

$$\frac{\partial V(\overrightarrow{\lambda})}{\partial {\lambda}_{j}}=\sum _{k\in N{H}_{j}}{w}_{kj}\phantom{\rule{0.16667em}{0ex}}\mathrm{tanh}\left(\frac{{\lambda}_{j}-{\lambda}_{k}}{\delta}\right)$$

(15)

where the summation is performed over all voxels *k* in the neighbourhood *NH _{j}* of voxel *j*; the ‘neighbourhood’ in this work was defined as the six voxels sharing a side and the 12 voxels sharing an edge with the central voxel, each weighted by the inverse of the distance between the voxels; i.e., *w _{kj}*=1 and ${w}_{kj}=1/\sqrt{2}$, respectively. The variables β and δ were set by the user and adjusted to achieve various degrees/types of smoothing.
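The penalty term added to the sensitivity in the OSL scheme can be sketched as below. This assumes the tanh form of the derivative of Green's log-cosh potential; the neighbour-list representation and all names are our own illustration:

```python
import numpy as np

def osl_penalty(lam, neighbours, weights, beta, delta):
    """One-step-late MAP (Equations 14 and 15): the term added to the
    sensitivity s_j is beta * dV/dlambda_j evaluated at the current
    estimate, a weighted tanh of the scaled neighbour differences."""
    dV = np.zeros_like(lam)
    for j, (nbrs, ws) in enumerate(zip(neighbours, weights)):
        for k, w_kj in zip(nbrs, ws):
            dV[j] += w_kj * np.tanh((lam[j] - lam[k]) / delta)
    return beta * dV

# Three voxels in a row; the centre voxel is a noisy spike
lam = np.array([1.0, 2.0, 1.0])
neighbours = [[1], [0, 2], [1]]        # 1-D 'sharing a side' neighbourhoods
weights = [[1.0], [1.0, 1.0], [1.0]]
term = osl_penalty(lam, neighbours, weights, beta=0.5, delta=1.0)
```

The term is positive where a voxel exceeds its neighbours (enlarging *s _{j}* and thus damping the spike in the EM update), negative where it falls below them, and zero for a flat image.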

Our emphasis in this work has been to compare and validate motion compensation methods under realistic imaging scenarios. The tools developed in this regard have been two-fold.

Voxelized phantoms are problematic in that they are fixed to a particular spatial resolution, and also result in *interpolation errors* when modelling motion (e.g., the volume of a voxelized object may not be conserved after rotation). Instead, a mathematical brain phantom was developed, as further elaborated by Rahmim *et al.* [10] and depicted in Fig. 3a. The brain phantom was constructed using subdivision surfaces [24]. One hundred structures in the brain were identified. A software application was written using the Visualization Toolkit [25] to create three-dimensional polygon surfaces, which were then used to generate subdivision surfaces as described by Hoppe *et al.* [24].

A new technique [26] was used involving the combination of two powerful and well-validated Monte Carlo codes: SimSET and GATE. The method takes advantage of the shorter simulation times for photon propagation inside a digital phantom using SimSET as compared with GATE. We used the design parameters and the geometry of the second-generation HRRT scanner. The detector heads in the octagonal design consist of a double 10mm layer of LSO/LYSO for a total of 119 808 detector crystals (crystal size 2.1×2.1×10mm^{3}), as depicted in Fig. 3b. Simulations using the new technique run about 12 times faster with nearly identical accuracy.

As a first step, two-dimensional simulations of digitized brain images were considered. The two-dimensional sinograms had 96 projections covering 180°, and 96 radial bins. Five different positions were simulated, with incremental motions of translating by 1 pixel (width 4.87mm) along both directions and rotating by 1°. The simulated data were reconstructed using the OSEM algorithm (24 subsets). Four approaches were compared:

- No correction for motion;
- Motion incorporated in the forward projection step only [11]; FPMM (forward-projection motion modelling);
- The proposed system matrix modelling of motion with conventional attenuation correction (SMMM–CAC);
- System matrix modelling of motion with proposed attenuation correction (SMMM–PAC).

Ten contour ROIs were defined (*R*=10) each consisting of voxels of similar (within 2%) activity. The reconstructed image bias was then measured using an ROI-based normalized mean squared error (NMSE) metric given by:

$$\text{NMSE}=\frac{1}{R}\sum _{r=1}^{R}{\left(\frac{{\overline{\mu}}_{r}-{\mu}_{r}}{{\mu}_{r}}\right)}^{2}$$

(16)

where ${\overline{\mu}}_{r}$ and μ* _{r}* denote the mean reconstructed and true activity values in ROI *r*, respectively. The noise was measured using a normalized standard deviation (NSD) metric:

$$\text{NSD}=\frac{1}{R}\sum _{r=1}^{R}{\left(\frac{\sqrt{\frac{1}{{n}_{r}-1}{\sum}_{j\in r}{\left[{\lambda}_{j}-{\overline{\lambda}}^{r}\right]}^{2}}}{{\overline{\lambda}}^{r}}\right)}^{2}$$

(17)

where λ* _{j}* denotes the estimated activity at voxel *j*, ${\overline{\lambda}}^{r}$ the mean estimated activity in ROI *r*, and *n _{r}* the number of voxels in ROI *r*.
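Both figures of merit are straightforward to compute from ROI statistics; a minimal sketch of Equations 16 and 17 (variable names are ours):

```python
import numpy as np

def nmse(roi_mean_est, roi_true):
    """Equation 16: ROI-based normalized mean squared error (bias)."""
    e, t = np.asarray(roi_mean_est), np.asarray(roi_true)
    return np.mean(((e - t) / t) ** 2)

def nsd(roi_voxels):
    """Equation 17: average over ROIs of the squared relative standard
    deviation (sample standard deviation, divisor n_r - 1) of the voxel
    values within each ROI (noise)."""
    return np.mean([(np.std(v, ddof=1) / np.mean(v)) ** 2
                    for v in roi_voxels])

# Two ROIs: one unbiased, one 25% low; one uniform, one noisy
bias = nmse([2.0, 1.5], [2.0, 2.0])
noise = nsd([[1.0, 1.0, 1.0], [1.0, 3.0]])
```

Plotting NSD against NMSE over iterations then traces out the noise versus bias trade-off curves discussed below.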

Using the developed tools as discussed in the previous section, we performed simulations of our mathematical phantom in the HRRT scanner. A span of 3 and a maximum ring difference of 67 were considered. Three different positions were included in the study, with incremental motions each translating by 7.5mm along both transaxial directions and 10mm along the axial direction. Reconstructions were performed using the OSEM algorithm with 16 subsets. Six different regions in the brain (caudate, putamen, grey matter, white matter, cerebellum and brain stem) were quantitatively analysed, and the NMSE values (for different iterations) were determined as given by Equation 16 and plotted against the average ROI NSD. The true ROI μ* _{r}* values were obtained from the known phantom volumes.

The resulting noise (NSD) versus bias (NMSE) plots are depicted in (Fig. 4a and b). We have repeatedly observed that the FPMM method does not perform well. Alternatively, the proposed SMMM–CAC/PAC methods perform well, with the latter exhibiting improved NSD versus NMSE performance. Visually, a similar improved performance may be observed, as shown in Fig. 5.

Fig. 4. Plots of NSD (noise) versus NMSE (bias) with different numbers of iterations (in the range 1–18) for (a) 2 million and (b) 10 million simulated events (increasing iterations typically lower bias but increase noise, resulting in the well-known noise versus bias trade-off).

The resulting noise versus bias plots are shown in Fig. 6. The proposed approach (especially when including modelling of the modified attenuation factors due to the presence of motion) performs favourably compared with the case of no motion correction. Nevertheless, one observes that, compared with the case of no motion, an increasing number of iterations is required to achieve similar bias performance, thus resulting in poorer noise versus bias trade-off curves. This can be attributed to the fact that, in this approach, the data are not explicitly corrected for motion; rather, the motion information is incorporated in the reconstruction of the contaminated data-sets. Use of Bayesian MAP reconstruction (with some optimal values for the variables β and δ defined in Equations 14 and 15) results in somewhat improved noise versus bias trade-off curves, and this remains an area to be further investigated and optimized.

Fig. 6. Plots of NSD (noise) versus NMSE (bias) for the fully simulated three-dimensional phantom with different numbers of iterations (in the range 1–18; not all iterations are shown; increasing iterations lower bias but increase noise, producing the noise versus bias trade-off).

Figure 7 depicts transaxial/coronal/sagittal slices for various reconstructions. Similar conclusions as above can be drawn. An overall observation we have made [10] is that convergence is relatively slower in this approach, as compared with event-driven methods (i.e., the latter event-driven approach performs [10] very similarly to the case of no motion, whereas this is not the case in the current approach), though the approach presented has the advantage of being easily applicable to all scanners in the field, not requiring the list-mode acquisition capability or specialized hardware to achieve event-driven correction.

We have proposed and investigated a motion correction method applicable to *any* PET scanner in the field (i.e., without the need for specialized hardware modification and/or access to the list-mode acquisition capability). It has been demonstrated that in the case of motion-contaminated data, accurate incorporation of tracked motion information into the system matrix of the expectation maximization algorithm yields quantitatively and qualitatively improved images. At the same time, we have shown that quantification is further improved if the effect of motion-contamination on the attenuation factors is further modelled and incorporated in the reconstruction task.

This work was supported by the National Institutes of Health (grants: DA00412 and S10RR017219), the TRIUMF Life Science Grant, the Natural Sciences and Engineering Research Council of Canada, and the Michael Smith Foundation for Health Research. The authors wish to thank Andrew Crabb for computational and technical support.

1. Fulton RR, Meikle SR, Eberl S, Pfeiffer J, Constable CJ. Correction for head movements in positron emission tomography using an optical motion-tracking system. IEEE Trans Nucl Sci. 2002;49:116–123.

2. Bloomfield PM, Spinks TJ, Reed J, Schnorr L, Westrip AM, Livieratos L, Fulton R, Jones T. The design and implementation of a motion correction scheme for neurological PET. Phys Med Biol. 2003;48:959–978. [PubMed]

3. Lopresti BJ, Russo A, Jones WF, Fisher T, Crouch DG, Altenburger DE, Townsend DW. Implementation and performance of an optical motion tracking system for high resolution brain PET imaging. IEEE Trans Nucl Sci. 1999;46:2059–2067.

4. Rahmim A, Rousset O, Zaidi H. Strategies for motion tracking and correction in PET. PET Clinics. 2008;2 (in press)

5. Woods RP, Grafton ST, Holmes CJ, Cherry SR, Mazziotta JC. Automated image registration: I General methods and intrasubject, intramodality validation. J Comput Assist Tomogr. 1998;22:139–152. [PubMed]

6. Friston KJ, Ashburner J, Frith CD, Poline JB, Heather JD, Frackowiak RSJ. Spatial registration and normalisation of images. Human Brain Map. 1995;2:693–696.

7. Menke M, Atkins MS, Buckley KR. Compensation methods for head motion detected during PET imaging. IEEE Trans Nucl Sci. 1996;43:310–317.

8. Daube-Witherspoon M, Yan Y, Green M, Carson R, Kempner K, Herscovitch P. Correction for motion distortion in PET by dynamic monitoring of patient position [Abstract] J Nucl Med. 1990;31:816.

9. Rahmim A, Bloomfield P, Houle S, Lenox M, Michel C, Buckley KR, et al. Motion compensation in histogram-mode and list-mode EM reconstructions: beyond the event-driven approach. IEEE Trans Nucl Sci. 2004;51:2588–2596.

10. Rahmim A, Dinelle K, Cheng J, Shilov MA, Segars WP, Rousset OG, et al. Accurate event-driven motion compensation in high-resolution PET incorporating scattered and random events. IEEE Trans Med Imaging. 2008 (in press) [PMC free article] [PubMed]

11. Fulton RR, Meikle SR. Reconstruction of projection data corrupted by rigid or non-rigid motion. Oral presentation at the IEEE Medical Imaging Conference; San Juan, Puerto Rico. 25–29 October 2005.

12. Reader AJ, Ally S, Bakatselos F, Manavaki R, Walledge R, Jeavons AP, et al. One-pass list-mode EM algorithm for high-resolution 3-D PET image reconstruction into large arrays. IEEE Trans Nucl Sci. 2002;49:693–699.

13. Qi J, Huesman RH. Scatter correction for positron emission mammography. Phys Med Biol. 2002;47:2759–2771. [PubMed]

14. Reyes M, Malandain G, Koulibaly PM, Gonzalez-Ballester MA, Darcourt J. Model-based respiratory motion compensation for emission tomography image reconstruction. Phys Med Biol. 2007;52:3579–3600. [PubMed]

15. Qiao F, Pan T, Clark JW, Jr, Mawlawi OR. A motion-incorporated reconstruction method for gated PET studies. Phys Med Biol. 2006;51:3769–3783. [PubMed]

16. Li T, Thorndyke B, Schreibmann E, Yang Y, Xing L. Model-based image reconstruction for four-dimensional PET. Med Phys. 2006;33:1288–1298. [PubMed]

17. Lamare F, Ledesma Carbayo MJ, Cresson T, Kontaxakis G, Santos A, Cheze Le Rest C, et al. List-mode-based reconstruction for respiratory motion correction in PET using non-rigid body transformations. Phys Med Biol. 2007;52:5187–5204. [PubMed]

18. Rahmim A, Cheng JC, Sossi V. Improved noise propagation in statistical image reconstruction with resolution modeling. IEEE Nucl Sci Symp Imag Conf Record. 2005;5:2576–2578.

19. Kohli V, King MA, Glick SJ, Pan TS. Comparison of frequency–distance relationship and Gaussian-diffusion based methods of compensation for distance-dependent spatial resolution in SPECT imaging. Phys Med Biol. 1998;43:1025–1037. [PubMed]

20. Frey EC, Tsui BMW. Collimator–detector response compensation in SPECT. In: Zaidi H, editor. Quantitative Analysis in Nuclear Medicine Imaging. New York: Springer; 2006. pp. 141–166.

21. Tsui BMW, Frey EC, Zhao XD, Lalush DS, Johnston RE, McCartney WH. The importance and implementation of accurate three-dimensional compensation methods for quantitative SPECT. Phys Med Biol. 1994;39:509–530. [PubMed]

22. Geman S, McClure D. Proc Statist Comput Sect. Washington, DC: American Statistics Assoc; 1985. Bayesian image analysis: an application to single photon emission tomography; pp. 12–18.

23. Green PJ. Bayesian reconstructions from emission tomography data using a modified EM algorithm. IEEE Trans Med Imaging. 1990;9:84–93. [PubMed]

24. Hoppe H, DeRose T, Duchamp T, Halstead M, Jin H, McDonald J, et al. Piecewise smooth surface reconstruction. Comput Graph. 1994;28:295–302.

25. The Visualization Toolkit. http://public.kitware.com/VTK.

26. Shilov MA, Frey EC, Segars WP, Xu J, Tsui BMW. Improved Monte-Carlo simulations for dynamic PET. J Nucl Med. 2006;47 (Suppl 1):197P.
