Comput Med Imaging Graph. Author manuscript; available in PMC 2010 October 21.
Published in final edited form as:
PMCID: PMC2958779

FDG-PET Parametric Imaging by Total Variation Minimization


Parametric imaging of the cerebral metabolic rate for glucose (CMRGlc) using [18F]-fluorodeoxyglucose positron emission tomography is considered. Traditional imaging is hindered by low signal-to-noise ratios at individual voxels. We propose to minimize the total variation of the tracer uptake rates while requiring a good fit to the traditional Patlak equations. This minimization encourages spatial homogeneity within brain regions and sharp distinctions between brain regions. Brain phantom simulations demonstrate a significant improvement in image quality by the proposed method as compared to Patlak images post-filtered with Gaussian or median filters.

Keywords: Total variation, graphical analysis, Patlak plot, PET quantification, Parametric imaging, FDG, Alzheimer’s disease, uptake rate

1. Introduction

We focus on positron emission tomography (PET) parametric imaging for estimating the cerebral metabolic rate of glucose (CMRGlc) using the [18F]-fluorodeoxyglucose (FDG) tracer. The ability to derive accurate parameters depends upon the quality of data, the quantification method and the numerical algorithm. In this study, we refer to the time activity curve (TAC) from a given tissue location as the output, tissue TAC or TTAC, and the TAC from the blood pool (image-derived or arterial blood-sampled) as the input, plasma TAC or PTAC. Most existing quantification methods perform well for regions of interest (ROIs), but are not good for voxel level quantification due to the high level of noise. These include graphical methods, [20, 18], linear least squares, the weighted integration method, [3], generalized linear least squares, [6, 5], nonlinear least squares (NLS) and weighted NLS.

All the algorithms listed above perform the quantification at each voxel location separately; they do not consider the kinetic similarities among neighboring voxels within functionally-defined regions. Thus, voxel-by-voxel variation in a functionally-homogeneous region may be large because of noise in the data. But, by incorporating the spatial constraint that parameters in a functionally-homogeneous region should be similar, in any of the above methods, the quality of the resulting parametric image for the CMRGlc may be improved. Zhou et al, [25], for example, improved the parametric image quality by ridge regression with constraints on the rate constants. There, the estimation of parameters uses a linear components decomposition of the kinetics in which each component represents a functional kinetic curve generated by clustering the TTACs, and the problem is solved voxel by voxel. Here we propose the use of a total variation (TV) penalty term which imposes spatial consistency between neighboring voxels.

The total variation penalty was first introduced in the context of image deblurring by Rudin et al, [22]. TV can significantly suppress noise while recovering sharp edges because it does not penalize discontinuities. It has received much theoretical research attention and been utilized in many signal and image processing applications. While it was introduced for PET image reconstruction by Jonsson et al, [13], and Kisilev et al, [15], it has apparently not been applied for parametric PET imaging. Instead of calculating the uptake rate for each voxel by Patlak’s method, we propose to minimize the TV of the uptake rate over the entire image while also maintaining a good least squares fit for the Patlak equations at all voxels. Thus the parameters of the whole image are spatially related by the TV and solved simultaneously. The resultant parametric image is expected to have spatial homogeneity over brain regions with similar kinetics and distinct edges between brain regions that have different kinetics. This is validated by phantom simulations.

In addition to proposing the new model with the TV penalty, we also pay careful attention to the computational complexity of the algorithm by taking advantage of implementations for large scale sparse matrix computations. In contrast to approximating the Hessian matrix, as is typical for quasi-Newton methods, our algorithm explicitly and accurately calculates both gradient and Hessian terms. The Hessian is efficiently recalculated at each iteration because of its sparsity. The Quasi-Minimal Residual (QMR) method, [7], is used to solve the resulting large-scale linear systems. With this efficient implementation, the procedure described here is computationally feasible.

The rest of the paper is organized as follows: The new algorithm with TV penalty is introduced in section 2, with relevant computational issues detailed in the appendix. The experimental data sets are described in section 3 and results reported in section 4. Issues relevant to the proposed TV-Patlak method and computational aspects are discussed in section 5. Conclusions are presented in section 6.

2. Methods

The Patlak plot has been developed for systems with irreversible trapping [20]. Most often it is applied for the analysis of FDG. The measured TTAC undergoes a transformation and is plotted against a normalized time. It is given by the expression

\[
\frac{C_T(t)}{C_P(t)} = K\,\frac{\int_0^t C_P(\tau)\,d\tau}{C_P(t)} + V
\tag{1}
\]
where CT(t) is the measured TTAC (in counts/min/g) and CP is the PTAC (in counts/min/ml), i.e. the FDG concentration in plasma. For systems with irreversible compartments this plot yields a straight line after sufficient equilibration time. For the FDG tracer, the slope K represents the uptake rate which, together with the lumped constant (LC) and the glucose concentration in plasma (Cpg), allows easy calculation of CMRGlc = K·Cpg/LC (in mg/min/100g). The intercept V is given by V0 + vB, where V0 is the distribution volume of the reversible compartment and vB is the fractional blood volume.

The linear relationship (1) can be rewritten as

\[
C_T(t) = K \int_0^t C_P(\tau)\,d\tau + V\, C_P(t)
\tag{2}
\]
and, assuming m dynamic frames over the period of equilibration, its discretized version is

\[
C_T(t_j) = K \int_0^{t_j} C_P(\tau)\,d\tau + V\, C_P(t_j), \qquad j = 1, \ldots, m.
\tag{3}
\]
In matrix format,

\[
A \begin{pmatrix} K \\ V \end{pmatrix} = b,
\tag{4}
\]
where A is an m-by-2 matrix,

\[
A = \begin{pmatrix}
\int_0^{t_1} C_P(\tau)\,d\tau & C_P(t_1) \\
\vdots & \vdots \\
\int_0^{t_m} C_P(\tau)\,d\tau & C_P(t_m)
\end{pmatrix},
\]
and vector b = (CT(t1), CT(t2), ···, CT(tm))T. If we were to solve (4) for each voxel independently we would obtain a parametric image lacking spatial homogeneity and with low signal-to-noise ratio (SNR). Image denoising techniques could then be applied as a separate task to improve the image quality. Instead, by obtaining all voxel parameters as the result of a global optimization with a TV penalty over the entire image, the need for postprocessing should be eliminated.
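As an illustration, the per-voxel least squares fit of (4) can be sketched in NumPy (the paper's own code is in Matlab). The analytic input curve, sampling grid and parameter values below are arbitrary stand-ins for this sketch, not the paper's data:

```python
import numpy as np

# Hypothetical sampling grid and plasma input (arbitrary analytic form,
# not the input model used in the paper): C_P(t) = t * exp(-t/4).
t = np.linspace(0.5, 60.0, 40)          # minutes, post-equilibration
cp = t * np.exp(-t / 4.0)               # PTAC samples
# Running integral of C_P from the first sample, via the trapezoidal rule
# (sufficient for this sketch).
icp = np.concatenate(([0.0], np.cumsum(np.diff(t) * (cp[1:] + cp[:-1]) / 2)))

K_true, V_true = 0.05, 0.8              # assumed uptake rate and intercept
ct = K_true * icp + V_true * cp         # noise-free TTAC obeying Eq. (2)

# Patlak fit: regress C_T(t_j) on [integral of C_P, C_P], as in Eq. (4).
A = np.column_stack([icp, cp])
(K_est, V_est), *_ = np.linalg.lstsq(A, ct, rcond=None)
```

With noise-free data the least squares solution recovers the assumed K and V; with noisy voxel TTACs this per-voxel fit is exactly the low-SNR baseline the TV penalty is designed to improve.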

Limiting the discussion here to 2D images (although our application of TV is 3D), we select the active voxels to be quantified by the application of a brain mask, yielding a total of N voxels. Equation (4) holds with a common matrix A, dependent on CP(t), for each voxel i, but with K, V and b replaced by K(i), V(i) and bi, respectively, where bi is obtained from the TTAC for voxel i. Collecting the unknowns of these N voxels in vectors of uptake rates and intercepts

\[
x = (K(1), K(2), \ldots, K(N))^T, \qquad y = (V(1), V(2), \ldots, V(N))^T,
\]
and requiring (4) in the least squares sense over all voxels, while maintaining minimal TV of the uptake rate for the selected image voxels, yields the global minimization problem

\[
\min_{x,y}\ \Phi(x,y) = \|x\|_{TV,\beta} + \alpha \sum_{i=1}^{N} \left\| W_i \left( A\,(x_i, y_i)^T - b_i \right) \right\|_2^2.
\tag{5}
\]
Here the total variation norm is given by \(\|x\|_{TV,\beta} = \sum_{i=1}^{N} \varphi_i(x)\) with \(\varphi_i(x) = \sqrt{(x_i - x_i^r)^2 + (x_i - x_i^b)^2 + \beta^2}\), where \(x_i^r\) and \(x_i^b\) are the values associated with the voxels to the right of and below voxel i. Theoretically the TV norm is \(\|x\|_{TV,0}\), which is a seminorm on a space of bounded variation, [24]. The small constant β is used to avoid the numerical difficulty due to the lack of differentiability of \(\varphi_i\) at the origin for β = 0. The diagonal weight matrix for voxel i is given by

\[
W_i = \mathrm{diag}\left( \sqrt{\frac{\Delta t_j\, e^{-\lambda t_j}}{b_{ij}}} \right)_{j=1}^{m},
\]
where Δtj is the scan duration of the frame at time tj, λ is the tracer’s decay constant and bij is the value of the ith TTAC at frame j. This weighting is consistent with using a simulation with variance

\[
\sigma_{ij}^2 = S_c\, \frac{b_{ij}\, e^{\lambda t_j}}{\Delta t_j},
\]
[17]. Sc is a common scale factor that need not be made explicit here because it is absorbed into the parameter α in (6).
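The smoothed TV term \(\|x\|_{TV,\beta}\) is straightforward to evaluate. A minimal NumPy sketch follows; the boundary convention of replicating the last row and column (so that edge differences vanish) is an assumption of this sketch:

```python
import numpy as np

def tv_beta(x, beta=1e-8):
    """Smoothed total variation ||x||_{TV,beta} of a 2-D array x.

    phi_i = sqrt((x_i - x_i^r)^2 + (x_i - x_i^b)^2 + beta^2), where the
    right/below neighbours on the boundary are replicated edge values.
    """
    dx = np.diff(x, axis=1, append=x[:, -1:])   # difference to right neighbour
    dy = np.diff(x, axis=0, append=x[-1:, :])   # difference to neighbour below
    return np.sum(np.sqrt(dx**2 + dy**2 + beta**2))

# A piecewise-constant step image: TV counts only the jumps across the edge.
step = np.zeros((4, 4))
step[:, 2:] = 1.0
tv_step = tv_beta(step)        # four unit jumps, so approximately 4
tv_flat = tv_beta(np.ones((3, 3)))   # nearly zero: TV does not penalize flat regions
```

The step example illustrates why TV suits this application: a sharp boundary between two homogeneous regions incurs the same penalty regardless of how abrupt the transition is, so edges between brain regions are preserved while within-region noise is suppressed.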

The objective function in (5) is convex and the problem can be solved using a standard Newton-type algorithm, [24, Chapter 8]. To simplify the expressions we introduce the vector z = [x; y].

Further details on the calculation of the gradient vector \(\nabla\Phi(z)\) and Hessian matrix \(\nabla^2\Phi(z)\) are provided in the appendix. Some other aspects of the algorithm, including the choice of the constants α and β, here 0.2 and 10−8 respectively, as well as other approaches for the solution of the TV problem, are discussed in section 5.

3. Experimental Data

To validate the proposed parametric imaging method we performed experiments with simulated data. The MRI-based high-resolution Zubal head phantom is used to define the brain structures, [26]. Each voxel in the 256 × 256 slice is of size 1.5 × 1.5 mm². There are 128 slices for the entire head and 62 defined anatomical, neurological, and taxonomical structures. The 11 regions for slice 64, representing a total of 13,708 voxels, are given in table 1. For the purposes of the simulations, well-accepted values of the kinetic rate parameters of these structures for the two-tissue compartmental model of the FDG tracer [12] are assigned. Notice that in order to better model the true biochemical process we assume k4 > 0. Because k4 is relatively small, however, Patlak’s method, which assumes k4 = 0, remains a suitable graphical method for quantifying the uptake rate.

Table 1
Brain regions and rate constants[12] for slice 64 of the Zubal head phantom [26].

The noise-free input function, with values given in kBq/ml, is given by the formulation introduced in [9],


Although any reasonable input function, including clinical plasma samples, could be used for the simulation, this formulation was validated as providing a good approximation to the plasma samples of a healthy subject. Given the input function, the exact phantom can then be generated using the rate constants for the structures detailed in table 1, [23]. The output time frames were generated assuming time frames with durations Δtj, j = 1, ···, m, m = 22, given in minutes: 0.2, 8 × 0.0333, 2 × 0.1667, 0.2, 0.5, 2 × 1, 2 × 1.5, 3.5, 2 × 5, 10 and 30.

In our experiments, in order to control the computational overhead of the reconstructions, we reduced the image size of the phantom from 256 × 256 to 128 × 128. To do this we needed to relabel the structure assigned to a given voxel. This was done by averaging the quantity K1k3/(k2 + k3) over the 2-by-2 neighboring voxels at the finer resolution. Then the voxel at the coarser level was labeled as belonging to the structure for which this average is closest to the structure value. The kinetic values were then assigned, the TTAC output values calculated, and then projected to the sinogram space yielding projected noise-free sinogram data P. For simplification, the instrumental and physical effects, including attenuation, Compton scattering, decay and random coincidences, were not simulated. Poisson noise was then added to the projected data using S = poissrnd(P), where the Matlab [19] function poissrnd uses vector P as the means of Poisson densities to generate the noisy sinogram data S. Based on this noisy sinogram data, concentration images are reconstructed using the Expectation-Maximization (EM) algorithm, [14].
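The Poisson noise step has a direct NumPy analogue to Matlab's poissrnd; in this sketch the sinogram P is a random stand-in rather than an actual forward projection of the phantom:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the noise-free projected sinogram P (values act as Poisson
# means); a real P would come from forward-projecting the phantom images.
P = rng.uniform(50.0, 500.0, size=(64, 96))

# NumPy equivalent of Matlab's S = poissrnd(P): one Poisson draw per bin,
# with each bin of P supplying the mean of its own Poisson density.
S = rng.poisson(P).astype(float)
```

Since a Poisson variate has variance equal to its mean, bins with higher counts receive proportionally more absolute noise, which is the count-statistics behaviour the simulation is meant to reproduce.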

Several data sets were generated. To investigate errors introduced by violation of the irreversibility assumption, k4 = 0, we tested both k4 = 0 and k4 > 0. Similarly, to investigate the effect of noise in the sinograms, we tested both with and without Poisson noise in the sinograms. Gaussian noise with noise levels 0%, 5%, 10%, 15% and 20% was added to the input function, i.e.

\[
\tilde{u}(t_j) = u(t_j)\left( 1 + CV\, \eta_j \right),
\]
where ηj is selected from a standard normal distribution (G(0, 1)), and CV = 0, 0.05, 0.10, 0.15 and 0.20. 100 random realizations are tested in each case.
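The multiplicative input-noise model can be sketched as follows; the input curve u below is an arbitrary placeholder, not the formulation of [9]:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder input function sampled at the frame times (arbitrary shape).
t = np.linspace(0.1, 90.0, 22)
u = t * np.exp(-t / 3.0)

def add_input_noise(u, cv, rng):
    """Multiplicative Gaussian noise: u_noisy(t_j) = u(t_j) * (1 + CV * eta_j),
    with eta_j drawn from the standard normal distribution."""
    eta = rng.standard_normal(u.shape)
    return u * (1.0 + cv * eta)

u20 = add_input_noise(u, 0.20, rng)   # the 20% noise level of the '4' cases
```

Because the perturbation is multiplicative, the noise amplitude scales with the local value of u(t), so the high early peak of a plasma curve is perturbed more in absolute terms than the late tail.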

We summarize the test data sets in table 2, using a character triple to classify each test. The first character of the triple indicates whether k4 = 0 or k4 > 0, 0 or +, respectively. The second character indicates the noise level on the input function, 0 or +, for noise-free, or with added noise, respectively. To indicate the noise level + is replaced by 1, 2, 3 and 4 to indicate noise levels 5%, 10%, 15% and 20% as necessary. The third character indicates whether noise is added to the sinogram, again 0 or +, respectively. Therefore, for example, +3+ represents the test case for k4 > 0, 15% noise in u(t) and Poisson noise added to the sinogram.

Table 2
Summary of the test cases. Here 0 in columns two and three indicates the noise-free case, while + indicates noise was added. In column one the 0 indicates k4 = 0. The same comments apply, but with the irreversibility assumption violated for all triples ...

4. Results

To evaluate the simulations quantitatively we define the relative error of the kth realization for voxel i:

\[
r_i^k = \frac{\hat{K}^k(i) - K_{true}(i)}{K_{true}(i)},
\tag{12}
\]

where \(\hat{K}^k(i)\) is the estimate of the true uptake rate \(K_{true}(i)\) at the kth realization for voxel i. Over 100 random simulations and N voxels, we calculate the bias, i.e. the mean of the relative errors, and the deviation from the mean:

\[
\bar{r} = \frac{1}{100N} \sum_{i=1}^{N} \sum_{k=1}^{100} r_i^k, \qquad
d = \frac{1}{100N} \sum_{i=1}^{N} \sum_{k=1}^{100} \left| r_i^k - \bar{r} \right|.
\]
Absolute relative errors \(|r_i^k|\) and the associated mean \(\bar{R} = \sum_{i=1}^{N}\sum_{k=1}^{100} |r_i^k| / (100N)\) and deviation \(D = \sum_{i=1}^{N}\sum_{k=1}^{100} \big|\, |r_i^k| - \bar{R} \,\big| / (100N)\) are also calculated. Note that the deviations are l1 measurements, which do not overweight outlier and large-error samples, as do l2-based measurements such as the root mean squared error.
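The l1 summary statistics can be computed as follows; the sketch assumes a (realizations × voxels) layout for the estimates, which is a convention of this example:

```python
import numpy as np

def error_summary(K_est, K_true):
    """Mean absolute relative error and its l1 deviation.

    K_est: (n_realizations, N) array of estimated uptake rates.
    K_true: (N,) array of true uptake rates.
    """
    r = (K_est - K_true) / K_true            # relative errors r_i^k
    R_bar = np.mean(np.abs(r))               # mean absolute relative error
    D = np.mean(np.abs(np.abs(r) - R_bar))   # l1 deviation: robust to outliers
    return R_bar, D

# One realization, two voxels: errors +10% and -10%.
R_bar_ex, D_ex = error_summary(np.array([[1.1, 1.8]]), np.array([1.0, 2.0]))
```

Using the mean of absolute deviations rather than their squares is what keeps a handful of badly estimated voxels from dominating the summary, in contrast to the root mean squared error.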

In the images shown in the figures we illustrate the calculated uptake rates K of the FDG. Images for the CMRGlc can be obtained by directly scaling K. In figure 1 we compare the results of using Patlak and TV-Patlak for estimating the uptake rates for the cases of no noise, 20% noise in the input function, Poisson noise in the sinogram, and finally the case in which the irreversibility assumption is violated but without noise in the sinogram or input data. In each case the histogram of the relative errors is given on the left, the Patlak image in the middle and the TV-Patlak image on the right. The different scales in the histograms are due to the total number of results illustrated: when there is no noise (triples 000 and +00) the histogram shows results over all voxels for a single simulation, while for the noisy simulations the results are over all voxels and all 100 realizations of the noise. The TV-Patlak images are more homogeneous in all cases and the relative errors are smaller. The figures clearly show the improvement obtained by employing the TV-Patlak method as compared to using Patlak independently for each voxel. This is confirmed in figure 2, in which images with noise in the sinogram, positive k4 and different noise levels in the input function are shown.

Figure 1
Comparison of Patlak and TV-Patlak for imaging the uptake rate K. In each case on the left is the histogram for the relative error as compared to the exact simulated value, in the middle the Patlak image and on the right the TV-Patlak image. The first ...
Figure 2
Comparison of Patlak and TV-Patlak for imaging the uptake rate K. In each case on the left is the histogram for the relative error as compared to the exact simulated value, in the middle the Patlak image and on the right the TV-Patlak image. In each case ...

Quantitative measurements, confirming the illustrations, are presented in table 3. There we also present the results for conventional Patlak’s method with post-smoothing by two standard filters:

Table 3
Results of phantom simulation: comparison of relative errors between the Patlak (P), TV-Patlak (TV-P), Patlak-GF (P-GF) and Patlak-MF (P-MF) methods. TV¯ is the mean of the total variation over 100 simulations in each case, std denotes its standard ...
  1. a Gaussian filter (Patlak-GF), size 3-by-3 with standard deviation 0.5, generated by Matlab functions filter2(fspecial(‘gaussian’, 3, 0.5), img) and
  2. a median filter (Patlak-MF) generated by Matlab function medfilt2(img) for a 3-by-3 neighborhood.
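For reference, the two post-filters have close SciPy analogues; fspecial_gaussian below mimics Matlab's fspecial('gaussian', 3, 0.5), and the zero padding approximates the Matlab defaults of filter2 and medfilt2 (an assumption of this sketch):

```python
import numpy as np
from scipy import ndimage

def fspecial_gaussian(size=3, sigma=0.5):
    """Normalized Gaussian kernel, mirroring Matlab's fspecial('gaussian')."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return h / h.sum()

def patlak_gf(img):
    # Correlation with the Gaussian kernel, zero-padded like filter2's default.
    return ndimage.correlate(img, fspecial_gaussian(), mode="constant")

def patlak_mf(img):
    # 3-by-3 median filter, zero-padded at the borders like medfilt2.
    return ndimage.median_filter(img, size=3, mode="constant")

# Both filters leave the interior of a constant image unchanged.
flat = np.ones((5, 5))
gf_center = patlak_gf(flat)[2, 2]
mf_center = patlak_mf(flat)[2, 2]
```

Note the design difference being compared in table 3: these filters smooth unconditionally after the per-voxel fit, whereas the TV penalty is part of the fit itself and stops smoothing at region boundaries.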

Consistent with the observations in [21, 16], we find that violation of the Patlak assumption, k4 = 0, introduces about 10% bias; \(\bar{r} \approx 0\) when k4 = 0 but \(\bar{r} \approx 10\%\) for k4 > 0. The rows \(\overline{TV}\) (std) and “# 10% (# 15%)” provide complementary supporting information, indicating that the TV is minimized by TV-Patlak; compared with Patlak, Patlak-GF and Patlak-MF, the number of voxels with larger error is reduced. In particular, we emphasize that TV-Patlak provides a better noise-removal mechanism than popular post-filtering approaches.

In figures 3 and 4 we illustrate the uptake rates and the relative error in the uptake rates, respectively, calculated by Patlak, TV-Patlak, Patlak-GF and Patlak-MF for one simulated data case, +4+, i.e. k4 > 0, 20% noise in the input function and Poisson noise in the sinograms. The uptake rate image generated by Patlak-MF is visually smoother than that by TV-Patlak, but the equivalent histograms show that the relative error is higher for Patlak-MF than for TV-Patlak; the Patlak-MF image is over-smoothed.

Figure 3
Comparison of Patlak (upper-left), TV-Patlak (upper-right), Patlak-GF (bottom-left) and Patlak-MF (bottom-right) for imaging the uptake rate K for one simulated data case +4+, i.e. k4 > 0, 20% noise in the input function and Poisson noise in the ...
Figure 4
Histograms for the relative error (12) in the uptake rate calculated by Patlak, TV-Patlak, Patlak-GF and Patlak-MF, for 100 realizations of the data case +4+, i.e. k4 > 0, 20% noise in the input function and Poisson noise in the sinogram.

In figure 5 we illustrate the relative error of noisy case +4+ for gray matter regions in the phantom, including frontal lobes, occipital lobes, insula cortex, temporal lobes and globus pallidus, which are of interest for Alzheimer’s disease research. The error bars show that all estimation methods for gray matter regions have negative bias and there are fewer cases with high error by TV-Patlak.

Figure 5
Histograms for the relative error (12) in the uptake rate of gray matter regions calculated by Patlak, TV-Patlak, Patlak-GF and Patlak-MF, for 100 realizations of the data case +4+, i.e. k4 > 0, 20% noise in the input function and Poisson noise ...

Finally, we note that the computational cost of the TV-Patlak Algorithm 1 is about 6.5 seconds for a 2D image, compared with 0.48, 0.55 and 0.58 seconds for Patlak, Patlak-GF and Patlak-MF, respectively, on a PC with a 1 GHz CPU running Matlab code.

Algorithm 1
Given initial guess z = [x; y] and tolerance ε > 0

5. Discussion

In this section we discuss issues relevant to the proposed TV-Patlak method.

  1. Regularization constant α: There are many approaches for determining an appropriate regularization constant α; a good reference is [24], in which the methods of unbiased predictive risk estimation, generalized cross validation, and the L-curve are described. In general, the choice of α depends on the noise level of the problem. Here α balances the homogeneity of the uptake rate against the residual of the traditional Patlak least squares data fit. For the PET imaging application, noise in the TTAC data depends on the scanner, the reconstruction method, the tracer dosage and even the kinetics of individuals. It is therefore possible to make a standard parameter setting for commonly-used environments. The most convenient method for the selection of α is the so-called L-curve, [10, 11], which plots the total variation against \(\sum_{i=1}^{N} \|W_i(A(x_i, y_i)^T - b_i)\|_2^2\) for all tested α. The L-curve clearly displays the compromise between the homogeneity and the residual of the Patlak fitting equations. The α corresponding to the lower-left corner of the L-curve is considered the optimal choice. One representative L-curve from our simulations is illustrated in figure 6. We found for our simulations that a suitable choice is α = 0.2, but in general it will depend on the reconstruction algorithm. For example, the simple EM method and filtered backprojection algorithms introduce more noise than the maximum a posteriori (MAP) algorithm, [8, 1]. For real data, not only are there additional sources of noise but the choice of α will also depend on the specific tracer. However, once an appropriate α is found by the L-curve for a specific imaging environment, it can be fixed for future imaging calculations.
    Figure 6
    L-curve for the simulation case ‘+4+’. The total variation is plotted against the sum of weighted residuals, i.e. \(\sum_{i=1}^{N} \|W_i(A(x_i, y_i)^T - b_i)\|_2^2\), for α from 0.0498 to 0.8187. α = 0.2 is associated with the left ...
  2. Constant β: We use Newton’s method for the case β ≠ 0 to solve the optimization problem (5). Other methods for solving the TV problem with β ≠ 0 are discussed in [24], which includes the primal-dual method [4]. A good choice of β avoids numerical difficulties for small derivatives and provides a good approximation of the TV. For our tests we found that the results are not sensitive to the choice of β and thus suggest the use of β = 10−8 as a good choice for FDG-PET brain imaging. If we wish to avoid the choice for β the TV term in (5) can be reformulated as a set of linear constraints, and other algorithms are possible, [2].
  3. Boundary values: At each iteration of the algorithm, after updating x and before doing any function evaluation or other calculation, the boundary voxels need to be updated. The value of each boundary voxel is set to the average of its active neighbors in four directions.
  4. Computational efficiency: For computational efficiency and to minimize memory usage, it is important not only to use sparse storage strategies for the relevant matrices but also to use appropriate algorithms for the solution of the large scale sparse linear systems given by (9). Here we use the QMR iteration, [7]. If a general Newton’s or quasi-Newton method with an approximated Hessian were used, the cost in time and memory would be much higher because the approximated Hessian matrix is generally dense. Our Hessian is both sparse and calculated exactly, so convergence is faster: for our simulations convergence is achieved in eleven iteration steps on average. The Matlab code of the TV-Patlak method can be downloaded from
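The Newton linear systems can be handed directly to SciPy's QMR implementation. In the sketch below the sparse matrix is a tridiagonal stand-in with a similar sparsity pattern, not the actual Hessian of the TV-Patlak objective:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import qmr

# Sparse stand-in for the Newton system H dz = -g of Eq. (9): a diagonally
# dominant tridiagonal matrix stored in compressed sparse row format.
n = 200
H = sparse.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
g = np.ones(n)

# QMR iteration, as in reference [7]; info == 0 signals convergence.
dz, info = qmr(H, -g)
```

Because H is stored sparsely and QMR only needs matrix-vector products, the per-iteration cost scales with the number of nonzeros rather than with n², which is the point of avoiding the dense approximate Hessians of quasi-Newton methods.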

6. Conclusions

A qualitative improvement in imaging of PET uptake can be achieved by using a global model, with the total variation as a penalty term, to obtain the voxel uptake rates. The resultant uptake images have spatial homogeneity over brain regions with similar kinetics and distinct edges between brain regions that have different kinetics. It is statistically validated that the TV-Patlak significantly reduces the relative errors of the calculated parameters as compared with those generated by Patlak’s graphical method, and post-smoothing by Gaussian and median filters.


This work was supported by grants from the state of Arizona, the NIH, R01 MH057899 and P30 AG19610 and the NSF DMS 0652833 and DMS 0513214. The authors thank Chi-Chuan Chen for providing the transition matrix, generated by the Monte Carlo method, which is used in the PET image reconstructions and Drs Bouman and Musfata for offering their MAP reconstruction code, which provides comparisons between reconstruction methods. We also acknowledge Dr. Laszlo Balkay and Dr. Jeffrey Fessler for providing PET simulator and PET reconstruction software, respectively.



Hongbin Guo received his Ph.D. degree in computational mathematics from the Department of Mathematics, Fudan University, in 2000. He is currently an Assistant Research Professor with the Department of Mathematics and Statistics at Arizona State University. His main research interest is computational mathematics with a focus on the numerical linear algebra and its applications in medical imaging, particularly for the quantification of brain function with positron emission tomography and magnetic resonance imaging, co-registration, functional clustering, image restoration and classification.


Rosemary A. Renaut received the Ph.D. degree in applied mathematics from the University of Cambridge, U.K., in 1985. Since 1987, she has been with the Department of Mathematics, Arizona State University, where she is now a Full Professor. Dr. Renaut is a Fellow of the Institute for Mathematics and its Applications, and a Chartered Mathematician. Her research interests are broad and include the design and evaluation of computational methods for the solution of partial differential equations, with specific emphasis on high-order and spectral methods, as well as the development of novel algorithms for solving inverse problems, specifically as applied to medical image reconstruction and restoration.


Kewei Chen received his master’s degree from Beijing Normal University in 1986 and his Ph.D. degree from UCLA in 1993. He is currently the director and a senior biomathematician of the Computational Image Analysis Program, Banner Alzheimer’s Institute. He has been using and developing neuroimaging analytic techniques in PET, magnetic resonance imaging (MRI) and functional MRI. His main methodological research interests include tracer kinetic modeling in PET, measuring local and global volume changes using various MRI techniques, brain functional connectivity, and multi-modal data integration. One of his primary research interests is the use of neuroimaging techniques in the study of Alzheimer’s disease.


Eric M Reiman is Executive Director of the Banner Alzheimer’s Institute, Clinical Director of the Neurogenomics Division at the Translational Genomics Research Institute (TGen), Professor and Associate Head of Psychiatry at the University of Arizona, and Director of the Arizona Alzheimer’s Consortium. He received his undergraduate and medical degrees and most of his psychiatry residency training at Duke University. He completed his residency training, became an Assistant Professor of Psychiatry and developed a leadership role in positron emission tomography research at Washington University in St. Louis, before moving to Arizona. His research interests include brain imaging, genomics, and their use in the unusually early detection and tracking of Alzheimer’s disease and the rigorous and rapid evaluation of promising Alzheimer’s disease-slowing, risk-reducing and prevention therapies.

A. Appendix

To simplify notation, we assume that the weighting matrix in (5) has been absorbed into A and bi. To simplify the expressions in the objective function (6) we introduce the vector function f(z) = (f1(z), f2(z), ···, f2N(z))T, where \(f_i(z) = \varphi_i(x)^2\) for i = 1, ···, N and \(f_i(z) = \|A[x_{i-N};\, y_{i-N}] - b_{i-N}\|^2\) for i = N + 1, ···, 2N.

Gradient calculation

To find the gradient of Φ(z), we first derive the Jacobian of f(z).

  1. For i = 1, ···, N and l = 1, ···, 2N
  2. For i = N + 1, ···, 2N and l = 1, ···, 2N
    Because \(\nabla_x \varphi_i = \frac{1}{2\varphi_i}\, \nabla_x f_i\) for i = 1, ···, N, \(\nabla_z \Phi = (\nabla f(z))^T g\), where
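The chain rule used here can be checked numerically. A sketch for a single voxel's φ with arbitrary (assumed) neighbor values, comparing the finite-difference gradient of φ against the gradient of f = φ² rescaled by 1/(2φ):

```python
import numpy as np

beta = 1e-8

def phi(x3):
    """phi for one voxel: x3 = (x_i, x_i^right, x_i^below)."""
    xi, xr, xb = x3
    return np.sqrt((xi - xr)**2 + (xi - xb)**2 + beta**2)

def num_grad(fun, x, h=1e-6):
    """Central finite-difference gradient of a scalar function."""
    g = np.zeros_like(x)
    for k in range(len(x)):
        e = np.zeros_like(x)
        e[k] = h
        g[k] = (fun(x + e) - fun(x - e)) / (2 * h)
    return g

x = np.array([1.0, 0.3, -0.4])   # arbitrary voxel and neighbor values
g_phi = num_grad(phi, x)                    # direct gradient of phi
g_f = num_grad(lambda v: phi(v)**2, x)      # gradient of f = phi^2
```

The two gradients should agree through the identity \(\nabla\varphi = \nabla f / (2\varphi)\), which is exactly the relation used above to assemble \(\nabla_z\Phi\) from the Jacobian of f.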

Hessian matrix

To find the Hessian matrix for Φ(z), we first derive the Hessian matrix for each function fi(z).

  1. For i = 1, ···, N, \(\nabla^2_{xx} f_i\) is sparse
  2. For i = N + 1, ···, 2N, the Hessian matrix is again sparse.


1. Bouman CA, Sauer K. A unified approach to statistical tomography using coordinate descent optimization. IEEE Tr Im Proc. 1996 March;5 (3):480–492. [PubMed]
2. Boyd S, Vandenberghe L. Convex Optimization. Cambridge University Press; 2004.
3. Carson R, Huang S, Green M. Weighted integration method for local cerebral blood flow measurements with positron emission tomography. J Cereb Blood Flow Metab. 1986;6 (2):245–58. [PubMed]
4. Chan TF, Golub GH, Mulet P. A nonlinear primal-dual method for total variation-based image restoration. SIAM J Sci Comput. 1999;20 (6):1964–1977.
5. Chen K, Lawson M, Reiman E, Cooper A, Feng D, Huang S, Bandy D, Ho D, Yen L, Palant A. Generalized linear least squares method for fast generation of myocardial blood flow parametric images with N–13 ammonia PET. Med Imag. 1998;7 (3):236–243. [PubMed]
6. Feng D, Huang S. An unbiased parametric imaging algorithm for nonuniformly sampled biomedical system parameter estimation. IEEE Trans Med Imag. 1996;15 (4):512–518. [PubMed]
7. Freund RW, Nachtigal NM. QMR: A quasi-minimal residual method for non-Hermitian linear systems. Numerische Mathematik. 1991;60:315–340.
8. Geman S, McClure DE. Bayesian image analysis: An application to single photon emission tomography. Proc Amer Statist Assoc Statistical Computing Section. 1985:12–18.
9. Guo H, Renaut R, Chen K. An input function estimation method for FDG-PET human brain studies. Nuclear Medicine and Biology. 2007;34 (5):483–492. [PMC free article] [PubMed]
10. Hansen PC. Analysis of discrete ill-posed problems by means of the L-curve. SIAM Review. 1992;34:561–580.
11. Hansen PC, O’Leary DP. The use of the L-curve in the regularization of discrete ill-posed problems. SIAM Journal on Scientific Computing. 1993;14:1487–1503.
12. Huang SC, Phelps ME, Hoffman EJ, Sideris K, Selin CJ, Kuhl DE. Noninvasive determination of local cerebral metabolic rate of glucose in man. Am J Physiol. 1980;238 (E):69–82. [PubMed]
13. Jonsson E, Huang SC, Chan T. Total variation regularization in positron emission tomography. Technical Reports on Image Processing CAM 98-48, UCLA 1998
14. Kaufman L. Implementing and accelerating the EM algorithm for positron emission tomography. IEEE Trans Med Imag. 1987 Mar;6 (1):37–51. [PubMed]
15. Kisilev P, Zibulevsky M, Zeevi YY. Wavelet representation and total variation regularization in emission tomography. ICIP. 2001;(1):702–705.
16. Lammertsma A, Brooks D, Frackowiak R, Beaney R, Herold S, Heather J, Palmer A, Jones T. Measurement of glucose utilisation with [18f]2-fluoro-2-deoxy-d-glucose: a comparison of different analytical methods. J Cereb Blood Flow Metab. 1987;7 (2):161–72. [PubMed]
17. Logan J, Fowler J, Volkow N, Ding Y, Wang G, Alexoff D. A strategy for removing the bias in the graphical analysis method. J Cereb Blood Flow Metab. 2001;21 (3):307–20. [PubMed]
18. Logan J, Fowler JS, Volkow ND, Wolf AP, Dewey SL, Schlyer DJ, MacGregor RR, Hitzemann R, Bendriem B, Gatley SJ. Graphical analysis of reversible radioligand binding from time-activity measurements applied to [N-11C-methyl]-(−)-cocaine PET studies in human subjects. J Cereb Blood Flow Metab. 1990;10:740–747. [PubMed]
19. Matlab. Matlab is a registered trademark of MathWorks, Inc. 2008. URL
20. Patlak CS, Blasberg RG, Fenstermacher JD. Graphical evaluation of blood-to-brain transfer constants from multiple-time uptake data. J Cereb Blood Flow Metab. 1983;3 (1):1–7. [PubMed]
21. Phelps ME, Huang SC, Hoffman EJ, Selin CE, Kuhl D. Tomographic measurement of local cerebral glucose metabolic rate in man with (18F) fluorodeoxyglucose: Validation of method. Ann Neurol. 1979;6:371–388. [PubMed]
22. Rudin L, Osher S, Fatemi E. Nonlinear total variation based noise removal algorithms. Physica D. 1992;60:259–268.
23. Sokoloff L, Reivich M, Kennedy C, Rosiers MHD, Patlak CS, Pettigrew KD, Sakurada M, Shinohara M. The [14C] deoxyglucose method for the measurement of local cerebral glucose metabolism: theory procedures and normal values in the conscious and anesthetized albino rat. J Neurochem. 1977;28:897–916. [PubMed]
24. Vogel CR. Computational Methods for Inverse Problems. Society for Industrial and Applied Mathematics; Philadelphia, PA, USA: 2002.
25. Zhou Y, Endres CJ, Brasic JR, Huang SC, Wong DF. Linear regression with spatial constraint to generate parametric images of ligand-receptor dynamic PET studies with a simplified reference tissue model. NeuroImage. 2003;18 (4):975–989. [PubMed]
26. Zubal IG, Harrell CR, Smith EO, Rattner Z, Gindi G, Hoffer PB. Computerized three-dimensional segmented human anatomy. Medical Physics. 1994 Feb;21:299–302. [PubMed]