J Opt Soc Am A Opt Image Sci Vis. Author manuscript; available in PMC 2010 November 2.


PMCID: PMC2969184

NIHMSID: NIHMS235175

Harrison H. Barrett, Department of Radiology and Optical Sciences Center, University of Arizona, Tucson, Arizona.

Address correspondence to Harrison H. Barrett, Department of Radiology, University of Arizona, Tucson, Arizona 85724. Email: barrett@radiology.arizona.edu

**Abstract**

As photon-counting imaging systems become more complex, there is a trend toward measuring more attributes of each individual event. In various imaging systems the attributes can include several position variables, time variables, and energies. If more than about four attributes are measured for each event, it is not practical to record the data in an image matrix. Instead it is more efficient to use a simple list where every attribute is stored for every event. It is the purpose of this paper to discuss the concept of likelihood for such list-mode data. We present expressions for list-mode likelihood with an arbitrary number of attributes per photon and for both preset counts and preset time. Maximization of this likelihood can lead to a practical reconstruction algorithm with list-mode data, but that aspect is covered in a separate paper [IEEE Trans. Med. Imaging (to be published)]. An expression for lesion detectability for list-mode data is also derived and compared with the corresponding expression for conventional binned data.

**1. INTRODUCTION**

As photon-counting imaging systems become more complex, there is a trend toward measuring more attributes of each individual event. As an example, consider a planar nuclear-medicine imaging system in which the detector is a scintillation camera. This detector measures (or estimates) two coordinates for each gamma ray. For some kinds of scatter correction, the energy of the photon is also estimated and recorded.^{1–3} With high photon energies and thick scintillation crystals, it can also be useful to estimate the depth of interaction of the photon in the crystal.^{4–6} All of these attributes are estimated from the basic raw data, the photomultiplier signals, and in fact these signals can themselves be regarded as measured attributes of the scintillation event.^{7,8}

Additional attributes arise in dynamic and tomographic imaging. One way of conducting a dynamic study in nuclear medicine is to record the time of occurrence for each event. Similarly, in single-photon emission computed tomography (SPECT) systems, it is necessary, at a minimum, to record the projection angle along with the event coordinates.

The number of attributes increases further in a fully three-dimensional positron emission tomography (PET) system with two scintillation cameras. There the minimal set of attributes consists of four coordinates (two for each of the coincident photons) plus a rotation angle. In addition, the attribute set might include estimates of the energies of each photon, depth of interaction, or time-of-flight difference. Similarly, in a Compton camera^{9} each primary gamma-ray photon produces a Compton-scattered photon, and the coordinates and energy of both primary and secondary photon are measured. Thus the attribute set consists of at least four measured coordinates and two energies, and two more coordinates can be measured with thick detectors.

The concept of measuring multiple attributes for each event is not restricted to gamma rays; optical photon-counting detectors with multiple outputs exist as well. A position-sensitive photomultiplier, for example, can have multiple anodes. If optical photons are incident on the photocathode and well resolved temporally, so that distinct anode signals are obtained for each photon, then the signal on each anode can be regarded as an attribute of a single optical photon.

It is clear from these examples that a substantial number of attributes can be measured for each detected event. One way of recording these data is to bin them into one large data matrix, with one index for each attribute. This method of data recording encounters difficulties as the number of attributes increases, since the number of elements in the data matrix can be huge. If *N* attributes are measured to a precision of *B* bits each, there must be 2^{*NB*} elements. If we assign one byte to each element, we can acquire a maximum of 255 events in one bin. With four attributes at 8-bit precision, we thus require approximately 2^{32} elements, or 4 Gbytes of memory.

If we attempted to reduce the storage requirements by reducing the number of bits per attribute, there would be a danger of information loss. As an extreme example, photon energy in a scintillation camera is often reduced to a single bit, set to one if the estimated energy lies in a preset window. There is evidence that this results in a loss of image quality as measured by lesion detectability.^{8}^{,}^{10}

Another form of data reduction is to use the initial set of measured attributes to estimate values for some smaller set of attributes. From the photomultiplier outputs in a scintillation camera, for example, we can estimate the coordinates and energy of each photon, reducing the number of attributes from the number of photomultipliers to three. Though this practice is virtually universal, it is difficult to establish in general that it entails no information loss.

In an important alternative mode of data storage, called list mode, the measured attributes of each event are simply stored in a list. If *J* events are observed and *N* attributes are measured for each, then *NJ* memory locations are required. Each location can be one byte if 8-bit precision is adequate for each attribute. Whenever 2^{*NB*} exceeds *NJ*, list mode is therefore the more economical representation.
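This crossover is easy to illustrate numerically. The sketch below compares the two storage costs; the particular values of *N*, *B*, and *J* are illustrative, not figures taken from the paper.

```python
# Compare memory for binned vs. list-mode storage of photon-counting data.
# Binning needs 2**(N*B) one-byte elements; list mode needs N bytes per event.
# The values of N, B, J below are illustrative assumptions.

def binned_bytes(n_attr: int, bits: int) -> int:
    """Number of bins (one byte each): 2**(N*B)."""
    return 2 ** (n_attr * bits)

def list_bytes(n_attr: int, n_events: int) -> int:
    """List mode: N one-byte attributes per event."""
    return n_attr * n_events

N, B, J = 4, 8, 10**6                          # 4 attributes, 8 bits, 1e6 events
print(binned_bytes(N, B))                      # 2**32 bytes = 4 GiB of bins
print(list_bytes(N, J))                        # 4e6 bytes = 4 MB list
print(binned_bytes(N, B) > list_bytes(N, J))   # list mode wins when 2**(NB) > NJ
```
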

The goal of this paper is to present a comprehensive treatment of the important concept of likelihood for list-mode data. A familiar use of likelihood is in maximum-likelihood parameter estimation or object reconstruction, and the theory presented here provides the mathematical basis for these applications. Another use of likelihood is in signal detection and discrimination problems, where it is known that the likelihood ratio is the optimum test statistic. Performance on these detection and discrimination tasks can then be used for the objective assessment of image quality.

In Section 2 we consider three modes of representing data from photon-counting imaging systems: conventional binning, list mode, and an impulse-valued random process. We present the relevant multivariate probability distributions needed to describe the data in each mode.

In Section 3 we use the statistical distributions developed in Section 2 to derive expressions for list-mode likelihood of the data given a particular object. We consider separately the situations where data are acquired for a preset time or a preset number of counts. We have used one of these expressions to develop a maximum-likelihood reconstruction algorithm, but the algorithm itself is derived and discussed in a separate paper.^{11} Previous work on maximum-likelihood reconstruction from list-mode data has been published by Snyder^{12} and Snyder and Politte,^{13} and it is a subject of current interest in high-energy physics.

In Section 4 we derive expressions for the likelihood ratio and detectability index for list-mode data when the task is detection of a nonrandom object or discrimination between two such objects. The results are compared with previously published expressions for binned data.

**2. STATISTICAL MODELS**

In any event-counting system, data can be collected either for a given time or until a given number of events is reached. In the nuclear-medicine literature, these two methods are referred to as preset time and preset counts, respectively, and we adopt that terminology here also. The key distinction is that the total number of events *J* is a random variable for preset time but a fixed number for preset counts. Another possible data set could be obtained by collecting a preset number of counts but also recording the (random) time required to reach this number. This option is rarely used in practice and is not treated here.

Let **r*** _{j}* denote the *N* × 1 vector of attributes measured for the *j*th event, where *j* = 1, …, *J* and *J* is the total number of detected events.

There are three different ways of representing these data. The simplest is the attribute list {**r*** _{j}*, *j* = 1, …, *J*} itself, stored event by event as described in the introduction.

In a binned image, it is convenient to use a single index *m* to denote the bin rather than using one index for each component of the attribute vector. The *j*th event is assigned to bin *m* if
${X}_{mn}-{\scriptstyle \frac{1}{2}}{\epsilon}_{n}\le {x}_{jn}<{X}_{mn}+{\scriptstyle \frac{1}{2}}{\epsilon}_{n}$ for all *n*, where *X _{mn}* is the *n*th component of the attribute vector at the center of bin *m*, *x _{jn}* is the *n*th component of **r*** _{j}*, and *ε _{n}* is the width of the bin in the *n*th attribute.

The number of bins associated with the *n*th attribute is *M _{n}*, given by the range of allowed values of that attribute divided by the bin width *ε _{n}*.

After all *J* events have been binned, a total of *g _{m}* events will have accumulated in bin *m*, and the binned data set is described by an *M* × 1 vector **g**, where the total number of bins is

$$M=\prod _{n=1}^{N}{M}_{n}.$$

(1)

As noted in the introduction, *M* can be huge even for modest *N*, so the binned data representation may not be feasible for *N* larger than 4 or so.
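The binning rule above can be sketched in a few lines of code. The attribute ranges, bin widths, and sample events here are illustrative assumptions, chosen only to show the mapping from an attribute vector to a single flat bin index *m*.

```python
import numpy as np

# Assign each attribute vector to the flat bin index m, following the rule
# X_mn - eps_n/2 <= x_jn < X_mn + eps_n/2 for all n.  The attribute ranges,
# bin widths, and sample events are illustrative, not from the paper.

lo  = np.array([0.0, 0.0, 0.0])   # lower edge of each attribute's range
eps = np.array([0.5, 0.5, 1.0])   # bin width eps_n for each attribute
M_n = np.array([8, 8, 4])         # M_n bins for the n-th attribute
M   = int(np.prod(M_n))           # Eq. (1): M = prod_n M_n

def bin_index(r):
    """Map one attribute vector r to its flat bin index m."""
    idx = np.floor((np.asarray(r) - lo) / eps).astype(int)  # per-attribute bin
    idx = np.clip(idx, 0, M_n - 1)                          # stay in range
    return int(np.ravel_multi_index(idx, M_n))              # single index m

events = [[0.1, 0.7, 2.3], [3.9, 3.9, 3.9]]   # two example events
g = np.zeros(M, dtype=np.int64)               # binned data vector g
for r in events:
    g[bin_index(r)] += 1
print(g.sum())   # total counts J = 2
```
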

Another representation of the data, very useful for theoretical analysis, is the impulse-valued random process.^{14} In this representation, we assign an *N*-dimensional Dirac delta function to each event in the list. The resulting random process is a generalized function in attribute space defined by

$$g(\mathbf{r})=\sum _{j=1}^{J}\delta (\mathbf{r}-{\mathbf{r}}_{j}).$$

(2)

This function is parameterized by the random attributes {**r*** _{j}*} and by the total number of events *J*.

Given the random process *g*(**r**), we can obtain the binned data vector **g** simply by integrating. The number of events in bin *m* is given by

$${g}_{m}={\int}_{\text{bin}\phantom{\rule{0.16667em}{0ex}}m}g(\mathbf{r}){\text{d}}^{N}r,$$

(3)

where d^{*N*}*r* is a volume element in attribute space and the integral is over the region of attribute space associated with the *m*th bin.

Equation (3) demonstrates a key difference between *g*(**r**) and *g _{m}*: the latter is a pure number, while the former must have dimensions associated with it. Just what these dimensions are depends on the specific attributes that make up the vector **r**.
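A discrete sketch of Eq. (3): integrating the impulse-valued process over a bin simply counts the events whose attributes fall there, which is exactly a histogram of the attribute list. The one-dimensional attribute values below are illustrative.

```python
import numpy as np

# Eq. (3) in discrete form: the integral of sum_j delta(r - r_j) over bin m
# counts the events in that bin, i.e. a histogram of the attribute list.
# The 1-D attribute list and bin edges are illustrative assumptions.

r_list = np.array([0.12, 0.48, 0.51, 0.60])      # measured attributes r_j
edges  = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # bin boundaries, eps = 0.25

g, _ = np.histogram(r_list, bins=edges)          # binned data vector g
print(g.tolist())     # [1, 1, 2, 0] -> counts per bin
print(int(g.sum()))   # 4 -> total counts J
```
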

The key assumption in the analysis that follows is that individual events are statistically independent. Although we often take this condition for granted in photon-counting problems, there are several important situations that could invalidate it.

The first is detector saturation, manifested as dead time or loss of resolution at high counting rates. If one photon temporarily paralyzes the detector and there is a significant probability of another photon arriving before it recovers, the probability of detection of the second photon is dependent on the presence of the first. Even if the second photon is detected, the transient response of the detector or the electronics may cause errors in the measured position, energy, or other attributes of the second photon. We shall neglect all of these effects in this paper, which amounts to restricting the analysis to relatively low count rates.

Statistical independence also fails in random multiplication processes where one primary event gives rise to a random number of secondary events.^{14}^{,}^{15} In a scintillation detector, for example, a single gamma-ray photon produces a large number of optical photons, and these secondary events are not statistically independent since they arise from the same gamma-ray photon.^{16} One can conceive of systems that measure attributes of individual secondary events, though the authors know of no current systems that do so. A scintillator could be viewed by an optical photon-counting imaging system, for example, but currently such systems do not have sufficient temporal resolution to report coordinates or other attributes for the individual optical photons. Since this paper is concerned only with counting systems where multiple attributes of individual events are measured, we rule out the possibility that the events under consideration are secondaries associated with a single primary event.

On the other hand, random multiplication processes may be present in the systems considered here, scintillation cameras being a prime example. The distinction is that the attributes being measured are properties of the primary event, and the randomness of the secondaries simply leads to error in the estimates of the attributes. As long as the primary events (the gamma rays in the case of a scintillation camera) are statistically independent, the analysis given here is valid.

Another problem that can invalidate the independence assumption is randomness in the object being imaged. There are two different statistical ensembles that we might consider. The first is all realizations of the random data set for one particular object; the second allows the objects themselves to be drawn at random from some distribution. In the latter case, the independence assumption may hold conditionally for a fixed object but not when the ensemble of objects is taken into account. Object randomness is not considered in this paper.

We adopt a discrete object model and represent the object by a *K* × 1 vector **f**. The *k*th component of **f**, denoted *f _{k}*, is the mean number of photons per second emitted from the *k*th voxel (or other expansion element) of the object.

The random vector **r*** _{j}* is the result of a measurement of the set of attributes associated with an individual event. As with any measurement, there can be both systematic and random errors. In the language of estimation theory, there is both a bias and a variance associated with the estimate or measurement. If we denote the true attribute vector for the *j*th event by **R*** _{j}*, we can write

$${\mathbf{r}}_{j}={\mathbf{R}}_{j}+{\mathbf{b}}_{j}+{\mathit{\eta}}_{j},$$

(4)

where **b*** _{j}* is the bias or systematic error and *η _{j}* is the random measurement error or noise.

Thus the measured attribute vector **r*** _{j}* has two random components: the true attribute vector **R*** _{j}* and the measurement error *η _{j}*.

With these considerations in mind, we write the probability density function for **r*** _{j}* as

$$\text{pr}({\mathbf{r}}_{j}\mid \mathbf{f})={\int}_{\text{att}}{\text{d}}^{N}{R}_{j}{\text{pr}}_{det}({\mathbf{r}}_{j}\mid {\mathbf{R}}_{j}){\text{pr}}_{\text{im}}({\mathbf{R}}_{j}\mid \mathbf{f}),$$

(5)

where the integral is over the range of each component in attribute space, pr_{det}(**r*** _{j}*|**R*** _{j}*) is the probability density of the measured attributes given the true ones, which characterizes the detector, and pr_{im}(**R*** _{j}*|**f**) is the density of the true attributes given the object, which characterizes the imaging system.

In practice, the two factors in the integrand in Eq. (5) can be calculated from an analytical or numerical model of the detector and imaging system. The second factor, pr_{im}(**R*** _{j}*|**f**), carries the dependence on the object **f**; an explicit form for the resulting density pr(**r*** _{j}*|**f**) in terms of **f** is derived in Section 3.

Since nothing distinguishes one photon from another, pr(**r*** _{j}*|**f**) is the same function of its arguments for every *j*. With the assumption that the events are statistically independent, the joint density of the attribute list for a fixed number of events *J* is therefore

$$\text{pr}(\{{\mathbf{r}}_{j}\}\mid \mathbf{f},J)=\prod _{j=1}^{J}\text{pr}({\mathbf{r}}_{j}\mid \mathbf{f}).$$

(6)

For an acquisition with preset time, the list-mode data set consists of *J* + 1 random variables, namely, all of the **r*** _{j}* and *J* itself. The overall probability law is then

$$\begin{array}{l}\text{pr}(\{{\mathbf{r}}_{j}\},J\mid \mathbf{f})=\text{pr}(\{{\mathbf{r}}_{j}\}\mid \mathbf{f},J)Pr(J\mid \mathbf{f})\\ =Pr(J\mid \mathbf{f})\prod _{j=1}^{J}\text{pr}({\mathbf{r}}_{j}\mid \mathbf{f}).\end{array}$$

(7)

The notation is a bit tricky here since pr({**r*** _{j}*}, *J*|**f**) is a mixed probability law: a probability density with respect to the continuous attribute vectors {**r*** _{j}*} but a discrete probability with respect to *J*. We use a lowercase pr for probability density functions and an uppercase Pr for probabilities of discrete events.

For later convenience we define two unnormalized densities *h*(**r*** _{j}*) and *h̄*(**r*** _{j}*) by

$$h({\mathbf{r}}_{j})=J\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{j}\mid \mathbf{f}),$$

(8)

$$\overline{h}({\mathbf{r}}_{j})=\overline{J}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{j}\mid \mathbf{f}),$$

(9)

where *J̄* is the mean number of events, averaged over many acquisitions with the same object and the same preset time:

$$\overline{J}=\sum _{J=0}^{\infty}J\phantom{\rule{0.16667em}{0ex}}Pr(J\mid \mathbf{f}).$$

(10)

For preset counts, *h*(**r**)d^{*N*}*r* is the mean number of events with attribute vectors in the differential volume d^{*N*}*r* centered on **r**; for preset time, *h̄*(**r**)d^{*N*}*r* has the same interpretation.

In this section we review some well-known results concerning statistics of binned data. The relation of these results to list-mode data will be discussed in Subsection 2.F.

The probability that event *j* is recorded in bin *m* equals the probability that **r*** _{j}* falls in the region of attribute space associated with that bin; this probability is given by the integral

$$Pr({\mathbf{r}}_{j}\phantom{\rule{0.38889em}{0ex}}\text{in}\phantom{\rule{0.38889em}{0ex}}\text{bin}\phantom{\rule{0.38889em}{0ex}}m)={\int}_{\text{bin}\phantom{\rule{0.38889em}{0ex}}m}\text{pr}({\mathbf{r}}_{j}\mid \mathbf{f}){\text{d}}^{N}{r}_{j}\equiv {\alpha}_{m}.$$

(11)

Since we have assumed that pr(**r*** _{j}*|**f**) is the same function for every *j*, the probability *α _{m}* does not depend on *j*.

For a preset-count data acquisition, the mean number of counts in bin *m* is simply

$${\overline{g}}_{m}={\alpha}_{m}J={\int}_{\text{bin}\phantom{\rule{0.38889em}{0ex}}m}h(\mathbf{r}){\text{d}}^{N}r\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}(\text{preset}\phantom{\rule{0.16667em}{0ex}}\text{counts}).$$

(12)

Since the events are independent and the total number is fixed, the univariate probability law for *g _{m}* in this case is a binomial,

$$Pr({g}_{m}\mid \mathbf{f},J)=\frac{J!}{({g}_{m}!)(J-{g}_{m})!}{({\alpha}_{m})}^{{g}_{m}}{(1-{\alpha}_{m})}^{J-{g}_{m}},$$

(13)

and the corresponding multivariate law for the entire data vector **g** is a multinomial,

$$Pr(\mathbf{g}\mid \mathbf{f},J)=J!\prod _{m=1}^{M}\frac{{({\alpha}_{m})}^{{g}_{m}}}{{g}_{m}!}.$$

(14)

Note that the *g _{m}* are not statistically independent, because their sum must equal *J*.

For preset time, we can write

$$Pr(\mathbf{g}\mid \mathbf{f})=\sum _{J=0}^{\infty}Pr(\mathbf{g}\mid \mathbf{f},J)Pr(J\mid \mathbf{f}).$$

(15)

If *J* is a Poisson random variable, as it almost invariably is in practical photon-counting problems, this sum can readily be performed.^{14} The result is that the *g _{m}* are also Poisson and statistically independent, with a probability law given by

$$Pr(\mathbf{g}\mid \mathbf{f})=\prod _{m=1}^{M}exp(-{\overline{g}}_{m})\frac{{({\overline{g}}_{m})}^{{g}_{m}}}{{g}_{m}!}$$

(16)

where now

$${\overline{g}}_{m}={\alpha}_{m}\overline{J}={\int}_{\text{bin}\phantom{\rule{0.38889em}{0ex}}m}\overline{h}(r){\text{d}}^{N}r\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}(\text{preset}\phantom{\rule{0.16667em}{0ex}}\text{time}).$$

(17)

Thus, with preset time and the assumption that *J* is a Poisson random variable, the full probability law Pr(**g**|**f**) is determined by knowledge of *h̄*(**r**).
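These binned-data laws are easy to check by simulation. The sketch below, with illustrative values of *J̄* and the bin probabilities *α _{m}*, draws a Poisson *J* for each acquisition, distributes the events multinomially into bins as in Eq. (14), and verifies that each *g _{m}* then behaves as a Poisson variable with mean *α _{m}J̄*, as Eqs. (16) and (17) assert.

```python
import numpy as np

# Sanity check of Eqs. (16)-(17): with preset time and J ~ Poisson(Jbar),
# multinomial thinning into bins with probabilities alpha_m makes each g_m
# Poisson with mean alpha_m * Jbar.  Jbar and alpha are illustrative.

rng   = np.random.default_rng(0)
Jbar  = 50.0
alpha = np.array([0.5, 0.3, 0.2])     # bin probabilities alpha_m, sum to 1
n_acq = 20000                         # number of repeated acquisitions

g = np.empty((n_acq, alpha.size))
for i in range(n_acq):
    J = rng.poisson(Jbar)             # preset time: J is random
    g[i] = rng.multinomial(J, alpha)  # Eq. (14), conditional on J

mean = g.mean(axis=0)                 # should approach alpha * Jbar (Eq. 17)
var  = g.var(axis=0)                  # Poisson: variance equals the mean
print(np.round(mean, 1))              # ~ [25, 15, 10]
print(np.round(var / mean, 2))        # ~ [1, 1, 1]
```
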

The statistics of a spatial or temporal, impulse-valued, Poisson random process are well understood.^{14}^{,}^{17} In this section we extend these properties to a more general attribute space.

Since *g*(**r**) is a generalized function with no finite values other than zero, a probability density function does not have much meaning. Instead we shall discuss the first- and second-order statistics of *g*(**r**), namely, its mean and autocorrelation function.

The conditional expectation of *g*(**r**), given **f** and *J*, is

$$E\{g(\mathbf{r})\mid \mathbf{f},J\}={\int}_{\text{att}}{\text{d}}^{N}{r}_{1}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{1}\mid \mathbf{f}){\int}_{\text{att}}{\text{d}}^{N}{r}_{2}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{2}\mid \mathbf{f})\cdots \times {\int}_{\text{att}}{\text{d}}^{N}{r}_{J}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{J}\mid \mathbf{f})g(\mathbf{r}).$$

(18)

The procedure for performing this kind of average is detailed by Barrett and Swindell^{14}; the result is

$$E\{g(\mathbf{r})\mid \mathbf{f},J\}=h(\mathbf{r}).$$

(19)

If we further average over *J* in a preset-time mode and assume that *J* is Poisson, we find^{14}

$$E\{g(\mathbf{r})\mid \mathbf{f}\}=\overline{h}(\mathbf{r}).$$

(20)

The nonstationary autocorrelation function of *g*(**r**) is defined by

$$\mathrm{\Gamma}(\mathbf{r},{\mathbf{r}}^{\prime})=\langle g(\mathbf{r})g({\mathbf{r}}^{\prime})\rangle ,$$

(21)

where the angle brackets denote averaging over the set {**r*** _{j}*} and, for preset time, over *J* as well. For a preset-time acquisition with Poisson *J*, the result is

$$\mathrm{\Gamma}(\mathbf{r},{\mathbf{r}}^{\prime})=\overline{h}(\mathbf{r})\delta (\mathbf{r}-{\mathbf{r}}^{\prime})+\overline{h}(\mathbf{r})\overline{h}({\mathbf{r}}^{\prime}).$$

(22)

Counterparts of Eqs. (18)–(22) for spatial or temporal random processes are derived by Barrett and Swindell,^{14} among other sources. The main point of this section, however, is that they apply in attribute space as well.

We have considered two kinds of data acquisition (preset counts and preset time) and three kinds of data representation (list, bins, and random process). In this section we point out some connections among the results.

First, with binned data the distinction between preset time and preset counts is not great if the total number of bins *M* is large and the number of events *J* is also large. If *M* is large, chances are that no single bin will have a large probability *α _{m}* of getting a particular event. By Eq. (12), the mean number of counts in bin *m* is then small compared with *J*, and the binomial law of Eq. (13) is well approximated by a Poisson law with the same mean. In this limit the multinomial of Eq. (14) goes over to the product of independent Poissons in Eq. (16), and the two acquisition modes become statistically equivalent.

The distinction between binned data and list mode disappears if the size of each bin is made small enough, since in that case the average number of counts in any bin becomes much less than one. Then the actual random number of events recorded in a bin is either 0 or 1 with high probability, and the list of attribute vectors is simply a list of addresses of bins with one count. This limit is not a practical one, since it is wasteful of memory, but it shows the theoretical relation between binning and list mode: binned data approaches a list as bin size tends to zero.

Also, the density functions *h*(**r**) and *h̄*(**r**) play a key role in the statistics of all three kinds of data. These functions were originally introduced as unnormalized versions of pr(**r*** _{j}*|**f**), the probability density for the attributes of a single event, but Eqs. (19) and (20) show that they are also the conditional means of the random process *g*(**r**) for preset counts and preset time, respectively.

In the case of binned data, *h*(**r**) and *h̄*(**r**) have yet another meaning: their integrals determine the mean number of counts in each bin [see Eqs. (12) and (17)]. Moreover, if the bins are small enough that these density functions do not vary appreciably over a bin width, we can approximate the integrals as

$${\overline{g}}_{m}\simeq h({\mathbf{X}}_{m})\prod _{n=1}^{N}{\epsilon}_{n},$$

(23)

where **X*** _{m}* is the attribute vector at the center of bin *m* and the *ε _{n}* are the bin widths; the corresponding preset-time expression has *h̄* in place of *h*.

Comparison of Eqs. (7) and (15) reveals an interesting distinction between list-mode and binned data. The list-mode data set consists of *J* + 1 random variables, the attribute vectors plus *J* itself, so the probability law in Eq. (7) includes *J* as a random variable. With binned data, however, *J* is not a separate random variable, since it is the sum of the *g _{m}*. Thus a sum over *J* appears in the binned-data law of Eq. (15) but not in the list-mode law of Eq. (7).

**3. MAXIMUM-LIKELIHOOD IMAGE RECONSTRUCTION**

Suppose we are given a data vector **g** described by a probability density function pr(**g**|**θ**), where the vector **θ** is a set of parameters to be estimated. The likelihood of **θ** is this same density regarded as a function of **θ** with the data fixed at their observed values, and the maximum-likelihood estimate of **θ** is the parameter vector that maximizes it. In our reconstruction problem the parameter vector is the object **f**.

For binned data, the likelihood is given by Eq. (14) or (16), where the dependence on **f** is contained in *α _{m}* or *ḡ _{m}*. For a linear imaging system, the mean binned data are related to the object by

$${\overline{g}}_{m}=\sum _{k=1}^{K}{H}_{mk}{f}_{k}={(\mathbf{Hf})}_{m},$$

(24)

where *H _{mk}* is an element of an *M* × *K* system matrix **H**.

With this system model, the likelihood for binned data and preset counts is given, from Eqs. (12), (14), and (24), by

$${L}_{\text{bin}}(\mathbf{f}\mid J)=J!\prod _{m=1}^{M}\frac{{[{(\mathbf{Hf})}_{m}/J]}^{{g}_{m}}}{{g}_{m}!}.$$

(25)

For preset time, and with the assumption that *J* is a Poisson random variable, the likelihood for binned data is [see Eq. (16)]

$$\begin{array}{l}{L}_{\text{bin}}(\mathbf{f})=\sum _{J=0}^{\infty}Pr(J\mid \mathbf{f}){L}_{\text{bin}}(\mathbf{f}\mid J)\\ =\prod _{m=1}^{M}exp[-{(\mathbf{Hf})}_{m}]\frac{{[{(\mathbf{Hf})}_{m}]}^{{g}_{m}}}{{g}_{m}!}.\end{array}$$

(26)

With this form of the likelihood, a popular method for finding the maximum-likelihood estimator of **f** is the expectation-maximization algorithm.^{23}^{,}^{24} This algorithm finds the vector **f** that maximizes *L*_{bin}(**f**) (or, equivalently, its logarithm), subject to the constraint that all components *f _{k}* be nonnegative.
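A minimal sketch of the multiplicative MLEM update that maximizes the binned Poisson likelihood of Eq. (26). The 3 × 2 system matrix and data vector are toy values, not taken from the paper; the update rule itself is the standard expectation-maximization iteration for this likelihood.

```python
import numpy as np

# MLEM update for the binned Poisson likelihood, Eq. (26):
#     f_k <- (f_k / s_k) * sum_m H_mk * g_m / (H f)_m,   s_k = sum_m H_mk.
# The 3x2 system matrix H and data g below are toy values.

H = np.array([[0.8, 0.1],
              [0.1, 0.7],
              [0.1, 0.2]])            # M x K system matrix
g = np.array([40.0, 25.0, 10.0])      # binned data
s = H.sum(axis=0)                     # sensitivities s_k

f = np.ones(H.shape[1])               # nonnegative initial estimate
for _ in range(500):
    gbar = H @ f                      # forward projection (H f)_m
    f = f / s * (H.T @ (g / gbar))    # multiplicative MLEM update

print(np.round(H @ f, 2))             # ML-fitted mean data (H f-hat)
```

The multiplicative form automatically preserves the nonnegativity constraint, and after every iteration the fitted counts satisfy Σ_{*k*} *s _{k}f _{k}* = Σ_{*m*} *g _{m}*.
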

For list-mode data, the likelihood is given by Eq. (6) or (7). The dependence on **f** in these equations is contained in pr(**r*** _{j}*|**f**). From Eqs. (6) and (8), the list-mode likelihood for preset counts can be written as

$${L}_{\text{list}}(\mathbf{f}\mid J)={J}^{-J}\prod _{j=1}^{J}h({\mathbf{r}}_{j}).$$

(27)

A constant factor such as *J*^{−*J*} is irrelevant in most applications of the likelihood, so Eq. (27) shows that the list-mode likelihood for preset counts is simply the product of the densities *h*(**r*** _{j}*) evaluated at the measured attribute vectors.

For preset time, Eqs. (7) and (27) show that

$${L}_{\text{list}}(\mathbf{f})=Pr(J\mid \mathbf{f}){L}_{\text{list}}(\mathbf{f}\mid J)=Pr(J\mid \mathbf{f}){J}^{-J}\prod _{j=1}^{J}h({\mathbf{r}}_{j}).$$

(28)

In contrast to Eq. (26), no sum over *J* appears in this equation, since *J* is a separate random variable in list mode.

To express the list-mode likelihoods as explicit functions of **f**, we note that

$$\text{pr}({\mathbf{r}}_{j}\mid \mathbf{f})=\sum _{k=1}^{K}\text{pr}({\mathbf{r}}_{j}\mid j\phantom{\rule{0.16667em}{0ex}}\text{from}\phantom{\rule{0.16667em}{0ex}}k)Pr(j\phantom{\rule{0.16667em}{0ex}}\text{from}\phantom{\rule{0.16667em}{0ex}}k\mid \mathbf{f}),$$

(29)

where “*j* from *k*” is shorthand indicating that the *j*th event originated with the emission of a photon from the *k*th voxel. Since all photons are equivalent, the probability that any one of them originated from the *k*th voxel is proportional to the object strength associated with that voxel, so Pr(*j* from *k*|**f**) ∝ *f _{k}*.

To determine the constant of proportionality in general, we must consider the possibility that the overall system sensitivity can vary with voxel location. The sensitivity *S _{k}* is the probability that a photon emitted from voxel *k* is detected at all, whatever attributes are then assigned to it.

The probability that the *j*th event originated in the *k*th voxel is the mean number of photons emitted from the *k*th voxel and detected divided by the total number emitted and detected. The mean number emitted from the voxel per unit time is *f _{k}*, and the probability that an emitted photon is detected is *S _{k}*, so

$$Pr(j\phantom{\rule{0.16667em}{0ex}}\text{from}\phantom{\rule{0.16667em}{0ex}}k\mid \mathbf{f})=\frac{{f}_{k}{S}_{k}}{{\displaystyle \sum _{k=1}^{K}}{f}_{k}{S}_{k}}.$$

(30)

Equation (29) now becomes

$$\text{pr}({\mathbf{r}}_{j}\mid \mathbf{f})=\frac{{\displaystyle \sum _{k=1}^{K}}{f}_{k}{S}_{k}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{j}\mid j\phantom{\rule{0.16667em}{0ex}}\text{from}\phantom{\rule{0.16667em}{0ex}}k)}{{\displaystyle \sum _{k=1}^{K}}{f}_{k}{S}_{k}}.$$

(31)

From Eqs. (27), (29) and (31), the list-mode likelihood for preset counts is

$${L}_{\text{list}}(\mathbf{f}\mid J)=\frac{{\displaystyle \prod _{j=1}^{J}}\left[{\displaystyle \sum _{k=1}^{K}}{f}_{k}{S}_{k}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{j}\mid j\phantom{\rule{0.16667em}{0ex}}\text{from}\phantom{\rule{0.16667em}{0ex}}k)\right]}{{\left({\displaystyle \sum _{k=1}^{K}}{f}_{k}{S}_{k}\right)}^{J}}.$$

(32)

The corresponding log likelihood is

$$log[{L}_{\text{list}}(\mathbf{f}\mid J)]=-Jlog\left(\sum _{k=1}^{K}{f}_{k}{S}_{k}\right)+\sum _{j=1}^{J}log\left[\sum _{k=1}^{K}{f}_{k}{S}_{k}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{j}\mid j\phantom{\rule{0.16667em}{0ex}}\text{from}\phantom{\rule{0.16667em}{0ex}}k)\right].$$

(33)
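A sketch of evaluating Eq. (33) numerically. The Gaussian blur model for pr(**r*** _{j}*|*j* from *k*), the voxel centers, sensitivities, and event list are all illustrative assumptions, chosen only to make the expression concrete.

```python
import numpy as np

# Evaluate the preset-count list-mode log likelihood of Eq. (33):
#   log L = -J log(sum_k f_k S_k) + sum_j log(sum_k f_k S_k p_jk),
# where p_jk = pr(r_j | j from k).  The Gaussian blur model, voxel centers,
# sensitivities, and events below are all illustrative assumptions.

centers = np.array([0.0, 1.0, 2.0])        # voxel positions (K = 3)
S       = np.array([1.0, 0.9, 0.8])        # sensitivities S_k
sigma   = 0.3                              # assumed measurement blur

def p_jk(r):
    """pr(r_j | j from k): Gaussian blur around each voxel center."""
    z = (r - centers) / sigma
    return np.exp(-0.5 * z**2) / (sigma * np.sqrt(2 * np.pi))

def loglik_list(f, r_list):
    """Eq. (33) for a 1-D attribute list."""
    J = len(r_list)
    total = np.dot(f, S)                                # sum_k f_k S_k
    terms = [np.log(np.dot(f * S, p_jk(r))) for r in r_list]
    return -J * np.log(total) + sum(terms)

r_list = [0.1, 0.9, 1.1, 2.05]                          # measured attributes
print(loglik_list(np.array([1.0, 1.0, 1.0]), r_list))
```

Note that scaling **f** by a constant leaves Eq. (33) unchanged, so preset-count data determine the object only up to its overall intensity.
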

For preset time, the list-mode likelihood is

$$\begin{array}{l}{L}_{\text{list}}(\mathbf{f})=Pr(J\mid \mathbf{f}){L}_{\text{list}}(\mathbf{f}\mid J)\\ =\frac{Pr(J\mid \mathbf{f})}{{\left({\displaystyle \sum _{k=1}^{K}}{f}_{k}{S}_{k}\right)}^{J}}\prod _{j=1}^{J}\left[\sum _{k=1}^{K}{f}_{k}{S}_{k}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{j}\mid j\phantom{\rule{0.16667em}{0ex}}\text{from}\phantom{\rule{0.16667em}{0ex}}k)\right].\end{array}$$

(34)

But note that the mean number of detected counts in a preset-time acquisition is given by

$$\overline{J}=\tau \sum _{k=1}^{K}{S}_{k}{f}_{k},$$

(35)

where *τ* is the acquisition time. Thus

$$\begin{array}{l}{L}_{\text{list}}(\mathbf{f})=\frac{Pr(J\mid \mathbf{f})}{{(\overline{J})}^{J}}\prod _{j=1}^{J}\left[\sum _{k=1}^{K}\tau {f}_{k}{S}_{k}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{j}\mid j\phantom{\rule{0.16667em}{0ex}}\text{from}\phantom{\rule{0.16667em}{0ex}}k)\right]\\ =\frac{1}{J!}exp(-\overline{J})\prod _{j=1}^{J}\left[\sum _{k=1}^{K}\tau {f}_{k}{S}_{k}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{j}\mid j\phantom{\rule{0.16667em}{0ex}}\text{from}\phantom{\rule{0.16667em}{0ex}}k)\right],\end{array}$$

(36)

where the last step follows by use of a Poisson law for Pr(*J*|**f**). The log likelihood is now

$$log[{L}_{\text{list}}(\mathbf{f})]=-log(J!)-\overline{J}+\sum _{j=1}^{J}log\left(\sum _{k=1}^{K}\tau {f}_{k}{S}_{k}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{j}\mid j\phantom{\rule{0.16667em}{0ex}}\text{from}\phantom{\rule{0.16667em}{0ex}}k)\right).$$

(37)

The first term in this expression, −log(*J*!), is an irrelevant constant, but **f** appears in the other two terms. (One must resist the temptation to approximate *J̄* by the observed *J* in the second term, since that would throw out an essential dependence on **f**, the variable of interest in the reconstruction.) Thus maximum-likelihood reconstruction from list-mode, preset-time data consists of maximizing the sum of the second and third terms in Eq. (37), subject to the constraint that all of the *f _{k}* be nonnegative. An algorithm for doing this is given in a separate paper.^{11}
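A companion sketch for the preset-time log likelihood of Eq. (37), with the same illustrative blur model, voxel centers, and sensitivities as in the preset-count example; *τ* is likewise an assumed value. Unlike the preset-count case, the term *J̄* makes this likelihood sensitive to the absolute scale of **f**.

```python
import numpy as np
from math import lgamma

# Preset-time list-mode log likelihood, Eq. (37):
#   log L = -log(J!) - Jbar + sum_j log(sum_k tau f_k S_k p_jk),
# with Jbar = tau * sum_k S_k f_k (Eq. 35).  The blur model p_jk and all
# numerical values are illustrative assumptions.

centers = np.array([0.0, 1.0, 2.0])        # voxel positions (K = 3)
S       = np.array([1.0, 0.9, 0.8])        # sensitivities S_k
sigma, tau = 0.3, 10.0                     # blur width, acquisition time

def p_jk(r):
    """pr(r_j | j from k): Gaussian blur around each voxel center."""
    z = (r - centers) / sigma
    return np.exp(-0.5 * z**2) / (sigma * np.sqrt(2 * np.pi))

def loglik_list_time(f, r_list):
    """Eq. (37) for a 1-D attribute list."""
    J    = len(r_list)
    Jbar = tau * np.dot(S, f)                              # Eq. (35)
    terms = [np.log(np.dot(tau * f * S, p_jk(r))) for r in r_list]
    return -lgamma(J + 1) - Jbar + sum(terms)

f = np.array([0.2, 0.3, 0.1])
r_list = [0.1, 0.9, 1.1, 2.05]
print(loglik_list_time(f, r_list))
# Scaling f now changes the likelihood, because Jbar carries the
# absolute intensity -- in contrast to the preset-count case.
```
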

**4. HYPOTHESIS TESTING**

Objective assessment of image quality can be based on the ability of an ideal observer to perform a specified task, using the image data.^{21}^{,}^{25} One particular task that has received considerable attention for this purpose is detection of an exactly known, nonrandom signal.^{26} For the ideal observer, this task is essentially equivalent to discrimination between two specified nonrandom signals. It amounts to testing the binary hypothesis that either signal 1 or signal 2 is present; in the detection problem, signal 1 is zero. In an imaging context, the signals in question are the objects being imaged, denoted **f** in this paper. Thus the task is to determine whether **f**_{1} or **f**_{2} is present. Performance on this task can be measured by the area under a receiver operating characteristic (ROC) curve, or, equivalently, by the detectability index *d* to be defined below.

It is well known^{19}^{,}^{20}^{,}^{25} that the optimum strategy for performing a binary discrimination task is to first compute the likelihood ratio λ, defined by

$$\lambda =\frac{L({\mathbf{f}}_{2})}{L({\mathbf{f}}_{1})}=\frac{\text{pr}(\mathbf{g}\mid {\mathbf{f}}_{2})}{\text{pr}(\mathbf{g}\mid {\mathbf{f}}_{1})}.$$

(38)

The discrimination is then performed by comparing λ with a threshold λ_{th} and choosing the hypothesis that **f**_{2} is present if λ > λ_{th}. Equivalently, we can compute the log of the likelihood ratio,

$$\ell =log(\lambda )$$

(39)

and compare it with log(λ_{th}). The result is the same, and the log is often more convenient mathematically.

The ROC curve is generated by varying λ_{th} and, for each value, plotting the true-positive rate (probability of choosing **f**_{2} when it is actually present) versus the false-positive rate (probability of choosing **f**_{2} when **f**_{1} is actually present). In a detection problem where **f**_{1} is zero, the true-positive rate is the probability of detection and the false-positive rate is the false-alarm rate.

The detectability index *d* is defined by

$${d}^{2}=\frac{{[E(\ell \mid {\mathbf{H}}_{2})-E(\ell \mid {\mathbf{H}}_{1})]}^{2}}{{\scriptstyle \frac{1}{2}}\phantom{\rule{0.16667em}{0ex}}var(\ell \mid {\mathbf{H}}_{1})+{\scriptstyle \frac{1}{2}}\phantom{\rule{0.16667em}{0ex}}var(\ell \mid {\mathbf{H}}_{2})},$$

(40)

where *E*(ℓ|**H*** _{i}*) is the expected value of ℓ given that hypothesis **H*** _{i}* is true, and var(ℓ|**H*** _{i}*) is the corresponding conditional variance.

The log of the likelihood ratio can easily be constructed from any of the likelihood expressions given in Section 3. For example, with binned data and preset time, it is given by

$$\begin{array}{l}{\ell}_{\text{bin}}=log[{L}_{\text{bin}}({\mathbf{f}}_{2})]-log[{L}_{\text{bin}}({\mathbf{f}}_{1})]\\ =\sum _{m=1}^{M}\{{(\mathbf{H}{\mathbf{f}}_{1})}_{m}-{(\mathbf{H}{\mathbf{f}}_{2})}_{m}+{g}_{m}log[{(\mathbf{H}{\mathbf{f}}_{2})}_{m}]-{g}_{m}log[{(\mathbf{H}{\mathbf{f}}_{1})}_{m}]\}.\end{array}$$

(41)

Terms independent of the data **g** can be lumped into the threshold without affecting the ROC curve, so ℓ_{bin} becomes

$${\ell}_{\text{bin}}=\sum _{m=1}^{M}{g}_{m}log\left[\frac{{(\mathbf{H}{\mathbf{f}}_{2})}_{m}}{{(\mathbf{H}{\mathbf{f}}_{1})}_{m}}\right]+\text{data-independent}\phantom{\rule{0.16667em}{0ex}}\text{terms}.$$

(42)

This form shows that ℓ_{bin} can be realized as a linear filter in which each datum *g _{m}* is multiplied by the logarithmic weight in Eq. (42). When the weight is simply the difference of the means under the two hypotheses, the process is called matched filtering; the present process can therefore be called logarithmic matched filtering.
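The logarithmic matched filter of Eq. (42) is a one-line computation; this sketch (names ours) makes the linear-filter interpretation explicit.

```python
import numpy as np

def ell_bin(g, Hf1, Hf2):
    """Logarithmic matched filter, Eq. (42): each bin count g_m is
    weighted by the log of the ratio of mean counts under the two
    hypotheses.  Data-independent terms are absorbed in the threshold."""
    return np.sum(g * np.log(Hf2 / Hf1))
```

The weights depend only on the two hypothesized objects, so they can be precomputed once and applied to each data vector **g** as an inner product.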

Since ℓ_{bin} is a linear function of the data and the *g _{m}* are independent Poisson random variables, it is straightforward to compute

$${d}^{2}=\frac{{\left\{{\displaystyle \sum _{m=1}^{M}}[{(\mathbf{H}{\mathbf{f}}_{2})}_{m}-{(\mathbf{H}{\mathbf{f}}_{1})}_{m}]log\left[\frac{{(\mathbf{H}{\mathbf{f}}_{2})}_{m}}{{(\mathbf{H}{\mathbf{f}}_{1})}_{m}}\right]\right\}}^{2}}{\frac{1}{2}{\displaystyle \sum _{m=1}^{M}}[{(\mathbf{H}{\mathbf{f}}_{1})}_{m}+{(\mathbf{H}{\mathbf{f}}_{2})}_{m}]{log}^{2}\left[\frac{{(\mathbf{H}{\mathbf{f}}_{2})}_{m}}{{(\mathbf{H}{\mathbf{f}}_{1})}_{m}}\right]}.$$

(43)

This expression was derived by Cunningham *et al.*^{27} and used as a figure of merit for image quality by Wagner *et al.*^{26}
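The Cunningham figure of merit requires only the two mean data vectors, so it can be evaluated directly; in this sketch (function name ours) the denominator is written as the average of the two Poisson variances, Σ_m (**Hf**_i)_m log²[·] for i = 1, 2.

```python
import numpy as np

def d2_bin(Hf1, Hf2):
    """Detectability index for binned Poisson data.  Hf1 and Hf2 are the
    mean data vectors under the two hypotheses; the numerator is the
    squared mean separation of ell_bin and the denominator the average
    of its two conditional variances."""
    w = np.log(Hf2 / Hf1)                      # logarithmic weights
    num = np.sum((Hf2 - Hf1) * w) ** 2
    den = 0.5 * np.sum((Hf1 + Hf2) * w ** 2)
    return num / den
```

Because only the means enter, this figure of merit can be computed for a proposed system design before any data are collected.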

In this section we present expressions analogous to Eqs. (42) and (43) for the log likelihood and detectability but for list-mode data. Only the case of preset time will be considered, but the case of preset counts is similar.

For each object **f**_{i} we define a density h̄_{i}(**r**_{j}) in attribute space, the mean number of detected events per unit volume of attribute space:

$${\overline{h}}_{i}({\mathbf{r}}_{j})={\overline{J}}_{i}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{j}\mid {\mathbf{f}}_{i})=\tau \sum _{k=1}^{K}{f}_{ik}{S}_{k}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{j}\mid j\phantom{\rule{0.16667em}{0ex}}\text{from}\phantom{\rule{0.16667em}{0ex}}k),\phantom{\rule{0.38889em}{0ex}}i=1,2.$$

(44)

Using Eq. (37), we find

$${\ell}_{\text{list}}={\overline{J}}_{1}-{\overline{J}}_{2}+\sum _{j=1}^{J}log\left[\frac{{\overline{h}}_{2}({\mathbf{r}}_{j})}{{\overline{h}}_{1}({\mathbf{r}}_{j})}\right].$$

(45)
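Given the event list, Eq. (45) is a sum of one logarithmic term per recorded event; this sketch (names ours) takes the two densities already evaluated at each event's attribute vector.

```python
import numpy as np

def ell_list(h1_vals, h2_vals, J1bar, J2bar):
    """List-mode log likelihood ratio, Eq. (45).  h1_vals[j] and
    h2_vals[j] are the mean densities hbar_1 and hbar_2 evaluated at the
    attribute vector r_j of the j-th event; J1bar and J2bar are the mean
    total counts under the two hypotheses."""
    return J1bar - J2bar + np.sum(np.log(h2_vals / h1_vals))
```

Since the objects to be discriminated are specified, the constant J̄₁ − J̄₂ can equally well be folded into the decision threshold.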

The constant terms J̄_{1} − J̄_{2} are irrelevant for purposes of hypothesis testing, since the objects to be discriminated are specified. The term −J̄ could not be dropped from Eq. (37), since there J̄ was a function of the unknown **f** in the reconstruction problem, but we can drop the corresponding terms in Eq. (45).

Comparison of Eq. (45) with Eq. (42) shows a similar logarithmic structure, but the dependence on the data is quite different. In ℓ_{bin} the data *g _{m}* appear linearly, whereas in ℓ_{list} each event contributes the nonlinear quantity log[h̄_{2}(**r**_{j})/h̄_{1}(**r**_{j})], evaluated at its measured attribute vector.

Next we turn to the detectability index, which requires computation of the mean and variance of ℓ_{list}. It is shown in Appendix A that

$${d}^{2}=\frac{{\left\{{\int}_{\text{att}}{\text{d}}^{N}r[{\overline{h}}_{2}(\mathbf{r})-{\overline{h}}_{1}(\mathbf{r})]log\left[\frac{{\overline{h}}_{2}(\mathbf{r})}{{\overline{h}}_{1}(\mathbf{r})}\right]\right\}}^{2}}{\frac{1}{2}{\int}_{\text{att}}{\text{d}}^{N}r[{\overline{h}}_{1}(\mathbf{r})+{\overline{h}}_{2}(\mathbf{r})]{log}^{2}\left[\frac{{\overline{h}}_{2}(\mathbf{r})}{{\overline{h}}_{1}(\mathbf{r})}\right]}.$$

(46)

The structure of this expression is very similar to that of the Cunningham formula, Eq. (43). Evaluation of Eq. (43) requires computation of the mean data vector for each object, while evaluation of Eq. (46) requires computation of the mean data density in attribute space for each object.
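For a low-dimensional attribute space, Eq. (46) can be evaluated by simple quadrature; this sketch assumes a uniform one-dimensional attribute grid (function name and grid choice ours).

```python
import numpy as np

def d2_list(h1, h2, dr):
    """Evaluate the list-mode detectability of Eq. (46) on a uniform 1-D
    attribute grid of spacing dr.  h1 and h2 are samples of the mean
    densities hbar_1(r) and hbar_2(r) on that grid; the integrals are
    replaced by rectangle sums."""
    w = np.log(h2 / h1)
    num = (dr * np.sum((h2 - h1) * w)) ** 2
    den = 0.5 * dr * np.sum((h1 + h2) * w ** 2)
    return num / den
```

For an N-dimensional attribute space the same sums run over a grid of attribute cells; only the volume element changes.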

The main contribution of this paper has been the development of a likelihood formalism for photon-counting imaging systems in which multiple attributes are measured for each photon. Since binning of the count data may not be practical when the number of attributes per photon is larger than about four, attention was concentrated on list-mode data storage. Separate likelihood formulas were derived for acquisitions with preset time and for acquisitions with a preset number of events. In Section 3 we derived expressions for list-mode likelihood that can serve as the starting point for a specific reconstruction algorithm. The algorithm itself is derived and discussed in a separate paper.^{11} In Section 4 we derived expressions for the likelihood ratio and the detectability index for list-mode data when the task was detection of a nonrandom object or discrimination between two such objects. The detectability expression is the list-mode counterpart of a formula due to Cunningham *et al.*^{27} that has been used for image-quality assessment.

A limitation of the theory developed here is that it considers only nonrandom objects. It has been noted in the literature^{28}^{–}^{30} that classification tasks based on nonrandom objects can lead to counterintuitive results on image quality. It is argued in these papers that it is better to allow some degree of randomness in the objects to be discriminated.

Unfortunately, it has not been possible to develop a useful theory for the likelihood ratio or the detectability when object randomness is included. With binned data, this difficulty has led to consideration of suboptimal linear discriminants in place of the likelihood ratio.^{21}^{,}^{25}^{,}^{29}^{,}^{31}^{,}^{32} These discriminants have proved to be an effective tool for optimizing imaging systems and predicting performance of human observers.

In the case of list-mode data, the natural interpretation of a linear discriminant is that it is linear not in the attributes themselves but in the random process *g*(**r**). Future work will investigate the properties of such linear discriminants.

The authors acknowledge stimulating discussions with Jack Denny, Craig Abbey, Bob Wagner, and Kyle Myers. We also gratefully acknowledge the careful reading and helpful comments by an anonymous reviewer. This work was supported in part by National Institutes of Health grant 2RO1 CA52643, but it does not reflect any official position of that organization.

With preset time, _{list} is a function of the *J* + 1 random variables {**r*** _{j}*} and

$$E({\ell}_{\text{list}}\mid {H}_{i})=\sum _{J=0}^{\infty}Pr(J\mid {H}_{i}){\int}_{\text{att}}{\text{d}}^{N}{r}_{1}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{1}\mid {H}_{i})\times {\int}_{\text{att}}{\text{d}}^{N}{r}_{2}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{2}\mid {H}_{i})\cdots \times {\int}_{\text{att}}{\text{d}}^{N}{r}_{J}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{J}\mid {H}_{i}){\ell}_{\text{list}}.$$

(A1)

We now use Eq. (45) for ℓ_{list} and drop the constant terms J̄_{1} − J̄_{2}, yielding

$$E({\ell}_{\text{list}}\mid {H}_{i})=\sum _{J=0}^{\infty}Pr(J\mid {H}_{i}){\int}_{\text{att}}{\text{d}}^{N}{r}_{1}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{1}\mid {H}_{i})\times {\int}_{\text{att}}{\text{d}}^{N}{r}_{2}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{2}\mid {H}_{i})\cdots \times {\int}_{\text{att}}{\text{d}}^{N}{r}_{J}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{J}\mid {H}_{i})\sum _{j=1}^{J}log\left[\frac{{\overline{h}}_{2}({\mathbf{r}}_{j})}{{\overline{h}}_{1}({\mathbf{r}}_{j})}\right].$$

(A2)

Consider one particular value of *j*, say, *j* = 17. Then all of the integrals except the one over **r**_{17} yield unity, and that one gives

$${\int}_{\text{att}}{\text{d}}^{N}{r}_{17}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{17}\mid {H}_{i})log\left[\frac{{\overline{h}}_{2}({\mathbf{r}}_{17})}{{\overline{h}}_{1}({\mathbf{r}}_{17})}\right]=\frac{1}{{\overline{J}}_{i}}{\int}_{\text{att}}{\text{d}}^{N}{r}_{17}\phantom{\rule{0.16667em}{0ex}}{\overline{h}}_{i}({\mathbf{r}}_{17})log\left[\frac{{\overline{h}}_{2}({\mathbf{r}}_{17})}{{\overline{h}}_{1}({\mathbf{r}}_{17})}\right].$$

(A3)

Since **r**_{17} is just a dummy variable of integration, it does not matter which value of *j* was chosen. There are *J* identical terms in the sum over *j*, and we have

$$\begin{array}{l}E({\ell}_{\text{list}}\mid {H}_{i})=\sum _{J=0}^{\infty}Pr(J\mid {H}_{i})\frac{1}{{\overline{J}}_{i}}{\int}_{\text{att}}{\text{d}}^{N}r\phantom{\rule{0.16667em}{0ex}}{\overline{h}}_{i}(\mathbf{r})log\left[\frac{{\overline{h}}_{2}(\mathbf{r})}{{\overline{h}}_{1}(\mathbf{r})}\right]\\ ={\int}_{\text{att}}{\text{d}}^{N}r\phantom{\rule{0.16667em}{0ex}}{\overline{h}}_{i}(\mathbf{r})log\left[\frac{{\overline{h}}_{2}(\mathbf{r})}{{\overline{h}}_{1}(\mathbf{r})}\right].\end{array}$$

(A4)

As a step toward computing the variance, we consider the average of the square of ℓ_{list}, given by

$$E\{{({\ell}_{\text{list}})}^{2}\mid {H}_{i}\}=\sum _{J=0}^{\infty}Pr(J\mid {H}_{i}){\int}_{\text{att}}{\text{d}}^{N}{r}_{1}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{1}\mid {H}_{i})\times {\int}_{\text{att}}{\text{d}}^{N}{r}_{2}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{2}\mid {H}_{i})\cdots \times {\int}_{\text{att}}{\text{d}}^{N}{r}_{J}\phantom{\rule{0.16667em}{0ex}}\text{pr}({\mathbf{r}}_{J}\mid {H}_{i})\xb7\sum _{j=1}^{J}log\left[\frac{{\overline{h}}_{2}({\mathbf{r}}_{j})}{{\overline{h}}_{1}({\mathbf{r}}_{j})}\right]\sum _{{j}^{\prime}=1}^{J}log\left[\frac{{\overline{h}}_{2}({\mathbf{r}}_{{j}^{\prime}})}{{\overline{h}}_{1}({\mathbf{r}}_{{j}^{\prime}})}\right].$$

(A5)

In the double sum over *j* and *j*′, there are *J* terms for which *j* = *j*′, and there are *J*^{2} − *J* terms for which *j* ≠ *j*′. Performing each of these averages separately and making use of the statistical independence of **r**_{j} and **r**_{j′} for *j* ≠ *j*′, we find

$$E\{{({\ell}_{\text{list}})}^{2}\mid {H}_{i}\}=\sum _{J=0}^{\infty}Pr(J\mid {H}_{i})\left\{\frac{J}{{\overline{J}}_{i}}{\int}_{\text{att}}{\text{d}}^{N}r\phantom{\rule{0.16667em}{0ex}}{\overline{h}}_{i}(\mathbf{r}){log}^{2}\left[\frac{{\overline{h}}_{2}(\mathbf{r})}{{\overline{h}}_{1}(\mathbf{r})}\right]\right\}+\sum _{J=0}^{\infty}Pr(J\mid {H}_{i})\times \left(\frac{{J}^{2}-J}{{{\overline{J}}_{i}}^{2}}{\left\{{\int}_{\text{att}}{\text{d}}^{N}r\phantom{\rule{0.16667em}{0ex}}{\overline{h}}_{i}(\mathbf{r})log\left[\frac{{\overline{h}}_{2}(\mathbf{r})}{{\overline{h}}_{1}(\mathbf{r})}\right]\right\}}^{2}\right).$$

(A6)

For Poisson *J*,

$$\sum _{J=0}^{\infty}Pr(J\mid {H}_{i})({J}^{2}-J)={{\overline{J}}_{i}}^{2},$$

(A7)

so

$$E\{{({\ell}_{\text{list}})}^{2}\mid {H}_{i}\}={\int}_{\text{att}}{\text{d}}^{N}r\phantom{\rule{0.16667em}{0ex}}{\overline{h}}_{i}(\mathbf{r}){log}^{2}\left[\frac{{\overline{h}}_{2}(\mathbf{r})}{{\overline{h}}_{1}(\mathbf{r})}\right]+{[E({\ell}_{\mathit{list}}\mid {H}_{i})]}^{2}.$$

(A8)

Thus the variance of the log of the likelihood ratio is

$$var({\ell}_{\text{list}}\mid {H}_{i})={\int}_{\text{att}}{\text{d}}^{N}r\phantom{\rule{0.16667em}{0ex}}{\overline{h}}_{i}(\mathbf{r}){log}^{2}\left[\frac{{\overline{h}}_{2}(\mathbf{r})}{{\overline{h}}_{1}(\mathbf{r})}\right].$$

(A9)

Equations (A4) and (A9) are inserted into Eq. (40) to obtain the expression for *d*^{2} in Eq. (46).
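The moments in Eqs. (A4) and (A9) can be checked by direct simulation of the preset-time list-mode experiment; this sketch (names ours) uses a discrete attribute space so that the integrals become exact sums.

```python
import numpy as np

def simulate_ell_list(hbar_i, log_ratio, n_trials, rng):
    """Monte Carlo check of Eqs. (A4) and (A9) on a discrete attribute
    space.  hbar_i[k] is the mean number of events in attribute cell k
    under hypothesis H_i, and log_ratio[k] the weight log(hbar2/hbar1)
    there.  Each trial draws a Poisson total count J with mean
    sum(hbar_i), then J i.i.d. attribute cells with probability
    hbar_i / sum(hbar_i), and returns the samples of ell_list with the
    constant terms dropped, as in the text."""
    Jbar = hbar_i.sum()
    p = hbar_i / Jbar                    # pr(cell | H_i)
    out = np.empty(n_trials)
    for t in range(n_trials):
        J = rng.poisson(Jbar)            # preset-time event count
        cells = rng.choice(len(p), size=J, p=p)
        out[t] = log_ratio[cells].sum()
    return out
```

The sample mean should converge to Σ_k h̄_i w_k and the sample variance to Σ_k h̄_i w_k², the discrete analogs of Eqs. (A4) and (A9).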

Harrison H. Barrett, Department of Radiology and Optical Sciences Center, University of Arizona, Tucson, Arizona.

Timothy White, Idaho National Engineering Laboratory, Pocatello, Idaho.

Lucas C. Parra, Siemens Corporate Research Center, Princeton, New Jersey.

1. Gagnon D, Todd-Pokropek A, Arsenault A, Dupras G. Introduction to holospectral imaging in nuclear medicine for scatter subtraction. IEEE Trans Nucl Sci. 1989;8:245–250. [PubMed]

2. Jaszczak RJ, Greer KL, Floyd CE, Jr, Harris CC, Coleman RE. Improved SPECT quantification using compensation for scattered photons. J Nucl Med. 1984;25:893–900. [PubMed]

3. King MA, Hademenos GJ, Glick SJ. A dual-photopeak window method for scatter correction. J Nucl Med. 1992;33:605–612. [PubMed]

4. Cook WR, Finger M, Prince TA. A thick Anger camera for gamma-ray astronomy. IEEE Trans Nucl Sci. 1985;NS-32:129–133.

5. Rogers JG, Saylor DP, Harrop R, Yao XG, Leitao CVM, Pate BD. Design of an efficient position sensitive gamma ray detector for nuclear medicine. Phys Med Biol. 1986;31:1061–1090. [PubMed]

6. Gagnon D. Maximum likelihood positioning in the scintillation camera using depth of interaction. IEEE Trans Med Imaging MI-12. 1993:101–107. [PubMed]

7. White TA, Barrett HH, Rowe RK. Direct three-dimensional SPECT reconstruction from photomultiplier signals. presented at the International Meeting on Fully 3D Image Reconstruction in Nuclear Medicine and Radiology; Corsendonk, Belgium. June 1991.

8. White TA. PhD dissertation. University of Arizona; Tucson, Ariz: 1994. SPECT reconstruction directly from photo-multiplier tube signals.

9. Singh M. An electronically collimated gamma camera for single photon emission computed tomography. Part 1: theoretical considerations and design criteria. Med Phys. 1983;10:421–427. [PubMed]

10. Kijewski MF, Moore SC, Mueller SP. Cramer–Rao bounds for estimation tasks using multiple energy window data. J Nucl Med. 1994;35:4P.

11. Parra L, Barrett HH. List-mode likelihood—EM algorithm and noise estimation demonstrated on 2D-PET. IEEE Trans Med Imaging. (to be published) [PMC free article] [PubMed]

12. Snyder DL. Parameter estimation for dynamic studies in emission-tomography systems having list-mode data. IEEE Trans Nucl Sci NS-3. 1984:925–931.

13. Snyder DL, Politte DG. Image reconstruction from list-mode data in an emission tomography system having time-of-flight measurements. IEEE Trans Nucl Sci NS-20. 1983:1843–1849.

14. Barrett HH, Swindell W. Radiological Imaging: The Theory of Image Formation, Detection and Processing. paperback. Chap. 3 Academic; San Diego, Calif: 1996.

15. Rabbani M, Shaw R, van Metter R. Detective quantum efficiency of imaging systems with amplifying and scattering mechanisms. J Opt Soc Am A. 1987;4:895–901. [PubMed]

16. Barrett HH, Swindell W. Radiological Imaging: The Theory of Image Formation, Detection and Processing. paperback. Chap. 5 Academic; San Diego, Calif: 1996.

17. Goodman JW. Statistical Optics. Wiley; New York: 1985.

18. Frieden BR. Probability, Statistical Optics, and Data Testing: A Problem Solving Approach. Springer-Verlag; New York: 1983.

19. Van Trees HL. Detection, Estimation, and Modulation Theory. I Wiley; New York: 1968.

20. Melsa JL, Cohn DL. Decision and Estimation Theory. McGraw-Hill; New York: 1978.

21. Barrett HH, Denny JL, Wagner RF, Myers KJ. Objective assessment of image quality. II. Fisher information, Fourier crosstalk, and figures of merit for task performance. J Opt Soc Am A. 1995;12:834–852. [PubMed]

22. Rockmore AJ, Macovski A. A maximum likelihood approach to emission image reconstruction from projections. IEEE Trans Nucl Sci NS-23. 1976:1428–1432.

23. Dempster AP, Laird NM, Rubin DB. Maximum likelihood from incomplete data via the EM algorithm. J R Statistical Soc Ser B. 1977;39:1–38.

24. Shepp L, Vardi Y. Maximum likelihood reconstruction for emission tomography. IEEE Trans Med Imaging, MI-1. 1982:113–121. [PubMed]

25. Barrett HH. Objective assessment of image quality: effects of quantum noise and object variability. J Opt Soc Am A. 1990;7:1266–1278. [PubMed]

26. Wagner RF, Brown DG, Metz CE. On the multiplex advantage of coded source/aperture photon imaging. In: Brody WR, editor. Digital Radiography; Proc SPIE; 1981. pp. 72–76.

27. Cunningham DR, Laramore RD, Barrett E. Detection in image dependent noise. IEEE Trans Inf Theory IT-10. 1976:603–610.

28. Myers KJ, Rolland JP, Barrett HH, Wagner RF. Aperture optimization for emission imaging: effect of a spatially varying background. J Opt Soc Am A. 1990;7:1279–1293. [PubMed]

29. Barrett HH, Gooley TA, Girodias KA, Rolland JP, White TA, Yao J. Linear discriminants and image quality. Image Vis Comput. 1992;10:451–460.

30. Rolland JP, Barrett HH. Effect of random background inhomogeneity on observer detection performance. J Opt Soc Am A. 1992;9:649–658. [PubMed]

31. Smith WE, Barrett HH. Hotelling trace criterion as a figure of merit for the optimization of imaging systems. J Opt Soc Am A. 1986;3:717–725.

32. Fiete RD, Barrett HH, Smith WE, Myers KJ. Hotelling trace criterion and its correlation with human observer performance. J Opt Soc Am A. 1987;4:945–953. [PubMed]
