Phys Med Biol. Author manuscript; available in PMC 2010 May 6.
PMCID: PMC2865200
NIHMSID: NIHMS195621

Aperture optimization in emission imaging using ideal observers for joint detection and localization

Abstract

For the familiar 2-class detection problem (signal present/absent), ideal observers have been applied to optimization of pinhole and collimator parameters in planar emission imaging. Given photon noise and background and signal variability, such experiments show how to optimize an aperture to maximize detectability of the signal. Here, we consider a fundamentally different, more realistic task in which the observer is required to both detect and localize a signal. The signal is embedded in a variable background and is known except for location. We inquire whether the addition of a localization requirement changes conclusions on aperture optimization. We have previously formulated an ideal observer for this joint detection/localization task, and here apply it to the classic problem of determining an optimal pinhole diameter in a planar emission imaging system. We conclude that as search tolerance on localization decreases, the optimal pinhole diameter shrinks from that required by detection alone, and in addition, task performance becomes more sensitive to fluctuations about the optimal pinhole diameter. As in the case for detection only, the optimal pinhole diameter shrinks as the amount of background variability grows, and in addition, conspicuity limits can be observed. Unlike the case for detection only, our task leads to a finite aperture size in the absence of background variability. For both tasks, the inclusion of background variability yields a finite aperture size.

1. Introduction

Scalar figures of merit (FOM) summarizing task performance can be used in simulation studies to optimize or compare imaging systems in emission imaging. One oft-considered clinically important task is the detection of a signal (i.e. a lesion) in a noisy background. For example, one might optimize SPECT collimator properties to maximize detection, in a noisy image, of a signal embedded in a complex patient background. In such system optimization research involving detection tasks, there is an escalatory path toward making studies more realistic. One seeks to (1) model the imaging process as accurately as possible, (2) model the Poisson photon noise accurately, (3) make the ensemble of possible backgrounds as realistic as possible, and (4) make the ensemble of possible signals as realistic as possible.

We address this problem in the context of a simple planar imaging system with a pinhole aperture. We use a simple stochastic model for background variability. (We shall refer to background variability as “background noise”.) What is new in this study is that we consider a signal whose form is known exactly, but whose location is unknown, and demand that the task be one of joint detection and localization. Our ultimate goal is to see how the addition of localization to the task changes the optimal pinhole diameter. In fact, our experiment is similar to classic experiments done earlier by Myers et al (1990) in the context of detection only of a known signal in a fixed location embedded in background noise.

To optimize the system, we need a model observer whose scalar responses can be combined into an overall FOM. As we vary system parameters (simply the pinhole diameter in this case), the FOM tracks performance and we choose the pinhole diameter with the highest FOM. For a host of reasons (Barrett and Myers 2004), the mathematical ideal observer is used for system optimization. The ideal observer has exact knowledge of the imaging system, of all relevant probability distributions of the underlying object to be imaged, and of the source of noise (here, photon noise) that corrupts the data. The ideal observer is optimal in several senses. It can, in principle, incorporate the ever more sophisticated models needed to escalate the verisimilitude as per points (1)-(4) above. The ideal observer forms an upper limit on possible human observer performance. In this work, we use a new form of ideal observer that we derived in Khurd and Gindi (2005). It extends the detection-only task to a joint detection and localization task.

The problem of aperture optimization for emission tomography has a long history. Previous approaches based on detection task performance FOM include that of Myers et al (1990), Tsui et al (1978), Smith and Barrett (1986) and Hesterman et al (2007). Often, the calculation of a detection FOM for an ideal or approximate ideal observer is computationally challenging, and technical advances for this problem have been made by Kupinski et al (2003), Park et al (2005) and He et al (2007). Aperture optimization using criteria other than detection task performance has been reported in Fessler (1998), Shao et al (1989) and Meng and Clinthorne (2004).

We also mention a different approach to image quality evaluation as espoused in the scan-statistics literature (Popescu and Lewitt 2006 and Swensson 1996). This approach typically uses detection or joint detection and localization figures of merit. A brief review of the scan-statistics methodology and an excellent literature resource of scan-statistics (and related) methods in imaging can be found in Popescu and Lewitt (2006). In this approach, one eschews the often unobtainable detailed knowledge of the object statistics and imaging system and instead directly applies empirical observers to samples of images to obtain histogram estimates of probabilities that can then be used to derive image-quality FOM’s. In Popescu and Lewitt (2006), PET vs TOF-PET are compared under a task context of detection of a signal at an unknown location. Gifford et al (2005) used a similar task criterion to optimize reconstruction strategies in SPECT. Here, some effort was made to design an observer that emulated human performance. Scan-statistics methods do not yield ideal observers, but have practical advantages in that they do not presume difficult-to-obtain detailed knowledge of the object and the imaging system.

Figure 1 shows a crude depiction of a two-dimensional (2-D) pinhole imaging system, which we use to introduce the aperture optimization problem. A wide aperture admits more photons, so the image has less photon noise and signal detection and localization is perhaps easier. However, opening the aperture leads to a resolution loss that might lessen the observer's ability to detect and localize the signal. A smaller aperture passes less background noise, thereby perhaps increasing a detection and localization FOM, but again increases photon noise by letting through fewer counts. With these competing effects, it is interesting to determine the optimal pinhole diameter, i.e. the diameter that maximizes the FOM.

Figure 1
(a) Pinhole imaging system (cross section). The object is one sample from an ensemble with correlated background noise and variable signal location. The arrows indicate the signal location in the object and image spaces. We seek to optimize the diameter ...

One can get an intuitive sense of the task at hand by inspecting the images in figure 2. The leftmost column displays 32 × 32 objects with a signal (at one location) embedded in increasing amounts of background noise. The signal itself, seen clearly in the upper left image, is a Gaussian blob of height 1.5 intensity units and width σs = 1.5 pixel units. (A pixel unit is the length of a pixel, and an intensity unit would be measured in units of disintegrations per unit area per second.) The background noise is derived by sampling a stationary noise field with a Gaussian power spectrum of correlation width 4.5 pixel units sitting atop a DC background of 10.0 intensity units. The "amplitude" mentioned in the caption of figure 2 is simply the square root of the variance of the background noise and is used to gauge the amount of background noise. As seen in column 1, as we increase the amount of background noise, it becomes increasingly difficult to localize and detect the signal, even in the object space! Columns 2, 3, 4 and 5 show images obtained using apertures whose transmittance is characterized by a Gaussian of standard deviation σa = 0.33, 1.0, 2.0, 3.0 pixel units, respectively. Note that a pixel unit in the image space equals that in the object space. The intensity unit in the image space is counts per unit area. (Note that the actual image size increases with σa, but for display purposes we have normalized the image sizes in figure 2. The images in columns 3, 4 and 5 show a dark border; this is due to the intensity falloff at the edge of the image that is inherent in the imaging model.) Each image includes Poisson noise. As the aperture diameter increases, the noise (photon and background) is smoothed at the expense of resolution loss, and as the amplitude of the background noise increases, the signal becomes more obscured. From the images in columns 2, 3, 4 and 5 of figure 2, one gets a visual sense of the difficulty of detection and localization under these conditions.

Figure 2
Column 1, objects show signal buried in increasing amounts of background noise. Columns 2, 3, 4, 5, images, including Poisson noise, of the objects in column 1. Rows 1, 2, 3, 4 have background noise with amplitudes 0, 0.3, 0.6, 1.2, respectively. Columns ...

In section 2, we review ideal observers for detection only and for joint detection and localization. In section 3, we present our simulation methodology for exploring how optimal aperture diameter varies in the detection-only and in the joint detection and localization tasks. Our simulation results are shown in section 4. Section 5 includes a summary and discussion.

2. Background

We review well-known forms of the ideal decision strategy and accompanying ideal observer for the detection-only problem, and then present our ideal decision strategy and accompanying ideal observer for the joint detection and localization problem.

2.1. Ideal Observer for Detection Only: Signal and Background Known

We first revisit the well-known case (Barrett and Myers 2004) of an ideal observer for a 2-class (signal present/absent) detection problem in which the signal and background are known exactly. Let vector b be the known background object and vector s a known additive signal (lesion). The vector g is the observed image and is noisy by virtue of photon noise. Then p(g|b) is the signal-absent likelihood and p(g|b + s) the signal-present likelihood. The test statistic is the likelihood ratio t given by

t(\mathbf{g}) = \frac{p(\mathbf{g} \mid H_1)}{p(\mathbf{g} \mid H_0)} = \frac{p(\mathbf{g} \mid \mathbf{b} + \mathbf{s})}{p(\mathbf{g} \mid \mathbf{b})}
(1)

where H1, H0 correspond to hypotheses of signal present/absent. The scalar test statistic t(g) is compared to a threshold τ to decide H1 if t(g) > τ and H0 otherwise.

One can summarize the performance of the observer by a FOM given by AROC, the area under the ROC curve. To do this, one generates an ensemble of signal-absent and signal-present realizations of g, and accumulates observer responses t+ (for signal present) and t− (for signal absent). By forming histograms of t+ and t−, one can, at a given τ, integrate suitable areas under the histograms to obtain the probability PTP(τ) of deciding signal present when it is indeed present, and the probability PFP(τ) of deciding that a signal is present when it is absent. By sweeping τ, one generates the well-known ROC curve PTP(τ) vs PFP(τ) seen in figure 3. The area under this curve, AROC, is a FOM for this 2-class detection problem. The likelihood ratio is ideal in the sense that it maximizes AROC amongst all possible observers, and minimizes the Bayes risk for arbitrary costs. In figure 3, the location on the curves of the operating point corresponding to τ = 1 is shown. (The threshold τ = 1 corresponds to an ML classifier for the ROC curve.)
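
As an illustration (not part of the original study), the following Python sketch estimates AROC empirically from samples of the test statistic by sweeping thresholds and applying trapezoidal integration; the function and variable names (empirical_auc_roc, t_pos, t_neg) are illustrative, not from the paper.

```python
import numpy as np

def empirical_auc_roc(t_pos, t_neg, n_thresh=200):
    """Estimate A_ROC from observer responses.

    t_pos : test statistics t for signal-present images
    t_neg : test statistics t for signal-absent images
    """
    t_pos, t_neg = np.asarray(t_pos, float), np.asarray(t_neg, float)
    all_t = np.concatenate([t_pos, t_neg])
    # Sweep thresholds over the full range of responses (t > tau decides H1).
    taus = np.linspace(all_t.min() - 1e-9, all_t.max(), n_thresh)
    p_tp = np.array([(t_pos > tau).mean() for tau in taus])  # P_TP(tau)
    p_fp = np.array([(t_neg > tau).mean() for tau in taus])  # P_FP(tau)
    # Integrate the ROC curve P_TP vs P_FP with the trapezoidal rule.
    order = np.argsort(p_fp)
    return np.trapz(p_tp[order], p_fp[order])
```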

Figure 3
The ROC and LROC curves. See text for explanation.

2.2. Ideal Observer for Detection Only: Signal Known and Background Variable

If the background is variable, then b is random, and is characterized by a pdf p(b). The likelihood ratio must now integrate over all possibilities of the background, and the test statistic becomes

t(\mathbf{g}) = \frac{p(\mathbf{g} \mid H_1)}{p(\mathbf{g} \mid H_0)} = \frac{\int p(\mathbf{g} \mid \mathbf{b} + \mathbf{s})\, p(\mathbf{b})\, d\mathbf{b}}{\int p(\mathbf{g} \mid \mathbf{b})\, p(\mathbf{b})\, d\mathbf{b}} \equiv LR(\mathbf{g})
(2)

In (2) we have used the notation LR(g) to indicate a likelihood ratio.

To calculate AROC in this case, one can again generate an ensemble of signal-absent and signal-present noisy realizations of g, where the noise now includes both photon and background noise. Given these samples, one can follow the procedure outlined in section 2.1 to obtain AROC.

2.3. Ideal Observer for Detection and Localization: Signal Location Unknown and Background Variable

To introduce localization into the problem, we digress into a discussion of the LROC (localization ROC) curve, which incorporates a notion of localization. The LROC curve, seen in figure 3, plots the probability of correct joint detection and localization, PCL(τ), vs the false-positive probability PFP(τ). The area under the LROC curve, ALROC, is a FOM for joint detection and localization performance just as AROC is for detection performance. To compute ALROC in a simulation experiment, one can follow a procedure similar to that described for computing AROC. Here, one computes histograms of t+ and t− as before, but includes in t+ only those cases in which the signal was correctly localized within an allowed tolerance about the true location. In section 3.3, we discuss the computation of ALROC in more detail.

For detection, the ideal observer maximizes AROC, and so it is natural to seek a decision strategy that is ideal for detection and localization. A natural feature of such a decision strategy is that it should maximize ALROC. In Khurd and Gindi (2005) we derived such a decision strategy, and we briefly summarize it here. Let sj be a signal (of known form) located at j. (We consider only discrete signal locations, but in Khurd and Gindi (2005) we generalize our decision strategy to the case where signal location is continuous.) Let pR(sj) be the prior probability that the signal will be found at j. The notion of localization is meaningless without specifying a tolerance distance about the true location within which localization is deemed correct. Let l = 1, …, L also index signal location. We define the hypothesis Hl, where l = 1, …, L, to mean that the signal is located within the tolerance region T(l) centered at l. The hypothesis H0 again means signal absent. Thus we have an (L + 1)-hypothesis decision problem. We shall also define a likelihood ratio LR(g, sj), a generalization of (2), that is indexed to the signal at j:

LR(\mathbf{g}, \mathbf{s}_j) = \frac{p(\mathbf{g} \mid H_j)}{p(\mathbf{g} \mid H_0)} = \frac{\int p(\mathbf{g} \mid \mathbf{b} + \mathbf{s}_j)\, p(\mathbf{b})\, d\mathbf{b}}{\int p(\mathbf{g} \mid \mathbf{b})\, p(\mathbf{b})\, d\mathbf{b}}
(3)

With these definitions we can write our optimal decision strategy for detection and localization:

t(\mathbf{g}) = \max_{l \in \{1,\ldots,L\}} \sum_{j \in T(l)} p_R(\mathbf{s}_j)\, LR(\mathbf{g}, \mathbf{s}_j)
l(\mathbf{g}) = \arg\max_{l \in \{1,\ldots,L\}} \sum_{j \in T(l)} p_R(\mathbf{s}_j)\, LR(\mathbf{g}, \mathbf{s}_j)
\text{Decide } H_{l(\mathbf{g})} \text{ if } t(\mathbf{g}) > \tau, \text{ else decide } H_0
(4)

Our decision strategy will detect the presence or absence of the signal in the data g and, if a signal is detected, it will report a location l(g) where it deems the signal to be present. If the true location of the signal falls within a tolerance region T(l(g)) about the reported location l(g), we will say that the observer has correctly localized the signal. We shall use “tolerance” to simply indicate the diameter of T(l), and also note that in our experiments, the tolerance does not change with position. The decision rule (4) indeed maximizes ALROC and also minimizes Bayes risk under certain cost constraints (Khurd and Gindi 2005).
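
As an illustration, here is a minimal Python sketch of the decision rule (4), assuming the likelihood ratios LR(g, sj) have already been evaluated on the search grid and approximating the tolerance region T(l) by a square (tol × tol pixel) neighborhood; all names are illustrative.

```python
import numpy as np

def detect_and_localize(lr_map, prior, tol, tau):
    """Sketch of decision rule (4) on a discrete search grid.

    lr_map : 2-D array of LR(g, s_j) for every candidate location j
    prior  : 2-D array of p_R(s_j), same shape as lr_map
    tol    : odd integer, diameter (pixels) of the tolerance region T(l),
             approximated here as a tol x tol square neighborhood
    tau    : decision threshold
    Returns (True, (row, col)) if H_l is decided, else (False, None).
    """
    weighted = prior * lr_map
    half = tol // 2
    n_rows, n_cols = weighted.shape
    best_val, best_loc = -np.inf, None
    for r in range(n_rows):
        for c in range(n_cols):
            # Sum the prior-weighted likelihood ratios over T(l) centered at (r, c).
            val = weighted[max(r - half, 0):r + half + 1,
                           max(c - half, 0):c + half + 1].sum()
            if val > best_val:
                best_val, best_loc = val, (r, c)
    if best_val > tau:          # t(g) > tau: decide signal present at l(g)
        return True, best_loc
    return False, None          # otherwise decide H_0
```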

Clearly, as tolerance → ∞, the localization requirement disappears and we end up with a detection-only problem with signal location unknown. In this case, the LROC curve becomes the ROC curve, and maximizing ALROC is the same as maximizing AROC. The implementation and analysis of such a 2-class decision strategy with signal location unknown was discussed in Barrett and Abbey (1997). In terms of our own notation, the optimal decision strategy maximizing AROC with signal location unknown requires the computation of a test statistic t(\mathbf{g}) = \sum_{j=1}^{L} p_R(\mathbf{s}_j)\, LR(\mathbf{g}, \mathbf{s}_j), and we decide signal present if t(g) > τ. In fact this rule follows from (4) as tolerance → ∞. To avoid confusion, this max-AROC signal-location-unknown ideal observer is to be contrasted with the max-AROC signal-location-known ideal observers discussed in sections 2.1 and 2.2. Indeed, we shall use this max-AROC signal-location-unknown ideal observer in our simulation experiments. It represents performance of our max-ALROC observer (4) in the limit of no localization requirement.

3. Methods

We first describe our simulated imaging system, then describe our methodology for our simulation experiments.

3.1. Image Formation

Our overall geometry is a unit-magnification pinhole system in which the object-to-aperture plane distance equals the aperture-to-image plane distance. Figure 4 shows a cross section. The object with signal present is given by f+ = b + sj, where b is a stochastic background and sj a signal at j. We shall use f− = b to denote a signal-absent object and f to indicate either f+ or f−. The signal to be localized is a discretized 2-D Gaussian function with σs = 1.5 pixel units and a peak intensity of 1.5 intensity units. We truncate the Gaussian so that its diameter is 13 pixel units. Note that if present, the signal can be located anywhere with equal probability in a 20 × 20 grid centered in the 32 × 32 object space. The smaller search grid ensures that the signal remains within the object space (i.e. does not fall off the edge). The background b comprises a DC plateau of 10.0 intensity units to which we add stationary zero-mean multivariate Gaussian noise whose power spectrum is given by

S(\omega) = 2\pi\, \sigma_{cw}^2\, \mathrm{amp}^2\, \exp\left(-2\pi^2 \omega^2 \sigma_{cw}^2\right)
(5)

Figure 4
Cross section of the two-dimensional pinhole imaging system. Arrows indicate a signal and its image. Please see text for a detailed explanation.

This power spectrum is characterized by a correlation width σcw = 4.5 pixel units and a factor "amp". To characterize the "amount" of noise, we use the notion of noise amplitude, where amp is the square root of the variance of the background noise. An object realization profile (solid line) displayed in the object plane in figure 4 shows a signal embedded in background noise with σcw = 4.5 pixel units and amp = 0.3 intensity units. The dashed line in the object plane shows the mean object µf = b̄ + sj, where b̄ is the mean background, so that a visual comparison of µf and f gives one a sense of the typical amount and quality of background variability.
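
For concreteness, here is a minimal Python sketch of one way to draw object realizations of this kind: correlated Gaussian background noise is generated by filtering white noise in the Fourier domain with the square root of a Gaussian power spectrum and rescaling to the desired amplitude, and an optional truncated Gaussian signal is added. This periodic (FFT-based) construction is only an approximation to the stationary model above, and the function name and defaults are illustrative.

```python
import numpy as np

def sample_object(n=32, dc=10.0, amp=0.3, sigma_cw=4.5,
                  signal_loc=None, peak=1.5, sigma_s=1.5, rng=None):
    """Draw one object realization: DC plateau + correlated Gaussian noise,
    plus (optionally) a truncated Gaussian signal at signal_loc = (row, col)."""
    rng = np.random.default_rng(rng)
    obj = np.full((n, n), dc, dtype=float)
    if amp > 0:
        # Filter white noise in the Fourier domain with the square root of a
        # Gaussian power spectrum, then rescale to standard deviation `amp`.
        white = rng.standard_normal((n, n))
        wx, wy = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
        root_s = np.exp(-np.pi**2 * sigma_cw**2 * (wx**2 + wy**2))
        noise = np.real(np.fft.ifft2(np.fft.fft2(white) * root_s))
        noise -= noise.mean()
        obj += amp * noise / noise.std()
    if signal_loc is not None:
        r0, c0 = signal_loc
        rr, cc = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        d2 = (rr - r0) ** 2 + (cc - c0) ** 2
        blob = peak * np.exp(-d2 / (2 * sigma_s**2))
        blob[d2 > 6.5**2] = 0.0   # truncate the signal to a 13-pixel diameter
        obj += blob
    return obj
```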

The aperture plane contains a Gaussian aperture whose transmittance is characterized by a width σa and given (in continuous notation) as

a(x, y) = \exp\left[-\frac{(x - c_x)^2 + (y - c_y)^2}{2\sigma_a^2}\right]
(6)

where (cx, cy) is the center of the Gaussian aperture. Figure 4 shows a profile of a(x, y). The entity we seek to optimize is the aperture width σa.

The image plane comprises the pinhole image of f, plus photon noise. For a two-dimensional pinhole system, the image g is given approximately (Barrett and Swindell 1981) as

\mathbf{g} = \kappa\, \tilde{\mathbf{f}} ** \tilde{\mathbf{a}} + \mathbf{n}
(7)

where ** denotes 2-D non-circular convolution, f̃ is a scaled (stretched) version of f, and ã is a scaled version of a. Here a is a discrete (vector) representation of a(x, y). The factor κ is given by

\kappa = \frac{T}{4\pi (d_1 + d_2)^2}
(8)

where T is the exposure time, and d1 and d2 are the object plane-to-aperture plane and aperture plane-to-image plane distances, respectively. Equation (7) is consistent with using units of counts per unit area for g and n and disintegrations per unit area per second for f, with ã unitless. The convolution operation itself has units of area. In our case d1 = d2, and with this equality, f̃ = f (unit magnification). Furthermore, it is easy to show that for our imaging system ã is a stretched (by ×2) version of a, so that ã(x, y) ≡ h(x, y) = exp[−((x − cx)² + (y − cy)²)/(2(2σa)²)]. Henceforth, we shall use the uncluttered notation h(x, y) to refer to the scaled aperture transmittance, as seen in figure 4, and h to refer to its discretized version. We shall set κ to unity without any loss of generality. In (7), n is the photon noise, written without loss of generality as an additive term though it is signal dependent for emission imaging. With the above choices, the image formation model (7) for our particular system becomes

\mathbf{g} = \mathbf{h} ** \mathbf{f} + \mathbf{n}
(9)

The dashed line in the image plane in figure 4 shows the mean image ḡ = µf ** h, and visual comparison of ḡ with g gives one a sense of the typical amount and quality of background and photon noise in the image. Note also that since the signal is a truncated Gaussian blob of diameter 13 pixel units in object space, and since the imaging system has unit magnification, then by (9) the image of the signal will have a diameter approximately equal to the sum of the signal width and the scaled aperture width, typically about 15 pixel units in image space. Thus the signal image is significantly larger than the pixel spacing in image space. It will be convenient to rewrite the convolution h ** f in matrix-vector form as Hf, where the system matrix H is a non-circular convolution matrix.
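
As an illustration, here is a Python sketch of the image formation model (9) with κ = 1: the object is convolved (non-circularly) with the scaled Gaussian aperture h, and Poisson noise is applied. The truncation radius for h and the function name are illustrative choices.

```python
import numpy as np
from scipy.signal import convolve2d

def image_object(f, sigma_a, rng=None):
    """Sketch of imaging model (9) with kappa = 1: non-circular convolution of
    the object f with the scaled aperture h (width 2*sigma_a), then Poisson noise."""
    rng = np.random.default_rng(rng)
    # Scaled aperture transmittance h(x, y), truncated at ~3 standard deviations.
    radius = int(np.ceil(6 * sigma_a))
    x = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(x, x, indexing="ij")
    h = np.exp(-(xx**2 + yy**2) / (2 * (2 * sigma_a) ** 2))
    # "full" mode gives the non-circular convolution; the image is larger than f.
    mean_image = np.clip(convolve2d(f, h, mode="full"), 0.0, None)
    return rng.poisson(mean_image)          # photon (Poisson) noise
```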

We note that the simple convolutional imaging model described above is itself an approximation. A more exact model (Barrett and Swindell 1981) accounts for additional factors for high obliquity rays, vignetting due to finite aperture thickness, and modification of the aperture transmittance due to gamma-ray penetration of the aperture. These physical effects are small and do not significantly affect our conclusions. We note that more exact models of pinhole imaging incorporating the effects mentioned above could be incorporated into H since these are linear effects.

3.2. Experiments with Poisson Noise Only

In our first experiments, we considered the effects of Poisson photon noise only, i.e. no background noise. Then the conditional probability for g is g|f ~ Poisson(Hf). In this case, it is easy to write LR(g, sj) since there is no background variability, and (3) becomes LR(g, sj) = p(g|b + sj)/p(g|b). For the signal-present case and Poisson noise, we get

p(\mathbf{g} \mid \mathbf{b} + \mathbf{s}_j) = \prod_m \frac{\exp\left(-[\mathbf{H}(\mathbf{b} + \mathbf{s}_j)]_m\right)\left([\mathbf{H}(\mathbf{b} + \mathbf{s}_j)]_m\right)^{g_m}}{g_m!}
(10)

where gm indicates the observed counts in the mth detector. The notation [Hy]m indicates the mth element of vector Hy. The signal-absent likelihood has the same form as (10) but with no sj. Combining the likelihoods into a ratio, we can write the appropriate decision strategy for this case as (4) with LR(g, sj) given by

LR(\mathbf{g}, \mathbf{s}_j) = \frac{p(\mathbf{g} \mid H_j)}{p(\mathbf{g} \mid H_0)} = \prod_m \exp\left(-[\mathbf{H}\mathbf{s}_j]_m\right)\left(1 + \frac{[\mathbf{H}\mathbf{s}_j]_m}{[\mathbf{H}\mathbf{b}]_m}\right)^{g_m}
(11)
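
For numerical stability one would typically evaluate (11) in the log domain; here is a minimal Python sketch, assuming the noiseless images Hsj and Hb have been precomputed (names are illustrative).

```python
import numpy as np

def log_lr_poisson(g, Hs_j, Hb):
    """Log of the likelihood ratio (11), Poisson noise with a fixed background.

    g    : observed image (counts), flattened
    Hs_j : noiseless image of the signal at location j (H s_j), flattened
    Hb   : noiseless image of the background (H b), flattened
    """
    return np.sum(-Hs_j + g * np.log1p(Hs_j / Hb))
```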

3.3. Experiments with Photon and Background Noise

In these experiments, we use the Gaussian model of background variability described earlier, and again use the quantity “amp” to gauge the amount of background noise. We previously described background noise in terms of a continuous representation via the use of the power spectrum in (5). It will be convenient to re-express the fact that the background noise is Gaussian in a discrete notation. In discrete notation, the object noise follows a normal distribution

\mathbf{f} \sim \mathcal{N}(\boldsymbol{\mu}_f, \mathbf{K}_b)
(12)

where µf, the mean object, was defined earlier. The background covariance matrix Kb has its rows equal to appropriately shifted versions of the discrete form of the autocorrelation function corresponding to the power spectrum in (5).
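
As an illustration, here is a small Python sketch that builds Kb directly from the Gaussian autocorrelation amp² exp(−r²/(2σcw²)) corresponding to a Gaussian power spectrum of correlation width σcw; the construction and function name are illustrative, not from the paper.

```python
import numpy as np

def background_covariance(n=32, amp=0.3, sigma_cw=4.5):
    """Build K_b for an n x n object, assuming the Gaussian autocorrelation
    amp^2 exp(-r^2 / (2 sigma_cw^2)) of a Gaussian power spectrum of width sigma_cw."""
    coords = np.stack(np.meshgrid(np.arange(n), np.arange(n), indexing="ij"),
                      axis=-1).reshape(-1, 2).astype(float)
    # Squared distance between every pair of pixel centers (n^2 x n^2 matrix).
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1)
    return amp**2 * np.exp(-d2 / (2 * sigma_cw**2))
```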

For emission imaging, one would like to again use a conditional Poisson data model g|f ~ Poisson(Hf). Using (12) and the Poisson data model g|f ~ Poisson(Hf), we cannot derive the pdf p(g) nor LR(g, sj) in closed form. One would be forced to resort to computationally intensive numerical integration methods (Kupinski et al 2003) to evaluate likelihoods via the integrals on the right-hand side of (3). To avoid using such computationally intensive methods, we have made a fairly accurate approximation to the Poisson noise model that enables us to derive closed-form, easily evaluable expressions for the likelihoods by computing p(g|Hj) directly (middle term in (3)) without integrating over object noise. We take the approximate conditional Poisson data model to be Gaussian

\mathbf{g} \mid \mathbf{f} \sim \mathcal{N}\left(\mathbf{H}\mathbf{f},\ \mathrm{diag}(\mathbf{H}\bar{\mathbf{b}})\right)
(13)

where diag(y) indicates a diagonal matrix with diagonal elements given by vector y. Note that if we had used diag(Hf) instead of diag(Hb̄) in (13), we would have obtained the familiar Gaussian approximation to Poisson noise. However, using diag(Hf) incurs its own problems with numerically intensive computation, and using diag(Hb̄) is not a bad approximation, as we shall later demonstrate experimentally. To gain insight on the approximation (13), we list the first two central moments (Abbey 1998) of the correct Poisson-Gaussian model (we cannot write p(g), but we can write its first two moments). The first two moments are ḡ = Hµf and Kg = diag(Hµf) + HKbH^T. Of interest is the fact that for the modified data model in (13), we can show that the first two moments are:

\bar{\mathbf{g}} = \mathbf{H}\boldsymbol{\mu}_f, \qquad \mathbf{K}_g = \mathrm{diag}(\mathbf{H}\bar{\mathbf{b}}) + \mathbf{H}\mathbf{K}_b\mathbf{H}^T
(14)

Note that the mean in (14) equals that for the Poisson-Gaussian model while the covariance differs only slightly given that the signal to be localized is weak.

To see (14), we use (12) to express f = u + µf, where u ~ 𝒩(0, Kb). We then use (9) and (13) to express the data as g = Hf + v, where the data noise v ~ 𝒩(0, diag(Hb̄)). We can combine the above equations for f and g to express g further as g = H(u + µf) + v = Hu + v + Hµf, and note that the random terms Hu and v are both zero mean. Therefore, ḡ = Hµf. Since the covariance of u is Kb, the covariance of Hu is HKbH^T. Combining these facts regarding means and covariances, we can finally write g ~ 𝒩(Hµf, diag(Hb̄) + HKbH^T), thus justifying the discussion regarding the first two moments. Furthermore, we have g|Hj ~ 𝒩(Hb̄ + Hsj, diag(Hb̄) + HKbH^T) and g|H0 ~ 𝒩(Hb̄, diag(Hb̄) + HKbH^T).

With the closed form expressions for p(g|Hj) and p(g|H0) now available, we can easily write the likelihood ratio for the data + background noise case as:

LR(\mathbf{g}, \mathbf{s}_j) = \frac{p(\mathbf{g} \mid H_j)}{p(\mathbf{g} \mid H_0)} = \frac{\left[(2\pi)^M \det(\mathbf{K}_g)\right]^{-\frac{1}{2}} \exp\left[-\frac{1}{2}\left(\mathbf{g} - \mathbf{H}\bar{\mathbf{b}} - \mathbf{H}\mathbf{s}_j\right)^T \mathbf{K}_g^{-1}\left(\mathbf{g} - \mathbf{H}\bar{\mathbf{b}} - \mathbf{H}\mathbf{s}_j\right)\right]}{\left[(2\pi)^M \det(\mathbf{K}_g)\right]^{-\frac{1}{2}} \exp\left[-\frac{1}{2}\left(\mathbf{g} - \mathbf{H}\bar{\mathbf{b}}\right)^T \mathbf{K}_g^{-1}\left(\mathbf{g} - \mathbf{H}\bar{\mathbf{b}}\right)\right]} = \exp\left[\left(\mathbf{H}\mathbf{s}_j\right)^T \mathbf{K}_g^{-1}\left(\mathbf{g} - \mathbf{H}\bar{\mathbf{b}}\right) - \frac{1}{2}\left(\mathbf{H}\mathbf{s}_j\right)^T \mathbf{K}_g^{-1} \mathbf{H}\mathbf{s}_j\right]
(15)

Finally, we can combine (14) and (15) with our decision rule (4) to obtain a means for computing observer response t(g) for the case of background + approximate photon noise.
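
As with (11), one would typically evaluate (15) in the log domain. Here is a minimal Python sketch, assuming Hsj, Hb̄ and Kg have been precomputed (names are illustrative).

```python
import numpy as np

def log_lr_gaussian(g, Hs_j, Hb_bar, Kg):
    """Log of the likelihood ratio (15) under the Gaussian data model (13).

    g      : observed image, flattened
    Hs_j   : H s_j, the noiseless signal image at location j, flattened
    Hb_bar : H b-bar, the noiseless mean-background image, flattened
    Kg     : data covariance diag(H b-bar) + H K_b H^T
    In practice Kg depends on neither g nor j, so its factorization
    (e.g. Cholesky) would be precomputed once.
    """
    w = np.linalg.solve(Kg, g - Hb_bar)    # K_g^{-1} (g - H b-bar)
    v = np.linalg.solve(Kg, Hs_j)          # K_g^{-1} H s_j
    return Hs_j @ w - 0.5 * Hs_j @ v
```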

In order to determine ALROC, we first plot the LROC curve. To plot the LROC curve, we determine a set of Nτ thresholds from the range of values of t+ and t−. For each threshold τ, the probability of correct localization PCL(τ) and the false-positive rate PFP(τ) are determined by computing the fractions of t+ and t− observer responses that exceed the given threshold. By sweeping all thresholds, the LROC curve is plotted. Note that in computing PCL(τ), one divides the number of correctly localized t+ that exceed τ by the total number of t+ (including the discarded incorrectly localized t+), and in computing PFP(τ), one divides the number of t− that exceed τ by the total number of t−. Then ALROC is obtained by simple trapezoidal integration of the LROC curve.
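
A minimal Python sketch of this threshold-sweeping procedure follows (names are illustrative; correct_loc flags those signal-present responses whose reported location fell within the tolerance region).

```python
import numpy as np

def empirical_auc_lroc(t_pos, correct_loc, t_neg, n_thresh=200):
    """Estimate A_LROC by threshold sweeping.

    t_pos       : responses t(g) for all signal-present images
    correct_loc : boolean array; True where the reported location l(g) fell
                  within the tolerance region about the true location
    t_neg       : responses t(g) for signal-absent images
    """
    t_pos, t_neg = np.asarray(t_pos, float), np.asarray(t_neg, float)
    correct_loc = np.asarray(correct_loc, bool)
    all_t = np.concatenate([t_pos, t_neg])
    taus = np.linspace(all_t.min() - 1e-9, all_t.max(), n_thresh)
    # P_CL: correctly localized positives above threshold / ALL positives.
    p_cl = np.array([((t_pos > tau) & correct_loc).mean() for tau in taus])
    # P_FP: signal-absent responses above threshold / all negatives.
    p_fp = np.array([(t_neg > tau).mean() for tau in taus])
    order = np.argsort(p_fp)
    return np.trapz(p_cl[order], p_fp[order])
```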

4. Results

In calculating signal-absent observer responses, for each experiment we use 10000 signal-absent sample images. For the Poisson noise only case, we generate 10000 noisy images of a fixed background, and for the photon + background noise case we generate 10000 noisy images from 10000 noisy objects. In signal-present images, a signal can appear at any one of the 400 locations on the 20 × 20 search grid. We take pR(sj) to be uniform and generate 26 signal-present images per location, for a total of 26 × 400 = 10400 signal-present images for each experiment. For the Poisson noise only case, we use a fixed background but add 10400 signals (26 × 400), resulting in 10400 noisy images. For the photon + background noise case, we generate 10400 noisy backgrounds, one for each of the 10400 signals, and thus form 10400 noisy images. In all experiments, we use a Gaussian signal with a peak intensity of 1.5 intensity units and σs = 1.5 pixel units.

Our previous definition of tolerance is apropos for a continuous object space. Since our object appears on pixels, we shall use the label “tol” for tolerance where tol = m, an odd integer, implies a region of diameter m pixel units and tol = 1 implies a one-pixel region, i.e. the guessed signal location must be coincident with the signal.

4.1. Results for Experiments with Poisson Noise Only

In our first series of experiments, we tested the approximation (13) for Poisson noise and also examined results when no background variability was present. For this case, LR(g,sj) is given by (11).

Figure 5(a) shows our first experimental results. The abscissa is σa/σs, the aperture width normalized by the signal width, and the ordinate is ALROC. The value "tol" (tolerance) indicates the diameter of T(l) as explained above, and curves are plotted for various values of tol. The top curves in figure 5, labeled "ROC", correspond to tol = ∞, i.e. a pure detection task.

Figure 5
(a) the case of Poisson noise only. (b) the case of pseudo Poisson noise. The optimal aperture is indicated by the location of the peak of the ALROC curve. See text for detailed explanation. (c) optimal aperture size vs tolerance for results in (a). This ...

As seen in figure 5(a), for the most restrictive localization at tol = 1, one intuitively expects that a high-resolution image is needed, and indeed, the optimal aperture width (≈ 1.1 pixel units), given by the location of the peak in the ALROC curve, is about equal to the signal size. As tolerance relaxes to higher values, the optimal aperture size increases (the ALROC peak shifts rightward) to let in more counts, and the ALROC value rises. Finally, for the case of detection only (tol = ROC) and no background noise, the AROC curve shows the startling result that an infinitely large aperture (i.e. no aperture) is best! This classic "gaping aperture" result has been observed earlier for a detection-only signal-known/background-known case with an ideal observer in Myers et al (1990) and for a signal-known/background-known case with a near-ideal observer in Tsui et al (1978). On the other hand, the ALROC curves demonstrate that when a localization task is added to the detection task, we obtain a finite optimal aperture size, even without background noise. This is a new and not obvious result.

The plot in figure 5(b) follows the same format as that in figure 5(a). The only difference is that the approximate Poisson noise (13) is used. The result is virtually identical to that in figure 5(a), with the curves overlapping nearly perfectly, thus justifying our use of the approximation (13).

The plot in figure 5(c) summarizes some important observations gleaned from the results in figure 5(a). The abscissa is tolerance. The ordinate is the optimal aperture width normalized by signal width. The vertical bars above/below each point indicate the asymmetric half-width of the ALROC curve at 95% max. The bar lengths are thus a measure of the sensitivity of the ALROC FOM to aperture width, with smaller bars implying a greater sensitivity. As the search tolerance decreases, the optimal aperture width shrinks and ALROC performance becomes more sensitive to fluctuations about the optimal pinhole diameter. For a pure detection task, the error bar is infinite and not shown.

4.2. Results for Experiments with Photon and Background Noise

An important result is shown in figure 6. Here background noise of amp = 0.3 intensity units and correlation width σcw = 4.5 pixel units and approximate photon noise are both included. In figure 6(a), ALROC vs normalized aperture width is again plotted for various tolerances. As tolerance decreases, the optimal aperture width shrinks and the ALROC curve is lowered. The ROC curve in figure 6(a) has a peak and so no “gaping aperture” is observed when background noise is present. This result is consistent with those of the signal-location-known detection-only experiments with photon + background noise shown in Myers et al (1990) and in Tsui et al (1978).

Figure 6
(a) ALROC vs normalized aperture for photon + background noise with σcw = 4.5 and amp = 0.3 (b) Optimal normalized aperture width vs degree of localization from results of (a). Vertical bars indicate sensitivity of ALROC to aperture width. See ...

The plot in figure 6(b) follows the same conventions as that in figure 5(c) except that the abscissa point marked “infinity” corresponds to the ROC curve at top of figure 6(a). With background noise added, the optimal aperture size is smaller than the case for no background noise, as can be seen by comparing figure 6(b) and figure 5(c).

In figure 7, we once again plot ALROC and AROC vs normalized aperture width, but each plot is at a fixed tolerance of 5 pixel units with the curves in each plot indexed by background noise amplitude “amp”. In figure 7(a), as background noise is lowered, ALROC performance increases and optimal aperture width, given by the location of the peaks in the ALROC curve, increases. In figure 7(b), we repeat the experiment for a pure detection task and observe a trend similar to that in figure 7(a). The result in figure 7(b) for a detection-only task is consistent with results reported in Myers et al (1990), but, interestingly, the same trend in optimal aperture variation with background noise amplitude is also seen in the joint detection and localization case of figure 7(a). In figure 7(c), we examine the size and the full-width at 95% max of the optimal aperture as background noise is increased. As amp rises, the optimal aperture shrinks.

Figure 7
(a) ALROC performance vs normalized aperture size for various values of amp. All results are at a tolerance of 5 pixel units. (b) AROC vs normalized aperture size for various values of amp. Since this is an AROC case, the tolerance distance is infinite. ...

In figures 5, 6 and 7, we used the bootstrap method (Zoubir and Boashash 1998) to calculate 95% confidence interval error bars on the points displayed. In all cases, the error bars (each less than 1% of the ALROC value itself) were tiny, too small to display. The separation of the curves on these plots is thus significant.
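
For reference, here is one simple percentile-bootstrap variant of such an error-bar calculation, resampling the observer responses with replacement and recomputing ALROC with the empirical_auc_lroc sketch from section 3.3 above; the paper's exact bootstrap procedure may differ, and all names are illustrative.

```python
import numpy as np

def bootstrap_alroc_ci(t_pos, correct_loc, t_neg, n_boot=1000, alpha=0.05, rng=None):
    """Percentile-bootstrap (1 - alpha) confidence interval on A_LROC,
    resampling signal-present and signal-absent responses with replacement.
    Relies on empirical_auc_lroc from the sketch in section 3.3."""
    rng = np.random.default_rng(rng)
    t_pos, t_neg = np.asarray(t_pos, float), np.asarray(t_neg, float)
    correct_loc = np.asarray(correct_loc, bool)
    stats = []
    for _ in range(n_boot):
        ip = rng.integers(0, len(t_pos), len(t_pos))   # resample positives
        im = rng.integers(0, len(t_neg), len(t_neg))   # resample negatives
        stats.append(empirical_auc_lroc(t_pos[ip], correct_loc[ip], t_neg[im]))
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])
```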

4.3. Results for a Conspicuity Test

In a conspicuity test, one tracks the FOM as exposure time T is increased. With large enough T, the relative effects of photon noise become insignificant and the performance, under certain conditions, could become limited by background noise. (See Myers et al 1990 for a discussion of conspicuity limits.) For detection tasks, conspicuity limits have been reported in Barrett (1990). Here, we inquire whether such limits can also occur for a joint detection and localization task.

We performed the conspicuity test seen in figure 8. In figure 8(a), both background and photon noise were included. At low T, photon noise dominates, but at high T, ALROC and AROC approach an asymptote (i.e. a conspicuity limit) due to background noise. In figure 8(b), we included Poisson noise only and observed that ALROC performance quickly saturates to 1.0 (perfect performance, no conspicuity limit). Note that in these studies, our ideal observer must itself change with T as follows: in (15), H → TH.

Figure 8
Conspicuity test. (a) Background and photon noise included. σa = 2.0 and amp = 0.75. The scale of the signal and background structure are similar in that σs = σcw = 1.5. (b) Photon noise only with the same σa and σ ...

5. Discussion

We have explored a previously studied problem - aperture optimization in emission imaging - in the context of a detection and localization task using an ideal observer. We were particularly interested in exploring the effects of adding localization to the detection task. Our recent derivation of an ideal ALROC observer (Khurd and Gindi 2005) made this endeavor possible.

Our first conclusion, seen in figure 6(a), corresponds to intuition: the optimal aperture width shrinks and ALROC performance is degraded as localization becomes more stringent. In addition, as seen in figure 6(b), the sensitivity of ALROC performance to the optimal aperture width also increases as localization becomes more stringent. Thus a system designer faced with a clinical task with a high degree of localization would need to consider a smaller (than without localization) aperture.

In addition, we observed that the behavior of our joint detection and localization performance is similar to that reported for a roughly comparable study (Myers et al 1990) involving a detection-only task with similar background noise but with signal location known. For both cases, the optimal aperture width shrinks (see figure 7) as background noise grows. Also, conspicuity limits can be observed (see figure 8) for both cases.

Finally we observed the interesting result that for the case of background noise absent, our joint detection and localization task led to a finite aperture size, whereas for pure detection, our results and those of others (Myers et al 1990 and Tsui et al 1978) showed an infinitely large “gaping” aperture. For both joint detection and localization and detection tasks, the addition of background noise yields a finite aperture size.

In Khurd et al (2006), we did work similar to that of this paper, but for a simple one-dimensional (1-D) imaging system. The 2-D problem is not only more realistic (and much more computationally complex), but also differs in important ways. In 2-D, the count level grows as the square of the aperture width, while in 1-D it grows linearly with aperture width, so in a 2-D system a greater advantage in photon noise reduction is seen as the aperture width grows. Also, if, for a given tolerance distance, the potential number of mis-localizations in 1-D object space is N, then it grows as N² in 2-D. Despite these differences, the rough qualitative behavior of the 1-D system matched that of the 2-D system.

In this work, we used a simple planar imaging system as well as a simple Gaussian background noise model. It would be interesting to extend our current work to more realistic background noise models (using the techniques in Kupinski et al 2003 and Park et al 2005) and to more useful emission imaging systems such as SPECT (using, for example, techniques in He et al 2007). In Zhou and Gindi (2008), we have obtained preliminary results for a simple SPECT system.

Acknowledgments

This work was supported by NIH NIBIB 02629.

References

  • Abbey C. Assessment of reconstructed images. PhD Thesis, Graduate Interdisciplinary Program in Applied Mathematics, University of Arizona, Tucson, AZ, USA; 1998.
  • Barrett HH. Objective assessment of image quality: effects of quantum noise and object variability. J. Opt. Soc. Am. A. 1990;7(7):1266–1278.
  • Barrett HH, Abbey C. Bayesian detection of random signals on random backgrounds. Info. Processing in Med. Imaging. 1997;1230:155–166.
  • Barrett HH, Myers KJ. Foundations of Image Science. Wiley Interscience; 2004.
  • Barrett HH, Swindell W. Radiological Imaging: Theory of Image Formation, Detection and Processing I. Academic Press; 1981.
  • Fessler JA. Spatial resolution and noise tradeoffs in pinhole imaging system design: a density estimation approach. Optics Express. 1998;2(6):237–253.
  • Gifford HC, King MA, Pretorius PH, Wells RG. A comparison of human and model observers in multislice LROC studies. IEEE Trans. on Medical Imaging. 2005;24(2):160–169.
  • He X, Caffo BS, Frey EC. Markov chain Monte Carlo (MCMC) based ideal observer estimation using a parameterized phantom and a pre-calculated dataset. Conf. Rec. SPIE Med. Imaging. 2007;6515:161–166.
  • Hesterman JY, Kupinski MA, Clarkson E, Wilson DW, Barrett HH. Evaluation of hardware in a small-animal SPECT system using reconstructed images. Conf. Rec. SPIE Med. Imaging. 2007;6515:1G1–110.
  • Khurd P, Gindi G. Decision strategies that maximize the area under the LROC curve. IEEE Trans. on Medical Imaging. 2005;24(12):1626–1636.
  • Khurd P, Zhou L, Rangarajan A, Gindi G. Aperture optimization in emission imaging using optimal LROC observers. Conf. Rec. IEEE Nucl. Sci. Symp. Med. Imaging Conf.; 2006. pp. M03–M06.
  • Kupinski MA, Hoppin JW, Clarkson E, Barrett HH. Ideal-observer computation in medical imaging with use of Markov-chain Monte Carlo techniques. J. Opt. Soc. Am. A. 2003;20(3):430–438.
  • Meng LJ, Clinthorne NH. A modified uniform Cramer-Rao bound for multiple pinhole aperture design. IEEE Trans. on Medical Imaging. 2004;23(7):896–902.
  • Myers KJ, Rolland JP, Barrett HH, Wagner RF. Aperture optimization for emission imaging: effect of a spatially varying background. J. Opt. Soc. Am. A. 1990;7(7):1279–1293.
  • Park S, Clarkson E, Kupinski MA, Barrett HH. Efficiency of the human observer detecting random signals in random backgrounds. J. Opt. Soc. Am. A. 2005;22:13–16.
  • Popescu LM, Lewitt RM. Small nodule detectability evaluation using a generalized scan statistic model. Phys. Med. Biol. 2006;51(23):6225–6224.
  • Shao L, Hero AO, Rogers WL, Clinthorne NH. The mutual information criterion for SPECT aperture evaluation and design. IEEE Trans. on Medical Imaging. 1989;8(4):322–336.
  • Smith WE, Barrett HH. Hotelling trace criterion as a figure of merit for the optimization of imaging systems. J. Opt. Soc. Am. A. 1986;3(5):717–725.
  • Swensson RG. Unified measurement of observer performance in detecting and localizing target objects on images. Med. Phys. 1996;23(10):1709–1725.
  • Tsui BMW, Metz CE, Atkins FB, Starr SJ, Beck RN. A comparison of optimum detector spatial resolution in nuclear imaging based on statistical theory and on observer performance. Phys. Med. Biol. 1978;23(4):654–676.
  • Zhou L, Gindi G. SPECT image system optimization using ideal observer for detection and localization. Conf. Rec. SPIE Med. Imaging. 2008;6917:48.
  • Zoubir AM, Boashash B. The bootstrap and its application in signal processing. IEEE Signal Proc. Mag. 1998;15:56–76.