For the familiar 2-class detection problem (signal present/absent), ideal observers have been applied to optimization of pinhole and collimator parameters in planar emission imaging. Given photon noise and background and signal variability, such experiments show how to optimize an aperture to maximize detectability of the signal. Here, we consider a fundamentally different, more realistic task in which the observer is required to both detect and localize a signal. The signal is embedded in a variable background and is known except for location. We inquire whether the addition of a localization requirement changes conclusions on aperture optimization. We have previously formulated an ideal observer for this joint detection/localization task, and here apply it to the classic problem of determining an optimal pinhole diameter in a planar emission imaging system. We conclude that as search tolerance on localization decreases, the optimal pinhole diameter shrinks from that required by detection alone, and in addition, task performance becomes more sensitive to fluctuations about the optimal pinhole diameter. As in the case for detection only, the optimal pinhole diameter shrinks as the amount of background variability grows, and in addition, conspicuity limits can be observed. Unlike the case for detection only, our task leads to a finite aperture size in the absence of background variability. For both tasks, the inclusion of background variability yields a finite aperture size.
Scalar figures of merit (FOM) summarizing task performance can be used in simulation studies to optimize or compare imaging systems in emission imaging. One oft-considered clinically important task is the detection of a signal (i.e. a lesion) in a noisy background. For example, one might optimize SPECT collimator properties to maximize detection, in a noisy image, of a signal embedded in a complex patient background. In such system optimization research involving detection tasks, there is an escalatory path toward making studies more realistic. One seeks to (1) model the imaging process as accurately as possible, (2) model the Poisson photon noise accurately, (3) make the ensemble of possible backgrounds as realistic as possible and (4) make the ensemble of possible signals as realistic as possible.
We address this problem in the context of a simple planar imaging system with a pinhole aperture. We use a simple stochastic model for background variability. (We shall refer to background variability as “background noise”.) What is new in this study is that we consider a signal whose form is known exactly, but whose location is unknown, and demand that the task be one of joint detection and localization. Our ultimate goal is to see how the addition of localization to the task changes the optimal pinhole diameter. In fact, our experiment is similar to classic experiments done earlier by Myers et al (1990) in the context of detection only of a known signal in a fixed location embedded in background noise.
To optimize the system, we need a model observer whose scalar responses can be combined into an overall FOM. As we vary system parameters (simply the pinhole diameter in this case), the FOM tracks performance and we choose that pinhole diameter with the highest FOM. For a host of reasons (Barrett and Myers 2004), the mathematical ideal observer is used for system optimization. The ideal observer has exact knowledge of the imaging system and all relevant probability distributions of the underlying object to be imaged and the sources of noise (here, photon noise) that corrupt the data. The ideal observer is optimal in several senses. It can, in principle, incorporate the ever more sophisticated models needed to escalate the verisimilitude as per points (1)–(4) above. The ideal observer forms an upper limit on possible human observer performance. In this work, we use a new form of ideal observer that we derived in Khurd and Gindi (2005). It extends the detection-only task to a joint detection and localization task.
The problem of aperture optimization for emission tomography has a long history. Previous approaches based on detection task performance FOM include that of Myers et al (1990), Tsui et al (1978), Smith and Barrett (1986) and Hesterman et al (2007). Often, the calculation of a detection FOM for an ideal or approximate ideal observer is computationally challenging, and technical advances for this problem have been made by Kupinski et al (2003), Park et al (2005) and He et al (2007). Aperture optimization using criteria other than detection task performance has been reported in Fessler (1998), Shao et al (1989) and Meng and Clinthorne (2004).
We also mention a different approach to image quality evaluation as espoused in the scan-statistics literature (Popescu and Lewitt 2006 and Swensson 1996). This approach typically uses detection or joint detection and localization figures of merit. A brief review of the scan-statistics methodology and an excellent literature resource of scan-statistics (and related) methods in imaging can be found in Popescu and Lewitt (2006). In this approach, one eschews the often unobtainable detailed knowledge of the object statistics and imaging system and instead directly applies empirical observers to samples of images to obtain histogram estimates of probabilities that can then be used to derive image-quality FOM’s. In Popescu and Lewitt (2006), PET vs TOF-PET are compared under a task context of detection of a signal at an unknown location. Gifford et al (2005) used a similar task criterion to optimize reconstruction strategies in SPECT. Here, some effort was made to design an observer that emulated human performance. Scan-statistics methods do not yield ideal observers, but have practical advantages in that they do not presume difficult-to-obtain detailed knowledge of the object and the imaging system.
In figure 1, a crude depiction of a two-dimensional (2-D) pinhole imaging system is used to introduce the aperture optimization problem. A wide aperture admits more photons, so the image has less photon noise and signal detection and localization is perhaps easier. However, opening the aperture leads to a resolution loss that might lessen the observer's ability to detect and localize the signal. A smaller aperture passes less background noise, thereby perhaps increasing a detection and localization FOM, but again increases photon noise by letting through fewer counts. With these competing effects, it would be interesting to determine the optimal pinhole diameter, i.e. that diameter that maximizes the FOM.
One can get an intuitive sense of the task at hand by inspecting the images in figure 2. The leftmost column displays 32 × 32 objects with a signal (at one location) embedded in increasing amounts of background noise. The signal itself, seen clearly in the upper left image, is a Gaussian blob of height = 1.5 intensity units and width σs = 1.5 pixel units. (A pixel unit is the length of a pixel, and an intensity unit would be measured in units of disintegrations per unit area per second.) The background noise is derived by sampling a stationary noise field with a Gaussian power spectrum of correlation width = 4.5 pixel units sitting atop a DC background of 10.0 intensity units. The “amplitude” mentioned in the caption of figure 2 is simply the square root of the variance of the background noise and is used to gauge the amount of background noise. As seen in column 1, as we increase the amount of background noise, it becomes increasingly difficult to localize and detect the signal — even in the object space! Columns 2, 3, 4 and 5 show images obtained using apertures whose transmittance is characterized by a Gaussian of standard deviation σa = 0.33, 1.0, 2.0, 3.0 pixel units, respectively. Note that a pixel unit in the image space equals that in the object space. The intensity unit in the image space is counts per unit area. (Note that the actual image size increases with σa, but for display purposes, we have normalized the image sizes in figure 2. The figures in columns 3, 4 and 5 show a dark border. This is due to intensity falloff at the edge of the image that is inherent in the imaging model.) Each image is modified to include Poisson noise. As the aperture diameter increases, the noise (photon and background) is smoothed at the expense of resolution loss, and as the amplitude of background noise increases, the signal becomes more obscured.
From the images in columns 2, 3, 4 and 5 of figure 2, one gets a visual sense of the difficulty of detection and localization under these conditions.
In section 2, we review ideal observers for detection only and for joint detection and localization. In section 3, we present our simulation methodology for exploring how optimal aperture diameter varies in the detection-only and in the joint detection and localization tasks. Our simulation results are shown in section 4. Section 5 includes a summary and discussion.
We review well-known forms of the ideal decision strategy and accompanying ideal observer for the detection-only problem, and then present our ideal decision strategy and accompanying ideal observer for the joint detection and localization problem.
We first revisit the well-known case (Barrett and Myers 2004) of an ideal observer for a 2-class (signal present/absent) detection problem in which the signal and background are known exactly. Let vector b be the known background object and vector s a known additive signal (lesion). The vector g is the observed image and is noisy by virtue of photon noise. Then p(g|b) is the signal-absent likelihood and p(g|b + s) the signal-present likelihood. The test statistic is the likelihood ratio t given by

t(g) = p(g|b + s) / p(g|b)     (1)
where H1, H0 correspond to hypotheses of signal present/absent. The scalar test statistic t(g) is compared to a threshold τ to decide H1 if t(g) > τ and H0 otherwise.
One can summarize the performance of the observer by a FOM given by AROC, the area under the ROC curve. To do this, one generates an ensemble of signal-absent and signal-present realizations of g, and accumulates observer responses t+ (for signal present) and t− (for signal absent). By forming a bimodal histogram of t+ and t−, one can, at a given τ, integrate suitable areas under the histograms to obtain the probability PTP(τ) of deciding signal present when it is indeed present, and the probability PFP(τ) of deciding that a signal is present when it is absent. By sweeping τ, one generates the well-known ROC curve PTP(τ) vs PFP(τ) seen in figure 3. The area under this curve, AROC, is a FOM for this 2-class detection problem. The likelihood ratio is ideal in the sense that it maximizes AROC amongst all possible observers, and minimizes the Bayes risk for arbitrary costs. In figure 3, the location on the curves of the operating point corresponding to τ = 1 is shown. (The threshold τ = 1 corresponds to an ML classifier for the ROC curve.)
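The threshold-sweep estimate of AROC described above can be sketched in a few lines. This is our own illustrative sketch (the function name and array inputs are hypothetical, not from the original study): it accumulates PTP(τ) and PFP(τ) over a grid of thresholds and integrates the resulting curve with the trapezoidal rule.

```python
import numpy as np

def empirical_roc_auc(t_pos, t_neg, n_thresh=200):
    """Estimate AROC by sweeping a threshold over observer responses.
    t_pos: responses t+ on signal-present images,
    t_neg: responses t- on signal-absent images."""
    t_pos = np.asarray(t_pos, dtype=float)
    t_neg = np.asarray(t_neg, dtype=float)
    # Thresholds spanning the pooled response range; the endpoints force
    # the curve to include the corners (0, 0) and (1, 1).
    lo = min(t_pos.min(), t_neg.min()) - 1e-9
    hi = max(t_pos.max(), t_neg.max()) + 1e-9
    taus = np.linspace(lo, hi, n_thresh)
    p_tp = np.array([(t_pos > tau).mean() for tau in taus])
    p_fp = np.array([(t_neg > tau).mean() for tau in taus])
    # Order the operating points by increasing P_FP (ties broken by P_TP)
    # and integrate with the trapezoidal rule.
    order = np.lexsort((p_tp, p_fp))
    x, y = p_fp[order], p_tp[order]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))
```

In practice one would feed in the t+ and t− responses of whichever observer is under study; perfectly separated responses give AROC ≈ 1 and identically distributed responses give AROC ≈ 0.5.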
If the background is variable, then b is random, and is characterized by a pdf p(b). The likelihood ratio must now integrate over all possibilities of the background, and the test statistic becomes

LR(g) = ∫ p(g|b + s) p(b) db / ∫ p(g|b) p(b) db     (2)
In (2) we have used the notation LR(g) to indicate a likelihood ratio.
To calculate AROC in this case, one can again generate an ensemble of signal-absent and signal-present noisy realizations of g, where the noise now includes both photon and background noise. Given these samples, one can follow the procedure outlined in section 2.1 to obtain AROC.
To introduce localization into the problem, we digress into a discussion of the LROC (localization ROC) curve. The LROC curve incorporates a notion of localization. The LROC curve, seen in figure 3, plots the probability of correct joint detection and localization PCL(τ), vs the false positive probability PFP(τ). The area under the LROC curve, ALROC, is a FOM for joint detection and localization performance just as AROC is for detection performance. To compute ALROC in a simulation experiment, one can follow a procedure similar to that described for computing AROC. Here, one computes histograms of t+ and t− as before, but includes in t+ only those cases in which the signal was correctly localized within an allowed tolerance about the true location. In section 3.3, we discuss the computation of ALROC in more detail.
For detection, the ideal observer maximizes AROC, and so it is natural to seek a decision strategy that is ideal for detection and localization. A natural feature of such a decision strategy is that it should maximize ALROC. In Khurd and Gindi (2005) we derived such a decision strategy, and we briefly summarize it here. Let sj be a signal (of known form) located at j. (We consider only discrete signal locations, but in Khurd and Gindi (2005) we generalize our decision strategy to the case where signal location is continuous.) Let pR(sj) be the prior probability that the signal will be found at j. The notion of localization is meaningless without specifying a tolerance distance about the true location within which localization is deemed correct. Let l = 1, …, L also index signal location. We define the hypothesis Hl, where l = 1, …, L, to mean that the signal is located within the tolerance region T(l) centered at l. The hypothesis H0 again means signal absent. Thus we have an (L + 1)-hypothesis decision problem. We shall also define a likelihood ratio LR(g, sj), a generalization of (2), that is indexed to the signal at j:

LR(g, sj) = p(g|Hj) / p(g|H0) = ∫ p(g|b + sj) p(b) db / ∫ p(g|b) p(b) db     (3)
With these definitions we can write our optimal decision strategy for detection and localization: decide signal present if

max_l Σ_{j ∈ T(l)} pR(sj) LR(g, sj) > τ,  and report the location  l(g) = argmax_l Σ_{j ∈ T(l)} pR(sj) LR(g, sj);     (4)

otherwise decide H0 (signal absent).
Our decision strategy will detect the presence or absence of the signal in the data g and, if a signal is detected, it will report a location l(g) where it deems the signal to be present. If the true location of the signal falls within a tolerance region T(l(g)) about the reported location l(g), we will say that the observer has correctly localized the signal. We shall use “tolerance” to simply indicate the diameter of T(l), and also note that in our experiments, the tolerance does not change with position. The decision rule (4) indeed maximizes ALROC and also minimizes Bayes risk under certain cost constraints (Khurd and Gindi 2005).
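The decision strategy just described can be stated compactly in code. The sketch below is ours (function and variable names are hypothetical) and assumes the per-location likelihood ratios LR(g, sj) have already been computed for the image at hand:

```python
import numpy as np

def detect_and_localize(lr_values, prior, tol_regions, tau):
    """Sketch of the max-ALROC decision rule.
    lr_values[j]   : likelihood ratio LR(g, sj) for a signal at location j
    prior[j]       : prior probability pR(sj)
    tol_regions[l] : indices j belonging to tolerance region T(l)
    tau            : decision threshold
    Returns (signal_present, reported_location_index)."""
    # Prior-weighted sum of likelihood ratios over each tolerance region.
    lam = np.array([sum(prior[j] * lr_values[j] for j in region)
                    for region in tol_regions])
    l_star = int(np.argmax(lam))
    if lam[l_star] > tau:
        return True, l_star   # decide signal present, within T(l_star)
    return False, None        # decide H0: signal absent
```

With tolerance regions of a single location each, the rule reduces to thresholding the largest prior-weighted likelihood ratio.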
Clearly, as tolerance → ∞, the localization requirement disappears and we end up with a detection-only problem with signal location unknown. In this case, the LROC curve becomes the ROC curve, and maximizing ALROC is the same as maximizing AROC. The implementation and analysis of such a 2-class decision strategy with signal location unknown was discussed in Barrett and Abbey (1997). In terms of our own notation, the optimal decision strategy maximizing AROC with signal location unknown requires the computation of the test statistic t(g) = Σj pR(sj) LR(g, sj), and we decide signal present if t(g) > τ. In fact this rule follows from (4) as tolerance → ∞. To avoid confusion, this max-AROC signal-location-unknown ideal observer is to be contrasted with the max-AROC signal-location-known ideal observers discussed in sections 2.1 and 2.2. Indeed, we shall use this max-AROC signal-location-unknown ideal observer in our simulation experiments. It represents performance of our max-ALROC observer (4) in the limit of no localization requirement.
We first describe our simulated imaging system, then describe our methodology for our simulation experiments.
Our overall geometry is a unit-magnification pinhole system in which the object-plane-to-aperture-plane distance equals the aperture-plane-to-image-plane distance. Figure 4 shows a cross section. The object with signal present is given by f+ = b + sj, where b is a stochastic background and sj a signal at j. We shall use f− = b to denote a signal-absent object and f to indicate either f+ or f−. The signal to be localized is a discretized 2-D Gaussian function with σs = 1.5 pixel units and a peak intensity of 1.5 intensity units. We truncate the Gaussian so that its diameter is 13 pixel units. Note that if present, the signal can be located anywhere with equal probability in a 20 × 20 grid centered in the 32 × 32 object space. The smaller search grid ensures that the signal remains within the object space (i.e. does not fall off the edge). The background b comprises a DC plateau of 10.0 intensity units to which we add stationary zero-mean multivariate Gaussian noise whose power spectrum is given by

W(ν) = 2π amp² σcw² exp(−2π² σcw² |ν|²)     (5)
This power spectrum is characterized by a correlation width σcw = 4.5 pixel units and a factor “amp”. To characterize the “amount” of noise, we use the notion of noise amplitude, given by the square root of the variance of the background noise. An object realization profile (solid line) displayed in the object plane in figure 4 shows a signal embedded in background noise with σcw = 4.5 pixel units and amp = 0.3 intensity units. The dashed line in the object plane shows the mean object µf = b̄ + sj, where b̄ is the mean background, so that a visual comparison of µf and f gives one a sense of the typical amount and quality of background variability.
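One way to sample such a background is to filter white Gaussian noise in the Fourier domain by the square root of a Gaussian power spectrum, then rescale so the field's standard deviation equals “amp” and add the DC plateau. This is a sketch under our own assumptions (parameter and function names are ours, and the normalization-by-sample-std shortcut is ours, not the paper's):

```python
import numpy as np

def sample_background(n=32, dc=10.0, amp=0.3, sigma_cw=4.5, seed=None):
    """Sample a background: a DC plateau plus stationary zero-mean Gaussian
    noise with a Gaussian power spectrum of correlation width sigma_cw
    (pixel units) and amplitude amp (intensity units)."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n)                      # spatial frequency, cycles/pixel
    kx, ky = np.meshgrid(k, k, indexing="ij")
    # Gaussian power spectrum; the Fourier-domain filter is its square root.
    power = np.exp(-2.0 * (np.pi * sigma_cw) ** 2 * (kx**2 + ky**2))
    white = rng.standard_normal((n, n))
    field = np.real(np.fft.ifft2(np.fft.fft2(white) * np.sqrt(power)))
    field -= field.mean()                      # enforce zero mean
    field *= amp / field.std()                 # enforce std ("amplitude") = amp
    return dc + field
```

Each call yields one realization of b; the sample mean and standard deviation match dc and amp by construction.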
The aperture plane contains a Gaussian aperture whose transmittance is characterized by a width σa and given (in continuous notation) as

a(x, y) = exp[−((x − cx)² + (y − cy)²)/(2σa²)]     (6)
where (cx, cy) is the center of the Gaussian aperture. Figure 4 shows a profile of a(x, y). The entity we seek to optimize is the aperture width σa.
The image plane comprises the pinhole image of f, plus photon noise. For a two-dimensional pinhole system, the image g is given (Barrett and Swindell 1981) approximately as

g = κ (f̃ ** ã) + n     (7)
where ** denotes 2-D non-circular convolution, f̃ is a scaled (stretched) version of f and ã a scaled version of a. Here a is a discrete (vector) representation of a(x, y). The factor κ is a constant determined by the exposure time T and by d1 and d2, the object plane-to-aperture plane and aperture plane-to-image plane distances, respectively. Equation (7) is consistent with using units of counts per unit area for g and n and disintegrations per unit area per second for f̃, with ã unitless. The convolution operation itself has units of area. In our case d1 = d2, and with this equality, f̃ = f (unit magnification). Furthermore, it is easy to show that for our imaging system ã is a stretched (by ×2) version of a, and so ã(x, y) ≡ h(x, y) = exp[−((x − cx)² + (y − cy)²)/(2(2σa)²)]. Henceforth, we shall use the uncluttered notation h(x, y) to refer to the scaled aperture transmittance, as seen in figure 4, and h to refer to its discretized version. We shall set κ to unity without any loss of generality. In (7), n is the photon noise, written without loss of generality as an additive term though it is signal dependent for emission imaging. With the above choices, the image formation model (7) for our particular system becomes

g = (f ** h) + n     (9)
The dashed line in the image plane in figure 4 shows the mean image ḡ = µf ** h, and visual comparison of ḡ with g gives one a sense of the typical amount and quality of background and photon noise in the image. Note also that since the signal is a truncated Gaussian blob of diameter 13 pixel units in object space, and since the imaging system has unit magnification, then by (9), the image of the signal will have a diameter approximately equal to the sum of the signal width and the scaled aperture width, typically about 15 pixel units in image space. Thus the signal image is significantly larger than the pixel spacing in image space. It will be convenient to rewrite the convolution h ** f in the matrix-vector form Hf, where the system matrix H is a non-circular convolution matrix.
We note that the simple convolutional imaging model described above is itself an approximation. A more exact model (Barrett and Swindell 1981) accounts for additional factors for high-obliquity rays, vignetting due to finite aperture thickness, and modification of the aperture transmittance due to gamma-ray penetration of the aperture. These physical effects are small and do not significantly affect our conclusions. We note that more exact models of pinhole imaging incorporating the effects mentioned above could be incorporated into H since these are linear effects.
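The simplified imaging model above (pinhole blur h followed by Poisson photon noise, with κ = 1) might be simulated as follows. This is an illustrative sketch, not the authors' code; the non-circular convolution is done with a zero-padded FFT and the PSF truncation radius is our choice:

```python
import numpy as np

def gaussian_psf(sigma_a):
    """Scaled aperture transmittance h(x, y): a Gaussian of width 2*sigma_a
    (the x2 stretch arising from the unit-magnification geometry)."""
    s = 2.0 * sigma_a
    r = int(np.ceil(4 * s))                     # truncate at ~4 standard deviations
    x = np.arange(-r, r + 1)
    xx, yy = np.meshgrid(x, x, indexing="ij")
    return np.exp(-(xx**2 + yy**2) / (2.0 * s**2))

def image_of(f, h, seed=None):
    """Noise-free image f ** h (non-circular convolution via zero-padded FFT)
    followed by Poisson photon noise; kappa is set to unity as in the text."""
    rng = np.random.default_rng(seed)
    shape = (f.shape[0] + h.shape[0] - 1, f.shape[1] + h.shape[1] - 1)
    mean = np.real(np.fft.ifft2(np.fft.fft2(f, shape) * np.fft.fft2(h, shape)))
    mean = np.clip(mean, 0.0, None)             # remove tiny negative FFT residue
    return rng.poisson(mean).astype(float)
```

Zero-padding to the full convolution size keeps the convolution non-circular, matching the use of a non-circular system matrix in the text.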
In our first experiments, we considered the effects of Poisson photon noise only, i.e. no background noise. Then the conditional probability for g is g|f ~ Poisson(Hf). In this case, it is easy to write LR(g, sj) since there is no background variability, and (3) becomes LR(g, sj) = p(g|b + sj)/p(g|b). For the signal-present case and Poisson noise, we get

p(g|b + sj) = Π_m exp(−[H(b + sj)]m) ([H(b + sj)]m)^gm / gm!     (10)
where gm indicates the observed counts in the mth detector. The notation [y]m indicates the mth element of vector y. The signal-absent likelihood has the same form as (10) but with no sj. Combining the likelihoods into a ratio, we can write the appropriate decision strategy for this case as (4) with LR(g, sj) given by

LR(g, sj) = Π_m ([H(b + sj)]m / [Hb]m)^gm exp(−[Hsj]m)     (11)
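In the Poisson-only case, the likelihood ratio is most conveniently evaluated as a log-likelihood ratio, since the gm! factors cancel between the two hypotheses. A hedged sketch (names ours; the inputs are the observed counts and the two noise-free mean images):

```python
import numpy as np

def poisson_log_lr(g, mean_absent, mean_present):
    """log LR(g, sj) for Poisson data. The gm! terms cancel between the
    signal-present and signal-absent likelihoods, leaving a simple sum
    over detector elements m."""
    g = np.asarray(g, dtype=float)
    m0 = np.asarray(mean_absent, dtype=float)   # noise-free image, no signal
    m1 = np.asarray(mean_present, dtype=float)  # noise-free image, with signal
    return float(np.sum(g * np.log(m1 / m0) - (m1 - m0)))
```

Working in the log domain also avoids numerical overflow when the products run over many detector elements.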
In these experiments, we use the Gaussian model of background variability described earlier, and again use the quantity “amp” to gauge the amount of background noise. We previously described background noise in terms of a continuous representation via the use of the power spectrum in (5). It will be convenient to re-express the fact that the background noise is Gaussian in a discrete notation. In discrete notation, the object noise follows a normal distribution

f ~ N(µf, Kb)     (12)
where µf, the mean object, was defined earlier. The background covariance matrix Kb has its rows equal to appropriately shifted versions of the discrete form of the autocorrelation function corresponding to the power spectrum in (5).
For emission imaging, one would like to again use a conditional Poisson data model g|f ~ Poisson(Hf). Using (12) and the Poisson data model g|f ~ Poisson(Hf), we cannot derive the pdf p(g) nor LR(g, sj) in closed form. One would be forced to resort to computationally intensive numerical integration methods (Kupinski et al 2003) to evaluate likelihoods via the integrals on the right-hand side of (3). To avoid using such computationally intensive methods, we have made a fairly accurate approximation to the Poisson noise model that enables us to derive closed-form, easily evaluated expressions for the likelihoods by computing p(g|Hj) directly (middle term in (3)) without integrating over object noise. We take the approximate conditional Poisson data model to be Gaussian

g|f ~ N(Hf, diag(Hb̄))     (13)
where diag(y) indicates a diagonal matrix with diagonal elements given by vector y. Note that if we had used diag(Hf) instead of diag(Hb̄) in (13), we would have obtained the familiar Gaussian approximation to Poisson noise. However, using diag(Hf) incurs its own problems with numerically intensive computation, and using diag(Hb̄) is not a bad approximation, as we shall later demonstrate experimentally. To gain insight on the approximation (13), we list the first two central moments (Abbey 1998) of the correct Poisson-Gaussian model (we cannot write p(g), but can write its first two moments). The first two moments are ḡ = Hµf and Kg = diag(Hµf) + H Kb H^T. Of interest is the fact that for the modified data model in (13), we can show that the first two moments are:

ḡ = Hµf,     Kg = diag(Hb̄) + H Kb H^T     (14)
Note that the mean in (14) equals that for the Poisson-Gaussian model while the covariance differs only slightly given that the signal to be localized is weak.
To see (14), we use (12) to express f = u + µf where u ~ N(0, Kb). We then use (9) and (13) to express the data as g = Hf + v, where the data noise v ~ N(0, diag(Hb̄)). We can combine the above equations for f and g to express g further as g = H(u + µf) + v = Hu + v + Hµf and note that the random terms Hu and v are both zero mean. Therefore, ḡ = Hµf. Since the covariance of u is Kb, the covariance of Hu is H Kb H^T. Combining these facts regarding means and variances, we can finally write g ~ N(Hµf, diag(Hb̄) + H Kb H^T), thus justifying the discussion regarding the first two moments. Furthermore, we have g|Hj ~ N(H(b̄ + sj), diag(Hb̄) + H Kb H^T) and g|H0 ~ N(Hb̄, diag(Hb̄) + H Kb H^T).
With the closed-form expressions for p(g|Hj) and p(g|H0) now available, we can easily write the likelihood ratio for the photon + background noise case as

LR(g, sj) = exp[(Hsj)^T K^{−1}(g − Hb̄) − ½ (Hsj)^T K^{−1}(Hsj)],  where K = diag(Hb̄) + H Kb H^T     (15)
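For Gaussian likelihoods sharing a common covariance K, the quadratic terms in g cancel and the log-likelihood ratio reduces to a prewhitened matched filter that is linear in g. A sketch under that assumption (names ours; mean_absent is the signal-absent mean image, h_sj the imaged signal, both flattened to vectors):

```python
import numpy as np

def gaussian_log_lr(g, mean_absent, h_sj, K):
    """log LR for two Gaussian hypotheses sharing covariance K:
    mean_absent : signal-absent mean image (vector)
    h_sj        : imaged signal, i.e. the signal blurred by the aperture
    K           : common data covariance matrix"""
    w = np.linalg.solve(K, h_sj)                # K^{-1} (imaged signal)
    return float(w @ (g - mean_absent) - 0.5 * (w @ h_sj))
```

Solving the linear system once per candidate signal location avoids forming the explicit inverse of K.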
To determine ALROC, we first plot the LROC curve. To do so, we determine a set of Nτ thresholds from the range of values of t+ and t−. For each threshold τ, the probability of correct localization PCL(τ) and the false-positive rate PFP(τ) are determined by computing the fractions of t+ and t− observer responses that exceed the given threshold. By sweeping all thresholds, the LROC curve is plotted. Note that in computing PCL(τ), one divides the number of correctly localized t+ that exceed τ by the total number of t+ (including the discarded incorrectly localized t+), and in computing PFP(τ), one divides the number of t− that exceed τ by the total number of t−. Then ALROC is obtained by simple trapezoidal integration of the LROC curve.
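The ALROC computation described in this paragraph can be sketched directly. The function below is our own illustration; note that the denominator of PCL(τ) counts all signal-present trials, mislocalized ones included:

```python
import numpy as np

def empirical_alroc(t_pos, localized, t_neg, n_thresh=200):
    """Estimate ALROC by threshold sweep.
    t_pos     : observer responses on signal-present images
    localized : localized[i] is True iff response i was correctly localized
    t_neg     : observer responses on signal-absent images"""
    t_pos = np.asarray(t_pos, dtype=float)
    t_neg = np.asarray(t_neg, dtype=float)
    loc = np.asarray(localized, dtype=bool)
    lo = min(t_pos.min(), t_neg.min()) - 1e-9
    hi = max(t_pos.max(), t_neg.max()) + 1e-9
    taus = np.linspace(lo, hi, n_thresh)
    # P_CL divides by ALL signal-present trials, mislocalized ones included.
    p_cl = np.array([((t_pos > tau) & loc).sum() / t_pos.size for tau in taus])
    p_fp = np.array([(t_neg > tau).mean() for tau in taus])
    order = np.lexsort((p_cl, p_fp))            # increasing P_FP
    x, y = p_fp[order], p_cl[order]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))
```

In the limit where every signal-present response is correctly localized, this reduces to the AROC computation of the detection-only case.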
In calculating signal-absent observer responses, for each experiment, we use 10000 signal-absent sample images. So for the Poisson-noise-only case, we generate 10000 noisy images of a fixed background, and for the photon + background noise case we generate 10000 noisy images from 10000 noisy objects. In signal-present images, a signal can appear at any one of the 400 locations on the 20 × 20 search grid. We take pR(sj) to be uniform and generate 26 signal-present images per location, for a total of 26 × 400 = 10400 signal-present images for each experiment. For the Poisson-noise-only case, we use a fixed background but add 10400 signals (26 × 400), resulting in 10400 noisy images. For the photon + background noise case, we generate 10400 noisy backgrounds, one for each of the 10400 signals, and thus form 10400 noisy images. In all experiments, we use a Gaussian signal with a peak intensity of 1.5 intensity units and σs = 1.5 pixel units.
Our previous definition of tolerance is apropos for a continuous object space. Since our object appears on pixels, we shall use the label “tol” for tolerance where tol = m, an odd integer, implies a region of diameter m pixel units and tol = 1 implies a one-pixel region, i.e. the guessed signal location must be coincident with the signal.
Figure 5(a) shows our first experimental results. The abscissa is σa/σs, the aperture width normalized by signal width, and the ordinate is ALROC. The value “tol” (tolerance) indicates the diameter of T(l) as explained above, and curves are plotted for various values of tol. The top curves in figure 5, labeled “ROC”, correspond to tol = ∞, i.e. a pure detection task.
As seen in figure 5(a), for the most restrictive localization at tol = 1, one intuitively expects that a high-resolution image is needed, and indeed, the optimal normalized aperture width (= 1.1 pixel units), given by the location of the peak in the ALROC curve, is about equal to the signal size. As tolerance relaxes to higher values, the optimal aperture size increases (the ALROC peak shifts rightward) to let in more counts, and the ALROC value rises. Finally, for the case of detection only (tol = ROC) and no background noise, the AROC curve shows the startling result that an infinitely large aperture (i.e. no aperture) is best! This classic “gaping aperture” result was observed earlier for a detection-only signal-known/background-known case with an ideal observer in Myers et al (1990) and for a signal-known/background-known case with a near-ideal observer in Tsui et al (1978). On the other hand, the ALROC curves demonstrate that when a localization task is added to the detection task, we obtain a finite optimal aperture size - even without background noise. This is a new and non-obvious result.
The plot in figure 5(b) follows the same format as that in figure 5(a). The only difference is that the approximate Poisson noise (13) is used. The result is virtually identical to that in figure 5(a), with the curves overlapping nearly perfectly, thus justifying our use of the approximation (13).
The plot in figure 5(c) summarizes some important observations gleaned from the results in figure 5(a). The abscissa is tolerance. The ordinate is the optimal aperture width normalized by signal width. The vertical bars above/below each point indicate the asymmetric half-width of the ALROC curve at 95% max. The bar lengths are thus a measure of the sensitivity of the ALROC FOM to aperture width, with smaller bars implying a greater sensitivity. As the search tolerance decreases, the optimal aperture width shrinks and ALROC performance becomes more sensitive to fluctuations about the optimal pinhole diameter. For a pure detection task, the error bar is infinite and not shown.
An important result is shown in figure 6. Here background noise of amp = 0.3 intensity units and correlation width σcw = 4.5 pixel units and approximate photon noise are both included. In figure 6(a), ALROC vs normalized aperture width is again plotted for various tolerances. As tolerance decreases, the optimal aperture width shrinks and the ALROC curve is lowered. The ROC curve in figure 6(a) has a peak and so no “gaping aperture” is observed when background noise is present. This result is consistent with those of the signal-location-known detection-only experiments with photon + background noise shown in Myers et al (1990) and in Tsui et al (1978).
The plot in figure 6(b) follows the same conventions as that in figure 5(c) except that the abscissa point marked “infinity” corresponds to the ROC curve at top of figure 6(a). With background noise added, the optimal aperture size is smaller than the case for no background noise, as can be seen by comparing figure 6(b) and figure 5(c).
In figure 7, we once again plot ALROC and AROC vs normalized aperture width, but each plot is at a fixed tolerance of 5 pixel units with the curves in each plot indexed by background noise amplitude “amp”. In figure 7(a), as background noise is lowered, ALROC performance increases and optimal aperture width, given by the location of the peaks in the ALROC curve, increases. In figure 7(b), we repeat the experiment for a pure detection task and observe a trend similar to that in figure 7(a). The result in figure 7(b) for a detection-only task is consistent with results reported in Myers et al (1990), but, interestingly, the same trend in optimal aperture variation with background noise amplitude is also seen in the joint detection and localization case of figure 7(a). In figure 7(c), we examine the size and the full-width at 95% max of the optimal aperture as background noise is increased. As amp rises, the optimal aperture shrinks.
In figures 5, 6 and 7, we used the bootstrap (Zoubir and Boashash 1998) method to calculate 95% confidence interval error bars on the points displayed. In all cases, the error bars (each less than 1% of the ALROC value itself) were tiny - too small to display. The separation of the curves on these plots is thus significant.
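A percentile-bootstrap confidence interval in the spirit of Zoubir and Boashash (1998) can be sketched as below. This is a simplification of what the study's error bars would require: there, one would resample the observer responses and recompute ALROC on each resample, whereas here a generic statistic of per-trial values is resampled (function and parameter names are ours):

```python
import numpy as np

def bootstrap_ci(values, stat=np.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap (1 - alpha) confidence interval for a statistic
    of per-trial values: resample the trials with replacement, recompute
    the statistic, and take the alpha/2 and 1 - alpha/2 quantiles."""
    rng = np.random.default_rng(seed)
    vals = np.asarray(values, dtype=float)
    reps = np.array([stat(rng.choice(vals, size=vals.size, replace=True))
                     for _ in range(n_boot)])
    return (float(np.quantile(reps, alpha / 2.0)),
            float(np.quantile(reps, 1.0 - alpha / 2.0)))
```

With alpha = 0.05 this yields the 95% intervals quoted in the text; narrow intervals relative to the separation of the curves indicate that the observed ordering is statistically meaningful.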
In a conspicuity test, one tracks the FOM as exposure time T is increased. With large enough T, the relative effects of photon noise become insignificant and the performance, under certain conditions, could become limited by background noise. (See Myers et al 1990 for a discussion of conspicuity limits.) For detection tasks, conspicuity limits have been reported in Barrett (1990). Here, we inquire whether such limits can also occur for a joint detection and localization task.
We performed the conspicuity test seen in figure 8. In figure 8(a), both background and photon noise were included. At low T, photon noise dominates, but at high T, the ALROC and AROC approach an asymptote (i.e. conspicuity limit) due to background noise. In figure 8(b), we included Poisson noise only and observed that ALROC performance quickly saturates to 1.0 (perfect performance - no conspicuity). Note that in these studies, our ideal observer must itself change with T as follows: in (15), b̄ → T b̄ and sj → T sj.
We have explored a previously studied problem - aperture optimization in emission imaging - in the context of a detection and localization task using an ideal observer. We were particularly interested in exploring the effects of adding localization to the detection task. Our recent derivation of an ideal ALROC observer (Khurd and Gindi 2005) made this endeavor possible.
Our first conclusion, seen in figure 6(a), corresponds to intuition: the optimal aperture width shrinks and ALROC performance is degraded as the localization requirement becomes more stringent. In addition, as seen in figure 6(b), the sensitivity of ALROC performance to the optimal aperture width also increases as localization becomes more stringent. Thus a system designer faced with a clinical task demanding a high degree of localization accuracy would need to consider a smaller aperture than would suffice without localization.
In addition, we observed that the behavior of our joint detection and localization performance parallels that of a roughly similar study (Myers et al 1990) involving a detection-only task with similar background noise but with signal location known. For both cases, the optimal aperture width shrinks (see figure 7) as background noise grows. Also, conspicuity limits can be observed (see figure 8) for both cases.
Finally we observed the interesting result that for the case of background noise absent, our joint detection and localization task led to a finite aperture size, whereas for pure detection, our results and those of others (Myers et al 1990 and Tsui et al 1978) showed an infinitely large “gaping” aperture. For both joint detection and localization and detection tasks, the addition of background noise yields a finite aperture size.
In Khurd et al (2006), we did work similar to that of this paper, but for a simple one-dimensional (1-D) imaging system. The 2-D problem is not only more realistic (and much more computationally complex), but also differs in important ways. In 2-D, the count level grows as the square of aperture width while in 1-D it grows linearly with aperture width, so in a 2-D system, a greater advantage in photon noise reduction is seen as aperture width grows. Also, if, for a given tolerance distance, the potential number of mis-localizations in 1-D object space is N, then it grows as N² in 2-D. Despite these differences, the rough qualitative behavior for the 1-D system matched that of the 2-D system.
In this work, we used a simple planar imaging system as well as a simple Gaussian background noise model. It would be interesting to extend our current work to more realistic background noise models (using the techniques in Kupinski et al 2003 and Park et al 2005) and to more useful emission imaging systems such as SPECT (using, for example, techniques in He et al 2007). In Zhou and Gindi (2008), we have obtained preliminary results for a simple SPECT system.
This work was supported by NIH NIBIB 02629.