In SPECT the collimator is a crucial element of the imaging chain and controls the noise-resolution tradeoff of the collected data. Optimizing collimator design has been a long-studied topic, with many different criteria used to evaluate the design. One class of criteria is task-based, in which the collimator is designed to optimize detection of a signal (lesion). Here we consider a new, more realistic task: the joint detection and localization of a signal. Furthermore, we use an ideal observer - one that attains theoretically maximum task performance - to optimize collimator design. The ideal observer operates on the sinogram data. We consider a family of parallel-hole low-energy collimators of varying resolution and efficiency and optimize over this set. We observe that for a 2-D object characterized by noise due to background variability and a sinogram with photon noise, the optimal collimator tends to be of lower resolution and higher efficiency than equivalent commercial collimators. Furthermore, this optimal design is insensitive to the tolerance radius within which the signal must be localized, so for this scenario the addition of a localization task does not change the optimal collimator. Optimal collimator resolution worsens as signal size grows, and improves as the level of background variability noise increases. These latter two trends are also observed when the detection task is signal-known-exactly and background variable.
In SPECT imaging with collimators, collimator properties control the noise and resolution of the collected data. This tradeoff ultimately has a significant effect on the performance (by a human or mathematical observer) on various types of detection tasks. Indeed, measures of performance on such tasks (Barrett and Myers 2004) can be used to optimize collimator properties.
We are interested in using task performance measures to optimize parallel-hole collimator properties for SPECT. In particular, we are interested in extending the oft-used task of signal-present/signal-absent detection of a signal (lesion) at a known location to a medically more realistic task in which an observer is required to jointly detect and localize (within an allowed search tolerance) a signal that is known exactly except for location. Given this escalation in task complexity, do the optimal collimator properties change relative to those obtained using a simpler task involving detection only? We presented initial work on this topic at two conferences (Zhou and Gindi 2008a) and (Zhou and Gindi 2008b). We also presented similar work but in the context of pinhole optimization for planar emission imaging systems in Zhou et al (2008). The purpose of this paper is to show a methodology for optimizing collimators in SPECT for this new type of task, and to present some initial results using this approach.
To optimize the collimator, we need an observer and a scalar figure of merit (FOM). The ideal observer has been promoted (Barrett and Myers 2004) as appropriate for optimizing and comparing imaging systems. This approach has been used in the context of optimizing pinhole SPECT and planar emission imaging, as we discuss in section 6. The ideal observer is a mathematical observer that has full knowledge of the imaging system and all probability densities associated with the noisy data. It is ideal in the sense that, amongst all observers, it maximizes a generally accepted FOM such as area under the ROC curve (AROC). As espoused in Barrett and Myers (2004), ideal observers operate on the raw collected data - here the sinogram - not the reconstruction. For a reconstruction algorithm that preserves all information in the sinogram, the ideal observer performance operating on the reconstruction is the same as that achieved operating on the sinogram (page 830, Barrett and Myers 2004). All that is required for equal performance in the sinogram and reconstruction domains is that the reconstruction operator be invertible; neither stationarity of the noise sources nor linearity of the imaging system matters. Hence the ideal observer performance on the sinogram forms an upper limit on the task performance on any reconstruction of the sinogram data. In addition, the computational complexity is much worse operating in the reconstructed domain, another reason to apply the ideal observer to the sinogram.
Other attempts to optimize apertures in SPECT have used approaches different from the ideal-observer method that we follow here. For example, Zeng and Gullberg (2002) applied a human-performance emulating mathematical observer - a channelized Hotelling observer (CHO) - in the reconstruction domain to optimize a parallel-hole collimator for a detection task. Indeed, collimator optimization using detection-task performance criteria in the context of SPECT and planar imaging has had a long history. We review this work in section 6. However, one way that our work is differentiated from previous work is that we consider a fundamentally different type of task in which the ideal observer jointly detects the presence of the signal and also estimates its location.
This joint detection and localization task is medically more realistic than the pure detection task (even when signal location is unknown) since a physician must search an image for a lesion. A general ideal observer for this task was proposed by Khurd and Gindi (2005). The relevant FOM is the area under the localization ROC curve (ALROC) and the associated ideal observer maximizes ALROC. In previous work (Zhou et al 2008) we applied this task to a planar pinhole imaging system to optimize pinhole size.
A different approach to image quality evaluation is espoused in the scan-statistics literature (Swensson 1996 and Popescu and Lewitt 2006). This approach typically uses detection or joint detection and localization FOMs. In the scan-statistics approach, one eschews the often unobtainable detailed statistical model of the background and data noise, and directly uses the image of some phantom to obtain needed quantities. One simply uses an empirical observer to scan the image and accumulate histograms, for instance, of the observer responses when the observer is centered on a signal in the image space, or the max-scan observer responses for signal-absent images. These histograms can be used to estimate (by kernel methods for example) pdf’s and cdf’s needed by scan-statistic approaches. Armed with such pdf’s, one can derive task performance FOM’s such as ROC curves (and AROC) or LROC curves (and ALROC).
In addition, we mention the somewhat different work by Gifford et al (2005) in which model observers designed to emulate human performance in detection and localization tasks were proposed. These were used to optimize reconstruction strategies (Bruyant et al 2004), but not imaging systems.
In section 2, we describe collimator models and image formation. In section 3, we review the ideal observer for joint detection and localization. In section 4, we present our simulation methodology. Our simulation results are provided in section 5 followed by discussion and conclusion in section 6.
The parallel-hole collimator system is here characterized by the lattice structure of the hole pattern (hole shape and hole array), bore diameter, bore length and septal thickness. The depth-dependent collimator resolution is determined by bore diameter D and bore length l as well as intrinsic camera resolution σ0. Many authors (Gifford and King 2007, Zeng et al 1998 and King et al 1997) have modeled the depth-dependent FWHM (full-width half-maximum) of a point response from a source in air at depth d as a linear relation with a given slope and intercept. A linear relationship is useful if the depth-resolution model is to be obtained from only a few measurements, but since we presume knowledge of the collimator properties D, l and σ0, we can use the more exact nonlinear relation (Gifford and King 2007).
FWHM(d) = √{[D(l + d)/l]² + σ0²},     (1)

where d is the distance from the source to the crystal face (not the collimator face).
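As a concrete illustration, the depth-resolution model can be sketched in a few lines. This is a hypothetical sketch: the quadrature form (geometric bore blur combined with intrinsic resolution) follows (1), but the parameter values and variable names are our assumptions, not values from the paper.

```python
import math

def fwhm(d, D, l, sigma0):
    """System resolution at depth d: geometric collimator blur added in
    quadrature with the intrinsic camera resolution (our reading of (1))."""
    geometric = D * (l + d) / l      # geometric FWHM of the bore response
    return math.sqrt(geometric**2 + sigma0**2)

# Resolution degrades (FWHM grows) with distance from the crystal face.
print(fwhm(100.0, 1.5, 24.0, 3.5), fwhm(200.0, 1.5, 24.0, 3.5))
```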
The collimator efficiency is determined by bore diameter D, bore length l and septal thickness (SPT). Note that non-circular bore profiles and the array pattern of bore holes demand definitions of “bore diameter” and “septal thickness”. Following Gunter (2004), the bore diameter D is defined by equating the bore cross-sectional area A to that of a circle of diameter D, i.e. A = πD²/4. We shall use a hexagonal bore shape, and in this case the relation leads to D = 1.819S, where S is the length of one of the hexagonal sides. The septal thickness SPT is determined by the bore diameter D and the hole separation HOLSEP, where HOLSEP is defined as the distance between the centers of adjacent bores in the lattice. We shall use a hexagonal lattice of hexagonal bores, in which case (Gunter 2004) SPT = HOLSEP − √3S, with √3S the across-flats width of a hexagonal bore. With these definitions, and for our case of hexagonal holes in a hexagonal lattice, the average collimator efficiency is given by (Gunter 2004) ε = A²/(4πl²Acell), where Acell = (√3/2)HOLSEP² is the area of the lattice unit cell.
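The hexagonal-bore bookkeeping can be sketched as follows. The function names are ours, and the efficiency expression is the standard average-geometric-efficiency form A²/(4πl²·Acell); treat it as an assumption-laden sketch rather than a reproduction of Gunter's exact formula.

```python
import math

def bore_diameter(S):
    """Area-equivalent bore diameter: pi*D^2/4 equals the hexagon area,
    giving D ~ 1.819*S for a hexagon of side S."""
    hex_area = 1.5 * math.sqrt(3.0) * S**2
    return math.sqrt(4.0 * hex_area / math.pi)

def septal_thickness(S, holsep):
    """Septum between flat sides of adjacent hexagonal bores;
    sqrt(3)*S is the across-flats width of the hexagon."""
    return holsep - math.sqrt(3.0) * S

def efficiency(S, l, holsep):
    """Average geometric efficiency in the standard
    A_hole^2 / (4*pi*l^2*A_cell) form; A_cell is the hexagonal-lattice
    unit cell area."""
    a_hole = 1.5 * math.sqrt(3.0) * S**2
    a_cell = (math.sqrt(3.0) / 2.0) * holsep**2
    return a_hole**2 / (4.0 * math.pi * l**2 * a_cell)
```

Efficiency rises with bore size and falls with bore length, the tradeoff spanned by the collimator family in table 1.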
We chose a family of 12 collimators to span a resolution-efficiency tradeoff. We seek that collimator amongst the 12 that yields the best performance. The tradeoff is shown graphically in figure 1. Table 1 summarizes the collimator parameters. The UHR (Ultra High Resolution), HR (High Resolution), GAP (General All Purpose) and UHSens (Ultra High Sensitivity) collimators use parameters obtained from typical low-energy parallel-hole clinical collimators appropriate for Tc-99m 140 keV imaging. The C1, ···, C8 collimators are designed to span the resolution-efficiency curve. The last row of the table lists efficiency relative to that of the GAP collimator. We note that in our simulations, we assume a monoenergetic 140 keV Tc-99m source.
We used a 2-D SPECT geometry. The object field comprised a 64 × 64 array with pixel size 0.42 cm. The detector gantry had 128 bins of width 0.42 cm. The radius of rotation was 20 cm and we used 65 equispaced angles over 360°. In emission imaging, there are two sources of noise that greatly affect detection and joint detection and localization performances. One is the usual photon noise. However, a second source of noise, background variability (BV), is due to the variations of anatomy and uptake in a class of patients to be imaged (Barrett and Myers 2004). A completely realistic model of background variability noise is too difficult to handle analytically, but we use a correlated Gaussian pdf to account for it.
Let the N ×1 vector f be the object (N = 64×64). The object comprises a random background b and, if a signal is present at location j, a signal sj , where the form of sj is known. Thus f = b for the signal-absent case and f = b + sj for the signal-present case. The background b occupies all 64 × 64 pixels and comprises a DC plateau to which we add stationary zero-mean multivariate Gaussian noise whose power spectrum is given by
S(ν) = amp² · 2πσcw² · exp(−2π²σcw²|ν|²).     (3)

This power spectrum is characterized by a correlation width σcw (pixel units) and a factor “amp”. To characterize the “amount” of BV noise, we use the notion of noise amplitude, given by the pointwise standard deviation (= amp) of the BV noise. Hence the stochastic Gaussian background variability is given by b ~ N(b̄, Kb), where b̄ denotes the DC plateau and the background covariance matrix Kb has its rows equal to appropriately shifted versions of the discrete form of the autocorrelation function corresponding to the power spectrum in (3).
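One common way to synthesize such a stationary correlated Gaussian background is to filter white noise by the square root of the desired power spectrum in the Fourier domain; a sketch, in which the DC level of 3 and all variable names are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, amp, sigma_cw = 64, 1.0, 4.5

# Gaussian power spectrum with correlation width sigma_cw (pixel units);
# white noise filtered by sqrt(spectrum) acquires this spectrum.
k = np.fft.fftfreq(N)                        # spatial frequency, cycles/pixel
kx, ky = np.meshgrid(k, k)
spectrum = np.exp(-2.0 * (np.pi * sigma_cw) ** 2 * (kx**2 + ky**2))
white = rng.standard_normal((N, N))
field = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(spectrum)).real
field *= amp / field.std()                   # pointwise std set to amp
background = 3.0 + field                     # assumed DC plateau + BV noise
```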
The signal has a square profile within which the intensity is constant. We shall use various signal sizes as described in section 5.3. Note that the signal can be located on a search grid anywhere in the object field with equal probability. (However, for large signals that fall off the edge of the object field we shrink the search grid accordingly.) Figure 2(a) shows an object with signal present. The correlation width σcw = 4.5 leads to background fluctuations of a scale somewhat broader than the 3 × 3 signal. We shall use a DC plateau value of 3 for our experiments and specify the signal amplitude for each experiment.
In SPECT, the imaging equation is

g = τHf + n,     (4)

where H is the system matrix, τ the exposure time and n the photon noise. We take τ = 1, corresponding to a unit exposure time, a point discussed below. The M × 1 vector g with M = 128 × 65 is the sinogram. Note that g, f and n are all random quantities.
The system matrix H accounts for the geometric response described earlier. We use a Gaussian diffusion scheme (Zeng et al 1998) to implement the depth-dependent blur of (1). We note that the diffusion scheme in Zeng et al (1998) was originally used to implement a linear depth-resolution curve, but it is easily modified to accommodate the nonlinear quadrature model (1). The system matrix also models attenuation. The object sits in a water bath with an attenuation coefficient of 0.146 cm−1, appropriate for 140 keV. Since the attenuating medium occupies the entire object space, the attenuation is somewhat severe for these simulations. We do not model scatter in this study. Figure 2(b) shows a sinogram. The noise in the sinogram is due to photon noise and BV noise. As mentioned earlier, the ideal observer operates on the sinogram directly. However, in figure 2(c) we show a reconstruction (using a MAP-COSEM algorithm (Hsiao et al 2004)) of the sinogram in figure 2(b) to give a qualitative feel for the nature of the reconstruction. While a reconstruction algorithm typically contains a regularization term that controls the noise-resolution tradeoff in the reconstructed image, our ideal observer, operating on the sinogram, ignores this regularization. We again emphasize that the collimator is optimized from the sinogram only, and that the reconstructions displayed in figure 2(c) (and later in figures 6(a), 6(b) and 6(c)) are for illustrative purposes only. We standardized the count level at 230K counts for a signal-absent object and a GAP collimator. This count level was chosen after examining typical per-slice count levels for Tc-99m cardiac and hepatic studies reported in the literature. Count levels for other collimators were scaled according to the relative (to GAP) efficiencies in Table 1.
Note that in (4), the exposure time τ is set to unity. Clearly, τ is an important factor in task performance. As τ → ∞, the background noise dominates, and as τ → 0 the photon noise dominates. In our work, we explore the effects of τ indirectly through the variation of the noise amplitude amp. For τ = 1, raising amp corresponds to a greater proportion of noise due to background variability, and lowering amp to a greater proportion of noise due to photon noise. Thus, varying amp is equivalent to monotonically varying τ.
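A toy forward model illustrating (4) can be written in a few lines. Everything here is a stand-in: H is a random matrix, not the paper's SPECT system matrix, and the dimensions are chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dimensions; H is a random stand-in for the true SPECT system matrix.
M_bins, N_pix, tau = 32, 16, 1.0
H = rng.uniform(0.0, 0.1, size=(M_bins, N_pix))
f = rng.uniform(50.0, 100.0, size=N_pix)     # object = background (+ signal)
g = rng.poisson(tau * H @ f)                 # Poisson photon noise on tau*H*f
```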
In this section, we review the ideal decision strategy and accompanying ideal observer for the joint detection and localization task. We first revisit the familiar case of a 2-class (signal-present/absent) detection problem (Barrett and Myers 2004) in which the signal is known exactly and the background is variable. An ideal observer computes a test statistic t(g) and compares this to a threshold τ to decide the signal-present hypothesis H1 if t(g) > τ and the signal-absent hypothesis H0 otherwise. As is well known, the ideal scalar test statistic is given by the likelihood ratio LR(g) = p(g|H1)/p(g|H0), where p(g|H0) and p(g|H1) may be complex expressions accounting for BV noise and photon noise.
For this case, one can summarize observer performance by a FOM given by AROC, the area under the ROC curve. One can compute this by generating an ensemble of signal-present t+ and signal-absent t− responses and integrating areas under the resulting bimodal histogram as τ is swept (Barrett and Myers 2004). This generates the familiar ROC curve, as seen in figure 3, which plots PTP(τ), the probability of true detection, versus PFP(τ), the probability of false detection. AROC can be computed by numerical integration. The likelihood ratio is ideal in that it maximizes AROC.
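The empirical AROC from the t+ and t− ensembles can also be computed without explicitly sweeping τ, via the equivalent Mann-Whitney statistic; a sketch (function name ours):

```python
import numpy as np

def aroc(t_plus, t_minus):
    """Empirical area under the ROC curve from signal-present (t_plus) and
    signal-absent (t_minus) responses, via the Mann-Whitney statistic
    (equivalent to sweeping the threshold tau and integrating)."""
    tp, tm = np.asarray(t_plus, float), np.asarray(t_minus, float)
    wins = (tp[:, None] > tm[None, :]).sum()
    ties = (tp[:, None] == tm[None, :]).sum()
    return (wins + 0.5 * ties) / (tp.size * tm.size)
```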
To introduce localization into the problem, we digress into a discussion of the LROC (localization ROC) curve. The LROC curve, seen in figure 3, plots the probability of correct joint detection and localization PCL(τ), vs the false positive probability PFP(τ). The area under the LROC curve, ALROC, is a FOM for joint detection and localization performance just as AROC is for detection performance. To compute ALROC in a simulation experiment, one can follow a procedure similar to that described for computing AROC. Here, one computes histograms of t+ and t− as before, but includes in t+ only those cases in which the signal was correctly localized within an allowed tolerance about the true location (Zhou et al 2008)
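A matching sketch for the empirical ALROC: it is the AROC computation above a threshold sweep, except that a signal-present response contributes only if the signal was also correctly localized within tolerance (function and flag names are ours):

```python
import numpy as np

def alroc(t_plus, localized, t_minus):
    """Empirical area under the LROC curve: as for AROC, but a
    signal-present response counts only if the signal was also localized
    within tolerance (localized[i] flags each t_plus[i])."""
    tp, tm = np.asarray(t_plus, float), np.asarray(t_minus, float)
    loc = np.asarray(localized, bool)
    wins = ((tp[:, None] > tm[None, :]) & loc[:, None]).sum()
    return wins / (tp.size * tm.size)
```

When every case is localized correctly (e.g. infinite tolerance), this reduces to the tie-free AROC, mirroring the LROC-to-ROC limit discussed below.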
Khurd and Gindi (2005) derived such a decision strategy for the detection and localization problem that maximizes ALROC. We briefly summarize it here. Let sj be a signal (of known form) located at j. Let pR(sj) be the prior probability that the signal will be found at j. In this work, we take pR(sj) to be uniform. The notion of localization is meaningless without specifying a tolerance distance about the true location within which localization is deemed correct. Let T(l) indicate a circular tolerance region centered on l. We shall evaluate FOM’s as a function of the diameter of T(l). Let l = 1, ···, L also index signal location. We define the hypothesis Hj to mean that the signal is located at j. The hypothesis H0 means signal absent. Thus we have an (L + 1)-hypothesis decision problem. We shall also define a likelihood ratio LR(g, sj) indexed to the signal at j:

LR(g, sj) = p(g|Hj)/p(g|H0).     (5)
(In section 4.2, we will give explicit expressions for LR(g, sj) for various cases.) With the above definitions we can write our optimal decision strategy for detection and localization:

l̂(g) = arg max_l Σ_{j∈T(l)} pR(sj) LR(g, sj); decide signal present (at location l̂(g)) if max_l Σ_{j∈T(l)} pR(sj) LR(g, sj) > τ, and signal absent otherwise.     (6)
Our decision strategy will detect the presence or absence of the signal in the data g and, if a signal is detected, it will report a location l(g) where it deems the signal to be present. If the true location of the signal falls within a tolerance region T(l(g)) about the reported location l(g), we will say that the observer has correctly localized the signal. We shall use “tolerance” to simply indicate the diameter of T(l), and also note that in our experiments, the tolerance does not change with position. The decision rule (6) indeed maximizes ALROC and also minimizes Bayes risk under certain cost constraints (Khurd and Gindi 2005).
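The decision strategy can be sketched as follows. This is our reading of the rule: sum the prior-weighted likelihood ratios over each candidate tolerance region, report the best region's center, and threshold the maximized sum. All array names are ours.

```python
import numpy as np

def detect_and_localize(LR, prior, tol_mask, tau):
    """LR[j]: likelihood ratio for a signal at j; prior[j]: pR(s_j);
    tol_mask[l, j]: True iff location j lies in the tolerance region T(l).
    Returns (signal_present, reported_location)."""
    scores = tol_mask.astype(float) @ (prior * LR)   # sum over each T(l)
    l_hat = int(np.argmax(scores))
    return bool(scores[l_hat] > tau), l_hat
```

With tol_mask the identity matrix (a one-pixel tolerance), the rule reduces to thresholding the single best prior-weighted likelihood ratio.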
Clearly, as tolerance → ∞, the localization requirement disappears and we end up with a detection-only problem with signal location unknown. In this case, the LROC curve becomes the ROC curve, and maximizing ALROC is the same as maximizing AROC for the case of signal location unknown.
In our first experiments, we considered the effects of Poisson photon noise only, i.e. no BV noise. Then the conditional probability for g is g|f ~ Poisson(Hf). In this case, it is easy to write LR(g, sj) since there is no background variability. The likelihoods follow simple independent Poisson noise expressions and the ratio is easy to calculate. We did this for a pinhole system in Zhou et al (2008) and the derivation is similar for SPECT (we need only substitute our SPECT system matrix for the pinhole system matrix). The resulting LR(g, sj) is given by

LR(g, sj) = ∏_{m=1}^{M} exp(−[Hsj]_m) ([H(b̄ + sj)]_m / [Hb̄]_m)^{g_m},     (7)

where the notation [y]m indicates the mth element of vector y, gm is the number of observed counts in the mth detector bin and b̄ is the known (nonrandom) background.
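Evaluating a product of this form directly can overflow or underflow for realistic sinogram sizes, so one would typically accumulate the sum in the log domain; a sketch under our reconstruction of the Poisson likelihood ratio, with Hb and Hs the projected background and signal:

```python
import numpy as np

def poisson_lr(g, Hb, Hs):
    """Likelihood ratio for independent Poisson bins with mean Hb under H0
    and Hb + Hs under H_j, computed via the log for numerical stability."""
    log_lr = np.sum(g * np.log((Hb + Hs) / Hb) - Hs)
    return np.exp(log_lr)
```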
To compute ALROC in a simulation experiment, one generates an ensemble of signal-absent and signal-present realizations of g, and accumulates observer responses t+ (for signal present) and t− (for signal absent), then follows the procedure mentioned in section 4.2.
In these experiments, we use the Gaussian model of background variability described earlier, in which case the object variability follows a normal distribution

f ~ N(μf, Kb),     (8)

where μf is the mean object averaged over BV noise. Note that μf = b̄ for signal absent and μf = b̄ + sj for signal present.
For emission imaging, one would like to again use a conditional Poisson data model g|f ~ Poisson(Hf). Using (8) and this Poisson data model, we cannot derive the pdf p(g) nor LR(g, sj) in closed form. One would be forced to resort to computationally intensive numerical integration methods (Kupinski et al 2003) to evaluate likelihoods. To avoid such methods, we made fairly accurate approximations in Zhou et al (2008) that enable us to derive closed-form, easily evaluated expressions for the likelihoods in (5). This leads to an expression for the likelihood ratio

LR(g, sj) = exp((Hsj)ᵀ Kg⁻¹ (g − Hb̄) − ½ (Hsj)ᵀ Kg⁻¹ Hsj),     (9)

where Kg = diag(Hb̄) + HKbHᵀ is the covariance of the sinogram. Finally, we can combine (9) with our decision rule (6) to obtain a means for computing the observer response t(g) for the case of BV + photon noise. (Note that the exponential form of (9) tempts one to use log LR(g, sj), but the summation in (6) precludes this.)
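A sketch of evaluating this Gaussian-approximation likelihood ratio, under our notation (in practice the solve against Kg would be precomputed once per collimator and signal):

```python
import numpy as np

def gaussian_lr(g, Hb, Hs, Kg):
    """Gaussian-approximation LR: a prewhitened matched filter in the
    exponent. Hb = projected mean background, Hs = projected signal,
    Kg = sinogram covariance."""
    w = np.linalg.solve(Kg, Hs)              # Kg^{-1} (H s_j)
    return np.exp(w @ (g - Hb) - 0.5 * (w @ Hs))
```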
To compute ALROC, we generate many instances of signal-absent g and signal-present g with the signals uniformly distributed over a search region. Each g includes an instance of BV noise and Poisson (photon) noise. We then accumulate the associated observer responses t− and t+ and follow the aforementioned numerical integration procedure to obtain ALROC.
Unless otherwise specified, we generate 10000 signal-absent images for each experiment and 4 signal-present images per signal location if a signal is present. (This leads, for example, to 15376 signal-present images for a 3 × 3 signal.) Our previous definition of tolerance is apropos for a continuous object space. Since our object appears on pixels, we shall use the label “tol” for tolerance where tol = m, an odd integer, implies a region of diameter m pixel units and tol = 1 implies a one-pixel region, i.e. the guessed signal location must be coincident with the signal. Also we shall specify signal amplitude as “signal height” meaning the signal intensity above mean background.
In our first series of experiments, we examined results when no BV noise was present so that the only noise was photon noise. For this case, LR(g, sj ) is given by (7). Figure 4(a) plots performance, in terms of ALROC, versus collimator resolution, FWHM. The abscissa indexes the collimator choice by its resolution FWHM for a source at depth 10 cm. Reference to figure 1 relates this FWHM to efficiency. The curves in figure 4(a) are indexed by tolerance. The top curve, labeled tol = ∞, corresponds to no localization restriction. For tol = ∞, the ALROC reduces to AROC, the area under the associated ROC curve for a detection-only task with signal location unknown. As expected, ALROC performance worsens as localization becomes more stringent (smaller tol). In figure 4(a), we observe the non-intuitive result that at any tolerance, the ALROC curves keep rising with FWHM. The plot shows results up to FWHM = 19.4 mm, but numerical experiments confirm this rise even at 381 mm. It appears that no collimator is best! This paradoxical “gaping aperture” effect has been observed previously by Myers et al (1990) for a simple planar pinhole imaging system and a signal-known-exactly task. Intuitively, one would think that adding a localization restriction would result in a finite collimator resolution, and we did observe a finite optimal aperture size at finite tolerance for a pinhole system with photon noise only in (Zhou et al 2008).
Previous work by Myers et al (1990) and Zhou et al (2008) shows that for detection tasks in both planar pinhole and collimator systems, the addition of BV noise results in a finite-sized aperture. For our SPECT case, we show that this effect holds, as seen in figure 4(b). The “gaping aperture” effect disappears as expected. The ALROC shows a shallow peak at FWHM = 14.3 mm (the UHSens collimator) and this peak persists even as tolerance becomes more stringent. The optimal FWHM corresponds to a collimator whose efficiency and FWHM are greater than those of a conventional collimator such as the GAP. The peaks of the curves line up, which indicates that the optimal collimator is not sensitive to the tolerance. This result is surprising: one would have expected a smaller FWHM (better resolution) at more stringent tolerance, as observed for a pinhole system in Zhou et al (2008). The behavior in figure 4(b) holds even as we vary amp and the signal parameters.
In Moore et al (2005) it was observed that for planar imaging with collimators and a detection-only task, the optimal collimator FWHM and efficiency rose as signal size grew. We test this effect for SPECT for our joint detection and localization task in figure 5(a). The figure is plotted at a fixed tolerance (tol = 3). The curves correspond to square signals of size SW × SW pixels where SW = 1, 3, 5 and 7. Signal height above background is normalized to preserve signal power. Here background noise is present (amp = 1.36) and is fixed at all signal sizes. We observed a weak dependence of the optimal FWHM on signal size: as the signal size decreases, the optimal collimator shifts toward better resolution. The peak for the 7 × 7 signal does not appear on the plot but occurs at FWHM = 24.0 mm.
In figure 5(b) we plot ALROC versus FWHM, with each curve plotted at a fixed tolerance of 3 pixel units and for a signal with a size of 3 pixel units. Each curve is indexed by the level of background variability noise amplitude amp. As background noise grows, ALROC performance degrades and the optimal FWHM shifts slightly to smaller values. The result shows that an increase in background noise results in a higher resolution imaging system (i.e. lower FWHM). This result is consistent with earlier results in Zhou et al (2008) for a planar pinhole imaging system. As mentioned earlier, raising background noise amplitude is equivalent to increasing exposure time. The result generalizes to cases of differing tolerances and signal parameters.
SPECT is highly space variant due to depth-dependent blur and attenuation. We observed anecdotally that reconstructed signals in noisy backgrounds are more easily seen as the signal (height = 4) approaches the edge of the object, as shown in figure 6(d). For a signal of the same height at the center, we rarely see the signal in the reconstructed image, as shown in figure 6(f). To quantify the effect, we did the following experiment: the signal is placed in a fixed location in the object field, but the signal location is unknown to the observer, i.e. the search grid is 64 × 64. The observer still needs to detect and localize the signal. We chose the three signal positions in figures 6(a), 6(b) and 6(c) to calculate ALROC performance. For each signal position, we generated 15000 signal-present noisy sample images with a fixed signal location and 10000 signal-absent noisy sample images. We then calculated ALROC for each position. The graph of ALROC vs FWHM for the three signal positions at tol = 3 is shown in figure 7. The ALROC performance improves dramatically as the signal moves from the center to the edge. The ALROC curve for the signal close to the edge displays the “gaping aperture” effect; the ALROC curves for the signals away from the edge show a peak at FWHM = 13.2 mm.
In figures 4, 5 and 7, we used the bootstrap method (Zoubir and Boashash 1998) to calculate 95% confidence interval error bars on the points displayed. In all cases, the error bars (each less than 1% of the ALROC value itself) were tiny - too small to display. The separation of the curves on these plots is thus significant.
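A generic percentile-bootstrap sketch of this kind of confidence-interval computation (the paper's exact resampling scheme over observer responses may differ):

```python
import numpy as np

def bootstrap_ci(values, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap: resample the ensemble with replacement and
    take the (alpha/2, 1-alpha/2) quantiles of the recomputed statistic."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values)
    n = values.size
    reps = [stat(values[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return np.quantile(reps, [alpha / 2.0, 1.0 - alpha / 2.0])

# Illustrative ensemble: 95% CI for the mean of 100 dummy responses.
lo, hi = bootstrap_ci(np.arange(100.0), np.mean)
```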
We observed several interesting results for our SPECT study. To facilitate the discussion we shall use acronyms SKE (signal-known-exactly), SKS (signal-known-statistically, meaning signal location known statistically for our case), BKE (background-known-exactly) and BKS (background known statistically, meaning BV noise is included).
In figure 4(b) we observed that the optimal collimator choice was insensitive to tolerance level for the case of BV noise plus photon noise. This was non-intuitive in that one might expect the optimal collimator FWHM to decrease as the tolerance radius decreases, an effect we observed for a planar pinhole system in Zhou et al (2008). The result seems to imply that the detection plus localization task could be replaced by a detection-only task (with signal location unknown). In figure 5(b) we observed that for a given tolerance, the addition of more BV noise required a higher resolution optimal collimator (smaller FWHM). We observed in simulations (not shown) that this trend holds even for an SKE/BKS detection task using an ideal observer, and that for an SKS/BKS pure detection task (tol = ∞) the same trend obtains. For planar pinhole systems this same trend has been observed for the SKS/BKS detection and localization task (Zhou et al 2008) and the SKE/BKS detection task (Myers et al 1990). In figure 5(a) we observed that at a fixed level of BV noise and tolerance, the optimal FWHM grows with signal size. We observed this effect for an SKE/BKS detection task using an ideal observer (not shown), and also observed this trend for an SKS/BKS (tol = ∞) pure detection task. In figures 6 and 7 we observed that for our ideal-observer detection plus localization task, performance depends on signal location, with performance increasing for signals approaching the edge of the object field. For an SKE/BKS detection task using an ideal observer, we observed this same trend (not shown). We also observed in figure 4(a) the counter-intuitive result of gaping apertures at all tolerances for the photon-noise-only case. We note that if we repeat this experiment using only one angle, the gaping aperture effect disappears except at tol = ∞. As more angles are added, the ALROC behavior approaches that of the gaping aperture, and with enough angles the effect is seen at all tolerances.
We observed that for a photon-noise-only case, an SKE/BKS detection-only experiment for SPECT also shows this gaping aperture effect (not shown). Finally we note that optimal collimators derived by our criteria tend to be of lower resolution and higher efficiency than commercial collimators, an effect seen by others (discussed below) using different task criteria.
It is interesting to compare our collimator optimization work to that of others. Zeng and Gullberg (2002) investigated collimator resolution-efficiency optimization for SPECT by calculating a FOM, the channelized Hotelling observer (CHO) signal-to-noise ratio (SNR), that emulates human detection performance when observing the reconstruction, here an OSEM reconstruction. The task was SKE/BKE detection. Their results indicated that a collimator with poorer resolution and higher efficiency than that of a conventionally preferred low-energy-high-resolution (LEHR) collimator optimized their task. This result was consistent with ones reported in Kamphuis et al (1999) and also in Westcott et al (2007) who used a contrast-to-noise ratio criterion. It is interesting to note that Zhang and Zeng (2007) showed that the use of a wide-bore low-resolution collimator combined with a collimator blurring compensation algorithm led to reconstructions with less noise at the resolution obtained by a higher resolution LEHR collimator. While not a task study per se, this study suggests a route for obtaining the benefits of using low-resolution high-efficiency collimators suggested by task-based studies such as ours and by Zeng and Gullberg (2002). For small animal pinhole SPECT, Hesterman et al (2005) optimized pinhole configurations and magnifications for a four-camera system using an approximate ideal observer operating on the sinogram domain. In He et al (2008), methods were presented wherein an ideal observer operating on sinogram data could be used to optimize SPECT collimator properties. Here the BV noise modeled variable organ uptake and organ size.
There have also been collimator optimization studies in planar imaging (single-projection) contexts. Moore et al (2005) investigated collimator optimization for an SKE/BKE detection task for Ga-67 using a CHO observer designed to emulate human detection performance. By optimizing the CHO SNR, they found that the optimal collimator had higher efficiency and poorer resolution than a commercial medium-energy general-purpose collimator applicable to this imaging setting. Moore et al (1995) also studied collimator optimization for planar imaging in detecting lesions of known location but unknown size in a uniform background of unknown amplitude, using a Bayesian estimator designed to emulate human perceptual performance. The Bayesian observer used knowledge of the range of possible lesion sizes as a prior and accurately predicted the performance of a human observer study. They concluded that as the uncertainty in the range of signal size increased, the optimal collimator FWHM decreased and the overall detectability decreased. Barrett et al (1992) studied the impact of exposure time on optimal collimator choice in a planar imaging experiment. The SNR of an ideal linear observer for an SKE/BKS task was used as an FOM. For a short imaging time, the best collimator had high efficiency and poor resolution, as one would expect when photon noise dominates. For longer imaging times, the optimal collimator had higher resolution and somewhat lower efficiency, as one would expect when BV noise dominates. The ultra-high-resolution long-bore collimators performed poorest in both cases.
There have also been a number of works in the context of pinhole optimization for planar emission imaging systems that bear some relevance to our work. (Our own work in this area appeared in Zhou et al 2008.) An early study by Tsui et al (1978) highlighted the importance of BV noise in aperture size determination. Myers et al (1990) further investigated this theme for an SKE/BKS task using a BV noise model similar to that used here. Rolland and Barrett (1992) pursued aperture optimization and introduced a new, useful type of BV noise - the “lumpy background” model - in an SKE/BKS task context. Smith and Barrett (1986) used an optimal linear observer to find the optimal placement of pinholes in a coded-aperture system. Here the task was SKE/BKS, with BV noise modeling anatomical shape and uptake in a liver model. Kupinski et al (2003) and Park et al (2005) optimized pinhole parameters for detection tasks with lumpy-background BV noise. The main contribution here was the introduction of computational techniques to compute ideal observer performance when analytical expressions (such as ours) were unavailable.
We explored the application of our new task criterion to collimator optimization in SPECT. In future work, it would be interesting to compare our results with those obtained using other criteria, such as the simpler signal-known-exactly-but-variable (SKEV)/BKS task, in which the SKE/BKS behavior is averaged over signal location (i.e. signal location is known but variable). We are also developing ideal observers for a multiple-signal case in Khurd et al (2009), and it would be interesting to see how this change affects collimator optimization. Finally, we obviously need to add more realism to the simulation. We could extend our 2-D case to 3-D. It should be possible to add scatter to our simulations if, for example, we approximate the scatter contribution as that due to the mean background; otherwise, a scatter sinogram is needed for each realization of b. We could model the collimator response more correctly by incorporating septal penetration and other effects. Ideal observer computation is complex, and this computational burden has limited us to the simplified model presented here. Techniques for rapidly computing ideal observer or approximate ideal observer performance (Gallas and Barrett 2003 and Kupinski et al 2003) could help here.
This work was supported by NIH NIBIB 02629. We thank Michael King for useful discussions. We thank the referees for some insightful comments.