Opt Express. Author manuscript; available in PMC 2010 April 26. The publisher's final edited version of this article is available at Opt Express.

PMCID: PMC2859675; NIHMSID: NIHMS173491

Luca Caucci: caucci@email.arizona.edu

Detection of signals in noisy images is necessary in many applications, including astronomy and medical imaging. The optimal linear observer for performing a detection task, called the Hotelling observer in the medical literature, can be regarded as a generalization of the familiar prewhitening matched filter. Performance on the detection task is limited by randomness in the image data, which stems from randomness in the object, randomness in the imaging system, and randomness in the detector outputs due to photon and readout noise; the Hotelling observer accounts for all of these effects in an optimal way. If multiple temporal frames of images are acquired, the resulting data set is a spatio-temporal random process, and the Hotelling observer becomes a spatio-temporal linear operator. This paper discusses the theory of the spatio-temporal Hotelling observer and estimation of the required spatio-temporal covariance matrices. It also presents a parallel implementation of the observer on a cluster of Sony PLAYSTATION 3 gaming consoles. As an example, we consider the use of the spatio-temporal Hotelling observer for exoplanet detection.

Signal detection [1, 2] is a fundamental task in image science and a common problem in many fields, from medicine (for example, the detection of tumors [1, 3, 4]) to astronomy (detection of extrasolar planets [5, 6] or near-earth objects). As a consequence, an extensive theory and numerous algorithms have been developed to address the problem of signal detection from images [1,2,7–13]. Some systems, such as adaptive optics systems [14,15], can deliver sequences of spatially correlated images. Spatio-temporally correlated data also appear in medical applications, such as [16,17]. In many cases, the sheer volume of spatio-temporal data makes it impractical to apply a detection algorithm directly to the full data set. Reduction in the temporal dimension is usually performed by adding together some or all of the frames. In adaptive optics, for example, images of the same object are taken over time and summed together either on the readout chip or in an external computer. The resulting single-frame image is used to perform the task of interest, but the information loss that results from the summation can reduce the detection performance. In medical applications, by contrast, it is usually not valid to assume that the object is constant over a long exposure time, and indeed the temporal dynamics of the object may be what defines the signal to be detected. Short-exposure images are inherently noisy, but long exposures wash out the signal. Full spatio-temporal processing is required for optimal signal detection.

The performance of a detection algorithm is mathematically quantified using receiver operating characteristic (ROC) analysis [18–20] and the area under the ROC curve (AUC) [19, 21]. With respect to that measure, the likelihood ratio is the optimal detector [21]. However, the likelihood ratio requires knowledge of the probability density functions under the hypotheses signal present and signal absent. Such probability density functions are often unknown or hard to estimate in practical cases. A more viable solution is the Hotelling observer [22], which requires only knowledge of the data mean vector and covariance matrix. The Hotelling observer is linear, and it is optimal with respect to the class of linear observers [21] and a certain detectability measure to be defined below.

In this paper, the optimal linear (Hotelling) observer is applied to spatio-temporal imagery; we therefore refer to it as the spatio-temporal Hotelling observer. By construction, such an observer exploits both the spatial and temporal correlations between pixels optimally with respect to all linear observers. Methods for the estimation of the mean data vector and covariance matrix are described as well. Computational methods are discussed, and a parallel algorithm is implemented on a cluster of Sony PLAYSTATION 3 game consoles.

Let {*g*^{(1)}, … , *g*^{(J)}} be a collection of *J* images of the same object *f* (assumed to be independent of time) taken over time. Each *g*^{(j)} is, in turn, a collection of *M* pixel intensities $\{{g}_{1}^{\left(j\right)},\dots ,{g}_{M}^{\left(j\right)}\}$. The data set {*g*^{(1)}, … , *g*^{(J)}} represents the intensities of a total of *MJ* pixels, which we raster-scan and represent in compact form as the *MJ* × 1 vector ** G**.
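The raster-scanning step can be sketched in NumPy; the sizes below are illustrative stand-ins, not the values used in the paper (the simulations later in the paper use *J* = 25 frames of 64 × 64 pixels):

```python
import numpy as np

# Hypothetical sizes for illustration.
J, H, W = 3, 4, 4           # frames, image height, image width
M = H * W                   # pixels per frame

rng = np.random.default_rng(0)
frames = rng.poisson(10.0, size=(J, H, W)).astype(float)  # g^(1), ..., g^(J)

# Raster-scan each frame and stack: G is the MJ x 1 data vector.
G = frames.reshape(J * M)

# Element m of frame j sits at index j*M + m of G.
j, m = 1, 5
assert G[j * M + m] == frames[j].ravel()[m]
```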

The task of interest is binary discrimination: given a noisy data set **G**, an observer must decide whether the object contains the signal of interest (hypothesis *H*_{1}) or not (hypothesis *H*_{0}). The observer computes a scalar test statistic *t*(**G**) and compares it to a threshold τ, deciding in favor of *H*_{1} when *t*(**G**) > τ.

Recall that **G** is a random vector, so *t*(**G**) is a random variable. The true-positive fraction (TPF) and the false-positive fraction (FPF) are then

$$\text{TPF}(\tau) = \int_{\tau}^{\infty} \text{pr}(t \mid H_1)\, \mathrm{d}t, \qquad \text{FPF}(\tau) = \int_{\tau}^{\infty} \text{pr}(t \mid H_0)\, \mathrm{d}t,$$

(1)

in which it is made clear that TPF and FPF are functions of τ.

If we change the value of τ, different values of TPF(τ) and FPF(τ) are obtained; a plot of TPF(τ) versus FPF(τ) as τ is varied over the real line is called a receiver operating characteristic (ROC) curve [18–20], and the area under the ROC curve (AUC) [19, 21] is a meaningful figure of merit for a binary classification task. The AUC is defined as [21]

$$\text{AUC}_{t(G)} = -\int_{-\infty}^{\infty} \text{TPF}(\tau)\, \frac{\mathrm{d}\,\text{FPF}(\tau)}{\mathrm{d}\tau}\, \mathrm{d}\tau.$$
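The ROC construction and its AUC can be sketched numerically with synthetic test statistics; the Gaussian distributions below are assumptions for illustration, not outputs of any observer in this paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
# Toy test-statistic samples under the two hypotheses (Gaussian stand-ins).
t0 = rng.normal(0.0, 1.0, n)   # signal absent (H0)
t1 = rng.normal(1.5, 1.0, n)   # signal present (H1)

# Sweep the threshold tau over all observed values, largest first, so that
# TPF(tau) and FPF(tau) increase monotonically from 0 toward 1 (cf. Eq. (1)).
taus = np.sort(np.concatenate([t0, t1]))[::-1]
tpf = np.array([(t1 > tau).mean() for tau in taus])
fpf = np.array([(t0 > tau).mean() for tau in taus])

# Area under the ROC curve by trapezoidal integration over FPF.
auc = np.sum(0.5 * (tpf[1:] + tpf[:-1]) * np.diff(fpf))
```

For these two unit-variance Gaussians separated by 1.5, the AUC should come out near Φ(1.5/√2) ≈ 0.86.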

Another figure of merit for the same task is the signal-to-noise ratio (SNR) on the test statistic *t*(** G**):

$$\text{SNR}_{t(G)} = \frac{\langle t(G) \rangle_{G|H_1} - \langle t(G) \rangle_{G|H_0}}{\sqrt{\tfrac{1}{2}\,\text{Var}\{t(G) \mid H_1\} + \tfrac{1}{2}\,\text{Var}\{t(G) \mid H_0\}}},$$

(2)

in which $\langle t(G) \rangle_{G|H_k}$ denotes the mean of *t*(**G**) conditioned on hypothesis *H*_{k}, and Var{*t*(**G**)|*H*_{k}} denotes the corresponding conditional variance.

Many classical monographs discuss statistical decision theory; some of them are [2] and [9]. These texts show that, for binary classifications and for any ROC-related figure of merit (including the AUC), the optimal observer is the likelihood ratio [1, 2, 9, 21]:

$$\Lambda(G) = \frac{\text{pr}(G \mid H_1)}{\text{pr}(G \mid H_0)},$$

or, equivalently, its logarithm λ(**G**) = ln Λ(**G**). When the probability densities are unavailable, the Hotelling observer uses instead the linear test statistic $t_{\text{Hot}}(G) = W_{\text{Hot}}^{\mathrm{T}} G$, in which the Hotelling template vector is

$$W_{\text{Hot}} = \left[\mathbf{K}_G\right]^{-1} S,$$

(3)

in which **K_{G}** is the covariance matrix of the data vector **G**, and $S = \langle G \rangle_{G|H_1} - \langle G \rangle_{G|H_0}$ is the difference between the mean data vectors under the two hypotheses.
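The prewhitening structure of (3) can be illustrated on a small synthetic Gaussian problem; the covariance, signal, and sizes below are invented for illustration and are not the paper's simulation:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 16                                  # small data dimension for illustration
S = np.zeros(M); S[M // 2] = 1.0        # known mean-difference signal

# Toy covariance: white noise plus a correlated (low-rank) component.
A = rng.normal(size=(M, 3))
K = 0.5 * np.eye(M) + A @ A.T

W_hot = np.linalg.solve(K, S)           # W_Hot = K^{-1} S, Eq. (3)

# The Hotelling test statistic is the inner product t(G) = W_Hot^T G.
G0 = rng.multivariate_normal(np.zeros(M), K, 5000)   # signal absent
G1 = rng.multivariate_normal(S, K, 5000)             # signal present
t0, t1 = G0 @ W_hot, G1 @ W_hot

# For this linear-Gaussian case the SNR of t equals sqrt(S^T K^{-1} S).
snr_theory = np.sqrt(S @ W_hot)
snr_sample = (t1.mean() - t0.mean()) / np.sqrt(0.5 * t1.var() + 0.5 * t0.var())
```

Using `np.linalg.solve` rather than forming the explicit inverse is the standard numerically stable way to apply $\mathbf{K}_G^{-1}$ to a vector.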

In a general case, we have three sources of randomness: detector noise, point spread function variability, and object variability. This leads to the decomposition [23]

$$\mathbf{K}_G = \overline{\overline{\mathbf{K}}}{}_{G}^{\text{noise}} + \overline{\mathbf{K}}{}_{\overline{G}}^{\text{PSF}} + \mathbf{K}_{\overline{\overline{G}}}^{\text{object}},$$

(4)

with which the Hotelling template vector is written as

$$W_{\text{Hot}} = \left[\overline{\overline{\mathbf{K}}}{}_{G}^{\text{noise}} + \overline{\mathbf{K}}{}_{\overline{G}}^{\text{PSF}} + \mathbf{K}_{\overline{\overline{G}}}^{\text{object}}\right]^{-1} S.$$

(5)

The expression in (4) can be formally derived from the definition of covariance matrix for ** G**:

$$\mathbf{K}_G = \left\langle \left\langle \left\langle \left[G - \overline{\overline{\overline{G}}}\right] \left[G - \overline{\overline{\overline{G}}}\right]^{\mathrm{T}} \right\rangle_{G|P,f} \right\rangle_{P|f} \right\rangle_{f},$$

(6)

in which **P** represents the sequence {*p*^{(1)}, … , *p*^{(J)}} of random point spread functions (PSFs) that produced the *J* frames.

Expression (6) contains three averaging (or statistical expectation) steps. The innermost expectation is over the noise, for given **P** and *f*; the middle expectation is over the PSF sequence **P**, for given *f*; and the outermost expectation is over the object *f*. Carrying out the averages separately yields the three terms in (4):

$$\begin{aligned}
\overline{\overline{\mathbf{K}}}{}_{G}^{\text{noise}} &= \left\langle \left\langle \mathbf{K}_{G}^{\text{noise}} \right\rangle_{P|f} \right\rangle_{f} = \left\langle \left\langle \left\langle \left[G - \overline{G}\right] \left[G - \overline{G}\right]^{\mathrm{T}} \right\rangle_{G|P,f} \right\rangle_{P|f} \right\rangle_{f}, \\
\overline{\mathbf{K}}{}_{\overline{G}}^{\text{PSF}} &= \left\langle \mathbf{K}_{\overline{G}}^{\text{PSF}} \right\rangle_{f} = \left\langle \left\langle \left[\overline{G} - \overline{\overline{G}}\right] \left[\overline{G} - \overline{\overline{G}}\right]^{\mathrm{T}} \right\rangle_{P|f} \right\rangle_{f}, \\
\mathbf{K}_{\overline{\overline{G}}}^{\text{object}} &= \left\langle \left[\overline{\overline{G}} - \overline{\overline{\overline{G}}}\right] \left[\overline{\overline{G}} - \overline{\overline{\overline{G}}}\right]^{\mathrm{T}} \right\rangle_{f},
\end{aligned}$$

in which **G** with a variable number of bars denotes an average with respect to noise (one bar), noise and PSFs (two bars), or noise, PSFs, and object (three bars). By construction, the fluctuations that enter the three terms are mutually uncorrelated, so their covariance contributions simply add, as in (4).

For simplicity, we will assume that the signal we want to detect is known in brightness and location; this is referred to as a signal-known-exactly (SKE) problem [21]. The more realistic problem of unknown signal location can be handled by scanning the observer template [24–26]. The SKE hypothesis provides valuable information that the observer can use to improve detection performance (quantified by an increase in AUC) with respect to the detection-and-localization problem of [25]. We also assume that the object background is known exactly (background-known-exactly, or BKE, in the terminology of [21]).

The noise covariance matrix is usually very easy to study. Indeed, if we assume that only photon (Poisson) and readout (Gaussian) noise is present in the sequence **G**, then the noise in distinct detector elements is uncorrelated, and we can write

$$\left[\overline{\overline{\mathbf{K}}}{}_{G}^{\text{noise}}\right]_{m,m'}^{(j,j')} =
\begin{cases}
\sigma_m^2 + \overline{\overline{\overline{g}}}{}_{m}^{(j)} & \text{if } m = m' \text{ and } j = j', \\
0 & \text{otherwise},
\end{cases}$$

(7)

where ${\sigma}_{m}^{2}$ is the readout noise variance for the *m*-th pixel of the detector, and

$$\overline{\overline{\overline{g}}}{}_{m}^{(j)} = \Pr(H_0)\, \overline{\overline{g}}{}_{m|H_0}^{(j)} + \Pr(H_1)\, \overline{\overline{g}}{}_{m|H_1}^{(j)}$$

is the variance of the photon noise for the same pixel; for Poisson noise, this variance equals $\overline{\overline{\overline{g}}}{}_{m}^{(j)}$, the expected number of photons (from objects in the field of view and background) collected by the *m*-th pixel of the *j*-th image. The appropriateness of the Poisson model for the photon noise is justified by invoking the so-called Poisson postulates [21]. The readout noise variance ${\sigma}_{m}^{2}$ at each detector pixel is usually known, as provided by the detector's manufacturer, or it can be measured. The matrix $\overline{\overline{\mathbf{K}}}{}_{G}^{\text{noise}}$ above is diagonal with strictly positive diagonal entries, which guarantees the invertibility of **K_{G}**.
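A sketch of how the diagonal noise covariance of (7) might be assembled; the sizes, readout variance, and photon counts below are illustrative assumptions, not values from the paper:

```python
import numpy as np

M, J = 64, 5                       # pixels per frame, frames (illustrative)
rng = np.random.default_rng(3)

sigma2 = np.full(M, 2.0)           # readout noise variance per pixel (assumed known)
g_mean = rng.uniform(5.0, 50.0, size=(J, M))  # expected photon counts per frame/pixel

# Eq. (7): the noise covariance is diagonal, with readout plus Poisson variance
# on the diagonal and zeros elsewhere.
K_noise_diag = (sigma2[None, :] + g_mean).reshape(J * M)
K_noise = np.diag(K_noise_diag)

# Its inverse is simply the reciprocal of the diagonal.
K_noise_inv = np.diag(1.0 / K_noise_diag)
assert np.allclose(K_noise @ K_noise_inv, np.eye(M * J))
```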

In this section, we describe how the quantities discussed in the previous section can be estimated. We will rely on simulation code and consider *L*_{1} realizations of the PSF sequence **P** and *L*_{2} realizations of the object *f* under hypothesis *H*_{1}.

The noise covariance matrix is estimated as:

$$\left[\widehat{\overline{\overline{\mathbf{K}}}}{}_{G}^{\text{noise}}\right]_{m,m'}^{(j,j')} =
\begin{cases}
\sigma_m^2 + \widehat{\overline{\overline{\overline{g}}}}{}_{m}^{(j)} & \text{if } m = m' \text{ and } j = j', \\
0 & \text{otherwise},
\end{cases}$$

where estimated quantities are denoted with a hat, and we set

$$\widehat{\overline{\overline{\overline{g}}}}{}_{m}^{(j)} = \left[\widehat{\overline{\overline{\overline{G}}}}\right]_{m}^{(j)} = \frac{1}{L_1} \sum_{\ell_1=1}^{L_1} \left[\overline{G}_{0}^{(\ell_1)}\right]_{m}^{(j)}.$$

The PSF covariance matrix $\overline{\mathbf{K}}{}_{\overline{\mathit{G}}}^{\text{PSF}}$ is estimated from simulated data as well:

$$\left[\widehat{\overline{\mathbf{K}}}{}_{\overline{G}}^{\text{PSF}}\right]_{m,m'}^{(j,j')} = \frac{1}{L_1 - 1} \sum_{\ell_1=1}^{L_1} \left[\Delta\overline{G}_{0}^{(\ell_1)}\right]_{m}^{(j)} \left[\Delta\overline{G}_{0}^{(\ell_1)}\right]_{m'}^{(j')},$$

(8)

where

$$\left[\Delta\overline{G}_{0}^{(\ell_1)}\right]_{m}^{(j)} = \left[\overline{G}_{0}^{(\ell_1)}\right]_{m}^{(j)} - \frac{1}{L_1} \sum_{\ell_1'=1}^{L_1} \left[\overline{G}_{0}^{(\ell_1')}\right]_{m}^{(j)}.$$

(9)

Expressions (8) and (9) show that $\widehat{\overline{\mathbf{K}}}{}_{\overline{\mathit{G}}}^{\text{PSF}}$ is the sample estimate of ${\overline{\mathbf{K}}}_{\overline{\mathit{G}}}^{\text{PSF}}$ , estimated from the noiseless simulated data.
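The sample-covariance construction of (8)–(9), together with the low-rank factorization used later in the paper, can be sketched as follows; the random arrays stand in for the simulated noiseless sequences, and the sizes are illustrative:

```python
import numpy as np

M, J, L1 = 8, 3, 40               # small illustrative sizes
rng = np.random.default_rng(4)

# L1 noiseless simulated sequences, each raster-scanned to an MJ-vector.
G0 = rng.normal(size=(L1, M * J))

# Eqs. (8)-(9): subtract the sample mean over realizations, then form the
# sample covariance of the deviations Delta G.
dG = G0 - G0.mean(axis=0)
K_psf = dG.T @ dG / (L1 - 1)

# Equivalent low-rank factorization K_psf = R R^T, with R of size MJ x L1.
R = dG.T / np.sqrt(L1 - 1)
assert np.allclose(K_psf, R @ R.T)
```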

Finally, if we consider randomness in the signal to be detected or the background on which it is superimposed, the object term in the covariance matrix expression can be estimated as:

$$\left[\widehat{\mathbf{K}}{}_{\overline{\overline{G}}}^{\text{object}}\right]_{m,m'}^{(j,j')} = \frac{1}{L_2 - 1} \sum_{\ell_2=1}^{L_2} \left[\Delta\overline{\overline{G}}_{1}^{(\ell_2)}\right]_{m}^{(j)} \left[\Delta\overline{\overline{G}}_{1}^{(\ell_2)}\right]_{m'}^{(j')},$$

where

$$\left[\Delta\overline{\overline{G}}_{1}^{(\ell_2)}\right]_{m}^{(j)} = \frac{1}{L_1} \sum_{\ell_1=1}^{L_1} \left[\Delta\overline{G}_{1}^{(\ell_1,\ell_2)}\right]_{m}^{(j)}.$$

It might be tempting to try to estimate the whole data covariance matrix **K_{G}** in (6) from noisy simulated data, without using the decomposition (4). A necessary (but not sufficient) condition for such an estimate to be usable in (10) is invertibility, which fails whenever the number of simulated samples is smaller than *MJ* + 1, since the sample covariance matrix is then rank-deficient. The estimate obtained via the decomposition (4), by contrast, is always positive definite, and hence invertible:

$$x^{\mathrm{T}} \widehat{\mathbf{K}}_G\, x = x^{\mathrm{T}} \widehat{\overline{\overline{\mathbf{K}}}}{}_{G}^{\text{noise}}\, x + x^{\mathrm{T}} \left[\widehat{\overline{\mathbf{K}}}{}_{\overline{G}}^{\text{PSF}} + \widehat{\mathbf{K}}{}_{\overline{\overline{G}}}^{\text{object}}\right] x \;\ge\; x^{\mathrm{T}} \widehat{\overline{\overline{\mathbf{K}}}}{}_{G}^{\text{noise}}\, x = \sum_{j=1}^{J} \sum_{m=1}^{M} \left(\sigma_m^2 + \widehat{\overline{\overline{\overline{g}}}}{}_{m}^{(j)}\right) \left[x_m^{(j)}\right]^2 > 0,$$

for any vector **x** ≠ **0**.
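The argument can be checked numerically: a raw sample covariance built from fewer samples than dimensions is singular, while adding the diagonal noise term of the decomposition restores positive definiteness. The sizes and values below are illustrative assumptions:

```python
import numpy as np

MJ, L = 100, 30                   # data dimension larger than sample count
rng = np.random.default_rng(5)

# A raw sample covariance from L < MJ samples has rank at most L - 1,
# so it is singular and cannot be inverted.
samples = rng.normal(size=(L, MJ))
dev = samples - samples.mean(axis=0)
K_raw = dev.T @ dev / (L - 1)
assert np.linalg.matrix_rank(K_raw) <= L - 1

# Adding the strictly positive diagonal noise term from the decomposition (4)
# makes the estimate positive definite, hence invertible.
noise_diag = rng.uniform(1.0, 5.0, MJ)
K_dec = np.diag(noise_diag) + K_raw
eigvals = np.linalg.eigvalsh(K_dec)
assert eigvals.min() > 0
```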

Finally, the average contribution of the signal to the image data is estimated as:

$$\widehat{S} = \frac{1}{L_1 L_2} \sum_{\ell_1=1}^{L_1} \sum_{\ell_2=1}^{L_2} \overline{G}_{1}^{(\ell_1,\ell_2)} - \frac{1}{L_1} \sum_{\ell_1=1}^{L_1} \overline{G}_{0}^{(\ell_1)}.$$

Armed with an estimate of the signal to be detected and estimates of the covariance matrices that appear on the right-hand side of (4), we can formally write an expression for the Hotelling template vector estimate: in analogy with (5), we define

$$\widehat{W}_{\text{Hot}} = \left[\widehat{\mathbf{K}}_G\right]^{-1} \widehat{S} = \left[\widehat{\overline{\overline{\mathbf{K}}}}{}_{G}^{\text{noise}} + \widehat{\overline{\mathbf{K}}}{}_{\overline{G}}^{\text{PSF}} + \widehat{\mathbf{K}}{}_{\overline{\overline{G}}}^{\text{object}}\right]^{-1} \widehat{S}.$$

(10)

For the SKE/BKE case, (10) reduces to:

$$\widehat{W}_{\text{Hot}} = \left[\widehat{\mathbf{K}}_G\right]^{-1} \widehat{S} = \left[\widehat{\overline{\overline{\mathbf{K}}}}{}_{G}^{\text{noise}} + \widehat{\overline{\mathbf{K}}}{}_{\overline{G}}^{\text{PSF}}\right]^{-1} \widehat{S}.$$

(11)

Although (10) and (11) make sense because $[\widehat{\mathbf{K}}_G]^{-1}$ exists, directly inverting the *MJ* × *MJ* matrix $\widehat{\mathbf{K}}_G$ is computationally impractical for realistic values of *M* and *J*. The low-rank structure of the PSF covariance estimate provides a way around the direct inversion. If we define the *MJ* × *L* matrix **R** elementwise as

$$\left[\mathbf{R}\right]_{\ell,m}^{(j)} = \frac{1}{\sqrt{L-1}} \left[\Delta\overline{G}_{0}^{(\ell)}\right]_{m}^{(j)},$$

then (8) is rewritten as

$$\widehat{\overline{\mathbf{K}}}{}_{\overline{G}}^{\text{PSF}} = \mathbf{R}\mathbf{R}^{\mathrm{T}}.$$

The *MJ* × *L* matrix **R** contains in the ℓ-th column the *MJ* × 1 vector obtained by raster-scanning the pixels in $\Delta\overline{G}_{0}^{(\ell)}$. The Woodbury matrix-inversion lemma [28–31] allows us to rewrite the inverse in (10) as a computationally tractable expression. In abstract form, the matrix-inversion lemma can be stated as follows:

$$\left[\mathbf{A} - \mathbf{U}\mathbf{B}\mathbf{V}\right]^{-1} = \mathbf{A}^{-1} + \mathbf{A}^{-1}\mathbf{U} \left[\mathbf{I}_N - \mathbf{B}\mathbf{V}\mathbf{A}^{-1}\mathbf{U}\right]^{-1} \mathbf{B}\mathbf{V}\mathbf{A}^{-1},$$

in which **I**_{N} is the identity matrix of order *N*, with *N* equal to the number of columns of **U**. If we set

$$\mathbf{A} = \widehat{\overline{\overline{\mathbf{K}}}}{}_{G}^{\text{noise}}, \qquad \mathbf{B} = -\mathbf{I}_L, \qquad \mathbf{U} = \mathbf{R}, \qquad \mathbf{V} = \mathbf{R}^{\mathrm{T}},$$

then

$$\left[\widehat{\overline{\overline{\mathbf{K}}}}{}_{G}^{\text{noise}} + \widehat{\overline{\mathbf{K}}}{}_{\overline{G}}^{\text{PSF}}\right]^{-1} = \left[\widehat{\overline{\overline{\mathbf{K}}}}{}_{G}^{\text{noise}}\right]^{-1} \left\{\mathbf{I}_{MJ} - \mathbf{R}\mathbf{Q}^{-1}\mathbf{R}^{\mathrm{T}} \left[\widehat{\overline{\overline{\mathbf{K}}}}{}_{G}^{\text{noise}}\right]^{-1}\right\},$$

(12)

where

$$\mathbf{Q} = \mathbf{I}_L + \mathbf{R}^{\mathrm{T}} \left[\widehat{\overline{\overline{\mathbf{K}}}}{}_{G}^{\text{noise}}\right]^{-1} \mathbf{R}.$$

The matrix $\widehat{\overline{\overline{\mathbf{K}}}}{}_{G}^{\text{noise}}$ is diagonal, so computing its inverse does not pose computational problems. The matrix **Q** is of size *L* × *L* and invertible. Note that *L* is usually much smaller than *MJ*, which implies that **Q**^{−1} can be calculated in much less time than a direct inversion of the *MJ* × *MJ* matrix $\widehat{\mathbf{K}}_G$ would require.
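Equation (12) can be verified numerically against a direct inverse; the sizes below are illustrative and far smaller than in practice, and the diagonal and low-rank factor are random stand-ins:

```python
import numpy as np

MJ, L = 60, 8                      # illustrative sizes; in practice MJ >> L
rng = np.random.default_rng(6)

noise_diag = rng.uniform(1.0, 4.0, MJ)
Kn_inv = np.diag(1.0 / noise_diag)   # inverse of the diagonal noise covariance
R = rng.normal(size=(MJ, L))         # low-rank factor of the PSF covariance

# Eq. (12): invert the MJ x MJ matrix via the small L x L matrix Q.
Q = np.eye(L) + R.T @ Kn_inv @ R
inv_fast = Kn_inv @ (np.eye(MJ) - R @ np.linalg.inv(Q) @ R.T @ Kn_inv)

# Compare with the direct inversion of K_noise + R R^T.
inv_direct = np.linalg.inv(np.diag(noise_diag) + R @ R.T)
assert np.allclose(inv_fast, inv_direct)
```

The cost of the fast route is dominated by the *L* × *L* inversion of **Q** plus matrix products with **R**, instead of an $O((MJ)^3)$ dense inversion.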

For this research, we took advantage of the computational capabilities available at the Center for Gamma-Ray Imaging, University of Arizona. In particular, we used a Sony PLAYSTATION 3 cluster consisting of 30 units. Each unit was equipped with a Cell Broadband Engine (Cell BE) processor, 256 MB of RAM, and 60 GB of disk space, and ran Fedora 7 Linux, kernel version 2.6.23. IBM's Cell BE Software Development Kit 3.0 was installed on all of the units and used to generate code for the Cell BE microprocessor. All machines in the cluster were connected by a 1-Gbit/s local area network (LAN). All algorithms were coded in the C programming language, and source files were compiled with the IBM XL C/C++ compiler, version 9.0. Communication between nodes of the cluster was achieved using the Message Passing Interface (MPI) standard. The Cell BE architecture [32–34] has recently received enormous attention from the computer science community. The availability of up to nine processing cores and the support for SIMD instructions make the Cell BE processor particularly appealing as a low-cost solution for high-performance scientific computing [35–39].

An algorithm for the computation of the template vector *Ŵ*_{Hot} according to (10) and (12) was implemented and run on the PLAYSTATION 3 cluster. The algorithm is composed of two different programs: one to be run as a master process and one to be run as a slave process. Pseudocode for the master process is shown below:

- *J* ← number of images in each sequence
- *L* ← number of simulated sequences
- *N* ← number of slave processes
- **for all** *n* ∈ {1, … , *N*} **do**
    - send "*initialize*" to slave process *n*
- **end for**
- **for all** ℓ ∈ {1, … , *L*} **do**
    - read ${\overline{\mathit{G}}}_{0}^{(\ell)}$ from the disk
    - **for all** *n* ∈ {1, … , *N*} **do**
        - send ${\overline{\mathit{G}}}_{0}^{(\ell)}$ to slave process *n*
    - **end for**
- **end for**
- **for all** ℓ ∈ {1, … , *L*} **do**
    - read ${\overline{\mathit{G}}}_{1}^{(\ell)}$ from the disk
    - **for all** *n* ∈ {1, … , *N*} **do**
        - send ${\overline{\mathit{G}}}_{1}^{(\ell)}$ to slave process *n*
    - **end for**
- **end for**
- *j* ← 1
- *m* ← 0 {number of tasks completed}
- **while** *m* < *J* **do**
    - **while** *j* ≤ *J* and there is an idle slave process **do**
        - *n* ← index of an idle slave process
        - send "*compute* [*Ŵ*_{Hot}]^{(j)}" to slave process *n*
        - *j* ← *j* + 1
    - **end while**
    - **if** *m* < *J* **then**
        - *n* ← slave process that has just computed [*Ŵ*_{Hot}]^{(j′)}
        - receive [*Ŵ*_{Hot}]^{(j′)} from slave process *n*
        - save [*Ŵ*_{Hot}]^{(j′)} to disk
        - *m* ← *m* + 1
    - **end if**
- **end while**
- **for all** *n* ∈ {1, … , *N*} **do**
    - send "*end of computation*" to slave process *n*
- **end for**

Pseudocode for the slave processes is shown below:

- *J* ← number of images in each sequence
- *L* ← number of simulated sequences
- ${\sigma}_{m}^{2}$ ← readout noise variance, for all *m* ∈ {1, … , *M*}
- **for all** ℓ ∈ {1, … , *L*} **do**
    - receive ${\overline{\mathit{G}}}_{0}^{(\ell)}$ from master process
    - save ${\overline{\mathit{G}}}_{0}^{(\ell)}$ to disk
- **end for**
- **for all** ℓ ∈ {1, … , *L*} **do**
    - receive ${\overline{\mathit{G}}}_{1}^{(\ell)}$ from master process
    - save ${\overline{\mathit{G}}}_{1}^{(\ell)}$ to disk
- **end for**
- $\widehat{\overline{\overline{\overline{\mathit{G}}}}}_{0} \leftarrow \frac{1}{L} \sum_{\ell=1}^{L} \overline{\mathit{G}}_{0}^{(\ell)}$
- $\widehat{\overline{\overline{\overline{\mathit{G}}}}}_{1} \leftarrow \frac{1}{L} \sum_{\ell=1}^{L} \overline{\mathit{G}}_{1}^{(\ell)}$
- $\widehat{\mathit{S}} \leftarrow \widehat{\overline{\overline{\overline{\mathit{G}}}}}_{1} - \widehat{\overline{\overline{\overline{\mathit{G}}}}}_{0}$
- **for all** *j* ∈ {1, … , *J*} **do**
    - $\left[\widehat{\overline{\overline{\mathbf{K}}}}{}_{\mathit{G}}^{\text{noise}}\right]^{(j)} \leftarrow \text{diag}\left(\sigma_1^2 + \widehat{\overline{\overline{\overline{g}}}}{}_{0,1}^{(j)}, \dots, \sigma_M^2 + \widehat{\overline{\overline{\overline{g}}}}{}_{0,M}^{(j)}\right)$
- **end for**
- **for all** ℓ ∈ {1, … , *L*} and *j* ∈ {1, … , *J*} **do**
    - $\left[\mathbf{R}\right]_{\ell}^{(j)} \leftarrow \frac{1}{\sqrt{L-1}} \left[\Delta\overline{\mathit{G}}_{0}^{(\ell)}\right]^{(j)}$
- **end for**
- $\mathbf{Q} \leftarrow \mathbf{I}_L + \mathbf{R}^{\mathrm{T}} \left[\widehat{\overline{\overline{\mathbf{K}}}}{}_{\mathit{G}}^{\text{noise}}\right]^{-1} \mathbf{R}$
- **Q**^{−1} ← inverse(**Q**)
- **while** message "*end of computation*" not received **do**
    - receive "*compute* [*Ŵ*_{Hot}]^{(j)}" from master process
    - [*Ŵ*_{Hot}]^{(j)} ← **0**_{M×1}
    - **for all** *j′* ∈ {1, … , *J*} **do**
        - **T**^{(j, j′)} ← (*j*, *j′*)-th block of $\left[\widehat{\overline{\overline{\mathbf{K}}}}{}_{\mathit{G}}^{\text{noise}}\right]^{-1}\left\{\mathbf{I}_{MJ} - \mathbf{R}\mathbf{Q}^{-1}\mathbf{R}^{\mathrm{T}}\left[\widehat{\overline{\overline{\mathbf{K}}}}{}_{\mathit{G}}^{\text{noise}}\right]^{-1}\right\}$
        - [*Ŵ*_{Hot}]^{(j)} ← [*Ŵ*_{Hot}]^{(j)} + **T**^{(j, j′)} **Ŝ**^{(j′)}
    - **end for**
    - send [*Ŵ*_{Hot}]^{(j)} to master process
- **end while**

In the implementation, we took advantage of the Cell BE processor and its SIMD capabilities. Floating-point values were stored in double precision, and SIMD instructions were used for the computation of the blocks **T**^{(j, j′)} of the inverse covariance matrix in (12).

As an example, we developed the spatio-temporal Hotelling observer for adaptive optics (AO) images [14, 15]. The simulation code we used is AOTools (webpage: http://www.tosc.com/software/software.html). The atmospheric thickness was set to 24 km and, in order to compute the aberrated wavefront, we assumed the frozen-flow hypothesis of atmospheric turbulence. The turbulence was computed according to the modified Hufnagel-Valley *C*_{n}^{2} profile, given by [40]:

$$C_n^2(h) = 8.16 \times 10^{-54}\, h^{10}\, e^{-h/1000} + 3.02 \times 10^{-17}\, e^{-h/1500} + 1.90 \times 10^{-15}\, e^{-h/100},$$

in which the altitude *h* is expressed in meters and the units of *C*_{n}^{2} are meters to the −2/3 power. We assumed *C*_{n}^{2}(*h*) negligible for *h* ≥ 24 km. The Fried parameter *r*_{0} [41] for the phase screens was 0.30 m at the wavelength λ = 500 nm. Our simulation also took into account effects such as scintillation and anisoplanatism [15]. The wind speed was 1.25 m/s. The telescope we simulated had a circular aperture of diameter 5 m, and the diameter of the central obscuration due to the secondary mirror was 0.50 m. The secondary mirror was supported by three arms. For the estimation of the template vector *Ŵ*_{Hot} in (10), we simulated *L* = 512 sequences containing *J* = 25 images each of size 64 × 64 (pixel size 5.96 µm). The wavefront sensor apparatus of many AO systems includes a lenslet array; in our simulation, we used a 32 × 32 lenslet array of side length 0.02 m. The total power entering the telescope was equally split between the wavefront sensor and the science camera by a 50/50 beamsplitter. The system was assumed ideal, with an efficiency of 1 (i.e., no losses), and the AO loop ran at a speed of 1 kHz. The quality of the AO correction corresponded to an average Strehl ratio of about 0.67.

We applied the spatio-temporal Hotelling observer to the detection of dim planets orbiting a star (assuming that the star and the planet lie in the same isoplanatic patch), and we simulated data sets for both hypotheses *H*_{0} and *H*_{1}. The apparent magnitude of the star was *m* = 6, and the difference in apparent magnitude between the planet and the star was Δ*m* = 16.74. More specifically, we simulated *L* = 512 sequences for each hypothesis and, for each sequence, we used *J* = 25 frames containing *M* = 64^{2} pixels each. The simulation was set up to mimic an astronomical observation; for example, we used typical values for the readout noise as found in [42]. The exposure time was 0.1 ms. The data used to estimate the template vector *Ŵ*_{Hot} were noiseless, and no noise was considered in the wavefront sensor. By contrast, the data on which the detection task was performed were noisy and were obtained with different phase screens.

We considered the SKE/BKE/PKS case, and we compared the performance of the spatio-temporal Hotelling observer to that of the purely spatial Hotelling observer *t*_{Hot}(*g*) and the purely spatial matched filter observer [43, 44]. For each observer, we computed test-statistic values ${t}_{0}^{(i)}$ on *n* signal-absent sequences and ${t}_{1}^{(i)}$ on *n* signal-present sequences, and we estimated

$$\text{TPF}(\tau) = \frac{\left|\left\{ i \text{ such that } t_1^{(i)} > \tau,\ i = 1, \dots, n \right\}\right|}{n}, \qquad \text{FPF}(\tau) = \frac{\left|\left\{ i \text{ such that } t_0^{(i)} > \tau,\ i = 1, \dots, n \right\}\right|}{n},$$

in which the notation |*S*| stands for the number of elements of the set *S*, and we varied the value of τ to obtain ROC curves. The ROC curves are reported in Fig. 1, and the corresponding values of the AUC, the standard deviation σ of the AUC, and the SNR are reported in Table 2. The SNR for the three observers was computed according to (2), in which the conditional means ⟨·⟩ and variances Var{·} were replaced by the sample means and sample variances of the ${t}_{0}^{(i)}$ and ${t}_{1}^{(i)}$. The values of σ were computed as described in [45].
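The sample-based SNR evaluation of (2) can be sketched with synthetic test statistics; the Gaussian values below are stand-ins for illustration, not the actual simulation outputs:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4000
# Stand-in test-statistic samples under the two hypotheses.
t0 = rng.normal(0.0, 1.0, n)   # signal absent
t1 = rng.normal(2.0, 1.0, n)   # signal present

# Eq. (2), with conditional means and variances replaced by their
# sample estimates over the n sequences per hypothesis.
snr = (t1.mean() - t0.mean()) / np.sqrt(0.5 * t1.var(ddof=1) + 0.5 * t0.var(ddof=1))
```

With unit-variance statistics separated by 2.0, the sample SNR should be close to 2.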

The results reported in Fig. 1 and Table 2 confirm the superiority of the spatio-temporal Hotelling observer with respect to the spatial Hotelling observer and the matched filter observer. The results of Fig. 1 complement those reported in [25]. Indeed, in [25], we compared the spatial Hotelling observer with current techniques used in astronomy for point-source detection, and we noted that the spatial Hotelling observer outperforms popular detection algorithms, such as [46]. In this paper, we showed that short-exposure images retain temporal information, which increases detection performance. We see that the spatio-temporal Hotelling observer outperforms current long-exposure detection algorithms as well.

In this paper, statistical decision theory was rigorously applied to the problem of signal detection for spatio-temporal data. Three sources of randomness were initially considered: randomness in the object, randomness in the residual point spread function, and randomness due to measurement noise in the detector array. However, for simplicity, we assumed the signal in the object to be nonrandom and at a known location, and the background constant and known. We noted that, in many applications of interest, a complete description of the statistics of the data is not available. Only the first and second moments of the data, however, are required to compute the Hotelling observer. We remarked on the importance of the temporal correlations between pixels in the temporal data. Indeed, the Hotelling observer was applied to the whole sequence of temporally-related images, rather than to their average.

In some cases, such as adaptive optics systems, a complete analytical study of the first two moments of the data might be complicated. Therefore, we proposed to estimate the means and covariance matrices from simulated data. We implemented an algorithm for the computation of the spatio-temporal Hotelling observer and ran it on a cluster of Sony PLAYSTATION 3 consoles. Our implementation took advantage of the computational capabilities of the Cell Broadband Engine Architecture (Cell BE) processor, which equips every Sony PLAYSTATION 3 unit in our cluster. Thanks to the matrix-inversion lemma, the problem of computing the product of the inverse of a large covariance matrix with the signal was recast as matrix multiplications involving the inverse of a much smaller matrix.
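The dimensionality reduction afforded by the matrix-inversion lemma can be sketched as follows. Assuming (as an illustration, not the paper's exact covariance decomposition) a model of the form K = σ²I + DDᵀ/(N − 1), where the columns of the M × N matrix D are the deviations of N simulated samples from their mean and N ≪ M, the Woodbury identity replaces the M × M solve by an N × N one:

```python
import numpy as np

rng = np.random.default_rng(2)

M, N = 2000, 100        # data dimension M, number of simulated samples N
sigma2 = 0.5            # hypothetical detector-noise variance (assumed value)

# Columns of D are sample deviations g_i - g_bar, so the covariance model
# is K = sigma2*I + D D^T / (N-1).
D = rng.normal(size=(M, N))
s = rng.normal(size=M)  # signal vector

# Matrix-inversion (Woodbury) lemma: instead of inverting the M x M matrix
# K, solve an N x N system:
#   K^{-1} s = ( s - D [ (N-1)*sigma2*I + D^T D ]^{-1} D^T s ) / sigma2
small = (N - 1) * sigma2 * np.eye(N) + D.T @ D
x = (s - D @ np.linalg.solve(small, D.T @ s)) / sigma2

# Check against the direct (expensive) M x M computation on this example.
K = sigma2 * np.eye(M) + (D @ D.T) / (N - 1)
print(np.allclose(K @ x, s))
```

The cost drops from O(M³) to O(MN² + N³), which is what makes the computation tractable when M is the full spatio-temporal dimension and N is the (much smaller) number of simulated samples.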

Research concerning an analytical expression for the PSF covariance matrix is currently underway for the case of an ideal thin lens with a weak Gaussian phase perturbation in the pupil. This model might be appropriate for high-performance adaptive optics systems, for which the residual perturbation can be shown to be Gaussian and weak. Such a result would be of great importance, as it would eliminate the need for simulation in estimating the PSF covariance matrix. If an analytical expression for the data covariance matrix is available, other methods for computing the Hotelling template vector, based, for example, on the Landweber algorithm or on a Neumann series expansion, could be investigated as well [21]. In addition, it would be interesting to find an expression for the probability density function of the data. With such a density, the likelihood ratio could be computed and compared with the Hotelling observer. We expect the two methods to deliver similar detection performance because, for large mean values, Poisson random variables are very well approximated by Gaussian random variables.
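To make the iterative alternative concrete, here is a minimal Landweber sketch for obtaining the Hotelling template as the solution of Kw = Δs̄ without ever forming K⁻¹. The small synthetic positive-definite K, the step size, and the iteration count are illustrative assumptions; with a real data covariance, each iteration would only require applying K to a vector.

```python
import numpy as np

rng = np.random.default_rng(3)

# Small synthetic symmetric positive-definite "covariance" K and signal.
M = 50
A = rng.normal(size=(M, M))
K = A @ A.T / M + np.eye(M)
delta_s = rng.normal(size=M)

# Landweber iteration for w = K^{-1} delta_s:
#   w_{k+1} = w_k + alpha * (delta_s - K w_k),
# which converges for 0 < alpha < 2 / lambda_max(K).
alpha = 1.0 / np.linalg.norm(K, 2)   # safely below 2 / lambda_max
w = np.zeros(M)
for _ in range(5000):
    w = w + alpha * (delta_s - K @ w)

print(np.allclose(K @ w, delta_s, atol=1e-6))
```

Truncating the iteration early also acts as a regularizer, which can be useful when K is estimated from a limited number of samples and is therefore ill-conditioned.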

In this study, we relied on simulation code to estimate the mean and covariance of the data. Differences between the actual mean and covariance of real data and those used in the detection task can arise from two sources: errors in the simulation code, or sampling errors due to estimating the mean and covariance from a finite number of simulated sample images. In this work and in related previous studies [25,47], we have investigated the latter point in great detail. The former point, the effect of model errors on detection performance, has not been investigated for the spatio-temporal Hotelling observer (implemented for the first time in this paper), but it has been studied in the medical literature for purely spatial Hotelling observers [21,48]. The general conclusion is that even crude modeling of the covariance affords a demonstrable improvement in detection performance, though more accurate models are of course preferable. This issue will be discussed in a separate paper.

This work was supported by NIH Grants 5 R37 EB000803-18 and 5 P41 EB002035-10.

**OCIS codes:** (000.5490) Probability theory, stochastic processes, and statistics; (030.4280) Noise in imaging systems; (110.1080) Active or adaptive optics; (110.2970) Image detection systems; (110.3000) Image quality assessment; (110.4155) Multiframe image processing; (200.4960) Parallel processing; (330.1880) Detection.

1. Green DM, Swets JA. Signal Detection Theory and Psychophysics. New York, NY: John Wiley and Sons, Inc.; 1966.

2. Van Trees HL. Detection, Estimation, and Modulation Theory, Part III: Radar-Sonar Signal Processing and Gaussian Signals in Noise. New York, NY: John Wiley and Sons, Inc.; 2001.

3. Myers KJ, Barrett HH, Borgstrom MC, Patton DD, Seeley GW. Effect of Noise Correlation on Detectability of Disk Signals in Medical Imaging. J. Opt. Soc. Am. A. 1985;2:1752–1759. [PubMed]

4. Starr SJ, Metz CE, Lusted LB, Goodenough DJ. Visual detection and localization of radiographic images. Radiology. 1975;116:533–538. [PubMed]

5. Hughes DW. Nonsolar Planets and Their Detection. Nature. 1979;279:579. [PubMed]

6. Tamura M. Extra-solar Planet Detection. Viva Origino. 2002;30:157–161.

7. Berger JO. Statistical Decision Theory, Foundations, Concepts, and Methods. New York, NY: Springer-Verlag; 1980.

8. Duda RO, Hart PE, Stork DG. Pattern Classification. 2nd ed. New York, NY: John Wiley and Sons, Inc.; 2001.

9. Melsa JL, Cohn DL. Decision and Estimation Theory. New York, NY: McGraw-Hill; 1978.

10. McDonough RN, Whalen AD. Detection of Signal in Noise. San Diego, CA: Academic Press; 1995.

11. Park S, Clarkson E, Kupinski MA, Barrett HH. Efficiency of the human observer detecting random signals in random backgrounds. J. Opt. Soc. Am. A. 2005;22:3–16. [PMC free article] [PubMed]

12. Hobson MP, McLachlan C. A Bayesian Approach to Discrete Object Detection in Astronomical Data Sets. Mon. Not. R. Astron. Soc. 2003;338:765–784.

13. Kasdin NJ, Braems I. Linear and Bayesian Planet Detection Algorithms for the *Terrestrial Planet Finder*. Astrophys. J. 2006;646:1260–1274.

14. Babcock HW. The Possibility of Compensating Astronomical Seeing. Publ. Astron. Soc. Pac. 1953;65:229–236.

15. Tyson RK. Adaptive Optics Engineering Handbook. New York, NY: Marcel Dekker; 2000.

16. Akbarpour R, Friedman SN, Siewerdsen JH, Neary JD, Cunningham IA. Signal and noise transfer in spatiotemporal quantum-based imaging systems. J. Opt. Soc. Am. A. 2007;24:B151–B164. [PubMed]

17. Yadava GK, Rudin S, Kuhls-Gilchrist AT, Bednarek DR. Generalized Objective Performance Assessment of a New High-Sensitivity Microangiographic Fluoroscopic (HSMAF) Imaging System. In: Hsieh J, Samei E, editors. Medical Imaging 2008: Physics of Medical Imaging. vol. 6913. 2008. p. 69130U. (Proc. SPIE) [PMC free article] [PubMed]

18. Barrett HH, Abbey CK, Clarkson E. Objective Assessment of Image Quality. III. ROC Metrics, Ideal Observers, and Likelihood-generating Functions. J. Opt. Soc. Am. A. 1998;15:1520–1535. [PubMed]

19. Hanley JA, McNeil BJ. The Meaning and Use of the Area Under a Receiver Operating Characteristic (ROC) Curve. Radiology. 1982;143:29–36. [PubMed]

20. Park SH, Goo JM, Jo C-H. Receiver Operating Characteristic (ROC) Curve: Practical Review for Radiologists. Korean J. Radiol. 2004;5:11–18. [PMC free article] [PubMed]

21. Barrett HH, Myers KJ. Foundations of Image Science. Hoboken, NJ: Wiley-Interscience; 2004.

22. Hotelling H. The Generalization of Student’s Ratio. Ann. Math. Stat. 1931;2:360–378.

23. Barrett HH, Myers KJ, Devaney N, Dainty JC. Objective Assessment of Image Quality: IV. Application to Adaptive Optics. J. Opt. Soc. Am. A. 2006;23:3080–3105. [PMC free article] [PubMed]

24. Clarkson E. Estimation receiver operating characteristic curve and ideal observers for combined detection/estimation tasks. J. Opt. Soc. Am. A. 2007;24:B91–B98. [PMC free article] [PubMed]

25. Caucci L, Barrett HH, Devaney N, Rodríguez JJ. Application of the Hotelling and ideal observers to detection and localization of exoplanets. J. Opt. Soc. Am. A. 2007;24:B13–B24. [PMC free article] [PubMed]

26. Burke D, Devaney N, Gladysz S, Barrett HH, Whitaker MK, Caucci L. Optimal linear estimation of binary star parameters. In: Hubin N, Max CE, Wizinowich PL, editors. Adaptive Optics Systems. vol. 7015. 2008. p. 70152J. (Proc. SPIE)

27. Barrett HH, Myers KJ, Gallas BD, Clarkson E, Zhang H. Megalopinakophobia: Its symptoms and cures. In: Antonuk LE, Yaffe MJ, editors. Medical Imaging 2001: Physics of Medical Imaging. vol. 4320. 2001. pp. 299–307. (Proc. SPIE)

28. Tylavsky DJ, Sohie GRL. Generalization of the Matrix Inversion Lemma. Proc. IEEE. 1986;74:1050–1052.

29. Bartlett MS. An Inverse Matrix Adjustment Arising in Discriminant Analysis. Ann. Math. Stat. 1951;22:107–111.

30. Sherman J, Morrison WJ. Adjustment of an Inverse Matrix Corresponding to a Change in One Element of a Given Matrix. Ann. Math. Stat. 1950;21:124–127.

31. Henderson HV, Searle SR. On Deriving the Inverse of a Sum of Matrices. SIAM Rev. 1981;23:53–60.

32. Kahle JA, Day MN, Hofstee HP, Johns CR, Maeurer TR, Shippy D. Introduction to the Cell Multiprocessor. IBM J. Res. Dev. 2005;49:589–604.

33. Pham D, Aipperspach T, Boerstler D, Bolliger M, Chaudhry R, Cox D, Harvey P, Hofstee H, Johns C, Kahle J, Kameyama A, Keaty J, Masubuchi Y, Pham M, Pille J, Posluszny S, Riley M, Stasiak D, Suzuoki O, Takahashi M, Warnock J, Weitzel S, Wendel D, Yazawa K. Overview of the architecture, circuit design, and physical implementation of a first-generation Cell Processor. IEEE J. Solid-State Circuits. 2006;41:179–196.

34. Hofstee P. Tech. rep. Riverton, NJ: IBM Corporation; 2005. Introduction to the Cell Broadband Engine.

35. Williams S, Shalf J, Oliker L, Kamil S, Husbands P, Yelick K. Proceedings of the 3rd conference on Computing Frontiers. New York, NY: ACM; 2006. The potential of the Cell Processor for scientific computing; pp. 9–20.

36. Bader DA, Agarwal V, Madduri K, Kang S. High performance combinatorial algorithm design on the Cell Broadband Engine processor. Parallel Comput. 2007;33:720–740.

37. Benthin C, Wald I, Scherbaum M, Friedrich H. Ray tracing on the Cell Processor; IEEE Symposium on Interactive Ray Tracing; 2006. pp. 15–23.

38. Sakamoto M, Murase M. Parallel implementation for 3-D CT image reconstruction on Cell Broadband Engine™. IEEE International Conference on Multimedia and Expo; 2007. pp. 276–279.

39. Kachelrieß M, Knaup M, Bockenbach O. Hyperfast parallel-beam and cone-beam backprojection using the Cell general purpose hardware. Med. Phys. 2007;34:1474–1486. [PubMed]

40. Parenti RR, Sasiela RJ. Laser-guide-star systems for astronomical applications. J. Opt. Soc. Am. A. 1994;11:288–309.

41. Fried DL. Statistics of a Geometric Representation of Wavefront Distortion. J. Opt. Soc. Am. 1965;55:1427–1435.

42. Fitzgerald MP, Graham JR. Speckle Statistics in Adaptively Corrected Images. Astrophys. J. 2006;637:541–547.

43. Turin GL. An introduction to matched filters. IRE Trans. Inf. Theory. 1960;6:311–329.

44. Middleton D. An Introduction to Statistical Communication Theory. Piscataway, NJ: IEEE Press; 1960.

45. Gallas BD. One-shot estimate of MRMC Variance: AUC. Acad. Radiol. 2006;13:353–362. [PubMed]

46. Bertin E, Arnouts S. SExtractor: Software for source extraction. Astron. Astrophys. Suppl. Ser. 1996;117:393–404.

47. Caucci L. Master’s thesis. University of Arizona; 2007. Point Detection and Hotelling Discriminant: An Application in Adaptive Optics.

48. Abbey CK, Barrett HH. Human- and model-observer performance in ramp-spectrum noise: effects of regularization and object variability. J. Opt. Soc. Am. A. 2001;18:473–488. [PMC free article] [PubMed]
