

Proc SPIE Int Soc Opt Eng. Author manuscript; available in PMC 2010 September 29.

Published in final edited form as:

Proc SPIE Int Soc Opt Eng. 2006 January 1; 6272: 62721W.

doi: 10.1117/12.672798. PMCID: PMC2947459

NIHMSID: NIHMS233983

Harrison H. Barrett, College of Optical Sciences and Department of Radiology, University of Arizona, Tucson AZ 85724;

Further author information: Harrison H. Barrett, Radiology Research Laboratory, Arizona Health Sciences Center, Tucson, AZ 85724. Email: barrett@radiology.arizona.edu


**Abstract**

In objective or task-based assessment of image quality, figures of merit are defined by the performance of some specific observer on some task of scientific interest. This methodology is well established in medical imaging but is just beginning to be applied in astronomy. In this paper we survey the theory needed to understand the performance of ideal or ideal-linear (Hotelling) observers on detection tasks with adaptive-optical data. The theory is illustrated by discussing its application to detection of exoplanets from a sequence of short-exposure images.

**1. INTRODUCTION**

Adaptive optical (AO) systems have reached a high level of sophistication, and many ingenious system configurations and data-processing algorithms have been proposed. Methods for specifying system performance and image quality, on the other hand, seldom go beyond Strehl ratio. Indeed, it is often said that the goal of adaptive optics is to improve the Strehl ratio.

In this paper we take a different view, one adopted from the medical-imaging literature. In this view, the goal of a medical imaging system is to perform a specific task of clinical interest, and the system performance is determined by how well the task can be performed. Similarly, astronomical AO systems are used for specific tasks of scientific interest, and it is reasonable to define image quality in terms of performance on these tasks. This approach has proven valuable for evaluating and optimizing radiological imaging systems,^{1} and it is the premise of this paper that it will also be useful in adaptive optics.

In both fields, the tasks of interest are either *classification* or *estimation*. In a classification task, the goal is to assign the object that produced the image to one of two or more classes or hypotheses. In an estimation task, the goal is to determine numerical values of one or more parameters describing the object. The method by which the task is performed is called the *observer*.

An example of a classification task in medical imaging is detection of a tumor in a mammogram. The observer for this task can be either a human (a radiologist) or an automated detection algorithm, and in either case task performance can be specified by an ROC (receiver operating characteristic) curve, described briefly in Sec. 3.

An example of an estimation task in medical imaging is determination of the uptake of a targeted radiotracer that binds preferentially to a specific neuroreceptor. The data set for this task is a reconstructed tomographic image obtained by PET (positron emission tomography) or SPECT (single-photon emission computed tomography), and various mathematical observers are used to determine the uptake from the reconstructed image. Performance in this case can be defined either by accuracy of the estimate (including both bias and variance) or by the usefulness of the estimated value in performing a subsequent diagnostic classification.

Detection of a faint object such as an asteroid on a cluttered star field has much in common with detection of a small tumor in a mammogram, where normal anatomical structures obscure the tumor in the same way as background astronomical objects obscure the asteroid. Similarly, detection of an exoplanet is analogous to detection of a metastatic bone tumor on a rib in a SPECT image when the tumor is adjacent to the heart; like the host star in astronomy, the heart appears very bright since it is large and emits photons of the same energy as does the tumor, and it can be challenging to separate the two in a noisy, low-resolution image.

A common estimation task in astronomy is determination of the brightness of a star. Known as photometry, this task is completely analogous to assaying the local uptake of a radiotracer in SPECT or PET imaging.

For both estimation and classification tasks, calculation of task performance (in either medicine or astronomy) requires detailed knowledge of all statistical effects that influence the image or the observer. In radiology, the images are random because of photon noise from the discrete x rays or gamma rays, but also because the objects themselves are random. The photon noise in raw projection data is well described by independent Poisson statistics, but tomographic reconstruction algorithms lead to spatial correlations and more complicated image statistics. Object randomness includes normal anatomical structures as well as randomness in the tumor or other object of diagnostic interest, and these stochastic processes must also be transformed through the image-forming elements and any post-processing such as tomographic reconstruction. The images are said to be doubly stochastic^{1} since there is randomness from both object variability and measurement noise.

In AO systems, there is an additional source of randomness, the point-spread function (PSF) of the imaging system itself. If the adaptive optics functioned perfectly, this PSF would be a nonrandom Airy pattern, but residual speckle and unknown residual aberrations make it imperative to treat the PSF as random, so AO images are triply stochastic. Even though one of the three stochastic components, the Poisson noise from discrete photoelectric events, is statistically independent from measurement to measurement, the contributions from object and PSF randomness produce correlations and complicated statistical distributions.

In a paper recently submitted for publication,^{2} we presented expressions for the mean data and spatiotemporal covariance matrices for general AO systems with all three sources of randomness. We then related these expressions to three specific tasks of interest in astronomical adaptive optics: (1) detection of a faint point object on a complex background with random spatial and temporal structure; (2) detection of a faint companion at an unknown location around a bright star, and (3) photometry of a given star in a crowded field. In all three cases, expressions were derived for the form of the optimal linear observer for performing the task as well as for the final performance achievable. Practical ways of calculating or estimating these covariance components were discussed, and ways of using them to compute task-based measures of image quality were presented.

The present paper concentrates on detection tasks, but there is a brief discussion of joint detection and estimation. It expands considerably on the analysis of Ref. ^{2} by looking at the Bayesian ideal observer, which almost always requires nonlinear operations on the data. The example of detection of an exoplanet from a sequence of short-exposure images is treated in some detail. Previous work on Bayesian detection of exoplanets^{3}^{, }^{4} has considered only a single long-exposure image and a much simpler stochastic model.

Section 2 introduces the notation and some important definitions relating to triply stochastic spatiotemporal data, and Sec. 3 briefly reviews some key concepts from statistical decision theory. Section 4 summarizes results from ref. ^{2} on triply stochastic covariance matrices for AO systems and adds some discussion of probability density functions. Section 5 shows how these ideas can be applied to detection of an exoplanet.

**2. DEFINITIONS AND NOTATION**

This section introduces the notation that will be used in this paper, reviews the basic principles of multiply stochastic data, and discusses the nontrivial concept of linearity.

Consider a digital imaging system viewing a time-independent object *f*(**r**), where **r** = (*x*, *y*) is a 2D position vector. We shall often think of this object as a vector in a Hilbert space, in which case we shall denote it as **f**.

A single digital image consists of *M* pixel values, {*g _{m}*, *m* = 1, …, *M*}, which we arrange into an *M* × 1 random vector **g**. The overall mean of the data is defined by

$$\overline{\overline{\mathbf{g}}}\equiv \int d\mathbf{g}\phantom{\rule{0.16667em}{0ex}}\text{pr}(\mathbf{g})\mathbf{g},$$

(2.1)

where pr(**g**) is the overall *M*-dimensional probability density function (PDF) for the data, and the integral runs over all values that can be assumed by each of the components of **g**.

We can indicate the doubly stochastic nature of the average by writing

$$\text{pr}(\mathbf{g})=\int d\mathbf{f}\phantom{\rule{0.16667em}{0ex}}\text{pr}(\mathbf{g}\mid \mathbf{f})\phantom{\rule{0.16667em}{0ex}}\text{pr}(\mathbf{f}),$$

(2.2)

where pr(**f**) is an abstract way of indicating the PDF on all parameters needed to express the statistics of the random process *f*(**r**). Thus,

$$\overline{\overline{\mathbf{g}}}=\int d\mathbf{g}\int d\mathbf{f}\phantom{\rule{0.16667em}{0ex}}\text{pr}(\mathbf{g}\mid \mathbf{f})\phantom{\rule{0.16667em}{0ex}}\text{pr}(\mathbf{f})\mathbf{g}\equiv {\langle {\langle \mathbf{g}\rangle}_{\mathbf{g}\mid \mathbf{f}}\rangle}_{\mathbf{f}}.$$

(2.3)

We can also define a conditional average over the measurement noise alone by

$$\overline{\mathbf{g}}(\mathbf{f})\equiv {\langle \mathbf{g}\rangle}_{\mathbf{g}\mid \mathbf{f}}=\int d\mathbf{g}\phantom{\rule{0.16667em}{0ex}}\text{pr}(\mathbf{g}\mid \mathbf{f})\mathbf{g}.$$

(2.4)

The imaging system is said to be linear if $\overline{\mathbf{g}}(\mathbf{f})$ is a linear function of **f**, in which case the components of $\overline{\mathbf{g}}$ can be expressed as^{1}

$${\overline{g}}_{m}={\int}_{\infty}{d}^{2}r{h}_{m}(\mathbf{r})f(\mathbf{r}),$$

(2.5)

where the kernel *h _{m}*(**r**) describes the sensitivity of the *m*^{th} measurement to the object at point **r**, and the integral runs over the infinite 2D plane.

The overall average image, in component form, is given by

$${\overline{\overline{g}}}_{m}={\int}_{\infty}{d}^{2}r{h}_{m}(\mathbf{r})\overline{f}(\mathbf{r}),$$

(2.6)

where $\overline{f}(\mathbf{r})$ is the ensemble-average object.
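The averages in (2.1)–(2.6) are easy to check numerically once the system is discretized. In the sketch below, a Monte Carlo average over Poisson noise and object randomness reproduces the overall mean image $\overline{\overline{\mathbf{g}}}$ obtained by applying the system matrix to the mean object; the Gaussian blur matrix, the gamma object model and all sizes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete version of Eqs. (2.5)-(2.6): the conditional mean image is
# g_bar = H f, and the overall mean is H applied to the mean object.
N = 64
x = np.arange(N)
H = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 2.0) ** 2)
H /= H.sum(axis=1, keepdims=True)      # rows are sampled kernels h_m(r)

f_bar = 10.0 + 5.0 * np.sin(2 * np.pi * x / N) ** 2   # ensemble-average object

# Doubly stochastic data, Eq. (2.3): a random object (gamma, mean f_bar),
# then Poisson noise about the conditional mean H f.
n_trials = 20000
F = rng.gamma(4.0, f_bar / 4.0, size=(n_trials, N))   # E[F] = f_bar
G = rng.poisson(F @ H.T)
g_doublebar_mc = G.mean(axis=0)

g_doublebar = H @ f_bar                # Eq. (2.6)
print(np.max(np.abs(g_doublebar_mc - g_doublebar)))
```

The Monte Carlo and analytic overall means agree to within sampling error, illustrating that the iterated average in (2.3) collapses to (2.6) for a linear system.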

Now consider an AO system that produces a temporal sequence of *J* images while viewing a time-dependent incoherent object *f*(**r**, *t*). The *j*^{th} image in the sequence is indicated by the *M* × 1 vector **g**^{(j)}, and the full set of *J* frames is denoted **G**.

By analogy to the doubly stochastic case, we shall use a single overbar to indicate an average over the measurement noise alone, conditional on a specific object and PSF. For incoherent imaging, this average is a linear function of the spatiotemporal object, given by

$${\overline{g}}_{m}^{(j)}=\int {d}^{2}{r}_{d}{d}_{m}({\mathbf{r}}_{d}){\int}_{{t}_{j}}^{{t}_{j}+T}dt\int {d}^{2}rp({\mathbf{r}}_{d},\mathbf{r},t)f(\mathbf{r},t),$$

(2.7)

where **r**_{d} is a 2D position vector in the detector plane, *d _{m}*(**r**_{d}) is the response function of the *m*^{th} detector pixel, *T* is the exposure time of a single frame, *t _{j}* is the start time of the *j*^{th} frame, and *p*(**r**_{d}, **r**, *t*) is the instantaneous PSF of the atmosphere, telescope and AO system.

As in ref. ^{2}, we shall assume that the object is a slowly varying function of time, essentially constant over one frame of the science camera, in which case (2.7) becomes

$${\overline{g}}_{m}^{(j)}={\int}_{\infty}{d}^{2}r{h}_{m}^{(j)}(\mathbf{r}){f}^{(j)}(\mathbf{r}),\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}m=1,\dots ,M,j=1,\dots ,J,$$

(2.8)

where *f*^{(j)}(**r**) = *f*(**r**, *t _{j}*), the kernel for the *j*^{th} frame is

$${h}_{m}^{(j)}(\mathbf{r})=\int {d}^{2}{r}_{d}{d}_{m}({\mathbf{r}}_{d}){p}^{(j)}({\mathbf{r}}_{d},\mathbf{r}),$$

(2.9)

and

$${p}^{(j)}({\mathbf{r}}_{d},\mathbf{r})={\int}_{{t}_{j}}^{{t}_{j}+T}\mathit{dtp}({\mathbf{r}}_{d},\mathbf{r},t).$$

(2.10)

The sequence of object functions, {*f*^{(j)}(**r**), *j* = 1, …, *J*}, will be denoted **F**, and the sequence of PSFs, {*p*^{(j)}(**r**_{d}, **r**), *j* = 1, …, *J*}, will be denoted **P**.

A second overbar will be used to indicate an average over the random PSFs **P** conditional on **F**. In component form,

$${\overline{\overline{g}}}_{m}^{(j)}={\overline{\overline{g}}}_{m}^{(j)}({\mathbf{f}}^{(j)})=\int {d}^{2}r{\overline{h}}_{m}^{(j)}(\mathbf{r}){f}^{(j)}(\mathbf{r}),$$

(2.11)

where the average kernel is related to the average incoherent PSF by

$${\overline{h}}_{m}^{(j)}(\mathbf{r})=\int {d}^{2}{r}_{d}{d}_{m}({\mathbf{r}}_{d}){\overline{p}}^{(j)}({\mathbf{r}}_{d},\mathbf{r}).$$

(2.12)

The final average, over the object variability, yields

$${\overline{\overline{\overline{g}}}}_{m}^{(j)}=\int {d}^{2}r{\langle {\overline{h}}_{m}^{(j)}(\mathbf{r}){f}^{(j)}(\mathbf{r})\rangle}_{\mathbf{F}}=\int {d}^{2}r{\overline{h}}_{m}^{(j)}(\mathbf{r}){\overline{f}}^{(j)}(\mathbf{r}),$$

(2.13)

where the second form holds only if **p**^{(}^{j}^{)} is statistically independent of **F**.

Some special cases of these averages will be discussed in Sec. 4.

**3. CONCEPTS FROM STATISTICAL DECISION THEORY**

In this section we briefly review some basic concepts from statistical decision theory, relating them specifically to triply stochastic spatiotemporal data.

In any classification task, the goal is to assign the object that produced an image to one of two or more classes. If the hypothesis that the object belongs to the *k*^{th} class is denoted *H _{k}*, the observer must decide, on the basis of the data **G**, which hypothesis is most consistent with the observations.

If we assume that each image must be assigned without equivocation either to hypothesis *H*_{0} (signal absent) or to *H*_{1} (signal present), the decision on a detection task can be made in complete generality by computing some scalar test statistic *t*(**G**) from the data; the observer then decides *H*_{1} if the test statistic is greater than a decision threshold and decides *H*_{0} otherwise. The value of the threshold controls the tradeoff between true positive decisions (correctly choosing *H*_{1}) and false positive decisions (choosing *H*_{1} when *H*_{0} is true). In signal-detection problems, the true-positive fraction (TPF) is called the *probability of detection*, and the false-positive fraction (FPF) is called the *false-alarm rate*.

A plot of TPF vs. FPF as the threshold is varied is called a *receiver operating characteristic* (ROC) curve. Meaningful figures of merit for binary classification include the true positive fraction at a specified false-positive fraction (the Neyman-Pearson criterion), the area under the ROC curve (AUC), and certain detectability indices derived from the ROC curve. The probability of detection alone is not a meaningful metric since it can always be made large, even unity, simply by choosing a low threshold.
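The threshold sweep described above is easy to demonstrate. In the sketch below, the test statistic is assumed normal with unit variance under both hypotheses (an illustrative choice, with separation d′ = 1.5); the ROC curve is traced by varying the threshold, and its area matches the closed form AUC = Φ(d′/√2) that holds for equal-variance normal statistics.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

# Scalar test statistic t(G) under each hypothesis (illustrative model).
t_H0 = rng.normal(0.0, 1.0, 100000)   # signal absent
t_H1 = rng.normal(1.5, 1.0, 100000)   # signal present, d' = 1.5

# Sweep the decision threshold to trace out the ROC curve.
thresholds = np.linspace(-6.0, 8.0, 400)
TPF = np.array([(t_H1 > c).mean() for c in thresholds])  # prob. of detection
FPF = np.array([(t_H0 > c).mean() for c in thresholds])  # false-alarm rate

# Area under the ROC curve by trapezoidal integration (reverse the sweep so
# that FPF is increasing).
fp, tp = FPF[::-1], TPF[::-1]
AUC = np.sum(0.5 * (tp[1:] + tp[:-1]) * np.diff(fp))

# Closed form for equal-variance normal statistics: AUC = Phi(d'/sqrt(2)).
AUC_theory = 0.5 * (1.0 + erf(1.5 / sqrt(2.0) / sqrt(2.0)))
print(AUC, AUC_theory)
```

Note that any single (TPF, FPF) pair depends on the arbitrary threshold, whereas the AUC summarizes the whole tradeoff, which is why it is a meaningful figure of merit while the probability of detection alone is not.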

In the medical imaging literature, it is common to divide detection tasks further depending on whether the background object, the signal to be detected or both are random. The most tractable – and least realistic – problem is one in which the background is specified exactly and the signal, if present, is also specified exactly; such problems are known as SKE/BKE (signal known exactly, background known exactly). The only remaining randomness in an SKE/BKE problem is the measurement noise and, of course, whether or not the signal is present. Though SKE/BKE problems are never realistic models of actual clinical tasks, we shall see that they are necessary stepping stones to the analysis of more practical problems.

In clinical practice, the background is not known. In mammography, for example, the normal anatomical structures vary randomly from patient to patient and must be treated as a spatial random process. To make any progress on this problem, we must have at least partial knowledge of the statistics of this random process, so in this case we say that the problem is BKS (background known statistically).

Similarly, in real clinical applications, the signal to be detected (e.g., a possible tumor) is also random in size, shape and location. Again, we must have at least partial statistical knowledge of these parameters in order to define the signal part of the image, so this situation is referred to as SKS (signal known statistically).

These distinctions are needed in astronomy as well. In planet detection, for example, the planet brightness and location are not known ahead of time, so the problem is inherently SKS. Sky background and other astronomical objects contribute to a random background, so real astronomical problems are also BKS.

In astronomy, however, there is the additional complication of the random PSF. In a perfect AO system or in a long-exposure image where the PSF fluctuations average out, the PSF is nonrandom and the problem is designated PKE (PSF known exactly). On the other hand, if the residual speckle is important as in a sequence of short-exposure images, the problem is PKS (PSF known statistically).

The ideal observer on a detection task is defined variously as one that maximizes the area under the ROC curve, maximizes the TPF at all specified FPFs or minimizes a cost function defined in terms of TPF and FPF. By any of these criteria, the test statistic used by the ideal observer is the likelihood ratio,

$$\mathrm{\Lambda}(\mathbf{G})\equiv \frac{\text{pr}(\mathbf{G}\mid {H}_{1})}{\text{pr}(\mathbf{G}\mid {H}_{0})}.$$

(3.1)

Equivalently, the ideal observer can use the logarithm of the likelihood ratio, λ(**G**) ≡ ln Λ(**G**), as the test statistic. In either case, the ideal observer computes the test statistic and compares it to a threshold, deciding that *H*_{1} is true if the statistic exceeds the threshold. Some optimization criteria fix the value of the threshold according to the prevalences of the two hypotheses or costs assigned to erroneous decisions, but in every case the ideal test statistic is the likelihood ratio or its logarithm.

In practice, the relevant probabilities pr(**G**|*H _{k}*) in (3.1) are seldom known directly. Instead, they are given by

$$\text{pr}(\mathbf{G}\mid {H}_{k})=\int d\mathbf{F}\phantom{\rule{0.16667em}{0ex}}\text{pr}(\mathbf{G}\mid \mathbf{F})\phantom{\rule{0.16667em}{0ex}}\text{pr}(\mathbf{F}\mid {H}_{k})={\langle \text{pr}(\mathbf{G}\mid \mathbf{F})\rangle}_{\mathbf{F}\mid {H}_{k}},\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}(k=0,1).$$

(3.2)

For a PKE problem, pr(**G**|**F**) describes the measurement noise, and it is easy to compute from the basic physics of the problem. For detectors with no significant readout noise, for example, pr(**G**|**F**) describes independent Poisson random variables at each pixel in each frame. With no PSF randomness and the specific object **F**, knowledge of the mean image at each pixel and frame is sufficient to determine pr(**G**|**F**).
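For such an SKE/BKE/PKE Poisson model, with known background mean b̄ under *H*_{0} and known mean b̄ + s̄ under *H*_{1}, the log-likelihood ratio has the simple closed form ln Λ(**G**) = Σ_{m} g_{m} ln(1 + s̄_{m}/b̄_{m}) − Σ_{m} s̄_{m}. A minimal sketch, with an assumed flat background and a small 10-pixel signal (both illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# SKE/BKE/PKE detection with pure Poisson noise: under H0 the mean image
# is the known background b_bar; under H1 it is b_bar + s_bar.
M = 256
b_bar = np.full(M, 50.0)
s_bar = np.zeros(M)
s_bar[100:110] = 8.0

def log_likelihood_ratio(g):
    # ln Lambda(G) for independent Poisson pixels.  The term -sum(s_bar)
    # does not depend on the data, but keeping it makes this the exact
    # log-likelihood ratio rather than a shifted version of it.
    return np.sum(g * np.log1p(s_bar / b_bar)) - s_bar.sum()

# Simulate both hypotheses; the ideal statistic separates them cleanly.
lam0 = np.array([log_likelihood_ratio(rng.poisson(b_bar)) for _ in range(5000)])
lam1 = np.array([log_likelihood_ratio(rng.poisson(b_bar + s_bar))
                 for _ in range(5000)])
print(lam0.mean(), lam1.mean())
```

Because the ideal statistic here is linear in the data, this is also a case in which the Hotelling observer discussed below coincides with the ideal observer to a good approximation.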

If the PSF is random, it is useful to write

$$\text{pr}(\mathbf{G}\mid {H}_{k})=\int d\mathbf{P}\int d\mathbf{F}\phantom{\rule{0.16667em}{0ex}}\text{pr}(\mathbf{G}\mid \mathbf{P},\mathbf{F})\phantom{\rule{0.16667em}{0ex}}\text{pr}(\mathbf{P}\mid \mathbf{F})\phantom{\rule{0.16667em}{0ex}}\text{pr}(\mathbf{F}\mid {H}_{k})={\langle {\langle \text{pr}(\mathbf{G}\mid \mathbf{P},\mathbf{F})\rangle}_{\mathbf{P}\mid \mathbf{F}}\rangle}_{\mathbf{F}\mid {H}_{k}}.$$

(3.3)

The conditioning of the average over PSF **P** on the object **F** is needed in an AO system if the control signals for the adaptive element are derived from the object of interest; if a separate guide star is used and not treated as a part of **F**, then the object and PSF are statistically independent. In either case, pr(**G**|**P**, **F**) is easily derived from the physics of the problem. Markov-chain Monte Carlo (MCMC) methods can in many cases be used to perform the two averages.^{1}
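A minimal sketch of this double average, using plain Monte Carlo sampling over (**P**, **F**) rather than MCMC (the gamma prior on the background level, the random-width Gaussian standing in for the random PSF, the signal amplitude and all other numbers are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo evaluation of Eq. (3.3): the likelihood under each hypothesis
# is pr(G | P, F) averaged over samples of the random PSF and object.
M = 64
x = np.arange(M)

def loglike(g, mu):
    # ln pr(G | P, F) for independent Poisson pixels, dropping the
    # G-only term -sum(ln g!), which cancels in the likelihood ratio.
    return np.sum(g * np.log(mu) - mu)

def mean_image(f_b, w, signal_present):
    mu = np.full(M, f_b)
    if signal_present:   # SKE: known signal flux and location
        mu = mu + 80.0 * np.exp(-0.5 * ((x - 32) / w) ** 2) / (w * np.sqrt(2 * np.pi))
    return mu

def log_marginal(g, signal_present, n=500):
    f_b = rng.gamma(20.0, 1.0, n)      # BKS prior on the background level
    w = rng.uniform(1.5, 3.0, n)       # PKS prior on the PSF width
    ll = np.array([loglike(g, mean_image(fb, wi, signal_present))
                   for fb, wi in zip(f_b, w)])
    m = ll.max()
    return m + np.log(np.mean(np.exp(ll - m)))   # stable log-mean-exp

def log_LR(g):   # log of Eq. (3.1) with both likelihoods marginalized
    return log_marginal(g, True) - log_marginal(g, False)

lam1 = [log_LR(rng.poisson(mean_image(20.0, 2.0, True))) for _ in range(15)]
lam0 = [log_LR(rng.poisson(mean_image(20.0, 2.0, False))) for _ in range(15)]
print(np.mean(lam1), np.mean(lam0))
```

Plain sampling suffices for this low-dimensional toy prior; as the paper notes, MCMC becomes the practical tool when the nuisance space is large or the likelihood is sharply peaked.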

Often the likelihood ratio is difficult to compute, and in these cases a useful alternative to the ideal observer is the ideal linear observer, called the Hotelling observer^{1}^{, }^{5}^{–}^{7} in the medical literature.

Linear observers compute linear discriminants, so the test statistic for an image sequence has the form *t*(**G**) = **W**^{t}**G**, where **W** is another image sequence called the template. The notation **W**^{t}**G** denotes a pixel-by-pixel, frame-by-frame scalar product of the template with the spatiotemporal data.

The Hotelling discriminant uses a template that maximizes a certain class separability measure.^{8} Linear test statistics are usually normally distributed by dint of the central limit theorem, and in this case maximizing this class separability is equivalent to maximizing the AUC among linear observers.

It can also be shown that the Hotelling test statistic is equal to the log-likelihood ratio if the raw data are normal with the same covariance under both hypotheses, so the Hotelling observer is identical to the ideal observer in this case and thus maximizes AUC among all observers, not just linear ones.

Computation of the Hotelling test statistic requires only the overall mean vectors and the covariance matrices of the data under the two hypotheses. The test statistic is given by

$${t}_{\mathit{Hot}}(\mathbf{G})={\mathbf{W}}^{t}\mathbf{G}={\left[{\overline{\overline{\mathbf{G}}}}_{1}-{\overline{\overline{\mathbf{G}}}}_{0}\right]}^{t}{\mathbf{K}}_{av}^{-1}\mathbf{G},\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\text{where}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}{\mathbf{K}}_{av}\equiv {\scriptstyle \frac{1}{2}}[{\mathbf{K}}_{\mathbf{G}\mid {H}_{1}}+{\mathbf{K}}_{\mathbf{G}\mid {H}_{0}}].$$

(3.4)

The inverse of the average covariance matrix is related to the familiar signal-processing operation of prewhitening, and for this reason, the Hotelling observer is sometimes called a prewhitening matched filter; unless the noise is stationary, however, the prewhitening and matched filtering cannot be carried out in the Fourier domain.

A figure of merit for the Hotelling observer is the Hotelling signal-to-noise ratio, sometimes called the Hotelling trace; it is given by

$${\text{SNR}}_{\mathit{Hot}}^{2}\equiv {\left[{\overline{\overline{\mathbf{G}}}}_{1}-{\overline{\overline{\mathbf{G}}}}_{0}\right]}^{t}{\mathbf{K}}_{av}^{-1}\left[{\overline{\overline{\mathbf{G}}}}_{1}-{\overline{\overline{\mathbf{G}}}}_{0}\right]=\text{tr}\left\{{\mathbf{K}}_{av}^{-1}\left[{\overline{\overline{\mathbf{G}}}}_{1}-{\overline{\overline{\mathbf{G}}}}_{0}\right]\phantom{\rule{0.16667em}{0ex}}{\left[{\overline{\overline{\mathbf{G}}}}_{1}-{\overline{\overline{\mathbf{G}}}}_{0}\right]}^{t}\right\},$$

(3.5)

where tr{·} denotes the trace (sum of the diagonal elements) of the matrix.
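Equations (3.4) and (3.5) translate directly into a few lines of linear algebra. In the sketch below (the Gaussian signal profile and the exponential-plus-white covariance model are illustrative assumptions), the empirical detectability of the test statistic matches SNR²_{Hot}:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hotelling observer, Eqs. (3.4)-(3.5): M-pixel data with a known mean
# signal and the same covariance K_av under both hypotheses.
M = 50
x = np.arange(M)
delta_g = 3.0 * np.exp(-0.5 * ((x - 25) / 2.0) ** 2)   # G1bar - G0bar
K_av = 4.0 * np.exp(-np.abs(x[:, None] - x[None, :]) / 5.0) + np.eye(M)

# Template W = K_av^{-1}(G1bar - G0bar); solve rather than invert.
W = np.linalg.solve(K_av, delta_g)
SNR2 = delta_g @ W                                     # Eq. (3.5)

# Monte Carlo check: t_Hot = W^t G has detectability equal to SNR2.
L = np.linalg.cholesky(K_av)
def t_samples(signal_present, n):
    G = signal_present * delta_g + rng.standard_normal((n, M)) @ L.T
    return G @ W
t0 = t_samples(0, 20000)
t1 = t_samples(1, 20000)
d2 = (t1.mean() - t0.mean()) ** 2 / (0.5 * (t1.var() + t0.var()))
print(SNR2, d2)
```

Solving the linear system instead of forming the explicit inverse is the numerically sensible choice here; for the very large *MJ* × *MJ* covariance matrices of real spatiotemporal data, iterative or low-rank methods such as those discussed in refs. ^{1} and ^{2} are needed instead.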

Computational approaches to evaluating the Hotelling test statistic and SNR are discussed in refs. ^{1} and ^{2}.

In most detection problems, we want to know not only that a signal is present but where it is. A useful alternative to the ROC curve in these cases is the LROC (localization ROC) curve, which plots the probability of detection and correct localization (within some preset tolerance) against the false-alarm rate. In the medical literature, the common figure of merit for joint detection/localization is the area under the LROC curve (ALROC).

As developed so far, neither the ideal nor the Hotelling observer gives any information about signal location, but both are readily modified to do so. A well-known way of using the likelihood for any detection problem with parameter uncertainty is the generalized likelihood ratio test (GLRT), in which the numerator in (3.1) is modified to

$$\mathrm{\Lambda}(\mathbf{G};\mathit{\theta})\equiv \frac{\text{pr}(\mathbf{G}\mid \mathit{\theta},{H}_{1})}{\text{pr}(\mathbf{G}\mid {H}_{0})},$$

(3.6)

where the vector **θ** specifies the location or any other random parameter of the signal. In a GLRT, the parameter-dependent likelihood ratio Λ(**G**; **θ**) is maximized over **θ**; the maximum value serves as the test statistic for the detection decision, and the maximizing **θ** provides an estimate of the parameter.

For the case where **θ** is signal location, Khurd and Gindi^{9} have derived the observers that maximize the area under the LROC curve; like the GLRT, the resulting strategies scan a test statistic over the candidate signal locations.

The Hotelling observer alone is ineffective with location uncertainty because the average signal,
${\overline{\overline{\mathbf{G}}}}_{1}-{\overline{\overline{\mathbf{G}}}}_{0}$ in (3.4) or (3.5), tends to a small constant, independent of location in the image, when the double-bar average includes random signal location. As with the GLRT, however, a simple modification salvages the concept. The Hotelling template can be defined for an SKE problem with a specified location. The template can then be scanned over all possible locations, and the resulting maximum value becomes the final test statistic for the detection decision. This strategy, called the scanning Hotelling observer, is discussed in more detail in refs. ^{1} and ^{2}.
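A one-dimensional sketch of the scanning strategy under white noise, where the SKE Hotelling template reduces to a matched filter that is shifted over all candidate locations (the Gaussian signal shape, noise level and circular scanning are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Scanning-Hotelling sketch in 1D.  With white noise, K_av = sigma^2 I, so
# the SKE template K^{-1} s is a matched filter; it is scanned (circularly,
# for simplicity) and the maximum response is the test statistic.
M = 128
x = np.arange(M)
sigma = 1.0

def signal_at(c):                    # known signal shape, unknown location c
    return 5.0 * np.exp(-0.5 * ((x - c) / 2.0) ** 2)

template = signal_at(M // 2) / sigma**2   # SKE template at a reference spot

def t_scan(g):
    # Cross-correlate template and data at every shift via the FFT and
    # keep the maximum; also report the maximizing (estimated) location.
    resp = np.real(np.fft.ifft(np.fft.fft(g) * np.conj(np.fft.fft(template))))
    k = int(np.argmax(resp))
    return resp[k], (M // 2 + k) % M

true_loc = 40
g1 = signal_at(true_loc) + rng.normal(0.0, sigma, M)   # H1: signal present
g0 = rng.normal(0.0, sigma, M)                          # H0: signal absent
t1, loc1 = t_scan(g1)
t0, _ = t_scan(g0)
print(t1, t0, loc1)
```

The maximizing shift doubles as the location estimate, which is exactly what an LROC analysis scores; with nonwhite noise the same scan applies, but the template must first be prewhitened with the full covariance.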

**4. STATISTICAL PROPERTIES OF AO IMAGES**

As we saw in the previous section, the covariance matrices under both hypotheses are needed for the Hotelling or scanning Hotelling observer, and the full PDFs of the data are needed for the ideal observer or the GLRT. In this section we discuss some approaches to obtaining these statistical descriptions.

The overall covariance matrix of a triply stochastic image sequence is defined as

$${\mathbf{K}}_{\mathbf{G}}\equiv {\langle [\mathbf{G}-\overline{\overline{\overline{\mathbf{G}}}}]{[\mathbf{G}-\overline{\overline{\overline{\mathbf{G}}}}]}^{t}\rangle}_{\mathbf{G},\mathbf{P},\mathbf{F}}={\langle {\langle {\langle [\mathbf{G}-\overline{\overline{\overline{\mathbf{G}}}}]{[\mathbf{G}-\overline{\overline{\overline{\mathbf{G}}}}]}^{t}\rangle}_{\mathbf{G}\mid \mathbf{P},\mathbf{F}}\rangle}_{\mathbf{P}\mid \mathbf{F}}\rangle}_{\mathbf{F}}.$$

(4.1)

To be explicit, **K _{G}** is an *MJ* × *MJ* matrix; its elements are indexed by pixel and frame as

$${[{\mathbf{K}}_{\mathbf{G}}]}_{{mm}^{\prime}}^{(j,{j}^{\prime})}={\langle {\langle {\langle \left[{g}_{m}^{(j)}-{\overline{\overline{\overline{g}}}}_{m}^{(j)}\right]\phantom{\rule{0.16667em}{0ex}}\left[{g}_{{m}^{\prime}}^{({j}^{\prime})}-{\overline{\overline{\overline{g}}}}_{{m}^{\prime}}^{({j}^{\prime})}\right]\rangle}_{\mathbf{G}\mid \mathbf{P},\mathbf{F}}\rangle}_{\mathbf{P}\mid \mathbf{F}}\rangle}_{\mathbf{F}}.$$

(4.2)

Adding and subtracting terms in each factor of (4.2), we obtain

$${\mathbf{K}}_{\mathbf{G}}={\langle {\langle {\langle [\mathbf{G}-\overline{\mathbf{G}}+\overline{\mathbf{G}}-\overline{\overline{\mathbf{G}}}+\overline{\overline{\mathbf{G}}}-\overline{\overline{\overline{\mathbf{G}}}}]\phantom{\rule{0.16667em}{0ex}}{[\mathbf{G}-\overline{\mathbf{G}}+\overline{\mathbf{G}}-\overline{\overline{\mathbf{G}}}+\overline{\overline{\mathbf{G}}}-\overline{\overline{\overline{\mathbf{G}}}}]}^{t}\rangle}_{\mathbf{G}\mid \mathbf{P},\mathbf{F}}\rangle}_{\mathbf{P}\mid \mathbf{F}}\rangle}_{\mathbf{F}}.$$

(4.3)

Even without any assumptions of independence, the cross terms vanish identically, and we can write

$${\mathbf{K}}_{\mathbf{G}}={\overline{\overline{\mathbf{K}}}}_{\mathbf{G}}^{\mathit{noise}}+{\overline{\mathbf{K}}}_{\overline{\mathbf{G}}}^{\mathit{PSF}}+{\mathbf{K}}_{\overline{\overline{\mathbf{G}}}}^{\mathit{obj}},$$

(4.4)

where

$${\overline{\overline{\mathbf{K}}}}_{\mathbf{G}}^{\mathit{noise}}\equiv {\langle {\langle {\langle [\mathbf{G}-\overline{\mathbf{G}}]{[\mathbf{G}-\overline{\mathbf{G}}]}^{t}\rangle}_{\mathbf{G}\mid \mathbf{P},\mathbf{F}}\rangle}_{\mathbf{P}\mid \mathbf{F}}\rangle}_{\mathbf{F}};$$

(4.5)

$${\overline{\mathbf{K}}}_{\overline{\mathbf{G}}}^{\mathit{PSF}}\equiv {\langle {\langle [\overline{\mathbf{G}}-\overline{\overline{\mathbf{G}}}]{[\overline{\mathbf{G}}-\overline{\overline{\mathbf{G}}}]}^{t}\rangle}_{\mathbf{P}\mid \mathbf{F}}\rangle}_{\mathbf{F}};$$

(4.6)

$${\mathbf{K}}_{\overline{\overline{\mathbf{G}}}}^{\mathit{obj}}\equiv {\langle [\overline{\overline{\mathbf{G}}}-\overline{\overline{\overline{\mathbf{G}}}}]{[\overline{\overline{\mathbf{G}}}-\overline{\overline{\overline{\mathbf{G}}}}]}^{t}\rangle}_{\mathbf{F}}.$$

(4.7)

Thus the overall covariance matrix for a triply stochastic image sequence can be rigorously decomposed into three terms representing, respectively, the contributions from measurement noise, from the random PSF and from randomness in the object being imaged.

The noise term in the covariance is almost always diagonal: measurements at different detector pixels and/or different frames are uncorrelated. The other two terms, however, exhibit significant spatial and temporal correlations; approaches to estimating them by simulation are discussed in ref. ^{2}.
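The decomposition (4.4)–(4.7) can be verified by direct simulation. In the sketch below (a fixed object shape with random amplitude, a Gaussian PSF of random width, and Poisson noise; all choices illustrative), the three estimated terms together account for the empirical covariance of the data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Numerical check of Eq. (4.4): K_G = K_noise + K_PSF + K_obj.
M, n = 16, 40000
x = np.arange(M)
s0 = np.exp(-0.5 * ((x - 8) / 1.5) ** 2); s0 /= s0.sum()   # object shape

widths = np.linspace(1.0, 2.5, 64)            # prior on the PSF width
def blur(shape, w):
    k = np.exp(-0.5 * ((x[:, None] - x[None, :]) / w) ** 2)
    return (k / k.sum(axis=1, keepdims=True)) @ shape
B = np.stack([blur(s0, w) for w in widths])   # mean image shape per width
B_bar = B.mean(axis=0)                        # PSF-averaged shape

a = rng.gamma(25.0, 4.0, n)                   # random object amplitude
iw = rng.integers(0, len(widths), n)          # random PSF per frame
mu = a[:, None] * B[iw]                       # single-bar mean, Eq. (2.11)
mu_bar = a[:, None] * B_bar[None, :]          # double-bar (PSF-averaged) mean
g = rng.poisson(mu)

K_noise = np.diag(mu.mean(axis=0))            # Eq. (4.5): Poisson => diagonal
K_psf = (mu - mu_bar).T @ (mu - mu_bar) / n   # Eq. (4.6)
K_obj = np.cov(mu_bar.T)                      # Eq. (4.7)
K_sum = K_noise + K_psf + K_obj
K_emp = np.cov(g.T)                           # overall covariance of the data

err = np.linalg.norm(K_emp - K_sum) / np.linalg.norm(K_sum)
print(err)   # small: the three terms account for the full covariance
```

The noise term is diagonal, as stated above, while the PSF and object terms are dense with long-range correlations; this is the structure a prewhitening Hotelling observer must invert.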

Multivariate normal random processes are fully specified by their mean vector and covariance matrices, but normal statistics cannot accurately describe incoherent images since irradiance cannot go negative. Thus, if we wish to compute the performance of an ideal Bayesian observer on a detection task, we need more information than just the mean and covariance. This section briefly discusses some approaches to obtaining relevant probability density functions.

The simplest model for a random sky background is that it is spatially constant over the field of view and does not vary with time over the observation period, but that the absolute level of the background is not known *a priori*. In that case, the background is described by a single scalar random variable *f _{b}*. The ideal observer needs information on the PDF pr(*f _{b}*), while the Hotelling observer requires only its mean and variance.

The next level of sophistication is to allow the spatially uniform background level to vary with time, denoting it *f _{b}*(*t*). The background is then a scalar random process, described through the joint PDF of its samples for the ideal observer, or through its mean and temporal autocovariance for the Hotelling observer.

Various background models with random spatial variation are used in medical imaging, and many of these yield images similar to astronomical scenes. In many cases, the models are not only realistic but also mathematically tractable; analytic expressions can be given for the characteristic functional, which in principle contains all statistical information about the random process. For a review of this topic, see Barrett and Myers.^{1}

One particular spatial model that should accurately describe globular clusters and other aggregations of point-like objects is a nonstationary Poisson point process.^{2}^{, }^{3}
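A realization of such a background is straightforward to simulate: the star count is Poisson in the integrated intensity, positions follow the spatially varying intensity λ(**r**), and each star contributes a PSF-blurred spot. A sketch, in which the Gaussian intensity profile, the Pareto flux law and the Gaussian stand-in PSF are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# One realization of a nonstationary Poisson point-process background.
N = 128
x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
lam = 3e-3 * np.exp(-0.5 * ((x - 64) ** 2 + (y - 64) ** 2) / 40.0 ** 2)

n_stars = rng.poisson(lam.sum())                  # total count is Poisson
idx = rng.choice(N * N, size=n_stars, p=(lam / lam.sum()).ravel())
sx, sy = np.divmod(idx, N)                        # positions follow lambda(r)
flux = 1.0 + rng.pareto(2.5, n_stars)             # heavy-tailed brightnesses

f = np.zeros((N, N))
for cx, cy, fl in zip(sx, sy, flux):              # render PSF-blurred stars
    f += fl * np.exp(-0.5 * ((x - cx) ** 2 + (y - cy) ** 2) / 1.5 ** 2)
print(n_stars, f.sum())
```

Repeating this draw many times yields an ensemble of background realizations from which the object-covariance term of Sec. 4.1 can be estimated by simulation.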

How one models a random PSF depends on the exposure time *T* relative to the speckle correlation time of the PSF, denoted *τ _{c}*, and on how many images are included in the data set.

If the data consist of a single very long exposure, the PSF may not be known accurately, but it is fixed for that exposure and does not have to be treated as random. It can be estimated from measured data^{10} or calculated from knowledge of the atmosphere, telescope and AO system, and this estimated PSF can then be used for performing the tasks of interest.

If the whole observing period is divided into shorter exposures but each individual frame is still long compared to the speckle correlation time, then the PSF for one frame can be considered as random but with a form parameterized by the local atmospheric conditions during that exposure. It may even be possible to specify the form of the PSF adequately by knowing the average Fried parameter *r*_{0} for that frame. In that case the sequence **P** is temporally correlated but fully specified by the instantaneous Fried parameter for each frame. The problem of modeling the PSF boils down to understanding the statistics of the scalar random process *r*_{0}(*t*).

The more interesting problem is when the data **G** consist of many short exposures, each with *T* < *τ _{c}*, so that the speckle pattern is essentially frozen within each frame. The PSF sequence **P** must then be treated as a spatiotemporal random process, with significant correlations from frame to frame.

**5. EXAMPLE: DETECTION OF EXOPLANETS**

One way to formulate an interesting planet-detection problem is to assume that the astronomical scene of interest consists of a uniform nonrandom background, a single star, and possibly a single planet, and that the data are a sequence of short-exposure images. The randomness in the data then comes from measurement noise, the residual speckle in the PSF and the random size and location of the planet. The problem can be classified as SKS/BKE/PKS in the language of Sec. 3.2. We shall use this problem to illustrate the statistical approaches described above.

Numerous authors have analyzed the deterministic and statistical properties of residual speckle in AO systems.^{11}^{–}^{14} None of these authors, however, considers either the full multivariate PDF needed by the ideal observer or even the spatiotemporal covariance needed by the Hotelling observer.

For simplicity, assume that the star is located at **r** = **0**, and assume that the light is narrowband with mean wavelength λ. If the instantaneous pupil function, after the AO correction, is *a_{ap}*(**r**) exp[*i*ϕ(**r**, *t*)], where *a_{ap}*(**r**) is the aperture transmittance and ϕ(**r**, *t*) is the residual phase, then the field amplitude at point **r**_{d} in the detector plane is

$$u(\mathbf{r}_{d},t)\propto \int_{\infty}d^{2}r\,a_{ap}(\mathbf{r})\exp[i\phi(\mathbf{r},t)]\exp\left(\frac{2\pi i\,\mathbf{r}\cdot \mathbf{r}_{d}}{\lambda f}\right).$$

(5.1)

By expanding exp[*i*ϕ(**r**, *t*)] in a Taylor series, taking the squared modulus of the field, and retaining terms linear or quadratic in the residual phase, we can express the random irradiance in the image plane approximately as^{13}

$$I(\mathbf{r}_{d},t)\propto |A_{ap}(\tilde{\mathbf{r}})|^{2}-2A_{ap}(\tilde{\mathbf{r}})\,\mathrm{Im}\{[A_{ap}\ast \Phi](\tilde{\mathbf{r}},t)\}+|[A_{ap}\ast \Phi](\tilde{\mathbf{r}},t)|^{2}-A_{ap}(\tilde{\mathbf{r}})\,\mathrm{Re}\{[A_{ap}\ast \Phi \ast \Phi](\tilde{\mathbf{r}},t)\},$$

(5.2)

where *A_{ap}* is the Fourier transform of the aperture function *a_{ap}*, Φ is the Fourier transform of the residual phase ϕ, ∗ denotes two-dimensional convolution, and r̃ = **r**_{d}/λ*f* is a scaled detector coordinate.

The leading term in (5.2) is the ideal (nonrandom) Airy pattern for no aberrations (which is a real function for a symmetric aperture), and the other terms, corresponding to residual speckle, are spatiotemporal random processes. The second and fourth terms are referred to as pinned speckle since they are modulated by the Airy pattern; the third term is not pinned since it is convolved with the Airy pattern but not multiplied by it. For later reference, we label the terms in (5.2), in sequence, as *I*_{1}, *I*_{2}, *I*_{3} and *I*_{4}. Thus *I* = *I*_{1} + *I*_{2} + *I*_{3} + *I*_{4}.
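The decomposition into *I*_{1}, …, *I*_{4} can be checked numerically with FFTs. The sketch below assumes a toy circular aperture and a weak Gaussian random residual phase; the grid size, aperture radius and phase level are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

n = 128
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)
a_ap = ((X**2 + Y**2) <= 20**2).astype(float)   # toy circular aperture

rng = np.random.default_rng(1)
phi = 0.05 * rng.standard_normal((n, n))        # weak residual phase (rad)

def ft(f):
    # Centered 2-D Fourier transform
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(f)))

A = ft(a_ap).real                  # real for a symmetric aperture
AconvPhi = ft(a_ap * phi)          # [A_ap * Phi]       via the convolution theorem
AconvPhi2 = ft(a_ap * phi**2)      # [A_ap * Phi * Phi] via the convolution theorem

I1 = A**2                          # Airy pattern (nonrandom)
I2 = -2.0 * A * AconvPhi.imag      # pinned speckle, linear in the phase
I3 = np.abs(AconvPhi)**2           # unpinned speckle halo
I4 = -A * AconvPhi2.real           # pinned speckle, quadratic in the phase

# Compare the four-term expansion with the exact short-exposure irradiance
I_exact = np.abs(ft(a_ap * np.exp(1j * phi)))**2
I_approx = I1 + I2 + I3 + I4
```

For a phase this weak, the neglected third- and fourth-order terms are tiny, so `I_approx` tracks `I_exact` closely; the pinned terms `I2` and `I4` vanish wherever the Airy amplitude `A` has a zero, as the text describes.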

Next we look at three assumptions that will simplify computation of the spatiotemporal covariance and PDF of the data. All three derive from properties of uncorrected Kolmogorov turbulence, but it is argued that the same properties hold to a reasonable approximation after AO correction.

According to Kolmogorov theory, the uncorrected pupil phase fluctuations (before correction by the deformable mirror and truncation by the aperture) are well described as a wide-sense-stationary spatial random process, but it is not so obvious that the wave emerging from the deformable mirror is spatially stationary. The fixed locations of the actuators on the mirror argue against any claim that the spatial autocorrelation function of the field or the phase in the pupil depends only on the separation of the two observation points. A well-functioning AO system, however, will accurately correct all phase fluctuations within the subspace spanned by the deformable-mirror modes. If the original random process is stationary and the component of the atmospheric phase in the mirror subspace is uncorrelated with the component in its orthogonal complement, then the corrected phase is stationary as well.

The spatiotemporal statistics can be reduced to a purely spatial problem if we make the familiar Taylor frozen-flow assumption, under which ϕ(**r**, *t*) = ϕ_{0}(**r** − **v***t*) and

$$\Phi(\boldsymbol{\rho},t)=\Phi_{0}(\boldsymbol{\rho})\exp(2\pi i\,\boldsymbol{\rho}\cdot \mathbf{v}t),$$

(5.3)

where **v** is the wind velocity. Normally the frozen-flow hypothesis is applied to uncorrected phase aberrations, but if the AO system responds rapidly and nearly nulls out the components in the subspace spanned by the mirror modes, the hypothesis should apply to the residual phase after correction as well.
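The frozen-flow relation (5.3) is easy to verify numerically for a discrete screen: shifting the screen by an integer number of pixels multiplies its DFT by a pure phase ramp. The sketch below uses an arbitrary random screen; the sign of the ramp follows the NumPy FFT convention and may differ from the sign convention used in (5.3).

```python
import numpy as np

n = 64
rng = np.random.default_rng(2)
phi0 = rng.standard_normal((n, n))          # arbitrary "frozen" phase screen

sx, sy = 5, 3                               # integer v*t shift, in pixels
phi_t = np.roll(phi0, shift=(sy, sx), axis=(0, 1))   # phi(r, t) = phi0(r - v t)

F0 = np.fft.fft2(phi0)
Ft = np.fft.fft2(phi_t)

# Phase ramp exp(-2 pi i rho . v t) in the NumPy DFT sign convention
ky, kx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
ramp = np.exp(-2j * np.pi * (kx * sx + ky * sy))
```

Here `Ft` equals `F0 * ramp` to machine precision, which is the discrete counterpart of (5.3).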

Another useful assumption is that Φ(**ρ**, *t*) is a circular Gaussian spatial random process for a fixed time *t*. One justification is similar to the argument used above: The uncorrected phase is Gaussian in Kolmogorov turbulence theory, and a good AO system projects the Kolmogorov phase onto the orthogonal complement of the subspace spanned by the mirror modes. That projection is a linear operator, and any linear transformation of a Gaussian is a Gaussian. Moreover, we can argue that the real and imaginary parts of Φ(**ρ**, *t*) are identically distributed, since a small shift of the uncompensated part of the pupil phase will interchange the real and imaginary parts.

Extensive simulations are underway to understand fully the PSF term in the covariance for the special case of pinned speckle and short-exposure images. Numerical results will be reported separately, but in this section we examine the problem analytically with the help of the assumptions discussed above. Particular attention will be paid to the effect of the linear term, *I*_{2}, in (5.2).

The spatiotemporal autocovariance function of the irradiance produced by a single point object on the optical axis is defined by

$$k_{I}(\mathbf{r}_{d},\mathbf{r}_{d}^{\prime},t,t^{\prime})\equiv \langle I(\mathbf{r}_{d},t)\,I(\mathbf{r}_{d}^{\prime},t^{\prime})\rangle -\langle I(\mathbf{r}_{d},t)\rangle \,\langle I(\mathbf{r}_{d}^{\prime},t^{\prime})\rangle .$$

(5.4)

Decomposing *I*(**r**_{d}, *t*) into the four terms of (5.2), noting that *I*_{1} is nonrandom and that the cross terms between the linear term *I*_{2} and the quadratic terms vanish for a zero-mean circular Gaussian Φ, we find

$${k}_{I}={k}_{22}+{k}_{33}+{k}_{44}+{k}_{34}+{k}_{43},$$

(5.5)

where

$$k_{mn}\equiv \langle I_{m}(\mathbf{r}_{d},t)\,I_{n}(\mathbf{r}_{d}^{\prime},t^{\prime})\rangle -\langle I_{m}(\mathbf{r}_{d},t)\rangle \,\langle I_{n}(\mathbf{r}_{d}^{\prime},t^{\prime})\rangle .$$

(5.6)

All of these terms can be evaluated analytically if we are willing to make some of the assumptions introduced in Sec. 5.2. We illustrate this point by considering specifically *k*_{22}, given in the scaled coordinates (r̃ = **r**_{d}/λ*f*) by

$$k_{22}(\tilde{\mathbf{r}},t,\tilde{\mathbf{r}}^{\prime},t^{\prime})=4A_{ap}(\tilde{\mathbf{r}})A_{ap}(\tilde{\mathbf{r}}^{\prime})\,\langle \mathrm{Im}\{[A_{ap}\ast \Phi](\tilde{\mathbf{r}},t)\}\,\mathrm{Im}\{[A_{ap}\ast \Phi](\tilde{\mathbf{r}}^{\prime},t^{\prime})\}\rangle .$$

(5.7)

In the frozen-flow approximation, the essence of the calculation is to compute expectations of the form,

$$\langle [A_{ap}\ast \Phi](\tilde{\mathbf{r}},t)\,[A_{ap}\ast \Phi]^{\ast}(\tilde{\mathbf{r}}^{\prime},t^{\prime})\rangle =\int d^{2}\rho \int d^{2}\rho^{\prime}\,A_{ap}(\tilde{\mathbf{r}}-\boldsymbol{\rho})A_{ap}(\tilde{\mathbf{r}}^{\prime}-\boldsymbol{\rho}^{\prime})\exp[2\pi i(\boldsymbol{\rho}\cdot \mathbf{v}t-\boldsymbol{\rho}^{\prime}\cdot \mathbf{v}t^{\prime})]\,\langle \Phi_{0}(\boldsymbol{\rho})\Phi_{0}^{\ast}(\boldsymbol{\rho}^{\prime})\rangle .$$

(5.8)

If we also assume that ϕ_{0}(**r**) is wide-sense stationary, it can be shown that

$$\langle \Phi_{0}(\boldsymbol{\rho})\Phi_{0}^{\ast}(\boldsymbol{\rho}^{\prime})\rangle =S_{\phi_{0}}(\boldsymbol{\rho})\,\delta (\boldsymbol{\rho}-\boldsymbol{\rho}^{\prime}),$$

(5.9)

where *S*_{ϕ₀}(**ρ**) is the power spectral density of ϕ_{0}(**r**). Substituting (5.9) into (5.8) and performing the integral over **ρ**′ yields

$$\langle [A_{ap}\ast \Phi](\tilde{\mathbf{r}},t)\,[A_{ap}\ast \Phi]^{\ast}(\tilde{\mathbf{r}}^{\prime},t^{\prime})\rangle =\int d^{2}\rho \,A_{ap}(\tilde{\mathbf{r}}-\boldsymbol{\rho})A_{ap}(\tilde{\mathbf{r}}^{\prime}-\boldsymbol{\rho})\exp[2\pi i\,\boldsymbol{\rho}\cdot \mathbf{v}(t-t^{\prime})]\,S_{\phi_{0}}(\boldsymbol{\rho}).$$

(5.10)

The complex conjugate of (5.10) also appears in *k*_{22}, as do two terms involving ⟨Φ_{0}(**ρ**)Φ_{0}(**ρ**′)⟩ without the conjugate; because ϕ_{0}(**r**) is real, Φ_{0}^{∗}(**ρ**) = Φ_{0}(−**ρ**), and these terms contribute delta functions in **ρ** + **ρ**′. Collecting all of the contributions, we obtain

$$k_{22}(\tilde{\mathbf{r}},t,\tilde{\mathbf{r}}^{\prime},t^{\prime})=A_{ap}(\tilde{\mathbf{r}})A_{ap}(\tilde{\mathbf{r}}^{\prime})\left\{\int d^{2}\rho \,A_{ap}(\tilde{\mathbf{r}}-\boldsymbol{\rho})A_{ap}(\tilde{\mathbf{r}}^{\prime}-\boldsymbol{\rho})\cos [2\pi \boldsymbol{\rho}\cdot \mathbf{v}(t-t^{\prime})]\,S_{\phi_{0}}(\boldsymbol{\rho})-\int d^{2}\rho \,A_{ap}(\tilde{\mathbf{r}}-\boldsymbol{\rho})A_{ap}(\tilde{\mathbf{r}}^{\prime}+\boldsymbol{\rho})\cos [2\pi \boldsymbol{\rho}\cdot \mathbf{v}(t-t^{\prime})]\,S_{\phi_{0}}(\boldsymbol{\rho})\right\}.$$

(5.11)

This result predicts a stationary time dependence for *k*_{22} (dependence on *t* and *t*′ only through *t* − *t*′), and it also predicts a strong positive correlation for *t* = *t*′ and r̃′ near r̃, and a strong negative correlation for r̃′ near −r̃. Simulation studies to check these predictions are in progress.
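Pending those full simulation studies, the sign predictions can already be probed with a crude Monte Carlo sketch. Independent white-noise phase screens (an illustrative stand-in for the residual turbulence, not a Kolmogorov model) drive the linear term *I*_{2}, and its equal-time covariance between detector points r̃ and ±r̃ is estimated from sample statistics; the grid, aperture and phase level are arbitrary.

```python
import numpy as np

n = 64
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)
a_ap = ((X**2 + Y**2) <= 10**2).astype(float)   # toy circular aperture

def ft(f):
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(f)))

A = ft(a_ap).real
c = n // 2
p_plus, p_minus = (c, c + 2), (c, c - 2)        # detector points at +r and -r

rng = np.random.default_rng(3)
samples_plus, samples_minus = [], []
for _ in range(500):
    phi = 0.05 * rng.standard_normal((n, n))    # independent screen per frame
    I2 = -2.0 * A * ft(a_ap * phi).imag         # linear pinned-speckle term
    samples_plus.append(I2[p_plus])
    samples_minus.append(I2[p_minus])

sp = np.array(samples_plus)
sm = np.array(samples_minus)
cov_same = np.mean(sp * sp) - np.mean(sp) ** 2            # k22 at (r, r)
cov_opp = np.mean(sp * sm) - np.mean(sp) * np.mean(sm)    # k22 at (r, -r)
```

For a real phase screen, Im{[*A_{ap}* ∗ Φ]} is exactly antisymmetric, so `cov_same` comes out positive and `cov_opp` negative, consistent with the prediction above.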

The quadratic terms in (5.2) are more complicated, but their means and covariances are calculable from properties of circular Gaussians if we are willing to add that assumption.

To go from the autocovariance *function* of the PSF, ${k}_{I}({\mathbf{r}}_{d},{\mathbf{r}}_{d}^{\prime},t,{t}^{\prime})$, to the PSF term in the covariance *matrix*, we must integrate over frames and pixels as in (2.10) and (2.12).

In principle, we should also include an object variability term, but we have assumed in this section that the background and the host star are nonrandom, so there is no object variability under the planet-absent hypothesis, *H*_{0}. There is variability in planet location and brightness under *H*_{1}, but these factors come into the mean signal, not the average covariance, if the star is much brighter than its companion. Moreover, with the scanning Hotelling observer, the mean and covariance are conditioned on a particular location.

We do need to include a covariance term accounting for measurement noise. If the noise is statistically independent (conditional on the object and PSF) from pixel to pixel and frame to frame, we can write

$${\left[{\overline{\overline{\mathbf{K}}}}_{\mathbf{G}}^{\mathit{noise}}\right]}_{m{m}^{\prime}}^{(j,{j}^{\prime})}={\sigma}_{mj}^{2}{\delta}_{m{m}^{\prime}}{\delta}_{j{j}^{\prime}}.$$

(5.12)

If both readout and Poisson noise are present, then

$${\sigma}_{mj}^{2}={\sigma}_{m}^{2}+{\overline{\overline{g}}}_{m}^{(j)},$$

(5.13)

where ${\sigma}_{m}^{2}$ is the electronic noise variance for the *m*th detector pixel in photoelectron units and ${\overline{\overline{g}}}_{m}^{(j)}$ in the present problem is obtained by averaging (5.2) over random PSFs and normalizing to photoelectron units.
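Assembling the noise covariance of (5.12)-(5.13) is straightforward once the mean counts are known. The sketch below stacks frames into a single data vector; the pixel count, frame count, readout level and mean counts are placeholder values.

```python
import numpy as np

M, J = 16, 4                                  # pixels per frame, frames (toy sizes)
rng = np.random.default_rng(4)
sigma_read = 3.0                              # readout std, photoelectrons (assumed)
g_mean = rng.uniform(5.0, 50.0, size=(J, M))  # mean counts per pixel and frame (toy)

# With noise independent across pixels and frames, the covariance of the
# stacked data vector (length M*J) is diagonal, with readout variance plus
# the Poisson contribution equal to the mean count, as in (5.13).
var = (sigma_read**2 + g_mean).ravel()
K_noise = np.diag(var)
```

In practice one would never store this diagonal matrix densely; keeping only `var` and adding it to the diagonal of the PSF covariance term is equivalent and far cheaper.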

Thus, with only minimal numerical calculation, we will have access to the full covariance matrix needed to implement the Hotelling or scanning Hotelling observer.
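As an illustration of that final step, the sketch below builds a Hotelling template from a synthetic mean difference signal and a synthetic symmetric positive-definite covariance; none of the numbers model a real AO system.

```python
import numpy as np

rng = np.random.default_rng(5)
M = 50                                       # toy data-vector length
delta_g = np.zeros(M)
delta_g[20:25] = 1.0                         # toy mean planet signal

B = rng.standard_normal((M, M))
K = B @ B.T + 10.0 * np.eye(M)               # toy covariance (speckle + noise), SPD

w = np.linalg.solve(K, delta_g)              # Hotelling template, w = K^{-1} delta_g
snr2 = float(delta_g @ w)                    # Hotelling detectability, SNR^2

def hotelling_statistic(g):
    # Linear test statistic t = w^T g, thresholded for detection
    return float(w @ g)
```

The scanning version simply recomputes `delta_g` (and, if needed, the conditional covariance) for each candidate planet location and takes the maximum of the resulting statistics.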

If we wish to go beyond Hotelling to the ideal observer, we need PDFs for the data, not just means and covariances. We have already suggested that it might be valid to take Φ as circular Gaussian; in that case the *I*_{2} term in (5.2) is fully characterized, as is its contribution to the PDF of the measured data. It is still necessary, however, to validate the circular-Gaussian assumption with detailed simulations.

The statistics of the *I*_{3} and *I*_{4} terms are not so easy to determine, even under a circular Gaussian model, because they involve products of spatiotemporal random processes. Further theoretical investigation is needed if these terms are significant.

The probability law for the measurement noise (again conditional on an object and PSF) is easy to state in the limit of pure Poisson noise or pure Gaussian readout noise.

The main point stressed in this paper is that task performance is the ultimate measure of image quality. Astronomical images are produced either for estimation tasks, such as photometry, or for classification tasks such as planet detection. Both kinds of task were analyzed previously for AO systems in ref. ^{2}, but the analysis was restricted to linear observers, namely the Hotelling observer for binary detection tasks and the generalized Wiener estimator for estimation tasks.

In this paper we applied the linear analysis of ref. ^{2} specifically to detection of exoplanets, and we extended the discussion of detection tasks to nonlinear ideal observers that maximize the area under the ROC curve. We also briefly discussed the LROC curve for joint detection and estimation tasks.

As in ref. ^{2}, we noted that task performance in AO systems is limited by three main sources of data randomness: measurement noise, object randomness and residual PSF randomness. For short-exposure images, the data from an AO system constitute a discrete spatiotemporal random process. Calculation of the performance achievable by any observer depends on accurate multivariate statistical knowledge. For linear observers, mean vectors and spatiotemporal covariance matrices are needed, and for the ideal observer full multivariate PDFs are required. Univariate statistics say very little about task performance. In particular, pixel signal-to-noise ratio is almost unrelated to detectability.

For the specific problem of exoplanet detection from a sequence of short-exposure images, we found that the relevant covariance matrices could be expressed in a numerically tractable form with the help of two plausible assumptions, namely frozen flow and spatial stationarity of the phase in the pupil. The range of validity of these assumptions remains to be explored.

As a result of the investigations reported here, we now have the ingredients needed to implement the Hotelling and scanning Hotelling observer for exoplanet detection. This future work should lead to improved detectability, but perhaps more importantly it will provide a firm basis for optimization of AO systems for this task. In particular, it will provide a direct comparison of the achievable detectability with multiple short-exposure images as compared to a single long exposure.

Implementation of the ideal observer is further in the future, but its realization should be facilitated by the discussion in this paper.

This research was supported by Science Foundation Ireland under grant no. 01/PI.2/B039C and by an SFI Walton Fellowship (03/W3/M420) for H. H. Barrett. Development of the basic methodology for objective assessment of image quality was also supported in part by the National Institutes of Health under grant nos. R37 EB000803 and P41 EB002035.

Harrison H. Barrett, College of Optical Sciences and Department of Radiology, University of Arizona, Tucson AZ 85724.

Kyle J. Myers, NIBIB/CDRH Laboratory for the Assessment of Medical Imaging Systems, Food and Drug Administration, Rockville, MD 20850.

Nicholas Devaney, Department of Physics, National University of Ireland, Galway, Ireland.

J. C. Dainty, Department of Physics, National University of Ireland, Galway, Ireland.

Luca Caucci, Department of Electrical and Computer Engineering, University of Arizona, Tucson AZ 85721.

1. Barrett HH, Myers KJ. Foundations of Image Science. John Wiley and Sons; New York: 2004.

2. Barrett HH, Myers KJ, Devaney N, Dainty JC. Objective assessment of image quality: IV. Application to adaptive optics. 2006 Submitted to J. Opt. Soc. Am. A. [PMC free article] [PubMed]

3. Hobson MP, McLachlan C. A Bayesian approach to discrete object detection in astronomical data sets. Mon Not R Astron Soc. 2003;338:765–784.

4. Braems I, Kasdin NJ. Bayesian hypothesis testing for planet finding. 203rd Meeting; American Astronomical Society; 2004.

5. Barrett HH. Objective assessment of image quality: effects of quantum noise and object variability. J Opt Soc Am A. 1990;7:1266–1278. [PubMed]

6. Smith WE, Barrett HH. Hotelling trace criterion as a figure of merit for the optimization of imaging systems. J Opt Soc Am A. 1986;3:717–725.

7. Fiete RD, Barrett HH, Smith WE, Myers KJ. The Hotelling trace criterion and its correlation with human observer performance. J Opt Soc Am A. 1987;4:945–953. [PubMed]

8. Hotelling H. The generalization of Student’s ratio. Ann Math Stat. 1931;2:360–378.

9. Khurd P, Gindi G. Decision strategies that maximize the area under the LROC curve. IEEE Trans Med Im. 2005;24:1626–1636. [PubMed]

10. Veran JP, Rigaut F, Maitre H, Rouan D. Estimation of the adaptive optics long-exposure point-spread function using control loop data. J Opt Soc Am A. 1997;14:3057–3068.

11. Aime C, Soummer R. The usefulness and limits of coronagraphy in the presence of pinned speckles. Astrophysical J. 2004;612:L85–L88.

12. Bloemhof EE. Anomalous intensity of pinned speckles at high adaptive correction. Opt Lett. 2004;29:159–161. [PubMed]

13. Perrin MD, Sivaramakrishnan A, Makidon RB, Graham JR. The structure of high Strehl ratio point-spread functions. Astrophysical J. 2003;596:702–712.

14. Sivaramakrishnan A, Lloyd JP, Hodge PE, Macintosh BA. Speckle decorrelations and dynamic range in speckle noise-limited imaging. Astrophysical J. 2002;581:L59–L62.
