


Article sections

- Abstract
- 1. INTRODUCTION
- 2. ADAPTIVE DECONVOLUTION APPROACH
- 3. IMPLEMENTATION STRATEGY
- 4. VALIDATION AND APPLICATION TO MONO-FRAME DATA
- 5. APPLICATION TO MULTI-FRAME DATA SETS
- 6. APPLICATION TO THREE-DIMENSIONAL DATA SETS
- 7. SUMMARY AND FUTURE DIRECTIONS
- REFERENCES


J Opt Soc Am A Opt Image Sci Vis. Author manuscript; available in PMC 2011 September 3.


PMCID: PMC3166524

NIHMSID: NIHMS314800

Erik F. Y. Hom,^{*} Franck Marchis, Timothy K. Lee, Sebastian Haase, David A. Agard, and John W. Sedat^{†}

Erik F. Y. Hom, Graduate Group in Biophysics and Department of Biochemistry and Biophysics, University of California, San Francisco, Genentech Hall, 600 16th Street, San Francisco, California 94143-2240, USA;

The publisher's final edited version of this article is available at J Opt Soc Am A Opt Image Sci Vis


We describe an adaptive image deconvolution algorithm (AIDA) for myopic deconvolution of multi-frame and three-dimensional data acquired through astronomical and microscopic imaging. AIDA is a reimplementation and extension of the MISTRAL method developed by Mugnier and co-workers and shown to yield object reconstructions with excellent edge preservation and photometric precision [J. Opt. Soc. Am. A **21**, 1841 (2004)]. Written in Numerical Python with calls to a robust constrained conjugate gradient method, AIDA has significantly improved run times over the original MISTRAL implementation. Included in AIDA is a scheme to automatically balance maximum-likelihood estimation and object regularization, which significantly decreases the amount of time and effort needed to generate satisfactory reconstructions. We validated AIDA using synthetic data spanning a broad range of signal-to-noise ratios and image types and demonstrated the algorithm to be effective for experimental data from adaptive optics–equipped telescope systems and wide-field microscopy.

Images acquired using any optical system are fundamentally limited in resolution by diffraction and corrupted by measurement noise. Aberrations intrinsic to the optical system and imaging medium result in further degradation and distortions of the observed images. In ground-based astronomical imaging, atmospheric turbulence is the primary source of aberrations. In microscopic and biological imaging, significant aberrations arise as a result of index-of-refraction inhomogeneities within the sample under study.

Aberration artifacts can be largely corrected using adaptive optics (AO) methods.^{1} Limited by the spatial and/or temporal response of AO hardware, however, such corrections remain imperfect. AO-corrected images are often contaminated by residual blurring that can significantly reduce the contrast of fine image details. Significant denoising and improved image contrast can be obtained using post-acquisition deconvolution techniques,^{2} implying that both hardware and software correction strategies are needed for optimal image recovery.

Deconvolution is an explicit attempt to model and computationally compensate for measurement nonidealities. Classic approaches presume that the imaging point-spread function (PSF) of the optical system is exactly known. In practice, however, the PSF is estimated either theoretically^{3,4} or by imaging a subresolution pointlike object (e.g., guide star or fluorescent bead).^{5,6} Such estimates may deviate significantly from the true PSF, yet no margin is given in classical methods for the PSF to adjust to a more appropriate estimate. Using a fixed, imperfect PSF thus inherently limits one’s ability to generate the most accurate and highest-resolution object reconstructions.

Myopic or blind deconvolution approaches allow an imprecise or unknown PSF estimate to adapt to a more correct form and thereby offer the possibility of improved object reconstructions over classical methods. The success of these myopic–blind methods, however, is dependent upon *a priori* constraints that compensate for the lack of information associated with having the PSF be variable.^{7–11}

In this paper, we describe an adaptive image deconvolution algorithm (AIDA) for myopic deconvolution of two-dimensional (2D) and three-dimensional (3D) image data within a maximum *a posteriori* framework. AIDA is a *de novo* implementation and extension of the MISTRAL (Myopic Iterative STep-preserving Restoration ALgorithm) method, originally developed by Mugnier and co-workers^{12} to effectively deconvolve a broad range of astronomical targets with superior photometric restoration and sharp-edge feature preservation. We have significantly improved AIDA’s run-time performance over the original MISTRAL implementation and have developed a simple yet effective scheme to balance maximum-likelihood estimation with object regularization in the deconvolution process. Moreover, AIDA has capabilities to process multiple image frames simultaneously, thereby leveraging the information available through multiple observations.^{2,11} In Section 2, we present the deconvolution approach. In Section 3, we describe how AIDA was implemented and describe our automatic regularization scheme. In Section 4, we demonstrate AIDA’s effectiveness on both synthetic and experimental single-frame data. In Sections 5–7, we present the application of AIDA to multiple-image-frame data and 3D images. We conclude with a survey of possible algorithmic improvements and applications, offering AIDA as an open-source alternative to MISTRAL for further development.

Consider an image, *i*(**r**), of an object, *o*(**r**), observed through a telescope or microscope system and measured using a CCD detector array. This image may be viewed as a probabilistic mapping of the object’s brightness distribution to an intensity count distribution sampled over the discrete pixel/voxel position, **r**: *o*(**r**) → *i*(**r**). Assuming that (i) image formation is linear and space invariant (isoplanatic approximation), (ii) the response of each CCD pixel element is equivalent and independent of all others, and (iii) signal-independent Gaussian and signal-dependent Poisson noise sources are present,^{13} the image formed can be described by the following equation:

$$i(\mathbf{r})=\underset{g(\mathbf{r})}{\underbrace{o(\mathbf{r})\otimes h(\mathbf{r})}}\phantom{\rule{thinmathspace}{0ex}}\circ\phantom{\rule{thinmathspace}{0ex}}{\breve{n}}_{P}(\mathbf{r})+{\breve{n}}_{G},$$

(1)

where *h*(**r**) is the PSF, *g*(**r**) denotes the noise-free image, the operator ∘ *ň*_{P}(**r**) denotes corruption of *g*(**r**) by signal-dependent Poisson (photon-counting) noise, and *ň*_{G} is additive, signal-independent Gaussian noise of variance ${\sigma}_{G}^{2}$.
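As a concrete illustration, the forward model of Eq. (1) can be simulated in a few lines of NumPy; the object, PSF, and noise levels below are purely illustrative stand-ins, and the convolution is taken as periodic for simplicity:

```python
import numpy as np

def forward_image(obj, psf, sigma_G=2.0, rng=None):
    """Simulate i(r) = Poisson[o (conv) h] + Gaussian readout noise, as in Eq. (1)."""
    rng = np.random.default_rng(0) if rng is None else rng
    psf = psf / psf.sum()                               # PSF normalized to unit sum
    g = np.fft.ifftn(np.fft.fftn(obj) * np.fft.fftn(psf)).real
    g = np.clip(g, 0.0, None)                           # noise-free image g(r) >= 0
    return rng.poisson(g) + rng.normal(0.0, sigma_G, g.shape)

# Pointlike object blurred by a Gaussian PSF (illustrative values)
obj = np.zeros((32, 32)); obj[16, 16] = 1.0e4
y, x = np.mgrid[-16:16, -16:16]
psf = np.exp(-(x**2 + y**2) / (2.0 * 2.0**2))
img = forward_image(obj, psf)
```

Total flux is approximately preserved, while the peak intensity is spread out by the blur and perturbed by both noise sources.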

When both Gaussian and Poisson noise sources are present and images are not photon-limited, a nonstationary but additive weighted-Gaussian noise model with variance

$$w(\mathbf{r})\equiv {\sigma}_{\breve{n}}^{2}(\mathbf{r})={\sigma}_{G}^{2}+{\sigma}_{P}^{2}(\mathbf{r})$$

(2)

is a very good approximation.^{12,14} With this noise model, the ∘ operator in Eq. (1) may be replaced by simple addition, and Eq. (1) may also be expressed in the Fourier domain as

$$I(\mathbf{k})=O(\mathbf{k})H(\mathbf{k})+\breve{N}(\mathbf{k}),$$

(3)

where capitalization denotes the Fourier transform of the variable, *H*(**k**) is the optical transfer function (OTF), and **k** is the conjugate spatial frequency. For brevity, the dependence on **r** and **k** will often be implicit hereafter.

The goal of deconvolution is to invert Eq. (1). Classical deconvolution approaches aim to find the best estimate, *ô*, of the true object given a single image frame, *i*, and an exactly known PSF convolution kernel, *h*. Such approaches are ill-posed (lacking a unique solution, or having a solution that is discontinuous with respect to the data) and ill-conditioned (numerically sensitive to small errors and thus unstable) for two reasons: (1) *h* is intrinsically band-limited by the resolution limit of the optical system, and (2) noise is present at frequencies beyond the band limit.^{8,15} This situation is further complicated in the case of myopic or blind deconvolution, where the characteristics of the PSF kernel are poorly known, if at all. Because of ill-posedness, the quality of the deconvolution depends critically on the quantity and quality of *a priori* information that is incorporated into the inversion process.^{8,16} This *a priori* information can be divided into three classes related to the noise, *ň*, the object, *o*, and the PSF, *h*.

Owing to the presence of noise, deconvolution may be viewed as a problem of stochastic inversion. It is helpful to state the goal of deconvolution in Bayesian terms, namely, to maximize the *a posteriori* probability of observing the object, *o*, and PSF, *h*, given an image, *i*, and a set of model assumptions, *a*,

$$p(o,h|i,a)=\frac{p(i|o,h,a)p(o|a)p(h|a)}{p(i|a)}\phantom{\rule{thinmathspace}{0ex}}.$$

(4)

*p*(*i* | *o, h, a*) is the likelihood of observing an image, *i*, given the object, PSF, and model assumptions, as expressed by the forward-imaging equation, Eq. (1). This term is the focus of maximum-likelihood methods, which aim to optimize the fidelity of the observed data to a set of parameters subject to a particular noise model. *p*(*o* | *a*), *p*(*h* | *a*), and *p*(*i* | *a*) are the *a priori* probability distributions for the object, PSF, and image, respectively. These *a priori* distributions must be inferred based on the assumptions, *a*. In classical deconvolution methods for which the PSF is known, for example, *p*(*h* | *a*) is assumed to be a constant. In maximum-entropy deconvolution methods, *p*(*o* | *a*) is set implicitly by the definition of the entropy measure used.^{17} When the positivity of the variables *o, h*, and *i* can be assumed (e.g., under incoherent imaging conditions), the *a priori* probabilities for negative values can be set to zero.

Each probability term in Eq. (4) may be interpreted as a Gibbs distribution with an energy cost function, *J*(*x*), and partition function, *Z*(*x*) = ∫_{x} exp[−*J*(*x*)]d*x*,^{18,19}

$$p(x)=\text{exp}[-J(x)]/Z(x)\phantom{\rule{thinmathspace}{0ex}},$$

(5)

so that

$$p(o,h|i,a)=\text{exp}[-J(o,h|i,a)]/Z=({Z}_{i}/{Z}_{n}{Z}_{o}{Z}_{h}Z)\text{exp}[-{J}_{n}(i|o,h,a)-{J}_{o}(o|a)-{J}_{h}(h|a)+{J}_{i}(i|a)]\phantom{\rule{thinmathspace}{0ex}},$$

(6)

where we have used the subscripts *n* to denote noise-model-related data fidelity terms, *o* to denote the terms arising from the *a priori* object distribution, *h* to denote the terms arising from the *a priori* assumptions for the PSF, and *i* to denote the terms arising from the *a priori* distribution of images. The mode or best estimate for *both o* and *h* can be found by maximizing Eq. (6) with respect to these variables or, equivalently, by minimizing the corresponding negative log-likelihood, *J*(*o, h* | *i, a*),

$$[\widehat{o},\widehat{h}]=\underset{[o,h]}{\text{arg min}}\left\{J(o,h|i,a)\right\}=\underset{[o,h]}{\text{arg min}}\left\{{J}_{n}(i|o,h,a)+{J}_{o}(o|a)+{J}_{h}(h|a)\right\}.$$

(7)

Since *J*_{i}(*i* | *a*) does not depend on the object or PSF estimates, it is constant with respect to the minimization and may be dropped from Eq. (7).

Our goal is to minimize Eq. (7) subject to a specific set of model assumptions for *J*_{n}(*i* | *o, h, a*), *J*_{o}(*o* | *a*), and *J*_{h}(*h* | *a*), which are described in the subsections that follow.

Assuming the mixed-Gaussian noise model of Eq. (2), the fidelity of the reconstructed object *ô* and PSF *ĥ* with respect to the observed image *i* can be described by the following weighted maximum-likelihood term:

$${J}_{n}(i|o,h,a)=\frac{1}{2}{\displaystyle \sum _{\mathbf{r}}}\frac{{(i(\mathbf{r})-\widehat{o}(\mathbf{r})\otimes \widehat{h}(\mathbf{r}))}^{2}}{w(\mathbf{r})}\phantom{\rule{thinmathspace}{0ex}}.$$

(8)

Deconvolution approaches that are based solely on this term often lead to noise amplification and severe ringing artifacts. The Landweber method and the Richardson–Lucy or expectation-maximization algorithm are examples of such approaches, which assume a stationary-Gaussian and Poisson noise model for *w*(**r**), respectively.^{8,17} To minimize noise amplification artifacts and find a unique and stable solution in practice, Eq. (8) must be regularized. In the aforementioned methods, regularization is accomplished empirically by limiting the number of deconvolution iterations.
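Under the weighted-Gaussian model, the data fidelity term of Eq. (8) is a pixelwise weighted least squares; a minimal NumPy sketch (with a periodic FFT convolution and illustrative array names):

```python
import numpy as np

def J_n(image, obj_est, psf_est, weights):
    """Weighted data fidelity of Eq. (8): 0.5 * sum_r (i - o_hat (conv) h_hat)^2 / w."""
    model = np.fft.ifftn(np.fft.fftn(obj_est) * np.fft.fftn(psf_est)).real
    return 0.5 * np.sum((image - model) ** 2 / weights)

rng = np.random.default_rng(1)
i = rng.normal(100.0, 1.0, (16, 16))
w = np.ones_like(i)
delta = np.zeros((16, 16)); delta[0, 0] = 1.0   # identity kernel for circular convolution
fit_cost = J_n(i, i, delta, w)                  # perfect fit: essentially zero
off_cost = J_n(i, i + 1.0, delta, w)            # a biased estimate costs more
```

With a unit bias on every pixel, the cost is 0.5 × 256 = 128 for this 16 × 16 example.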

Equation (8) may also be regularized through a quadratic penalty term based on an object’s spatial gradient.^{15,16} Quadratic regularization, however, often yields results that are oversmoothed and have compromised image contrast when applied uniformly to all object features. Using a roughness penalty that is instead subquadratic for regions of high contrast has been very successful in preserving edges and other sharp object features.^{15,20–22} The underlying assumption here is that large gradient discontinuities in the image arise from genuine object features and should be penalized comparatively less than small gradients due to noisy background features. We use the isotropic edge-preserving prior proposed by Mugnier *et al.*,^{12} which is based on the work of Brette and Idier^{23}:

$${J}_{o}(o|a)={\lambda}_{o}{\displaystyle \sum _{\mathbf{r}}}\mathrm{\Phi}(\gamma (\widehat{o},{\theta}_{\mathbf{r}}))\phantom{\rule{thinmathspace}{0ex}},$$

(9)

$$\mathrm{\Phi}(\gamma )=\gamma -\text{ln}(1+\gamma )\phantom{\rule{thinmathspace}{0ex}},$$

(10)

$$\gamma (\widehat{o},{\theta}_{\mathbf{r}})=\left(\frac{\Vert \mathrm{\nabla}\widehat{o}(\mathbf{r})\Vert}{{\theta}_{\mathbf{r}}}\right)\phantom{\rule{thinmathspace}{0ex}},$$

(11)

where ‖∇*ô*(**r**)‖ = [(∂_{x}*ô*(**r**))^{2} + (∂_{y}*ô*(**r**))^{2} + (∂_{z}*ô*(**r**))^{2}]^{1/2} is the norm of the spatial gradient of the object, θ_{**r**} and λ_{o} are auxiliary parameters or hyperparameters of the object prior distribution, γ is a reduced gradient modulus, and Φ(γ) is called the clique potential. Φ(γ) is a function that characterizes the local object texture at a position **r** based on a subset or clique of neighboring pixels. This clique is defined in practice through the calculation of the gradient norm in Eq. (11). For large values of γ, Φ(γ) ≈ γ, whereas for small values of γ, Φ(γ) = γ − ln(1 + γ) = γ − (γ − γ^{2}/2 + …) ≈ γ^{2}/2, resulting in so-called L1–L2 (linear–quadratic) behavior. Numerous L1–L2 regularization functionals have been suggested in the literature (e.g., see Teboul *et al.*^{22}). The advantage of Eq. (10) over other forms is that it is convex and its derivative with respect to *ô* does not involve any transcendental or exponential functions, making cost function optimization easier and less expensive (see Subsection 3.C).
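A minimal sketch of Eqs. (9)–(11), with a simple periodic finite-difference gradient standing in for the Frei–Chen masks of Subsection 3.C and illustrative hyperparameter values:

```python
import numpy as np

def phi(gamma):
    """Clique potential of Eq. (10): ~gamma^2/2 for small gamma, ~gamma for large."""
    return gamma - np.log1p(gamma)

def J_o(obj_est, lambda_o=1.0, theta=1.0):
    """Edge-preserving object prior of Eq. (9), here with a scalar theta."""
    gx = np.roll(obj_est, -1, axis=1) - obj_est      # periodic finite differences
    gy = np.roll(obj_est, -1, axis=0) - obj_est
    gamma = np.hypot(gx, gy) / theta                 # reduced gradient modulus, Eq. (11)
    return lambda_o * np.sum(phi(gamma))
```

Because Φ grows only linearly for large γ, a single sharp step edge is penalized far less than it would be under a purely quadratic regularizer.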

The scaling parameter λ_{o} plays an important role in balancing maximum-likelihood fidelity to the data against the preservation of high-contrast features in the object estimate. The hyperparameter θ_{**r**} sets the width and shape of the Gibbs distribution in Eq. (5); it governs the point at which regularization transitions from being quadratic to being linear. In Mugnier *et al.*’s treatment,^{12} the same scalar pair of values (λ_{o}, θ) is applied to each pixel element of the object. We have found that an inhomogeneous hyperparameter model as advocated by others,^{24–27} in which θ_{**r**} is pixel/voxel dependent (as indicated by the subscript) and adapted to the local object texture, yields better deconvolution results.

To myopically reconstruct the PSF, the following Fourier domain constraint is used:

$${J}_{h}(h|a)=\frac{{\lambda}_{H}}{2}{\displaystyle \sum _{\mathbf{k}}}\frac{|\widehat{H}(\mathbf{k})-\overline{\mathscr{H}}(\mathbf{k}){|}^{2}}{\upsilon (\mathbf{k})}\phantom{\rule{thinmathspace}{0ex}},$$

(12)

where λ_{H} controls the degree of the OTF regularization constraint relative to the data fidelity term [Eq. (8)], *Ĥ*(**k**) is the estimate of the true OTF, $\overline{\mathscr{H}}(\mathbf{k})$ is the mean measured OTF, and the overbar denotes an average over *l* measured OTF samples. υ(**k**) is the OTF sampling variance or power spectral density defined as

$$\upsilon (\mathbf{k})={\langle |{\mathscr{H}}_{l}(\mathbf{k})-\overline{\mathscr{H}}(\mathbf{k}){|}^{2}\rangle}_{l}={\langle |{\mathscr{H}}_{l}(\mathbf{k}){|}^{2}\rangle}_{l}-|\overline{\mathscr{H}}(\mathbf{k}){|}^{2}\phantom{\rule{thinmathspace}{0ex}}.$$

(13)

υ(**k**) serves as a spring constant to harmonically constrain each OTF *k* component to a mean value consistent with a set of measured OTFs. Equation (12) intrinsically handles the band-limitedness of the OTF: frequencies beyond the optical system’s resolution limit are essentially ignored, since they are not represented in the measured samples. Conan and co-workers^{28,29} have shown that this harmonic OTF constraint performs noticeably better at recovering the true OTF than the simple band-limited constraint typically used in blind deconvolution methods.^{7,30} A harmonic constraint for each spatial *frequency*, |**k**|, which is functionally equivalent to using a radially averaged υ(**k**), may be used instead, although we have found the less stringent constraint, Eq. (12), to be sometimes more robust.
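The statistics in Eqs. (12) and (13) can be computed directly from a stack of measured PSF images. In this sketch the PSF samples are synthetic (all names illustrative), and a small `eps` guards against division by zero at fully constrained frequencies:

```python
import numpy as np

rng = np.random.default_rng(2)
psfs = np.abs(rng.normal(1.0, 0.1, (5, 16, 16)))      # l = 5 noisy PSF measurements
psfs /= psfs.sum(axis=(1, 2), keepdims=True)          # each normalized to unit sum
otfs = np.fft.fftn(psfs, axes=(1, 2))                 # measured OTFs H_l(k)
otf_mean = otfs.mean(axis=0)                          # the overbarred mean OTF
otf_var = np.mean(np.abs(otfs - otf_mean) ** 2, axis=0)   # v(k) of Eq. (13)

def J_h(otf_est, lambda_H=1.0, eps=1e-12):
    """Harmonic OTF constraint of Eq. (12)."""
    return 0.5 * lambda_H * np.sum(np.abs(otf_est - otf_mean) ** 2 / (otf_var + eps))
```

Note that υ(**k**) vanishes at **k** = 0 because every unit-normalized PSF pins *H*(0) = 1, i.e., that component is fully constrained.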

The focus thus far has been on a single image frame. One of our goals in developing AIDA was to combine the demonstrated strengths of MISTRAL with the multiple-frame synthesis capabilities available in a method such as IDAC, the Iterative Deconvolution Algorithm in C.^{2,30,31} Christou *et al.*^{31} have argued that the use of multiple observations can serve as an additional deconvolution constraint: the ratio of unknown variables to measured quantities being reduced from 2:1 for a single image frame to (*M* + 1):*M* for *M* image frame observations. The simultaneous analysis of multiple observations implicitly accounts for correlations that may exist among variables as well as between variables and the data.^{32} Consequently, multiple-frame deconvolution should result in systematically lower error bounds with more reliable results than when individual image frames are deconvolved separately or when multiple frames are merged into an averaged “shift-and-added” image (i.e., an image generated by averaging the image frames after appropriate pixel shifts are made to maximize image correlation) and then deconvolved.^{2,11,33–36}

The extension to multi-frame deconvolution is straightforward. For multiple-image observations, Eq. (1) may be expressed generally in vector form:

$$\left\{\begin{array}{c}\hfill {i}_{1}={o}_{1}\otimes {h}_{1}+{\breve{n}}_{1},\hfill \\ \hfill {i}_{2}={o}_{2}\otimes {h}_{2}+{\breve{n}}_{2},\hfill \\ \hfill \vdots \hfill \\ \hfill {i}_{M}={o}_{M}\otimes {h}_{M}+{\breve{n}}_{M}\hfill \end{array}\right\}\equiv \mathbf{i}(\mathbf{r})=\underset{\mathbf{g}(\mathbf{r})}{\underbrace{\mathbf{o}(\mathbf{r})\ddot{\otimes}\mathbf{h}(\mathbf{r})}}+\breve{\mathbf{n}}(\mathbf{r})\phantom{\rule{thinmathspace}{0ex}},$$

(14)

where ⊗̈ specifies a convolution performed over the appropriate vector element pairs, *o*_{j} ⊗ *h*_{j}.

The cost function to be minimized for multi-PSF deconvolution is given by

$${J}_{M\_\mathit{\text{PSF}}}(o,\mathbf{h}|\mathbf{i},a)=\left\{\frac{1}{2}{\displaystyle \sum _{\beta}^{{M}_{h}}}\left[{\displaystyle \sum _{\mathbf{r}}}\frac{{({i}_{\beta}-\widehat{o}\phantom{\rule{thinmathspace}{0ex}}\otimes \phantom{\rule{thinmathspace}{0ex}}{\widehat{h}}_{\beta})}^{2}}{{w}_{\beta}}+{\lambda}_{{h}_{\beta}}{\displaystyle \sum _{\mathbf{k}}}\frac{|{\widehat{H}}_{\beta}-\overline{\mathscr{H}}{|}^{2}}{\upsilon}\right]\right\}+{\lambda}_{o}{\displaystyle \sum _{\mathbf{r}}}\mathrm{\Phi}(\gamma (\widehat{o},{\theta}_{\mathbf{r}}))$$

(15)

and for multi-object deconvolution by

$${J}_{M\_\mathit{\text{object}}}(\mathbf{o},h|\mathbf{i},a)=\left\{{\displaystyle \sum _{\alpha}^{{M}_{o}}}\left[{\displaystyle \sum _{\mathbf{r}}}\left(\frac{{({i}_{\alpha}-{\widehat{o}}_{\alpha}\phantom{\rule{thinmathspace}{0ex}}\otimes \phantom{\rule{thinmathspace}{0ex}}\widehat{h})}^{2}}{2{w}_{\alpha}}+{\lambda}_{{o}_{\alpha}}\mathrm{\Phi}(\gamma ({\widehat{o}}_{\alpha},{\theta}_{\mathbf{r},\alpha}))\right)\right]\right\}+\frac{{\lambda}_{h}}{2}{\displaystyle \sum _{\mathbf{k}}}\frac{|\widehat{H}-\overline{\mathscr{H}}{|}^{2}}{\upsilon},$$

(16)

where α and β are used to index multiple objects and PSFs, respectively.
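The structure of Eq. (15) — one shared object, *M*_{h} PSFs, a per-frame fidelity and OTF penalty, plus a single object prior — can be sketched as follows; the component functions here are scalar stand-ins, not AIDA’s actual terms:

```python
def J_multi_psf(images, o_hat, h_hats, weights, fidelity, otf_penalty, object_prior):
    """Sum per-frame fidelity + OTF penalties (index beta), plus one object prior."""
    total = 0.0
    for i_b, h_b, w_b in zip(images, h_hats, weights):   # beta = 1 .. M_h
        total += fidelity(i_b, o_hat, h_b, w_b) + otf_penalty(h_b)
    return total + object_prior(o_hat)

# Scalar stand-ins mirroring the shapes of Eqs. (8), (12), and (9):
fidelity = lambda i, o, h, w: (i - o * h) ** 2 / (2.0 * w)
otf_penalty = lambda h: h ** 2
object_prior = lambda o: abs(o)
J = J_multi_psf([1.0, 2.0], 3.0, [0.1, 0.2], [1.0, 1.0],
                fidelity, otf_penalty, object_prior)
```

The multi-object cost of Eq. (16) has the mirror-image structure: the prior moves inside the per-frame sum and a single OTF penalty remains outside.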

We implemented AIDA using Numerical Python–Numarray,^{37} with calls to a specialized C++ conjugate gradient (CG) optimizer (see Subsection 3.B), which were handled by code generated using the Simplified Wrapper and Interface Generator^{38,39} (SWIG). Fast Fourier transforms were computed using the fftw (version 2.1.5) subroutine library^{40} (see also http://www.fftw.org) in lieu of the standard Numarray fftpack library, resulting in about a factor of 2 improvement in the overall speed of the algorithm. A schematic of the algorithm is shown in Fig. 1.

AIDA optimization protocol. A: Setup and variable initialization stage. Equation numbers for variables are shown in curly brackets. *M*_{o} and *M*_{h} are the number of objects and PSFs to be estimated, respectively. B: Deconvolution scheme. The subscript *j* indexes **...**

AIDA begins with a preprocessing stage to estimate data fidelity weights, *w* (see below, Subsection 3.C), and to calculate the mean OTF, $\overline{\mathscr{H}}$, and OTF variance, υ. It is assumed that all the images supplied have been properly flat fielded and optionally background subtracted. In cases where the image does not have negative pixels following background subtraction (as is the case for an image without true dark areas), the user must supply either a value for σ_{G} or a dark image from which it can be estimated.

The present version of AIDA expects images of reference PSFs (e.g., of a guide star or subdiffraction-sized bead), which are normalized to 1 and used to compute $\overline{\mathscr{H}}$ and υ. If only one PSF image is supplied, υ is calculated based on the noise statistics of the image, as for *w*. AIDA is equipped with an optional clean-up module to remove hot–dark pixels from these PSF images and to remove noise according to a user-defined threshold. An option to use a radially averaged OTF variance is provided to enable a more stringent harmonic constraint of spatial frequencies (see Subsection 2.C.3).

The default mode for AIDA uses automatic hyperparameter settings as described below in Subsection 3.D. The option to directly specify hyperparameter values or a scale factor by which to multiply the automatic estimates is available for fine-tuning purposes. For mono-frame deconvolutions, AIDA is also capable of performing unsupervised deconvolutions over a grid of λ_{o} and θ_{r} hyperparameter values centered about automatic estimates or user-defined centers.

Although it is possible to simultaneously estimate both sets of objects, **ô**, and PSFs, **ĥ**, by stacking them into a single variable to be optimized [see Eq. (7)], doing so could result in slower convergence, since significant differences in magnitude between **ô** and **ĥ** can result in a skewed optimization landscape and ill-conditioning.^{41} Although variable renormalization could solve this issue, we have chosen instead to alternate between the minimization of **ô** and **ĥ** in the current version of AIDA, as advocated by Mugnier *et al.*^{12}

For nonquadratic cost functions, solution convergence can often be improved by periodically restarting the CG minimization after a defined number of steps so as to interlace steepest-descent steps with CG steps. We have found this partial conjugate gradient (PCG) approach^{41} to be more effective than a simple CG approach in minimizing the quasi-quadratic cost functions Eqs. (15) and (16), consistent with the findings of Mugnier *et al.*^{12}

Starting with each PSF in **ĥ** set to the mean of the sampled PSFs ($\mathcal{F}^{-1}[\overline{\mathscr{H}}]$), each object in **ô** is optimized via a PCG approach. CG optimization is capped by a set number of iterations, ζ (typically 25), constituting a CG block and repeated for π_{o} PCG iterations. The resulting estimate for **ô** is then fixed, and each PSF in **ĥ** is optimized via π_{h} PCG iterations. The multi-frame estimates **ô** and **ĥ** are alternately optimized, with each pair of estimations constituting one AIDA optimization round. The number of PCG iterations per optimization round for **ô** and **ĥ** is typically increased progressively, with the possibility of separate PCG iteration plans for **ô** and **ĥ**. By default, the number of PCG iterations executed per optimization round is given by *PCG*[*j*] = 2(*j* − 1) + 1, where *j* is the optimization round, from 1 to η, the maximum default number of optimization rounds (typically 8). Progressively increasing the number of PCG iterations in this manner ensures that the optimization of the current variable (e.g., **ô**) does not get fixed too quickly relative to the other variable (e.g., **ĥ**), which may yet be suboptimal. Multi-frame optimization of **ô** and **ĥ** continues until the fraction of individual *ô*_{j} and *ĥ*_{j} estimates that have converged meets a set threshold, or until the maximum number of optimization rounds, η, is reached.
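The alternating schedule can be summarized in a few lines; the minimizer callbacks below are placeholders standing in for the constrained PCG solver, and the convergence test is elided for brevity:

```python
def pcg_plan(eta=8):
    """Default PCG iterations per optimization round: PCG[j] = 2*(j - 1) + 1."""
    return [2 * (j - 1) + 1 for j in range(1, eta + 1)]

def alternate(o_hat, h_hat, minimize_o, minimize_h, eta=8):
    """AIDA-style alternation: object estimation, then PSF estimation, per round."""
    for n_pcg in pcg_plan(eta):
        for _ in range(n_pcg):       # object PCG iterations (PSF held fixed)
            o_hat = minimize_o(o_hat, h_hat)
        for _ in range(n_pcg):       # PSF PCG iterations (object held fixed)
            h_hat = minimize_h(o_hat, h_hat)
    return o_hat, h_hat

plan = pcg_plan()
# Counting placeholder "minimizers" just tally how often each variable is updated.
o_steps, h_steps = alternate(0, 0, lambda o, h: o + 1, lambda o, h: h + 1)
```

With the default η = 8, the plan is 1, 3, 5, …, 15 iterations per round, for 64 updates of each variable in total.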

AIDA’s quasi-quadratic cost function was minimized using a constrained CG algorithm developed by Goodman and co-workers,^{42} which is freely available as part of the EDEN Holographic Method package.^{43,44} This algorithm incorporates three significant advances over the conventional CG method.^{45} First, to ensure that solutions are positive (or within a user-specified bound), a projected gradient or active sets approach is used.^{41} Johnston *et al.*^{46} have shown that such an approach is superior to maintaining solution positivity via reparametrization, since reparametrization often leads to the creation of spurious minima that can complicate the optimization process. Second, to prevent zig-zagging behavior that can arise when using an active sets approach or minimizing nonquadratic functions, an adaptive bending line search is used to set the most effective conjugate direction step size (typically called α). Third, to better preserve conjugacy between successive directions, the CG deflection parameter (typically called β) is computed using the Hestenes–Stiefel formula instead of the standard Fletcher–Reeves or Polak–Ribiere formula.^{41}

To facilitate modification and future developments of AIDA, the calculation of the cost function was written in an extensible manner in which cost function terms may be turned on or off. For computational efficiency, only terms that are dependent upon the variable being estimated are computed (e.g., for *ô _{j}*, data fidelity and object regularization terms, but not the OTF constraint, are computed).

The data fidelity weights for each image frame, *w*(**r**) [see Eq. (8)], can be computed as a sum of Gaussian and Poissonian contributions according to Eq. (2) as proposed by Mugnier *et al.*^{12}:

$$w(\mathbf{r})=\underset{{\sigma}_{G}^{2}}{\underbrace{\frac{\pi}{2}{({\langle i(\mathbf{r})\rangle}_{\le 0})}^{2}}}+\underset{{\sigma}_{P}^{2}}{\underbrace{\text{max}[i(\mathbf{r})\phantom{\rule{thinmathspace}{0ex}},0]}}.$$

(17)

The first term accounts for Gaussian detection–electronic readout noise, ${\sigma}_{G}^{2}$, which can be estimated using the average over all negative pixels in the image. For images of extended objects that do not have any negative-pixel areas (common in microscopy), a separate dark image is required from which ${\sigma}_{G}^{2}$ can be computed directly. The second term in Eq. (17) accounts for Poisson photonic noise, ${\sigma}_{P}^{2}$; this term is derived from the fact that the variance equals the mean and the mode for a Poisson distribution. Although this term should technically be determined using a noise-free image estimate, ${\sigma}_{P}^{2}=\text{max}[\widehat{g}(\mathbf{r})\phantom{\rule{thinmathspace}{0ex}},0]$, we did not observe a significant improvement in deconvolution quality to merit using this more accurate though algorithmically complicated approach.

The estimates for the variances in Eq. (17) implicitly assume that *i* has been properly background subtracted so as to lead to a properly centered and sampled Gaussian distribution for readout noise. Only noise arising from the image formation is accounted for here. “Scientific noise” (e.g., cellular autofluorescence in microscopy imaging), which may be irrelevant to image features of scientific interest, is not accounted for explicitly but is treated as an optically genuine component of the object under observation.
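Equation (17) can be computed directly from a background-subtracted image. The sketch below relies on the fact that, for zero-mean Gaussian noise of variance ${\sigma}_{G}^{2}$, the mean of the nonpositive samples is $-{\sigma}_{G}\sqrt{2/\pi}$, so $(\pi/2){\langle i\rangle}_{\le 0}^{2}$ recovers ${\sigma}_{G}^{2}$ (the test image is synthetic):

```python
import numpy as np

def fidelity_weights(image):
    """Data-fidelity weights w(r) of Eq. (17)."""
    neg = image[image <= 0]
    # Gaussian readout variance from the negative-pixel mean: E[i | i<=0] = -sigma*sqrt(2/pi)
    sigma_G2 = (np.pi / 2.0) * neg.mean() ** 2 if neg.size else 0.0
    sigma_P2 = np.maximum(image, 0.0)          # Poisson term: variance = mean
    return sigma_G2 + sigma_P2

rng = np.random.default_rng(3)
dark = rng.normal(0.0, 5.0, 100_000)           # pure readout noise, sigma_G = 5
w = fidelity_weights(dark)
sigma_G2_est = w.min()                         # Poisson term vanishes where i <= 0
```

On this synthetic dark frame the estimate lands near the true ${\sigma}_{G}^{2}=25$.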

The clique potential [see Eq. (10)] used for edge-preserving object regularization requires that effective spatial gradients of the object estimate be computed. This can be done efficiently by convolving the object estimate with a gradient mask:

$${\mathrm{\nabla}}_{r}\widehat{o}(\mathbf{r})=\widehat{o}(\mathbf{r})\otimes \chi {\mathbf{G}}_{r},$$

(18)

where **G**_{r} is a 3 × 3 matrix operator corresponding to the gradient of interest in the direction *r* and χ is a scaling normalization factor. Many different gradient masks developed for image segmentation may be used.^{17,47} We prefer masks based on the work of Frei and Chen,^{48} since they are equally effective on horizontal, vertical, and diagonal edges, and we have found these operators to be more effective in recovering subtle object features than traditional nearest-neighbor finite-difference approximations (see, e.g., Press *et al.*, Section 5.7^{45}). In two dimensions these masks are given by

$${\mathbf{G}}_{x}=\left[\begin{array}{ccc}\hfill 1\hfill & \hfill 0\hfill & \hfill -1\hfill \\ \hfill \sqrt{2}\hfill & \hfill 0\hfill & \hfill -\sqrt{2}\hfill \\ \hfill 1\hfill & \hfill 0\hfill & \hfill -1\hfill \end{array}\right]\phantom{\rule{thinmathspace}{0ex}};\phantom{\rule{thinmathspace}{0ex}}{\mathbf{G}}_{y}=\left[\begin{array}{ccc}\hfill -1\hfill & \hfill -\sqrt{2}\hfill & \hfill -1\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 1\hfill & \hfill \sqrt{2}\hfill & \hfill 1\hfill \end{array}\right]\phantom{\rule{thinmathspace}{0ex}},$$

(19)

and in three dimensions it is given by

$$\begin{array}{c}\hfill {{\displaystyle \mathbf{G}}}_{x}=\left[(\mathbf{0}),\left[\begin{array}{ccc}\hfill 1\hfill & \hfill 0\hfill & \hfill -1\hfill \\ \hfill \sqrt{2}\hfill & \hfill 0\hfill & \hfill -\sqrt{2}\hfill \\ \hfill 1\hfill & \hfill 0\hfill & \hfill -1\hfill \end{array}\right],(\mathbf{0})\right];\hfill \\ \hfill {{\displaystyle \mathbf{G}}}_{y}=\left[(\mathbf{0}),\left[\begin{array}{ccc}\hfill -1\hfill & \hfill -\sqrt{2}\hfill & \hfill -1\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 1\hfill & \hfill \sqrt{2}\hfill & \hfill 1\hfill \end{array}\right],(\mathbf{0})\right];\hfill \\ \hfill {{\displaystyle \mathbf{G}}}_{z}=\kappa \cdot \left[\left[\begin{array}{ccc}\hfill 0\hfill & \hfill -1\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill -\sqrt{2}\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill -1\hfill & \hfill 0\hfill \end{array}\right],(\mathbf{0}),\left[\begin{array}{ccc}\hfill 0\hfill & \hfill 1\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill \sqrt{2}\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 1\hfill & \hfill 0\hfill \end{array}\right]\right],\hfill \end{array}$$

(20)

where $\chi ={(2+\sqrt{2})}^{-1}$ and κ is a *z*-resolution compensation factor. In 3D microscopic imaging, the OTF support in the axial direction is significantly smaller than in the radial direction. This leads to a greater loss of information, and thus increased blurring, in the *z* direction relative to the *x* or *y* direction; κ is used to compensate for the more diffuse and uncertain gradient observed in the *z* direction of the image stack so that axial and lateral gradient information are on an equal footing. Given the lateral and axial resolutions of a microscope, *r*_{xy} ≈ 0.61λ/NA and *r*_{z} ≈ 2*n*λ/NA^{2}, κ may be estimated by their ratio:

$$\kappa \approx 3.33\phantom{\rule{thinmathspace}{0ex}}n/\text{NA},$$

(21)

and, using values typical in microscopic imaging (*n* ≈ 1.33, NA ≈ 1.4), κ ≈ 3.
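A sketch applying the 2D Frei–Chen masks of Eq. (19) by direct correlation (for these antisymmetric masks, convolution would only flip the sign), together with the κ arithmetic of Eq. (21); all arrays are illustrative:

```python
import numpy as np

s2 = np.sqrt(2.0)
chi = 1.0 / (2.0 + s2)                         # gradient normalization factor
Gx = np.array([[1, 0, -1], [s2, 0, -s2], [1, 0, -1]])
Gy = np.array([[-1, -s2, -1], [0, 0, 0], [1, s2, 1]])

def grad2d(o, G):
    """Correlate the interior of o with the 3x3 mask chi * G (valid region only)."""
    H, W = o.shape
    out = np.zeros((H - 2, W - 2))
    for dy in range(3):
        for dx in range(3):
            out += G[dy, dx] * o[dy:dy + H - 2, dx:dx + W - 2]
    return chi * out

o = np.zeros((8, 8)); o[:, 4:] = 1.0           # unit step edge along x
gx, gy = grad2d(o, Gx), grad2d(o, Gy)

kappa = 3.33 * 1.33 / 1.4                      # Eq. (21) with n = 1.33, NA = 1.4
```

With χ = (2 + √2)^{−1}, a unit step produces a gradient of unit magnitude at the edge, and the *y* gradient is identically zero; the κ value evaluates to about 3.16, rounded to κ ≈ 3 in the text.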

Minimizing the AIDA cost function [Eq. (15) or (16)] with the CG method requires analytical derivatives with respect to both object and PSF estimates. These can be determined through functional differentiation^{50} and are given by

$$\frac{\partial J}{\partial {o}_{\alpha}}=\left\{\sum_{\beta}^{{N}_{\text{PSFs}}}{\widehat{h}}_{\beta}\star \left(\frac{{\widehat{o}}_{\alpha}\otimes {\widehat{h}}_{\beta}-{i}_{\beta}}{{w}_{\beta}}\right)\right\}+\frac{{\lambda}_{{o}_{\alpha}}}{{\theta}_{\mathbf{r}}^{2}}\left(\frac{{\nabla}_{\mathbf{r}}^{2}{\widehat{o}}_{\alpha}}{1+\gamma ({\widehat{o}}_{\alpha},{\theta}_{\mathbf{r}})}\right),$$

(22)

$$\frac{\partial J}{\partial {h}_{\beta}}=\left\{\sum_{\alpha}^{{N}_{\text{objects}}}{\widehat{o}}_{\alpha}\star \left(\frac{{\widehat{o}}_{\alpha}\otimes {\widehat{h}}_{\beta}-{i}_{\alpha}}{{w}_{\alpha}}\right)\right\}+{\lambda}_{{h}_{\beta}}\frac{({N}_{d}+1)}{2}{\mathcal{F}}^{-1}\left[\frac{({\widehat{H}}_{\beta}-\overline{\mathscr{H}})}{\upsilon}\right],$$

(23)

where ★ denotes a correlation. In practice, the terms in curly brackets are computed in the Fourier domain, in accordance with the convolution- and correlation-Fourier theorems.^{45,51} We assume that the arrays (or region-of-interest subarrays) used in Fourier calculations are sufficiently padded so that boundary aliasing problems can be ignored. In computing the derivative of the OTF constraint with respect to *h* [rightmost term in Eq. (23)], we have used the conjugation property of the discrete Fourier transform, $\mathcal{F}[{x}^{*}]={N}_{d}{(\mathcal{F}^{-1}[x])}^{*}$.
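The Fourier-domain evaluation of the bracketed data-fidelity terms can be sketched with numpy's FFT. This is our illustration (function and variable names ours), assuming periodic boundaries, i.e., that the arrays are already adequately padded:

```python
import numpy as np

def fft_convolve(h, x):
    """Circular convolution of h with x via the convolution theorem."""
    return np.real(np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(x)))

def fft_correlate(h, x):
    """Circular correlation of h with x via the correlation theorem."""
    return np.real(np.fft.ifft2(np.conj(np.fft.fft2(h)) * np.fft.fft2(x)))

# Data-fidelity term of Eq. (22) for a single frame:
#   h_beta correlated with ((o_hat convolved h_beta - i_beta) / w_beta)
rng = np.random.default_rng(0)
o_hat = rng.random((8, 8))
h_beta = rng.random((8, 8))
w_beta = np.ones((8, 8))
i_beta = fft_convolve(o_hat, h_beta)          # a perfectly explained image ...
residual = (fft_convolve(o_hat, h_beta) - i_beta) / w_beta
grad_term = fft_correlate(h_beta, residual)   # ... so this gradient term vanishes
```

When the model exactly reproduces the data, the data-fidelity gradient is zero, as the example confirms.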

The spatial Laplacian of the object in Eq. (22) may be computed by convolving the spatial object gradient with a gradient mask [cf. Eq. (18)] as proposed by Mugnier *et al.*^{52} Alternatively, the object may be convolved directly with the following Laplacian operator mask, which we find to be faster and yield finer results:

$${\nabla}_{\mathbf{r}}^{2}\widehat{o}=\widehat{o}(\mathbf{r})\otimes \chi \mathbf{L},$$

(24)

where in two dimensions

$$\chi =\frac{1}{12},\qquad \mathbf{L}=\begin{bmatrix}-1 & -2 & -1\\ -2 & 12 & -2\\ -1 & -2 & -1\end{bmatrix},$$

(25)

and in three dimensions

$$\chi =\frac{1}{16(1+\kappa )},\qquad \mathbf{L}=\left[\begin{bmatrix}-1 & -1 & -1\\ -1 & -2\kappa & -1\\ -1 & -1 & -1\end{bmatrix},\ \begin{bmatrix}-\kappa & -2\kappa & -\kappa \\ -2\kappa & 16(1+\kappa ) & -2\kappa \\ -\kappa & -2\kappa & -\kappa \end{bmatrix},\ \begin{bmatrix}-1 & -1 & -1\\ -1 & -2\kappa & -1\\ -1 & -1 & -1\end{bmatrix}\right],$$

(26)

where κ once again compensates for the relative loss in resolution in the *z* versus *xy* directions (typically ~3).
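A quick consistency check on the Laplacian masks (our sketch; the explicit "valid" convolution helper is ours): a discrete Laplacian-like operator should annihilate constant and linear intensity ramps, which requires the mask weights to sum to zero. We therefore read the 2D center weight as +12, matching the positive +16(1+κ) center of the 3D mask.

```python
import numpy as np

def laplacian_mask_2d():
    """chi * L of Eq. (25), with a +12 center so the weights sum to zero."""
    L = np.array([[-1.0, -2.0, -1.0],
                  [-2.0, 12.0, -2.0],
                  [-1.0, -2.0, -1.0]])
    return L / 12.0

def apply_mask(img, mask):
    """'Valid' 2D convolution of img with a 3x3 mask (flipped, per convention)."""
    m = mask[::-1, ::-1]
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for dy in range(3):
        for dx in range(3):
            out += m[dy, dx] * img[dy:dy + H - 2, dx:dx + W - 2]
    return out

y, x = np.mgrid[0:6, 0:6].astype(float)
ramp = 2.0 * x - 3.0 * y + 5.0      # linear ramp: response should vanish
bowl = x**2 + y**2                  # quadratic bowl: response should be constant
```

The mask annihilates the ramp and returns a constant on the quadratic bowl, the defining behaviors of a (scaled) Laplacian stencil.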

Methods to estimate the hyperparameters that tune object regularization terms such as Eq. (9) have been a subject of considerable attention.^{24–27,53–59} A number of approaches have been advocated, including L-curve analysis and generalized cross validation.^{54,55} These heuristic methods are computationally expensive, essentially requiring that multiple deconvolutions be performed over a grid of λ_{o} values for each image to be processed. Other more advanced and theoretically rigorous approaches attempt to optimize hyperparameters jointly with object reconstruction.^{54,58,59} These methods aim to maximize the marginal likelihood of observing the measured image given an incomplete data set over the space of hyperparameters: $({\widehat{\theta}}_{\mathbf{r}},{\widehat{\lambda}}_{o})=\mathrm{arg}\,{\mathrm{max}}_{{\theta}_{\mathbf{r}},{\lambda}_{o}}\,p(i\mid {\theta}_{\mathbf{r}},{\lambda}_{o})$; this is functionally equivalent to maximizing the ratio of partition functions, *Z*/*Z _{o}Z_{n}* [cf. Eq. (6)], with respect to the hyperparameter variables.

Our initial efforts to derive an automatic scheme were founded upon a large collection of deconvolution results generated over a grid of θ_{r} and λ_{o} values spanning several orders of magnitude. We used a variety of different 2D object types and natural scenes to build a reference set of images covering a broad range of signal-to-noise ratios. A subset of these reference objects is shown in Fig. 2. These reference images were used to assess deconvolution quality as a function of hyperparameter pairs. From a grid search over hyperparameters, a plane of acceptable (θ_{r}, λ_{o}) solutions (determined by visual inspection) was found to exist, in agreement with observations by Jalobeanu *et al.*^{26} This finding implies that one hyperparameter may be defined while the other hyperparameter is optimally adjusted to balance data fidelity with object regularization. Within the AIDA cost function framework for a single image frame, we found a balance can be achieved by setting θ_{r} according to

$${\theta}_{\mathbf{r}}\equiv \sqrt{w(\mathbf{r})/{\sigma}_{G}}$$

(27)

and computing λ_{o} directly via the approach detailed below. The form of θ_{r} was motivated by general trends observed in the aforementioned set of grid-search results, as well as by the desire for a simple scalar form for λ_{o} (see below).

Subset of reference objects used to test AIDA and establish its automatic hyperparameter estimation scheme. Each object (with maximum intensity set to 100, 1000, or 10,000) was blurred with a Gaussian PSF (FWHM = 4 pixels), had intensity-based Poisson **...**

From Eqs. (8) and (9), the following partition function-like integrals may be defined over the distribution of possible data-model *variations*, δ ≡ *i* − *o* ⊗ *h*, and the distribution of possible gradient norm values *for each pixel element*:

$${\zeta}_{n}(\mathbf{r}){\rfloor}_{\delta}\equiv {\displaystyle {\int}_{\delta}}\text{exp}[-{({\delta}_{\mathbf{r}})}^{2}/2w(\mathbf{r})]\mathrm{d}\delta ,$$

(28)

$${\zeta}_{o}(\mathbf{r}){\rfloor}_{\Vert \nabla o\Vert}\equiv {\displaystyle {\int}_{\Vert \nabla o(\mathbf{r})\Vert}}\text{exp}\phantom{\rule{thinmathspace}{0ex}}\left[-{\lambda}_{o}\left(\frac{\Vert \nabla o(\mathbf{r})\Vert}{\theta (\mathbf{r})}-\text{ln}\phantom{\rule{thinmathspace}{0ex}}\left(1+\frac{\Vert \nabla o(\mathbf{r})\Vert}{\theta (\mathbf{r})}\right)\right)\right]\phantom{\rule{thinmathspace}{0ex}}\mathrm{d}\Vert \nabla o(\mathbf{r})\Vert .$$

(29)

A convenient relation linking θ_{r} and λ_{o} can be obtained by equating these integrals:

$${\zeta}_{n}(\mathbf{r}){\rfloor}_{\delta}\doteq {\zeta}_{o}(\mathbf{r}){\rfloor}_{\Vert \nabla o\Vert},\qquad \sqrt{2\pi w(\mathbf{r})}={\theta}_{\mathbf{r}}{e}^{{\lambda}_{o}}\int_{1}^{\infty}{e}^{-{\lambda}_{o}t}\,{t}^{{\lambda}_{o}}\,\mathrm{d}t\approx {\theta}_{\mathbf{r}}\left(\frac{1}{{\lambda}_{o}}+1\right),$$

(30)

where the approximation holds for λ_{o} ≲ 10. The element-by-element equivalence of these integrals essentially assumes that the behavior of each pixel/voxel element can be decoupled and that the Gibbs distribution (and thus partition function *Z*) of Eq. (5) can be represented as a product of separable functions (i.e., a mean-field approximation).^{59} Equating these integrals effectively defines the balance of maximum-likelihood estimation with edge-preserving regularization: it is achieved by properly normalizing the probability distributions for data fidelity and object gradient norms with respect to one another. In more rigorous marginal likelihood–based hyperparameter estimation approaches,^{24,54,57–59} partition functions over primitive model variable(s) (e.g., *i* or *o*) are used, which lead to nonanalytical equalities that require expectation-maximization sampling in order to be solved. Our scheme instead estimates the sum over all states using conglomerate variables [Eqs. (28) and (29)], leading to the approximate though analytical relation of Eq. (30). Solving for λ_{o} in expression (30):

$${\lambda}_{o}={(\sqrt{2\pi w(\mathbf{r})}/{\theta}_{\mathbf{r}}-1)}^{-1}.$$

(31)

This definition, along with the vector definition of θ_{r}, Eq. (27), leads to a simple, pixel-independent *scalar* expression for λ_{o}:

$${\lambda}_{o}={(\sqrt{2\pi {\sigma}_{G}}-1)}^{-1}.$$

(32)

From Eq. (17) and given the quantized nature of real, noisy data, σ_{G} is guaranteed to be $\ge \sqrt{\pi /2}$, such that θ_{r} and λ_{o} are well defined by Eqs. (27) and (32). Using *w*(**r**) as defined in Eq. (17) and object gradients and Laplacians calculated according to expressions (18)–(26), this estimation scheme is quite robust for data with PSFs of compact spatial extent (effective FWHM ≲ 8 pixels). For imaging data with spatially extended or oversampled PSFs, the pixel-by-pixel integral equivalence approximation used in Eq. (30) breaks down and can lead to somewhat overregularized results. In such cases, scaling the single scalar hyperparameter estimate, λ_{o}, down by typically no more than a factor of 10–100 is sufficient to generate optimal reconstructions. We emphasize that careful estimates of σ_{G} and *w*(**r**) in accordance with Eq. (17) are critical to the success of this estimation scheme.
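The complete estimation scheme of Eqs. (27), (31), and (32) amounts to a few lines. The sketch below (ours; function names are assumptions) also checks the approximation in Eq. (30) by numerical quadrature at λ_{o} = 1, a value where the right-hand side happens to be exact, since e·∫₁^∞ e^(−t) t dt = 2 = 1/λ_{o} + 1:

```python
import numpy as np

def hyperparameters(w, sigma_G):
    """theta_r per pixel [Eq. (27)] and the scalar lambda_o [Eq. (32)]."""
    theta_r = np.sqrt(w / sigma_G)
    lambda_o = 1.0 / (np.sqrt(2.0 * np.pi * sigma_G) - 1.0)
    return theta_r, lambda_o

def zeta_ratio(lam, t_max=1e4, n=500_000):
    """Trapezoidal estimate of e^lam * int_1^inf exp(-lam t) t^lam dt [cf. Eq. (30)]."""
    t = np.linspace(1.0, t_max, n)
    y = np.exp(-lam * t) * t**lam
    dt = t[1] - t[0]
    return np.exp(lam) * (y.sum() - 0.5 * (y[0] + y[-1])) * dt

sigma_G = 4.0                        # estimated Gaussian noise variance
w = np.full((4, 4), 25.0)            # per-pixel weights w(r) from Eq. (17)
theta_r, lambda_o = hyperparameters(w, sigma_G)
```

For λ_{o} = 1, `zeta_ratio(1.0)` returns 2.0 to quadrature accuracy, matching 1/λ_{o} + 1; at other small λ_{o} the agreement is only approximate, consistent with the text.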

For the OTF constraint, a quadratic term in real space [Eq. (8)] must be balanced with a quadratic term in Fourier space [Eq. (12)]. Consistent with the fast Fourier transform^{40} normalization scheme used in our algorithm, we have found that this balance can be approximately achieved by setting

$${\lambda}_{H}=1/{N}_{d},$$

(33)

where *N _{d}* is the number of pixel/voxel elements. The heuristic motivation for this comes from the power conservation relation of Parseval's theorem for discrete Fourier transforms, in which ${\sum}_{r=0}^{{N}_{d}-1}|x(r){|}^{2}=(1/{N}_{d}){\sum}_{k=0}^{{N}_{d}-1}|\tilde{x}(k){|}^{2}$.
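This normalization is easy to confirm with numpy, whose forward FFT is unnormalized (our check, not part of AIDA):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((16, 16))
N_d = x.size                          # number of pixel elements
x_tilde = np.fft.fft2(x)              # numpy's forward FFT carries no 1/N factor

power_real = np.sum(np.abs(x) ** 2)
power_fourier = np.sum(np.abs(x_tilde) ** 2) / N_d   # Parseval: equals power_real

lambda_H = 1.0 / N_d                  # Eq. (33)
```

The 1/N_d factor that reconciles real- and Fourier-space power is exactly the factor assigned to λ_{H}.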

In Fig. 3, we present classical deconvolution results for one of our synthesized data sets to demonstrate the effectiveness of the automatic estimation scheme. The brain object (256 × 256 pixels) shown in Fig. 3(A) is from a magnetic-resonance imaging (MRI) scan available from the Computer Vision Group at the University of Granada.^{60} This object was convolved with a Gaussian PSF (FWHM of 4 pixels), and the result was normalized to a maximum intensity of 1000. This noise-free image, *g*(**r**), was subjected to a Poisson noise transformation. Varying amounts of Gaussian noise were subsequently added (mimicking CCD detector readout noise) according to a predetermined image signal-to-noise ratio (SNR), which we define as

$$\mathit{\text{SNR}}\equiv 10\phantom{\rule{thinmathspace}{0ex}}{\text{log}}_{10}\frac{\text{var}[g(\mathbf{r})]}{{\langle w(\mathbf{r})\rangle}_{\mathbf{r}}},$$

(34)

where var[*g*(**r**)] is the variance of the noise-free image.^{8}
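The synthetic-data pipeline described above can be sketched as follows. This is our reconstruction of unstated details: the Gaussian-PSF construction, the FFT-based blur, and the way the Gaussian noise variance is solved from the target SNR of Eq. (34) are all assumptions.

```python
import numpy as np

def gaussian_psf(shape, fwhm):
    """Normalized 2D Gaussian PSF with the given FWHM in pixels."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def noisy_image(obj, psf, snr_db, rng):
    """Blur obj, apply Poisson noise, then add Gaussian noise sized so the mean
    total noise variance <w(r)> hits the target SNR of Eq. (34)."""
    g = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(np.fft.ifftshift(psf))))
    g = np.clip(g, 0.0, None)
    total_var = np.var(g) / 10.0 ** (snr_db / 10.0)   # <w(r)> implied by Eq. (34)
    sigma_G = max(total_var - g.mean(), 0.0)          # Poisson variance ~ mean counts
    return rng.poisson(g) + rng.normal(0.0, np.sqrt(sigma_G), g.shape), sigma_G

rng = np.random.default_rng(2)
obj = np.zeros((64, 64))
obj[24:40, 24:40] = 1000.0           # bright square, maximum intensity 1000
img, sigma_G = noisy_image(obj, gaussian_psf((64, 64), 4.0), snr_db=20.0, rng=rng)
```

Here the Poisson variance is approximated by the mean blurred intensity, so the added Gaussian variance is simply the remainder needed to reach the requested SNR.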

Classical deconvolution test results using automatic hyperparameter estimation. A: Deconvolution series for image SNR of 0, 10, and 20 dB; top, convolved image with Poisson and Gaussian noise (*i*); bottom, corresponding deconvolution result (*ô* **...**

Significant denoising can be observed after deconvolution [Fig. 3(B)], with a contrast enhancement of about 50%. Average contrast improvement was computed from multiple (*N* ≥ 6) comparisons of average intensities over an area of 3 × 3 pixels within a region of interest (*I _{ROI}*) versus over an adjacent background region (*I _{background}*):

$$\mathrm{\Delta}\mathit{\text{Contrast}}\equiv {\langle \frac{{\langle {I}_{\mathit{\text{ROI}}}\rangle}_{\mathit{\text{area}}}-{\langle {I}_{\mathit{\text{background}}}\rangle}_{\mathit{\text{area}}}}{{\langle {I}_{\mathit{\text{background}}}\rangle}_{\mathit{\text{area}}}}\rangle}_{N\phantom{\rule{thinmathspace}{0ex}}\mathit{\text{samples}}}.$$

(35)

Using the definition

$$\mathrm{\Delta}\mathit{\text{SNR}}\equiv 10\phantom{\rule{thinmathspace}{0ex}}{\text{log}}_{10}\frac{\Vert i-o\Vert}{\Vert \widehat{o}-o\Vert},$$

(36)

we see signal-to-noise improvements of 6.2, 4.2, and 2.4 dB for the deconvolution results of SNR = 0, 10, and 20 dB images, respectively.
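The improvement metric of Eq. (36) is simple to compute; the following sketch (helper and variable names ours) applies it to synthetic residuals in which the reconstruction halves the residual norm of the raw image:

```python
import numpy as np

def delta_snr(i, o_hat, o_true):
    """Delta-SNR of Eq. (36): 10 log10(||i - o|| / ||o_hat - o||), in dB."""
    return 10.0 * np.log10(np.linalg.norm(i - o_true) / np.linalg.norm(o_hat - o_true))

rng = np.random.default_rng(3)
o_true = np.zeros((32, 32))
o_true[8:24, 8:24] = 100.0
i_noisy = o_true + rng.normal(0.0, 10.0, o_true.shape)   # stand-in for the raw image
o_hat = o_true + rng.normal(0.0, 5.0, o_true.shape)      # stand-in reconstruction
gain = delta_snr(i_noisy, o_hat, o_true)                 # roughly 3 dB
```

Halving the residual norm corresponds to a gain of 10 log10(2) ≈ 3 dB, which puts the 2.4 to 6.2 dB improvements quoted above in perspective.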

Figure 4 shows the deconvolution results for the SNR = 20 dB image of Fig. 3 over a grid of λ_{o} or θ_{r} values that are 20 times larger or smaller than those automatically estimated. Using the estimated hyperparameters (Fig. 4, center) gave the best visual results and the best balance between data fidelity and regularization. Using the estimated ${\widehat{\lambda}}_{o}$ and a value of ${\theta}_{\mathbf{r}}={\widehat{\theta}}_{\mathbf{r}}/20$ also gave acceptable results (though contrast was slightly compromised). In general, the deconvolution results were less sensitive to changes in θ_{r} than in λ_{o} over the range of values examined. Although not shown, we note that AIDA's hyperparameter estimation scheme works equally well for a range of maximum intensity scalings (i.e., images for which the maximum intensity of the noise-free image is 100 or 10,000). Deconvolution results were typically generated within 30–90 s per 256 × 256 pixel image on a 2.8 GHz Intel Xeon Linux machine.

In Figs. 5 and 6, we demonstrate the capabilities of the myopic deconvolution approach with a synthetic phantom composed of pointlike, line, and smooth extended elements. The object in Fig. 5(A) was convolved with a true PSF [Fig. 6(B), left] taken from a set of aberrated PSFs generated using pupil functions with random Zernike polynomial phase components of up to order 15 (Gaussian-distributed amplitudes with mean = 0 and standard deviation = 0.1). The resulting noise-free image was normalized, and Poisson and Gaussian noise was added as described above for a combined image SNR of 17 dB [Fig. 5(B)].

Myopic deconvolution results for a test phantom. A: The original phantom object, *o*. B: The convolved and noisy phantom image, *i* (SNR = 17 dB). C: Reconstructed object after classical deconvolution using the average of synthetically generated PSFs (see **...**

Classical deconvolution of this image using a fixed average PSF ($\overline{h}$) results in significant denoising and contrast enhancement (Δ*SNR* = 1.7 dB), although artifacts can be seen in the reconstructed object [Fig. 5(C), bottom]. Allowing the PSF to relax through myopic deconvolution [Fig. 5(D)] helps to remove these artifacts and further improves image contrast (Δ*SNR* = 2.9 dB). Object recovery is not perfect, however, as highlighted in the bottom panel of Fig. 5(D): (1) dotlike features are larger than in the true object, and two out of the three dots shown are not fully resolved; (2) some residual haze surrounds the two intersecting line elements, and the square-on-square feature is slightly compressed in the lateral direction. The diameter of the dots can be reduced, and the remaining haze around the line elements can be removed, by scaling the estimated ${\widehat{\lambda}}_{o}$ hyperparameter down by a factor of 2 [Fig. 5(E); Δ*SNR* = 4.2 dB]. With slightly lower regularization, however, the square-on-square feature becomes less smooth, highlighting the intrinsic balance between noise suppression and edge preservation. For comparison, classical deconvolution results using the true PSF and the scaled λ_{o} hyperparameter value are shown in Fig. 5(F) (Δ*SNR* = 3.8 dB). The two lower dot features (separated peak to peak by ~3 pixels) remain unresolvable, although this is consistent with the resolution limitations of the simulated PSFs (FWHM of 3–4 pixels). Stricter *a priori* constraints that assume pointlike objects may lead to improved separation of these features.^{29,61} Owing to imperfect noise suppression, the edges of the square-on-square feature are more jagged in the classical result than in the myopic deconvolution result, in which the PSF is allowed to relax. The relaxation of the PSF also yields better noise suppression and fewer noise speckles in the myopic result than in the classical result, leading to an improved Δ*SNR* for the myopic result. Artifactual lateral compression of the square features, however, is seen only in the myopic result and not in the classical one.

Photometric comparisons with the true phantom object are shown in Table 1 for each of the highlighted features in Fig. 5. With the exception of dotlike features, myopic deconvolution using automatic hyperparameter estimates can recover intensity values to within ~10%; this is only slightly improved by λ_{o} scaling. However, using the true PSF or scaling down λ_{o} can dramatically improve the photometric recovery over the dotlike features by 15%–30%.

Displayed in Fig. 6(B) are the true PSF (*h _{true}*), the average PSF used as the initial guess in myopic deconvolution ($\overline{h}$), and the myopically recovered PSF using AIDA ($\widehat{h}$).

Below, we demonstrate the effectiveness of AIDA in myopically deconvolving real imaging data for two astronomical targets, Io and Titan.

Io is the innermost Galilean satellite of Jupiter, with a diameter (~3600 km) similar to that of Earth's Moon, and is known to be volcanically active. To understand the origin of Io's volcanism, its time evolution, and its relationship to tidal heating, Io's volcanic activity needs to be monitored over a long time baseline. With the demise of the Galileo spacecraft, which was in orbit around the Jovian system until 2003, the monitoring of Io's volcanism now lies in the hands of ground-based observers.

When Io is closest to Earth, its angular size is ~1.2 arcsec, which is very close to the natural angular resolution (seeing) provided by ground-based telescopes. Because of its brightness (apparent visual magnitude, *m*_{υ} ~ 5), Io is ideally suited for observation by adaptive optics (AO) systems. Volcanism on Io has been monitored regularly in the near infrared (NIR) between 1 and 5 µm by one of us (F. Marchis) using the Keck 10m telescope AO system.^{62–64} The angular resolution provided by AO varies with the wavelength range of observations, from 55 milli-arcsec (mas) in the Kc band (centered at 2.2 µm) to 100 mas in the Ms band (4.7 µm), corresponding, respectively, to ~170 and ~305 km on the surface of the satellite. Such spatial resolution is comparable with that of the Galileo observations of Io in the same wavelength range.^{65}

Marchis and co-workers^{62,64} used MISTRAL to process the first high-resolution AO images of Io volcanic activity. We compared the performance of AIDA (with automatic hyperparameter estimation) with that of MISTRAL on a set of Io images acquired in 2003. The deconvolution results for three different broadband filter observations are shown in Fig. 7. Each basic-processed filtered image was a shift-and-added synthesis of five observations (<5 min each; background subtracted and flat fielded). The improvement in image contrast after deconvolution is obvious. In the Kc band, surface reflectance or albedo markings, including dark paterae and bright frost areas, are visible on the surface of Io. The general features of Io are in excellent agreement with those of the Galileo/Voyager maps shown in Fig. 8. The AIDA and MISTRAL deconvolution results are extremely similar, with a correlation coefficient of 99.4% when calculated over the area of the satellite.

Myopic deconvolution results for AO-corrected images of Io, a volcanically active moon of Jupiter. The PSF of the system was estimated using images of a star located near the target with the same visible magnitude. PSF variability [characterized by υ **...**

Reconstructed appearance of Io on January 26, 2003, at 07:38 UT observed from Earth. This image is based on Galileo solid state imaging and Voyager composite maps at a resolution of 20 km (courtesy of P. Descamps, Institute de Mécanique Céleste **...**

For a single 512 × 512 image, our AIDA implementation was 15–20 times faster than the original MISTRAL implementation (e.g., ~25 min versus ~7 h on a 1.8 GHz iMac G5 computer running Mac OS X 10.3). In practice, multiple MISTRAL deconvolutions must typically be performed to home in on hyperparameter values that yield the best results. This is often a time-consuming and laborious process: between 10 and 20 MISTRAL deconvolution runs are usually necessary to locate an optimal (θ_{r}, λ_{o}) pair. Thus, the practical gain in processing time of AIDA compared with MISTRAL is >100-fold.

The image of Io in the Ms band is radically different from that in the Kc band, being dominated by the localized thermal emission of the volcanoes. In the Lp band (intermediate wavelength, ~3.8 µm), large-scale albedo features on the surface are visible, as are the thermal emissions of the active centers. After deconvolution, several additional hot spots were revealed on this hemisphere of Io. Most of them can be found in the basic-processed image upon more careful scrutiny. The Lp band result generated with AIDA using automatic hyperparameters is noticeably different (a more diffuse bright spot and some slight ringing) compared with that of MISTRAL, although these differences can be reduced by manually adjusting the hyperparameters (data not shown).

The accurate recovery of image intensities from which the temperature and emission areas of these hot spots can be determined (e.g., assuming a blackbody emission law) is also of interest. Hot-spot flux was measured using aperture photometry on the deconvolved image, assuming that most of the flux is gathered in an area slightly larger than the angular resolution on the image.^{66} This is a good approximation for hot spots with a peak contrast lower than 20%, since the intensity of the first Airy ring is negligible compared with the variation of brightness on the surface. For the extremely bright hot spot (outburst) on the Ms band image, a prominent Airy ring remains after deconvolution. This residual artifact may be explained by the fact that the Keck PSF is hexagonal in shape^{67} and that its orientation changes with the position of the telescope; optimizing the rotation of the sampled PSFs (and thus the mean PSF to which the PSF estimate is constrained) would likely minimize this artifact. Since this problem would not significantly affect the scientific analysis of the image, we have not pursued this matter further. The hot spot can be seen on the basic-processed image with a very good SNR, and therefore its integrated intensity can be easily measured after comparison with the PSF. Overall, the deconvolution of Io images with AIDA provides excellent reconstructions, which can be used to analyze surface changes on Io and to detect the faintest active centers and quantify their intensities.

Titan, Saturn’s largest moon, was largely a mystery until very recently. Observations collected by the Voyager spacecraft in 1981^{68} showed that Titan is obscured by a dense and opaque atmosphere consisting mainly of nitrogen. The surface of this 0.9″ angular-sized satellite, however, can be probed in the NIR through methane windows using such high-resolution techniques as speckle imaging^{69} and AO.^{70} Recent AO observations of its atmosphere revealed the presence of clouds and a complex structure with seasonal variability. The NASA-ESA Cassini–Huygens probe in orbit within the Saturnian system and an intensive campaign of observations using AO systems available on the Keck 10m telescope (Mauna Kea, Hawaii) and the ESO-8 m Very Large Telescope (Cerro Paranal, Chile) are in place to help understand this complex satellite.

In Fig. 9(A), we show a ground-based observation of Titan taken on January 15, 2005, one day after the Huygens probe landed on its surface. Titan was observed with the Keck AO using the NIRC-2 camera with a pixel scale of 9.94 mas through a narrowband He filter (2.06 ± 0.03 µm). At this wavelength, the atmosphere is nearly transparent, and most of the structures visible on the image are larger than 330 km (corresponding to 55 mas). A remarkable gain in image contrast is obtained after AIDA deconvolution, as shown in Fig. 9(B). This imaged hemisphere contains the landing site of the Huygens probe and was regularly observed by the Cassini spacecraft [Figs. 9(C) and 9(D)]. The similarity with the Cassini Imaging Science Subsystem images (allowing for a slight rotation of Titan) is striking. The smallest albedo structures detected after deconvolution have clear equivalents in the higher-resolution image^{71} (see arrow markings). This comparison validates the efficiency of our algorithm and demonstrates the absence of significant artifacts on the deconvolved image. A full scientific analysis of this and numerous other Titan observations and deconvolution results is presented elsewhere.^{71}

When multiple AO images of a common object are acquired, they are often simply combined into a single shift-and-added image, which is then deconvolved. This practice has been demonstrated by others to be suboptimal; a more effective data reduction strategy would be to deconvolve the set of images in a global fashion, linking common variables while maintaining the distinctiveness of each observation. Extending the MISTRAL approach to simultaneously deconvolve multiple image frames is another feature of AIDA. Below, we present deconvolution results for two different multi-frame data sets. The first consists of AO images of Uranus’s atmosphere and is used to demonstrate AIDA’s multi-PSF deconvolution capabilities, in which there is a common object but a variable PSF. The second data set consists of time-lapsed fluorescence microscopy images of yeast microtubule dynamics and is used to demonstrate AIDA’s multi-object mode, in which there is a common PSF but different objects between frames.

Since the Voyager spacecraft encounter of the planet Uranus in 1986, interest in this planet has been revitalized with the discovery that its atmosphere is considerably active.^{72} High-angular-resolution imaging, however, is necessary to detect cloud motions,^{73} faint rings, and small satellite systems.^{74,75} The extended disk (diameter ~3.6 arcsec) of the planet (integrated apparent visual magnitude, *m*_{υ} ~ 6) is bright enough to be used as a reference for wavefront sensor analysis on most AO systems. However, since the position of the centroid on the wavefront is not well determined in the case of a quad-cell aperture wavefront sensor for such an extended object, the atmospheric correction is degraded in the final image, and artifacts may appear in several frames.^{75} We tested AIDA on observations of Uranus taken on October 3, 2003, with the Keck AO system and its NIRC-2 camera, using a broadband filter centered at 1.6 µm (H band). Five 30 s frames recorded in less than 8 min were processed using standard near-infrared data reduction techniques (flat-field, sky subtraction, and bad pixel removal). To estimate the PSF for myopic deconvolution, we imaged Puck, a bright satellite of Uranus located 2.4″ away from the center of the planet and whose motion was negligible during the exposure time. Given the large imaged size of Uranus and size of the image frames (1024 × 1024 pixels), using MISTRAL for deconvolution would not have been practical due to the long processing time needed (~23 h/deconvolution on a Sun Ultra 10 computer), especially since we would have needed to run multiple deconvolutions to determine a good choice of regularization parameters. Deconvolution using AIDA with automatic hyperparameter estimation was significantly faster (45 min for mono-frame deconvolution and 1.5 h for multi-PSF deconvolution on a 2.8 GHz Intel Xeon Linux machine) with the possibility of analyzing all AO data frames simultaneously.

Deconvolution results in significant image sharpening (Fig. 10), with a gain in contrast of ~2–3 on the cloud features. A layered structure of the northern haze and some faint clouds at ~40° latitude are revealed, and the structure of the large clouds in the southern hemisphere is clearer after deconvolution. A ghost outer ring artifact present in previous observations using the same Keck AO system^{75} is visible in several of the individual AO-corrected image frames [Fig. 10(C)]. This artifact remains in the mono-frame deconvolution of the shift-and-added combined image but is half as intense in the multi-frame deconvolution result [cf. Figs. 10(D) and 10(E)]. The glare of Uranus (e.g., see the area near the innermost ringlet) is also more strongly suppressed in the multi-frame deconvolution result than in the mono-frame result. Overall, we find that simultaneous deconvolution of multiple-frame data is better able to restore low-SNR features and minimize artifacts than deconvolution of a single shift-and-added representation of the same data.

Microtubules are hollow cylindrical polymers that radiate from near the nucleus of a cell and serve as tracks upon which cellular components are transported. Roughly 25 nm in diameter, these microtubules are formed from the stochastic polymerization and depolymerization of α- and β-tubulin proteins. The regulation of microtubule dynamics has been a topic of investigation for many years in cell biology, aided greatly by the direct observation of microtubules using time-lapsed video fluorescence microscopy.^{76}

We used AIDA in multi-object deconvolution mode to process time-series images of microtubule dynamics in the fission yeast, *Schizosaccharomyces pombe*. Using the OMX wide-field fluorescence microscope system developed recently in our lab at the University of California, San Francisco (UCSF), a yeast cell whose microtubules were fluorescently labeled with green fluorescent protein fused to α-tubulin was imaged every second. Each image was formed by physically sweeping the microscope focus (by linearly moving the sample stage) through the entire *z* depth of the cell (~4 µm in 50 ms) every second. Using estimates of the PSF based on a set of three images of a 0.1 µm fluorescent bead acquired under similar conditions, these time-series data were myopically deconvolved assuming a common (time-invariant) PSF for the whole data set and assuming each image was simply a snapshot of a distinct object.

In Fig. 11(A), we show the results of standard myopic deconvolution and multi-object deconvolution with automatic hyperparameter estimates for a single representative time slice. In Fig. 11(B), the corresponding kymograph plots—1D maximum intensity projections of each image as a function of time—are shown for these data. These kymograph plots provide a better perspective on the time-dependent features of microtubule growth and shrinkage. The mono-frame deconvolution results are significantly denoised with improved microtubule contrast. The multi-object deconvolution results have even better contrast enhancement, exhibiting thinner microtubule fibers and a more textured background within the cell cytoplasm. It is unclear how much of this texturing may be artifactual. However, the fact that each image slice was deconvolved independently with respect to the time axis and that a number of cell background features are temporally persistent in the kymograph suggest that some of these grainy features are genuine.
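The kymograph construction described above, a 1D maximum-intensity projection of each frame stacked along time, can be sketched as follows (our toy example; the axis convention is an assumption):

```python
import numpy as np

def kymograph(frames):
    """1D maximum-intensity projection of each (H, W) frame over the H axis;
    rows of the result index time, columns index position along W."""
    return frames.max(axis=1)

# Toy movie: a bright fiber along W whose tip advances one column per frame
T, H, W = 5, 8, 16
movie = np.zeros((T, H, W))
for t in range(T):
    movie[t, 4, :6 + t] = 100.0      # fiber occupies columns 0 .. 5+t at row 4
kymo = kymograph(movie)              # shape (T, W); the tip traces a diagonal edge
```

In such a plot, steady growth or shrinkage of a microtubule appears as a sloped edge, which is what makes temporally persistent features easy to distinguish from frame-to-frame noise.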

One main advance of AIDA is the extension of the MISTRAL method to deconvolve 3D data commonly encountered in biological imaging. Unlike the 2D PSFs encountered in low-numerical-aperture astronomical imaging, the PSFs in optical microscopy are more diffuse, with significant axial (*z*-dimensional) blurring on the order of three times the lateral blur. Deconvolution is expected to dramatically sharpen image data subject to such out-of-focus blur. Recently, Chenegros *et al.*^{77} demonstrated the effectiveness of MISTRAL's edge-preserving regularization term in deconvolving synthetic 3D retinal images. Here, we show myopic deconvolution results for two 3D data sets, one synthesized from magnetic-resonance imaging (MRI) data of a frog and another from real wide-field fluorescence microscopy data of chromosomes within cells undergoing cell division.

We constructed synthetic 3D frog images (128 × 256 × 256 pixels) by convolving an MRI volume data set from The Whole Frog Project (Lawrence Berkeley National Laboratory)^{78} with a PSF derived from microscopic imaging of a subresolution (0.1 µm) fluorescent bead; Poisson and Gaussian noise was added to the convolved image as described earlier. The PSF used had a FWHM in the lateral direction of ~3 pixels and an effective resolution loss in the *z* direction (ζ) of ~3 [see Eq. (21)]. Using an ensemble of similarly acquired experimental PSFs, these frog images were myopically deconvolved using automatic hyperparameter estimates (~6 h on a 2.8 GHz Intel Xeon Linux machine).
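The synthetic-image construction just described (blur with a measured PSF, then add Poisson and Gaussian noise) follows a standard recipe that can be sketched as follows; the array sizes and noise level are illustrative, not the values used for the frog data:

```python
import numpy as np

def make_synthetic_image(obj, psf, gaussian_sigma=2.0, seed=0):
    """Blur a 3D object with a PSF and corrupt it with mixed noise.

    The convolution is done in Fourier space (the PSF is assumed to be
    centered, so it is shifted to put its peak at the array origin);
    Poisson noise models photon counting, additive Gaussian noise
    models detector readout.
    """
    rng = np.random.default_rng(seed)
    otf = np.fft.fftn(np.fft.ifftshift(psf))
    blurred = np.real(np.fft.ifftn(np.fft.fftn(obj) * otf))
    blurred = np.clip(blurred, 0, None)                  # counts are nonnegative
    noisy = rng.poisson(blurred).astype(float)           # photonic (Poisson) noise
    noisy += rng.normal(0.0, gaussian_sigma, obj.shape)  # readout (Gaussian) noise
    return noisy

# Toy 3D example: a bright voxel blurred by a normalized Gaussian-like PSF.
obj = np.zeros((16, 16, 16))
obj[8, 8, 8] = 1000.0
z, y, x = np.indices(obj.shape) - 8
psf = np.exp(-(x**2 + y**2 + z**2) / (2 * 1.5**2))
psf /= psf.sum()                                         # conserve total flux
img = make_synthetic_image(obj, psf)
```

Normalizing the PSF to unit sum keeps the blurred image photometrically consistent with the object, so the Poisson statistics remain meaningful.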

Additive 2D volume projections for the raw and deconvolved 3D image stacks for image SNRs of 0 and 20 dB are shown in Figs. 12(A) (*en face*) and 12(B) (side view). The denoising and object reconstructions for these data are striking. The quality of the deconvolution results conveyed by these 2D projections is comparable to that seen from a comparison of individual 2D slices. Representative slices through the 3D volume stack of the original object, 20 dB SNR image, and deconvolution result are shown in Fig. 13; also shown are intensity line profiles (denoted by an asterisk) through the eye region of the 2D frog slices. Deconvolution with AIDA leads to substantial photometric restoration of the original frog data, with a signal-to-noise improvement (Δ*SNR*) of 5.7 and 5.1 dB for image SNRs of 0 and 20 dB, respectively.
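A signal-to-noise improvement of this kind is computed against the known truth object; the sketch below uses a conventional decibel definition and may differ in normalization from the exact ΔSNR definition used in the paper:

```python
import numpy as np

def snr_db(estimate, truth):
    """SNR of an estimate relative to the true object, in decibels:
    10*log10(signal power / error power)."""
    err = estimate - truth
    return 10.0 * np.log10(np.sum(truth**2) / np.sum(err**2))

def delta_snr(deconvolved, raw, truth):
    """Improvement (dB) of the deconvolved image over the raw image."""
    return snr_db(deconvolved, truth) - snr_db(raw, truth)

# Toy check: halving the error amplitude buys 10*log10(4) ~ 6.02 dB.
rng = np.random.default_rng(1)
truth = rng.uniform(1.0, 2.0, size=1000)
raw = truth + 0.2 * rng.standard_normal(1000)
better = truth + 0.5 * (raw - truth)   # same error pattern, half amplitude
gain = delta_snr(better, raw, truth)
print(round(gain, 2))  # 6.02
```

By this measure, the reported gains of 5.7 and 5.1 dB correspond to roughly a halving of the residual error amplitude relative to the true object.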

2D volume projections for myopically deconvolved 3D frog image stacks with image SNRs of 0 and 20 dB. A: *xy* projection; B: *yz* projection. Each image is shown using a full intensity scale (from minimum value to maximum value). Automatic hyperparameter **...**

Nearly 50 years after the atomic structure of DNA was elucidated, the higher-order structural organization of DNA within the chromosomes of cells remains poorly understood. With recent advances in high-resolution microscopic imaging and fluorescent labeling technology, however, discerning the mesoscopic arrangement of DNA within living cells is becoming a reality. A primary interest of ours is to better understand the detailed structural changes of chromosomes as a cell divides in a process called mitosis. During mitosis, a cell’s chromosomes are unraveled, condensed, and separated; defects in chromosome structure during any of these mechanical steps could have devastating consequences for the fidelity of genetic transmission to daughter cells.^{79}

*Drosophila melanogaster* (fruit fly) embryos offer a unique opportunity to study chromosome structural changes during mitosis. Cells in early embryos (within 2–3 h) undergo multiple rounds of cell division in an ordered and highly reproducible manner. Using the OMX microscope system mentioned earlier (Subsection 5.B), a 3D image stack (32 × 512 × 512 pixels) was acquired of a cell-cycle-10 *D. melanogaster* embryo fixed in 10% formaldehyde and mounted in glycerol. Cells in this embryo were stained with the DNA-specific dye, DAPI, and captured undergoing anaphase, the stage of mitosis in which chromosomes separate. This image stack was deconvolved myopically using a PSF derived from an image of a 170 nm fluorescent bead under similar imaging settings. Image pixel spacing was 80 nm in *xy* and 150 nm in *z*, for a total image stack thickness of 4.8 µm. ζ was set to 3.2 based on the extent of a measured OTF in the lateral versus axial directions.
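Setting ζ from "the extent of a measured OTF in the lateral versus axial directions" amounts to comparing the OTF support along the two axes. A minimal sketch of that measurement (the threshold value and helper name are assumptions for illustration):

```python
import numpy as np

def otf_extent_ratio(psf, threshold=1e-3):
    """Ratio of lateral to axial OTF support: a proxy for how much
    more blurred the image is axially than laterally."""
    otf = np.abs(np.fft.fftshift(np.fft.fftn(np.fft.ifftshift(psf))))
    otf /= otf.max()
    support = otf > threshold
    nz, ny, nx = psf.shape
    # Count support bins along lines through the OTF center.
    axial = support[:, ny // 2, nx // 2].sum()
    lateral = support[nz // 2, ny // 2, :].sum()
    return lateral / axial

# Gaussian PSF with 3x more axial than lateral blur -> ratio near 3.
z, y, x = np.indices((64, 64, 64)) - 32
psf = np.exp(-(x**2 + y**2) / (2 * 1.5**2) - z**2 / (2 * 4.5**2))
ratio = otf_extent_ratio(psf)
```

Because the OTF is the Fourier transform of the PSF, a PSF that is three times wider axially has an OTF support three times narrower along that axis, so the ratio recovers the resolution-loss factor directly.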

Shown in Fig. 14 are 2D maximum-intensity projections of representative portions of the original 3D image stack and of the result after myopic deconvolution. Although the original data shown are of especially good quality, so that most chromosome arms can be distinguished in Fig. 14(A), chromosome boundaries are significantly more sharply demarcated in the deconvolution result. The benefits of deconvolution are even more pronounced in Fig. 14(B), in which there is greater blurring in the axial versus lateral directions: finer structures and corrugated banding patterns of the chromosome arms become noticeable; the arrows highlight a few representative areas showing improved contrast in fine image features. Some residual hourglass PSF blur remains after deconvolution, however, and appears to become more prominent with increasing *z* depth [see, e.g., lower left of the deconvolution result, Fig. 14(B)]. This blur may be attributed to greater index-of-refraction aberrations between the microscope objective lens and the sample as one focuses deeper into the embryo. The true PSF in this case is thus likely to be depth dependent, although space-invariant PSFs are assumed in the current AIDA deconvolution framework.

Chromosomes of mitotically dividing cells (cell cycle 10, anaphase) within a *D. melanogaster* (fruit fly) embryo. Chromosomes were stained with the fluorescent dye, DAPI, and embryos were fixed in 10% formaldehyde fixation buffer A, mounted in glycerol, **...**

To achieve the nonblurry, visually balanced deconvolution result of Fig. 14, we found it necessary to scale the automatic hyperparameter estimate, λ_{o}, down by a factor of 10. Inaccurate hyperparameter estimation is likely due to at least one of four possible causes. First, since only a single PSF estimate was available for these data [in which case the OTF constraint is based simply on the photonic-noise variance (see Subsection 3.A)], the calculated OTF statistics may not be sufficient to guide the myopic deconvolution toward a more correct OTF. A lower λ_{o} likely compensates for imprecise OTF statistics. Second, as alluded to above, depth-dependent variations of the true PSF are not accounted for in our imaging model and may lead to compromised object reconstructions. Third, there may be noise sources (e.g., out-of-focus, scattered background light) that are not accounted for by the assumed noise model; the effectiveness of the hyperparameter estimation scheme is predicated upon good estimates for the Gaussian and Poisson noise statistics (as discussed in Subsection 3.D). Fourth, out-of-focus contributions to the image stack from areas of the embryo *outside* the image stack are not accounted for in the current imaging framework. The effects of these factors on deconvolution outcome and strategies to compensate for them are currently being explored by our group.

7. SUMMARY AND FUTURE DIRECTIONS

We have reimplemented and extended the MISTRAL approach^{12} to myopically deconvolve, as far as we know for the first time, multiple-image-frame data and 3D image stacks. Unlike MISTRAL, which is implemented using the commercial Interactive Data Language (Research Systems, Inc., Boulder, Colorado) and has proprietary source code, our adaptive image deconvolution algorithm, AIDA, was implemented using freely available Numerical Python and is intended for open-source development. AIDA runs at least 15 times faster than the original MISTRAL implementation. Importantly, AIDA incorporates a simple yet robust scheme to estimate regularization hyperparameters, which greatly simplifies the tedious and delicate though necessary task of balancing maximum-likelihood estimation with object regularization and noise suppression. Object reconstructions generated with AIDA are comparable with those of MISTRAL, with high photometric precision and good edge preservation, and can be obtained without the need to sample (typically 10–20) different hyperparameter settings in order to optimize the degree of regularization. This results in a practical efficiency gain of AIDA over MISTRAL of greater than 100-fold.

Multiple image observations are commonly acquired in adaptive optics imaging, although they are often combined into a single averaged image before deconvolution. Deconvolving these images simultaneously, however, is a more effective data reduction strategy.^{11,31–33} The multi-frame deconvolution results for the Uranus AO observations show that leveraging invariable aspects of the data while retaining the unique variations between distinct observations leads to object reconstructions with crisper details than the corresponding mono-frame deconvolution result.

AIDA’s multi-frame deconvolution capabilities are currently limited to data with a single object and multiple variable PSFs (*M_{o}* = 1; *M_{h}* > 1) or multiple objects and a single common PSF (*M_{o}* > 1; *M_{h}* = 1).

AIDA is equally effective in deconvolving 3D image data and 2D data, and deconvolution times scale linearly with the size of the image data. In the current AIDA framework, each image pixel element is treated as a variable to be optimized, leading to substantial computational demands as image size increases. Work in our group is in progress to recast the optimization of the PSF in terms of the more computationally compact pupil function that characterizes the optical wavefront at the exit pupil of an imaging system arising from a point source.^{80,81} In addition to greater computational efficiency for larger image data sets, myopic deconvolution using the pupil function could provide explicit insight into the inherent or dynamic aberration modes of an optical system (e.g., by Zernike mode decomposition). The ease with which the pupil function can be modified to account for aberrations also makes it particularly amenable to use in cases where the PSF is space variant^{80,81} (e.g., with depth-dependent index-of-refraction variations in microscopy or anisoplanatic imaging in astronomy). Moreover, use of the pupil function could help bridge the synthesis of wavefront-sensing data from AO and imaging data in the deconvolution process.^{52}
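The pupil-function parameterization can be illustrated with a toy 2D example: the incoherent PSF is the squared magnitude of the Fourier transform of a complex pupil, so aberrations are encoded compactly as a phase map over the aperture. The quadratic phase below is a crude stand-in for a proper Zernike defocus term, and all names and parameter values are illustrative:

```python
import numpy as np

def psf_from_pupil(n=128, aperture_radius=0.25, defocus=0.0):
    """Incoherent PSF from a circular pupil with an optional
    quadratic (defocus-like) phase aberration."""
    fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    rho2 = fx**2 + fy**2
    aperture = (rho2 <= aperture_radius**2).astype(float)
    phase = defocus * rho2 / aperture_radius**2   # crude defocus term
    pupil = aperture * np.exp(1j * phase)
    psf = np.abs(np.fft.ifft2(pupil))**2
    return psf / psf.sum()                        # normalize to unit flux

ideal = psf_from_pupil()
aberrated = psf_from_pupil(defocus=8.0)
# Defocus spreads energy away from the peak (a lower Strehl-like ratio):
peak_ratio = aberrated.max() / ideal.max()
```

The appeal for deconvolution is that the unknowns shrink from every PSF voxel to a handful of phase coefficients, and depth- or field-dependent aberrations can be injected by simply varying the phase map.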

At least four issues merit further development and exploration. First, the reasons for the success of our automatic hyperparameter estimation scheme. While this semiempirical scheme is effective in deconvolving a broad range of image data, the theoretical foundations for its robustness deserve future study. The assumption of quasi-independent pixel/voxel prior distributions and the assumption that the balance of maximum-likelihood estimation and object regularization is best achieved by normalizing these prior distributions with respect to one another should be explored in relation to the partition functions of Eq. (6) and other, more rigorous marginal-likelihood approaches. Second, the development of a multi-object deconvolution mode more specifically tailored for time-series data. In deconvolving the microtubule dynamics data in Subsection 5.B, the temporal independence of each object in the time series was assumed. While this was helpful in highlighting common, persistent features between time frames, incorporating a cost-function term or procedure within the deconvolution algorithm to maximize the temporal correlation between adjacent time slices may help reinforce object features that are self-similar and suppress temporally uncorrelated noise artifacts. Third, as image data sets become larger and/or deviations from the assumed noise model become more pronounced, the time to optimization convergence may become seriously compromised. Convergence might be improved by toggling between a weighted least-squares (L2-norm) form for the data fidelity term [Eq. (8)] and a robust L1-norm form that is computationally simpler and less sensitive to noise-model mismatch and data outliers.^{82–84} Deconvolution efficiency might also be improved by a reparametrization of the object, for example, using wavelets,^{17} and by incorporating aspects of multi-resolution/hierarchical scaling into the deconvolution algorithm.^{17,85–87} Finally, it would be interesting to see how the myopic capabilities and edge-preserving noise-suppression advantages of AIDA deconvolution could improve the processing of data from such superresolution imaging modalities as multi-frame mosaicing^{84,88} and structured illumination microscopy.^{89,90}
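The robustness advantage of the L1 norm over the L2 norm is easy to see numerically: a single gross outlier in the residuals dominates a least-squares cost but contributes only one modest term to an L1 cost. A small sketch (a generic comparison, not AIDA's actual fidelity term):

```python
import numpy as np

def l2_cost(residuals):
    """Least-squares (L2-norm squared) data fidelity cost."""
    return np.sum(residuals**2)

def l1_cost(residuals):
    """Robust L1-norm data fidelity cost."""
    return np.sum(np.abs(residuals))

rng = np.random.default_rng(0)
residuals = 0.1 * rng.standard_normal(1000)   # well-behaved noise
spiked = residuals.copy()
spiked[0] = 50.0                              # one gross outlier (e.g., a hot pixel)

# Relative cost inflation caused by the single outlier:
l2_inflation = l2_cost(spiked) / l2_cost(residuals)   # huge: the outlier dominates
l1_inflation = l1_cost(spiked) / l1_cost(residuals)   # modest: one term among many
```

Because the outlier enters the L2 cost quadratically, the optimizer is pulled to fit it; under the L1 cost it contributes only linearly, which is the basis of the robustness cited in refs. 82–84.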

ACKNOWLEDGMENTS

We thank members of the Sedat Laboratory, Z. Kam, and M. Gustafsson for discussions and comments on the manuscript. We thank Eugene Ingerman [University of California, Davis, and Lawrence Livermore National Laboratory (LLNL)] and Stephen Lane (LLNL) for making the conjugate gradient code available to us. We thank the authors of MISTRAL for allowing F. Marchis privileged use of their program and a reviewer for bringing to our attention the work of Chenegros *et al.*^{77} The astronomical data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and NASA. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. Work by E. F. Y. Hom, S. Haase, and J. W. Sedat was supported by NIH grant GM25101-26. F. Marchis was supported in part by NASA grant PAST04-0000-0025 and by the National Science Foundation Science and Technology Center for Adaptive Optics, managed by the University of California at Santa Cruz under cooperative agreement AST-9876783. T. K. Lee was supported by Howard Hughes Medical Institute summer fellowships. D. A. Agard was supported by NIH grant GM31627 and is a Howard Hughes Medical Institute investigator.

*OCIS codes:* 100.1830, 100.3020, 100.3190, 010.1080, 180.0180, 180.6900.

AIDA source code, news, and updates will be available at http://msg.ucsf.edu/AIDA.

Erik F. Y. Hom, Graduate Group in Biophysics and Department of Biochemistry and Biophysics, University of California, San Francisco, Genentech Hall, 600 16th Street, San Francisco, California 94143-2240, USA.

Franck Marchis, Department of Astronomy, University of California, Berkeley, 601 Campbell Hall, Berkeley, California 94720, USA.

Timothy K. Lee, Department of Biochemistry and Biophysics, University of California, San Francisco, Genentech Hall, 600 16th Street, San Francisco, California 94143-2240, USA.

Sebastian Haase, Department of Biochemistry and Biophysics, University of California, San Francisco, Genentech Hall, 600 16th Street, San Francisco, California 94143-2240, USA.

David A. Agard, Howard Hughes Medical Institute and Department of Biochemistry and Biophysics, University of California, San Francisco, Genentech Hall, 600 16th Street, San Francisco, California 94143-2240, USA.

John W. Sedat, Department of Biochemistry and Biophysics, University of California, San Francisco, Genentech Hall, 600 16th Street, San Francisco, California 94143-2240, USA.

REFERENCES

1. Roddier F. Adaptive Optics in Astronomy. Cambridge U. Press; 2004.

2. Christou JC, Bonnacini D, Ageorges N, Marchis F. Myopic deconvolution of adaptive optics images. ESO Messenger. 1999;97:14–22.

3. Véran J-P, Rigaut F, Henri M, Rouan D. Estimation of the adaptive optics long-exposure point-spread function using control loop data. J. Opt. Soc. Am. A. 1997;14:3057–3069.

4. Gibson SF, Lanni F. Experimental test of an analytical model of aberration in an oil-immersion objective lens used in three-dimensional light microscopy. J. Opt. Soc. Am. A. 1991;8:1601–1613. [PubMed]

5. Le Mignant D, Marchis F, Bonaccini D, Prado P, Barrios E, Tighe R, Merino V, Sanchez A. The 3.60m Telescope Team, and ESO Adaptive Optics Team. The ESO ADONIS system: a 3 years experience in observing methods. In: Bonaccini D, editor. ESO Conference and Workshop Proceedings; European Southern Observatory; 1999. p. 287.

6. Hiraoka Y, Sedat JW, Agard DA. The use of a charge-coupled device for quantitative optical microscopy of biological structures. Science. 1987;238:36–41. [PubMed]

7. Holmes TJ. Blind deconvolution of quantum-limited incoherent imagery: maximum-likelihood approach. J. Opt. Soc. Am. A. 1992;9:1052–1061. [PubMed]

8. Bertero M, Boccacci P. Introduction to Inverse Problems in Imaging. Institute of Physics; 1998.

9. Ayers GR, Dainty JC. Iterative blind deconvolution method and its applications. Opt. Lett. 1988;13:547–549. [PubMed]

10. Stockham TG, Cannon TM, Ingebretsen RB. Blind deconvolution through digital signal processing. Proc. IEEE. 1975;63:678–692.

11. Schulz TJ. Multiframe blind deconvolution of astronomical images. J. Opt. Soc. Am. A. 1993;10:1064–1073.

12. Mugnier LM, Fusco T, Conan J-M. MISTRAL: a myopic edge-preserving image restoration method, with application to astronomical adaptive-optics-corrected long-exposure images. J. Opt. Soc. Am. A. 2004;21:1841–1854. [PubMed]

13. Howell SB. Handbook of CCD Astronomy. Cambridge U. Press; 2000.

14. Leung W-YV, Lane RG. Blind deconvolution of images blurred by atmospheric speckle. Proc. SPIE. 2000;4123:73–83.

15. Demoment G. Image reconstruction and restoration: overview of common estimation structures and problems. IEEE Trans. Acoust., Speech, Signal Process. 1989;37:2024–2036.

16. Molina R, Nunez J, Cortijo FJ, Mateos J. Image restoration in astronomy: a Bayesian perspective. IEEE Signal Process. Mag. 2001;18:11–29.

17. Starck J-L, Murtagh F. Astronomical Image and Data Analysis. Springer; 2002.

18. Besag J. On the statistical analysis of dirty pictures. J. R. Stat. Soc. Ser. B (Methodol.) 1986;48:259–302.

19. Geman S, Geman D. Stochastic relaxation, Gibbs distributions, and Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell. 1984;6:721–741. [PubMed]

20. Charbonnier P, Blanc-Feraud L, Aubert G, Barlaud M. Deterministic edge-preserving regularization in computed imaging. IEEE Trans. Image Process. 1997;6:298–311. [PubMed]

21. Bouman C, Sauer K. A generalized Gaussian image model for edge-preserving MAP estimation. IEEE Trans. Image Process. 1993;2:296–310. [PubMed]

22. Teboul S, Blanc-Feraud L, Aubert G, Barlaud M. Variational approach for edge-preserving regularization using coupled PDEs. IEEE Trans. Image Process. 1998;7:387–397. [PubMed]

23. Brette S, Idier J. Optimized single site update algorithms for image deblurring. Proceedings of IEEE International Conference on Image Processing; IEEE; 1996. pp. 65–68.

24. Saquib SS, Bouman CA, Sauer K. ML parameter estimation for Markov random fields with applications to Bayesian tomography. IEEE Trans. Image Process. 1998;7:1029–1044. [PubMed]

25. Jalobeanu A, Blanc-Féraud L, Zerubia J. Rapport de Recherche de l’INRIA–Sophia Antipolis, Equipe ARIANA RR-3956. INRIA; 2000. Adaptive parameter estimation for satellite image deconvolution.

26. Jalobeanu A, Blanc-Féraud L, Zerubia J. Hyperparameter estimation for satellite image restoration using a MCMC maximum-likelihood method. Pattern Recogn. 2002;35:341–352.

27. Jalobeanu A, Blanc-Féraud L, Zerubia J. An adaptive Gaussian model for satellite image deblurring. IEEE Trans. Image Process. 2004;13:613–621. [PubMed]

28. Conan J-M, Mugnier LM, Fusco T, Michau V, Rousset G. Myopic deconvolution of adaptive optics images by use of object and point-spread function power spectra. Appl. Opt. 1998;37:4614–4622. [PubMed]

29. Fusco T, Véran J-P, Conan J-M, Mugnier LM. Myopic deconvolution method for adaptive optics images of stellar fields. Astron. Astrophys., Suppl. Ser. 1999;134:193–200.

30. Jefferies SM, Christou JC. Restoration of astronomical images by iterative blind deconvolution. Astrophys. J. 1993;415:862–874.

31. Christou JC, Roorda A, Williams DR. Deconvolution of adaptive optics retinal images. J. Opt. Soc. Am. A. 2004;21:1393–1401. [PubMed]

32. Straume M. Sequential versus simultaneous analysis of data: differences in reliability of derived quantitative conclusions. Methods Enzymol. 1994;240:89–121. [PubMed]

33. Christou JC, Hege EK, Jefferies SM. Speckle deconvolution imaging using an iterative algorithm. Proc. SPIE. 1995;2566:134–143.

34. Ingleby HR, McGaughey DR. Experimental results of parallel multiframe blind deconvolution using wavelength diversity. Proc. SPIE. 2004;5578:8–14.

35. Christou JC, Hege EK, Jefferies SM, Keller CU. Application of multiframe iterative blind deconvolution for diverse astronomical imaging. Proc. SPIE. 1994;2200:433–444.

36. Schulz TJ, Stribling BE, Miller JJ. Multiframe blind deconvolution with real data: imagery of the Hubble Space Telescope. Opt. Express. 1997;1:355–362. [PubMed]

37. Greenfield P, Miller T, Rick W, Hsu JC, Barrett P, Kupper J, Verveer PJ. Numarray: an open source project. http://www.stsci.edu/resources/software_hardware/numarray.

38. Beazley DM. Automated scientific software scripting with SWIG. FGCS, Future Gener. Comput. Syst. 2003;19:599–609.

39. Beazley DM. the SWIG Team. A simplified wrapper and interface generator. http://www.swig.org.

40. Frigo M, Johnson SG. The design and implementation of fftw3. Proc. IEEE. 2005;93:216–231.

41. Bazaraa MS, Sherali HD, Shetty CM. Nonlinear Programming: Theory and Algorithms. 2nd ed. Wiley; 1993.

42. Goodman DM, Johansson EM, Lawrence TW. On applying the conjugate gradient algorithm to image processing problems, Chap. 11. In: Rao CR, editor. Multivariate Analysis: Future Directions. North-Holland; 1993. Vol. 5 of North-Holland Series in Statistics and Probability.

43. Somoza JR, Szoke H, Goodman DM, Beran P, Truckses D, Kim SH, Szoke A. Holographic methods in X-ray crystallography. 4. A fast algorithm and its application to macromolecular crystallography. Acta Crystallogr., Sect. A: Found. Crystallogr. 1995;51:691–708. [PubMed]

44. Szoke H, Szoke A, Somoza J, Maia F. The EDEN Holographic Method. http://www.edencrystallography.org/

45. Press WH, Flannery BP, Teukolsky SA, Vetterling WT. Numerical Recipes in C: The Art of Scientific Computing. Cambridge U. Press; 1992.

46. Johnston RA, Connolly TJ, Lane RG. An improved method for deconvolving a positive image. Opt. Commun. 2000;181:267–278.

47. Pratt WK. Digital Image Processing: PIKS Inside. 3rd ed. Wiley-Interscience; 2001.

48. Frei W, Chen C-C. Fast boundary detection: a generalization and a new algorithm. IEEE Trans. Comput. 1977;C26:988–998.

49. Jonkman JEN, Stelzer EHK. Resolution and contrast in confocal and two-photon microscopy, Chap. 5. In: Diaspro A, editor. Confocal and Two-Photon Microscopy: Foundations, Applications, and Advances. Wiley-Liss; 2001.

50. Ryder LH. Quantum Field Theory. Cambridge U. Press; 1985.

51. Bracewell RN. The Fourier Transform and Its Applications. McGraw-Hill; 1999.

52. Mugnier LM, Robert C, Conan J-M, Michau V, Salem S. Myopic deconvolution from wave-front sensing. J. Opt. Soc. Am. A. 2001;18:862–872.

53. Chu CK, Glad IK, Godtliebsen F, Marron JS. Edge-preserving smoothers for image processing. J. Am. Stat. Assoc. 1998;93:526–541.

54. Galatsanos NP, Katsaggelos AK. Methods for choosing the regularization parameter and estimating the noise variance in image restoration and their relation. IEEE Trans. Image Process. 1992;1:332–336. [PubMed]

55. Golub GH, Heath M, Wahba G. Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics. 1979;21:215–223.

56. Ibáñez MV, Simó A. Parameter estimation in Markov random field image modeling with imperfect observations. A comparative study. Pattern Recogn. Lett. 2003;24:2377–2389.

57. Jalobeanu A, Blanc-Féraud L, Zerubia J. Estimation of adaptive parameters for satellite image deconvolution. In: Sanfeliu A, Villanueva JJ, Vanrell M, Alquezar R, Eklundh JO, Aloimonos Y, editors. Proceedings of the IEEE International Conference on Pattern Recognition; IEEE; 2000. pp. 318–321.

58. Potamianos GG, Goutsias JK. Partition function estimation of Gibbs random-field images using Monte Carlo simulations. IEEE Trans. Inf. Theory. 1993;39:1322–1332.

59. Zhou ZY, Leahy RM, Qi JY. Approximate maximum likelihood hyperparameter estimation for Gibbs priors. IEEE Trans. Image Process. 1997;6:844–861. [PubMed]

60. University of Granada Computer Vision Group. Brain magnetic resonance dataset (mr030.pgm) http://decsai.ugr.es/cvg/dbimages/gbio256.php.

61. Thiébaut E, Conan J-M. Strict *a priori* constraints for maximum-likelihood blind deconvolution. J. Opt. Soc. Am. A. 1995;12:485–492.

62. Marchis F, de Pater I, Davies AG, Roe HG, Fusco T, Le Mignant D, Descamps P, Macintosh BA, Prangé R. High-resolution Keck adaptive optics imaging of violent volcanic activity on Io. Icarus. 2002;160:124–131.

63. Marchis F, Davies AG, Gibbard SG, Le Mignant D, Lopes RM, Macintosh B, de Pater I. Volcanic activity of Io monitored with Keck-10m AO in 2003–2004. American Geophysical Union, Fall Meeting 2004 Abstracts; American Geophysical union; 2004. p. C1483.

64. Marchis F, Le Mignant D, Chaffee FH, Davies AG, Kwok SH, Prangé R, de Pater I, Amico P, Campbell R, Fusco T, Goodrich RW, Conrad A. Keck AO survey of Io global volcanic activity between 2 and 5 µm. Icarus. 2005;176:96–122.

65. Douté S, Schmitt B, Lopes-Gautier R, Carlson R, Soderblom L, Shirley J. The Galileo NIMS Team. Mapping SO_{2} frost on Io by the modeling of NIMS hyperspectral images. Icarus. 2001;149:107–132.

66. Marchis F. Ph.D. thesis. France: European Southern Observatory, Chile, and Institut d’Astrophysique Spatiale; 2000. High resolution imaging of the solar system bodies by means of adaptive optics. A study of the volcanic activity of Io.

67. Marchis F. Theoretical PSFs for the KECK telescope and AO instruments. http://astron.berkeley.edu/~fmarchis/Science/Keck/Perfect.PSF/

68. Smith BA, Soderblom L, Beebe RF, Boyce JM, Briggs G, Bunker A, Collins SA, Hansen C, Johnson TV, Mitchell JL, Terrile RJ, Carr MH, Cook AF, Cuzzi JN, Pollack JB, Danielson GE, Ingersoll AP, Davies ME, Hunt GE, Masursky H, Shoemaker EM, Morrison D, Owen T, Sagan C, Veverka J, Strom R, Suomi VE. Encounter with Saturn—Voyager 1 imaging science results. Science. 1981;212:163–191. [PubMed]

69. Gibbard SG, de Pater I, Macintosh BA, Roe HG, Max CE, Young EF, McKay CP. Titan’s 2 µm surface albedo and haze optical depth in 1996–2004: Titan: Pre-Cassini view. Geophys. Res. Lett. 2004;31:L17S02.

70. Coustenis A, Gendron E, Lai O, Véran J-P, Woillez J, Combes M, Vapillon L, Fusco T, Mugnier L, Rannou P. Images of Titan at 1.3 and 1.6 µm with adaptive optics at the CFHT. Icarus. 2001;154:501–515.

71. de Pater I, Ádámkovics M, Bouchez AH, Brown ME, Gibbard SG, Marchis F, Roe HG, Schaller E, Young E. Titan imagery with Keck adaptive optics during and after probe entry. J. Geophys. Res. 2006;111:E07S05.

72. Sromovsky LA, Spencer JR, Baines KH, Fry PM. Ground-based observations of cloud features on Uranus. Icarus. 2000;146:307–311.

73. Hammel HB, de Pater I, Gibbard SG, Lockwood GW, Rages K. New cloud activity on Uranus in 2004: first detection of a southern feature at 2.2 µm. Icarus. 2005;175:284–288.

74. Descamps P, Marchis F, Berthier J, Prangé R, Fusco T, Le Guyader C. First ground-based astrometric observations of Puck. C. R. Phys. 2002;3:121–128.

75. de Pater I, Gibbard SG, Macintosh BA, Roe H, Gavel DT, Max CE. Keck adaptive optics images of Uranus and its rings. Icarus. 2002;160:359–374.

76. Waterman-Storer CM. Microtubules and microscopes: how the development of light microscopic imaging technologies has contributed to discoveries about microtubule dynamics in living cells. Mol. Biol. Cell. 1998;9:3263–3271. [PMC free article] [PubMed]

77. Chenegros G, Mugnier LM, Lacombe F, Glanc M. 3D deconvolution of adaptive-optics corrected retinal images. Proc. SPIE. 2006;6090:60900P.

78. The Whole Frog Project. http://froggy.lbl.gov.

79. Nasmyth K. Disseminating the genome: joining, resolving, and separating sister chromatids during mitosis and meiosis. Annu. Rev. Genet. 2001;35:673–745. [PubMed]

80. Hanser BM, Gustafsson MGL, Agard DA, Sedat JW. Phase retrieval for high-numerical-aperture optical systems. Opt. Lett. 2003;28:801–803. [PubMed]

81. Hanser BM, Gustafsson MGL, Agard DA, Sedat JW. Phase-retrieved pupil functions in wide-field fluorescence microscopy. J. Microsc. 2004;216:32–48. [PubMed]

82. Huber PJ. Robust Statistics. Wiley-Interscience; 2003.

83. Ke Q, Kanade T. Robust subspace computation using L1 norm. http://reports-archive.adm.cs.cmu.edu/anon/2003/CMU-CS-03-172.pdf.

84. Farsiu S, Robinson D, Elad M, Milanfar P. Advances and challenges in super-resolution. Int. J. Imaging Syst. Technol. 2004;14:47–57.

85. Figueiredo MAT, Nowak RD. An EM algorithm for wavelet-based image restoration. IEEE Trans. Image Process. 2003;12:906–916. [PubMed]

86. Puetter RC. Pixon-based multiresolution image reconstruction and the quantification of picture information content. Int. J. Imaging Syst. Technol. 1995;6:314–331.

87. Wakker BP, Schwarz UJ. The Multi-Resolution CLEAN and its application to the short-spacing problem in interferometry. Astron. Astrophys. 1988;200:312–322.

88. Biggs DS, Wang C-L, Holmes TJ, Khodjakov A. Subpixel deconvolution of 3D optical microscope imagery. Proc. SPIE. 2004;5559:369–380.

89. Gustafsson MG. Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. J. Microsc. 2000;198:82–87. [PubMed]

90. Gustafsson MG. Nonlinear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution. Proc. Natl. Acad. Sci. U.S.A. 2005;102:13081–13086. [PubMed]
