Article sections

- Abstract
- 1. Introduction
- 2. The 3D fundamental resolution measure
- 3. The 3D practical resolution measure
- 4. Effects of orientation and axial location
- 5. Comparison with the classical criterion
- 6. Maximum likelihood estimation with simulated data
- 7. Conclusions
- References

Opt Commun. Author manuscript; available in PMC 2010 May 1.

Published in final edited form as:

Opt Commun. 2009 May 1; 282(9): 1751–1761.

doi: 10.1016/j.optcom.2009.01.062

PMCID: PMC2753980

NIHMSID: NIHMS96293

Jerry Chao,^{a,b} Sripad Ram,^{b} Anish V. Abraham,^{a,b} E. Sally Ward,^{b} and Raimund J. Ober^{*,a,b}


Abstract

A three-dimensional (3D) resolution measure for the conventional optical microscope is introduced which overcomes the drawbacks of the classical 3D (axial) resolution limit. Formulated within the context of a parameter estimation problem and based on the Cramer-Rao lower bound, this 3D resolution measure indicates the accuracy with which a given distance between two objects in 3D space can be determined from the acquired image. It predicts that, given enough photons from the objects of interest, arbitrarily small distances of separation can be estimated with prespecified accuracy. Using simulated images of point source pairs, we show that the maximum likelihood estimator is capable of attaining the accuracy predicted by the resolution measure. We also demonstrate how different factors, such as extraneous noise sources and the spatial orientation of the imaged object pair, can affect the accuracy with which a given distance of separation can be determined.

1. Introduction

The resolution of the optical microscope in two dimensions (2D) (i.e., the resolution in one focal plane) has been the subject of much research. In that body of work, an intrinsic assumption is that the objects of interest reside in a focal plane of the microscope. In many applications, however, the objects of interest are not confined to a plane of focus. Instead, they are located in three-dimensional (3D) space. An important example is the imaging of biological interactions at the level of individual biomolecules inside a cell, which has been made possible with the advent of single molecule microscopy (e.g., [1, 2]). Monitoring the distance of separation between closely-spaced biomolecules is of importance as the distance information often characterizes the nature of the biological interaction. In such applications, the problem of determining the distance separating the objects of interest becomes one of resolution in 3D.

Compared to 2D (transversal) resolution, fewer constraints can be assumed with 3D (axial) resolution concerning the object pair of interest. In the 2D case, the position of each object is completely described by an *x* coordinate and a *y* coordinate, since both objects are assumed to lie in a focal plane. In the 3D context, no such assumption can be made, and the positional information of each object must include its location along the *z* dimension. The *z* dimension also introduces many more possibilities in terms of spatial orientation for the 3D scenario. Whereas in a 2D setting all possible orientations of two objects are confined to rotation within a focal plane, in a 3D setting the two objects can be rotated freely in all three directions.

The classical 3D resolution limit [3-5] specifies a threshold distance of separation below which two point sources are deemed indiscernible. More specifically, the classical criterion gives the minimum distance by which two like point sources aligned parallel to the optical (*z*-) axis can be separated, and yet still be distinguishable. This minimum distance *d_{min}* is given by the expression [5]

$${d}_{\mathit{\text{min}}}=\frac{2\lambda n}{{n}_{a}^{2}},$$

(1)

where *λ* is the wavelength of the photons detected from the point source pair, *n* is the refractive index of the medium containing the point source pair, and *n_{a}* is the numerical aperture of the objective lens. Just as the 2D resolution criterion due to Rayleigh heuristically specifies the minimum resolvable distance in a focal plane to be the distance from the center to the first minimum of the Airy diffraction pattern, Eq. (1) specifies the distance from the center to the first minimum, along the optical axis, of the classical 3D point spread function of Born and Wolf [3]. Though a good rule of thumb when the image is detected with the unaided human eye, the classical 3D criterion has important drawbacks.
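As a quick numerical check of Eq. (1), the short sketch below evaluates the classical axial limit; the wavelength, refractive index, and numerical aperture values are illustrative choices, not values taken from this paper.

```python
# Sketch: evaluating the classical 3D (axial) resolution limit of Eq. (1).
# Parameter values below are illustrative assumptions.

def classical_axial_limit(wavelength, n, na):
    """Minimum resolvable axial distance d_min = 2 * lambda * n / na**2."""
    return 2.0 * wavelength * n / na**2

# Example: 520 nm emission, oil immersion (n = 1.515), na = 1.4 objective.
d_min = classical_axial_limit(520.0, 1.515, 1.4)  # in nm
print(f"classical axial limit: {d_min:.1f} nm")    # roughly 804 nm
```

Note that this axial limit is well over the emission wavelength, much larger than the corresponding Rayleigh limit of 0.61*λ*/*n_{a}* in the focal plane.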

First, as is the case with Rayleigh's criterion, the classical 3D limit is deterministic and neglects the stochastic nature of the photon emission process. As such, it does not take into account the number of detected photons, which one would expect to be an important factor affecting resolvability. Intuitively, one would expect a closely spaced pair of point sources to be more easily resolved if a larger number of photons is detected from it. Conversely, even a point source pair separated by a distance well beyond the classical limit would be expected to be difficult to resolve if only very few photons are emitted over the observation period.

Second, the classical criterion does not consider the precise location of the point source pair in 3D space, and moreover, its applicability is limited to the specific spatial orientation wherein the point sources are aligned parallel to the optical axis. It is not unreasonable to think that both the location and the orientation of the point source pair can affect resolvability. One might expect, for example, that given two point sources, it would be more difficult to distinguish them if they are located far from the focal plane. One might also predict that, everything else being equal, it would be harder to determine the distance of separation when the two point sources are aligned parallel to the optical axis. Distance estimation is expected to be particularly difficult in this case because this orientation corresponds to one point source being located directly in front of, and hence obscuring, the other.

Third, the fact that it neglects important parameters such as the photon count and the point source pair's axial location suggests that the classical criterion might in some cases predict too high a limit. For example, if an objective lens with a numerical aperture of *n_{a}* = 1.45 is used in conjunction with an immersion oil of matching refractive index, Eq. (1) places the minimum resolvable distance at well over the emission wavelength, a limit which, as we show below, can be substantially surpassed.

To address these drawbacks of the classical 3D resolution limit, we take a parameter estimation approach to evaluating resolution in 3D. Specifically, we adopt the theoretical framework that is laid out in [6]. This mathematical framework models the intrinsically stochastic nature of the photon detection process in optical microscopy, and provides the foundation on which specific problems such as point source localization [7] and resolution [8] can be formulated. In [8], the resolution problem is considered strictly in the 2D context.

Within the stochastic framework, we propose an information-theoretic 3D resolution measure for the conventional optical microscope which, instead of specifying a smallest resolvable distance as does the classical criterion, predicts how accurately a given distance can be determined. Analogous to its 2D counterpart [8], it predicts that given a sufficient number of photons from the point source pair, arbitrarily small distances of separation can be determined with prespecified accuracy. Additionally, this resolution measure takes into account the precise location and spatial orientation of a point source pair. Therefore, it is applicable to point sources that are situated in any arrangement with respect to one another in 3D space.

Since our resolution measure indicates the accuracy with which a given distance can be determined in the context of a parameter estimation problem, it is of practical importance to know that there are estimation algorithms that can in fact achieve the predicted accuracy. Therefore, we also demonstrate in this paper, using simulated images of point source pairs in 3D space, that the maximum likelihood estimator is able to attain the accuracy indicated by the resolution measure.

By predicting arbitrarily small distances to be resolvable with prespecified accuracy, the 3D resolution measure effectively implies that the classical 3D resolution limit can be surpassed. We demonstrate this both through the mathematical formulae themselves and through our estimations on simulated images. As other groups have demonstrated the determination of distances below the classical limit via the use of non-conventional optical microscopy techniques (e.g., [9]), the significance of our result lies in the fact that it indicates the conventional optical microscope to be capable by itself of overcoming the classical barrier. Indeed, superresolution approaches based on frequency domain band extrapolation, for example, have been proposed in the 3D context for the conventional optical microscope (e.g., [10]).

We begin by presenting in Section 2 the 3D resolution measure for noise-free (i.e., no extraneous noise) imaging with an ideal detector of infinite and continuous area. We use it to illustrate how the resolution measure is affected by the number of photons detected from, and the separation distance of, a pair of point sources. In Section 3, we consider the resolution measure for practical imaging scenarios where extraneous noise sources (e.g., sample autofluorescence, detector dark current, detector readout) are present and the detector is non-ideal (i.e., has finite and pixelated area). The effects of extraneous noise, detector pixelation, and the lateral magnification of the microscope are demonstrated. In Section 4, we consider the resolution measure's dependence on two attributes of a point source pair that are unique to the 3D context: its spatial orientation with respect to, and its location along, the optical axis. In Section 5, we present a specially derived resolution measure that directly compares with the classical 3D resolution limit. We follow in Section 6 with the results of our maximum likelihood distance estimations on simulated images, and conclude our paper in Section 7 with a summary and a discussion on the practical usage of the resolution measure.

2. The 3D fundamental resolution measure

We approach resolution from the perspective of how accurately, rather than whether, a given distance separating two objects (e.g., two point sources) can be determined. To quantify this accuracy, the task of determining the separation distance in 3D is formulated as a parameter estimation problem. The parameter vector to be estimated from the acquired image consists of six parameters that describe how the two objects to be resolved are situated in 3D space. This vector is given by *θ* = (*d, ϕ, ω, s_{x}, s_{y}, s_{z}*), where *d* is the distance separating the two objects, *ϕ* and *ω* are angles that specify the spatial orientation of the object pair, and (*s_{x}, s_{y}, s_{z}*) are the coordinates of the midpoint between the two objects (Fig. 1).

Fig. 1. Pair of point sources situated in 3D space. The point sources are separated by a distance *d*, and the midpoint between them is given by the coordinates (*s*_{x}, *s*_{y}, *s*_{z}). The spatial orientation of the point source pair is described by the angle *ω*.

In any parameter estimation problem, it is important to know the accuracy of a chosen estimation algorithm in terms of the standard deviation of its estimates. For the category of algorithms referred to as unbiased estimators (i.e., estimators that on average attain the correct result), the well-known Cramer-Rao inequality in estimation theory [11] provides a bound on their accuracy. This inequality is a general result which states that the covariance matrix of any unbiased estimator of the unknown parameter vector *θ* is no smaller than the inverse of the Fisher information matrix **I**(*θ*), i.e.,

$$\text{Cov}(\hat{\theta})\ge {\mathbf{\text{I}}}^{-1}(\theta ),$$

(2)

where for an *N*-parameter *θ*, *N* a positive integer, **I**(*θ*) is an *N*-by-*N* matrix whose elements are functions of the *N* parameters in *θ*. In other words, this result asserts that the accuracy of any unbiased estimator is no better than the bound on the right-hand side of Eq. (2). It is important to emphasize that this bound is not specific to a particular unbiased estimator. Instead, it is independent of the estimation algorithm that is used, and therefore serves as a useful benchmark for comparing different estimators.

Due to its general applicability to all unbiased estimators, we base our 3D resolution measure on the accuracy benchmark provided by the Cramer-Rao inequality. For our resolution problem, the unknown parameter vector *θ* is the six-parameter vector given above, and the Fisher information matrix **I**(*θ*) is a 6-by-6 matrix where the rows and columns correspond to the six parameters in the given order. Since the distance of separation *d* corresponding to element (1, 1) of the matrix is what we are interested in, and since it is customary to express accuracy in terms of the standard deviation rather than the variance, the resolution measure, which we denote by
${\delta}_{d}^{3D}$, is given by the square root of element (1, 1) of the inverse of the 6-by-6 Fisher information matrix, i.e.,
${\delta}_{d}^{3D}=\sqrt{{[{\mathbf{\text{I}}}^{-1}(\theta )]}_{11}}$. Put another way, the 3D resolution measure is a lower bound on the standard deviation of the estimates of any unbiased estimator of the distance *d*. As such, large values of the resolution measure indicate poor accuracy, while small values indicate good accuracy.
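As a small numerical illustration of this definition, the sketch below extracts the resolution measure from a 6-by-6 Fisher information matrix; the matrix here is an arbitrary symmetric positive definite stand-in, not one computed from the actual image model of this paper.

```python
import numpy as np

# Sketch: the 3D resolution measure is the square root of element (1, 1)
# -- index [0, 0] in zero-based indexing -- of the inverse Fisher
# information matrix. The matrix below is a hypothetical stand-in.

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
I_theta = A @ A.T + 6.0 * np.eye(6)   # symmetric positive definite stand-in

crlb = np.linalg.inv(I_theta)         # Cramer-Rao lower bound matrix
delta_d = np.sqrt(crlb[0, 0])         # lower bound on std. dev. of d
```

The diagonal entries of `crlb` bound the variances of the six parameter estimates; only the first, corresponding to the distance *d*, enters the resolution measure.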

Note that while our result is taken in the end from a single element of the inverse Fisher information matrix, it is, in general, necessary that we work with a 6-by-6 matrix. This is because in the most realistic case, we do not know the true value of any of the six parameters that collectively characterize the location of a given object pair. We therefore need to estimate all six parameters from the acquired image, and accordingly the Fisher information matrix needs to be one that accounts for all six unknown parameters. Eliminating one or more parameters from being estimated, and hence reducing the size of the Fisher information matrix, would in general imply knowledge of the true value of each of the excluded parameters.

The Fisher information matrix **I**(*θ*) takes different forms depending on the assumptions we make concerning the conditions for image acquisition. In what follows in this section, we present the Fisher information matrix for imaging under ideal conditions. In Section 3, we present the Fisher information matrix for image acquisition under less than ideal, but practical conditions.

In our stochastic framework, the data that is collected for each detected photon consists of the spatial coordinates as well as the time point at which the photon hits the image detector. More specifically, the image formed from the detected photons is modeled as a spatio-temporal random process [12], which we refer to as the image detection process [6]. Assuming the ideal scenario where imaging is carried out using an unpixelated (i.e., continuous area) detector of infinite area and in the absence of any extraneous noise sources, the Fisher information matrix for an image acquired during the time interval [*t*_{0}, *t*] is given by [6, 7, 12]

$$\mathbf{\text{I}}(\theta )={\int}_{{t}_{0}}^{t}{\int}_{{\mathbb{R}}^{2}}\frac{1}{{\Lambda}_{\theta}(\tau )\phantom{\rule{0.2em}{0ex}}{f}_{\theta ,\tau}(x,y)}{\left(\frac{\partial [{\Lambda}_{\theta}(\tau )\phantom{\rule{0.2em}{0ex}}{f}_{\theta ,\tau}(x,y)]}{\partial \theta}\right)}^{T}\frac{\partial [{\Lambda}_{\theta}(\tau )\phantom{\rule{0.2em}{0ex}}{f}_{\theta ,\tau}(x,y)]}{\partial \theta}\mathit{\text{dxdyd}}\tau ,\phantom{\rule{0.8em}{0ex}}\theta \in \Theta ,$$

(3)

where Λ_{*θ*} is the time-varying intensity function of the inhomogeneous Poisson process that models the time points at which the photons are detected, and {*f_{θ,τ}*}_{*τ*≥*t*_{0}} is the family of density functions that describe the spatial distribution, over the detector plane, of the photons detected at each time point *τ*.

Since no assumptions are made concerning the parameter vector *θ* and the functions Λ_{*θ*} and {*f_{θ,τ}*}_{*τ*≥*t*_{0}}, Eq. (3) applies to a broad class of estimation problems. For the resolution problem considered here, where the acquired image is that of a pair of objects, the density function takes the form

$${f}_{\theta ,\tau}(x,y)=\frac{1}{{M}^{2}}\phantom{\rule{0.2em}{0ex}}\left[{\epsilon}_{1}(\tau )\phantom{\rule{0.2em}{0ex}}{q}_{{z}_{01},1}\left(\frac{x}{M}-{x}_{01},\frac{y}{M}-{y}_{01}\right)+{\epsilon}_{2}(\tau )\phantom{\rule{0.2em}{0ex}}{q}_{{z}_{02},2}\left(\frac{x}{M}-{x}_{02},\frac{y}{M}-{y}_{02}\right)\right],$$

(4)

where (*x, y*) ∈ ℝ^{2}, *ε_{i}*(*τ*), *i* = 1, 2, denotes the fraction of the photons detected at time *τ* that originate from the *i*-th object, *M* denotes the lateral magnification of the microscope, (*x*_{0*i*}, *y*_{0*i*}, *z*_{0*i*}) denotes the 3D location of the *i*-th object, and *q_{z_{0i},i}* denotes the image function of the *i*-th object at the axial location *z*_{0*i*}.

We refer to the quantity
${\delta}_{d,\mathit{\text{gfrem}}}^{3D}=\sqrt{{[{\mathbf{\text{I}}}^{-1}(\theta )]}_{11}}$, where **I**(*θ*) is that of Eq. (3) with Λ_{*θ*} and {*f_{θ,τ}*}_{*τ*≥*t*_{0}} left in their general forms, as the 3D generalized fundamental resolution measure (3D g-FREM). It gives the best accuracy with which the distance *d* can theoretically be estimated under the ideal imaging conditions described above.

The 3D g-FREM of Section 2.2 provides a general expression from which resolution measures for more specific scenarios can be derived. An important scenario of interest is one where the objects to resolve are two like point sources whose images are described by the classical 3D point spread function of Born and Wolf [3]. These conditions are among the assumptions made by the classical 3D resolution limit, and therefore are necessary if comparisons are to be made with the classical criterion (see Section 5). For this special scenario, we define a special case of the 3D g-FREM called the 3D fundamental resolution measure (3D FREM). The 3D FREM is conceptually analogous to the 3D g-FREM in that it gives the best theoretically attainable accuracy for distance estimation. However, it applies specifically to the conditions we have described.

Denoted by
${\delta}_{d,\phantom{\rule{0.1em}{0ex}}\mathit{\text{frem}}}^{3D}$, the 3D FREM is by definition the square root of element (1, 1) of the inverse of the Fisher information matrix for the assumed special conditions. Specifically, this matrix is derived from Eq. (3) for two point sources that emit photons of the same wavelength and the same constant detection rate (Λ_{1}(*τ*) = Λ_{2}(*τ*) = Λ_{0}, *τ* ≥ *t*_{0}), and for image functions that are given by the Born and Wolf point spread function. Under these conditions, the Fisher information matrix is conveniently expressed as a product of the scalar quantity Λ_{0} · (*t* − *t*_{0}) (i.e., the expected photon count detected from each point source) and a 6-by-6 matrix. Accordingly, the 3D FREM can be written as

$${\delta}_{d,\phantom{\rule{0.1em}{0ex}}\mathit{\text{frem}}}^{3D}=\sqrt{\frac{{\Gamma}_{0}^{3D}(d)}{2\cdot {\Lambda}_{0}\cdot (t-{t}_{0})}},$$

(5)

where

$${\Gamma}_{0}^{3D}(d)={\left\{{\left[{\int}_{{\mathbb{R}}^{2}}\frac{1}{{f}_{\theta}\phantom{\rule{0.2em}{0ex}}(x,y)}{\left(\frac{\partial \phantom{\rule{0.2em}{0ex}}{f}_{\theta}\phantom{\rule{0.2em}{0ex}}(x,y)}{\partial \theta}\right)}^{T}\frac{\partial \phantom{\rule{0.2em}{0ex}}{f}_{\theta}\phantom{\rule{0.2em}{0ex}}(x,y)}{\partial \theta}\mathit{\text{dxdy}}\right]}^{-1}\right\}}_{11},\phantom{\rule{0.5em}{0ex}}\theta \in \Theta .$$

(6)

In the expression for the function
${\Gamma}_{0}^{3D}(d)$, the subscript 11 denotes element (1, 1) of the inverse of the 6-by-6 matrix inside the square brackets. The density function *f_{θ}*(*x, y*), which under the assumed conditions is independent of the time *τ*, is given by

$${f}_{\theta}\phantom{\rule{0.2em}{0ex}}(x,y)=\frac{1}{2{M}^{2}}\phantom{\rule{0.2em}{0ex}}\left[{q}_{{z}_{01}}\phantom{\rule{0.4em}{0ex}}\left(\frac{x}{M}-{x}_{01},\frac{y}{M}-{y}_{01}\right)+{q}_{{z}_{02}}\phantom{\rule{0.4em}{0ex}}\left(\frac{x}{M}-{x}_{02},\frac{y}{M}-{y}_{02}\right)\right]\phantom{\rule{0.4em}{0ex}},\phantom{\rule{0.8em}{0ex}}(x,y)\in {\mathbb{R}}^{2},$$

(7)

where the image functions *q*_{*z*_{01}} and *q*_{*z*_{02}} are both given by the classical 3D point spread function of Born and Wolf [3], i.e.,

$${q}_{{z}_{0}}(x,y)=\frac{4\pi {n}_{a}^{2}}{{\lambda}^{2}}{\left|{\int}_{0}^{1}{J}_{0}\phantom{\rule{0.3em}{0ex}}\left(\frac{2\pi {n}_{a}}{\lambda}\sqrt{{x}^{2}+{y}^{2}}\rho \right)\phantom{\rule{0.3em}{0ex}}{e}^{j\frac{\pi {n}_{a}^{2}{z}_{0}}{n\lambda}{\rho}^{2}}\rho d\rho \right|}^{2},\phantom{\rule{0.8em}{0ex}}(x,y)\in {\mathbb{R}}^{2},\phantom{\rule{0.5em}{0ex}}{z}_{0}\in \mathbb{R}.$$

(8)

In the Born and Wolf image function of Eq. (8), *n_{a}* denotes the numerical aperture of the objective lens, *λ* denotes the wavelength of the detected photons, *n* denotes the refractive index of the medium containing the point sources, *J*_{0} denotes the zeroth order Bessel function of the first kind, and *z*_{0} denotes the axial location of the point source.
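For readers who wish to evaluate Eq. (8) numerically, the sketch below approximates the integral over *ρ* by a trapezoidal rule, using the integral representation of *J*_{0} so that only NumPy is required; the function name, units (micrometers), and default parameter values are illustrative assumptions.

```python
import numpy as np

def bessel_j0(u):
    """Zeroth order Bessel function of the first kind, evaluated via its
    integral representation J0(u) = (1/pi) * int_0^pi cos(u sin t) dt."""
    t = np.linspace(0.0, np.pi, 1001)
    h = t[1] - t[0]
    vals = np.cos(np.outer(u, np.sin(t)))
    # Trapezoidal rule along the t axis.
    integral = h * (vals[:, 0] / 2 + vals[:, 1:-1].sum(axis=1) + vals[:, -1] / 2)
    return integral / np.pi

def born_wolf_psf(x, y, z0, na=1.4, wavelength=0.52, n=1.515):
    """Approximate the Born and Wolf image function q_{z0}(x, y) of Eq. (8)
    by trapezoidal quadrature over rho in [0, 1]. All lengths in um."""
    r = np.hypot(x, y)
    rho = np.linspace(0.0, 1.0, 2001)
    integrand = (bessel_j0(2.0 * np.pi * na * r / wavelength * rho)
                 * np.exp(1j * np.pi * na**2 * z0 / (n * wavelength) * rho**2)
                 * rho)
    h = rho[1] - rho[0]
    integral = h * (integrand[0] / 2 + integrand[1:-1].sum() + integrand[-1] / 2)
    return 4.0 * np.pi * na**2 / wavelength**2 * float(np.abs(integral)**2)

# In focus (z0 = 0) the profile reduces to the Airy pattern, whose first
# zero lies at the Rayleigh radius r = 0.61 * wavelength / na.
```

The in-focus special case provides a convenient sanity check: the function should essentially vanish at the Rayleigh radius.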

The number of photons detected from a point source pair of interest is a quantity that should have an impact on how well the two point sources can be distinguished. The more photons (i.e., data) that are collected, the more easily one should be able to determine the distance of separation. This idea is reflected in our resolution measure.

The 3D FREM of Eq. (5) is the square root of the ratio of two independent terms: the expected photon count Λ_{0} · (*t* − *t*_{0}) detected from each point source, and the function
${\Gamma}_{0}^{3D}(d)$ of Eq. (6) that depends on the remaining parameters, including the distance of separation *d*. Whatever the value of
${\Gamma}_{0}^{3D}(d)$ for an arbitrarily small distance of separation *d*, the expected photon count Λ_{0} · (*t* − *t*_{0}) can be increased independently to obtain an arbitrarily small value for the resolution measure, and hence an arbitrarily good accuracy for determining the distance of separation. Therefore, Eq. (5) shows that, given enough photons from the point source pair, arbitrarily small distances in 3D can be determined with the desired accuracy.

Fig. 2a shows the photon count dependence of the 3D FREM for three point source pairs that differ only in their distances of separation. For each of the three distances of separation shown, an inverse square root dependence is seen as the 3D FREM decreases, and therefore the accuracy for distance estimation improves, with increasing expected photon count. For a distance of *d* = 200 nm, for example, the 3D FREM predicts an accuracy of ±40.12 nm when an expected 350 photons are detected from each point source. This accuracy is approximately ±20% of the 200 nm distance. By increasing the expected photon count to 1500 photons per point source, the accuracy is improved to ±19.38 nm, which is approximately ±10% of the 200 nm distance.
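The inverse square root dependence implied by Eq. (5) can be checked directly against the accuracies quoted above: rescaling the ±40.12 nm value at 350 photons by the square root of the photon-count ratio recovers the ±19.38 nm value at 1500 photons.

```python
import math

# Eq. (5): the 3D FREM scales as 1 / sqrt(expected photon count), so an
# accuracy known at one photon count can be rescaled to another.
accuracy_350 = 40.12                                    # nm, at 350 photons
accuracy_1500 = accuracy_350 * math.sqrt(350.0 / 1500.0)
print(round(accuracy_1500, 2))  # 19.38, matching the quoted value
```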

Dependence of the 3D resolution measure (3D FREM) on (a) the expected photon count and (b) the distance of separation of a pair of point sources. (a) The photon count dependence is shown for three point source pairs that differ only in their distances **...**

By specifying the unknown parameter vector *θ* to consist only of the six parameters that describe the location of an object pair, an implicit assumption is made that all other attributes of the object pair are either known or can be independently estimated. It is assumed, for example, that the photon detection rate Λ_{0} of each point source is known. In practice, however, it may be difficult to independently determine the photon detection rates of the two point sources. For such cases, the vector *θ* can be extended to include the photon detection rates Λ_{1} and Λ_{2} of the point sources, and accordingly an 8-by-8 Fisher information matrix can be computed to obtain the resolution measure. In Fig. 2a, resolution measures computed using the 8-by-8 matrix are shown as dashed curves. These dashed curves show that, under the same conditions, treating the photon detection rates as unknown parameters will slightly worsen the accuracy with which the distance of separation can be estimated. However, it is important to note that the shape, and hence the behavior, of the resolution measure as a function of photon count remains unaltered.

In addition to the photon count dependence, Fig. 2a demonstrates the effect of the distance of separation on the resolution measure. Intuitively, one would expect that larger separation distances are easier to determine than smaller ones, since in the latter case there is more overlap of the images of the two point sources. One way to see this in Fig. 2a is that given an expected number of detected photons, the resolution measure for a larger distance is smaller than that for a smaller distance. Another way to appreciate the distance dependence is by comparing the photon counts required to determine the three distances with the same level of accuracy. For a distance of 200 nm, a ±23.74 nm accuracy is achieved when an expected 1000 photons are detected from each point source. This accuracy is approximately ±12% of 200 nm. When the distance is halved to 100 nm, more than ten times the expected number of photons (12000) are needed from each point source to obtain an accuracy of ±12.18 nm. This accuracy is about ±12% of the 100 nm distance, and hence corresponds to the same level of accuracy. If the distance is halved again to 50 nm, then even at an expected 100000 photons per point source, the resolution measure still predicts an accuracy of ±7.90 nm, which is approximately ±16% of the 50 nm distance.

The distance dependence is illustrated directly in Fig. 2b, where the 3D FREM corresponding to an expected 2500 photons per point source is shown as a function of separation distance. Here, the *y*-axis of the plot is given as a percent error, which is just the ratio of the 3D FREM to the distance, expressed as a percentage. Using such a plot, one can easily determine the minimum distance that can be estimated with a particular percent error. In Fig. 2b, for instance, the intersection of the percent error curve and the dashed horizontal line indicates that, for the conditions assumed, a distance of 175 nm or greater can be determined with an error that is no worse than ±10%.

3. The 3D practical resolution measure

In Section 2 we presented resolution measures (3D g-FREM and 3D FREM) for the best imaging scenario where a detector of infinite and continuous area and noise-free conditions are assumed. In a practical experiment, however, the photon detector (e.g., a CCD) is pixelated and has a finite area. Furthermore, photons that arise from extraneous noise sources such as the autofluorescence of the sample and the readout process of the detector contribute to the total number of photons collected by the detector. When these data-deteriorating experimental factors are to be accounted for, the 3D g-FREM and the 3D FREM no longer apply, and instead, the 3D generalized practical resolution measure (3D g-PREM) and the 3D practical resolution measure (3D PREM) are used. The practical resolution measures take into account the pixelation and finite size of the detector, and model spurious photons (e.g., sample autofluorescence, detector dark current) in the acquired image as additive Poisson noise, and measurement noise (e.g., detector readout) as additive Gaussian noise.

In a practical imaging scenario, the image acquired during the time interval [*t*_{0}, *t*] by a pixelated detector consisting of *N_{p}* pixels is modeled as a sequence of independent random variables, one per pixel. The data at the *k*-th pixel is the sum of the number of photons *S_{θ,k}* detected from the objects of interest, the number *B_{k}* of spurious photons due to extraneous Poisson noise sources, and a Gaussian measurement noise component *W_{k}* arising from, e.g., the detector readout process.

We note that throughout this paper, the term “noise” refers to the extraneous noise accounted for by the random variables *B_{k}* and/or *W_{k}*, *k* = 1, …, *N_{p}*, and not to the intrinsic stochasticity of the photon emission and detection processes.

It can be shown [6] that the number of photons detected from the two objects at each pixel (i.e., *S_{θ,k}*, *k* = 1, …, *N_{p}*) is a Poisson random variable with mean

$${\mu}_{\theta}\phantom{\rule{0.1em}{0ex}}(k,t)={\int}_{{t}_{0}}^{t}{\int}_{{C}_{k}}{\Lambda}_{\theta}(\tau )\phantom{\rule{0.2em}{0ex}}{f}_{\theta ,\tau}(x,y)\phantom{\rule{0.1em}{0ex}}\mathit{\text{dxdyd}}\tau ,$$

(9)

where [*t*_{0}, *t*] is the acquisition time interval, *C_{k}* is the region in the detector plane occupied by the *k*-th pixel, and Λ_{*θ*} and *f_{θ,τ}* are as defined in Section 2.

In the absence of measurement noise (*W_{k}* = 0, *k* = 1, …, *N_{p}*), the Fisher information matrix is given by [6]

$$\mathbf{\text{I}}(\theta )=\sum _{k=1}^{{N}_{p}}\phantom{\rule{0.3em}{0ex}}\frac{1}{{\mu}_{\theta}\phantom{\rule{0.2em}{0ex}}(k,t)+\beta (k,t)}\phantom{\rule{0.2em}{0ex}}{\left(\frac{\partial {\mu}_{\theta}\phantom{\rule{0.2em}{0ex}}(k,t)}{\partial \theta}\right)}^{T}\frac{\partial {\mu}_{\theta}\phantom{\rule{0.2em}{0ex}}(k,t)}{\partial \theta},$$

(10)

where for *k* = 1, …, *N_{p}*, *β*(*k, t*) denotes the mean of the Poisson-distributed number of spurious photons *B_{k}* at the *k*-th pixel. When Gaussian measurement noise is also present, the Fisher information matrix becomes [6]

$$\mathbf{\text{I}}(\theta )=\sum _{k=1}^{{N}_{p}}{\left(\frac{\partial {\mu}_{\theta}(k,t)}{\partial \theta}\right)}^{T}\frac{\partial {\mu}_{\theta}(k,t)}{\partial \theta}\left({\int}_{\mathbb{R}}\frac{{\left({\sum}_{l=1}^{\infty}\frac{{[{\nu}_{\theta}(k,t)]}^{l-1}{e}^{-{\nu}_{\theta}(k,t)}}{(l-1)!}\cdot \frac{1}{\sqrt{2\pi}{\sigma}_{k}}{e}^{-\frac{1}{2}{\left(\frac{z-l-{\eta}_{k}}{{\sigma}_{k}}\right)}^{2}}\right)}^{2}}{{p}_{\theta ,k}(z)}dz-1\right),$$

(11)

where for *k* = 1, …, *N_{p}*, *ν_{θ}*(*k, t*) = *μ_{θ}*(*k, t*) + *β*(*k, t*), *η_{k}* and *σ_{k}* denote the mean and standard deviation, respectively, of the Gaussian measurement noise *W_{k}*, and *p_{θ,k}* denotes the probability density function of the data at the *k*-th pixel, which is given by

$${p}_{\theta ,k}(z)=\frac{1}{\sqrt{2\pi}{\sigma}_{k}}\sum _{l=0}^{\infty}\frac{[{\nu}_{\theta}\phantom{\rule{0.2em}{0ex}}{(k,t)]}^{l}{e}^{-{\nu}_{\theta}\phantom{\rule{0.2em}{0ex}}(k,t)}}{l!}{e}^{-\frac{1}{2}{\left(\frac{z-l-{\eta}_{k}}{{\sigma}_{k}}\right)}^{2}},\phantom{\rule{0.5em}{0ex}}z\in \mathbb{R}.$$

(12)
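Eq. (12) can be evaluated by truncating the infinite series; the sketch below does this in log space to avoid overflow in the factorial. The function name and the parameter values used in testing are illustrative assumptions.

```python
import math

def pixel_density(z, nu, eta, sigma, terms=200):
    """Probability density p_{theta,k}(z) of Eq. (12): a Poisson (mean
    nu > 0) photon count convolved with Gaussian (mean eta, std. dev.
    sigma) readout noise, with the series truncated at `terms` terms."""
    total = 0.0
    log_pois = -nu                         # log of nu**l * exp(-nu) / l!
    for l in range(terms):
        if l > 0:
            log_pois += math.log(nu) - math.log(l)
        total += math.exp(log_pois - 0.5 * ((z - l - eta) / sigma) ** 2)
    return total / (math.sqrt(2.0 * math.pi) * sigma)
```

Since it is a probability density, its numerical integral over a sufficiently wide range of *z* should be close to 1, which makes for a simple sanity check.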

The 3D g-PREM is then defined to be
${\delta}_{d,\mathit{\text{gprem}}}^{3D}=\sqrt{{[{\mathbf{\text{I}}}^{-1}(\theta )]}_{11}}$, where **I**(*θ*) is given by either Eq. (10) or Eq. (11), depending on whether measurement noise is present.
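As a toy illustration of the structure of Eq. (10), the sketch below computes the Fisher information for a pixelated, Poisson-noise-corrupted image, treating only the distance *d* as unknown so that the matrix reduces to a scalar. A 2D Gaussian profile is substituted for the Born and Wolf function purely to keep the example short; all function names and parameter values are hypothetical.

```python
import numpy as np

def pixel_means(d, photons=1000.0, sigma=0.3, n_pix=15, pix=0.1):
    """Mean photon counts mu[k] over an n_pix x n_pix grid (pixel size
    pix, um) for two point sources separated laterally by d (um), each
    modeled -- for illustration only -- as a 2D Gaussian spot."""
    c = np.arange(n_pix) * pix - (n_pix - 1) * pix / 2.0
    xx, yy = np.meshgrid(c, c)
    def spot(x0):
        g = np.exp(-((xx - x0) ** 2 + yy ** 2) / (2.0 * sigma ** 2))
        return g / g.sum()
    return photons * (spot(-d / 2.0) + spot(d / 2.0))

def fisher_d(d, beta=5.0, eps=1e-4):
    """Scalar version of Eq. (10): sum over pixels of (dmu/dd)^2 / (mu +
    beta), with the derivative taken by central differences."""
    mu = pixel_means(d)
    dmu = (pixel_means(d + eps) - pixel_means(d - eps)) / (2.0 * eps)
    return float(np.sum(dmu ** 2 / (mu + beta)))

delta_d = 1.0 / np.sqrt(fisher_d(0.2))  # lower bound on std. dev. of d, um
```

Increasing the Poisson noise level `beta` shrinks the Fisher information and therefore worsens (enlarges) the bound, in line with the discussion of noise effects below.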

Analogous to the derivation of the 3D FREM from the 3D g-FREM, the 3D PREM (
${\delta}_{d,\mathit{\text{prem}}}^{3D}$) is obtained by evaluating the 3D g-PREM for two like point sources that emit photons of the same wavelength and at the same constant detection rate. Furthermore, the image of each point source is assumed to be described by the Born and Wolf image function of Eq. (8). In other words, the 3D PREM is realized by evaluating *μ_{θ}*(*k, t*), *k* = 1, …, *N_{p}*, of Eq. (9) with a constant total photon detection rate and with the density function given by *f_{θ}* of Eq. (7).

Note that even with the assumptions made for the 3D PREM, the expected photon count Λ_{0} · (*t* − *t*_{0}), which appears in the expressions *μ_{θ}*(*k, t*), *k* = 1, …, *N_{p}*, can no longer be factored out of the Fisher information matrix as in Eq. (5), owing to the presence of the noise terms.

In the remainder of this section, we use the 3D PREM to illustrate how the accuracy for distance determination is affected by detector pixelation, noise, and magnification.

We recall from Section 2.2 that the framework on which the resolution measure is based records for each detected photon the spatial coordinates at which the photon hits the detector. In the ideal imaging scenario, we assume that the continuous area of the image detector allows the recording of these coordinates with arbitrary precision. When the detector area is pixelated, however, this arbitrary precision is lost since the spatial coordinates of a given photon can only be recorded with a precision that is up to the size of a single pixel. We therefore expect that the larger the pixel size, the worse the precision, and accordingly the higher the value of the resolution measure. On the other hand, as the pixel size becomes very small, the loss of precision becomes limited and the value of the resolution measure will approach that for the ideal scenario.

In a practical scenario, photons that arise from extraneous Poisson and/or Gaussian noise sources will deteriorate the quality of the signal in each pixel. Therefore, the presence of extraneous noise will always result in a poorer (larger) resolution measure. However, given a fixed region of interest (i.e., portion of the detector area), the precise effect of noise in conjunction with pixelation depends on how a particular type of noise is modeled. For example, in the case of Poisson noise such as that due to sample autofluorescence, the mean number of spurious photons over the entire region of interest can be modeled as a constant, regardless of the number of pixels that comprise the region. Therefore, given the same region of interest, and assuming uniform allocation of the spurious photons over the region, the mean number of spurious photons per pixel will be higher for a larger pixel size and lower for a smaller pixel size. In contrast, noise due to the detector readout process can be modeled as Gaussian noise with the same mean and standard deviation at each pixel, regardless of what the size of the pixel is. As we will see, for a fixed region of interest, these two types of noise sources introduce different degrees of deterioration to the resolution measure depending on the specific pixel size.
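The two noise models just described can be summarized in a few lines; the region size and noise levels below are illustrative assumptions.

```python
# Poisson noise (e.g., sample autofluorescence): a fixed total mean over
# the region of interest, allocated uniformly, so the per-pixel mean
# grows with the pixel area.
def poisson_mean_per_pixel(total_mean, roi_side=500.0, pixel_size=10.0):
    """Per-pixel mean of uniformly allocated Poisson noise over a square
    region of interest of side roi_side (um), for square pixels."""
    n_pixels = (roi_side / pixel_size) ** 2
    return total_mean / n_pixels

# Gaussian noise (e.g., detector readout): a fixed mean and standard
# deviation per pixel, independent of the pixel size.
READOUT_SIGMA = 6.0  # illustrative, same for every pixel size
```

Doubling the pixel size thus quadruples the Poisson noise mean per pixel while leaving the readout noise per pixel unchanged, which is why the two noise sources deteriorate the resolution measure differently as the pixel size varies.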

In Fig. 3a, we show the effects of detector pixelation and extraneous noise on the resolution measure. The point source pair is assumed to be the same as the one with the 200 nm separation distance that is considered in Fig. 2a, and its image is assumed to be centered on a 500 *μ*m by 500 *μ*m region of interest with square pixels. The flat line at the bottom of the plot is the value of the 3D FREM, and represents the best accuracy that is theoretically attainable for the given conditions. The remaining curves show the 3D PREM for different types and combinations of noise sources, in each case as a function of pixel size.

Dependence of the 3D resolution measure (3D PREM) on (a) the detector pixel size and (b) the magnification of the objective lens in the presence of different types and combinations of noise sources: noise-free (*), additive Poisson noise (e.g., sample **...**

When no extraneous noise sources are present and pixelation is the only deteriorating factor, the figure shows that, as discussed above, the PREM improves (decreases) and converges to the FREM as the pixel size is decreased. The convergence to the FREM is expected since in the limiting case of a pixel size of 0 *μ*m, the pixelated detector reduces to the continuous detector area assumed by the FREM. When additive Poisson noise with a constant mean over the entire region of interest is uniformly allocated to each pixel, the PREM also decreases with decreasing pixel size, but its value is consistently larger than that of the pixelation-only scenario. The figure confirms that the PREM does not converge to the FREM in this case: whereas the FREM assumes noise-free image acquisition, the PREM here entails a constant level of noise due to spurious photons.

The remaining two curves in Fig. 3a entail additive Gaussian noise with identical mean and standard deviation at any given pixel, regardless of the pixel size. When it is the only noise source present, the PREM decreases with decreasing pixel size as in the case of Poisson noise, but only down to a certain pixel size before it begins to increase as the pixel size decreases further. This is because while the number of photons collected from the point source pair continues to decrease in each pixel as the pixel size becomes smaller, the readout noise in each pixel remains unchanged and causes significant deterioration of the signal to noise ratio. The final PREM curve shows that the same U shape is observed when both Poisson and Gaussian noise sources are present. However, consistently larger PREM values are expected due to the additional Poisson noise.
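The small-pixel branch of this U shape can be illustrated with a toy signal-to-noise calculation (entirely our own construction, with made-up numbers; not the PREM computation of the paper): the brightest pixel collects signal roughly proportional to its area, while the Gaussian readout noise per pixel stays fixed.

```python
# Toy per-pixel SNR (our own simplified model, hypothetical numbers): the
# brightest pixel captures a signal fraction ~ pixel area over spot area,
# while the readout noise standard deviation per pixel is constant.
import math

def peak_pixel_snr(total_signal, pixel_size_um, psf_width_um, readout_sigma):
    """Toy SNR of the brightest pixel: captured fraction capped at 1;
    noise = Poisson shot noise plus constant Gaussian readout noise."""
    frac = min(1.0, (pixel_size_um / psf_width_um) ** 2)
    signal = total_signal * frac
    return signal / math.sqrt(signal + readout_sigma ** 2)

# 1000 detected photons, a 10 um-wide image spot, 6 electrons readout noise.
snr = {p: peak_pixel_snr(1000.0, p, 10.0, 6.0) for p in (0.5, 2.0, 8.0)}
# The per-pixel SNR collapses as the pixel shrinks and the constant readout
# noise dominates the few photons each pixel receives.
```

This toy model captures only the degradation at small pixel sizes; the deterioration at large pixel sizes comes from the loss of spatial information, which a single-pixel SNR does not reflect.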

The pixelation of the detector and the finite size of a region of interest also mean that the magnification of the objective lens has an important effect on the resolution measure. When the lateral magnification is small, the image of a closely spaced point source pair will largely be confined to a single pixel. In this case, very little information on the distance of separation can be extracted from the collected data. On the other hand, when the magnification is large, considerable portions of the image of the point source pair will fall outside of the region of interest. As a result, a significant amount of information on the distance of separation will be lost. Therefore, for both low and high magnifications, we should expect a poor accuracy for determining the distance of separation, and accordingly a large value for the resolution measure.

Fig. 3b shows the 3D PREM as a function of magnification for the same point source pair and the same types and combinations of extraneous noise sources as in Fig. 3a. The point source pair is again positioned such that its image is centered on the region of interest, which in this case is assumed to be a 21-by-21 array of 13 *μ*m by 13 *μ*m pixels. For all four types and combinations of noise sources shown, the same pattern is observed. As expected, the resolution measure takes on large values at small magnification values wherein the image of the point source pair is concentrated mostly in a single pixel. However, the value of the resolution measure quickly drops as the magnification is increased and the image of the point source pair is distributed across more pixels. A steady deterioration (i.e., increase) is then observed as the magnification continues to increase and more and more of the image of the point source pair falls outside of the region of interest. The flat line at the bottom of the plot is again the 3D FREM which represents the best theoretically attainable accuracy. Note that despite the magnification *M* being present in the density functions of Eqs. (4) and (7), it is canceled out by substitution when the Fisher information matrices for the 3D g-FREM and the 3D FREM are derived. The fundamental resolution measures are therefore independent of the magnification, and accordingly the 3D FREM is shown as a constant function of magnification in Fig. 3b.
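Both limits of the magnification behavior can be sketched with a crude stand-in model (our assumption: a 2D Gaussian in place of the actual image profile, with hypothetical widths):

```python
# Rough sketch of the two competing effects of the lateral magnification M
# described above, using a 2D Gaussian stand-in for the image profile (our
# assumption, with hypothetical widths; not the paper's PREM computation).
import math

def fraction_in_square(M, sigma_obj_um, halfwidth_um):
    """Fraction of a centered 2D Gaussian image (std = M * sigma_obj) that
    falls inside a square of the given half-width (separable in x and y)."""
    s = M * sigma_obj_um
    p = math.erf(halfwidth_um / (s * math.sqrt(2.0)))
    return p * p

# ROI: 21-by-21 array of 13 um pixels -> half-width 136.5 um; object-side
# image width sigma_obj = 0.1 um (hypothetical).
in_one_pixel_low_M = fraction_in_square(10, 0.1, 6.5)       # central pixel
lost_outside_roi_high_M = 1.0 - fraction_in_square(1000, 0.1, 136.5)
# At M = 10 nearly all photons land in the central pixel (little distance
# information); at M = 1000 a sizable fraction falls outside the ROI.
```

The qualitative conclusion matches the text: at low magnification the distance information is squeezed into one pixel, and at high magnification photons (and their information) are lost beyond the region of interest.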

In this section, we investigate the dependence of the 3D resolution measure on a point source pair's spatial orientation with respect to the optical axis as well as its location along that axis. These attributes are specific to a point source pair situated in 3D space, and one would expect both how the point source pair is oriented and its distance from the focal plane to have an impact on how accurately the distance of separation can be determined.

According to the classical resolution criteria, the optical microscope is better at resolving distances in 2D than in 3D. Consider again a point source pair that emits photons at a wavelength of *λ* = 655 nm. If both point sources are brought into the plane of focus (*ω* = 90°, *s_{z}* = 0 nm; see Fig. 1), then Rayleigh's criterion, given by 0.61*λ*/*n_{a}*, yields a minimum resolvable distance of approximately 276 nm for a numerical aperture of *n_{a}* = 1.45. The classical 3D resolution limit of 2*λn*/*n_{a}*^{2} (Eq. (1)), in contrast, yields 944 nm for the same pair when it is aligned parallel to the optical axis.

Our resolution measure predicts results that are analogous. In general, provided that all other parameters remain the same, a given distance of separation can be determined with the best accuracies when the orientation of the point source pair approaches the side-by-side configuration in an *xy*-plane. In contrast, the poorest accuracies can be expected when the orientation approaches the front-and-back configuration wherein the point sources are aligned parallel to the optical axis. This can be seen in Fig. 4a, which shows the dependence of the 3D PREM on the orientation angle *ω* for two point sources that are separated by 200 nm. The three curves shown differ only in the types of noise sources that are present. In all three cases, the side-by-side scenario (*ω* = 90°) corresponds to the smallest resolution measure and hence the best accuracy, while orientations approaching the front-and-back scenario (*ω* ≈ 0°) correspond to the highest resolution measures and hence the worst accuracies.

Dependence of the 3D resolution measure (3D PREM) on (a) the 3D orientation (angle *ω*) and (b) the axial location of a point source pair. (a) The resolution measures corresponding to three different types and combinations of noise sources are shown: **...**

Unlike the classical criteria, the resolution measure is able to account for intermediate orientations between the two special cases. In Fig. 4a, the curves show that as we rotate the point source pair out of and away from the *xy*-plane (decrease the angle *ω*), we begin to lose accuracy slowly but steadily until roughly an angle half way between the *xy*-plane and the optical axis (*ω* = 45°) is formed with the optical axis. Then, as the point source pair is rotated further towards the front-and-back orientation (*ω* < 45°), the deterioration in accuracy becomes significantly sharper before eventually leveling off at very small values of *ω*. By comparing the three curves, we can see that the presence of additive Poisson and/or Gaussian noise sources only makes the situation worse for *ω* < 45° as it significantly augments the deterioration of the resolution measure.

Intuitively, one would expect that the farther two point sources are from the focal plane, the more they will appear in the acquired image to be a single point source, and hence the more difficult it will be to determine the distance that separates them. This is corroborated by the curve shown in Fig. 4b, which illustrates how the value of the 3D PREM changes as a point source pair is moved along the optical axis from two microns below the focal plane to two microns above. The resolution measure is shown as a function of the axial position *z*_{01} of the first point source, and the focal plane is located at *z*_{01} = 0 nm. With the exception of axial locations within roughly half a micron from the focal plane, the curve shows that as the point source pair is moved away from the focal plane in either direction along the optical axis, the value of the resolution measure increases, and hence the accuracy for determining the distance of separation worsens.

The exception to the rule for axial locations within half a micron from the focal plane can be explained by the fact that the accuracy with which the axial position of a point source can be determined deteriorates drastically when the point source is near the focal plane [16, 17]. Since the problem of resolving two point sources can be viewed equivalently as determining the locations of the two point sources, this inability to accurately localize a near-focus point source has similar implications for the resolution problem. As shown in Fig. 4b, the ability to accurately determine the distance of separation is severely compromised by either of the two point sources being very close to the focal plane. While a relatively small but significant increase in the value of the resolution measure is already observed as the point source pair approaches the focal plane from within a half-micron distance, sharp deteriorations are seen around *z*_{01} = 0 nm and around *z*_{01} ≈ 141.42 nm. The former corresponds to the first point source being very close to the focal plane, and the latter corresponds to the second point source being very close to the focal plane. Note that the symmetry of the curve in Fig. 4b is due to the *z*-symmetry of the Born and Wolf 3D point spread function with respect to the focal plane, and is not generally expected for all point spread functions.
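The locations of these sharp deteriorations follow from simple geometry. A small check, consistent with the values quoted above, decomposes the 200 nm separation at *ω* = 45° into its axial and lateral components:

```python
# Geometric check, consistent with the values quoted above: for separation
# d and orientation angle omega (omega = 90 deg being the side-by-side,
# in-plane configuration), the axial offset between the two sources is
# d*cos(omega) and the lateral offset is d*sin(omega).
import math

def axial_lateral(d_nm, omega_deg):
    w = math.radians(omega_deg)
    return d_nm * math.cos(w), d_nm * math.sin(w)

axial, lateral = axial_lateral(200.0, 45.0)
# axial ~ 141.42 nm: when the curve places one point source at the focal
# plane, the other sits 141.42 nm away along the optical axis.
```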

The dashed curve in Fig. 4b represents the 3D PREM when the mean *β*(*k*, *t*) of the additive Poisson noise at each pixel (assumed to be the same for all pixels), as well as the photon detection rates Λ_{1} and Λ_{2} of the two point sources, are introduced as additional unknown parameters (i.e., PREM computed with a 9-by-9 Fisher information matrix). Similar to what we saw with the FREM in Fig. 2a, only a small deterioration of the PREM is observed, and hence the additional unknowns do not significantly impact the resolution measure. This serves as an example that by computing and comparing resolution measures for different sets of unknown parameters, one can quantitatively assess the significance of a parameter. For another example, given the conditions assumed by Fig. 4b, one may choose to exclude the possibly insignificant angle *ϕ* from the unknown parameter vector *θ*. For the point corresponding to *z*_{01} = 400 nm, for instance, the value of the PREM essentially remains at ±20.93 nm when *ϕ* is excluded from *θ* and a 5-by-5 Fisher information matrix is used instead.

In Section 2.3, we presented the 3D fundamental resolution measure (3D FREM) which makes important assumptions that are shared by the classical 3D resolution limit (Eq. (1)). Specifically, it assumes the objects of interest to be a pair of like point sources whose images are described by the Born and Wolf point spread function. However, whereas the 3D FREM expression of Eq. (5) applies generally to a point source pair with an arbitrary spatial orientation in 3D space, the classical 3D resolution limit assumes the two point sources to be oriented parallel to the optical axis (*ω* = 0°). Therefore, to make a direct comparison between the two, we evaluate the FREM with the assumption that the angle *ω* = 0° is known. We also discard the angle *ϕ* as one of the unknown parameters since it has no meaning when *ω* = 0°. This leaves us with the parameter vector *θ* = (*d, s_{x}, s_{y}, s_{z}*), for which the 3D FREM takes the form

$${\delta}_{d,\text{frem}}^{3D}=\sqrt{\frac{{\Gamma}_{0}^{3D}(d)}{\pi\cdot{\Lambda}_{0}\cdot(t-{t}_{0})}}\cdot\frac{\lambda n}{{n}_{a}^{2}},$$

(13)

where

$$\begin{array}{c}{\Gamma}_{0}^{3D}(d)=\frac{A(d)}{A(d)\cdot B(d)-{C}^{2}(d)},\\ A(d)={\int}_{{\mathbb{R}}^{2}}\frac{{\left(\left\{{H}_{1,c}^{+}\cdot{H}_{3,s}^{+}-{H}_{1,s}^{+}\cdot{H}_{3,c}^{+}\right\}+\left\{{H}_{1,c}^{-}\cdot{H}_{3,s}^{-}-{H}_{1,s}^{-}\cdot{H}_{3,c}^{-}\right\}\right)}^{2}}{{\left[{H}_{1,c}^{+}\right]}^{2}+{\left[{H}_{1,s}^{+}\right]}^{2}+{\left[{H}_{1,c}^{-}\right]}^{2}+{\left[{H}_{1,s}^{-}\right]}^{2}}\,dx\,dy,\\ B(d)={\int}_{{\mathbb{R}}^{2}}\frac{{\left(\left\{{H}_{1,c}^{+}\cdot{H}_{3,s}^{+}-{H}_{1,s}^{+}\cdot{H}_{3,c}^{+}\right\}-\left\{{H}_{1,c}^{-}\cdot{H}_{3,s}^{-}-{H}_{1,s}^{-}\cdot{H}_{3,c}^{-}\right\}\right)}^{2}}{{\left[{H}_{1,c}^{+}\right]}^{2}+{\left[{H}_{1,s}^{+}\right]}^{2}+{\left[{H}_{1,c}^{-}\right]}^{2}+{\left[{H}_{1,s}^{-}\right]}^{2}}\,dx\,dy,\\ C(d)={\int}_{{\mathbb{R}}^{2}}\frac{{\left({H}_{1,c}^{+}\cdot{H}_{3,s}^{+}-{H}_{1,s}^{+}\cdot{H}_{3,c}^{+}\right)}^{2}-{\left({H}_{1,c}^{-}\cdot{H}_{3,s}^{-}-{H}_{1,s}^{-}\cdot{H}_{3,c}^{-}\right)}^{2}}{{\left[{H}_{1,c}^{+}\right]}^{2}+{\left[{H}_{1,s}^{+}\right]}^{2}+{\left[{H}_{1,c}^{-}\right]}^{2}+{\left[{H}_{1,s}^{-}\right]}^{2}}\,dx\,dy,\\ {H}_{\beta,c}^{\pm}={\int}_{0}^{1}{J}_{0}\left(\sqrt{{x}^{2}+{y}^{2}}\,\rho\right)\cos\left(\frac{\pi{n}_{a}^{2}\left({s}_{z}\pm\frac{d}{2}\right)}{n\lambda}{\rho}^{2}\right){\rho}^{\beta}\,d\rho,\\ {H}_{\beta,s}^{\pm}={\int}_{0}^{1}{J}_{0}\left(\sqrt{{x}^{2}+{y}^{2}}\,\rho\right)\sin\left(\frac{\pi{n}_{a}^{2}\left({s}_{z}\pm\frac{d}{2}\right)}{n\lambda}{\rho}^{2}\right){\rho}^{\beta}\,d\rho.\end{array}$$

Note that like the classical 3D resolution limit, the 3D FREM of Eq. (13) is a product involving the term $\lambda n/{n}_{a}^{2}$. However, whereas in the classical criterion the term $\lambda n/{n}_{a}^{2}$ is multiplied by the constant 2, in the 3D FREM it is multiplied by a more complex factor that depends nonlinearly on the expected photon count ${\Lambda}_{0}\cdot(t-{t}_{0})$ and the function ${\Gamma}_{0}^{3D}(d)$. Through the function ${\Gamma}_{0}^{3D}(d)$, the 3D FREM is nonlinearly dependent on the separation distance *d* and on the term $\lambda n/{n}_{a}^{2}$ itself, which occurs in reciprocal form in the inner integrals ${H}_{\beta,c}^{\pm}$ and ${H}_{\beta,s}^{\pm}$.
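As a numerical sanity check on these expressions, the sketch below (quadrature of our own, not the authors' FandPLimitTool implementation; the image coordinates are treated as already being in the normalized units of the integrand) evaluates the inner integrals and verifies that at *s_{z}* = 0 the cosine integrals for the two sources coincide while the sine integrals differ only in sign, since the phase argument π*n_{a}*^{2}(*s_{z}* ± *d*/2)/(*nλ*) flips sign.

```python
# Stdlib-only evaluation of the inner integrals H of Eq. (13); image
# coordinates are assumed to be in the normalized units of the integrand.
import math

def simpson(f, a, b, n=200):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def j0(x):
    """Bessel function J0 via its integral representation."""
    return simpson(lambda t: math.cos(x * math.sin(t)), 0.0, math.pi) / math.pi

def H(beta, trig, sign, r, s_z, d, n_a=1.45, n=1.515, lam=655e-9):
    """H^{sign}_{beta,trig} at radial coordinate r ('c' -> cos, 's' -> sin)."""
    phase = math.pi * n_a ** 2 * (s_z + sign * d / 2.0) / (n * lam)
    fn = math.cos if trig == "c" else math.sin
    return simpson(lambda rho: j0(r * rho) * fn(phase * rho ** 2) * rho ** beta,
                   0.0, 1.0)

# Symmetry check at s_z = 0 for d = 200 nm at a sample image coordinate:
# cos is even and sin is odd in the phase, which flips sign between sources.
r, d = 2.0, 200e-9
hc_plus, hc_minus = H(1, "c", +1, r, 0.0, d), H(1, "c", -1, r, 0.0, d)
hs_plus, hs_minus = H(1, "s", +1, r, 0.0, d), H(1, "s", -1, r, 0.0, d)
```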

We recall the example from Section 1 where a fluorophore pair emitting at a wavelength of *λ* = 655 nm is imaged using an objective lens of numerical aperture *n_{a}* = 1.45 and an immersion oil of refractive index *n* = 1.515.

By definition, the 3D resolution measure is a lower bound on the standard deviation of the distance estimates of any unbiased estimator. In other words, it is the best (i.e., the smallest) standard deviation that can be expected for a set of distance estimates obtained using an unbiased estimation algorithm. Here, we show that the maximum likelihood estimator is capable of achieving the accuracy predicted by the resolution measure. We demonstrate this by performing maximum likelihood estimations on simulated images of point source pairs in 3D space. For each data set, we calculate the standard deviation of the estimated distances of separation, and show that it comes close to the corresponding 3D PREM. We also show, by way of these estimations, that distances well below the classical 3D resolution limit can in fact be determined with the desired accuracy. In what follows, we use the notation of Section 3 in our descriptions.

Maximum likelihood estimation was performed on data sets that are deteriorated by different noise sources. For a noise-free image or one where only additive Poisson noise was present, the maximum likelihood estimation was realized by maximizing the log-likelihood function for Poisson-distributed data. This function is given by

$$\ln\big(\mathcal{L}(\theta\mid{z}_{1},\dots,{z}_{{N}_{p}})\big)=\sum_{k=1}^{{N}_{p}}\ln\left(\frac{{[{\nu}_{\theta}(k,t)]}^{{z}_{k}}{e}^{-{\nu}_{\theta}(k,t)}}{{z}_{k}!}\right)=\sum_{k=1}^{{N}_{p}}\big({z}_{k}\ln[{\nu}_{\theta}(k,t)]-{\nu}_{\theta}(k,t)-\ln({z}_{k}!)\big),$$

(14)

where *N_{p}* is the number of pixels in the pixel array, and for *k* = 1, …, *N_{p}*, *z_{k}* denotes the photon count detected at the *k*^{th} pixel and *ν_{θ}*(*k, t*) denotes its mean.
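The following toy example illustrates maximizing Eq. (14). It is a deliberately simplified stand-in of our own: 1D Gaussian-shaped spot profiles instead of the Born and Wolf point spread function, a coarse grid search instead of the authors' MATLAB optimization, and hypothetical photon counts.

```python
# Toy Poisson maximum likelihood estimation of the separation d of two
# spots (our simplified 1D stand-in, not the paper's implementation).
import math, random

random.seed(0)
PIX = [float(k - 7) for k in range(15)]       # 15 pixel centers (a.u.)
SIGMA, PHOTONS, D_TRUE = 1.5, 20000.0, 2.0    # spot width, total photons

def mean_counts(d):
    """nu_theta(k, t): expected photon count in pixel k for separation d."""
    def spot(x0):
        return [math.exp(-(x - x0) ** 2 / (2 * SIGMA ** 2)) for x in PIX]
    a, b = spot(-d / 2), spot(d / 2)
    tot = sum(a) + sum(b)
    return [PHOTONS * (u + v) / tot for u, v in zip(a, b)]

def poisson_sample(mu):
    """Draw a Poisson variate (normal approximation for large means)."""
    if mu > 50:
        return max(0, round(random.gauss(mu, math.sqrt(mu))))
    threshold, k, p = math.exp(-mu), 0, 1.0   # Knuth's algorithm
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

z = [poisson_sample(mu) for mu in mean_counts(D_TRUE)]

def loglike(d):
    # ln L of Eq. (14), dropping the d-independent -ln(z_k!) term.
    return sum(zk * math.log(mu) - mu for zk, mu in zip(z, mean_counts(d)))

d_hat = max((0.5 + 0.01 * i for i in range(350)), key=loglike)
```

With a sufficient photon budget, the grid-search maximizer lands close to the true separation, mirroring the behavior reported for the full 3D estimations.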

For an image where additive Gaussian noise was present, potentially along with additive Poisson noise, the maximum likelihood estimation was realized by maximizing the log-likelihood function

$$\ln\big(\mathcal{L}(\theta\mid{z}_{1},\dots,{z}_{{N}_{p}})\big)=\sum_{k=1}^{{N}_{p}}\ln\big({p}_{\theta,k}({z}_{k})\big),$$

(15)

where for *k* = 1, …, *N_{p}*, *p_{θ,k}* denotes the probability density of the data *z_{k}* at the *k*^{th} pixel, given by the convolution of the Poisson distribution of the detected photons with the Gaussian distribution of the readout noise.
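For Poisson-distributed photons corrupted by additive Gaussian readout noise, such a density takes the form of a Poisson probability mass function convolved with a Gaussian density. The sketch below is our own construction of this standard model (the excerpt does not quote the formula itself) and checks that the density integrates to approximately one.

```python
# Per-pixel density for Poisson photons plus Gaussian readout noise:
# a Poisson pmf convolved with a Gaussian pdf (our own construction).
import math

def p_pixel(z, nu, eta, sigma, qmax=100):
    """Density of observing value z at a pixel with Poisson mean nu and
    Gaussian readout noise of mean eta and standard deviation sigma."""
    total = 0.0
    pois = math.exp(-nu)                  # P(q = 0)
    norm = sigma * math.sqrt(2.0 * math.pi)
    for q in range(qmax):
        gauss = math.exp(-(z - q - eta) ** 2 / (2.0 * sigma ** 2)) / norm
        total += pois * gauss
        pois *= nu / (q + 1)              # recursive Poisson pmf update
    return total

# The density should integrate to ~1 over z (left Riemann sum on a grid).
dz = 0.05
mass = sum(p_pixel(-20.0 + i * dz, nu=5.0, eta=0.0, sigma=2.0) * dz
           for i in range(1200))
```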

We performed estimations on six sets of simulated data which differ in parameters such as the distance of separation, the spatial orientation, and the types of noise sources that are present (see Table 1). For all data sets, the generation of simulated images in the form of 15-by-15 pixel arrays and the subsequent estimation and analysis were carried out using the technical programming language of MATLAB (The MathWorks, Inc., Natick, MA) and its optimization toolbox. Specifically, we made use of the core functionalities that are part of the software packages EstimationTool [18] and FandPLimitTool [19].

We note that in the software implementation of the maximum likelihood estimator, we minimize the negative of the log-likelihood functions of Eqs. (14) and (15). Also, for data which were either noise-free (data set 1) or contained only additive Poisson noise (data sets 2 and 6), the term − ln(*z_{k}*!) in Eq. (14) was dropped because it depends only on the simulated photon count *z_{k}* and therefore has no effect on the maximization over the parameter vector *θ*.

Each image was simulated as follows. Each element *k* of a 15-by-15 pixel array was assigned the mean photon count detected at the *k*^{th} pixel from the point source pair, as given by the sum of two Born and Wolf 3D point spread functions (see Eq. (8)). In other words, we computed the mean *ν_{θ}*(*k, t*) for each pixel, generated the photon count at that pixel as a Poisson random variable with this mean, and added extraneous Poisson and/or Gaussian noise where the simulated scenario called for it.

For each data set, five hundred simulated images were generated as described, and estimations were performed using the appropriate maximum likelihood estimator. For the first four data sets, all six parameters that describe the geometry of a point source pair in 3D space (Section 2.1) were assumed to be unknown and estimated. However, for data sets 5 and 6, only the distance *d* and the coordinates *s_{x}, s_{y}*, and *s_{z}* were estimated, since for these data sets the orientation *ω* = 0° was assumed to be known and the angle *ϕ* is undefined for this orientation.

We note that due to the Born and Wolf point spread function's *z*-symmetry with respect to the focal plane *z* = 0 nm, the image produced by two point sources with axial positions *z*_{01} and *z*_{02} will be identical to the image generated when the axial position of point source 1 is changed to −*z*_{01} and/or the axial position of point source 2 is changed to −*z*_{02}. If both *z*_{01} and *z*_{02} are nonzero, then the flipping of exactly one of the point sources with respect to the focal plane will result in a different separation distance *d* despite the image remaining the same. Consequently, estimations performed on a given image can potentially yield different distance estimates. For our estimations here, we have assumed that both point sources are located above the focal plane. In practice, this ambiguity does not present a problem if the investigator knows that the point sources of interest are always confined to one side of the focal plane (e.g., placing the focal plane at the plasma membrane of a cell so that the interior of the cell is entirely above or below the focal plane). We stress that this ambiguity exists only within the context of the actual distance estimation, and that it is not an issue when it comes to the theory or computation of the resolution measure.
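A numeric illustration of this ambiguity (with toy numbers of our own): under a *z*-symmetric point spread function, mirroring exactly one source across the focal plane leaves the image unchanged but changes the implied separation.

```python
# z-flip ambiguity under a z-symmetric PSF: identical images, different d
# (our own toy numbers; a 100 nm lateral offset between the sources).
import math

def separation(lateral_nm, z1_nm, z2_nm):
    return math.hypot(lateral_nm, z2_nm - z1_nm)

d_original = separation(100.0, 300.0, 500.0)    # both sources above focus
d_flipped = separation(100.0, -300.0, 500.0)    # source 1 mirrored to -300 nm
# The two geometries produce identical images under z-symmetry, yet imply
# separations of about 223.6 nm and 806.2 nm, respectively.
```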

Table 1 shows that in each of the six scenarios, the mean of the distance estimates closely matches the true distance of separation. In addition, in each data set the standard deviation of the distance estimates comes close to the resolution measure (3D PREM). These results therefore demonstrate that the maximum likelihood estimator is capable of achieving the accuracy predicted by the resolution measure.

Each of the first three data sets entails two point sources that are separated by a distance of *d* = 200 nm and form a 45° angle with the positive optical axis (*ω* = 45°). These three scenarios differ only in the noise sources that are present during image acquisition. Data set 1 is noise-free, while data set 3 has the highest level of noise. Therefore, of the first three data sets, data set 1 has the best accuracy with a resolution measure of ±12.12 nm and a standard deviation of ±12.50 nm, whereas data set 3 has the worst accuracy with a resolution measure of ±20.18 nm and a standard deviation of ±19.35 nm.

Data set 4, in conjunction with data set 3, provides an example of a tradeoff between two attributes of a point source pair: the distance of separation and the orientation. The parameters of data sets 3 and 4 are identical in every way except in two respects. While data set 4 has a smaller, less accurately estimable separation distance of *d* = 170 nm (Section 2.5), it also has a more favorable orientation of *ω* = 55° (Section 4.1). As a result, the smaller distance of 170 nm can be estimated with the same level of accuracy as for data set 3 (i.e., in data sets 3 and 4, the standard deviation is approximately ±10% of the respective mean distance).

Data set 5 involves a point source pair that is aligned parallel to the optical axis (*ω* = 0°). This orientation is the one assumed by the classical 3D criterion and is the most difficult to resolve (Section 4.1). As a result, a much larger number of photons (50000 per point source) must be collected (Section 2.4) in order to obtain an approximately ±10% level of accuracy. Data set 6 shows for this special orientation that a distance of separation four times smaller (*d* = 50 nm) can also be estimated with an approximately ±10% level of accuracy, but at the cost of detecting 100 times the number of photons (5000000) from each point source. Importantly, according to the classical 3D resolution limit, the smallest resolvable distance given the experimental parameters used here is 944 nm. Therefore, data sets 5 and 6 serve as examples that, given enough photons from the point source pair, distances significantly smaller than indicated by the classical limit can be determined accurately.
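The 944 nm figure can be verified by direct arithmetic from the classical 3D limit and the imaging parameters used throughout this section (we take *n* = 1.515, a typical immersion-oil refractive index):

```python
# Arithmetic check of the 944 nm classical 3D resolution limit quoted
# above: 2*lambda*n/n_a^2 with lambda = 655 nm, n = 1.515, n_a = 1.45.
lam_nm, n, n_a = 655.0, 1.515, 1.45
classical_3d_limit_nm = 2.0 * lam_nm * n / n_a ** 2
# ~944 nm, nearly 19 times the 50 nm separation estimated in data set 6.
```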

Using the tools of information theory, we have proposed a 3D resolution measure which approaches resolution from the perspective of how accurately a given distance between two objects in 3D space can be determined. Based on a mathematical framework that allows the calculation of resolution measures for different imaging conditions, we have considered two important realizations that, like the classical 3D resolution limit, assume the imaging of two like point sources whose images are given by the Born and Wolf 3D point spread function. The 3D fundamental resolution measure (3D FREM) is specific to noise-free imaging with a detector of infinite and continuous area, while the 3D practical resolution measure (3D PREM) accounts for the practical scenario where the data is complicated by the presence of extraneous noise sources and the detector is pixelated and of finite size.

Using the 3D FREM and the 3D PREM, we have illustrated the dependence of the resolution measure on various attributes of a point source pair such as its orientation, and on experimental settings such as the lateral magnification of the microscope. Importantly, the resolution measure shows that by detecting a sufficient number of photons from a point source pair, arbitrarily small distances of separation can be estimated with prespecified accuracy. This result implies that distances smaller than indicated by the 3D classical criterion can be determined accurately, and we have shown this to be the case with a comparison of the mathematical formulae as well as with estimations using simulated images. The resolution measure is by definition a lower bound on the accuracy (i.e., standard deviation) with which a given distance can be determined using any unbiased estimator, and our estimations have also shown that the maximum likelihood estimator is capable of attaining this bound.

Our mathematical framework is such that it allows the customization of the unknown parameter vector *θ*. Though we have presented a vector comprising the six unknown parameters that describe the 3D geometry of an object pair, in practice the precise parameters to include in the vector should depend on what is actually known and unknown in a particular circumstance. As we have shown with the extension of *θ* to include the photon detection rates Λ_{1} and Λ_{2} of two point sources (Sections 2.4 and 4.2) and the mean *β*(*k, t*) of the Poisson (background) noise (Section 4.2), other parameters such as the mean *η_{k}* of the Gaussian (detector readout) noise at the *k*^{th} pixel can likewise be included when they are unknown in a given experiment.

In our illustration of the resolution measure, we have used the Born and Wolf 3D point spread function to describe the image of a point source as it is the model from which the classical 3D resolution limit is derived. In practice, however, the resolution measure should be computed using a point spread function that best models the acquired data. For example, in the presence of refractive index mismatches between sample, immersion medium, etc., the Gibson-Lanni model [13] or a vectorial model (e.g., [14, 15]) might be more suitable.

This work was supported in part by the National Institutes of Health (R01 GM071048 and R01 GM085575).


1. Walter NG, Huang C, Manzo AJ, Sobhy MA. Nat Methods. 2008;5:475–489. [PMC free article] [PubMed]

2. Moerner WE. Proc Natl Acad Sci USA. 2007;104:12596–12602. [PubMed]

3. Born M, Wolf E. Principles of Optics. Cambridge University Press; Cambridge, UK: 1999.

4. Pluta M. Advanced light microscopy, vol. 1: principles and basic properties. Elsevier; Amsterdam: 1988.

5. Inoué S. Foundations of confocal scanned imaging in light microscopy. In: Pawley JB, editor. Handbook of Biological Confocal Microscopy. 3rd. Springer Science+Business Media, LLC; New York: 2006.

6. Ram S, Ward ES, Ober RJ. Multidim Syst Sig Process. 2006;17:27–57.

7. Ober RJ, Ram S, Ward ES. Biophys J. 2004;86:1185–1200. [PubMed]

8. Ram S, Ward ES, Ober RJ. Proc Natl Acad Sci USA. 2006;103:4457–4462. [PubMed]

9. Heintzmann R, Ficz G. Methods Cell Biol. 2007;81:561–580. [PubMed]

10. Conchello J. J Opt Soc Am A. 1998;15:2609–2619. [PubMed]

11. Rao CR. Linear statistical inference and its applications. Wiley; New York, USA: 1965.

12. Snyder DL, Miller MI. Random point processes in time and space. 2nd. Springer Verlag; New York, USA: 1991.

13. Gibson SF, Lanni F. J Opt Soc Am A. 1992;9:154–166. [PubMed]

14. Török P, Varga P. Appl Opt. 1997;36:2305–2312. [PubMed]

15. Haeberlé O. Opt Commun. 2003;216:55–63.

16. Ram S, Ward ES, Ober RJ. Proc SPIE. 2005;5699:426–435. [PMC free article] [PubMed]

17. Aguet F, Van De Ville D, Unser M. Opt Express. 2005;13:10503–10522. [PubMed]

18. EstimationTool. www4.utsouthwestern.edu/wardlab/estimationtool.

19. FandPLimitTool. www4.utsouthwestern.edu/wardlab/fandplimittool.

20. Lidke KA, Rieger B, Jovin TM, Heintzmann R. Opt Express. 2005;13:7052–7062. [PubMed]

21. Janesick JR. Scientific charged-coupled devices. SPIE Press; Bellingham, USA: 2001.

22. Kalaidzidis Y. Eur J Cell Biol. 2007;86:569–578. [PubMed]
