Opt Eng. Author manuscript; available in PMC 2010 January 4.

Published in final edited form as:

Opt Eng. 2006; 45(9): 94001.

doi: 10.1117/1.2352732

PMCID: PMC2801428

NIHMSID: NIHMS23057

Indiana University, School of Optometry, Bloomington, Indiana 47405-3907

E-mail: sbarber2@indiana.edu


Wavefront reconstruction from the transport-of-intensity equation (TIE) is a well-posed inverse problem given smooth signals and appropriate boundary conditions. In practice, however, experimental errors render the problem ill-conditioned. We present a quantitative analysis of the effects of experimental errors using simulations and experimental tests, and show the relative importance of numerical, misalignment, quantization, and photodetection errors. We demonstrate that reducing photodetection noise by wavelet filtering significantly improves the accuracy of wavefront reconstruction from both simulated and experimental data.

In wavefront propagation through homogeneous media, the amplitude and phase of a wave are related by a partial differential equation first derived by A. Sommerfeld and I. Runge in 1911.^{1} The paraxial approximation to this equation is called the transport-of-intensity equation (TIE). In the early 1980s, Teague^{2} and Streibl^{3} proposed using TIE as a phase retrieval technique. Today, TIE is widely used in applications such as wavefront sensing in astronomy^{4} and optical metrology,^{5}^{,}^{6} and as a phase-imaging technique in optical^{7} and electron^{8} microscopy, x rays,^{9} and neutron beams.^{10}

The theory of partial differential equations establishes that TIE is a well-posed technique for phase retrieval (i.e., a unique solution is known to exist), provided the irradiance is a smooth, positive function with appropriate boundary conditions. In practice, however, TIE may become ill-conditioned due to experimental errors (e.g., measurement errors, misalignment, photodetection noise). One way to reduce the effects of some of these errors is to increase the separation between the planes where the irradiance is measured, which provides the input data for TIE calculations. Recently, an analytical procedure for determining the optimum location of the measurement planes has been proposed.^{11} However, two important errors introduced by large plane separations must be considered. First, the partial derivative of the irradiance is evaluated using the finite-difference approximation, whose associated error increases with the separation between planes. Second, the assumption (implicit in TIE) that the wavefront propagates paraxially between planes might not be valid over large propagation distances in the case of highly aberrated wavefronts. Therefore, in practice, the accuracy in solving TIE is limited by this error trade-off in the separation between planes.

In this work, we study systematically the accuracy of phase reconstruction due to numerical and experimental errors. We show that image denoising techniques applied to irradiance measurements reduce the need for increased separation between planes, thereby avoiding the associated errors. Quantitative error analysis is presented for experimental data and simulations.

The paper is organized as follows: Section 2 introduces the numerical method used to solve TIE and its associated error. Section 3 describes the experimental setup. Section 4 summarizes the experimental error analysis and modeling. Section 5 describes the image denoising technique and our quantitative results of its application. Section 6 gives a brief discussion of the general results.

We adopt *Principles of Optics*^{12} notation. A monochromatic optical field in free space is represented by *U* = *u*(**r**)exp[i*k*_{0}*s*(**r**)], where *s*(**r**) is the eikonal function [level surfaces of *s*(**r**) define wavefronts] and *k*_{0} is the wave number in a vacuum. The intensity of the optical field (i.e., the modulus of the complex field) is represented as *I*, while the irradiance (i.e., the energy flux through unit area in the detector plane) is represented as *E*, where *E* = *I* ∂*s*/∂*z*. In the paraxial regime, the irradiance and the eikonal satisfy TIE:

∇_{⊥} · (*E* ∇_{⊥}*s*) = −∂*E*/∂*z*.

(1)

Here ∇_{⊥} = **x̂** ∂_{x} + **ŷ** ∂_{y} is the transverse gradient operator.

It should be noted also that TIE is strictly valid only for pure monochromatic light. However, in practice, TIE might be also valid for real, partially coherent sources (like the diode laser used in our experiment) if a “generalized” or average eikonal is defined—for further discussion, see Paganin et al.^{14}^{,}^{15}

To solve TIE, Eq. (1), we need to define boundary conditions. For the wavefront test used here, it is acceptable to use homogeneous Neumann conditions. In general, however, the definition of the boundary conditions is an important issue to consider carefully when solving TIE, as pointed out by Rubinstein et al.^{13} Experimentally, the irradiance is measured at a set of discrete “points” (e.g., CCD pixel locations), and the partial derivative of the irradiance is then evaluated using finite differences between two planes. Hence, TIE can be solved only with a numerical scheme. The typical numerical scheme associates the solution of a partial differential equation with the solution of an optimization problem by means of the calculus of variations^{16}—the so-called weak formulation of the partial differential equation. Specifically, we use a finite element method (FEM), for which a weak formulation of Eq. (1) is easy to compute^{16}:

∫_{Ω} *E* ∇_{⊥}*s* · ∇_{⊥}*ψ* dΩ = ∫_{Ω} (∂*E*/∂*z*) *ψ* dΩ,

(2)

where Ω is the domain where the irradiance is positive, ∂Ω is the boundary of Ω, and *ψ* is the so-called test function (with homogeneous Neumann conditions, the boundary integral over ∂Ω vanishes). FEM converts the variational formulation into a linear algebraic problem.

In practice, this is done in three steps:

- Generate a triangular mesh inside the domain Ω. This was done with the MATLAB PDE Toolbox (version 7.0).
- Discretize Eq. (2) by means of a barycentric quadrature rule on the generated mesh. The result is a linear system *Ks* = *F*, where *K* is called the stiffness matrix, *F* is the force vector containing the boundary data information, and *s* is the solution for the eikonal at the mesh points. This step (called assembly) is also done in MATLAB.
- Finally, the linear system *Ks* = *F* is solved by a least-squares procedure.
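The three steps above can be sketched numerically. The following Python fragment is a stand-in for the MATLAB FEM assembly described in the text: it uses a five-point finite-difference stencil on a square grid rather than a triangular mesh, and assumes a uniform irradiance *E*_{0}, so it illustrates the structure of the *Ks* = *F* system rather than reproducing the actual implementation.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def solve_tie_uniform(dEdz, E0, h):
    """Solve grad^2 s = -(1/E0) * dE/dz on an n-by-n grid with
    homogeneous Neumann boundaries (finite-difference stand-in for
    the FEM assembly; uniform irradiance E0 is assumed)."""
    n = dEdz.shape[0]
    K = lil_matrix((n * n, n * n))
    F = (-dEdz / E0 * h ** 2).ravel()
    for i in range(n):
        for j in range(n):
            k = i * n + j
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    K[k, k] -= 1.0              # 5-point Laplacian stencil
                    K[k, ii * n + jj] += 1.0
                # Missing neighbors are mirrored across the boundary,
                # which enforces a zero normal derivative (Neumann).
    # K is singular (s is defined only up to an additive constant),
    # so the system is solved by least squares, as in the text.
    s = lsqr(K.tocsr(), F, iter_lim=10000)[0].reshape(n, n)
    return s - s.mean()                          # fix the free constant
```

Because the Neumann problem determines *s* only up to an additive constant, the stiffness matrix is singular; a least-squares solver handles this naturally, which is one reason to solve *Ks* = *F* by least squares rather than by direct factorization.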

The numerical solution carries an unavoidable bound on the error. Theoretically, under the well-posed condition, this error is proportional to the square of the maximum diameter of the mesh triangles.^{17} We computed this bound error—as a root-mean-square error (RMSE)—for different mesh sizes, and the results are shown in Fig. 1. Finer meshes produce less error; however, as the mesh triangles become smaller, the computing time increases, becoming impractical for very fine meshes. Hence, there is a trade-off between the error tolerance and the computing time. It is desirable for the numerical bound error to lie considerably below the associated experimental errors. In our case, this condition is satisfied for a maximum triangle diameter of 35.7 *μ*m, for which the associated RMSE is 5.2 × 10^{−6} *μ*m. We use this value as the theoretical bound error.

The experimental setup was designed to produce a Fraunhofer diffraction pattern of a circular aperture. A diode laser with a nominal wavelength of 0.635 *μ*m (#S1FC635, Thorlabs, Inc.) and a fiber collimation package (#F220FC-B, Thorlabs, Inc.) produced a Gaussian beam with a waist radius (1/*e*^{2}) of 1.11 mm (manufacturer's data). A pinhole (#04PPM039, Melles Griot, Inc.) of 30±2-*μ*m diameter was located ~100 mm from the diode laser. According to Gaussian beam theory, the beam waist can be considered quasiplanar at the pinhole plane, with irradiance variations of less than 0.04%; therefore, it is reasonable to assume uniform irradiance and phase across the circular aperture. A CCD sensor (#ICX205AK, Sony, Inc.) mounted in a cooled monochrome camera (CoolSNAP, Roper Scientific, Inc.) was located 110 mm from the pinhole plane. Since this distance is large compared to the pinhole size, the CCD records the Fraunhofer diffraction pattern of a circular aperture. The irradiance of this pattern is the Airy disk, and the eikonal is *r*^{2}/2*z* (*r* is the radial coordinate, and *z* is the distance between the pinhole plane and the CCD plane) for the domain inside the first ring of the Airy disk. Figure 2 shows the theoretical irradiance distribution for an Airy disk and the finite difference in the irradiance between two planes separated by 0.2 mm.

(a) Simulated Airy disk irradiance distribution in ADU. (b) Simulated finite difference between the Airy irradiance patterns of two planes separated by 0.2 mm.

This simple setup permits us to analyze the error in the eikonal reconstruction inherent to all possible setups using a CCD. Also, it is important to note that, in this case, the propagation of the wavefront is highly paraxial.
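For reference, the inputs of Fig. 2 can be reproduced in a few lines. The sketch below (Python with SciPy; the overall normalization is arbitrary, and the 1/*z*^{2} on-axis falloff is an assumption of this simple Fraunhofer model) evaluates the Airy irradiance at the two measurement planes and their finite difference:

```python
import numpy as np
from scipy.special import j1   # first-order Bessel function

lam = 0.635e-3   # wavelength, mm (laser nominal value from the text)
a = 0.015        # pinhole radius (30-um diameter), mm
z0 = 110.0       # pinhole-to-CCD distance, mm
dz = 0.2         # separation between measurement planes, mm

def airy_irradiance(r, z):
    """Fraunhofer (Airy) irradiance of a circular aperture at radius r
    and distance z; arbitrary normalization, 1/z^2 on-axis falloff."""
    x = 2 * np.pi / lam * a * r / z
    x = np.where(x == 0.0, 1e-12, x)     # avoid 0/0 on the axis
    return (2 * j1(x) / x) ** 2 / z ** 2

r = np.linspace(0.0, 3.5, 701)           # radial coordinate, mm
I1 = airy_irradiance(r, z0 - dz / 2)
I2 = airy_irradiance(r, z0 + dz / 2)
dIdz = (I2 - I1) / dz                    # central-difference input to TIE
eikonal = r ** 2 / (2 * z0)              # defocus eikonal inside first ring
```

The first dark ring falls near r = 1.22 λz/D ≈ 2.84 mm, which sets the domain Ω inside which the defocus eikonal *r*^{2}/2*z* applies.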

Henceforward, we characterize the error in the irradiance and in the irradiance difference using the signal-to-noise ratio (SNR), defined as the *l*^{2} norm of the signal divided by the *l*^{2} norm of the noise. For the error in the reconstructed eikonal, we use the ratio between the RMSE and the theoretical bound RMSE found in Sec. 2.2—what we call the normalized RMSE. Conceptually, the normalized RMSE describes how far we are from the best possible eikonal reconstruction. We also use the absolute error (the difference between the theoretical and estimated eikonal values) in Fig. 3.
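Both error metrics are straightforward to compute. A minimal Python sketch of the definitions just given (the theoretical bound RMSE is passed in as a parameter rather than derived):

```python
import numpy as np

def snr(signal, noisy):
    """SNR as defined in the text: the l2 norm of the signal divided
    by the l2 norm of the noise (here, noisy minus signal)."""
    return np.linalg.norm(signal) / np.linalg.norm(noisy - signal)

def normalized_rmse(estimate, truth, rmse_bound):
    """Reconstruction RMSE divided by the theoretical numerical bound
    of Sec. 2, i.e., the distance from the best possible reconstruction."""
    return np.sqrt(np.mean((estimate - truth) ** 2)) / rmse_bound
```

A normalized RMSE of 1 would mean the reconstruction is limited only by the numerical bound error; larger values quantify the extra degradation from experimental errors.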

The partial derivative of the irradiance, ∂*I*/∂*z*, is evaluated with the central difference approximation:

∂*I*/∂*z* ≈ [*I*(*x*, *y*, *z* + *δ*/2) − *I*(*x*, *y*, *z* − *δ*/2)]/*δ*,

where *δ*, the separation between planes, is an independent variable controlled experimentally. Errors in measuring *δ* introduce significant bias in the phase reconstruction. In our test, an error of 100 *μ*m or 10 *μ*m in the distance between planes propagates into an increase of 5057.7 or 730.8, respectively, in the normalized RMSE of the eikonal reconstruction.
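The mechanism behind this bias is multiplicative: if the true separation is *δ* but *δ* + *ε* is used in the denominator, the estimated derivative is scaled by *δ*/(*δ* + *ε*). A tiny Python sketch (the 2-mm separation is an illustrative value, not the one used in the experiment; the text's normalized-RMSE figures come from the full reconstruction, which this does not reproduce):

```python
def bias_factor(delta, err):
    """Multiplicative bias on the estimated dI/dz when the measured
    separation delta + err is used in the central-difference
    denominator instead of the true separation delta."""
    return delta / (delta + err)

# Example with an assumed (illustrative) 2-mm separation:
for err_mm in (0.01, 0.1):               # 10-um and 100-um errors
    print(f"{err_mm * 1000:5.0f} um -> bias {bias_factor(2.0, err_mm):.4f}")
```

The scale error on ∂*I*/∂*z* carries straight through the linear TIE solve, which is why a mismeasured *δ* biases the whole reconstructed eikonal rather than adding random noise.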

Optical alignment errors produce tilts and shifts between the sensor plane and the plane where the eikonal is to be reconstructed. Reconstruction errors due to these misalignments are especially critical for decentrations. In the case of a radially symmetric pupil and eikonal distribution, it is possible to correct for decentration using a centroid algorithm that detects the chief ray in each image. This was done in our experimental test where we had a decentration error around 15 *μ*m. Table 1 shows the RMSE in the experimental case with and without decentration correction for three different plane distances. Of course, the centroid algorithm method applies only to radially symmetric irradiances, so more sophisticated techniques are required to correct decentration errors in systems that lack radial symmetry.

The quantization error in the sensor (in our case, a CCD) arises for two reasons. First, the conversion of photons to a photoelectron signal is quantized due to the nature of light; this effect is small enough to be neglected here. Second, quantization errors are introduced during the conversion from the photoelectron signal to analog-to-digital units (ADU). Moreover, although it does not introduce a quantization error by itself, the bit depth of the pixel limits the dynamic range of measurable irradiances. It is therefore important to avoid the nonlinear region of the CCD (the region close to saturation), which would introduce a critical parabolic error.^{18} Our CCD is a silicon chip whose nonlinearity effects are negligible when it operates within the full-well capacity range, as was the case in our experiment. As reported by the manufacturer, the gain of our CCD is 3 e-/ADU. We simulated the error associated with quantization, finding an increase of the normalized RMSE by a factor of 2.16. Figure 4 shows why this happens: the irradiance difference signal is corrupted with artifacts due to the quantization conversion from e- to ADU.
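The e- to ADU conversion can be modeled directly. The sketch below uses a simple floor quantizer at the 3 e-/ADU gain reported by the manufacturer; this is an idealization of the actual ADC, intended only to show the size of the staircase residual:

```python
import numpy as np

GAIN = 3.0   # e-/ADU, manufacturer value quoted in the text

def quantize(electrons):
    """Idealized floor-type ADC conversion from photoelectrons to ADU."""
    return np.floor(electrons / GAIN)

def dequantize(adu):
    return adu * GAIN

e = np.linspace(0.0, 30.0, 301)               # photoelectron signal
err = e - dequantize(quantize(e))             # quantization residual, e-
```

The residual is spread over [0, 3) e-, i.e., up to one ADU; it is this staircase error that produces the artifacts visible in Fig. 4, which are most damaging in the small-valued irradiance difference signal.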

A mathematical representation of a generalized noise model is

*Î*(*j*) = *N*_{o}(*j*) *I*(*j*) + *n*(*j*),

where *I*(*j*) is the “deterministic,” noise-free irradiance value at CCD pixel *j*. The multiplicative noise *N*_{o} (affecting the signal before detection) originates in the optical elements and in scattering effects in the object being measured; the additive noise *n* is the photodetection noise. Depending on the configuration of the experimental setup, the multiplicative term may be negligible; this is the case here, so only the photodetection noise is considered in what follows.

The photodetection noise is decomposed as

*n*(*j*) = *n*_{s}(*j*) + *n*_{r}(*j*) + *n*_{b}(*j*),

where:

- *n*_{s}(*j*) is the signal-dependent noise (i.e., photon noise). To a first approximation, it follows Poisson statistics. This is strictly valid only for nonfluctuating optical fields; in real applications, the optical field may fluctuate both spatially and temporally (e.g., using light reflected from a scattering object), in which case the photodetection process is no longer described by Poisson statistics and other models should be considered.^{19}
- *n*_{r}(*j*) is the readout noise (i.e., electronic noise introduced in the signal reading process). It is assumed to have Gaussian statistics.
- *n*_{b}(*j*) is the background noise, composed of thermally induced electrons (Gaussian statistics) and a contribution of ambient photon noise (Poisson statistics).

It is also assumed that the noise introduced at each pixel is uncorrelated with that at every other pixel (reasonable if signal levels are low enough to avoid saturation and blooming effects). If the CCD is properly cooled, as with the CoolSNAP camera used in this work, thermal noise is negligible; background noise is also negligible, since the measurements were taken in a darkened room. Therefore, we considered only the photon and readout noise. The readout noise of our CCD was determined from the standard deviation of a bias frame, giving σ_{r} = 15.
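A minimal simulation of this reduced noise model (Python; Poisson photon noise plus Gaussian readout noise, treating the quoted readout standard deviation of 15 as being in electron units, which is an assumption here) illustrates the variance budget of a flat field:

```python
import numpy as np

rng = np.random.default_rng(0)
SIGMA_R = 15.0   # readout noise std from the bias frame (text value)

def photodetect(I_electrons):
    """Reduced noise model of the text: Poisson photon noise plus
    Gaussian readout noise; thermal and background terms are dropped
    (cooled camera, darkened room)."""
    photon = rng.poisson(I_electrons).astype(float)
    readout = rng.normal(0.0, SIGMA_R, size=I_electrons.shape)
    return photon + readout

flat = np.full((256, 256), 1000.0)     # flat field, in e-
frame = photodetect(flat)
# Independent terms add in variance: 1000 (Poisson) + 15^2 (readout)
print(frame.var())                      # approximately 1225
```

Because the photon term scales with the signal while the readout term does not, readout noise dominates exactly where the irradiance difference is small—the regime in which TIE becomes ill-conditioned.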

In this section, we present an image denoising technique to reduce the photodetection noise and thereby reduce the ill-conditioning of TIE solutions. A first option would be to average a set of frames; however, this becomes impractical because the number of frames needed to match the improvement obtained with image denoising filters is extremely large (around 400 frames in our experiment). Image restoration techniques that exploit statistical information from the noise model are the most likely to succeed; for example, Snyder et al.^{20} maximize a likelihood functional for the CCD noise model using iterative optimization algorithms. However, such techniques are computationally expensive and model dependent, because they require not only qualitative information about the statistics, but also quantitative parameters of the statistical distributions. Wavelet denoising is a fast alternative that uses qualitative, but not quantitative, information about the noise model. This is the method investigated below.

Wavelet techniques for image denoising have a short but intensive history that began with the work of Donoho et al.^{21}^{,}^{22} in the 1990s. Wavelet thresholding, or shrinkage, is based on the fact that estimating the underlying irradiance signal (assumed smooth) is equivalent to estimating the coefficients of a wavelet expansion in which a threshold has been applied to specific coefficients (the so-called detail coefficients). The noise is assumed to be concentrated mainly in these detail coefficients. Donoho’s work assumed additive Gaussian noise, so if photon noise, with Poisson statistics, is also corrupting the irradiance data, a prior variance-stabilizing transformation is required.^{23} For this purpose, we adopted the Anscombe transformation,^{24} defined by the following equation:

*A*(*x*) = 2√(*x* + 3/8).

(3)

This transformation was applied before the wavelet filtering; after filtering, the inverse Anscombe transformation recovers the signal. A redundant wavelet decomposition, i.e., one that does not reduce the number of coefficients (no subsampling), is mandatory because subsampling can introduce aliasing effects. The number of levels sets the degree of detail into which we decompose the signal; in our case, a four-level decomposition was enough. We used a three-orientation (horizontal, vertical, and diagonal) decomposition to avoid differences in the signal profile with orientation. Finally, although we verified empirically that, for our experiment, the choice of wavelet family is not an important parameter, we noticed that better results were obtained with a reverse biorthogonal family (“ReverseBior”). The filtering code (written in MATLAB) is freely available from the authors.
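The Anscombe-plus-thresholding pipeline can be illustrated with a greatly simplified sketch: a single-level undecimated split with a box smoothing filter and soft thresholding stands in for the four-level redundant ReverseBior decomposition used here, and the threshold value is an illustrative choice, not a tuned parameter.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def anscombe(x):
    """Eq. (3): variance-stabilizing transform for Poisson data."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inv_anscombe(y):
    """Algebraic inverse of Eq. (3) (a slightly biased estimator,
    adequate for this illustration)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

def denoise(img, thresh):
    """Single-level undecimated split with soft thresholding of the
    detail band -- a simplified stand-in for the four-level redundant
    ReverseBior filtering of the text (no subsampling, no aliasing)."""
    y = anscombe(img)
    smooth = uniform_filter(y, size=5)         # approximation band
    detail = y - smooth                         # detail band
    detail = np.sign(detail) * np.maximum(np.abs(detail) - thresh, 0.0)
    return inv_anscombe(smooth + detail)
```

After the Anscombe transform the Poisson noise has unit variance regardless of the local signal level, so a single global threshold (of order one) can be applied to the detail band, which is the qualitative-only noise information the text refers to.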

Figure 5 shows how the signal in the irradiance difference is improved by the wavelet denoising. The images should be compared with the theoretical noise-free irradiance difference shown in Fig. 2(b). Note that the noisy images [Figs. 5(a) and 5(c)] totally hide the signal. In the filtered images [Figs. 5(b) and 5(d)], the range of irradiance values is significantly reduced, thereby revealing the signal, although noise artifacts remain because of imperfect filtering. Figure 3 shows how the results of Fig. 5 propagate into the eikonal reconstruction before and after filtering. Finally, Table 2 gives the SNR and normalized RMSE data for the examples of Figs. 3 and 5, as well as the improvement ratios.

(a) Simulated noisy irradiance finite difference. (b) Simulated noisy irradiance finite difference after filtering. (c) Experimental irradiance finite difference. (d) Experimental irradiance finite difference after filtering. Note the difference in scale (in **...**

The accuracy of phase reconstruction from TIE is limited by two inevitable errors: the numerical bound error associated with the density of the mesh of measurement points, and the quantization error in the irradiance sensor. However, the most critical errors are those associated with experimental factors. This is clearly shown in Fig. 6, where reconstruction errors are plotted as a function of the separation between measurement planes. As mentioned in the introduction, while separating the planes reduces the effect of photodetection noise, the errors from assuming paraxial propagation and the errors in computing ∂*I*/∂*z* by finite differences both grow with the separation.

Sylvain Fischer and Rafael Redondo helped in the selection of the type of wavelet filtering. Eva Acosta provided some important remarks on the implications of the work. Magda Michna gave some useful experimental advice. Especially useful were the discussions with Jacob Rubinstein and Chu-Ming Lee on the theoretical aspects of the project. The Spanish Ministerio de Educación y Cultura funded a postdoctoral fellowship to Sergio Barbero through the Fulbright Commission. The research was funded by NIH grant R01-EY05109.

**Sergio Barbero** obtained his BS degree in physics (optics) in 1999 from the Universidad de Zaragoza and his PhD degree in the field of physiological optics in 2004 through the Instituto de Optica, CSIC, Madrid. Since 2004, funded by a Fulbright postdoctoral fellowship, he has been working in the School of Optometry, Indiana University, focusing on the development of new techniques in wavefront sensing and reconstruction and the application of the theory of aberrations in physiological optics.

**Larry N. Thibos** was educated at the University of Michigan, where he earned BS (1970) and MS (1972) degrees in electrical engineering, and at the University of California, Berkeley, where he received a PhD degree in physiological optics (1975) for research on the neurophysiological mechanisms of sensitivity control in the vertebrate retina. From 1975 to 1983, he was a research fellow at the John Curtin School of Medical Research at the Australian National University in Canberra, Australia, where he investigated the neurophysiology of retinal information processing. In 1983, he joined the Visual Sciences faculty of the School of Optometry at Indiana University and is currently professor of optometry and visual sciences. His research interests include the effects of optical aberrations of the eye on visual performance, the limits to spatial vision imposed by retinal architecture, and the characterization of vision in the peripheral field.

1. Sommerfeld A, Runge J. Anwendung der vektorrechnung auf die grundlagen der geometrischen optik. Ann Phys. 1911;35:277–298.

2. Teague MR. Irradiance moments—their propagation and use for unique retrieval of phase. J Opt Soc Am. 1982;72(9):1199–1209.

3. Streibl N. Phase imaging by the transport-equation of intensity. Opt Commun. 1984;49(1):6–10.

4. Roddier F. Curvature sensing and compensation—a new concept in adaptive optics. Appl Opt. 1988;27(7):1223–1225. [PubMed]

5. Roddier C, Roddier F. Wavefront reconstruction from defocused images and the testing of ground-based optical telescopes. J Opt Soc Am A. 1993;10(11):2277–2287.

6. Quiroga JA, Gomez-Pedrero JA, Martinez-Anton JC. Wavefront measurement by solving the irradiance transport equation for multifocal systems. Opt Eng. 2001;40(12):2885–2891.

7. Barty A, Nugent KA, Paganin D, Roberts A. Quantitative optical phase microscopy. Opt Lett. 1998;23(11):817–819. [PubMed]

8. Bajt S, Barty A, Nugent KA, McCartney M, Wall M, Paganin D. Quantitative phase-sensitive imaging in a transmission electron microscope. Ultramicroscopy. 2000;83(1–2):67–73. [PubMed]

9. Nugent KA, Gureyev TE, Cookson DF, Paganin D, Barnea Z. Quantitative phase imaging using hard x rays. Phys Rev Lett. 1996;77(14):2961–2964. [PubMed]

10. Allman BE, McMahon PJ, Nugent KA, Paganin D, Jacobson DL, Arif M, Werner SA. Imaging: phase radiography with neutrons. Nature (London) 2000;6809:158. [PubMed]

11. Soto M, Acosta E, Rios S. Performance analysis of curvature sensors: optimum positioning of the measurement planes. Opt Express. 2003;11(20):2577–2588. [PubMed]

12. Born M, Wolf E. Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light. 6th ed. Pergamon Press; Oxford, New York: 1980.

13. Rubinstein J, Wolansky G. A variational principle in optics. J Opt Soc Am A. 2004;21(11):2164–2172. [PubMed]

14. Paganin D, Nugent KA. Noninterferometric phase imaging with partially coherent light. Phys Rev Lett. 1998;80(12):2586–2589.

15. Gureyev TE, Paganin DM, Stevenson AW, Mayo SC, Wilkins SW. Generalized eikonal of partially coherent beams and its use in quantitative imaging. Phys Rev Lett. 2004;93(6):1–4. [PubMed]

16. Pinchover Y, Rubinstein J. Introduction to Partial Differential Equations. Cambridge University Press; New York: 2005.

17. Larsson S, Thomée V. Partial Differential Equations with Numerical Methods. Texts in Applied Mathematics, Vol. 45. Springer; New York, Berlin: 2003.

18. Vdovin G. Reconstruction of an object shape from the near-field intensity of a reflected paraxial beam. Appl Opt. 1997;36(22):5508–5513. [PubMed]

19. Mandel L, Wolf E. Optical Coherence and Quantum Optics. Cambridge University Press; New York: 1995.

20. Snyder DL, Hammoud AM, White RL. Image recovery from data acquired with a charge-coupled-device camera. J Opt Soc Am A. 1993;10(5):1014–1023. [PubMed]

21. Donoho DL, Johnstone IM. Ideal spatial adaptation by wavelet shrinkage. Biometrika. 1994;81(3):425–455.

22. Donoho DL, Johnstone IM, Kerkyacharian G, Picard D. Wavelet shrinkage—asymptopia. J R Stat Soc Ser B (Methodol) 1995;57(2):301–337.

23. Besbeas P, De Feis I, Sapatinas T. A comparative simulation study of wavelet shrinkage estimators for Poisson counts. Int Statist Rev. 2004;72(2):209–237.

24. Anscombe F. The transformation of Poisson, binomial and negative binomial data. Biometrika. 1948;35:246–254.
