Wavefront reconstruction from the transport-of-intensity equation (TIE) is a well-posed inverse problem given smooth signals and appropriate boundary conditions. In practice, however, experimental errors lead to an ill-conditioned problem. A quantitative analysis of the effects of experimental errors is presented in simulations and experimental tests. The relative importance of numerical, misalignment, quantization, and photodetection errors is shown. We show that reduction of photodetection noise by wavelet filtering significantly improves the accuracy of wavefront reconstruction from simulated and experimental data.
In wavefront propagation through homogeneous media, the amplitude and phase of a wave are related by a partial differential equation first derived by A. Sommerfeld and I. Runge in 1911.1 The paraxial approximation to this equation is called the transport-of-intensity equation (TIE). In the early 1980s, Teague2 and Streibl3 proposed using TIE as a phase retrieval technique, and today TIE is widely used in applications such as wavefront sensing in astronomy4 and optical metrology,5,6 and as a phase-imaging technique in optical7 and electron8 microscopy, x rays,9 and neutron beams.10
The theory of partial differential equations establishes that TIE is a well-posed technique for phase retrieval (i.e., a unique solution is known to exist), provided the irradiance is a smooth and strictly positive function with appropriate boundary conditions. In practice, however, TIE may become ill-conditioned due to experimental errors (e.g., measurement errors, misalignment, photodetection noise). One way to reduce the effects of some of these errors is to increase the separation between the planes where the irradiance is measured, which provides the input data for TIE calculations. Recently, an analytical procedure for determining the optimum location of the measurement planes has been proposed.11 However, it is important to consider two errors introduced by separating the planes by long distances. First, the partial derivative of the irradiance is evaluated using the finite-difference approximation, whose associated error increases with the separation between planes. Second, the assumption (implicit in TIE) that the wavefront propagates paraxially between planes might not be valid for large propagation distances in the case of highly aberrated wavefronts. Therefore, in practice, the accuracy in solving TIE is limited by this error trade-off in the separation between planes.
In this work, we study systematically the accuracy of phase reconstruction due to numerical and experimental errors. We show that image denoising techniques applied to irradiance measurements reduce the need for increased separation between planes, thereby avoiding the associated errors. Quantitative error analysis is presented for experimental data and simulations.
The paper is organized as follows: Section 2 introduces the numerical method used to solve TIE and its associated error. Section 3 describes the experimental setup. Section 4 summarizes the experimental error analysis and modeling. Section 5 describes the image denoising technique and our quantitative results of its application. Section 6 gives a brief discussion of the general results.
We adopt Principles of Optics12 notation. A monochromatic optical field in free space is represented by U = u(r)exp[ik0 s(r)], where s(r) is the eikonal function [level surfaces of s(r) define wavefronts] and k0 is the wave number in a vacuum. The intensity of the optical field (i.e., the squared modulus of the complex field) is represented as I, while the irradiance (i.e., the energy flux through unit area in the detector plane) is represented as Iz, where Iz = (I/k0)∂z[k0 s(r)] = I ∂z s(r).12 The paraxial TIE, with propagation along the z coordinate, is given (in Cartesian coordinates) by the elliptic partial differential equation:

∇⊥ · [I(x, y) ∇⊥ s(x, y)] = −∂z I(x, y).   (1)
Here ∇⊥ ≡ (∂/∂x, ∂/∂y). In this equation, the measured magnitudes are I(x,y) and ∂z I(x,y), and the unknown is the eikonal s(x,y). It is important to note that a photometric sensor measures the irradiance but not the intensity, which is the variable used in TIE. Thus, strictly, we should include a “cosine” correction given by the angle between the ray direction and the sensor optical axis.13 In general, this correction should be small (recall also that TIE is valid only in the paraxial regime), as is the case in the experiment performed in this work. However, in highly aberrated beams, this correction could be significant.
It should also be noted that TIE is strictly valid only for pure monochromatic light. However, in practice, TIE might also be valid for real, partially coherent sources (like the diode laser used in our experiment) if a “generalized” or average eikonal is defined; for further discussion, see Paganin et al.14,15
To solve TIE [Eq. (1)], we need to define boundary conditions. For the wavefront test used here, it is acceptable to use homogeneous Neumann conditions. However, in general, the definition of the boundary conditions is an important issue that must be considered carefully in solving TIE, as pointed out by Rubinstein et al.13 Experimentally, the irradiance is measured at a set of discrete “points” (e.g., CCD pixel locations), and afterward the partial derivative of the irradiance is evaluated using finite differences between two planes. Hence, TIE can be solved only with a numerical scheme. The typical numerical scheme associates the solution of a partial differential equation with the solution of an optimization problem by means of the calculus of variations,16 the so-called weak formulation of the partial differential equation. Specifically, we use the finite element method (FEM), for which a weak formulation of Eq. (1) is easy to compute16:
∫Ω I ∇⊥s · ∇⊥ψ dΩ = ∫Ω (∂z I) ψ dΩ + ∮∂Ω ψ I (∂s/∂n) dl,   (2)

where Ω is the domain where the irradiance is positive, ∂Ω is the boundary of Ω, and ψ is the so-called test function. FEM converts the variational formulation into a linear algebraic problem.
In practice, this is done in three steps: first, the domain Ω is triangulated into a mesh; second, the eikonal and the test functions are approximated by piecewise polynomial basis functions defined on the mesh elements; and third, the resulting sparse linear system is assembled and solved.
The numerical solution leads to an unavoidable bound error. Theoretically, under the well-posed condition, this error is proportional to the square of the maximum diameter of the mesh triangles.17 We computed this bound error, as a root-mean-square error (RMSE), for different mesh sizes, and the results are shown in Fig. 1. Finer meshes produce less error. However, as the mesh triangles become smaller, the computing time increases, becoming impractical for very fine meshes. Hence, there is a trade-off between the error tolerance and the computing time. It is desirable for the numerical bound error to be considerably below the associated experimental errors. In our case, this condition is satisfied for a maximum triangle diameter of 35.7 μm, for which the associated RMSE is 5.2 × 10−6 μm. We use this value as the theoretical bound error.
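The discretized solve can be illustrated with a simplified sketch. The fragment below is a stand-in for the FEM scheme used here: it assumes a uniform intensity I0 (in which case TIE reduces to a Poisson equation for the eikonal) and homogeneous Dirichlet boundaries on a regular grid with the five-point stencil. Function and variable names are ours, for illustration only.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def solve_tie_uniform(dIdz, I0, h):
    """Sketch of a discretized TIE solve. For uniform intensity I0 the TIE
    reduces to a Poisson equation, laplacian(s) = -(1/I0) dI/dz, discretized
    here with the five-point stencil on a grid of spacing h and homogeneous
    Dirichlet boundaries (a simplification of the FEM/Neumann scheme of the
    paper)."""
    ny, nx = dIdz.shape
    n = ny * nx
    A = lil_matrix((n, n))
    b = (-dIdz / I0).ravel() * h ** 2   # right-hand side scaled by h^2
    for j in range(ny):
        for i in range(nx):
            k = j * nx + i
            A[k, k] = -4.0
            for dj, di in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                jj, ii = j + dj, i + di
                if 0 <= jj < ny and 0 <= ii < nx:
                    A[k, jj * nx + ii] = 1.0  # off-grid neighbors are zero (Dirichlet)
    s = spsolve(A.tocsr(), b)
    return s.reshape(ny, nx)
```

As with the FEM mesh, halving the grid spacing roughly quarters the discretization error, at the cost of a larger linear system, which is the same error/cost trade-off discussed above.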
The experimental setup was designed to produce a Fraunhofer diffraction pattern of a circular aperture. A diode laser with a nominal wavelength of 0.635 μm (#S1FC635, Thorlabs, Inc.) and a fiber collimation package (#F220FC-B, Thorlabs, Inc.) produced a Gaussian beam with a waist radius (1/e2) of 1.11 mm (manufacturer data). A pinhole (#04PPM039, Melles Griot, Inc.) of 30±2-μm diameter was located ~100 mm from the diode laser. According to Gaussian beam theory, the beam waist can be considered quasiplanar in the pinhole plane, with irradiance variations of less than 0.04%. Therefore, it is reasonable to assume uniform irradiance and phase across the circular aperture. A CCD sensor (#ICX205AK, Sony, Inc.) mounted in a cooled monochrome camera (CoolSNAP, Roper Scientific, Inc.) was located 110 mm from the pinhole plane. Since this distance is large compared to the pinhole size, the CCD records the Fraunhofer diffraction pattern of a circular aperture. The irradiance of this pattern is the Airy disk, and the eikonal is r2/2z (r is the radial coordinate, and z is the distance between the pinhole plane and the CCD plane) within the domain inside the first ring of the Airy disk. Figure 2 shows the theoretical irradiance distribution for an Airy disk and the finite difference in the irradiance between two planes separated by 0.2 mm.
This simple setup permits us to analyze the error in the eikonal reconstruction inherent to all possible setups using a CCD. Also, it is important to note that, in this case, the propagation of the wavefront is highly paraxial.
Henceforward, we characterize the error in the irradiance and in the irradiance difference using the signal-to-noise ratio (SNR), defined as the l2 norm of the signal divided by the l2 norm of the noise. For the error in the reconstructed eikonal, we use the ratio between the RMSE and the theoretical bound RMSE found in Sec. 2.2, which we call the normalized RMSE. Conceptually, the normalized RMSE describes how far we are from the best possible eikonal reconstruction we can obtain. We also use the absolute error (the difference between theoretical and estimated eikonal values) in Fig. 3.
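In code, these two metrics are straightforward; the following NumPy sketch (the function names are ours, for illustration) computes them exactly as defined above:

```python
import numpy as np

def snr(signal, noisy):
    """Signal-to-noise ratio: the l2 norm of the signal divided by the
    l2 norm of the noise (the difference between noisy and clean data)."""
    return np.linalg.norm(signal) / np.linalg.norm(noisy - signal)

def normalized_rmse(estimate, truth, bound_rmse):
    """RMSE of the reconstructed eikonal divided by the theoretical bound
    RMSE, i.e., how far we are from the best attainable reconstruction."""
    rmse = np.sqrt(np.mean((estimate - truth) ** 2))
    return rmse / bound_rmse
```

A normalized RMSE of 1 would mean the reconstruction error equals the numerical bound error of Sec. 2.2; experimental errors push it above 1.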
The partial derivative of the irradiance (∂z I) is evaluated with the central difference approximation:

∂z I ≈ [I(x, y; z + δ/2) − I(x, y; z − δ/2)] / δ,
where δ, the separation between planes, is an independent variable controlled experimentally. Errors in measuring δ introduce significant bias errors in the phase reconstruction. In our test, an error in the distance between planes of 100 μm or 10 μm propagates to an increase of 5057.7 or 730.8, respectively, in the normalized RMSE of the eikonal reconstruction.
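The bias mechanism is easy to see in code: an error in the assumed δ rescales the entire derivative estimate by the ratio of the true to the assumed separation. The numbers below are illustrative, not our experimental values:

```python
import numpy as np

def dIdz_central(I_minus, I_plus, delta):
    """Central-difference estimate of dI/dz from irradiance measured in two
    planes separated by delta."""
    return (I_plus - I_minus) / delta

# A relative error in delta propagates as a multiplicative bias:
# the estimate scales by delta_true / delta_assumed.
delta_true = 0.2        # mm, actual separation (illustrative)
delta_assumed = 0.21    # mm, separation with a 10-um measurement error
I_minus = np.array([1.00])
I_plus = np.array([1.10])
bias = dIdz_central(I_minus, I_plus, delta_assumed) / dIdz_central(I_minus, I_plus, delta_true)
```

Because the bias is multiplicative in ∂z I, it propagates through the linear TIE solve as a systematic scaling of the reconstructed eikonal, which is why δ errors dominate the normalized RMSE figures quoted above.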
Optical alignment errors produce tilts and shifts between the sensor plane and the plane where the eikonal is to be reconstructed. Reconstruction errors due to these misalignments are especially critical for decentrations. In the case of a radially symmetric pupil and eikonal distribution, it is possible to correct for decentration using a centroid algorithm that detects the chief ray in each image. This was done in our experimental test where we had a decentration error around 15 μm. Table 1 shows the RMSE in the experimental case with and without decentration correction for three different plane distances. Of course, the centroid algorithm method applies only to radially symmetric irradiances, so more sophisticated techniques are required to correct decentration errors in systems that lack radial symmetry.
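A minimal sketch of the centroid-based decentration correction, for the radially symmetric case, might look as follows (function names are ours; the bilinear interpolation order is an assumption):

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def centroid(img):
    """Irradiance-weighted centroid; for a radially symmetric beam this
    locates the chief ray in the image."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

def recenter(img_ref, img):
    """Shift img so its centroid coincides with that of img_ref, correcting
    the decentration between the two measurement planes (bilinear
    interpolation handles subpixel shifts)."""
    cy_r, cx_r = centroid(img_ref)
    cy, cx = centroid(img)
    return subpixel_shift(img, (cy_r - cy, cx_r - cx), order=1)
```

The irradiance-weighted centroid is only a valid chief-ray estimator for radially symmetric distributions, which is the limitation noted above.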
The quantization error in the sensor (in our case, a CCD sensor) arises for two reasons. First, the conversion of photons to a photoelectron signal is quantized due to the nature of light. This effect is small enough to be neglected here. Second, quantization errors are introduced during the conversion from the photoelectric signal to analog-to-digital units (ADU). Moreover, although it does not introduce a quantization error by itself, the bit depth of the pixel limits the dynamic range of measurable irradiances. Therefore, it is important to avoid the nonlinear region of the CCD (the region close to saturation), which would introduce a critical parabolic error.18 Our CCD is a silicon chip whose nonlinearity effects are negligible when it operates within the full-well capacity range, as was the case in our experiment. As reported by the manufacturer, the gain of our CCD is 3 e-/ADU. We simulated the error associated with quantization, finding an increase of the normalized RMSE by a factor of 2.16. Figure 4 shows why this happens: the irradiance difference signal is corrupted with artifacts due to the quantization conversion from e- to ADUs.
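The e- to ADU conversion can be sketched with a simple truncation model. The gain is the manufacturer value; the bit depth is an assumed illustrative figure, not a specification of our camera:

```python
import numpy as np

GAIN = 3.0        # e-/ADU, manufacturer value for our CCD
BIT_DEPTH = 12    # assumed bit depth, for illustration only

def to_adu(electrons):
    """Quantize a photoelectron signal to integer ADUs (simple truncation
    model); the bit depth clips the measurable dynamic range."""
    adu = np.floor(np.asarray(electrons, dtype=float) / GAIN)
    return np.clip(adu, 0, 2 ** BIT_DEPTH - 1)

def to_electrons(adu):
    """Map ADUs back to electrons; the quantization error (up to GAIN e-)
    remains in the recovered signal."""
    return np.asarray(adu, dtype=float) * GAIN
```

Round-tripping a signal through to_adu and to_electrons reproduces the quantization artifacts: differences smaller than the gain (3 e-) are lost, which is what corrupts the small irradiance-difference signal in Fig. 4.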
A mathematical representation of a generalized noise model is:

Î(x, y) = No(x, y) I(x, y) + Na(x, y),
where I represents the “deterministic,” noise-free irradiance values in the CCD. The multiplicative noise No (affecting the signal before detection) originates in the optical elements and in scattering effects in the object under measurement. The additive noise Na is the photodetection noise. Depending on the configuration of the experimental setup, No will be more or less important. However, the photodetection noise is always present, as it depends only on the sensor and the signal. Hence, we examined the isolated effect of the photodetection noise in our experimental setup. Laser noise (i.e., intensity fluctuations of the beam) was not considered because it is assumed to be very small compared with the photodetection noise.
For the photodetection noise, we have:

Na = Nph + Nr + Nt + Nb,

where Nph is the photon (shot) noise, Nr the readout noise, Nt the thermal noise, and Nb the background noise.
It is also assumed that the noise introduced in each pixel is uncorrelated with that of the other pixels (reasonable if signal levels are low enough to avoid saturation and blooming effects). If the CCD is properly cooled, as with the CoolSNAP camera used in this work, the thermal noise is negligible, as is the background noise, since the measurements were taken in a darkened room. Therefore, we considered only the photon and readout noise. The readout noise of our CCD is determined by the standard deviation of a bias frame, which is σr = 15 e- (manufacturer data, confirmed with an experimental bias frame). The code, written in MATLAB, to simulate the photodetection noise in our simulations is available from the authors.
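A Python equivalent of that simulation might look as follows, under the assumptions stated above: Poisson shot noise, Gaussian readout noise with σr = 15 e-, and thermal and background noise neglected. This is a sketch, not the authors' MATLAB routine:

```python
import numpy as np

def add_photodetection_noise(electrons, sigma_readout=15.0, rng=None):
    """Corrupt a noise-free photoelectron image with the two noise sources
    considered in the text: photon (shot) noise, Poisson-distributed in the
    electron count, and readout noise, zero-mean Gaussian with standard
    deviation sigma_readout (15 e- for our CCD). Thermal and background
    noise are neglected, as in the cooled, darkened-room experiment."""
    rng = np.random.default_rng() if rng is None else rng
    shot = rng.poisson(electrons).astype(float)
    readout = rng.normal(0.0, sigma_readout, size=np.shape(electrons))
    return shot + readout
```

For a mean signal of N electrons per pixel, the resulting variance is N + σr², so shot noise dominates whenever N greatly exceeds 225 e-.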
In this section, we present an image denoising technique to reduce the photodetection noise and thereby reduce the ill-conditioned nature of TIE solutions. A first option could be to average a set of frames. However, this becomes impractical because the number of frames necessary to achieve the same improvement obtained with the image denoising filters is extremely large (around 400 frames in our experiment). Image restoration techniques, in which statistical information from the noise model is used, are the most likely to succeed. For example, Snyder20 maximizes a likelihood functional for the CCD noise using iterative optimization algorithms. However, such techniques are computationally very expensive and model dependent, because they require not only qualitative information about the statistics, but also quantitative parameters of the statistical distributions. Wavelet denoising is an alternative, fast technique that uses qualitative, but not quantitative, information about the noise model. This is the method investigated below.
Wavelet techniques for image denoising have a short but intensive history that began with the works of Donoho et al.21,22 in the 1990s. Wavelet thresholding, or shrinkage, is based on the fact that estimating the underlying irradiance signal (assumed smooth) is equivalent to estimating the coefficients of a wavelet expansion in which a threshold has been applied to some specific coefficients (the so-called detail coefficients). The noise information is assumed to be concentrated mainly in these detail coefficients. Donoho’s work assumed additive Gaussian noise, so if photon noise, with Poisson statistics, is also corrupting the irradiance data, a variance-stabilizing transformation must first be applied.23 For this purpose, we adopted the Anscombe transformation,24 defined by the following equation:

A(I) = 2 √(I + 3/8).
This transformation was applied before the wavelet filtering. After the filtering, the inverse Anscombe transformation is used to recover the signal. A redundant wavelet decomposition, i.e., one without reduction of the number of coefficients (subsampling), is mandatory because subsampling can introduce aliasing effects. The number of levels determines the degree of detail into which we decompose the signal; in our case, a four-level decomposition was enough. We used a three-orientation (horizontal, vertical, and diagonal) decomposition to avoid differences in the signal profile with orientation. Finally, although we verified empirically that, for our experiment, the choice of wavelet family is not an important parameter, we noticed that better results were obtained with a reverse biorthogonal family (“ReverseBior”). The code (written in MATLAB) for the filtering is freely available from the authors.
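The pipeline can be sketched in plain NumPy. For brevity, this sketch uses a single-level undecimated Haar decomposition with soft thresholding rather than the four-level reverse biorthogonal filter of our MATLAB code, and the threshold value is illustrative; it is meant only to show the Anscombe/shrinkage/inverse-Anscombe structure:

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: maps Poisson data to approximately unit variance."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Algebraic inverse of the Anscombe transform."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

def haar_denoise(img, threshold):
    """Single-level undecimated Haar shrinkage (a simplified stand-in for the
    four-level ReverseBior decomposition used in the paper). The image is
    split, without subsampling, into an approximation band (LL) and three
    orientation detail bands (LH, HL, HH); the detail bands are
    soft-thresholded and the four bands summed to reconstruct."""
    def split(x, axis):
        shifted = np.roll(x, -1, axis=axis)
        return (x + shifted) / 2.0, (x - shifted) / 2.0

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    a_r, d_r = split(img, axis=0)   # along rows
    ll, lh = split(a_r, axis=1)     # columns of the approximation
    hl, hh = split(d_r, axis=1)     # columns of the detail
    return ll + soft(lh, threshold) + soft(hl, threshold) + soft(hh, threshold)

def denoise_poisson(img, threshold=1.0):
    """Anscombe -> wavelet shrinkage -> inverse Anscombe."""
    return inverse_anscombe(haar_denoise(anscombe(img), threshold))
```

With a zero threshold the decomposition reconstructs the input exactly, which makes the redundancy of the representation explicit; the denoising comes entirely from shrinking the detail bands, where the noise concentrates.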
Figure 5 shows how the signal in the irradiance difference is improved by the wavelet denoising. The images should be compared with the theoretical noise-free irradiance difference shown in Fig. 2(b). Note that the noisy images [Figs. 5(a) and 5(c)] totally hide the signal. In the filtered images [Figs. 5(b) and 5(d)], the range of irradiance values is significantly reduced, thereby revealing the signal, although still with noise artifacts because of imperfect filtering. Figure 3 shows the error propagation to the eikonal reconstruction before and after filtering of the results obtained in Fig. 5. Finally, Table 2 gives the SNR and normalized RMSE data for the examples of Figs. 3 and 5, as well as the improvement ratios.
The accuracy of phase reconstruction from TIE is limited by two unavoidable errors: the numerical bound error associated with the density of the mesh of measurement points, and the quantization error in the irradiance sensor. However, the most critical errors are those associated with experimental factors. This is clearly shown in Fig. 6, where reconstruction errors are plotted as a function of the separation between measurement planes. As mentioned in the introduction, while separating the planes reduces the photodetection noise, the errors due to the assumption of paraxial propagation and the errors in computing ∂z I(x,y) by finite differences increase. Therefore, in some situations, the optimum separation between planes is located at some finite distance.11 Because of this, we selected a test wavefront that is highly paraxial, for which ∂z I(x,y) is in practice nearly constant, so the experimental errors are dominant. This explains why in our test the errors are reduced as the separation between planes increases (Fig. 6). The eikonal reconstruction is clearly improved by wavelet denoising of simulated and experimental data for all values of plane separation. For the simulated data, it can be seen that the error associated with a 0.6-mm separation with unfiltered data is of the same order as that for 0.1 mm with filtered data. This result demonstrates one of our main conclusions: image denoising relaxes the demand for larger separation between measurement planes when the errors due to the nonparaxiality of the wavefront are less important than the experimental errors. On the other hand, it is important to note that the error reduction is smaller for the experimental data than for the simulations. This could be because of alignment errors in the experimental procedure (see Sec. 4.1). Finally, we emphasize that our tests were deliberately chosen to be best-case scenarios using very smooth irradiances.
We anticipate that more sophisticated image-denoising techniques could be necessary for more irregular irradiance profiles.
Sylvain Fischer and Rafael Redondo helped in the selection of the type of wavelet filtering. Eva Acosta provided some important remarks on the implications of the work. Magda Michna gave some useful experimental advice. Especially useful were the discussions with Jacob Rubinstein and Chu-Ming Lee on the theoretical aspects of the project. The Spanish Ministerio de Educación y Cultura funded a postdoctoral fellowship for Sergio Barbero through the Fulbright Commission. The research was funded by NIH grant R01-EY05109.
Sergio Barbero obtained his BS degree in physics (optics) in 1999 from the Universidad de Zaragoza and his PhD degree in the field of physiological optics in 2004 through the Instituto de Optica, CSIC, Madrid. Since 2004, funded by a Fulbright postdoctoral fellowship, he has been working in the School of Optometry, Indiana University, focusing on the development of new techniques in wavefront sensing and reconstruction and the application of the theory of aberrations in physiological optics.
Larry N. Thibos was educated at the University of Michigan, where he earned BS (1970) and MS (1972) degrees in electrical engineering, and at the University of California, Berkeley, where he received a PhD degree in physiological optics (1975) for research on the neurophysiological mechanisms of sensitivity control in the vertebrate retina. From 1975 to 1983, he was a research fellow at the John Curtin School of Medical Research at the Australian National University in Canberra, Australia, where he investigated the neurophysiology of retinal information processing. In 1983, he joined the Visual Sciences faculty of the School of Optometry at Indiana University and is currently professor of optometry and visual sciences. His research interests include the effects of optical aberrations of the eye on visual performance, the limits to spatial vision imposed by retinal architecture, and the characterization of vision in the peripheral field.