
Article sections

- Abstract
- Introduction
- Optical Coherence Tomography
- General Framework
- System Modeling
- The Inverse Problem for SAR and ISAM
- Results
- Alternate ISAM Modalities
- Conclusions
- References


Sensors (Basel). 2008 June; 8(6): 3903–3931.

Published online 2008 June 11. doi: 10.3390/s8063903

PMCID: PMC2952888

NIHMSID: NIHMS201169

The Beckman Institute for Advanced Science and Technology and The Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, 405 North Mathews Avenue, Urbana, IL 61801, USA; E-mails: bryn@uiuc.edu (B. J. D.); dmarks@uiuc.edu (D. L. M.); tralston@ll.mit.edu (T. S. R.); carney@uiuc.edu (P. S. C.)

Received 2008 June 5; Revised 2008 June 9; Accepted 2008 June 9.

Copyright © 2008 by the authors; licensee Molecular Diversity Preservation International, Basel, Switzerland.

This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).


Three-dimensional image formation in microscopy is greatly enhanced by the use of computed imaging techniques. In particular, Interferometric Synthetic Aperture Microscopy (ISAM) allows the removal of out-of-focus blur in broadband, coherent microscopy. Earlier methods, such as optical coherence tomography (OCT), utilize interferometric ranging, but do not apply computed imaging methods and therefore must scan the focal depth to acquire extended volumetric images. ISAM removes the need to scan the focus by allowing volumetric image reconstruction from data collected at a single focal depth. ISAM signal processing techniques are similar to the Fourier migration methods of seismology and the Fourier reconstruction methods of Synthetic Aperture Radar (SAR). In this article ISAM is described and the close ties between ISAM and SAR are explored. ISAM and a simple strip-map SAR system are placed in a common mathematical framework and compared to OCT and radar, respectively. This article is intended to serve as a review of ISAM, and will be especially useful to readers with a background in SAR.

Traditional sensing modalities such as X-ray projection imaging [1], nuclear magnetic resonance (NMR) spectroscopy [2, 3], radar [4] and focused optical imaging [5] rely primarily on physical instrumentation to form an image. That is, the instrument is constructed so that the resulting relation between the object of interest and the collected data is sufficiently simple so as to allow data interpretation with little or no data processing. However, for more than 40 years the performance of microelectronic devices has improved exponentially, as famously quantified, in part, by Moore's law [6]. The resulting abundance of powerful computing resources has been a great boon to almost every area of science and technology, and has transformed sensing and imaging. When significant computational data processing is added to an imaging system, the effect of the physical sensing may be mathematically inverted, allowing the use of instruments with more complicated, multiplex object-to-data relations. The resulting sensing systems provide new imaging modalities, improved image quality and/or increased flexibility in instrument design. This coupling of sensing instrumentation and physically-based inference is often known as computed imaging.

The application of computed imaging techniques to the imaging and non-imaging sensor systems listed above has been revolutionary: X-ray projection imaging has evolved into computed tomography [7, 8]; the contrast mechanisms of NMR spectroscopy form the basis of magnetic resonance imaging [9]; radar has led to synthetic aperture radar (SAR) [10–12]; while the subject of this article, ISAM [13–20], is an example of computed imaging in focused optical systems. Computed imaging techniques also appear in nature—perhaps the most ubiquitous example of what can arguably be described as computed imaging is the stereoptic human visual system, where a pair of two-dimensional images (one collected by each eye) is processed in the brain to give depth perception [21]. These examples of computed imaging are far from forming an exhaustive list—the field is large and growing. Other examples include array-based radio astronomy [22], diffusion tomography [23–25] and positron emission tomography [26]. New applications and contrast mechanisms are still being discovered and the escalation of available computational power is allowing increasingly difficult inverse problems to be solved. For example, the recent explosion of activity in compressive sampling has already brought certain problems in analysis, inference and reconstruction, previously thought to be intractable, into the realm of tractable problems [27, 28]. Instruments employing compressive sensing not only draw inferences from data using a physical model, they exploit statistical redundancy in the description of the object to significantly decrease the amount of data required, e.g. [29].

This article is focused specifically on ISAM imaging technologies. In addition to the broad commonality ISAM has with other computed imaging techniques, it has strong physical and mathematical connections to a family of instruments including SAR, synthetic aperture sonar [30–32], seismic migration imaging [33, 34] and certain modalities in ultrasound imaging [35, 36]. All of these systems apply computed imaging to multi-dimensional data collected using both spatial diversity and a time-of-flight measure from a spectrally-broad temporal signal. In this article ISAM and SAR are cast in the same mathematical framework, with similarities and differences between the two systems discussed throughout.

In the following section, OCT, the forerunner of ISAM, is described. In Sec. 3 a general framework for ISAM, OCT, SAR and radar is developed. The distinctions between the ISAM/SAR and OCT/radar models are discussed within this framework in Sec. 4. In Sec. 5 it is shown how the models used lead to a simple Fourier-domain resampling scheme to reconstruct the imaged object from the collected data. Simulated and experimental results are shown in Sec. 6, while alternative ISAM instrument geometries are briefly discussed in Sec. 7. Conclusions and references appear at the end of this article.

An obvious distinction between ISAM and SAR is the spectrum of the electromagnetic field used to probe the sample—ISAM operates in the near infrared (IR), while most SAR systems operate in the radio spectrum. Probing in the near-IR allows the formation of an image with resolution on the order of microns. Additionally, in many biological tissues the near-IR spectral band is primarily scattered rather than absorbed [37], allowing greater depth of penetration than at other wavelengths. Near-IR light backscattered from an object can be used to form a three-dimensional image using OCT [38–41]. Since the image is formed based on the natural scattering properties of the object, OCT and related methods are non-invasive and non-perturbing, cf. methods such as histology (which requires destruction of the sample) or fluorescence microscopy (which requires staining of the object).

OCT combines interferometry, optical imaging, and ranging. Due to its sensitivity to wavelength-scale distance changes, interferometry has been an important tool in physics (e.g., Young's experiment [42] and the Michelson-Morley experiment [43]) and is now widely applied using many techniques [44]. OCT can be implemented in a Michelson interferometer arrangement as shown in Fig. 1. The focusing optics localize the illumination and collection operations around a transverse focal point. This focal point is scanned in two (transverse) dimensions across the sample. Interferometry with a broadband source is used to image the sample along the third, axial, dimension. The coherently backscattered light and the reference light only interfere for backscattering from a narrow axial (depth) region, of length *L _{c}*, determined by the statistical coherence length of the source (see [Ref. 45], Sec. 4.2.1).

A basic illustration of an OCT system. Light traveling in one arm of a Michelson interferometer is focused into the sample. The length of the reference arm can be adjusted using a moveable mirror. The reference light and the light backscattered from the sample interfere at the detector.

As described above, depth discrimination in OCT is achieved via coherence gating, while transverse resolution is achieved using focusing optics. Ideal focusing optics would produce a thin collimated beam in the sample, described as a pencil beam in Fig. 2. These ideal optics cannot be physically realized, as the propagation laws of electromagnetic radiation prohibit beams that are both perfectly collimated and localized. For focusing systems, the beam is often quantified using a scalar Gaussian beam model [46], within which the depth of focus *b* (i.e., the axial depth over which the beam is approximately collimated) is proportional to the square of the minimum width *ω*_{0} of the beam. As illustrated in Fig. 2, this relationship between *ω*_{0} and *b* implies that the resolution, which improves with decreasing *ω*_{0}, and the depth of focus are competing constraints in OCT. When the coherence gate is set to image planes outside of the depth of focus, the transverse resolution suffers as the beam becomes wider.
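The Gaussian-beam trade-off described above can be made concrete with a short numerical sketch (the wavelength and waist values below are illustrative, not taken from the text):

```python
import math

def beam_width(z, w0, wavelength):
    """Gaussian beam 1/e^2 radius at axial distance z from the waist."""
    z_r = math.pi * w0**2 / wavelength  # Rayleigh range
    return w0 * math.sqrt(1 + (z / z_r)**2)

def depth_of_focus(w0, wavelength):
    """Confocal parameter b = 2 z_R, proportional to the square of w0."""
    return 2 * math.pi * w0**2 / wavelength

wavelength = 0.8e-6  # 800 nm, a typical near-IR OCT wavelength
for w0 in (2e-6, 4e-6, 8e-6):
    b = depth_of_focus(w0, wavelength)
    print(f"w0 = {w0*1e6:.0f} um -> depth of focus = {b*1e6:.0f} um")
    # 31, 126, 503 um: b quadruples with each doubling of w0
```

Halving the focal spot size thus quarters the depth of focus, which is the resolution-versus-depth-of-field conflict that ISAM resolves computationally.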

ISAM uses computational imaging to overcome the trade-off between depth of focus and resolution. By accurately modeling the scattering processes and the data collection system, including the defocusing ignored in OCT image formation, the scattering properties of the object can be quantitatively estimated from the collected data. As in SAR, diffraction-limited resolution is achieved throughout the final image. For both ISAM and SAR the key to this capability is the coherent collection of a complex data set.

Interferometric microscopes [47], such as OCT systems, give holographic data, i.e., the phase of the backscattered light can be recovered from the raw data. This is a substantial advantage over standard non-interferometric systems where the phase information is lost at detection. This holographic data collection is analogous to the coherent data collection used in SAR systems. Indeed, parallels between SAR and holographic microscopy were recognized and discussed in a series of papers [48-50]. In both ISAM and SAR, the collection of complex coherent data allows the numerical implementation of advantageous operations that would be prohibitively difficult to implement physically. In SAR the multiple along-track range profiles collected from a small aperture can be used to synthesize an aperture corresponding to the whole along-track path. In ISAM, multiple complex OCT range profiles can be computationally reconstructed so that all planes appear simultaneously in-focus, i.e., the blurred out-of-focus regions seen in OCT can be brought into focus numerically.

In both SAR and ISAM an electromagnetic wave is used to probe the object, the detection apparatus is scanned in space, and a time-of-flight measurement is used to image an additional spatial dimension. Thus, in a fundamental sense, the connection between the data and the object is determined by the same physical laws in either case. This analogy can also be extended to other wave-based techniques such as ultrasound and seismic migration imaging. In this section a general model for radar, SAR, OCT and ISAM techniques is presented. While there are significant differences in system scale and operation, see Fig. 3, the analogy between SAR and ISAM is sufficiently strong to allow a common mathematical description.

An illustration of the differences between the data acquisition geometries in SAR and ISAM. SAR involves a one-dimensional scan track, while ISAM scans over a plane. Unlike SAR beams, ISAM fields include a region within the object that is in focus.

As shown in Fig. 3, both SAR and ISAM systems involve a translation of the aperture. This aperture position will be described by a vector **ρ**, while the vector **r** describes the position in the imaged object. In the SAR case, a linear-track strip-map system is considered so that the detector is moved along points on a line.

Consider the scattered field returned to the aperture when the aperture is offset from the origin by **ρ**, the object consists of a point scatterer at the position **r**, and an ideal temporal impulse response is used as input to the aperture. This returned scattered field will be denoted by *ĥ*(**r** − **ρ**, *t*), so that the field scattered from a general object may be written,

(1)

The linearity of the system is predicated on the assumption that multiple scattering effects are negligible—this is often known as the first Born approximation (see [Ref. 45], Sec. 7.6.2). The system input *Ê _{r}*(*t*) is the pulse broadcast from the aperture.

It is often convenient to represent the temporal convolution seen in Eq. (1) in the Fourier domain so that,

(2)

A caret (^) above a function denotes that the function is represented in the space-time domain, while the absence of a caret denotes a function represented in the space-frequency domain. The fact that *η*(**r**) is not a function of *ω* in Eq. (2) is indicative of an implicit assumption made in Eq. (1). The assumption is that the imaged susceptibility is a constant function of the probing signal frequency, i.e., that the object is not dispersive. This assumption is adequate over sufficiently narrow regions of the spectrum or when the object does not have significant absorbing resonant peaks over the imaging band. This is often true to a good approximation in the biological samples imaged using ISAM.

The backscattered field incident on the detecting aperture is represented in Eq. (1). Rather than being used directly, this field is typically processed in radar systems, in a technique known as pulse compression. The most common processing used is a matched filter [53], which can be expressed as,

(3)

where *Î _{R}* represents the processed radar data. In Eq. (3), the detected field is filtered with a function matched to the broadcast pulse *Ê _{r}*(*t*).
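The pulse compression described above can be sketched with illustrative (not system-specific) parameters: a linear FM chirp is broadcast, an echo returns with some delay, and correlation against the transmitted waveform compresses the long pulse into a sharp, well-localized peak:

```python
import numpy as np

fs = 1e6                       # sample rate (Hz)
t = np.arange(0, 1e-3, 1 / fs)  # 1 ms pulse, 1000 samples
# Linear FM chirp: instantaneous frequency sweeps ~10 kHz -> ~210 kHz.
chirp = np.cos(2 * np.pi * (1e4 * t + 0.5 * 2e8 * t**2))

delay = 300                    # echo delay, in samples
echo = np.zeros(4096)
echo[delay:delay + len(t)] += 0.5 * chirp  # attenuated delayed copy

# Matched filter: correlate the received signal with the broadcast pulse;
# keep only non-negative lags.
compressed = np.correlate(echo, chirp, mode="full")[len(chirp) - 1:]
print("estimated delay:", int(np.argmax(np.abs(compressed))))  # -> 300
```

The compressed peak width is set by the sweep bandwidth rather than the pulse duration, which is the point of transmitting a long coded pulse.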

While coherent detection of *Ê _{s}*(**r**, **ρ**, *t*) is straightforward at radio frequencies, optical frequencies are far too high for detector electronics to follow the oscillations of the field directly. In OCT the complex field is instead accessed interferometrically, by mixing the backscattered light with the reference beam.

The response times of optical detectors are generally of such a scale that the measured data can be considered a long-time average over optical time scales. Assuming that the fields in the system are statistically stationary and ergodic (see [Ref. 45], Sec. 2.2), these long time averages can be written as

(6)

where *τ* is the temporal delay on the reference arm,
and the brackets represent an ensemble average. That *Î _{T}*(**ρ**, *τ*) depends only on the delay *τ*, and not on absolute time, is a consequence of the assumed stationarity of the fields.

Because Γ* _{rr}*(0) and Γ* _{ss}*(0) are independent of the reference delay *τ*, these terms contribute only a background that can be removed from the data; the cross-correlation Γ* _{sr}* carries the object information.

Using the definition of Γ* _{sr}*(**ρ**, *τ*) and the field model above, the interferometric data can be written,

(7)

As in Eq. (4), these data can be written in the Fourier domain. The Fourier domain data will again be denoted by *S*(**ρ**, *ω*), so that,

(8)

where, in this case, *A*(*ω*) is the power spectral density of the reference beam, which is found, via the Wiener-Khintchine theorem (see [Ref. 45], Sec. 2.4), as the Fourier transform of Γ* _{rr}*(*τ*).

The identical forms of Eq. (4) and Eq. (8) illustrate the commonalities between radar and OCT. Both can be regarded as linear systems collecting data in *N* − 1 spatial and 1 spectral dimension, in order to estimate a spatial-domain object of *N* dimensions. Note that these data collection models have different integral kernels *h*(**r** − **ρ**, *ω*), reflecting the different instruments and field geometries involved.

The instrument modality described in the section above and illustrated in Fig. 1 is known as time-domain OCT or time-domain ISAM. It is however, possible to collect the data directly in the frequency domain, in a methodology known as Fourier-domain OCT [56]. In this system the reference mirror seen in Fig. 1 is fixed and the detector replaced with a spectrometer. Fourier-domain OCT eliminates the need for scanning the reference mirror position and has significant advantages in terms of image acquisition time and/or signal-to-noise ratio (SNR) [57, 58]. A complementary principle is applied in Fourier transform infrared spectroscopy [59, 60], where spectral information is measured using interferometric time-domain measurements.
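The principle of Fourier-domain ranging can be illustrated with a minimal sketch (all parameters are illustrative): a single reflector at depth *z*_{0} modulates the source spectrum with fringes of the form cos(2*kz*_{0}), and a Fourier transform of the measured spectrum localizes the reflector without any moving reference mirror:

```python
import numpy as np

n = 2048
k = np.linspace(7.0e6, 8.6e6, n)        # optical wavenumbers (rad/m), near-IR band
z0 = 100e-6                              # reflector depth below the reference plane
spectrum = 1.0 + 0.5 * np.cos(2 * k * z0)  # source term plus interference fringes

profile = np.abs(np.fft.fft(spectrum))   # depth profile from the spectrum
dk = k[1] - k[0]
peak = 1 + np.argmax(profile[1:n // 2])  # skip the DC (source) term at index 0
z_est = np.pi * peak / (n * dk)          # depth conjugate to the fringe frequency 2k
print(f"recovered depth: {z_est * 1e6:.1f} um")  # ~100 um
```

This is the sense in which a fixed-reference spectral measurement encodes the whole axial line at once.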

In Fourier-domain OCT or ISAM the collected data are,

(9)

where *τ*_{0} represents the fixed delay on the reference arm. Note that the Fourier-domain reference and sample fields appearing above are spectral domain representations of random processes. These Fourier domain representations are assumed to exist, at least in the sense of mean-square stochastic convergence of the Fourier integral [61].

The first term of Eq. (9) is the power spectral density *A*(*ω*) used above. This term is constant in **ρ** and typically slowly varying with *ω*, and so is easily estimated and subtracted from the measured spectrum.

The Fourier spectrum, *S*(**ρ**, *ω*), defined above in terms of the cross-correlation of the sample and reference fields, satisfies,

(10)

This suggests that the remaining term in Eq. (9) be written 2**Re**{exp(−*iωτ*_{0})*S*(**ρ**, *ω*)}, so that the same complex data *S*(**ρ**, *ω*) considered in the time-domain case can be extracted from Fourier-domain measurements.

In this section equivalent detection models have been posed for OCT and radar, as represented by Eq. (4) and Eq. (8) respectively. To understand image formation, the integral kernel *h*(**r** − **ρ**, *ω*) appearing in these models must now be specified.

As shown by Eq. (4) and Eq. (8), the relationship between the object *η*(**r**) and the data *S*(**ρ**, *ω*) is determined by the integral kernel *h*(**r** − **ρ**, *ω*).

As described in Sec. 3.1, the time-domain kernel *ĥ*(**r** − **ρ**, *t*) assumed in OCT and radar is separable into transverse and axial factors,

(11)

where **r**_{‖} is the transverse component of **r**, *u*(**r**_{‖}) describes the width of the illuminating beam, υ(**r**_{‖}) describes the width of the detection sensitivity, and *t _{d}*(*z*) is the round-trip delay associated with the depth *z*.

The temporal delay *t _{d}*(*z*) is given by,

*t _{d}*(*z*) = 2*z*/*c*, (12)

where *c* is the speed of light. If the same aperture is used in both transmission and detection, reciprocity [63] requires that *u*(**r**_{‖}) = υ (**r**_{‖}). Appealing to Eq. (11) and Eq. (12), Eq. (7) becomes,

(13)

This expression relates time-domain OCT data to the imaged object. The object is convolved with a PSF with transverse extent governed by *u*^{2}(**r**_{‖}) and axial extent determined by Γ* _{rr}*(*τ*), i.e., by the coherence length of the source.

It is convenient to take Eq. (13) into the temporal Fourier domain:

(14)

where *k*(*ω*) is the wavenumber given by the dispersion relation,

*k*(*ω*) = *ω*/*c*. (15)

This expression is an alternative representation of the time-domain data and directly describes the information bearing term in Fourier-domain OCT. Comparing Eq. (14) to Eq. (8) reveals that the kernel used for the OCT forward model is given by the expression

(16)

Reliance on this approximate model limits OCT and radar imaging systems—OCT images are of increasingly poor quality away from the depth of focus, and transverse radar resolution is limited by the beam width and hence the maximum aperture size.

The computed imaging approaches of SAR and ISAM are based on models that more closely approximate solutions of Maxwell's equations. Contrary to the assumptions made in OCT, the transverse and axial system responses cannot be decoupled accurately, due to the beam-spreading illustrated in both Fig. 2 and Fig. 3. The changes in the model are reflected by changes in the kernel *h*(**r** − **ρ**, *ω*).

The kernel *h*(**r** − **ρ**, *ω*) appropriate for SAR and ISAM may be written,

(17)

Here the objective lens (ISAM) or transmitting aperture (SAR) produces a field *g*(**r** − **ρ**, *ω*) in the object, and the backscattered light is collected with a corresponding sensitivity pattern *g _{s}*(**r** − **ρ**, *ω*).

When the same aperture is used for both illumination and detection (as is typically the case), reciprocity can again be invoked to show that the illumination and detection patterns are equal. Furthermore, the illumination field *g*(**r** − **ρ**, *ω*) can be expressed as an angular spectrum of plane waves,

(18)

where,

(19)

In free space *k*(*ω*) is given by Eq. (15), however more complicated dispersion relations can also be used for dispersive materials [18, 64, 65], where the speed of light depends on *ω*.

The angular spectrum of Eq. (18) must be modified for the two-dimensional SAR system. In SAR a two-dimensional (*x*, *z*) object is imaged, meaning that **r**_{‖} and **q**_{‖} are each one-dimensional. However, the electromagnetic fields present in the system spread in three dimensions. In the simple strip-map SAR system considered here, the SAR aperture track and the object are both assumed to lie in the *x*–*z* plane, i.e., the aperture altitude is neglected. In this geometry the spreading in *y* can be modeled as a [*k*(*ω*)*z*]^{−1/2} decay so that, for the SAR system, Eq. (18) becomes,

(20)

As will be seen subsequently, this difference in dimensionality between SAR and ISAM does not change the nature of the data processing required, only the dimensionality of the processing.

The angular spectra *G*(**q**_{‖}, *ω*) and *G _{s}*(**q**_{‖}, *ω*) describe the illumination and collection patterns in the transverse Fourier domain, and can be calculated from a model of the focusing optics.

The forward model used in SAR and ISAM (Eq. (17)) is more accurate than that assumed in radar and OCT (Eq. (16)). In the simple OCT model each point in the data set is associated with a point in the object, as described in Eq. (13). To correct for the out-of-focus blurring described by the more accurate kernel of Eq. (17), mathematical processing must be applied. The appropriate computed imaging algorithm is described in the next section.

The linear integral equation of Eq. (8) and the expression for the kernel, given in Eq. (17), form the forward model used in SAR and ISAM. This relation describes the dependence of the data on the object. Estimating the object from the data, using the forward model, requires solving the inverse problem. In general this problem may be ill-posed, but with the use of regularization techniques [67-69], an estimate of the object may be found. The quality of this estimate will depend on how much information is passed by the instrument.

Since the ISAM forward model is well defined, the inverse problem can, in principle, be solved using numerical techniques. However, an approximation to the forward model allows a more elegant, and significantly more efficient [20], solution to the inverse problem. This solution is explained in this section.

The angular spectrum representations seen in Eq. (18) and Eq. (20) give the transverse spatial Fourier transform of the illuminating field *g*(**r**, *ω*). The model kernel can then be taken to the transverse spatial Fourier domain, denoted by a tilde, by noting that the product seen in Eq. (17) becomes a convolution,

(21)

Comparing Eq. (18) and Eq. (20), it can be seen that the SAR result is similar to the expression above but in one fewer dimension and with a prefactor of [*k*(*ω*)*z*]^{−1}.

As a first step towards the solution of the inverse problem, it is useful to recognize that the transverse part of the integral appearing in Eq. (8) is in the form of a two-dimensional convolution. Thus, by taking the two-dimensional (transverse) spatial Fourier transform of the data, the inverse problem may be reduced from a three-dimensional integral equation to a series of one-dimensional integral equations, i.e.,

(22)

As illustrated in Fig. 3, the fields used in SAR and ISAM are divergent away from the *z* = 0 plane. In Eq. (21), this implies that the complex exponential factor in the integrand is rapidly oscillating. Such oscillatory integrals can be approximated using the method of stationary phase (see [Ref. 45], Sec. 3.3). The stationary point occurs when the argument of the exponential has zero gradient, which in this case is at the point **q** = **q**_{‖}/2.
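With *k _{z}*(**q**, *ω*) = [*k*^{2}(*ω*) − |**q**|^{2}]^{1/2}, the stationary-phase step can be sketched as follows (a hedged reconstruction, written to be consistent with the mapping shown in Fig. 4):

```latex
% Phase of the integrand in Eq. (21), up to the common factor z:
%   \phi(\mathbf{q}) = k_z(\mathbf{q},\omega) + k_z(\mathbf{q}_\parallel - \mathbf{q},\omega)
% The gradient vanishes at the symmetric point:
\nabla_{\mathbf{q}}\,\phi(\mathbf{q}) = 0
  \;\Longrightarrow\; \mathbf{q} = \tfrac{1}{2}\,\mathbf{q}_\parallel ,
\qquad
k_z(\mathbf{q},\omega) = \sqrt{k^2(\omega) - |\mathbf{q}|^2} .
```

Evaluating the phase at this point gives the axial frequency −2*k _{z}*(**q**_{‖}/2, *ω*) that appears in the Stolt mapping of Fig. 4.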

Applying the method of stationary phase in two dimensions gives the ISAM result,

(23)

where *H _{D}*(**q**_{‖}, *ω*) is the transfer function associated with the divergent region of the beam.

As seen in Fig. 3, the object in ISAM, unlike in SAR, contains the focused *z* = 0 plane. Around this region the exponential seen in the integrand of Eq. (21) is not highly oscillatory, meaning the method of stationary phase cannot be accurately applied. However, it is still possible to approximate the function *h̃*(**q**_{‖}, *z*, *ω*) to obtain an elegant inversion [19].

In the focal region, the integrand of Eq. (21) is dominated by the product *G*(**q**, *ω*)*G _{s}*(**q**_{‖} − **q**, *ω*). For symmetric apertures, this product will be peaked around the point **q** = **q**_{‖}/2. The exponential factor may be expanded in a Taylor series about this point and, since it is slowly varying for small *k*(*ω*)*z*, all but the leading term discarded. The consequent analysis, given in detail in [Ref. 14], then results in an approximation of the form,

(24)

The exponential factor above is the same as for the diverging region.

The approximated models described above can be substituted into the data model of Eq. (22) to give,

(25)

where *H*(**q**_{‖}, *ω*) = *H _{F}*(**q**_{‖}, *ω*) in the focal region and *H*(**q**_{‖}, *ω*) = *H _{D}*(**q**_{‖}, *ω*) in the divergent region.

In Eq. (25), *A*(*ω*) and *H*(−**q**_{‖}, *ω*) act as linear filters on the data. The effects of these filters can be compensated by standard means, such as the Wiener filter [70]. For systems without aberrations, the function *H*(**q**_{‖}, *ω*) is slowly varying, as is *A*(*ω*), meaning that it may be acceptable to neglect the effects of *A*(*ω*)*H*(−**q**_{‖}, *ω*) in many situations.
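A minimal Wiener-deconvolution sketch may clarify the filtering step (the one-dimensional filter shape standing in for *A*(*ω*)*H*(−**q**_{‖}, *ω*), the noise level and the sparse test object are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
obj = np.zeros(n)
obj[[40, 90, 170]] = 1.0                      # sparse stand-in "object"
H = np.exp(-np.linspace(-3, 3, n) ** 2)       # smooth band shape over the FFT bins
data_f = np.fft.fft(obj) * H                  # filtered Fourier-domain data
data_f += 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))  # noise

nsr = 1e-3                                    # assumed noise-to-signal power ratio
wiener = np.conj(H) / (np.abs(H) ** 2 + nsr)  # divides out H where SNR is high
estimate = np.real(np.fft.ifft(data_f * wiener))
print("recovered peaks:", sorted(np.argsort(estimate)[-3:].tolist()))
```

The regularizing `nsr` term keeps the filter from amplifying noise where `H` is small, which is exactly the caution behind preferring a Wiener filter over naive division.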

In either case, the remaining integral in Eq. (25) can be seen to be of the form of a Fourier transform. Consequently,

(26)

where *η͌′* is the three-dimensional Fourier transform of *η*(**r**)/*R*(*z*), the object with an attenuation away from focus, and

(27)

This equation describes a Fourier domain warping relating the data and the object. This warping is known as the Stolt mapping and is illustrated in Fig. 4. The Stolt mapping was originally developed in the field of geophysical imaging [71, 72] and is used in Fourier migration techniques. In ultrasonic imaging, Eq. (27) forms the basis of the Synthetic Aperture Focusing Technique (SAFT) [73-76]. The Stolt mapping was also recognized as applicable in SAR [77], where it is typically known as the *ω*−*k* algorithm or the wavenumber algorithm. This work shows the utility of the Stolt mapping in the field of interferometric broadband microscopy.
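The warping of Eq. (27) can be sketched numerically in one transverse dimension (the grid sizes and band limits are illustrative). For each transverse frequency *q*, data sampled uniformly in *k* = *ω*/*c* are interpolated onto uniform samples of the axial frequency *β* = −2*k _{z}*(*q*/2, *ω*) = −[4*k*^{2} − *q*^{2}]^{1/2}:

```python
import numpy as np

q = np.linspace(-5e6, 5e6, 65)       # transverse spatial frequencies (rad/m)
k = np.linspace(7.0e6, 8.6e6, 128)   # optical wavenumbers (rad/m)
data = np.random.default_rng(1).standard_normal((65, 128))  # stand-in Fourier data

# Uniform target grid of axial frequencies beta = -sqrt(4 k^2 - q^2).
beta_grid = np.linspace(-2 * 8.6e6, -2 * 7.0e6, 128)

remapped = np.zeros_like(data)
for i, qi in enumerate(q):
    # wavenumber at which beta(k; q) equals each requested grid value
    k_needed = np.sqrt((beta_grid ** 2 + qi ** 2) / 4.0)
    # points that map outside the source band are zero-filled
    remapped[i] = np.interp(k_needed, k, data[i], left=0.0, right=0.0)
```

At *q* = 0 the map reduces to *β* = −2*k*, a pure relabeling of the axis; away from *q* = 0 it bends the grid, which is the curvature drawn in Fig. 4.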

A geometric illustration of the Stolt mapping relating a point [**q**_{‖}, *k*(*ω*)] in the Fourier-domain data to a point [**q**_{‖}, −2*k*_{z}(**q**_{‖}/2, *ω*)] in the Fourier-domain object.

The equivalent Fourier mapping for OCT, found from the kernel of Eq. (16) and valid only within the focal region, is

(28)

This OCT model describes only a rescaling of the axial coordinate, while the Stolt mapping of Eq. (26) describes the physical effects of out-of-focus beam spreading.

The relation given in Eq. (26) gives a clear indication of how to estimate the object from the collected data *S*(**ρ**, *ω*):

- Starting with the complex data *S*(**ρ**_{‖}, *ω*), collected as described in Sec. 3, take the transverse spatial Fourier transform to get *S̃*(**q**_{‖}, *ω*).
- Implement a linear filtering, i.e., a Fourier-domain multiplication of a transfer function with *S̃*(**q**_{‖}, *ω*), to compensate for the bandpass shape given by *A*(*ω*)*H*(−**q**_{‖}, *ω*) in Eq. (25). This step may often be omitted without significant detriment to the resulting image.
- Warp the coordinate space of *S̃*(**q**_{‖}, *ω*) so as to account for the Stolt mapping illustrated in Fig. 4. Resample the result back to a regular grid to facilitate numerical processing.
- Take the inverse three-dimensional Fourier transform to get an estimate of *η*(**r**)/*R*(*z*), the object with an attenuation away from focus.
- If required, multiply the resulting estimate by *R*(*z*) to compensate for decay of the signal away from focus.
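The steps above can be collected into one runnable sketch, here in a single transverse dimension (the function name, the synthetic data, and the omission of the optional filtering and *R*(*z*) steps are illustrative choices, not the authors' implementation):

```python
import numpy as np

def isam_reconstruct(s, drho, k):
    """Sketch of ISAM reconstruction. s: complex data S(rho, omega) of shape
    (n_rho, n_k), uniformly sampled; drho: transverse step; k: wavenumbers."""
    n_rho, n_k = s.shape
    # Step 1: transverse spatial Fourier transform of the data.
    s_q = np.fft.fft(s, axis=0)
    q = 2 * np.pi * np.fft.fftfreq(n_rho, d=drho)
    # Step 2 (omitted here): divide out A(w)H(-q, w), e.g. with a Wiener filter.
    # Step 3: Stolt warp; resample each q-row from k onto a uniform grid of
    # |beta|, where beta = -2 k_z(q/2, w) = -sqrt(4 k^2 - q^2).
    beta = np.linspace(2 * k[0], 2 * k[-1], n_k)
    warped = np.zeros_like(s_q)
    for i in range(n_rho):
        k_needed = np.sqrt((beta ** 2 + q[i] ** 2) / 4.0)
        warped[i] = (np.interp(k_needed, k, s_q[i].real, left=0.0, right=0.0)
                     + 1j * np.interp(k_needed, k, s_q[i].imag, left=0.0, right=0.0))
    # Step 4: inverse Fourier transform back to the spatial domain.
    # Step 5 (omitted here): multiply by R(z) to undo decay away from focus.
    return np.fft.ifft2(warped)

# Synthetic stand-in data: 64 transverse positions, 128 wavenumbers.
rng = np.random.default_rng(2)
s = rng.standard_normal((64, 128)) + 1j * rng.standard_normal((64, 128))
img = isam_reconstruct(s, drho=2e-6, k=np.linspace(7.0e6, 8.6e6, 128))
print(img.shape)  # (64, 128)
```

Every operation is an FFT or a one-dimensional interpolation, which is why the full reconstruction is computationally inexpensive.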

The operations described above are computationally inexpensive and allow a fast implementation of ISAM processing [20].

In this section ISAM images are compared to those obtained using standard OCT methods. The high quality of the results obtained validates the calculations made above, while also showing that the approximations made to the forward model in Sec. 5 do not introduce significant error in the solution to the inverse problem.

Numerical simulations of the ISAM system are useful for providing a theoretical corroboration of the proposed methods in a tightly controlled and well understood environment. In Fig. 5, simulation results are shown for the imaging of an isotropic point scatterer located out of focus on the *z* axis.

Simulated OCT image from a point scatterer located at (0, 0, 10)*μ*m (a) and the real part of the corresponding Fourier representation (c). The ISAM Fourier resampling takes the data shown in (c) to the reconstruction of (d). The corresponding spatial-domain ISAM image is shown in (b).

The data were produced using the focused vector beam formulation given in [66]. The electromagnetic field defined in that paper is an exact solution to Maxwell's equations, and obeys geometrical-optics boundary conditions on the lens aperture. An objective lens with 0.75 numerical aperture was simulated and light between the wavelengths of 660nm and 1000nm was collected. Further details of this type of data simulation can be found in [Ref. 14].

The magnitude of the spatial-domain OCT data gives a broadly spread and low-amplitude response. Ideally the image would be point-like, corresponding to the point scatterer. The blurring observed is due to the scatterer being in the out-of-focus region. When the OCT image is examined in the Fourier domain, curved phase fronts can be seen. For the offset point scatterer imaged, the Fourier spectrum should have flat phase fronts parallel to the *q _{x}*–*q _{y}* plane.

The Fourier resampling of ISAM can be seen to take the curved OCT phase fronts to the expected straight lines. When the ISAM image is represented in the spatial domain, the desired high-amplitude, point-like image is seen. These simulations lend strong support to ISAM, as the detailed, vectorial forward model is inverted accurately by a simple Fourier-domain resampling only.

Beyond simulations, the next step in ISAM validation is to image an engineered object (i.e. a phantom) with known structure. Here the phantom was constructed by embedding titanium dioxide scatterers, with a mean diameter of 1*μ*m, in silicone. This phantom was imaged with a spectral-domain ISAM system employing an objective lens with a numerical aperture of 0.05. A femtosecond laser (Kapteyn-Murnane Laboratories, Boulder, Colorado) was used as a source, to give a central wavelength of 800nm and a bandwidth of 100nm. The resulting focused pattern *g*(**r**, *ω*) can be approximated as a Gaussian beam with a spot size of 5.6*μ*m and a depth of focus of approximately 240*μ*m. Further details of the ISAM instrument and the phantom can be found in [Ref. 18].

ISAM processing, including dispersion compensation [64], was applied to the collected data to produce an image. Specific details of the computational implementation can be found in [Ref. 18], while [Ref. 20] gives a thorough general description of ISAM algorithms and computational demands. The raw data and the ISAM reconstruction are shown in Fig. 6 and Fig. 7 (transverse-axial and transverse-transverse planes respectively), with corresponding renderings in Fig. 8.

Planar *x*–*y* slices of the OCT image volume (d) and the ISAM reconstruction volume (e). Three planes are shown, with details of extent 80*μ*m×80*μ*m for each. One of the planes is located at *z* = − 1100*μ*m (c,h).

Three-dimensional renderings of the OCT (a) and ISAM (b) images of titanium dioxide scatterers. Out of focus blur can be seen in the OCT image, while the ISAM reconstruction has isotropic resolution. Note that the axial axis has been rescaled.

Out of focus blurring is clearly visible in the collected data. This blurring limits the depth of field in OCT. The ISAM reconstruction can be seen to bring the out-of-focus regions back into focus, as evidenced by the point-like features in the image, which correspond to individual titanium dioxide scatterers. It should be noted that the point-like reconstructions observed are produced by the physics-based computational imaging, not by the use of any assumed prior knowledge of the sample, e.g., [78]. The *x*–*y* details of Fig. 7 provide further insight into the action of the ISAM resampling algorithm. In Fig. 7(b) and Fig. 7(c) interference fringes can be clearly seen. These result from the simultaneous illumination of two (or more) point scatterers and the consequent interference of the light scattered from each. The reconstructions of Fig. 7(g) and Fig. 7(h) show that these interference fringes are correctly interpreted as multiple point scatterers in the ISAM reconstruction.

To further illustrate the SAR-ISAM analogy, ISAM and SAR images are compared below. Strip-map radar and SAR images from a linear rail SAR imaging system [79,80] are shown in Fig. 9. This imaging system consists of a small radar sensor mounted on a linear rail 225cm in length. The radar sensor is moved down the rail in 2.5cm increments, acquiring a range profile of the target scene at each location along the rail. The radar sensor is a linear FM radar system with 5GHz of chirp bandwidth spanning approximately 7.5GHz to 12.5GHz. The chirp time is 10ms, the transmit power is approximately 10dBm, the receiver dynamic range is better than 120dB, and the digitizer dynamic range is 96dB. Range profile data from each increment along the rail are fed into a range-migration SAR algorithm [12], a Stolt Fourier resampling, to yield a high-resolution SAR image of the target scene. Raw radar range profile data are similar to out-of-focus data in coherence microscopy, as seen in Fig. 9(a) for radar and Fig. 6(b) for OCT. The SAR image, after Stolt Fourier resampling, is shown in Fig. 9(b), which is analogous to the ISAM image of Fig. 6(c).
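The Stolt step at the heart of the range-migration algorithm amounts to a one-dimensional interpolation of the Fourier-domain data onto a uniform axial-frequency grid. The Python sketch below shows that resampling step in isolation, assuming the usual monostatic mapping *k_z* = √((2*k*)² − *k_x*²); the Jacobian (amplitude) weighting and matched filtering that a full range-migration implementation also requires are deliberately omitted.

```python
import numpy as np

def stolt_resample(D, kx, k, kz_grid):
    """Resample Fourier-domain data D[i, j] = D(kx[i], k[j]) onto a
    uniform kz grid via the monostatic Stolt mapping
    kz = sqrt((2k)^2 - kx^2).  Minimal sketch: the Jacobian weighting
    and matched filter of a full range-migration algorithm are omitted."""
    out = np.zeros((len(kx), len(kz_grid)), dtype=complex)
    for i, kxi in enumerate(kx):
        valid = (2.0 * k) ** 2 > kxi ** 2        # evanescent region excluded
        kz = np.sqrt((2.0 * k[valid]) ** 2 - kxi ** 2)
        # interpolate real and imaginary parts onto the uniform kz grid
        out[i] = (np.interp(kz_grid, kz, D[i, valid].real, left=0.0, right=0.0)
                  + 1j * np.interp(kz_grid, kz, D[i, valid].imag, left=0.0, right=0.0))
    return out

# After this resampling, a 2-D inverse FFT of the output yields the image.
kx = np.array([0.0])
k = np.linspace(1.0, 2.0, 64)
D = np.ones((1, 64), dtype=complex)
img_spec = stolt_resample(D, kx, k, kz_grid=np.linspace(2.0, 4.0, 32))
print(img_spec.shape)   # (1, 32); at kx = 0 the mapping is simply kz = 2k
```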

OCT and ISAM are primarily biological imaging methods. As such, the most important capability of ISAM is the imaging of tissue. As described in [Ref. 18], human breast tissue was acquired and imaged with the same ISAM system used to image the titanium dioxide scatterers. Examples of the resulting images can be seen in Fig. 10. Once again, it can be seen that ISAM successfully removes blur and resolves interference artifacts in otherwise out of focus regions.

Breast tissue is imaged according to the geometry illustrated in the rendering in the upper left. Data are shown in the *x*–*y* plane for two different values of *z*. Plane A is at *z* = − 643 *μ*m, while plane B is at *z* = − 591 **...**

The improvement observed in the ISAM reconstructions has significant consequences in terms of the diagnostic utility of the images. In the out-of-focus OCT images, the cellular structure is almost entirely lost, while in the ISAM reconstructions, significant features can be seen on the micrometer scale. For example, cell membranes can be recognized, and the boundary between the adipose and fibrous tissue can be clearly seen. There is also a strong correspondence to the histological sections, although embedding, sectioning and staining of the tissue disrupt the sample to some extent. ISAM, unlike OCT, can be seen to allow diffraction-limited imaging at all planes within the sample, rather than just at the physical focus. As a result, significantly more information regarding the tissue can be extracted without increasing the measurement duration or scanning the focal plane. In contrast to the histological images, the structure visible in the ISAM images is observed without destruction of the sample. This suggests ISAM may be particularly useful in applications where in vivo imaging over a large tissue volume is preferable to biopsy.

ISAM is a microscopic imaging technique and is implemented on a bench-top scale. This provides significant flexibility in the design of alternative ISAM modalities. In this section some alternative ISAM instruments are briefly discussed.

To achieve a maximum-resolution image it is necessary to use the highest possible numerical aperture objective lens (high-numerical-aperture OCT is often known as optical coherence microscopy [81]). For such high-angle lenses the electromagnetic fields present in the system cannot be accurately approximated as scalar fields. Furthermore, it has been shown that the vectorial nature of the high-aperture focused field can be explicitly exploited to probe anisotropic properties of the object, e.g., [82-87]. ISAM can be generalized to vectorial fields [14].

In the vectorial system, scattering from the object depends on the polarization state of the relevant fields. As a result, the object is not a scalar function *η*(**r**), but a rank-two tensor function of position, **η**(**r**). The illumination and detection patterns, **g**(**r**, *ω*) and **f**(**r**, *ω*), are vectorial, which results in six independent ISAM kernels, one for each unordered pair of field directions in illumination and detection. That is,

(29)

where *g*(**r** − *ρ*, *ω*, *α*) is an element of the field **g**(**r**,*ω*), and *α* takes on values of *x*, *y* and *z*. The scalar kernel of Eq. (17) is a special case of this expression.

It can be shown that the data then depend on the scattering tensor as [14],

(30)

where *η*(**r**, *α*, *β*) is an element of the tensor **η**(**r**). The scalar model of Eq. (8) is a special case of this expression.

It can be seen from Eq. (29) that *h*(**r** − *ρ*, *ω*, *α*, *β*), like the scalar kernel, depends on the transverse coordinates only through the difference **r** − *ρ*, so the vectorial inverse problem can again be solved by filtering and resampling in the Fourier domain [14].

Full-field OCT systems [89-93] involve capturing *x*–*y* images sequentially, one frequency *ω*, or delay *τ*, at a time. A similar system has been analyzed for ISAM [15]. In this system the object is illuminated with a *z*-propagating plane wave, so that the illumination pattern is

(31)

The angular spectrum of the illuminating light is then,

(32)

The scattered light is collected by an objective lens, so that the detection pattern *f*(**r**, *ω*) is of the same form as *g*(**r**, *ω*) in Eq. (18).

The spatial-domain kernel of Eq. (17) can then be taken into the Fourier domain using the same process as that used to find Eq. (21). That is,

(33)

This exact kernel is of the same form as the approximated forward models of Eq. (23) and Eq. (24). As a result, the relationship between the Fourier-domain data and the Fourier-domain object for the full-field ISAM system is of the same form as Eq. (26), but with the new mapping,

(34)

Thus the inverse problem in full-field ISAM is also solved by a Fourier-domain resampling, albeit on a different grid.

Confocal ISAM is analogous to SAR, and both techniques share the Stolt mapping. The full-field ISAM mapping of Eq. (34) also appears in diffraction tomography [94–96], a technique applied in ultrasound, optical and microwave imaging.
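The two grids can be compared directly. Setting signs and prefactors aside (the article's Eq. (26) and Eq. (34) fix them exactly), the confocal and full-field systems probe axial spatial frequencies √(4*k*² − *Q*²) and *k* + √(*k*² − *Q*²), respectively, at transverse frequency *Q*; these forms are standard in the literature but should be read here as assumptions. A short numerical comparison:

```python
import numpy as np

# Axial spatial frequency probed at transverse frequency Q, wavenumber k.
# Standard forms from the literature; signs and prefactors are an
# assumption here -- the article's Eqs. (26) and (34) are authoritative.
def beta_confocal(Q, k):        # confocal ISAM / SAR (Stolt mapping)
    return np.sqrt(4.0 * k**2 - Q**2)

def beta_fullfield(Q, k):       # full-field ISAM / diffraction tomography
    return k + np.sqrt(k**2 - Q**2)

k = 1.0
for Q in (0.0, 0.5, 1.0):
    print(f"Q={Q}: confocal={beta_confocal(Q, k):.3f}, "
          f"full-field={beta_fullfield(Q, k):.3f}")
# Both reduce to 2k at Q = 0 (direct backscattering) and diverge as Q
# grows, which is why the two systems resample onto different grids.
```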

Rotationally-scanned ISAM [16] is a sensing system compatible with catheter-based imaging, as illustrated in Fig. 11. Rather than scanning the aperture in two coplanar dimensions, the aperture is scanned in one linear dimension (along the catheter) and one rotational dimension (along the azimuthal angle).

An illustration of the rotationally-scanned ISAM system. A single-mode fiber delivers light to focusing optics which project the beam into the object. The beam is scanned linearly inside a catheter sheath and is rotated about the long catheter axis. This **...**

The complex analytic signal, *S*(*p*, *θ*, *ω*), is sampled at points given by the displacement *p* along the *y* axis and the azimuthal coordinate *θ*, as well as the usual frequency *ω*. Taking the Fourier transform with respect to both *p* (argument *ξ*) and *θ* (argument *η_{θ}*) results in the transformed data *S̃*(*ξ*, *η_{θ}*, *ω*),

(35)

where *K*(*ξ*, *η_{θ}*, *ω*) is a kernel that can be expressed as

(36)

where *x̂* and *ŷ* are unit vectors and

(37)

is a weighted Fourier transform of the object *η*(**r**), with *P*(**r**) a function of the radial distance to focus [16]. It is thus seen that the solution of the inverse problem may again be reduced to a filtering and resampling of the data between appropriate forward and inverse Fourier transforms.

In a recent analysis [97], it has been shown that a spatially-extended, statistically partially-coherent source can be incorporated in full-field ISAM to produce differing illumination and detection patterns *g*(**r**, *ω*) and *f* (**r**, *ω*). Varying the source coherence length allows considerable control of *g*(**r**, *ω*) and also results in a changing resampling scheme in the inverse problem. The multiple scattering artifacts that can be problematic in full-field ISAM can be effectively mitigated using partially-coherent ISAM.

ISAM is a computed imaging technique that quantitatively estimates a three-dimensional scattering object in broadband coherent microscopy. The solution of the inverse problem allows the reconstruction of areas typically regarded as out of focus. The result obviates the perceived trade-off between resolution and depth of focus in OCT.

ISAM, like OCT, is a tomographic method, i.e., the images produced are truly three-dimensional. While ISAM addresses an inherent weakness in OCT, namely the need to scan the focus axially to obtain images outside of the original focal plane, ISAM is not merely a method to refocus the field computationally. Refocusing may be achieved from a single interferometric image at a fixed frequency, but the resulting image is still inherently two-dimensional, failing to unambiguously distinguish contributions to the image from various depths. As in other ranging technologies, the broadband nature of ISAM allows a true three-dimensional reconstruction.

ISAM and SAR are closely related technologies, to the point where they can be cast in the same mathematical framework. Both techniques employ a Fourier domain resampling, based on the Stolt mapping, in the inverse processing. While the mathematics of the two systems are closely related, each uses a significantly different region of the electromagnetic spectrum and images objects of commensurately different scales. In SAR, translation of the aperture and computational imaging allow the synthesis of a virtual aperture whose dimension depends on the along-track path length, rather than the physical aperture size. This larger synthetic aperture produces an image of higher resolution than would otherwise be achievable. In OCT the size of the physical aperture (i.e., the objective lens) is not the limiting factor; rather, the image acquisition time becomes prohibitively long if the focal plane must be scanned through an object of extended depth. The computational imaging in ISAM gives diffraction-limited resolution in all planes, not just at the physical focus, and hence eliminates the need for focal-plane scanning.

ISAM and SAR are examples in the broad class of modalities known as computed imaging. Like almost all computed imaging modalities in common practice today, they are based on the solution of linear inverse problems. Linear inverse problems offer advantages such as the option to pre-compute and store the elements of an inversion kernel for rapid computation of images from data. Moreover, error and stability may be well understood, and there exists a wealth of well-studied methods for regularization (stabilization) of the inversion algorithms. ISAM and SAR are also members of the more restrictive class of problems that may be cast as data resampling. To arrive at the resampling view of these problems, the data must be Fourier transformed and the resampled data Fourier transformed again. Thus the methods take advantage of (and are even reliant on) one of the greatest advances in applied mathematics in the last half-century, the fast Fourier transform [98]. They may be made to run very fast and are amenable to parallelization.
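As a concrete illustration of this pattern, the sketch below runs the three steps, forward FFT, interpolation onto a new frequency grid, and inverse FFT, on a one-dimensional toy signal. The frequency warp used is arbitrary and purely illustrative, standing in for the Stolt-type mappings discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal(256)

# Step 1: forward FFT of the raw data.
spectrum = np.fft.fft(data)

# Step 2: interpolate the spectrum onto a warped frequency grid.  The warp
# here is arbitrary (illustration only), not the article's Stolt mapping.
grid = np.arange(256.0)
warped = np.sqrt(grid**2 + 16.0) - 4.0
resampled = (np.interp(warped, grid, spectrum.real)
             + 1j * np.interp(warped, grid, spectrum.imag))

# Step 3: inverse FFT gives the reconstructed "image".  All three steps
# are O(N log N) or cheaper, which is what makes these methods fast.
image = np.fft.ifft(resampled)
print(image.shape)   # (256,)
```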

The authors would like to thank Gregory L. Charvat for providing the SAR data seen in Fig. 9. This research was supported in part by grants from the National Science Foundation (CAREER Award 0239265 to P.S.C.; BES 03-47747, BES 05-19920, BES 06-19257 to S.A.B.), the National Institutes of Health (1 R01 EB005221 to S.A.B.), and the Grainger Foundation (to S.A.B. and P.S.C).

1. Röntgen W. C. On a new kind of rays. Nature. 1896;1369:274–276.

2. Bloch F., Hansen W. W., Packard M. The nuclear induction experiment. Phys. Rev. 1946;70:474–485.

3. Carr H. Y., Purcell E. M. Effects of diffusion on free precession in nuclear magnetic resonance experiments. Phys. Rev. 1954;94:630–638.

4. Buderi R. The Invention That Changed the World. second edition. Abacus; 1998.

5. Sabra A. I. The Optics of Ibn Al-Haytham. The Warburg Institute; 1989.

6. Schaller R. R. Moore's law: past, present and future. IEEE Spectrum. 1997;34(6):52–59.

7. Cormack A. M. Representation of a function by its line integrals, with some radiological applications. J. Appl. Phys. 1963;34:2722–2727.

8. Hounsfield G. N. Computerized transverse axial scanning (tomography): part I. Description of system. Br. J. Radiol. 1973;46:1016–1022. [PubMed]

9. Lauterbur P. C. Image formation by induced local interactions: examples employing nuclear magnetic resonance. Nature. 1973;242:190–191. [PubMed]

10. Curlander J. C., McDonough R. N. Synthetic Aperture Radar: Systems and Signal Processing. Wiley-Interscience; 1991.

11. Gough P. T., Hawkins D. W. Unified framework for modern synthetic aperture imaging algorithms. Int. J. Imag. Syst. Technol. 1997;8:343–358.

12. Carrara W. G., Goodman R. S., Majewski R. M. Spotlight Synthetic Aperture Radar Signal Processing Algorithms. Artech House; 1995.

13. Davis B. J., Ralston T. S., Marks D. L., Boppart S. A., Carney P. S. Autocorrelation artifacts in optical coherence tomography and interferometric synthetic aperture microscopy. Opt. Lett. 2007;32:1441–1443. [PubMed]

14. Davis B. J., Schlachter S. C., Marks D. L., Ralston T. S., Boppart S. A., Carney P. S. Nonparaxial vector-field modeling of optical coherence tomography and interferometric synthetic aperture microscopy. J. Opt. Soc. Am A. 2007;24:2527–2542. [PubMed]

15. Marks D. L., Ralston T. S., Boppart S. A., Carney P. S. Inverse scattering for frequency-scanned full-field optical coherence tomography. J. Opt. Soc. Am A. 2007;24:1034–1041. [PubMed]

16. Marks D. L., Ralston T. S., Carney P. S., Boppart S. A. Inverse scattering for rotationally scanned optical coherence tomography. J. Opt. Soc. Am A. 2006;23:2433–2439. [PubMed]

17. Ralston T. S., Marks D. L., Carney P. S., Boppart S. A. Inverse scattering for optical coherence tomography. J. Opt. Soc. Am A. 2006;23:1027–1037. [PubMed]

18. Ralston T. S., Marks D. L., Carney P. S., Boppart S. A. Interferometric synthetic aperture microscopy. Nat. Phys. 2007;3:129–134.

19. Ralston T. S., Marks D. L., Boppart S. A., Carney P. S. Inverse scattering for high-resolution interferometric microscopy. Opt. Lett. 2006;24:3585–3587. [PubMed]

20. Ralston T. S., Marks D. L., Carney P. S., Boppart S. A. Real-time interferometric synthetic aperture microscopy. Opt. Express. 2008;16:2555–2569. [PMC free article] [PubMed]

21. Wheatstone C. Contributions to the physiology of vision. Part the first. On some remarkable, and hitherto unobserved, phenomena of binocular vision. Philos. T. Roy. Soc. 1838;128:371–394. [PubMed]

22. Kellermann K. I., Moran J. M. The development of high-resolution imaging in radio astronomy. Annu. Rev. Astrophys. 2001;39:457–509.

23. Markel V. A., Schotland J. C. Inverse problem in optical diffusion tomography. I. Fourier-Laplace inversion formulas. J. Opt. Soc. Am. A. 2001;18:1336–1347. [PubMed]

24. Milstein A. B., Oh S., Webb K. J., Bouman C. A., Zhang Q., Boas D. A., MIllane R. P. Fluorescence optical diffusion tomography. Appl. Opt. 2003;42:3081–3094. [PubMed]

25. Boas D. A., Brooks D. H., Miller E. L., DiMarzio C. A., Kilmer M., Gaudette R. J., Zhang Q. Imaging the body with diffuse optical tomography. IEEE Signal Proc. Mag. 2001;18(6):57–75.

26. Ollinger J. M., Fessler J. A. Positron-emission tomography. IEEE Signal Proc. Mag. 1997;14(1):43–55.

27. Donoho D. L. Compressed sensing. IEEE Trans. Inf. Theory. 2006;52:1289–1306.

28. Candès E. J., Romberg J., Tao T. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory. 2006;52:489–509.

29. Takhar D., Laska J. N., Wakin M. B., Duarte M. F., Baron D., Sarvotham S., Kelly K. F., Baraniuk R. G. Proc. Computational Imaging IV. volume 6065. SPIE; 2006. A new compressive imaging camera architecture using optical-domain compression.

30. Sato T., Ueda M., Fukuda S. Synthetic aperture sonar. J. Acoust. Soc. Am. 1973;54:799–802.

31. Williams R. E. Creating an acoustic synthetic aperture in the ocean. J. Acoust. Soc. Am. 1976;60:60–73.

32. Hayes M. P., Gough P. T. Broad-band synthetic aperture sonar. IEEE J. Oceanic Eng. 1992;17:80–94.

33. Gray S. H., Etgen J., Dellinger J., Whitmore D. Seismic migration problems and solutions. Geophysics. 2001;66:1622–1640.

34. Bleistein N., Cohen J. K., Stockwell J. W. Mathematics of Multidimensional Seismic Imaging, Migration and Inversion. Springer; 2001.

35. Angelsen B. A. J. Ultrasound Imaging: Waves, Signals and Signal Processing. Emantec; 2000.

36. Nelson T. R., Pretorius D. H. Three-dimensional ultrasound imaging. Ultrasound Med. Biol. 1998;24:1243–1270. [PubMed]

37. Profio A. E., Doiron D. R. Transport of light in tissue in photodynamic therapy. Photochem. Photobiol. 1987;46:591–599. [PubMed]

38. Huang D., Swanson E. A., Lin C. P., Schuman J. S., Stinson W. G., Chang W., Hee M. R., Flotte T., Gregory K., Puliafito C. A., Fujimoto J. G. Optical coherence tomography. Science. 1991;254:1178–1181. [PubMed]

39. Schmitt J. M. Optical coherence tomography (OCT): a review. IEEE J. Select. Topics Quantum Electron. 1999;5:1205–1215.

40. Brezinski M. E. Optical Coherence Tomography: Principles and Applications. Academic Press; 2006.

41. Zysk A. M., Nguyen F. T., Oldenburg A. L., Marks D. L., Boppart S. A. Optical coherence tomography: a review of clinical development from bench to bedside. J. Biomed. Opt. 2007;12 (051403) [PubMed]

42. Young T. A Course of Lectures on Natural Philosophy and the Mechanical Arts. Joseph Johnson; London: 1807.

43. Michelson A. A., Morley E. W. On the relative motion of the Earth and the luminiferous ether. Am. J. Sci. 1887;34:333–345.

44. Hariharan P. Optical Interferometry. Academic Press; 2003.

45. Mandel L., Wolf E. Optical Coherence and Quantum Optics. Cambridge University; 1995.

46. Saleh B. E. A., Teich M. C. Fundamentals of Photonics. John Wiley and Sons; 1991. pp. 80–107. chapter 3.

47. Gabor D., Goss W. P. Interference microscope with total wavefront reconstruction. J. Opt. Soc. Am. 1966;56:849–858.

48. Arons E., Leith E. Coherence confocal-imaging system for enhanced depth discrimination in transmitted light. Appl. Opt. 1996;35:2499–2506. [PubMed]

49. Leith E. N., Mills K. D., Naulleau P. P., Dilworth D. S., Iglesias I., Chen H. S. Generalized confocal imaging and synthetic aperture imaging. J. Opt. Soc. Am. A. 1999;16:2880–2886.

50. Chien W.-C., Dilworth D. S., Elson L., Leith E. N. Synthetic-aperture chirp confocal imaging. Appl. Opt. 2006;45:501–510. [PubMed]

51. Davis B. J., Ralston T. S., Marks D. L., Boppart S. A., Carney P. S. International Conference on Image Processing. volume 4. IEEE; 2007. Interferometric synthetic aperture microscopy: physics-based image reconstruction from optical coherence tomography data; pp. 145–148.

52. Born M., Wolf E. Principles of Optics. 6 edition. Cambridge University; 1980.

53. Turin G. An introduction to matched filters. IEEE Trans. Inf. Theory. 1960;6:311–329.

54. Wahl D. E., Eichel P. H., Ghiglia D. C., Jakowatz C. V., Jr. Phase gradient autofocus—a robust tool for high resolution SAR phase correction. IEEE Trans. Aero. Elec. Sys. 1994;30:827–835.

55. Ralston T. S., Marks D. L., Carney P. S., Boppart S. A. Phase stability technique for inverse scattering in optical coherence tomography. 3rd International Symposium on Biomedical Imaging; 2006. pp. 578–581.

56. Fercher A. F., Hitzenberger C. K., Kamp G., El-Zaiat S. Y. Measurement of intraocular distances by backscattering spectral interferometry. Opt. Commun. 1996;117:43–48.

57. Choma M. A., Sarunic M. V., Yang C., Izatt J. A. Sensitivity advantage of swept source and Fourier domain optical coherence tomography. Opt. Express. 2003;11:2183–2189. [PubMed]

58. Leitgeb R., Hitzenberger C. K., Fercher A. F. Performance of Fourier domain vs. time domain optical coherence tomography. Opt. Express. 2003;11:889–894. [PubMed]

59. Bhargava R., Wang S.-Q., Koenig J. L. FTIR microspectroscopy of polymeric systems. Adv. Polym. Sci. 2003;163:137–191.

60. Griffiths P. R., De Haseth J. A. Fourier Transform Infrared Spectrometry. second edition. Wiley-Interscience; 2007.

61. Papoulis A., Pillai S. U. Probability, Random Variables and Stochastic Processes. McGraw-Hill; 2002. pp. 513–522. chapter 11.4.

62. Zhao Y., Chen Z., Saxer C., Xiang S., de Boer J. F., Nelson J. S. Phase-resolved optical coherence tomography and optical Doppler tomography for imaging blood flow in human skin with fast scanning speed and high velocity sensitivity. Opt. Lett. 2000;25:114–116. [PubMed]

63. Potton R. J. Reciprocity in optics. Rep. Prog. Phys. 2004;67:717–754.

64. Marks D. L., Oldenburg A. L., Reynolds J. J., Boppart S. A. A digital algorithm for dispersion correction in optical coherence tomography. Appl. Opt. 2003;42:204–217. [PubMed]

65. Marks D. L., Oldenburg A. L., Reynolds J. J., Boppart S. A. Autofocus algorithm for dispersion correction in optical coherence tomography. Appl. Opt. 2003;42:3038–3046. [PubMed]

66. Richards B., Wolf E. Electromagnetic diffraction in optical systems. II. Structure of the image field in an aplanatic system. Proc. R. Soc. London A. 1959;253:358–379.

67. Hansen P. C. Numerical tools for analysis and solution of Fredholm integral equations of the first kind. Inverse Prob. 1992;8:849–872.

68. Karl W. C. Handbook of Image and Video Processing. Academic; 2000. pp. 141–161. chapter Regularization in Image Restoration and Reconstruction.

69. Vogel C. R. Computational Methods for Inverse Problems. SIAM; 2002.

70. Wiener N. Extrapolation, Interpolation, and Smoothing of Stationary Time Series. The MIT Press; 1964.

71. Gazdag J., Sguazzero P. Migration of seismic data. Proc. IEEE. 1984;72:1302–1315.

72. Stolt R. H. Migration by Fourier transform. Geophysics. 1978;43:23–48.

73. Langenberg K. J., Berger M., Kreutter T., Mayer K., Schmitz V. Synthetic aperture focusing technique signal processing. NDT Int. 1986;19:177–189.

74. Mayer K., Marklein R., Langenberg K. J., Kreutter T. Three-dimensional imaging system based on Fourier transform synthetic aperture focusing technique. Ultrasonics. 1990;28:241–255.

75. Schmitz V., Chakhlov S., Müller Experiences with synthetic aperture focusing technique in the field. Ultrasonics. 2000;38:731–738. [PubMed]

76. Passmann C., Ermert H. A 100-MHz ultrasound imaging system for dermatologic and ophthalmologic diagnostics. IEEE Trans. Ultrason. Ferr. 1996;43:545–552.

77. Cafforio C., Prati C., Rocca F. SAR data focusing using seismic migration techniques. IEEE Trans. Aero. Elec. Sys. 1991;27:194–207.

78. Ralston T. S., Marks D. L., Kamalabadi F., Boppart S. A. Deconvolution methods for mitigation of transverse blurring in optical coherence tomography. IEEE Trans. Image Proc. 2005;14:1254–1264. [PubMed]

79. Charvat G. L. PhD thesis. Michigan State University; 2007. A Low-Power Radar Imaging System.

80. Charvat G. L., Kempel L. C., Coleman C. A low-power, high sensitivity, X-band rail SAR imaging system. IEEE Antenn. Propag. Mag. 2008;50 (to be published)

81. Izatt J. A., Hee M. R., Owen G. M., Swanson E. A., Fujimoto J. G. Optical coherence microscopy in scattering media. Opt. Lett. 1994;19:590–592. [PubMed]

82. Abouraddy A. F., Toussaint K. C., Jr. Three-dimensional polarization control in microscopy. Phys. Rev. Lett. 2006;96:153901. [PubMed]

83. Beversluis M. R., Novotny L., Stranick S. J. Programmable vector point-spread function engineering. Opt. Express. 2006;14:2650–2656. [PubMed]

84. Novotny L., Beversluis M. R., Youngworth K. S., Brown T. G. Longitudinal field modes probed by single molecules. Phys. Rev. Lett. 2001;86:5251–5254. [PubMed]

85. Quabis S., Dorn R., Leuchs G. Generation of a radially polarized doughnut mode of high quality. Appl. Phys. B. 2005;81:597–600.

86. Sick B., Hecht B., Novotny L. Orientational imaging of single molecules by annular illumination. Phys. Rev. Lett. 2000;85:4482–4485. [PubMed]

87. Toprak E., Enderlein J., Syed S., McKinney S. A., Petschek R. G., Ha T., Goldman Y. E., Selvin P. R. Defocused orientation and position imaging (DOPI) of myosin V. PNAS. 2006;103:6495–6499. [PubMed]

88. Butcher P. N., Cotter D. The Elements of Nonlinear Optics. Cambridge University; 1990. pp. 131–134. chapter 5.2.

89. Akiba M., Chan K. P., Tanno N. Full-field optical coherence tomography by two-dimensional heterodyne detection with a pair of CCD cameras. Opt. Lett. 2003;28:816–818. [PubMed]

90. Dubois A., Moneron G., Grieve K., Boccara A. C. Three-dimensional cellular-level imaging using full-field optical coherence tomography. Phys. Med. Biol. 2004;49:1227–1234. [PubMed]

91. Dubois A., Vabre L., Boccara A.-C., Beaurepaire E. High-resolution full-field optical coherence tomography with a Linnik microscope. Appl. Opt. 2002;41:805–812. [PubMed]

92. Laude B., De Martino A., Drévillon B., Benattar L., Schwartz L. Full-field optical coherence tomography with thermal light. Appl. Opt. 2002;41:6637–6645. [PubMed]

93. Považay B., Unterhuber A., Hermann B., Sattmann H., Arthaber H., Drexler W. Full-field time-encoded frequency-domain optical coherence tomography. Opt. Express. 2006;14:7661–7669. [PubMed]

94. Devaney A. J. Reconstructive tomography with diffracting wavefields. Inverse Prob. 1986;2:161–183.

95. Pan S. X., Kak A. C. A computational study of reconstruction algorithms for diffraction tomography: interpolations versus filtered backpropagation. IEEE Trans. Acoust. Speech Signal Proc. 1983;ASSP-31:1262–1275.

96. Wolf E. Three-dimensional structure determination of semi-transparent objects from holographic data. Opt. Commun. 1969;1:153–156.

97. Marks D. L., Davis B. J., Boppart S. A., Carney P. S. Partially coherent illumination in full-field interferometric synthetic aperture microscopy (submitted) J. Opt. Soc. Am. A. 2008 [PMC free article] [PubMed]

98. Brigham E. The Fast Fourier Transform and Its Applications. Prentice-Hall; 1988.
