Optical sectioning of biological specimens provides detailed volumetric information regarding their internal structure. To provide a complementary approach to existing three-dimensional (3D) microscopy modalities, we have recently demonstrated lensfree optical tomography, which offers high-throughput imaging within a compact and simple platform. In this approach, in-line holograms of objects at different angles of partially coherent illumination are recorded using a digital sensor-array, which enables computing pixel super-resolved tomographic images of the specimen. This imaging modality, which forms the focus of this review, offers micrometer-scale 3D resolution over large imaging volumes of, for example, 10–15 mm3, and can be assembled in lightweight and compact architectures. Therefore, lensfree optical tomography might be particularly useful for lab-on-a-chip applications as well as for microscopy needs in resource-limited settings.
Holographic imaging has significantly advanced since Gabor’s invention several decades ago. Together with the use of optoelectronic sensor arrays as the recording medium, digital holography has grown into a powerful imaging technique for the life sciences, enabling microscopic imaging of biological specimens. Toward this end, digital in-line holographic microscopy was developed [2,3] using coherent light sources, i.e., lasers, filtered through pinholes having diameters on the order of the wavelength of light. In this approach, micrometer-scale spatial resolution is routinely achieved by magnifying the holographic fringes using a spherical reference wave rather than using an objective-lens to magnify the optical fields transmitted through the objects. As an alternative to the in-line geometry, off-axis holography [4–8] uses a tilted reference wave to record holograms, and can also be implemented in common-path geometries with the possibility of using white light for increased stability and reduced coherent noise [9–11]. Alternatively, phase-shifting holography [12–15] acquires multiple holograms recorded by precisely shifting the phase of the reference wave. These techniques are capable of providing quantitative phase images of the specimen despite the fact that digital sensor-arrays are responsive only to the intensity of a complex optical field. Furthermore, the lateral resolution achieved by these holography modalities can also be improved by employing synthetic aperture super-resolution techniques [16–18].
Quite importantly, the complex field obtained by holographic reconstruction can be propagated to different depths along the optic axis to obtain volumetric images [3,19,20]. Nevertheless, the axial resolving power of holographic reconstruction remains limited due to its long depth-of-focus [21–25]. Therefore, significant efforts have been devoted to extending the three-dimensional (3D) transfer function of holographic microscopy to improve its sectioning ability and obtain tomographic images of samples with isotropic or nearly isotropic spatial resolution. To this end, sample rotation has been utilized in off-axis holography to record phase projections of the sample, yielding quantitative 3D refractive index distributions of cells and micro-objects. Along the same lines, illumination rotation, as opposed to sample rotation, has also been utilized [27–29], and real-time imaging has been demonstrated with this technique.
Other tomographic microscopy schemes based on holography have also been developed that do not rely on multi-angle views of the sample. Among these, one can cite optical scanning holography, which requires a 2D raster scan to obtain 3D images, and low-coherence holographic microscopy, which uses the short coherence length of the illumination to achieve sectioning [32,33]. Wavelength scanning has also been used to achieve tomographic microscopy based on digital holography. Alternatively, compressive holographic microscopy takes a computational approach that does not rely on multiple images, and can offer improved axial resolution in digital in-line holography using a single hologram. Multivariate statistical analysis and feature extraction techniques have also been demonstrated as computational means of achieving 3D imaging from a single-shot hologram recorded using coherent or partially coherent light [36–38]. Moreover, 3D holographic imaging has also been extended to fluorescent imaging modalities through the use of spatial light modulators, without the need for mechanical scanning [39–41].
Existing holographic tomography platforms, some of which are summarized above, typically have relatively complex structures and rely on magnification (either fringe magnification or image magnification with, for example, objective-lenses) to provide microscopic images, which partially limits their field-of-view (FOV) and reduces the imaging throughput. To enable sectional imaging of large sample volumes (e.g., ≥15 mm3) using relatively simple on-chip architectures, we have recently introduced lensfree optical tomography [42–44], which is based on partially coherent lensfree in-line holography [45,46]. In this approach, a sample is illuminated at different angles using partially coherent quasi-monochromatic light to compute 3D images. This platform offers a 3D spatial resolution of <1 μm × <1 μm × <3 μm along the x, y, and z directions, respectively, over an imaging volume of ~15 mm3. Offering decent spatial resolution in a compact and simple architecture, lensfree optical tomography can be particularly useful for lab-on-a-chip applications, as well as for use in low-resource settings. In this manuscript, we review this recently developed technique, and provide detailed theoretical analysis and experimental characterization of this imaging modality.
Our lensfree optical tomography approach is based on partially coherent lensfree holography [45,46]. Thus, we will first discuss the working principles of our holographic on-chip microscopy approach. A simple illustration of our holographic microscopy platform is shown in Fig. 1. In this on-chip imaging technique, the specimen is placed directly on an optoelectronic sensor array (e.g., a CMOS or CCD chip), and is illuminated with a partially coherent source such as a light emitting diode (LED). The fundamental principle of imaging, as in all digital holography schemes, is to record the interference between the scattered (object wave) and the unperturbed (reference wave) portions of the light transmitted through the sample. This recorded interference pattern encodes the phase information of the object wave in the form of amplitude oscillations, termed fringes. As a result, the recorded intensity image, i.e., the hologram, can be digitally reconstructed to obtain both phase and amplitude information regarding the object wave. Because it relies on recording the interference between waves, holography inherently requires sufficient coherence between these two wavefronts. To achieve this, traditional in-line holography techniques have employed lasers filtered through small apertures (e.g., ~1–2 μm in diameter). In the partially coherent lensfree holography scheme of this manuscript, however, incoherent sources such as LEDs, filtered through unusually large apertures (e.g., 0.05–0.1 mm in diameter), are utilized. In addition to using simple incoherent sources emanating through very large apertures, the objects are also brought closer to the sensor, in contrast to traditional in-line holography schemes, such that the pinhole-to-object distance (z1 ~ 50–100 mm) is one to two orders of magnitude larger than the object-to-sensor distance (z2 ~ 1–4 mm).
Propagation over the distance z1 enables the incoherent illumination at the aperture plane to acquire sufficient spatial coherence at the sensor plane to permit recording of the interference between the object and the reference waves. In addition, the small z2 distance of our hologram recording scheme relaxes the temporal coherence requirements of the technique, such that a relatively wide-band illumination spectrum of, e.g., 10–20 nm can be employed without limiting the achievable spatial resolution. Moreover, this unique geometry enables using the entire active area of the sensor array as the imaging FOV, significantly increasing the imaging throughput. The use of partially coherent illumination significantly reduces coherent noise terms such as speckle and multiple-reflection interference noise originating from air-glass and sample-glass interfaces [46,47], which increases the signal-to-noise ratio of the recorded digital holograms. As further discussed in the results section, the low coherence of the source also reduces the cross talk among objects within the same plane as well as among objects situated at different depths, due to the low spatial coherence at the object plane and the short coherence length of the source, respectively. To support the qualitative explanations provided in this section, a theoretical analysis of hologram formation in partially coherent lensfree holography is presented in the next section.
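To make this geometry concrete, the short sketch below evaluates the relevant quantities for typical values quoted above (z1 ~ 100 mm, z2 ~ 1 mm, a 50 μm aperture). The definitions F = (z1 + z2)/z1 and M = z1/z2 are our assumptions here, chosen to be consistent with the unit fringe magnification (F ~ 1) and aperture demagnification (M > 100) discussed later in the text; the coherence-diameter estimate uses the standard van Cittert–Zernike scaling.

```python
# Hedged sketch of the lensfree hologram-recording geometry (illustrative values).
# Assumed definitions, consistent with the text: F = (z1 + z2) / z1, M = z1 / z2.
z1 = 100e-3          # aperture-to-object distance (m)
z2 = 1e-3            # object-to-sensor distance (m)
aperture = 50e-6     # aperture diameter (m)
wavelength = 500e-9  # center wavelength (m)

F = (z1 + z2) / z1                   # fringe magnification, ~1 (unit magnification)
M = z1 / z2                          # demagnification of the aperture at the object plane
effective_aperture = aperture / M    # apparent source size seen at the object plane
coherence_diameter = wavelength * z1 / aperture  # van Cittert-Zernike estimate

print(F, M, effective_aperture, coherence_diameter)
```

With these numbers F ≈ 1.01, M = 100, the aperture shrinks to an effective ~0.5 μm at the object plane, and the spatial coherence diameter there is on the order of a millimeter, comfortably larger than individual cells.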
To better understand the hologram formation process for partially coherent illumination, let us assume that two point scatterers (laterally separated by 2a and located at the object plane, i.e., z = z1) with a field transmission of the form t(x, y) = 1 + c1δ(x − a, y) + c2δ(x + a, y) are illuminated vertically, where the amplitudes of the complex coefficients c1 and c2 denote the strength of the scattering process, and δ(x, y) defines a 2D Dirac-delta function in space. For the same imaging system, let us assume that a large aperture (at the z = 0 plane) having an arbitrary shape with a transmission function of p(x, y) is uniformly illuminated by a spatially incoherent light source. Then, referring to the coordinate system and the variables provided in Fig. 1, the cross-spectral density at the aperture plane can be written as [46,47]:
where (x1, y1) and (x2, y2) represent two arbitrary points at z = 0, and S(γ) denotes the power spectrum of the incoherent source with a center wavelength (frequency) of λ0 (γ0). Here, the cross-spectral density is a measure of the spatial coherence at a given optical frequency, and can be related to the intensity of an optical field using I(x, y) = ∫W(x, y, x, y, γ)dγ. After free space propagation over a distance of z1, the cross-spectral density at z = z1 (just before interacting with the cells) can be written as
where (x′1, y′1) and (x′2, y′2) represent two arbitrary points on the object plane, as noted in Fig. 1. After interacting with the objects, the cross-spectral density just beyond the object plane is modulated by the transmission function t(x, y); it then propagates a distance of z2 toward the detector plane, which is at z = z1 + z2. Thus, the cross-spectral density at the detector plane is given by
where (xD1, yD1) and (xD2, yD2) define arbitrary points on the detector plane (within the lensfree hologram region of each object), as noted in Fig. 1. At the detector plane, (xD, yD), the recorded intensity i(xD, yD) can then be written as i(xD, yD) = ∫WD(xD, yD, xD, yD, γ)dγ. Assuming t(x, y) = 1 + c1δ(x − a, y) + c2δ(x + a, y), the detected intensity can be decomposed into four main terms, such that i(xD, yD) = D(xD, yD) + I(xD, yD) + H1(xD, yD) + H2(xD, yD), where:
In these equations, “c.c.” and “*” refer to the complex conjugate and convolution operations, respectively, and p̃ denotes the 2D spatial Fourier transform of the arbitrary aperture function p(x, y). D0 represents the background light, which does not contain any information regarding the objects and can be subtracted out digitally. It is rather important to note that (xD, yD) in these equations refers to points within the lensfree in-line hologram extent of an object rather than the entire FOV of the detector array. Further, hc(xD, yD) represents the 2D coherent impulse response of free space propagation over an effective distance of Δz = F · z2. For the incoherent source, we have assumed a center frequency (wavelength) of γ0 (λ0), where the spectral bandwidth was assumed to be much smaller than λ0, with a power spectrum of S(γ) ≈ S0δ(γ − γ0). This approximation is justified since we typically use incoherent sources (e.g., LEDs) at λ0 ~ 500–650 nm with a spectral bandwidth of ~10–20 nm.
Equation (4.1) shows that the background illumination (term D0) is superposed with the classical diffraction terms (proportional to the strength of self-interference, i.e., |c1|2 and |c2|2) that occur between the object and the detector planes under the paraxial approximation, which is valid here since z1 and z2 are typically much longer than the extent of each hologram. Equation (4.2) contains the information of the interference between the scattering points located at the object plane. Similar to the self-interference terms, this cross-interference term, i.e., I(xD, yD), does not contain any useful information as far as holographic reconstruction of the object image is concerned. This cross-interference term is proportional to the amplitude of the Fourier transform of the aperture function, which rapidly decays to zero for a large aperture such as ours; one can therefore estimate that if the separation 2a exceeds the coherence diameter at the object plane (~λ0z1/D, where D is roughly the aperture width), the scattered fields cannot strongly interfere with each other at the detector plane, which reduces the intensity of this cross-interference term, I(xD, yD), for objects far apart within our imaging FOV.
Equations (4.3) and (4.4) denote the dominant holographic terms, which represent the interference of the scattered light from each object with the background/reference wave. H1(xD, yD) and H2(xD, yD) denote the holographic diffraction of the first scatterer, c1δ(x − a, y), and the second scatterer, c2δ(x + a, y), respectively. Further inspecting Eqs. (4.3) and (4.4), we can see that, for each point scatterer, a scaled (by M) and shifted version of the aperture function p(x, y) is convolved with the free space impulse response hc(xD, yD), and hence coherently diffracts toward the sensor plane with an effective propagation distance of Δz = F · z2. In other words, inspection of Eqs. (4.3) and (4.4) suggests that each point scatterer at the object plane [e.g., c1δ(x − a, y)] can be replaced by the squeezed version of the aperture function [e.g., c1 · p(−xD · M + a · M · F, −yD · M)], leading to a blurring effect, which can then be propagated toward the detector plane. As M is typically > 100, the large aperture effectively shrinks down by M fold at the object plane to a size of, e.g., < 500 nm, and therefore does not significantly degrade the spatial resolution during the hologram recording process. Therefore, for M ≫ 1, incoherent illumination through a large aperture is approximately equivalent to coherent illumination of each object individually, as long as the object’s diameter is smaller than the coherence diameter at the object plane, which is easily satisfied in our hologram recording geometry (see Fig. 1).
The derivation above was made for two point scatterers separated by 2a, i.e., c1δ(x − a, y) and c2δ(x + a, y). The more general form of the partially coherent holographic term (the equivalent of Eqs. (4.3) and (4.4) for a continuous 2D distribution of scatterers) can be expressed as:
where s(xD, yD) refers to the transmitted field after the object of interest, which represents the 2D map of all the scatterers located within the sample. The physical effect of the fringe magnification factor (F) on the object hologram can also be visualized in Eq. (5), in harmony with our discussion in the previous paragraphs.
Although multiple in-line holograms are recorded at different illumination angles in lensfree tomographic microscopy, for brevity, the derivation in this section was carried out for the vertical illumination case only. Nevertheless, despite the use of tilted illumination angles, the recorded images at each illumination angle are still in-line holograms, and the findings described above apply to all the holograms obtained at varying angles of illumination. As far as the above conclusions are concerned, the most immediate effect of tilted lensfree illumination is the increased z2 distance. In lensfree optical tomography, the light source is rotated along an arc whose origin is at the sensor surface. Therefore, the z1 distance, being the radius of this arc, remains roughly the same at all angles. The effective z2 distance, however, increases by 1/cos(θ), where θ is the angle of propagation for the undiffracted wave between the object and the sensor planes. That is, as the illumination angle is increased, the field transmitted through the sample propagates a longer distance before reaching the sensor plane. As a result, for the largest angle of illumination, e.g., ~50° in air, the z2 distance effectively increases by ~1.3–1.5 fold, and M gets slightly smaller. Therefore, the effect of the large aperture becomes slightly more pronounced at large angles. Also, since z1 ≫ z2 is satisfied at all angles, our unit fringe-magnification geometry is preserved (i.e., F ~ 1), and the imaging FOV is not significantly compromised. Another implication of the increased z2 distance at larger angles is the elevated need for temporal coherence of the illumination, which will be further discussed in the results section.
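As a quick numerical check of the 1/cos(θ) argument above, the following sketch evaluates the path-length growth factor for a few illumination angles (the nominal z2 value is illustrative):

```python
import math

z2 = 1e-3  # nominal object-to-sensor distance (m), illustrative value

for theta_deg in (0, 30, 40, 50):
    theta = math.radians(theta_deg)
    factor = 1.0 / math.cos(theta)  # growth of the effective z2 distance
    print(theta_deg, round(factor, 3), z2 * factor)
# the factor is ~1.15 at 30 degrees, ~1.31 at 40 degrees, and ~1.56 at 50 degrees;
# refraction inside the sample chamber keeps the actual propagation angle (and
# hence the factor) somewhat below the nominal illumination angle in air
```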
Once lensfree in-line holograms are recorded at different directions of illumination, digital reconstruction is necessary to convert these holograms into microscopic images of the objects at different viewing angles. To this end, the field at the hologram plane, whose phase is unknown, is digitally propagated back toward the object. Digital beam propagation is achieved using the angular spectrum approach [46,48], which convolves an optical field with the impulse response of free space propagation. This convolution is performed in the frequency domain, involving two fast Fourier transforms and a multiplication with the transfer function of propagation. As can be seen in Eqs. (4.3) and (4.4), digital propagation will undo the effects of the coherent diffraction, and the holographic field will converge so as to form transmission images of the objects. Nevertheless, after this digital back-propagation, the “c.c.” terms will diverge further rather than forming images, casting a defocused image, termed the twin image, that overlaps with the real images of the objects. This twin image can be eliminated by recovering the phase of the hologram, which effectively removes the complex conjugate terms in Eqs. (4.3) and (4.4). In this iterative phase recovery approach, the square-root of the hologram intensity (i.e., the amplitude) is used as the initial guess of the optical field at the sensor plane, with zero phase. This initial field is then propagated back and forth between the parallel sensor and object planes, while the loose size of the objects is used as a constraint on the extent of the real images in these iterations to recover the phase. Once the phase is recovered (typically in 10–15 iterations), the final back-propagation yields a cleaned digital image that is almost entirely free of the twin-image artifact.
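The angular spectrum propagation step (two FFTs and a multiplication with the free-space transfer function) can be sketched as follows; this is a minimal NumPy illustration, not the authors' implementation, and it simply discards evanescent frequency components:

```python
import numpy as np

def angular_spectrum_propagate(field, dx, wavelength, z):
    """Propagate a sampled complex field over a distance z (angular spectrum method).

    field: 2D complex array sampled with pixel pitch dx (meters); z may be
    negative for back-propagation. Evanescent frequencies are set to zero."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2      # squared longitudinal frequency
    kz = 2.0 * np.pi * np.sqrt(np.clip(arg, 0.0, None))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)  # transfer function of free space
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Back-propagating a hologram amounts to calling this with a negative z; for the propagating (non-evanescent) components, a forward pass followed by a backward pass over the same distance recovers the original field.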
In the case of tilted illumination, the amplitude of the hologram first needs to be digitally multiplied by a tilted plane wave, whose angle is determined such that the hologram field converges toward the actual position of the object when back-propagated using the same transfer function of free-space propagation. The iterative phase recovery algorithm described above can then be utilized to reconstruct images without the twin-image artifact. As a result, projection images of the sample at different viewing angles can be obtained, which is the key to achieving tomographic microscopy with partially coherent lensfree holography, as detailed in Section 3. To increase the spatial resolution of each projection image, pixel super-resolution (PSR) techniques are utilized, which enable resolving object features that are smaller than the pixel size of the sensor-array, as will be detailed in the following section.
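The iterative twin-image elimination described above can be sketched roughly as follows. This is a simplified support-constrained loop, not the authors' exact implementation: the unit-background replacement outside the object support and all parameter values are assumptions for illustration.

```python
import numpy as np

def propagate(field, dx, wavelength, z):
    # angular spectrum propagation; evanescent components discarded
    ny, nx = field.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, dx), np.fft.fftfreq(ny, dx))
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.where(arg > 0, np.exp(2j * np.pi * np.sqrt(np.clip(arg, 0, None)) * z), 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def recover_phase(holo_amplitude, support, dx, wavelength, z2, n_iter=15):
    """Iterative phase recovery sketch: the measured amplitude is enforced at the
    sensor plane, and a loose object-support constraint at the object plane."""
    field = holo_amplitude.astype(complex)           # initial guess: zero phase
    for _ in range(n_iter):
        obj = propagate(field, dx, wavelength, -z2)  # back-propagate to object plane
        obj = np.where(support, obj, 1.0)            # outside support: unit background
        field = propagate(obj, dx, wavelength, z2)   # forward to the sensor plane
        field = holo_amplitude * np.exp(1j * np.angle(field))  # keep measured amplitude
    return propagate(field, dx, wavelength, -z2)     # final twin-image-reduced image
```

For tilted illumination, the measured amplitude would first be multiplied by the appropriate digital plane wave before entering this loop, as described above.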
As suggested by Eqs. (4.3) and (4.4), for a narrow enough p(−xD · M, −yD · M), the spatial modulation of the holographic term is a chirped function that oscillates faster with increasing radial distance from the center of the lensfree hologram. Since F ~ 1 in our hologram recording geometry (Fig. 1), this chirped function is not magnified or stretched. As a result, the pixel size of the sensor-array plays a critical role in properly sampling these holographic oscillations, making the pixel size an important factor determining the achievable spatial resolution.
By employing PSR techniques [49,50], however, we have circumvented this pixel size limit to achieve submicrometer spatial resolution despite using a sensor array with, e.g., 2.2 μm pixel size. As a result, lensfree on-chip holography with PSR achieves relatively high resolution without trading off the FOV, in contrast to conventional lens-based microscopes. Utilizing PSR techniques is also critical for lensfree optical tomography, as it enables the reconstruction of pixel super-resolved projection images for each viewing angle, which ultimately translates to enhanced lateral and axial resolution.
To implement PSR for a given viewing angle, multiple holograms that are slightly shifted with respect to each other are recorded at that illumination angle. The high-frequency fringe oscillations that are above the noise limit appear aliased in each lower-resolution (LR) raw lensfree hologram. The function of PSR is to output a super-resolved (SR) hologram in which this spatial aliasing/undersampling is resolved by using the information from all the shifted LR lensfree holograms. To record these shifted LR holograms, the objects themselves can be shifted over the sensor array, the aperture can be physically translated [42,43], or, alternatively, multiple apertures can be placed at different positions, all of which can sufficiently shift the lensfree holograms with respect to each other to achieve PSR. The exact amounts of these shifts are not critical, as almost random shifts can perform equally well. This brings a critical flexibility to lensfree on-chip holography for convenient implementation of PSR, even in field-portable compact telemedicine microscopes [43,50], without using, for example, precise motorized stages.
The first step to digitally achieve PSR is to calculate (with no prior knowledge) the shifts of the LR raw holograms with respect to each other, using gradient-based iterative shift estimation methods. After this shift estimation, a single SR hologram can be iteratively calculated by minimizing a cost function defined as the square of the absolute error between the target SR hologram and all the measured LR raw holograms. That is, the synthesized SR hologram needs to be consistent with the LR lensfree measurements when properly shifted and downsampled at the detector plane. Once an SR hologram is calculated, it can be digitally reconstructed using the procedures described in Section 2.C.
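To illustrate the idea, the sketch below uses a naive shift-and-add variant of pixel super-resolution, which places each shifted LR frame onto a finer grid; this is only a didactic stand-in for the iterative least-squares cost minimization described above, and the function name and parameters are our own.

```python
import numpy as np

def shift_and_add_psr(lr_frames, shifts, factor):
    """Naive pixel super-resolution sketch (not the iterative cost-minimization).

    lr_frames: list of (h, w) arrays; shifts: per-frame (dy, dx) subpixel shifts
    in units of LR pixels; factor: upsampling factor of the SR grid."""
    h, w = lr_frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        # map each LR sample to its nearest location on the SR grid
        rows = np.clip(np.round((np.arange(h) + dy) * factor).astype(int), 0, h * factor - 1)
        cols = np.clip(np.round((np.arange(w) + dx) * factor).astype(int), 0, w * factor - 1)
        acc[np.ix_(rows, cols)] += frame
        hits[np.ix_(rows, cols)] += 1
    # average overlapping contributions; unobserved SR pixels stay zero
    return np.divide(acc, hits, out=np.zeros_like(acc), where=hits > 0)
```

In practice, each group of shifted LR holograms (e.g., the 9 frames per angle mentioned later) would be fused into one SR hologram after the shifts have been estimated.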
To demonstrate the spatial resolution enhancement achieved by PSR, Fig. 2 shows a measured LR lensfree hologram and a calculated SR hologram. The SR hologram contains high-frequency fringes that are aliased in the LR raw hologram (see Figs. 2a and 2b). As a result, the digital reconstruction of the SR hologram yields a higher resolution lensfree image, as seen in Fig. 2. In lensfree optical tomography, PSR is implemented separately for all illumination angles, such that all the projection images input to the tomographic reconstruction algorithm are individually pixel super-resolved, enabling high-resolution tomographic microscopy on a chip. It is important to emphasize that the use of partially coherent illumination considerably improves the performance of the PSR approach described here. Since partially coherent illumination reduces coherent noise terms (such as speckle and multiple-reflection interference), the efficiency of PSR improves owing to the increased signal-to-noise ratio. With coherent sources such as lasers, the background noise drastically increases and can overwhelm the weak undersampled holographic fringes, impeding the ability of PSR to resolve them.
We would like to also note that the implementation of PSR is not a fundamental requirement to achieve lensfree tomographic microscopy. PSR is employed to enhance the spatial resolution of individual projection images, which eventually enables obtaining submicrometer lateral resolution in the computed tomographic images. Nevertheless, for applications where a lower spatial resolution is acceptable (e.g., ~1.5 μm lateral and ~4–5 μm axial resolution), the PSR step can be eliminated, which would also significantly reduce the data acquisition time.
Hologram reconstruction essentially involves propagating a wavefront, and therefore different depths along the optic axis can in principle be reconstructed to obtain 3D imaging of a volume using a single 2D holographic image. Nevertheless, holography cannot be considered a truly tomographic imaging modality owing to its low axial resolution [23–25]. Particularly for in-line holography, the axial resolution is practically a strong function of the object size. That is, the depth-of-focus (DOF) is in general comparable to the far-field distance of a particle, which is proportional to s2/λ, where s is the particle diameter and λ is the wavelength of illumination. Partially coherent lensfree holography, as discussed earlier, is also subject to these limitations in axial resolution. To better illustrate this, we digitally reconstructed an LR and a PSR hologram of a microparticle having a diameter of 2 μm at different depths along the optic axis. Figure 3 shows these reconstructed holographic images, where the elongation along the z-direction is clearly visible. We also measured the full-width-at-half-maximum (FWHM) of the axial line profile to be ~90 μm when a single LR lensfree hologram is used for reconstruction, which is reduced to ~45 μm using an SR lensfree hologram. Thus, lensfree on-chip holography cannot provide satisfactory sectional images of samples, regardless of its detection numerical aperture (NA), by simply reconstructing a single hologram at different z distances.
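The s2/λ scaling above can be evaluated for the 2 μm particle of Fig. 3; note that this formula gives only an order-of-magnitude lower bound, and the measured axial FWHM values quoted above are considerably larger:

```python
# Far-field (Fraunhofer) distance of a small particle, s**2 / wavelength.
s = 2e-6             # particle diameter (m)
wavelength = 0.5e-6  # illustrative illumination wavelength (m)

dof = s**2 / wavelength
print(dof)  # 8e-06 m, i.e., ~8 um
```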
To achieve depth-sectioning using partially coherent in-line holography, we have recently demonstrated a lensfree optical tomography technique that offers a 3D spatial resolution of <1 μm × <1 μm × <3 μm (in x, y, and z, respectively) over a large imaging volume of, e.g., 15 mm3. Two key factors enable achieving this 3D resolution without any lenses and using a sensor-chip with 2.2 μm pixel size: (i) illuminating the sample from multiple directions to record lensfree in-line holograms at different viewing angles; and (ii) synthesizing separate lensfree SR holograms of the samples for each illumination angle, obtaining a set of high-resolution projection images of the objects, which are then used to compute tomographic images.
In our lensfree optical tomographic imaging setup, a partially coherent light source situated ~70 mm away from a sensor array illuminates the objects placed on the sensor chip. In the bench-top demonstration illustrated in Fig. 4a, multi-angle illumination is achieved by rotating the light source along two orthogonal arcs in 2° discrete increments, using a motorized stage. To perform PSR, a series of subpixel-shifted holograms are also recorded at each angle by linearly translating the light source to discrete positions on a 3 × 3 grid in the plane parallel to the sensor surface, using step sizes of, e.g., ~60–80 μm, which do not have to be precisely controlled or known a priori. As a result of the large z1/z2 ratio, such relatively large source shifts result in subpixel shifts of the recorded lensfree holograms.
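The subpixel nature of these hologram shifts follows directly from similar triangles in the recording geometry; with illustrative values matching the text (z1 ~ 70 mm, z2 ~ 2 mm, ~70 μm source steps, 2.2 μm pixels):

```python
# A source translation of delta shifts the in-line hologram by delta * z2 / z1.
z1 = 70e-3           # source-to-object distance (m)
z2 = 2e-3            # object-to-sensor distance (m), illustrative
source_step = 70e-6  # lateral source translation (m)
pixel = 2.2e-6       # sensor pixel pitch (m)

hologram_shift = source_step * z2 / z1
print(hologram_shift)  # 2e-06 m: a subpixel shift relative to the 2.2 um pixels
```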
Owing to its architectural simplicity, lensfree optical tomography also lends itself to a compact, cost-effective, and field-portable imaging device. Toward this end, we have also demonstrated a portable lensfree tomographic microscope for use in low-resource settings. This light-weight design, shown in Figs. 4(b–c), is identical to the bench-top setup, except that: (i) multi-angle illumination is provided by devoting an individual LED (butt-coupled to a multimode optical fiber) to each angle, instead of mechanically rotating a light source; and (ii) hologram shifts (to implement PSR) are achieved by electromagnetically actuating the tips of the optical fibers using low-cost, small coils and magnets, as opposed to using mechanical stages. Accordingly, these multimode optical fibers are attached to a common arc-shaped plastic bridge (see Fig. 4b), which has a cylindrical Neodymium magnet at each of its ends. Small electrocoils are placed across these magnets, which actuate the plastic bridge, and hence the fiber tips connected to it, when a voltage is applied. One of these magnets is placed with its long axis orthogonal to the arc, while the other has its long axis normal to the sensor surface. As a result, the first magnet moves the plastic bridge in the direction orthogonal to the arc, while the other pulls down (or pushes up, when the polarity is reversed) one end of the plastic bridge. Owing to the arc-shaped geometry, this pulling (or pushing) force at one end is converted into a slight rotation of the arc, as a result of which the fibers are displaced along the direction of the arc (see, e.g., Supplementary Video 2 of the corresponding publication). Potential differences of up to 5 V are applied to the coils (having a resistance of 50 Ω) to achieve fiber tip displacements of < 500 μm, which generates sufficient hologram shifts at the sensor plane to perform PSR.
As discussed earlier, these fiber displacements need not be precise, nor known a priori, since the shifts are digitally estimated using an iterative gradient-based shift estimation algorithm. Indeed, random shifts work as well as precisely controlled shifts, which significantly relaxes the design criteria for the electromagnetic actuation in our field-portable tomographic microscope. Color filters are also employed (see Fig. 4b) to slightly increase the temporal coherence of the LED illumination in this field-portable device. This tomographic microscope weighs only ~110 grams and has a low power consumption that could enable battery-powered operation in the field. It utilizes a single axis (as opposed to two in the bench-top version) along which the illumination angle is varied, with ~4° increments between projections (as opposed to 2° in the bench-top version). Therefore, the axial resolution is limited to ~7 μm in this portable microscope, while submicrometer lateral resolution can still be achieved.
In our lensfree tomographic imaging experiments, the angular range of illumination has so far been limited to ±50°, since lensfree holograms recorded at larger angles exhibit significant distortions owing to the poor response of the available optoelectronic sensors at such large incidence angles. Because of this limited range of projection images, isotropic spatial resolution in 3D cannot be achieved, and as a result, submicrometer axial resolution cannot be claimed. However, implementing a dual-axis tomography scheme (see Fig. 4a) reduces the amount of missing spatial information and enables a decent axial resolution of < 3 μm. Accordingly, after the recording of the projections along one axis is completed, the sensor, with the sample mounted on it, is rotated 90° to record a second set of projections along the orthogonal direction. In total, 459 images (9 shifted holograms for each angle) per axis are automatically acquired in ~5 min per axis using a custom-developed LabVIEW interface. The acquisition time could be reduced to < 0.5 min per axis using faster mechanical stages together with higher frame-rate sensors (e.g., > 15 fps).
Upon synthesizing the SR holograms and then digitally reconstructing them, projection images at all illumination angles are obtained (see Fig. 5). For weakly scattering objects that are not thicker than the DOF of the projection images (~40–50 μm), these reconstructed amplitude images represent line integrals of the magnitude of the object's transmission function (e.g., scattering strength) along the corresponding direction of illumination (ignoring the diffraction within the object, as in the case of Optical Projection Tomography). In this case, the reconstructed images will represent:

Pθ(xθ, yθ) ∝ ∫ |s(xθ, yθ, zθ)| dzθ (6)
where s(xθ, yθ, zθ) denotes the scattering function of the object, and (xθ, yθ, zθ) defines a coordinate system whose z-axis is aligned with the illumination direction (θ) for a given projection. Then, the 3D object image can be computed by back-projecting these SR projection images using well-established algorithms that are used in, e.g., x-ray and electron tomography. To this end, we employed a filtered back-projection (FBP) technique that enables the computation of a 3D image of the object from multiple 2D projection images, assuming that the projection images represent the line integrals of an object function along the direction of illumination, as in the case of Eq. (6). In FBP, the 2D projection images are first high-pass filtered and then back-projected through the 3D image space. When multiple projections are back-projected and added to each other in the image space, they constructively add up, yielding the 3D image, i.e., the tomograms. Interestingly, this geometrical description of FBP in real space also lends itself to a mathematically equivalent Fourier-space implementation. Accordingly, based on the Fourier-slice theorem, the Fourier transform of each 2D projection image can also be considered as a 2D slice of the 3D spatial frequency spectrum of the object. Such a slice lies on the plane that is normal to the wave vector of the incident light used to record the corresponding projection image. Then, the 2D Fourier transforms of the projection images can be used to fill up and synthesize the 3D Fourier space of the final image (i.e., the tomogram). To perform the above-described back-projection operation, we used an open-source software package, TomoJ, which uses the real-space implementation of filtered back-projection and outputs a 3D image from multiple 2D projection images.
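The real-space FBP pipeline described above (ramp filtering of each projection, followed by smearing it back across the image grid) can be sketched for a single 2D slice. This is an illustrative minimal implementation for parallel-beam geometry, not the TomoJ code used by the authors:

```python
import numpy as np

def filtered_back_projection(sinogram, angles_deg):
    """Reconstruct a 2D slice from parallel-beam projections.

    sinogram: (num_angles, num_detectors) array, one projection per row.
    angles_deg: illumination angle (degrees) of each projection.
    """
    n_ang, n_det = sinogram.shape

    # Step 1: high-pass (ramp) filter each projection in the Fourier domain.
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    # Step 2: back-project each filtered projection across the image grid.
    mid = n_det // 2
    xs = np.arange(n_det) - mid
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        # Detector coordinate hit by each pixel at this illumination angle,
        # with linear interpolation between detector samples.
        t = np.clip(X * np.cos(theta) + Y * np.sin(theta) + mid, 0, n_det - 1.001)
        t0 = t.astype(int)
        frac = t - t0
        recon += (1 - frac) * proj[t0] + frac * proj[t0 + 1]
    return recon * np.pi / (2 * n_ang)
```

Summing the contributions over all angles realizes the constructive addition described in the text: a point object, whose sinogram is a single bright detector column, reconstructs to a sharp peak at the corresponding image location.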
To achieve dual-axis tomography, we follow a previously suggested approach, in which two separate tomograms, one for each axis, are computed using an inverse Radon transformation (via an FBP algorithm). Then, these separate tomograms are merged in frequency space. Each of these computed volume images contains empty regions in its frequency space, as each is computed from a limited angular range. Therefore, for regions where both tomograms contain information, we average their values; for regions where only one tomogram contains spatial information, only the corresponding data is used. This dual-axis operation does not entirely fill the missing region in the Fourier space of the 3D image, but it significantly shrinks this region and, as a result, improves the axial resolution. It should be noted that with a dual-axis tomography scheme, the imaging FOV is reduced to ~15 mm2 (using a sensor with a 24 mm2 active area), since the lensfree holograms of objects close to the sensor edges shift out of the active area at large illumination angles, shrinking the effective FOV in both the x and y directions.
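The Fourier-domain merging rule just described can be sketched as follows. The boolean masks marking which frequency regions each single-axis reconstruction actually filled are assumed to be given (in practice they follow from the angular coverage of each axis); this is a minimal sketch of the combination logic only:

```python
import numpy as np

def merge_dual_axis(tomo_a, tomo_b, mask_a, mask_b):
    """Combine two limited-angle tomograms in the Fourier domain.

    mask_a, mask_b: boolean arrays marking the frequency regions each
    single-axis reconstruction filled (outside its missing wedge).
    """
    Fa, Fb = np.fft.fftn(tomo_a), np.fft.fftn(tomo_b)
    F = np.zeros_like(Fa)
    both = mask_a & mask_b
    F[both] = 0.5 * (Fa[both] + Fb[both])       # average where both have data
    F[mask_a & ~mask_b] = Fa[mask_a & ~mask_b]  # otherwise take whichever
    F[~mask_a & mask_b] = Fb[~mask_a & mask_b]  # axis has information
    return np.real(np.fft.ifftn(F))
```

Frequencies covered by neither mask remain zero, which corresponds to the residual missing region noted in the text: dual-axis acquisition shrinks, but does not eliminate, the unfilled portion of the 3D Fourier space.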
To demonstrate depth sectioning with lensfree optical tomography using the bench-top implementation, we performed experiments with microspheres of 5 μm diameter, randomly distributed in a chamber of ~50 μm thickness. This sample was placed directly on top of a 5 megapixel CMOS chip with 2.2 μm pixel size to record lensfree holograms with unit fringe magnification, as seen in Fig. 6a. The distance from the bottom of the chamber to the sensor surface was ~0.8 mm. As seen in Fig. 6b, a regular holographic reconstruction of a region of interest shows all the beads in focus, and it is not possible to discern the microparticles located in separate layers. After computing the tomographic images as presented in Figs. 6(c1–c4), however, the same region of interest can be successfully sectioned, the results of which are also validated against a conventional microscope (40× objective-lens with 0.65 NA), as shown in Figs. 6(d1–d4). The reconstruction of the tomograms presented in Fig. 6 takes less than 3 min using a single graphics processing unit. Even though these results are presented for a small region of interest, the data for the entire FOV shown in Fig. 6a are collected in a single acquisition step, and the entire sample volume can be tomographically imaged [42,43].
We performed a series of experiments with different microparticles to analyze the imaging performance of lensfree optical tomography. First, we tomographically imaged a sample of microspheres of 2 μm diameter, placed such that the distance of the particles to the sensor surface is ~0.8 mm. The slice images obtained by lensfree tomography in the x-y, y-z and x-z planes through the center of an arbitrarily chosen bead are shown in Figs. 7(a1–a3). The line profiles along the x, y and z directions through the centers of three microbeads situated at different depths are plotted in Figs. 7(b1–b3). It can be observed that the slice image in the x-y plane shows a circularly symmetric cross-section. Had a single-axis tomographic reconstruction been used, this symmetry would have been broken, since the spatial frequency information missing along a single limited-angle axis would distort the 3D point-spread-function (PSF) in the x-y plane, in the direction orthogonal to the rotation axis of the illumination [43,54]. Therefore, dual-axis tomography mitigates this artifact and maintains a symmetric PSF in the x-y plane, although the PSF is still elongated axially, as observed in Figs. 7(a2–a3).
Our tomography platform also offers an extended depth-of-field of ~4 mm over which depth sectioning can be performed. Since the object waves are not collected through high-magnification objective lenses, holograms can be recorded for objects over a large depth range, which increases the imaging volume. Therefore, it is important to quantify the space-variance of the achievable resolution within this extended depth of field. To this end, we conducted more detailed experiments to quantify the effect of the distance (z2) between the sensor and the object plane. This distance is rather important in determining the spatial resolution, since several different factors affecting resolution are functions of z2. As discussed in Section 2, the large illumination aperture can reduce the spatial resolution for large values of z2 (e.g., 3–4 mm), since the extent of the scaled version of the aperture function at the object plane can exceed 1 μm (while it is < 500 nm for typical cases), preventing PSR from providing submicrometer resolution. In addition to this effect, the temporal coherence requirement on the illumination increases with z2, since the optical path difference (OPD) between the scattered object wave and the unscattered background wave increases for large sample-to-sensor distances. If this OPD is longer than the coherence length of the illumination, the contrast of the interference fringes at the sensor plane is reduced, leading to lower spatial resolution. Therefore, using incoherent light sources such as LEDs, the 3D spatial resolution can degrade for objects that are away from the sensor surface, e.g., at z2 = 3–4 mm. Moreover, the signal-to-noise ratio of these holograms (for z2 = 3–4 mm) also drops, which can further limit the achievable spatial resolution. In order to study the combined effect of all these factors, Fig. 8 shows the lateral and axial resolution achieved as a function of the vertical distance from the sensor-array.
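The temporal-coherence argument above can be made concrete with the usual estimate Lc ≈ λ²/Δλ for the coherence length of a broadband source. The wavelength and bandwidth values in the example below are illustrative, not measured parameters of the system:

```python
def coherence_length_um(center_wavelength_nm, bandwidth_nm):
    """Approximate temporal coherence length, L_c ~ lambda^2 / delta_lambda,
    of a partially coherent source, returned in micrometers."""
    lam_um = center_wavelength_nm * 1e-3
    dlam_um = bandwidth_nm * 1e-3
    return lam_um ** 2 / dlam_um

# Illustrative example: a red LED filtered to ~10 nm bandwidth
# gives a coherence length of roughly 41 micrometers.
lc = coherence_length_um(640.0, 10.0)
```

Whenever the OPD between the scattered wave from a high-angle spatial frequency and the unscattered reference wave exceeds this Lc (which happens sooner for larger z2), the corresponding fringes lose contrast, which is the mechanism behind the resolution loss at z2 = 3–4 mm described in the text.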
In these experiments, opaque microbeads with a diameter of 4 μm were used, and the sample was brought to different heights above the sensor using microscope slides as spacers. To quantify the resolution at each z2 distance, we calculated the spatial derivatives of the line profiles of the reconstructed particle images along the x, y and z directions, and measured the FWHM values of their edge responses, which is a commonly used technique to estimate the PSF of an imaging system [28,55]. As shown in Fig. 8, submicrometer lateral and < 3 μm axial resolution is achieved up to ~1 mm distance from the sensor-array plane. For the reasons discussed earlier (related to, e.g., detection SNR and temporal coherence requirements), the 3D resolution degrades by up to twofold when objects are as far as ~4 mm from the sensor-chip surface. Based on these results and the fact that the imaging FOV is 15 mm2, we conclude that a volume of ~15 mm3 can be imaged at a spatial resolution of < 1 μm × < 1 μm × < 3 μm along the x, y and z directions, respectively. At the cost of reduced spatial resolution (by up to twofold), the imaging volume can be further increased to, e.g., ~100 mm3.
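The edge-derivative resolution estimate used above can be sketched as follows: differentiate the measured edge response and take the full width at half maximum of the resulting peak. This is a minimal sketch at integer-pixel granularity; an analysis of real data would typically interpolate and denoise the profile first:

```python
import numpy as np

def fwhm_from_edge(profile, pixel_size):
    """Estimate PSF width from an edge response: the derivative of an
    edge profile approximates the line-spread function, whose FWHM is
    reported in the same units as pixel_size."""
    d = np.abs(np.diff(np.asarray(profile, dtype=float)))
    above = np.where(d >= d.max() / 2.0)[0]  # samples at/above half maximum
    return (above[-1] - above[0] + 1) * pixel_size
```

For a Gaussian line-spread function of standard deviation sigma, this returns approximately 2.355 × sigma, which is the relation commonly used to convert edge-response measurements into the resolution figures quoted in Fig. 8.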
Despite the extended depth of field of ~4 mm achieved by lensfree optical tomography, objects that are optically thick (e.g., > 100 μm) cannot be effectively imaged due to strong scattering within the object. First, for dense objects (such as a tissue sample or a thick blood smear) the unscattered portion of the illumination (i.e., the reference wave) gets distorted, at which point the in-line holographic approach starts to fail. Second, for thick objects within which multiple scattering events typically occur, the reconstructed holograms no longer represent line integrals (projections) of the object function, since the scattered optical field strongly deviates from rectilinear paths within the object. Therefore, the majority of the photons impinging on the sensor plane should be weakly scattered to satisfy the requirements of both in-line holography and projection tomography. This essentially requires that objects within the sample volume be relatively sparsely distributed, and that individual objects not be thicker than the DOF of the reconstructed holograms, which is ~50 μm for our system.
The low spatial coherence of the illumination at the object plane decreases the cross-talk among different objects located in the same depth layer. In other words, the light scattered by objects that are laterally separated by, e.g., > 300–500 μm (over an FOV of 24 mm2) will not coherently interfere at the sensor plane. This reduced speckle noise increases the accuracy and the signal-to-noise ratio of the reconstructed lensfree images. We also note that using partially coherent illumination with short coherence lengths of, e.g., ~20–300 μm (depending on the spectral bandwidth and the center wavelength of the illumination) brings an important advantage by reducing the effect of multiple scattering, especially for thick samples. That is, the light scattered from objects that are axially separated by more than the coherence length cannot interfere at the sensor plane, while each can still interfere with the unscattered reference wave, forming its individual hologram. As a result, cross-talk among different layers of a sample is reduced, and the holographic reconstruction around a depth of interest becomes more accurate despite the existence of objects in other depth layers.
As mentioned earlier, the results presented in this section were obtained using the bench-top lensfree tomography system. We also demonstrated a field-portable tomographic microscope, shown in Figs. 4(b–c), based on the same lensfree approach. This microscope, which uses a single illumination axis, is capable of providing submicrometer lateral resolution and < 7 μm axial resolution, and it has been shown to effectively image different sections through biologically relevant micro-objects such as parasites. The main reason for the lower axial resolution of this handheld unit compared to our bench-top results is that the portable implementation employs a single axis along which the illumination is rotated (±50°), using larger angular increments between projection images than the bench-top implementation, i.e., 4° as opposed to 2°.
Finally, for completeness, we should also point out that, for better integration of imaging platforms with microfluidic devices, we have also demonstrated lensfree optofluidic tomography on a chip, where the flow of the sample within a microfluidic channel (mounted on a sensor-array) is utilized to implement PSR holography without the need for shifting the aperture. In this way, moving objects within optofluidic devices can be imaged in 3D without stopping the flow within the microchannels.
We have reviewed lensfree optical tomography as a recently developed 3D on-chip imaging modality. This imaging platform offers a 3D spatial resolution of < 1 μm × < 1 μm × < 3 μm along the x, y and z directions, respectively, over an imaging volume of ~15 mm3, without the need for any lenses. Owing to its simplicity, this technique also lends itself to field-portable architectures, enabling light-weight (~110 grams), compact and cost-effective microscopes for field use. These characteristics render lensfree optical tomography a viable tool for high-throughput lab-on-a-chip imaging applications as well as for telemedicine microscopy.
A. Ozcan acknowledges the support of the National Science Foundation (NSF) (CAREER Award on Bio-Photonics), the Office of Naval Research (ONR) (Young Investigator Award), and the National Institutes of Health (NIH) Director’s New Innovator Award—DP2OD006427 from the Office of the Director, NIH. The authors also acknowledge the support of the Gates Foundation, Vodafone Americas Foundation, and the NSF Biophotonics, Advanced Imaging, and Sensing for Human Health program (under Award Nos. 0754880 and 0930501).
Data sets associated with this article are available at http://hdl.handle.net/10376/1610.