

Lab Chip. Author manuscript; available in PMC 2010 July 13.


PMCID: PMC2902728

NIHMSID: NIHMS206727

Onur Mudanyali,^{a} Derek Tseng,^{a} Chulwoo Oh,^{a} Serhan O. Isikman,^{a} Ikbal Sencan,^{a} Waheb Bishara,^{a} Cetin Oztoprak,^{a} Sungkyu Seo,^{a} Bahar Khademhosseini,^{a} and Aydogan Ozcan^{*,a,b}

Despite the rapid progress in optical imaging, most of the advanced microscopy modalities still require complex and costly set-ups that unfortunately limit their use beyond well-equipped laboratories. In the meantime, microscopy in resource-limited settings has requirements significantly different from those encountered in advanced laboratories, and such imaging devices should be cost-effective, compact, light-weight, and sufficiently accurate and simple to be usable by minimally trained personnel. Furthermore, these portable microscopes should ideally be *digitally* integrated as part of a telemedicine network that connects various mobile health-care providers to a central laboratory or hospital. Toward this end, here we demonstrate a lensless on-chip microscope weighing ~46 grams with dimensions smaller than 4.2 cm × 4.2 cm × 5.8 cm that achieves sub-cellular resolution over a large field of view of ~24 mm^{2}. This compact and light-weight microscope is based on digital in-line holography and does not need any lenses, bulky optical/mechanical components or coherent sources such as lasers. Instead, it utilizes a simple light-emitting diode (LED) and a compact opto-electronic sensor-array to record lensless holograms of the objects, which then permits rapid digital reconstruction of regular transmission or differential interference contrast (DIC) images of the objects. Because this lensless incoherent holographic microscope has orders-of-magnitude improved light collection efficiency and is very robust to mechanical misalignments, it may offer a cost-effective tool especially for telemedicine applications involving various global health problems in resource-limited settings.

For decades, optical microscopy has been the workhorse of various fields including engineering, the physical sciences, medicine and biology. Despite its long history, until relatively recently there had not been a significant change in the design and working principles of optical microscopes. Over the last decade, motivated partially by the quest to better understand the realm of the nano-world, super-resolution techniques started a renaissance for optical microscopy by addressing some of the most fundamental limitations of optical imaging, such as the diffraction limit.^{1–8} Besides these super-resolution techniques, several other novel imaging architectures were also implemented to improve the state of the art in optical microscopy towards better speed, signal-to-noise ratio (SNR), contrast, throughput, specificity, etc.^{9–14} This recent progress in microscopy utilized various innovative technologies to overcome fundamental barriers in imaging and has created significant excitement in a diverse set of fields by enabling new discoveries to be made. However, together with this progress, the overall complexity and cost of the optical imaging platform have also increased, which limits the widespread use of some of these advanced optical imaging modalities beyond well-equipped laboratories.

In the meantime, we have also been experiencing a rapid advancement in digital technologies, with much cheaper 2D solid-state detector arrays having significantly larger areas, smaller pixels, better dynamic ranges, higher frame rates and signal-to-noise ratios, as well as much faster, cheaper and more powerful digital processors and memories. This ongoing digital revolution, when combined with advanced imaging theories and numerical algorithms, also creates an opportunity for optical imaging and microscopy to add another dimension to this renaissance: simplification of the optical imaging apparatus, making it significantly more compact, cost-effective and easy to use, potentially without a trade-off in performance. As we illustrate in this manuscript, *lensfree incoherent holographic on-chip imaging* can be considered to be at the heart of this new opportunity, and when combined with the advanced state of the art and cost-effective nature of digital electronics, it can provide a transformative solution to some of the unmet needs of cell biology and medical diagnostics, especially for resource-limited environments.

Over the last decade, various lensfree on-chip imaging architectures have also been demonstrated.^{15–23} Among these approaches, lensfree digital holography^{16–20,22} deserves special attention since, with new computational algorithms and mathematical models,^{24} it has the potential to make the most out of this digital revolution that we have been experiencing. In this context, lensfree digital *in-line* holography has already been successfully demonstrated for high-resolution microscopy of cells and other micro-organisms.^{17}

Conventional coherent lensfree in-line holography approaches demand near-perfect spatial coherence for illumination, and therefore require focusing of laser light onto a small aperture that is on the order of a wavelength for spatial filtering.^{17,20} The use of such a small aperture (e.g., 1–2 µm) requires a mechanically stable and carefully aligned system, together with a focusing lens to efficiently couple the laser radiation to the aperture for improved light throughput. In addition, keeping such a small aperture clean and operational over an extended period of time can be another challenge, especially for field use. Further, the cells of interest are typically positioned far away (e.g., >1 cm) from the sensor surface, such that the holographic signature of each cell spreads over almost the entire sensor area, where all the cells' signatures significantly overlap. Such an approach unfortunately limits the imaging field-of-view (FOV) at the cell plane. All these requirements not only increase the cost and size of the optical instrument, but also make lensfree coherent in-line holography somewhat inconvenient for use in resource-limited settings.

Incoherent or partially coherent sources have also been utilized in holography with different lens-based optical architectures.^{13,25–28} These holographic imaging techniques are not on-chip, as they utilize various bulky optical components, and therefore they fall under the same category as the advanced imaging modalities discussed in the introduction, making them much less suitable for field use. Simpler approaches using partially coherent or incoherent lensfree *in-line* holography have also recently been demonstrated for imaging of latex particles.^{19,29} However, these approaches also suffer from a small field-of-view, as they position the objects of interest far away from the sensor surface, e.g., with a fringe magnification of >10, reducing the available field-of-view of the digital sensor by more than two orders of magnitude.^{19} Further, these studies used coupling optics for the illumination, such as a microscope objective lens (together with a small pinhole of ~1–5 µm^{29} or 10 µm^{19}), and had relatively coarse imaging performance.

To provide an alternative solution for lensfree on-chip imaging towards telemedicine applications, here we illustrate an incoherent holographic microscope weighing ~**46 grams** with dimensions smaller than 4.2 cm × 4.2 cm × 5.8 cm (Fig. 1).

(a) A lensfree holographic on-chip microscope that weighs ~46 grams is shown. It utilizes an LED source (at 591 nm) with an aperture of ~50–100 µm in front of the source. The LED and the sensor are powered through a USB connection.

Further, in this manuscript we illustrate that the same lensfree holographic microscope of Fig. 1(a) can also be converted into a **differential interference contrast** (DIC) microscope (also known as a Nomarski microscope).

Because this compact and light-weight lensless holographic microscope has orders-of-magnitude improved light collection efficiency and is very robust to mechanical misalignments, it may offer a cost-effective tool especially for telemedicine applications involving various global health problems such as malaria, HIV and TB.

The Appendix presents the theoretical analysis of the impact of a large incoherent aperture (with a diameter of >100λ–200λ) on lensfree microscopy on a chip. Based on this analysis, as far as holography is concerned, we can conclude that by bringing the cell plane much closer to the sensor surface (with a fringe magnification of ~1), incoherent illumination through a large aperture can be made equivalent to coherent illumination of each cell *individually*. Further, we also prove that the spatial resolution at the cell plane will **not** be affected by the large incoherent aperture, which permits recording of coherent holograms of cells or other micro-objects with an imaging field-of-view that is equivalent to the sensor area (which in our case is ~24 mm^{2}).

We have tested the imaging performance of the handheld lensless microscope of Fig. 1(a) with various cells and particles (such as red blood cells, white blood cells, platelets, and 3, 5, 7 and 10 µm polystyrene particles), as well as focused ion beam (FIB) fabricated objects; the results are summarized in Figs. 2–3 and the Supplementary Figures 1–3. In these experiments, the reconstruction results of the presented digital microscope (Fig. 1(a)) were compared against conventional microscope images of the same FOV obtained with 10× and 40× objective lenses with numerical apertures of 0.2 and 0.6, respectively. This comparison (specifically Figs. 2–3 and Supplementary Fig. 1) illustrates that the presented lensless on-chip microscope achieves sub-cellular resolution sufficient to determine the type of a white blood cell (granulocyte, monocyte or lymphocyte – towards 3-part differential imaging) based on the texture of its stained nucleus (see the Experimental Methods Section).

To further investigate the imaging performance of our platform, Supplementary Figure 3 illustrates the recovery result for two squares that are precisely etched onto a glass substrate using FIB milling. In this experiment, the gap between the squares is estimated as 1.94 µm (FWHM) from the reconstructed image cross-section, which matches very well with the gap estimate from the 40× microscope image (1.95 µm FWHM). Considering the fact that the pixel size at the sensor is 2.2 µm, this result implies a sub-pixel resolution for our lensless microscope, despite the fact that a unit fringe magnification is used together with a large incoherent source. This is rather important, and will be further analyzed in the Discussion Section.

The digital image reconstruction process in our approach, as outlined in the Experimental Methods Section, is quite fast, taking less than 4 seconds for a total image size of ~5 Mpixels using a regular CPU (central processing unit – e.g., Intel Q8300), and it gets >40× faster using a GPU (graphics processing unit – e.g., NVIDIA GeForce GTX 285), achieving **<0.1 sec** computation time for ~5 Mpixels. The holographic images that are saved for digital processing are compressed using the Portable Network Graphics (PNG) format.

Next, we demonstrated the proof-of-concept of lensfree DIC imaging with the handheld unit of Fig. 1(a). To achieve DIC performance with the same lensless holographic microscope, a thin birefringent crystal (e.g., quartz) is used in between two crossed polarizers (see Fig. 4). The function of the birefringent crystal is to create, through the double-refraction process, two holograms of the object (*the ordinary and the extraordinary holograms*) that are spatially separated from each other by a small shear distance of ~1.1 µm. *The thinner the crystal, the smaller this shear distance gets; it determines the resolution of the differential phase contrast effect*. This shear distance is naturally created by the uniaxial crystal and quite conveniently does not require any precise alignment of the object or the crystal. These two waves (ordinary and extraordinary), which are polarized orthogonal to each other, then interfere at the sensor plane (after passing through the analyzer – see Fig. 4), creating a new hologram that now has the *differential phase contrast* information of the sample embedded in its amplitude oscillations. This process, however, is *wavelength dependent*, and in order to ensure zero net phase bias between the ordinary and extraordinary holograms regardless of the LED wavelength, two quartz plates (each ~180 µm thick, with a cost of ~1 USD/cm^{2}) were assembled with an optical glue (Norland, UVS63) at 90° with respect to each other (Fig. 4). This sandwiched quartz sample (~360 µm thick), which now increases the total shear distance between the ordinary and extraordinary holograms to $\sqrt{2}\cdot 1.1\approx 1.55\ \mu m$, is then inserted underneath the sample of interest through the same mechanical interface of Fig. 1(a) for capture of the DIC hologram (see Fig. 4 for details). Note that without affecting the DIC operation principle, the first polarizer (at ϕ=+45°) can also be inserted after the sample plane (above the uniaxial crystal).
Such a configuration might especially be useful to eliminate the potential DIC artifacts when imaging naturally birefringent samples.
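The shear arithmetic above can be checked directly: the two crossed quartz plates apply identical ~1.1 µm shears along orthogonal axes, which add as perpendicular vectors to $\sqrt{2}\cdot 1.1\approx 1.55$–1.56 µm. A minimal sketch (variable names are illustrative):

```python
import math

# Two identical ~1.1 um shears applied along orthogonal axes
# (the two crossed quartz plates) add as perpendicular vectors.
shear_per_plate_um = 1.1                       # shear from a single ~180 um plate
net_shear_um = math.hypot(shear_per_plate_um,  # vector sum of the two
                          shear_per_plate_um)  # orthogonal shear components

print(f"{net_shear_um:.2f} um")  # 1.56 um (sqrt(2) * 1.1)
```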

Despite the major differences in the way that the lensless holograms are created and recorded for regular transmission imaging vs. DIC imaging, the digital image reconstruction process remains the same in both approaches, taking exactly the same amount of time (see the Experimental Methods Section). Figs. 5 and 6 illustrate examples of DIC images of micro-particles, blood cells (both in diluted whole blood and in smear form), as well as FIB-etched square structures (2 µm apart from each other) on a glass slide, all captured with the handheld lensless microscope of Fig. 1(a) after the insertion of 2 plastic polarizers and the birefringent crystal as outlined in Fig. 4. These additional components cost <2 USD in total and can even be disposed of with the sample after being imaged. These DIC images clearly show the differential phase contrast signatures of these micro-objects, demonstrating the proof of concept of lensfree DIC imaging within the compact, light-weight and cost-effective holographic microscope of Fig. 1(a).

The use of an incoherent light source emanating through a large aperture (e.g., 50–100 µm, as used in this work) greatly simplifies the optics for lensfree on-chip microscopy, also making it more robust and cost-effective, both of which are highly desired qualities for resource-poor environments. Bringing the object plane much closer to the sensor surface, together with a fringe magnification of F~1, is one of the key steps in making lensless microscopy possible with a large incoherent source without smearing the spatial information of the cells (see the Appendix for a detailed discussion). This choice also brings a significant increase in the FOV and in the imaging throughput, which we will further detail in the discussion to come. However, when compared to the state of the art in lensless holography, there are some trade-offs to be made in return for these improvements, which we aim to address in this section.

Based on the analysis provided in the Results Section and the Appendix, we can list the advantages of a small cell–sensor distance and unit fringe magnification in incoherent lensfree holography as follows: (**i**) The size of the aperture, its exact shape and its alignment with respect to the light source are much less of a concern. This makes the platform orders-of-magnitude more power efficient, and easier to align and operate without the use of a laser or any coupling/focusing optics. This is highly important for global health applications, which demand cost-effective, compact and easy-to-use devices for microscopy and medical diagnostics.

This last point requires more discussion since the improved light collection efficiency does not necessarily imply a better resolution as the sampling period at the hologram plane (i.e., the pixel size, Δx_{D}) is also quite important. This issue is investigated in greater detail in the Appendix. To summarize the conclusions: the detection numerical aperture for a small cell-sensor distance as we have used in this work is significantly improved which increases the light collection efficiency; however, not all the collected light contributes to the holographic texture. *It turns out that the price that is paid for simplification of the optical system towards achieving lensfree cell holography with a large incoherent source over a large field-of-view is an increased need for a smaller pixel size to be able to record all the hologram fringes that are above the detection noise floor to claim a high NA for better lateral and axial resolution.*

Because our platform enjoys a fringe magnification of ~1, in terms of *field of view* it is equivalent to direct near-field (i.e., contact) imaging of the object plane, such that it has the entire sensor area available as its field of view. However, achieving sub-pixel resolution (see e.g., Supplementary Fig. 3) implies that the presented incoherent holography technique achieves much better performance than direct contact imaging, without a trade-off in *any* image metric, such as field of view, signal-to-noise ratio, phase contrast, etc. In other words, *undoing the effect of diffraction through digital holographic processing (even with unit magnification and LED illumination) performs much better than a hypothetical near-field sampling experiment where a sensor-array having the same pixel size directly images the object plane all in parallel (i.e., without diffraction)*.

There are several reasons for this significant improvement. First, in a hypothetical contact imaging experiment, the random orientation of the object with respect to the pixel edges creates an unreliable imaging performance, since the effective spatial resolution of the imaged object will then depend on sampling differences as the alignment of the object features varies. This random object orientation does not pose a problem for the presented approach, since the diffraction from the object plane to the sensor array significantly reduces the randomness of the spatial sampling at the sensor plane.

Another significant advantage of lensless holographic imaging over direct near-field sampling (i.e., contact imaging) would be the capability of phase imaging. Any phase-only object would not create a detectable contrast in direct near-field sampling on the sensor-array, whereas the presented lensfree incoherent holography approach would naturally pick up the diffraction oscillations that contain the phase information of the samples located over the entire sensor area.

The key for sub-pixel spatial resolution in our incoherent holographic microscope is hidden in the iterative recovery techniques (detailed in the Experimental Methods Section), where at each iteration a digitally identified object support is enforced to recover the lost phase of the hologram texture. This object support can be made appropriately tighter if *a priori* information about the object type and size is known – for instance if the cells of interest are known to be human blood cells, a tighter object support (with dimensions of <15 µm) can be utilized for faster convergence of the phase recovery process. Intuitively, this behavior can be explained by a reduction in the number of unknown pixels in the phase recovery step, which enables iterative convergence to the unique solution, among many other possibilities, based on the measured hologram intensity and the estimated object support. Sub-pixel resolution is therefore coupled to iterative use of this object support for estimation of higher spatial frequencies of the object plane.

Like any other frequency extrapolation method, the practical success of this iterative approach and thus the spatial resolution of this system also depends on the SNR, which is a strong function of the cell/object size (i.e., its scattering cross section). For submicron sized cells, the scattering is rather weak, which implies that the high spatial frequencies (close to n/λ_{0}) carry rather weak energies that can easily fall below the noise floor at the sensor. Therefore, the true resolution and the NA of digital reconstruction indeed depend on the SNR as well as the scattering cross section of the cells/objects, making sub-micron cell imaging challenging for this reason.

As discussed in earlier sections, the use of incoherent illumination through a large aperture brings numerous advantages to on-chip microscopy, making it a highly suitable and promising platform for cell biology and medical diagnostics in resource-limited settings. Despite the significant practical advantages of the proposed lensless holographic microscope, the reader may expect that incoherent illumination will increase the burden on the numerical reconstruction process. Nevertheless, as will be further discussed in the Appendix, for incoherent lensfree holography with M = z_{1}/z_{2} ≫ 1, *each individual cell can still be treated as being illuminated with coherent light*. Further, due to their microscopic cross-sections, the incident wave on each cell can be assumed to be a plane wave. Consequently, the reconstruction of each recorded cell hologram can be performed assuming plane-wave illumination.

In order to diffract the wavefronts, the angular spectrum approach is used to numerically solve the Rayleigh-Sommerfeld integral. This computation involves multiplying the Fourier transform of the field with the transfer function of propagation through linear, isotropic media, as shown below:

$${H}_{z}({f}_{x},{f}_{y})=\begin{cases}\text{exp}\left(j2\pi z\frac{n}{\lambda}\sqrt{1-{(\lambda {f}_{x}/n)}^{2}-{(\lambda {f}_{y}/n)}^{2}}\right), & \sqrt{{f}_{x}^{2}+{f}_{y}^{2}}\le \frac{n}{\lambda}\\ 0, & \text{otherwise}\end{cases}$$

where *f _{x}* and *f _{y}* are the spatial frequencies along the x and y directions, respectively, *n* is the refractive index of the propagation medium, λ is the illumination wavelength, and *z* is the propagation distance.
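The angular spectrum computation described above can be sketched in a few lines of NumPy; this is an illustrative implementation under assumed parameter names (the function and its arguments are not taken from the original text):

```python
import numpy as np

def angular_spectrum_propagate(field, z, wavelength, dx, n=1.0):
    """Propagate a complex field by distance z using the angular spectrum
    method: FFT, multiply by the transfer function H_z, inverse FFT.
    Evanescent components (beyond n/lambda) are set to zero."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)                # spatial frequencies along x (1/m)
    fy = np.fft.fftfreq(ny, d=dx)                # spatial frequencies along y (1/m)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX / n) ** 2 - (wavelength * FY / n) ** 2
    H = np.where(arg >= 0,
                 np.exp(1j * 2 * np.pi * z * (n / wavelength)
                        * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)                            # cut off evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

For instance, propagating a recorded field by −z_2 back to the object plane and then by +z_2 again returns the original propagating components, since H_z and H_{−z} are complex conjugates.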

Two different iterative approaches are taken in order to reconstruct the microscopic images of cells, free from any twin-image artifact. Both methods work with a single recorded hologram and rely on the constraint that each cell has a finite support. In both methods, the raw holograms are upsampled, typically by a factor of four to six, using cubic spline interpolation before the iterative reconstruction procedure. Although upsampling does not immediately increase the information content of the holograms, it still offers significant improvements for achieving a more accurate phase recovery and higher resolution in the reconstructed image. First, it allows defining a more accurate object support by smoothing the edges of the objects in the initial back-projection of the hologram. Using an object support that is closer to the actual cell in terms of size and shape reduces the error of the iterative algorithms and ensures faster convergence. Second, upsampling introduces higher spatial frequencies, initially carrying zero energy, into the hologram representation. Through the iterative reconstruction steps detailed below, these higher spatial frequencies gradually attain non-zero energy, which allows sub-pixel resolution in the final reconstruction as argued in the Discussion Section.
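The upsampling step can be sketched with SciPy's spline-based `zoom`; the array and factor below are illustrative stand-ins, not the authors' actual data or code:

```python
import numpy as np
from scipy.ndimage import zoom

# Stand-in for a recorded raw hologram (real implementations would load
# the sensor frame here).
hologram = np.random.rand(128, 128)

# Upsample by a factor of 4 with cubic spline interpolation (order=3),
# as done before the iterative reconstruction procedure.
upsampled = zoom(hologram, 4, order=3)

print(upsampled.shape)  # (512, 512)
```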

The first method falls under the broad category of Interferometric Phase-Retrieval Techniques and is applicable to cases where the recorded intensity is dominated by the holographic diffraction terms.^{31–33} The first step is the digital reconstruction of the hologram, which is achieved by propagating the hologram intensity by a distance of z_{2} away from the hologram plane, yielding the initial wavefront *U _{rec}*. As a result of this computation, the virtual image of the object is recovered together with its spatially overlapping defocused twin-image. It is important to note that the recorded intensity can also be propagated by a distance of −z_{2}, in which case the real image of the object would be recovered together with a defocused twin-image.

Due to the small cell-sensor distance in the incoherent holographic microscopy scheme presented here, the twin-image may carry high intensities, especially for relatively large objects like white blood cells. In such cases, the fine details inside the micro-objects may get suppressed. Similarly, the twin-images of different cells which are close to each other get superposed, leading to an increase in background noise. This issue is especially pronounced for microscopy of dense cell solutions, where the overlapping twin images of many cells lowers the counting accuracy due to reduced SNR.

In order to eliminate the twin-image artifact, an iterative approach using finite support constraints is utilized.^{33} Basically, this technique relies on the fact that duplicate information for the phase and amplitude of the object exists in two different reconstruction planes at distances +z_{2} and −z_{2} from the hologram plane, where the virtual and real images of the object are recovered, respectively. Therefore, a twin-image-free reconstruction in one of the image planes can be obtained while filtering out the duplicate image in the other plane. Without loss of generality, we have chosen to filter out the real image to obtain a twin-image-free reconstruction in the virtual image plane at −z_{2}. Due to the finite size of the micro-objects, the real image of the object only occupies the region inside its support, while the defocused twin-image spreads out to a wider region around the object, also overlapping with the real image inside the support. Hence, deleting the information only inside the support ensures that the real image is completely removed from the reconstructed wavefront. Nevertheless, the virtual image information inside the support is also lost, and the iterative technique tries to recover the missing information of the virtual image by going back and forth between the virtual and real image planes, recovering more of the lost information at each iteration. The success of this algorithm is highly dependent on the Fresnel number of the recording geometry, which is given by *N _{f}* = D^{2}/(λz_{2}), where D denotes the characteristic size of the object.
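The Fresnel-number dependence can be illustrated numerically. The sketch below assumes the standard definition N_f = D²/(λ·z_2) and order-of-magnitude parameter values (the object-to-sensor distance is an assumption, not a figure stated in the text):

```python
# Rough Fresnel-number estimate for this recording geometry.
wavelength = 591e-9   # LED wavelength (m), from the text
z2 = 1e-3             # object-to-sensor distance (m), assumed order of magnitude
D_rbc = 8e-6          # characteristic red-blood-cell diameter (m), typical value

N_f = D_rbc ** 2 / (wavelength * z2)
print(round(N_f, 3))  # 0.108 -> N_f < 1, the regime where method 1 works well
```

A larger or more strongly scattering object (larger D) pushes N_f up, which is why the second reconstruction method becomes preferable for extended objects.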

The steps of twin-image elimination are detailed below:

- Initially, the real image, which is the back-projected hologram at a distance of +z_{2}, is used for determining the object support. The object support can be defined either by thresholding the intensity of the reconstructed image, or by searching for its local minima.

- The region inside the support is deleted, and a constant value is assigned to this region as an initial guess for the deleted part of the virtual image inside the support:

$${U}_{{z}_{2}}^{(i)}(x,y)=\begin{cases}{U}_{\text{rec}}, & x,y\notin S\\ {\overline{U}}_{\text{rec}}, & x,y\in S\end{cases}$$

where ${U}_{{z}_{2}}^{(i)}(x,y)$ denotes the field at the real image plane after the i^{th} iteration, *S* represents the area defined by the object support, and *Ū*_{rec} is the mean value of *U*_{rec} within the support.

- Then, the field at the real image plane is back-propagated by −2z_{2} to the virtual image plane. Ideally, the reconstruction at this plane should be free from any twin-image distortions. Therefore, the region outside the support can be set to a constant background value to eliminate any remaining out-of-focus real image in the virtual image plane. However, this constraint is applied smoothly, as determined by the relaxation parameter β, rather than sharply setting the image to the d.c. level outside the support:

$${U}_{-{z}_{2}}^{(i)}(x,y)=\begin{cases}D-\dfrac{D-{U}_{-{z}_{2}}^{(i)}}{\beta}, & x,y\notin S\\ {U}_{-{z}_{2}}^{(i)}, & x,y\in S\end{cases}$$

where D is the background in the reconstructed field, which can either be obtained from a measured background image in the absence of the object, or can simply be chosen as the mean value of the field outside the object supports at the virtual image plane. β is a real-valued parameter greater than unity, typically chosen around 2–3 in this article. Increasing β leads to faster convergence, but compromises the immunity of the iterative estimation accuracy to background noise.

- The field at the virtual image plane is forward-propagated to the real image plane, where the region inside the support now has a better estimate of the missing part of the virtual image. The region outside the support is replaced by ${U}_{{z}_{2}}^{(1)}(x,y)$, the original reconstructed field at the real image plane:

$${U}_{{z}_{2}}^{(i+1)}(x,y)=\begin{cases}{U}_{{z}_{2}}^{(1)}, & x,y\notin S\\ {U}_{{z}_{2}}^{(i+1)}, & x,y\in S\end{cases}$$

Steps c and d can be repeated iteratively until the final image converges. In most cases in this article, convergence is achieved *after 10–15 iterations*. This iterative computation takes around 4 seconds for an image size of ~5 Mpixels using a regular CPU (central processing unit – e.g., Intel Q8300), and it gets >40× faster using a GPU (graphics processing unit – e.g., NVIDIA GeForce GTX 285), achieving *<0.1 sec* computation time for the same image size.

The second method utilized for eliminating the twin-image is classified under Non-Interferometric Phase-Retrieval Techniques, where the recorded image is not necessarily treated as a hologram, but as the intensity of any diffraction field.^{34} Together with the constraint that the objects have finite support, this technique is capable of iteratively recovering the phase of the diffracted field incident on the detector from a single intensity image. As a result, the complex field (amplitude and phase) of the cell holograms, rather than the intensity, can be back-propagated, thereby allowing reconstruction of the objects free from any twin-image contamination. This method can be decomposed into the following steps:

- The square root of the recorded hologram intensity is propagated by a distance of −z_{2} to the cell plane, assuming a field phase of zero as an initial guess. The aim of the algorithm is to iteratively determine the actual phase of the complex field at the detector plane, and eventually at the object plane. In the first iteration, the object support is defined either by thresholding the intensity of the field at the object plane, or by locating its regional maxima and/or minima.

- The field inside the object supports is preserved, while the complex field values outside the supports are replaced by a background value *D*_{−z_2}(*x,y*):

$${U}_{-{z}_{2}}^{(i+1)}(x,y)=\begin{cases}m\cdot {D}_{-{z}_{2}}(x,y), & x,y\notin S\\ {U}_{-{z}_{2}}^{(i)}(x,y), & x,y\in S\end{cases}$$

where *D*_{−z_2}(*x,y*) is obtained by propagating the square root of the background intensity of an image captured with the same setup in the absence of the cells, and $m=\text{mean}({U}_{-{z}_{2}}^{(i)}(x,y))/\text{mean}({D}_{-{z}_{2}}(x,y))$.

- The modified field at the object plane is propagated back to the detector plane, where the field now has a non-zero phase value. The amplitude of this field is replaced with the square root of the original recorded hologram intensity, as no modification of the amplitude should be allowed while converging for its phase. Consequently, ${U}_{0}^{(i)}(x,y)$, the complex diffraction field at the detector plane after the i^{th} iteration, can be written as:

$${U}_{0}^{(i)}(x,y)=|{U}_{0}^{(0)}(x,y)|\cdot \text{exp}(j\cdot {\varnothing}_{0}^{(i)}(x,y))$$

where the superscripts denote the iteration step, and ${\varnothing}_{0}^{(i)}(x,y)$ denotes the phase of the field after the i^{th} iteration.

Steps (a) to (c) are iterated until the phase recovery converges. Typically, fewer than 15 iterations are sufficient, which is quite similar to the first method.
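
The iterative recovery described above can be sketched numerically. The following is a minimal illustration rather than the authors' implementation: it uses an angular-spectrum propagator, and the function names, grid size and normalization choices are ours.

```python
import numpy as np

def propagate(field, wavelength, z, dx):
    """Angular-spectrum propagation of a complex field over a distance z."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)          # evanescent components cut off
    return np.fft.ifft2(np.fft.fft2(field) * H)

def retrieve_phase(hologram, background, support, wavelength, z2, dx, n_iter=15):
    """Steps (a)-(c): recover the complex field from a single intensity image."""
    amp = np.sqrt(hologram)                       # measured amplitude, held fixed
    D = propagate(np.sqrt(background), wavelength, -z2, dx)   # background at object plane
    U_det = amp.astype(complex)                   # initial guess: zero phase
    for _ in range(n_iter):
        U_obj = propagate(U_det, wavelength, -z2, dx)         # (a) back to object plane
        m = np.abs(U_obj).mean() / np.abs(D).mean()           # mean-field normalization
        U_obj = np.where(support, U_obj, m * D)               # (b) replace outside support
        U_det = propagate(U_obj, wavelength, z2, dx)          # (c) forward to detector
        U_det = amp * np.exp(1j * np.angle(U_det))            # enforce measured amplitude
    return propagate(U_det, wavelength, -z2, dx)              # twin-image-free object field
```

In practice the support mask would be obtained from the first back-propagation, by thresholding or by locating regional maxima/minima as described in step (a).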

For small or weakly scattering objects such as whole blood cells or micro-beads, both methods yield satisfactory results of comparable image quality. For such objects, the typical Fresnel number of the recording geometry is <1 and the focused real image occupies a small fraction of the area over which the twin-image is spread out. Therefore, deleting the object image in the real image plane leads to minimal information loss for the virtual image, which can then be recovered without twin-image artifacts. However, for larger objects of interest the Fresnel number of the system increases, and deleting the real image may cause excessive information loss in the virtual image, which may be harder to recover iteratively. Furthermore, for strongly scattering objects, the self- and cross-interference terms may start dominating such that the holographic content of the recorded intensity gets distorted. Therefore, for strongly scattering and/or extended objects, the second method discussed above is preferable to the first method, which requires the holographic terms to be dominant in a setup with Fresnel numbers <10. However, an advantage of the first method is that it does not necessarily require a separate background image taken prior to inserting the sample into the setup. Although a mean value of the field at the object plane can also be used in step (b) of the second method when no background image is available, we have observed that the final image quality is better with an experimentally obtained background.

The LED source (OSRAM Opto Semiconductors Inc., Part# LY E65B – center wavelength: 591 nm, bandwidth: 18 nm) is butt-coupled to a 50 or 100 µm pinhole without the use of any focusing or alignment optics, illuminating the entire FOV of ~24 mm^{2} (CMOS chip, Model: MT9P031, Micron Technology – pixel size: 2.2 µm, 5 Mpixels). A small, unavoidable gap remains between the active area of the LED and the pinhole plane, the effect of which is briefly discussed in the Appendix. Following Figure 1, typical z_{1} and z_{2} distances used in our design are ~2–5 cm and ~0.5–2 mm, respectively. The LED source and the CMOS sensor are powered through a USB connection, and the sample is loaded from the side within a mechanical tray. For DIC operation, the thin quartz samples (~180 µm thick) are cut with the optic axis at 45° with respect to the propagation direction, as shown in Fig. 4 (sample cost: 1 USD per 1 cm^{2} from Suzhou Qimeng Crystal Material Product Co., China). The plastic polarizers are 0.2 mm thick each and cost ~0.06 USD per 1 cm^{2} (Aflash Photonics, USA). Other details of DIC operation are provided in the Results Section and in Fig. 4.

Test objects with micron-sized features were fabricated on glass to investigate the imaging performance of the system. The first step was coating Borosilicate type-1 cover glasses (150 µm thickness) with 20 nm thick aluminum using an electron-beam metal deposition system (CHA Mark 40). The thin metal coating serves as a conductive layer for the focused ion beam (FIB, Nova 600) process in the following step. The FIB machine was programmed to over-mill the aluminum layer, using a long milling time and a high beam current (well beyond what is required to mill the metal alone), so as to ensure that the glass underneath the metal is etched as well. FIB milling was terminated once the etch depth was sufficient relative to the surface roughness and feature sizes. The etch depth was monitored by scanning electron microscopy (SEM) in real time during ion beam milling. After the FIB process, the metal layer was removed by wet etching of the aluminum, yielding high-resolution phase structures on glass.

For blood smear imaging experiments, whole blood samples were treated with 2.0 mg EDTA/ml; 1 µL of sample was dropped onto a type 0 glass cover slip, and a second type 0 cover slip was used to spread and smear the blood droplet over the entire cover slip at a smearing angle of about 30°. The smeared specimen was air-dried for 5 min before being fixed and stained with a HEMA 3 Wright-Giemsa staining kit (Fisher Diagnostics). The dried samples were dipped five times in a row, for one second each, into three Coplin jars containing a methanol-based HEMA 3 fixative solution, an eosinophilic staining solution (HEMA 3 solution I) and a basophilic solution (HEMA 3 solution II), respectively. The specimen was then rinsed with de-ionized water and air-dried again before being imaged.

We used RPMI 1640 classic liquid media with L-Glutamine (Fisher Scientific) as a diluent to achieve a desired dilution factor. To achieve accurate dilution, we followed the international standard established by the International Council for Standardization in Hematology (ICSH).^{35}

In this manuscript, we introduced a lensless incoherent holographic microscope weighing **~46 grams** with dimensions smaller than 4.2 cm × 4.2 cm × 5.8 cm, which achieves sub-cellular resolution over a large field of view of ~24 mm^{2}.

We acknowledge the support of the Okawa Foundation, Vodafone Americas Foundation, DARPA DSO (under 56556-MS-DRP), NSF (under Awards # 0754880 and 0930501), NIH (under 1R21EB009222-01 and the NIH Director’s New Innovator Award # DP2OD006427 from the Office Of The Director, NIH), AFOSR (Project # 08NE255). A. Ozcan also gratefully acknowledges the support of the Office of Naval Research (ONR) under Young Investigator Award 2009.

Holography is all about recording the optical phase information in the form of amplitude oscillations. To be able to read or make use of this phase information for microscopy, most existing lensfree in-line holography systems demand high spatial coherence and therefore use a laser source that is filtered through a small aperture (e.g., 1–2 µm). Utilizing a completely incoherent light source that is filtered through a large aperture (e.g., >100λ–200λ in diameter) should provide orders-of-magnitude better transmission throughput as well as a much simpler, less expensive and more robust optical set-up. Here we aim to provide a theoretical analysis of this opportunity and its implications for compact lensless microscopy, as illustrated in this manuscript.

To record cell holograms that contain useful digital information with a spatially incoherent source emanating from a large aperture, one of the key steps is to bring the cell plane close to the detector array by ensuring z_{2} ≪ z_{1}, where z_{1} defines the distance between the incoherently illuminated aperture plane and the cell plane, and z_{2} defines the distance between the cell plane and the sensor array (see Fig. 1(b)). In conventional lensless in-line holography approaches, this choice is reversed such that z_{1} ≪ z_{2} is utilized, while the total aperture-to-detector distance (z_{1}+z_{2}) remains comparable in both cases, leaving the overall device length almost unchanged. Therefore, apart from using *an incoherent source through a large aperture*, our choice of z_{2} ≪ z_{1} is also quite different from the mainstream lensfree holographic imaging approaches and thus deserves more attention.

To better understand the quantified impact of this choice on incoherent on-chip microscopy, let us assume two point scatterers (separated by *2a*) that are located at the cell plane (z=z_{1}) with a field transmission of the form *t*(*x, y*) = 1 + *c*_{1} δ(*x − a, y*) + *c*_{2} δ(*x + a, y*) where *c*_{1} and *c*_{2} can be negative and their intensity denotes the strength of the scattering process, and δ(*x,y*) defines a Dirac-delta function in space. These point scatterers can be considered to represent sub-cellular elements that make up the cell volume. For the same imaging system let us assume that *a large aperture of arbitrary shape* is positioned at z=0 with a transmission function of *p*(*x,y*) and that the digital recording screen (e.g., a CCD or a CMOS array) is positioned at z=z_{1}+z_{2}, where typically z_{1} ~ 2–5 cm and z_{2} ~ 0.5–2 mm.

Assuming that the aperture, *p*(*x,y*) is *uniformly* illuminated with a *spatially incoherent light source*, the cross-spectral density at the aperture plane can be written as:

$$W({x}_{1},{y}_{1},{x}_{2},{y}_{2},\gamma )=S\phantom{\rule{thinmathspace}{0ex}}(\gamma )p({x}_{1},{y}_{1})\delta ({x}_{1}-{x}_{2})\delta ({y}_{1}-{y}_{2}),$$

where (*x*_{1}, *y*_{1}) and (*x*_{2}, *y*_{2}) represent two arbitrary points on the aperture plane and *S*(γ) denotes the power spectrum of the incoherent source with a center wavelength (frequency) of λ_{0} (γ_{0}).

We should note that in our experimental scheme (Fig. 1(a)), the incoherent light source (the LED) was butt-coupled to the pinhole with a small amount of unavoidable distance between its active area and the pinhole plane. This remaining small distance between the source and the pinhole plane also generates some correlation for the input field at the aperture plane. In this theoretical analysis, we ignore this effect and investigate the imaging behavior of a *completely* incoherent field hitting the aperture plane. The impact of such an unavoidable gap between the pinhole and the incoherent source is an “effective” reduction of the pinhole size in terms of spatial coherence, without affecting the light throughput.

Based on these assumptions, after free space propagation over a distance of z_{1}, the cross-spectral density just before interacting with the cells can be written as^{24}:

$$\begin{array}{c}W(\mathrm{\Delta}x,\mathrm{\Delta}y,q,\gamma )\hfill \\ =\frac{S\phantom{\rule{thinmathspace}{0ex}}(\gamma )}{{(\lambda {z}_{1})}^{2}}{e}^{-j\frac{2\pi \gamma q}{c{z}_{1}}}{\displaystyle \iint p\phantom{\rule{thinmathspace}{0ex}}(x,y){e}^{j\frac{2\pi}{\lambda {z}_{1}}(x\mathrm{\Delta}x+y\mathrm{\Delta}y)}\mathit{\text{dx}}\phantom{\rule{thinmathspace}{0ex}}\mathit{\text{dy}}}\hfill \end{array}$$

where $\mathrm{\Delta}x={x}_{1}^{\prime}-{x}_{2}^{\prime},\mathrm{\Delta}y={y}_{1}^{\prime}-{y}_{2}^{\prime},q=\frac{{x}_{1}^{\prime}+{x}_{2}^{\prime}}{2}\mathrm{\Delta}x+\frac{{y}_{1}^{\prime}+{y}_{2}^{\prime}}{2}\mathrm{\Delta}y;\text{and}({x}_{1}^{\prime},{y}_{1}^{\prime})\phantom{\rule{thinmathspace}{0ex}}({x}_{2}^{\prime},{y}_{2}^{\prime})$ represent two arbitrary points on the cell plane. After interacting with the cells i.e., with *t*(*x,y*), the cross-spectral density, right behind the cell plane, can be written as:

$$W(\mathrm{\Delta}x,\mathrm{\Delta}y,q,\gamma )\xb7{t}^{*}({x}_{1}^{\prime},{y}_{1}^{\prime})\xb7t({x}_{2}^{\prime},{y}_{2}^{\prime})$$

This cross-spectral density function will effectively propagate another distance of z_{2} before reaching the detector plane. Therefore, one can write the cross-spectral density at the detector plane as:

$${W}_{D}({x}_{D1},{y}_{D1},{x}_{D2},{y}_{D2},\gamma )=\iint \iint W(\mathrm{\Delta}x,\mathrm{\Delta}y,q,\gamma )\,{t}^{*}({x}_{1}^{\prime},{y}_{1}^{\prime})\,t({x}_{2}^{\prime},{y}_{2}^{\prime})\,{h}_{C}^{*}({x}_{1}^{\prime},{x}_{D1},{y}_{1}^{\prime},{y}_{D1},\gamma )\,{h}_{C}({x}_{2}^{\prime},{x}_{D2},{y}_{2}^{\prime},{y}_{D2},\gamma )\,{\mathit{dx}}_{1}^{\prime}\,{\mathit{dy}}_{1}^{\prime}\,{\mathit{dx}}_{2}^{\prime}\,{\mathit{dy}}_{2}^{\prime}$$

where (*x*_{D1}, *y*_{D1}) and (*x*_{D2}, *y*_{D2}) define arbitrary points on the detector plane (i.e., within the hologram region of each cell); and

$${h}_{C}(x\prime ,{x}_{D},y\prime ,{y}_{D},\gamma )=\frac{1}{j\lambda {z}_{2}}{e}^{j\frac{2\pi {z}_{2}}{\lambda}}{e}^{j\frac{\pi}{\lambda {z}_{2}}[{(x\prime -{x}_{D})}^{2}+{(y\prime -{y}_{D})}^{2}]}.$$

At the detector plane (*x _{D}, y_{D}*), the detected intensity can be written as:

$$i\phantom{\rule{thinmathspace}{0ex}}({x}_{D},{y}_{D})={\displaystyle \int {W}_{D}\phantom{\rule{thinmathspace}{0ex}}({x}_{D},{y}_{D},{x}_{D},{y}_{D},\gamma )\phantom{\rule{thinmathspace}{0ex}}d\gamma}$$

Assuming *t*(*x, y*) = 1 + *c*_{1} δ(*x − a, y*) + *c*_{2} δ(*x + a, y*), this last equation can be expanded into 4 physical terms, i.e.,

$$i\phantom{\rule{thinmathspace}{0ex}}({x}_{D},{y}_{D})=C\phantom{\rule{thinmathspace}{0ex}}({x}_{D},{y}_{D})+I({x}_{D},{y}_{D})+{H}_{1}({x}_{D},{y}_{D})+{H}_{2}({x}_{D},{y}_{D}),$$

where:

$$C\phantom{\rule{thinmathspace}{0ex}}({x}_{D},{y}_{D})={D}_{0}+\frac{|{c}_{1}{|}^{2}{S}_{0}}{{({\lambda}_{0}{z}_{1}{z}_{2})}^{2}}\tilde{P}\phantom{\rule{thinmathspace}{0ex}}(0,0)+\frac{|{c}_{2}{|}^{2}{S}_{0}}{{({\lambda}_{0}{z}_{1}{z}_{2})}^{2}}\tilde{P}\phantom{\rule{thinmathspace}{0ex}}(0,0)$$

(1)

$$I({x}_{D},{y}_{D})=\frac{{c}_{2}{{c}_{1}}^{*}{S}_{0}}{{({\lambda}_{0}{z}_{1}{z}_{2})}^{2}}\tilde{P}\phantom{\rule{thinmathspace}{0ex}}\left(\frac{2a}{{\lambda}_{0}{z}_{1}},0\right)\phantom{\rule{thinmathspace}{0ex}}{e}^{j\frac{4\pi a{x}_{D}}{{\lambda}_{0}{z}_{2}}}+c.c.$$

(2)

$${H}_{1}({x}_{D},{y}_{D})=\frac{{S}_{0}}{{({\lambda}_{0}{z}_{1})}^{2}}[{c}_{1}\xb7\{p\phantom{\rule{thinmathspace}{0ex}}(-{x}_{D}\xb7M+a\xb7M\xb7F,-{y}_{D}\xb7M)*{h}_{c}({x}_{D},{y}_{D})\}+c.c.]$$

(3)

$${H}_{2}({x}_{D},{y}_{D})=\frac{{S}_{0}}{{({\lambda}_{0}{z}_{1})}^{2}}[{c}_{2}\xb7\{p\phantom{\rule{thinmathspace}{0ex}}(-{x}_{D}\xb7M-a\xb7M\xb7F,-{y}_{D}\xb7M)*{h}_{c}({x}_{D},{y}_{D})\}+c.c.]$$

(4)

In these equations, “*c.c.*” and “*” refer to the complex conjugate and convolution operations, respectively, $M=\frac{{z}_{1}}{{z}_{2}},F=\frac{{z}_{1}+{z}_{2}}{{z}_{1}}$, and $\tilde{P}({f}_{x},{f}_{y})$ is the 2D spatial Fourier transform of the aperture function *p*(*x, y*). It should be emphasized that (*x _{D}, y_{D}*) in these equations refers to the extent of each individual cell hologram (L_{H}), rather than to the entire active area of the detector.

Further, ${h}_{c}({x}_{D},{y}_{D})=\frac{1}{j\,{\lambda}_{0}\cdot F\cdot {z}_{2}}{e}^{j\frac{\pi}{{\lambda}_{0}\cdot F\cdot {z}_{2}}({{x}_{D}}^{2}+{{y}_{D}}^{2})}$, which effectively represents the 2D *coherent impulse response* of free space over Δ*z* = *F* · *z*_{2}. For the incoherent source, we have assumed a center frequency (wavelength) of γ_{0} (λ_{0}), where the spectral bandwidth was assumed to be much smaller than λ_{0}, i.e., a power spectrum of *S*(γ) ≈ *S*_{0}δ(γ − γ_{0}). This is a valid approximation since in this work we have used an LED source at λ_{0} ~591 nm with a spectral FWHM of ~18 nm.

Note that in these derivations we have also assumed the paraxial approximation to simplify the results, which is valid since for this work z_{1} and z_{2} are typically much longer than the extent of each cell hologram (L_{H}). However, for the digital microscopic reconstruction of the cell images from their raw holograms, *no such assumptions* were made, as also emphasized in the Experimental Methods Section.

Furthermore, *D*_{0} of Eq. 1 can further be expanded as:

$$\begin{array}{cc}{D}_{0}=\hfill & {\displaystyle \iint {\displaystyle \iint {\displaystyle \int \frac{W(\mathrm{\Delta}x,\mathrm{\Delta}y,q,\gamma )}{{(\lambda \phantom{\rule{thinmathspace}{0ex}}{z}_{2})}^{2}}}\phantom{\rule{thinmathspace}{0ex}}{e}^{-j\frac{\pi}{\lambda \phantom{\rule{thinmathspace}{0ex}}{z}_{2}}[{({x}_{1}^{\prime}-{x}_{D})}^{2}+{({y}_{1}^{\prime}-{y}_{D})}^{2}]}}}\hfill \\ \hfill & \hfill \text{}{e}^{j\frac{\pi}{\lambda \phantom{\rule{thinmathspace}{0ex}}{z}_{2}}[{({x}_{2}^{\prime}-{x}_{D})}^{2}+{({y}_{2}^{\prime}-{y}_{D})}^{2}]}{\mathit{\text{dx}}}_{1}^{\prime}\phantom{\rule{thinmathspace}{0ex}}{\mathit{\text{dy}}}_{1}^{\prime}\phantom{\rule{thinmathspace}{0ex}}{\mathit{\text{dx}}}_{2}^{\prime}\phantom{\rule{thinmathspace}{0ex}}{\mathit{\text{dy}}}_{2}^{\prime}d\gamma \hfill \end{array}$$

which simply represents the background illumination and carries no spatial information regarding the cells’ structure or distribution. Although this last term, *D*_{0}, can be further simplified, for most illumination schemes it constitutes a uniform background and can therefore easily be subtracted out.

Equations (1–4) are rather important for understanding the key parameters in lensfree on-chip microscopy with spatially incoherent light emanating from a large aperture. Equation 1 describes the *classical diffraction* that occurs from the cell plane to the detector under the paraxial approximation. In other words, it includes both the background illumination (term *D*_{0}) and also the *self-interference of the scattered waves* (terms that are proportional to |*c*_{1}|^{2} and |*c*_{2}|^{2}). It is quite intuitive that the self-interference terms representing the classical diffraction in Eq. (1) are scaled with $\tilde{P}(0,0)$, as the extent of the spatial coherence at the cell plane is not a determining factor for self-interference.

Equation 2, however, contains the information of the interference between the scatterers located at the cell plane. Similar to the self-interference terms, the cross-interference term, *I*(*x _{D}, y_{D}*), also does not contain any useful information as far as holographic reconstruction of the cell image is concerned. This interference term is proportional to the amplitude of $\tilde{P}\phantom{\rule{thinmathspace}{0ex}}\left(\frac{2a}{{\lambda}_{0}{z}_{1}},0\right)$, which implies that for a small aperture size (hence a wide $\tilde{P}$) two scatterers that are located far from each other can also interfere. Based on the term $\tilde{P}\phantom{\rule{thinmathspace}{0ex}}\left(\frac{2a}{{\lambda}_{0}{z}_{1}},0\right)$, one can estimate that if $2a<\frac{{\lambda}_{0}{z}_{1}}{D}$ (where *D* denotes the diameter of the aperture), the two scatterers can still interfere with each other at the detector plane despite the spatially incoherent illumination of the aperture.
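
As a quick numeric check of this condition, the following sketch evaluates λ_{0}z_{1}/D for the two pinhole diameters quoted in the Experimental Methods; z_{1} = 3 cm is an assumed representative value within the stated 2–5 cm range, not an exact experimental setting.

```python
# Spatial-coherence reach at the cell plane: two scatterers separated by 2a
# can interfere only if 2a < lambda0 * z1 / D, with D the pinhole diameter.
lambda0 = 591e-9   # LED center wavelength (m)
z1 = 3e-2          # aperture-to-cell distance (m), assumed within the 2-5 cm range
for D in (50e-6, 100e-6):                   # the two pinhole sizes used in this work
    reach = lambda0 * z1 / D
    print(f"D = {D * 1e6:.0f} um -> cross-interference requires 2a < {reach * 1e6:.0f} um")
```

For these values the coherence reach is on the order of a few hundred micrometres, i.e., far larger than a single cell but small compared to the full field of view, consistent with the per-cell hologram picture developed below.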

The final two terms (Eqs. (3–4)) describe the **holographic diffraction** phenomenon and are of central interest in all forms of digital holographic imaging systems, including the one presented here. Physically, these terms dominate the information content of the detected intensity, especially for weakly scattering objects, and they represent the interference of the scattered light from each object with the background (unscattered) light.

A careful inspection of the terms inside the curly brackets in Eqs. (3–4) indicates that, *for each scatterer position, a scaled and shifted version of the aperture function p (x, y) appears to be coherently diffracting with the free space impulse response h _{c}(x_{D}, y_{D})*. In other words, as far as holographic diffraction is concerned, the large incoherent aperture effectively behaves as a coherent source, with its image demagnified by *M* = *z*_{1}/*z*_{2} at the detector plane.

Even though the entire derivation above was made using the formalism of wave theory, the end result is quite interesting as it predicts a geometrical scaling factor of *M* = *z*_{1}/*z*_{2} (see Figure 1(b)). Further, because M ≫ 1, each cell hologram occupies only a tiny fraction of the entire field-of-view and therefore behaves independently of most other cells within the imaging field-of-view. *That is the same reason why (unlike conventional lensfree in-line holography) there is no longer a Fourier transform relationship between the detector plane and the cell plane. Such a Fourier transform relationship only exists between each cell hologram and the corresponding cell.*

Notice also that in Eqs. (3–4) the shift of the scaled aperture function *p*(−*x _{D}* · *M* ± *a* · *M* · *F*, −*y _{D}* · *M*) implies that the hologram of the scatterer located at *x* = ±*a* is centered at *x _{D}* = ±*a* · *F*. Since *F* = (*z*_{1}+*z*_{2})/*z*_{1} ≈ 1 in our geometry, each lensfree hologram is located almost directly above its corresponding cell, preserving the spatial arrangement of the cells across the entire field of view.

According to Eqs. (3–4), for a narrow enough *p*(−*x _{D} M, −y_{D} M*) (such that the spatial features of the cells are not washed out), the modulation of the holographic term at the detector plane can be expressed as $\mathit{\text{sin}}\phantom{\rule{thinmathspace}{0ex}}\left(\frac{\pi}{{\lambda}_{0}\phantom{\rule{thinmathspace}{0ex}}F\phantom{\rule{thinmathspace}{0ex}}{z}_{2}}({{x}_{D}}^{2}+{{y}_{D}}^{2})\right)$. This modulation term of the holographic signature implies that for a large fringe magnification (i.e., *F* ≫ 1, as in conventional in-line holography with z_{1} ≪ z_{2}) the fringes would be stretched over a wide area, whereas in our geometry *F* ≈ 1, so that the local fringe period is set by λ_{0} · *z*_{2} and a small z_{2} together with a small detector pixel size is required to adequately sample the holographic oscillations.
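
The sampling implication can be made concrete by evaluating the local fringe period of this modulation term, λ_{0}·F·z_{2}/r at radial distance r, against the detector pixel size. The z_{1} and z_{2} values below are representative choices within the ranges quoted earlier, not exact experimental settings.

```python
# Local fringe period of sin(pi*(x^2 + y^2)/(lambda0*F*z2)):
# phase phi(r) = pi*r^2/(lambda0*F*z2), so the local period is
# 2*pi/|dphi/dr| = lambda0*F*z2/r, shrinking toward the hologram edge.
lambda0, z1, z2 = 591e-9, 3e-2, 1e-3   # representative values (m)
F = (z1 + z2) / z1                     # fringe magnification, ~1.03 here
M = z1 / z2                            # hologram demagnification factor, ~30
pixel = 2.2e-6                         # CMOS pixel size (m)
for r in (50e-6, 100e-6, 200e-6):      # radial distance within a cell hologram
    period = lambda0 * F * z2 / r
    status = "resolved" if period > 2 * pixel else "undersampled"
    print(f"r = {r * 1e6:.0f} um: fringe period {period * 1e6:.2f} um ({status} at Nyquist)")
```

The fringes remain Nyquist-sampled by the 2.2 µm pixels out to a radius of roughly 100 µm, which is consistent with each useful cell hologram occupying only a small patch of the sensor.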

The derivation discussed above was made for 2 point scatterers separated by 2*a*, such that *c*_{1} δ(*x − a, y*) + *c*_{2} δ(*x + a, y*). The more general form of the incoherent holographic term (equivalent of Eqs. 3 and 4 for a continuous distribution of scatterers - as in a real cell) can be expressed as:

$$H({x}_{D},{y}_{D})\propto \frac{{S}_{0}}{{({\lambda}_{0}{z}_{1})}^{2}}\xb7{\left(\frac{{z}_{2}}{{z}_{1}}\right)}^{2}\phantom{\rule{thinmathspace}{0ex}}\left[\{s\left(\frac{{x}_{D}}{F},\frac{{y}_{D}}{F}\right)*{h}_{c}({x}_{D},{y}_{D})\}+c.c.\right]$$

where *s*(*x _{D}, y_{D}*) refers to the transmission image of the sample/cell of interest, which represents the 2D map of all the scatterers located within the sample/cell volume. The above derivation assumed a narrow enough scaled aperture function *p*(−*x _{D}* · *M*, −*y _{D}* · *M*) such that the spatial features of the sample are not washed out, a condition that is readily satisfied in our geometry since *M* = *z*_{1}/*z*_{2} ≫ 1.
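
This general holographic term can be simulated directly. The sketch below convolves a toy object map with the impulse response h_{c} over the effective distance F·z_{2} and keeps twice the real part, reflecting the {·} + c.c. structure; since F ≈ 1 we ignore the s(x_{D}/F, y_{D}/F) coordinate scaling, and all parameter values are merely representative.

```python
import numpy as np

def fresnel_impulse(n, dx, wavelength, z_eff):
    """Paraxial free-space impulse response h_c over the effective distance F*z2."""
    x = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, x)
    return np.exp(1j * np.pi * (X**2 + Y**2) / (wavelength * z_eff)) / (1j * wavelength * z_eff)

# Holographic term for a continuous object: H ~ {s * h_c} + c.c.
n, dx = 256, 2.2e-6                       # grid size and pixel pitch
lambda0, z1, z2 = 591e-9, 3e-2, 1e-3      # representative values
F = (z1 + z2) / z1
s = np.zeros((n, n))
s[n // 2 - 2:n // 2 + 2, n // 2 - 2:n // 2 + 2] = 1.0   # toy "cell" (a few pixels wide)
h = fresnel_impulse(n, dx, lambda0, F * z2)
conv = np.fft.fftshift(np.fft.ifft2(np.fft.fft2(np.fft.ifftshift(s)) *
                                    np.fft.fft2(np.fft.ifftshift(h)))) * dx**2
H = 2 * np.real(conv)                     # {.} + c.c. keeps twice the real part
```

Plotting H reveals the concentric Fresnel ring pattern that constitutes the lensfree hologram of the toy object, localized over a small patch of the detector as predicted by the analysis above.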

Finally, the supplementary text provides further discussion on the spatial sampling requirements at the detector array, as well as the space-bandwidth product of the presented technique.^{36}^{–}^{38}

1. Hell SW. Nat Biotech. 2003;21:1347–1355. [PubMed]

2. Gustafsson MGL. Proc. Natl. Acad. Sci. USA. 2005;102:13081–13086. [PubMed]

3. Betzig E, Patterson GH, Sougrat R, Lindwasser OW, Olenych S, Bonifacino JS, Davidson MW, Lippincott-Schwartz J, Hess HF. Science. 2006;313:1642–1645. [PubMed]

4. Rust MJ, Bates M, Zhuang X. Nat Meth. 2006;3:793–796. [PMC free article] [PubMed]

5. Hess ST, Girirajan TP, Mason MD. Biophysical Journal. 2006;91:4258–4272. [PubMed]

6. Ma Z, Gerton JM, Wade LA, Quake SR. Phys. Rev. Lett. 2006;97:260801. [PubMed]

7. Chung E, Kim D, Cui Y, Kim Y, So PT. Biophysical Journal. 2007;93:1747–1757. [PubMed]

8. Pavani SRP, Thompson MA, Biteen JS, Lord SJ, Liu N, Twieg RJ, Piestun R, Moerner WE. Proc. Natl. Acad. Sci. USA. 2009;106:2995–2999. [PubMed]

9. Zipfel WR, Williams RM, Webb WW. Nat Biotech. 2003;21:1369–1377. [PubMed]

10. Evans CL, Potma EO, Puoris'haag M, Côté D, Lin CP, Xie XS. Proc. Natl. Acad. Sci. USA. 2005;102:16807–16812. [PubMed]

11. Choi W, Fang-Yen C, Badizadegan K, Oh S, Lue N, Dasari RR, Feld MS. Nat Meth. 2007;4:717–719. [PubMed]

12. Barretto RPJ, Messerschmidt B, Schnitzer MJ. Nat Meth. 2009;6:511–512. [PMC free article] [PubMed]

13. Rosen J, Brooker G. Nat Photon. 2008;2:190–195.

14. Goda K, Tsia KK, Jalali B. Nature. 2009;458:1145–1149. [PubMed]

15. Psaltis D, Quake SR, Yang C. Nature. 2006;442:381–386. [PubMed]

16. Haddad WS, Cullen D, Solem JC, Longworth JW, McPherson A, Boyer K, Rhodes CK. Appl. Opt. 1992;31:4973–4978. [PubMed]

17. Xu W, Jericho MH, Meinertzhagen IA, Kreuzer HJ. Proc. Natl. Acad. Sci. USA. 2001;98:11301–11305. [PubMed]

18. Pedrini G, Tiziani HJ. Appl. Opt. 2002;41:4489–4496. [PubMed]

19. Repetto L, Piano E, Pontiggia C. Opt. Lett. 2004;29:1132–1134. [PubMed]

20. Garcia-Sucerquia J, Xu W, Jericho MH, Kreuzer HJ. Opt. Lett. 2006;31:1211–1213. [PubMed]

21. Heng X, Erickson D, Baugh LR, Yaqoob Z, Sternberg PW, Psaltis D, Yang C. Lab Chip. 2006;6:1274–1276. [PubMed]

22. Seo S, Su T, Tseng DK, Erlinger A, Ozcan A. Lab Chip. 2009;9:777–787. [PubMed]

23. Coskun AF, Su T, Ozcan A. Lab Chip. 2010 DOI: 10.1039/b926561a. [PMC free article] [PubMed]

24. Brady DJ. Optical Imaging and Spectroscopy. Hoboken, NJ, USA: John Wiley & Sons; 2009.

25. Lohmann W. J. Opt. Soc. Am. 1965;55:1555–1556.

26. Mertz L. Transformation in Optics. Hoboken, NJ, USA: John Wiley & Sons; 1965.

27. Dubois F, Joannes L, Legros J. Appl. Opt. 1999;38:7085–7094. [PubMed]

28. Dubois F, Requena M Novella, Minetti C, Monnom O, Istasse E. Appl. Opt. 2004;43:1131–1139. [PubMed]

29. Gopinathan U, Pedrini G, Osten W. J. Opt. Soc. Am. A. 2008;25:2459–2466. [PubMed]

30. Monroy F, Rincon O, Torres YM, Garcia-Sucerquia J. Optics Communications. 2008;281:3454–3460.

31. Situ G, Sheridan JT. Opt. Lett. 2007;32:3492–3494. [PubMed]

32. Sherman GC. J. Opt. Soc. Am. 1967;57:546–547. [PubMed]

33. Koren G, Polack F, Joyeux D. J. Opt. Soc. Am. A. 1993;10:423–433.

34. Fienup JR. Opt. Lett. 1978;3:27–29. [PubMed]

35. England JM. Clin Lab Haematol. 1994;16:131–138. [PubMed]

36. Goodman JW. Introduction to Fourier Optics. Greenwood Village, CO, USA: Roberts & Company Publishers; 2005.

37. Lohmann W, Testorf ME, Ojeda-Castañeda J. Proc. SPIE. 2002;4737:77–88.

38. Stern A, Javidi B. J. Opt. Soc. Am. A. 2008;25:736–741. [PubMed]