Opt Express. Mar 14, 2011; 19(6): 5047–5062.
Published online Mar 2, 2011. doi:  10.1364/OE.19.005047
PMCID: PMC3368318
Optimal resolution in Fresnel incoherent correlation holographic fluorescence microscopy
Gary Brooker,1,2,* Nisan Siegel,1,2 Victor Wang,1,2 and Joseph Rosen1,2,3,4
1Department of Biomedical Engineering, Johns Hopkins University, 9605 Medical Center Drive, Rockville, Maryland 20850 USA
2Microscopy Center, Johns Hopkins University Montgomery County Campus, Rockville, Maryland 20850 USA
3Department of Electrical and Computer Engineering, Ben-Gurion University of the Negev, P.O. Box 653, Beer-Sheva 84105, Israel
4rosen@ee.bgu.ac.il
*gbrooker@jhu.edu
Received December 13, 2010; Revised February 22, 2011; Accepted February 27, 2011.
Abstract

Fresnel Incoherent Correlation Holography (FINCH) enables holograms and 3D images to be created from incoherent light with just a camera and a spatial light modulator (SLM). We previously described its application to microscopic incoherent fluorescence, wherein one complex hologram contains all the 3D information in the microscope field, obviating the need for scanning or serial sectioning. We now report experiments that have established the optimal optical, electro-optic, and computational conditions necessary to produce holograms that yield high-quality 3D images of fluorescent microscopic specimens. An important improvement over our previous FINCH configurations capitalizes on the polarization sensitivity of the SLM: the same SLM pixels that create the spherical wave simulating the microscope tube lens also pass the plane waves from the infinity-corrected microscope objective, so that interference between the two wave types at the camera creates a hologram. This advance dramatically improves the resolution of the FINCH system. Results from imaging a fluorescent USAF pattern and a pollen grain slide reveal resolution approaching the Rayleigh limit with this simple method for 3D fluorescent microscopic imaging.
OCIS codes: (110.0180) Microscopy, (090.1995) Digital holography, (180.2520) Fluorescence microscopy, (180.6900) Three-dimensional microscopy, (050.1950) Diffraction gratings, (090.1760) Computer holography, (090.1970) Diffractive optics, (090.2880) Holographic interferometry, (100.6890) Three-dimensional image processing, (110.6880) Three-dimensional image acquisition, (120.5060) Phase modulation, (260.2510) Fluorescence, (330.1720) Color vision
1. Introduction

Incoherent [1,2] and partially coherent [3] digital holographic microscopies have recently attracted much interest because microscopes based on these principles can image three-dimensional (3D) incoherent objects. In addition, some of these systems are capable of imaging fluorescently labeled specimens [1,2], others can perform sectioning of the observed 3D volume [4], and some have even demonstrated improved resolution by operating in a synthetic aperture mode [5]. More recently, a lensless version of a partially coherent digital holographic microscope [3] has been implemented on-chip in a very compact configuration. The potential of these technologies is promising.
The holographic method used in this study is based upon the recently invented single-channel incoherent interferometer for generating digital Fresnel holograms [6,7]. In this non-scanning holographic technique, incoherent light is reflected or emitted from a 3D object, propagates through a spatial light modulator (SLM), and is finally recorded by a digital camera. For every source point the SLM acts as a diffractive beam splitter in an incoherent interferometer, so that each spherical beam originating from each object point is split into two spherical beams with two different radii of curvature. The accumulated interference of all these pairs of spherical beams creates the Fresnel hologram of the observed object. Three holograms are recorded sequentially, each for a different phase factor of the SLM, and are superposed during data processing to produce a complex-valued Fresnel hologram free of the twin image and bias term.
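The paraxial model behind this two-beam interference can be sketched numerically. In the illustration below (our sketch, not the authors' code; the wavelength, radii of curvature and grid are arbitrary), each of the two beams derived from one object point is a quadratic phase factor, and their interference is a Fresnel zone pattern whose effective reconstruction distance zr satisfies 1/zr = 1/z1 − 1/z2:

```python
import numpy as np

# Paraxial sketch of FINCH point-source interference. Each beam split from
# one object point is a quadratic phase factor Q(1/z) = exp[i*pi*(x^2+y^2)/(lam*z)];
# their interference is a Fresnel zone pattern with effective reconstruction
# distance zr given by 1/zr = 1/z1 - 1/z2.
lam = 532e-9             # wavelength [m]
z1, z2 = 0.3, 0.5        # illustrative radii of curvature of the two beams [m]
n, dx = 512, 10e-6       # grid size and pixel pitch [m]
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2

def Q(b):
    """Quadratic phase factor Q(b) = exp[i*pi*b*(x^2 + y^2)/lam]."""
    return np.exp(1j * np.pi * b * r2 / lam)

# Intensity recorded for a single object point: |Q(1/z1) + Q(1/z2)|^2
intensity = np.abs(Q(1 / z1) + Q(1 / z2)) ** 2

# The fringes match a zone pattern parameterized by zr
zr = 1.0 / (1 / z1 - 1 / z2)
expected = 2 + 2 * np.cos(np.pi * r2 / (lam * zr))
assert np.allclose(intensity, expected)
```

The check at the end confirms that the recorded fringe term depends on the two radii only through 1/z1 − 1/z2, which is why a single hologram encodes the axial position of each object point.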
In theory, optical microscopy readily lends itself to the FINCH principle, since the light emitted from an infinity-corrected objective is a plane wave that is then focused to an image plane by the microscope tube lens. The FINCH principle can be applied by substituting an SLM for the tube lens to create the focused beam while at the same time passing the plane wave, so that the two coincident beams interfere [2]. Recording this interference on a CCD camera creates a hologram of the specimen.
In the present study we have examined the factors necessary to obtain optimal resolution in fluorescence microscopy with the FINCH technique. We report here a combination of new advances in FINCH microscopic imaging that yields resolution approaching the Rayleigh limit.
2.1 Microscope and optical setup
The experimental microscope system was configured as shown in Fig. 1 on a Zeiss Axioskop 2FS microscope platform using an Olympus 20×, 0.75 NA objective. In the majority of the experiments, the test subject was a negative USAF test slide (Edmund Optics 59204) resting on a fluorescent plastic backing slide (Chroma) so that the clear features were fluorescent. We also imaged Mixed Pollen Grains (Carolina Biological 30-4264). Either a GFP or a Cy3 filter set (Semrock) was used. The correct working distance between the objective and specimen was critical; it was established by first bringing the sample into focus while viewing the specimen through the microscope binoculars. Once the correct focus was established it was kept constant, the tube lens and binoculars were removed, and the holography configuration shown in Fig. 1 was established. Because the SLM functions as the tube lens of the microscope, creating the spherical wave and passing the plane wave to the camera, the current configuration contains a non-polarizing beam-splitting cube so that the SLM is optically on axis, eliminating any possibility of image distortion from the SLM lens pattern. In our previous configuration of the microscope for FINCH holography, which we called FINCHSCOPE [2], the SLM was positioned at an angle and we made simple corrections to the lens patterns displayed on the SLM to minimize distortions. In the present configuration the lens patterns are normal to the optical axis, so no corrections are necessary and no distortion or loss of resolution can be attributed to having the SLM at an angle to the optical axis. The distance that the light traveled through the beam-splitting cube was 25 mm, and the distance between the cube and the SLM was 4 mm. The distance between the back aperture of the objective and the SLM was 130 mm. The distance between the SLM and the camera (QImaging Cooled Retiga 4000R, 2,048 × 2,048 pixel 12-bit CCD sensor) was varied between 164.5 mm and 800 mm.
The optimal distance zh (see Fig. 5) was 400 mm.
Fig. 1. Microscope configuration for holographic imaging. A fluorescent slide was positioned on the stage of the microscope and illuminated by standard epifluorescence methods. The illumination was controlled with a shutter to minimize photobleaching. The fluorescence …
Fig. 5. Microscope scheme. P1 and P2 are the polarizers.
Laser beams, used for a variety of functions including precise alignment of the system and measurement of the SLM performance, were directed into the microscope through a beam-splitting cube attached to the microscope turret as shown in Fig. 2.
Fig. 2. Microscope configuration for SLM testing and alignment. A Coherent DPSS 532 nm or Thorlabs 633 nm laser passed through a Glan-Thompson polarizer and a 20× beam expander. The expanded laser beam was confirmed to be coherent and collimated with a …
2.2 Spatial light modulator configuration, Fresnel patterns and polarizers
In the original configuration of FINCHSCOPE, a diffractive lens phase pattern displayed on the SLM created a random composite of two different lenses. In the experiments reported here, 37% of the pixels were randomly selected and, together with the 13% of the SLM surface that is never active in light modulation (the fill factor), acted with a very long (8.2 m) focal length, only slightly focusing the plane wave of every object point emerging from the objective; for convenience, this wave is still referred to as the plane wave. The rest of the SLM pixels displayed a quadratic phase mask. Therefore, a single wavefront originating from any object point was split by the SLM into two mutually coherent wavefronts with two different spherical curvatures. These two beams propagate in the same direction toward the camera and mutually interfere on the CCD sensor chip, creating the holographic image. In the new approach reported here, which takes advantage of the polarization properties of the SLM, it is possible to utilize the same pixels to pass both of the mutually coherent waves, the plane and the spherical, by placing input and output polarizers (Thorlabs LPVIS100) before and after the light from the objective is reflected off the SLM. The 'input' polarizer, placed in the optical path before the SLM, transmits the plane wave from the objective to the SLM with polarization components both aligned with and orthogonal to the SLM polarization, while every pixel of the SLM displays the quadratic phase mask. The component with polarization aligned with that of the SLM is focused into a spherical wave, while the component with polarization orthogonal to that of the SLM is simply reflected as if from a mirror. The 'output' polarizer, the last optical component in the optical path before the CCD, passes both the spherical wave and the plane wave with identical polarization.
These two waves then propagate toward the CCD, interfere with one another and thus create the holographic image which is recorded by the CCD, as in the original FINCHSCOPE configuration. The center of the SLM (and the diffractive lens pattern) was precisely aligned with the optical axis of the microscope system.
Application of quadratic phase patterns exp[−iπFL(x² + y²)] to the SLM resulted in a primary plane of focus according to the lens transfer function [8], exp[−iπ(x² + y²)/(λfd)], where λ is the wavelength and fd is the focal length of the diffractive lens displayed on the SLM. Figure 3 demonstrates the functionality of the SLM as a diffractive spherical lens under illumination by laser beams at two different wavelengths. Equating the argument of the lens transfer function over the full aperture of the SLM to the argument of the digital phase pattern and solving for the focal length yields
fd = N²Δ² / (4λ·xmax²·FL)
(1)
where Δ is the pixel size, N is the number of pixels along the largest dimension of the SLM and xmax is the value of the matrix (x,y) at the points (±N/2, 0) (for the experiments reported here, Δ = 8 μm, N = 1920 and xmax = 0.873). Substituting the SLM parameters into Eq. (1) gives fd = (145,473 mm)/FL and fd = (122,262 mm)/FL for λ = 532 nm and 633 nm, respectively. This represents a difference between the calculated and experimental slopes of the graphs in Fig. 3 of about 12.5% and 11% for λ = 532 nm and 633 nm, respectively. The reflective SLM devoid of any pattern has a slight curvature, for which we measured a focal length of fSLM = 8.2 meters. Taking this into account, the total measured focal length fm is calculated as the focal length of two successive lenses:
fm = fd·fSLM / (fd + fSLM)
(2)
where c is the slope of the linear curve fd = c/FL plotted against 1/FL. The corrected curve fm = c·fSLM/(c + fSLM·FL) is no longer linear, but for fSLM >> fd,max it can be approximated by a line with an average slope ca of
ca = c·fSLM² / (fSLM + c/FL,mid)²
(3)
where FL,mid ≈ 244 is the middle value of the range of FL used in the present experiment. Substituting the parameters fSLM = 8.2 m, FL,mid = 244, c(λ = 532 nm) = 145,473 mm and c(λ = 633 nm) = 122,262 mm into Eq. (3) gives modified slopes of ca(λ = 532 nm) = 126,421 mm and ca(λ = 633 nm) = 108,586 mm. After accounting for the inherent curvature of the SLM, the difference between the calculated and experimental slopes of the graphs in Fig. 3 is only 0.83% and 0.46% for λ = 532 nm and 633 nm, respectively, which is within measurement error.
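The quoted slope values can be reproduced directly. The sketch below is ours, not the authors' code; it assumes the forms fd = N²Δ²/(4λ·xmax²·FL) for the diffractive-lens focal length of Eq. (1) and ca = c·fSLM²/(fSLM + c/FL,mid)² for the curvature-corrected slope of Eq. (3), which reproduce the calculated values stated above:

```python
# Numerical check (our sketch) of the calibration slopes quoted in the text,
# under the assumed forms of Eq. (1) and Eq. (3).
Delta = 8e-6        # SLM pixel size [m]
N = 1920            # pixels along the largest SLM dimension
x_max = 0.873       # value of the coordinate matrix at (+/-N/2, 0)
f_slm = 8.2         # inherent focal length of the bare SLM [m]
FL_mid = 244        # middle of the F_L range used in the experiment

def slope_c(lam):
    """Calculated slope c of f_d versus 1/F_L, from Eq. (1) [m]."""
    return N**2 * Delta**2 / (4 * lam * x_max**2)

def slope_ca(lam):
    """Slope corrected for the inherent SLM curvature, Eq. (3) [m]."""
    c = slope_c(lam)
    return c * f_slm**2 / (f_slm + c / FL_mid) ** 2

for lam in (532e-9, 633e-9):
    # Slopes printed in mm; compare with 145,473/122,262 and 126,421/108,586
    print(lam, round(slope_c(lam) * 1e3), round(slope_ca(lam) * 1e3))
```

That the corrected slopes land within a fraction of a percent of the measured ones is the same consistency check the text reports against Fig. 3.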
Fig. 3. Relationship between the radial parameter of the quadratic phase patterns displayed on the SLM at 532 nm and 633 nm and the distance of the measured plane of focus. Lines marked linear represent best-fit lines calculated by the least-squares method. The …
The SLM used was a phase-only SLM (Holoeye HEO 1080P; 1080 × 1920 pixels). The zh distance used (400 mm) for the images reported here was greater than the 230 mm calculated minimum focal length of the SLM. The minimum focal length is determined by the SLM's pixel size Δ (8 μm) and the number of pixels N (1920) along the SLM's largest dimension, according to the inequality fd ≥ NΔ²/λ. The SLM firmware was modified and confirmed to produce the desired focal lengths and phase shifts (using 532 nm and 633 nm coherent, collimated and expanded laser beams for calibration and testing) and to deliver a full 2π phase shift over its working range of 256 gray levels. Diffractive lenses such as the lens patterns on this SLM have multiple higher-diffraction-order foci in the desired focal plane, as well as other, undesired focal planes at different focal distances. In the configuration used here, the focusing efficiency of the SLM into the central image of the desired focal plane was measured to be greater than 50%, with insignificant intensity concentrated in undesired planes of focus. Furthermore, at the camera-SLM distance of 400 mm, the higher-diffraction-order images in the desired focal plane did not project onto the CCD. Recently an excellent review [9] has appeared which describes the characteristics of SLM devices and their application to a variety of functions in microscopy.
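The 230 mm figure follows directly from that inequality with the stated SLM parameters; a quick check (ours):

```python
# Minimum diffractive focal length implied by f_d >= N*Delta^2/lam
# for the SLM parameters quoted above (our check, not the authors' code).
N, Delta, lam = 1920, 8e-6, 532e-9
f_min = N * Delta**2 / lam          # [m]
print(round(f_min * 1e3), "mm")     # ~231 mm, consistent with the quoted 230 mm
```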
2.3 Computational methods
In the current FINCH configuration, the creation of a complex hologram and its reconstruction involves a number of steps. The complete process for microscopic imaging has been automated with computer programs written in LabVIEW (National Instruments). The first step is a cycle of creating the appropriate quadratic phase mask, applying it to the SLM, opening the shutter to illuminate the sample, and capturing a hologram; the cycle is repeated for quadratic phase patterns whose phase shifts are θk = 0, 120 and 240 degrees (k = 1, 2, 3) [2]. Next, the three holograms are superposed to create a complex hologram, on which Fresnel propagation is performed to yield the individual image planes in the sample. In the case of the USAF test target the best reconstructed plane of focus was selected.
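For equally spaced phase shifts, one standard way to perform the superposition step is the combination Σk Ik·exp(iθk), which cancels both the bias term and the twin image. A minimal sketch (ours, not the authors' LabVIEW code):

```python
import numpy as np

# Three-exposure superposition sketch. For equally spaced SLM phase shifts
# of 0, 120 and 240 degrees, sum_k I_k * exp(i*theta_k) cancels both the
# bias term and the twin image, leaving a single complex interference term.
thetas = np.deg2rad([0.0, 120.0, 240.0])

def complex_hologram(exposures):
    """Superpose three phase-shifted exposures into a complex hologram."""
    return sum(I * np.exp(1j * t) for I, t in zip(exposures, thetas))

# Synthetic check: exposures I_k = A + B*cos(phi - theta_k) with a random
# fringe-phase map phi; the superposition recovers (3B/2)*exp(i*phi),
# i.e. the bias A and the conjugate (twin) term drop out exactly.
rng = np.random.default_rng(0)
phi = rng.uniform(0, 2 * np.pi, size=(64, 64))
A, B = 5.0, 2.0
exposures = [A + B * np.cos(phi - t) for t in thetas]
H = complex_hologram(exposures)
assert np.allclose(H, 1.5 * B * np.exp(1j * phi))
```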
FINCH creates holograms in a single beam system as a result of interference between a plane wave and a spherical wave originating from every object point. In our previous reports we created a random constant phase mask so that with a phase-only SLM, the plane wave from an infinity corrected microscope objective could be directed to the camera along with the spherical wave created by the SLM. The use of a constant phase mask presents certain disadvantages in that it requires half the pixels on the SLM and also degrades the resolution of the mask which creates the spherical wave. Because only one linear polarization state on the liquid crystal based SLM can change the phase of incoming light, half of the randomly polarized fluorescent light striking the device can have quadratic phase modulation whereas the other half is shifted by a constant phase, as shown in Fig. 4(a) . However, the sensitivity of the SLM to a specific linear polarization also makes it possible to use the portion of the light not affected by the SLM to deliver the plane wave as shown in Fig. 4(b), and discussed earlier and below.
Fig. 4. Comparison of using a constant phase mask (a) versus the polarization method (b) to select and separate the plane and spherical waves in FINCH holography. Notice that when the polarization method is used, all the pixels on the SLM are used to create the …
The following analysis refers to the system scheme shown in Fig. 5, where it is assumed that the object is an infinitesimal point, so the result of the analysis is the point spread function (PSF). For an arbitrary object point at r̄s = (xs, ys), at a working distance zs before the objective, the complex amplitude beyond the first polarizer, just before the SLM, is
equation m12
(4)
where it is assumed that the polarizer axis is tilted at an angle ϕ1 to the x axis, fo is the focal length of the objective, d1 is the distance between the objective and the SLM, and Ax, Ay are the constant amplitudes along the x and y axes, respectively. The asterisk denotes a two-dimensional convolution and x̂, ŷ are unit vectors in the x, y directions, respectively. For brevity, the quadratic phase function is designated by Q, such that Q(b) = exp[iπbλ⁻¹(x² + y²)], the function L stands for the linear phase function, such that L(s̄) = exp[i2πλ⁻¹(sx·x + sy·y)], and C1(r̄s) is a complex constant dependent on the source point's location. The SLM modulates the light in only a single linear polarization; in our case, without loss of generality, this axis is chosen to be x. The light polarized in the y direction is reflected from the SLM with only a constant phase shift. Therefore the complex amplitude on the output plane of the SLM is
equation m16
(5)
where BQ and BM are complex constants. θ is one of the three angles used in the phase shift procedure in order to eliminate the bias term and the twin image [6,7]. The complex amplitude after passing the second polarizer, with axis angle of ϕ 2 to the x axis, has linear polarization in the direction of the polarizer axis. Therefore we can abandon the vector notation and express the complex amplitude beyond the second polarizer, on the CCD plane, as
equation m17
(6)
where zh is the distance between the SLM and the CCD. The intensity of the recorded hologram is,
equation m18
(7)
Following the calculation of Eq. (7), the intensity on the CCD plane is,
equation m19
(8)
where Ao, C2 and C3 are constants and zr, the reconstruction distance of the object point, is given by
equation m21
(9)
The transverse location of the reconstructed object point is,
equation m22
(10)
Equation (8) is the typical expression of an in-line Fresnel hologram of a single point, and therefore Ip(x2,y2) is the PSF of the recording part of the FINCH system. To avoid the problem of the twin image, one of the interference terms (the second or third term) in Eq. (8) should be isolated by the phase-shifting procedure [10,11]. Reconstructing this term by Fresnel back propagation yields the image of the point at a distance zr from the hologram given by Eq. (9), and at the transverse location given by Eq. (10). The sign '±' in Eq. (9) indicates the possibility of reconstructing from the hologram either the virtual or the real image, depending on whether the second or third term is chosen from Eq. (8). The polarization angles ϕ1 and ϕ2 are chosen to maximize the interference terms [the second and third terms in Eq. (8)]. Their precise values depend on the values of the constants |BQ| and |BM|. In this study we chose their values empirically, by picking the angles that yield the best reconstructed image.
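The Fresnel back propagation of the isolated term can be performed numerically with an FFT-based transfer-function propagator. The sketch below is ours (wavelength, grid and distance are arbitrary); it propagates a converging quadratic phase, standing in for the isolated hologram term of a single point, through its reconstruction distance and checks that it focuses on axis:

```python
import numpy as np

# FFT-based paraxial Fresnel propagation (our sketch). The Fresnel
# transfer function exp[-i*pi*lam*z*(fx^2 + fy^2)] is applied in the
# Fourier domain.
def fresnel_propagate(field, lam, z, dx):
    """Propagate a sampled complex field a distance z (paraxial Fresnel)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * lam * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Check: a converging quadratic phase exp[-i*pi*(x^2+y^2)/(lam*zr)],
# standing in for the isolated hologram term of one point, focuses to a
# sharp on-axis spot after propagating the reconstruction distance zr.
lam, dx, n, zr = 532e-9, 10e-6, 512, 0.4
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
term = np.exp(-1j * np.pi * (X**2 + Y**2) / (lam * zr))
image = np.abs(fresnel_propagate(term, lam, zr, dx)) ** 2
peak = np.unravel_index(np.argmax(image), image.shape)
assert max(abs(peak[0] - n // 2), abs(peak[1] - n // 2)) <= 1
assert image.max() > 100 * image.mean()
```

The grid and pitch are chosen so the quadratic chirp stays below the Nyquist limit of the sampling; for real holograms the same propagator is simply applied at a range of distances to sweep through the specimen's planes.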
Based on Eq. (10), the transverse magnification of this FINCH system is
equation m24
(11)
At this stage we can simplify Eqs. (8)–(11) by choosing the working distance to be zs = fo, as was indeed chosen in the present experiment. In this case fe→∞, and therefore f1 = −fd, zr = ±(zh − fd) and MT = zh/fo.
The minimal resolved object size observed by reconstructing the FINCH hologram is dictated by either the input or output apertures according to the following equation
equation m26
(12)
where DSLM and DH are the diameters of the SLM and the recorded hologram, respectively, and NAin and NAout are the numerical apertures of the system input and output, respectively. NAin is determined by the objective and cannot be changed by the design of the FINCH system. However, the product NAout·MT depends on the system parameters, and our goal should be to keep this product equal to or larger than NAin in order not to reduce the resolution determined by the input aperture. Therefore, referring to Eq. (12), an optimal FINCH system satisfies the inequality
equation m28
(13)
In this inequality all the parameters are well defined besides the diameter of the hologram. This size is dependent on the overall size of the reconstructed image. Based on simple geometrical considerations the diameter of the hologram is,
equation m29
(14)
where a is the ratio between the image and SLM sizes; a ranges from almost zero, for an image of a point, to 1 for a full-frame image. Substituting Eq. (14) and the quantities zr = |fd − zh| and MT = zh/fo into Eq. (13) yields
equation m31
(15)
The only free parameter in this analysis that does not influence other aspects of the system's performance is fd; therefore, by solving the inequality in Eq. (15) we find the fd that is optimal in the sense of best image resolution. In this study and in Ref [11] we used the complete field of view, and therefore we assume a = 1. Consequently the focal length of the diffractive lens should be equal to or smaller than twice the distance between the SLM and the CCD, i.e., fd ≤ 2zh. Because the CCD chip is not ideal as a medium for hologram recording, in practice it is optimal to form the image as far as possible from the CCD chip. Therefore fd = 2zh is the optimal choice for the focal length of the diffractive lens.
The optimal conditions for imaging the fluorescent USAF slide with both the constant phase mask method and the polarization method were compared. The results demonstrate the superiority of the polarization method. Figure 6 shows the optimal plane of focus from image reconstructions made from holograms captured with both methods. The conditions were identical, using an 800 mm focal length diffractive lens pattern and with the camera positioned 400 mm from the SLM. In Fig. 6(a) the holograms were captured with a 37% constant phase mask and with the input and output polarizers set at 0 degrees (i.e. parallel with the SLM polarization). In Fig. 6(b) the holograms were captured using the polarization method, without any constant phase mask and with the polarizers set at 60 degrees to the x axis.
Fig. 6. Best plane of focus reconstruction from holograms of the fluorescent USAF test slide using the constant phase mask technique and the polarizers method. (a) Static mask. (b) Input and output polarizers at 60 degrees. Olympus 20×, 0.75 NA objective. …
Imaging pollen grains is a convenient way to compare the performance of microscopes on biological samples. We compared the performance of the new dual-polarizer method to our previous constant phase mask method. As with the USAF slide, the results with the polarizer method were much better for the exact same field, as shown in Fig. 7. Notice also the improved resolution of the two pollen grains along the edges of the field with the polarization method. The slight ghost images that can be seen in Fig. 6 are not inherent to FINCH, because ghost images were observed even when the SLM was used as a flat mirror, and even when it was replaced by a regular flat mirror and a refractive lens (data not shown). We suspect that these ghost images arise from light reflections within the beam splitter. There were no ghost images in images taken with a 45° flat mirror and a refractive lens, but that configuration was not suitable for FINCH.
Fig. 7. Best plane of focus from holograms of a pollen grain test slide using the constant phase mask technique and the dual polarizers method. (a) Constant phase mask. (b) Polarizers at 60 degrees. Olympus 20×, 0.75 NA objective. The full camera field …
Another advantage of the FINCH holographic method for capturing a 3D image is that the reconstructed image planes have much less out-of-focus haze than widefield microscopy. This can be explained as follows. A widefield microscope has a single PSF, which becomes wider and weaker with propagation away from the image plane. A holographic microscopic imager such as one based on FINCH is different: each transverse section has its own PSF, similar to the widefield PSF at focus. Therefore every section along the z axis is sharply imaged, i.e. convolved with the relatively sharp PSF of the relevant section, and is summed with relatively weak haze contributed by the other, out-of-focus sections.
The effect of input and output polarization upon reconstructed best planes of focus was tested after capturing holograms of the fluorescent USAF slide for a matrix of conditions wherein the input and output polarization was varied in 15 degree increments between 0 and 90 degrees. Figure 8 shows the phase 0° holograms from each condition and Fig. 9 shows the best plane of focus which was reconstructed from these (and their associated 120° and 240° phase-shifted) holograms. It can be seen that the best conditions for recording holograms were with input and output polarization combinations varying between 45 and 60 degrees.
Fig. 8. Holograms of the USAF fluorescent test slide using an Olympus 20×, 0.75 NA objective. The input and output polarization orientation was changed as shown in the matrix and the phase 0° hologram from each series of three holograms (phase …
Fig. 9. The best plane of focus from reconstructions of holograms of the USAF fluorescent test slide using an Olympus 20×, 0.75 NA objective. The input and output polarization orientation varied as indicated.
The effect of the fluorescence emission bandwidth on widefield images and on those generated by FINCH holography was examined. The Semrock GFP filter set used in this study produced an emission bandwidth from the USAF slide of ~38 nm (500 nm – 538 nm) FWHM (Fig. 10 B-III) when measured directly with a spectrometer. We examined the effect of reducing the normal emission bandwidth by adding a longpass filter (521 nm cut-on) to obtain a narrow, 17 nm emission bandwidth (Fig. 10 A-III). Finally, we removed the 500 nm – 538 nm bandpass emission filter, used only the longpass filter, and measured a fluorescence emission with a bandwidth of >50 nm and a tail of fluorescence extending more than 50 nm further (Fig. 10 C-III). As shown in column I, the images obtained using the SLM to create the diffractive imaging lens became markedly blurred with increasing bandwidth. However, as shown in column II, increasing the bandwidth of the fluorescence emission had little or no effect on the focus of the images obtained by reconstructing the holograms, even with wide-band fluorescence.
Fig. 10. Comparison of widefield and FINCH holographic imaging as a function of fluorescence emission bandwidth. The specimen was the USAF test pattern, imaged with an Olympus 20×, 0.75 NA objective at an SLM-CCD distance of 400 mm. Columns I, II, and III respectively …
By using the polarization properties of the SLM it is possible to utilize the same pixels to pass both the plane and spherical waves by including input and output polarizers in the system. This has two advantages: (1) the resolution of the lens patterns is increased, because all of the SLM pixels can be used to represent the lens function more accurately (the quadratic phase pattern is not interrupted by non-functional pixels); and (2) the plane and spherical waves come from the same pixels, so the interference is not approximated from adjacent or otherwise random pixels. The configuration used in the present experiments was established to determine the factors necessary for optimal resolution in a FINCH microscopy system, and is therefore not the most light-efficient. For example, the devices used to control polarization are inefficient and reduce the light by more than 50%. The beam-splitting cube, used so that the SLM could be operated on axis, imposes an additional efficiency of only 25%. Having established the resolution potential of FINCH, it should be possible to produce diffractive lens patterns with the SLM positioned at 45°, eliminating the need for the beam splitter, so that most of the light is reflected into the camera and no light is lost at this step. Success in making this correction using wavefront analysis and Zernike corrections has been reported for SLMs [12]. Furthermore, more efficient polarizers can be used to reduce the light losses they introduce.
In spite of the inefficiency of the light budget in our configuration, high-quality reconstructed images were obtained at very low light levels. The signal-to-noise level in the FINCH system is not very dependent upon the intensity of the hologram being captured but depends more upon the extent of interference between the two waves propagating from the SLM. This is shown clearly in the matrix of images obtained from holograms captured at different polarizer settings in Figs. 8 and 9, along the diagonal with a constant ratio of ϕ1:ϕ2 (i.e. from (0,0) to (90,90)). The hologram in Fig. 8 at (0,0) is composed mostly of the spherical wave from the SLM, while the hologram at (90,90) is composed mostly of the plane wave. The corresponding reconstructed images in Fig. 9 both have extremely poor resolution, even though the holograms from which they are reconstructed are the brightest. The highest-resolution reconstructed images in Fig. 9 derive from the holograms taken with the polarizers at intermediate angles, transmitting approximately equal amounts of the plane and spherical waves. In contrast to what would be expected in conventional imaging, the highest-resolution reconstructed images did not come from the holograms with the highest intensity, but rather from the holograms in which the greatest proportion of both plane and spherical waves produced the interference pattern. Thus, in FINCH imaging, obtaining a high degree of interference visibility between the pairs of plane and spherical waves is a more critical factor than simply maximizing the intensity of the recorded holograms.
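This brightness-versus-visibility trade-off along the diagonal of the polarizer matrix can be modeled simply. In the sketch below (ours; it assumes |BQ| = |BM| for simplicity), the modulated x-polarized component reaching the CCD scales as cos(ϕ1)·cos(ϕ2) and produces the spherical wave, the unmodulated y-polarized component scales as sin(ϕ1)·sin(ϕ2) and produces the plane wave, and the fringe visibility of their interference is 2ab/(a² + b²):

```python
import numpy as np

# Illustrative polarizer-angle model (assumes |B_Q| = |B_M|): fringe
# visibility 2ab/(a^2 + b^2) between the spherical-wave amplitude
# a = cos(phi1)*cos(phi2) and the plane-wave amplitude b = sin(phi1)*sin(phi2).
def visibility(phi1_deg, phi2_deg):
    a = np.cos(np.radians(phi1_deg)) * np.cos(np.radians(phi2_deg))  # spherical
    b = np.sin(np.radians(phi1_deg)) * np.sin(np.radians(phi2_deg))  # plane
    denom = a**2 + b**2
    return 0.0 if denom == 0 else 2 * abs(a * b) / denom

# Visibility vanishes when either wave dominates and peaks at an equal
# split, mirroring the diagonal of the polarizer matrix in Figs. 8 and 9.
for angles in [(0, 0), (45, 45), (60, 60), (90, 90)]:
    print(angles, round(float(visibility(*angles)), 3))
```

In this simplified model the brightest settings, (0,0) and (90,90), have zero visibility, while intermediate angles near 45° maximize it; when |BQ| and |BM| differ, the optimum shifts away from 45°, consistent with the empirically best 45–60° range.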
Another advantage shown here of our FINCH imaging configuration is its greater immunity to the wavelength-dependent change in focal length of the diffractive lens. We observed the same sharp focus in images derived from FINCH holograms captured at narrow or very wide bandwidth, whereas under the same bandwidth conditions, when the SLM was used as a focusing lens and the bandwidth was increased as shown in Fig. 10, loss of focus and blurring of the images occurred. In regular imaging, changing the focal length f of an imaging lens changes the image distance do according to the imaging formula 1/di + 1/do = 1/f, where di is the distance of the object from the imaging lens. Therefore the transverse magnification MT is also sensitive to the change of focal length, because of the relation MT = do/di. Consequently, for each wavelength there is a different image at a different location and with a different scale, which blurs the overall image. This is not the case when the image being recorded is a FINCH hologram. As derived above from Eqs. (9) and (10), for the case zs = fo, fe→∞, we see that f1 = −fd, zr = ±(zh − fd) and MT = zh/fo. Therefore zr is sensitive to λ because of the dependence of fd on λ. However, the transverse magnification MT is independent of fd and therefore independent of λ. In other words, there is a different image for each wavelength, as in regular imaging, but with FINCH all the images appear at the same scale and are thus superimposed, so that no blurring occurs due to chromatic diffraction effects.
Acknowledgements
We thank Barak Katz for sharing his results before publication and Dr. Brian Storrie and Dr. Edwin Heilweil for most helpful and continued advice and discussions. Supported in part by National Institutes of Health Grant 1 R43 EB008933-01A1 to GB, NIST ARRA Award No. 60NANB10D008 to GB and BS and Celloptic, Inc.
References

1. Schilling B. W., Poon T.-C., Indebetouw G., Storrie B., Shinoda K., Suzuki Y., Wu M. H., “Three-dimensional holographic fluorescence microscopy,” Opt. Lett. 22(19), 1506–1508 (1997). doi: 10.1364/OL.22.001506. [PubMed] [Cross Ref]
2. Rosen J., Brooker G., “Non-Scanning Motionless Fluorescence Three-Dimensional Holographic Microscopy,” Nat. Photonics 2(3), 190–195 (2008). doi: 10.1038/nphoton.2007.300. [Cross Ref]
3. Mudanyali O., Tseng D., Oh C., Isikman S. O., Sencan I., Bishara W., Oztoprak C., Seo S., Khademhosseini B., Ozcan A., “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab Chip 10(11), 1417–1428 (2010). doi: 10.1039/c000453g. [PubMed] [Cross Ref]
4. Lam E. Y., Zhang X., Vo H., Poon T.-C., Indebetouw G., “Three-dimensional microscopy and sectional image reconstruction using optical scanning holography,” Appl. Opt. 48(34), H113–H119 (2009). doi: 10.1364/AO.48.00H113. [PubMed] [Cross Ref]
5. Indebetouw G., Tada Y., Rosen J., Brooker G., “Scanning holographic microscopy with resolution exceeding the Rayleigh limit of the objective by superposition of off-axis holograms,” Appl. Opt. 46(6), 993–1000 (2007). doi: 10.1364/AO.46.000993. [PubMed] [Cross Ref]
6. Rosen J., Brooker G., “Digital spatially incoherent Fresnel holography,” Opt. Lett. 32(8), 912–914 (2007). doi: 10.1364/OL.32.000912. [PubMed] [Cross Ref]
7. Rosen J., Brooker G., “Fluorescence incoherent color holography,” Opt. Express 15(5), 2244–2250 (2007). doi: 10.1364/OE.15.002244. [PubMed] [Cross Ref]
8. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996).
9. Maurer C., Jesacher A., Bernet S., Ritsch-Marte M., “What spatial light modulators can do for optical microscopy,” Laser Photonics Rev. 5(1), 81–101 (2011). doi: 10.1002/lpor.200900047. [Cross Ref]
10. Yamaguchi I., Zhang T., “Phase-shifting digital holography,” Opt. Lett. 22(16), 1268–1270 (1997). doi: 10.1364/OL.22.001268. [PubMed] [Cross Ref]
11. Katz B., Wulich D., Rosen J., “Optimal noise suppression in Fresnel incoherent correlation holography (FINCH) configured for maximum imaging resolution,” Appl. Opt. 49(30), 5757–5763 (2010). doi: 10.1364/AO.49.005757. [PubMed] [Cross Ref]
12. Love G. D., “Wave-front correction and production of Zernike modes with a liquid-crystal spatial light modulator,” Appl. Opt. 36(7), 1517–1520 (1997). doi: 10.1364/AO.36.001517. [PubMed] [Cross Ref]