The power of fluorescence microscopy to study cellular structures and macromolecular complexes spans a wide range of size scales, from studies of cell behavior and function in physiological, three-dimensional (3D) environments, to understanding the molecular architecture of organelles. At each length scale, the challenge in 3D imaging is to extract the most spatial and temporal resolution possible while limiting photodamage/bleaching to living cells. A number of advancements in 3D fluorescence microscopy now offer higher resolution, improved speed, and reduced photobleaching relative to traditional point-scanning microscopy methods. Here, we discuss a few specific microscopy modalities that we believe will be particularly advantageous in imaging cells and subcellular structures in physiologically relevant 3D environments.
Imaging cells and subcellular structures in three-dimensional environments brings with it challenges that are not unique to biological microscopy, but are amplified relative to 2D imaging of cells adhered to traditional glass coverslips in vitro. We will review a few of the major challenges for 3D and time-lapse 3D (4D) biological imaging, and discuss the weaknesses of conventional laser-scanning confocal microscopy (CLSM) in meeting these challenges. We then cover the strengths and weaknesses of newer microscopy modalities that have successfully addressed these challenges in recent years (see Glossary for an introduction to terminology).
The first and most obvious challenge of 3D imaging is that in conventional, uni-axial microscopy arrangements, the resolution along the Z-axis of the microscope is substantially (~2x) worse than in the lateral dimensions due to the inherent shape of the point spread function (PSF; the three-dimensional diffraction pattern of a point source viewed through an objective lens; see Box 1 and Figure 1). The characteristic anisotropy of the PSF is seen as “Z-stretch” in XZ-projections of reconstructed 3D image sets obtained with uni-axial fluorescence microscopy. This reduced axial resolution and Z-stretch can introduce profound errors in tracking of objects in 4D and in co-localization in multichannel 3D imaging 1.
A 3D image of a diffraction-limited point spread function (applies to wide-field epifluorescence, confocal, two-photon, and light-sheet microscopies). Once excited, a fluorophore behaves as a point source of light. When collected by a microscope, the spread of the light waves from the point forms a diffraction pattern (Figure 1A). At high magnification in the image plane, the pattern has a central spot, the Airy disk, and diffraction rings around it. The size of the Airy disk is given by the equation

d = 1.22 λ / NA
where d is the diameter of the Airy disk, λ is the wavelength of light, and NA is the numerical aperture of the objective lens (which is determined by the index of refraction of the immersion medium and the maximum angle of light cone that can enter the lens). Thus, the higher the NA, the smaller the Airy disk becomes. The size of this Airy disk (defined as the distance between first intensity minima) limits the ability of the microscope system to resolve two point sources emitting at the same time.
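The scaling of the Airy disk with NA can be illustrated numerically. This is a minimal sketch of the standard d = 1.22 λ / NA relation; the particular values (520 nm green emission, NA 1.4 oil immersion vs. NA 0.8 long-working-distance lens) are illustrative assumptions, not measurements from any specific instrument.

```python
def airy_disk_diameter(wavelength_nm, na):
    """Diameter of the Airy disk (distance between first intensity minima),
    from the standard diffraction relation d = 1.22 * lambda / NA."""
    return 1.22 * wavelength_nm / na

# Illustrative comparison: high-NA oil objective vs. lower-NA
# long-working-distance objective, both at 520 nm emission.
d_high = airy_disk_diameter(520, 1.4)
d_low = airy_disk_diameter(520, 0.8)
print(f"NA 1.4: {d_high:.0f} nm; NA 0.8: {d_low:.0f} nm")  # ~453 nm vs ~793 nm
```

The roughly 75% larger Airy disk at NA 0.8 is one reason the long working distances needed for thick samples (discussed below) cost resolving power.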
Above and below the focal plane, the light spreads out from the point source, and the extent of this in 3D is known as the Point Spread Function (PSF). When viewed perpendicular to the axis of imaging (in an XZ plane), it has a longer distribution than it does laterally (Figure 1B). An important aspect of this is that the immersion medium through which the objective lens contacts the specimen, as well as the specimen itself, both contribute to the optics of the system. Due to differences between the refractive index of the immersion medium and that of the sample, spherical aberration increases as one focuses deeper into the sample. This causes the effective PSF (and hence the light pattern produced by it) to become asymmetric in Z and the Z-stretch to become larger (Figure 1C).
Adding to the problem of axial resolution is that the structures of interest are sometimes tens or hundreds of microns away from a coverslip or objective lens front element. Focusing at these depths requires longer working distances for objective lenses, which comes at the cost of lowering the numerical aperture, thus light collection efficiency and resolving power are reduced. Furthermore, for fluorescence techniques, excitation light penetration in most biological samples is problematic due to photonic interaction with the sample, resulting in elastic scattering 2, 3. The emitted light from a fluorophore also scatters within the sample, and thus contributes to out-of-focus fluorescence and background signal, thereby decreasing the image signal-to-noise ratio (SNR) 4. In addition, at deeper focal planes in scattering samples, the observed PSF can degrade rapidly due to spherical aberration such that it becomes severely asymmetric along the Z axis (Figure 1). Refractive index mismatches between thick specimens, glass, and immersion media further add to spherical aberration, inducing errors in Z localization 5.
Another challenge for 3D fluorescence imaging, particularly over time (4D imaging), is that excessive light exposure to the sample is inherent in the need to collect several Z-planes for each time-point, leading to problems with photobleaching of fluorophores, phototoxicity, and slow 3D frame rate. Photobleaching and phototoxicity occur primarily because excited fluorophores may not release their energy as fluorescence, but instead react with dissolved oxygen. This irreversibly bleaches the fluorophore and produces reactive oxygen species (ROS), whose phototoxic effects scale with their concentration and diffusion. While ROS are a major contributor to phototoxicity, other ROS-independent photodamage may also occur, so that even small amounts of light absorption can produce toxicity and unintended effects on cell function 6-8. Although photobleaching and phototoxicity can be abrogated in part by addition of oxygen scavenging systems or simple antioxidants such as L-ascorbic acid 9, the effects of phototoxicity are cumulative and directly proportional to excitation light intensity and total photon load 7. Thus, strategies to maximize light collection efficiency and minimize excitation of the total sample volume (e.g. 6, 10) are critical in advancing 4D imaging.
In addition to photobleaching and phototoxicity, the need to collect several Z-planes increases the acquisition time for each time-point, thus slowing the rate of 4D image acquisition. This leads to trade-offs between time resolution, imaged volume, Z-resolution, and length of imaging period, and places further constraints on the total photon load. This is particularly true with some types of very dynamic processes such as rapid cell migration, membrane remodeling or microtubule dynamics that require both high temporal and spatial resolution in 4D for motion tracking analyses.
Point-scanning methods such as CLSM and two-photon microscopy (2PM) have been a major boon to biological microscopy for decades, revolutionizing our ability to define the 3D structure of cells, tissues and whole animals. By rejecting out-of-focus fluorescence with a pinhole or limiting excitation to a diffraction-limited volume, CLSM and 2PM offer excellent optical sectioning, because only fluorescence emitted from the diffraction-limited volume in the focal plane can reach the photomultiplier tube detector (PMT; for review, see 4).
In spite of their advantages and major contributions to 3D imaging, these traditional scanning techniques are particularly poor at overcoming the challenges to 4D imaging discussed above. First, neither CLSM nor 2PM point-scanning methods abrogate the axial stretch of the PSF discussed above (Box 1). Second, because these methods raster the excitation and detection to build an image one voxel position at a time over the entire X-Y-Z imaged volume, they suffer from slow 3D frame rates and excessive light exposure to the specimen. This contrasts with traditional transmitted-light and wide-field epifluorescence modes, in which entire X-Y focal planes are imaged at once; the voxel-by-voxel approach considerably reduces the achievable frame rate for 4D imaging. Furthermore, since PMT detectors used in CLSM and 2PM are noisy and have poor quantum efficiency (QE; ~10-15%), in order to get sufficient fluorescence photons to the detector in dim or highly scattering samples, either the scan speed must be lowered to allow longer excitation dwell times, or the intensity of the excitation must be increased to accumulate more fluorescence signal 11. Typically, the combined effects of scanning and PMT properties result in a SNR that is several times lower than other imaging modalities 12. Furthermore, the need to increase excitation intensity can result in fluorophore saturation 13. In this regime, the number of fluorescent photons produced does not increase as excitation input increases, but the chance for triplet state ROS production continues to increase 8. In addition, for CLSM, although fluorescence emission collection is limited to the in-focus image volume, the entire Z-axis of the specimen is exposed to excitation throughout scanning. Coupled with the fact that peak excitation intensity with CLSM is quite high at ~1 mW/μm2, CLSM can cause extensive photodamage and photobleaching in 4D imaging.
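The impact of detector QE on SNR can be sketched with back-of-envelope arithmetic. Assuming shot-noise-limited detection (SNR = √N for N detected photons), and using the ~12% PMT QE stated above versus an illustrative ~70% QE assumed for a scientific camera, the same emitted photon budget yields very different image quality:

```python
import math

def detected_snr(photons_emitted, qe):
    """Shot-noise-limited SNR: sqrt of the number of detected photons.
    qe is the detector quantum efficiency (fraction of photons detected)."""
    detected = photons_emitted * qe
    return math.sqrt(detected)

# Hypothetical budget of 1000 photons emitted from one voxel/pixel:
# ~12% QE PMT (as in the text) vs. an assumed ~70% QE camera.
print(round(detected_snr(1000, 0.12), 1))  # PMT
print(round(detected_snr(1000, 0.70), 1))  # camera
```

Because SNR grows only as the square root of detected photons, recovering the camera's SNR with a PMT requires several-fold more excitation light, which is exactly the photon-load penalty discussed above.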
In 2PM, the volume of excitation is limited to a diffraction-limited region within the specimen where pulsed, long-wavelength laser intensity is high enough to produce fluorescence excitation via the non-linear two-photon effect (Figure 2) 14. The use of long-wavelength illumination improves penetration of the sample and the limited excitation volume eliminates photobleaching in out-of-focus areas of the specimen. In addition, pulsed lasers allow for second harmonic generation imaging of collagen fibers and other non-centrosymmetric polymers without sample labeling 15-17. In spite of these advantages over CLSM, photobleaching increases much more steeply with excitation intensity under two-photon excitation than under single-photon excitation 18. Furthermore, the maximal rate at which fluorescence can be emitted from a single fluorophore is lower with two-photon excitation than single-photon excitation 19. Even more importantly, phototoxicity observed with two-photon excitation may actually be worse than single-photon excitation in some cases 19, 20. While the average power of the input beam over time may be relatively low, the instantaneous power during an excitation pulse can be as high as 10W/μm2 11. Nevertheless, because of its ability to limit excitation volume and penetrate deep into tissues, 2PM will continue to be a primary tool for 3D and 4D imaging, particularly intravitally in adult animals 21-23. Furthermore, 2PM will be particularly advantageous if used in conjunction with other optical sectioning techniques (see below) 24, 25.
The most fundamental challenges to 3D optical imaging are the resolution limit imposed by diffraction and the anisotropic nature of the PSF, which reduces resolution in the axial dimension compared to the lateral dimensions. These challenges have been overcome recently by structured illumination microscopy (SIM) as well as point source localization techniques that hold great promise for the future.
While optical sectioning of 3D samples has been most often obtained through either CLSM or 2PM, optical sectioning can also be performed using a wide-field microscope with laterally structured illumination 26. Structured illumination microscopy (SIM) was first introduced to enhance lateral resolution beyond the limits of diffraction 27-29. In SIM, the sample is illuminated with patterned light, producing Moiré fringes or a ‘beat pattern’ that moves otherwise unresolvable high-frequency information into the passband of the microscope objective. By taking multiple images with phase-shifted patterned illumination, it is possible to computationally remove the excitation pattern and reassemble an image with approximately twice the resolution of the original images.
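The Moiré principle behind SIM can be demonstrated in one dimension. In this minimal sketch (the frequencies are arbitrary illustrative choices, not real microscope parameters), multiplying a sample structure oscillating at a spatial frequency beyond the detection passband by a patterned illumination produces a beat component at the difference frequency, which falls inside the passband:

```python
import numpy as np

n = 1024
x = np.arange(n) / n
f_sample, f_pattern = 90, 80  # fine sample detail vs. illumination pattern

sample = 1 + np.cos(2 * np.pi * f_sample * x)    # "unresolvable" structure
pattern = 1 + np.cos(2 * np.pi * f_pattern * x)  # structured illumination

# Fluorescence is proportional to structure x illumination; the product
# contains a Moire beat at |f_sample - f_pattern| = 10 cycles.
observed = sample * pattern
spectrum = np.abs(np.fft.rfft(observed))

beat = abs(f_sample - f_pattern)
print("beat frequency:", beat)
print("beat component in passband:", spectrum[beat] > 0.1 * spectrum[0])
```

Phase-shifting the pattern across multiple exposures, as described above, is what allows the reconstruction to separate this shifted high-frequency information from the ordinary low-frequency content and return it to its true position.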
Gustafsson and coworkers have expanded the initial 2D technique by using three mutually coherent beams to generate interferometrically patterned excitation in both axial and lateral dimensions 28, offering resolution doubling in all three dimensions (Figure 2). This version of 3D SIM can be improved further by employing two opposed objective lenses in a ‘4Pi geometry’ and a beam splitter that makes 6 coherent beams 30. In the 4Pi implementation, the resultant images do not suffer from the same Z-stretch observed with confocal images 28, 30, 31, as 100 nm near-isotropic resolution is obtained. In addition to the significant increase in resolution, 3D SIM does not rely on a pinhole, thus enabling efficient light collection, good sensitivity, and the ability to capture widefield images on a high-QE, low-noise array detector such as charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) cameras. In addition, SIM is relatively easy and inexpensive to implement in the context of a traditional widefield epifluorescence microscope, and several manufacturers already have commercialized 2D SIM systems on the market. The power of SIM to study processes in cells in 3D has been beautifully demonstrated in the study of nuclear structure and chromatin organization 32, as well as the dynamics of the cell cortex at the cytokinetic furrow 33, 34.
While SIM holds much promise for future studies of cells and cellular architecture in 3D below the diffraction barrier, several caveats apply. First, since multiple exposures with varying phase need to be captured for each Z-plane (15 for 3D SIM), SIM has intrinsic speed limitations. While increases in camera sensitivity and speed have enabled live cells to be studied with high spatial and temporal resolution using 2D SIM, movements of structures by more than one resolution length in the time it takes to acquire the requisite phased images result in image reconstruction artifacts 35. While 2D SIM can be performed at relatively high speeds (up to 11Hz), multicolor 3D-SIM is more practical at speeds of one z-plane every 2-6 seconds 6, 32. For 3D applications this is an important limitation to consider when one is studying dynamic processes. Second, while SIM effectively removes out-of-focus light computationally, it still leaves behind the associated shot noise 28. Thus, in practice, in very thick samples with a large amount of out-of-focus light, 3D SIM may produce lower contrast images compared to confocal approaches. Finally, current implementations of 3D SIM rely on interferometric pattern generation, and so very thick or highly scattering samples will likely degrade the pattern enough that reconstructions will suffer. Future use of non-linear optics with 3D-SIM may extend its useful working distance and reduce photobleaching through the volume of the specimen. In any case, because of its relatively inexpensive implementation on existing microscope platforms and greatly improved resolution, 3D SIM will likely contribute to important advances in understanding organelle structure and function within cells.
Another group of approaches that overcomes the limitations of diffraction are the ‘pointillist’ super-resolution localization techniques including (fluorescence) photoactivated localization microscopy, (f)PALM 36, 37 (referred to henceforth as PALM), and stochastic optical reconstruction microscopy, STORM 38, which in some cases offer localization precision of less than 20 nm. These techniques rely on iterative cycles of activation, excitation, and photobleaching coupled with image acquisition, and subsequent localization with subdiffraction accuracy of small numbers of labeled molecules that are separated in space by greater than the Rayleigh limit within a densely labeled specimen (Figure 3). To achieve spatially separated excitation/emission events, PALM uses genetically encoded photoswitchable fluorescent proteins to reversibly switch between emission wavelengths or between a dark and fluorescent state 39, while STORM utilizes reducing buffers and small molecule fluorophores to similar effect 40. Localization in both cases is achieved by fitting the image of individual molecules to a model PSF 41, computing its centroid 42, or by performing a cross correlation to an experimental PSF 25. As localizations of many isolated fluorophores are collected over hundreds to many thousands of frames to build up a localization image of fluorophores at high density, PALM images are typically acquired in a few tens of seconds to tens of minutes.
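Why localization can beat the diffraction limit is worth a brief numerical sketch. In this toy simulation (all numbers are illustrative: a diffraction-limited spot of sigma = 1.3 pixels and hypothetical photon counts), each detected photon lands at a position drawn from the PSF, and the centroid of the photons estimates the molecule's position with a precision that improves roughly as sigma/√N:

```python
import numpy as np

rng = np.random.default_rng(0)

def localize(n_photons, sigma=1.3, trials=2000):
    """Standard deviation (in pixels) of centroid position estimates for a
    1D Gaussian spot, repeated over many simulated molecules."""
    # Each photon's position is drawn from the PSF; the centroid of the
    # photons is the simplest estimator of the true molecule position.
    photons = rng.normal(0.0, sigma, size=(trials, n_photons))
    return photons.mean(axis=1).std()

for n in (100, 400, 1600):
    # Expect roughly sigma / sqrt(n): 4x the photons -> 2x the precision.
    print(n, "photons ->", round(localize(n), 3), "px")
```

Real analyses must additionally account for pixelation and background noise, which degrade this ideal scaling, but the sketch captures why bright, photostable probes are so central to localization microscopy.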
PALM and STORM have been extended to the third dimension, either by optically altering the shape of the PSF as a function of axial depth 40, 43 or by simultaneous collection of two planes to collect 3D data which is fit to a 3D PSF 37, thereby enhancing subdiffractive localization capability along the Z-axis. In STORM, use of a cylindrical lens in the optical path creates a controlled astigmatism to distort the PSF as a function of Z position (Figure 4A) 40, 43. By comparing the PSF of a fluorophore to a calibration, a localization precision of ~50-100 nm in the Z direction can be achieved, over an axial range of several microns. This approach has enabled the first super-resolution images of the clathrin-coated pit structure 40, 44, 45. However, imaging thicker 3D samples is difficult, as conventional wide-field illumination activates and excites many out-of-focus probe molecules, increasing background and impeding isolation of single molecules in the imaging plane. Out-of-focus activation and excitation has the additional disadvantage that it potentially wastes localizations, decreasing the effective label density and reducing image resolution. Use of two-photon activation 46 in “3D-PALM” can be used to avoid this problem by limiting photoactivation to the focal plane, and has been demonstrated up to ~8 μm deep in the sample while retaining high localization densities 25. Future technical improvements may better confine the illumination, extending the per-molecule photon budget and permitting 3D super-resolution at greater depths.
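The astigmatism-based Z readout can be sketched with a toy defocus model (all parameters here are hypothetical, chosen only to illustrate the principle, not taken from a real calibration). The cylindrical lens shifts the focal planes for X and Y in opposite directions, so the PSF is elongated along X on one side of focus and along Y on the other; matching measured widths against a calibration curve recovers Z:

```python
import numpy as np

def widths(z, w0=1.0, d=0.4, zr=0.5):
    """Toy astigmatic PSF widths (arbitrary units) vs. axial position z (um):
    the x and y focal planes are displaced to +d and -d respectively."""
    wx = w0 * np.sqrt(1 + ((z - d) / zr) ** 2)
    wy = w0 * np.sqrt(1 + ((z + d) / zr) ** 2)
    return wx, wy

# Calibration curve sampled over a 2-um axial range.
z_cal = np.linspace(-1, 1, 2001)
wx_cal, wy_cal = widths(z_cal)

def estimate_z(wx, wy):
    """Return the calibration z whose (wx, wy) best match the measurement."""
    err = (wx_cal - wx) ** 2 + (wy_cal - wy) ** 2
    return z_cal[np.argmin(err)]

# Simulated molecule at z = 0.25 um: its width pair maps back to ~0.25.
wx_m, wy_m = widths(0.25)
print(round(estimate_z(wx_m, wy_m), 2))
```

In practice the calibration is measured from beads stepped through focus, and measured widths carry photon noise, which is what limits the ~50-100 nm axial precision quoted above.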
For 3D imaging of samples up to ~300 nm in thickness, interferometric photoactivated localization microscopy (iPALM) offers unmatched localization precision. iPALM uses localization of single molecules for super-resolution in the lateral (X-Y) dimension, and simultaneous multiphase interferometry for super-resolution in the axial (Z) dimension. Like its 2D counterparts, iPALM depends on photoswitching of small populations of fluorophores to determine X-Y-coordinates (Figure 3). However, in iPALM, the enhanced resolution along the Z axis is achieved by using dual objectives in an opposed, 4Pi configuration (Figure 4), thus allowing each emitted photon to propagate through both top and bottom objectives and be recombined in a beamsplitter. The optical path-length difference between the top and bottom emission beams, and hence the phase of the recombined signal, is directly proportional to the Z coordinate (Figure 4B). Due to the greater collection efficiency of the 4Pi geometry in iPALM, X-Y localization precision is typically higher than in conventional PALM. Additionally, the high sensitivity afforded by interferometry can result in an axial localization precision better than 10 nm 47. Indeed, in iPALM, axial resolution is approximately two times better than the lateral resolution, unlike single-objective-based techniques where axial resolution is typically 2-3x worse than lateral. Due to the high resolution, iPALM is thus advantageous for investigating molecular organization at the ultrastructural level. For example, iPALM was deployed to study focal adhesions 48, integrin-based adhesive organelles that play important roles in cell migration, matrix-remodeling, and mechanotransduction. Due to their density and complex composition, molecular organization within focal adhesions has long been difficult to access by techniques such as electron microscopy (EM).
iPALM revealed for the first time that focal adhesion proteins are stratified along the axial dimension 48.
While iPALM provides high spatial resolution approaching EM, a number of limitations remain. Since direct optical access is required from both sides, the sample must be relatively thin (less than 15-20 μm). Likewise, the axial range of imaging depth is limited to ~250-600 nm 47. Also, iPALM instrumentation is technically complex and demands very high mechanical and thermal stability. In contrast, the instrumentation for conventional PALM and STORM (Figure 4A) is relatively simple, free localization software 42 is available, and several major manufacturers now offer turnkey commercial platforms with some 3D capability (based on biplane imaging and astigmatism) that are compatible with suitable dyes, making these technologies even more accessible. While PALM and STORM offer superb resolution for 3D imaging, the need to collect hundreds to tens of thousands of raw frames for a single reconstructed image places a severe limitation on the imaging speed. Progress in dye brightness, contrast and attachment chemistry 49 suggests that the field will continue to evolve, with the promise of faster live cell applications, in multicolor and 3D 50. In spite of their limitations in speed, PALM, STORM and iPALM offer near-ultrastructural 3D resolution together with the molecular specificity of fluorescent labeling, and will continue to answer important questions about molecular-scale biological architecture that have eluded electron imaging-based approaches.
Another major set of challenges for 4D imaging has been the inherent speed limitation of point-scanning techniques for optical sectioning, along with the photobleaching that accompanies 4D imaging, as discussed above. Two basic approaches have been used to address these challenges: multiplexing of confocal pinholes, as in spinning disk confocal microscopy (SDCM), and orthogonal plane illumination, as in light sheet fluorescence microscopy (LSFM).
To overcome the speed limitations of point-scanning microscopy, several methods have been developed that retain the confocal principle of pinhole-based out-of-focus light rejection while generating wide-field images. These include slit scanning 51 and pinhole multiplexing methods, including swept-field and spinning Nipkow disk confocal techniques. Of these, the most robust method that has gained wide acceptance among biologists is SDCM. This method uses an array of excitation and emission pinhole apertures on a rapidly spinning disk, such that the pinhole array sweeps the entire field of view over 1000 times per second (reviewed in 52). The high scan speed not only improves image acquisition rate (up to 360 frames per second 53), it also has the effect of lowering the peak excitation light density down to a few μW/μm2, thereby increasing fluorescence efficiency and decreasing photobleaching and photodamage effects compared with point scanning 13. Perhaps most important, since the entire confocal field of view can be captured by a high-QE, low-noise camera instead of a PMT, SDCM systems offer a more than 50-fold increase in light capture efficiency, resulting in a several-fold increase in SNR relative to CLSM or 2PM 12, 13. Because of these advantages, SDCM has been well suited to the study of cells in 3D extracellular matrices 54, engineered tissue in matrices 55, as well as intravital imaging 56, 57. The ability to rapidly collect entire frames at once also enables the quantitative study of very dynamic processes, such as microtubule dynamics in cells in 3D matrices 58.
While SDCM has many powerful features for 4D imaging, there are a few caveats and limitations. First, like other linear excitation imaging methods, SDCM suffers from problems with illumination penetration of the sample and photobleaching of the entire axial column of the specimen at each Z-plane for each time-point. Second, imaging thick specimens suffers from an effect known as ‘pinhole crosstalk’. As light from a point in a focal plane expands over distance, portions of this light can enter into adjacent pinholes, and scattering of light in the sample increases this effect. Third, since pinhole size is fixed in current implementations of SDCM, this limits its use to high-magnification, high-resolution objective lenses. Finally, SDCM suffers from the same axial resolution problems of other confocal methods (Table 1). However, because of its high speed, low noise and reduced photobleaching compared to CLSM, SDCM will continue to be extremely important for imaging fast cell and organelle dynamics in 3D.
In LSFM 59, known in different implementations as ‘selective plane illumination’, SPIM 60, or ‘digital scanned laser light sheet fluorescence microscopy’, DSLM 61, the sample is illuminated with a thin sheet of light from the side of the specimen to illuminate a single XY plane, and fluorescence detection occurs in the direction perpendicular to excitation (Figure 2). This geometry leads to major advantages over confocal and two-photon approaches. First, acquisition speed is greatly increased relative to point-scanning methods, as the entire imaging plane is detected simultaneously. A volume is recorded by scanning the light sheet in one dimension, instead of the 3D scan required for point-scanning. Second, excitation is confined to the focal plane, providing optical sectioning without pinholes, thus boosting detection efficiency while reducing photobleaching and photodamage to rates far below other techniques. Finally, as acquisition is parallelized, each pixel is exposed for the full integration time of a high QE, low noise widefield camera, resulting in ~10-1000-fold higher SNR than point-scanning techniques operating at similar frame rates.
The high SNR and very fast acquisition afforded by LSFM have enabled ground-breaking observations of vertebrate embryonic development 60, 62, 63 and photomanipulation of cardiac pace-making function in vivo 64. Improving the axial resolution of LSFM by using a Bessel beam and structured illumination (see below) has provided high-speed, isotropically resolved 4D imaging of cellular dynamics 24. Three-dimensional cell culture can likewise be imaged over long times using LSFM 65.
LSFM does possess drawbacks compared to other 4D imaging techniques. First, the axial range of LSFM is reduced compared to 2PM, as the effect of scattering degrades the excitation sheet and is more pronounced on the emission side due to wide-field detection. Using a femtosecond laser for two-photon light-sheet excitation mitigates some of this disadvantage, providing an interesting hybrid technique that combines advantages of 2PM and LSFM 66. Second, the spatial resolution of LSFM is generally lower than other techniques, which is why most applications have targeted larger systems such as whole embryos or tissue slices 60-62, 67, instead of single cells. Lateral resolution is determined by the NA of the detection lens, while axial resolution is determined by both the detection objective and the light sheet thickness. The perpendicular excitation/detection geometry required by LSFM forces the use of long working distance, somewhat low NA (typically 1.0 or less) objective lenses. In almost all LSFM implementations, the light sheet is created from a Gaussian beam. Gaussian beams undergo widening at increasing distances from the beam waist, coupling the quality of optical sectioning to the position within the field of view and degrading the effective axial resolution at the sample edges. Exciting a fluorescent sample with a scanned Bessel beam 68 in an LSFM geometry mitigates this problem and improves axial resolution 24. Such a resolution improvement comes at a cost, however, as “side lobes” in the excitation profile cause significant out-of-plane illumination. Removing the contaminating effects of the side lobes requires either structured illumination or two-photon excitation. Finally, implementing LSFM is nontrivial, so in spite of its exceptional promise, until LSFM becomes commercially available, it is likely to remain the province of relatively few labs.
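The trade-off between sheet thickness and field of view follows directly from Gaussian beam optics. In this sketch (illustrative numbers, not a specific instrument: 488 nm excitation, waists of 0.5-2 μm), the usable field of view is taken as one Rayleigh range on each side of the waist, i.e. the distance over which the sheet stays below √2 times its minimum thickness:

```python
import numpy as np

def usable_fov_um(w0_um, wavelength_um=0.488):
    """Usable field of view for a Gaussian light sheet: twice the Rayleigh
    range z_R = pi * w0^2 / lambda, over which the beam widens by < sqrt(2)."""
    z_r = np.pi * w0_um ** 2 / wavelength_um
    return 2 * z_r

# Thinner sheets (better axial resolution) cover much shorter fields of view,
# since z_R scales with the square of the waist.
for w0 in (0.5, 1.0, 2.0):
    print(f"waist {w0} um -> usable FOV ~{usable_fov_um(w0):.0f} um")
```

This quadratic scaling is why a sub-micron Gaussian sheet covers only a few microns, and why Bessel-beam excitation, which decouples beam length from thickness at the cost of side lobes, is attractive for high-axial-resolution light-sheet imaging as described above.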
However, once commercialized, LSFM promises to be extremely important for imaging cellular dynamics in larger, thick specimens such as whole animals during early development.
The array of imaging modalities available for study of animals, tissues, cells and cellular components in 3D has increased dramatically over the last decade. We have only covered a few promising modalities for 3D cell biology here, but there are others that can also give super-resolution images of biological processes in 3D 69, such as stimulated emission depletion (STED) microscopy, which are well covered elsewhere 70, 71. As more of these modalities become commercially available, this will expand the imaging toolbox for a much wider audience of biologists (Table 1). As the audience broadens, more robust reagents and components will inevitably follow. However, each imaging modality has relative strengths and weaknesses that need to be taken into account for a given biological problem; as with any technology, there is no single tool for all jobs. As in the past, the best results will be obtained when the imaging modality is ideally matched to the biological question at hand.