Since the work of Golgi and Cajal, light microscopy has remained a key tool for neuroscientists to observe cellular properties. Ongoing advances have enabled new experimental capabilities using light to inspect the nervous system across multiple spatial scales, including ultrastructural scales finer than the optical diffraction limit. Other progress permits functional imaging at faster speeds, at greater depths in brain tissue, and over larger tissue volumes than previously possible. Portable, miniaturized fluorescence microscopes now allow brain imaging in freely behaving mice. Complementary progress on animal preparations has enabled imaging in head-restrained behaving animals, as well as time-lapse microscopy studies in the brains of live subjects. Mouse genetic approaches permit mosaic and inducible fluorescence-labeling strategies, whereas intrinsic contrast mechanisms allow in vivo imaging of animals and humans without use of exogenous markers. This review surveys such advances and highlights emerging capabilities of particular interest to neuroscientists.
The light microscope has long been one of neuroscientists’ cardinal tools. When used together with Golgi’s technique for staining a sparse population of cells, the light microscope provided the data that drove the famous debate between Golgi and Cajal about whether the nervous system was composed of cells or a syncytium (Cajal 1906, Golgi 1906). Although neuroscientists historically used light microscopy mainly to inspect histological specimens for studies of cellular morphology and the brain’s cyto-architecture, optical microscopy has progressed to where it now routinely provides information about cellular and circuit dynamics, on timescales ranging from milliseconds to months. In addition to this considerable expansion in usage, the basic character of the data provided by the light microscope has also evolved.
Early studies in light microscopy treated images as data represented in pictorial form. These images were either observed directly by eye or captured by photography, but in both cases the data were inspected visually. Over the past few decades, digital image acquisition and laser-scanning microscopy have transformed the data microscopes typically provide into a numerical format, with a specified number of bits per image pixel. This transition has in turn facilitated computational approaches to image data analysis. Today, the ready availability of fast computers is fueling another generation of microscopy techniques that reaches an even higher level of abstraction. Several methods we discuss involve the acquisition of raw images that often lack a straightforward relationship to the structures present in the sample, at least in a way readily discernible by eye (Figures 3, 4, 10, 11). Instead, representations of the sample are reconstructed computationally. The acquired data with these methods are sufficiently divorced from the final reconstructions that the latter images should perhaps be understood as computed hypotheses about the underlying biological reality. These hypotheses may come with statistical likelihoods weighing the evidence and expressing the degree of confidence in the image representation. This reconceptualization of images as hypotheses can have practical consequences: it forces scientists to focus on the ideas being tested and the image interpretations needing scrutiny, and to design experiments and analyses accordingly, rather than on optimizing images’ visual qualities. Optimal configurations for hypothesis testing might even produce results that appear surprisingly poor to the eye. This gradual shift by microscopists toward logical abstraction mimics what has already occurred in other fields, such as astronomy, radiology, and particle physics, in which imaging has long been pivotal to scientific discovery (Galison 1997).
Owing to advances on multiple technological fronts, the present generation of light microscopes can provide data about spatial scales and experimental situations far beyond what was feasible even a decade ago. The past few years have produced remarkable progress in microscopy, including several new optical methods available to help neuroscientists probe ultrastructural issues, as well as other methods for visualizing cellular dynamics in behaving animals. Advances in automation and image analysis are propelling capabilities for rapid screening and large-scale anatomical reconstruction. Gains in optics and computational techniques, as well as an expanding set of contrast markers and functional indicators, underlie much of the recent advancement. However, improvements in complementary areas including tissue processing, animal preparations, and genetic strategies for fluorescence labeling have been equally important.
In this review, we survey progress in light microscopy, mainly over the past three years, with an eye to those advances poised to impact the practice of neuroscience. Many of the areas we discuss are experiencing rapid growth or already have a substantial research literature. Thus, our goal in writing this review was not to convey all relevant technical details. Rather, we sought to make neuroscientists aware of the growing capabilities they have at their disposal, to convey basic strengths and limitations of each approach, and to help readers decide which techniques might be most appropriate for their own research. In choosing topics, we deliberately omitted several exciting areas, except in passing, owing to the prior existence of excellent reviews covering these areas. These omissions include photostimulation, optogenetics, and fluorescent optical indicators. Recent progress on these fronts has helped energize the field and complements the topics we present (Giepmans et al. 2006, Gorostiza & Isacoff 2008, Miyawaki 2005, Shaner et al. 2005, Zhang et al. 2007).
An ongoing challenge in light microscopy concerns how to image deep within dense tissues. Over the range of near ultraviolet (UV), visible, and near infrared (NIR) wavelengths commonly used for biological light microscopy, it is light scattering and not absorption that generally dominates the attenuation of ballistic light propagation. The scattering length expresses the distance over which scattering will reduce light intensities by a factor of 1/e, and typical scattering lengths in the brain for visible and NIR light are in the ~25–100 μm (Yaroslavsky et al. 2002) and ~100–200 μm (Kleinfeld et al. 1998, Oheim et al. 2001) ranges, respectively. By comparison, the corresponding absorption lengths are in the millimeter range. Thus, imaging ~500 μm deep into brain tissue poses the challenge that both illumination and returning signals suffer from multiple e-fold attenuations by scattering of ballistic light propagation. For comparison, the adult mouse neocortex can be >1000 μm thick. Given the desire to look as deeply into the brain as possible, it is crucial to develop imaging modalities that are robust to light scattering.
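The depth challenge can be made concrete with the exponential (Beer-Lambert-type) attenuation implied by the scattering lengths quoted above. A minimal Python sketch, using illustrative values drawn from the quoted ranges (150 μm for NIR illumination, 50 μm for visible emissions; the specific numbers are assumptions for illustration):

```python
import math

def ballistic_fraction(depth_um, scattering_length_um):
    """Fraction of photons that remain unscattered (ballistic) after
    traversing `depth_um` of tissue with the given scattering length."""
    return math.exp(-depth_um / scattering_length_um)

# Illustrative values from the ranges quoted above:
illum = ballistic_fraction(500, 150)   # NIR illumination: e^-3.3, a few percent
emit = ballistic_fraction(500, 50)     # visible emission: e^-10, ~0.005%
round_trip = illum * emit              # ballistic round trip is vanishingly small
```

The round-trip ballistic fraction at 500 μm is tiny, which is why deep-imaging modalities must either tolerate scattered photons or physically bypass the overlying tissue.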
An established approach for imaging deep into tissue involves two-photon fluorescence excitation (Denk et al. 1990) or other nonlinear optical processes that convert two or more incoming photons into at least one outgoing photon of distinct color. With such processes, the rate of signal generation rises nonlinearly as a function of illumination intensity. To achieve the high instantaneous intensities needed at the focus while keeping the time-averaged intensity below the limit of what the specimen can tolerate, it is common to use ultrashort-pulsed lasers emitting brief (~80–250 fs) but intense pulses. The quadratic dependence of two-photon excitation on illumination intensity leads to a spatial confinement of the excitation volume because of the steep falloff in excitation rate outside the focal plane. This confinement in excitation provides inherent optical sectioning, which can be used to mitigate the effects of emission scattering. Because fluorescence emissions originate from a single, confined focal volume, the emission photons convey useful information about the fluorescence intensity at that focal volume regardless of whether they scatter en route to the detector. Thus, the emission photons need only be collected in as great a number as possible, rather than imaged in a manner that preserves information about their apparent point of origination.
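The advantage of ultrashort pulses follows directly from the quadratic intensity dependence: at a fixed average intensity, time-averaged two-photon excitation scales inversely with the illumination duty cycle. A back-of-the-envelope sketch under a square-pulse approximation (the specific numbers are typical Ti:sapphire values, assumed for illustration):

```python
def two_photon_enhancement(rep_rate_hz, pulse_width_s):
    """Approximate gain in time-averaged two-photon excitation for pulsed
    versus continuous-wave illumination at the same average intensity.
    For square pulses, <I^2> scales as 1 / (duty cycle) = 1 / (f * tau)."""
    return 1.0 / (rep_rate_hz * pulse_width_s)

# Typical ultrashort-pulsed laser parameters: 80 MHz repetition rate, 100 fs pulses
gain = two_photon_enhancement(80e6, 100e-15)   # ~1.25e5-fold enhancement
```

This roughly 10^5-fold gain is what makes two-photon excitation practical at average powers the specimen can tolerate.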
The ability to use scattered emissions as useful signals significantly increases imaging depths into tissue. The NIR wavelengths commonly used for two-photon excitation also reduce scattering of the illumination. Together, these two effects have led to penetration depths of 500–750 μm into the brain, depending on the tissue (Helmchen & Denk 2005). Two-photon imaging also reduces photodamage by using NIR excitation photons of energies lower than those of UV or visible wavelengths and by spatially confining fluorescence excitation, which is associated with photo-induced toxicity. Note that the approach to optical sectioning and signal collection in two-photon microscopy differs substantially from that in confocal microscopy, which uses a pinhole to restrict signals to those photons originating from the laser focus. Confocal microscopy is far less robust to scattering, permitting only ~25–50-μm imaging depths in optically dense tissue, because the pinhole blocks photons that originated from the laser focus but that have scattered (Sabharwal et al. 1999). Confocal imaging is further hampered in dense tissue by its typical use of visible excitation, which scatters more than the NIR illumination used in two-photon microscopy. The reliance on one-photon fluorescence excitation also increases phototoxicity outside the focal plane owing to the lack of excitation confinement. Overall, the advantages of two-photon microscopy have made it the leading microscopy technique for imaging deep within the intact brain or thick brain slices. Still, methods that could extend imaging depths into the brain, even modestly, would be welcomed, in part because such methods would provide opportunities to examine cells in the deeper neocortical layers in live animals.
Five main approaches are currently being explored to extend the penetration of two-photon microscopy. Three of these approaches aim to improve the generation of fluorescence signals, the fourth improves signal collection, and the last uses a thin probe composed of microlenses, termed a “microendoscope,” to guide light to and from deep tissue. The first of these five methods takes inspiration from observational astronomy and involves the use of adaptive optics to correct for deformations in the excitation wavefront that degrade the quality of the focal spot.
Adaptive optical techniques seek to correct both spherical aberrations, which grow progressively worse the deeper one focuses light with a microscope objective into even a uniform medium, as well as lensing effects, which can arise within tissue owing to refractive index inhomogeneities (Albert et al. 2000, Booth et al. 2002, Neil et al. 2000, Rueckel et al. 2006, Sherman et al. 2002, Zhu et al. 1999). A main challenge concerns how one deduces the wavefront deformations occurring within an individual brain. Strategies involve optical assessment of the wavefront (Booth et al. 2002, Neil et al. 2000, Rueckel et al. 2006, Zhu et al. 1999) as well as computational search techniques that seek to optimize signal generation (Albert et al. 2000, Sherman et al. 2002, Wright et al. 2007). Neither approach has been sufficiently successful to date to merit widespread adoption for brain imaging. Nonetheless, this is an area to watch for potentially exciting future developments.
Another approach to improving signal generation in two-photon microscopy employs ultrashort-pulsed regenerative laser amplifiers, which produce pulses of higher energies but usually at lower repetition rates as compared with the ultrashort-pulsed lasers commonly used for two-photon imaging. The amplified pulses retain efficacy to excite fluorescence at greater depths into tissue, which has permitted demonstrations of imaging up to ~850–1000 μm deep into the intact mouse brain (Beaurepaire et al. 2001, Theer et al. 2003). Drawbacks include the considerable cost and size of regenerative amplifiers and their near lack of wavelength tunability. The reduction in pulse repetition rates to ~1–1000 kHz also limits the speed of image-acquisition by raster scanning because each image pixel must receive illumination from at least one laser pulse. More economical, compact amplifier sources with higher repetition rates would facilitate progress, and so the ongoing improvements to ultrashort-pulsed fiber laser amplifiers are of considerable interest.
Although most researchers performing two-photon imaging have used Ti:sapphire lasers, which can have wavelength tuning ranges as broad as ~690–1080 nm, some investigators have pursued the use of alternative, fixed-wavelength sources such as Nd:YLF (1.047 μm) (Squirrell et al. 1999), Yb:KYW (~1.033 μm) (Vučinić & Sejnowski 2007), or Cr:forsterite (~1.23–1.27 μm) (Chu et al. 2001, Liu et al. 2001) ultrashort-pulsed lasers. The use of longer wavelengths for excitation can improve depth penetration by diminishing scattering of both the illumination and the often-attendant long-wavelength fluorescence emissions. Wavelengths of lasers with Yb-ion-doped gain media overlap those of Ti:sapphire lasers, but the overlap occurs toward the end of Ti:sapphire’s usable range, where the powers produced by Yb-ion-based lasers and amplifiers can be considerably greater. Cr:forsterite lasers operate in a transparent spectral window in which light absorption by water remains modest and the reduction in light scattering within tissue is even greater owing to the longer wavelength (Chu et al. 2001, Liu et al. 2001, Squirrell et al. 1999, Vučinić & Sejnowski 2007). Two main factors that limited the utility of these alternative sources for two-photon imaging in the past were the relative dearth of NIR and red fluorescent markers, especially functional indicators, and a lack of commercial availability. As molecular probes become increasingly available for use with longer wavelengths, these and other alternative ultrashort-pulsed lasers may find more applications.
A complementary approach to extending the penetration of two-photon imaging into tissue involves increasing the collection of fluorescence emissions. Because emissions must only be routed to a photodetector, not projected in an image-preserving manner, it appears possible to raise the numerical aperture (NA) of the collection pathway above that of the microscope objective used for laser beam delivery. One proposal is to equip the objective with an auxiliary nonimaging parabolic or ellipsoidal mirror surrounding the lens body to increase the collection aperture (Vučinić et al. 2006); simulations indicated this might increase collection efficiencies by up to a factor of four as benchmarked against a normal 40 × 0.8 NA water immersion microscope objective. This or similar approaches to collecting more photons seem viable, but such augmented objective lenses are not commercially available.
Finally, two-photon imaging deep within tissue can be accomplished by microendoscopy (Jung et al. 2004, Jung & Schnitzer 2003, Levene et al. 2004). The microendoscope is a thin but rigid optical probe, typically 350–1000 μm in diameter (Figure 1a), which inserts into tissue and conducts light to and from deep tissue locations (Jung & Schnitzer 2003, Levene et al. 2004). Thus, microendoscopy increases the reach of laser-scanning microscopy into tissue up to the centimeter range (Llewellyn et al. 2008). The microendoscope probe typically comprises 1–3 gradient refractive index (GRIN) microlenses, which use internal variations in the refractive index, rather than the curved refractive surfaces employed by conventional lenses, to guide light (Flusberg et al. 2005a, Göbel et al. 2004, Jung et al. 2004, Jung & Schnitzer 2003, Levene et al. 2004, Monfared et al. 2006, Piyawattanametha et al. 2006) (Figure 1a).
In most designs, the microendoscope acts as an optical relay. If the laser focal spot is scanned just above the top face of the microendoscope probe that lies outside tissue, the probe projects and demagnifies the scanning pattern to a focal plane inside tissue (Figure 1b). Because the microendoscope probe is composed of lenses and is not a pixelated fiber bundle, adjustment of the axial position of the laser focus just above the probe leads to corresponding focal adjustments within tissue. Thus, with the probe held at a fixed location in tissue, two-photon microendoscopy permits the acquisition of 3D image stacks (Piyawattanametha et al. 2006), which can extend up to ~500–650 μm in depth measured from the tip of the microendoscope probe (Figure 1c).
There is considerable flexibility in the choice of microendoscope probes’ specifications: Physical lengths of ~0.5–3 cm, optical working distances of ~150–800 μm, NAs of ~0.4–0.75, and fields of view of ~100–1000 μm are the approximate ranges of typical values. There are, however, important tradeoffs within these ranges between the different parameters. For example, longer working distances generally imply reduced NA values, though to a lesser degree for the larger diameter probes. The moderate costs of microendoscope probes relative to those of microscope objectives facilitate the acquisition of a large set of complementary designs for use in different situations. Resolution values achieved to date by microendoscopy (~0.9–1.2 μm lateral, ~10–12 μm axial) (Flusberg et al. 2005b, Jung et al. 2004, Levene et al. 2004) have not been limited by diffraction, but rather by optical aberrations within the endoscope probes. We expect that further optical engineering will yield next-generation microendoscopes capable of diffraction-limited performance. The small size of GRIN microlenses also permits their incorporation into miniaturized, fiber-optic two-photon microscopes (Engelbrecht et al. 2008, Flusberg et al. 2005b, Göbel et al. 2004, Hoy et al. 2008, Jung et al. 2008, Le Harzic et al. 2008) (see section below on Fiber-Optic Microscopy). However, microendoscopes have neither the tapered shapes nor the small diameters of electrode tips, so neuroscientists need to plan surgical strategies and routes of insertion carefully when placing microendoscopes into the brain to minimize disruption to tissue. Locating the tip of the microendoscope just outside, and not within, the brain structure of interest can help lessen effects in the area being imaged. Nonetheless, it is best to perform control studies to check for any notable effects on tissue in each new experimental configuration.
In addition to two-photon fluorescence, the imaging modalities demonstrated to be compatible with microendoscopy include epifluorescence (Flusberg et al. 2008; Jung et al.2004; Murayama et al. 2007, 2009), confocal (Knittel et al. 2001), and second-harmonic generation imaging (Fu & Gu 2007, Llewellyn et al. 2008). Microendoscopy applications in neuroscience have included in vivo imaging of cochlear microanatomy and circulation (Monfared et al. 2006), CA1 hippocampal neurons (Figure 15d) (Deisseroth et al. 2006, Jung et al. 2004), layer V pyramidal neurons (Levene et al. 2004; Murayama et al. 2007, 2009), and the contractile dynamics of striated muscle sarcomeres in both mice and humans (Figure 13a–f) (Llewellyn et al. 2008). The use of microendoscopy for long-term imaging of cells deep in the mammalian brain is also emerging (Deisseroth et al. 2006), which should facilitate longitudinal studies of how cellular properties might change over the course of learning or aging, brain disease, or in response to new therapeutics (see section below on Long-Term Imaging).
Harmonizing with the goal of imaging deep into tissue are the aims of sampling large tissue volumes and doing so at fast data-acquisition rates. The latter two aims are crucial for neuroscientists wishing to monitor the activities of large populations of individual cells with sufficient time resolution to follow fast biological processes such as cellular Ca2+ or voltage dynamics. Two-photon microscopy usually employs one laser beam scanned in a raster pattern over the sample. This configuration leads to basic trade-offs among the frame-acquisition rate, field of view, and signal-to-noise ratio. Parallel streams of complementary research on molecular probes and optical hardware seek to improve the speed and dynamic range of fluorescent functional indicators, as well as the speed and three-dimensional range of laser-scanning mechanisms. Two-photon microscopy conventionally employs a pair of nonresonant galvanometric scanning mirrors, which are individually limited to line scans of ~1–2 kHz. With typical images of 128 × 128 pixels or more, this restricts frame-acquisition rates to ~16 Hz or less. However, alternative scanning strategies for attaining higher speeds are emerging. Work in recent years has yielded particular progress in methods for 3D laser-scanning.
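The ~16 Hz figure follows from simple arithmetic, sketched here (ignoring mirror flyback and turnaround overhead):

```python
def galvo_frame_rate(line_rate_hz, lines_per_frame):
    """Upper bound on raster frame rate with galvanometric mirrors:
    one line scan per image row, neglecting flyback overhead."""
    return line_rate_hz / lines_per_frame

rate = galvo_frame_rate(2000, 128)   # 2 kHz line rate, 128 rows -> 15.625 Hz
```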
One approach to faster scanning splits the excitation beam into a set of multiple “beamlets”, which are focused to distinct spots and scanned together across the sample to permit parallel data acquisition from multiple locations (Figure 2a). Such techniques come in different varieties but are often collectively termed multifocal multiphoton microscopy (MMM) (Bewersdorf et al. 1998, Buist et al. 1998). These methods require means for creating the beamlets and recording multiple channels of spatially distinct signals. Typically, either a microlens array (Bahlmann et al. 2007, Bewersdorf et al. 1998, Buist et al. 1998, Kim et al. 2007) or a series of beam splitters (Fittinghoff et al. 2000, Fricke & Nielsen 2005, Lévêque-Fort et al. 2004, Nielsen et al. 2001) divides the laser beam, and a fast camera (Bahlmann et al. 2007, Bewersdorf et al. 1998, Buist et al. 1998, Nielsen et al. 2001) detects the emissions. High-power laser sources can allow up to ~64 beamlets of sufficient intensity to excite two-photon fluorescence (Kim et al. 2007, Niesner et al. 2007). This permits increases in frame rates (e.g., ~640 Hz over a 64 × 64 μm2 total imaging area with a 6 × 6 beam array) (Bahlmann et al. 2007) because each beamlet must be scanned across only a portion of the image. However, having a dense set of focal points can diminish the efficacy of optical sectioning owing to interference between mutually coherent beamlets. [For related reasons, approaches to fast two-photon imaging that involve scanning a focused line of laser excitation also significantly compromise optical sectioning (Brakenhoff et al. 1996).] Time-multiplexing the pulses at the picosecond timescale can alleviate this problem by having the pulses reach the focal plane at different times (Andresen et al. 2001, Egner & Hell 2000, Fittinghoff & Squier 2000).
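The speed gain from parallelization can be sketched as follows: in the ideal case, each beamlet of an n × n array raster-scans only a 1/n × 1/n tile of the field. Real gains fall short of this once cross-talk and per-beamlet power limits intervene.

```python
def mmm_ideal_speedup(n_per_axis):
    """Ideal frame-rate gain for an n x n beamlet array: each beamlet
    covers n-fold fewer rows, each n-fold shorter, at a fixed pixel
    dwell time, giving up to an n**2-fold faster frame."""
    return n_per_axis ** 2

speedup = mmm_ideal_speedup(6)   # 6 x 6 beam array -> up to 36-fold
```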
The Achilles’ heel of multifocal imaging is its vulnerability to emission scattering, which induces cross talk between the different signal channels. Particularly at greater imaging depths for which scattered emissions are the norm, it becomes increasingly challenging to prevent mutual contamination of the signals originating from adjacent excitation volumes (Kim et al. 2007, Niesner et al. 2007). A detection setup with a multianode photomultiplier tube instead of a camera can be somewhat less sensitive to emission scattering by facilitating computational strategies for reducing cross talk, but cross talk is not abolished and still increases with imaging depth (Kim et al. 2007). The division of laser power into multiple channels also reduces the capability for imaging in deep tissue by limiting the power that can be used in each channel to compensate for the attenuation of the illumination by scattering. As higher power lasers emerge, this latter drawback to multifocal imaging will be ameliorated, but emission cross talk would be exacerbated by a use of higher powers to create denser arrays of focal spots.
A few studies have exploited the higher frame rates of multifocal two-photon microscopy to track cellular dynamics. Two studies examined the Ca2+ dynamics of mouse hippocampal CA1 neurons in brain slices (Crépel et al. 2007, Goldin et al. 2007), and another used 64 laser spots for fast in vivo monitoring of Ca2+ dynamics in neurons of the Calliphora vicina (blowfly) visual system (Kurtz et al. 2006). Multifocal two-photon imaging has not yet been demonstrated in live mammals, nor has fast 3D-scanning multifocal imaging emerged.
However, there are several laser-scanning strategies arising for fast, continuous sampling of 3D tissue volumes. One approach builds on the established means for creating 3D image stacks using mechanical movements of either the specimen or the microscope objective. Instead of using successive step-wise axial adjustments, however, the objective lens is set in continuous motion using a piezoelectric actuator, and 3D scan trajectories are created through the combined use of a pair of galvanometric mirrors for lateral scanning (Figure 2b) (Göbel et al. 2007). This approach has allowed 3D data to be acquired at volume acquisition rates of ~10 Hz and has enabled continuous monitoring of neuronal and astrocytic Ca2+ dynamics in up to several hundreds of cells across a ~250 × 250 × 250-μm3 volume in the intact rat neocortex (Göbel & Helmchen 2007, Göbel et al. 2007) (Figure 2c,d). Typical microscope objectives have masses of ~150–250 g that limit the speed of mechanical scanning. One can compensate computationally for temporal lags between the drive signals and the actual movement of the objective (Göbel et al. 2007). Advantages of objective scanning include the substantial tissue volumes that can be sampled, the relative ease of implementation, and the feasibility of using spiral and even user-defined scanning trajectories to sample a large number of cell bodies. Limitations include the speed restrictions and potential susceptibilities to motion artifacts when imaging in awake animals, since with complex 3D scanning trajectories it may be harder to apply image registration algorithms that correct for motion artifacts (see Imaging in Awake Behaving Animals below). Furthermore, because each cell is only sampled at a few image pixels, signal-to-noise ratios are generally lower than in 2D laser-scanning imaging performed at comparable frame rates. 
Overall, 3D laser-scanning by piezoelectric actuation of the objective is poised to enable unprecedented dynamical studies of hundreds to even thousands of cells in vivo, albeit for now with ~100-ms time resolution.
To achieve faster imaging of functional signals, investigators can use noninertial laser-scanning strategies that permit higher speeds. Acousto-optic deflectors (AODs) steer laser beams by using the diffraction of light by sound waves in a crystal. Changing the frequency of the sound waves alters the propagation angle of the diffracted beam. This provides a one-dimensional scanning mechanism whose speed limit is mainly set by the acoustic delays needed to modify the sound frequencies in the crystal (~10–30 μs for typical apertures). Thus, a pair of orthogonally oriented AODs permits beam steering in the two lateral dimensions (Bullen et al. 1997). Moreover, frequency chirping of the acoustic waves permits fast axial scanning (Reddy et al. 2008, Reddy & Saggau 2005, Vučinić & Sejnowski 2007). The degree of frequency chirp affects the collimation of the laser beam—because the acoustic frequency and thus the diffraction angle varies across the beam’s cross-sectional profile—resulting in converging or diverging beams that can be used to modify the axial focal position in the sample. Two basic strategies for 3D scanning exist, involving either two (Vučinić & Sejnowski 2007) or four (Reddy et al. 2008, Reddy & Saggau 2005) AODs.
The former approach to 3D scanning offers higher optical throughput, a smaller footprint, and reduced group velocity dispersion, but at the cost of some distortions in the shape of the focal spot and the inability to dwell with the laser spot stably at axially displaced locations (Vučinić & Sejnowski 2007). The latter approach involves somewhat greater group velocity dispersion but offers more versatility in the choice of scanning trajectory (Figure 2e) (Reddy et al. 2008, Reddy & Saggau 2005). This approach enables fully random-access scanning patterns across the 3D field of view, for high-speed monitoring of a predefined sequence of cells or sites. For example, four AODs have been used to monitor the 3D dynamics of back-propagating action potentials within neuronal dendritic trees (Figure 2f) over fields of view as large as ~200 × 200 × 50 μm3 (Reddy et al. 2008). Acquisition rates are fast, often in the tens of kilohertz, as determined by the number of sampled points, the repositioning time (~15 μs) per point, and the dwell time (~5 μs) per point (Reddy et al. 2008). Both the two- and four-AOD scanning strategies benefit from corrective methods for compensating the spatial and temporal dispersion that arise within the AODs (Zeng et al. 2006). To date, the use of AODs has been limited to in vitro studies, but we expect experiments that use fast AOD-based laser-scanning in the intact brain will emerge soon.
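The attainable random-access sampling rates follow from the per-point timing; a sketch using the repositioning and dwell times cited from Reddy et al. 2008 (the site counts are illustrative):

```python
def random_access_rate(n_sites, reposition_s=15e-6, dwell_s=5e-6):
    """Rate (Hz) at which each site in a predefined list can be revisited
    with AOD random-access scanning; per-point times as in Reddy et al. 2008."""
    return 1.0 / (n_sites * (reposition_s + dwell_s))

rate_2 = random_access_rate(2)    # 2 sites -> ~25 kHz revisit rate per site
rate_10 = random_access_rate(10)  # 10 sites -> ~5 kHz per site
```

The revisit rate per site falls in proportion to the number of sites sampled, so the full speed benefit accrues when monitoring a modest, carefully chosen set of locations.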
Temporal focusing is a means of providing auxiliary optical sectioning that can aid rapid scanning strategies employing ultrashort-pulsed laser illumination (Durst et al. 2006, 2008; Oron et al. 2005; Papagiakoumou et al. 2008; Tal et al. 2005; Zhu et al. 2005). In this method, a diffraction grating spectrally disperses an incident laser beam, which is then collimated and focused onto the sample by the microscope objective (Figure 2g). Because the different wavelengths of light are spatiotemporally separated until they reach the common focal plane, outside the focal plane the spectral bandwidth is effectively reduced at each location and hence the pulse duration is broadened. The pulse recompression and recombination of the different wavelengths that occur at the focal plane provide the “temporal focusing” and help confine the generation of two-photon excitation. Thus, temporal focusing offers a means of restoring some of the optical sectioning capabilities of two-photon imaging configurations, such as those involving fast scanning with a line of illumination (Figure 2g), in which optical sectioning is compromised as compared with the usual raster-scanning mode (Tal et al. 2005) (see multifocal imaging, above). As compared with raster-scanning a focal spot, the division of laser power across a swept line of illumination does reduce two-photon excitation. Nevertheless, as ultrashort-pulsed laser amplifiers become more widely used for imaging, the combination of temporal focusing and line-illumination scanning may gain traction for high-speed two-photon microscopy.
In addition to achieving greater penetration and faster scanning, recent work has broadened appreciation for how the repetition rate of pulsed illumination can affect fluorophores’ total time-integrated signal emission. Increasing the time between illumination pulses to the microsecond timescale or longer allows excited fluorophores to relax electronically from dark, metastable spin triplet states, which are usually on the dominant pathways to photobleaching and phototoxicity, instead of absorbing another photon that would increase the propensity for photoinduced toxicity (Donnert et al. 2007a). Of course, a reduction in pulse repetition rate also implies a reduction in the fluorescence emission rate, even if each fluorophore lasts for a greater number of excitation cycles.
By comparison, the photophysics underlying an alternative strategy for reducing photodamage in two-photon microscopy by raising pulse repetition rates remains less well understood (Ji et al. 2008). This strategy relies on the observation that in two-photon microscopy both photobleaching (Chen et al. 2002, Patterson & Piston 2000) and photodamage (Hopt & Neher 2001) rise with a dependence on the instantaneous illumination intensity that is steeper than quadratic. Thus, in the regimes in which pulse energies are limited by phototoxicity, a net increase in signal should be obtainable by raising the pulse rate while keeping the energy per pulse approximately constant. Alternatively, the rate of photodamage might be reduced while keeping signal rates constant by increasing pulse rates but reducing pulse energies. Empirically, by increasing the usual ~80 MHz rate of pulse delivery in two-photon imaging up to ~128-fold via the use of a “pulse splitter,” it was possible to either increase signals or reduce photobleaching in fixed and live tissues (Ji et al. 2008). Strikingly, this result implies that a major route for photobleaching depends on photophysical effects at the picosecond timescale, much briefer than typical ~1–10-ns fluorescence lifetimes or the microsecond lifetimes of dark triplet states.
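The pulse-splitting logic can be sketched with simple scaling relations. The signal scaling (rate × energy²) follows from two-photon excitation; the cubic damage exponent below is purely illustrative of "steeper than quadratic" and is not a measured value from the cited work.

```python
def relative_signal(f_scale, e_scale):
    """Two-photon signal per unit time scales as (pulse rate) * (pulse energy)^2."""
    return f_scale * e_scale ** 2

def relative_damage(f_scale, e_scale, exponent=3.0):
    """Photodamage per unit time, assuming a supra-quadratic per-pulse energy
    dependence; the exponent of 3 is an illustrative assumption."""
    return f_scale * e_scale ** exponent

# Split each pulse 128-fold, then raise the average power by sqrt(128)
# so that the signal is unchanged:
f_scale = 128.0
e_scale = 128.0 ** 0.5 / 128.0            # = 128**-0.5, ~0.088 of original energy
sig = relative_signal(f_scale, e_scale)   # 1.0: signal preserved
dmg = relative_damage(f_scale, e_scale)   # ~0.088: roughly 11-fold less damage
```

Under these assumptions, matching the original signal with 128-fold pulse splitting cuts the modeled damage rate by about an order of magnitude.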
In parallel with the ongoing progress in two-photon microscopy, alternative approaches to imaging in 3D using one-photon fluorescence excitation are arising, often for applications complementary to those suited for two-photon imaging. Other traditional linear contrast mechanisms, such as light absorption, can often be used as well. As compared with two-photon imaging, all the volumetric imaging methods discussed in this section suffer from relatively poor depth penetration in scattering tissue. However, these methods generally perform well in small, optically transparent samples such as embryos or small organisms and so are often used to study developmental processes. Optical clearing of histological specimens can further permit the study of large tissue volumes. Moreover, some linear contrast imaging methods do permit fast, 3D functional imaging in relatively transparent organisms such as zebrafish embryos (T. Anderson, L. Grosenick, and S. Smith, unpublished data) or in tissues with reduced scattering such as the vomeronasal epithelium (Holekamp et al. 2008). The use of conventional light sources, as opposed to the ultrashort-pulsed lasers used for two-photon imaging, helps to reduce costs.
Light-sheet microscopy (Fuchs et al. 2002, Voie et al. 1993) resembles a less commonly used form of confocal microscopy, the theta (Lindek et al. 1997, Lindek et al. 1999, Stelzer & Lindek 1994) or dual-axis confocal (Wang et al. 2003, Webb & Rogomentich 1999) microscope, but with planar illumination (Figure 3a). Variants of light-sheet microscopy are known as selective plane illumination microscopy (Huisken et al. 2004) or objective-coupled planar illumination (OCPI) (Holekamp et al. 2008). The optical axes for light delivery and collection are oriented at an angle to one another, usually 90 degrees. Cylindrical lenses of modest NA and long working distances are generally used for creating a sheet of illumination. A high NA microscope objective is typically used in the light collection pathway to maintain high resolution and signal-gathering power. A key advantage of light-sheet illumination is the restriction of fluorescence generation to a single plane, which is not the case in conventional epi-fluorescence microscopy. The use of sheet illumination leads to a large reduction in background fluorescence, which improves the attainable imaging depth by partially mitigating the loss of contrast with depth. Sheet illumination also reduces photobleaching occurring at locations not being imaged, which in turn improves sample viability, such as for time-lapse imaging (Holekamp et al. 2008, Huisken et al. 2004). For example, light-sheet microscopy was recently used to track the live development of an entire zebrafish embryo, over multiple hours and at single-cell resolution, in an embryo expressing a green fluorescent protein (GFP) fusion that localizes to chromatin (Keller et al. 2008).
If the plane of illumination is rapidly scanned in tandem with the focal plane of the imaging lens, then light-sheet fluorescence microscopy can enable fast functional volumetric imaging (Figure 4a) (Holekamp et al. 2008). Although scattering still limits the depths in tissue at which cellular dynamics can be tracked, in tissues with cell bodies within ~150 μm of the tissue surface, such as the vomeronasal epithelium, investigators have tracked Ca2+ dynamics in up to ~700 neuronal somata with one volume acquisition occurring every 2 s over a 150 μm × 425 μm × 250 μm region (Holekamp et al. 2008). Alternatively, this technique has been used to monitor Ca2+ dynamics in 88 cells within a single plane at 200 Hz (Holekamp et al. 2008).
Another set of applications for light-sheet fluorescence microscopy involves efficient examination of histological specimens (Dodt et al. 2007). Rather than cut tissue samples into a large number of slices that must be separately imaged and then realigned computationally, light-sheet microscopy permits high-resolution data to be acquired from an entire intact tissue specimen. A key element of this approach is the process of “optically clearing” the sample to minimize the internal variations in refractive index. This method vastly reduces light scattering and permits large, millimeter-sized fixed tissue samples, such as embryos, or even an entire mouse brain, to be imaged without slicing (Dodt et al. 2007). The attainable axial resolution varies in part with the specimen’s lateral extent since this will dictate the minimum thickness of the illumination sheet that can be used without significant diffraction occurring over the breadth of the sample.
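The diffraction constraint just mentioned can be estimated with Gaussian-beam optics: a sheet focused to waist w0 stays roughly that thin only over its confocal parameter b = 2πw0²/λ, so spanning a sample of breadth L requires w0 on the order of √(λL/2π). The sketch below is a back-of-the-envelope estimate only; exact prefactors depend on the actual sheet-forming optics.

```python
import math

def min_sheet_waist(breadth, wavelength=488e-9):
    """Approximate minimum light-sheet waist (1/e^2 radius, in meters) whose
    confocal parameter b = 2*pi*w0**2/wavelength spans the sample breadth.
    Gaussian-beam estimate; prefactors vary with the illumination optics."""
    return math.sqrt(wavelength * breadth / (2.0 * math.pi))

# A specimen ~1 mm across at 488 nm requires a sheet waist of roughly 9 um,
# whereas a ~100-um-wide specimen permits a ~3-um waist:
w_large = min_sheet_waist(1e-3)   # ~8.8e-6 m
w_small = min_sheet_waist(1e-4)   # ~2.8e-6 m
```

This square-root scaling is why the axial resolution of light-sheet imaging degrades, albeit gently, as specimens grow laterally larger.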
An alternative volumetric technique for studying cleared histological specimens is optical projection tomography (OPT) (Sharpe 2004, Sharpe et al. 2002), which is in several respects an optical analog of X-ray computed tomography (CT). To date, OPT has used either fluorescence or light absorption for contrast. A key similarity with X-ray CT is that 3D images are reconstructed via a “back computation” process that starts with a series of raw 2D projection images. To collect these data, the sample is rotated while an ensemble of 2D projection images is acquired across a wide set of angular perspectives. Contrast intensities from each voxel in the 3D reconstruction can then be deduced by a back computation algorithm involving the same type of Radon transform used in X-ray CT. OPT permits reconstruction at resolutions as fine as ~5–10 μm of fixed 3D specimens that are ~1–10 mm across (Figure 3b). Unlike with other one-photon volumetric techniques, the resolution of OPT can be readily made isotropic by using fine rotations of the sample. High-resolution reconstructions require a large number of projection images, particularly with large sample volumes (Sharpe 2004). Thus, most applications of OPT have involved fixed specimens, such as small organisms (McGurk et al. 2007) (Figure 4c) or embryos (Sharpe et al. 2002) (Figure 4d). Recently, OPT permitted the visualization of limb development in a mouse limb bud culture over multiple time points and without optical clearing, at the cost of a modest loss of resolution (Figure 4e) (Boot et al. 2008).
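The "back computation" step can be illustrated in miniature. The sketch below is schematic, unfiltered back-projection; production OPT reconstructions use filtered variants of the inverse Radon transform. It forward-projects a test image at many angles and then smears each projection back across the grid, recovering the emitter's location as the peak of the blurred reconstruction.

```python
import numpy as np

def radon(image, angles):
    """Parallel-beam forward projections of a square image, one row per
    angle (nearest-neighbor sampling; schematic only)."""
    n = image.shape[0]
    c = np.arange(n) - n / 2.0
    X, Y = np.meshgrid(c, c)          # X: column coordinate, Y: row coordinate
    sino = []
    for th in angles:
        # sample the image on a rotated grid, then sum along rows
        col = np.clip(np.round(X * np.cos(th) + Y * np.sin(th) + n / 2).astype(int), 0, n - 1)
        row = np.clip(np.round(-X * np.sin(th) + Y * np.cos(th) + n / 2).astype(int), 0, n - 1)
        sino.append(image[row, col].sum(axis=0))
    return np.array(sino)

def backproject(sino, angles):
    """Unfiltered back-projection: smear each 1D projection back across the
    reconstruction grid along its viewing direction."""
    n = sino.shape[1]
    c = np.arange(n) - n / 2.0
    X, Y = np.meshgrid(c, c)
    recon = np.zeros((n, n))
    for proj, th in zip(sino, angles):
        t = X * np.cos(th) - Y * np.sin(th)   # detector coordinate of each voxel
        recon += proj[np.clip(np.round(t + n / 2).astype(int), 0, n - 1)]
    return recon / len(angles)
```

Unfiltered back-projection blurs each point by ~1/r; applying a ramp (Ram-Lak) filter to each projection before smearing, as in X-ray CT, removes this blur.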
Like OPT, light field microscopy involves the computational reconstruction of 3D information encoded within a set of raw 2D data sets (Levoy 2009, Levoy et al. 2006, 2009). However, in light field microscopy the raw data are not even projection images, but rather abstract representations of 3D information. Thus, this technique well illustrates the ongoing rise in abstraction in light microscopy, a trend that will surely increase with time. Light field microscopy employs a microlens array, placed close to the image plane in front of the camera, to segregate information about signal photons’ originating locations and directions of propagation (Figures 3c, 4b). The microlens array achieves this segregation by transforming the type of raw data gathered by the camera from an ensemble of pixel intensity values into an ensemble of micro-images (e.g. 16 × 16 to 20 × 20 pixels in size) (Figure 4b yellow and orange insets). Each micro-image contains the signals originating from a fixed location within the sample; each pixel within a micro-image represents the intensity of light from that location reaching the camera by a specific propagation route. The maintenance of information about signal origin and direction of propagation implies that 2D images can be reconstructed at different viewing angles by assembling only those pixels with the same coordinate sets across all the micro-images. By applying a “synthetic aperture,” one can also computationally adjust the depth of field in the 2D reconstructions by choosing the maximum propagation angle of the returning rays that contribute to the image. This manipulation is analogous to adjusting the diameter of an iris in the back aperture of a microscope objective.
Computational refocusing at different planes is also possible in light field microscopy. Thus, by assembling a stack of 2D images computationally focused to a sequence of planes, and then applying a deconvolution, 3D reconstructions are also achievable. Because no scanning is needed for this form of volumetric imaging, 3D volume-acquisition rates are limited strictly by photon collection and camera speed. To date, light field microscopy has allowed functional imaging of Ca2+ signals in zebrafish across 3D volumes of ~250 × 250 × 70 μm³ at 4 Hz (T. Anderson, L. Grosenick, S. Smith, unpublished data). However, researchers face basic trade-offs between the achievable lateral resolution and the number of perspectives or axial planes sampled because with all else held constant, a microlens array that yields an increase in the number of camera pixels per micro-image also yields coarser spatial sampling in the lateral dimensions. Because the relative size of the spacings between the individual microlenses and camera pixels is important for setting the boundaries of this trade-off, progress in light field microscopy will be stimulated by advances in both camera and microlens array technology.
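The reassembly of micro-image pixels into views, and the synthetic-aperture averaging described above, reduce to simple array indexing. A minimal sketch, assuming the raw camera frame has already been parsed into a 4D array with one micro-image per lateral sample location (the layout and function names here are illustrative, not from the cited work):

```python
import numpy as np

def perspective_view(lf, u, v):
    """One 2D view of the sample from a single propagation direction:
    take the same pixel (u, v) from every micro-image.
    lf: 4D array shaped (ny, nx, nu, nv), one nu-by-nv micro-image per
    lateral sample location (assumed layout)."""
    return lf[:, :, u, v]

def synthetic_aperture(lf, radius):
    """Average all views whose propagation angle lies within `radius` of
    the optical axis, emulating a wider collection aperture and hence a
    shallower computational depth of field."""
    ny, nx, nu, nv = lf.shape
    cu, cv = nu // 2, nv // 2
    views = [lf[:, :, u, v]
             for u in range(nu) for v in range(nv)
             if (u - cu) ** 2 + (v - cv) ** 2 <= radius ** 2]
    return np.mean(views, axis=0)
```

Computational refocusing then amounts to shifting each perspective view laterally in proportion to its angle before averaging, which is the "shift-and-add" operation underlying light field focal stacks.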
By comparison, progress in holographic microscopy hinges in part on advances in the designs of programmable spatial light modulators (SLMs), which can digitally synthesize 2D hologram-generating patterns. Holographic microscopy encodes 3D information by interfering a reference wave, often a plane wave, with spherical waves radiating either toward the sample as illumination or from the sample’s emitters. This interferometric approach to encoding 3D data is basic to the holographic microscopy methods, which have existed for some time (Gabor 1948) and in multiple forms (Poon et al. 1995). Holographic techniques fall into two categories: scanning and nonscanning.
In recent versions of scanning holographic microscopy, illumination reflects off a programmable phase SLM that induces distinct phase shifts in the light reflecting off each of its pixels. This allows the synthesis of an arbitrary, 3D pattern of illumination that can be used to excite fluorescence or other optical processes concurrently at a chosen set of multiple locations in the sample (Lutz et al. 2008, Nikolenko et al. 2008, Papagiakoumou et al. 2008). This illumination strategy also works with two-photon fluorescence excitation (Nikolenko et al. 2008, Papagiakoumou et al. 2008). Alternatively, by scanning the synthesized illumination patterns one can in principle reconstruct an entire volumetric image with reduced background excitation owing to the 3D control of illumination. Current SLM technology enables the illumination patterns to be updated at frame rates up to 60 Hz. The freedom to synthesize arbitrary illumination patterns is also restricted by the number of pixels on the SLM (to date up to ~1920 × 1200 pixels) available for encoding the Fourier transform of the desired pattern. To date, SLMs have permitted concurrent illumination of tens of sample locations and arbitrarily shaped spatial regions, such as the areas occupied by cells (Lutz et al. 2008, Nikolenko et al. 2008, Papagiakoumou et al. 2008).
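A common way to compute the SLM phase pattern that yields a desired intensity distribution at the sample plane is Gerchberg-Saxton-style iteration between the SLM and Fourier planes. The cited studies use related but more elaborate algorithms, so the sketch below is only a generic illustration of the principle:

```python
import numpy as np

def gerchberg_saxton(target, n_iter=50, seed=0):
    """Compute a phase-only SLM pattern whose Fourier transform approximates
    the target intensity pattern (generic Gerchberg-Saxton iteration; the
    cited papers' exact algorithms may differ)."""
    rng = np.random.default_rng(seed)
    amp_target = np.sqrt(target)
    phase = rng.uniform(0.0, 2.0 * np.pi, target.shape)
    for _ in range(n_iter):
        field_slm = np.exp(1j * phase)        # uniform illumination, phase-only SLM
        field_img = np.fft.fft2(field_slm)    # propagate to the sample plane
        # impose the target amplitude, keep the computed phase
        field_img = amp_target * np.exp(1j * np.angle(field_img))
        # back-propagate and keep only the phase for the SLM
        phase = np.angle(np.fft.ifft2(field_img))
    return phase
```

Asking for a handful of bright spots (e.g., a chosen set of cell locations) converges quickly; the pixel count of the SLM bounds the complexity of the patterns this iteration can realize, as noted above.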
Holographic imaging has also recently been implemented in a nonscanning manner by reflecting the collected light signals off a single, centered holographic pattern projected onto the SLM (Rosen & Brooker 2008). In this version of holographic microscopy, each emitter in the sample provides two interfering spherical waves with different curvatures reflecting off the SLM. This embeds a separate interferometric hologram for each emitter onto the camera. Three holograms are sequentially recorded per image frame, each using a distinct phase-offset for the reflection pattern on the SLM (Figure 3d) (Rosen & Brooker 2008). The holograms are superposed and convolved with a holographic response function to reconstruct the 3D spatial distribution of emitters.
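The superposition of the three phase-shifted recordings amounts to a standard three-step phase-shifting combination. For recorded intensities of the form I_k = A + B·cos(φ − δ_k) with offsets δ_k = 0, 2π/3, 4π/3, the complex hologram B·e^{iφ}, free of the DC term A and the twin image, is recovered as (2/3)·Σ_k I_k·e^{iδ_k}. A minimal sketch of this combination step only (the full 3D reconstruction then requires convolution with the system's holographic response function):

```python
import numpy as np

def complex_hologram(frames):
    """Combine three phase-shifted intensity holograms into one complex
    hologram, canceling the DC and twin-image terms.
    frames: three arrays recorded with SLM phase offsets 0, 2pi/3, 4pi/3;
    for I_k = A + B*cos(phi - delta_k) this returns B*exp(1j*phi)."""
    deltas = (0.0, 2.0 * np.pi / 3.0, 4.0 * np.pi / 3.0)
    return (2.0 / 3.0) * sum(f * np.exp(1j * d) for f, d in zip(frames, deltas))
```

The identity is exact because the three offsets sum to zero around the unit circle, so the A term and the conjugate (twin-image) term both cancel.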
An alternative use for holographic techniques in microscopy involves the creation of arbitrary illumination patterns for photouncaging or photostimulation (Lutz et al. 2008), including with two-photon excitation for studies in thick tissue (Nikolenko et al. 2008, Papagiakoumou et al. 2008). Thus, holographic illumination methods may provide complementary alternatives to AOD-based approaches for random access imaging and uncaging, albeit with more modest update rates for changing the illumination patterns, though with the ability to illuminate many points in the sample simultaneously. Similarly, the concepts of light field microscopy can also be used to synthesize arbitrary 3D patterns of illumination by reversing the light path and replacing the camera with a spatially modulated light source (Levoy et al. 2009). Thus, both light field and holographic methods can be used to stimulate not only a set of arbitrarily chosen points, but also a set of arbitrarily chosen regions, such as a collection of cell bodies. Each of these techniques offers unique advantages. Viable spatial light-intensity modulators for light field illumination include digital micromirror devices (DMDs), which can update much faster (up to ~1–10 kHz binary image refresh rates) than the programmable phase SLMs used for holographic setups (~60 Hz). However, with light field illumination a substantial fraction of the light source power would in general be lost because many pixels of the DMD might be dark. In practice, phase SLMs preserve ~50% of the incoming light to synthesize holograms (Nikolenko et al. 2008, Papagiakoumou et al. 2008). SLMs can also synthesize high-resolution patterns in 3D (Lutz et al. 2008, Nikolenko et al. 2008, Papagiakoumou et al. 2008), whereas light field illumination sacrifices lateral resolution as axial resolution improves. Thus, light field strategies for photostimulation permit faster update rates, but holographic strategies can currently generate finer spatial patterns. We expect that both illumination techniques will serve key applications in neuroscience.
Addressing several classes of important issues in neuroscience will require the acquisition and analysis of massive data sets. For example, huge amounts of data will ultimately be needed to provide detailed knowledge about the development and higher-order connectivity patterns of distributed neural circuits, to characterize statistically the variations between nervous systems across and within species, and to conduct high-throughput imaging-based screens for mutations or drugs that affect nervous system function or disease progression. In many such cases, it has long been possible to acquire the type and quality of optical data needed but not the full data sets required to address the core scientific issues. Often lacking are the tools for rapid image collection, automated image analysis, and efficient management of vast image data repositories. In the 1990s, the fields of bioinformatics and drug discovery realized sweeping improvements in the speeds and costs of data acquisition, in large part owing to increased parallelization and automation. These high-throughput approaches have inspired and paved the way for emerging approaches to automation in neuroscience using light microscopy. As in bioinformatics, although the raw data will often be worth inspecting manually, or at least spot-checking, the crux of automated microscopy often lies in the ability to perform rapid sample manipulations or to distill higher-order statistical conclusions that need not be apparent by eye.
One growing approach to the acquisition of large data sets borrows directly from bioinformatics by using microfluidic chips for rapid manipulation of specimens and solution conditions under the microscope, as well as for high sample throughput. Microfluidic “lab-on-a-chip” devices have the virtue of optical transparency, a prerequisite for high-resolution microscopy, and allow high-throughput exchange of samples or sample conditions. In neuroscience, most microfluidic applications for microscopy have involved the model species Caenorhabditis elegans, owing to the ease of manipulating the positions and orientations of these roundworms by fluidic control (Figure 5), the battery of available genetic techniques, and the ease of creating mutant libraries (Allen et al. 2008, Chalasani et al. 2007, Chronis et al. 2007, Chung et al. 2008, Cui et al. 2008, Guo et al. 2008, Heng et al. 2006, Hulme et al. 2007, Rohde et al. 2007, Zeng et al. 2008). By allowing worms to be controlled while spatially confined, microfluidic chips present alternatives to traditional means of immobilizing worms for study, such as by anesthesia or cooling (Zeng et al. 2008). Furthermore, Ti:sapphire lasers and amplifiers, familiar to neuroscientists from two-photon imaging, can perform precise micro-surgical procedures in C. elegans, such as cutting a single axonal projection (Allen et al. 2008, Guo et al. 2008, Zeng et al. 2008). 3D two-photon images of subcellular processes can be acquired in awake animals by using highly stable immobilization techniques (Zeng et al. 2008). Microfluidic chips can even be equipped with postsurgical feeding chambers to facilitate survivability for long-term studies (Guo et al. 2008). C. elegans also has the distinction of being the only animal whose entire neuronal wiring diagram, or “connectome,” has been fully characterized (White et al. 1986).
Thus, the combination of high-throughput microfluidics with automated image acquisition and analysis should facilitate image-based decision-making algorithms that incorporate prior knowledge about neuronal and synaptic identities.
Because of how they facilitate sample manipulation, throughput, and imaging, microfluidic devices are especially well suited for sorting and screening large populations of worms. Chips can automatically align a worm in a predefined orientation, in which it can be manually or automatically categorized and sorted for later study (Chung et al. 2008, Rohde et al. 2007). Research groups that historically spent tedious hours at the microscope screening worms for mutations will increasingly adopt automated approaches combining microfluidic manipulations with automated image acquisition and machine vision algorithms for image analysis. For example, one system has demonstrated the ability to automatically detect and sort rare mutations at a rate of ~400 worms inspected per hour (Chung et al. 2008). After training the computer under human supervision, worms were screened for abnormal patterns of YFP expression within synaptic endosomes along the nerve cord (Figure 5f,g) (Chung et al. 2008). Other processes that might be tracked by automated means include neurogenesis, neurite extension, and cell migration. Microfluidics have also proven useful in characterizing functional Ca2+ responses of olfactory neurons to many odorants screened in rapid succession (Figure 5h) (Chalasani et al. 2007, Chronis et al. 2007).
Such methods synergize with existing approaches used in massive screens, such as genetic mutagenesis, RNAi-based gene knockdowns, and the use of drug libraries. However, automated microscopy techniques will be able to gather statistics well beyond what is typically possible by labor-intensive manual studies, enabling researchers to identify both faintly discernible and rare phenotypes. The microfluidic techniques being developed currently for use with worms should also find applications with other model organisms, such as Drosophila embryos (Chen et al. 2004, Dagani et al. 2007, Lucchetta et al. 2005) or zebrafish. Moreover, the techniques of automated image acquisition and analysis are also applicable well beyond the realm of microfluidics applications. For example, computer-controlled time-lapse microscopy has been used to monitor thousands of cultured neurons in vitro in a study of a Huntington disease model, automatically identifying neurons with inclusion bodies of aggregated huntingtin protein and repeatedly revisiting the same cells to track their death and survival (Figure 6a) (Arrasate et al. 2004).
Although applications involving rapid exchange of samples or solutions can be well suited for microfluidic approaches, another set of applications for high-throughput imaging involves automated inspection of histological brain tissue specimens. The latter applications are progressing on the basis of recent advances in tissue processing and preparation. A key goal is the reconstruction of 3D-image data sets across large blocks of brain tissue, ideally with the possibility of using multiple fluorescent labels concurrently. A number of approaches are arising.
Several techniques draw on established methods for tissue processing in electron microscopy. One longstanding technique known as correlative microscopy provides complementary types of information by inspecting the sample using both optical and electron microscopy (Gaietta et al. 2002, 2006; Giepmans et al. 2005; Larson et al. 2005; Sun et al. 2007; Toni et al. 2007; van Rijnsoever et al. 2008). The optical images can facilitate the identification of specific cell types or molecules tagged with fluorescent markers, ideally while the cells are alive (Gaietta et al. 2002, 2006; Larson et al. 2005; Toni et al. 2007; van Rijnsoever et al. 2008). The optical data also provide anatomical coordinates to guide subsequent electron microscopy studies of nanometer-scale ultrastructure, such as synapses (Figure 6b). Dual labels for light and electron microscopy facilitate this process (Sosinsky et al. 2007). The bleaching of fluorescent dyes (Deerinck et al. 1994, Maranto 1982) can drive the oxidation of a secondary compound to form an electron-dense precipitate, which can subsequently be visualized in electron microscopy. When proteins have been tagged with a tetracysteine motif, the ReAsH biarsenical label can drive very specific photo-oxidation (Gaietta et al. 2002). Some fluorescent quantum dots can also readily be imaged by electron microscopy (Giepmans et al. 2005). Electron microscopy can access ~1 nm ultrastructural information in biological samples (Sosinsky et al. 2007), making it a suitable complement to light microscopy. However, when ultrastructural resolutions of >10 nm will suffice, the techniques of super-resolution fluorescence microscopy (see below) should emerge as powerful alternatives for acquiring nanoscale information directly from optical images, often while the sample is still alive.
Another technique that builds on tissue preparation approaches for electron microscopy is array tomography (Figure 6c) (Micheva & Smith 2007). This method for obtaining 3D image reconstructions involves fixing a tissue specimen in acrylic resin; cutting sections as thin as ~50 nm from the specimen using an ultramicrotome; mounting these sections on glass slides; fluorescence immuno-labeling the tissue; imaging all the fluorescently labeled specimens (to date generally by conventional epi-fluorescence microscopy); and computing a 3D reconstruction of the original tissue block. Array tomography has several virtues. First, the cut sections can be far thinner than the axial resolutions of conventional epi-fluorescence or laser-scanning microscopes. Thus, the axial resolution of the 3D reconstruction is not limited by microscope optics, only by the thickness of the ~50–100-nm tissue slices. Second, blocks of tissue millimeters thick can be sectioned in this way. This method enables, for example, 3D reconstructions over the entire depth of a mouse’s neocortex at ~200 nm lateral and ~50 nm axial resolution. Third, unlike with some approaches in super-resolution microscopy (see section below), array tomography places no inherent restrictions on the type of fluorophores used. Together, these benefits have enabled volumetric reconstructions far superior to those attainable by confocal microscopy, opening a broad set of new neuroscience applications.
For example, using array tomography it should be possible to categorize and assess the spatially varying densities of specific types of synapses across the entire depth of neocortex or other millimeter-sized specimens. Other applications might involve analyzing human brain tissue toward statistically characterizing both normal and diseased brains in terms of circuit, cellular, and molecular parameters. Aiding such characterizations, the sections in array tomography can undergo multiple rounds of immuno-labeling, stripping, and imaging, which allows images to be taken with multiple sets of fluorescent tags and then coregistered. Alternatively, the tissue strips can also be examined by correlative approaches employing scanning electron microscopy if still higher resolution information is desired. Finally, the compatibility of array tomography with nearly all forms of fluorescence microscopy, including the super-resolution techniques, makes this method particularly versatile. To date, the tissue slicing in array tomography is a bottleneck step requiring long hours by a skilled operator. By comparison, the image acquisition using a computer-controlled microscope and specimen stage has been automated. In the future, a greater degree of mechanical automation during tissue slicing will improve array tomography’s ease of use and throughput.
A demonstration of how automated slicing might work directly under the microscope is knife-edge scanning microscopy, in which individual sections down to ~0.5–1.0 μm thick are sliced during simultaneous imaging (Mayerich et al. 2008). A diamond knife is mechanically coupled to a microscope such that the knife is oriented perpendicular to the optical axis and lying within the focal plane. The tissue block is oriented at 45° angles to both the optical axis and the knife blade, allowing the cut sections to be imaged as they are sliced and pass over the blade. 3D reconstructions are computed from the set of raw 2D images. This approach has enabled rapid (~70 mm3 per hour at ~0.6 μm lateral resolution) optical imaging of large tissue volumes (~1 cm3) (D. Mayerich, personal communication; Mayerich et al. 2008). Initial studies with knife-edge microscopy used Golgi and Nissl stains for contrast, but genetically encoded labels could also be envisaged.
As an alternative to mechanical slicing of tissue, a tissue block-face imaging technique termed all-optical histology builds on laser machining and plasma-mediated ablation methods using ultrashort-pulsed regenerative laser amplifiers (Figure 6d) (Tsai et al. 2003). Alternating rounds of two-photon fluorescence imaging and tissue ablation are used to acquire 3D image stacks ~100–150 μm in thickness, followed by laser ablation of the volume just imaged to reveal the tissue below. The depth of ablation is typically chosen to be slightly less than the imaged depth to ensure overlap of images between successive cycles of imaging and ablation. The advantages of this approach include the inherent registration of the images, the method’s speed (~20 mm3 of tissue examined per hour appears feasible with an optimized system), and thus the ability to obtain 3D renderings of substantial blocks of tissue. Unlike in array tomography, the axial resolution is limited to that of conventional two-photon microscopy. Because imaging occurs immediately following optical ablation, as in knife-edge microscopy, fluorescent markers should ideally be impregnated within the tissue block beforehand or expressed genetically, as opposed to being introduced by immunolabeling methods after slicing. However, after optical ablation the tissue surface appears to retain normal immunoreactivity (Tsai et al. 2003), so it should be possible to immunostain after each round of tissue ablation, albeit with a considerable reduction in overall processing speed.
Another means of automated slicing is the automatic tape-collecting lathe ultramicrotome (ATLUM), a tour de force of mechanical engineering in which a block of tissue is progressively shaved while rotating about an axis (Hayworth et al. 2007, Lichtman & Hayworth 2009). The shaved tissue sections can be very thin (<30 nm) and are captured automatically on a reel of carbon-coated Mylar tape. This approach requires careful feedback stabilization of the thickness of tissue being shaved but provides a compact, long-term means of storing the sliced tissue in an addressable manner. This collection technique implies that desired locations in the preserved brain could be accessed again by stereotactic coordinates. To date, ATLUM has primarily been used as a means of preparing large tissue blocks for scanning electron microscopy, but optical imaging using either transmitted light or multicolor fluorescence has been demonstrated following tissue slicing by ATLUM (Lichtman & Hayworth 2009).
In the past few years, there has been considerable discussion within the neuroscience community as to whether it might be feasible by current techniques to generate high-resolution wiring diagrams of neural circuits in an automated or semi-automated manner. The ATLUM is one means of tissue preparation that might facilitate acquiring the requisite large data sets. Fully automated versions of the tissue slicing in array tomography might provide competitive alternatives. Although some groups are pursuing circuit reconstructions by electron microscopy, others are considering the use of automated optical microscopy, either alone or in conjunction with correlative electron microscopy approaches. The use of new multicolor fluorescence labeling strategies, such as the Brainbow approaches discussed below (Lichtman et al. 2008, Livet et al. 2007), may help by labeling dense sets of individual neurons, each of which might be traceable using color as a distinguishing marker. Plainly, the reconstruction of large circuits involving densely packed neurons is a challenge that will hinge on progress in image segmentation and tracing algorithms as much as on labeling and imaging techniques. Computational image analysis, not image acquisition, often remains the bottleneck in volume reconstruction (Zhang et al. 2008). However, in part because the optical, labeling, and tissue preparation techniques for neural circuit reconstruction are still evolving, the corresponding set of computational tools for tracing neurites and creating 3D circuit reconstructions is also still germinating. Two recent reviews on circuit reconstruction (Smith 2007) and bioimaging informatics (Peng 2008) discuss some of the hurdles involved.
A longstanding goal for the use of microscopy in neuroscience has been to develop techniques that permit cellular-level brain imaging in actively moving mammals, toward uncovering and understanding the cellular dynamics underlying specific behaviors. Although in many respects microscopy might appear ill suited for studies in behaving animals, the need for such studies is considerable since anesthesia precludes behavior and substantially alters neural and astrocytic activity (Greenberg et al. 2008, Movshon et al. 2003, Nimmerjahn et al. 2007, Pack et al. 2001, Rinberg et al. 2006). The goal of visualizing individual cells’ activity in actively behaving mammals long remained elusive, but in the past two years two complementary approaches were successfully demonstrated. One of these approaches relies on alert, head-restrained animal preparations that can be studied with conventional microscopy instrumentation. The other approach involves the use of optical fiber to guide light to and from freely behaving animal subjects. Both approaches are compatible with the use of mice, which is important because of the wide availability of genetically modified mouse lines, including fluorescently labeled tool mice and mouse models of brain disease (see sections on fluorescent tool mice and chronic imaging).
In primates, awake, head-restrained preparations have allowed functional brain mapping using intrinsic optical signatures of brain activity (Grinvald et al. 1991) and studies of spatiotemporal responses from neuronal aggregates using voltage-sensitive dyes (Chen et al. 2006, 2008b; Quraishi et al. 2007; Raffi & Siegel 2007; Seidemann et al. 2002; Slovin et al. 2002; Yang et al. 2007). Intrinsic signal imaging in awake, body-restrained rats has facilitated examinations of how cerebral hemodynamics differs between anesthetized and awake subjects (Martin et al. 2002, 2006). Voltage-sensitive dye imaging in head-restrained but behaving mice has enabled the spatiotemporal dynamics of sensory and motor processing to be visualized in the neocortical whisker system with sub-columnar spatial resolution (Ferezou et al. 2007, Petersen 2009, Petersen et al. 2003).
However, cellular-level imaging in awake, head-restrained animals has only recently emerged (Dombeck et al. 2007, Nimmerjahn et al. 2007). By habituating a mouse to periods of head restraint under a conventional upright two-photon microscope, it was possible to visualize the simultaneous Ca2+ dynamics of large numbers of individual cells while the mouse was free to walk or run on a stationary exercise ball (Figure 7a). This assay builds on the longstanding use of head restraint in neuroscience to study locomotor responses in flies walking on a trackball (Götz & Wenking 1973). The more recent two-photon microscopy studies in behaving mice examined Purkinje neuron and Bergmann glial Ca2+ dynamics within the cerebellar vermis (Nimmerjahn et al. 2007) (Figure 7b–e) as well as Ca2+ dynamics of layer 2/3 neurons, astrocytes, and neuropil in the hind-limb sensory neocortex (Dombeck et al. 2007) (Figure 7f–h). In both of these brain areas, there were neurons and astrocytes whose Ca2+ activity correlated with locomotion, opening the door to further studies of the potential roles these cells might play in motor behavior. The use of head-restrained rodent preparations should also generalize to studies of other, nonmotor behaviors. Head-restrained preparations will further be valuable for studies in quietly resting animals, such as for comparing cellular activity patterns that arise in the awake and anesthetized brain. Initial two-photon microscopy studies of visual processing in awake rats have already revealed substantial differences between the two states (Greenberg et al. 2008, Greenberg & Kerr 2009).
Key technical issues for imaging studies in behaving animals concern the motion artifacts that can arise owing to brain displacements during active behavior. Given that typical frame rates for in vivo two-photon microscopy are only ~2–20 Hz, brain displacements can differentially affect the individual line scans within a single image frame (Dombeck et al. 2007). However, even during active locomotion, brain displacements can be kept at modest levels (~1–5 μm) by firm mechanical restraint of the cranium and surgical steps that aid stabilization of the brain (Figure 7c, g). Moreover, many lateral image displacements are correctable offline by image registration algorithms (Dombeck et al. 2007) (see below). It is more difficult to correct axial motion artifacts in software strictly using 2D data sets. Nonetheless, axial motions might be mitigated either by image registration approaches relying on volumetric data or by active stabilization of the focal plane using online measurements of brain displacements (Fee 2000). Even without such corrective measures, the present data sets are of sufficiently high quality so as to provide insights about both neuronal and astrocytic dynamics within densely sampled local networks during active behavior. By comparison, multielectrode electrophysiological recordings generally sample neurons spaced much farther apart (usually ≥100 μm), owing to constraints on electrode density, and inherently miss astrocytes.
One limitation of imaging in head-restrained animals is the intrinsic restriction on animal behavior. Although subjects have the freedom to ambulate in place or even perform tongue or forepaw manipulations, the range of motions and thus the set of behaviors that can be explored remain constrained. Imaging sessions are also limited in duration by the periods of restraint animals can tolerate. On the other hand, in many paradigms the imposed limits on animal behavior are likely to prove advantageous. Virtual reality approaches, in which the animal views a computer screen, navigates simulated environments, and responds to stimuli, may also extend the range of behaviors that can be studied with head fixation. A caveat is that, even for behaviors amenable to head restraint, behavioral performance may be substantially altered. This includes but is surely not limited to the changes in locomotor gait that occur during head fixation. Thus, it will be valuable whenever possible to compare results obtained in head-restrained subjects to those acquired by fiber-optic microscopy in freely behaving animals.
Just as the use of head-restrained alert preparations in imaging started with studies of intrinsic signals, so too the use of fiber optics in freely behaving subjects initially involved intrinsic optical signatures of brain activity. In the 1990s, a few studies explored the use of optical fiber bundles to image intrinsic functional signals in awake behaving cats (Rector et al. 1993, Rector & Harper 1991). The optical fibers both delivered illumination to the animal and returned signals to a remote camera. However, with just a bare fiber bundle and no lenses, the images produced were of poor quality and low resolution. Nevertheless, the idea of using fiber optics to study freely behaving animals continued to develop and was later applied to behaving mice (Adelsberger et al. 2005, Ferezou et al. 2006, Yamaguchi et al. 2001). In these later studies, optical fibers either were used as single point probes that monitored but did not image the tissue’s characteristics (Adelsberger et al. 2005, Yamaguchi et al. 2001) or were used in devices that did not permit individual cells to be distinguished (Ferezou et al. 2006, Murayama et al. 2007, Poe et al. 1996). Voltage-sensitive dye imaging in active mice permitted elegant examination of aggregate neuronal signals associated with sensory processing in the whisker system (Ferezou et al. 2006). Moreover, the first fiber-optic two-photon microscope was introduced in a pioneering study that used a ~25-g microscope to study freely moving rats (Helmchen et al. 2001).
In this portable two-photon microscope, a piezoelectric actuator accomplished laser scanning by vibrating the illumination fiber at mechanically resonant frequencies in the two lateral dimensions (Helmchen et al. 2001). Fiber delivery of femtosecond pulses for two-photon excitation was achieved with the aid of a pair of diffraction gratings to compensate for group velocity dispersion arising in the optical fiber. The microscope allowed imaging of erythrocyte flow and neuronal Ca2+ transients when used in a line-scanning mode. However, motion artifacts that arose when the rats were moving made functional imaging challenging during periods of active behavior. The microscope’s size also precluded its use in mice or lighter weight rats. Nonetheless, this seminal study not only inspired additional technological improvements, including further miniaturization of both one- and two-photon fluorescence microscopes, but also effectively conveyed that the imaging of cells in freely behaving animals was a feasible pursuit.
More recent portable two-photon microscopes for brain imaging have had masses as small as ~1–4 g, in part by employing GRIN microlenses similar to those used for microendoscopy (Engelbrecht et al. 2008, Flusberg et al. 2005b, Göbel et al. 2004, Piyawattanametha et al. 2006). Miniaturized laser-scanning mechanisms under exploration have included microelectromechanical systems (MEMS) scanning mirrors (Piyawattanametha et al. 2007, Piyawattanametha et al. 2006) as well as non-resonant (Sawinski & Denk 2007) or resonant vibration of the illumination optical fiber (Engelbrecht et al. 2008, Flusberg et al. 2005b). MEMS scanning mirrors (e.g., 1 × 1 mm2 or 0.75 × 0.75 mm2 in size) that are microfabricated in silicon allow line-scanning rates of up to ~ 3.5 kHz, which is sufficient to permit frame acquisition rates of ~30 Hz (Piyawattanametha et al. 2006) or fast tracking of erythrocyte flow by line-scanning (Figure 8a–c). An alternative approach to laser-scanning drives the illumination fiber in a resonant, spiral pattern at frame rates up to 25 Hz (Figure 8d,e) (Engelbrecht et al. 2008). This latter approach has allowed investigators to visualize cerebellar Purkinje cell Ca2+ activity in anesthetized rats (Figure 8f,g). To date no two-photon microscope based on micro-optics has provided data from freely behaving animals, but the steady technological progress suggests such an advance might arrive soon. In comparison, a miniaturized high-speed epi-fluorescence microscope based on micro-optics has already achieved cellular-level Ca2+ imaging in freely moving mice (Flusberg et al. 2008) (Figure 8h,i,j).
This portable, 1.1-gram epi-fluorescence microscope permitted frame acquisition rates up to 100 Hz, enabling fast imaging of Ca2+ dynamics in cerebellar Purkinje neurons and of erythrocyte circulation in hippocampus and neocortex (Flusberg et al. 2008). The device used an image-transmitting bundle of optical fibers, a tiny planetary gear system for focusing, and three microlenses including interchangeable objectives of varying lengths for imaging deep or surface tissues (Figure 8h). When used to study Purkinje neurons of the cerebellar vermis, this portable microscope revealed the dendritic Ca2+ spikes of these cells (Figure 8i,j), which are known to closely reflect spike inputs from climbing fiber axons originating in the inferior olive. When the mice were actively moving, the Purkinje neurons exhibited higher rates and pairwise synchrony levels of Ca2+ spiking, as compared with periods of rest. These findings highlight the applicability of high-speed fiber-optic epi-fluorescence microscopy to studies of how neuronal activity patterns may vary across behavioral and physiological states.
Compared with fiber-optic two-photon microscopy (Engelbrecht et al. 2008, Flusberg et al. 2005b, Helmchen et al. 2001), distinct advantages of fiber-optic epi-fluorescence microscopy are its faster acquisition rates, lack of mechanical scanning, broader fields of view, and greater depth of field. These properties all help to confer greater robustness to motion artifacts. Disadvantages of epi-fluorescence microscopy include the more limited optical penetration depth, the increased background fluorescence, and the attendant photobleaching outside the focal plane. However, for cells in accessible tissues such as the olfactory bulb or cerebellar cortex, deeper-lying cells possessing superficial dendrites, or cells reachable by microendoscopy lenses (Murayama et al. 2007, 2009), high-speed fiber-optic epi-fluorescence microscopy should permit a range of studies in behaving mice, including mouse models of brain disease. We expect both two-photon and epi-fluorescence fiber-optic microscopes, as well as head-restrained approaches to imaging in behaving animals, will all find complementary usages in neuroscience.
Methods for loading synthetic fluorescent Ca2+ indicators into large populations of cells have enabled recent imaging studies of the dynamics of up to hundreds of individual neurons and astrocytes in live animals (Greenberg et al. 2008, Kerr et al. 2005, Mrsic-Flogel et al. 2007, Nimmerjahn et al. 2004, Ohki et al. 2006, Orger et al. 2008, Stosiek et al. 2003, Sullivan et al. 2005, Sumbre et al. 2008). Data analysis challenges arising from these studies concern the need for reliable, efficient extraction of Ca2+ activity traces from large sets of individual cells, in both awake and anesthetized animals. Hurdles include the correction of motion artifacts, identification of cellular signal sources, and in many cases the detection of individual action potentials. In principle, these challenges should also pertain to imaging studies of membrane voltage dynamics, but there have not yet been comparably large optical recordings of voltage dynamics across many individual cells in live mammals, owing to the current limitations of voltage-sensitive indicators.
In live animals, brain displacements correlated with physiological rhythms can create image motion artifacts. In awake or behaving animals, brain displacements associated with voluntary animal movements can compound the challenge. With images collected by high-speed fiber-optic epi-fluorescence microscopy in freely behaving mice, lateral motion artifacts were correctable by registering images to a common reference frame, leaving residual jitter at the sub-pixel level (Flusberg et al. 2008). However, if the camera’s acquisition rate was reduced below 75 Hz, some of the individual images appeared blurry, indicating there were noticeable levels of brain displacement occurring during the acquisition of single frames. Thus, with laser-scanning in vivo two-photon Ca2+-imaging, for which frame rates can be as low as ~2–4 Hz (Dombeck et al. 2007, Helmchen et al. 2001), rigid coregistration of entire frames will in general be insufficient to remove fast artifacts. However, many faster artifacts are correctable by laterally registering an image’s individual line-scans to one another, since typical line-scanning rates are ~100–2000 Hz. One method for performing this alignment relied on a probabilistic, hidden Markov model of brain displacement (Dombeck et al. 2007). Alternatively, there are alignment methods that permit deformations of the entire image frame to minimize errors due to brain displacements (Greenberg et al. 2008, Greenberg & Kerr 2009).
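As a concrete illustration, rigid lateral registration of a frame to a common reference can be sketched with FFT-based cross-correlation. This is a minimal numpy example with integer-pixel shifts and hypothetical synthetic data; published pipelines add subpixel estimation and the line-by-line corrections described above.

```python
import numpy as np

def register_rigid(frame, reference):
    """Estimate the integer-pixel (dy, dx) shift aligning `frame` to
    `reference` via FFT-based cross-correlation, then undo it with
    np.roll.  A sketch only: real pipelines add subpixel interpolation
    and, for slow laser scanning, line-by-line alignment."""
    # Cross-correlation computed in the Fourier domain.
    xcorr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(frame))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Convert the peak location into signed shifts (wrap-around aware).
    shifts = tuple(p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape))
    return np.roll(frame, shifts, axis=(0, 1)), shifts

# Synthetic example: a bright "cell" displaced by (3, -2) pixels.
reference = np.zeros((64, 64))
reference[30:34, 30:34] = 1.0
moved = np.roll(reference, (3, -2), axis=(0, 1))
aligned, shift = register_rigid(moved, reference)  # shift == (-3, 2)
```

In practice each frame of a movie would be registered to a time-averaged reference image, and the residual jitter assessed at the sub-pixel level as in the studies cited above.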
In addition to algorithms that correct for brain motion, there is also a need for automated methods for identifying the individual cellular signal sources within large-scale Ca2+-imaging data. Many recent studies have involved manual identification of cell bodies, performed by outlining regions of interest (ROI) by eye within static images (Dombeck et al. 2007, Flusberg et al. 2008, Greenberg et al. 2008, Kerr et al. 2005, Niell & Smith 2005). However, as the number of cells in a typical data set grows, manual approaches to identifying cells will become tedious and unwieldy. Semi-automated approaches have grouped pixels together into ROIs on the basis of their signal correlations (Ozden et al. 2008), but since there are still manual steps involved such approaches cannot be easily scaled to handle the largest available data sets without undue human labor.
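A minimal sketch of correlation-based pixel grouping of this kind might look as follows (numpy only; the threshold and the synthetic movie are illustrative assumptions, and real implementations add spatial-contiguity constraints plus the manual curation steps noted above):

```python
import numpy as np

def group_correlated_pixels(movie, threshold=0.7):
    """Group pixels into candidate ROIs by thresholding the pairwise
    temporal correlations of their fluorescence traces.  `movie` has
    shape (time, height, width); returns an integer label image with
    0 marking unassigned pixels.  Greedy and simplified: real
    pipelines also enforce spatial contiguity and manual curation."""
    t, h, w = movie.shape
    traces = movie.reshape(t, h * w)
    active = np.flatnonzero(traces.std(axis=0) > 0)   # skip silent pixels
    corr = np.corrcoef(traces[:, active].T)
    labels = np.zeros(h * w, dtype=int)
    next_label = 1
    for i, pix in enumerate(active):
        if labels[pix]:
            continue
        members = active[corr[i] > threshold]          # pixels co-active with pix
        labels[members] = next_label
        next_label += 1
    return labels.reshape(h, w)

# Synthetic movie: two "cells" with unrelated time courses, silent elsewhere.
rng = np.random.default_rng(0)
movie = np.zeros((100, 16, 16))
trace_a, trace_b = rng.random(100), rng.random(100)
movie[:, 2:4, 2:4] = trace_a[:, None, None]
movie[:, 10:12, 10:12] = trace_b[:, None, None]
roi_map = group_correlated_pixels(movie)
```

Even in this toy setting the scaling problem is evident: the pairwise correlation matrix grows quadratically with pixel count, which is one reason fully automated approaches are sought for the largest data sets.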
Moreover, ROI analyses have mainly been based on heuristic, preconceived notions of what defines the geometric and other characteristics of specific cell types (Göbel et al. 2007, Ohki et al. 2006, Ozden et al. 2008). When these characteristics can be well defined, morphological filters can help find cells of the desired type in an automated or semiautomated fashion (Ohki et al. 2006). However, with the most commonly used synthetic Ca2+ indicators, morphological filters will have considerable difficulty in identifying neuronal dendrites or fine glial processes because these structures do not stand out with high contrast and often blend into the image background except during periods of Ca2+ activation. There are also emerging approaches for decomposing an imaging data set into its constituent signal sources on the basis of general statistical principles (Mukamel et al. 2007). This form of automated approach to cell sorting, which has been based on independent components analysis (Brown et al. 2001, Mukamel et al. 2007, Reidl et al. 2007), does not make assumptions about cells’ locations, shapes, or dye labeling patterns. Rather, cell sorting by independent components analysis relies on the statistics of dynamical signals, implying that even multiple, spatially overlapping cells occupying some of the same pixels can often be separated without significant cross talk. In such cases, ROI analysis tends to suffer badly from cross talk. However, owing to the reliance on the statistics of cells’ dynamics, independent components analysis can overlook cells with very low rates of activity, whereas an ROI analysis might find these cells on the basis of their appearance.
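To illustrate the principle (not the published algorithm of Mukamel et al.), a bare-bones two-component FastICA can demix the traces of two spatially overlapping sources using only their temporal statistics; the mixing matrix and synthetic transients below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def fast_ica_2(X, n_iter=200):
    """Bare-bones two-component FastICA (tanh nonlinearity, deflation),
    illustrating activity-based cell sorting: no assumptions about cell
    shape or location, only temporal statistics.  X holds the mixed
    traces, shape (2, n_samples)."""
    X = X - X.mean(axis=1, keepdims=True)
    vals, vecs = np.linalg.eigh(np.cov(X))       # whiten the mixtures
    Xw = (vecs / np.sqrt(vals)).T @ X
    W = []
    for _ in range(2):
        w = rng.normal(size=2)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            wx = np.tanh(w @ Xw)
            # One-unit FastICA fixed-point update.
            w = (Xw * wx).mean(axis=1) - (1 - wx ** 2).mean() * w
            for u in W:                          # deflation: stay orthogonal
                w -= (w @ u) * u
            w /= np.linalg.norm(w)
        W.append(w)
    return np.array(W) @ Xw                      # recovered source traces

def sparse_trace(n=2000, n_events=12):
    """Synthetic Ca2+-like trace: sparse exponential transients."""
    tr = np.zeros(n)
    for i in rng.choice(n - 40, n_events, replace=False):
        tr[i:i + 30] += np.exp(-np.arange(30) / 6.0)
    return tr

sources = np.vstack([sparse_trace(), sparse_trace()])
mixing = np.array([[1.0, 0.5], [0.4, 1.0]])      # spatially overlapping cells
recovered = fast_ica_2(mixing @ sources)
```

The sparse, transient character of Ca2+ signals is what makes them strongly non-Gaussian and hence separable by this statistical criterion; a source that is nearly silent contributes little to those statistics, which is exactly the low-activity blind spot noted above.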
After identification of individual cells within Ca2+-imaging data there is often the question of whether either individual neuronal action potentials or rates of neuronal spiking can be found from the dynamical traces of Ca2+ activity. Nearly all neuron types have voltage-sensitive Ca2+ channels, so action potentials often have stereotyped signatures within Ca2+ activity traces. However, the underlying spike trains are often obscured by noise, the kinetics of Ca2+ binding to the indicator, and the cell’s own Ca2+ buffering mechanisms. Recent studies have applied a variety of approaches for estimation of spike rates or extraction of neuronal spike trains in a digital form. Given knowledge of the Ca2+ signal for a single action potential, generally a fast rise in fluorescence followed by a slower, often exponential decline (Helmchen et al. 1996), template matching (Greenberg et al. 2008, Kerr et al. 2005) or temporal deconvolution (Yaksi & Friedrich 2006) approaches can yield spike trains or rate estimates, depending in part on sampling rates and the intervals between spikes relative to the decay time of the Ca2+ signal for one spike. A simple deconvolution can be improved by using additional information regarding the amplitude and spectrum of noise in the data (Holekamp et al. 2008). More generally, one can apply machine learning techniques to paired imaging and electrophysiological data sets toward creating algorithms for spike recognition (Sasaki et al. 2008). Just as with the development of spike sorting methods in electrophysiology, significant effort will be required to compare the strengths and weaknesses of different approaches to cell sorting and spike extraction across a wide variety of cell types and experimental parameters.
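A minimal sketch of such temporal deconvolution, assuming each spike contributes a unit-amplitude transient with instantaneous rise and single-exponential decay (an idealization of real indicator kinetics; the time constants below are illustrative):

```python
import numpy as np

def deconvolve_spike_rate(trace, tau, dt):
    """Recover an estimate of spiking input from a fluorescence trace,
    assuming each spike adds a unit transient with instantaneous rise
    and exponential decay of time constant `tau`.  For first-order
    decay, dF/dt = -F/tau + s(t), so s(t) ~ dF/dt + F/tau recovers
    the spike input up to noise."""
    return np.gradient(trace, dt) + trace / tau

# Synthetic check: two spikes convolved with the assumed kernel.
dt, tau = 0.01, 0.5
t = np.arange(0, 5, dt)
spikes = np.zeros_like(t)
spikes[[100, 300]] = 1.0
trace = np.convolve(spikes, np.exp(-t / tau))[: len(t)]
rate = deconvolve_spike_rate(trace, tau, dt)  # peaks near indices 100 and 300
```

With noisy data this naive derivative amplifies high-frequency noise, which is why the noise-aware variants and learned approaches cited above improve on simple deconvolution.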
Many fundamental neural phenomena arise over sufficiently small length scales to have eluded visualization by diffraction-limited optical techniques. The sizes of synaptic structures are often just within the resolution limits of conventional light microscopy, and many features of dendritic spines and axonal boutons have remained indiscernible. Molecular processes of biochemical signaling and macromolecular interactions also involve spatial scales smaller than those traditionally accessible by light microscopy. Fine processes densely packed within neuropil resist anatomical tracing by light microscopy if adjacent fibers cannot be adequately distinguished. However, advances in “super-resolution” microscopy have made optical examinations of nanometer-scale phenomena tractable. Although diffraction limits how tightly light can be focused in traditional microscopy, by the judicious manipulation of fluorophores and photoswitches it is possible to obtain optical images of structures at sub-diffraction length scales (Figures 9–11). With such super-resolution tools, neuroscientists can now use light to interrogate “nanoscale” structures once accessible only by electron microscopy. Although correlative light and electron microscopy techniques can still provide images of finer (~1 nm) resolutions than those (~20 nm) regularly produced today by super-resolution fluorescence microscopy, for many applications in neuroscience the latter will suffice.
Early improvements to the resolution of light microscopy were attained by interferometric manipulations that shrank the size of the point-spread function by raising the effective numerical aperture used to excite and/or gather optical signals. Interferometric approaches include 4Pi (Hell et al. 1997, Hell & Stelzer 1992) and I5M microscopy (Gustafsson 1999, Gustafsson et al. 1999, Shao et al. 2008), two methods that use an opposed pair of high-numerical aperture objective lenses to deliver and gather light at a common focal plane. Overall, these early methods were especially successful in improving axial resolution by up to ~sevenfold and to as good as ~80–150 nm in the case of 4Pi microscopy (Gugel et al. 2004). However, the interferometric methods did not free the resolution limit from the constraints of diffraction. Nor did these methods achieve widespread adoption by biologists, largely because of the technical challenges involved in implementation.
Once microscopists began to use fluorophores’ photophysical and photochemical switching effects, diffraction no longer had a constraining role in setting resolution limits. Demonstrated resolution limits have fallen to scales as fine as ~10 nm (Betzig et al. 2006), and ease of use has increased. Today, we stand at the brink of a widespread adoption of super-resolution techniques. In this section, we explore the properties and implications of the three main forms of optical microscopy that provide resolution limits unconstrained by diffraction. All three categories of super-resolution microscopy discussed below are now either commercially available or are soon to become so.
The first proposed means of breaking the constraints of diffraction is an approach that relies on the photophysics of stimulated emission to narrow the point-spread function, the image of an ideal point emitter, in laser-scanning fluorescence microscopy (Hell & Wichmann 1994). Stimulated emission depletion (STED) microscopy achieves sub-diffraction-limited resolution by both exciting and depleting the available population of fluorophores using a pair of overlapping, concentric laser beams scanned together (Figure 9a–c) (Klar & Hell 1999, Klar et al. 2000). The first beam excites fluorophores lying within a diffraction-limited spot to an excited electronic state. However, the second “STED” beam, which is shaped in cross section like a doughnut, prevents fluorescence at the periphery of the excited area by using stimulated emission, in which the fluorophores return to the ground state by emitting light of the same color as the STED beam rather than emitting fluorescence photons (Figure 9a,b). Thus, only fluorophores at the center of the first beam’s focus are able to fluoresce, which sharpens the point-spread function to the “hole of the doughnut” left unaffected by the STED beam. The ratio of the STED beam’s intensity to the value of intensity needed just to overcome the spontaneous rate of fluorescence emission from the excited state dictates the size of the zone in which fluorescence is still possible. Thus, a more intense STED beam leads to finer resolution (Harke et al. 2008). Using this approach to spatially isolate the set of available fluorophores, which allows sequential examination of neighboring fluorophores by scanning, STED microscopy has achieved imaging resolutions as fine as 16 nm (Donnert et al. 2006, Westphal & Hell 2005), and even ~6 nm in a solid state diamond sample (Rittweger et al. 2009). 
In theory, the resolution can be made infinitely fine using STED, but in practice the power of the depletion beam cannot be raised beyond what competing processes of photo-damage, photobleaching, and background fluorescence excitation will permit.
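This intensity dependence is commonly summarized by a square-root modification of the conventional diffraction limit (see, e.g., Harke et al. 2008):

```latex
\Delta r \;\approx\; \frac{\lambda}{2\,\mathrm{NA}\,\sqrt{1 + I_{\mathrm{STED}}/I_{\mathrm{sat}}}}
```

where $\lambda$ is the wavelength, $\mathrm{NA}$ the objective's numerical aperture, and $I_{\mathrm{sat}}$ the saturation intensity just sufficient to outcompete spontaneous emission. The resolution $\Delta r$ shrinks without bound as $I_{\mathrm{STED}}$ grows, until the competing photophysical processes noted above intervene.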
As in other forms of laser-scanning imaging, there are important trade-offs in STED microscopy among resolution, signal-to-noise ratio, imaging speed, and deleterious processes such as photobleaching. Because the dual-beam STED strategy produces fluorescence signals from a region of tunable size, and not from individual emitters as do super-resolution approaches based on single-molecule detection, STED microscopy offers considerable control of these trade-offs and so allows versatile optimization under different conditions. STED microscopy has reached frame rates up to 28 Hz at 62-nm resolution in a 2.5 × 1.8 μm2 field of view, albeit with only a few collected signal photons per pixel (Westphal et al. 2008).
Notable results in neuroscience obtained by STED microscopy include several pertaining to the organization of synapses, such as the finding that synaptotagmin I molecules residing in synaptic vesicle membranes remain clustered together in patches on the cell membrane following vesicle release (Willig et al. 2006). STED microscopy has also allowed visualization of the ring-like structures of the protein bruchpilot at the Drosophila neuromuscular junction (Kittel et al. 2006), the SNARE protein syntaxin (Sieber et al. 2006) and its dynamics (Sieber et al. 2007), and the organization of nicotinic acetylcholine receptors (Kellner et al. 2007). Beyond the synapse, STED microscopy has revealed the clustering of the amyloid precursor protein (Schneider et al. 2008) and patterns of channel localization in olfactory neurons (Lin et al. 2007). STED microscopy is readily applicable to live cells and has been used to image the endoplasmic reticulum (Hein et al. 2008), track synaptic vesicle movement in axonal boutons (Westphal et al. 2008), monitor membrane lipid dynamics (Eggeling et al. 2008), and interrogate structural changes of dendritic spines during chemically induced LTP in hippocampal neurons (Figure 9f,g) (Nägerl et al. 2008).
Many technological enhancements to STED microscopy are already underway. Imaging speeds will be raised by multifocal scanning strategies (Hofmann et al. 2005). Additional sculpting of the point-spread function along the axial dimension permits 3D super-resolution imaging (Klar et al. 2000, Schmidt et al. 2008). Novel fluorophores that can better tolerate STED beams of greater intensities without bleaching would enable higher resolutions (Donnert et al. 2006). Dual color STED imaging has already emerged (Donnert et al. 2007b, Meyer et al. 2008). For examination of diffusive effects and reaction kinetics over nanometer-sized length scales, STED techniques can be married with the longstanding approaches of fluorescence recovery after photobleaching (FRAP) (Sieber et al. 2007) and fluorescence correlation spectroscopy (FCS) (Eggeling et al. 2008). Although STED microscopy typically employs pulsed lasers, it was recently implemented using continuous wave lasers, improving ease of use for many users and broadening the set of suitable lasers and fluorescent probes, albeit at the cost of increased photobleaching (Hein et al. 2008, Willig et al. 2007).
STED microscopy is the founding member of a broader family of methods, known as reversible saturable optical linear fluorescence transitions (RESOLFT) imaging, which also includes imaging based on ground-state depletion (GSD) (Bretschneider et al. 2007, Hell & Kroug 1995) and photoactivatable fluorophores (PA-FP) (Dedecker et al. 2007, Hofmann et al. 2005, Schwentker et al. 2007). In all these methods, the core idea remains to temporarily switch off the fluorescence of the indicator to squeeze the effective point-spread function of a scanning microscope. In GSD, instead of depleting fluorophores at the periphery of the excitation spot by stimulated emission, the fluorophores are transiently shelved in a long-lived dark triplet state (Bretschneider et al. 2007). However, the probability of transitioning into a permanently dark, photo-bleached state is raised when the fluorophore is in a triplet state, which complicates the implementation of GSD. PA-FP RESOLFT microscopy relies on fluorophores that can be photoactivated by light of a different color than that used for exciting fluorescence. In STED microscopy, the depletion beam must compete with the nanosecond timescales of fluorescence emission and so requires substantial intensities (~ 10–100 MW/cm2). By comparison, photoactivatable fluorophores can be switched off using much lower beam intensities (~ 10–1000 W/cm2). However, with the photoactivatable fluorophores explored so far, the switching kinetics limit acquisition speeds to ~ 1–50 ms/pixel (Hell 2007, Hofmann et al. 2005). Cross talk between activation and excitation of the fluorophores, as well as the restriction to photoactivatable or photoswitchable fluorophore species, pose other limits to this technique (Hofmann et al. 2005).
Overall, STED and other members of the RESOLFT family will allow neuroscientists to address many questions about nanoscopic features of live cells and synapses, with sufficient speed to follow a large set of protein and structural re-arrangements in real time.
The term structured illumination microscopy (SIM) refers to a set of techniques that can provide resolution at length scales finer than the normal limits by illuminating the sample with a sequence of periodic patterns of high spatial frequencies, close to the limit of what the microscope can physically transmit (Gustafsson 1999, 2000; Lukosz & Marchand 1963). The set of raw images is then used to extract information computationally about still higher spatial frequencies and thus provides fine spatial details (Figure 10).
In the simplest such approach, known as linear SIM (Figure 10a–c) (Gustafsson 1999, 2000), each raw image is a product of the illumination pattern with the spatial distribution of fluorophores in the sample. When considered in the Fourier or spatial frequency domain, the spatial frequencies contained in one of the raw images are those obtained mathematically through a convolution of the illumination’s spatial frequency transform with the spatial frequency transform of the distribution of fluorophores. Thus, information about spatial frequencies that cannot normally be observed is encoded within the moiré-like beat patterns that arise between the two frequency sets in the Fourier domain (Figure 10b). The highest spatial frequency so encoded is the sum of the conventional resolution limit and the illumination spatial frequency, or typically about twice the conventional limit. By acquiring a set of several images using different angular orientations and phase shifts for the illumination pattern, returning the spatial information obtained from this image set to its original position in frequency space, and then applying an inverse Fourier transform, images in linear SIM can be reconstructed at a resolution twice as fine as that of conventional microscopy (Gustafsson 1999, 2000). Similar ideas can be applied to the axial illumination to encode more information about structure along that axis (Bailey et al. 1993) or to allow 3D SIM (Figure 10a) (Gustafsson et al. 2008, Pielage et al. 2008, Schermelleh et al. 2008). However, as with the 4Pi and I5M approaches to improving resolution, the resolution limit of linear SIM is still constrained by the physics of diffraction. This is not so in nonlinear SIM, in which diffraction is no longer the constraining factor (Gustafsson 2005, Heintzmann et al. 2002).
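In symbols, for sinusoidal illumination of spatial frequency $\mathbf{k}_0$, modulation depth $m$, and phase $\varphi$, each raw image observed through the microscope's optical transfer function (OTF) is

```latex
\tilde{D}(\mathbf{k}) \;=\; \mathrm{OTF}(\mathbf{k})\left[\tilde{S}(\mathbf{k})
  \;+\; \tfrac{m}{2}\,e^{\,i\varphi}\,\tilde{S}(\mathbf{k}-\mathbf{k}_0)
  \;+\; \tfrac{m}{2}\,e^{-i\varphi}\,\tilde{S}(\mathbf{k}+\mathbf{k}_0)\right]
```

where $\tilde{S}$ is the Fourier transform of the fluorophore distribution. Acquiring raw images at three or more phases $\varphi$ allows the three terms to be separated algebraically; the shifted copies $\tilde{S}(\mathbf{k}\mp\mathbf{k}_0)$ carry sample frequencies up to the conventional cutoff plus $|\mathbf{k}_0|$, which is the extension of resolution described above.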
Nonlinear SIM provides even higher resolution by exploiting a nonlinear relationship between the intensity of fluorescence excitation and the rate of fluorescence emission. This nonlinearity effectively induces higher-order spatial harmonics in the pattern of fluorescence excitation, increasing the amount of information that can be encoded in the beat patterns (Figure 10c) (Gustafsson 2005, Heintzmann et al. 2002). One straightforward means for inducing such a nonlinear relationship with sine wave illumination patterns is to have the peaks of the sine waves reach saturating excitation intensities. This approach enables saturating structured illumination microscopy (SSIM), the simplest form of nonlinear SIM. In SSIM, the resolution is set by the number of spatial frequency harmonics that can be extracted while staying above background noise levels. For example, if one harmonic is used, then the resolution limit is approximately threefold smaller than the conventional limit. To date, the most harmonics used in addition to the fundamental frequency has been three, which enabled lateral resolutions of ~50 nm (Gustafsson 2005).
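In terms of spatial frequency support, using harmonics up to order $H$ beyond the fundamental extends the observable cutoff to approximately

```latex
k_{\max} \;=\; k_{\mathrm{obs}} + (H+1)\,k_0 \;\approx\; (H+2)\,k_{\mathrm{obs}}
```

where $k_{\mathrm{obs}}$ is the conventional detection cutoff and the illumination frequency $k_0$ is close to $k_{\mathrm{obs}}$. Setting $H=0$ recovers linear SIM's doubling, and $H=1$ gives the roughly threefold improvement noted above.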
Strengths of SIM are that it records images in a wide-field mode without scanning and thus, in principle, could examine very large fields of view, given camera chips with sufficient numbers of pixels. Another virtue is that neither linear nor nonlinear SIM requires fluorophores to be photoswitched or photoactivated, so both approaches work well with conventional fluorophores. This advantage allows the brightest markers and those least susceptible to bleaching to be chosen, making SIM well suited for multicolor imaging (Schermelleh et al. 2008). One disadvantage is that the saturating intensities used in SSIM can promote photobleaching or photodamage. In the future, photoactivated fluorophores might enable nonlinear forms of SIM that employ substantially reduced illumination intensities (Gustafsson 2000, 2005). Although SSIM breaks the diffraction limit, the accurate extraction of information about fine spatial scales requires high signal-to-noise ratios and minimal specimen drift over the time needed to acquire a raw image set. Frame rates are mainly limited by the need to record multiple raw images per reconstruction, which can take many seconds to acquire (Gustafsson 2005).
Neuroscience studies employing linear SIM include an investigation of structural abnormalities in Drosophila synapse organization, with 100-nm lateral and 250-nm axial resolution (Figure 10d) (Pielage et al. 2008). 3D SIM has recently revealed actin and microtubule cytoskeletons and synaptonemal complexes (Gustafsson et al. 2008) and permitted imaging of single nuclear pore complexes in mammalian cells (Schermelleh et al. 2008). These studies represent just the beginning, and a key development to watch for is the application of SIM to study live cells, which will necessitate reductions in the required illumination powers.
Several super-resolution techniques rely on the detection and localization of single fluorophores to reconstruct images at resolutions unconstrained by diffraction. This approach builds on a large body of research in single-molecule biophysics that probed the properties of single fluorescence emitters and used these to study macromolecules (Betzig & Chichester 1993, Moerner 2002, Moerner & Kador 1989, Weiss 1999). Single molecule–based approaches to super-resolution imaging localize fluorophores to within tight bounds limited not by diffraction but by photon-counting statistics (Moerner 2007).
Building on work showing that single molecules could be imaged (Ambrose & Moerner 1991) and spatially localized to within uncertainties below the far-field diffraction limit (van Oijen et al. 1998), fluorescence imaging with one-nanometer accuracy (FIONA) enabled researchers to track the individual, 74-nm steps of the molecular motor myosin V, settling a debate about the motility mechanism (Yildiz et al. 2003). FIONA accurately localizes a spatially sparse set of single fluorophores by calculating the centroid position of each fluorophore’s emitted photons. The accuracy of localization is limited not by the diffraction-limited size of an emitter’s image on the camera, but rather by the number of detected photons available for the centroid calculation (Michalet & Weiss 2006, Thompson et al. 2002). [This is akin to the statement in probability theory that a standard error of the mean can be much smaller than the standard deviation, given a large set of observations drawn from the same distribution (Bobroff 1986).] Note that this approach works well only if the spatial distributions of photons from different emitters do not overlap on the camera. Once this condition is violated, the centroids of individual emitters can no longer be calculated accurately.
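The scaling of localization accuracy with photon number can be illustrated with a small simulation (all numbers here are assumed for illustration, not taken from the cited studies): the centroid of N photons drawn from a diffraction-limited Gaussian spot localizes the emitter to roughly sigma/sqrt(N), just as a standard error of the mean shrinks with sample size.

```python
import numpy as np

rng = np.random.default_rng(0)

# A diffraction-limited spot spreads a single molecule's photons over the
# camera roughly as a Gaussian; sigma ~100 nm is an assumed typical width.
sigma_nm = 100.0

def localization_error(n_photons, n_trials=1000):
    """Empirical standard deviation of the centroid estimate (nm),
    measured over many simulated imaging trials."""
    photons = rng.normal(0.0, sigma_nm, size=(n_trials, n_photons))
    return photons.mean(axis=1).std()

for n in (10, 100, 1000, 10000):
    print(f"{n:6d} photons: centroid error ~ {localization_error(n):6.2f} nm "
          f"(theory sigma/sqrt(N) = {sigma_nm / np.sqrt(n):6.2f} nm)")
```

With 10,000 detected photons the emitter is pinned down to ~1 nm despite a ~100-nm spot, which is the statistical basis of FIONA's "one-nanometer accuracy."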
Recent techniques known as stochastic optical reconstruction microscopy (STORM) (Figure 11a–c) (Bates et al. 2007; Huang et al. 2008a,b; Rust et al. 2006) and photoactivated localization microscopy (PALM) (Figure 11a,d,e) (Betzig et al. 2006; Hess et al. 2006; Juette et al. 2008; Shroff et al. 2007, 2008) build on the basic ability to localize single fluorophores by dividing a denser set of fluorophores into a sequence of sparse fluorophore distributions, each of which has minimal overlap in the images of the individual emitters (Figure 11). Thus, if one finds the individual fluorophores’ locations for each of these sparse subsets, an entire image can be reconstructed by summing the locations of all fluorophores found within all the subsets. The key trick is the means of imaging a sparse subset of the fluorophores at any one time.
This trick is provided by the use of photoswitchable fluorophores, which can be switched on from a dark state using light of a different color than that used for fluorescence excitation. PALM and STORM typically rely on photoactivatable, genetically encoded fluorescent proteins (Betzig et al. 2006, McKinney et al. 2009, Shroff et al. 2007, 2008, Testa et al. 2008, Vaziri et al. 2008) or photoswitchable dye molecules, such as pairs of a cyanine dye and another shorter-wavelength chromophore that together serve as individual photo-switchable fluorophores (Bates et al. 2005, 2007; Huang et al. 2008a,b; Rust et al. 2006) and that can be covalently conjugated (Conley et al. 2008). The details of how a cyanine dye molecule can be photo-switched remain poorly understood, but the shorter-wavelength “activator” dye in the pair is typically used to photo-activate the “reporter” cyanine fluorophore. After being activated, the reporter emits fluorescence when excited by light of the same color that drives the inactivation transition, until inactivation actually occurs. In addition to cyanine dyes or photo-activatable fluorescent proteins, other photo-switchable or photo-uncaged fluorophores can also be used for PALM and STORM (Betzig et al. 2006, Fölling et al. 2007, Lord et al. 2008, Rust et al. 2006).
To activate a sparse set of fluorophores starting from a denser, dark population of emitters, a brief pulse of activation light is applied. The resulting stochastic set of activated fluorophores is then imaged, with sufficient mean numbers of photons collected to permit the desired accuracy of localization. Before the next set of emitters is activated, the current set, which has already been localized, must be inactivated. Inactivation of fluorescent proteins and caged dyes is typically accomplished by photobleaching, which permanently darkens the fluorophores once they have been inspected; the cyanine dyes are inactivated into a dark state that permits subsequent reactivation. Armed with these approaches, one can assemble a super-resolution image using repeating sequences of activation, imaging, and either bleaching or inactivation (Figure 11a).
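The activation-imaging-bleaching cycle can be sketched as a toy one-dimensional simulation (all numbers are assumptions for illustration; a real reconstruction would also reject frames in which two active emitters overlap on the camera):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1D sample: a dense row of fluorophore positions (nm) along one axis.
true_positions = np.linspace(0.0, 1000.0, 200)
inactive = list(true_positions)   # dark, activatable pool
localized = []                    # accumulated single-molecule localizations

sigma_nm = 100.0                  # diffraction-limited spot width (assumed)
photons_per_emitter = 400         # photons collected before bleaching (assumed)

while inactive:
    # Activation pulse: a sparse random subset switches on (~2% per cycle).
    mask = rng.random(len(inactive)) < 0.02
    active = [p for p, m in zip(inactive, mask) if m]
    inactive = [p for p, m in zip(inactive, mask) if not m]
    # Localize each active emitter by the centroid of its detected photons,
    # then photobleach it, so it never returns to the activatable pool.
    for p in active:
        photons = rng.normal(p, sigma_nm, photons_per_emitter)
        localized.append(photons.mean())

print(f"{len(localized)} emitters localized, typical error "
      f"~ {sigma_nm / np.sqrt(photons_per_emitter):.0f} nm")
```

Summing the localizations over all cycles reconstructs the full structure at a resolution set by photon statistics rather than by diffraction, at the cost of many raw frames per final image.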
Note that the notion of resolution in PALM and STORM differs somewhat from that in conventional imaging modalities (Ober et al. 2004, Ram et al. 2006, Watkins & Yang 2004). PALM and STORM accurately localize each emitter, but to achieve a high-resolution image, the set of sampled emitters must also densely cover the specimen (Shroff et al. 2008). Meeting the latter proviso typically requires both high labeling densities and many rounds of fluorophore activation. Although the first PALM and STORM images required minutes to hours to localize a sufficient number of emitters (Betzig et al. 2006, Rust et al. 2006), subsequent optimization of excitation and the number of raw frames acquired per image have reduced acquisition times to the ~30-s range (Shroff et al. 2008). It appears further optimization of imaging speed will hinge on the development of brighter probes and faster cameras. Faster acquisition speeds are important for accurate fluorophore localization because drift and sample motion on ~10-nm scales are presently the limiting factors (Betzig et al. 2006, Rust et al. 2006). In longer recordings, drift can be corrected to the precision with which bright, fluorescent beads can be tracked across each frame (Betzig et al. 2006, Rust et al. 2006). To date, researchers have used stochastic localization techniques to image cellular adhesion complexes (Shroff et al. 2007, 2008), mitochondrial networks (Bock et al. 2007, Huang et al. 2008a, van de Linde et al. 2008, Vaziri et al. 2008), endocytotic machinery (Bates et al. 2007, Huang et al. 2008b), microtubule systems (Bates et al. 2007; Egner et al. 2007; Heilemann et al. 2008; Huang et al. 2008a,b), membrane proteins (Betzig et al. 2006, Manley et al. 2008), and localization patterns of regulatory proteins in bacteria (Biteen et al. 2008).
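Fiducial-based drift correction of the sort just described can be sketched as follows (the noise levels are illustrative assumptions, not values from the cited work): a bright bead is localized in every raw frame, and its trajectory is subtracted from the molecular localizations.

```python
import numpy as np

rng = np.random.default_rng(2)

n_frames = 500
# Stage drift modeled as a random walk (nm); corrupts every localization.
drift = np.cumsum(rng.normal(0.0, 0.5, n_frames))

# The bright fiducial bead is localized each frame to ~1-nm precision.
bead_true = 0.0
bead_track = bead_true + drift + rng.normal(0.0, 1.0, n_frames)

# A single molecule localized each frame to ~10-nm precision.
molecule_true = 300.0
raw_locs = molecule_true + drift + rng.normal(0.0, 10.0, n_frames)

# Subtract the bead's apparent motion to remove the shared drift.
corrected = raw_locs - (bead_track - bead_track[0])
print(f"raw spread {raw_locs.std():.1f} nm -> corrected {corrected.std():.1f} nm")
```

After correction, the residual spread is set by the per-frame localization noise rather than by the accumulated drift, which is why the achievable correction is limited by how precisely the bead itself can be tracked.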
There are several approaches for extending PALM and STORM to 3D imaging. Initial efforts permitted <100-nm axial localization. In one approach, an astigmatism was deliberately introduced into the optical pathway, allowing each fluorophore’s axial position to be encoded at ~50-nm resolution by the shape of its photon distribution on the camera (Huang et al. 2008a,b; Kao & Verkman 1994). In another approach, a 3D point spread function was fit to a pair of images acquired at two distinct focal planes, providing axial information (Juette et al. 2008, Prabhat et al. 2004). Other approaches bring ~20-nm axial localization to super-resolution microscopy. Enhanced axial resolution can be achieved via an interferometric detection scheme that employs two objectives and three or more cameras (Shtengel et al. 2009, von Middendorff et al. 2008). Similar resolution gains can be realized using a single camera and a microscope in which the point-spread function has been shaped like a double-helix, thereby encoding axial information in the angular orientation of the emitted photon distribution (Pavani et al. 2009, Pavani & Piestun 2008). All these methods for 3D super-resolution imaging have to date been limited to specimens a few microns or so in thickness. However, to inspect specimens of greater thickness, optical sectioning techniques such as two-photon excitation might be combined with the above approaches for obtaining nanoscale resolution in 3D (Fölling et al. 2008a, Vaziri et al. 2008).
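The astigmatic z-encoding mentioned above can be illustrated with a toy point-spread-function model (the parameters are assumed for illustration, not taken from the cited papers): a cylindrical lens offsets the x and y focal planes in depth, so the spot's ellipticity varies monotonically with axial position near focus.

```python
import math

# Assumed toy parameters: in-focus width, Rayleigh-like depth scale, and
# the depth offset between the x and y foci introduced by the astigmatism.
w0, zR, dz_focus = 150.0, 400.0, 250.0   # all in nm

def widths(z):
    """Spot widths along x and y for an emitter at depth z (nm)."""
    wx = w0 * math.sqrt(1 + ((z - dz_focus) / zR) ** 2)
    wy = w0 * math.sqrt(1 + ((z + dz_focus) / zR) ** 2)
    return wx, wy

# The ratio wx/wy changes monotonically with z near focus, so fitting the
# shape of a molecule's photon distribution reads out its axial position.
for z in (-200, 0, 200):
    wx, wy = widths(z)
    print(f"z = {z:+4d} nm: wx = {wx:5.1f} nm, wy = {wy:5.1f} nm, "
          f"wx/wy = {wx / wy:.2f}")
```

At focus the spot is round; above and below focus it elongates along orthogonal axes, breaking the up/down ambiguity that a symmetric defocused spot would have.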
Although PALM and STORM rest on similar ideas, different laboratories have implemented these methods using different types of photoactivatable fluorophores with distinct strengths and limitations. Genetically expressed photoactivatable fluorescent proteins are compatible with live cell imaging, demonstrated at ~0.03 Hz (Shroff et al. 2008), and high-resolution single molecule tracking (Manley et al. 2008, Ram et al. 2008). However, after multiple rounds of activation, imaging, and bleaching, there are progressively fewer unbleached fluorescent proteins remaining, implying a declining density of fluorophores available for activation. Presently there is also a limited set of photoactivatable fluorescent proteins, but more are emerging including some reversibly switchable varieties (Andresen et al. 2008, Biteen et al. 2008, McKinney et al. 2009, Stiel et al. 2008, Subach et al. 2009). Although the brighter photoactivatable fluorescent proteins such as Kaede and EosFP have yielded hundreds of detected photons before bleaching (Betzig et al. 2006, McKinney et al. 2009), photoswitchable cyanine dyes have provided up to ~6000 detected photons per switching cycle (Huang et al. 2008a), facilitating accurate localizations. The use of cyanine dye pairs also permits a substantial number of different color labels for multicolor imaging (Bates et al. 2007). However, cyanine dye photoswitching requires at least a low concentration of thiol-based reducing agents (Bates et al. 2005). Labeling intracellular proteins with dyes also poses some generic challenges. Immunolabeling is a common approach for dye-labeling proteins of interest, but the physical size of antibodies limits the density of fluorophore labels and thus imaging resolution. Direct conjugation of photoswitchable dyes to primary rather than secondary antibodies would improve this limit (Huang et al. 2008b). 
Direct coupling of dye molecules to specific proteins through hybrid chemical genetic approaches (Fernández-Suárez et al. 2007, Griffin et al. 1998, Popp et al. 2007) should facilitate super-resolution imaging by capitalizing on the specificity of genetically encoded tags in living cells and the brightness of synthetic fluorophores. Single molecule–based imaging has also been implemented with conventional fluorophores using ground-state depletion (Fölling et al. 2008b) by optically shelving most of the fluorophores into long-lived dark triplet states and imaging the remaining fluorophores. This approach opens up a much wider catalog of usable fluorophores for single molecule imaging but involves higher levels of unintended photoactivation during fluorescence excitation. The stochastic localization methods have also been extended to encode polarization anisotropies, in addition to emitter positions (Gould et al. 2008, Testa et al. 2008). In the future, stochastic localization techniques should be applicable to large fields of view without compromising speed or resolution. However, this will require cameras with large numbers of pixels and fast acquisition speeds, as well as wide-field, high-NA microscope objective lenses.
All the main super-resolution microscopy approaches are likely to find their own niches within neuroscience. Thus, super-resolution imaging should permit studies of synaptic and macromolecular rearrangements in response to biochemical, synaptic, or electrical stimuli and will likely also be useful for tracing axons and deducing patterns of neuronal connectivity. In many cases there may be a choice between multiple super-resolution techniques adequate to address neuroscientists’ questions about ultra-structure. With the ability to localize single dye molecules, PALM and STORM may often be especially well suited for probing macromolecular interactions. STED may be particularly suited for studies of diffusion and rapid reaction kinetics, given this method’s ability to park the laser beams at a single spot for fast, continual data acquisition. Although not yet demonstrated, SIM will likely soon be extended to live specimens, which would be a boon, given the method’s ability to use conventional fluorophores. The reconstruction of neural wiring diagrams may also be greatly aided by the combination of automated tissue processing techniques and super-resolution microscopy. The quality of the images provided will dictate the degree of difficulty in the computational challenges of image segmentation and axon tracing. Animals with patterns of fluorescence labeling carefully designed to facilitate the identification of neural circuitry and synaptic connections may be particularly important (see mouse genetic strategies below).
Three types of laser-scanning microscopy based on coherent forms of optical contrast have found a growing set of applications in neuroscience research. As in two-photon microscopy, each imaging approach described here requires scanning one or more laser focal spots within the sample. However, unlike the majority of fluorescence techniques, these coherent techniques are often (but not always) used without exogenous labels, thereby exploiting the intrinsic optical contrast mechanisms of tissue itself. Avoidance of fluorescence as the contrast mechanism helps to reduce phototoxicity, which can occur via photophysical side pathways after fluorescence excitation. The ability to inspect unstained tissues facilitates potential clinical applications.
Optical coherence tomography (OCT) relies on backscattering of light as the contrast mechanism and provides optical sectioning in a way analogous to how forms of ultrasound imaging use round-trip time-of-flight measurements to determine the depths of signal sources (Huang et al. 1991). Because light travels too fast for the travel delays to be measured directly, OCT uses a low-coherence interferometer to compare the optical path lengths of light backscattered from objects in the sample with the path length of the interferometer’s reference arm. The low-coherence interferometer in OCT is often implemented in optical fiber, owing to the resulting portability and ease of alignment, the availability of economical but excellent fiber optic components for telecom wavelengths, and the benefits for making miniaturized imaging probes. The accuracy with which the optical path length comparisons can be performed, and thus OCT’s axial resolution, is inversely proportional to the spectral bandwidth of the illumination source. Thus, investigators have used broad-bandwidth but spatially coherent light sources (~50–400 nm) to attain axial resolutions from tens of microns down to the submicron range (Aguirre et al. 2006b, Cense et al. 2004, Drexler et al. 1999, Leitgeb et al. 2004, Potsaid et al. 2008, Považay et al. 2002, Srinivasan et al. 2006, Wojtkowski et al. 2004). A weakly focused illumination beam is usually scanned laterally across the sample, either in 1D for the acquisition of a 2D cross-sectional image or in 2D for the acquisition of an image volume. Diffractive considerations govern the width of the beam waist and thus OCT’s lateral resolution.
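This inverse relationship between source bandwidth and axial resolution follows the standard coherence-length formula for a Gaussian source spectrum; the source parameters below are illustrative assumptions, not systems from the cited studies.

```python
import math

def oct_axial_resolution_um(center_wavelength_nm, bandwidth_nm):
    """Axial resolution (um, in air) for a Gaussian source spectrum:
    dz = (2 ln 2 / pi) * lambda0^2 / dlambda."""
    dz_nm = (2 * math.log(2) / math.pi) * center_wavelength_nm**2 / bandwidth_nm
    return dz_nm / 1000.0

# Illustrative (assumed) sources spanning the bandwidth range quoted above:
for lam, bw in [(1310, 50), (800, 120), (800, 400)]:
    print(f"lambda0 = {lam} nm, bandwidth = {bw} nm -> "
          f"axial resolution ~ {oct_axial_resolution_um(lam, bw):.1f} um")
```

The trend matches the range the text describes: a ~50-nm bandwidth near telecom wavelengths yields roughly 15-um sections, whereas a ~400-nm bandwidth source centered at 800 nm pushes the axial resolution below a micron.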
The best-developed application for OCT is the inspection of the live human retina (Drexler & Fujimoto 2008). The resolution, sensitivity, and acquisition speed of OCT have all improved to where, in combination with adaptive optical techniques for correcting the optical wavefront aberrations that arise in the eye, ultrahigh-resolution OCT systems can image across the retinal laminae and even resolve individual human photoreceptors (Figure 12a) (Zawadzki et al. 2008). OCT’s speed of acquisition (e.g. ~36 frames/s with 50-μs line exposure time; Zawadzki et al. 2008) is important for overcoming potential motion artifacts from saccadic eye movements. Careful comparisons to histological tissue specimens have been important for proper image interpretation because of confounds that can arise between absorption and scattering and from uncertainty in the biological features that scatter light (Anger et al. 2004, Gloesmann et al. 2003). However, the capability for simultaneously visualizing human photoreceptors and other aspects of retinal microanatomy will not only permit improved diagnostics for clinical ophthalmology, but also might allow comprehensive studies of how human visual perception correlates with detailed aspects of the eye’s construction and vascular responses. Forms of OCT that detect Doppler shifts incurred in the sample (Chen et al. 1997, Izatt et al. 1997) are capable of measuring blood flow speeds in the human retina (Leitgeb et al. 2003, White et al. 2003) and have also permitted 3D reconstructions of cerebral microcirculatory speeds in mice (Wang et al. 2007b, Wang & Hurst 2007).
Although OCT has, to date, mainly enabled imaging of anatomical structures or blood flow speeds, possibilities for functional measurements of brain activity by OCT are under exploration. Several OCT studies have examined intrinsic changes in light reflectance that occur in response to visual stimulation in isolated vertebrate (Yao et al. 2005) and mammalian (Bizheva et al. 2006) retinae, as well as in the retinae of live rats (Srinivasan et al. 2006) and human subjects (Srinivasan et al. 2009). The use of functional OCT for 3D intrinsic signal mapping in the mammalian brain is also being explored (Aguirre et al. 2006a, Chen et al. 2008a, Maheswari et al. 2002, Maheswari et al. 2003, Rajagopalan & Tanifuji 2007). This might provide a 3D alternative to the conventional use of intrinsic optical reflectance signatures of brain activity for functional brain mapping. However, more work remains to be done toward understanding the depth dependence of activity-related reflectance changes. A few in vitro studies have also used OCT to detect action potentials in excised invertebrate nerves, relying on small changes in refractive index or nanometer-scale changes in axonal volume or geometry that modulate a nerve’s light scattering properties (Akkin et al. 2004, 2007; Fang-Yen et al. 2004; Lazebnik et al. 2003). Further development of this approach would be of interest because it might conceivably lead to functional measurements in the intact nervous system, such as in the optic nerve in the context of clinical ophthalmology.
Coherent anti-Stokes Raman scattering (CARS) microscopy creates images by using intrinsic signatures of molecular vibrations within the sample for contrast generation. A molecular vibration of interest is probed using nonlinear interactions in the sample between three photons from two laser beams, known as the pump beam and the Stokes beam, whose frequency difference is tuned to match the vibrational resonance. The CARS signal, which has a frequency equal to the sum of the pump laser beam frequency and that of the resonance, rises linearly with the intensity of the Stokes beam and quadratically with the intensity of the pump beam. This enables 3D optical sectioning, as in other forms of nonlinear optical microscopy. However, a nonresonant background signal also arises from the specimen even when the frequency difference of the two lasers is detuned from a molecular resonance. This nonresonant background can overwhelm fainter CARS signals from vibrational resonances. Thus, unlike with two-photon imaging for which femtosecond laser pulses are generally used, in CARS microscopy it is common to use pulse durations of picoseconds, which permit superior signal-to-background ratios because the narrower spectral bandwidths of these pulses are better matched to those of vibrational Raman bands (Cheng et al. 2001).
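The frequency and intensity relationships just stated can be written down directly (a bookkeeping sketch; the laser frequencies below are illustrative assumptions, not values from the cited work):

```python
# Frequencies in THz (illustrative values for a lipid CH-stretch experiment).
pump_thz = 283.0          # assumed pump laser frequency
raman_shift_thz = 87.5    # assumed vibrational resonance frequency

# The Stokes beam is tuned so the pump-Stokes difference hits the resonance;
# the anti-Stokes signal then emerges at pump + resonance frequency.
stokes_thz = pump_thz - raman_shift_thz
cars_thz = 2 * pump_thz - stokes_thz      # equals pump_thz + raman_shift_thz

def cars_signal(i_pump, i_stokes, k=1.0):
    """Resonant CARS intensity scales quadratically with the pump intensity
    and linearly with the Stokes intensity."""
    return k * i_pump**2 * i_stokes

# Doubling the pump quadruples the signal; doubling the Stokes doubles it.
base = cars_signal(1.0, 1.0)
print(cars_signal(2.0, 1.0) / base, cars_signal(1.0, 2.0) / base)
```

The overall cubic dependence on illumination intensity is what confines signal generation to the focal volume and yields the 3D sectioning noted above.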
The integration of NIR lasers, interferometric detection strategies, epi-detection approaches for the study of nontransmitting specimens, and polarization-sensitive techniques (Cheng 2007, Volkmer et al. 2001, Zumbusch et al. 1999) has greatly improved early forms of CARS microscopy (Duncan et al. 1982). The strongest CARS signals tend to come from vibrations of C-H bonds, which are rich within lipids. Thus, for neuroscientists, CARS offers an intrinsic signal for inspecting nerve myelination and its degradation in vitro (Figure 12b,c), in animal disease models, and potentially in clinical contexts (Fu et al. 2007, Wang et al. 2005). CARS microscopy has allowed imaging of single myelinated axons in the live mouse brain (Fu et al. 2008). Additional in vivo studies have involved imaging of mouse brain tumors (Evans et al. 2007) and rat spinal cord (Wang et al. 2007a).
Just recently, a technique that may supplant CARS microscopy in many applications has emerged. Stimulated Raman scattering (SRS) microscopy is another form of coherent Raman imaging that provides information about molecular vibrations, but SRS microscopy does not suffer from a nonresonant background signal, greatly aiding detection sensitivity and interpretation of signals (Freudiger et al. 2008). Like CARS microscopy, SRS imaging appears well suited for examination of myelin sheaths and other tissues rich in lipids.
Second-harmonic generation (SHG) microscopy relies on coherent frequency doubling of the illumination for generating optical contrast. The nonlinear SHG process converts two incoming photons into one outgoing photon of twice the frequency and permits inherent optical sectioning. Unlike with two-photon excited fluorescence, SHG does not involve light absorption and then reemission after a ~1–10-ns delay. Rather, frequency doubling is essentially instantaneous, and the outgoing light is coherent with the illumination. Biological specimens that consist of highly ordered but directionally asymmetric molecular assemblies, such as collagen or striated muscle fibers, tend to produce strong SHG signals owing to the dependence of frequency doubling on a broken spatial inversion symmetry within the sample material structure (Campagnola & Loew 2003). Because SHG does not require absorption, it is often well generated over a broader range of illumination frequencies than is typical for fluorescence excitation.
A recent application of SHG of pertinence to the study of motor control and neuromuscular diseases involves in vivo imaging of sarcomeres, the basic contractile units of striated muscle (Llewellyn et al. 2008). Sarcomere force-extension relationships are a crucial determinant of a muscle’s force production, but previous methods for examining sarcomeres and their extension lengths generally required excision of muscle fibers out of their native physiological and biomechanical contexts and thereby removed them from the nervous system’s control signals. By inserting microendoscopes (350–1000 μm in diameter) (Figure 1a) into live muscles (Llewellyn et al. 2008), investigators have imaged the extension lengths of individual sarcomeres by using the strong SHG signals that are thought to arise in sarcomeres’ myosin motor tails (Campagnola & Loew 2003, Campagnola et al. 2002, Plotnikov et al. 2006) (Figure 13a–f). In this way, they could visualize the 3D structures of muscle fibers, characterize sarcomeres’ micron-scale extension lengths in vivo, and monitor sarcomeres’ millisecond-scale contractile dynamics (Figure 13a–f).
The application of minimally invasive SHG microendoscopy to animal models and human subjects (Llewellyn et al. 2008) opens the door to novel studies of how signals from the nervous system control body movements, to in vivo imaging studies of animal models of neuromuscular disorders, and potentially to new clinical diagnostics, surgical strategies, and means of monitoring disease progression. Many neurological diseases of motor control, such as muscular dystrophy, as well as disorders such as muscle contractures caused by stroke or cerebral palsy, likely involve disruptions to sarcomere lengths and force-extension relationships (Plotnikov et al. 2008, Pontén et al. 2007). Further, multiple genetic diseases of motor control have recently been traced to mutations in sarcomeric proteins (Laing & Nowak 2005) and may cause disruptions in sarcomere structures and lengths (Plotnikov et al. 2008). Because the microendoscopes used for sarcomere imaging can be inserted through hypodermics of sizes comparable to those for conventional electromyography (EMG), future approaches may combine SHG microendoscopy with EMG to allow simultaneous monitoring of the electrical signals driving muscle contraction and the resulting mechanical responses.
A key application of SHG microscopy that does not rely on intrinsic signals involves the insertion into neuronal membranes of a chromophore sensitive to electric fields, thereby allowing imaging of membrane voltage transients. This combines the advantages inherent to nonlinear optical microscopy, including optical sectioning and significant penetration into thick tissue, with those of SHG, including lack of light absorption and relative enhancement of signals from ordered cell membranes as opposed to cytoplasmic regions, for the study of neuronal voltage dynamics. Upon incorporation into neuronal membranes, certain molecules with asymmetric structure can respond rapidly to electric fields with changes in their electro-optic or electro-chromic properties, changes in mean angular orientation relative to the membrane surface, or a combination of these effects (Jiang et al. 2007, Pons et al. 2003, Pons & Mertz 2006). Such effects can yield sizable voltage-dependent SHG signals in neurons [e.g. ~7.5% (Dombeck et al. 2005) to ~10%–12% (Nuriya et al. 2006) changes per 100 mV in brain slices, and up to ~25% per 100 mV in cultured neurons (Nemet et al. 2004)].
Investigators have demonstrated the use of SHG microscopy to measure neuronal action potentials in cultured Aplysia neurons using a membrane-bound styryl dye (Dombeck et al. 2004), in cultured hippocampal pyramidal neurons using a retinal photopigment (Nemet et al. 2004), and in mammalian brain slices using FM 4–64 (Araya et al. 2006, Dombeck et al. 2005, Nuriya et al. 2006). Typically, voltage responses are measured by fast line-scanning of the laser focal spot over the sample and then averaging signals from multiple stimulation trials. The modest dynamic range of SHG signals has generally made the detection of voltage responses at the level of single trials challenging, albeit not impossible (Sacconi et al. 2006, 2008).
SHG membrane-potential imaging with the dye FM 4–64 has permitted direct measurements of membrane potential in intracellularly filled mouse pyramidal neurons, enabling the first studies of membrane potential dynamics in dendritic spines (Figure 13g–k) (Nuriya et al. 2006). With this dye, the mechanisms of voltage sensitivity appear to be purely electro-optic (Jiang et al. 2007). SHG is sensitive to voltage changes on fast (<1 ms) timescales, and the changes in membrane potential in spines during action potential invasion were detectable (Figure 13i–k) (Nuriya et al. 2006). A later study using similar methods of monitoring membrane potential in layer 5 mouse neocortical pyramidal neurons reported a negative correlation between the lengths of dendritic spine necks and the transmission of voltage changes between the soma and the spine heads (Araya et al. 2006).
The cloning of GFP opened the door to a wide set of possibilities for expressing fluorescent labels under genetic control (Chalfie et al. 1994). Today, fluorescent proteins of many color variations (so-called XFPs) have been created, permitting concurrent visualization of multiple genetically expressed fluorescent tags. Because absorption cross-sections for two-photon fluorescence excitation tend to be spectrally broad, a single wavelength setting for the illumination often suffices in two-photon microscopy to excite more than one fluorophore species. For neuroscientists, the introduction of mice that brightly expressed XFPs in neuronal subsets under the control of the Thy1 promoter furthered imaging opportunities in many directions (Feng et al. 2000). These mice facilitated live animal imaging studies, including studies of motor axon dynamics and neuromuscular junction formation (Keller-Peck et al. 2001, Nguyen et al. 2002, Walsh & Lichtman 2003), as well as of dendritic spine turnover (see next section). Other notable genetic techniques for creating fluorescently labeled “tool mice” have since arisen, some of which build on the proven success of using the Thy1 promoter to drive high expression levels of fluorescent protein. This section discusses genetic mosaic strategies for fluorescence labeling, in which a single adult animal has different sets of somatic cells with distinct genotypes.
All the mosaic strategies described here rely on Cre recombinase–mediated DNA recombination. The enzyme Cre recombinase (Cre) recognizes specific directional DNA sequences, the canonical loxP site and its variants, which can be inserted to flank DNA segments of interest (sometimes called “floxed” segments). Cre recombinase can bind to two loxP sites and promote recombination of the DNA between the two sites. If both loxP sites have the same directional orientation, recombination excises the floxed segment. If the loxP sites have opposite directional orientations, recombination inverts but does not excise the floxed segment. Translocation recombination is also possible between separate strands of DNA, leading to an exchange of DNA sequences starting from the loxP sites. As elaborated below, this set of possibilities permits sophisticated manipulations of gene expression. The wide availability of Cre driver mouse lines, in which Cre recombinase is expressed within specific populations of cells, as well as of driver lines permitting expression of ligand-inducible forms of Cre, facilitates the spatiotemporal control of DNA recombination and thus of XFP expression.
As useful as the original Thy1-XFP mice are, recently created mosaic mice with neurons labeled in a rainbow of colors should further facilitate experimentation by allowing dense sets of individual cells and their overlapping processes to be distinguishable by color. Livet et al. (2007) have described two transgenic strategies for creating such multicolor “Brainbow” mice (Figure 14a–c). The ability to trace neuronal processes for mammalian neural circuit reconstructions may be greatly aided in Brainbow mice by the pantheon of spectrally separable tags (see Automated Microscopy section). Brainbow mice may also serve neuronal lineage analyses, allowing individual neuronal progenitor cells and their progeny to be labeled in a common color from among a large palette of possibilities. For experiments that were previously feasible only with the sparsely expressing lines of the original Thy1-XFP mice, Thy1-Brainbow mice may enable larger data sets to be acquired from each animal because the capabilities for spectral separation may allow use of lines with denser neuronal labeling patterns.
The DNA constructs used to generate Brainbow mice exploit Cre-mediated recombination. In the Brainbow-1 strategy (Figure 14a, top), distinct XFP exons are floxed by three mutually exclusive types of lox sites, the canonical loxP site and two incompatible lox site variants (Livet et al. 2007). Upon exposure to Cre recombinase, one of three mutually exclusive gene excisions can occur, resulting in the expression of one of three distinct XFPs following recombination. A fourth color can be used to indicate the lack of recombination.
The Brainbow-2 strategy (Figure 14a, bottom) exploits inversion of floxed XFP-coding segments. Cre recombinase can invert each segment multiple times, so the segments randomly settle in either a forward or backward orientation, yielding two different XFP-coding possibilities. Additionally, excisions can occur in Brainbow-2 (Figure 14a, bottom). The total set of inversion and excision possibilities yields a random expression of one of four XFPs.
For both Brainbow strategies, the random integration of multiple Thy1-Brainbow constructs into the mouse genome led to an even wider set of color labels owing to the large set of combinatorial possibilities for XFP expression from each incorporation site. Crossing Thy1-Brainbow mouse lines with Cre driver transgenic mouse lines led to ~100 different color hues, arising from the combinatorial expression of three or more XFPs, in a wide variety of glial and neuronal labeling patterns. For example, Brainbow mice have permitted multicolor imaging of densely packed neurons and their processes, such as in peripheral motor nerves; cerebellar granule cells and mossy fibers; brain stem axonal tracts (Figure 14b); and the hippocampal dentate gyrus (Figure 14c). In one of the Brainbow-1 mouse lines, astrocytes were also labeled, an unexpected effect because Thy1 usually drives expression in neuronal subsets.
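The rough scale of this palette can be checked with a back-of-the-envelope stars-and-bars count (our sketch, not the authors' calculation): if each of n tandem construct copies independently settles on one of three XFPs, the number of distinct expression ratios is the number of size-n multisets over three colors, C(n+2, 2).

```python
from math import comb

def distinct_hues(copies, n_xfps=3):
    """Count distinct XFP expression ratios when each of `copies`
    tandem construct copies independently resolves to one of
    `n_xfps` fluorophores (stars-and-bars multiset count)."""
    return comb(copies + n_xfps - 1, n_xfps - 1)

for n in (5, 10, 15):
    print(n, distinct_hues(n))  # 21, 66, 136
```

On this crude model, roughly a dozen copies per integration site already yields on the order of 100 hues, consistent with the ~100 observed; real palettes also depend on which hues are spectrally distinguishable in practice.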
The Brainbow strategy will likely undergo refinements that continue to increase its utility. The current use of the Thy1 promoter limits expression of the XFPs to late embryonic and postnatal stages and hinders use of the strategy for early developmental studies. For cases in which immunostaining is required, antibody labeling unfortunately cannot differentiate between some of the fluorescent proteins because of their structural similarity (for example, GFP, YFP, and CFP). GFP is also spectrally similar to YFP and CFP, so in Brainbow-2 lines a nuclear localization signal was fused to GFP to aid disambiguation of the fluorescence signals (Livet et al. 2007). As the number of available XFPs grows, the combinatorial possibilities for Brainbow color labeling will expand even further. Moreover, as super-resolution microscopy techniques continue to develop, the ability to trace individual axons within densely labeled axonal bundles in Brainbow mice should also improve.
In addition to permitting expression of multiple color markers as in Brainbow, mosaic genetic techniques are well suited for studying sparse sets of genetically modified neurons among a wild-type background population of cells, and often for examining spatiotemporal patterns of circuit formation in the developing brain. For such studies in mice, an approach called mosaic analysis with double markers (MADM) yields mosaic mice expressing two fluorescent markers and has the option of performing a conditional gene knock-out (Zong et al. 2005). The MADM strategy was inspired in part by a technique called mosaic analysis with a repressible cell marker (MARCM) for use in Drosophila (Lee & Luo 1999) and uses Cre-loxP mediated interchromosomal recombination in somatic cells to create mosaic animals (Figure 14d–f). MADM mice start life with homologous chromosomes containing reciprocally chimeric genes knocked in at identical chromosomal loci. If GFP and a red fluorescent protein (RFP) are used as the two markers, the chimeric genes contain split exons of either GFP/RFP or RFP/GFP (written here as N-terminus/C-terminus), with a loxP site lying within an intron between the two exons (Figure 14d). Because the intron interrupts the coding sequences of the XFPs in different reading frames, functional fluorescent proteins are not produced from the original chimeric genes. Fluorescent protein expression is enabled only when MADM mice are crossed with a mouse Cre driver line to introduce Cre-recombinase in designated cell populations.
In the progeny of this cross, after DNA replication in G2 phase (Figure 14d, left), Cre-mediated interchromosomal recombination events can occur across arms of the two chromosomes prior to mitosis (Figure 14d, middle). These comparatively rare interchromosomal recombination events yield two chromosomes that each have a reconstituted XFP-encoding gene in one of the chromatids. There are then two possibilities for how mitosis can proceed (Figure 14d, right). In an X segregation, in which the recombinant chromatids from the two chromosomes are distributed to different daughter cells, one daughter cell will be heterozygous for a functional GFP and the other heterozygous for a functional RFP (Figure 14d, right top). If a Z segregation occurs, with the recombinant chromatids from the two chromosomes reaching the same daughter cell, one daughter cell will be double labeled with both GFP and RFP and the other will be unlabeled (Figure 14d, right bottom). Note that if recombination occurs in a G1 phase or postmitotic cell, this cell will be double labeled.
To accomplish a mosaic pattern of conditional gene knock-out, a mutation of interest is placed distal to the chimeric gene on one arm of the chromosome, such as the GFP/RFP arm. If X segregation occurs following G2 recombination, the GFP-labeled cell will be homozygous for the mutation, whereas the RFP-labeled cell will be homozygous for the wild-type gene. If Z segregation occurs, both the unlabeled and the double-labeled daughter cells will be heterozygous for the mutation. Thus, with MADM, the genotype of any labeled cell can be discerned by its color. This labeling strategy facilitates tightly controlled experiments because in individual mice there will be fluorescently labeled mutant and nonmutant cells available for inspection.
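The color-to-genotype logic of MADM reduces to a small lookup table, sketched here with illustrative names and assuming, as in the text, that the mutation lies distal on the GFP/RFP arm:

```python
# Hedged sketch of MADM daughter-cell outcomes following a G2
# interchromosomal recombination event; names are illustrative.

def madm_daughters(segregation):
    """Map 'X' or 'Z' chromatid segregation to the two daughter
    cells' (color, genotype) pairs, with the mutation distal on
    the GFP/RFP chromosome arm."""
    outcomes = {
        "X": [("GFP", "mutant homozygote"),
              ("RFP", "wild-type homozygote")],
        "Z": [("GFP+RFP", "heterozygote"),
              ("unlabeled", "heterozygote")],
    }
    return outcomes[segregation]

print(madm_daughters("X"))
print(madm_daughters("Z"))
```

The experimental power of MADM follows directly from this table: every labeled cell's color unambiguously reports its genotype, so mutant and control cells can be scored side by side in the same animal.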
Cell labeling using MADM has been demonstrated across a variety of areas in the mouse brain, including hippocampus, dentate gyrus (Figure 14e), neocortex (Figure 14f), cerebellum, and retina. MADM has also permitted clonal analyses of neurogenesis, differentiation, and axonal projection patterns of cerebellar granule cells (Espinosa & Luo 2008, Zong et al. 2005). To date, only the well-characterized ROSA26 chromosomal locus on the long arm of chromosome 6 has been employed as the knock-in site, presently limiting conditional knock-outs to genes lying distal to ROSA26. Another limitation of the existing MADM mice, likely to be remedied in future versions, is that the RFP is insufficiently bright for in vivo imaging; to date, visualization of the RFP marker has been demonstrated only by immunolabeling. As these aspects of MADM are refined, the approach will facilitate in vivo imaging studies of neuronal circuit establishment, with direct comparisons between mutant and wild-type cells in the same brain area.
Another mosaic technique for fluorescence labeling of single cells is single-neuron labeling with inducible Cre-mediated knock-out (SLICK) (Figure 14g–h) (Young et al. 2008). To generate SLICK mice, separate back-to-back copies of the Thy1 promoter drive the two genes coding for YFP and for CreERT2, a ligand-inducible form of Cre recombinase that is activated by injection of the drug tamoxifen (Figure 14g, top). To create a mosaic conditional genetic knock-out or a conditional transgenic (Figure 14g, bottom), SLICK mice are crossed with another mouse line in which a gene of interest is either floxed or preceded by a floxed transcriptional stop sequence, respectively. Systemic administration of tamoxifen at chosen time points permits temporal control over Cre recombinase activity in the progeny of the cross.
To demonstrate the strategy, multiple lines of SLICK mice have been created and characterized. In one example, SLICK mice were crossed with a Cre reporter strain in which a floxed STOP precedes a lacZ sequence coding for β-galactosidase. Upon tamoxifen administration, lacZ expression as assessed by immunofluorescent staining for β-galactosidase and YFP fluorescence were jointly visible in brain slices, illustrating dual expression of YFP and CreERT2 within single cells (Figure 14h). SLICK mice have also been used to study the elimination of synaptic transmission at the adult mouse neuromuscular junction by a mosaic conditional knock-out of the choline acetyltransferase (Chat) gene, which encodes an enzyme involved in acetylcholine synthesis. In YFP-labeled motor axons, neuromuscular junctions stably persisted in adult animals even four to eight weeks after depletion of the ChAT protein (Young et al. 2008). The coupling of fluorescence labeling and conditional knock-out, which should ideally be checked for each floxed gene, was found to be >95% for the Chat knock-out. A key feature of SLICK is that in an in vivo imaging study there is the possibility of inspecting the same individual neurons before and after the induction of gene knock-out.
As with MADM mice, SLICK mice are well suited for examining the effects of conditional gene knock-out on neurons’ morphology and integration into functional circuits. Both MADM and SLICK enable mosaic genetic manipulation of cells, but the two approaches offer distinct advantages. SLICK offers temporal control via inducible genetic manipulation of neuron types in which the Thy1 promoter is active, which often starts in late embryonic development and continues into adulthood. MADM is used to study mitotic cells, which for neuronal precursors generally means labeling during early brain development. A strength of MADM is the simultaneous generation of two colors of fluorescently labeled cells, which provide internal controls (RFP- and double-labeled cells) for conditional gene knock-out studies. Following tamoxifen administration, the YFP-labeled cells generated in SLICK are very likely, but not absolutely guaranteed, to carry the conditional gene knock-out. MADM can also be used to study mutations of any kind lying distal to a MADM chromosomal knock-in site, whereas SLICK requires floxed conditional alleles.
An ability to monitor individual neurons repeatedly by long-term time-lapse imaging is crucial for studies of how the nervous system evolves during development, aging, learning, or other life experiences, or over the course of disease progression. Chronic mouse preparations permitting time-lapse imaging are well established for studies of the peripheral nervous system (Lichtman et al. 1987, Purves et al. 1986, Purves & Hadley 1985) and cranially implanted tumors (Melder et al. 1995, Yuan et al. 1994). More recently, XFP-labeled transgenic mice have facilitated studies of neurons in the periphery (Bishop et al. 2004, Gan et al. 2003, Walsh & Lichtman 2003) and, in combination with two-photon microscopy, have enabled chronic imaging studies within the mammalian brain.
A main advantage of chronic imaging preparations concerns the experimental designs that can be achieved by longitudinal studies, in which one follows individual animals over time, as compared with population studies, in which different sets of animals are inspected at distinct time points. Information about dynamics in individual subjects is inherently absent when each subject is examined only once, as in postmortem studies. For disease studies, analysis of within-subject dynamics is often crucial because it can reveal how early cellular-level symptoms might predict disease outcomes or how initial cellular signatures of disease might be reversed by intervention. Longitudinal imaging can also permit a substantial reduction in the number of animals needed, because each animal ideally provides data at all time points, not just one.
In 2002, two different chronic mouse preparations emerged for in vivo time-lapse two-photon imaging of neuronal dendrites and spines. The two groups responsible both examined layer 5 neocortical neurons in Thy1-XFP mice over periods of weeks to months (Grutzendler et al. 2002, Trachtenberg et al. 2002). One of the chronic preparations drew on earlier uses of implanted glass “cranial windows” that had enabled seminal intravital fluorescence microscopy studies of brain tumor angiogenesis (Melder et al. 1995, Yuan et al. 1994). By implanting glass windows into the cranium it was possible to monitor spines on large portions of the dendritic trees of XFP-labeled neocortical pyramidal neurons, repeatedly for up to ~30 days after surgery and ~100 μm below the pial surface of the brain (Trachtenberg et al. 2002). The other chronic preparation involved shaving the mouse’s cranium to be sufficiently thin (~30–50 μm) that two-photon imaging could be performed through the skull to inspect neocortical neuronal dendrites up to ~100–200 μm below the pia (Grutzendler et al. 2002). Because the skull regrows and repeated thinning renders it opaque, the thinned skull preparation generally provided data at only a few time points (e.g., ~4) but up to ~18 months apart (Zuo et al. 2005a). The two methods yielded different results regarding the long-term stability of dendritic spines, which generated debate, in part about methodologies (see below) (Xu et al. 2007), but results obtained with both approaches did suggest that the majority of layer 5 pyramidal cell spines in the adult brain are stable over months (Holtmaat et al. 2005, Zuo et al. 2005a).
Multiple laboratories have since used chronic imaging techniques to study the stability and turnover of dendritic spines from layer 2/3 and layer 5 neocortical pyramidal neurons under various experimental manipulations in adult mice (Chow et al. 2009; Holtmaat et al. 2005, 2006; Knott et al. 2006; Majewska et al. 2006; Trachtenberg et al. 2002; Zuo et al. 2005a,b) (Figure 15a,c,e). Other studies have examined axonal dynamics (Figure 15b) (De Paola et al. 2006, Nishiyama et al. 2007, Portera-Cailliau et al. 2005), dynamic remodeling in layer 2/3 interneurons (Lee et al. 2006, 2008), redistribution of PSD-95 in dendrites of layer 2/3 pyramidal neurons (Gray et al. 2006), as well as the dynamics of neurogenesis and integration of new dendrites in the olfactory bulb (Mizrahi 2007, Mizrahi et al. 2006). Long-term imaging is also being established for use in deeper brain tissues using methods that allow the repeated insertion of microendoscope probes to the same tissue site (Figure 1c), for example to visualize CA1 hippocampal pyramidal neurons by two-photon microendoscopy (Figure 15d) (Deisseroth et al. 2006).
Another important application of time-lapse microscopy in XFP-expressing mice concerns the study of animal models of brain disease or injury because repeated imaging allows the temporal dynamics of disease and recovery to unfold within individual subjects (Misgeld & Kerschensteiner 2006, Pan & Gan 2008). Several studies have investigated the dynamics of dendritic spines proximal to an ischemic stroke (Figure 15e) (R. Mostany & C. Portera-Cailliau, unpublished observations; Winship & Murphy 2008, Zhang et al. 2005), and another has examined spine dynamics in a mouse model of prion disease (Fuhrmann et al. 2007). By crossing mouse models of Alzheimer disease with XFP-expressing lines, other time-lapse studies have monitored neuronal (Figure 15f) (Brendza et al. 2005, Meyer-Luehmann et al. 2008, Spires et al. 2005, Tsai et al. 2004) and microglial (Koenigsknecht-Talboo et al. 2008, Meyer-Luehmann et al. 2008) changes at the sites of amyloid plaques. Not all time-lapse imaging studies of disease have required two-photon microscopy. XFP-expressing mice have also facilitated studies of spinal cord injury by confocal microscopy (Figure 15g) (Davalos et al. 2008; T. Misgeld, unpublished data). Furthermore, time-lapse SHG microscopy has permitted imaging of fibrillar collagen during the growth of brain tumors (Brown et al. 2003, Perentes et al. 2009).
Among the groups using time-lapse imaging in the brain to study dendritic spines, discussion has arisen over whether the different surgical preparations can affect measurements of spine stability. With the cranial window preparation, layer 5 pyramidal neurons in mouse whisker barrel cortex had spine turnover rates of ~50% per month in 6–10-week-old mice (Holtmaat et al. 2005, Trachtenberg et al. 2002). Results from the thinned skull preparation suggested this turnover rate was ~10% per month in the visual cortex (Grutzendler et al. 2002). These seemingly conflicting results stimulated discussion about the factors that might influence both the real and the measured stability of dendritic spines (Knott & Holtmaat 2008, Majewska et al. 2006, Xu et al. 2007). Biological factors that might influence spine turnover rates in long-term imaging studies include the animals’ ages (Holtmaat et al. 2005, Zuo et al. 2005a), experimental manipulations such as whisker trimming (Holtmaat et al. 2006, Zuo et al. 2005b) or visual deprivation (Hofer et al. 2009), and differences between brain areas (Holtmaat et al. 2005, Majewska et al. 2006, Zuo et al. 2005a). Glial activation beneath an implanted cranial window was proposed as a factor, or as a symptom of factors, that could promote spine turnover (Xu et al. 2007). However, other studies that used both thinned skull and cranial window assays for imaging experiments did not report any notable differences between the two in spine turnover rates (Chow et al. 2009, Majewska et al. 2006; see also Holtmaat et al. 2009). Still other factors may influence the ability to make accurate determinations of spine turnover. These include user-dependent variability in spine-counting methods, the brightness and patterns of fluorescence labeling, and any potential differences that might exist between the two surgical preparations in the capacity to resolve the smallest and likely the most labile spines (Holtmaat et al. 2005, 2006).
Given these sources of potential variability, it will be important for scientists to benchmark results across laboratories and to demonstrate equal capabilities for resolving spines and comparable criteria for scoring spine stability. It seems equally critical to have experimental designs that, whenever possible, permit quantitative comparisons across control and experimental sets of animals to isolate the factors that may influence neuronal dynamics. As these issues become better explored and controls become standardized, chronic in vivo imaging will become an even more powerful tool for neuroscientists to study gradual changes in the intact brain.
The past few years have been exceptionally productive for the development of new microscopy techniques. Major breakthroughs have occurred on multiple technological fronts, impacting neuroscientists’ abilities to examine the nanoscale, create large-scale tissue reconstructions, and image cellular properties in live animals. These newfound capabilities will further expand the role of microscopy in neuroscience research. Many of the methods discussed here remain in early stages of development, but as these approaches progress we will start to see sophisticated combinations of techniques, such as combinations of complex fluorescence-labeling strategies with in vivo or super-resolution microscopy. Another subset of techniques is intended for the acquisition of massive data sets, which pose unprecedented challenges for neuroscientists regarding data management, analysis, and mining.
Overall, the technological complexity of image-based experimentation in neuroscience continues to grow, and in the long run, the highly collaborative approaches that arose in other imaging disciplines such as medical imaging or astronomy might become common in certain specialties within neuroscience that require large-scale automated imaging and massive data sets. In addition to these potential alterations in collaborative norms, the basic conceptual understanding of an image will also evolve. As explored in this review, a growing number of techniques do not directly capture the final images of the specimen, but rather create them computationally on the basis of optical data of various forms. Image data sets of many types now require extensive, often model-based, computational analysis just to be interpreted. As microscopy grows in logical abstraction, our images, increasingly processed, reconstructed, filtered, deconvolved, and distilled, will assume some of the attributes of computational hypotheses. This view of images as mere hypotheses, rather than as direct depictions of reality, should not deter us from image-based experimentation. On the contrary, the notion of images as hypotheses should push us toward new logical and statistical strategies for hypothesis testing.
We have chosen these challenges as examples of technological achievements that would address key needs of neuroscience research across the different areas covered by this review:
This review greatly benefited from critical feedback on portions of the text and contributions of images generously provided by many neuroscientists and microscopists. For this substantial help we gratefully thank D. Albrecht, T. Anderson, C. Bargmann, R. Barretto, E. Betzig, E. Bushong, P. Carlton, J.-X. Cheng, D. Chiu, G. Davis, D. Dombeck, M. Ellisman, C. Engelbrecht, J.S. Espinosa, G. Feng, S. Finkbeiner, W. Gan, M. Gustafsson, S.W. Hell, F. Helmchen, T. Holy, D. Kleinfeld, M. Levoy, J. Lichtman, H. Lu, L. Luo, M. Martone, D. Mayerich, T. Misgeld, W.E. Moerner, U.V. Nägerl, A. Nimmerjahn, C. Portera-Cailliau, M. O’Connell, J. Rosen, P. Saggau, T. Sasaki, J. Sharpe, K. Shen, H. Shroff, S. Smith, D. Tank, P. Tsai, J. Trachtenberg, D. Vučinić, J. Werner, C. Xu, M.F. Yanik, R. Yuste, R. Zawadzki, X. Zhuang, and Y. Ziv. We particularly thank P. Saggau, C. Xu, and X. Zhuang for extended discussions.
Our laboratory’s work on microscopy is supported by a Stanford Bio-X fellowship (B.A.W.), a Stanford Graduate Student fellowship (L.D.B.), NSF Graduate Fellowships (L.D.B. and E.A.M.), and grants from NINDS, NIBIB, NIDA, ONR, NSF, and the Packard, Coulter, and Beckman Foundations (M.J.S.).
The authors are not aware of any affiliations, memberships, funding, or financial holdings that might be perceived as affecting the objectivity of this review.