Live imaging has gained a pivotal role in developmental biology since it increasingly allows real-time observation of cell behavior in intact organisms. Microscopes that can capture the dynamics of ever-faster biological events, fluorescent markers optimal for in vivo imaging, and, finally, reconstruction and analysis software adapted to complete the data-processing pipeline all contribute to this success. Focusing on temporal resolution, we discuss how fast imaging can be achieved with minimal cost to spatial resolution, photon count, or the ability to reliably and automatically analyze images. In particular, we show how integrated approaches to imaging that combine bright fluorescent probes, fast microscopes, and custom post-processing techniques can address the kinetics of biological systems at multiple scales. Finally, we discuss remaining challenges and opportunities for further advances in this field.
Over a wide range of scales, structures inside living organisms are highly dynamic. Chromatin moving inside the nucleus, morphogen diffusion, vesicle trafficking, cell migration, and organ morphogenesis are just a few key processes that span several orders of magnitude in size and speed. Although observing such processes over time has become possible in recent years, the role of biological motion in cell function remains poorly understood. Furthermore, quantitative characterization of the kinetics of enzymatic activity, protein maturation, or complex genetic networks, for example during cell differentiation, remains extremely scarce. Images captured over time constitute the ideal starting point to answer many of these questions. Live imaging is, however, notoriously difficult when high spatial and temporal resolutions are required.
In developmental and cell biology, an increasing body of work builds upon the availability of dynamic imaging techniques. Examples include cell motion analysis (Haas and Gilmour, 2006; Forouhar et al., 2006), cell-lineage tracing (Mathis et al., 2001; Hirose et al., 2004), and characterization of cell remodeling (Kulesa and Fraser, 2002; Kozlowski et al., 2007; Yang et al., 2007). Recently developed microscopes allow imaging of ever-faster processes and offer the possibility of studying morphogenesis and cellular dynamics at an unprecedented temporal resolution (Liebling et al., 2006). Several other fields in biology, such as high-throughput imaging of cell cultures (Carpenter, 2007; Pepperkok and Ellenberg, 2006; Bakal et al., 2007) and systematic in vivo imaging of small animals for drug or phenotype screening (Starkuviene and Pepperkok, 2007; Burns et al., 2005), greatly benefit from and drive these improvements as they evolve to include more complex samples.
Imaging live samples at high speed is more demanding than imaging fixed samples. It requires a multidisciplinary approach and careful planning of the entire imaging protocol. We briefly survey the fundamental concepts and challenges that are of importance for dynamic in vivo imaging (including resolution, detectability, and vital fluorescence labeling). We then present technological developments to push resolution limits in both space and time at various levels of the imaging procedure, from sample preparation to image acquisition, processing, or analysis. Finally, we conclude by giving potential applications of the described techniques in developmental biology and biophysics and by discussing the importance of building seamless collaborations between biologists, microscopists, and engineers to take advantage of yet under-utilized paradigms in each of these fields.
Dynamic processes in cellular biology span a broad range of velocities and scales. Some examples of this diversity are the speed of cell migration [140–170 μm/h for neural crest cells (Kulesa and Fraser, 2000)], telomere motion in yeast [~0.05 μm/sec (Gasser, 2002)], fast calcium waves [10–50 μm/sec (Jaffe and Créton, 1998)], red blood cell motion in the developing cardio-vascular system of rodents [1–10 mm/s (Jones et al., 2004)], and the frequency of beating cilia [3–40 Hz (Sisson et al., 2003)]. Even though it is tempting to consider that the required microscope frame-rate to image motion only depends on the speed of the imaged sample, the required spatial resolution must be taken into account too. For example, the overall position of a small motile sample, say a paramecium in a dish, can be followed under a low-magnification microscope even though its shape cannot be resolved. In order to observe the sample’s internal shape, however, both a higher magnification (with a higher resolution) and a higher imaging frame-rate are required. Indeed, at a high magnification, the moving sample sweeps across the field of view faster than at a low magnification. Therefore, temporal resolution (that is, the frame-rate) and spatial resolution cannot be considered independently. In the following section, we detail some guidelines on how to determine the required frame-rate in the general case of a moving sample.
Techniques that aim at improving spatial resolution often require that the sample be (nearly) immobile. Evaluations of their performance rarely take motion into account. If we examine a moving fluorophore, we must revisit the concept of point spread function (PSF) and the resolution that it characterizes (see Table 1). In particular, we must consider the integration time (the time over which light is gathered by the detector) when modeling the image of a moving object: a new PSF, obtained through a normalized sum of PSFs (each PSF in the sum corresponding to a different position of the sample), combines the effect of purely optical limitations with motion artifacts [Figs. 1A, 1B]. If the PSF of an immobile source has a width Δx, the width Δx′ of the point spread function corresponding to the moving source is approximately given by

Δx′ ≈ Δx + Tv,    (1)
where T is the integration time and v the velocity of the source. The second term in the sum, Tv, captures the blur induced by motion. As T or v increase, the resulting PSF can be considerably larger than its original version and effects due to motion can rapidly exceed those due to diffraction alone.
Conversely, when sources separated by a distance Δx′ must be resolved in a sample that moves at a velocity v, the required frame-rate, considering a microscope that can resolve immobile sources separated by a distance Δx, is given by (assuming Δx′ > Δx)

frame-rate = v / (Δx′ − Δx).    (2)
Clearly, when the microscope does not permit imaging immobile structures with sufficient resolution (Δx′ < Δx), imaging the same structure as it moves is not possible.
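As an illustration, Eqs. 1 and 2 can be turned into a small helper for estimating acquisition requirements. The function name and the numerical values below are our own, chosen only to sketch the calculation:

```python
def required_frame_rate(delta_x, delta_x_prime, v):
    """Minimum frame-rate (Eq. 2) needed to resolve features separated by
    delta_x_prime (um) moving at speed v (um/s), on a microscope whose
    resolution for immobile sources is delta_x (um). Returns Hz."""
    if delta_x_prime <= delta_x:
        # Per the text: if even immobile sources at this separation cannot
        # be resolved, imaging them in motion is impossible.
        raise ValueError("delta_x_prime must exceed delta_x")
    # From Eq. 1, delta_x' = delta_x + T*v, so T = (delta_x' - delta_x)/v
    # and the frame-rate is 1/T.
    return v / (delta_x_prime - delta_x)

# Red blood cells moving at ~1 mm/s (1000 um/s), to be resolved at twice
# a 1 um optical resolution:
rate = required_frame_rate(1.0, 2.0, 1000.0)  # 1000 frames/s
```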
Based on Eq. 2 (and assuming Δx′ = 2Δx), we considered several biological specimens (gathered in Table 2) and displayed them according to their speed and the required spatial resolution in the graph of Fig. 2. Based on their location in this graph, the required frame-rate can be deduced. The required frame-rate increases either as a consequence of increased sample speed or of more demanding spatial resolution requirements. We superimposed the approximate frame-rates and achievable resolutions of some widespread and emerging microscopy techniques (Table 3). It is apparent that improvements in both temporal and spatial resolution are required to image many classes of biological samples.
Often, the limitation to fast imaging is not the microscope but the fluorophore. Major parameters to take into account are the fluorophore brightness (which is proportional to the product of the extinction coefficient, a quantity related to the fraction of photons that are absorbed by the dye, and the quantum yield, the fraction of absorbed photons that yield fluorescence photons), its fluorescence lifetime, and its concentration. The brighter a dye, the fewer photons are required to illuminate the sample, which allows the illumination time to be reduced. An increased fluorophore concentration allows decreasing the excitation time significantly (while keeping the emitted photon count constant), but a higher dye concentration can increase photo-toxicity and lead to self-quenching of the dye, resulting in a reduction of its quantum yield. Finally, the fluorescence lifetime corresponds to the average time between a fluorophore being excited and its emitting a photon [see Figs. 3A, 3B]. It directly affects the maximal repetition rate at which images can be acquired. Since acquisition of a single image requires going through one full cycle of excitation and light emission, decreasing the time between successive excitations (e.g., by faster scanning) to near the fluorescence lifetime makes it impossible to determine during which cycle an emitted photon was excited (given that emission times are stochastic, see below). The optimal settings of fluorophore concentration, illumination time, and intensity must usually be determined experimentally.
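The brightness criterion above amounts to a simple product. The sketch below uses hypothetical values (in the range reported for GFP-like proteins), purely for illustration:

```python
def brightness(extinction_coeff, quantum_yield):
    """Relative fluorophore brightness: the product of the extinction
    coefficient (M^-1 cm^-1) and the quantum yield (0 to 1)."""
    return extinction_coeff * quantum_yield

# Comparing two hypothetical dyes: the second is the better choice for
# fast imaging even though it absorbs less strongly.
dim = brightness(80000, 0.2)     # relative brightness 16000
bright = brightness(56000, 0.6)  # relative brightness 33600
```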
In order to keep the photon count per pixel constant when increasing the frame rate, it might be tempting to simply increase the illumination power. Thereby, the probability of fluorophore excitation and the number of emitted photons would remain constant (see Fig. 3). However, a high illumination intensity increases the risk of fluorophore saturation, photo-bleaching, photo-toxicity, and photo-damage. Photo-toxicity is still poorly understood but likely involves photogenerated oxidative stress during sample illumination and fluorescence emission (Lichtman and Conchello, 2005). Setting the laser power as low as possible and using dyes with a high quantum yield allow limiting photo-toxicity. When using digital cameras, photon counts can be improved by increasing the photon collection area (pixel-binning) as an alternative to increasing illumination power. This implies, however, a decrease in spatial resolution.
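Pixel-binning simply sums neighboring detector pixels. A minimal sketch (our own, using NumPy) makes the trade-off explicit: the photon count per output pixel grows as the square of the bin size while spatial sampling drops linearly:

```python
import numpy as np

def bin_pixels(image, b=2):
    """Sum b-by-b blocks of pixels. The photon count per output pixel
    increases roughly b^2-fold; the sampling rate drops b-fold."""
    h, w = image.shape
    trimmed = image[: h - h % b, : w - w % b]   # drop incomplete edge rows/cols
    return trimmed.reshape(h // b, b, w // b, b).sum(axis=(1, 3))

frame = np.ones((4, 4))          # 1 photon per pixel
binned = bin_pixels(frame, b=2)  # 2x2 image, 4 photons per pixel
```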
When increasing the frame rate while keeping every other imaging parameter unchanged, fewer photons are captured by the imaging system as the time interval between frames decreases. The PSF should be considered a probability density function for the position at which a photon, emitted by a fluorophore, hits the imaging surface. The higher the value of the point spread function at a given location, the higher the probability of a photon hitting there. In addition to the shape of the PSF, it is the total number of measured photons that specifies the quality of the image [see Figs. 1C, 1D, 1E, 1F].
The number of photons that are emitted during a given time interval is stochastic but can be modeled, for example, by a Poisson process. If the average number of photons, N, emitted in a set time interval is given, the probability that a number, n, of photons are emitted is given by the probability function

P(n) = (N^n / n!) e^(−N).    (3)
Histograms showing the number of emitted photons during multiple experiments, for an average number of photons N=6 and N=60, are shown in Figs. 3D, 3E, respectively. The average number of measured photons, the signal, is n̄=N, and the standard deviation, the noise, is √N. The ratio between the two is plotted in Fig. 3C. As the average number of measured photons increases, the relative amount of noise decreases. This is visible in Fig. 1G, where confocal images have been acquired with increasing scan speed, that is, a reduced dwell time (the time corresponding to the excitation time for a given pixel), and images appear more grainy as acquisition time is decreased.
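The √N scaling of the signal-to-noise ratio can be checked numerically. This short simulation (ours, not from any cited study) draws Poisson-distributed photon counts as in Eq. 3:

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_snr(mean_photons, trials=200_000):
    """Simulate repeated photon-count measurements with Poisson statistics
    and return the empirical signal-to-noise ratio, mean/std."""
    counts = rng.poisson(mean_photons, size=trials)
    return counts.mean() / counts.std()

snr_low = empirical_snr(6)    # close to sqrt(6)  ~ 2.45
snr_high = empirical_snr(60)  # close to sqrt(60) ~ 7.75
```

A tenfold increase in photon count thus buys only about a threefold improvement in the signal-to-noise ratio.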
In addition to the traditional challenges faced when imaging fixed samples at high magnification (Stelzer, 1998; Brown, 2007), sample motion can introduce other artifacts that should be avoided at all costs. To help the reader recognize these defects and determine how to properly balance all imaging parameters, we discuss below several common, yet hard to spot, situations in which insufficient temporal resolution can lead to dramatic data misinterpretation.
Insufficient time-resolution (that is, when the frame-rate is too low for the measured phenomenon) may lead to two main artifacts: motion blur and aliasing. Blurring arises because, in order to capture a single frame, light is gathered over a certain period of time (the integration time) rather than instantaneously. Particles that move too quickly under the microscope yield streaks that follow their trajectory (motion blur). Similarly, details appear blurred on photographs taken with too long an exposure time [see Figs. 1C, 1D, 1E, 1F, 1G, 5D, 5G]. Further processing and analysis, e.g., particle tracking, is difficult on such images since the objects are not well localized.
Temporal aliasing can occur when images are acquired with a short integration time, but the time interval between two frames is too large to permit faithful replication of the original signal when it is played back. Imaging cyclic or oscillatory motion is particularly prone to this effect, which can make the motion appear to take place at a wrong frequency. A classical example of temporal aliasing is the wagon-wheel effect in motion pictures: rotating stagecoach wheels or helicopter propellers often appear to be rotating at a slow frequency or in the wrong direction. In order to avoid aliasing, the acquisition rate should be at least twice that of the highest frequency to be imaged. Conversely, for a given frame-rate, the highest frequency that may be imaged without aliasing (Nyquist frequency) is equal to half that frame-rate. For example, successful characterization of the direction of cilia rotation in the mouse node necessitates the use of high frame rate acquisition (Nonaka et al., 1998; Nonaka et al., 1999).
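The folding of frequencies beyond the Nyquist limit can be made concrete in a few lines. The function below is our own sketch; it returns the apparent frequency of a periodic motion after sampling:

```python
def apparent_frequency(true_freq, frame_rate):
    """Apparent frequency of a periodic motion sampled at a given
    frame-rate: components above the Nyquist frequency (frame_rate / 2)
    fold back (alias) into the band from 0 to frame_rate / 2."""
    f = true_freq % frame_rate
    return min(f, frame_rate - f)

# A cilium beating at 35 Hz imaged at 30 frames/s appears to beat at 5 Hz:
aliased = apparent_frequency(35.0, 30.0)   # 5.0 Hz
# Sampling at more than twice the true frequency avoids aliasing:
faithful = apparent_frequency(35.0, 80.0)  # 35.0 Hz
```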
Often, the velocity itself is the object of study and, therefore, unknown beforehand. For example, when studying vesicular transport, an improper acquisition rate can generate spectacular artifacts with apparent changes in transport direction from anterograde to retrograde [Figs. 4D, 4E, 4F, 4G, 4H and Supplemental movie (EPAPS)]. In that case, spatial resolution and field-of-view extent should be sacrificed [possibly by limiting the measurements to scanning a single line (Jones et al., 2004)] in order to reach a high enough frame-rate and characterize the speed and, eventually, the minimal frame-rate. A simplified example that illustrates this situation is depicted in Figs. 4A, 4B, 4C. Indistinguishable particles travel with a velocity, v, along a path and are separated by a distance, d. When images are acquired with a time interval, Ts, between two frames, for the particle direction and trajectory to be extracted unambiguously, the distance Tsv traveled by the particles between two frames must be less than half the distance d separating the particles, viz.,

Tsv < d/2.    (4)
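The criterion Tsv < d/2 translates directly into a bound on the frame interval. A minimal sketch (function name and numbers are ours):

```python
def max_frame_interval(spacing_d, speed_v):
    """Longest frame interval Ts allowing unambiguous tracking of
    identical particles spaced d apart moving at speed v: Ts * v < d/2."""
    return 0.5 * spacing_d / speed_v

# Vesicles 2 um apart moving at 1 um/s: frames must come faster than one
# per second (frame-rate above 1 Hz), or the direction becomes ambiguous.
Ts_max = max_frame_interval(2.0, 1.0)  # 1.0 s
```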
Cell-lineage studies based on time-lapse image series can suffer from similar artifacts. Despite the slow cell motion, images are acquired over large fields of view and extended periods of time to get a comprehensive map of cell behavior at the scale of the embryo (Fraser and Stern, 2001). Again, the frame-rate should be such that the maximal distance traveled by any cell in the time between two frames is smaller than half the minimum distance between the centers of any two cells.
When images are raster scanned from the upper left corner to the lower right corner [see Fig. 5A], all pixels are, by definition, not acquired simultaneously. When the imaged structure moves during the scan, artifacts can appear that give the structure a deformed shape, such as for the heart illustrated in Fig. 5D. This problem not only occurs at the level of a single frame, but also when acquiring z-stacks of moving structures [Fig. 5G]. Again, increasing the scanning speed (this time in all spatial directions) would be a way to overcome this problem. In the case of a periodic motion, as discussed in the Image registration section, this problem can be overcome through sequential acquisition of slice-sequences and subsequent temporal registration (Liebling et al., 2005).
A variety of microscopes have been developed recently to permit imaging of fluorescent samples at high speed. We concentrate on some techniques that are available commercially or that have been designed for biological in vivo imaging. The different microscopes' characteristics are given in Table 3.
Widefield fluorescence microscopes have largely benefited from recent advances in digital camera technology. Since the camera can capture an entire 2D field of view, this technique is potentially very fast, with camera speed and sensitivity as its major limitations. Indeed, at high frame-rates, cameras require very good detection performance to capture the small number of photons emitted over a short integration time. Although widefield microscopy does not, in its simplest form, allow for optical sectioning, several modifications have been proposed to improve performance in this regard. These include deconvolution microscopy (Swedlow et al., 1997), structured illumination (Neil et al., 1997), and interference-based illumination and detection (Gustafsson et al., 1999). Widefield microscopy (as opposed to confocal microscopy, see below) can yield very high signal-to-noise levels since no emitted light is rejected. It is, however, fluorescence light emitted throughout the entire sample depth that contributes to this signal. Therefore, in order to achieve optical sectioning, it is necessary to acquire multiple images and carry out time-consuming computational post-processing (structured illumination, deconvolution).
Point scanning. Confocal laser scanning microscopy (CLSM) has long suffered from slow frame rates since a focused laser beam must be raster-scanned over the whole image in order to acquire one single frame [see Fig. 5A]. Fast raster scanning can be achieved by mounting one of the scan mirrors on a resonant scanning galvanometer. Although the speed at which the scanning itself can be carried out may be the limiting factor, increasing the scanning speed alone is usually not sufficient to obtain satisfactory images at high frame rates. Indeed, faster scanning reduces the dwell time of the laser beam on any one pixel and, as a consequence, the brightness of the sample (see Fig. 3). The use of high-sensitivity detectors, typically photo-multiplier tubes (PMTs) and avalanche photo-diodes (APDs), can only partially compensate for the low photon counts resulting from the high scanning speed. Additionally, optimized optical paths and low-loss wavelength selection are essential features of fast point-scanning confocal microscopes. Despite the fact that scanning hampers speed, confocal laser scanning microscopy gives flexibility in terms of regional scanning and yields excellent spatial resolution.
Multi-beam scanning. Parallelization of the single beam illumination process is at the heart of several variations of CLSM. In single beam scanning, one beam is scanned over the entire sample. Laser intensity must be increased as scanning speed increases in order to retain a high fluorophore excitation probability for these short dwell times. When multiple beams are scanned, the excitation light is divided into several beams that are focused at different locations on the sample (for example, on a grid or a spiral pattern). Since there are multiple beams scanning in parallel, the dwell time for any given pixel is proportionally longer than in the case of a single scanned beam. As a consequence, for a given frame-rate, the illumination light intensity per beam can be lowered, thereby reducing the risk of phototoxicity and fluorophore saturation (Graf et al., 2005; Tadrous, 2000; Egner et al., 2002a) (see Table 3).
One example that takes advantage of multi-beam scanning is spinning disk confocal microscopy (SDC). In one configuration, two conjugated spinning disks, one bearing micro-lenses and the other confocal pinholes, permit splitting the laser beam into multiple spots and acquiring images at high frame-rates with a camera as the detector (Tanaami et al., 2002). The arrangement is shown schematically in Fig. 5B. The high degree of parallelization (over 10³ simultaneous beams) permits imaging at high frame-rates with a limited increase of illumination power. In addition to spinning disks, other configurations for fast multi-beam scanning exist, including geometries of oscillating pinhole arrays and spinning line arrays that produce virtual pinholes. Multiple-beam confocal microscopes are popular among cell biologists because fast imaging can be carried out with lower light intensities than with single-beam confocal microscopes (for a given frame-rate).
The limitations include the fact that the pinhole size cannot be varied, which ties the microscope to a given magnification, a high sensitivity to any mismatch between disk rotation frequency and camera frame-rate, as well as limited options for region-of-interest illumination and imaging. SDC also has a lower penetration depth than single-beam scanning microscopes because of crosstalk between pinholes (out-of-focus photons that are blocked by one pinhole can still be collected by another), which worsens with the increased light scattering encountered when imaging thick samples (Graf et al., 2005). Frame-rates are typically limited by the maximum camera speed (see Table 3).
Line scanning. By simultaneously illuminating an entire line (line scanning) that is scanned across the sample (instead of a single point in classical CLSM), by using a slit instead of a pinhole to reject out-of-focus light, and by using a line instead of a point detector, fast confocal imaging can be achieved (Brakenhoff and Visscher, 1992; Wolleschensky et al., 2006). Similar to multiple beam confocal microscopy, this technique takes advantage of the parallel illumination and acquisition from multiple points (retaining its advantages regarding reduced illumination requirements), but the ability to adjust the slit size makes it a versatile alternative, in particular for fast imaging deep inside embryos (Liebling et al., 2006; Lucitti and Dickinson, 2006).
An alternative to confocal microscopy is the recently proposed selective plane illumination microscope (SPIM). In SPIM, an entire plane of the sample is illuminated laterally and collection is carried out with a wide-field microscope (Huisken et al., 2004). SPIM offers two main advantages over confocal microscopes: first, since it is, in essence, a whole-field imaging technique, no light needs to be rejected in order to achieve optical sectioning and, second, photo-bleaching is limited since only the plane of interest is illuminated. Optical sectioning is achieved through excitation alone. Also, since individual slices are acquired in a single shot by a digital camera, it has the potential for high-speed imaging. SPIM has been used for the imaging of whole embryos at low magnification (Huisken et al., 2004; Verveer et al., 2007; Scherz et al., 2008) or individual structures at higher magnification. SPIM requires that sample mounting be reconsidered since the sample must be optically accessible both laterally and axially. This also makes imaging at very high magnifications more difficult.
In two-photon microscopy, fluorophores are excited by two photons of approximately half the energy (or twice the wavelength) of the photons used in confocal microscopy. Since scattering loss is less for longer wavelengths (near infrared), two-photon excitation light can penetrate deeper into tissues. Also, photons at higher wavelengths carry less energy and are, therefore, less detrimental to the tissue outside of the excitation volume. Recent progress in laser engineering has made multi-photon microscopes as convenient to use as confocal microscopes and has contributed considerably to the expansion of live imaging to thick tissues. Yet, the high cost of the light-sources usually makes this an expensive technique. Similar to the evolution in fast confocal microscopy, parallelization has led to new multi-photon microscopes. In multi-focal multi-photon microscopy (TriM) (Bewersdorf et al., 1998; Egner and Hell, 2000), the beam is split up as it cascades repeatedly through a beam-splitter. Since detection is carried out in wide-field, the major limitation is cross-contamination from the multiple beams at the detection level, along with the necessity for a very high power laser that can be split into multiple beams of sufficient power each.
Image processing plays a central role in fluorescence microscopy (Vonesch et al., 2006). In recent years, several groups have developed integrated imaging paradigms where experiments intrinsically combine microscopy and image processing. Such combined approaches can be very fruitful since, for example, physical limitations can be overcome via software solutions or, conversely, image analysis that requires sophisticated algorithms can be considerably simplified using alternative microscopes and help push temporal resolution.
Widefield microscopy allows for very fast image acquisition. In order to achieve optical sectioning, however, it requires post-processing deconvolution. Deconvolution algorithms, which are often iterative, can be extremely slow and require major computing power. For deconvolution microscopy to become as straightforward to use as confocal microscopy, algorithms several orders of magnitude faster are required. Nevertheless, since restoration often does not need to be carried out at the time of acquisition, wide-field imaging coupled to deconvolution algorithms appears as a viable alternative to overcome the slowness of traditional confocal microscopes, yet retaining optical sectioning abilities. High-speed wide-field imaging platforms coupled to off-line deconvolution computational cores have been developed for that purpose (Racine et al., 2007).
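As a concrete, if simplified, example of the kind of iterative restoration involved, the sketch below implements a 1D Richardson-Lucy deconvolution, a standard algorithm for fluorescence images (though not necessarily the one used in the cited platforms); all names and values are ours:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=200):
    """Minimal 1D Richardson-Lucy deconvolution: iteratively update an
    estimate so that, once blurred by the PSF, it matches the observation."""
    psf = psf / psf.sum()
    psf_flipped = psf[::-1]
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# Two point sources blurred by a small PSF, then restored:
truth = np.zeros(32)
truth[12] = 1.0
truth[18] = 1.0
psf = np.array([0.25, 0.5, 0.25])
blurry = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurry, psf)
```

The iterative loop is exactly why such restoration is slow at scale: each of the hundreds of iterations requires convolutions over the full image, which for 3D stacks quickly adds up to substantial computing time.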
Some biological structures move so fast that conventional 3D image acquisition is no longer possible. For example, in the embryonic heart, the heart wall can reach a velocity approaching 1 millimeter per second, which can create scanning artifacts that prevent proper shape characterization [see Figs. 5D, 5G]. Recently, an approach to overcome these problems has been implemented (Liebling et al., 2005, 2006). Using a fast confocal slit-scanning microscope, the focus is kept the same for a whole rapid sequence of images and is only changed to repeat the acquisition of the next image sequence at high speed. Since the heart motion is periodic, 3D volumes can be reconstructed through temporal registration over the entire heartbeat. Through an alternative acquisition procedure and a computational reconstruction algorithm that takes advantage of the cyclical movement of the heart, one can thus overcome weaknesses in the microscope acquisition rate.
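The temporal-registration step can be illustrated on 1D intensity traces. The sketch below is our own simplification of the idea, not the published algorithm: it finds the circular shift that best aligns two sequences covering the same periodic motion:

```python
import numpy as np

def best_shift(reference, sequence):
    """Circular shift (in frames) that, applied to `sequence`, best aligns
    it with `reference` in the least-squares sense."""
    errors = [np.sum((np.roll(sequence, s) - reference) ** 2)
              for s in range(len(sequence))]
    return int(np.argmin(errors))

# Two slice-sequences of the same heartbeat, started at different phases:
t = np.arange(20)
ref = np.sin(2 * np.pi * t / 20)   # one full cardiac cycle
late = np.roll(ref, 7)             # acquisition began at another phase
s = best_shift(ref, late)
aligned = np.roll(late, s)         # back in phase with `ref`
```

Applying such a shift to each depth's slice-sequence yields a consistent 4D reconstruction of the beating structure.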
Another challenge is the imaging of structures within growing tissues over several hours, which results in large overall displacements of the region of interest. Such a situation is described in Figs. 6A, 6B, 6C, where the cells of a developing zebrafish embryo that lie on top of the yolk sac undergo tremendous displacements. The most common solution is to increase the field of view in order for the region of interest to remain in view for the whole duration of the time-lapse. Then the region of interest can be recursively aligned over the course of the time-lapse and relative cell motions (or their absence) can be revealed [Figs. 6D, 6E, 6F]. Such approaches require, however, that larger fields of view be scanned. A more desirable technique would be to adjust the imaging position adaptively, which has been implemented for simple systems (Rabut and Ellenberg, 2004). By using such an approach for in vivo imaging, the field of view could be decreased significantly and the frame-rate increased accordingly.
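Recursive alignment of the region of interest can rely on standard cross-correlation. The sketch below (our own) recovers an integer translation between two frames using the FFT:

```python
import numpy as np

def translation_offset(fixed, moving):
    """Integer (row, col) translation that maps `moving` onto `fixed`,
    found as the peak of the FFT-based circular cross-correlation."""
    cross = np.fft.ifft2(np.fft.fft2(fixed) * np.conj(np.fft.fft2(moving))).real
    dy, dx = np.unravel_index(int(np.argmax(cross)), cross.shape)
    return int(dy), int(dx)

# A bright structure that drifted by (3, 5) pixels between time points:
frame0 = np.zeros((16, 16)); frame0[4, 4] = 1.0
frame1 = np.zeros((16, 16)); frame1[7, 9] = 1.0  # same spot, displaced
dy, dx = translation_offset(frame1, frame0)      # (3, 5)
```

The same estimate, fed back to the stage controller, is what an adaptive imaging-position scheme would use to keep the region of interest centered.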
To the human eye, images of small molecules acquired with wide-field fluorescence microscopes can appear to be static due to poor spatial resolution, or rapidly moving due to the high level of noise. It is therefore very difficult to demonstrate that structures are indeed immobile or, conversely, highly motile. An interesting approach has been described (Soutoglou et al., 2007; Thomann et al., 2003) where, using sub-resolution tracking combined with a statistical framework for data modeling and analysis, significant movements and colocalization can be determined. Interestingly, the trajectory accuracy can exceed the optical resolution, even though the source images are of standard optical resolution, provided the acquisition speed is high enough.
High-throughput screens require the analysis of huge numbers of samples over short periods of time as well as fully automated protocols. So far, high-throughput approaches have proven successful for imaging cell morphology changes in response to drugs (Carpenter, 2007) or gene knock-downs (Bakal et al., 2007). A number of plate readers allowing automated image acquisition are available on the market (Pepperkok and Ellenberg, 2006) and can be associated with computers that segment and discriminate between basic phenotypes (Bakal et al., 2007). By coupling automated stages to fast microscopes, prospects for extending such techniques to whole-animal imaging and developmental biology studies are excellent. They should, in particular, make it possible to increase the number of observations, to image several embryos simultaneously, and to limit the time spent in front of the microscope (Megason and Fraser, 2007).
It is unquestionable that independent development of fluorescent probes, novel microscopes, and general-purpose image processing and analysis software will enlarge the scope of possible applications for fast imaging over the years to come. We believe that an even greater potential lies in the judicious combinations and integration of existing (as illustrated in the previous section) and upcoming methods to build novel applications. To conclude this perspective we provide several pointers to possible extensions of existing methods and interactions that we believe can tremendously improve fast imaging. Developmental biology and biophysics are likely to benefit from these advances.
Successful imaging projects frequently depend on interactions between two or more fields. Breakthroughs in both imaging and biology were achieved by combining molecular or developmental biology with microscopy (Mathis et al., 2001; Livet et al., 2007; Scherz et al., 2008), biology with physics (Hove et al., 2003; Supatto et al., 2005), biology with image analysis (Soutoglou et al., 2007), or biology with mathematical modeling (Surrey et al., 2001). Considering the complexity of each individual technology, combined with the biological challenges of keeping the sample in optimal conditions for fast imaging, tight collaborations between biologists, chemists, computer and electrical engineers, mathematicians, and physicists are highly desirable. When each contributing party is also involved at the time of experiment planning, major hurdles in the image acquisition, processing, or analysis can be identified early on and more possibilities for alternatives can be found. Clearly, such multidisciplinary projects reach their full potential in academic environments where multiple departments are concentrated in close geographical areas, but this is by no means a requirement.
With the standardization of live imaging protocols and the growing accessibility of live samples, live embryos can be considered the new “test tube” for biophysicists or “culture dish” for cell biologists. Many structures that fascinate biophysicists, such as cilia, the cell membrane, the cytoskeleton, and vesicles, are now accessible by dynamic imaging in live tissue. Clearly, biology is different in 2D than in 3D, as has been shown, for example, by cell biology experiments in 2D and 3D culture media (Griffith and Swartz, 2006) or by analyzing intracellular microtubule growth in 3D (Keller et al., 2007), demonstrating the need for finer 3D imaging methods to understand biological systems. This will be of particular interest for tissue engineering and regenerative medicine, where the complex interaction between genetics and the cell environment, both in culture and in tissue, needs to be understood. Furthermore, measurements of 3D deformation at the tissue scale with cellular resolution open new possibilities for studying tissue biomechanics and the numerous signaling pathways that have been proposed to respond to mechanical deformation (Farge, 2003; García-Cardeña et al., 2001; Nichols et al., 2007; Avvisato et al., 2007). Again, imaging deformations at microscopic scales requires not only excellent spatial resolution but also excellent temporal resolution.
A particularly challenging issue in developmental biology is the integration of genetic networks, cell identity, and cell behavior during embryonic morphogenesis and patterning. The dynamic interplay between morphogens, gene expression, cell proliferation, and tissue growth control is still poorly understood. Most of the imaging tools are now available to address these questions. On one hand, several groups succeeded in imaging morphogen spreading in live embryos (Kicheva et al., 2007; Gregor et al., 2007). On the other hand, genetic networks are being intensively studied in whole embryos, generating very precise data about the dynamics of gene activation and repression in response to morphogens (Isalan et al., 2005; Stathopoulos and Levine, 2005). Furthermore, the ability to image the dynamics of transcription factors binding DNA (Elf et al., 2007), the speed of transcription (Janicki et al., 2004), and chromatin motion (Soutoglou and Misteli, 2007) opens some unexplored areas of integrated imaging in different fields of biology, from molecular biology to embryology. Addressing these questions will require imaging at multiple scales with high temporal resolution. This is the promise for exciting interdisciplinary research and for significant progress towards a more global understanding of biological systems.
Supplemental Video S2C–F.mov
Supplemental Video S2G.mov
Supplemental Video S4D–H.mov
Supplemental Video S5G.mov
Supplemental Video S6A–C.mov
Supplemental Video S6D–F.mov
We thank Elaine Bearer for providing us with the squid giant axon data used in Fig. 4. We thank Willy Supatto, Thai Truong, and members of the Biological Imaging Center, Beckman Institute at Caltech for discussions and comments. M.L. received support from the Swiss National Science Foundation (Fellowship PA002-111433) and J.V. from the Human Frontier Science Program (HFSP). This work was also supported by a NIH/NICHD grant (PO1HD037105).