Camera-based optical imaging of the exposed brain allows cortical hemodynamic responses to stimulation to be examined. Typical multispectral imaging systems utilize a camera and illumination at several wavelengths, allowing discrimination between changes in oxy- and deoxyhemoglobin concentration. However, most multispectral imaging systems utilize white light sources and mechanical filter wheels to multiplex illumination wavelengths, which are slow and difficult to synchronize at high frame rates. We present a new LED-based system capable of high-resolution multispectral imaging at frame rates of up to 220 Hz. This improved performance enables simultaneous visualization of hemoglobin oxygenation dynamics within single vessels, changes in vessel diameters, blood flow dynamics from the motion of erythrocytes, and dynamically changing fluorescence.
In response to almost any stimulus, discrete regions of the brain will experience an increase in blood flow. It is this hemodynamic response that provides a signal that can be measured via functional magnetic resonance imaging (fMRI). Mapping these changes in blood flow has significantly improved our understanding of the way the brain organizes and processes information [2–5]. In 1986 Grinvald et al. demonstrated that these changes in blood flow could be detected by simply imaging the exposed cortex under optical illumination using a photodiode array. Since then, improvements in digital camera technology, light sources, and optical filters have led to widespread use of so-called ‘optical intrinsic signal imaging’ (OISI) for neuroscience research [5,7–10]. These intrinsic signals, corresponding simply to changes in reflected light intensity, are now understood to primarily originate from variations in the concentration of oxy- and deoxyhemoglobin that occur as a result of changes in vessel diameter, oxygen delivery, and oxygen extraction.
Blood flow changes can be detected using a wide range of optical illumination wavelengths, yet the wavelength chosen can dramatically affect the conclusions that are drawn about the spatial extent and temporal evolution of the hemodynamic response [7,12,13]. This is because oxy- and deoxyhemoglobin (HbO2 and HbR) have unique absorption spectra and exhibit different spatiotemporal responses in the vascular compartments within the cortex (arteries, veins, and capillaries). A significant advance in recent years has been the use of multiple illumination wavelengths, allowing spectroscopic analysis and therefore direct estimation of the changes occurring in HbO2 and HbR concentrations [12,14]. Adding simultaneous measurements of blood flow to this approach makes it possible to infer oxygen delivery and extraction dynamics. These measures hold the potential to link hemodynamic changes to the metabolic demands of the cortex and are therefore closer to the underlying neuronal activity [15,16]. To date, blood flow dynamics in the exposed cortex have been imaged using speckle-flow imaging or evaluated at discrete points using laser Doppler flowmetry.
Even before OISI was demonstrated, voltage sensitive dyes were used for cortical mapping of changes in membrane potential. More recently, calcium sensitive dyes have provided a means to measure changes in neuronal intracellular calcium concentrations [11,20]. Both of these types of dye are fluorescent and require that the cortex be stained prior to imaging. Measurements of these dyes have often been impaired by concurrent fluctuations in hemodynamics. Therefore, an ideal cortical imaging system would allow high-speed simultaneous capture of fluorescence, HbO2 and HbR absorption, and blood flow signals.
The conventional approach to acquiring multispectral imaging data is to use a white light source, such as a halogen or mercury-xenon lamp, and band-pass filters to select appropriate wavelengths [15,22,23]. Filter wheels are typically used to provide switching of filters, with the wheel generating a series of triggers that instruct a camera when to capture each frame. This filtered light is commonly aligned into a flexible fiber-optic conduit to allow directed illumination of the cortex of the animal. The main disadvantage of this approach is that cameras are typically not able to acquire at their maximum frame rate when driven by an external trigger. In addition, a filter wheel will generally have 6 positions, requiring purchase of a filter for each (even if fewer than 6 wavelengths are required). Unless duplicate filters are purchased, this limits the rate at which a complete multispectral frame set can be acquired to 1/6th of the camera's triggered frame rate (e.g. 30 frames per second with 6 filters provides only 5 ‘spectral frames’ per second [16,24]). This can also affect the accuracy of spectral analysis since each spectral image is captured at a different time-point. Further, even the highest performing white light sources do not have uniform spectral density, such that certain wavelengths may be less powerful than others. Galvanometer-based filter switching systems have recently become available (e.g. allowing 8 ‘spectral frames’ per second with 4 multiplexed wavelengths). While faster and more versatile than filter wheels, they are more costly and still limited by the total power and spectral range of the light source. Tunable optical filters positioned in front of the camera are typically lossy, slow, and not wholly effective in producing pure wavelength bands (essential for fluorescence excitation).
Detailed spectral analysis has been achieved by imaging a linear spectrometer onto the cortical surface; however, alignment is challenging and the lack of a 2D image prevents detailed spatial analysis [18,25]. Another approach is to use broadband illumination and a color camera which has a suitable Bayer mask, or a spectral image-splitter, allowing two or more bands to be detected in parallel [26–28]. Color cameras typically suffer from reduced signal-to-noise owing to optical filter losses, lower bit depth, and ¼-sized pixels compared to monochrome cameras. Image splitters reduce image resolution. For both, there is the danger of sample heating and phototoxicity from high intensity broadband illumination, and the inability to acquire multiplexed fluorescence data. Simultaneous imaging of speckle-flow and optical reflectance data has been demonstrated using two co-aligned cameras and a dichroic filter to spectrally discriminate light originating from a 785 nm laser diode (for speckle) and a filter-wheel with filters between 560 – 610 nm [16,23,29]. However, the use of multiple cameras significantly increases system cost and complexity, and exact pixel-by-pixel image registration is rarely possible.
Our new multispectral imaging system overcomes these challenges at a reduced cost through the use of multiple co-aligned high power, rapidly modulated light emitting diodes (LEDs), a high-speed microcontroller-based synchronization circuit, and an inexpensive, fast, monochromatic camera. The system has been configured to allow the camera to free-run at its maximum frame rate. A signal from the camera indicating real-time frame grabs is used to synchronize sequential strobing of the LEDs via the microcontroller. Not only does this circuit allow increased frame rates over filter-wheel systems and novel programmable strobe sequences, but the broad range of LED light sources available today also means that high power can be obtained at almost any wavelength. Also, the power of each LED source can be tuned individually to provide optimal signal-to-noise and dynamic range, and each LED can be individually filtered to block any wavelengths that are not required. Since each LED is illuminated in turn, appropriate emission filters in front of the system’s camera can allow acquisition of rapidly multiplexed fluorescence and absorption data within a single run.
The system is routinely able to acquire single images at up to 220 frames per second (fps). For biomedical applications that require two different wavelengths, our system can acquire complete multispectral image sets at up to 110 fps. This increased speed means that spectral analysis more accurately reflects the instantaneous state of the tissue. In addition, the speed and spatial resolution of our system allow the motion of red blood cells to be discerned throughout the cortical vessels, thereby allowing the velocity of blood flow to be measured and mapped across the field of view using the same data.
In this article, we describe the design and implementation of our new multispectral imaging system and demonstrate how it can be used to quantitatively study the spatiotemporal dynamics of hemoglobin oxygenation, changes in blood vessel diameter, blood flow dynamics from the motion of red blood cells in superficial vessels of the exposed cortex, and local neuronal activity from the fluorescence dynamics of a calcium sensitive dye.
Our multispectral imaging system employs a Pantera 1M60 12-bit CCD camera (Dalsa) and a Solios XCL/eCL framegrabber in the single-Medium CameraLink configuration (Matrox). A CameraLink breakout box (CLB-501, Vivid Engineering) is used to access all CameraLink signals. Camera acquisition parameters, LED strobe synchronization, and image acquisition are controlled through a custom C image acquisition GUI which includes generic, off-the-shelf functions from the Matrox Imaging Library Lite 8.0 software library (Matrox). A 3X, long-working-distance optical zoom lens is used to provide flexibility in choosing the field of view (VSM 300, Edmund Optics).
To synchronize LED modulation with image acquisition we utilize the TTL exposure integration signal (EXSYNC) generated by the camera as the master clock for image acquisition (black arrows, Fig. 1). In Exposure Mode 4, the Pantera 1M60’s exposure time is driven by a TTL pulse train generated by a PCI clock card (PCI-6601, National Instruments) internal to the image acquisition computer (red dashed arrows, Fig. 1). This TTL pulse train instructs the camera when to integrate signal on its sensor. The EXSYNC signal, generated internally by the camera to indicate when it is actually integrating signal, is routed to the interrupt input ports on the microcontroller (Arduino Diecimila, Sparkfun Electronics). A custom loop function written to the microcontroller reads this TTL signal and serially strobes multiple LEDs locked to the edges of the EXSYNC signal (blue and green arrows, Fig. 1). The EXSYNC signal is accessed via the CameraLink breakout box. Through utilization of the camera-generated EXSYNC signal, the camera is allowed to free-run at the user-defined framerate during image acquisition, unhindered by any external synchronization issues. In addition, because illumination is locked precisely to each frame grab, our system does not require any form of shuttering of the CCD sensor to avoid streaking artifacts. This reduces the cost and complexity of the required CCD camera and reduces photobleaching and photodamage of the cortex by illuminating the sample for the minimum amount of time necessary.
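The round-robin sequencing that the microcontroller performs on each EXSYNC edge is simple enough to sketch. The following Python simulation is illustrative only (the actual firmware is an interrupt-driven loop on the Arduino, and the two-wavelength sequence and names here are our assumptions):

```python
# Simulate the microcontroller's strobe logic: on each rising edge of
# the camera's EXSYNC signal (one edge per frame grab), the next LED
# in the programmed sequence is switched on. Names are illustrative.

LED_SEQUENCE = ["470nm", "530nm"]  # illumination order, repeated frame pair by frame pair

def led_for_frame(frame_index, sequence=LED_SEQUENCE):
    """Return which LED is strobed for a given frame grab."""
    return sequence[frame_index % len(sequence)]

# Frames 0, 1, 2, 3 alternate between the two wavelengths:
schedule = [led_for_frame(i) for i in range(4)]
```

Because the sequence is just a list, arbitrary strobe patterns (e.g. adding a third red LED, or interleaving a fluorescence excitation frame) amount to editing one line.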
Currently, the system can acquire images at up to 220 fps (the camera’s maximum frame rate), with 128 x 128 pixels (8 x 8 binning), or 160 fps with 256 x 256 pixels (4 x 4 binning). These image acquisition speeds allow complete dual-spectral image sets (a set of separate 470 and 530 nm illumination images) to be acquired at 110 and 80 fps respectively.
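The relationship between camera frame rate and complete multispectral set rate stated above is a simple division, sketched here for clarity:

```python
# Each complete multispectral set needs one frame per wavelength, so
# the set rate is the camera frame rate divided by the number of
# multiplexed wavelengths.

def spectral_set_rate(camera_fps, n_wavelengths):
    return camera_fps / n_wavelengths

# 220 fps with two wavelengths gives 110 dual-spectral sets per second;
# 160 fps gives 80.
```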
Low-cost, high power LEDs have recently become commercially available which offer spectral bandwidths of approximately 25 – 75 nm and optical powers approaching 1 W. As Fig. 1 illustrates, two LEDs and a combining dichroic beamsplitter can be used to create a multiwavelength illuminator that does not require physical movement to change the illumination wavelength. The LEDs reach their full illumination intensity in < 25 μs, and can be modulated at rates up to 10 kHz with commercially available drivers (LEDD1, Thorlabs).
To quantify the concentrations of HbO2 and HbR we chose two LEDs with narrow bandpass filters centered at 470 and 530 nm (MBLED, MGLED, FB470-10, FB530-10, Thorlabs). We selected shorter wavelengths than are typically used in order to accentuate the contributions of superficial pial vessels. To simultaneously image total hemoglobin (HbT = HbO2 + HbR) and fluorescence from calcium sensitive dye Oregon Green 488 BAPTA-1 AM (Invitrogen), the 470 nm LED is replaced with a broadband ‘cyan’ LED fitted with a 460 ± 30 nm bandpass filter (corresponding to the excitation peak of Oregon Green), and a 500 nm long pass dichroic filter is placed in front of the camera (MCLED Thorlabs and FF01-460/60-20, BLP01-488R-25, Semrock). The 530 nm LED permits imaging of HbT responses since 530 nm is close to an isosbestic point for HbO2 and HbR. A third LED (e.g. a red MRLED, FB630-10 Thorlabs) can also be incorporated into the illuminator to allow separation of HbO2 and HbR signals as well as fluorescence imaging in parallel.
In-vivo cortical imaging experiments were performed using female Sprague-Dawley rats (300 ± 40 g) undergoing electrical hindpaw stimulation. Rats were prepared under isoflurane anesthesia (0.5-4% inhalation in oxygen) and then switched to intravenous alpha-chloralose (40 mg/kg/h) during stimulation and imaging. Animals were mechanically ventilated with a 3:1 ratio of air to oxygen. Femoral arterial and venous catheters were placed to allow continual blood pressure monitoring and delivery of intravenous alpha-chloralose. The head was then fixed in a stereotaxic frame and a portion of the skull (5 mm x 7 mm) was removed to expose the somatosensory cortex. The IVth ventricle was opened to relieve intra-cerebral pressure prior to removal of the dura mater. For fluorescence imaging, the calcium indicator Oregon Green 488 BAPTA-1AM was pressure injected into the cortex with glass pipettes (Picospritzer III, Parker Instrumentation). The dye was allowed to incubate for 1 hour prior to imaging. Dental acrylic was then used to seal a glass coverslip with a drop of agarose in artificial cerebrospinal fluid over the exposed region, creating a window for imaging.
Somatosensory pathways were activated via electrical hindpaw stimulation. Electrodes were placed on the right hindpaw and connected to an electrical stimulus unit (A360, WPI) delivering 3 ms pulses at 3 Hz with 1.0 ± 0.1 mA amplitude. Each imaging run consisted of 6 seconds of no stimulus, followed by 4 seconds of stimulus pulses, and then 12 seconds of no stimulus. During image acquisition, a stimulus control computer was used to synchronize electrical stimulation to image acquisition (stimulus trigger, red arrows, Fig. 1) and record blood pressure and ventilation signals (yellow arrows, Fig. 1). Image sets from 20 repeated stimulus runs were co-registered and averaged for data analysis. All animal procedures were approved by the Columbia University Institutional Animal Care and Use Committee.
Ultra-fast imaging of the exposed cortex with multispectral illumination allows us to visualize wide field dynamics of blood oxygenation in individual arteries, veins, and discrete regions of the parenchyma simultaneously in a single imaging run. We converted our 470 nm and 530 nm image sets to images of HbO2 and HbR using the Modified Beer-Lambert law and standard HbO2 and HbR absorption spectra. We utilized average path length estimates provided by a Monte Carlo simulation of light propagation in brain tissue.
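As a concrete illustration of this conversion, the sketch below solves the two-wavelength Modified Beer-Lambert system for changes in HbO2 and HbR at a single pixel. The extinction coefficients and path lengths are placeholder values for illustration, not those actually used in this work:

```python
import numpy as np

# Modified Beer-Lambert sketch: the change in optical density at each
# wavelength is modeled as
#   dOD(lam) = (eps_HbO2(lam)*dHbO2 + eps_HbR(lam)*dHbR) * L(lam)
# where L(lam) is a Monte-Carlo-derived mean path length. Inverting
# this 2x2 linear system at each pixel yields dHbO2 and dHbR.

EPS = np.array([[33209.0, 16156.0],    # eps at 470 nm: [HbO2, HbR] in 1/(cm*M), placeholder
                [39036.0, 39036.0]])   # eps at 530 nm (near-isosbestic), placeholder
PATH = np.array([0.04, 0.05])          # mean path lengths in cm, placeholder

def delta_hb(I_470, I_530, I0_470, I0_530):
    """Solve for (dHbO2, dHbR) at one pixel from measured intensities
    I and baseline intensities I0 at the two wavelengths."""
    d_od = -np.log(np.array([I_470 / I0_470, I_530 / I0_530]))
    A = EPS * PATH[:, None]            # per-wavelength effective extinction matrix
    return np.linalg.solve(A, d_od)
```

In practice the same solve is applied pixel-wise (or as one vectorized operation) across the whole image sequence, and dHbT is simply the sum of the two components.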
The high spatial and temporal resolution of our imaging system makes it possible to clearly visualize and measure the evolution of blood oxygenation in individual vessels. Figure 2 shows a gray scale image of an example field of view, and the resulting converted HbO2, HbR, and HbT images at t = 11 seconds, just following the end of stimulation. From these maps, we are able to see that the spatial distribution of HbO2, HbR, and HbT differs between the arteries, veins, and parenchyma. We can further explore temporal oxygenation dynamics by extracting the time courses of HbO2, HbR, and HbT concentrations from the whole region (Fig. 2) or from specific regions such as arterioles, veins, and the parenchyma. Arterial, venous, and parenchymal regions can be readily distinguished based on their differing baseline oxygenation properties. The RGB image in Fig. 3(a) was generated by creating a bitmap whose red channel is the ratio of the 530 nm and 470 nm baseline images, whose green channel is the 530 nm baseline image, and whose blue channel is the ratio of the 470 nm and 530 nm baseline images. This image shows arteries in red, veins in purple, and parenchyma in green and yellow.
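A sketch of how such a ratio bitmap might be assembled is shown below. The array names are hypothetical and the per-channel normalization is our assumption (the text does not specify one):

```python
import numpy as np

# Build the vessel-type RGB map described above: red = 530/470 baseline
# ratio, green = 530 nm baseline, blue = 470/530 ratio. Each channel is
# rescaled to [0, 1] for display.

def vessel_type_rgb(base_470, base_530, eps=1e-9):
    def norm(x):
        x = x.astype(float)
        return (x - x.min()) / (x.max() - x.min() + eps)
    return np.stack([norm(base_530 / (base_470 + eps)),   # arteries bright
                     norm(base_530),                       # parenchyma green/yellow
                     norm(base_470 / (base_530 + eps))],   # veins toward purple
                    axis=-1)                               # shape (H, W, 3)
```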
Our new optical imaging system allows us to take fast, wide field images of cortical surface vessels with sufficient resolution to quantify not only changes in blood oxygenation, but also vessel diameter and blood flow velocity simultaneously across the entire field of view. This makes it possible to capture many aspects of vascular dynamics from a single imaging data set. Figure 3 demonstrates our ability to zoom in on different regions of the wide field image to extract oxygenation, vasomotor, and blood flow dynamics on a data set collected at 60 fps.
Close up HbO2 images in Fig. 3(b) show the mixing of blood from two vein branches with different oxygenation levels. From the oxygenation maps in Fig. 2, we note that the more oxygenated blood is draining from the center of the activated parenchyma, while the blood draining from regions outside of the activated area is less oxygenated relative to baseline.
In addition to spectroscopic analysis of individual vessels, we are also able to track changes in vessel diameter during the stimulus run using the 530 nm data set (Figs. 3(c) - 3(f)). The cross-sectional diameters shown by the lines in Fig. 3(c) are plotted over time in Fig. 3(d) and were calculated from the normalized full width at half minimum of the image intensity along a line perpendicular to the vessel at each point. Comparing these vessels with the ratio map in Fig. 3(a), we see that the arteries exhibit a significant vasodilation whereas the veins do not.
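The full-width-at-half-minimum measurement can be sketched as follows. This is a simplified pixel-level version; a real implementation would likely add sub-pixel interpolation of the half-minimum crossings:

```python
import numpy as np

# Estimate vessel diameter from an intensity profile sampled
# perpendicular to the vessel. The vessel absorbs strongly at 530 nm
# and so appears dark: the width is measured where intensity falls
# below the midpoint between the background maximum and the minimum.

def fwhm_dark(profile):
    p = np.asarray(profile, dtype=float)
    half = (p.max() + p.min()) / 2.0
    below = np.where(p < half)[0]            # indices inside the dark vessel
    if below.size == 0:
        return 0.0                           # no vessel crossing found
    return float(below[-1] - below[0] + 1)   # width in pixels
```

Multiplying the pixel width by the calibrated pixel size, and repeating the calculation for each frame, gives the diameter time courses of Fig. 3(d).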
The high spatial and temporal resolution of our data also allows us to extract blood flow velocity in surface veins from the movement of red blood cells in the blood plasma. Figure 3(e) shows a line segment along a vein that is plotted over time. The dark stripes correspond to red blood cells traveling along the length of the vessel, and the slope of the lines is proportional to the speed at which the red blood cells are traveling. Simple correlation analysis of these space-time plots permits accurate calculation of both the speed and direction of flow in any selected vessel segment. Flow dynamics in two separate vein branches from a single run are shown in Fig. 3(f), indicating faster flow in the smaller vein. These analyses can similarly be applied to all vessels in the field of view to provide information on the spatial extent of diameter and flow dynamics. Our data sets are also suitable for wide field optical flow tracking algorithms.
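A minimal version of this correlation analysis is sketched below (pixel pitch, frame rate, and names are illustrative): the intensity profiles along a vessel segment at two successive frames are cross-correlated, and the lag of the correlation peak gives the displacement of the red-blood-cell pattern between frames:

```python
import numpy as np

# Cross-correlate the along-vessel intensity profile at two successive
# frames; the signed lag of the correlation peak is the pixel shift of
# the red-blood-cell pattern, which converts directly to velocity.

def rbc_shift(profile_t0, profile_t1):
    a = profile_t0 - np.mean(profile_t0)
    b = profile_t1 - np.mean(profile_t1)
    corr = np.correlate(b, a, mode="full")
    return int(np.argmax(corr)) - (len(a) - 1)  # signed shift in pixels

def velocity(shift_px, pixel_size_um, frame_rate_hz):
    """Velocity in um/s; sign gives flow direction along the segment."""
    return shift_px * pixel_size_um * frame_rate_hz
```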
Figure 4 demonstrates our system’s ability to measure both hemodynamic and fluorescence dynamics in parallel. Figure 4(a) shows images of the exposed cortex injected with fluorescent calcium indicator Oregon Green 488 BAPTA 1-AM under 530 nm and 490 nm illumination (with a 500 nm long pass filter in front of the camera). Time courses of both calcium sensitive dye fluorescence and 530 nm reflectance for a selected region of the parenchyma can be extracted and plotted together to show the temporal relationship between neuronal activity and the hemodynamic response (Fig. 4(b)).
From these time courses, we can see rapid changes in calcium sensitive dye fluorescence that correspond to each electrical pulse delivered to the hindpaw at 3 Hz, as well as the slow hemodynamic response that reaches its maximum amplitude at the end of the 4 second stimulation period. The high temporal resolution of our system makes it possible to visualize the spatiotemporal evolution of both calcium and HbT dynamics on drastically different time scales. Figure 4(c) shows images of the full field of view during the evolution of a single calcium spike (over 270 ms) and the entire hemodynamic HbT response (over 6 seconds). For these images, the 530 nm signal was converted into ΔHbT as described above (under the approximation that 530 nm is an isosbestic point for oxy- and deoxyhemoglobin). We note that the region of greatest activity during a calcium spike co-localizes well with the region of greatest HbT change. This data set was collected with two LEDs at 30 fps with a 25 ms exposure time.
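Under the isosbestic approximation mentioned above, the 530 nm conversion reduces to a scalar Beer-Lambert inversion. The sketch below uses placeholder extinction and path-length values, not those used in this work:

```python
import numpy as np

# If eps_HbO2(530) = eps_HbR(530) = EPS_530, the 530 nm optical density
# change depends only on total hemoglobin:
#   dHbT = -ln(I/I0) / (EPS_530 * L_530)

EPS_530 = 39036.0   # extinction coefficient in 1/(cm*M), placeholder
L_530 = 0.05        # Monte-Carlo mean path length in cm, placeholder

def delta_hbt(I_530, I0_530):
    """Change in total hemoglobin concentration (M) at one pixel."""
    return -np.log(I_530 / I0_530) / (EPS_530 * L_530)
```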
Our new spectroscopic optical imaging system design allows high-speed digital cameras to acquire images at their maximum frame rates, unhindered by external synchronization limitations. By utilizing rapidly modulatable LEDs and a high-speed microcontroller-based modulation circuit, our system is able to generate arbitrary high-speed strobe patterns with no moving parts. The total cost of our system was approximately $12,000, which includes the computer and the ~$10,000 cost of the camera and frame grabber.
We have successfully tested the ability of the microcontroller to generate strobe signals up to speeds of 10 kHz, indicating that with little modification, our system design can employ the latest generation of super high-speed cameras (with frame rates and resolutions well exceeding 1 kHz and 1 megapixel, respectively). Maximum frame rates need only be limited by the ability of image acquisition computers to transfer acquired images to disk. This rate is determined by the bus speeds, system memory and hard disk write speeds, all of which are continually improving. The simplicity of our LED-based illuminator design, and the flexibility of the microcontroller software allow for additional illumination wavelengths to be rapidly introduced to the multispectral imaging system at low cost. The ultimate physical speed limit will be dictated by the number of available photons and our ability to collect them.
In summary, we have presented a new low-cost approach to high speed multispectral optical imaging. We have demonstrated in-vivo data acquisition and image analysis techniques that can exploit such high-speed data, including the ability to quantify oxygenation, blood flow, and intracellular calcium dynamics in the living rodent cortex.
We thank Yevgeniy Sirotin and Aniruddha Das for their contributions to this work. Funding was provided by the following NIH grants: (NINDS) R21NS053684 and R01NS063226, (NEI) R01EY019500 and (NCI) U54CA126513, and the Human Frontier Science Program. M. Bouchard and B. Chen are funded by NDSEG and NSF fellowships. Matthew Bouchard and Brenda Chen contributed equally to this work.
OCIS codes: (170.0110) Imaging systems; (170.3890) Medical optics instrumentation; (100.2960) Image analysis.