Intrinsic Optical Signal (IOS) imaging is a widely accepted technique for imaging brain activity. We propose an integrated device consisting of interleaved arrays of gallium arsenide (GaAs) based semiconductor light sources and detectors operating at telecommunications wavelengths in the near-infrared. Such a device will allow for long-term, minimally invasive monitoring of neural activity in freely behaving subjects, and will enable the use of structured illumination patterns to improve system performance. In this work we describe the proposed system and show that near-infrared IOS imaging at wavelengths compatible with semiconductor devices can produce physiologically significant images in mice, even through the skull.
Optical imaging of neural activity is a widely accepted technique for imaging brain function in the field of neuroscience research, and has been used to study the cerebral cortex for nearly two decades. Maps of brain activity are obtained by monitoring intensity changes in back-scattered light, called Intrinsic Optical Signals (IOS), that correspond to fluctuations in blood oxygenation and volume associated with neural activity. Current imaging systems typically employ benchtop equipment including lamps and CCD cameras to study animals using visible light. Such systems require the use of anesthetized or immobilized subjects with craniotomies, which imposes limitations on the behavioral range and duration of studies. A monolithically integrated sensor using arrays of sources and detectors (Fig. 1) operating in the near-infrared (NIR) region would overcome these limitations and enable long-term study of freely-behaving (un-anesthetized) subjects. The use of near-infrared wavelengths is also significant because it enables the use of low-cost semiconductor materials, such as gallium arsenide (GaAs), which are widely used in optical communications.
To develop such a system, it is necessary to study the characteristics of IOS at far-red and near-infrared wavelengths. In particular, we are interested in the relative intensity change of the backscattered light (ΔR/R) from a given region of the brain between periods of activity and inactivity. The relative intensity change is influenced by the absorption of light by blood, which is dominated by oxyhemoglobin, deoxyhemoglobin and water. Visible wavelengths typically used for IOS imaging (510 to 650 nm) are absorbed more strongly than NIR light (690 nm to 850 nm), and thus produce stronger intensity changes [2-4]. However, higher absorption also limits penetration depth, requiring visible light IOS images to be taken through a craniotomy. NIR light encounters significantly less absorption than visible light, allowing for the possibility of IOS imaging through the skull. The last several years have seen considerable interest in the use of NIR light for minimally invasive imaging of human and animal subjects using fiber-based systems [5-9].
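The quantity of interest above, ΔR/R, is simply the per-pixel fractional change in backscattered intensity between active and resting periods. A minimal numpy sketch (array names and values are illustrative assumptions, not data from this study):

```python
import numpy as np

def relative_intensity_change(r_active, r_rest):
    """Per-pixel relative intensity change dR/R between activity and rest.

    r_active, r_rest: 2-D arrays of backscattered intensity (same shape).
    A small epsilon guards against division by zero in dark pixels.
    """
    r_active = np.asarray(r_active, dtype=float)
    r_rest = np.asarray(r_rest, dtype=float)
    return (r_active - r_rest) / (r_rest + 1e-12)

# Hypothetical values: a 0.1% darkening during activity, which is the
# order of magnitude typically reported for intrinsic optical signals.
rest = np.full((4, 4), 1000.0)
active = rest * 0.999
print(relative_intensity_change(active, rest).mean())  # ≈ -0.001
```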
One disadvantage of reduced absorption is that photons will experience more scattering events before being detected. This raises the background intensity and delocalizes the signals, making them more difficult to detect and maps more difficult to resolve. This work seeks to understand the implications of imaging IOS at near-infrared wavelengths by comparing maps taken at selected wavelengths between 610 nm (red) and 850 nm (NIR). Additionally, we seek to observe and understand the effects of scattering through the skull by comparing maps taken with and without the skull removed.
The sensor we envision is depicted in Fig. 1. It is an interleaved array of GaAs light emitters (lasers or LEDs) and photodiode detectors. Each of the emitters and detectors will be individually addressable in order to enable structured illumination. This system can be embedded on or in the skull as illustrated in Fig. 2, and combined with wireless telemetry (Fig. 3) to allow for the continuous observation of freely behaving subjects.
The ability to continuously observe freely behaving subjects over long periods of time is significant for the field of neuroscience because much of our current knowledge of brain function is derived from intermittent observations of anesthetized or immobilized subjects. The ability to make continuous observations of freely behaving subjects will allow the neuroscience community to answer questions that cannot be addressed using conventional techniques. Such questions include those involving the progression of disease over time, the effect of drugs on brain function, and the interaction between sensory and motor function in the brain.
The use of structured illumination as opposed to flood illumination also offers significant advantages. Point sources, such as lasers, can increase the resolution and penetration depth of imaging systems, and periodic patterns enable the user to control the penetration depth. In addition, multiple light sources and multiple detectors allow the use of temporally phased illumination, and may allow the use of sophisticated signal processing algorithms to further enhance performance.
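The periodic, temporally phased patterns described above could be generated by driving the individually addressable emitters sinusoidally. A minimal sketch, assuming a 1-D emitter row and a three-step phase sequence (the function name and parameters are hypothetical, not part of the proposed device's interface):

```python
import numpy as np

def periodic_pattern(n_emitters, spatial_period, phase=0.0):
    """Drive levels (0..1) for a row of emitters forming a sinusoidal pattern.

    n_emitters: number of emitters along one axis of the array.
    spatial_period: pattern period, in units of the emitter pitch.
    phase: temporal phase offset (radians) for phase-stepped acquisition.
    """
    x = np.arange(n_emitters)
    return 0.5 * (1.0 + np.cos(2 * np.pi * x / spatial_period + phase))

# Three patterns stepped by 120 degrees, as in phase-stepped structured
# illumination; their sum is spatially uniform (equivalent to flood
# illumination), since the three cosines cancel at every emitter.
patterns = [periodic_pattern(16, 8.0, p)
            for p in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
print(np.allclose(sum(patterns), 1.5))  # True
```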
Our work uses an established imaging procedure published by Kalatsky and Stryker. Animal imaging was performed at the University of California, San Francisco (UCSF) according to a protocol approved by the UCSF Institutional Animal Care and Use Committee. The experimental setup is shown in Fig. 4. An anesthetized C57BL6 wild-type mouse is given a visual stimulus from a computer monitor consisting of a horizontal white stripe on a grey background (50% contrast). Images are obtained by illuminating the visual cortex at 610 nm, 690 nm, 750 nm, 775 nm, or 850 nm. Light from a tungsten lamp is filtered at a given wavelength using interference filters with a FWHM bandwidth of 10 nm and delivered via an optical fiber. Images without a craniotomy (but with the scalp removed) are taken first; then a small section of skull above the visual cortex is removed and images are taken again. The stimulus is swept repeatedly in elevation across the visual field at a frequency of 0.125 Hz for 90 cycles (Fig. 5), with a DALSA 1M30 CCD camera capturing images at 30 frames per second. The stimulus is then swept in the opposite direction and images taken again. Signals recorded from the two sweeps are then subtracted to remove shifts caused by hemodynamic delay.
After the images are recorded, a Fourier transform in time is performed for each pixel, and the signal is filtered for components at the sweep frequency and normalized to improve the signal to noise ratio (Fig. 6). The result is two maps, one of signal amplitude, indicating the relative intensity change, and one of phase, corresponding to the position of the stimulus within the visual field. Animals are euthanized after the entire set of maps is taken.
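The per-pixel Fourier analysis above can be sketched in a few lines of numpy. This is an illustrative reconstruction of the style of analysis described (the normalization by the DC term and the phase-combination step are assumptions; the exact processing used by Kalatsky and Stryker may differ):

```python
import numpy as np

def fourier_maps(frames, frame_rate, stim_freq):
    """Amplitude and phase maps from a periodic-stimulus image sequence.

    frames: array of shape (n_frames, H, W) of camera frames.
    frame_rate: frames per second (30 in the text).
    stim_freq: stimulus sweep frequency in Hz (0.125 in the text).

    Each pixel's time course is Fourier-transformed; the component at the
    stimulus frequency gives the response. Amplitude is normalized by the
    pixel's mean (DC) intensity so it reads as a relative change (dR/R).
    """
    frames = np.asarray(frames, dtype=float)
    n = frames.shape[0]
    spectrum = np.fft.fft(frames, axis=0)
    k = int(round(stim_freq * n / frame_rate))  # FFT bin of the sweep frequency
    component = spectrum[k]
    dc = np.abs(spectrum[0]) + 1e-12
    amplitude = 2.0 * np.abs(component) / dc    # relative modulation depth
    phase = np.angle(component)                 # stimulus position in the field
    return amplitude, phase

def stimulus_position_phase(phase_fwd, phase_rev):
    """Combine phase maps from opposite sweep directions.

    The common hemodynamic delay appears with the same sign in both maps
    and cancels in the half-difference; one way to implement the
    subtraction described in the text. Wrapped onto (-pi, pi].
    """
    return np.angle(np.exp(1j * (phase_fwd - phase_rev) / 2.0))
```

For example, a synthetic sequence whose pixels oscillate at 0.125 Hz with a 0.2% modulation depth recovers amplitude ≈ 0.002 and the injected phase.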
The maps obtained without craniotomy (i.e. through the skull) and with craniotomy are shown in Fig. 7 and Fig. 8, respectively. The lower maps show amplitude, with darker regions indicating stronger activity. The primary visual cortex is located in the dark region shown in the amplitude maps. The upper maps show phase, with similar colors indicating similar phase, thus highlighting regions that are active at the same time. The functional organization of the visual cortex can be seen clearly in the phase maps, where well-defined areas of similar color indicate that neurons responsible for a given area of the visual field are grouped together. Fig. 8 shows some disorganization in the maps taken at 690 nm and 750 nm, which we believe is due to fluctuations in the level of anesthesia. Despite this, we can still conclude that maps obtained through the skull are more diffuse than those taken through craniotomies, but still show distinct features of cortical function and can be used for neuroscience research.
Each set of maps from left to right in Fig. 7 and Fig. 8 was taken at progressively longer wavelengths. It is evident from the fading black region in the amplitude maps that the signal-to-background ratio decreases as wavelength moves from visible to NIR. Fig. 9 shows that this is due to degradation in detected signal level rather than an increase in background.
In the study with craniotomy, the signal level at 850 nm increases, which is inconsistent with the trend observed in the study without craniotomy. We believe this is due to a slight decline in the effectiveness of the anesthetic toward the end of the imaging session, which caused a stronger response. This multi-wavelength, no-craniotomy/craniotomy study required an imaging session nearly eight hours long. Typical single-wavelength imaging sessions last less than two hours (including time required for image processing).
In addition to declining signal-to-background ratio, the spatial character of the signal becomes delocalized, as shown by degradation in the definition of the phase map with increasing wavelength. Decreased signal-to-background ratio and increasing delocalization are consistent with reduced absorption. Photons experience many scattering events and intermingle with those from neighboring regions of the brain, leading to a lower detected signal and more diffuse phase maps.
We describe a proposed integrated semiconductor sensor for minimally invasive imaging of brain function using near-infrared light. We have shown that it is possible to obtain images useful for studying brain physiology through the skull of mice at wavelengths compatible with GaAs-based semiconductor sources and detectors. Future research will seek to better understand the propagation of spatially modulated light through brain tissue in order to optimize the sensor design. We will also seek to improve the temporal resolution of this technique in order to observe faster hemodynamic phenomena.
This work was supported in part by the U.S. National Science Foundation under Grant BES-0423076, by the Center for Integrated Systems at Stanford University and by the Stanford University Office of Technology Licensing Birdseed Fund.