Conf Proc IEEE Eng Med Biol Soc. Author manuscript; available in PMC 2008 October 6.
PMCID: PMC2562279

Integrated Semiconductor Optical Sensors for Chronic, Minimally-Invasive Imaging of Brain Function

Thomas T. Lee, Student Member, IEEE, is with the Department of Electrical Engineering, Stanford University, Stanford, CA 94303 USA (phone: 650-725-6970; fax: 650-725-4659; e-mail: leet@stanford.edu).
Ofer Levi, Member, IEEE, is with the Departments of Electrical Engineering and Applied Physics, Stanford University, Stanford, CA 94305 USA (e-mail: levi@snow.stanford.edu).
Jianhua Cang, Senior Member, IEEE, is with the Department of Physiology, University of California, San Francisco, CA 94143 USA (e-mail: cang@phy.ucsf.edu).
Megumi Kaneko, Senior Member, IEEE, is with the Department of Physiology, University of California, San Francisco, CA 94143 USA (e-mail: mekaneko@phy.ucsf.edu).
Michael P. Stryker, Senior Member, IEEE, is with the Department of Physiology, University of California, San Francisco, CA 94143 USA (e-mail: stryker@phy.ucsf.edu).
Stephen J. Smith, Senior Member, IEEE, is with the Department of Molecular and Cellular Physiology and the Neurosciences Program, Stanford University, Stanford, CA 94305 USA (e-mail: sjsmith@stanford.edu).
Krishna V. Shenoy, Senior Member, IEEE, is with the Department of Electrical Engineering and the Neurosciences Program, Stanford University, Stanford, CA 94305 USA (e-mail: shenoy@stanford.edu).
James S. Harris, Jr., Fellow, IEEE.


Abstract

Intrinsic Optical Signal (IOS) imaging is a widely accepted technique for imaging brain activity. We propose an integrated device consisting of interleaved arrays of gallium arsenide (GaAs) based semiconductor light sources and detectors operating at telecommunications wavelengths in the near-infrared. Such a device will allow long-term, minimally invasive monitoring of neural activity in freely behaving subjects, and will enable the use of structured illumination patterns to improve system performance. In this work we describe the proposed system and show that near-infrared IOS imaging at wavelengths compatible with semiconductor devices can produce physiologically significant images in mice, even through the skull.

I. Introduction

Optical imaging of neural activity is a widely accepted technique for imaging brain function in neuroscience research, and has been used to study the cerebral cortex for nearly two decades [1]. Maps of brain activity are obtained by monitoring intensity changes in back-scattered light, called Intrinsic Optical Signals (IOS), that correspond to fluctuations in blood oxygenation and volume associated with neural activity. Current imaging systems typically employ benchtop equipment, including lamps and CCD cameras, to study animals using visible light. Such systems require anesthetized or immobilized subjects with craniotomies, which limits the behavioral range and duration of studies. A monolithically integrated sensor using arrays of sources and detectors (Fig. 1) operating in the near-infrared (NIR) region would overcome these limitations and enable long-term study of freely behaving (un-anesthetized) subjects. The use of near-infrared wavelengths is also significant because it enables the use of low-cost semiconductor materials, such as gallium arsenide (GaAs), which are widely used in optical communications.

Fig. 1
Schematic depiction of sensor array with emitters activated in a hypothetical structured illumination pattern.

To develop such a system, it is necessary to study the characteristics of IOS at far-red and near-infrared wavelengths. In particular, we are interested in the relative intensity change of the backscattered light (ΔR/R) from a given region of the brain between periods of activity and inactivity. The relative intensity change is influenced by the absorption of light by blood, which is dominated by oxyhemoglobin, deoxyhemoglobin and water. Visible wavelengths typically used for IOS imaging (510 to 650 nm) are absorbed more strongly than NIR light (690 nm to 850 nm), and thus produce stronger intensity changes [2-4]. However, higher absorption also limits penetration depth, requiring visible-light IOS images to be taken through a craniotomy. NIR light encounters significantly less absorption than visible light, allowing for the possibility of IOS imaging through the skull. The last several years have seen considerable interest in the use of NIR light for minimally invasive imaging of human and animal subjects using fiber-based systems [5-9].
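As a concrete illustration of the quantity of interest, ΔR/R can be computed as the fractional change in back-scattered intensity between frames recorded during activity and a baseline. This is a minimal sketch, not code from the paper; the helper name and the synthetic values are invented:

```python
import numpy as np

def relative_intensity_change(active, baseline):
    """ΔR/R: fractional change in back-scattered intensity
    between active-period and baseline frames (hypothetical helper)."""
    active = np.asarray(active, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    return (active - baseline) / baseline

# Synthetic example: a 1% intensity dip over an "active" region,
# as increased absorption reduces reflectance there.
baseline = np.full((4, 4), 1000.0)
active = baseline.copy()
active[1:3, 1:3] *= 0.99
dRR = relative_intensity_change(active, baseline)
print(dRR[2, 2])   # -0.01
```

Negative values correspond to darkening (increased absorption), which is why active cortex appears dark in the amplitude maps discussed later.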

One disadvantage of reduced absorption is that photons experience more scattering events before being detected. This raises the background intensity and delocalizes the signals, making them more difficult to detect and the maps more difficult to resolve. This work seeks to understand the implications of imaging IOS at near-infrared wavelengths by comparing maps taken at selected wavelengths between 610 nm (red) and 850 nm (NIR). Additionally, we seek to observe and understand the effects of scattering through the skull by comparing maps taken with and without the skull removed.


The sensor we envision is depicted in Fig. 1. It is an interleaved array of GaAs light emitters (lasers or LEDs) and photodiode detectors. Each of the emitters and detectors will be individually addressable in order to enable structured illumination. This system can be embedded on or in the skull as illustrated in Fig. 2, and combined with wireless telemetry (Fig. 3) to allow for the continuous observation of freely behaving subjects.

Fig. 2
Placement of sensor in skull relative to other cranial tissues.
Fig. 3
Long-term monitoring of freely behaving rodent subject using proposed sensor and wireless telemetry

The ability to continuously observe freely behaving subjects over long periods of time is significant for the field of neuroscience because much of our current knowledge of brain function is derived from intermittent observations of anesthetized or immobilized subjects. The ability to make continuous observations of freely behaving subjects will allow the neuroscience community to answer questions that cannot be addressed using conventional techniques. Such questions include those involving the progression of disease over time, the effect of drugs on brain function, and the interaction between sensory and motor function in the brain.

The use of structured illumination as opposed to flood illumination also offers significant advantages. Point sources, such as lasers, can increase the resolution and penetration depth of imaging systems [10] and periodic patterns enable the user to control the penetration depth [11]. In addition, multiple light sources and multiple detectors allow the use of temporally phased illumination, and may allow the use of sophisticated signal processing algorithms to further enhance performance.
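A minimal sketch of how a periodic on/off pattern might be selected for an individually addressable emitter array. The function name, array size, and stripe geometry are assumptions for illustration, not a specification from the paper; the idea follows [11], where the spatial frequency of the pattern controls the effective penetration depth:

```python
import numpy as np

def periodic_pattern(n_emitters, period):
    """Boolean on/off mask for an n x n emitter array:
    vertical stripes with the given period (in emitter pitches),
    50% duty cycle."""
    cols = np.arange(n_emitters)
    on = (cols % period) < (period // 2)
    return np.tile(on, (n_emitters, 1))

# An 8 x 8 array driven with stripes of period 4:
mask = periodic_pattern(8, 4)
print(mask[0].astype(int))   # [1 1 0 0 1 1 0 0]
```

Varying `period` across acquisitions would sample different effective depths; flood illumination is the special case where every emitter is on.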


A. Materials and Methods

Our work uses an established imaging procedure published by Kalatsky and Stryker [12]. Animal imaging was performed at the University of California, San Francisco (UCSF) according to a protocol approved by the UCSF Institutional Animal Care and Use Committee. The experimental setup is shown in Fig. 4. An anesthetized C57BL6 wild-type mouse is given a visual stimulus from a computer monitor consisting of a horizontal white stripe on a grey background (50% contrast). Images are obtained by illuminating the visual cortex at 610 nm, 690 nm, 750 nm, 775 nm, or 850 nm. Light from a tungsten lamp is filtered at a given wavelength using interference filters with a FWHM bandwidth of 10 nm and delivered via an optical fiber. Images without a craniotomy (but with the scalp removed) are taken first; then a small section of skull above the visual cortex is removed and images are taken again. The stimulus is swept repeatedly in elevation across the visual field at a frequency of 0.125 Hz for 90 cycles (Fig. 5), with a DALSA 1M30 CCD camera capturing images at 30 frames per second. The stimulus is then swept in the opposite direction and images taken. Signals recorded from the two sweeps are then subtracted to remove shifts caused by hemodynamic delay.

Fig. 4
Imaging setup. A C57BL6 mouse is placed under an illuminating fiber and CCD camera. Illumination is provided by a tungsten lamp. A cover slip supported by agarose gel presents a flat surface for imaging.
Fig. 5
Visual stimulus. A horizontal white line is swept repeatedly in elevation across a black background with a period of eight seconds.

After the images are recorded, a Fourier transform in time is performed for each pixel, and the signal is filtered for components at the sweep frequency and normalized to improve the signal-to-noise ratio (Fig. 6). The result is two maps: one of signal amplitude, indicating the relative intensity change, and one of phase, corresponding to the position of the stimulus within the visual field. Animals are euthanized after the entire set of maps is taken.

Fig. 6
Image processing. A pixel-by-pixel FFT is used to select only the component at the stimulus frequency, thereby filtering out noise from other physiological and physical processes.
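The pixel-wise Fourier analysis described above can be sketched as follows. This is a hedged illustration of the extraction step only, assuming a (time, height, width) image stack; the function and variable names are invented, and the normalization is one common convention, not necessarily the one used in [12]:

```python
import numpy as np

def fourier_maps(frames, frame_rate, stim_freq):
    """frames: (T, H, W) image stack.
    Returns (amplitude, phase) maps at the stimulus frequency."""
    T = frames.shape[0]
    spectrum = np.fft.rfft(frames, axis=0)      # per-pixel FFT in time
    k = int(round(stim_freq * T / frame_rate))  # bin nearest the stimulus frequency
    component = spectrum[k]
    amplitude = 2.0 * np.abs(component) / T     # recover oscillation amplitude
    phase = np.angle(component)                 # encodes stimulus position
    return amplitude, phase

# Synthetic check: one pixel oscillating at the 0.125 Hz stimulus
# frequency, recorded at 30 frames/s for 90 cycles as in the paper.
frame_rate, stim_freq = 30.0, 0.125
t = np.arange(int(90 / stim_freq * frame_rate)) / frame_rate
frames = np.full((t.size, 2, 2), 1000.0)
frames[:, 0, 0] += 5.0 * np.cos(2 * np.pi * stim_freq * t)
amp, ph = fourier_maps(frames, frame_rate, stim_freq)
print(round(amp[0, 0], 2))   # 5.0 at the responding pixel; ~0 elsewhere
```

Because only the bin at the sweep frequency is kept, slow hemodynamic drifts and heartbeat/respiration components at other frequencies are rejected, which is the noise-filtering effect shown in Fig. 6.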

B. Results and Discussion

The maps obtained without a craniotomy (i.e., through the skull) and with a craniotomy are shown in Fig. 7 and Fig. 8, respectively. The lower maps show amplitude, with darker regions indicating stronger activity. The primary visual cortex is located in the dark region shown in the amplitude maps. The upper maps show phase, with similar colors indicating similar phase, thus highlighting regions that are active at the same time. The functional organization of the visual cortex can be seen clearly in the phase maps, where well-defined areas of similar color indicate that neurons responsible for a given area of the visual field are grouped together. Fig. 8 shows some disorganization in the maps taken at 690 nm and 750 nm, which we believe is due to fluctuations in the level of anesthesia. Despite this, we can still conclude that maps obtained through the skull are more diffuse than those taken through craniotomies, but still show distinct features of cortical function and can be used for neuroscience research.

Fig. 7
Images taken without craniotomy (i.e., through the skull). Signal-to-background ratio decreases and maps become more diffuse as wavelength increases. Signal is measured in areas denoted by red circles and background level is measured over areas denoted by blue circles. ...
Fig. 8
Images taken with craniotomy. Signal is stronger and maps more sharply defined than without craniotomy. Signal is measured in areas denoted by red circles and background level is measured over areas denoted by blue circles.

Each set of maps from left to right in Fig. 7 and Fig. 8 was taken at progressively longer wavelengths. It is evident from the fading black region in the amplitude maps that the signal-to-background ratio decreases as wavelength moves from visible to NIR. Fig. 9 shows that this is due to degradation in detected signal level rather than an increase in background.

Fig. 9
Signal-to-background analysis. Decreasing signal-to-background ratio is due to decreasing signal rather than increasing background. The increase at 850 nm in the craniotomy case is attributed to a slight decline in effectiveness of anesthesia. Error bars ...

In the study with craniotomy, the signal level at 850 nm increases, which is inconsistent with the trend observed in the study without craniotomy. We believe this is due to a slight decline in the effectiveness of the anesthetic toward the end of the imaging session, which caused a stronger response. This multi-wavelength, no-craniotomy/craniotomy study required an imaging session nearly eight hours long. Typical single-wavelength imaging sessions last less than two hours (including time required for image processing).
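The signal-to-background comparison over circular regions (the red and blue circles of Figs. 7-9) can be sketched as below. The ROI centers, radii, and synthetic amplitude map are invented for illustration; the paper does not specify these values:

```python
import numpy as np

def circular_mean(img, center, radius):
    """Mean of img over a circular region of interest."""
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    return img[mask].mean()

# Synthetic amplitude map: a strongly responding patch on a flat background.
amplitude = np.ones((100, 100))
amplitude[40:60, 40:60] = 4.0
signal = circular_mean(amplitude, (50, 50), 8)      # "red circle" over active cortex
background = circular_mean(amplitude, (15, 15), 8)  # "blue circle" over quiet tissue
print(signal / background)   # 4.0
```

Tracking this ratio across wavelengths, as in Fig. 9, separates a falling signal from a rising background, since the two ROI means are reported independently.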

In addition to declining signal-to-background ratio, the spatial character of the signal becomes delocalized, as shown by degradation in the definition of the phase map with increasing wavelength. Decreased signal-to-background ratio and increasing delocalization are consistent with reduced absorption. Photons experience many scattering events and intermingle with those from neighboring regions of the brain, leading to a lower detected signal and more diffuse phase maps.


We have described a proposed integrated semiconductor sensor for minimally invasive imaging of brain function using near-infrared light. We have shown that it is possible to obtain images useful for studying brain physiology through the skull of mice at wavelengths compatible with GaAs-based semiconductor sources and detectors. Future research will seek to better understand the propagation of spatially modulated light through brain tissue in order to optimize the sensor design. We will also seek to improve the temporal resolution of this technique in order to observe faster hemodynamic phenomena.


Acknowledgment

This work was supported in part by the U.S. National Science Foundation under Grant BES-0423076, by the Center for Integrated Systems at Stanford University, and by the Stanford University Office of Technology Licensing Birdseed Fund.


References

1. Grinvald A, Lieke E, Frostig RD, Gilbert CD, Wiesel TN. Functional architecture of cortex revealed by optical imaging of intrinsic signals. Nature. 1986;324:361–364. [PubMed]
2. Prahl S. Optical Absorption of Hemoglobin. Accessed September 15, 2005.
3. Shtoyerman E, Arieli A, Slovin H, Vanzetta I, Grinvald A. Long-term optical imaging and spectroscopy reveal mechanisms underlying the intrinsic signal and stability of cortical maps in V1 of behaving monkeys. Journal of Neuroscience. 2000;20:8111–8121. [PubMed]
4. Dunn A, Devor A, Dale A, Boas D. Spatial extent of oxygenation metabolism and hemodynamic changes during functional activation of rat somatosensory cortex. NeuroImage. 2005;27:279–290. [PubMed]
5. Franceschini M, Boas D. Noninvasive measurement of neuronal activity with near-infrared optical imaging. NeuroImage. 2004;21:372–386. [PubMed]
6. Gibson A, Austin T, Everdell N, Schweiger M, Arridge S, Meek J, Wyatt J, Delpy D, Hebden J. Three-dimensional whole-head optical tomography of passive motor-evoked responses in the neonate. NeuroImage. 2006;30:521–528. [PubMed]
7. Hebden J, Gibson A, Austin T, Yusof R, Everdell N, Delpy D, Arridge S, Meek J, Wyatt J. Imaging changes in blood volume and oxygenation in the newborn infant brain using three-dimensional optical tomography. Physics in Medicine and Biology. 2004;49:1117–1130. [PubMed]
8. Morren G, Wolf U, Lemmerling P, Wolf M, Choi J, Gratton E, De Lathauwer L, Van Huffel S. Detection of fast neuronal signals in the motor cortex from functional near-infrared spectroscopy measurements using independent component analysis. Medical and Biological Engineering and Computing. 2004;42:92–99. [PubMed]
9. Tanner K, D'Amico E, Kaczmarowski A, Kukreti S, Malpeli J, Mantulin W, Gratton E. Spectrally resolved neurophotonics: a case report of hemodynamics and vascular components in the mammalian brain. Journal of Biomedical Optics. 2005;10:064009. [PubMed]
10. Stetter M, Obermayer K. Simulation of scanning laser techniques for optical imaging of blood-related intrinsic signals. Journal of the Optical Society of America-A. 1999;16:58–70. [PubMed]
11. Cuccia DJ, Bevilacqua F, Durkin AJ, Tromberg BJ. Modulated Imaging: quantitative analysis and tomography of turbid media in the spatial-frequency domain. Optics Letters. 2005;30:1354–1356. [PubMed]
12. Kalatsky VA, Stryker MP. New paradigm for optical imaging: Temporally encoded maps of intrinsic signal. Neuron. 2003;38:529–545. [PubMed]