Voltage-sensitive dye imaging (VSDI) allows population patterns of cortical activity to be recorded with high temporal resolution, and recent findings ascribe potential significance to their spatial propagation patterns—both for normal cortical processing and in pathologies such as epilepsy. However, analysis of these spatiotemporal patterns has been mostly qualitative to date. In this report, we describe an algorithm to quantify fast local flow patterns of cortical population activation, as measured with VSDI. The algorithm uses correlation of temporal features across space, and therefore differs from conventional optical flow algorithms which use correlation of spatial features over time. This alternative approach allows us to take advantage of the characteristics of fast optical imaging data, which have very high temporal resolution but less spatial resolution. We verify the method both on artificial and biological data, and demonstrate its use.
Recent advances in voltage-sensitive dye imaging (Grinvald and Hildesheim, 2004; Lippert et al., 2007) allow the stable recording of many trials of voltage-sensitive dye signal, with a signal-to-noise ratio high enough that single trials can be analyzed without averaging. These recordings achieve a sensitivity which rivals local field potential recordings, at sample rates exceeding 1 kHz and with total recording times from an imaging field of over 10 minutes. This large amount of high quality data requires novel methods of quantification.
Classical studies in optical imaging have sought to identify global trends in sensory-evoked population activity patterns, based on averaged trials. Most often, analyses have relied on thresholded areas of activation at a given time after sensory stimulation (Peterson et al., 1998; Yazawa et al., 2001). While this method can compare the spatial characteristics of mapped sensory responses, it provides only global geometric quantification, and cannot document single-trial dynamics such as propagation within and between cortical areas. More recent studies have compared sensory-evoked activation time courses from preselected areas of the cortex (Civillico and Contreras, 2006; Roland et al., 2006; Xu et al., 2007). This allows characterization of the spread of cortical population activity—but is not applicable to the analysis of spontaneous emergent activity, which does not have a fixed spatial course and cannot be averaged by external timing events (e.g. Huang et al., 2010b).
While such sensory-evoked analysis techniques have revealed much regarding the dynamics of cortical sensory transduction, it is increasingly recognized that spontaneous dynamics can have a dominant influence on the trial-to-trial variability of sensory transduction (Arieli et al., 1996). In this report, we develop a novel temporospatial pattern matching algorithm to detect the propagation of cortical population activity, in order to overcome the limitations of previous analyses. We sought in particular to quantify the three prototypical spatiotemporal patterns of population activity measured by voltage-sensitive dye imaging: plane waves, spiral waves, and point sources (Wu et al., 2008), with quantification of spontaneous as well as sensory-evoked activity patterns.
The problem of detecting spatiotemporal patterns of activity in voltage-sensitive dye imaging data can be compared to biological motion vision. Indeed, spatiotemporal patterns of VSDI activity can be immediately perceived by observers from pseudocolor movie displays on a recording interface. Therefore, in implementing our algorithm, we obtained hints from classical computational theories of retinal motion processing (Hildreth and Koch, 1987). The principle of these theories is that motion can be detected by comparing constant signal features between spatially separated detectors at different time points—if a feature is detected in one detector and subsequently in another detector, relative motion of the input signal between these detectors can be inferred.
Our algorithm integrates time delays of waveform features across space (i.e. Σ Δt/ΔX, where Δt is the time delay of a feature of the waveform f(t) from a single detector, and ΔX is the separation in space). This is the converse of conventional optical flow algorithms, which integrate motion in spatial features (such as pixel luminosity, spatial gradients, or spatial phase) over time (i.e. ∂I/∂t, where I is luminosity). In order to emphasize this difference from conventional spatiotemporal algorithms, we use the somewhat less common term “temporospatial” in particular when describing our algorithm—that is, our algorithm first uses temporal pattern matching (temporo-), and then aggregates the results of this matching in space (-spatial). Conventional image-shifting algorithms have been developed heavily for video image processing, where data has dense spatial information but relatively low temporal resolution. VSDI of cortical population activity gives data with diametrically opposite characteristics—while high sampling rates can be obtained, spatial resolution is limited due to technical constraints of imaging devices, biophysical constraints of light scatter in the tissue, and anatomical constraints from the diffuse lateral extension of dendritic trees. Our “temporospatial” approach may allow us to take full advantage of the nature of fast VSDI data, which has high temporal resolution but lower spatial resolution.
In this report, we first describe our flow algorithm. Next, we describe the key features of our reference implementation. Then, we test the implementation on artificial and biological data, to confirm the validity of the algorithm and demonstrate its use. Finally, we discuss the experimental constraints of VSDI data, and then explain the benefits, limitations, and applicability of our flow quantification algorithm as pertains to these constraints.
The data used to demonstrate the methods below are from anesthetized in vivo preparations of rodent neocortex (rat and Mongolian gerbil) and from tangential neocortical slice models of epilepsy (rat). Both preparations were stained with voltage-sensitive dyes (fluorescent dyes for the in vivo preparations, absorption dyes for the slice preparation), and recorded with a hexagonally-packed 464-channel photodiode array (WuTech Instruments, Gaithersburg MD) at 1600 frames/sec sampling rate. This instrument allows the capture of small fluctuations (0.01%) in background fluorescence, with a high signal-to-noise ratio. Furthermore, the tight hexagonal packing of the detector field allows for ideal sampling of directionality. Our experimental methods have been described in detail elsewhere (Jin et al., 2002; Bai et al., 2006; Lippert et al., 2007; Takagaki et al., 2008a; Huang et al., 2010a, 2010b). Experiments were performed in accordance with local laws and guidelines, under approval of Georgetown University Animal Care and Use Committee, or the ethics committee of Saxony-Anhalt, Germany. All analyses were performed with programs written in Java (Sun Microsystems, Santa Clara, USA) and Mathematica (Wolfram Research, Champaign, USA). A rudimentary version of this analysis was used in a previous report (Takagaki et al., 2008b).
As a first step, pairwise flow between two detectors (i.e. pixels) must be detected. Spatial flow of signal between data from any two detectors can be defined as a scalar value based on the time shift which is necessary to obtain maximum correlation of signal in a certain time window (Figure 1A). As discussed below (section 4.3), several other methods of flow quantification in a pair of detectors are possible, each with differing strengths and applicable to different biophysical data types. The correlation method used here is well suited for most data from voltage-sensitive dye recordings of cortical population activity, since the method is resistant to random quantal shot noise in each detector (which is the limiting noise source in this type of high-sensitivity imaging), and to differential staining and differential bleaching kinetics within the imaging field (Takagaki et al., 2008a), which lead to differential signal scaling in each detector. This characteristic of VSDI data is discussed later in section 4.1.
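This pairwise step can be sketched as follows. This is a minimal illustration: the class and method names (PairwiseFlow, bestDelay) are ours rather than taken from the reference implementation, and the caller is assumed to choose the window so that all shifted indices stay in bounds.

```java
// Sketch of the pairwise step: the delay between two detector traces is the
// time shift that maximizes their Pearson correlation within a window.
public class PairwiseFlow {

    // Pearson correlation of a[start..start+n) with b[start+shift..start+shift+n)
    static double correlation(double[] a, double[] b, int start, int n, int shift) {
        double ma = 0, mb = 0;
        for (int i = 0; i < n; i++) { ma += a[start + i]; mb += b[start + i + shift]; }
        ma /= n; mb /= n;
        double sab = 0, saa = 0, sbb = 0;
        for (int i = 0; i < n; i++) {
            double da = a[start + i] - ma, db = b[start + i + shift] - mb;
            sab += da * db; saa += da * da; sbb += db * db;
        }
        return sab / Math.sqrt(saa * sbb);
    }

    // Scans shifts in [-maxShift, maxShift] and returns {delay in frames,
    // correlation at that delay}; the correlation is the delay reliability r.
    static double[] bestDelay(double[] a, double[] b, int start, int n, int maxShift) {
        int best = 0; double bestR = Double.NEGATIVE_INFINITY;
        for (int s = -maxShift; s <= maxShift; s++) {
            double r = correlation(a, b, start, n, s);
            if (r > bestR) { bestR = r; best = s; }
        }
        return new double[] { best, bestR };
    }
}
```

A positive delay here means the second trace lags the first, i.e. activity flows from the first detector toward the second.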
This scalar flow value between two detectors can be calculated for a cluster of multiple neighboring detector pairs, to obtain a flow vector F (Figure 1B) which represents the local flow pattern within the area of this cluster. First, correlations are calculated for each pair of detectors in the cluster, for each time shift. Next, the shift giving maximum correlation is determined as the delay value (d) for each detector pair (Figure 1A). The maximum correlation coefficient is taken as an indicator of delay reliability (r), and is used later to weight the delays during spatial aggregation. Next, all 12 delay values in the cluster are combined into a flow vector F = (d1, …, d12) (Figure 1B). This flow vector is treated as raw data representing the local flow pattern. All 12 delay reliability values are likewise combined into a delay reliability vector R = (r1, …, r12).
In the same manner, flow vectors can be calculated for larger clusters of detectors which are further separated (Figure 1C, D).
As a caveat, in this analysis, we perform subsequent template matching of the flow vectors based on the original dimensions of time shift per interdetector distance (i.e. [s/m]) instead of inverting the measurements into the more intuitive velocity ([m/s]). The reason for this choice is two-fold. First, cortical population activity can propagate at velocities which are fast compared to the frame rate of data acquisition (for the biological data presented in this report, 1.6 kHz frame rate). Inverting a measurement of 0 frames per interdetector distance into velocity would lead to divide-by-zero errors. Second, if the flow of population activity is orthogonal to the detector pair in question, the flow between these two detectors will be zero, also leading to a divide-by-zero condition. In either case, the divide-by-zero or infinity condition would prohibit valid data aggregation. After spatial template matching is completed in the original units [s/m] (described below), aggregate findings can be converted to velocity units [m/s] by inversion, in order to facilitate more intuitive understanding.
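The conversion described above can be illustrated with a small helper. This is a hedged sketch: the detector spacing below is an example value rather than the exact parameter of our recordings, and the zero-delay case is mapped to an infinite velocity instead of triggering a divide-by-zero during aggregation.

```java
// Aggregate flow values are kept in frames per detector spacing during
// matching; conversion to velocity (m/s) happens only afterwards.
public class FlowUnits {
    // delayFrames: measured delay in frames per detector spacing;
    // spacingMeters: center-to-center detector separation;
    // frameRate: acquisition rate in frames per second.
    // Returns velocity in m/s, or +infinity when the measured delay is zero
    // (propagation faster than one frame per detector spacing).
    static double toVelocity(double delayFrames, double spacingMeters, double frameRate) {
        if (delayFrames == 0.0) return Double.POSITIVE_INFINITY;
        double delaySeconds = delayFrames / frameRate;
        return spacingMeters / delaySeconds;
    }
}
```

For example, at a 1600 frames/sec sampling rate and an assumed 200 µm detector spacing, a delay of 1 frame per spacing corresponds to 0.32 m/s.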
The local flow vectors obtained in the previous section (Figure 1 B-D) are next matched to flow templates. To illustrate the algorithm, we will describe it for the simplest case of detecting flow within a 7-detector, hexagonal region of interest (Figure 1B).
In our algorithm, we will assume that the propagation of population activity can be represented as the linear sum of four orthogonal flow templates, x-translation, y-translation, source, and counterclockwise rotation (Figure 2). These templates represent the three known propagation patterns, which are plane waves, sources, and spiral waves (Wu et al., 2008). In this decomposition, sinks are represented as negative match to the source template, and clockwise rotation is represented as negative match to the rotation template.
Saddle points are also theoretically possible with collision of various wavefronts, and they are not represented within this local template repertoire. This is not a major issue, since first of all, saddle points have not been observed in voltage-sensitive dye data to date. Furthermore, spatially isotropic saddle points will be overlooked by the current algorithm, but will not bias the results regarding the four presented templates of interest.
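The four templates can be illustrated concretely for the 7-detector hexagonal cluster. The pair ordering below (six center-to-neighbor pairs followed by six pairs around the ring of neighbors) and the unit scaling are our assumptions for this example, not a transcription of the reference implementation; each template entry is the delay that a unit flow of that type would produce on that detector pair, and the four resulting vectors are mutually orthogonal over the 12 pairs.

```java
// Sketch of the four local flow templates for a 7-detector hexagonal cluster.
public class HexTemplates {
    // Returns a 4x12 array: rows are x-translation, y-translation, source,
    // and counterclockwise rotation; columns are the 12 detector pairs.
    static double[][] templates() {
        double[] nx = new double[6], ny = new double[6];
        for (int k = 0; k < 6; k++) {
            nx[k] = Math.cos(Math.PI * k / 3.0);   // neighbor positions on the
            ny[k] = Math.sin(Math.PI * k / 3.0);   // unit hexagon around the center
        }
        double[][] t = new double[4][12];
        for (int k = 0; k < 6; k++) {
            // pairs 0-5: center -> neighbor k
            t[0][k] = nx[k];          // delay from pure x-translation
            t[1][k] = ny[k];          // delay from pure y-translation
            t[2][k] = 1.0;            // source: radius grows by one spacing
            t[3][k] = 0.0;            // rotation: no angular change from center
            // pairs 6-11: neighbor k -> neighbor k+1 (ring pair)
            int k2 = (k + 1) % 6;
            t[0][6 + k] = nx[k2] - nx[k];
            t[1][6 + k] = ny[k2] - ny[k];
            t[2][6 + k] = 0.0;        // both detectors lie on the unit ring
            t[3][6 + k] = 1.0;        // one pi/3 step of counterclockwise rotation
        }
        return t;
    }
}
```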
Before template matching, the flow vector and the template vectors are both weighted by the delay reliability vector (R) as follows:

F* = F ∘ R,  Tn* = Tn ∘ R  (n = 1, …, 4)

where ∘ denotes elementwise (Hadamard) multiplication of the vector components. As described previously in the text, F is a vector of the calculated delays for each neighboring detector pair (Figure 1B), and Tn are the 4 template vectors: x-translation (n=1, Figure 2A), y-translation (n=2, Figure 2B), and source and rotation (n=3,4, Figures 2C and D).
Next, in order to match the templates, the following weighted inner product is calculated:

Pn = (Tn*)T · F* / ((Tn*)T · Tn*)

where (Tn*)T is the transpose of the weighted template vector Tn*. The four scalar Pn values specify the overall x-translation, y-translation, source and rotational character of activity within the detection field. P1 = 1 (x-translation) or P2 = 1 (y-translation) would signify that overall x- or y-translation in the detection field is 1 frame/detector interval. Likewise, P3 = 1 (source) signifies a point source spreading at 1 frame/detector, and P4 = 1 (rotation) signifies a counterclockwise rotation of 1 frame per π/3 radians. By using the weighted flow vector F* and the weighted template vectors Tn*, the spatial summation emphasizes information from those detector pairs within the detection area whose shifted waveforms have the highest similarity to each other. For standard voltage-sensitive dye data, this weighting does not add much accuracy, since the pairwise delay reliabilities (correlation coefficients) are all well above 0.9.
If one assumes a linear summation, the predicted flow vector (F+) based on the template match values is as follows:

F+ = Σn Pn Tn*  (summing over n = 1, …, 4)
The template match reliability value is defined as the Pearson correlation coefficient between F+ (predicted flow vector from template match) and F (actual flow vector). The template match reliability value can be as large as 1 for flows which are completely accounted for by the four templates, or as small as 0 for data which is completely random and has no significant spatial pattern. This template match reliability value allows us to screen out spurious flows detected stochastically during segments of the data lacking coherent spatial patterns.
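The weighting, projection, and reliability steps can be sketched together as follows. The exact normalization used here, projection coefficients of the reliability-weighted flow vector onto the reliability-weighted templates, with Pearson correlation between predicted and measured flow, is our reading of the procedure rather than verbatim reference code.

```java
// Sketch of the template matching step.
public class TemplateMatch {
    // F: measured delays per pair; R: per-pair reliabilities; T: templates.
    // Returns the template match values P_n.
    static double[] matchValues(double[] F, double[] R, double[][] T) {
        double[] P = new double[T.length];
        for (int n = 0; n < T.length; n++) {
            double num = 0, den = 0;
            for (int p = 0; p < F.length; p++) {
                double fw = F[p] * R[p], tw = T[n][p] * R[p]; // reliability weighting
                num += tw * fw;
                den += tw * tw;
            }
            P[n] = num / den;   // normalized projection onto weighted template
        }
        return P;
    }

    // Pearson correlation between the predicted flow (sum of P_n times the
    // weighted templates) and the measured flow vector F.
    static double matchReliability(double[] F, double[] R, double[] P, double[][] T) {
        int nPairs = F.length;
        double[] Fp = new double[nPairs];
        for (int p = 0; p < nPairs; p++)
            for (int n = 0; n < T.length; n++)
                Fp[p] += P[n] * T[n][p] * R[p];
        double mf = 0, mp = 0;
        for (int p = 0; p < nPairs; p++) { mf += F[p]; mp += Fp[p]; }
        mf /= nPairs; mp /= nPairs;
        double sab = 0, saa = 0, sbb = 0;
        for (int p = 0; p < nPairs; p++) {
            double da = F[p] - mf, db = Fp[p] - mp;
            sab += da * db; saa += da * da; sbb += db * db;
        }
        return sab / Math.sqrt(saa * sbb);
    }
}
```

When the measured flow is an exact linear combination of the templates, the match reliability evaluates to 1, as in the screening criterion described above.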
The template matching algorithm as described above is easily extrapolated to detect flow within larger regions of interest (Figures 1C, D). This is of particular interest since the propagation velocities are often very fast compared to the frame rate, and can therefore traverse directly neighboring detectors (Figure 1B) in under one frame, which limits detection sensitivity. By comparing detector pairs which are further separated (Figures 1C, D), a larger timing difference is detected between detectors, and faster velocities can be captured effectively. On the other hand, this leads to lower spatial resolution and lower sensitivity towards very local activity patterns. The appropriate spatial scale must be determined from the spatial scale of target data.
This algorithm is data intensive, requiring comparisons between multiple detectors at multiple time points to be stored and aggregated. In order to expedite calculation, we have implemented the core algorithm in Java, using a pipes-and-filters programming pattern (Figure 3). Source code is available at http://nounou.sourceforge.org for inspection. The final output of the data stream can be directed either to a scientific scripting language such as Mathematica or MatLab, or to an interactive Java graphics display. Java has the distinct advantage that it gives equivalent results on every supported platform, and integrates seamlessly with both of these scientific packages on all of the main operating systems.
In order to expedite calculations, the data were maintained as 32-bit integers, with appropriate scaling. This is possible without significant loss of data resolution, since raw imaging data are usually in the range of 12-16 bits. All computation-intensive Java calculations were performed as 64-bit integers (Java “long”), and appropriate precautions were taken for data overflow. Initial testing showed a consistent speed advantage of more than 1 order of magnitude for basic operations such as matrix multiplication and convolution, compared to the speed of routines using standard floating point numbers (Java “double”).
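The fixed-point strategy described above can be illustrated with a minimal sketch. The scale factor below is an example choice, not the one used in the reference implementation; the point is that each product of two scaled 32-bit values fits in a 64-bit long, so accumulation over realistic trace lengths cannot overflow.

```java
// Illustration of fixed-point arithmetic: 12-16 bit raw data is scaled into
// 32-bit integers, and accumulation runs in 64-bit longs.
public class FixedPoint {
    static final int SCALE = 1 << 15;            // example fixed-point scale

    static int toFixed(double x) { return (int) Math.round(x * SCALE); }

    static double fromFixed(long x) { return (double) x / SCALE; }

    // Dot product with 64-bit accumulation; the result carries scale SCALE^2,
    // so it must be divided by SCALE twice to recover the real-valued result.
    static long dotFixed(int[] a, int[] b) {
        long acc = 0;
        for (int i = 0; i < a.length; i++) acc += (long) a[i] * b[i];
        return acc;
    }
}
```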
The program consists of “filterblocks” (Figure 3A) which are strung together into a data processing stream (Figure 3B). Data can be requested by each “filterblock” as needed. Interfaces are provided for requesting a single point of data (from a specified detector and frame), a single frame of data (snapshot in time), or a single trace of data (from a single detector). Each datum requested by the downstream consumer filterblock triggers only the necessary calculations upstream, which allows for deferred “lazy” evaluation. A buffering filterblock can be inserted midstream, to keep intermediate results in memory. Display filterblocks can also be inserted at any point, to monitor the result of intermediate calculations.
Sampling rate and scaling information is also passed through the data stream. This allows a filterblock, for example, to downsample data in time—the downsampling filterblock would send a lower sampling rate downstream than what it receives from upstream, and would only need to answer downstream requests for the downsampled datapoints.
Layout information is also passed through the data stream. This layout includes information on “neighbor detectors”, which is used for the flow algorithm described in this report. Encapsulation of this layout information into the data stream allows for easy adaptation of the algorithm to different data formats. Furthermore, filters involving changes in the spatial information can also be easily achieved (for example, with spatial downsampling or spatial interpolation), by a filterblock which sends different layout information downstream than the information it receives from upstream.
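A minimal sketch of this pipes-and-filters pattern is given below. The interface and class names are illustrative; the actual filterblock API in the reference implementation differs in detail. The downsampler answers downstream requests lazily, pulling only the frames it needs from upstream, and reports a correspondingly lower sampling rate downstream.

```java
// Minimal pipes-and-filters sketch: blocks expose pull-style data access and
// pass sampling-rate metadata through the stream.
public class Pipeline {
    interface FilterBlock {
        double readPoint(int detector, int frame);
        double samplingRate();
    }

    // Source block holding raw data in memory.
    static class DataSource implements FilterBlock {
        final double[][] data;  // [detector][frame]
        final double rate;
        DataSource(double[][] data, double rate) { this.data = data; this.rate = rate; }
        public double readPoint(int d, int f) { return data[d][f]; }
        public double samplingRate() { return rate; }
    }

    // Temporal downsampler: only the requested upstream frames are touched.
    static class Downsample implements FilterBlock {
        final FilterBlock upstream;
        final int factor;
        Downsample(FilterBlock upstream, int factor) {
            this.upstream = upstream; this.factor = factor;
        }
        public double readPoint(int d, int f) { return upstream.readPoint(d, f * factor); }
        public double samplingRate() { return upstream.samplingRate() / factor; }
    }
}
```

A spatial-layout analogue would work the same way, with a filterblock sending modified neighbor information downstream.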
First, we tested with purely translational propagation (Figure 4A). This activity simulates propagating waves of cortical activity. Propagating waves are a dominant pattern in high-resolution imaging, and they propagate both within cortical fields and between cortical fields—both in response to sensory stimuli, and spontaneously (Arieli et al., 1996; Tsodyks et al., 1999; Senseman and Robbins, 2002; Kenet et al., 2003; Petersen et al., 2003a; Roland et al., 2006; Lippert et al., 2007; Xu et al., 2007; Takagaki et al., 2008b). The artificial data used consisted of half-sine plane waves, which propagate at given speeds through the imaging field. Figure 4B demonstrates that the flow of propagation is detected precisely and accurately from this artificial data. The detection is valid throughout a wide range of propagation speeds, with very small standard deviation.
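Artificial test data of this kind can be sketched as follows: a half-sine plane wave crossing a detector field at a given speed and direction. The grid layout and parameters are illustrative, not the ones used to generate Figure 4.

```java
// Sketch of an artificial half-sine plane wave for testing flow detection.
public class PlaneWave {
    // Returns data[detector][frame] for detectors at positions (x[i], y[i]).
    // speed is in detector spacings per frame; (ux, uy) is the unit direction
    // of travel; width is the pulse duration in frames.
    static double[][] halfSineWave(double[] x, double[] y, int frames,
                                   double speed, double ux, double uy, double width) {
        double[][] data = new double[x.length][frames];
        for (int i = 0; i < x.length; i++) {
            double proj = x[i] * ux + y[i] * uy;   // position along the travel axis
            for (int t = 0; t < frames; t++) {
                double phase = t - proj / speed;    // frames since the front arrived
                if (phase >= 0 && phase <= width)
                    data[i][t] = Math.sin(Math.PI * phase / width); // half-sine pulse
            }
        }
        return data;
    }
}
```

By construction, a detector two spacings downstream sees the identical waveform delayed by two frames (at unit speed), which is exactly the feature the pairwise correlation step is designed to recover.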
The results in Figure 4B were obtained from artificial data consisting of pure sinusoids, with no noise content. Figure 4C shows the detection results for a waveform consisting of sinusoids with various amounts of noise, added independently to each detector trace (the flow was set to propagate 1 detector interval over 3 recording frames). This stochastic noise models “shot noise” in high-sensitivity voltage-sensitive dye imaging, where quantal fluctuations in the number of detected fluorescence photons appear as stochastic noise. The results from Figure 4C demonstrate that our algorithm is resistant to such random noise, as would be expected from the use of correlation coefficients to detect pairwise flows. The algorithm maintains good flow detection even with a signal-to-noise ratio of 2:1 (noise amplitude of 50% of the signal).
Figure 4D depicts flow detection at various angles of propagation. It demonstrates that even though the detector layout is hexagonal, and thus the detector pair directions for single pairs of detectors are limited to 60 degree increments, intermediate flow directions are detected correctly in increments far smaller than that of the physical detector layout.
Figure 5 illustrates the results of applying the algorithm to two other types of propagation: target waves (sources) and rotations (spirals). Figure 5A illustrates a concentric spreading half-sine wave, which simulates activity arising from a point source in the cortex, such as is seen in barrel cortex (Petersen and Sakmann, 2001; Derdikman et al., 2003; Civillico and Contreras, 2006; Lippert et al., 2007). Figure 5B demonstrates that the source flow can be detected precisely and accurately over a wide range of propagation speeds.
Figure 5C illustrates artificial rotational data, consisting of sinusoidal pinwheels rotating around a central phase singularity. This simulates spiral activity, which has been observed in myocardial models during fibrillation (Salama and Choi, 2007), and also in cortical preparations (Prechtl et al., 2000; Huang et al., 2004; Mohajerani et al., 2010; Huang et al., 2010b). Figure 5D demonstrates that the rotation can be detected within a range of values, but at fast and slow rotations, the detection fails. At very fast rotations, the detection fails because the rotation is aliased, leading to systematic underestimation of the rotation flow. With very slow rotations, the frame shifts necessary for maximum correlation start to exceed the maximum shift, and flows are not detected correctly. Due to the increased risk of aliasing with rotational (i.e. cyclically oscillating) spatiotemporal patterns, the maximum shift window must be adjusted carefully when dealing with such data, such that the detection range covers the data frequency. This aliasing would theoretically also affect repetitive plane waves or repetitive sources—although such patterns have not been observed in biological activity to date.
In this section, we demonstrate the translational detection illustrated in Figure 4, using the example of spontaneous plane waves in the rat visual cortex under urethane anesthesia (in the so-called “synchronized” state) (Steriade et al., 1993; Clement et al., 2008; Curto et al., 2009). In such synchronized states, large-amplitude waves of activity travel spontaneously, both within cortical areas and between cortical areas. Figure 6 illustrates the detection of translational propagation, for a single trial of such data. One can see from the movie frames (Figure 6B) that different wave epochs travel in different directions, and this is also reflected in the detected flow patterns (Figure 6C).
Whether these propagation patterns carry biological information has been a matter of active conjecture (Bullock, 1995; Massimini et al., 2009; Roland, 2010). Our method is expected to contribute to addressing this issue. For example, using a rudimentary version of this analysis, which only encompassed translational activity, we have previously shown that the directions of such spontaneous activity within the barrel field of the somatosensory cortex are random in direction, whereas spontaneous activity propagating between somatosensory and visual cortices show strong anisotropy favoring the axis connecting the two neighboring sensory cortical areas (Takagaki et al., 2008b).
Next, we demonstrate the source detection illustrated in Figure 5B, using the example of whisker-evoked activity in the barrel cortex in vivo. It is well known that such activity varies widely from trial to trial, with strong influence from the spontaneous activity (Arieli et al., 1996; Petersen et al., 2003b; Lippert et al., 2007). Here, this is seen in the movie frames (Figure 7A) and contour plots from a selected timepoint (Figure 7B). Despite this wide variability, the source flow localization, which reflects somatotopic barrel input, should be conserved. When our new algorithm is applied to single trials, we can detect that this is indeed the case—even though each trial differs greatly, with the flow algorithm, each single trial shows a fixed location for the source of evoked activity (Figure 7C). When patterns from the deflection of multiple different whiskers are aggregated, we can see the somatotopic localization of these detected sources (Figures 7D, E).
In order to demonstrate the detection and quantification of spiral activity in biological data, we use two epileptiform preparations which have shown spiral dynamics in their organization.
Our first example is in a slice model of epilepsy. Application of bicuculline and carbachol to neocortical slices triggers oscillations of 4-15 Hz, and we previously demonstrated that in tangential slices of visual cortex, this oscillation can organize into stable spiral dynamics lasting up to 30 cycles (Huang et al., 2004). Using our algorithm, one can precisely detect the center of spiral activity, as well as quantify the rotation speed at this spiral center (Figure 8A). In the top row of this panel, a trace of voltage-sensitive dye activity from a single detector is shown, with robust oscillation at 10-11 Hz. In single traces, the spatial organization of this activity into spirals is not recognizable. A segment of this activity is displayed as pseudocolor imaging frames, one frame every 10 ms. Here, a clear counterclockwise spiral dynamic is evident. In the third row of this panel, the present algorithm is applied to give the flow field of activity (black arrows), as well as the rotation value in color (red: positive/counterclockwise). The algorithm will allow precise quantification of the stability/drift of spiral centers, in such preparations.
The algorithm is also able to detect spiral dynamics in vivo. The Mongolian gerbil is known as a model of cortical epilepsy, since some individuals develop idiopathic generalized tonic-clonic seizures which can be spontaneous or environmentally triggered. While the precise mechanisms of gerbil epilepsy are unknown, not all individuals, even within the same litter, develop epilepsy—this suggests a complex genetic penetrance. Environmental factors are also implicated by the anecdotal observation that socially enriched rearing can minimize (but not eliminate) the risk of epileptogenesis. Epileptiform bursts in the theta range (at around 7-8 Hz) in epileptic gerbils can organize into spiral dynamics (Figure 8B, Takagaki et al, in preparation). In contrast to the previous example, the spirals in this preparation propagate in a clockwise direction. Figure 8D shows blue pixels demonstrating clockwise (negative) rotation detected at the spiral center. By using the flow algorithm, both the location of the spiral center, as well as the rotation speed, can be successfully quantified. This quantification is a prerequisite for detailed testing of the pathological significance of such activity.
In this report we have described an algorithm which can better describe local propagation patterns of cortical activity in voltage-sensitive dye data. We decompose the propagation patterns in each local area into four templates: x-translation, y-translation, source, and rotation. Our flow method is inspired by biological movement detection, and takes better advantage of the characteristics of high-sensitivity voltage-sensitive dye imaging (VSDI) data, which have high temporal resolution but limited spatial resolution. Our method can reliably identify and quantify propagation patterns on a single-trial basis.
Quantification of scientific data requires many assumptions, and therefore depends critically on characteristics of the experiment from which data is analyzed. This is especially true in technically advanced and intricate measurements such as high-sensitivity optical imaging. In this section, we describe the key characteristics of high-sensitivity VSDI data from cortical tissue, as pertains to the quantification of spatial propagation. The key relevant features of such data are as follows: (1) high temporal resolution and lower spatial resolution, (2) often low signal-to-noise ratio of biological activity, especially for “desynchronized” preparations closer to physiological wakefulness, and (3) differential absolute scaling of data from each detector. Each of these characteristics will be described below, and related to the key features of the algorithm presented here.
First of all, the voltage-sensitive dye signal has a very high temporal resolution. In isolated invertebrate cell preparations, which were used predominantly in the early days of the technique, the optical signal has been shown to correspond with sub-millisecond response time to intracellular membrane recordings (Ross et al., 1977). With the advent of slice preparations and in vivo preparations, the main focus has shifted from recording of single cell events to population activity patterns. In cell populations, optical deflections do not always correspond exactly to recordings from a single cell—although in certain synchronized states and in certain populations of the cortex, a close correlation has been documented (Petersen et al., 2003a, 2003b). The overall inexact correlation of optical and single-unit or local field recordings may lead to the illusion that voltage-sensitive dyes have lower temporal resolution—however, this is only a manifestation of the aggregation of signal from a large number of spiking neurons.
Since the VSD signal arises directly from stained membrane, the spatial resolution is high compared to electrical measures of population activity (such as ECoG or EEG, which derive from current flow in the volume conductor). Spatial resolution is on the order of 100-300 µm (with practical limits of up to 10²-10³ detectors from a field of several mm), limited by the extent of neuronal arborization (Petersen et al., 2003a). Given this nature of the signal, our method seeks to take maximum advantage of the temporal resolution, while coping with the lower spatial resolution.
The second relevant feature of the voltage-sensitive dye signal is that it often contains a large amount of stochastic “shot” noise. To a large extent, this can be filtered out, but when judging the propagation of fast transients, such filtering will reduce temporal resolution. Such “shot” noise is particularly large compared to the signal when the cortex is in a lower-amplitude, more “desynchronized” state—this is analogous to signal-to-noise ratios in scalp EEG or ECoG (Niedermeyer and Silva, 2004). Given this nature of the signal, our method takes advantage of the correlation coefficient as a measure to judge spatial pattern coherence—this measure is very robust against random statistical noise in single trials.
The third relevant feature of the voltage-sensitive dye signal in vivo is that it is a relative signal, and is very difficult to normalize. While pioneering studies of intact preparations used waveforms to make physiological, but not necessarily quantitative, statements (Grinvald et al., 1984), more recent studies have unquestioningly used the normalization ΔF/F0 (ΔF representing signal amplitude, F0 representing resting light intensity) as a quantitatively valid measure. However, the ΔF/F0 normalization has only been rigorously examined in isolated invertebrate preparations, where all of the fluorescence detected can be assumed to arise from neuronal membrane. In vivo, the ΔF/F0 normalization has dynamic instabilities over time, within a spatial field, and between experiments, due to variabilities in staining or illumination—that is, ΔF and F0 arise from different components of the neuropil/cortical mantle, and they do not have a fixed ratio across experiments, nor do they maintain a fixed ratio during the time course of a single experiment (Takagaki et al., 2008a). Given this nature of the signal, our correlation coefficient algorithm is also beneficial, since it only presupposes that the general waveform of ΔF is preserved during propagation, and does not make any assumptions about the scaling of the signal in each detector.
Previously, estimates of propagation speeds have been obtained by measuring the timing of events at remote individual detectors, and dividing the detector separation by the time delay (e.g. Civillico and Contreras, 2006; Xu et al., 2007; Takagaki et al., 2008b). However, this method has several shortcomings. First, this method depends on the choice of detectors at which to measure the speed. If the propagation of the wave follows a fixed path, such as tangential to the cortex in a coronal slice, this is not an overwhelming limitation, but in two-dimensional preparations such as in in vivo imaging of exposed cortex, a choice of measuring detectors which is not perfectly parallel to the propagation path will cause systematic underestimation. Second, this method can only take into account certain features of the wave which are readily discernible, such as the rise time to half-height, or the peak time. Third, wave propagation speed changes both within a single cortical area (Petersen et al., 2003b; Lippert et al., 2007) and when propagating between different cortical areas (Xu et al., 2007; Takagaki et al., 2008b). Since this method can sample a selected number of detector pairs only, it potentially misses such local variations in propagation pattern.
Han and colleagues used a novel analysis of the “center of mass” of ΔF/F0 to quantify evoked and spontaneous trajectories of activity across the rat visual cortex (Han et al., 2008). Their results demonstrate that activity patterns become more confined to certain trajectories shortly after exposure to fixed visual stimuli. However, since ΔF/F0 is not necessarily an absolute value with a zero origin (Takagaki et al., 2008a), the arithmetic center of mass does not have an obvious biophysical interpretation. The measure is also vulnerable to the dynamic biases in the values of ΔF/F0 itself, which compound over the course of an experiment (Takagaki et al., 2008a) and would be especially problematic for a pre- vs. post-type comparison, as necessitated by a plasticity paradigm. Furthermore, the investigators used the whole frame of data instead of defining a specific region of interest, so peripheral non-visual pixels could dilute their results and lower their sensitivity. While it does seem clear that a positive change related to stimulus exposure occurs, these problems in the center of mass measure may well be hindering the detection of more meaningful pattern development.
Roland et al. analyzed feedback waves in isoflurane-anesthetized ferret visual cortex (Roland et al., 2006). They used an algorithm that matches pixel gray values between neighboring frames to evaluate flow—essentially an adapted visual flow analysis, as described in the introduction. While this allows automated analysis of local flow, including spontaneous flow, its main limitation is the assumption of equal intensity during propagation. For example, certain types of propagating activity decrease in amplitude as they propagate intracortically (Lippert et al., 2007; Xu et al., 2007; Takagaki et al., 2008b), which may lead to systematic overestimates of the propagation speed. Likewise, the estimated propagation will depend heavily on the normalization method used. In further work from the same laboratory, an improved wavefront analysis based on regression of scaled amplitudes was described (Ahmed et al., 2008). While these methods allow characterization of a very clear wave in a relatively well-defined spatial location, they cannot be applied to systems without a clear unidimensional propagation axis, and they remain dependent upon the normalization.
Benucci and colleagues (Benucci et al., 2007) utilized contrast-reversing gratings in the cat visual cortex, using the FFT amplitude of the second-harmonic response to distinguish traveling waves along a retinotopic axis from standing waves along an orientation axis. Several factors allowed them to collapse their data into a reliable FFT measure: their clever experimental stimulus paradigm, the oscillatory entrainment of activity in the visual cortex, and the clear mapping of retinotopy and orientation. Their method is reminiscent of Fourier imaging for static mapping (Kalatsky and Stryker, 2003), but applied dynamically. While this method is effective for inferring propagation in stimulus-entrained activity, it would be difficult to implement for spontaneous activity, even where there is a rhythmic component such as alpha activity, sleep rhythms or pharmacological oscillations, since these physiological rhythms contain large variations in frequency over time.
Compared to these previous quantifications, the flow measure we present here does not require a trigger to judge onset, thereby allowing unbiased evaluation of the flow of spontaneous activity—even in the absence of large “waves.” Furthermore, it is very resistant to instabilities in signal normalization, since it uses shifted correlation coefficients of the overall waveforms to judge propagation (regardless of the relative scaling of the compared detectors, the correlation coefficient is normalized to a maximum of 1). For the same reason, this flow measure is also resistant to amplitude changes in the waves as they propagate. We have previously used a rudimentary form of this shifted correlation algorithm to show that the directions of spontaneous activity within a cortical field (the barrel field of the somatosensory cortex) are random, whereas spontaneous activity propagating between fields (between somatosensory and visual cortices) shows strong anisotropy favoring the axis connecting the two neighboring sensory cortical areas (Takagaki et al., 2008b). Compared to this previous version, which detected only translational flow, the algorithm's robustness has been greatly improved by the addition of source and spiral patterns, and it can now account for the complete array of cortical activity patterns that have been observed to date.
The algorithm described here uses time-shifted correlation coefficients as the index of propagation between a pair of detectors. This has the benefit that the compared traces are well normalized with respect to each other, regardless of the scaling of each detector. Furthermore, the correlation coefficient measure is very robust to stochastic “shot” noise, which arises from quantum fluctuations in the photon flux. However, other metrics can also be considered.
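A minimal sketch of this pairwise step, assuming a simple brute-force search over integer sample shifts (the published implementation may differ in detail), is as follows:

```python
import numpy as np

def shifted_correlation(trace_a, trace_b, max_shift):
    """Pearson correlation of trace_b against trace_a at each time shift.

    Returns the shift (in samples) that maximizes the correlation; a
    positive value means trace_b lags trace_a. Illustrative sketch only.
    """
    best_shift, best_r = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = trace_a[: len(trace_a) - s], trace_b[s:]
        else:
            a, b = trace_a[-s:], trace_b[: len(trace_b) + s]
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best_shift, best_r = s, r
    return best_shift, best_r

# Two detectors seeing the same waveform, the second delayed by 5 samples
# and arbitrarily rescaled -- the correlation index recovers the delay.
t = np.arange(200)
wave = np.exp(-((t - 100) ** 2) / 200.0)
a = wave
b = 0.4 * np.roll(wave, 5) + 0.1
shift, r = shifted_correlation(a, b, max_shift=20)
print(shift, round(r, 3))
```

Because the correlation coefficient normalizes each windowed segment internally, the recovered delay is unaffected by the arbitrary gain and offset applied to the second trace.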
For example, it would be intuitive to substitute a cross-correlation and detect the delay of its peak. As discussed in previous sections, however, the scaling of individual detectors in a high-sensitivity imaging experiment is not necessarily homogeneous between detectors or stable over the course of the experiment (Takagaki et al., 2008a), and thus a normalized cross-correlation would be required. This eliminates much of the computational speed advantage of cross-correlation, since FFT-based methods are no longer applicable. Furthermore, the cross-correlation must be local, since propagation directions shift rapidly over time. When dealing with short segments of highly variable multichannel data, the standard normalization for cross-correlation, which involves subtracting the mean and dividing by the standard deviation, is not appropriate, since the two shifted trace segments should not necessarily be normalized to the same standard. Additionally, the biological data in question are usually not stably cyclical, which further limits the validity of standard cross-correlation algorithms.
When dealing with data recorded from highly oscillatory events, an alternative method for detecting flow between pairs of detectors would be to compute isophase time delays from the Hilbert transform of each detector trace, and to submit these delays to the same spatial aggregation. Using the Hilbert phase has the advantage of giving flow defined at a discrete time point, rather than over a time window. Furthermore, it is computationally less intensive, as it requires only one convolution with a Hilbert kernel per detector trace, as opposed to the correlation algorithm, which requires shifted correlations for each time window and detector pair.
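A sketch of this Hilbert-phase alternative for a single detector pair, assuming a clean narrowband oscillation (the 8 Hz frequency and 10 ms lag below are hypothetical), could look as follows:

```python
import numpy as np
from scipy.signal import hilbert

# For strongly oscillatory activity, the instantaneous Hilbert phase of
# each detector gives a per-sample delay estimate between a pair.
fs = 1000.0                       # sample rate, Hz
f = 8.0                           # oscillation frequency, Hz (hypothetical)
t = np.arange(0, 2.0, 1.0 / fs)
lag = 0.010                       # detector b lags a by 10 ms (hypothetical)
a = np.sin(2 * np.pi * f * t)
b = np.sin(2 * np.pi * f * (t - lag))

phase_a = np.angle(hilbert(a))
phase_b = np.angle(hilbert(b))
# Wrap the phase difference into (-pi, pi] before converting to time.
dphi = np.angle(np.exp(1j * (phase_a - phase_b)))
delay = np.median(dphi) / (2 * np.pi * f)   # seconds; positive: b lags a
print(round(delay * 1000, 1))               # ~10 ms
```

In contrast to the windowed correlation, the phase difference is defined at every sample; the median over samples is used here only to suppress edge effects of the transform.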
Another strategy would be to define the flow by some type of normalized signal threshold (Ahmed et al., 2008). The difference in threshold-crossing times could be taken as a measure of the delay between the two detectors of a pair. However, normalization of voltage-sensitive dye recordings in vivo is often fraught with dynamic instabilities (Takagaki et al., 2008a), both within the imaging field and longitudinally over the course of the imaging experiment, so these biases must be carefully monitored over both space and time. Furthermore, the absolute magnitude of a population activity pattern may increase or decrease during spatial propagation, which may also lead to systematic biases, depending upon how the data are scaled.
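The scaling sensitivity of threshold-based delays can be illustrated with a toy example: two detectors see the same wavefront with zero true delay, but the wave is attenuated at the second site, so a fixed absolute threshold reports a spurious delay (all values below are hypothetical):

```python
import numpy as np

def crossing_time(trace, level, dt=1.0):
    """First sample time (index * dt) at which trace reaches `level`."""
    return np.argmax(trace >= level) * dt

# Two detectors see the same wavefront with no true delay, but the wave
# attenuates to 60% amplitude at the second site (hypothetical numbers).
t = np.arange(0, 100.0, 1.0)
rise = 1.0 / (1.0 + np.exp(-(t - 50) / 5.0))   # sigmoidal activation
a = rise
b = 0.6 * rise

# A fixed absolute threshold reports a spurious delay for the weaker wave,
# while a per-trace half-height threshold does not.
fixed = crossing_time(b, 0.5) - crossing_time(a, 0.5)
half = crossing_time(b, 0.5 * b.max()) - crossing_time(a, 0.5 * a.max())
print(fixed, half)
```

Even the per-trace normalization shown here presupposes a stable estimate of each detector's peak amplitude, which is exactly what the dynamic instabilities discussed above undermine.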
In special classes of data, other metrics of the waveform can also be used to detect flow. For example, epileptic and epileptiform imaging data contain sharp inflection points, such as spikes and sharp waves, and comparing the timings of these inflections between detectors can give information about propagation. In activity patterns with clearly defined bursts of activity, the rise time to half-maximal height can be used as well.
Spatially, the algorithm is described and implemented here for a hexagonally packed detector array, which gives a high packing ratio and maximally isotropic sampling, and is therefore ideal for the purpose of flow detection. However, the algorithm easily adapts to square-packed arrays, which are common in CCD- and CMOS-based imaging devices. One simply needs to change the template vectors, described in Equations 3-6 and in Figure 1B-D, to a square layout. For example, the 12-element nearest-neighbor templates described in Equations 3-6 and Figure 1B would be substituted with 8-element template vectors corresponding to the four direct neighbors of each pixel. Care must be taken to provide the correct scaling.
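As an illustration of this template substitution, the sketch below lays out unit direction vectors for hexagonal and square neighborhoods and recovers a planar flow from per-neighbor delays by least squares. The layout and the least-squares aggregation are simplified assumptions for illustration, not the exact form of Equations 3-6:

```python
import numpy as np

# Neighbor direction templates: six unit vectors for a hexagonal array,
# four for a square array (direct neighbors only).
hex_dirs = np.array([[np.cos(k * np.pi / 3), np.sin(k * np.pi / 3)]
                     for k in range(6)])
sq_dirs = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], float)

def fit_flow(dirs, delays, spacing=1.0):
    """Least-squares slowness vector from per-neighbor delays.

    For a locally planar wave with slowness s (inverse-velocity vector),
    the delay toward each neighbor is spacing * (dirs @ s). Note that the
    inter-detector spacing sets the scaling of the result.
    """
    s, *_ = np.linalg.lstsq(spacing * dirs, delays, rcond=None)
    return s

# A wave moving in +x at speed 2 (arbitrary units): slowness (0.5, 0).
true_s = np.array([0.5, 0.0])
for dirs in (hex_dirs, sq_dirs):
    delays = dirs @ true_s          # ideal noise-free delays, spacing = 1
    print(np.round(fit_flow(dirs, delays), 3))
```

Both neighborhood layouts recover the same planar flow from noise-free delays; the scaling caveat in the text corresponds to the `spacing` factor here, which differs between hexagonal and square grids of equal detector density.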
Experiments were supported by NIH grant NS036447 to JW, and German Research Foundation (Deutsche Forschungsgemeinschaft) grant SFB-779 and German Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung) Bernstein collaboration grant 01GQ0733 to FWO. KT was supported by an Armar and Umeko Strauss MD/PhD scholarship and a fellowship from the Alexander von Humboldt Foundation. We would like to thank Xiaoying Huang, Michael Lippert, and Tim Wanger for scientific input.
Disclosure/Conflict-of-Interest Statement: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.