Humans and other species continually perform microscopic eye movements, even when attending to a single point [1-3]. These movements, which include microscopic drifts and microsaccades, are under the control of the oculomotor system [2, 4, 5], elicit strong responses throughout the visual system [6-11], and have been thought to serve important functions [12-16]. The influence of these fixational eye movements on the acquisition and neural processing of visual information remains unknown. Here, we show that during viewing of natural scenes, microscopic eye movements carry out a crucial information-processing step: they remove predictable correlations in natural scenes by equalizing the spatial power of the retinal image within the frequency range of ganglion cells' peak sensitivity. This transformation, which had been attributed to center-surround receptive field organization [17-19], occurs prior to any neural processing, and reveals a form of matching between the statistics of natural images and those of normal eye movements. We further show that the combined effect of microscopic eye movements and retinal receptive field organization is to convert spatial luminance discontinuities into synchronous firing events, thus beginning the process of edge extraction. In sum, our results show that microscopic eye movements are fundamental to two goals of early visual processing—redundancy reduction [20, 21] and feature extraction—and, thus, that neural representations are intrinsically sensory-motor from the very first processing stages.
The spatial variation of the extracellular action potentials (EAP) of a single neuron contains information about the size and location of the dominant current source of its action potential generator, which is typically in the vicinity of the soma. Using this dependence in reverse in a three-component realistic probe + brain + source model, we solved the inverse problem of characterizing the equivalent current source of an isolated neuron from the EAP data sampled by an extracellular probe at multiple independent recording locations. We used a dipole for the model source because there is extensive evidence that it accurately captures the spatial roll-off of the EAP amplitude, and because, as we show, dipole localization, beyond a minimum cell-probe distance, is a more accurate alternative to approaches based on monopole source models. Dipole characterization is separable into a linear dipole moment optimization, in which the dipole location is fixed, and a second, nonlinear, global optimization of the source location. We solved the linear optimization on a discrete grid via the lead fields of the probe, which can be calculated for any realistic probe + brain model by the finite element method. The global source location was optimized by means of Tikhonov regularization that jointly minimizes model error and dipole size. This strategy reflects the fact that the dipole model is used in the near field, in contrast to typical prior applications of dipole models to EKG and EEG source analysis. We applied dipole localization to data collected with stepped tetrodes whose detailed geometry was measured via scanning electron microscopy. The optimal dipole could account for 96% of the power in the spatial variation of the EAP amplitude. Among the various model error contributions to the residual, we especially address the error in probe geometry and the extent to which it biases estimates of dipole parameters.
This dipole characterization method can be applied to any recording technique that has the capabilities of taking multiple independent measurements of the same single units.
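The separable structure described above (a linear moment solve nested inside a global search over source locations) can be sketched with an idealized point-dipole forward model in an infinite homogeneous medium; the probe geometry, conductivity, and regularization weight below are illustrative assumptions, not the FEM-derived lead fields of the actual probe + brain model:

```python
import numpy as np

SIGMA = 0.3  # assumed homogeneous extracellular conductivity (S/m)

def lead_field(electrodes, loc):
    """Analytic lead-field rows for a point dipole at `loc` in an infinite
    homogeneous medium (a stand-in for FEM-computed lead fields)."""
    d = electrodes - loc
    r = np.linalg.norm(d, axis=1)
    return d / (4 * np.pi * SIGMA * r[:, None] ** 3)

def fit_dipole(v, electrodes, grid, lam=1e-9, min_dist=10e-6):
    """For each candidate location (beyond a minimum cell-probe distance),
    solve the Tikhonov-regularized linear problem for the moment; keep the
    location that jointly minimizes model error and dipole size."""
    best = None
    for loc in grid:
        if np.linalg.norm(electrodes - loc, axis=1).min() < min_dist:
            continue  # enforce the minimum cell-probe distance
        L = lead_field(electrodes, loc)
        m = np.linalg.solve(L.T @ L + lam * np.eye(3), L.T @ v)
        cost = np.sum((L @ m - v) ** 2) + lam * np.sum(m ** 2)
        if best is None or cost < best[0]:
            best = (cost, loc, m)
    return best[1], best[2]

# hypothetical 12-site probe and a noiseless synthetic dipole source
electrodes = np.array([[x, y, z] for x in (0.0, 25e-6)
                       for y in (0.0, 25e-6)
                       for z in (0.0, 50e-6, 100e-6)])
true_loc = np.array([50e-6, 25e-6, 50e-6])
true_m = np.array([0.0, 0.0, 2e-9])  # dipole moment along the z-axis
v = lead_field(electrodes, true_loc) @ true_m

grid = np.array([[x, y, z] for x in np.linspace(25e-6, 100e-6, 4)
                 for y in np.linspace(0.0, 50e-6, 3)
                 for z in np.linspace(0.0, 100e-6, 5)])
loc_hat, m_hat = fit_dipole(v, electrodes, grid)
```

With noiseless data and the true location on the search grid, the fit recovers the source exactly; in practice the residual reflects noise and probe-geometry error.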
Multisite recording; Inverse problem; Passive conductor model; Lead field theory; Finite element method (FEM)
It is becoming increasingly clear that the brain processes sensory stimuli differently according to whether they are passively or actively acquired, and these differences can be seen early in the sensory pathway. In the nucleus of the solitary tract (NTS), the first relay in the central gustatory neuraxis, a rich variety of sensory inputs generated by active licking converge. Here we show that taste responses in the NTS reflect these interactions. Experiments consisted of recordings of taste-related activity in the NTS of awake rats as they freely licked exemplars of the five basic taste qualities (sweet, sour, salty, bitter, umami). Nearly all taste-responsive cells were broadly tuned across taste qualities. A subset responded to taste with long latencies (>1.0 s), suggesting the activation of extra-oral chemoreceptors. Analyses of the temporal characteristics of taste responses showed that spike timing conveyed significantly more information than spike count alone in almost half of NTS cells, as in anesthetized rats, but with less information per cell. In addition to taste-responsive cells, the NTS contains cells that synchronize with licks. Since the lick pattern per se can convey information, these cells may collaborate with taste-responsive cells to identify taste quality. Other cells become silent during licking. These “anti-lick” cells show a surge in firing rate that predicts the beginning and signals the end of a lick bout. Collectively, the data reveal that the NTS contains a complex array of cell types, of which taste-responsive cells are only a portion, and that these cell types work together to acquire sensory information.
To determine whether EEG spectral analysis could be used to demonstrate awareness in patients with severe brain injury.
We recorded EEG from healthy controls and three patients with severe brain injury, ranging from minimally conscious state (MCS) to locked-in state (LIS), while they were asked to imagine motor and spatial navigation tasks. We assessed EEG spectral differences from 4 to 24 Hz with univariate comparisons (individual frequencies) and multivariate comparisons (patterns across the frequency range).
In controls, EEG spectral power differed at multiple frequency bands and channels during performance of both tasks compared to a resting baseline. As patterns of signal change were inconsistent between controls, we defined a positive response in patient subjects as consistent spectral changes across task performances. One patient in MCS and one in LIS showed evidence of motor imagery task performance, though with patterns of spectral change different from the controls.
EEG power spectral analysis demonstrates evidence for performance of mental imagery tasks in healthy controls and patients with severe brain injury.
EEG power spectral analysis can be used as a flexible bedside tool to demonstrate awareness in brain-injured patients who are otherwise unable to communicate.
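As a rough illustration of the univariate comparison (power at individual frequencies during task versus rest), here is a numpy-only sketch on synthetic epochs; the sampling rate, epoch count, and the injected 10 Hz "imagery" rhythm are arbitrary assumptions, not the patients' data:

```python
import numpy as np

def band_power(epochs, fs):
    """Mean power spectrum across epochs (plain periodogram)."""
    spec = np.abs(np.fft.rfft(epochs, axis=1)) ** 2
    freqs = np.fft.rfftfreq(epochs.shape[1], 1 / fs)
    return freqs, spec.mean(axis=0)

fs, n = 250, 250  # 1-s epochs at 250 Hz (hypothetical montage)
rng = np.random.default_rng(2)
t = np.arange(n) / fs
rest = rng.standard_normal((40, n))
# task epochs carry extra 10 Hz power, standing in for an imagery-related change
task = rng.standard_normal((40, n)) + np.sin(2 * np.pi * 10 * t)
f_r, p_rest = band_power(rest, fs)
f_t, p_task = band_power(task, fs)
i10 = np.argmin(np.abs(f_t - 10))  # index of the 10 Hz bin
```

A univariate test would then compare `p_task` and `p_rest` bin by bin; the multivariate comparison treats the whole 4-24 Hz pattern as one vector.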
Consciousness; EEG spectral analysis; motor imagery; traumatic brain injury; locked-in state; minimally conscious state
The space of visual signals is high-dimensional and natural visual images have a highly complex statistical structure. While many studies suggest that only a limited number of image statistics are used for perceptual judgments, a full understanding of visual function requires analysis not only of the impact of individual image statistics, but also of how they interact. In natural images, these statistical elements (luminance distributions, correlations of low and high order, edges, occlusions, etc.) are intermixed, and their effects are difficult to disentangle. Thus, there is a need for construction of stimuli in which one or more statistical elements are introduced in a controlled fashion, so that their individual and joint contributions can be analyzed. With this as motivation, we present algorithms to construct synthetic images in which local image statistics—including luminance distributions, pair-wise correlations, and higher-order correlations—are explicitly specified and all other statistics are determined implicitly by the maximum-entropy principle. We then apply this approach to measure the sensitivity of the human visual system to local image statistics and to sample their interactions.
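One concrete instance of this kind of construction is the "even" texture of Julesz and colleagues: a binary image in which every 2×2 block is constrained to even parity, while the luminance histogram and all pairwise correlations match those of a fair-coin random texture in the ensemble average. A minimal sketch:

```python
import numpy as np

def even_texture(n, rng):
    """Binary texture in which every 2x2 block of checks has even parity
    (the 'even' isodipole texture). The first row and column are fair coin
    flips; every remaining check is then forced by the parity constraint.
    In the ensemble average, luminance and all pairwise correlations match
    an i.i.d. coin-flip texture, so only fourth- and higher-order
    statistics distinguish it from noise."""
    t = np.zeros((n, n), dtype=int)
    t[0, :] = rng.integers(0, 2, n)
    t[:, 0] = rng.integers(0, 2, n)
    for i in range(1, n):
        for j in range(1, n):
            t[i, j] = t[i - 1, j] ^ t[i, j - 1] ^ t[i - 1, j - 1]
    return t

rng = np.random.default_rng(0)
tex = even_texture(128, rng)
# parity of every 2x2 block: always even by construction
block_sums = tex[:-1, :-1] + tex[1:, :-1] + tex[:-1, 1:] + tex[1:, 1:]
```

The general algorithms in the paper handle a much larger family of specified statistics; this is just the simplest special case.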
Face and symmetry processing have common characteristics, and several lines of evidence suggest they interact. To characterize their relationship and possible interactions, in the present study we created a novel library of images in which symmetry and face-likeness were manipulated independently. Participants identified the target that was most symmetric among distractors of equal face-likeness (Experiment 1) and identified the target that was most face-like among distractors of equal symmetry (Experiment 2). In Experiment 1, we found that symmetry judgments improved when the stimuli were more face-like. In Experiment 2, we found a more complex interaction: Image symmetry had no effect on detecting frontally viewed faces, but worsened performance for nonfrontally viewed faces. There was no difference in performance for upright versus inverted images, suggesting that these interactions occurred on the parts-based level. In sum, when symmetry and face-likeness are independently manipulated, we find that each influences the perception of the other, but the nature of the interactions differs.
Faces; Symmetry; Vision; Inversion effect
Quantifying similarity and dissimilarity of spike trains is an important requisite for understanding neural codes. Spike metrics constitute a class of approaches to this problem. In contrast to most signal-processing methods, spike metrics operate on time series of all-or-none events, and are, thus, particularly appropriate for extracellularly recorded neural signals. The spike metric approach can be extended to multineuronal recordings, mitigating the ‘curse of dimensionality’ typically associated with analyses of multivariate data. Spike metrics have been usefully applied to the analysis of neural coding in a variety of systems, including vision, audition, olfaction, taste and electric sense.
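The canonical cost-based spike metric (Victor–Purpura) can be computed with a short dynamic program; this sketch assumes the spike times are sorted, and the parameter q (in 1/s) sets the timescale at which two spikes count as "the same event, shifted" rather than one deleted and one inserted:

```python
def spike_distance(a, b, q):
    """Victor-Purpura cost-based metric between two spike trains (sorted
    lists of spike times): the minimal total cost of turning `a` into `b`,
    where deleting or inserting a spike costs 1 and moving a spike by dt
    costs q*|dt|. Classic edit-distance dynamic program, O(len(a)*len(b))."""
    na, nb = len(a), len(b)
    D = [[0.0] * (nb + 1) for _ in range(na + 1)]
    for i in range(1, na + 1):
        D[i][0] = float(i)  # delete all spikes of `a`
    for j in range(1, nb + 1):
        D[0][j] = float(j)  # insert all spikes of `b`
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            D[i][j] = min(D[i - 1][j] + 1.0,          # delete a[i-1]
                          D[i][j - 1] + 1.0,          # insert b[j-1]
                          D[i - 1][j - 1] + q * abs(a[i - 1] - b[j - 1]))
    return D[na][nb]
```

Since a shift costs q·|Δt| while a delete-plus-insert costs 2, spikes farther apart than 2/q are effectively treated as distinct events; q = 0 reduces the metric to a pure spike-count distance.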
Psychophysical and fMRI studies have indicated that visual processing of global symmetry has distinctive scaling properties, and proceeds more slowly than analysis of contrast, spatial frequency, and texture. We therefore undertook a visual evoked potential (VEP) study to directly compare the dynamics of symmetry and texture processing, and to determine the extent to which they interact.
Stimuli consisted of interchange between structured and random black-and-white checkerboard stimuli. For symmetry, structured stimuli were colored with 2-fold symmetry (horizontal or vertical mirror), 4-fold symmetry (both mirror axes), and 8-fold symmetry (oblique mirror axes added). For texture, structured stimuli were colored according to the “even” isodipole texture (Julesz, Gilbert & Victor, 1978). Thus, all stimuli had the same contrast and check size, but differed substantially in correlation structure.
To separate components of the VEP related to symmetry and texture from components that could be generated by local luminance and contrast changes, we extracted the odd-harmonic components of the VEP (recorded at Cz-Oz, Cz-O1, Cz-O2, Cz-Pz) elicited by structured-random interchange.
Responses to symmetry were largest for the 8-fold patterns, and progressively smaller for 4-fold, vertical, and horizontal symmetry patterns. 8-fold patterns were therefore used in the remainder of the study. The symmetry response was shifted to larger checks and lower temporal frequencies compared to the response to texture, and its temporal tuning was broader. Processing of symmetry thus makes use of neural mechanisms with larger receptive fields and slower, more sustained temporal tuning characteristics than those involved in the analysis of texture.
Sparse stimuli were used to dissociate check size and check density. VEP responses to sparse symmetry stimuli showed that there is no difference between 1st and 2nd order symmetry for densities less than 12.5%. We discuss these findings in relation to local and global visual processes.
Symmetry; texture; visual evoked potentials; isodipole; form vision
To understand the functional connectivity of neural networks, it is important to develop simple and incisive descriptors of multineuronal firing patterns. Analysis at the pairwise level has proven to be a powerful approach in the retina, but it may not suffice to understand complex cortical networks. Here we address the problem of describing interactions among triplets of neurons. We consider two approaches: an information-geometric measure (Amari, 2001), which we call the “strain,” and the Kullback-Leibler divergence. While both approaches can be used to assess whether firing patterns differ from those predicted by a pairwise maximum-entropy model, the strain provides additional information. Specifically, when the observed firing patterns differ from those predicted by a pairwise model, the strain indicates the nature of this difference – whether there is an excess or a deficit of synchrony – while the Kullback-Leibler divergence only indicates the magnitude of the difference. We show that the strain has technical advantages, including ease of calculation of confidence bounds and bias, and robustness to the kinds of spike-sorting errors associated with tetrode recordings.
We demonstrate the biological importance of these points via an analysis of multineuronal firing patterns in primary visual cortex. There is a striking scale-dependent behavior of triplet firing patterns: deviations from the pairwise model are substantial when the neurons are within 300 microns of each other, and negligible when they are at a distance of > 600 microns. The strain identifies a consistent pattern to these interactions: when triplet interactions are present, the strain is nearly always negative, indicating that there is less synchrony than would be expected from the pairwise interactions alone.
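For a triplet of binary neurons, the strain can be read off from the eight pattern probabilities. The sketch below uses the raw log-linear interaction coefficient; normalization conventions differ across papers, so treat the overall scale as an assumption:

```python
import numpy as np

def strain(p):
    """Third-order interaction coefficient of the log-linear expansion of a
    distribution over the 8 firing patterns of three binary neurons
    (p indexed as p[x1, x2, x3]). It is zero exactly when the distribution
    is a pairwise maximum-entropy model; a negative value means triplet
    synchrony is suppressed relative to the pairwise prediction, a positive
    value means excess synchrony. (Normalization conventions vary; this is
    the raw log-odds form.)"""
    p = np.asarray(p, dtype=float)
    num = p[1, 1, 1] * p[1, 0, 0] * p[0, 1, 0] * p[0, 0, 1]
    den = p[1, 1, 0] * p[1, 0, 1] * p[0, 1, 1] * p[0, 0, 0]
    return float(np.log(num / den))

# three independent neurons: the strain vanishes
marg = [np.array([1 - q, q]) for q in (0.2, 0.3, 0.4)]
p_ind = np.einsum('i,j,k->ijk', *marg)

# adding excess common firing (extra mass on 000 and 111) makes it positive
p_sync = 0.9 * p_ind + 0.1 * np.array(
    [[[0.5, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 0.5]]])
```

Unlike a Kullback-Leibler divergence from the pairwise fit, which is nonnegative, the sign of this quantity distinguishes excess from deficient synchrony, which is the property exploited in the analysis above.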
Maximum entropy; Pairwise model; Multineuron; Synchrony; Information geometry
Connectivity in the cortex is organized at multiple scales [1-5], suggesting that scale-dependent correlated activity is particularly important for understanding the behavior of sensory cortices and their function in stimulus encoding. Here, we analyze the scale-dependent structure of cortical interactions by using maximum entropy models [6-9] to characterize multiple-tetrode recordings from primary visual cortex of anesthetized monkeys (Macaca mulatta). We compare the properties of firing patterns among local clusters of neurons (<300 microns) with neurons separated by larger distances (600-2500 microns). We find that local firing patterns are distinctive: while multi-neuronal firing patterns at larger distances can be predicted by pairwise interactions, patterns within local clusters often show evidence of high-order correlations. Surprisingly, these local correlations are flexible and rapidly reorganized by visual input. While they modestly reduce the amount of information that a cluster conveys, they also modify the format of this information, creating sparser codes by increasing the periods of total quiescence, and concentrating information into briefer periods of common activity. These results imply a hierarchical organization of neuronal correlations: simple pairwise correlations link neurons over scales of tens to hundreds of minicolumns, but on the scale of a few minicolumns, ensembles of neurons form complex subnetworks whose moment-to-moment effective connectivity is dynamically reorganized by the stimulus.
Several recent studies have shown that the ON and OFF channels of the visual system are not simple mirror images of each other; rather, their response characteristics are asymmetric (Chichilnisky and Kalmar, 2002; Sagdullaev and McCall, 2005). How these asymmetries bear on visual processing is not well understood. Here we show that ON and OFF ganglion cells exhibit a strong asymmetry in their temporal adaptation to photopic (day) and scotopic (night) conditions and that the asymmetry confers a functional advantage. Under photopic conditions, ON and OFF ganglion cells show similar temporal characteristics. Under scotopic conditions, the two cell classes diverge – ON cells shift their tuning to low temporal frequencies, while OFF cells continue to respond to high frequencies. This difference in processing corresponds to an asymmetry in the natural world, one produced by the Poisson nature of photon capture, and it persists over a broad range of light levels. This work characterizes a previously unknown divergence in the ON and OFF pathways and its utility for visual processing. Furthermore, the results have implications for downstream circuitry and thus offer new constraints for models of downstream processing, since ganglion cells serve as building blocks for circuits in higher brain areas. For example, if simple cells in visual cortex rely on complementary interactions between the two pathways, such as push-pull interactions (Alonso et al., 2001; Hirsch, 2003), their receptive fields may be radically different under scotopic conditions, when the ON and OFF pathways are out of sync.
Retinal Ganglion Cell; Sensory Neurons; Parallel; Information; Vision; Receptive Field; Population coding
Deep brain stimulation (DBS) is an established therapy for Parkinson’s disease and is being investigated as a treatment for chronic depression, obsessive-compulsive disorder, and for facilitating functional recovery of patients in minimally conscious states following brain injury. For all of these applications, quantitative assessments of the behavioral effects of DBS are crucial to determine whether the therapy is effective and, if so, how stimulation parameters can be optimized. Behavioral analyses for DBS are challenging because subject performance is typically assessed from only a small set of discrete measurements made on a discrete rating scale, the time course of DBS effects is unknown, and between-subject differences are often large. We demonstrate how Bayesian state-space methods can be used to characterize the relationship between DBS and behavior, comparing our approach with logistic regression in two experiments: the effects of DBS on attention of a macaque monkey performing a reaction-time task, and the effects of DBS on motor behavior of a human patient in a minimally conscious state. The state-space analysis can assess the magnitude of DBS behavioral facilitation (positive or negative) at specific time points and has important implications for developing principled strategies to optimize DBS paradigms.
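A minimal sketch in the spirit of this approach (not the full Bayesian implementation): a random-walk latent performance state observed through Bernoulli trials, estimated with a one-step Gaussian approximate filter. The process-noise variance and the simulated task below are assumptions:

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def filter_performance(y, sigma2=0.05, x0=0.0, v0=1.0):
    """Forward filter for a random-walk latent performance state x_t
    observed through Bernoulli trials y_t (1 = correct). Each step makes a
    one-step Gaussian approximation to the posterior (assumed process-noise
    variance sigma2); returns the filtered success probability per trial."""
    x, v = x0, v0
    p_filt = []
    for yt in y:
        v_pred = v + sigma2                      # random-walk prediction
        p = logistic(x)                          # predicted success probability
        x = x + v_pred * (yt - p)                # approximate posterior mode
        v = 1.0 / (1.0 / v_pred + p * (1.0 - p)) # approximate posterior variance
        p_filt.append(logistic(x))
    return np.array(p_filt)

# simulated step change in performance, e.g. when stimulation is switched on
rng = np.random.default_rng(1)
y = np.concatenate([rng.random(200) < 0.2,
                    rng.random(200) < 0.8]).astype(int)
p_hat = filter_performance(y)
```

The filtered probability tracks the simulated step in performance trial by trial, which is the kind of time-resolved estimate used to assess DBS facilitation at specific time points; a logistic regression on trial number would instead impose a fixed parametric time course.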
deep brain stimulation; state-space models; Bayesian estimation; behavior; model selection; logistic regression
Conventional methods widely available for the analysis of spike trains and related neural data include various time- and frequency-domain analyses, such as peri-event and interspike interval histograms, spectral measures, and probability distributions. Information theoretic methods are increasingly recognized as significant tools for the analysis of spike train data. However, developing robust implementations of these methods can be time-consuming, and determining their applicability to neural recordings can require expertise. To facilitate more widespread adoption of these informative methods by the neuroscience community, we have developed the Spike Train Analysis Toolkit (STAToolkit), a software package that implements, documents, and guides application of several information-theoretic spike train analysis techniques, thus minimizing the effort needed to adopt and use them. The implementation behaves like a typical Matlab toolbox, but the underlying computations are coded in C for portability, optimized for efficiency, and interfaced with Matlab via the MEX framework. STAToolkit runs on any of three major platforms: Windows, Mac OS, and Linux. The toolkit reads input from files with an easy-to-generate, text-based, platform-independent format. STAToolkit, including full documentation and test cases, is freely available as open source via http://neuroanalysis.org, and is maintained as a resource for the computational neuroscience and neuroinformatics communities. Use cases drawn from somatosensory and gustatory neurophysiology, and community use of STAToolkit, demonstrate its utility and scope.
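For flavor, here is the simplest of the techniques such a toolkit implements: a plug-in ("direct") estimate of the mutual information between stimulus and response. This sketch is written in Python rather than the toolkit's C/Matlab, and omits the sampling-bias corrections a robust implementation needs:

```python
import numpy as np
from collections import Counter

def plugin_entropy(samples):
    """Plug-in (maximum-likelihood) entropy estimate, in bits."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_information(stimuli, responses):
    """I(S;R) = H(R) - H(R|S), with plug-in entropy estimates throughout.
    `responses` can be any discrete code, e.g. spike counts per trial."""
    stimuli = np.asarray(stimuli)
    responses = np.asarray(responses)
    h_r = plugin_entropy(responses)
    h_r_given_s = 0.0
    for s in np.unique(stimuli):
        mask = stimuli == s
        h_r_given_s += mask.mean() * plugin_entropy(responses[mask])
    return h_r - h_r_given_s
```

With few trials the plug-in estimator is badly biased upward, which is exactly why packaged, vetted implementations with bias-correction options are valuable.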
Computational neuroscience; Information theory; Neural coding; Neurodatabases; Data sharing
Neurons in primary visual cortex are widely considered to be oriented filters or energy detectors that perform one-dimensional feature analysis. The main deviations from this picture are generally thought to include gain controls and modulatory influences. Here we investigate receptive field (RF) properties of single neurons with localized two-dimensional stimuli, the two-dimensional Hermite functions (TDHs). TDHs can be grouped into distinct complete orthonormal bases that are matched in contrast energy, spatial extent, and spatial frequency content but differ in two-dimensional form, and thus can be used to probe spatially specific nonlinearities. Here we use two such bases: Cartesian TDHs, which resemble vignetted gratings and checkerboards, and polar TDHs, which resemble vignetted annuli and dartboards. Of 63 isolated units, 51 responded to TDH stimuli. In 37/51 units, we found significant differences in overall response size (21/51) or apparent RF shape (28/51) that depended on which basis set was used. Because of the properties of the TDH stimuli, these findings are inconsistent with simple feedforward nonlinearities and with many variants of energy models. Rather, they imply the presence of nonlinearities that are not local in either space or spatial frequency. Units showing these differences were present to a similar degree in cat and monkey, in simple and complex cells, and in supragranular, infragranular, and granular layers. We thus find a widely distributed neurophysiological substrate for two-dimensional spatial analysis at the earliest stages of cortical processing. Moreover, the population pattern of tuning to TDH functions suggests that V1 neurons sample not only orientations, but a larger space of two-dimensional form, in an even-handed manner.
Using drifting compound grating stimuli matched in energy and frequency spectrum, we previously showed that neurons in the primary visual cortex (V1) were tuned to line-like, edge-like, and intermediate one-dimensional features. Because these compound grating stimuli were drifting, allowing for potential interaction between shape and motion, we examine here the dependence of V1 feature tuning on drift speed. We find that the feature selectivity and specificity of individual V1 neurons strongly depend on speed. A simple model explains these observations in terms of an interaction between linear filtering by the receptive field and the static nonlinearity of spike threshold, embedded in a recurrent network. Although the speed-dependent behaviors in single V1 neurons preclude their acting as extractors of one-dimensional features, the population as a whole retains a representation of a full suite of features.
Quantifying the dissimilarity (or distance) between two sequences is essential to the study of action potential (spike) trains in neuroscience and genetic sequences in molecular biology. In neuroscience, traditional methods for sequence comparisons rely on techniques appropriate for multivariate data, which typically assume that the space of sequences is intrinsically Euclidean. More recently, metrics that do not make this assumption have been introduced for comparison of neural activity patterns. These metrics have a formal resemblance to those used in the comparison of genetic sequences. Yet the relationship between such metrics and the traditional Euclidean distances has remained unclear. We show, both analytically and computationally, that the geometries associated with metric spaces of event sequences are intrinsically non-Euclidean. Our results demonstrate that metric spaces enrich the study of neural activity patterns, since accounting for perceptual spaces requires a non-Euclidean geometry.
Understanding how the brain performs computations requires understanding neuronal firing patterns at successive levels of processing—a daunting and seemingly intractable task. Two recent studies have made dramatic progress on this problem by showing how its dimensionality can be reduced. Using the retina as a model system, they demonstrated that multineuronal firing patterns can be predicted by pairwise interactions.
Adaptation and visual attention are two processes that alter neural responses to luminance contrast. Rapid contrast adaptation changes response size and dynamics at all stages of visual processing, while visual attention has been shown to modulate both contrast gain and response gain in macaque extrastriate visual cortex. Since attention aims to enhance behaviorally relevant sensory responses while adaptation acts to attenuate neural activity, we asked how attention alters adaptation. We present here single-unit recordings from V4 of two rhesus macaques performing a cued target detection task. The study was designed to characterize the effects of attention on the size and dynamics of a sequence of responses produced by a series of flashed oriented gratings parametric in luminance contrast. We found that the effect of attention on the response dynamics of V4 neurons is inconsistent with a mechanism that only alters the effective stimulus contrast, or one that only rescales the gain of the response. Instead, attention modifies contrast gain early in the task, and modifies both response gain and contrast gain later in the task. We also show that responses to attended stimuli are more closely locked to the stimulus cycle than unattended responses, and that attended responses show less of the phase lag produced by adaptation than unattended responses. The phase advance that attention confers on adapted responses suggests that the attentional gain control operates in some ways like a contrast gain control, using a neural measure of contrast to influence response dynamics.
macaque; adaptation; visual attention; contrast; dynamics
We extend Spekreijse's strategy for analyzing lateral interactions in visual evoked potentials (VEPs) to clinical neurophysiologic testing of patients with epilepsy. Stimuli consisted of the radial windmill/dartboard pattern [Ratliff, F., & Zemon, V. (1982). Some new methods for the analysis of lateral interactions that influence the visual evoked potential. In: Bodis-Wollner (Ed.), Evoked potentials, Vol. 388. (pp. 113–124). New York: Annals of the New York Academy of Sciences.] and conventional checkerboards. The fundamental and 2nd-harmonic components of the steady-state responses were used to calculate indices reflecting facilitatory (FI) and suppressive (SI) cortical interactions.
We carried out two studies. In the first, VEPs in 38 patients receiving antiepileptic drug (AED) therapy were compared to those of age-matched controls. For three AEDs (tiagabine, topiramate, and felbamate), addition of the drug did not change the FI and SI compared to baseline values or those of normal controls. However, the addition of gabapentin was associated with an increase of the FI, and this change was reversed when the medication was withdrawn. This suggested a medication-specific change in cortical lateral interactions.
The second study focused on the effects of neurostimulation therapy. Eleven epilepsy patients receiving chronic vagus nerve stimulation (VNS) treatment were tested. By comparing VEPs recorded with the stimulator on (Stim-ON) and turned off (Stim-OFF) in the same session, we determined that VNS did not have a short-acting effect on lateral interactions. However, when compared with normal controls, the VNS patients had a significantly smaller SI (p < .05), but no difference in the FI, demonstrating the presence of a chronic effect. We conclude that, with the appropriate stimuli, VEPs can be used as a measure of cortical lateral interactions in normal subjects and patients with epilepsy, and can demonstrate specific changes in these interactions associated with certain treatment modalities.
Visual evoked potentials; Windmill-dartboard; Epilepsy; Gabapentin; VNS
Detection of motion is a crucial component of visual processing. To probe the computations underlying motion perception, we created a new class of non-Fourier motion stimuli, characterized by their third- and fourth-order spatiotemporal correlations. As with other non-Fourier stimuli, they lack second-order correlations, and therefore their motion cannot be detected by standard Fourier mechanisms. Additionally, these stimuli lack pairwise spatiotemporal correlations of edges or flicker—and thus their motion also cannot be detected by extraction of one of these features followed by standard motion analysis. Nevertheless, many of these stimuli produced apparent motion in human observers. The pattern of responses—i.e., which specific spatiotemporal correlations led to a percept of motion—was highly consistent across subjects. For many of these stimuli, inverting the overall contrast of the stimulus reversed the direction of apparent motion. This “reverse-phi” phenomenon challenges existing models, including models that correlate low-level features and gradient models. Our findings indicate that current knowledge of the computations underlying motion processing is incomplete, and that understanding how high-order spatiotemporal correlations lead to motion percepts will illuminate early motion processing.
motion—2D; computational modeling; isodipole texture
The apparent receptive field characteristics of sensory neurons depend on the statistics of the stimulus ensemble – a nonlinear phenomenon often called contextual modulation. Since visual cortical receptive fields determined from simple stimuli typically do not predict responses to complex stimuli, understanding contextual modulation is crucial to understanding responses to natural scenes. To analyze contextual modulation, we examined how apparent receptive fields differ for two stimulus ensembles that are matched in first- and second-order statistics but differ in their feature content: one ensemble is enriched in elongated contours. To identify systematic trends across the neural population, we used a multidimensional scaling method, the Procrustes transformation. We found that contextual modulation of receptive field components increases with their spatial extent. More surprisingly, we also found that odd-symmetric components change systematically, but even-symmetric components do not. This symmetry dependence suggests that contextual modulation is driven by oriented on-off dyads, i.e., by modulation of the strength of intracortically generated signals.
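The alignment step of a Procrustes analysis reduces to a singular value decomposition. This sketch covers only the rotation component (translation, scaling, and the population-level statistics of the full method are omitted), with the dimensions chosen arbitrarily:

```python
import numpy as np

def procrustes_rotation(A, B):
    """Orthogonal matrix R minimizing ||A @ R - B|| in the Frobenius norm
    (the orthogonal Procrustes problem, solved by an SVD of A^T B)."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

# toy data: 20 hypothetical receptive-field components in 3 dimensions,
# measured in a second context after a systematic orthogonal change
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
R0, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # a random orthogonal map
B = A @ R0
R_hat = procrustes_rotation(A, B)  # recovers the systematic transformation
```

Aligning the two contexts this way separates a shared, systematic transformation of the population from idiosyncratic cell-by-cell changes.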
primary visual cortex; plasticity; linear-nonlinear model; reverse correlation
Receptive fields of sensory neurons in the brain are usually restricted to a portion of the entire stimulus domain. At all levels of the gustatory neuraxis, however, there are many cells that are broadly tuned, i.e., they respond well to each of the basic taste qualities (sweet, sour, salty and bitter). Although it might seem that this broad tuning precludes a major role for these cells in representing taste space, here we show the opposite – namely, that the tastant-specific temporal aspects (firing rate envelope and spike timing) of their responses enable each cell to represent the entire stimulus domain. Specifically, we recorded the response patterns of cells in the nucleus of the solitary tract (NTS) to representatives of four basic taste qualities and their binary mixtures. We analyzed the temporal aspects of these responses, and used their similarities and differences to construct the taste space represented by each neuron. We found that for the more broadly tuned neurons in the NTS, the taste space is a systematic representation of the entire taste domain. That is, the taste space of these broadly tuned neurons is three-dimensional, with basic taste qualities widely separated and binary mixtures placed close to their components. Further, the way that taste quality is represented by the firing rate envelope is consistent across the population of cells. Thus, the temporal characteristics of responses in the population of NTS neurons, especially those that are more broadly tuned, produce a comprehensive and logical representation of the taste world.
taste; nucleus of the solitary tract; temporal coding; gustation; electrophysiology; rat
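The construction of a per-neuron taste space from response similarities can be sketched with classical multidimensional scaling. The dissimilarity matrix below is built from made-up stand-in response features rather than the NTS recordings; a three-dimensional embedding recovers the underlying geometry up to rotation:

```python
import numpy as np

def classical_mds(D, n_dims=3):
    """Embed a pairwise-dissimilarity matrix D into n_dims coordinates
    (classical/Torgerson MDS via double-centering)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_dims]            # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

# Hypothetical stimuli: four basic tastants and one binary mixture
labels = ["sweet", "sour", "salty", "bitter", "sweet+sour"]
rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))                   # stand-in response features
D = np.linalg.norm(X[:, None] - X[None], axis=-1) # pairwise dissimilarities
coords = classical_mds(D, n_dims=3)
print(coords.shape)                               # (5, 3): one point per tastant
```

In this scheme, a broadly tuned neuron whose temporal response patterns differ systematically across tastants yields a taste space in which the basic qualities are widely separated and mixtures fall near their components.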
An animal's ability to rapidly adjust to new conditions is essential to its survival. The nervous system, then, must be built with the flexibility to adjust, or shift, its processing capabilities on the fly. To understand how this flexibility comes about, we tracked a well-known behavioral shift, a visual integration shift, down to its underlying circuitry, and found that it is produced by a novel mechanism – a change in gap junction coupling that can turn a cell class on and off. Turning this cell class on and off shifted the circuit's behavior from one state to another and, likewise, shifted the animal's behavior. The widespread presence of similar gap junction-coupled networks in the brain suggests that this mechanism may underlie other behavioral shifts as well.
gap junction; shunt; network shift; state change; adaptation; cable theory; horizontal cell; attention
The brain has a one-back memory for visual stimuli. Neural responses to an image contain as much information about the current image as they do about the image presented immediately before.
It is currently not known how distributed neuronal responses in early visual areas carry stimulus-related information. We made multielectrode recordings from cat primary visual cortex and applied machine-learning methods to analyze the temporal evolution of stimulus-related information in the spiking activity of large ensembles of around 100 neurons. We used sequences of up to three different visual stimuli (letters of the alphabet) presented for 100 ms and separated by intervals of 100 ms or longer. Most of the information about the visual stimuli that was extractable by sophisticated machine-learning methods, i.e., support vector machines with nonlinear kernel functions, was also extractable by simple linear classification of the kind individual neurons could achieve. New stimuli did not erase information about previous stimuli: the responses to the most recent stimulus contained about equal amounts of information about that stimulus and the preceding one. This information was encoded both in the discharge rates (response amplitudes) of the ensemble and, when short integration time constants (≤∼20 ms) were used, in the precise timing of individual spikes, and it persisted for several hundred milliseconds beyond stimulus offset. The results indicate that the network from which we recorded is endowed with fading memory and is capable of performing online computations that utilize information about temporally sequential stimuli. This challenges models that assume frame-by-frame analysis of sequential inputs.
Researchers usually assume that neuronal responses carry information primarily about the stimulus that evoked them. We show here that, when multiple images are shown in a fast sequence, the response to an image contains as much information about the preceding image as about the current one. Importantly, this memory extends only to the most recent stimulus in the sequence, and the effect can be explained only partly by adaptation of neuronal responses. These discoveries were made with the help of novel methods for analyzing the high-dimensional data obtained by recording the responses of many neurons (e.g., 100) in parallel. The methods enabled us to study the information content of neural activity as it is accessible to neurons in the cortex, i.e., by collecting information only over short time intervals. This one-back memory has properties similar to iconic storage of visual information, the detailed image of a visual scene that persists for a short while (<1 s) after we close our eyes; one-back memory may thus be the neural foundation of iconic memory. Our results are consistent with recent detailed computer simulations of local cortical networks of neurons ("generic cortical microcircuits"), which suggested that integration of information over time is a fundamental computational operation of these networks.
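A toy illustration of the decoding logic, with simulated spike counts in place of the recorded data: responses are made to depend on both the current and the preceding stimulus, and a simple one-vs-rest least-squares readout (standing in for the linear classifiers used in the study) recovers both. All names and parameter values here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_trials, n_stim = 100, 300, 3     # ensemble of ~100 neurons, 3 letters

# Simulate population responses that mix current- and preceding-stimulus signals
cur = rng.integers(0, n_stim, n_trials)       # current stimulus on each trial
prev = rng.integers(0, n_stim, n_trials)      # preceding stimulus on each trial
W_cur = rng.standard_normal((n_stim, n_neurons))
W_prev = rng.standard_normal((n_stim, n_neurons))
R = W_cur[cur] + W_prev[prev] + 0.5 * rng.standard_normal((n_trials, n_neurons))

def linear_decode(R, y):
    """One-vs-rest least-squares readout; in-sample fit, for illustration only."""
    Y = np.eye(n_stim)[y]                      # one-hot targets
    W, *_ = np.linalg.lstsq(R, Y, rcond=None)  # linear weights per class
    return (R @ W).argmax(axis=1)

acc_cur = (linear_decode(R, cur) == cur).mean()
acc_prev = (linear_decode(R, prev) == prev).mean()
print(acc_cur, acc_prev)                       # both well above chance (1/3)
```

The point of the sketch is that a single linear readout of the same population activity can report either the current or the one-back stimulus, depending only on which labels it is trained against.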