Results 1-14 (14)
 

1.  Hippocampal Remapping Is Constrained by Sparseness rather than Capacity 
PLoS Computational Biology  2014;10(12):e1003986.
Grid cells in the medial entorhinal cortex encode space with firing fields that are arranged on the nodes of spatial hexagonal lattices. Potential candidates to read out the space information of this grid code and to combine it with other sensory cues are hippocampal place cells. In this paper, we investigate a population of grid cells providing feed-forward input to place cells. The capacity of the underlying synaptic transformation is determined by both spatial acuity and the number of different spatial environments that can be represented. The codes for different environments arise from phase shifts of the periodic entorhinal cortex patterns that induce a global remapping of hippocampal place fields, i.e., a new random assignment of place fields for each environment. If only a single environment is encoded, the grid code can be read out at high acuity with only a few place cells. A surplus in place cells can be used to store a space code for more environments via remapping. The number of stored environments can be increased even more efficiently by stronger recurrent inhibition and by partitioning the place cell population such that learning affects only a small fraction of them in each environment. We find that the spatial decoding acuity is much more resilient to multiple remappings than the sparseness of the place code. Since the hippocampal place code is sparse, we thus conclude that the projection from grid cells to place cells does not use its full capacity to transfer space information. Both populations may encode different aspects of space.
Author Summary
The mammalian brain represents space in the population of hippocampal place cells as well as in the population of medial entorhinal cortex grid cells. Since both populations are active at the same time, space information has to be synchronized between the two. Both brain areas are reciprocally connected, and it is unclear how the two codes influence each other. In this paper, we analyze a theoretical model of how a place code processes inputs from the grid cell population. The model shows that the sparseness of the place code poses a much stronger constraint than maximal information transfer. We thus conclude that the potentially high spatial acuity of the grid code cannot be efficiently conveyed to a sparse place cell population and thus propose that sparseness and spatial acuity are two independent objectives of the neuronal place representation.
doi:10.1371/journal.pcbi.1003986
PMCID: PMC4256019  PMID: 25474570
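The feed-forward grid-to-place transformation described above can be sketched in a toy one-dimensional version. Everything here is an illustrative assumption rather than the paper's actual model: cosine grid tuning on a 1 m track, uniform random weights, and a percentile threshold standing in for recurrent inhibition.

```python
import numpy as np

def grid_rate(x, period, phase):
    """Idealized 1D grid-cell tuning: periodic firing bumps along position x."""
    return 0.5 * (1.0 + np.cos(2.0 * np.pi * (x - phase) / period))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)          # positions on a 1 m track
periods = rng.uniform(0.3, 0.8, 50)     # grid spacings (m)
phases = rng.uniform(0.0, 1.0, 50)      # remapping = re-drawing these phases

G = np.array([grid_rate(x, p, ph) for p, ph in zip(periods, phases)])  # (50, 200)
w = rng.uniform(0.0, 1.0, 50)           # random feed-forward weights
drive = w @ G                           # summed grid input at each position

theta = np.percentile(drive, 90)        # strong inhibition sets a high threshold ...
place_field = np.maximum(drive - theta, 0.0)  # ... yielding a sparse place response
print("fraction of track with nonzero rate:", np.mean(place_field > 0))
```

Redrawing `phases` emulates global remapping: the same weights then yield a different, equally sparse response profile.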
2.  Neuronal Adaptation Translates Stimulus Gaps into a Population Code 
PLoS ONE  2014;9(4):e95705.
Neurons in sensory pathways exhibit a vast multitude of adaptation behaviors, which are assumed to aid the encoding of temporal stimulus features and provide the basis for a population code in higher brain areas. Here we study the transition to a population code for auditory gap stimuli both in neurophysiological recordings and in a computational network model. Independent component analysis (ICA) of experimental data from the inferior colliculus of Mongolian gerbils reveals that the network encodes different gap sizes primarily with its population firing rate within 30 ms after the presentation of the gap, with longer gaps evoking higher network activity. We then develop a computational model to investigate possible mechanisms for generating the population code for gaps. Phenomenological (ICA) and functional (discrimination performance) analyses of our simulated networks show that the experimentally observed patterns may result from heterogeneous adaptation, where adaptation provides gap detection at the single-neuron level and neuronal heterogeneity ensures discriminable population codes for the whole range of gap sizes in the input. Furthermore, our work suggests that network recurrence additionally enhances the network's ability to provide discriminable population patterns.
doi:10.1371/journal.pone.0095705
PMCID: PMC3997522  PMID: 24759970
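As a hedged illustration of how adaptation can translate gap duration into a population rate, consider a resource-depletion toy model: a resource depleted by ongoing sound recovers during silence, so the response at sound resumption grows with gap size, and heterogeneous recovery time constants spread this coding across the population. The dynamics, time constants, and depletion level are illustrative assumptions, not fits to the gerbil data.

```python
import numpy as np

def post_gap_response(gap_ms, tau_ms, dt=1.0):
    """Resource r is depleted by ongoing sound and recovers toward 1 during
    a silent gap; the response to sound resuming is proportional to r."""
    r = 0.2                                  # depleted state at gap onset
    for _ in range(int(gap_ms / dt)):
        r += dt * (1.0 - r) / tau_ms         # exponential recovery in silence
    return r

taus = np.linspace(20.0, 200.0, 10)          # heterogeneous recovery time constants
gaps = [2.0, 10.0, 50.0]                     # gap sizes in ms
pop_rate = {g: np.mean([post_gap_response(g, t) for t in taus]) for g in gaps}
print(pop_rate)                              # longer gap -> larger population response
```

The heterogeneity in `taus` is what keeps the mapping from gap size to population rate discriminable over the whole range rather than saturating at one time scale.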
3.  Re-encoding of associations by recurrent plasticity increases memory capacity 
Recurrent networks have been proposed as a model of associative memory. In such models, memory items are stored in the strength of connections between neurons. These modifiable connections or synapses constitute a shared resource among all stored memories, limiting the capacity of the network. Synaptic plasticity at different time scales can play an important role in optimizing the representation of associative memories, by keeping them sparse, uncorrelated and non-redundant. Here, we use a model of sequence memory to illustrate how plasticity allows a recurrent network to self-optimize by gradually re-encoding the representation of its memory items. A learning rule is used to sparsify large patterns, i.e., patterns with many active units. As a result, pattern sizes become more homogeneous, which increases the network's dynamical stability during sequence recall and allows more patterns to be stored. Last, we show that the learning rule allows for online learning in that it keeps the network in a robust dynamical steady state while storing new memories and overwriting old ones.
doi:10.3389/fnsyn.2014.00013
PMCID: PMC4051198  PMID: 24959137
associative memory; memory capacity; sparse coding; recurrent plasticity; memory consolidation
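One way to picture the sparsification step is a k-winners rule that silences the most weakly driven units of an oversized pattern, homogenizing pattern sizes. The drive values and target sparseness below are arbitrary stand-ins for illustration, not the paper's actual learning rule.

```python
import numpy as np

def sparsify(pattern, drive, k):
    """Re-encode an oversized binary pattern: keep only the k active units
    with the largest recurrent drive, silencing the rest (a k-winners rule)."""
    active = np.flatnonzero(pattern)
    if active.size <= k:
        return pattern.copy()
    keep = active[np.argsort(drive[active])[-k:]]
    out = np.zeros_like(pattern)
    out[keep] = 1
    return out

rng = np.random.default_rng(1)
n, k = 100, 10
big = (rng.random(n) < 0.3).astype(int)   # pattern with ~30 active units
drive = rng.random(n)                     # stand-in for recurrent input strength
small = sparsify(big, drive, k)
print(big.sum(), "->", small.sum())       # pattern size homogenized to k
```

Applied gradually during recall, such a rule keeps pattern sizes uniform, which is the property the abstract links to dynamical stability and capacity.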
4.  Inhomogeneous Sparseness Leads to Dynamic Instability During Sequence Memory Recall in a Recurrent Neural Network Model 
Theoretical models of associative memory generally assume most of their parameters to be homogeneous across the network. Conversely, biological neural networks exhibit high variability of structural as well as activity parameters. In this paper, we extend the classical clipped learning rule by Willshaw to networks with inhomogeneous sparseness, i.e., the number of active neurons may vary across memory items. We evaluate this learning rule for sequence memory networks with instantaneous feedback inhibition and show that, unsurprisingly, memory capacity degrades with increased variability in sparseness. The loss of capacity, however, is very small for short sequences of less than about 10 associations. Most interestingly, we further show that, due to feedback inhibition, overly large patterns are much less detrimental to memory capacity than overly small patterns.
doi:10.1186/2190-8567-3-8
PMCID: PMC3844438  PMID: 23876197
Associative memory; Sequence memory; Memory capacity; Sparse coding
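A minimal sketch of the setting, assuming the classical binary (clipped) Willshaw rule for storing a sequence of associations, with instantaneous feedback inhibition modeled as a winners-take-all step. Network size and the range of pattern sizes are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
sizes = rng.integers(8, 16, size=6)        # inhomogeneous pattern sizes
patterns = []
for s in sizes:
    p = np.zeros(n, dtype=int)
    p[rng.choice(n, s, replace=False)] = 1
    patterns.append(p)

# clipped Hebbian rule: a synapse is either present (1) or absent (0)
W = np.zeros((n, n), dtype=int)
for pre, post in zip(patterns[:-1], patterns[1:]):
    W |= np.outer(post, pre)               # clipping keeps W binary

def recall_step(x, k):
    """One sequence transition; feedback inhibition as k-winners-take-all."""
    h = W @ x
    out = np.zeros(n, dtype=int)
    out[np.argsort(h)[-k:]] = 1
    return out

x = patterns[0]
for i in range(1, len(patterns)):
    x = recall_step(x, sizes[i])
overlap = (x & patterns[-1]).sum() / sizes[-1]
print("final overlap with target pattern:", overlap)
```

With only a handful of stored associations, crosstalk is negligible and the sequence is recalled cleanly; capacity effects appear as more associations are packed into the same binary weight matrix.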
5.  Tonotopic organization of the hyperpolarization-activated current (Ih) in the mammalian medial superior olive 
Neuronal membrane properties can vary considerably even within distinct morphological cell classes. The mechanisms and functional consequences of this diversity, however, remain largely unexplored. In the medial superior olive (MSO), a brainstem nucleus that performs binaural coincidence detection, membrane properties at rest are largely governed by the hyperpolarization-activated inward current (Ih), which enables the temporally precise integration of excitatory and inhibitory inputs. Here, we report that Ih density varies along the putative tonotopic axis of the MSO, with Ih being largest in ventral, high-frequency (HF) processing neurons. The half-maximal activation voltage and activation time constant of Ih are also differentially distributed, such that Ih in the putative HF processing neurons activates faster and at more depolarized levels. Intracellular application of saturating concentrations of cyclic AMP removed the regional difference in hyperpolarization-activated cyclic nucleotide gated (HCN) channel activation, but not the difference in Ih density. Experimental data in conjunction with a computational model suggest that increased Ih levels are helpful in counteracting temporal summation of phase-locked inhibitory inputs, which is particularly prominent in HF neurons.
doi:10.3389/fncir.2013.00117
PMCID: PMC3708513  PMID: 23874271
HCN channel; medial superior olive; sound localization; tonotopy; coincidence detection
6.  Recurrent Coupling Improves Discrimination of Temporal Spike Patterns 
Despite the ubiquitous presence of recurrent synaptic connections in sensory neuronal systems, their general functional purpose is not well understood. A recent conceptual advance has been achieved by theories of reservoir computing, in which recurrent networks have been proposed to generate short-term memory as well as to improve the neuronal representation of the sensory input for subsequent computations. Here, we present a numerical study of the distinct effects of inhibitory and excitatory recurrence in a canonical linear classification task. We find that both types of coupling improve the ability to discriminate temporal spike patterns as compared to a purely feed-forward system, although in different ways. For a large class of inhibitory networks, performance is optimal as long as roughly 50% of neurons per stimulus are active in the resulting population code. Notably, the contribution of the inactive neurons to the neural code is even more informative than that of the active neurons, generating an inherent robustness of classification performance against temporal jitter of the input spikes. Excitatory couplings not only produce a short-term memory buffer but also improve the linear separability of the population patterns by evoking more irregular firing than in the purely inhibitory case. As the excitatory connectivity becomes sparser, firing becomes more variable and pattern separability improves. We argue that the proposed paradigm is particularly well suited as a conceptual framework for processing of sensory information in the auditory pathway.
doi:10.3389/fncom.2012.00025
PMCID: PMC3343312  PMID: 22586392
recurrent neural network; network dynamics; auditory pathway; sparse connectivity
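A toy version of the inhibitory case described above: global inhibition is modeled as a per-trial threshold that keeps exactly half the population active, so the binary pattern carries information in both active and silent neurons, and a nearest-centroid readout stands in for the paper's linear classifier. Stimulus statistics and noise level are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_stimuli, n_trials = 60, 5, 40
proto = rng.normal(0.0, 1.0, (n_stimuli, n_neurons))  # per-stimulus drive

def population_pattern(stim, frac_active=0.5):
    """Global feedback inhibition as a per-trial threshold that leaves a
    fixed fraction of the population active (binary code)."""
    drive = proto[stim] + 0.3 * rng.normal(0.0, 1.0, n_neurons)  # trial noise
    thresh = np.quantile(drive, 1.0 - frac_active)
    return (drive > thresh).astype(int)

# nearest-centroid readout as a stand-in for a trained linear classifier
train = {s: np.mean([population_pattern(s) for _ in range(n_trials)], axis=0)
         for s in range(n_stimuli)}
correct = 0
for s in range(n_stimuli):
    for _ in range(n_trials):
        p = population_pattern(s)
        guess = min(train, key=lambda c: np.sum((p - train[c]) ** 2))
        correct += guess == s
accuracy = correct / (n_stimuli * n_trials)
print("classification accuracy:", accuracy)
```

Because the threshold normalizes activity per trial, which neurons stay silent is as stimulus-specific as which ones fire, mirroring the abstract's point about informative inactive neurons.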
7.  Circuit Mechanisms of Memory Formation 
Neural Plasticity  2011;2011:494675.
doi:10.1155/2011/494675
PMCID: PMC3238405  PMID: 22203914
8.  A Corticothalamic Circuit Model for Sound Identification in Complex Scenes 
PLoS ONE  2011;6(9):e24270.
The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal.
doi:10.1371/journal.pone.0024270
PMCID: PMC3172241  PMID: 21931668
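The error-signal idea above can be caricatured as greedy sparse recognition against a dictionary: "error" units carry the residual between the observed signal and a running internal estimate, and objects are identified one at a time from the residual. The Gaussian dictionary, the two concurrent sources, and the unit mixing coefficients are illustrative assumptions, not the paper's corticothalamic circuit.

```python
import numpy as np

rng = np.random.default_rng(4)
n_features, n_objects = 40, 30
D = rng.normal(0.0, 1.0, (n_features, n_objects))
D /= np.linalg.norm(D, axis=0)             # dictionary of known auditory objects

true_sources = [3, 17]                     # two concurrently active sources
signal = D[:, true_sources].sum(axis=1)    # observed superposition

residual = signal.copy()                   # activity of the "error" units
identified = []
for _ in range(len(true_sources)):
    k = int(np.argmax(D.T @ residual))     # object best matching the residual
    identified.append(k)
    residual = residual - D[:, k]          # refine the internal estimate
print("identified objects:", sorted(identified))
```

Each subtraction shrinks the error signal, so neurons encoding the residual fall silent once the internal estimate accounts for all sources present.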
11.  Capacity measurement of a recurrent inhibitory neural network 
BMC Neuroscience  2011;12(Suppl 1):P196.
doi:10.1186/1471-2202-12-S1-P196
PMCID: PMC3240296
12.  Frequency-Invariant Representation of Interaural Time Differences in Mammals 
PLoS Computational Biology  2011;7(3):e1002013.
Interaural time differences (ITDs) are the major cue for localizing low-frequency sounds. The activity of neuronal populations in the brainstem encodes ITDs with exquisite temporal acuity. The response of single neurons, however, also changes with other stimulus properties, such as the spectral composition of the sound. The influence of stimulus frequency varies widely across neurons, and it is thus unclear how ITDs are encoded independently of stimulus frequency by populations of neurons. Here we fitted a statistical model to single-cell rate responses of the dorsal nucleus of the lateral lemniscus. The model was used to evaluate the impact of single-cell response characteristics on the frequency-invariant mutual information between rate response and ITD. We found a rough correspondence between the measured cell characteristics and those predicted by computing mutual information. Furthermore, we studied two readout mechanisms, a linear classifier and a two-channel rate-difference decoder. The latter turned out to be better suited to decode the population patterns obtained from the fitted model.
Author Summary
Neuronal codes are usually studied by estimating how much information the brain activity carries about the stimulus. On a single cell level, the relevant features of neuronal activity such as the firing rate or spike timing are readily available. On a population level, where many neurons together encode a stimulus property, finding the most appropriate activity features is less obvious, particularly because the neurons respond with a huge cell-to-cell variability. Here, using the example of the neuronal representation of interaural time differences, we show that the quality of the population code strongly depends on the assumption — or the model — of the population readout. We argue that invariances are useful constraints to identify “good” population codes. Based on these ideas, we suggest that the representation of interaural time differences serves a two-channel code in which the difference between the summed activities of the neurons in the two hemispheres exhibits an invariant and linear dependence on interaural time difference.
doi:10.1371/journal.pcbi.1002013
PMCID: PMC3060160  PMID: 21445227
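A sketch of the two-channel readout proposed in the summary above, assuming sigmoidal ITD tuning with contralaterally shifted best ITDs that mirror across hemispheres. The tuning shape, slope, and best-ITD range are assumptions for illustration, not the fitted model.

```python
import numpy as np

def neuron_rate(itd_us, best_itd_us, slope=0.004):
    """Sigmoidal ITD tuning of one neuron (normalized rate in [0, 1])."""
    return 1.0 / (1.0 + np.exp(-slope * (itd_us - best_itd_us)))

rng = np.random.default_rng(5)
best_itds = rng.uniform(100.0, 300.0, 30)   # contralaterally shifted best ITDs

def two_channel_readout(itd_us):
    """Summed hemispheric rates; the right channel mirrors the left."""
    left = np.sum([neuron_rate(itd_us, b) for b in best_itds])
    right = np.sum([neuron_rate(-itd_us, b) for b in best_itds])
    return left - right                      # summed-rate difference

itds = np.linspace(-300.0, 300.0, 13)        # microseconds
diffs = np.array([two_channel_readout(i) for i in itds])
print("difference signal monotonic in ITD:", bool(np.all(np.diff(diffs) > 0)))
```

Single-cell frequency dependence largely cancels in the summed difference, which is the sense in which this readout yields an invariant, approximately linear ITD representation.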
13.  Single-trial phase precession in the hippocampus 
During the crossing of the place field of a pyramidal cell in the rat hippocampus, the firing phase of the cell decreases with respect to the local theta rhythm. This phase precession is usually studied on the basis of data in which many place field traversals are pooled together. Here we study properties of phase precession in single trials. We found that single-trial and pooled-trial phase precession were different with respect to phase-position correlation, phase-time correlation, and phase range. While pooled-trial phase precession may span 360°, the most frequent single-trial phase range was only around 180°. In pooled trials, the correlation between phase and position (r = −0.58) was stronger than the correlation between phase and time (r = −0.27), whereas in single trials these correlations (r = −0.61 for both) were not significantly different. Next, we demonstrated that phase precession exhibited a large trial-to-trial variability. Overall, only a small fraction of the trial-to-trial variability in measures of phase precession (e.g. slope or offset) could be explained by other single-trial properties (such as running speed or firing rate), while the larger part of the variability remains to be explained. Finally, we found that surrogate single trials, created by randomly drawing spikes from the pooled data, are not equivalent to experimental single trials: pooling over trials therefore changes basic measures of phase precession. These findings indicate that single trials may be better suited for encoding temporally structured events than is suggested by the pooled data.
doi:10.1523/JNEUROSCI.2270-09.2009
PMCID: PMC2830422  PMID: 19846711
Place cells; Hippocampus; CA1; Phase shift; Theta rhythm; Temporal coding; Spatial; Memory
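The pooling effect described above can be illustrated with synthetic trials: each trial precesses over roughly 180° starting from a trial-specific entry phase, and pooling across trials inflates the apparent phase range toward 360°. All numbers here are illustrative stand-ins, not the measured statistics.

```python
import numpy as np

rng = np.random.default_rng(6)
n_trials, spikes_per_trial = 20, 12

pooled_phases, single_ranges = [], []
for _ in range(n_trials):
    pos = np.sort(rng.uniform(0.0, 1.0, spikes_per_trial))  # positions in field
    offset = rng.uniform(180.0, 360.0)       # trial-specific entry phase (deg)
    phases = offset - 180.0 * pos + rng.normal(0.0, 15.0, spikes_per_trial)
    single_ranges.append(phases.max() - phases.min())
    pooled_phases.extend(phases)

pooled_range = max(pooled_phases) - min(pooled_phases)
print("mean single-trial phase range (deg):", np.mean(single_ranges))
print("pooled phase range (deg):", pooled_range)
```

The variable entry phase is exactly the kind of trial-to-trial variability that pooling conflates with within-trial precession, which is why surrogate trials drawn from pooled data differ from real single trials.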
14.  Glycinergic inhibition tunes coincidence detection in the auditory brainstem 
Nature Communications  2014;5:3790.
Neurons in the medial superior olive (MSO) detect microsecond differences in the arrival time of sounds between the ears (interaural time differences or ITDs), a crucial binaural cue for sound localization. Synaptic inhibition has been implicated in tuning ITD sensitivity, but the cellular mechanisms underlying its influence on coincidence detection are debated. Here we determine the impact of inhibition on coincidence detection in adult Mongolian gerbil MSO brain slices by testing precise temporal integration of measured synaptic responses using conductance-clamp. We find that inhibition dynamically shifts the peak timing of excitation, depending on its relative arrival time, which in turn modulates the timing of best coincidence detection. Inhibitory control of coincidence detection timing is consistent with the diversity of ITD functions observed in vivo and is robust under physiologically relevant conditions. Our results provide strong evidence that temporal interactions between excitation and inhibition on microsecond timescales are critical for binaural processing.
Coincidence detector neurons in the mammalian brainstem encode interaural time differences (ITDs), a crucial cue for sound localization. Using conductance-clamp recordings in brainstem slices, Myoga et al. find that inhibition shapes coincidence detection, but more dynamically than previously thought.
doi:10.1038/ncomms4790
PMCID: PMC4024823  PMID: 24804642
