The role of amyloid beta (Aβ) in brain function and in the pathogenesis of Alzheimer's disease (AD) remains elusive. Recent publications reported that an increase in Aβ concentration perturbs pre-synaptic release in hippocampal neurons. In particular, it was shown in vitro that Aβ is an endogenous regulator of synaptic transmission at the CA3-CA1 synapse, enhancing its release probability. How this synaptic modulator influences neuronal output during physiological stimulation patterns, such as those elicited in vivo, is still unknown. Using a realistic model of hippocampal CA1 pyramidal neurons, we first implemented this Aβ-induced enhancement of release probability and validated the model by reproducing the experimental findings. We then demonstrated that this synaptic modification can significantly alter synaptic integration properties in a wide range of physiologically relevant input frequencies (from 5 to 200 Hz). Finally, we used natural input patterns, obtained from CA3 pyramidal neurons in vivo during free exploration of rats in an open field, to investigate the effects of enhanced Aβ on synaptic release under physiological conditions. The model shows that the CA1 neuronal response to these natural patterns is altered in the increased-Aβ condition, especially for frequencies in the theta and gamma ranges. These results suggest that the perturbation of release probability induced by increased Aβ can significantly alter the spike probability of CA1 pyramidal neurons and thus contribute to abnormal hippocampal function during AD.
amyloid-beta; hippocampus; computational modeling; release probability; neuronal output
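The release-probability manipulation described in this abstract can be sketched with a generic short-term plasticity model. The code below is a minimal illustration, not the authors' detailed CA1 model: it uses one common variant of the Tsodyks-Markram synapse with hypothetical parameter values, and represents the Aβ effect simply as an increase in the baseline utilization (release probability) `U`.

```python
import numpy as np

def tm_synapse_train(U, freq_hz, n_spikes, tau_rec=0.2, tau_fac=0.05):
    """One variant of the Tsodyks-Markram short-term plasticity model.

    U        : baseline release probability (utilization)
    freq_hz  : presynaptic spike rate
    Returns the fraction of resources released at each spike in the train.
    """
    dt = 1.0 / freq_hz          # inter-spike interval (s)
    x, u = 1.0, U               # available resources, current utilization
    released = []
    for _ in range(n_spikes):
        released.append(u * x)  # amount released by this spike
        x -= u * x              # resource depletion
        u += U * (1.0 - u)      # facilitation increment
        # exponential recovery between spikes
        x = 1.0 + (x - 1.0) * np.exp(-dt / tau_rec)
        u = U + (u - U) * np.exp(-dt / tau_fac)
    return np.array(released)

control = tm_synapse_train(U=0.3, freq_hz=20, n_spikes=10)
high_ab = tm_synapse_train(U=0.5, freq_hz=20, n_spikes=10)  # Abeta-enhanced p_r
```

With the higher `U`, the first response is larger but the train depresses more strongly, which is the qualitative signature of an increased release probability during repetitive stimulation.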
The work introduces a linear neural population model that allows the power spectrum to be derived analytically as a function of the concentration of the anesthetic propofol. The analytical study of the power spectrum of the system's activity yields conditions on how the frequency of maximum power in experimental electroencephalographic (EEG) recordings changes with the propofol concentration. In this context, we explain the anesthetic-induced power increase in neural activity by an oscillatory instability and derive conditions under which the power peak shifts to larger frequencies, as observed experimentally in EEG. Moreover, the work predicts that the power increase only occurs while the frequency of maximum power increases. Numerical simulations of the system's activity complement the analytical results.
general anesthesia; propofol; neural fields; power spectrum; EEG
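The logic of relating a spectral peak to a stability parameter can be shown with the simplest possible linear system. The snippet below is an illustrative stand-in, not the neural population model of the study, and the mapping from propofol concentration to the damping and resonance parameters is purely hypothetical.

```python
import numpy as np

def power_spectrum(freqs_hz, gamma, omega0):
    """|H(i*w)|^2 of a noise-driven damped oscillator
    x'' + 2*gamma*x' + omega0^2 * x = noise."""
    w = 2 * np.pi * freqs_hz
    return 1.0 / ((omega0**2 - w**2) ** 2 + (2 * gamma * w) ** 2)

freqs = np.linspace(0.1, 30, 1000)
# Toy stand-in for increasing propofol concentration: assume the drug
# lowers the effective damping gamma and slightly raises the resonance.
low_dose  = power_spectrum(freqs, gamma=40.0, omega0=2 * np.pi * 10)
high_dose = power_spectrum(freqs, gamma=15.0, omega0=2 * np.pi * 12)

peak_low  = freqs[np.argmax(low_dose)]    # frequency of maximum power
peak_high = freqs[np.argmax(high_dose)]
```

Reducing the damping moves the system toward an oscillatory instability, which simultaneously raises the spectral peak and shifts its frequency, mirroring the joint power-and-frequency increase derived analytically in the abstract.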
Hippocampal sharp wave-ripple complexes (SWRs) involve the synchronous discharge of thousands of cells throughout the CA3-CA1-subiculum-entorhinal cortex axis. Their strong transient output affects cortical targets, rendering SWRs a possible means for memory transfer from the hippocampus to the neocortex for long-term storage. Neurophysiological observations of hippocampal activity modulation by the cortical slow oscillation (SO) during deep sleep and anesthesia, and correlations between ripples and UP states, support the role of SWRs in memory consolidation through a cortico-hippocampal feedback loop. We couple a cortical network exhibiting SO with a hippocampal CA3-CA1 computational network model exhibiting SWRs, in order to model such cortico-hippocampal correlations and uncover important parameters and coupling mechanisms controlling them. The cortical oscillatory output entrains the CA3 network via connections representing the mossy fiber input, and the CA1 network via the temporoammonic pathway (TA). The spiking activity in CA3 and CA1 is shown to depend on the excitation-to-inhibition ratio, induced by combining the two hippocampal inputs, with mossy fiber input controlling the UP-state correlation of CA3 population bursts and corresponding SWRs, whereas the temporoammonic input affects the overall CA1 spiking activity. Ripple characteristics and pyramidal spiking participation in SWRs are shaped by the strength of the Schaffer collateral drive. A set of in vivo recordings from the rat hippocampus confirms a model-predicted segregation of pyramidal cells into subgroups according to the SO state where they preferentially fire and their response to SWRs. These groups can potentially play distinct functional roles in the replay of spike sequences.
hippocampus; slow oscillation; sharp waves; ripples; mossy fibers; temporoammonic pathway; correlations
Although intracerebral field potential oscillations are commonly used to study information processing during cognition and behavior, the cellular and network processes underlying such events remain unclear. The limited spatial resolution of standard single-point recordings does not clarify whether field oscillations reflect the activity of one or many afferent presynaptic populations. However, multi-site recording devices now provide high-resolution spatial profiles of local field potentials (LFPs), and when coupled with modern mathematical analyses that discriminate signals with distinct but overlapping spatial distributions, they open the door to a better understanding of these potentials. Here we review recent insights that help disentangle certain pathway-specific activities. Accordingly, some oscillatory patterns can now be viewed as a periodic succession of synchronous synaptic currents that reflect the time envelope of spiking activity in given presynaptic populations. These analyses modify our concept of brain rhythms as abstract entities, molding them into mechanistic representations of network activity and allowing us to work in the time domain, reducing the loss of information inherent in data-chopping frequency treatments.
local field potentials; gamma oscillations; spatial discrimination; independent component analysis; spontaneous activity
We investigate the dynamical properties of an associative memory network consisting of stochastic neurons and dynamic synapses that show short-term depression and facilitation. In the stochastic neuron model used in this study, the efficacy of the synaptic transmission changes according to the short-term depression or facilitation mechanism. We derive a macroscopic mean field model that captures the overall dynamical properties of the stochastic model. We analyze the stability and bifurcation structure of the mean field model, and show the dependence of the memory retrieval performance on the noise intensity and parameters that determine the properties of the dynamic synapses, i.e., time constants for depressing and facilitating processes. The associative memory network exhibits a variety of dynamical states, including the memory and pseudo-memory states, as well as oscillatory states among memory patterns. This study provides comprehensive insight into the dynamical properties of the associative memory network with dynamic synapses.
dynamic synapse; short-term plasticity; neural network; associative memory network; mean field model; bifurcation analysis
We do not claim that the brain is completely deterministic, and we agree that noise may be beneficial in some cases. But we suggest that neuronal variability may often be overestimated, due to uncontrolled internal variables and/or the use of inappropriate reference times. These ideas are not new, but they should be re-examined in the light of recent experimental findings: trial-to-trial variability is often correlated across neurons and across trials, greater for higher-order neurons, and reduced by attention, suggesting that “intrinsic” sources of noise can account for only a minimal part of it. While it is obviously difficult to control for all internal variables, the problem of reference time can be largely avoided by recording multiple neurons at the same time and looking at statistical structure in relative latencies. These relative latencies have another major advantage: they are insensitive to the variability that is shared across neurons, which is often a significant part of the total variability. Thus, we suggest that signal-to-noise ratios in the brain may be much higher than usually thought, leading to systems that are reactive, economical in their number of neurons, and energy efficient.
neural variability; signal-to-noise ratio; reliability; redundancy; neural coding
This study describes a spiking model that self-organizes for stable formation and maintenance of orientation and ocular dominance maps in the visual cortex (V1). This self-organization process simulates three development phases: an early experience-independent phase, a late experience-independent phase, and a subsequent refinement phase during which experience acts to shape the map properties. The ocular dominance maps that emerge accommodate the two sets of monocular inputs that arise from the lateral geniculate nucleus (LGN) to layer 4 of V1. The orientation selectivity maps that emerge feature well-developed iso-orientation domains and fractures. During the last two phases of development, the orientation preferences at some locations appear to rotate continuously through ±180° along circular paths; these are referred to as pinwheel-like patterns, although without any corresponding point discontinuities in the orientation gradient maps. The formation of these functional maps is driven by balanced excitatory and inhibitory currents that are established via synaptic plasticity based on spike timing for both excitatory and inhibitory synapses. The stability and maintenance of the formed maps with continuous synaptic plasticity is enabled by homeostasis caused by inhibitory plasticity. However, a prolonged exposure to repeated stimuli does alter the formed maps over time due to plasticity. The results from this study suggest that continuous synaptic plasticity in both excitatory neurons and interneurons could play a critical role in the formation, stability, and maintenance of functional maps in the cortex.
spiking networks; STDP; learning; functional maps; orientation selectivity; ocular dominance; stability
Neural mass signals from in-vivo recordings often show oscillations with frequencies ranging from <1 to 100 Hz. Fast rhythmic activity in the beta and gamma range can be generated by network-based mechanisms such as recurrent synaptic excitation-inhibition loops. Slower oscillations might instead depend on neuronal adaptation currents whose timescales range from tens of milliseconds to seconds. Here we investigate how the dynamics of such adaptation currents contribute to spike rate oscillations and resonance properties in recurrent networks of excitatory and inhibitory neurons. Based on a network of sparsely coupled spiking model neurons with two types of adaptation current and conductance-based synapses with heterogeneous strengths and delays, we use a mean-field approach to analyze oscillatory network activity. For constant external input, we find that spike-triggered adaptation currents provide a mechanism to generate slow oscillations over a wide range of adaptation timescales as long as recurrent synaptic excitation is sufficiently strong. Faster rhythms occur when recurrent inhibition is slower than excitation, and their frequency increases with the strength of inhibition. Adaptation facilitates such network-based oscillations for fast synaptic inhibition and leads to decreased frequencies. For oscillatory external input, adaptation currents amplify a narrow band of frequencies and cause phase advances for low frequencies in addition to phase delays at higher frequencies. Our results therefore identify the different key roles of neuronal adaptation dynamics for rhythmogenesis and selective signal propagation in recurrent networks.
spike frequency adaptation; adaptation; oscillations; rate models; network dynamics; Fokker–Planck; mean-field; recurrent network
In intermittent control, instead of continuously calculating the control signal, the controller occasionally changes this signal at certain sparse points in time. The control law may include feedback, adaptation, optimization, or any other control strategy. When, where, and how does the brain employ intermittency as it controls movement? These are open questions in motor neuroscience. Evidence for intermittency in human motor control has been repeatedly observed in the neural control of movement literature. Moreover, some researchers have provided theoretical models to address intermittency. Even so, the vast majority of current models, and, I would dare to say, the dogma in most of the current motor neuroscience literature, involve continuous control. In this paper, I focus on an area in which intermittent control has not yet been thoroughly considered: the structure of muscle synergies. A synergy in muscle space is a group of muscles activated together by a single neural command. Under the assumption that motor control is intermittent, I present the minimum transition hypothesis (MTH) and its predictions with regard to the structure of muscle synergies. The MTH asserts that the purpose of synergies is to minimize the effort of the higher level in the hierarchy by minimizing the number of transitions in an intermittent control signal. The implications of the MTH concern not only the structure of the muscle synergies but also the intermittent and hierarchical nature of the motor system, with various predictions as to the process of skill learning, and important implications for the design of brain-machine interfaces and human-robot interaction.
muscle synergies; motor control; intermittent control; spinal cord; blind source separation
A long-standing hypothesis in the neuroscience community is that the central nervous system (CNS) generates the muscle activities to accomplish movements by combining a relatively small number of stereotyped patterns of muscle activations, often referred to as “muscle synergies.” Different definitions of synergies have been given in the literature. The most well-known are those of synchronous, time-varying, and temporal muscle synergies. Each of them is based on a different mathematical model used to factor EMG array recordings, collected during the execution of a variety of motor tasks, into a well-determined spatial, temporal, or spatio-temporal organization. This plurality of definitions and their separate application to complex tasks have so far complicated the comparison and interpretation of results across studies, and it has remained unclear why and when one synergistic decomposition should be preferred to another. By using well-understood motor tasks such as elbow flexions and extensions, we aimed in this study to clarify which motor features are characterized by each kind of decomposition and to assess whether, when, and why one of them should be preferred to the others. We found that three temporal synergies, each accounting for specific temporal phases of the movements, could account for the majority of the data variation. Similar performance could be achieved by two synchronous synergies, encoding the agonist-antagonist nature of the two muscles considered, and by two time-varying muscle synergies, each encoding a task-related feature of the elbow movements, specifically their direction. Our findings support the notion that each EMG decomposition provides a set of well-interpretable muscle synergies, identifying dimensionality reduction in different aspects of the movements.
Taken together, our findings suggest that not all decompositions are equivalent, and that they may imply different neurophysiological substrates for their implementation.
muscle synergies; non-negative matrix factorization; EMG; elbow rotations; dimensionality reduction; triphasic pattern
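Synchronous synergies of the kind compared in this study are typically extracted with non-negative matrix factorization (NMF), as the keyword list indicates. The following is a minimal, self-contained sketch using Lee-Seung multiplicative updates on synthetic data; the muscle count, waveforms, and noise level are arbitrary assumptions, not the study's recordings.

```python
import numpy as np

def nmf(V, k, n_iter=300, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H (Frobenius norm),
    keeping both factors non-negative throughout."""
    rng = np.random.default_rng(seed)
    n, t = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, t)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Synthetic "EMG": 8 muscles x 200 samples built from 2 synchronous synergies
rng = np.random.default_rng(1)
W_true = rng.random((8, 2))                  # muscle weights per synergy
H_true = np.abs(np.sin(np.linspace(0, 3 * np.pi, 200))) * rng.random((2, 200))
emg = W_true @ H_true + 0.01 * rng.random((8, 200))

W, H = nmf(emg, k=2)                         # extracted weights and activations
vaf = 1.0 - np.sum((emg - W @ H) ** 2) / np.sum(emg ** 2)  # variance accounted for
```

On this rank-2 synthetic dataset, two extracted synergies account for nearly all of the signal variance, which is exactly the VAF-style criterion used to choose the number of synergies.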
Muscle synergies, i.e., invariant coordinated activations of groups of muscles, have been proposed as building blocks that the central nervous system (CNS) uses to construct the patterns of muscle activity utilized for executing movements. Several efficient dimensionality reduction algorithms that extract putative synergies from electromyographic (EMG) signals have been developed. Typically, the quality of synergy decompositions is assessed by computing the Variance Accounted For (VAF). Yet, little is known about the extent to which the combination of those synergies encodes task-discriminating variations of muscle activity in individual trials. To address this question, here we conceive and develop a novel computational framework to evaluate muscle synergy decompositions in task space. Unlike previous methods, which consider the total variance of muscle patterns (VAF-based metrics), our approach focuses on the variance that discriminates between the execution of different tasks. The procedure is based on single-trial task decoding from muscle synergy activation features. The task-decoding metric quantitatively evaluates the mapping between synergy recruitment and task identification and automatically determines the minimal number of synergies that captures all the task-discriminating variability in the synergy activations. In this paper, we first validate the method on simulated EMG datasets. We then show that it can be applied to different types of muscle synergy decomposition and illustrate its applicability to real data by using it for the analysis of EMG recordings during an arm pointing task. We find that time-varying and synchronous synergies with a similar number of parameters are equally efficient in task decoding, suggesting that in this experimental paradigm they are equally valid representations of muscle synergies. Overall, these findings stress the effectiveness of the decoding metric in systematically assessing muscle synergy decompositions in task space.
muscle synergies; reaching; arm movement; task decoding; single-trial analysis
Recently there has been growing interest in the modular organization of leg movements, in particular those related to locomotion. One of the basic modules involves the flexion of the leg during swing, and it was shown that this module is already present in neonates (Dominici et al., 2011). In this paper, we examine how these findings build upon the original work by Sherrington, who proposed that the flexor reflex is the basic building block of flexion during the swing phase. Similarly, the relation between the flexor reflex and the withdrawal reflex modules of Schouenborg and Weng (1994) will be discussed. It will be argued that there is a large overlap between these notions of modules and the older concepts of reflexes. In addition, it will be shown that there is great flexibility in the expression of some of these modules during gait, thereby allowing for a phase-dependent modulation of the appropriate responses. In particular, the end of the stance phase is a period when the flexor synergy is facilitated. It is proposed that this is linked to the activation of the circuitry responsible for the generation of locomotor patterns (the central pattern generator, CPG). More specifically, it is suggested that the responses in that period relate to the activation of a flexor burst generator. The latter structure forms the core of a new asymmetric model of the CPG. This activation is controlled by afferent input (facilitation by a broad range of afferents, suppression by load afferent input). Meanwhile, many of these physiological features have found their way into the control of very flexible bipedal walking robots.
flexion reflex; local sign; reflex modules; synergy; central pattern generator; gait; forward model
Dopamine neurons of the substantia nigra pars compacta (SNc) are uniquely sensitive to degeneration in Parkinson's disease (PD) and its models. Although a variety of molecular characteristics have been proposed to underlie this sensitivity, one possible contributory factor is their massive, unmyelinated axonal arbor that is orders of magnitude larger than other neuronal types. We suggest that this puts them under such a high energy demand that any stressor that perturbs energy production leads to energy demand exceeding supply and subsequent cell death. One prediction of this hypothesis is that those dopamine neurons that are selectively vulnerable in PD will have a higher energy cost than those that are less vulnerable. We show here, through the use of a biology-based computational model of the axons of individual dopamine neurons, that the energy cost of action potential propagation and recovery of the membrane potential increases with the size and complexity of the axonal arbor according to a power law. Thus SNc dopamine neurons, particularly in humans, whose axons we estimate to give rise to more than 1 million synapses and to have a total length exceeding 4 m, are at a distinct disadvantage with respect to energy balance, which may be a factor in their selective vulnerability in PD.
dopamine; energy metabolism; neurodegeneration; Parkinson's disease; axons; unmyelinated; nigrostriatal pathway
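A power-law relation between arbor size and energy cost, as reported here, is usually quantified with a linear fit in log-log coordinates. The snippet below illustrates the procedure on synthetic numbers; the arbor lengths, costs, and exponent are invented for the example and are not values from the model.

```python
import numpy as np

# Hypothetical arbor lengths and per-neuron energy costs, generated from an
# assumed power law E = a * L^b with multiplicative noise.
rng = np.random.default_rng(2)
length_m = np.logspace(-2, 0.6, 30)       # total axon length, ~0.01 to 4 m
b_true, a_true = 1.2, 5.0e4               # illustrative exponent and prefactor
energy = a_true * length_m ** b_true * rng.lognormal(0.0, 0.05, 30)

# A power law is linear in log-log coordinates: log E = log a + b * log L,
# so the exponent is recovered as the slope of a least-squares line.
b_fit, log_a_fit = np.polyfit(np.log(length_m), np.log(energy), 1)
```

The recovered slope `b_fit` estimates the power-law exponent; on real data one would also inspect residuals to confirm that a single power law is appropriate across the whole size range.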
A key objective in systems and cognitive neuroscience is to establish associations between behavioral measures and concurrent neuronal activity. Single-trial analysis has been proposed as a novel method for characterizing such correlates by first extracting neural components that maximally discriminate trials on a categorical variable (e.g., hard vs. easy, correct vs. incorrect), and then correlating those components with a continuous dependent variable of interest, e.g., reaction time or difficulty index. However, oftentimes in experimental design it is difficult either to define meaningful categorical variables or to record enough trials for the method to extract the discriminant components. Experiments designed to study the effects of stimulus presentation modality in working memory provide such a scenario, as will be exemplified. In this paper, we propose a new approach to single-trial analysis in which we directly extract neural activity that maximally correlates with single-trial manual response times, eliminating the need to define an arbitrary categorical variable. We demonstrate our method on real electroencephalography (EEG) recordings from a study of the stimulus presentation modality effect (SPME).
EEG; single-trial analysis; neuroimaging; correlated components; machine learning
We describe simulations of large-scale networks of excitatory and inhibitory spiking neurons that can generate dynamically stable winner-take-all (WTA) behavior. The network connectivity is a variant of center-surround architecture that we call center-annular-surround (CAS). In this architecture each neuron is excited by nearby neighbors and inhibited by more distant neighbors in an annular-surround region. The neural units of these networks simulate conductance-based spiking neurons that interact via mechanisms subject to both short-term synaptic plasticity and spike-timing-dependent plasticity (STDP). We show that such CAS networks display robust WTA behavior, unlike the center-surround networks and other control architectures that we have studied. We find that a large-scale network of spiking neurons with separate populations of excitatory and inhibitory neurons can give rise to smooth maps of sensory input. In addition, we show that a humanoid brain-based-device (BBD) under the control of a spiking WTA neural network can learn to reach to target positions in its visual field, thus demonstrating the acquisition of sensorimotor coordination.
brain-based computational model; spiking neuronal networks; winner-take-all; motor control and learning/plasticity; spike-timing-dependent plasticity; sensorimotor control; large-scale spiking neural networks; neurorobotics
We report that multi-stable perception operates in a consistent, dynamical regime, balancing the conflicting goals of stability and sensitivity. When a multi-stable visual display is viewed continuously, its phenomenal appearance reverses spontaneously at irregular intervals. We characterized the perceptual dynamics of individual observers in terms of four statistical measures: the distribution of dominance times (mean and variance) and the novel, subtle dependence on prior history (correlation and time-constant). The dynamics of multi-stable perception is known to reflect several stabilizing and destabilizing factors. Phenomenologically, its main aspects are captured by a simplistic computational model with competition, adaptation, and noise. We identified small parameter volumes (~3% of the possible volume) in which the model reproduced both dominance distribution and history-dependence of each observer. For 21 of 24 data sets, the identified volumes clustered tightly (~15% of the possible volume), revealing a consistent “operating regime” of multi-stable perception. The “operating regime” turned out to be marginally stable or, equivalently, near the brink of an oscillatory instability. The chance probability of the observed clustering was <0.02. To understand the functional significance of this empirical “operating regime,” we compared it to the theoretical “sweet spot” of the model. We computed this “sweet spot” as the intersection of the parameter volumes in which the model produced stable perceptual outcomes and in which it was sensitive to input modulations. Remarkably, the empirical “operating regime” proved to be largely coextensive with the theoretical “sweet spot.” This demonstrated that perceptual dynamics was not merely consistent but also functionally optimized (in that it balances stability with sensitivity). 
Our results imply that multi-stable perception is not a laboratory curiosity, but reflects a functional optimization of perceptual dynamics for visual inference.
multi-stability; binocular rivalry; adaptation; model; exploitation-exploration dilemma
The observation that the activity of multiple muscles can be well approximated by a few linear synergies is viewed by some as a sign that such low-dimensional modules constitute a key component of the neural control system. Here, we argue that the usefulness of muscle synergies as a control principle should be evaluated in terms of errors produced not only in muscle space, but also in task space. We used data from a two-dimensional force-aiming task at the wrist, together with an electromyogram (EMG)-driven virtual biomechanics technique that overcomes typical errors in predicting force from recorded EMG, to illustrate through simulation how synergy decomposition inevitably introduces substantial task-space errors. Then, we computed the optimal pattern of muscle activation that minimizes summed-squared muscle activities, and demonstrated that synergy decomposition produced similar results on real and simulated data. We further assessed the influence of synergy decomposition on aiming errors (AEs) in a more redundant system, using the optimal muscle pattern computed for the elbow-joint complex (i.e., 13 muscles acting in two dimensions). Because EMG records are typically not available from all contributing muscles, we also explored reconstructions from incomplete sets of muscles. The redundancy of a given set of muscles had opposite effects on the goodness of muscle reconstruction and on task achievement: higher redundancy is associated with better EMG approximation (lower residuals), but with higher AEs. Finally, we showed that the number of synergies required to approximate the optimal muscle pattern for an arbitrary biomechanical system increases with task-space dimensionality, which indicates that the capacity of synergy decomposition to explain behavior depends critically on the scope of the original database.
These results have implications regarding the viability of muscle synergies as a putative neural control mechanism, and also as a control algorithm for restoring movements.
aiming movement; muscle coordination; motor control; biomechanics; optimal control
Local field potentials (LFP) reflect the properties of neuronal circuits or columns recorded in a volume around a microelectrode (Buzsáki et al., 2012). The extent of this integration volume has been a subject of some debate, with estimates ranging from a few hundred microns (Katzner et al., 2009; Xing et al., 2009) to several millimeters (Kreiman et al., 2006). We estimated receptive fields (RFs) of multi-unit activity (MUA) and LFPs at an intermediate level of visual processing, in area V4 of two macaques. The spatial structure of LFP receptive fields varied greatly as a function of time lag following stimulus onset, with the retinotopy of LFPs matching that of MUAs at a restricted set of time lags. A model-based analysis of the LFPs allowed us to recover two distinct stimulus-triggered components: an MUA-like retinotopic component that originated in a small volume around the microelectrodes (~350 μm), and a second component that was shared across the entire V4 region; this second component had tuning properties unrelated to those of the MUAs. Our results suggest that the LFP reflects neural activity across multiple spatial scales, which both complicates its interpretation and offers new opportunities for investigating the large-scale structure of network processing.
receptive field; multiunit activity; local field potentials; V4; visual cortex
Computational models at different space-time scales allow us to understand the fundamental mechanisms that govern neural processes and to relate these processes uniquely to neuroscience data. In this work, we propose a novel neurocomputational unit of bursters (a mesoscopic model that describes the interaction between local cortical nodes in a large-scale neural mass model) that qualitatively captures the complex dynamics exhibited by a full network of parabolic bursting neurons. We observe that the temporal dynamics and fluctuations of the mean synaptic action term exhibit a high degree of correlation with the spike/burst activity of our population. With heterogeneity in the applied drive and a mean synaptic coupling derived from fast excitatory synapse approximations, we observe long-term behavior in our population dynamics such as partial oscillations, incoherence, and synchrony. In order to understand the origin of multistability at the population level as a function of mean synaptic coupling and heterogeneity in the firing rate threshold, we employ a simple generative model for parabolic bursting recently proposed by Ghosh et al. (2009). Further, we use a mean coupling formulated for fast spiking neurons in our analysis of the generic model. Stability analysis of this mean-field network allows us to identify all the relevant network states found in the detailed biophysical model. We derive analytically several boundary solutions, a result which holds for any number of spikes per burst. These findings illustrate the role of oscillations occurring at slow time scales (bursts) in the global behavior of the network.
multispikes; self-organization; transients; firing rate; parabolic burst; network synchrony; generative model; oscillations
Upon sensory stimulation, primary cortical areas readily engage in narrow-band rhythmic activity between 30 and 90 Hz, the so-called gamma oscillations. Here we show that, when embedded in a balanced network, type I excitable neurons entrained to the collective rhythm show a discontinuity in their firing rates between a slow and a fast spiking mode. This jump in spiking frequency is characteristic of type II neurons, but is not present in the frequency-current (f-I) curve of isolated type I neurons. Therefore, this rate bimodality arises as an emergent network property in type I population models. We studied the mechanisms underlying the generation of these two firing modes, in order to reproduce the spiking activity of in vivo cortical recordings, which is known to be highly irregular and sparse. We also analyzed the relation between afferent inputs and single-unit activity, and between the latter and the local field potential (LFP) phase, in order to establish how the collective dynamics modulates the spiking activity of individual neurons. Our results reveal that the inhibitory-excitatory balance allows two encoding mechanisms, for input rate variations and for LFP phase, to coexist within the network.
gamma oscillations; local field potential; bimodal; coding; bursting
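The contrast with isolated type I dynamics can be made concrete with the canonical quadratic integrate-and-fire neuron, whose analytical f-I curve rises continuously from zero at threshold, with no jump. The model and parameters below are a standard textbook choice used for illustration, not the specific network model of the study.

```python
import numpy as np

def qif_rate(I, tau=1.0):
    """Analytical f-I curve of a quadratic integrate-and-fire neuron.

    dV/dt = (V**2 + I) / tau, spike at V -> +inf, reset to -inf.
    One interspike interval lasts tau * pi / sqrt(I) for I > 0, so the
    firing rate f = sqrt(I) / (pi * tau) rises continuously from zero --
    the type I signature, with no bimodality in the isolated cell.
    """
    I = np.asarray(I, dtype=float)
    return np.where(I > 0, np.sqrt(np.maximum(I, 0.0)) / (np.pi * tau), 0.0)
```

Plotting `qif_rate` over a range of currents shows a smooth, monotone onset; the firing-rate discontinuity described in the abstract is absent at the single-cell level and only emerges in the balanced network.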
We describe a model for cortical development that resolves long-standing difficulties of earlier models. It is proposed that, during embryonic development, synchronous firing of neurons and their competition for limited metabolic resources leads to selection of an array of neurons with ultra-small-world characteristics. Consequently, in the visual cortex, macrocolumns linked by superficial patchy connections emerge in anatomically realistic patterns, with an ante-natal arrangement which projects signals from the surrounding cortex onto each macrocolumn in a form analogous to the projection of a Euclidean plane onto a Möbius strip. This configuration reproduces typical cortical response maps, and simulations of signal flow explain cortical responses to moving lines as functions of stimulus velocity, length, and orientation. With the introduction of direct visual inputs, under the operation of Hebbian learning, development of mature selective response “tuning” to stimuli of given orientation, spatial frequency, and temporal frequency would then take place, overwriting the earlier ante-natal configuration. The model is provisionally extended to hierarchical interactions of the visual cortex with higher centers, and a general principle for cortical processing of spatio-temporal images is sketched.
synchronous oscillation; cortical development; synaptic organization; cortical response properties; cortical information flow
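The Hebbian development of selective response tuning mentioned above can be sketched with Oja's normalized Hebb rule on synthetic inputs whose variance is dominated by one "preferred" direction; the weight vector converges toward that direction. This is a generic stand-in under our own assumptions (input statistics, learning rate), not the model's actual learning scheme.

```python
import numpy as np

def oja_tuning(n_steps=5000, dim=8, eta=0.01, seed=1):
    """Oja-rule Hebbian learning: the weight vector of a linear unit
    aligns with the dominant direction of its input statistics,
    an illustrative caricature of selective response tuning."""
    rng = np.random.default_rng(seed)
    pref = np.zeros(dim)
    pref[0] = 1.0                               # dominant input direction
    w = rng.normal(scale=0.1, size=dim)
    for _ in range(n_steps):
        # input: strong component along pref plus weak isotropic noise
        x = rng.normal() * 3.0 * pref + rng.normal(scale=0.3, size=dim)
        y = w @ x                               # linear unit response
        w += eta * y * (x - y * w)              # Oja's normalized Hebb rule
    return w
```

The subtractive `y * w` term keeps the weight norm bounded near one, so learning "overwrites" the initial random configuration without runaway growth, loosely analogous to the overwriting of the ante-natal configuration described in the abstract.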
Drosophila larvae crawl by peristaltic waves of muscle contractions, which propagate along the animal body and involve the simultaneous contraction of the left and right side of each segment. Coordinated propagation of contraction does not require sensory input, suggesting that movement is generated by a central pattern generator (CPG). We characterized crawling behavior of newly hatched Drosophila larvae by quantifying the timing and duration of segmental boundary contractions. We developed a CPG network model that recapitulates these patterns based on segmentally repeated units of excitatory and inhibitory (EI) neuronal populations coupled to their immediately neighboring segments. A single network with symmetric coupling between neighboring segments succeeded in generating both forward and backward propagation of activity. The CPG network was robust to changes in the amplitude and variability of connectivity strength. Introducing sensory feedback via "stretch-sensitive" neurons improved wave propagation properties, such as propagation speed and segmental contraction duration, matching experimental observations. Sensory feedback also restored propagating activity patterns when an inappropriately tuned CPG network failed to generate waves. Finally, in a two-sided CPG model we demonstrated that two types of connectivity could synchronize the activity of two independent networks: connections from excitatory neurons on one side to excitatory contralateral neurons (E to E), and connections from inhibitory neurons on one side to excitatory contralateral neurons (I to E). To our knowledge, such I to E connectivity has not yet been found in any experimental system; however, it provides the most robust mechanism to synchronize activity between contralateral CPGs in our model. Our model provides a general framework for studying the conditions under which a single locally coupled network generates bilaterally synchronized and longitudinally propagating waves in either direction.
Drosophila; CPG; sensory feedback; synchronous; wave propagation
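The segmentally repeated EI architecture can be sketched as a chain of Wilson-Cowan-style rate units, each excitatory population coupled to its immediate neighbors, with tonic drive to one end segment. The weights, time constants, and drive below are our own illustrative choices, not the tuned parameters of the published model.

```python
import numpy as np

def simulate_chain(n_seg=8, T=200.0, dt=0.1, seed=0):
    """Euler simulation of a chain of excitatory-inhibitory (EI) rate
    units with symmetric nearest-neighbor excitatory coupling -- a toy
    stand-in for the segmentally repeated CPG architecture."""
    rng = np.random.default_rng(seed)
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))    # sigmoidal activation
    E = rng.uniform(0.0, 0.1, n_seg)            # excitatory activity per segment
    I = np.zeros(n_seg)                         # inhibitory activity per segment
    w_ee, w_ei, w_ie, w_ii, c = 12.0, 10.0, 10.0, 2.0, 2.0
    drive = np.zeros(n_seg)
    drive[0] = 2.0                              # tonic drive to the head segment
    tau_e, tau_i = 1.0, 2.0
    trace = []
    for _ in range(int(T / dt)):
        nbr = np.zeros(n_seg)                   # symmetric neighbor coupling
        nbr[1:] += E[:-1]
        nbr[:-1] += E[1:]
        dE = (-E + sig(w_ee * E - w_ei * I + c * nbr + drive - 4.0)) / tau_e
        dI = (-I + sig(w_ie * E - w_ii * I - 4.0)) / tau_i
        E = E + dt * dE
        I = I + dt * dI
        trace.append(E.copy())
    return np.array(trace)                      # (time, segment) activity
```

Because the coupling is symmetric, the same chain supports activity entering from either end, which is the property the abstract exploits to obtain both forward and backward waves; stretch-sensitive feedback would enter as an extra input term in `dE`.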
Theta-gamma cross-frequency coupling (CFC) in the hippocampus has been reported to reflect memory processes. In this study, we measured the CFC of hippocampal local field potentials (LFPs) in a two-vessel occlusion (2VO) rat model, examining both amplitude and phase properties, and related it to short-term and long-term plasticity (STP and LTP) as indices of memory function. A 2VO model was established in male Wistar rats, and STP and LTP were recorded in the hippocampal CA3-CA1 pathway after LFPs were collected in both CA3 and CA1. The data on relative power spectra and phase synchronization suggested that both the amplitude and the phase coupling of the theta and gamma rhythms are involved in modulating the neural network in 2VO rats. To determine whether CFC is also implicated in the neural impairment of 2VO rats, the coupling of CA3 theta with CA1 gamma was measured by both phase-phase coupling (n:m phase synchronization) and phase-amplitude coupling. The attenuated CFC strength in 2VO rats implies impaired neural communication in the coordination of the theta-gamma entraining process. Moreover, a novel algorithm named cross-frequency conditional mutual information (CF-CMI) was developed and compared with the modulation index (MI), in order to focus on the coupling between the theta phase and the phase of the gamma amplitude. The results suggest that the reduced CFC strength is probably attributable to disruption of the phase of the CA1 gamma envelope. In conclusion, the phase coupling and CFC of hippocampal theta and gamma rhythms play an important role in supporting neural network function. Furthermore, synaptic plasticity in the CA3-CA1 pathway was reduced in line with the decreased CFC strength from CA3 to CA1, partly supporting our hypothesis that a directional CFC indicator might be used as a measure of synaptic plasticity.
two-vessel occlusion; cross frequency conditional mutual information (CF-CMI); synaptic plasticity; hippocampus; neural information flow (NIF)
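Of the measures named above, the phase-amplitude modulation index (MI) has a compact standard form (Tort-style): bin the gamma amplitude by theta phase and take the KL divergence of the binned distribution from uniform, normalized by log of the bin count. The sketch below implements only this MI on precomputed phase and amplitude series; it does not reproduce the CF-CMI algorithm introduced in the study.

```python
import numpy as np

def modulation_index(phase, amplitude, n_bins=18):
    """Tort-style phase-amplitude modulation index.

    phase: instantaneous phase of the slow rhythm, in (-pi, pi].
    amplitude: instantaneous amplitude envelope of the fast rhythm.
    Returns 0 for no coupling, approaching 1 for maximal coupling.
    """
    # assign each sample to one of n_bins phase bins
    bins = np.floor((phase + np.pi) / (2 * np.pi) * n_bins).astype(int)
    bins = np.clip(bins, 0, n_bins - 1)
    # mean amplitude per phase bin, normalized to a distribution
    mean_amp = np.array([amplitude[bins == b].mean() for b in range(n_bins)])
    p = mean_amp / mean_amp.sum()
    # KL divergence from the uniform distribution, normalized by log(n_bins)
    return np.sum(p * np.log(p * n_bins)) / np.log(n_bins)
```

In practice the phase and amplitude inputs would come from band-pass filtered LFPs via a Hilbert transform; here any synthetic phase/envelope pair can be used to check that coupled signals yield a positive MI while a flat envelope yields zero.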