Sleep, anesthesia, and coma share a number of neural features but the recovery profiles are radically different. To understand the mechanisms of reversibility of unconsciousness at the network level, we studied the conditions for gradual and abrupt transitions in conscious and anesthetized states. We hypothesized that the conditions for explosive synchronization (ES) in human brain networks would be present in the anesthetized brain just over the threshold of unconsciousness. To test this hypothesis, functional brain networks were constructed from multi-channel electroencephalogram (EEG) recordings in seven healthy subjects across conscious, unconscious, and recovery states. We analyzed four variables that are involved in facilitating ES in generic, non-biological networks: (1) correlation between node degree and frequency, (2) disassortativity (i.e., the tendency of highly-connected nodes to link with less-connected nodes, or vice versa), (3) frequency difference of coupled nodes, and (4) an inequality relationship between local and global network properties, which is referred to as the suppressive rule. We observed that the four network conditions for ES were satisfied in the unconscious state. Conditions for ES in the human brain suggest a potential mechanism for rapid recovery from the lightly-anesthetized state. This study demonstrates for the first time that the network conditions for ES, formerly shown in generic networks only, are present in empirically-derived functional brain networks. Further investigations with deep anesthesia, sleep, and coma could provide insight into the underlying causes of variability in recovery profiles of these unconscious states.
explosive synchronization; state transition; anesthesia; brain network; consciousness
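The first two network conditions listed above (degree-frequency correlation and disassortativity) can be computed directly from a connectivity matrix. A minimal sketch in Python/NumPy; the toy star graph and node frequencies below are illustrative stand-ins, not the study's EEG-derived functional networks:

```python
import numpy as np

def degree_frequency_correlation(adj, freqs):
    """Pearson correlation between node degree and node frequency
    (condition 1 for explosive synchronization)."""
    degrees = adj.sum(axis=1)
    return np.corrcoef(degrees, freqs)[0, 1]

def degree_assortativity(adj):
    """Pearson correlation of the degrees at either end of each edge;
    negative values indicate disassortativity (condition 2)."""
    degrees = adj.sum(axis=1)
    i, j = np.nonzero(np.triu(adj, k=1))        # undirected edges
    # each edge contributes both (d_i, d_j) and (d_j, d_i)
    x = np.concatenate([degrees[i], degrees[j]])
    y = np.concatenate([degrees[j], degrees[i]])
    return np.corrcoef(x, y)[0, 1]

# toy example: a star graph is disassortative (hub links only to leaves)
adj = np.zeros((5, 5))
adj[0, 1:] = adj[1:, 0] = 1.0
freqs = np.array([10.0, 4.0, 5.0, 4.5, 5.5])    # hub has the highest frequency
print(degree_frequency_correlation(adj, freqs) > 0)   # positive correlation
print(degree_assortativity(adj) < 0)                  # disassortative
```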
Research in neural information processing has provided useful approaches both to practical problems in computer science and to computational models in neuroscience. Recent developments in cognitive neuroscience present new challenges for computational and theoretical understanding, calling for neural information processing models that satisfy criteria and constraints from cognitive psychology, neuroscience, and computational efficiency. The most important of these criteria for evaluating present and future contributions to this emerging field are listed at the end of this article.
computational neuroscience; artificial intelligence; large scale computational modeling; computational cognition; cognitive neurocomputing
We study long-term habituation signatures of auditory selective attention reflected in the instantaneous phase information of auditory event-related potentials (ERPs) at four distinct stimulus levels: 60, 70, 80, and 90 dB SPL. The analysis is performed at the single-trial level. The effect of habituation can be observed as changes (jitter) in the instantaneous phase information of ERPs; in particular, the absence of habituation is correlated with consistently high phase synchronization over ERP trials. We estimate the changes in phase concentration over trials using a Bayesian approach, in which the phase is modeled as being drawn from a von Mises distribution with a concentration parameter that varies smoothly over trials. The smoothness assumption reflects the fact that habituation is a gradual process. Using the proposed Bayesian model, we differentiate between stimuli based on the relative changes and absolute values of the estimated concentration parameter.
Bayesian models; long-term habituation; instantaneous phase; event-related potentials; circular statistics
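The von Mises concentration parameter above can be estimated from observed single-trial phases. The sketch below uses a simple maximum-likelihood point estimate (the standard Best & Fisher approximation) rather than the smoothed Bayesian estimator the abstract describes; the sample sizes and kappa values are illustrative:

```python
import numpy as np

def mean_resultant_length(phases):
    """Length of the average unit vector of phase angles
    (0 = uniform phases, 1 = perfect synchronization)."""
    return np.abs(np.mean(np.exp(1j * phases)))

def vonmises_kappa(phases):
    """ML estimate of the von Mises concentration parameter via the
    piecewise Best & Fisher approximation."""
    r = mean_resultant_length(phases)
    if r < 0.53:
        return 2 * r + r**3 + 5 * r**5 / 6
    if r < 0.85:
        return -0.4 + 1.39 * r + 0.43 / (1 - r)
    return 1 / (r**3 - 4 * r**2 + 3 * r)

rng = np.random.default_rng(0)
tight = rng.vonmises(mu=0.0, kappa=8.0, size=500)    # high phase synchronization
loose = rng.vonmises(mu=0.0, kappa=0.5, size=500)    # strong phase jitter
print(vonmises_kappa(tight) > vonmises_kappa(loose))  # True
```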
Background: Studies have shown that a conventional visual brain-computer interface (BCI) based on overt attention cannot be used effectively when eye movement control is not possible. To solve this problem, a novel visual-based BCI system based on covert attention and feature attention, called the gaze-independent BCI, has been proposed. Color and shape differences between stimuli and backgrounds have generally been used in gaze-independent BCIs. Recently, a new paradigm based on facial expression changes was presented and obtained high performance. However, some facial expressions were so similar that users could not tell them apart, especially when they were presented at the same position in a rapid serial visual presentation (RSVP) paradigm. Consequently, the performance of the BCI is reduced.
New Method: In this paper, we combined facial expressions and colors to optimize the stimuli presentation in the gaze-independent BCI. This optimized paradigm was called the colored dummy face pattern. It is suggested that different colors and facial expressions could help users to locate the target and evoke larger event-related potentials (ERPs). In order to evaluate the performance of this new paradigm, two other paradigms were presented, called the gray dummy face pattern and the colored ball pattern.
Comparison with Existing Method(s): The key point in determining the value of the colored dummy face stimuli in BCI systems was whether they could achieve higher performance than gray face or colored ball stimuli. Ten healthy participants (seven male, aged 21–26 years, mean 24.5 ± 1.25) participated in our experiment. Online and offline results of four different paradigms were obtained and comparatively analyzed.
Results: The results showed that the colored dummy face pattern could evoke higher P300 and N400 ERP amplitudes, compared with the gray dummy face pattern and the colored ball pattern. Online results showed that the colored dummy face pattern had a significant advantage in terms of classification accuracy (p < 0.05) and information transfer rate (p < 0.05) compared to the other two patterns.
Conclusions: The stimuli used in the colored dummy face paradigm combined color and facial expressions. Compared with the colored ball and gray dummy face stimuli, this combination had a significant advantage in terms of the evoked P300 and N400 amplitudes and resulted in higher classification accuracies and information transfer rates.
event-related potentials; brain-computer interface (BCI); dummy face; fusion stimuli; gaze-independent; facial expression
Cortical spreading depression (CSD), a depolarization wave which originates in the visual cortex and travels toward the frontal lobe, has been suggested to be one neural correlate of aura migraine. To date, little is known about the mechanisms that can trigger or stop aura migraine. Here, to shed some light on this problem, and under the hypothesis that CSD might mediate aura migraine, we aim to study different aspects favoring or disfavoring the propagation of CSD. In particular, by using a computational neuronal model distributed throughout a realistic cortical mesh, we study the role that geometry plays in shaping CSD. Our results are two-fold: first, we found significant differences in the traveling patterns of CSD propagation, both intra- and inter-hemispherically, revealing important asymmetries in the propagation profile. Second, we developed methods able to identify brain regions featuring a peculiar behavior during CSD propagation. Our study reveals dynamical aspects of CSD which, if applied to subject-specific cortical geometry, might shed some light on how to differentiate between healthy subjects and those suffering from migraine.
cortical spreading depression; computational model; realistic cortical geometry; magnetic resonance imaging; finite elements simulation; reaction diffusion
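A CSD-like traveling wave can be illustrated with a minimal one-dimensional excitable-medium model. The sketch below uses a Nagumo-type reaction-diffusion equation on a cable, a drastic simplification of the neuronal model on a realistic cortical mesh described above; all parameters are illustrative:

```python
import numpy as np

# Minimal 1-D reaction-diffusion front (Nagumo equation):
#   du/dt = D * d2u/dx2 + u*(1-u)*(u-a)
# A depolarized patch at the left boundary launches a front that travels
# across the domain, a toy analog of a CSD wave spreading over cortex.
N, D, a = 200, 1.0, 0.3
dx, dt, steps = 0.5, 0.05, 4000       # dt*D/dx^2 = 0.2, explicit Euler stable

u = np.zeros(N)
u[:10] = 1.0                          # initial depolarized patch

for _ in range(steps):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    lap[0] = (u[1] - u[0]) / dx**2    # no-flux boundaries
    lap[-1] = (u[-2] - u[-1]) / dx**2
    u = u + dt * (D * lap + u * (1 - u) * (u - a))

print(u[N // 2] > 0.9)                # the wave has passed mid-domain
```

Since a < 1/2, the depolarized state invades the resting state at a finite speed, so by the end of the simulation the front has crossed the middle of the domain but not yet reached the far end.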
Alongside time-locked event-related potentials (ERPs), nociceptive somatosensory inputs can induce modulations of ongoing oscillations, appearing as event-related synchronization or desynchronization (ERS/ERD) in different frequency bands. These ERD/ERS activities are suggested to reflect various aspects of pain perception, including the representation, encoding, assessment, and integration of nociceptive sensory inputs, as well as behavioral responses to pain, even though the precise details of their roles remain unclear. Previous studies investigating the functional relevance of ERD/ERS activities in pain perception typically assessed their latencies, frequencies, magnitudes, and scalp distributions, which were then correlated with subjective pain perception or stimulus intensity. Nevertheless, these temporal, spectral, and spatial profiles of stimulus-induced ERD/ERS can only partly reveal the dynamics of brain oscillatory activities. Indeed, additional parameters, including but not limited to phase, neural generators, and cross-frequency couplings, should be considered to comprehensively and systematically evaluate the dynamics of oscillatory activities associated with pain perception and behavior. This is crucial for exploring the psychophysiological mechanisms of neural oscillation and for understanding the neural functions of cortical oscillations involved in pain perception and behavior. Notably, some chronic pain conditions (e.g., neurogenic pain and complex regional pain syndrome) are often associated with abnormal synchronized oscillatory brain activities, and selectively modulating cortical oscillatory activities has been shown to be a potential therapeutic strategy to relieve pain using neurostimulation techniques such as repetitive transcranial magnetic stimulation (rTMS) and transcranial alternating current stimulation (tACS).
Thus, investigating oscillatory activities, proceeding from phenomenology to function, opens new perspectives on questions in human pain psychophysiology and pathophysiology, thereby promoting the establishment of rational therapeutic strategies.
pain; cortical oscillations; event-related desynchronization (ERD); event-related synchronization (ERS); electroencephalography (EEG)
We used a musculoskeletal model to investigate the possible biomechanical and neural bases of using consistent muscle synergy patterns to produce functional motor outputs across different biomechanical conditions, which we define as generalizability. Experimental studies in cats demonstrate that the same muscle synergies are used during reactive postural responses at widely varying configurations, producing similarly-oriented endpoint force vectors with respect to the limb axis. However, whether generalizability across postures arises due to similar biomechanical properties or to neural selection of a particular muscle activation pattern has not been explicitly tested. Here, we used a detailed cat hindlimb model to explore the set of feasible muscle activation patterns that produce experimental synergy force vectors at a target posture, and tested their generalizability by applying them to different test postures. We used three methods to select candidate muscle activation patterns: (1) randomly-selected feasible muscle activation patterns, (2) optimal muscle activation patterns minimizing muscle effort at a given posture, and (3) generalizable muscle activation patterns that explicitly minimized deviations from experimentally-identified synergy force vectors across all postures. Generalizability was measured by the deviation between the simulated force direction of the candidate muscle activation pattern and the experimental synergy force vectors at the test postures. Force angle deviations were the greatest for the randomly selected feasible muscle activation patterns (e.g., >100°), intermediate for effort-wise optimal muscle activation patterns (e.g., ~20°), and smallest for generalizable muscle activation patterns (e.g., <5°). Generalizable muscle activation patterns were suboptimal in terms of effort, often exceeding 50% of the maximum possible effort (cf. ~5% in minimum-effort muscle activation patterns). 
The feasible activation ranges of individual muscles associated with producing a specific synergy force vector were reduced by ~45% when generalizability requirements were imposed. Muscles recruited in the generalizable muscle activation patterns had torque-producing characteristics that were less sensitive to changes in posture. We conclude that generalization of function across postures does not arise from limb biomechanics or a single optimality criterion. Muscle synergies may reflect acquired motor solutions globally tuned for generalizability across biomechanical contexts, facilitating rapid motor adaptation.
motor modules; musculoskeletal model; postural response; isometric force; motor control
We present a phenomenological model of electrically stimulated auditory nerve fibers (ANFs). The model reproduces the probabilistic and temporal properties of the ANF response to both monophasic and biphasic stimuli presented in isolation. The main contribution of the model lies in its ability to reproduce statistics of the ANF response (mean latency, jitter, and firing probability) under both monophasic and cathodic-anodic biphasic stimulation, without changing the model's parameters. The response statistics of the model depend on stimulus level and duration of the stimulating pulse, reproducing trends observed in the ANF. In the case of biphasic stimulation, the model reproduces the effects of pseudomonophasic pulse shapes and quantitatively reproduces the dependence on the interphase gap (IPG) of the stimulus pulse. The model is fitted to ANF data using a procedure that uniquely determines each model parameter. It is thus possible to rapidly parameterize a large population of neurons to reproduce a given set of response statistic distributions. Our work extends the stochastic leaky integrate-and-fire (SLIF) neuron, a well-studied phenomenological model of the electrically stimulated neuron. We extend the SLIF neuron to produce a realistic latency distribution by delaying the moment of spiking; during this delay, spiking may be abolished by anodic current. By this means, the probability of the model neuron responding to a stimulus is reduced when a trailing phase of opposite polarity is introduced. By introducing a minimum wait period that must elapse before a spike may be emitted, the model is able to reproduce the differences in threshold level observed in the ANF for monophasic and biphasic stimuli. Thus, the ANF response to a large variety of pulse shapes is reproduced correctly by this model.
auditory nerve; electrical stimulation; cochlear implant; spike timing; computational modeling
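The stochastic-threshold idea behind such phenomenological models can be sketched compactly: deterministic leaky integration of a monophasic pulse compared against a per-trial Gaussian threshold yields a sigmoidal firing probability as a function of stimulus level. This is a much-reduced illustration, not the extended SLIF model described above; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def firing_probability(amplitude, n_trials=2000, tau=0.2e-3,
                       pulse_dur=0.1e-3, dt=1e-6, noise_sd=0.1):
    """Fraction of trials in which the peak of a deterministic leaky
    integration of a monophasic pulse exceeds a per-trial Gaussian
    threshold (mean 1, spread noise_sd)."""
    v, peak = 0.0, 0.0
    for step in range(int(2 * pulse_dur / dt)):
        i_stim = amplitude if step * dt < pulse_dur else 0.0
        v += dt / tau * (i_stim - v)      # leaky integration
        peak = max(peak, v)
    thresholds = rng.normal(1.0, noise_sd, size=n_trials)
    return float(np.mean(peak >= thresholds))

p_low, p_high = firing_probability(2.0), firing_probability(3.0)
print(p_low < 0.2, p_high > 0.9)   # True True: sigmoidal growth with level
```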
Good metrics of the performance of a statistical or computational model are essential for model comparison and selection. Here, we address the design of performance metrics for models that aim to predict neural responses to sensory inputs. This is particularly difficult because the responses of sensory neurons are inherently variable, even in response to repeated presentations of identical stimuli. In this situation, standard metrics (such as the correlation coefficient) fail because they do not distinguish between explainable variance (the part of the neural response that is systematically dependent on the stimulus) and response variability (the part of the neural response that is not systematically dependent on the stimulus, and cannot be explained by modeling the stimulus-response relationship). As a result, models which perfectly describe the systematic stimulus-response relationship may appear to perform poorly. Two metrics have previously been proposed which account for this inherent variability: Signal Power Explained (SPE, Sahani and Linden, 2003), and the normalized correlation coefficient (CCnorm, Hsu et al., 2004). Here, we analyze these metrics, and show that they are intimately related. However, SPE has no lower bound, and we show that, even for good models, SPE can yield negative values that are difficult to interpret. CCnorm is better behaved in that it is effectively bounded between −1 and 1, and values below zero are very rare in practice and easy to interpret. However, it was hitherto not possible to calculate CCnorm directly; instead, it was estimated using imprecise and laborious resampling techniques. Here, we identify a new approach that can calculate CCnorm quickly and accurately. As a result, we argue that it is now a better choice of metric than SPE to accurately evaluate the performance of neural models.
sensory neuron; receptive field; signal power; model selection; statistical modeling; neural coding
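A direct CCnorm computation of the kind advocated above can be sketched as follows, assuming responses arranged as trials x time. The signal-power expression (variance of the summed response minus the summed single-trial variances, scaled by N(N-1)) is one common direct formulation; whether it matches the authors' exact expression is an assumption, and the simulated data are illustrative:

```python
import numpy as np

def cc_norm(resp, pred):
    """Normalized correlation coefficient between a prediction `pred`
    (shape (T,)) and trial-by-trial responses `resp` (shape (N, T))."""
    n = resp.shape[0]
    ybar = resp.mean(axis=0)
    # signal power: the across-trial covariance surviving in the summed
    # response, scaled by the number of trial pairs
    sp = (np.var(resp.sum(axis=0), ddof=1)
          - np.sum(np.var(resp, axis=1, ddof=1))) / (n * (n - 1))
    cov = np.cov(ybar, pred)[0, 1]
    return cov / np.sqrt(sp * np.var(pred, ddof=1))

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 400)
signal = np.sin(t)                                  # "true" stimulus-driven response
resp = signal + rng.normal(0, 1.0, size=(20, 400))  # 20 noisy trials
print(cc_norm(resp, signal))   # close to 1 for a model matching the signal
```

A raw correlation coefficient between `signal` and `ybar` would be well below 1 here because of trial-to-trial noise, which is exactly the bias CCnorm removes.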
There is a strong emphasis on developing novel neuroscience technologies, in particular on recording from more neurons. There has thus been increasing discussion about how to analyze the resulting big datasets. What has received less attention is that over the last 30 years, papers in neuroscience have progressively integrated more approaches, such as electrophysiology, anatomy, and genetics. As such, there has been little discussion on how to combine and analyze this multimodal data. Here, we describe the growth of multimodal approaches, and discuss the needed analysis advancements to make sense of this data.
data integration; technology integration; large-scale data analysis; multimodal; neural data analysis
Structural brain networks constructed from diffusion-weighted MRI (dMRI) have provided a systems perspective for exploring the organization of the human brain. Some redundant and nonexistent fibers, however, are inevitably generated in whole-brain tractography. We propose to add one critical step while constructing the networks to remove these fibers using the linear fascicle evaluation (LiFE) method, and we study the differences between the networks with and without LiFE optimization. For a cohort of nine healthy adults and for 9 of the 35 subjects from the Human Connectome Project, the T1-weighted images and dMRI data are analyzed. Each brain is parcellated into 90 regions-of-interest, and a probabilistic tractography algorithm is applied to generate the original connectome. The elimination of redundant and nonexistent fibers from the original connectome by LiFE creates the optimized connectome, and the random selection of the same number of fibers as the optimized connectome creates the non-optimized connectome. The combination of parcellations and these connectomes leads to the optimized and non-optimized networks, respectively. The optimized networks are constructed with six weighting schemes, and the correlations of the different weighting methods are analyzed. The fiber length distributions of the non-optimized and optimized connectomes are compared. The optimized and non-optimized networks are compared with regard to edges, nodes, and networks within a sparsity range of 0.75–0.95. It is found that relatively more short fibers exist in the optimized connectome. About 24.0% of the edges of the optimized network are significantly different from those in the non-optimized network at a sparsity of 0.75. About 13.2% of the edges are classified as false positives or possible missing edges. The strength and betweenness centrality of some nodes are significantly different between the non-optimized and optimized networks, but not the node efficiency.
The normalized clustering coefficient, the normalized characteristic path length and the small-worldness are higher in the optimized network weighted by the fiber number than in the non-optimized network. These observed differences suggest that LiFE optimization can be a crucial step for the construction of more reasonable and more accurate structural brain networks.
structural connectivity; diffusion-weighted MRI; tractography; brain network; connectome
synaptic plasticity; metaplasticity; Hebbian learning; homeostasis; STDP
After the discovery of grid cells, which are an essential component for understanding how the mammalian brain encodes spatial information, three main classes of computational models were proposed to explain their working principles. Amongst them, the class based on continuous attractor networks (CAN) is promising in terms of biological plausibility and suitable for robotic applications. However, in its current formulation, it is unable to reproduce important electrophysiological findings and cannot be used to perform path integration for long periods of time. In fact, in the absence of an appropriate resetting mechanism, the accumulation of errors over time due to the noise intrinsic in velocity estimation and neural computation prevents CAN models from reproducing stable spatial grid patterns. In this paper, we propose an extension of the CAN model that uses Hebbian plasticity to anchor grid cell activity to environmental landmarks. To validate our approach, we used both artificial data and real data recorded from a robotic setup as input to the neural simulations. The additional neural mechanism can not only anchor grid patterns to external sensory cues but also recall grid patterns generated in previously explored environments. These results might be instrumental for next-generation bio-inspired robotic navigation algorithms that take advantage of neural computation to cope with complex and dynamic environments.
grid cells; grid realignment; spatial information processing; continuous attractor network; sensory integration
Mounting evidence shows that brain disorders involve multiple and different neural dysfunctions, including regional brain damage, change to cell structure, chemical imbalance, and/or connectivity loss among different brain regions. Understanding the complexity of brain disorders can help us map these neural dysfunctions to different symptom clusters as well as understand subcategories of different brain disorders. Here, we discuss data on the mapping of symptom clusters to different neural dysfunctions using examples from brain disorders such as major depressive disorder (MDD), Parkinson’s disease (PD), schizophrenia, posttraumatic stress disorder (PTSD) and Alzheimer’s disease (AD). In addition, we discuss data on the similarities of symptoms in different disorders. Importantly, computational modeling work may be able to shed light on plausible links between various symptoms and neural damage in brain disorders.
brain disorders; functional connectivity; neurotransmitters; regional brain volume; major depressive disorder; Parkinson’s disease; schizophrenia; posttraumatic stress disorder
In neural systems, synaptic plasticity is usually driven by spike trains. Due to the inherent noise of neurons and synapses as well as the randomness of connection details, spike trains typically exhibit variability such as spatial randomness and temporal stochasticity, resulting in variability of synaptic changes under plasticity, which we call efficacy variability. How the variability of spike trains influences the efficacy variability of synapses remains unclear. In this paper, we try to understand this influence under pair-wise additive spike-timing-dependent plasticity (STDP) when the mean strength of plastic synapses into a neuron is bounded (synaptic homeostasis). Specifically, we systematically study, analytically and numerically, how four aspects of statistical features, i.e., synchronous firing, burstiness/regularity, heterogeneity of rates, and heterogeneity of cross-correlations, as well as their interactions, influence the efficacy variability in converging motifs (simple networks in which one neuron receives input from many other neurons). Neurons (including the post-synaptic neuron) in a converging motif generate spikes according to statistical models with tunable parameters. In this way, we can explicitly control the statistics of the spike patterns and investigate their influence on the efficacy variability, without worrying about the feedback from synaptic changes onto the dynamics of the post-synaptic neuron. We separate efficacy variability into two parts: the drift part (DriftV), induced by the heterogeneity of the change rates of different synapses, and the diffusion part (DiffV), induced by weight diffusion caused by the stochasticity of spike trains. Our main findings are: (1) synchronous firing and burstiness tend to increase DiffV, (2) heterogeneity of rates induces DriftV when potentiation and depression in STDP are not balanced, and (3) heterogeneity of cross-correlations induces DriftV together with heterogeneity of rates.
We anticipate that our work will be important for understanding the functional processes of neuronal networks (such as memory) and neural development.
spike pattern structure; synaptic plasticity; efficacy variability; STDP; synaptic homeostasis; spike generating models
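The pair-wise additive STDP rule underlying the study above can be sketched compactly: every pre/post spike pair contributes an exponentially decaying weight change whose sign depends on spike order, with the weight clipped to hard bounds. The amplitudes and time constants below are illustrative:

```python
import numpy as np

def stdp_update(w, pre_spikes, post_spikes, a_plus=0.01, a_minus=0.012,
                tau_plus=0.02, tau_minus=0.02, w_min=0.0, w_max=1.0):
    """Pair-wise additive STDP: a pre-before-post pair at lag dt adds
    +a_plus * exp(-dt/tau_plus); a post-before-pre pair subtracts
    a_minus * exp(dt/tau_minus). Times in seconds; weight is clipped."""
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:
                w += a_plus * np.exp(-dt / tau_plus)
            elif dt < 0:
                w -= a_minus * np.exp(dt / tau_minus)
    return np.clip(w, w_min, w_max)

# causal pairing (pre 5 ms before post) potentiates ...
w_pot = stdp_update(0.5, pre_spikes=[0.000], post_spikes=[0.005])
# ... anti-causal pairing depresses
w_dep = stdp_update(0.5, pre_spikes=[0.005], post_spikes=[0.000])
print(w_pot > 0.5, w_dep < 0.5)   # True True
```

With a_minus slightly larger than a_plus, uncorrelated spiking produces net depression, one common way of keeping additive STDP stable.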
Spatial navigation in mammals is based on building a mental representation of the environment—a cognitive map. However, both the nature of this cognitive map and its underpinnings in neural structures and activity remain vague. A key difficulty is that these maps are collective, emergent phenomena that cannot be reduced to a simple combination of inputs provided by individual neurons. In this paper we suggest computational frameworks, which we call schemas, for integrating the spiking signals of individual cells into a spatial map. We provide examples of four schemas defined by different types of topological relations that may be neurophysiologically encoded in the brain, and demonstrate that each schema provides its own large-scale characteristics of the environment—the schema integrals. Moreover, we find that, in all cases, these integrals are learned at a rate faster than the rate of complete training of the neural networks. Thus, the proposed schema framework differentiates between the cognitive aspect of spatial learning and the physiological aspect at the neural network level.
hippocampus; learning and memory; topological analysis; mathematical concepts; theoretical neuroscience
The ability to assess brain responses in an unsupervised manner based on fMRI measures has remained a challenge. Here we have applied the Random Forest (RF) method to detect differences in the pharmacological MRI (phMRI) response in rats to treatment with an analgesic drug (buprenorphine) as compared to control (saline). Three groups of animals were studied: two groups treated with different doses of the opioid buprenorphine, a low dose (LD) and a high dose (HD), and one receiving saline. PhMRI responses were evaluated in 45 brain regions, and RF analysis was applied to allocate rats to the individual treatment groups. RF analysis was able to identify drug effects based on differential phMRI responses in the hippocampus, amygdala, nucleus accumbens, superior colliculus, and the lateral and posterior thalamus for drug vs. saline. These structures have high levels of mu opioid receptors. In addition, these regions are involved in aversive signaling, which is inhibited by mu opioids. The results demonstrate that buprenorphine-mediated phMRI responses comprise characteristic features that allow a supervised differentiation from placebo-treated rats as well as proper allocation to the respective drug dose group using the RF method, a method that has been successfully applied in clinical studies.
fMRI; random forest; machine learning; phMRI; pharmacology; buprenorphine
Deep brain stimulation (DBS) is an established therapy for movement disorders, including tremor, dystonia, and Parkinson's disease, but the mechanisms of action are not well understood. Symptom suppression by DBS typically requires stimulation frequencies ≥100 Hz, but when the frequency is increased above ~2 kHz, the effectiveness in tremor suppression declines (Benabid et al., 1991). We sought to test the hypothesis that the decline in efficacy at high frequencies is associated with desynchronization of the activity generated within a population of stimulated neurons. Regularization of neuronal firing is strongly correlated with tremor suppression by DBS, and desynchronization would disrupt the regularization of neuronal activity. We implemented computational models of CNS axons with either deterministic or stochastic membrane dynamics, and quantified the response of populations of model nerve fibers to extracellular stimulation at different frequencies and amplitudes. As stimulation frequency was increased from 2 to 80 Hz the regularity of neuronal firing increased (as assessed with direct estimates of entropy), in accord with the clinical effects on tremor of increasing stimulation frequency (Kuncel et al., 2006). Further, at frequencies between 80 and 500 Hz, increasing the stimulation amplitude (i.e., the proportion of neurons activated by the stimulus) increased the regularity of neuronal activity across the population, in accord with the clinical effects on tremor of stimulation amplitude (Kuncel et al., 2007). However, at stimulation frequencies above 1 kHz the regularity of neuronal firing declined due to irregular patterns of action potential generation and conduction block. The reductions in neuronal regularity that occurred at high frequencies paralleled the previously reported decline in tremor reduction and may be responsible for the loss of efficacy of DBS at very high frequencies. 
This analysis provides further support for the hypothesis that effective DBS masks the intrinsic patterns of activity in the stimulated neurons and replaces it with regularized firing.
high frequency stimulation; computational model; informational lesion; conduction block; desynchronization
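The regularity measure central to the analysis above, a direct entropy estimate of firing, can be sketched with a plug-in entropy of the inter-spike-interval (ISI) distribution: lower entropy means more regular firing. The synthetic spike trains and bin count below are illustrative, not the model fiber populations of the study:

```python
import numpy as np

def isi_entropy(spike_times, bins=30):
    """Direct (plug-in) entropy estimate of the inter-spike-interval
    distribution, in bits; lower values indicate more regular firing."""
    isis = np.diff(np.sort(spike_times))
    counts, _ = np.histogram(isis, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(3)
regular = np.arange(0, 10, 0.01)                       # metronome-like train
poisson = np.cumsum(rng.exponential(0.01, size=1000))  # irregular train
print(isi_entropy(regular) < isi_entropy(poisson))     # True
```

Plug-in estimates are biased downward for small spike counts, so in practice a bias correction or a fixed ISI binning across conditions is advisable.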
Attempting to explain the perceptual qualities of pitch has proven to be, and remains, a difficult problem. The wide range of sounds that elicit pitch and a lack of agreement across neurophysiological studies on how pitch is encoded by the brain have made this attempt more difficult. In describing the potential neural mechanisms by which pitch may be processed, a number of neural networks have been proposed and implemented. However, no unsupervised neural networks with biologically accurate cochlear inputs have yet been demonstrated. This paper proposes a simple system in which pitch-representing neurons are produced in a biologically plausible setting. Purely unsupervised regimes of neural network learning are implemented, and these prove sufficient for identifying the pitch of sounds with a variety of spectral profiles, including sounds with missing fundamental frequencies and iterated rippled noises.
competitive neural network; auditory brain; pitch identification; harmonic training; unsupervised learning
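The unsupervised regime described above can be illustrated with plain winner-take-all competitive learning: for each input, only the closest prototype moves toward it, and no labels are used. The two-dimensional toy inputs below stand in for spectral profiles and are purely illustrative; cochlear modeling is omitted entirely:

```python
import numpy as np

rng = np.random.default_rng(5)

def competitive_learning(data, init_idx, lr=0.1, epochs=20):
    """Winner-take-all competitive learning: the closest prototype
    moves a fraction lr toward each presented input."""
    w = data[list(init_idx)].copy()
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            winner = np.argmin(np.linalg.norm(w - x, axis=1))
            w[winner] += lr * (x - w[winner])
    return w

# two synthetic "spectral profile" clusters
a = rng.normal([1.0, 0.0], 0.05, size=(100, 2))
b = rng.normal([0.0, 1.0], 0.05, size=(100, 2))
data = np.vstack([a, b])
# spread the initial prototypes (one drawn from each cluster) to avoid
# dead units; the prototypes converge to the two cluster centers
w = competitive_learning(data, init_idx=(0, 100))
```

Dead units (prototypes that never win) are the classic failure mode of this rule; seeding prototypes from the data, as done here, is one simple mitigation.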
In a network with a mixture of different electrophysiological types of neurons linked by excitatory and inhibitory connections, the temporal evolution passes through repeated epochs of intensive global activity separated by intervals with a low activity level. This behavior mimics the “up” and “down” states experimentally observed in cortical tissues in the absence of external stimuli. We interpret global dynamical features in terms of the individual dynamics of the neurons. In particular, we observe that the crucial role in both the interruption and the resumption of global activity is played by the distributions of the membrane recovery variable within the network. We also demonstrate that the behavior of neurons is influenced more by their presynaptic environment in the network than by their formal types, assigned in accordance with their response to constant current.
self-sustained activity; cortical oscillations; irregular firing activity; hierarchical modular networks; cortical network models; intrinsic neuronal diversity; up-down states; chaotic neural dynamics
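The “membrane recovery variable” above suggests Izhikevich-type model neurons. A single-neuron sketch with regular-spiking parameters (the network structure and the mixture of neuron types used in the study are omitted, and the coarse Euler step is for illustration only):

```python
def izhikevich(i_ext, a=0.02, b=0.2, c=-65.0, d=8.0, t_max=1000.0, dt=0.5):
    """Izhikevich model neuron: v' = 0.04v^2 + 5v + 140 - u + I,
    u' = a(bv - u), with reset v <- c, u <- u + d when v >= 30 mV.
    Returns spike times in ms; defaults give a regular-spiking cell."""
    v, u = c, b * c
    spikes = []
    for step in range(int(t_max / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + i_ext)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike: record and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

quiet = izhikevich(i_ext=0.0)     # no input: the neuron stays silent
active = izhikevich(i_ext=10.0)   # driven: repetitive spiking
print(len(quiet) == 0, len(active) >= 5)   # True True
```

After each spike the recovery variable u jumps by d and then decays slowly, which is exactly the slow variable whose network-wide distribution the abstract implicates in the interruption and resumption of global activity.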
The vestibulo-ocular reflex (VOR) is essential in daily life for stabilizing retinal images during head movements. Balanced vestibular functionality secures optimal reflex performance, which can otherwise be distorted by peripheral vestibular lesions. Fortunately, vestibular compensation at different neuronal sites restores VOR function to some extent over time. Studying vestibular compensation gives insight into possible mechanisms of plasticity in the brain. In this work, novel experimental analysis tools are employed to reevaluate VOR characteristics following unilateral vestibular lesions and compensation. Our results suggest that following vestibular lesions, asymmetric performance of the VOR is not limited to its gain. Vestibular compensation also causes asymmetric dynamics, i.e., different time constants for the VOR during leftward and rightward passive head rotation. Potential mechanisms for these experimental observations are provided using simulation studies.
vestibulo-ocular reflex; vestibular compensation; commissural neuron circuitry; unilateral vestibular lesion; plasticity
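The finding of direction-dependent gain and time constants can be pictured with a deliberately simple first-order model. This is not the authors' simulation (their model is not described here); it only assumes a leaky-integrator stage between head and eye velocity whose parameters switch with rotation direction:

```python
import numpy as np

def vor_response(head_velocity, dt, gain_left, gain_right, tau_left, tau_right):
    """First-order VOR sketch: eye velocity is the (sign-inverted) output of
    a leaky integrator of head velocity, with direction-dependent gain and
    time constant. head_velocity >= 0 is treated as leftward rotation."""
    eye = np.zeros_like(head_velocity)
    x = 0.0
    for i, h in enumerate(head_velocity):
        gain, tau = (gain_left, tau_left) if h >= 0 else (gain_right, tau_right)
        x += dt * (gain * h - x) / tau  # Euler step of the leaky integrator
        eye[i] = -x                     # compensatory eye movement
    return eye
```

Driving the model with velocity steps in each direction reproduces the qualitative observation: asymmetric steady-state gain and asymmetric settling dynamics.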
Based on a modified neural field model of the cortex and thalamus, we propose a computational framework to investigate onset control of absence seizures, which are characterized by spike-wave discharges. First, we briefly demonstrate the existence of various transition types in Taylor's model as the thalamic input is increased. After a disinhibitory mechanism is introduced into Taylor's model, we observe transitions between firing patterns with different dominant frequencies as the thalamic input is varied under different disinhibitory effects on the pyramidal neural population. Interestingly, we find that the onset of spike-wave discharges can be delayed when the disinhibitory input is taken into account. More importantly, we explore the bifurcation mechanisms underlying these firing transitions as key parameters are changed. We also observe other dynamical states, such as simple oscillations and saturated discharges at different spatial scales, which are consistent with previous theoretical and experimental findings.
absence seizures; spike-wave discharges; disinhibitory input; delay onset; bifurcation
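The core analysis workflow here — sweeping an input parameter and classifying the resulting firing pattern by its dominant frequency — can be sketched generically. The snippet below is not Taylor's cortico-thalamic model; as a stand-in it drives a single FitzHugh-Nagumo unit with a constant input and reads off the dominant frequency, with zero marking a non-oscillatory (steady) state:

```python
import numpy as np

def dominant_frequency(signal, dt):
    """Frequency of the largest spectral peak; 0 if the signal is flat."""
    sig = signal - signal.mean()
    if sig.std() < 1e-6:
        return 0.0
    power = np.abs(np.fft.rfft(sig))**2
    freqs = np.fft.rfftfreq(len(sig), dt)
    return freqs[np.argmax(power[1:]) + 1]  # skip the DC bin

def sweep_input(inputs, dt=0.01, t_max=200.0):
    """Sweep a constant drive I (a stand-in for the thalamic input in the
    text) into a FitzHugh-Nagumo unit and record the dominant frequency
    of the resulting activity for each value of I."""
    n = int(t_max / dt)
    freqs = []
    for I in inputs:
        v, w = -1.0, 1.0
        trace = np.empty(n)
        for i in range(n):
            v += dt * (v - v**3 / 3.0 - w + I)    # fast membrane variable
            w += dt * 0.08 * (v + 0.7 - 0.8 * w)  # slow recovery variable
            trace[i] = v
        # discard the transient before spectral analysis
        freqs.append(dominant_frequency(trace[n // 2:], dt))
    return freqs
```

In the actual study the swept model would be the modified cortico-thalamic field equations, and transitions would appear as jumps in the dominant-frequency curve.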
Neuronal avalanches are a prominent form of brain activity, characterized by network-wide bursts whose statistics follow a power-law distribution with a slope near 3/2. Recent work suggests that avalanches of different durations can be rescaled and thus collapsed together. This collapse mirrors work in statistical physics, where such a collapse is proposed as a signature of systems evolving in a critical state. However, no rigorous statistical test has been proposed to examine the degree to which neuronal avalanches collapse together. Here, we describe a statistical test based on functional data analysis in which raw avalanches are first smoothed with a Fourier basis and then rescaled using a time-warping function. Finally, an F-ratio test combined with bootstrap permutation is employed to determine whether avalanches collapse together in a statistically reliable fashion. To illustrate this approach, we recorded avalanches from cortical cultures on multielectrode arrays, as in previous work. The analyses show that avalanches of various durations can be collapsed together in a statistically robust fashion. However, a principal component analysis revealed that the offset of avalanches produced marked variance in the time-warping function, arguing for limitations to the strictly fractal nature of avalanche dynamics. We compared these results with those obtained from cultures treated with AMPA/NMDA receptor antagonists (APV/DNQX), which yield a power law of avalanche durations with a slope greater than 3/2. When collapsed together, these avalanches showed marked misalignments at both onset and offset time points. In sum, the proposed statistical evaluation suggests the presence of scale-free avalanche waveforms and constitutes an avenue for examining critical dynamics in neuronal systems.
neuronal avalanches; in vitro; bursts; network dynamics; cultured neuronal networks; multi-electrode array; criticality
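The collapse step itself (before the functional-data-analysis test) is simple to state: stretch each avalanche profile onto a common unit-time axis and divide its amplitude by T^(γ-1). The sketch below shows only this rescaling plus a crude collapse-quality score, not the Fourier smoothing, time warping, or F-ratio bootstrap of the full method; the linear interpolation and variance-explained score are assumptions of this illustration:

```python
import numpy as np

def collapse(avalanches, gamma=2.0, n_points=50):
    """Rescale avalanche profiles of different durations T onto a common
    unit-time axis and normalize amplitude by T**(gamma - 1). Returns the
    rescaled profiles and the fraction of total variance explained by the
    mean shape (1.0 would be a perfect collapse)."""
    t_common = np.linspace(0, 1, n_points)
    rescaled = []
    for profile in avalanches:
        T = len(profile)
        t = np.linspace(0, 1, T)
        # stretch time to [0, 1] and scale amplitude by the duration
        rescaled.append(np.interp(t_common, t, profile) / T**(gamma - 1))
    rescaled = np.array(rescaled)
    mean_shape = rescaled.mean(axis=0)
    ss_res = ((rescaled - mean_shape)**2).sum()
    ss_tot = ((rescaled - rescaled.mean())**2).sum()
    return rescaled, 1.0 - ss_res / ss_tot
```

For truly scale-free avalanches (profile = f(t/T) · T^(γ-1)) the rescaled curves coincide and the score approaches 1; systematic onset/offset misalignments of the kind reported for APV/DNQX cultures lower it.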
Pain is a highly subjective experience. Self-report is the gold standard for pain assessment in clinical practice, but it may not be available or reliable in some populations. Neuroimaging data, such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), have the potential to provide physiology-based, quantitative nociceptive pain assessment tools that complement self-report. However, existing neuroimaging-based nociceptive pain assessments rely only on information in pain-evoked brain activity and neglect the fact that the perceived intensity of pain is also encoded by ongoing brain activity prior to painful stimulation. Here, we proposed using machine learning algorithms to decode pain intensity from both pre-stimulus ongoing and post-stimulus evoked brain activity. Neural features correlated with the intensity of laser-evoked nociceptive pain were extracted from high-dimensional pre- and post-stimulus EEG and fMRI activity using partial least-squares regression (PLSR). We then used a support vector machine (SVM) to predict the intensity of pain from pain-related time-frequency EEG patterns and BOLD-fMRI patterns. The results showed that combining the predictive information in pre- and post-stimulus brain activity achieves significantly better performance, both in classifying high versus low pain and in predicting the rating of perceived pain, than using post-stimulus brain activity alone. The proposed pain prediction method therefore holds great potential for basic research and clinical applications.
pre-stimulus brain activity; EEG; fMRI; pain perception; machine learning; feature selection
Feedback within the oculomotor system improves visual processing at eye-movement end points, a phenomenon also termed a visual grasp. We do not just view the world around us, however; we also reach out and grab things with our hands. A growing body of literature suggests that visual processing in near-hand space is altered. The control systems for moving either the eyes or the hands rely on parallel networks of fronto-parietal regions, which have feedback connections to visual areas. Since the oculomotor system's effects on visual processing occur through feedback, both through the motor plan and the motor efference copy, a parallel system in which reaching and/or grasping motor-related activity also affects visual processing is likely. Areas in the posterior parietal cortex, for example, receive proprioceptive and visual information used to guide actions, as well as motor efference signals. This trio of information channels is all that would be necessary to produce spatial allocation of reach-related visual attention. We review evidence from behavioral and neurophysiological studies that supports the hypothesis that feedback from the reaching and/or grasping motor-control networks affects visual processing, while noting ways in which it differs from the feedback seen within the oculomotor system. We also suggest that object affordances may represent the neural mechanism through which certain object features are selected for preferential processing when stimuli are near the hand. Finally, we summarize the two effector-based feedback systems and discuss how having separate but parallel effector systems allows for efficient decoupling of eye and hand movements.
attention; vision; sensorimotor integration; reaching and grasping; peripersonal space