We investigated the auditory, dorsal attention, and default mode networks in adults with tinnitus and hearing loss in a resting-state functional connectivity study. Data were obtained using continuous functional magnetic resonance imaging (fMRI) while the participants were at “rest” and not performing any task. Participants belonged to one of three groups: middle-aged adults with tinnitus and mild-to-moderate high-frequency hearing loss (TIN), age-matched controls with normal hearing and no tinnitus (NH), and a second control group with mild-to-moderate high-frequency hearing loss without tinnitus (HL). After standard preprocessing, (a) a group independent component analysis (ICA) using 30 components and (b) a seed-based connectivity analysis were conducted. In the group ICA, the default mode network was the only network to display visually apparent differences between subject groups. In the seeding analysis, we found increased connectivity between the left parahippocampus and the auditory resting-state network in the TIN group compared to NH controls. There was similarly an increased correlation between the right parahippocampus and the dorsal attention network in the TIN group compared to HL controls. Other group differences in this attention network included decreased correlations between the seed regions and the right supramarginal gyrus in TIN patients compared to HL controls. In the default mode network, the TIN group showed a strong decrease in correlation between the seed regions and the precuneus compared to both control groups. These findings identify specific alterations in the connectivity of the default mode, dorsal attention, and auditory resting-state networks associated with tinnitus. The results suggest that therapies that mitigate the increased connectivity of limbic regions with auditory and attention resting-state networks, and the decreased coherence of the default mode network, could be effective at reducing tinnitus-related distress.
Analyzing the organizational structure of neuronal networks and relating it to the behavior of the organism is key to understanding the cognitive functions of the brain. Although some studies have indicated that the spatiotemporal firing patterns of neuronal populations are related to behavioral responses, two issues remain unresolved: whether the functional networks formed by these cortical neurons are related to behavioral tasks, and whether such networks can be used to predict the correct and incorrect outcomes of single trials.
This paper presents a new method for analyzing the structures of whole-recorded neuronal functional networks (WNFNs) and local neuronal circuit groups (LNCGs). Neuronal activity was recorded in several rats performing two different behavioral tasks, a Y-maze task and a U-maze task. Using the assessment of the WNFNs and LNCGs, this paper describes a procedure for predicting the behavioral outcomes of single trials. The methodology consists of four main parts: constructing WNFNs from recorded neuronal spike trains, partitioning the WNFNs into optimal LNCGs using community analysis from social network research, unsupervised clustering of all trials from each dataset into two clusters, and predicting the behavioral outcomes of single trials. The results show that WNFNs and LNCGs correlate with the behavior of the animal. The U-maze datasets yield higher unsupervised clustering accuracy than the Y-maze datasets and can be used to predict behavioral responses effectively.
The results of the present study suggest that the methodology proposed in this paper is suitable for analyzing the characteristics of neuronal functional networks and predicting rat behavior. Such structures in cortical ensemble activity may be critical to information representation during the execution of behavior.
To map and investigate the relationships established on the web between leading health-research institutions around the world.
Sample selection was based on the World Health Organization (WHO) Collaborating Centres (CCs). Data on the 768 active CCs in 89 countries were retrieved from the WHO's database. The final sample consisted of 190 institutions devoted to health sciences in 42 countries. Data on each institution's website were retrieved using webometric techniques (interlinking), and an asymmetric matrix was generated for social network analysis.
The results showed that American and European institutions, such as the Centers for Disease Control and Prevention (CDC), the National Institutes of Health (NIH), and the National Institute of Health and Medical Research (INSERM), are the most highly connected on the web and have the greatest capacity to attract hyperlinks. The Karolinska Institute (KI-SE) in Sweden is well placed as an articulation point between several members of the network and the component's core, but receives comparatively little recognition on the web in the form of inbound hyperlinks. Regarding the north-south divide, Mexico and Brazil appear to be key southern players on the web. The hyperlinks exchanged between northern and southern countries reveal a stark gap: 99.49% of the hyperlinks provided by the North are directed toward the North itself, in contrast to only 0.51% directed toward the South. Southern institutions are likewise more connected to their northern partners, with 98.46% of their hyperlinks directed toward the North, mainly toward the United States, compared with 1.54% toward southern neighbors.
It is advisable to strengthen integration policies on the web and to increase web networking through hyperlink exchange. In this way, the web could actually reflect international cooperation in health and help to legitimize and enhance the visibility of the many existing south-south collaboration networks.
Epilepsy is a common chronic neurological disorder characterized by recurrent unprovoked seizures. Electroencephalogram (EEG) signals play a critical role in the diagnosis of epilepsy. Multichannel EEGs contain more information than do single-channel EEGs. Automatic detection algorithms for spikes or seizures have traditionally been implemented on single-channel EEG, and algorithms for multichannel EEG are unavailable.
This study proposes a physiology-based detection system for epileptic seizures that uses multichannel EEG signals. The proposed technique was tested on two EEG data sets acquired from 18 patients. Both unipolar and bipolar EEG signals were analyzed. We employed sample entropy (SampEn), statistical values, and concepts used in clinical neurophysiology (e.g., phase reversals and potential fields of a bipolar EEG) to extract the features. We further tested the performance of a genetic algorithm cascaded with a support vector machine and post-classification spike matching.
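Sample entropy, one of the features named above, quantifies the irregularity of a signal; a minimal pure-Python sketch follows (the parameter choices m = 2 and r = 0.2·SD are common defaults, not values taken from the study):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates
    matching within a tolerance (Chebyshev distance), A counts the same for
    length m+1, and self-matches are excluded."""
    n = len(x)
    mean = sum(x) / n
    tol = r * math.sqrt(sum((v - mean) ** 2 for v in x) / n)

    def matches(length):
        total = 0
        for i in range(n - length + 1):
            for j in range(i + 1, n - length + 1):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= tol:
                    total += 1
        return total

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

A perfectly regular series yields a SampEn near zero, while an irregular series yields a larger value; this sensitivity to irregularity is what makes the measure useful as an EEG feature.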
We obtained 86.69% spike detection and 99.77% seizure detection for Data Set I. The detection system was further validated using the model trained by Data Set I on Data Set II. The system again showed high performance, with 91.18% detection of spikes and 99.22% seizure detection.
We report a de novo EEG classification system for seizure and spike detection on multichannel EEG that includes physiology-based knowledge to enhance the performance of this type of system.
Hubs within the neocortical structural network determined by graph theoretical analysis play a crucial role in brain function. We mapped neocortical hubs topographically, using a sample population of 63 young adults. Subjects were imaged with high resolution structural and diffusion weighted magnetic resonance imaging techniques. Multiple network configurations were then constructed per subject, using random parcellations to define the nodes and using fibre tractography to determine the connectivity between the nodes. The networks were analysed with graph theoretical measures. Our results give reference maps of hub distribution measured with betweenness centrality and node degree. The loci of the hubs correspond with key areas from known overlapping cognitive networks. Several hubs were asymmetrically organized across hemispheres. Furthermore, females had hubs with higher betweenness centrality, whereas males had hubs with higher node degree. Female networks also had higher small-world indices.
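For readers unfamiliar with the two hub measures, a minimal sketch of node degree and Brandes' betweenness centrality on a toy unweighted graph (plain Python, not the parcellation/tractography pipeline used in the study):

```python
from collections import deque

def degrees(adj):
    """Node degree: the number of connections at each node."""
    return {v: len(nbrs) for v, nbrs in adj.items()}

def betweenness(adj):
    """Brandes' algorithm: for each node, the number of shortest paths
    between other node pairs that pass through it (undirected, unweighted)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                       # breadth-first search from s
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                   # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: b / 2 for v, b in bc.items()}  # halve: undirected double count
```

On a three-node path a-b-c, the middle node b has degree 2 and betweenness 1 (it lies on the single shortest path between a and c), which is the sense in which hubs concentrate shortest-path traffic.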
Spindles - a hallmark of stage II sleep - are a transient oscillatory phenomenon in the EEG believed to reflect thalamocortical activity contributing to unresponsiveness during sleep. Currently, spindles are often classified into two classes: fast spindles, with a frequency of around 14 Hz, occurring in the centro-parietal region; and slow spindles, with a frequency of around 12 Hz, prevalent in the frontal region. Here we aim to establish whether the spindle generation process also exhibits spatial heterogeneity. Electroencephalographic recordings from 20 subjects were automatically scanned to detect spindles, and the spindle occurrence times were used for statistical analysis. Gamma distribution parameters were fit to each inter-spindle interval distribution, and a modified Wald-Wolfowitz lag-1 correlation test was applied. The results indicate that not all spindles are generated by the same statistical process; although this dissociation is neither spindle-type specific nor topographically specific, a single generator for all spindle types appears unlikely.
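A method-of-moments estimator gives the flavour of fitting gamma parameters to an inter-spindle interval distribution (the study's actual fitting procedure is not specified here, so this is an illustrative stand-in):

```python
def gamma_moment_fit(intervals):
    """Method-of-moments fit of a gamma distribution to inter-event
    intervals: shape k = mean^2 / var, scale theta = var / mean
    (sample variance with the n-1 denominator)."""
    n = len(intervals)
    mean = sum(intervals) / n
    var = sum((x - mean) ** 2 for x in intervals) / (n - 1)
    return mean * mean / var, var / mean  # (shape, scale)
```

By construction the product shape x scale recovers the sample mean interval, which serves as a quick sanity check on any fitted pair of parameters.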
Although sleep is a fundamental behavior observed in virtually all animal species, its functions remain unclear. One leading proposal, known as the synaptic renormalization hypothesis, suggests that sleep is necessary to counteract a global strengthening of synapses that occurs during wakefulness. Evidence for sleep-dependent synaptic downscaling (or synaptic renormalization) has been observed experimentally, but the physiological mechanisms which generate this phenomenon are unknown. In this study, we propose that changes in neuronal membrane excitability induced by acetylcholine may provide a dynamical mechanism for both wake-dependent synaptic upscaling and sleep-dependent downscaling. We show in silico that cholinergically-induced changes in network firing patterns alter overall network synaptic potentiation when synaptic strengths evolve through spike-timing dependent plasticity mechanisms. Specifically, network synaptic potentiation increases dramatically with high cholinergic concentration and decreases dramatically with low levels of acetylcholine. We demonstrate that this phenomenon is robust across variation of many different network parameters.
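The spike-timing dependent plasticity referred to above is typically modelled with an exponential learning window; a minimal sketch follows (the parameter values are illustrative, not taken from the paper):

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Additive STDP window: potentiate when the presynaptic spike precedes
    the postsynaptic spike (dt = t_post - t_pre > 0), depress otherwise;
    the magnitude decays exponentially with |dt| (time constant tau, in ms)."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

Under such a rule, the net drift of synaptic strengths depends on the distribution of pre-post spike-time differences, which is how a neuromodulator that reshapes network firing patterns could shift overall potentiation up or down.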
The function of sleep is one of the greatest mysteries in contemporary neuroscience. Nearly every species of animal requires it, yet we do not know why. One idea, known as the synaptic renormalization hypothesis, suggests that waking results in a global increase in the strengths of connections in the brain, a phenomenon which is unsustainable because stronger connections consume more energy and take up more space. The function of sleep, according to this hypothesis, is to downscale or “renormalize” connection strengths. While mounting experimental evidence confirms that sleep-dependent synaptic downscaling does occur, we still do not know what biophysical mechanism causes it. In this paper, we show computational results which indicate that the neuromodulator acetylcholine may have a key role to play in sleep-dependent synaptic downscaling. If confirmed experimentally, these findings will help to unravel the mystery of sleep.
As neurons develop, several immature processes (i.e., neurites) grow out of the cell body. Over time, each neuron breaks symmetry when only one of its neurites grows much longer than the rest, becoming an axon. This symmetry breaking is an important step in neurodevelopment, and aberrant symmetry breaking is associated with several neuropsychiatric diseases, including schizophrenia and autism. However, the effect of neurite count on neuronal symmetry breaking has never been studied. Existing models of neuronal polarization disagree: some predict that neurons with more neurites polarize up to several days later than neurons with fewer neurites, while others predict that neurons with different neurite counts polarize synchronously. We experimentally find that neurons with different neurite counts polarize synchronously. We also show that, despite the significant differences among the previously proposed models, they all agree with our experimental findings when the expression levels of the proteins responsible for symmetry breaking increase with neurite count. Consistent with these results, we observe that the expression levels of two of these proteins, HRas and shootin1, correlate significantly with neurite count. The coordinated symmetry breaking we observed among neurons with different neurite counts may be important for the synchronized polarization of neurons in developing organisms.
We studied reach adaptation to a 30° visuomotor rotation to determine whether augmented error feedback can promote faster and more complete motor learning. Four groups of healthy adults reached with their unseen arm to visual targets surrounding a central starting point. A manipulandum tracked hand motion and projected a cursor onto a display immediately above the horizontal plane of movement. For one group, deviations from the ideal movement were amplified with a gain of 2, whereas another group experienced a gain of 3.1. A third group experienced an offset equal to the average error seen in the initial perturbations, while a fourth group served as controls. Learning in the gain-2 and offset groups was nearly twice as fast as in controls. Moreover, the offset group achieved a greater average reduction in error. Such error-augmentation techniques may be useful for training novel visuomotor transformations, as required of robotic teleoperators, or in the movement rehabilitation of the neurologically impaired.
Translating the timing of brain developmental events across mammalian species using suitable models has provided unprecedented insights into neural development and evolution. More importantly, these models can serve as useful abstractions and predict unknown events across species from known empirical event-timing data retrieved from the published literature. Such predictions can be especially useful since the distribution of the event-timing data is skewed, with a majority of events documented only across a few selected species. The present study investigates the choice of single-hidden-layer feed-forward neural networks (FFNN) for predicting the unknown events from the empirical data. A leave-one-out cross-validation approach is used to determine the optimal number of units in the hidden layer and the decay parameter for the FFNN. It is shown that, unlike the present Finlay-Darlington (FD) model, the FFNN does not impose any constraints on the functional form of the model and falls within the class of semiparametric regression models that can approximate any continuous function. The results from the FFNN as well as the FD model also indicate that a majority of events with large absolute prediction errors correspond to primate events and to late events in the tail of the event-timing distribution, both of which have minimal representation in the empirical data. These results suggest that accurate prediction of primate events may be challenging.
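The leave-one-out procedure itself is model-agnostic; a minimal sketch (with a trivial mean predictor and a least-squares line standing in for the candidate FFNN configurations) shows how competing models can be scored:

```python
def loo_cv_error(xs, ys, fit, predict):
    """Leave-one-out cross-validation: for each point, fit the model on all
    other points, predict the held-out point, and average the squared errors."""
    total = 0.0
    for i in range(len(xs)):
        tx, ty = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        total += (predict(fit(tx, ty), xs[i]) - ys[i]) ** 2
    return total / len(xs)

def fit_line(xs, ys):
    """Least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

predict_line = lambda m, x: m[0] * x + m[1]
fit_mean = lambda xs, ys: sum(ys) / len(ys)   # model is just the mean of y
predict_mean = lambda m, x: m
```

The candidate with the lowest held-out error is selected; in the study this role is played by the hidden-layer size and decay parameter rather than by the two toy models here.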
Graph theory deterministically models networks as sets of vertices linked by connections. Such mathematical representations of networks, called graphs, are increasingly used in neuroscience to model functional brain networks. It has been shown that many forms of structural and functional brain networks have small-world characteristics and thus combine dense local processing with highly effective distal information transfer. Motivated by a previous small-world connectivity analysis of resting EEG data, we explored the implications of a commonly used analysis approach, namely comparing small-world characteristics between two groups using classical inferential statistics. This becomes problematic when using measures of inter-subject correlations, as is the case in commonly used brain imaging methods such as structural and diffusion tensor imaging (with the exception of fibre tracking). Since for each voxel or region there is only one data point, a measure of connectivity can only be computed for a group. To empirically determine an adequate small-world network threshold and to generate the distribution of measures required for classical inferential statistics, samples are generated by thresholding the networks on the group level over a range of thresholds. We see two main problems with this approach: first, the number of thresholded networks is arbitrary; second, the resulting thresholded networks are not independent samples. Both issues become problematic when commonly applied parametric statistical tests are used. Here, we demonstrate the potential consequences of the arbitrary number of thresholds and the non-independence of samples in two examples (using artificial data and EEG data). We then present alternative approaches that overcome these methodological issues.
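The group-level thresholding step under critique can be sketched as follows; sweeping the threshold over the same group matrix produces the very family of binary networks whose number and non-independence are at issue (the correlation values are hypothetical):

```python
def threshold_graph(corr, t):
    """Binarise a symmetric connectivity matrix: an edge exists wherever
    |corr[i][j]| > t (diagonal excluded)."""
    n = len(corr)
    return [[1 if i != j and abs(corr[i][j]) > t else 0 for j in range(n)]
            for i in range(n)]

def density(adj):
    """Fraction of possible undirected edges that are present."""
    n = len(adj)
    edges = sum(adj[i][j] for i in range(n) for j in range(i + 1, n))
    return 2 * edges / (n * (n - 1))

# one group-level correlation matrix, thresholded at several levels
corr = [[1.0, 0.8, 0.3],
        [0.8, 1.0, 0.6],
        [0.3, 0.6, 1.0]]
densities = [density(threshold_graph(corr, t)) for t in (0.2, 0.5, 0.7)]
```

Every graph in `densities` is derived from the same group matrix, which is precisely why the thresholded networks cannot be treated as independent samples in a parametric test.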
The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain–machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain–machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than . We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion is consequently energy efficient.
We validate the performance of our overall system by decoding electrophysiologic data from a behaving rodent.
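To make the counting-only idea concrete, here is a minimal sketch of a discrete-time integrate-and-fire unit that uses nothing but integer addition, subtraction, and comparison (an illustration of this style of computation, not the paper's actual decoder):

```python
def count_if_neuron(inputs, threshold=4, leak=1):
    """Discrete-time integrate-and-fire unit using only integer counting:
    each step adds the incoming spike count, subtracts a fixed leak (floored
    at zero), and emits a spike, resetting the counter, when the counter
    reaches threshold."""
    v, out = 0, []
    for s in inputs:
        v = max(0, v + s - leak)
        if v >= threshold:
            out.append(1)
            v = 0
        else:
            out.append(0)
    return out
```

Because the whole update is a counter comparison, a unit like this maps directly onto low-power digital logic with no multipliers, which is the property the architecture above exploits.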
In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.
Status epilepticus (SE), a pro-epileptogenic brain insult in rodent models of temporal lobe epilepsy, is successfully induced by pilocarpine in some, but not all, rats. This study aimed to identify characteristic alterations within the hippocampal neural network prior to the onset of SE. Sixteen microwire electrodes were implanted into the left hippocampus of male Sprague-Dawley rats. After a 7-day recovery period, animal behavior, hippocampal neuronal ensemble activities, and local field potentials (LFP) were recorded before and after an intra-peritoneal injection of pilocarpine (350 mg/kg). The single-neuron firing, population neuronal correlation, and coincident firing between neurons were compared between SE (n = 9) and nonSE rats (n = 12). A significant decrease in the strength of functional connectivity prior to the onset of SE, as measured by changes in coincident spike timing between pairs of hippocampal neurons, was exclusively found in SE rats. However, single-neuron firing and LFP profiles did not show a significant difference between SE and nonSE rats. These results suggest that desynchronization in the functional circuitry of the hippocampus, likely associated with a change in synaptic strength, may serve as an electrophysiological marker prior to SE in pilocarpine-treated rats.
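Coincident firing between a pair of neurons can be quantified by counting near-simultaneous spike pairs; a minimal sketch with a hypothetical coincidence window follows (the study's actual measure of coincident spike timing may differ):

```python
def coincident_count(train_a, train_b, window=0.005):
    """Count near-coincident spike pairs between two sorted spike-time
    lists (seconds): a pair counts when the two spikes are within
    `window` of each other. Runs in linear time via a sliding pointer."""
    count, j = 0, 0
    for t in train_a:
        # advance j past spikes in train_b too early to match t
        while j < len(train_b) and train_b[j] < t - window:
            j += 1
        k = j
        while k < len(train_b) and train_b[k] <= t + window:
            count += 1
            k += 1
    return count
```

Normalising such counts by the firing rates of the two trains yields a coincidence index whose decline over time would reflect the desynchronization described above.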
There is considerable interest in the role of coupling between theta and gamma oscillations in the brain in the context of learning and memory. Here we have used a neural network model capable of producing coupling of theta phase to gamma amplitude, firstly to explore its ability to reproduce reported learning-related changes and secondly to examine memory-span and phase-coding effects. The spiking neural network incorporates two kinetically different GABA-A receptor-mediated currents to generate both theta and gamma rhythms, and we have found that, by selective alteration of both NMDA receptors and slow GABA-A receptors, it can reproduce learning-related changes in the strength of coupling between theta and gamma, either with or without coincident changes in theta amplitude. When the model was used to explore the relationship between theta and gamma oscillations, working memory capacity, and phase coding, it showed that the potential storage capacity of short-term memories, in terms of nested gamma subcycles, coincides with maximal theta power. Increasing theta power is also related to the precision of theta phase, which functions as a potential timing clock for neuronal firing in the cortex and hippocampus.
An array of signals regulating the early stages of postnatal subventricular zone (SVZ) neurogenesis has been identified, but much less is known about the molecules controlling the late stages. Here, we investigated the function of the activity-dependent and morphogenic microRNA miR-132 in the synaptic integration and survival of olfactory bulb (OB) neurons born in the neonatal SVZ. In situ hybridization revealed that miR-132 expression begins at the onset of synaptic integration in the OB. Using in vivo electroporation, we found that sequestration of miR-132 with a sponge-based strategy reduced dendritic complexity and spine density, while overexpression had the opposite effects. These effects were mirrored by corresponding changes in the frequency of GABAergic and glutamatergic synaptic inputs, reflecting altered synaptic integration. In addition, timed overexpression of miR-132 at the onset of synaptic integration using an inducible approach led to a significant increase in the survival of newborn neurons. These data suggest that miR-132 forms the basis of a structural plasticity program in SVZ-OB postnatal neurogenesis. miR-132 overexpression in transplanted neurons may thus hold promise for enhancing neuronal survival and improving the outcome of transplant therapies.
The interplay between anatomical connectivity and dynamics in neural networks plays a key role in the functional properties of the brain and in the connectivity changes induced by neural diseases. However, a detailed experimental investigation of this interplay at both cellular and population scales in the living brain is limited by accessibility. Alternatively, the basic operational principles can be investigated with morphological, electrophysiological, and computational methods by studying the activity emerging from large in vitro networks of primary neurons organized with imposed topologies. Here, we validated a new bio-printing approach that effectively maintains the topology of hippocampal cultures in vitro, and investigated, by patch-clamp and MEA electrophysiology, the emerging functional properties of these grid-confined networks. In spite of differences in the organization of physical connectivity, our bio-patterned grid networks retained the key properties of synaptic transmission, short-term plasticity, and overall network activity relative to random networks. Interestingly, the imposed grid topology resulted in a reinforcement of functional connections along orthogonal directions, shorter connectivity links, and a greatly increased spiking probability in response to focal stimulation. These results demonstrate that reliable functional studies can now be performed on large neuronal networks in the presence of sustained changes in physical network connectivity.
Most complex networks from areas such as biology, sociology, and technology show degree correlations, in which the probability of a link between two nodes depends on their connectivity. It is widely believed that complex networks are either disassortative (links between hubs are systematically suppressed) or assortative (links between hubs are enhanced). In this paper, we analyze a variety of biological networks and find that they generally show a dichotomous degree correlation. We find that many properties of biological networks can be explained by this dichotomy in degree correlation, including the neighborhood connectivity, the sickle-shaped clustering coefficient distribution and the modularity structure. This dichotomy distinguishes biological networks from truly disassortative or assortative networks such as the Internet and social networks. We suggest that the modular structure of networks accounts for the dichotomy in degree correlation and vice versa, shedding light on the source of modularity in biological networks. We further show that a robust and well connected network necessitates the dichotomy of degree correlation, suggestive of an evolutionary motivation for its existence. Finally, we suggest that a dichotomous degree correlation favors a centrally connected modular network, by which the integrity of the network and the specificity of its modules might be reconciled.
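Degree correlation of the kind discussed here is commonly summarised by the assortativity coefficient, the Pearson correlation between the degrees at the two ends of each edge; a minimal sketch:

```python
def degree_assortativity(edges):
    """Pearson correlation between the degrees at the two ends of each
    undirected edge; each edge contributes both orientations. Returns a
    value in [-1, 1]: negative = disassortative, positive = assortative.
    (Undefined for regular graphs, where all degrees are equal.)"""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)
```

A star graph, where a single hub connects only to degree-1 leaves, is maximally disassortative (coefficient -1), illustrating the "links between hubs suppressed" regime.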
Transfer entropy (TE) is an information-theoretic measure which has received recent attention in neuroscience for its potential to identify effective connectivity between neurons. Calculating TE for large ensembles of spiking neurons is computationally intensive, and has caused most investigators to probe neural interactions at only a single time delay and at a message length of only a single time bin. This is problematic, as synaptic delays between cortical neurons, for example, range from one to tens of milliseconds. In addition, neurons produce bursts of spikes spanning multiple time bins. To address these issues, here we introduce a free software package that allows TE to be measured at multiple delays and message lengths. To assess performance, we applied these extensions of TE to a spiking cortical network model (Izhikevich, 2006) with known connectivity and a range of synaptic delays. For comparison, we also investigated single-delay TE, at a message length of one bin (D1TE), and cross-correlation (CC) methods. We found that D1TE could identify 36% of true connections when evaluated at a false positive rate of 1%. For extended versions of TE, this dramatically improved to 73% of true connections. In addition, the connections correctly identified by extended versions of TE accounted for 85% of the total synaptic weight in the network. Cross-correlation methods generally performed more poorly than extended TE, but were useful when data length was short. A computational performance analysis demonstrated that the algorithm for extended TE, when used on currently available desktop computers, could extract effective connectivity from 1 hr recordings containing 200 neurons in ∼5 min. We conclude that extending TE to multiple delays and message lengths improves its ability to assess effective connectivity between spiking neurons. These extensions to TE soon could become practical tools for experimentalists who record hundreds of spiking neurons.
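A minimal delayed transfer entropy for binary spike trains with one-bin histories conveys the quantity being extended (a simplified sketch, not the toolbox's implementation):

```python
import math
from collections import Counter

def transfer_entropy(x, y, delay=1):
    """TE(X -> Y) at a given delay, one-bin histories, in bits:
    TE = sum over observed triples of
         p(y_t, y_{t-1}, x_{t-delay}) *
         log2[ p(y_t | y_{t-1}, x_{t-delay}) / p(y_t | y_{t-1}) ]."""
    triples = Counter()
    for t in range(max(1, delay), len(y)):
        triples[(y[t], y[t - 1], x[t - delay])] += 1
    n = sum(triples.values())
    pairs_yx, pairs_yy, singles = Counter(), Counter(), Counter()
    for (yt, yp, xp), c in triples.items():
        pairs_yx[(yp, xp)] += c   # history pair (y_{t-1}, x_{t-delay})
        pairs_yy[(yt, yp)] += c   # outcome pair (y_t, y_{t-1})
        singles[yp] += c          # history alone
    te = 0.0
    for (yt, yp, xp), c in triples.items():
        p_cond_full = c / pairs_yx[(yp, xp)]
        p_cond_hist = pairs_yy[(yt, yp)] / singles[yp]
        te += (c / n) * math.log2(p_cond_full / p_cond_hist)
    return te
```

When y is simply x shifted by one bin, TE peaks at the matching delay and vanishes at the wrong one, which is the rationale for scanning multiple delays rather than fixing a single one.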
The manner in which information is encoded in neural signals is a major issue in neuroscience. A common distinction is between rate codes, where information in neural responses is encoded as the number of spikes within a specified time frame (encoding window), and temporal codes, where the position of spikes within the encoding window carries some or all of the information about the stimulus. One test for the existence of a temporal code in neural responses is to add artificial time jitter to each spike in the response and then assess whether information in the response has been degraded. If so, temporal encoding might be inferred, on the assumption that the jitter is small enough to alter the position, but not the number, of spikes within the encoding window. Here, the effects of artificial jitter on various spike train and information metrics were derived analytically, and this theory was validated using data from afferent neurons of the turtle vestibular and paddlefish electrosensory systems, and from model neurons. We demonstrate that the jitter procedure will degrade information content even when coding is known to be entirely by rate. For this and additional reasons, we conclude that the jitter procedure by itself is not sufficient to establish the presence of a temporal code.
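The jitter procedure itself is simple to state in code; a minimal sketch using a uniform jitter window (the choice of distribution and width here is illustrative):

```python
import random

def jitter_spikes(spike_times, width, seed=None):
    """Temporal jitter: displace each spike independently by a uniform
    offset in [-width/2, +width/2], preserving the spike count (and hence
    any pure rate code over the window) while perturbing spike timing."""
    rng = random.Random(seed)
    return sorted(t + rng.uniform(-width / 2, width / 2) for t in spike_times)
```

The test then compares stimulus information computed from the original and jittered responses; the result above cautions that a drop in information after this manipulation does not, by itself, establish temporal coding.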
This paper presents an investigation of long-timescale (5-minute) fMRI neuronal adaptation effects based on retinotopic mapping and spatial frequency stimuli. A hierarchical linear model was developed to quantify the adaptation effects in the visual cortex, and the analysis examined retinotopic mapping and spatial frequency adaptation effects in the amblyopic cortex. Our results suggest, firstly, that there are many cortical regions, including V1, where neuronal adaptation effects are reduced in response to amblyopic eye stimulation. Secondly, the regional contribution differs, appearing to start in V1 and spread to extrastriate regions. Thirdly, there is greater adaptation to broadband retinotopic mapping than to narrowband spatial frequency stimulation of the amblyopic eye, and we find a significant correlation between the fMRI response and the magnitude of the adaptation effect, suggesting that the reduced adaptation may be a consequence of the reduced response to different stimuli reported for amblyopic eyes.