The brain is noisy. Neurons receive tens of thousands of highly fluctuating inputs and generate spike trains that appear highly irregular. Much of this activity is spontaneous—uncoupled to overt stimuli or motor outputs—leading to questions about the functional impact of this noise. Although noise is most often thought of as disrupting patterned activity and interfering with the encoding of stimuli, recent theoretical and experimental work has shown that noise can play a constructive role—leading to increased reliability or regularity of neuronal firing in single neurons and across populations. These results raise fundamental questions about how noise can influence neural function and computation.
Compared with their artificial analogues, biological systems are often considered quite unreliable. Each press of the space bar on a keyboard has a >99% chance of transmitting the appropriate signal to the computer. By contrast, a deflection of a whisker might have only a 15% chance of generating a spike in a corresponding layer 4 neuron in a mouse’s somatosensory cortex. If computers were so unreliable, they would be nearly useless devices. Often this unreliability is attributed to the noisiness of biological systems. That is, the behavior of many biological systems is often considered to be stochastic or probabilistic. Indeed, recordings from neurons in vivo and in vitro have shown that total membrane current can be described as being randomly drawn from a Gaussian distribution (Figure 1) and that action potentials can be described as occurring randomly in time according to a Poisson process (see Glossary for a definition) [2-4]. Discussion of neuronal noise has focused on how neurons can overcome or compensate for this noise, for example by averaging across time or across neurons, and still process and transmit information. Here we take a contrasting view and discuss experimental and theoretical results that emphasize how neurons can behave reliably and synchronously not despite, but because of, noise.
Many of the ideas needed for this review can be introduced by considering the simple example of current injection into a neuron during an intracellular recording. If the current is rapidly stepped from zero to a value large enough to fire the cell, the neuron will fire a series of action potentials, namely a spike train. Maintaining this level of current for an extended time will allow the frequency of action potentials to stabilize, resulting in a regular clock-like series of spikes which will last until the current is turned off. Repeating this step-like current injection many times will result in many trains of action potentials with similar rates. By aligning these spike trains to the onset of the current injection, one can measure whether spikes occur at the same time, relative to stimulus onset, across trials. Various measures of trial-to-trial reliability of spiking are used [5,6], but all of these attempt to quantify the probability of a spike occurring at the same time in different trains of action potentials, appropriately normalized by the single trial firing rate. When spike trains are evoked by steady-state current injection, the timing of the first few action potentials will be similar from trial to trial. However, the timing of later action potentials in the train will be different on each trial (Figure 2a) [7,8]. This variability of spike times across trials is thought to be caused by random trial-to-trial fluctuations in the opening of ion channels, spontaneous synaptic inputs and so forth.
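The notion of coincidence counting "appropriately normalized" by the firing rate can be made concrete with a small sketch. The measure below (a coincidence count across trial pairs, normalized by the chance level expected from the mean per-bin spike probability) is only illustrative; the published measures [5,6] differ in their details, and the bin width and function names here are our own assumptions.

```python
import random

def reliability(trials, bin_ms=2.0, duration_ms=1000.0):
    """Crude spike-time reliability: coincidences across trial pairs,
    normalized by the chance level implied by the mean firing rate.

    trials: list of trials, each a list of spike times in ms.
    Returns ~1 for chance-level timing, >1 for reliable timing.
    """
    n_bins = int(duration_ms / bin_ms)
    # Binarize each trial: did at least one spike land in each bin?
    binned = []
    for spikes in trials:
        b = [0] * n_bins
        for t in spikes:
            i = int(t / bin_ms)
            if 0 <= i < n_bins:
                b[i] = 1
        binned.append(b)
    n = len(binned)
    # Count bins in which both trials of a pair spiked
    coinc = sum(binned[i][k] * binned[j][k]
                for i in range(n) for j in range(i + 1, n)
                for k in range(n_bins))
    pairs = n * (n - 1) / 2
    mean_p = sum(sum(b) for b in binned) / (n * n_bins)  # per-bin spike prob.
    expected = pairs * n_bins * mean_p ** 2              # chance coincidences
    return coinc / expected if expected > 0 else 0.0

# Five identical trials: timing far more reliable than chance (value >> 1)
same = [[100.0, 300.0, 500.0]] * 5
print(reliability(same) > 1.0)  # True
```

Values near 1 indicate that spikes coincide only as often as the firing rate predicts; values well above 1 indicate reliable spike timing across trials.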
A single spike can be evoked reliably (on every trial) at a particular time during such a spike train by adding a large, transient depolarizing current pulse to the constant current input. Such large transients will also result in more reliable timing for a few subsequent spikes. Adding a smaller depolarizing transient can evoke a spike on 50% of the trials. Even negative current pulses, which transiently hyperpolarize the cell, can cause spikes to be fired reliably as the neuron recovers from the hyperpolarization. Thus, stimuli that consist of many transient current pulses (positive and/or negative at various times and of various amplitudes) added to the constant current input described above will generate sequences of spikes that occur reliably at particular times across trials. Such inputs will look ‘noisy’ (Figure 2a, left) and are often called ‘frozen noise.’ Such fluctuations are noise because they are generated by drawing repeatedly from a random distribution (most often a Gaussian distribution), and thus the input has no pattern (see Glossary). The stimulus waveform is then fixed or ‘frozen’ and used repeatedly across many trials. Frozen noise inputs generate reliable spiking [7-9], provided the amplitude of the noise is large enough to overcome the intrinsic noise of the synaptic inputs and channels in the cell [10,11]. In vivo, similar fluctuating inputs can be provided by time-dependent variations in a sensory stimulus (Figure 2b) [12,13]. Note that the notion of providing the same noise over and over is somewhat counterintuitive, as in many contexts, noise refers to some uncontrolled source of variation. However, as discussed here, noise refers to an input, the value of which can be seen as being randomly drawn from some statistical distribution (e.g. a normal distribution).
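In practice, ‘freezing’ noise amounts to fixing the seed of a random number generator so that the identical sequence of draws is replayed on every trial. A minimal sketch (the function name and parameters are our own assumptions):

```python
import random

def frozen_noise(n_samples, mean=0.0, sd=1.0, seed=7):
    """Draw a Gaussian noise waveform once; fixing the seed 'freezes' it,
    so the identical waveform can be replayed on every trial."""
    rng = random.Random(seed)
    return [rng.gauss(mean, sd) for _ in range(n_samples)]

trial1 = frozen_noise(1000)
trial2 = frozen_noise(1000)                  # same seed -> same waveform
print(trial1 == trial2)                      # True: the 'same noise' each trial
print(frozen_noise(1000, seed=9) == trial1)  # False: a fresh, different draw
```

Each sample is random in the sense of being drawn from a Gaussian distribution, yet the waveform delivered to the cell is identical on every repetition, which is exactly what makes trial-to-trial reliability measurable.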
The use of frozen noise is a very useful tool for evaluating what features of a neuron’s response are actually dependent on a particular stimulus, as opposed to being dependent on the initial state of the neuron or some stimulus-independent process. A similar analysis to the above can be applied to spike trains generated in a population of neurons that receive either identical or similar inputs. In this case, correlated fluctuations will generate action potentials that occur at the same time in different neurons across the population [14,15].
Below we discuss how these parallel phenomena of reliability and synchrony are generated and how they depend on both the nature of input signals and on the intrinsic properties of the neurons receiving these inputs. We focus on a set of findings that show how fluctuating or noisy inputs can increase the reliability or synchrony of spike trains and discuss recent experimental data on this constructive role of noise in the context of recent results from theoretical physics.
Adding noise to an input signal changes many features of a neuron’s spike train. If the amplitude of the noise is large enough, noise can push a neuron from rest over threshold, making it spike, even if the average value of the input is zero. Thus, adding noise can make an unresponsive neuron responsive. The effectiveness of the noise at generating spikes will vary with the noise amplitude and spectrum (see Glossary). Similarly, for a neuron that is firing at a low rate, the addition of noise might increase the probability that the membrane potential will cross the threshold for action potential generation and thus might increase the firing rate. The spikes that are elicited by adding noise to an otherwise subthreshold neuron will be elicited in a pattern that reflects the properties of the noise and the biophysical properties of the neuron. For example, providing white noise input (see Glossary) to a simple integrate-and-fire model neuron will cause it to fire spikes according to a renewal process such as a Poisson process.
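The claim that noise can make an otherwise silent neuron responsive can be illustrated with a leaky integrate-and-fire sketch. All parameters below are illustrative assumptions rather than fitted values, and for simplicity the noise is held constant over each time step rather than scaled as true white noise:

```python
import random

def lif_spike_count(noise_sd, mean_input=0.0, seed=1, steps=50000,
                    dt=0.1, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Euler-integrated leaky integrate-and-fire: dv/dt = (-v + I(t)) / tau.
    With mean_input = 0 the deterministic input never reaches threshold;
    only noise can push the membrane potential over it."""
    rng = random.Random(seed)
    v, spikes = 0.0, 0
    for _ in range(steps):
        I = mean_input + noise_sd * rng.gauss(0.0, 1.0)  # noisy drive
        v += dt * (-v + I) / tau
        if v >= v_thresh:
            spikes += 1
            v = v_reset
    return spikes

print(lif_spike_count(noise_sd=0.0))       # 0: zero-mean input is subthreshold
print(lif_spike_count(noise_sd=10.0) > 0)  # True: noise alone evokes spikes
```

With zero noise the membrane potential never moves; with sufficient noise amplitude the same zero-mean input produces spiking, the simplest sense in which noise makes an unresponsive neuron responsive.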
Adding noise also can increase the regularity or periodicity of spiking, causing neurons to fire in a more clock-like fashion, with spikes being separated by fixed time intervals. This can occur in at least two ways. First, if a neuron is receiving a subthreshold input that varies in time (e.g. a sine wave), then addition of noise of appropriate amplitude might induce firing only during the peaks of the subthreshold signal. This is an example of stochastic resonance, which has been of considerable interest in a variety of systems ranging from global climate to electronics to neurons and which also might increase the response to nonoscillatory subthreshold signals. Second, even without any underlying oscillatory input, in some neuron models [19,20] increases in the amplitude of noise can lead to more patterned firing, as indicated by a decrease in the coefficient of variation (the ratio of the standard deviation to the mean of the interspike intervals) of the spike train with increasing noise. This phenomenon, known as coherence resonance, is predicted from computational models but has not yet been described in real neurons. These phenomena demonstrate that noise can alter the rate and pattern of firing of single neurons in different ways, potentially improving the neuron’s ability to transmit signals and represent stimuli.
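The first mechanism, noise revealing a subthreshold periodic signal, can be sketched with a bare threshold detector. This is a toy model of our own, not a published neuron model: without noise the sine never crosses threshold; with noise, crossings occur and cluster near the peaks of the sine.

```python
import math, random

def detect(noise_sd, seed=2, steps=20000, dt=0.1, period=200.0,
           amp=0.5, thresh=1.0):
    """Threshold detector driven by a subthreshold sine plus noise.
    Returns (number of events, fraction of events in the positive
    half-cycle of the sine). Parameters are illustrative."""
    rng = random.Random(seed)
    spikes, on_peak, refractory = 0, 0, 0
    for n in range(steps):
        t = n * dt
        s = amp * math.sin(2 * math.pi * t / period)  # subthreshold signal
        if refractory > 0:
            refractory -= 1
            continue
        if s + noise_sd * rng.gauss(0.0, 1.0) >= thresh:
            spikes += 1
            refractory = 50        # crude dead time after each event
            if s > 0:
                on_peak += 1
    return spikes, (on_peak / spikes if spikes else 0.0)

print(detect(0.0)[0])        # 0: the sine alone never crosses threshold
n, frac = detect(0.4)
print(n > 0 and frac > 0.5)  # True: events occur, mostly near sine peaks
```

Because crossings are far more probable when the sine is near its peak, the event train inherits the periodicity of a signal that was, by itself, invisible to the detector.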
Assessing the reliability of neuronal spiking is critical to understanding what features of spike trains might be important for coding stimuli. If any feature of the spike train (such as spike times) is not reliably generated when stimuli are repeated, then this feature will not carry information about the stimulus, restricting possible spike time coding schemes. In vivo data have shown that neurons are reliable enough to transmit information on a timescale of ~10 ms, although this is highly dependent on the particular system being studied [12,22].
Because they rely on the same mechanisms, factors that affect reliability of spike trains across stimulus repetitions also will affect synchrony of spike trains across neurons. Synchrony also will depend on the similarity of properties of the neurons in the population being synchronized. Different neurons receiving the same input can respond with very different spike trains [23-25], and so even highly uniform noise might cause synchronization of specific subsets of neurons with similar properties. Synchronization also is influenced by connectivity, which in some cases strongly biases a network in favor of synchrony.
The idea that synchronous oscillatory activity encodes information about stimuli has generated considerable interest, although these claims are controversial [27-30]. Besides possibly coding information, synchrony will influence transmission of activity from one group of neurons to another [31,32]. Synchronous spiking allows groups of neurons with common postsynaptic targets to depolarize these targets more effectively, leading to better propagation of spiking to downstream targets [33,34]. Such a role of synchronization suggests that the timescales that are most relevant for synchronization should be comparable to the membrane time constants of the target neurons (<~50 ms). Understanding synchrony is also important because many techniques for measuring neuronal activity, including local field potential (LFP) recording, intrinsic optical imaging, EEG and MEG, depend on neurons having synchronized activity at the timescale relevant to the measurement [36,37]. Thus, analysis of synchrony is important to interpreting data acquired using these techniques.
Local field potentials or EEG recordings often show large oscillatory responses, indicating that the firing of many nearby neurons is periodic and synchronized. These oscillations often are synchronized across multiple recording sites, indicating that this synchronization can occur between distant brain areas. Such short-range and long-range synchronization might be generated by several distinct mechanisms. Local oscillations are often considered to arise owing to features of local circuit connectivity. For example, gamma oscillations (which typically range from 40 to 80 Hz) can be generated from the interplay of pyramidal cells and local circuit interneurons. Interneuron activity will cause pauses in pyramidal neuron firing; the pyramidal cells will then fire as inhibition decays, reactivating the interneurons. Because of the wide arborization of interneuron axons, activity of these cells can synchronize firing of many pyramidal cells [26,40]. The duration of the pause in pyramidal cell firing will be determined by the decay time constant of the inhibitory postsynaptic currents (IPSCs) received by the pyramidal neurons. For GABAA receptors, the IPSCs will have a time constant of ~10 ms, resulting in a frequency of pyramidal cell firing in the gamma frequency range.
Recently we have described how oscillatory synchrony can be generated by addition of noise, even when neurons are not directly connected, provided that the noise received by different neurons is correlated. Such correlated fluctuations have been shown to occur in olfactory bulb mitral cells owing to divergent connectivity of local circuit interneurons [42,43]. These correlated fluctuations can be in the form of Poisson trains of postsynaptic current-like events, or they can be continuous white (or colored) noise. Output synchronization increases approximately linearly with input correlation, so even modest changes in correlated input lead to changes in the degree of output synchronization.
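The dependence of output synchrony on input correlation can be sketched with two uncoupled leaky integrate-and-fire neurons whose noisy drive is split into a shared and a private component. All parameter values here are our own illustrative assumptions, not a published model:

```python
import random

def coincidences(c, seed=3, steps=60000, dt=0.1, tau=10.0,
                 drive=2.0, noise_sd=1.5, window=10):
    """Two uncoupled leaky integrate-and-fire neurons, both driven above
    threshold (drive > 1), each receiving
        I_i = drive + noise_sd * (sqrt(c)*common + sqrt(1-c)*private_i),
    so the input correlation is c while total input variance is fixed.
    Returns the fraction of neuron 0's spikes with a neuron 1 spike
    within `window` time steps."""
    rng = random.Random(seed)
    v = [0.0, 0.5]                      # start at different potentials
    spikes = [[], []]
    for n in range(steps):
        common = rng.gauss(0.0, 1.0)    # fluctuation shared by both cells
        for i in range(2):
            private = rng.gauss(0.0, 1.0)
            I = drive + noise_sd * ((c ** 0.5) * common
                                    + ((1 - c) ** 0.5) * private)
            v[i] += dt * (-v[i] + I) / tau
            if v[i] >= 1.0:
                spikes[i].append(n)
                v[i] = 0.0
    hits = sum(1 for s in spikes[0]
               if any(abs(s - t) <= window for t in spikes[1]))
    return hits / max(len(spikes[0]), 1)

print(coincidences(0.9) > coincidences(0.0))  # True: shared noise synchronizes
```

Even though the two cells are never coupled and start at different potentials, the mostly-shared fluctuations pull their spike times together, whereas with purely private noise coincidences stay near the chance level set by the firing rate.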
Previous theoretical work has provided several examples of similar noise-induced synchronization. Perhaps the most fundamental result is that two uncoupled nonlinear systems driven by identical weak noise will eventually synchronize, independent of their initial conditions [44,45]. Similar results have been obtained for reduced neuron models driven either by white noise [46,47] or Poisson trains. Synchronization can occur rapidly—within a few cycles of the oscillation, provided that the rate of correlated events is relatively high. Similar phenomena in which noise can generate oscillatory synchrony have been reported recently in models of regulation of gene expression in systems such as quorum-sensing bacteria, suggesting that this might be a very general phenomenon.
At first glance, correlated noise seems unlikely to support synchronization, but this phenomenon can be understood by considering the following example. Consider the effect of a small input delivered simultaneously to many neurons in the population (Figure 3b). If these neurons are firing regularly they can be modeled as oscillators (see Box 1), and their current state can be described by their phase. In this case, the degree of synchronization in the network can be measured as the distribution of phase differences between any one neuron and the rest of the population. If the neurons are completely unsynchronized, the distribution of their phase differences will be flat or uniform. If neurons are synchronized, the distribution of their phase differences will be peaked. If we start with a population of unsynchronized neurons and we deliver a small stimulus pulse, then the effect of this stimulus on the phase of each neuron will be determined by the phase of the neuron at the time the stimulus arrives. The input might advance or delay the next spike, or it might leave the next spike time unchanged.
The phase resetting curve (PRC) is a very useful tool for characterizing how oscillating systems respond to stimuli. Neurons communicate via synaptic currents, which cause changes in membrane potential. A natural question is how inputs affect the firing of the neuron. In a rhythmically firing cell, we can ask how inputs shift the timing of the rhythm. We can define the phase of a rhythm by assigning the peak of the first action potential to be a phase of zero, and the phase of the second action potential to be 2π. Figure 3 shows a trace of the potential of a repetitively firing neuron and the definitions of the corresponding phase. If T is the interspike interval for the neuron, then the time, t, is converted to the phase by ϕ = 2πt/T. Typically, we allow the phase to lie between 0 and 2π by subtracting away multiples of 2π. The PRC tells us how a stimulus at a phase ϕ in the cycle alters the time of the next spike. Suppose, for example, that a brief pulse of current is applied (on top of the DC current required to make the neuron fire repetitively) at a time t (or phase ϕ = 2πt/T) after the last spike. This will cause the neuron to fire its next spike at a different time T(ϕ). The value of T(ϕ) depends on the time of the input. The PRC, Δ(ϕ), is defined as Δ(ϕ) = 2π(T − T(ϕ))/T.
If Δ(ϕ) is negative (positive), this means that the perturbation pushed the phase back (ahead) and thus delayed (advanced) the time of the next spike. Winfree popularized the use of the PRC in understanding biological oscillators, particularly in the context of circadian rhythms. The PRC can be experimentally determined in any of several different ways [25,50,69]. The PRC plays a crucial role in the analysis of how oscillatory neurons and networks synchronize and in the reliability of neuronal responses.
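As a concrete illustration, the PRC of a leaky integrate-and-fire neuron can be estimated numerically by delivering a small voltage kick at phase ϕ and applying Δ(ϕ) = 2π(T − T(ϕ))/T. The model and parameter values below are our own illustrative choices:

```python
import math

def lif_period(pulse_phase=None, pulse_amp=0.1, drive=1.5,
               dt=0.001, tau=10.0):
    """Time of the next spike of a repetitively firing leaky
    integrate-and-fire neuron (dv/dt = (-v + drive)/tau, spike at v = 1,
    reset to 0), with an optional instantaneous voltage kick delivered
    at phase `pulse_phase` (in radians)."""
    T0 = tau * math.log(drive / (drive - 1.0))   # unperturbed period
    v, t, kicked = 0.0, 0.0, False
    while v < 1.0:
        if (pulse_phase is not None and not kicked
                and t >= pulse_phase * T0 / (2 * math.pi)):
            v += pulse_amp                        # brief depolarizing kick
            kicked = True
        v += dt * (-v + drive) / tau
        t += dt
    return t

def prc(phi):
    """Delta(phi) = 2*pi*(T - T(phi))/T; positive values mean the
    perturbation advanced the next spike."""
    T = lif_period()
    return 2 * math.pi * (T - lif_period(pulse_phase=phi)) / T

# For this model a depolarizing kick always advances the next spike, and
# more so late in the cycle (a purely positive, increasing PRC):
print(prc(1.0) > 0 and prc(5.0) > prc(1.0))  # True
```

This mirrors the experimental procedure: perturb at many phases, measure the shifted spike time on each cycle, and plot Δ as a function of ϕ.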
The relationship between the phase of a neuron at the time the input arrives and the change in the phase of the next spike is given by a function known as the phase resetting curve (PRC) of a neuron [25,51] (see Box 1). The effect of an input on the distribution of phases across the population depends on the PRC and on the initial distribution of phases. For example, if the PRC is a constant (that is, if the effect of the input on the time of the next spike is independent of when the last spike occurred), then any neuron receiving input will have its phase shifted by this constant. Thus, the distribution of phase differences will be unchanged by input. Alternatively, if the PRC has positive and negative parts (such as a sine wave), then those cells receiving input late in their cycle will spike earlier than they would have without the input and those cells receiving input early in their cycle will wait longer to spike (see Figure 3). That is, those cells that are ‘ahead’ of the others will speed up and those cells that are ‘behind’ will slow down (Figure 3). Because the spiking is periodic, this will result in the spikes of all neurons becoming more tightly clustered. Once the population is synchronized, inputs that are correlated across the population will have similar effects on all neurons in the population, so synchronization will be maintained. This intuitive argument, although obviously not rigorous, can be formalized into a demonstration of the synchronizing effects of noise. This result holds whether the correlated inputs are discrete pulses or continuous signals, and even when individual neurons receive a mixture of correlated and uncorrelated inputs [52,53]. As mentioned above, this phenomenon is extremely general, and is likely to generate synchrony in many cell types receiving correlated noise.
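The clustering argument can be simulated directly with a population of identical phase oscillators whose PRC is a sine wave, Δ(ϕ) = −ε sin ϕ (an assumed illustrative form: small phases are delayed, phases near 2π are advanced), receiving the same pulse repeatedly:

```python
import math, random

def phase_spread(n_pulses, eps=0.3, n_osc=50, seed=4):
    """Identical phase oscillators receiving a common weak pulse
    n_pulses times. Each pulse maps phi -> phi + Delta(phi) with
    Delta(phi) = -eps*sin(phi). Returns 1 - R, where R is the Kuramoto
    order parameter (1 - R near 0 means tight synchrony)."""
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2 * math.pi) for _ in range(n_osc)]
    for _ in range(n_pulses):
        # Common pulse: early phases delayed, late phases advanced
        phases = [(p - eps * math.sin(p)) % (2 * math.pi) for p in phases]
    # Order parameter R: length of the mean vector of the phases
    x = sum(math.cos(p) for p in phases) / n_osc
    y = sum(math.sin(p) for p in phases) / n_osc
    return 1.0 - math.hypot(x, y)

print(phase_spread(0) > 0.5)    # True: random initial phases, little synchrony
print(phase_spread(50) < 0.05)  # True: repeated common pulses cluster phases
```

Each pulse contracts the phase distribution toward the stable point where Δ(ϕ) = 0, exactly the ‘ahead cells slow down, behind cells speed up’ mechanism described in the text.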
Synchronization of neuronal oscillations by noise requires that neurons receive partially correlated input. This is likely to be a common feature of brain areas in which there is divergent local connectivity or a strong topographic input. In the cortex, a single interneuron can be connected with tens of thousands of local circuit interneurons. Thus, activity of this single cell will provide correlated input to many neurons in the local circuit. Alternatively, correlated inputs might be stimulus driven. For example, movements of objects in the visual field or eye movements will generate transient periods of correlated spiking of thalamocortical afferents [54,55]. Even if these correlated fluctuations are too weak to generate spiking in most cells, they can still cause shifts in spike times, leading to enhanced synchronization.
A second key requirement of stochastic synchrony is that neurons across the population are similar in their firing rates. This requirement will limit the opportunities for noise-induced synchronization, but firing rates need to be comparable only for brief periods (as little as 100 ms) because synchronization can occur within a few oscillation cycles. Similar firing rates are required because noise-induced synchrony will not by itself result in changes in the average firing rate of individual neurons. Thus, the basic mechanism that we have described does not provide a way for neurons to equalize their firing rates in response to changes in input correlation. However, in a circuit context, increased synchrony will likely cause greater activation of local excitatory or inhibitory connections which then might lead to changes in firing rates.
Finally, to be synchronized by noise, neurons in a population must have similar biophysical properties and PRCs (see Box 1). That is, they must change their spike timing similarly in response to incoming inputs. Because neurons of the same class (e.g. pyramidal cells in a given layer) typically have similar PRCs whereas neurons of different types generally do not [24,58], correlated noise can generate selective synchrony in a particular cell type even if the input correlation is uniform across all cell types in an area. Neurons with many different shapes of PRCs can be synchronized by noise, although certain features of PRC shape might be more favorable for synchronization [15,52,59].
Noise is usually considered destructive, disrupting the processing and transmission of information. In this review, we have discussed many examples demonstrating that noise can be constructive, leading to larger, more patterned and more useful responses. Noise can cause single neurons to fire more regularly within a trial and more reliably across trials, and correlated noise can cause synchronization of both aperiodic and periodic activity across populations of neurons. This work, and other ideas about the variability of neuronal responses facilitating the representation of probability distributions, suggests that aspects of neuronal responses that appear to be noise might in fact be fundamental components of the way in which information is propagated or represented in neurons. As such, measuring and reporting the variability of responses (and not just averages) will become increasingly important for understanding the mechanisms that allow neurons to efficiently encode information.
Noise and correlations are at the center of many controversies about the nature of the neural code and, as such, these controversies might benefit from a better understanding of how patterns of neuronal activity are altered by noise and how correlated activity is generated. For example, correlated activity across populations of neurons has been taken as evidence that small populations of neurons are sufficient to encode stimuli and has been argued to limit the extent to which horizontal connections can enhance stimulus representation. However, given the role of correlation in synchronizing oscillatory responses, such correlations might generate patterns of population activity that are propagated more efficiently from one brain area to the next. Moreover, alterations in population synchrony have been associated prominently with brain disorders, including Parkinson’s disease, epilepsy, schizophrenia and autism [64,65]. Thus, a theoretical and mechanistic understanding of the role of noise in neural synchronization might be increasingly important in our understanding of these disorders.
We would like to thank Dr. Alison Barth and Matt Angle for reading this manuscript. This work was supported by NIH R01DC005798, R01MH079504 and NSF DMS 0513500.
Coherence resonance: a phenomenon in which the regularity (see above) of a process is maximal when the input has a nonzero noise amplitude.
Noise: a signal that varies as a function of time, the value of which at any given time is drawn randomly from some distribution. Noise can be described by its spectrum and its amplitude. The spectrum of noise describes how rapidly the value of the signal is changing. If fluctuations occur such that the value of the signal at any time is completely independent of its value at any subsequent (or previous) time, then the noise is said to be white noise. This name is derived from an analogy with white light. White noise contains fluctuations at all possible frequencies (as can be seen from the power spectrum of the noise in Figure 1) just as white light consists of all visible wavelengths. If the value of the noise at one time is correlated with its value over a particular interval, the noise is often said to be colored or low pass filtered (Figure 1). The amplitude of noise is related to the standard deviation of the distribution from which it is drawn (Figure 1).
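The distinction between white and colored noise can be sketched numerically: independent Gaussian draws have essentially no correlation between successive samples, while low-pass filtering them introduces correlation over time (the filter constant below is an arbitrary choice):

```python
import random

def white_noise(n, sd=1.0, seed=5):
    """White noise: each sample drawn independently, so the value at one
    time is uncorrelated with the value at any other time."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, sd) for _ in range(n)]

def low_pass(x, alpha=0.9):
    """First-order low-pass filter ('coloring' the noise): each output
    mixes in the previous output, introducing correlation in time."""
    y, prev = [], 0.0
    for v in x:
        prev = alpha * prev + (1 - alpha) * v
        y.append(prev)
    return y

def lag1_corr(x):
    """Correlation between the signal and itself shifted by one sample."""
    n = len(x) - 1
    mx = sum(x) / len(x)
    num = sum((x[i] - mx) * (x[i + 1] - mx) for i in range(n))
    den = sum((v - mx) ** 2 for v in x)
    return num / den

w = white_noise(20000)
print(abs(lag1_corr(w)) < 0.05)      # True: white noise is ~uncorrelated
print(lag1_corr(low_pass(w)) > 0.5)  # True: colored noise is correlated
```

Both signals are drawn from the same Gaussian amplitude distribution; only their spectra (how rapidly the value changes) differ, which is exactly the distinction this glossary entry draws.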
Poisson process: a stochastic process used to model events that occur at a fixed average rate, independent of each other in time.
Regularity: the degree to which a process occurs repeatedly at a fixed time interval. The output of a metronome is highly regular; the output of a Geiger counter is highly irregular and is in fact well described by a Poisson process.
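The metronome/Geiger-counter contrast can be quantified with the coefficient of variation (CV) of the intervals between events: 0 for a perfectly regular process and approximately 1 for a Poisson process, whose intervals are exponentially distributed:

```python
import random

def cv(intervals):
    """Coefficient of variation of inter-event intervals: sd / mean."""
    m = sum(intervals) / len(intervals)
    var = sum((x - m) ** 2 for x in intervals) / len(intervals)
    return var ** 0.5 / m

rng = random.Random(6)
metronome = [1.0] * 1000                              # perfectly regular ticks
geiger = [rng.expovariate(1.0) for _ in range(1000)]  # Poisson-process counts
print(cv(metronome))                 # 0.0: perfectly regular
print(abs(cv(geiger) - 1.0) < 0.2)   # True: CV near 1 for a Poisson process
```

Decreases in the CV of a spike train with increasing noise are precisely the signature used to identify coherence resonance in model neurons.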
Reliability: the degree to which a single neuron fires the same number of action potentials, at the same time, in response to repeated delivery of the same input. This input can take many forms, ranging from direct current injection to sensory stimuli.
Stochastic resonance: a phenomenon in which the signal-to-noise ratio of a nonlinear system reaches a maximum when the input has a nonzero noise amplitude.
Synchrony: analogous to reliability, but similarity of spiking is measured across a population of neurons recorded over the same interval.