To survive, an organism must choose favorable actions in a great variety of situations. This responsibility falls primarily on the central nervous system and, in particular, on its millions or billions of neurons. When deciding how to act, neurons face a fundamental problem. A typical neuron does not interface directly with the environment, but only with other neurons. All it can “see” is a large number of constantly changing input patterns on its thousands of synapses, and what these input patterns may mean or represent is unknown to it. Worse, all a neuron can do, in essence, is choose one of three actions: stay silent, spike once, or burst.1
In other words, a neuron is extraordinarily “stupid” [14]: it does not know what its inputs mean; it does not know what it is communicating, to whom, or for what purpose; and in any case it has very little to say. How, then, can neurons possibly act in the interest of their brain?
A large body of evidence suggests that neurons learn by modifying their synapses according to the distribution of pre- and postsynaptic spikes, modulated by chemical signals such as dopamine and noradrenaline [1]. Spikes play a privileged role in most models of neuronal learning: the distribution of spikes (or inter-spike intervals) determines when synapses are modified. For example, in Hebbian learning synaptic plasticity is a function of correlations between spiking activity, whereas in spike-timing dependent plasticity (STDP) the precise timing of pre- and postsynaptic spikes determines whether synapses are potentiated or depotentiated [15].
Spikes therefore seem to provide a mechanism the brain uses to distribute credit and blame amongst neurons: synapses that transmit many spikes are proportionally potentiated or depotentiated, suggesting that spiking neurons and synapses are credited or blamed for good or bad outcomes experienced by the organism. Using bursts to assign credit and blame only makes sense if bursts contribute actively to total brain activity, in contrast to silent (or near-silent) neurons, which form a more passive background.
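The pair-based STDP rule described above can be sketched in a few lines. The following is a minimal illustration, not a model from this paper; the amplitudes and time constants are hypothetical placeholder values chosen only to show the shape of the rule (causal pre-before-post pairings potentiate, anti-causal pairings depotentiate, both with exponentially decaying windows).

```python
import math

# Hypothetical parameters, for illustration only (not fitted to data).
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants (ms)

def stdp_delta_w(t_pre, t_post):
    """Weight change for a single pre/post spike pair (times in ms).

    Post after pre (causal) potentiates; pre after post (anti-causal)
    depotentiates; the magnitude decays exponentially with the delay.
    """
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

# Causal pairing potentiates; anti-causal pairing depotentiates.
assert stdp_delta_w(10.0, 15.0) > 0
assert stdp_delta_w(15.0, 10.0) < 0
```

Under such a rule, synapses that transmit many well-timed spikes accumulate proportionally larger weight changes, which is the sense in which spikes carry credit and blame.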
This paper makes two main contributions. First, it characterizes the distinction between bursting foreground and silent background information-theoretically, in terms of selectivity (quantified as effective information). Second, it proposes that neurons communicate selectivity with bursts. By this we mean that (i) neurons should use bursts to emphasize outputs that depend selectively on their inputs, and few or no spikes for outputs that depend only vaguely on their inputs, and (ii) neurons should propagate selective inputs by responding to spiking inputs with spiking outputs. In short, bursts should be (i) selective and (ii) impactful.
In the Results, we show that highly selective outputs are responsible for almost all of the information transferred by a neuron. Moreover, we show that communicating selectivity satisfies a necessary condition for ensuring that selectivity is preserved by composite channels (such as pairs of neurons or neuronal populations). We then consider the implications of communicating selectivity for credit assignment and learning, showing that (de)potentiating synapses in response to selective outputs (i.e., bursts) yields finer control over how neurons deep in cortex adapt to sensory stimuli, since selective responses are more traceable.
Finally, we discuss how communicating selectivity may be enforced. Since synaptic strengths change constantly, ensuring bursts are selective requires ongoing effort. We argue that sleep, when the brain is offline and its activity is not task-dependent, provides an ideal time to align bursts with effective information and balance the relationship between input and output spikes.
Many models of learning and inference in distributed systems have been developed, starting perhaps with Selfridge’s Pandemonium of “shrieking demons” [30]. Recent approaches have focused on Bayesian models [13] where, typically, the number of spikes output by a neuron, or the likelihood of a neuron outputting spikes, corresponds to the probability of some event. Our approach is complementary to these since, after imposing the two constraints required for communicating selectivity, neurons have many remaining degrees of freedom regarding when they should spike. Neurons are free to use their spikes to predict neuronal or external events, so long as the events they focus on are specific.
Our work builds on observations that cortical representations of sensory inputs are sparse []. Indeed, communicating selectivity is a necessary condition for bursts to be sparse in cortex. However, rather than focus on sparsity at the population level, we investigate the more basic notion of selectivity, which is a local (specific to individual neurons) information-theoretic requirement for global sparsity.