Neurons display continuous subthreshold oscillations and discrete action potentials (APs). When APs are phase-locked to the subthreshold oscillation, we hypothesize they represent two types of information: the presence/absence of a sensory feature and the phase of the subthreshold oscillation. If subthreshold oscillation phases are neuron-specific, then the sources of APs can be recovered from the AP times. If spatial information about the stimulus is converted to AP phases, then APs from multiple neurons can be combined into a single axon and the spatial configuration reconstructed elsewhere. For the reconstruction to be successful, we introduce two assumptions: that a subthreshold oscillation field has a constant phase gradient and that coincidences between APs and intracellular subthreshold oscillations are neuron-specific, as defined by the “interference principle.” Under these assumptions, a phase-coding model enables information transfer between structures and reproduces experimental phenomena such as phase precession, grid cell architecture, and phase modulation of cortical spikes. This article reviews a recently proposed neuronal algorithm for encoding and decoding information from the phase of APs (Nadasdy, 2009). The focus is on principles common across different systems rather than on system-specific differences.
Ever since the correlation between the theta phases of pyramidal cell firing in the hippocampus and the position of the rat on a linear track was observed (O'Keefe and Recce, 1993), the question has lingered whether the phase of action potentials (APs) relative to local field potentials (LFPs) encodes information or whether this correlation is a mere epiphenomenon. Encoding implies that information available from the phase is decoded by neurons downstream, as their AP generation depends on this information. Numerous mechanisms have been proposed that could potentially generate phase precession relative to the theta oscillation. One class of models includes the dual oscillator interference model (O'Keefe and Recce, 1993; O'Keefe and Burgess, 2005; Blair et al., 2008) and the somato-dendritic dual oscillator model (Kamondi et al., 1998; Harris et al., 2002; Lengyel et al., 2003; Huhn et al., 2005). The key assumption in both models is that phase precession is generated by the interaction between two theta oscillations with slightly different frequencies. Another class of models focuses on dendritic mechanisms (Magee, 2001), assumes a depolarization ramp (Mehta et al., 2002), or proposes network-level mechanisms (Jensen and Lisman, 1996; Tsodyks et al., 1996; Wallenstein and Hasselmo, 1997). Nevertheless, all of these models share the key assumption that the cause of phase precession is localized within the hippocampus. In contrast, we proposed an alternative model, which considers phase coding as originating from sensory processing, after which the code is transferred to the cortex, where it is decoded and re-encoded before it is further propagated to the associated systems, including the entorhinal cortex (EC) and hippocampus (Nadasdy, 2009).
Recent studies reporting AP phase modulation in the prefrontal (Montemurro et al., 2008; Kayser et al., 2009; Siegel et al., 2009), auditory (Kayser et al., 2009), and visual (Montemurro et al., 2008) cortices and in the EC (Hafting et al., 2008) are consistent with this view. Despite the differences in physiological characteristics, cell types, input–output connectivity, and predominant oscillation frequencies across these systems, we argue that the sensory, thalamo-cortical, and limbic systems share the common language of phase coding. In this review, without attempting to describe system-specific implementations, we give an overview of the common mechanism of AP phase coding.
When we record a neuron intracellularly while injecting different levels of current pulses, the current drives the subthreshold membrane potential oscillations (SMOs) toward the threshold potential, evoking APs upon threshold crossing (Llinas et al., 1991). The larger the depolarizing current, the more likely the membrane potential is to cross the threshold and generate APs. This is the mechanism by which the intensity of a sensory signal is converted to a firing rate code. Intriguingly, the level of input current in these experiments affects not only the firing rate but also the phase of APs, as phases advance systematically with increasing depolarization, even after the firing rate has saturated (Figure 1). The phase thus endows neurons with a broader dynamic range for encoding information than the firing rate alone. A similar sensory encoding scheme has been proposed and experimentally observed in the salamander retina (Gollisch and Meister, 2008). If neurons encode information using the phase of APs, how will that information be read out?
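This current-to-phase conversion can be sketched numerically. In the toy model below, a DC depolarization rides on a sinusoidal SMO and an AP is emitted at the first threshold crossing; the unit amplitude, 0.8 threshold, and 40-Hz frequency are illustrative choices, not fitted parameters:

```python
import numpy as np

def first_crossing_phase(i_dc, amp=1.0, thresh=0.8, f=40.0, fs=100_000):
    """Phase (deg) of the SMO cycle at which the membrane first crosses
    threshold, for V(t) = i_dc + amp*sin(2*pi*f*t): a DC depolarization
    riding on a sinusoidal subthreshold oscillation."""
    t = np.arange(0.0, 1.0 / f, 1.0 / fs)      # one SMO cycle
    v = i_dc + amp * np.sin(2 * np.pi * f * t)
    above = np.nonzero(v >= thresh)[0]
    if above.size == 0:
        return None                            # stays subthreshold: no AP
    return 360.0 * f * t[above[0]]             # crossing time -> phase (deg)

# Stronger depolarization -> threshold is crossed earlier in the cycle,
# i.e., the AP phase advances even though each trial fires exactly once.
phases = [first_crossing_phase(i_dc) for i_dc in (0.0, 0.2, 0.4, 0.6)]
```

In this sketch the phase advance continues to discriminate input levels even when every input level produces one AP per cycle, i.e., after the rate code has saturated.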
The fluctuation of the neuronal membrane potential around the mean without generating APs is known as the SMO. This oscillation has a power spectrum with peaks at regionally specific resonant frequency bands: for instance, ~5 Hz in olivary neurons (Devor and Yarom, 2002a), 4–7 Hz in entorhinal cortical neurons (Giocomo et al., 2007), and ~40 Hz in cortical neurons (Llinas et al., 1991; Silva et al., 1991). The most likely sources of such oscillations are specific intrinsic conductances (White et al., 1998; Dickson et al., 2000; Fransen et al., 2004). However, the coherency of SMOs across neurons depends on electrotonic interactions between neurons (Devor and Yarom, 2002b). A number of mechanisms, including gap junctions, electrotonic synapses, ephaptic conductivity, and glial transfer (Yeh et al., 1996), have been proposed to mediate SMOs between neurons. These mechanisms allow the SMO to propagate as a radial spread or as traveling waves, depending on the network architecture. Moreover, near-synchronized activity of interneurons impinging on different parts of principal cells may also sculpt such oscillations (Buzsaki and Chrobak, 1995).
Regardless of whether they are imposed or exchanged, we assume that these oscillations are not independent between neurons. Instead, oscillations of adjacent neurons stabilize themselves into a near-synchronized state. A number of studies confirmed the propagation of membrane oscillations and LFPs as either radial or traveling waves (Bringuier et al., 1999; Prechtl et al., 2000; Benucci et al., 2007; Lubenov and Siapas, 2009).
Based on the prevalence of SMOs, we further assume that the extracellular sum of such population-wide, near-synchronized rhythms contributes to the LFP. Although LFPs are considered to derive from the sum of synaptic activity at the dendritic regions of neurons (Mitzdorf, 1985; Logothetis et al., 2001), a significant oscillatory component of the LFP may also derive from the sum of SMOs within a ~250-μm radius (Katzner et al., 2009). This is supported by the shared theta frequency oscillation between intracellular SMOs and LFPs within the EC and in the frontal lobe (Alonso and Llinas, 1989; Llinas et al., 1991), as well as by the high correlation between LFP and intracellular SMO (Tanaka et al., 2009). This high correlation establishes a conceptual link between LFP and SMO and enables an important experimental shortcut: estimating the SMO from the LFP.
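The idea that the LFP can stand in for the SMO can be illustrated with a toy population: summing many near-synchronized, noisy sinusoidal SMOs with a shallow phase spread yields a signal that correlates strongly with any individual SMO. All parameters below (theta frequency, 30° spread, noise level) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
f, fs, dur = 8.0, 1000, 2.0                 # theta-band SMO, 2-s epoch
t = np.arange(0.0, dur, 1.0 / fs)

# 100 near-synchronized neurons: a shallow (<=30 deg) phase spread plus
# independent intracellular noise (both are assumptions of the sketch)
offsets = rng.uniform(0.0, np.pi / 6.0, size=100)
smos = np.sin(2 * np.pi * f * t[None, :] + offsets[:, None])
smos += 0.3 * rng.standard_normal(smos.shape)

lfp = smos.mean(axis=0)                 # LFP modeled as the population sum
r = np.corrcoef(lfp, smos[0])[0, 1]     # how well the LFP tracks one SMO
```

Averaging cancels the independent noise but preserves the shared rhythm, which is why a population signal can serve as a proxy for a single cell's oscillation in this regime.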
The following two sections outline the principles of the phase-coding model.
Subthreshold membrane potential oscillations play critical roles in phase coding during both encoding and decoding. The periodic amplification of the excitatory postsynaptic potentials (EPSPs) by the SMO, which causes sensory neurons to convert input to AP phases during encoding, also makes the decoding neurons highly selective for the timing of EPSPs. A presynaptically evoked EPSP that coincides with the depolarizing phase of the SMO is more potent in evoking APs than EPSPs outside of that time window. Due to the electrotonic propagation of the SMO, there is a distance-dependent phase difference in membrane oscillations between most neurons, which, in a sufficiently large network, covers the entire 180° phase range. Thus, coincidences between input APs and SMO peaks are spatially restricted and neuron-specific. Conversely, for any input AP time there will be a neuron that is most activated by the AP–SMO coincidence. We call this the interference principle (Figure 2). The interference principle guarantees a consistent mapping of an input AP pattern on a spatial layout of neurons, which reproduces the original temporal pattern of APs (Nadasdy, 2009). For a faithful spatial reconstruction, we must furthermore assume an isomorphism between the sensory and target SMO fields. We remark that the interference principle should not be confused with the “oscillatory interference model” (O'Keefe and Burgess, 2005; Burgess et al., 2007).
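A minimal sketch of the interference principle, assuming a linear phase gradient spanning 180° across 100 model neurons: an AP emitted at one neuron's SMO peak is uniquely decoded by asking which neuron is most depolarized at that time. Neuron count, gamma frequency, and the gradient are illustrative:

```python
import numpy as np

N, f = 100, 40.0                       # neurons; gamma-band SMO (Hz)
period = 1.0 / f
phase = np.linspace(0.0, np.pi, N, endpoint=False)  # 180-deg phase gradient

def smo(t):
    """SMO value of every neuron at time t (unit amplitude)."""
    return np.sin(2 * np.pi * f * t + phase)

def encode(i):
    """AP time at which neuron i's SMO peaks, within the first cycle."""
    return ((np.pi / 2 - phase[i]) / (2 * np.pi * f)) % period

def decode(t_ap):
    """Interference principle: the neuron most depolarized at the AP time."""
    return int(np.argmax(smo(t_ap)))

# Every AP time maps back to exactly one source neuron
ok = all(decode(encode(i)) == i for i in range(N))
```

Because the phase offsets are all distinct, the AP–SMO coincidence is neuron-specific and the temporal pattern of APs reproduces the spatial layout, as the model requires.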
The interference principle is applied twice: first when the sensory input is converted to the phase code (stage 1), and second at the target area (the cerebral cortex in mammals), where information is reconstructed from the phase code (stage 4). However, neurons that convert the input to phase may operate at a lower threshold than neurons that detect coincidences. The next section will summarize a four-stage model of information encoding and reconstruction. Then we will discuss possible realizations of the interference principle in sensory and limbic information processing that are consistent with a range of empirical data.
We propose that in all sensory systems, phase encoding and decoding take place through a four-stage transformation. Stages 3 and 4 are also applicable to cortico-cortical information transfer. We will illustrate the four stages using the mammalian visual system, but the same principles generalize to other sensory systems.
We emphasize that perfect reconstruction is neither the goal nor the final stage of information processing. When the sensory-cortical neurons reconstruct information from the phase code, they also add information to it. Reconstruction in the real brain is not an exact reproduction of the sensory information, since the input coming from the sensory thalamic nuclei is combined with inputs from a number of associated cortical areas. Rather, reconstruction is the stage at which important transformations, such as topographical and coordinate transformations and the combination of information from other cortical areas, take place. The reconstruction stage is also the starting point for cortico-cortical information transfer.
Above we described a conceptual model for neural encoding, information transmission, and decoding (for numerical simulations, see Nadasdy, 2009). As a proof of concept, we showed that information reconstruction from the phase code is nearly perfect within as few as four gamma cycles and 100 neurons, given the isomorphism of the SMO phase gradients at the sensory input and the target area (Nadasdy, 2009). Although this latter assumption may seem difficult to maintain under physiological conditions, there is substantial morphological and functional evidence in support of it. For example, multiple loops of the thalamo-cortical projection pathway through the thalamic reticular nucleus provide low- and high-frequency (gamma) links between the thalamus and cortex (Jones, 2002). Visual cortical areas 17 and 18 also synchronize to the LGN with a 2.6-ms delay in anesthetized cats (Castelo-Branco et al., 1998). Moreover, a global retina-LGN-cortex synchronization is evident in the high gamma band (Castelo-Branco et al., 1998). On the one hand, incoherency between the encoding and decoding SMO fields would compromise phase coding. On the other hand, a systematic topographic (but not temporal) incoherency of SMO phase gradients between the encoding and decoding structures is where transformations and computations can be implemented. For example, transformations between retinal and head-centered and between head- and body-centered coordinates can be performed by gain fields (Zipser and Andersen, 1988) or by tuning the SMO field, which transforms the map of interferences. According to the phase-coding model, the location of AP–SMO coincidences, i.e., the interference pattern, smoothly shifts depending on the relative phases of APs from concurrent inputs reaching the neuron.
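The claim that tuning the decoder's SMO field implements a coordinate transformation can be sketched as follows. The encoder and the shared gradient are toy constructions, and a constant phase offset at the decoder stands in for a systematic topographic incoherency:

```python
import numpy as np

N, f = 100, 40.0
grad = np.linspace(0.0, np.pi, N, endpoint=False)   # shared phase gradient

def ap_time(i):
    """Encoder: AP emitted at the SMO peak of source neuron i."""
    return ((np.pi / 2 - grad[i]) / (2 * np.pi * f)) % (1.0 / f)

def reconstruct(t_ap, shift=0.0):
    """Decoder: neuron whose (optionally phase-shifted) SMO peaks at t_ap."""
    return int(np.argmax(np.sin(2 * np.pi * f * t_ap + grad + shift)))

# With isomorphic SMO fields the mapping is the identity ...
identity = [reconstruct(ap_time(i)) for i in range(10)]

# ... while a constant phase offset translates the whole map by five
# neurons: a crude stand-in for a coordinate transformation implemented
# by tuning the decoding SMO field
dphi = grad[1] - grad[0]
shifted = [reconstruct(ap_time(i), shift=-5 * dphi) for i in range(10)]
```

The same AP times land on a translated set of decoder neurons, illustrating how a topographic (but not temporal) offset between the two fields transforms the map of interferences rather than destroying it.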
Moreover, an arsenal of interneurons is deployed to provide fine tuning of the SMO, not unlike in the hippocampus, where each interneuron type specifically calibrates the location and frequency of membrane resonance, thus tuning the SMO of individual neurons to the gradient of the larger SMO field (Cobb et al., 1995).
One of the critical features of phase coding is that it allocates different channels to different types of information by utilizing the spatially and temporally coherent SMOs shared between coupled networks. One such channel is the range of phases within each oscillation cycle; the other is the frequency of the SMO itself. It has been demonstrated that information can effectively be encoded and decoded by multiplexing the code in these two channels (Nadasdy, 2009). The assignment of frequencies to features may vary across brain structures. Likewise, at the stage of sensory encoding and gamma alignment, different scenarios are possible. In the scenario we described earlier, the spatial/anatomical location is encoded by phase and luminance is encoded by period cycles. However, these two features are interchangeable: phase can represent luminance while period cycles represent the spatial/anatomical location. Within the visual system, the magnocellular, parvocellular, and koniocellular pathways represent the heterogeneity of these coding solutions. For instance, since the magnocellular pathway is specialized to transfer motion and orientation effectively, while the parvocellular pathway transfers luminance and color with high spatial acuity, it is conceivable that the former encodes motion in phase while the latter encodes spatial position or spatial frequency in phase. Thus, qualitative and spatial stimulus features are given different priorities in the different pathways of the visual system.
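Multiplexing the two channels can be illustrated with a toy encoder in which the phase within a gamma cycle codes position and the cycle index codes intensity; the feature-to-channel assignment is arbitrary, as noted above, and the bin count is an assumption:

```python
f = 40.0                     # gamma cycle frequency (Hz)
period = 1.0 / f
n_bins = 50                  # phase resolution within one cycle (assumed)

def encode(position, intensity):
    """One AP multiplexes two features: `position` is coded by the phase
    within a gamma cycle, `intensity` by which cycle the AP lands in."""
    return (intensity * n_bins + position) * period / n_bins

def decode(t_ap):
    """Recover both features from the AP time alone."""
    k = round(t_ap / period * n_bins)     # total phase-bin index
    intensity, position = divmod(k, n_bins)
    return position, intensity

# Round trip over both channels is lossless in this noise-free sketch
ok = all(decode(encode(p, i)) == (p, i)
         for p in range(n_bins) for i in range(4))
```

A single spike train thus carries two independent feature dimensions, one per channel, which is the essence of the multiplexing argument.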
Another remarkable feature of phase coding is that with only a few parameter adjustments we can obtain different solutions for representing space and time. For example, if the cortical cytoarchitecture is homogeneous, as in the EC, and if it allows an unconstrained propagation of SMO waves over multiple spatial SMO wavelengths, then multiple representations of the same input develop because of the spatial aliasing inherent in the interference principle (Nadasdy, 2009; see also a different solution by Burgess, 2008). Consequently, the same EC neuron exhibits spatial tuning to multiple, equidistant spatial locations, consistent with the definition of grid cells. The missing link between the spatial maps and the network architecture could be the spatially and temporally periodic SMO field. Based on our simulations, the phase-coding model predicts that the phase-gradient map in the EC is congruent with the topography of the grid cell map, i.e., with the matrix of grid cells that share space fields (Nadasdy, 2009).
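The spatial aliasing that yields grid-like fields can be sketched in one dimension: an input locked to a fixed SMO phase coincides with the peak of a spatially periodic SMO field once per wavelength. The 40-cm wavelength, track length, and coincidence tolerance are illustrative:

```python
import numpy as np

wavelength = 40.0                  # spatial SMO wavelength in cm (assumed)
x = np.linspace(0, 200, 2001)      # 2-m linear track, 1-mm resolution
phi = 2 * np.pi * x / wavelength   # constant spatial phase gradient

# An input AP locked to a fixed SMO phase drives the cell wherever the
# local oscillation is near its peak, so coincidences alias once per
# spatial wavelength: the multiple equidistant fields of a grid cell
drive = np.cos(phi - np.pi / 3)
field_x = x[drive > 0.99]

# group contiguous samples into fields and measure their spacing
groups = np.split(field_x, np.where(np.diff(field_x) > 1.0)[0] + 1)
centers = np.array([g.mean() for g in groups])
spacing = np.diff(centers)
```

The field centers recur at exactly one spatial period of the SMO field, so in this sketch the grid spacing is set by the SMO wavelength rather than by any property of the cell itself.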
The third important feature of phase coding becomes evident when we track the activity of a neuron relative to the SMO cycles under a dynamic input condition while also varying the propagation direction of the SMO field. This emulates the condition of recording place cells in the hippocampus of a freely moving animal and computing the phase of spikes relative to ongoing theta LFP oscillations. In such experiments, the AP phase systematically advances relative to the theta cycles, a phenomenon defined as phase precession (O'Keefe and Recce, 1993; Skaggs et al., 1996; Harris et al., 2002). However, recording theta not only from a single electrode but also from a larger volume around the place cell should reproduce what we found by modeling. Namely, APs should always phase-lock to the intracellular SMO, as has been confirmed during behavior (Harvey et al., 2009), but the direction of phase precession (advancement vs. lagging) will depend on the propagation direction of the global SMO/LFP field around the neuron (Nadasdy, 2009). The assumption of SMO field propagation is consistent with the observation of traveling waves in the hippocampus of freely moving rats (Lubenov and Siapas, 2009). Combining SMO, LFP, and AP measurements from multiple neurons separated by different distances would elucidate the underlying network dynamics and test the interference principle.
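The dependence of the precession direction on wave propagation can be sketched with a toy traveling-wave model, in which spikes stay locked to the local SMO peak while the active locus sweeps across the field. All numbers below are illustrative:

```python
import numpy as np

f = 8.0                                     # theta frequency (Hz)

def spike_phases(wave_direction, n_spikes=8, step=0.03):
    """Spike phase re: the LFP at a fixed electrode while the active locus
    advances `step` spatial wavelengths per theta cycle across a traveling
    SMO field (all numbers are illustrative, not fitted).

    Each spike stays locked to the *local* SMO peak; the local phase leads
    or lags the electrode LFP by the wave term k*x, with the sign set by
    the propagation direction (+1 or -1)."""
    k = wave_direction * 2 * np.pi          # phase per spatial wavelength
    phases = []
    for n in range(n_spikes):
        local_offset = k * n * step         # SMO phase lead at the locus
        phases.append((np.pi / 2 - local_offset) % (2 * np.pi))
    return phases

advancing = spike_phases(+1)  # wave moving with the locus: phases advance
receding = spike_phases(-1)   # reversed propagation: phases lag instead
```

Relative to the intracellular SMO the spikes never move (they sit at the local peak throughout); the precession appears only in the frame of the electrode LFP, and flipping the wave direction flips its sign.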
Among the predictions that can be derived from the phase-coding model is the phase modulation of spikes in the cortex in relation to stimulus or behavioral manipulations. We argued earlier that reconstruction takes place in the supragranular layers of the neocortex. According to our model, layer 2–3 and 4b pyramidal cells respond vigorously to the granular layer input only if the time of input APs coincides with the cell's intracellular SMO peaks. In our simulations the optimal coincidence time window was ~1 ms (Nadasdy, 2009). Empirically, however, this time window is a probability function rather than a binary function, allowing neurons to fire less frequently when the input is away from the peak but still reaches threshold. When the stimulus is optimal for the neuron, the AP will be generated reliably near the intracellular SMO peak (LFP trough). The same neuron may also respond, although less reliably, to a suboptimal stimulus. If the suboptimal stimulus is optimal for another neuron, it will drive that neuron at the exact intracellular SMO peak. However, due to the slight phase difference between the two intracellular SMO processes, the same depolarization that drives the other neuron at its exact SMO peak will drive the first neuron at a slightly different SMO phase than its own optimal stimulus would. As a result, we should observe a modest phase difference between spikes of the same neuron when we vary the stimulus parameters within the receptive field. Studies are in progress to test this prediction. Prefrontal cortical neurons in a working memory task exhibit a memory-item-dependent phase offset relative to slow oscillations (Siegel et al., 2009). Other studies found feature-dependent phase differences relative to theta in auditory cortex (Kayser et al., 2009) and relative to alpha (Montemurro et al., 2008) and gamma (Nadasdy and Andersen, 2009) in primary visual cortex.
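A sketch of the graded coincidence window: replacing the binary ~1-ms gate with a Gaussian window (an assumed shape, not derived from the model) reproduces both the reduced firing probability and the modest phase shift for off-peak, suboptimal input:

```python
import numpy as np

f, sigma = 40.0, 0.001     # gamma SMO; ~1-ms coincidence window (assumed)

def p_fire(dt):
    """Firing probability vs. EPSP offset from the intracellular SMO peak:
    a graded (Gaussian) window rather than an all-or-none gate."""
    return float(np.exp(-dt**2 / (2 * sigma**2)))

def spike_phase_deg(dt):
    """Phase at which the spike lands when the drive arrives dt seconds
    off the SMO peak (peak taken as 90 deg)."""
    return 90.0 + 360.0 * f * dt

# Input at the peak fires reliably, right at the peak phase; an input
# 1.5 ms off-peak fires rarely and at a modestly later phase
p_opt, p_sub = p_fire(0.0), p_fire(0.0015)
shift = spike_phase_deg(0.0015) - spike_phase_deg(0.0)
```

In this sketch a 1.5-ms offset in drive timing translates into a ~22° phase shift at 40 Hz, on the order of the stimulus-dependent phase differences the prediction concerns.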
It is also conceivable that the phases of local SMOs shift relative to the LFP, which integrates oscillations over a larger cell population (Harvey et al., 2009). We anticipate an increasing amount of data to arise in support of these so-far isolated examples in cortical recordings.
For phase coding and decoding to work, the subsystems of the brain have to meet specific dynamic conditions. One such condition is high coherency between the SMOs at the encoding and decoding stages. For instance, the efficacy of visual information reconstruction in the cortex depends highly on the phase coherence between the LGN and V1. Based on simulations, we postulated that this coherency must approach a precision of 1 ms (Nadasdy, 2009), which is consistent with the coherency provided by the thalamo-cortical loop (Jones, 2002). The empirical precision of synchrony between cortical and LGN SMOs is yet to be determined. We also showed that the precise topographic mapping between the input and output is where the system can implement coordinate transformations between representations (Nadasdy, 2009).
The second condition is the compatibility of SMO frequencies across and within structures. While the hippocampal LFP is dominated by coherent theta and gamma oscillations, the hippocampal pyramidal cells express mainly theta frequency SMOs. If phase coding in the hippocampus relies on theta, it is not clear what role gamma oscillations may play. Likewise, entorhinal cortical neurons express theta frequency SMOs. In contrast, sensory organs and primary sensory areas are dominated by gamma oscillations. Notably, we observed visual feature-dependent spike phase modulation relative to the gamma band LFP and not to alpha, while other studies reported phase modulation relative to alpha band LFP (Belitski et al., 2008).
Although the correlation between SMO and LFP is high, they are not identical. The extent to which the LFP is a good approximation of the SMO is still unknown. The correlation between LFP and SMO is critical for the empirical testing of the phase-coding model and calls for a definition of the transfer function between the population SMO and the LFP.
Last, the noise tolerance of phase coding is unknown. Different types of noise need to be considered. The first is the noise generated by the movement of sensory organs themselves, which affects sensory sampling. The second is the noise level of intrinsic SMO oscillations. The third is the temporal incoherency between source and target structures. The fourth is the spatial incoherency between the neuronal source and target. While spatial incoherency can implement useful transformations in the reconstruction, temporal incoherency is highly detrimental to the reconstruction. These concerns need to be investigated by simulations and tested empirically.
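The damaging effect of temporal incoherency can be probed with a toy interference decoder by adding Gaussian jitter to the AP times; with jitter of a few milliseconds relative to a gamma cycle, decoding collapses. Neuron count, frequency, jitter level, and trial count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, f = 100, 40.0
grad = np.linspace(0.0, np.pi, N, endpoint=False)   # SMO phase gradient

def decode(t):
    """Interference readout: the neuron most depolarized at the AP time."""
    return int(np.argmax(np.sin(2 * np.pi * f * t + grad)))

def error_rate(jitter_ms, trials=200):
    """Fraction of APs decoded to the wrong source neuron when AP times
    carry Gaussian temporal incoherency of the given SD (illustrative)."""
    errors = 0
    for _ in range(trials):
        i = int(rng.integers(N))
        t = (np.pi / 2 - grad[i]) / (2 * np.pi * f)  # AP at neuron i's peak
        t += rng.normal(0.0, jitter_ms * 1e-3)       # source-target jitter
        errors += decode(t) != i
    return errors / trials

clean, jittered = error_rate(0.0), error_rate(3.0)
```

With perfectly coherent timing the readout is exact, while 3 ms of jitter, a large fraction of the 25-ms gamma cycle, scrambles most of the decoded identities, consistent with the claim that temporal incoherency, unlike spatial incoherency, is detrimental to reconstruction.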
Because our understanding of the relationship between SMO and LFP is still incomplete, it leaves open the question: what is the timescale of phase modulation in the brain? The frequency of SMO and LFP varies consistently along the fronto-temporo-occipital axis, dominated by gamma in the occipital regions of the cortex, alpha in the frontal areas, and theta in the EC, hippocampal, and parahippocampal regions. In addition, hippocampal gamma power is high, and gamma oscillations are phase-locked to hippocampal theta. Although hippocampal phase precession is defined relative to theta, we anticipate phase precession relative to gamma oscillations as well, while APs should be phase-locked to the intracellular gamma SMO. We also anticipate a similar relationship between EC theta and gamma. The relationship between the phase modulation of spikes relative to the alpha/theta LFP (Montemurro et al., 2008; Kayser et al., 2009) and relative to the gamma LFP (Nadasdy and Andersen, 2009) in the visual cortex is still unclear. One of the most important questions is whether the interference principle would work at multiple timescales, allowing information to be encoded relative to multiple frequency bands of ongoing oscillations, and whether these frequency bands carry content-specific information. There is much to learn in the next few years about the collective resonant properties of the nervous system before we fully understand how the activity of millions of neurons is orchestrated, and this orchestration may happen in a much more deterministic fashion than the “noisy” brain models suggest.
Finally, as stated in the title, the phase-coding model suggests a critical revision of the concept of binding by synchrony. Accordingly, the key to preserving the integrity of the code across multiple stages of information transfer in the brain is the precise asynchrony of APs between neighboring neurons, as opposed to the zero-phase-lag synchrony proposed earlier (Gray and Singer, 1989). We argued that a subtle but constant phase gradient of the propagating SMOs is critical for encoding and reconstructing sensory information, as well as for performing coordinate transformations on the sensory input to achieve context-invariant object representations in the brain.
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
We acknowledge Richard A. Andersen, Neil Burgess, and Paul Miller for invaluable comments on the original manuscript (Nadasdy, 2009) and the support from the National Eye Institute. We thank Sarah Gibson, Jason Ettlinger, and Hollie S. Thomas for proofreading.
Zoltan Nadasdy is a research scientist whose main interest is understanding the fundamental mechanisms of neural coding, in particular the relationship between intrinsic oscillations and spike patterns. He developed these ideas over years of studying neuroscience at Rutgers University (Ph.D.) and during his postdoctoral training in electrophysiology at the Hebrew University of Jerusalem and at the California Institute of Technology. His research areas are spike sequences, neural coding, and the neural correlates of visual perception. Currently he works in the field of human electrophysiology.