The large heterogeneity of responses in higher cortical areas makes it difficult to understand neural activity at the population level. Here we focused on a particular problem, that of short-term memory: neurons in PFC often have persistent firing rates that carry information about an item held in short-term memory, yet these rates also vary strongly in time. Our main aim in this study was to elucidate this interaction of time and memory information. In particular, we asked how information about time and memory is represented in PFC and what type of mechanism may underlie such a representation.
To address our first question, that of representation, we searched for a reduced-dimensionality representation of the data in which the time and frequency dependence of the neural firing rates could be separated. Our analysis began with a PCA preprocessing step, which showed that as few as six dimensions capture most (>95%) of the explainable variance in the firing rates of a large set of cells. Given the prominent heterogeneity of neural responses, this was not a foregone conclusion, but is instead a feature of our particular dataset. For example, if each neuron fired at a high rate at a single moment during the delay period, with different neurons firing at different times, as found during singing in area HVC of songbirds (Hahnloser et al., 2002
), or as found in hippocampus of rats trained to estimate the duration of a delay period (Pastalkova et al., 2008
), the number of dimensions could not have been reduced at all.
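As an illustration (not part of the original analysis), the dimensionality check described above can be sketched on synthetic data standing in for the recorded firing rates; the data-generating process and all variable names are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the recorded data: firing rates of many neurons,
# each a mixture of a few shared latent components plus weak noise.
n_neurons, n_samples, n_latent = 500, 200, 6
latent = rng.standard_normal((n_latent, n_samples))    # shared components
mixing = rng.standard_normal((n_neurons, n_latent))    # per-neuron weights
rates = mixing @ latent + 0.1 * rng.standard_normal((n_neurons, n_samples))

# PCA via the eigenvalues of the covariance of mean-centered activity.
centered = rates - rates.mean(axis=1, keepdims=True)
eigvals = np.linalg.eigvalsh(centered @ centered.T)[::-1]  # descending
explained = np.cumsum(eigvals) / eigvals.sum()

# Number of principal components needed to capture >95% of the variance.
n_dims = int(np.argmax(explained >= 0.95)) + 1
print(n_dims)  # recovers the six planted dimensions
```

A population whose neurons each fired at one distinct moment (as in HVC) would instead yield an eigenvalue spectrum with no such cutoff, and no small number of components would suffice.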
However, while PCA served to summarize the data, most of the resulting coordinates did not reveal a useful representation of the data. For that purpose, we developed the DOC method, which yielded a much more easily interpretable representation and revealed essentially complete separability of the time-dependent and f1-dependent variations in the firing rates. This functional separation exists at the population level, not at the single-neuron level. Functional separability of time and f1 components had previously been postulated in a theoretical model of the delay period data (Singh and Eliasmith, 2006
). Our analysis of the experimental data confirms this postulate. Intriguingly, the analysis also shows that, unlike the Singh and Eliasmith model, in which there was only one time component and one f1 component, there are at least three time and three f1 components in the data.
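The notion of separability used here can be illustrated with a toy marginalization on synthetic state-space trajectories (this is not the DOC algorithm itself, which is described in Materials and Methods; all names and the construction are illustrative):

```python
import numpy as np

# Toy population trajectories in a six-dimensional (post-PCA) state space,
# constructed so that time and stimulus (f1) modulate separate dimensions.
n_f1, n_time, n_dim = 7, 50, 6
t = np.linspace(0.0, 1.0, n_time)
f1 = np.linspace(10.0, 34.0, n_f1)
f1_z = (f1 - f1.mean()) / f1.std()

X = np.zeros((n_f1, n_time, n_dim))
X[..., 0] = t                        # time components (same for every f1)
X[..., 1] = np.sin(np.pi * t)
X[..., 2] = f1_z[:, None]            # f1 components (constant in time)
X[..., 3] = 0.5 * f1_z[:, None]

# Marginalize: average over f1 to estimate the pure time-dependent part,
# then average the residual over time to estimate the pure f1 part.
X_time = X.mean(axis=0, keepdims=True)
X_f1 = (X - X_time).mean(axis=1, keepdims=True)
residual = X - X_time - X_f1         # interaction terms

# If time and f1 are separable, the interaction variance is negligible.
frac_interaction = (residual**2).sum() / ((X - X.mean((0, 1)))**2).sum()
print(frac_interaction)              # ~0 for perfectly separable data
```

In the actual data, the analogous interaction terms were found to be essentially negligible, which is what we mean by functional separability at the population level.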
Based on the novel DOC representation, and to address our second question, that of mechanism, we then constructed a dynamical network model that replicates many details of the data. Even though the individual neurons within the resulting network mix time and f1 information just as the actually recorded neurons do, their activities are produced from an underlying population representation in which time and f1 do not interact. A key aspect of the model is that the underlying time- and f1-dependent dynamics are driven by separate mechanisms: the representation of time is largely driven by external inputs into the network, while that of f1 is largely driven by internal recurrent connectivity.
The model could, perhaps, be interpreted merely as a compact summary of the dynamics in the data. However, we propose a stronger interpretation: the network mechanisms that maintain the time and f1 representations in the model suggest characteristics of the biological network mechanisms. Thus, we take our model to suggest that the biological mechanisms underlying the time dependence and the f1 dependence of the data are in fact functionally separate, with the time representation relying mostly on external inputs and the f1 representation relying mostly on internal recurrent connectivity. In this view, the basic principles of models previously proposed for the short-term memory of f1 and its comparison to f2 (Miller et al., 2003
; Machens et al., 2005
; Miller and Wang, 2006
; Machens and Brody, 2008
), in which temporal variation during the delay period was not addressed, may therefore still apply for the f1 components, even though the actual network implementations in the older models are too simplistic.
We used our dynamical model to generate predictions with which to test the hypothesis that time- and f1-dependent components of the neural activity are supported by separate mechanisms. Specifically, the model predicts that time-dependent components could adapt much more rapidly than f1-dependent components, both in terms of neural firing rates and in terms of behavior. These predictions can be tested experimentally.
One intriguing aspect of our results is that the dynamics in the data could be well described by a linear system. This was surprising: although the transformation from the original data to our six new axes was linear, the dynamics and state space trajectories within these new axes were neither constrained nor expected to be linear. In fact, as mentioned above, past explorations of PCA in neural population recordings have often led to low-dimensional, yet complex nonlinear dynamics (Friedrich and Laurent, 2001
; Stopfer et al., 2003
; Yu et al., 2006
). Nevertheless, we found that here the dynamics could be summarized by a simple linear dynamical system. While this was convenient (construction of a generative model was straightforward), we do not propose that PFC, in general, is a linear system: linear systems have very limited computational power. Our proposal that the PFC acts as a linear system is limited to the delay period in this task. We speculate that the slow, linear dynamics observed during the delay period are, in fact, a signature of a (high-dimensional) continuous attractor on which the network is moving. Linear, “integrator”-like behavior is a well known computational regime of continuous attractor networks. Since continuous attractors are usually located at bifurcation points of dynamical systems, any perturbation, for instance, through sensory input, will automatically push the system into a different dynamical regime (Machens et al., 2005
). We suggest that the observed linear dynamics during the delay period correspond merely to a reorganization of the stored information within a high-dimensional continuous attractor network whose computational power emerges through its sensitivity to internal and external influences. The stored information may need reorganization to prepare the system for the incoming second stimulus. PFC may seek to move the memory of f1 into a separate short-term memory buffer that will remain unaffected by the second stimulus f2.
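The claim that the delay-period dynamics are well described by a linear system can be checked by a simple least-squares fit of the state-space trajectories. A minimal sketch on synthetic data standing in for the actual trajectories (the generating system and all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy trajectory from a known slow, stable linear system, standing in
# for the six-dimensional state-space trajectories of the delay period.
n_dim, n_time = 6, 300
A_true = np.eye(n_dim) + 0.01 * rng.standard_normal((n_dim, n_dim))
A_true *= 0.995 / np.abs(np.linalg.eigvals(A_true)).max()  # spectral radius < 1

x = np.zeros((n_time, n_dim))
x[0] = rng.standard_normal(n_dim)
for k in range(n_time - 1):
    x[k + 1] = A_true @ x[k] + 0.001 * rng.standard_normal(n_dim)

# Fit x[t+1] ~ A x[t] by least squares: lstsq solves X0 @ B = X1, so A = B.T.
X0, X1 = x[:-1], x[1:]
B, *_ = np.linalg.lstsq(X0, X1, rcond=None)
A_fit = B.T

# Relative one-step prediction error of the fitted linear system.
err = np.linalg.norm(X1 - X0 @ A_fit.T) / np.linalg.norm(X1)
print(err)  # small if the dynamics are well captured by a linear system
```

For the recorded data, the analogous fit was accurate, which is what licenses the linear-dynamics description above; for data with strongly nonlinear trajectories, the residual of such a fit would remain large.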
In sum, the most important result of applying the DOC method is that it provided a view of apparently very complex data in which the underlying dynamics become transparent. Without this transparency, the hypothesis of a separation between the mechanisms supporting time representation and those supporting frequency representation would not have been formulated. Regardless of whether the hypothesis ultimately proves correct or incorrect, the DOC method gave us a tool to move our thinking forward. In this sense, the specifics of some of the results (e.g., that there are three time components and three frequency components, as opposed to two and four, or four and two) are far less important than the fact that the method led directly to a mechanistic, testable hypothesis. This can be contrasted with using the PCA approach alone, which often results in complex trajectories in state space (Friedrich and Laurent, 2001
; Yu et al., 2006
) that allow no straightforward interpretation and lead to no specific mechanistic hypothesis. We note that some important qualitative conclusions are possible without the mechanistic models. For example, without the DOC method, and assuming only that neural activity in PFC is sufficient for the monkeys to perform the task at a variety of delay periods, it follows that there must be some time-invariant representation of f1. However, our analysis added much more than this conclusion alone: it led to specific hypotheses as to what this f1 representation is, what the mechanisms supporting it are, and how these hypotheses can be tested.
Dimensionality reduction methods serve to summarize complex data. They do not necessarily reveal new aspects of the data, but they can be helpful in doing so. With the help of an additional unsupervised coordinate transform, the DOC method, we were able to simplify our view of the data to a point that allowed us to break down what appears to be a single problem into smaller, separable, and therefore more easily understandable problems. While related methods have been used in other neurophysiological settings (Friedrich and Laurent, 2001
; Stopfer et al., 2003
; Paz et al., 2005
; Yu et al., 2006
; Narayanan and Laubach, 2009
), our findings strengthen the case for applying these methods to data from the frontal lobes. We emphasize that such separability is not guaranteed by the method; it needs to be present in the dataset. Nevertheless, multiplexed representations, in which multiple variables affect the firing rates of single neurons, are often found in frontal cortices. For instance, neurons in the PFC can represent both an object's location and its identity, suggesting that PFC integrates these two types of information (Rao et al., 1997
). It would be interesting to see whether on the population level the information can be separated again. This dataset and many others may be amenable to a functional separation analysis similar to the one we have presented here.