Curr Opin Behav Sci. Author manuscript; available in PMC 2017 April 1.
PMCID: PMC5077164
NIHMSID: NIHMS759022

Neurocomputational Models of Interval and Pattern Timing

Abstract

Most of the computations and tasks performed by the brain require the ability to tell time and to process and generate temporal patterns. Thus, there is a diverse set of neural mechanisms in place to allow the brain to tell time across a wide range of scales: from interaural delays on the order of microseconds to circadian rhythms and beyond. Temporal processing is most sophisticated on the scale of tens of milliseconds to a few seconds, because it is within this range that the brain must recognize and produce complex temporal patterns—such as those that characterize speech and music. Most models of timing, however, have focused primarily on simple intervals and durations; thus it is not clear whether they will generalize to complex pattern-based temporal tasks. Here, we review neurobiologically based models of timing in the subsecond range, focusing on whether they generalize to tasks that require placing consecutive intervals in the context of an overall pattern, that is, pattern timing.

Keywords: Timing, neural dynamics, synfire chain, state-dependent network, population clock, neural trajectory

INTRODUCTION

The dynamic nature of our environment and the need to move, communicate, and anticipate when events will happen contributed to the evolution of neural mechanisms that allow the brain to tell time. On one extreme, animals detect the microsecond delays it takes sound waves to travel from one side of the head to the other in order to localize sound sources in space [1]. On the other extreme, circadian rhythms allow animals to track day-night cycles in the absence of external cues [2,3]. Between these extremes, humans and other animals also time events on the order of seconds to minutes. Humans, for example, anticipate the duration of traffic lights or the time between telephone rings. Similarly, some animals track the amount of time between visits to food sources in order to optimize foraging [4,5]. Finally, rodents and other animals can be trained on a diverse range of temporal tasks, such as peak-interval procedures in which they learn the interval between a stimulus and reward availability [6–8].

In the above examples, animals primarily need to time isolated intervals or durations, as opposed to complex temporal patterns defined by the relative timing of multiple consecutive intervals. The prosody of speech and the rhythm of music, for example, are not defined by any single interval or duration, but by the global temporal structure of many consecutive intervals. Furthermore, speech and music require timing multiple embedded temporal patterns. For example, voice-onset time (the interval between air release and vocal cord vibration) contributes to phoneme discrimination [9], the duration of vowels and pauses between words conveys information about phrase boundaries [10,11], and speech rate and contour contribute to prosody and comprehension [12–14]. Thus, speech relies on timing over a number of different scales and features in parallel.

Perhaps the clearest example of just how sophisticated our ability to process complex temporal patterns can be is that language is reducible to a purely temporal code. Specifically, when individuals communicate via Morse code, the information is contained in the duration of tones, the intervals between them, and their global structure. At the relatively low speed of 10 words per minute, each dot and dash is 120 and 360 ms long, respectively, and the inter-letter and inter-word intervals are 360 and 840 ms. The offset of any tone marks the stop time of a duration and the start time of an interval. This fact helps constrain the possible timing mechanisms underlying Morse code recognition: any mechanism that requires a significant amount of time to “reset” before timing the next interval would be unlikely to satisfy the temporal requirements of Morse code.
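
The durations above follow directly from the standard (PARIS-word) Morse convention, in which the dot length in milliseconds is 1200/wpm and all other elements are fixed multiples of the dot. A minimal sketch (function names are our own):

```python
def morse_unit_ms(wpm: float) -> float:
    """Standard (PARIS-word) convention: dot duration in ms = 1200 / wpm."""
    return 1200.0 / wpm

def morse_timing(wpm: float) -> dict:
    """Element durations (ms) expressed as multiples of the dot unit."""
    unit = morse_unit_ms(wpm)
    return {
        "dot": 1 * unit,               # tone on
        "dash": 3 * unit,              # tone on
        "intra_letter_gap": 1 * unit,  # silence between elements of a letter
        "inter_letter_gap": 3 * unit,  # silence between letters
        "inter_word_gap": 7 * unit,    # silence between words
    }

t = morse_timing(10)
# at 10 wpm: dot 120 ms, dash 360 ms, inter-letter 360 ms, inter-word 840 ms
```

Note that every duration scales with a single parameter (the dot unit), which is one reason Morse code is also a convenient test case for temporal scaling.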

To distinguish between temporal tasks that require timing isolated intervals from those that require timing multiple consecutive intervals within a global context, we will use the terms interval timing (although we note that this term is commonly used for timing in the range of seconds to minutes [6]) and pattern timing (Figure 1).

Figure 1
Interval vs Pattern Timing. A) In a duration discrimination task subjects listen to two tones of different durations, and are asked to determine which is longer. Each tone is an independent stimulus, and the interval between the tones is not relevant ...

While most psychophysical tasks focus on interval timing, a number of temporal tasks rely on the production or discrimination of intervals embedded within a global pattern. Such tasks include:

  1. Temporal Pattern Reproduction: The motor production of a sequence of different intervals [15–17].
  2. Serial Reaction Time Task: This task is a form of implicit timing in which subjects are required to press an array of keys that light up at specific times in a specific order. With practice the reaction times to press each key decrease [18,19].

To simplify our discussion, we will focus on aperiodic patterns, as opposed to periodic tasks in which subjects have to discriminate or reproduce isolated or repetitive intervals [20]. However, there are data suggesting that periodic and aperiodic timing tasks rely on different neural mechanisms [21,22].

It is clear that the brain uses multiple neural mechanisms to tell time across temporal scales. For example, the mechanisms underlying sound localization, the ability to tap along with the beat of a song, and the generation of circadian rhythms are clearly distinct [23,24]. However, it is less clear whether the neural mechanisms underlying interval and pattern timing are the same: does pattern timing rely on the timing of independent intervals, like marking laps on a stopwatch, or is each interval automatically encoded in the context of a pattern? Here we ask whether the same mechanisms that have been proposed to underlie simple forms of timing can also account for complex temporal tasks such as recognizing and producing letters in Morse code. To answer this question we examine three classes of neurobiologically-based timing models—that is, those that have been implemented at the level of simulated neurons (spiking or firing rate).

Synfire Chain Models of Timing

One of the simplest models of how time might be represented in networks of neurons is a synfire chain, which is generally composed of a large number of neurons arranged in separate pools connected with a feed-forward architecture (Fig. 2) [25–27]. Activity propagates from one pool to another, such that each pool is activated at different points in time—e.g., pool one is activated at t=0, while neurons in pool 10 might be activated at t=100 ms. Thus it is possible for downstream circuits to read out the elapsed time by detecting which pool is currently active. Similarly, it is possible to produce a timed motor response by connecting the appropriate pool to appropriate output units. In their simplest form, synfire chains implement delay lines, in which each synaptic step inserts an additional delay.
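
As a toy illustration (the number of pools and the per-step delay are arbitrary choices of ours), a delay-line synfire chain can be simulated as activity hopping one pool per synaptic step, with elapsed time read out from the index of the currently active pool:

```python
import numpy as np

def simulate_synfire(n_pools=11, step_ms=10.0):
    """Delay-line synfire chain: activity hops one pool per synaptic step.

    Returns the index of the active pool at each step; a downstream
    circuit can decode elapsed time as pool_index * step_ms.
    """
    activity = np.zeros(n_pools)
    activity[0] = 1.0                    # pool 0 fires at t = 0
    active = [int(np.argmax(activity))]
    for _ in range(n_pools - 1):
        activity = np.roll(activity, 1)  # purely feed-forward propagation
        activity[0] = 0.0
        active.append(int(np.argmax(activity)))
    return active

pools = simulate_synfire()
# pools == [0, 1, ..., 10]: with 10-ms steps, pool 10 is active at
# t = 100 ms, so "which pool is firing" is itself a code for elapsed time
```

The essential point is that no single neuron times the full interval; time is carried by the position of the activity packet along the chain.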

Figure 2
Timing with synfire chains. Neurons in the songbird HVC nucleus burst sparsely at specific points in a song’s syllable. Long and Fee (2010) suggest that the temporal structure both within a syllable and throughout a song may be generated by a ...

It is easy to see how a synfire chain could be used to detect or produce a 100 ms interval. To produce the interval the neurons activated at 100 ms should be connected to the appropriate output. For detection a readout neuron should only fire when it receives simultaneous input from the 100 ms pool and a direct sensory pathway. Importantly, timing via synfire chains could also underlie pattern timing. For example, at the motor level a synfire chain could potentially be used to generate a complex pattern by connecting each pool of the chain to sequential output units. Indeed, songbird studies have provided evidence that synfire chains could underlie the complex forms of timing necessary for song generation. This complex timing appears to be generated by sequential bursting of a subset of neurons in the songbird sensorimotor nucleus HVC. These HVC neurons fire at specific moments during a song, providing the timing necessary for the structure of each syllable and the sequence of syllables within a song [28,29]. Using in vivo recordings and spiking neural network simulations, Long and colleagues [30] provided evidence that these firing patterns are consistent with a feedforward synfire chain network architecture (Figure 2).
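
In the same spirit, detection of a 100-ms interval can be sketched as a coincidence check between the chain's current pool and a direct sensory input (all parameter values here are illustrative assumptions):

```python
def detect_interval(t2_ms, n_pools=20, step_ms=10.0, target_pool=10):
    """Coincidence-detection readout on a synfire chain.

    A first stimulus at t = 0 launches activity down the chain (one pool
    per step_ms). The readout fires only if a second stimulus at t2_ms
    coincides with activity in `target_pool`; with target_pool = 10 and
    10-ms steps, the readout detects a 100-ms interval.
    """
    pool_at_t2 = int(t2_ms // step_ms)   # where the chain is at t2
    if pool_at_t2 >= n_pools:
        return False                     # activity has left the chain
    return pool_at_t2 == target_pool

detect_interval(100.0)   # True: second stimulus coincides with pool 10
detect_interval(50.0)    # False: chain is only at pool 5
```

Tuning the readout to a different pool changes the detected interval, and connecting successive pools to successive outputs turns the same chain into a pattern generator.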

Numerous additional studies in mammals have revealed synfire chain-like temporal signatures. Specifically, populations of neurons that fire during specific points in time ("temporal receptive fields") reveal chain-like activation patterns when sorted according to firing latency—typically visualized as a diagonal band of activity [8,31,32]. It remains unclear, however, whether these patterns are generated locally by the circuits being recorded, and if so, whether they are a result of a feedforward synfire architecture. Indeed, while these patterns of activity are certainly suggestive of a feedforward network, a number of computational models have shown that they can emerge from the propagation of activity within recurrent neural networks (see below).

Cortical circuits are characterized by recurrent connectivity between local pyramidal neurons [33,34]. While synfire chains can in principle incorporate recurrent connections, in practice they are typically implemented within purely feed-forward architectures. Consequently, it is highly unlikely that cortical networks are actually feedforward synfire chains. A related and important issue is the capacity of synfire chains. Specifically: how many trajectories of a given temporal length can one feedforward synfire network encode? In one sense the capacity is low. For example, in a purely feed-forward network, if we assume that each neuron only fires once and must participate in every pattern, then the capacity is essentially one trajectory (which is not the case in a recurrent network given these same assumptions [35]). However, if we assume that different subpopulations of neurons within a pool fire during different trajectories, then the capacity increases significantly [36].

Overall, synfire chains offer a potentially general, and biologically plausible, mechanism to account for both interval and pattern timing. However, the traditional focus on feedforward synfire chains is probably unrealistic, given their absence of recurrence and limited capacity.

Positive Feedback Models

Other neurobiologically-based models of timing explicitly rely on positive feedback through recurrent excitatory connections. One such model was developed to account for a series of experimental observations by Shuler and colleagues, who reported that V1 neurons can encode the interval between stimulus onset and a reward [37–39]. In the basic task, rats are exposed to a visual flash that predicts the arrival of a water reward after a delay Δt (more specifically, the reward is available after a fixed number of licks, which correlates with time). In vivo single-unit recordings from V1 revealed that a subpopulation of neurons encodes the reward interval—for example, some neurons maintained a relatively high firing rate during the delay period.

To account for these experimental data, Shouval and colleagues developed a spike-based recurrent neural network model and described how local cortical circuits might encode the reward interval [39–42]. Their hypothesis is that the observed prolonged activity is generated through recurrent excitatory connections. This approach has parallels in models of short-term memory that have used well-tuned positive feedback to maintain a fixed level of activity [43–45]. We also note that there is a significant experimental literature reporting that neurons can exhibit ramping firing rates during well-trained temporal tasks [46–48], and that some computational models of ramping neurons also incorporate network-level positive feedback [49].

In Shouval’s model, positive feedback is used, in effect, to generate a long network time constant. The authors find that potentiating the recurrent connections through a self-organizing, Hebbian-like plasticity rule can extend network-wide firing elicited by a specific cue. Recently it has been shown that this synaptic tuning can be achieved using an experimentally derived form of associative learning that takes into account the fact that the reinforcement signal is delayed relative to the activity patterns that trigger LTP or LTD [42]. Importantly, the recurrent positive feedback does not maintain activity at some fixed point, as in working memory models. Rather, low levels of positive feedback are sufficient to extend the amount of time that the network is active, effectively controlling the network-level decay time [43]. In this manner the mean duration of the evoked firing can represent reward onset time. Significantly, these firing patterns are a result of the dynamics within the local cortical network: they require neither tonic external input nor dedicated timing cells, supporting the theory that temporal processing is a general and intrinsic property of recurrent neural networks [50–52].
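
The feedback-controlled decay time can be made concrete with a linear rate unit, a deliberate simplification of the spiking model (the parameter values are our own): with recurrent gain w < 1, the dynamics τ dr/dt = −r + wr decay with an effective time constant of τ/(1 − w), so even modest feedback stretches a 10-ms cellular time constant into the behavioral range.

```python
def effective_decay_ms(tau_ms, w):
    """Linear rate unit with recurrent gain w < 1:
        tau * dr/dt = -r + w*r  =>  r(t) = r0 * exp(-t * (1 - w) / tau),
    so the effective decay time constant is tau / (1 - w)."""
    assert w < 1.0, "w >= 1 produces runaway or persistent activity"
    return tau_ms / (1.0 - w)

effective_decay_ms(10.0, 0.0)   # no feedback: 10 ms
effective_decay_ms(10.0, 0.9)   # weak feedback: ~100 ms
effective_decay_ms(10.0, 0.99)  # near-critical feedback: ~1000 ms
```

This also illustrates the tuning problem: the closer w gets to 1 (the fixed-point regime of working memory models), the more sensitive the encoded duration becomes to small changes in synaptic strength.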

While this positive feedback model elegantly accounts for the experimental data on a form of interval timing, it is unclear if it would extend to pattern timing. Specifically, positive feedback mechanisms seem unlikely to be able to time consecutive intervals because the network would have to be rapidly reset at the end of each interval, which also marks the beginning of the next. Thus, the experimental and computational results of Shuler and Shouval further suggest the presence of distinct mechanisms for interval and pattern timing.

State-Dependent Networks and Population Clocks

One of the first neurobiologically-based models of timing and temporal processing proposed that networks of neurons are intrinsically able to tell and encode time as a result of dynamic changes in the state of neural networks [53–55]. Specifically, this model states that the evolving neural population activity is a code that represents time.

At the sensory level, the hypothesis is that the discrimination of temporal intervals arises from the interaction between the internal state of a network and incoming stimuli. In this sensory mode, the recurrent weights of these networks are generally fairly weak—that is, not capable of sustaining self-perpetuating activity. Thus, much of the temporal information emerges from neural and synaptic properties that are naturally time-varying (the so-called hidden states—e.g., short-term synaptic plasticity). Such models have been shown to effectively discriminate not only simple intervals, but complex temporal patterns as well [51,56–59]. The hypothesis is that each sensory event interacts with the current state of the network, forming a pattern of network states that naturally encodes each event in the context of the recent stimulus history—much as the ripples generated by each raindrop falling in a pond will interact with the ripples created by previous raindrops. Experimental studies have supported this hypothesis by demonstrating that cortical networks contain information about not only the current stimulus, but also the interval and order of recent events [60–64].
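
A hidden-state clock of this kind can be sketched with a single depressing synapse (the depletion fraction and recovery time constant below are illustrative assumptions, in the spirit of short-term plasticity models): because the synaptic resource recovers gradually between pulses, the response to the second of two pulses implicitly encodes the interval between them.

```python
import math

def paired_pulse_ratio(interval_ms, U=0.5, tau_rec_ms=300.0):
    """Hidden-state timing sketch based on short-term synaptic depression.

    A synaptic resource x in [0, 1] is depleted by a fraction U at each
    presynaptic pulse and recovers exponentially with tau_rec. The response
    to the second of two pulses therefore depends on the interval between
    them: the synapse's hidden state encodes elapsed time.
    """
    x = 1.0
    r1 = U * x                                                 # first response
    x -= U * x                                                 # depletion
    x = 1.0 - (1.0 - x) * math.exp(-interval_ms / tau_rec_ms)  # recovery
    r2 = U * x                                                 # second response
    return r2 / r1                                             # monotone in interval

for dt in (50, 100, 200, 400):
    print(dt, round(paired_pulse_ratio(dt), 3))
# shorter intervals yield smaller ratios, so a downstream decoder can read
# the interval off the amplitude of the second response
```

With many synapses spanning a range of time constants, a network carries a rich, distributed record of recent stimulus history, which is the basis of the "ripples in a pond" analogy above.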

The same general framework has also been applied to timing in the motor domain [55,65–67]. In contrast to sensory timing, motor timing relies on the active production of a response at the appropriate interval after a start cue. Therefore, in the motor regime, the recurrent connections need to be relatively strong, i.e., capable of self-perpetuating activity. In state-dependent models of motor behavior, time is encoded in the dynamically changing patterns of active neurons, forming a population clock [68]. The activity in the network traces out a trajectory in neural state space, in which each point in time corresponds to a unique population of active neurons. These patterns can be sparse: a few neurons activated at any point in time and each neuron activated at only one point, as in a synfire chain; or “dense”: with many neurons activated at a time, and each neuron potentially active at different points in the same trajectory (we can think of these as “high-entropy” trajectories). Experimental studies have reported numerous examples of either sparse functional feed-forward patterns of activity [8,28,30,69,70] or complex high-entropy patterns of activity [71–74] that encode time. A recent experimental and computational study also provided support for the notion that time is represented in high-dimensional trajectories [75]. In this work, recordings from over 100 neurons in the premotor cortex revealed a neural trajectory that evolved over a period of seconds during a task in which monkeys expected a reward between 1.5 and 3.5 seconds after the start cue. Analysis suggested that the reward window was represented in a trajectory segment, and that temporal expectation was intrinsically represented because this segment was the closest to a boundary that, if crossed, triggered a motor response.
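
The core population-clock claim, that each moment corresponds to a (nearly) unique population state, can be illustrated with a toy trajectory and a nearest-state decoder (the network size, bin count, and random-walk dynamics below are our own stand-ins for real neural data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "neural trajectory": 50 units over 200 time bins (e.g., 10-ms bins).
n_units, n_bins = 50, 200
trajectory = np.cumsum(rng.standard_normal((n_bins, n_units)), axis=0)

def decode_time(state, template=trajectory):
    """Return the time bin whose stored population state is nearest to
    `state`: time is read out from WHICH pattern of neurons is active,
    not from the firing rate of any single neuron."""
    return int(np.argmin(np.linalg.norm(template - state, axis=1)))

# A probe of the population state at bin 120 decodes to t = 120:
decode_time(trajectory[120])
```

The same decoder works whether the trajectory is sparse (synfire-like) or dense (high-entropy); all that matters is that states at different times are distinguishable.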

Fig. 3 provides an example of pattern timing in a population clock model implemented in a simulated recurrent neural network (RNN) based on firing rate units. The network starts in a high-gain regime, which generates a high-dimensional trajectory in response to a brief input. The network is then trained to reproduce this “innate” trajectory by adjusting the weights of the recurrent network [67]. As a result of this training, the trajectory becomes locally stable (a “dynamic attractor”). Because this trajectory is stable in high-dimensional space, the output unit can then be trained to produce an arbitrarily complex temporal pattern, in this case the Morse code spelling of “Hello.” Here it should be stressed that the learning rule used to adjust the recurrent weights is not biologically plausible.
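
The flavor of this scheme, minus the innate-trajectory training of the recurrent weights itself, can be conveyed with a minimal reservoir sketch: a random high-gain RNN is kicked by a brief cue, and a linear readout is then fit (here by ordinary least squares, not the rule used in the original work; all sizes and gains are our own choices) to emit a pulse at a target time after the cue.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 200, 300                                   # units, time steps

g = 1.2                                           # high-gain regime
W = g * rng.standard_normal((N, N)) / np.sqrt(N)  # random recurrent weights
w_in = rng.standard_normal(N)                     # input weights

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    cue = 5.0 if t == 0 else 0.0                  # brief start cue at t = 0
    x = np.tanh(W @ x + cue * w_in)               # firing-rate dynamics
    states[t] = x

# Fit a linear readout to produce a Gaussian pulse centered on step 100,
# i.e. a response timed relative to the cue.
times = np.arange(T)
target = np.exp(-(times - 100.0) ** 2 / (2.0 * 10.0 ** 2))
w_out, *_ = np.linalg.lstsq(states, target, rcond=None)
output = states @ w_out                           # readout of the trajectory
```

A pattern-timing version of the same sketch simply uses a target with several pulses at the desired intervals; what this untrained random network lacks, and what tuning the recurrent weights provides, is noise-robust (locally stable) trajectories.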

Figure 3
An example of a population clock implemented in a recurrent neural network (RNN) that produces Morse code spelling of “Hello.” A) The structure of the RNN consists of a single input, a sparse recurrent network of 1200 firing rate units, ...

Synfire chain and positive feedback models can certainly be applied to pattern timing, but we suggest that state-dependent network models are better suited for pattern timing because they are inherently high dimensional. Consider that six different isolated intervals, when arranged into a sequence of four (that is, a pattern composed of four intervals), can produce a total of 6^4 = 1296 potential patterns. A single state-dependent network is well suited to learn any arbitrary set of these patterns. Thus, state-dependent networks, and related reservoir computing models [76–78], represent general computational frameworks capable not only of interval and pattern timing, but also of spatial and temporal computations.
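
The count above is simple combinatorics, and enumerating the patterns makes the advantage explicit (the six interval values below are arbitrary examples of ours):

```python
from itertools import product

intervals_ms = [50, 100, 150, 200, 300, 400]       # six candidate intervals
patterns = list(product(intervals_ms, repeat=4))   # all four-interval patterns
print(len(patterns))  # 1296 == 6**4
```

A mechanism that times each interval independently must also store the order of the four choices somewhere else; a state-dependent network encodes interval identity and sequential context in the same population state.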

Conclusions

The great majority of experimental and theoretical work on timing in the subsecond range has focused on isolated intervals and durations, i.e. interval timing. Here we stress that within this time scale the brain also performs a wide range of temporal tasks that require processing consecutive intervals and placing these in a temporal context—speech, music, and Morse code being clear examples of such pattern timing.

In addition to the distinction between interval and pattern timing, there are other temporal features that still must be carefully addressed in computational models. Of particular relevance is temporal scaling: how do we produce or recognize the same global temporal pattern at different speeds? A pianist can, for example, play the same piece of music at a range of different musical tempos. Though temporal scaling is a robust phenomenon, the underlying neurobiological mechanisms are not known, and indeed temporal scaling has not been reported in any of the three classes of biologically plausible models discussed above. A few experimental studies suggest that neural trajectories encode relative time. That is, when animals time intervals of different lengths within the same overall pattern, it appears that the same neural trajectory may be replayed at different speeds [8,71,79].
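
Relative-time coding of this kind can be sketched as replaying a stored trajectory with a rescaled time axis: the network visits the same sequence of states but traverses them faster or slower. The resampling scheme below is our own toy stand-in for whatever neural mechanism controls trajectory speed.

```python
import numpy as np

def replay(trajectory, speed):
    """Replay a stored trajectory (time x units) at `speed` times the
    original rate by resampling its time axis; the path through state
    space is unchanged, only the traversal speed differs."""
    T = trajectory.shape[0]
    t_new = np.linspace(0.0, T - 1.0, int(round(T / speed)))
    t_old = np.arange(T)
    return np.stack(
        [np.interp(t_new, t_old, trajectory[:, u])
         for u in range(trajectory.shape[1])], axis=1)

traj = np.sin(np.linspace(0, 2 * np.pi, 100))[:, None]  # toy one-unit pattern
fast = replay(traj, 2.0)   # same pattern in half the time (50 samples)
slow = replay(traj, 0.5)   # same pattern stretched to 200 samples
```

On this view, a tempo change requires adjusting only a single speed parameter rather than relearning every interval, which is consistent with the replayed-trajectory findings cited above.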

Given the diverse range of temporal tasks the brain performs, together with the large number of brain areas that have been implicated in timing in the range of tens of milliseconds to a few seconds, we argue that the brain does not have a singular timing mechanism. Rather, the brain has a number of different timing mechanisms, each used to solve specific temporal tasks. In some cases, for example, the brain may use specialized mechanisms for interval timing that are not capable of pattern timing. Potential examples include positive feedback mechanisms and the ramping of neuronal firing rates. But in other instances—for example the discrimination of simple and complex auditory patterns—we propose that the same neural mechanisms can underlie both interval and pattern timing.

Highlights

  • On the scale of tens of milliseconds to a few seconds the brain is capable of performing both interval and pattern timing tasks.
  • A number of computational models based on both feedforward and recurrent network architectures can potentially account for interval timing.
  • Fewer computational models can account for pattern timing, but both synfire chains and state-dependent network models are capable of accounting for both interval and pattern timing.

Acknowledgments

The authors are supported by NIH grants MH60163 and T32 NS058280 and NSF grant IIS-1420897. We thank Martina DeSalvo for comments on this manuscript.

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

REFERENCES

1. Carr CE. Processing of Temporal Information in the Brain. Annual Review of Neuroscience. 1993;16:223–243. [PubMed]
2. Colwell CS. Linking neural activity and molecular oscillations in the SCN. Nat Rev Neurosci. 2011;12:553–569. [PMC free article] [PubMed]
3. King DP, Takahashi JS. Molecular genetics of circadian rhythms in mammals. Annu Rev Neurosci. 2000;23:713–742. [PubMed]
4. Henderson J, Hurly TA, Bateson M, Healy SD. Timing in Free-Living Rufous Hummingbirds, Selasphorus rufus. Current biology : CB. 2006;16:512–515. [PubMed]
5. Boisvert MJ, Sherry DF. Interval Timing by an Invertebrate, the Bumble Bee Bombus impatiens. Current Biology. 2006;16:1636–1640. [PubMed]
6. Buhusi CV, Meck WH. What makes us tick? Functional and neural mechanisms of interval timing. Nat Rev Neurosci. 2005;6:755–765. [PubMed]
7. Matell MS, Meck WH. Neuropsychological mechanisms of interval timing behavior. BioEssays. 2000;22:94–103. [PubMed]
8. Mello GBM, Soares S, Paton JJ. A scalable population code for time in the striatum. Curr Biol. 2015;25:1113–1122. [PubMed]
9. Lisker L, Abramson AS. A cross language study of voicing in initial stops: acoustical measurements. Word. 1964;20:384–422.
10. Scott DR. Duration as a cue to the perception of a phrase boundary. J Acoust Soc Am. 1982;71:996–1007. [PubMed]
11. Aasland WA, Baum SR. Temporal parameters as cues to phrasal boundaries: A comparison of processing by left- and right-hemisphere brain-damaged individuals. Brain and Language. 2003;87:385–399. [PubMed]
12. Breitenstein C, Van Lancker D, Daum I. The contribution of speech rate and pitch variation to the perception of vocal emotions in a German and an American sample. Cognition and Emotion. 2001;15:57–79.
13. Shannon RV, Zeng FG, Kamath V, Wygonski J, Ekelid M. Speech recognition with primarily temporal cues. Science. 1995;270:303–304. [PubMed]
14. Drullman R. Temporal envelope and fine structure cues for speech intelligibility. J Acoust Soc Am. 1995;97:585–592. [PubMed]
15. Laje R, Cheng K, Buonomano DV. Learning of temporal motor patterns: An analysis of continuous vs. reset timing. Frontiers in Integrative Neuroscience. 2011;5 [PMC free article] [PubMed]
16. Ullen F, Bengtsson SL. Independent Processing of the Temporal and Ordinal Structure of Movement Sequences. J Neurophysiol. 2003;90:3725–3735. [PubMed]
17. Collier GL, Wright CE. Temporal rescaling of sample and complex ratios in rhythmic tapping. J Exp Psychol Hum Percept Perform. 1995;21:602–627. [PubMed]
18. Shin JC, Ivry RB. Concurrent learning of temporal and spatial sequences. J. Exp. Psychol: Learn. Mem. Cog. 2002;28:445–457. [PubMed]
19. O'Reilly JX, McCarthy KJ, Capizzi M, Nobre AC. Acquisition of the Temporal and Ordinal Structure of Movement Sequences in Incidental Learning. J Neurophysiol. 2008;99:2731–2735. [PubMed]
20. Merchant H, Zarco W, Prado L. Do We Have a Common Mechanism for Measuring Time in the Hundreds of Millisecond Range? Evidence From Multiple-Interval Timing Tasks. J Neurophysiol. 2008;99:939–949. [PubMed]
21. Teki S, Grube M, Kumar S, Griffiths TD. Distinct Neural Substrates of Duration-Based and Beat-Based Auditory Timing. The Journal of Neuroscience. 2011;31:3805–3812. [PMC free article] [PubMed]
22. Grube M, Cooper FE, Chinnery PF, Griffiths TD. Dissociation of duration-based and beat-based auditory timing in cerebellar degeneration. Proceedings of the National Academy of Sciences. 2010;107:11597–11601. [PubMed]
23. Buonomano DV. The biology of time across different scales. Nat Chem Biol. 2007;3:594–597. [PubMed]
24. Mauk MD, Buonomano DV. The Neural Basis of Temporal Processing. Ann. Rev. Neurosci. 2004;27:307–340. [PubMed]
25. Abeles M. Corticonics. Cambridge: Cambridge University Press; 1991.
26. Haß J, Blaschke S, Rammsayer T, Herrmann JM. A neurocomputational model for optimal temporal processing. Journal of Computational Neuroscience. 2008;25:449–464. [PubMed]
27. Diesmann M, Gewaltig MO, Aertsen A. Stable propagation of synchronous spiking in cortical neural networks. Nature. 1999;402:529–533. [PubMed]
28. Hahnloser RHR, Kozhevnikov AA, Fee MS. An ultra-sparse code underlies the generation of neural sequences in a songbird. Nature. 2002;419:65–70. [PubMed] • The first paper to demonstrate the presence of a potential temporal code for song timing in HVC.
29. Long MA, Fee MS. Using temperature to analyse temporal dynamics in the songbird motor pathway. Nature. 2008;456:189–194. [PMC free article] [PubMed]
30. Long MA, Jin DZ, Fee MS. Support for a synaptic chain model of neuronal sequence generation. Nature. 2010;468:394–399. [PubMed] •• This study provided strong evidence for the presence of a feedforward synfire chain type architecture in the songbird nucleus HVC.
31. Pastalkova E, Serrano P, Pinkhasova D, Wallace E, Fenton AA, Sacktor TC. Storage of spatial information by the maintenance mechanism of LTP. Science. 2006;313:1141–1144. [PubMed]
32. Kraus Benjamin J, Robinson Ii Robert J, White John A, Eichenbaum H, Hasselmo Michael E. Hippocampal “Time Cells”: Time versus Path Integration. Neuron. 2013;78:1090–1101. [PMC free article] [PubMed]
33. Song S, Sjostrom PJ, Reigl M, Nelson S, Chklovskii DB. Highly nonrandom features of synaptic connectivity in local cortical circuits. PLOS Biology. 2005;3:508–518.
34. Shepherd GM. The synaptic organization of the brain. 4th ed. New York: Oxford University Press; 1998.
35. Liu JK, Buonomano DV. Embedding Multiple Trajectories in Simulated Recurrent Neural Networks in a Self-Organizing Manner. J. Neurosci. 2009;29:13172–13181. [PubMed]
36. Herrmann M, Hertz JA, Prügel-Bennett A. Analysis of synfire chains. Network: Computation in Neural Systems. 1995;6:403–414.
37. Shuler MG, Bear MF. Reward timing in the primary visual cortex. Science. 2006;311:1606–1609. [PubMed]
38. Chubykin Alexander A, Roach Emma B, Bear Mark F, Shuler Marshall GH. A Cholinergic Mechanism for Reward Timing within Primary Visual Cortex. Neuron. 2013;77:723–735. [PMC free article] [PubMed]
39. Namboodiri Vijay Mohan K, Huertas Marco A, Monk Kevin J, Shouval Harel Z, Hussain Shuler Marshall G. Visually Cued Action Timing in the Primary Visual Cortex. Neuron. 2015;86:319–330. [PubMed] • This work expanded previous studies by the same group showing that sensory cortices are not merely passive receptors of sensory information but can actually learn to anticipate the timing of a reward.
40. Gavornik JP, Shuler MGH, Loewenstein Y, Bear MF, Shouval HZ. Learning reward timing in cortex through reward dependent expression of synaptic plasticity. Proceedings of the National Academy of Sciences. 2009;106:6826–6831. [PubMed] •• Using biologically realistic simulations, the authors show that the positive feedback within spiking recurrent neural networks can be used to represent behaviorally relevant intervals.
41. Gavornik JP, Shouval HZ. A network of spiking neurons that can represent interval timing: mean field analysis. J Comput Neurosci. 2011;30:501–513. [PMC free article] [PubMed]
42. Huertas MA, Hussain Shuler MG, Shouval HZ. A Simple Network Architecture Accounts for Diverse Reward Time Responses in Primary Visual Cortex. The Journal of Neuroscience. 2015;35:12659–12672. [PMC free article] [PubMed]
43. Lim S, Goldman MS. Balanced cortical microcircuitry for maintaining information in working memory. Nat Neurosci. 2013;16:1306–1314. [PMC free article] [PubMed]
44. Wang X, Kadia SC. Differential Representation of Species-Specific Primate Vocalizations in the Auditory Cortices of Marmoset and Cat. J Neurophysiol. 2001;86:2616–2620. [PubMed]
45. Brody CD, Romo R, Kepecs A. Basic mechanisms for graded persistent activity: discrete attractors, continuous attractors, and dynamic representations. Curr Opin Neurobiol. 2003;13:204–211. [PubMed]
46. Leon MI, Shadlen MN. Representation of time by neurons in the posterior parietal cortex of the macaque. Neuron. 2003;38:317–327. [PubMed]
47. Niki H, Watanabe M. Prefrontal and cingulate unit activity during timing behavior in the monkey. Brain Res. 1979;171:213–224. [PubMed]
48. Brody CD, Hernandez A, Zainos A, Romo R. Timing and neural encoding of somatosensory parametric working memory in macaque prefrontal cortex. Cereb. Cortex. 2003;13:1196–1207. [PubMed]
49. Simen P, Balci F, de Souza L, Cohen JD, Holmes P. A model of interval timing by neural integration. J Neurosci. 2011;31:9238–9253. [PMC free article] [PubMed]
50. Ivry RB, Schlerf JE. Dedicated and intrinsic models of time perception. Trends in Cognitive Sciences. 2008;12:273–280. [PMC free article] [PubMed]
51. Karmarkar UR, Buonomano DV. Timing in the absence of clocks: encoding time in neural network states. Neuron. 2007;53:427–438. [PMC free article] [PubMed]
52. Goel A, Buonomano DV. Timing as an intrinsic property of neural networks: evidence from in vivo and in vitro experiments. Philos Trans R Soc Lond B Biol Sci. 2014;369:20120460. [PMC free article] [PubMed]
53. Buonomano DV, Mauk MD. Neural network model of the cerebellum: temporal discrimination and the timing of motor responses. Neural Comput. 1994;6:38–55.
54. Buonomano DV, Merzenich MM. Temporal information transformed into a spatial code by a neural network with realistic properties. Science. 1995;267:1028–1030. [PubMed]
55. Mauk MD, Donegan NH. A model of Pavlovian eyelid conditioning based on the synaptic organization of the cerebellum. Learn. Mem. 1997;3:130–158. [PubMed] • This was one of the first papers to carefully outline the theory that time may be represented in a dynamically changing population of neurons, specifically a changing population of active granule cells in the cerebellum.
56. Buonomano DV. Decoding temporal information: a model based on short-term synaptic plasticity. J Neurosci. 2000;20:1129–1141. [PubMed]
57. Lee TP, Buonomano DV. Unsupervised formation of vocalization-sensitive neurons: a cortical model based on short-term and homeostatic plasticity. Neural Comput. 2012;24:2579–2603. [PubMed]
58. Maass W, Natschläger T, Markram H. Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Comput. 2002;14:2531–2560. [PubMed]
59. Haeusler S, Maass W. A Statistical Analysis of Information-Processing Properties of Lamina-Specific Cortical Microcircuit Models. Cereb. Cortex. 2007;17:149–162. [PubMed]
60. Dranias MR, Westover MB, Cash SS, VanDongen AMJ. Stimulus information stored in lasting active and hidden network states is destroyed by network bursts. Frontiers in Integrative Neuroscience. 2015;9 [PMC free article] [PubMed]
61. Nikolić D, Häusler S, Singer W, Maass W. Distributed Fading Memory for Stimulus Properties in the Primary Visual Cortex. PLoS Biol. 2009;7:e1000260. [PMC free article] [PubMed]
62. Kilgard MP, Merzenich MM. Order-sensitive plasticity in adult primary auditory cortex. Proc Natl Acad Sci U S A. 2002;99:3205–3209. [PubMed]
63. Buonomano DV, Hickmott PW, Merzenich MM. Context-sensitive synaptic plasticity and temporal-to-spatial transformations in hippocampal slices. Proc Natl Acad Sci U S A. 1997;94:10403–10408. [PubMed]
64. Zhou X, de Villers-Sidani É, Panizzutti R, Merzenich MM. Successive-signal biasing for a learned sound sequence. Proc Natl Acad Sci U S A. 2010;107:14839–14844. [PubMed]
65. Medina JF, Mauk MD. Computer simulation of cerebellar information processing. Nat Neurosci. 2000;(3 Suppl):1205–1211. [PubMed]
66. Buonomano DV, Laje R. Population clocks: motor timing with neural dynamics. Trends in Cognitive Sciences. 2010;14:520–527. [PMC free article] [PubMed]
67. Laje R, Buonomano DV. Robust timing and motor patterns by taming chaos in recurrent neural networks. Nat Neurosci. 2013;16:925–933. [PubMed] •• This computational paper demonstrated that it is possible to solve the chaos problem in firing rate networks by tuning the recurrent weights, and that the resulting dynamics can autonomously generate high dimensional trajectories that can underlie both interval and pattern timing.
68. Buonomano DV, Karmarkar UR. How do we tell time? Neuroscientist. 2002;8:42–51. [PubMed]
69. Pastalkova E, Itskov V, Amarasingham A, Buzsaki G. Internally Generated Cell Assembly Sequences in the Rat Hippocampus. Science. 2008;321:1322–1327. [PMC free article] [PubMed]
70. MacDonald CJ, Carrow S, Place R, Eichenbaum H. Distinct hippocampal time cell sequences represent odor memories in immobilized rats. J Neurosci. 2013;33:14607–14616. [PMC free article] [PubMed]
71. Crowe DA, Zarco W, Bartolo R, Merchant H. Dynamic Representation of the Temporal and Sequential Structure of Rhythmic Movements in the Primate Medial Premotor Cortex. The Journal of Neuroscience. 2014;34:11972–11983. [PubMed] • Using in vivo single-unit recordings in primates, this study found evidence for a dynamic population code for time—a population clock—during a timed motor task.
72. Crowe DA, Averbeck BB, Chafee MV. Rapid Sequences of Population Activity Patterns Dynamically Encode Task-Critical Spatial Information in Parietal Cortex. J. Neurosci. 2010;30:11640–11653. [PMC free article] [PubMed]
73. Jin DZ, Fujii N, Graybiel AM. Neural representation of time in cortico-basal ganglia circuits. Proc Natl Acad Sci U S A. 2009;106:19156–19161. [PubMed]
74. Kim J, Ghim J-W, Lee JH, Jung MW. Neural Correlates of Interval Timing in Rodent Prefrontal Cortex. The Journal of Neuroscience. 2013;33:13834–13847. [PubMed]
75. Carnevale F, de Lafuente V, Romo R, Barak O, Parga N. Dynamic Control of Response Criterion in Premotor Cortex during Perceptual Detection under Temporal Uncertainty. Neuron. 2015;86:1067–1077. [PubMed] •• Combining primate behavior, electrophysiology, and neural network modeling, this study demonstrated that the population dynamics of premotor cortex intrinsically encodes time, and furthermore that expectation time windows are naturally represented by the distance between the time encoded by the neural trajectory and a boundary in neural state space.
76. Buonomano DV, Maass W. State-dependent Computations: Spatiotemporal Processing in Cortical Networks. Nat Rev Neurosci. 2009;10:113–125. [PubMed]
77. Jaeger H, Haas H. Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication. Science. 2004;304:78–80. [PubMed]
78. Jaeger H, Maass W, Principe J. Special issue on echo state networks and liquid state machines. Neural Networks. 2007;20:287–289.
79. Bartolo R, Merchant H. Learning and generalization of time production in humans: rules of transfer across modalities and interval durations. Exp Brain Res. 2009;197:91–100. [PubMed]