
Neuroscience: What we cannot model, we do not understand

Abstract

To understand computations in neuronal circuits, a model of a small patch of cortex has been developed that can describe the firing regime in the visual system remarkably well.

Circuits of neurons in the brain are very complicated: with their multiple nonlinearities, different types of neurons, complex dendritic geometries, diverse connectivity patterns and dependence on learning and development, the cerebral cortex and other neuronal circuits constitute the most complex systems ever studied by science. Perhaps not surprisingly, the computational power that emerges from such circuits is astounding; neuronal networks are responsible for cognitive phenomena as diverse as seeing, smelling, remembering and planning.

To understand how function emerges from ensembles of neurons and their interactions, we need a rigorous interplay between theoretical work and experimental approaches capable of listening to the activity of neurons. This synergy of theory and neurophysiology is beautifully illustrated in recent work by Rasch et al. [1]. These authors took the courageous approach of using computational models to describe the activity in a local 5 × 5 mm patch of neocortex, with an impressive set of 35,000 neurons and ~4 million synapses. They focused on primary visual cortex, one of the most studied parts of cortex and the first stage in the hierarchical cascade of processes that convert the retinal input into our visual perceptions. The Logothetis lab used multiple microwire electrodes to measure the activity of neurons in the primary visual cortex of anesthetized monkeys while the monkeys watched a natural-scene movie. The authors then ‘presented’ the same movie to their model to explore its fidelity and to compare the computational output quantitatively with the neurophysiological one.

To compare the circuit in silico and in vivo, one must consider which aspects of the complex neuronal ensemble responses one aims to explain. Instead of trying to predict the detailed spiking activity of every single neuron, as done in many other studies (for example [2,3]), Rasch et al. [1] defined a ‘firing regime’ characterized by several properties of the neuronal responses. These properties included the firing rate, the distribution of interspike intervals, the variability in spike counts over time, the degree of burst firing and the degree of synchronization in the network. The authors used these inter-related properties to define the state of the network.
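
As a rough illustration of the kind of statistics that make up such a firing regime, here is a minimal sketch in Python (not the authors' code); the function names, bin sizes and the 10 ms burst threshold are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def firing_regime_stats(spike_times, duration, bin_size=0.05):
    """Illustrative summary statistics for one spike train (spike times in seconds)."""
    spikes = np.asarray(spike_times)
    rate = spikes.size / duration                      # mean firing rate (Hz)
    isis = np.diff(spikes)                             # interspike intervals
    cv_isi = isis.std() / isis.mean() if isis.size > 1 else np.nan
    counts, _ = np.histogram(spikes, bins=np.arange(0, duration + bin_size, bin_size))
    fano = counts.var() / counts.mean() if counts.mean() > 0 else np.nan  # spike-count variability
    burst_frac = np.mean(isis < 0.01) if isis.size else np.nan            # fraction of ISIs < 10 ms
    return {"rate": rate, "cv_isi": cv_isi, "fano": fano, "burst_frac": burst_frac}

def pairwise_synchrony(spikes_a, spikes_b, duration, bin_size=0.005):
    """Correlation of binned spike counts as a crude measure of network synchrony."""
    bins = np.arange(0, duration + bin_size, bin_size)
    a, _ = np.histogram(spikes_a, bins=bins)
    b, _ = np.histogram(spikes_b, bins=bins)
    return np.corrcoef(a, b)[0, 1]
```

Averaged over many neurons and electrode pairs, statistics of this kind form the ‘fingerprint’ against which model and data can be compared.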

Another important aspect that the theorist must consider when thinking about such network models is the large number of parameters that arises as a consequence of the complexity of the circuitry. The modeler needs to make decisions about the number and types of neurons, their distribution and connectivity, the types of ion channels they carry and the corresponding channel properties. Some of these decisions may be constrained by experimental data; others may require more guesswork. Parameters are our enemies. It is extremely difficult, from a computational viewpoint, to characterize the whole parameter space systematically. For convex optimization problems one can tune and optimize each parameter separately, but it is often difficult to assess whether the problem at hand is convex. In high-dimensional spaces, the curse of dimensionality makes itself evident. Rasch et al. [1] started by estimating parameters intelligently from existing data in the literature. Remarkably, without any tuning of parameters, the model does not perform too badly: the computational circuit is already a reasonable approximation to the empirically defined firing regime.
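
To see why an exhaustive sweep of the parameter space is hopeless, consider a back-of-the-envelope sketch; the five candidate values per knob and one minute per simulation below are hypothetical numbers, not figures from the paper.

```python
# With k candidate values per parameter, a full grid over d parameters needs k**d evaluations.
k, seconds_per_run = 5, 60  # hypothetical: 5 values per knob, 1 minute per simulation
for d in (2, 5, 10, 20):
    runs = k ** d
    years = runs * seconds_per_run / 3.15e7
    print(f"{d:2d} parameters -> {runs:.3e} runs -> {years:.2e} years of compute")
```

Already at twenty parameters the grid would take hundreds of millions of years, which is why intelligent estimation from the literature, followed by targeted tuning, is the only practical route.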

Rasch et al. [1] went on to turn knobs here and there to examine the sensitivity and robustness of the model to different variables and assumptions. By tuning parameters, the authors were able to improve the fit to the experimental data significantly and, more importantly, along the way they discovered which specific knobs most strongly influence the model output. Key parameters included the relative synaptic weights of excitatory to inhibitory neurons [4] and the relative weighting of the patchy long-range connections (which were reduced to avoid pathologic oscillatory behavior). The model was also quite sensitive to certain channel conductance parameters, specifically NMDA and, surprisingly, GABA-B with its relatively slow dynamics. Many modelers have ignored this conductance even in more complex and detailed simulation studies [5], and Rasch et al. [1] contend that including it helps obtain realistic firing rates by lending longer-lasting, non-linear dynamics to the firing of the inhibitory cell population.
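
A simple way to see why a slow inhibitory conductance matters is to compare the time course of fast and slow exponential synaptic kernels. This is only a schematic sketch; the 6 ms and 150 ms decay constants below are generic textbook-style values, not the ones used in the model.

```python
import numpy as np

def exp_synapse(t, tau):
    """Normalized exponential conductance kernel g(t) = exp(-t/tau) for t >= 0."""
    return np.exp(-t / tau) * (t >= 0)

t = np.arange(0.0, 1.0, 0.001)           # 1 s of time at 1 ms resolution
g_fast = exp_synapse(t, tau=0.006)       # ~6 ms decay: fast inhibition (illustrative)
g_slow = exp_synapse(t, tau=0.150)       # ~150 ms decay: slow inhibition (illustrative)

# The slow kernel integrates to a much larger total conductance per spike,
# so even sparse inhibitory firing can exert a sustained, rate-limiting effect.
print(g_fast.sum() * 0.001, g_slow.sum() * 0.001)   # approximate areas under each kernel
```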

In the same way that all roads lead to Rome, there may be multiple different ways to build a network with similar output properties. In a landmark study, Prinz and colleagues [6], and subsequently several other groups, showed that different combinations of intrinsic neuronal properties as well as connectivity patterns can lead to the same properties at the network level. In a similar vein, Rasch et al. [1] discovered robustness in the output to a large number of knobs and parameters in their model. In other words, quite distinct combinations of parameters can lead to the same firing regime.

How well does the model approximate the statistical fingerprint, or firing regime, of primary visual cortex? Rasch et al. [1] elegantly addressed this question by comparing the differences between the model results and the experimental data with the variability across recordings from the same electrodes, across different repetitions of the movie stimulus, or across recordings from different individual monkeys. Neurophysiological recordings in cortex typically show a significant degree of variability across repetitions, neurons and monkeys; much has been written and discussed about this variability (for example [7,8]). The difference between the best model and the neurophysiological responses was about as large as the difference between different physiological recordings in response to the same movie (in terms of the firing regime). In other words, given two sets of data describing the ten statistical properties that defined the firing regime, it would be difficult to discriminate which one came from the model and which one came from the monkey.
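
One way to picture this kind of comparison is to treat each recording (or model run) as a vector of z-scored firing-regime statistics and compare the model-to-data distance with the data-to-data variability. The sketch below is an illustrative construction under that assumption, not the statistical procedure used in the paper.

```python
import numpy as np

def fingerprint_distance(stats_a, stats_b):
    """Euclidean distance between two vectors of (z-scored) firing-regime statistics."""
    return float(np.linalg.norm(np.asarray(stats_a) - np.asarray(stats_b)))

def model_vs_data_ratio(model_stats, data_stats_by_recording):
    """Compare the model-to-data distance against the data-to-data variability."""
    data = [np.asarray(s) for s in data_stats_by_recording]
    model_to_data = np.mean([fingerprint_distance(model_stats, d) for d in data])
    data_to_data = np.mean([fingerprint_distance(a, b)
                            for i, a in enumerate(data) for b in data[i + 1:]])
    # A ratio near 1 means the model is about as far from any single recording
    # as the recordings are from one another, i.e. hard to tell apart.
    return model_to_data / data_to_data
```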

In addition to its importance for characterizing and understanding the principles underlying the behavior of complex neuronal circuits, computational modeling of this sort can also help us understand abnormal patterns of activity in cortex. One example is the ongoing effort to understand epilepsy through computational models. Neuronal synchrony occurs readily in densely interconnected model networks [5,9,10]. Because synchronous neuronal firing is so easily produced, many authors have examined it as evidence of pathology, or even as a representation of seizures, in computational models [5,11,12]. Likely in order to avoid this pathologic behavior, Rasch et al. [1] imposed a strong constraint on the modeled system: it must reproduce the sparse and at times irregular firing behavior of the experimental (albeit anesthetized) animal preparation. Rasch et al. [1] allude to an important limitation of this approach: given this form of optimization toward the ‘computationally advantageous regime’, it is not clear that the model could subsequently reproduce long-range oscillatory behavior similar to the alpha rhythm of the resting awake state. Thus, in addition to helping us characterize the firing regimes of cortex and the mechanisms by which neuronal circuits give rise to these regimes, the type of computational modeling used by Rasch et al. [1] can help us translate this understanding into the investigation of conditions of clinical relevance.

The instructive work of Rasch et al. [1] highlights many of the key challenges ahead. What is the ‘appropriate’ level of abstraction at which to build models in neuroscience? Should we build models with many parameters to take into account ever more realistic aspects of the biology [13], or should we consider ‘toy models’ that aim to extract the key principles of neuronal networks [14]? Should we aim to predict the spike timing of every neuron with millisecond precision, or rather to characterize more global aspects of network behavior? To take a simple analogy, the accuracy of predicting how an object moves may benefit from a model that includes friction, object shape, object material and how, when and where forces are applied, among other variables. However, a single-parameter model that ignores many of these variables and assumes point masses (force = mass × acceleration) may take us a long way towards generalization and understanding. The theoretical physicist Richard Feynman famously wrote: “What I cannot create, I do not understand.” Similarly, theoretical efforts and computational models are essential if we are to understand the function of complex circuits of neurons. Stay tuned: there is plenty of exciting theoretical and computational work ahead.
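
To make the physics analogy concrete, here is a toy sketch of the two levels of description for one specific question, how long an object takes to fall a given height. The linear drag coefficient k is a made-up illustrative value, and neither model comes from the paper.

```python
import numpy as np

def fall_time_point_mass(h, g=9.81):
    """Idealized point-mass model: t = sqrt(2h/g), no material parameters at all."""
    return np.sqrt(2 * h / g)

def fall_time_with_drag(h, g=9.81, k=0.1, dt=1e-4):
    """More detailed model: dv/dt = g - k*v, integrated numerically (k is illustrative)."""
    t, v, x = 0.0, 0.0, 0.0
    while x < h:
        v += (g - k * v) * dt
        x += v * dt
        t += dt
    return t

print(fall_time_point_mass(10.0))        # ~1.43 s from the one-line idealization
print(fall_time_with_drag(10.0, k=0.1))  # slightly longer; the simple model was already close
```

The extra parameters refine the prediction, but the bare point-mass model already captures the dominant behavior, which is precisely the trade-off between detailed and toy models of neuronal circuits.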

References

1. Rasch M, Schuch K, Logothetis N, Maass W. Statistical comparison of spike responses to natural stimuli in monkey area V1 with simulated responses of a detailed laminar network model for a patch of V1. J. Neurophysiol. 2010 Nov 24 [Epub ahead of print].
2. Keat J, Reinagel P, Reid RC, Meister M. Predicting every spike: a model for the responses of visual neurons. Neuron. 2001;30:803–817.
3. David SV, Gallant JL. Predicting neuronal responses during natural vision. Network. 2005;16:239–260.
4. Mazzoni A, Panzeri S, Logothetis NK, Brunel N. Encoding of naturalistic stimuli by local field potential spectra in networks of excitatory and inhibitory neurons. PLoS Comput. Biol. 2008;4:1–20.
5. Traub RD, Contreras D, Cunningham MO, Murray H, LeBeau FE, Roopun A, Bibbig A, Wilent WB, Higley MJ, Whittington MA. Single-column thalamocortical network model exhibiting gamma oscillations, sleep spindles, and epileptogenic bursts. J. Neurophysiol. 2005;93:2194–2232.
6. Prinz AA, Bucher D, Marder E. Similar network activity from disparate circuit parameters. Nat. Neurosci. 2004;7:1345–1352.
7. Koch C. Biophysics of Computation. New York: Oxford University Press; 1999.
8. Kreiman G. Neural coding: computational and biophysical perspectives. Phys. Life Rev. 2004;1:71–102.
9. Suffczynski P, Lopes da Silva F, Parra J, Velis D, Kalitzin S. Epileptic transitions: model predictions and experimental validation. J. Clin. Neurophysiol. 2005;22:288–299.
10. Borgers C, Kopell N. Synchronization in networks of excitatory and inhibitory neurons with sparse, random connectivity. Neural Comput. 2003;15:509–538.
11. Anderson WS, Kudela P, Cho J, Bergey GK, Franaszczuk PJ. Studies of stimulus parameters for seizure disruption using neural network simulations. Biol. Cybern. 2007;97:173–194.
12. Santhakumar V, Aradi I, Soltesz I. Role of mossy fiber sprouting and mossy cell loss in hyperexcitability: a network model of the dentate gyrus incorporating cell types and axonal topography. J. Neurophysiol. 2005;93:437–453.
13. Markram H. The blue brain project. Nat. Rev. Neurosci. 2006;7:153–160.
14. Serre T, Kreiman G, Kouh M, Cadieu C, Knoblich U, Poggio T. A quantitative theory of immediate visual recognition. Prog. Brain Res. 2007;165C:33–56.