Given the overwhelming complexity of the brain, any model of a neural system must rest on a conceptual framework that is sufficiently parsimonious and yet neurobiologically plausible for investigating neuronal dynamics. How can we optimally investigate the functional coupling between neuronal populations, derive the mechanisms underlying synchronization of oscillatory activity and understand interactions across multiple spatial and temporal scales? These questions, which were elegantly summarized in the opening speeches by the honorary presidents of the Sendai workshop, Ryuta Kawashima and Shun-ichi Amari, are at the heart of systems neuroscience. They can be addressed by three complementary approaches that currently constitute a very active area of research. The first approach focuses on the temporal relationships of oscillatory activity in different brain regions as expressed, for example, in terms of coherence or synchronization. The motivation for this approach is that the connectivity between different neuronal populations may critically rely on coherence: oscillations of average membrane potential affect not only the output of a population but also its sensitivity to input, and therefore only coherently oscillating neuronal groups may be able to interact effectively (
Fries 2005). Despite its intuitive appeal, this mechanistic idea is still quite general and is usually tested by applying time-frequency analyses directly to measured signals, e.g. from individual EEG/MEG sensors. In order to test specific instantiations of this idea, one may want to use a parameterized model that represents distinct neurophysiological processes from lower scales that are not directly measured. This is possible with a second approach, neural mass models (NMMs;
Freeman 1972), which operate at a mesoscopic spatial scale roughly corresponding to cortical macrocolumns. NMMs represent neuronal populations by the modes of statistical distributions of their relevant neurophysiological properties, e.g. average membrane potential and average firing rate. This approach offers a parsimonious way of parameterizing and scrutinizing the neurophysiology of interacting populations, for example, in terms of the roles of neuronal cell types (pyramidal cells, inhibitory interneurons, etc.) and the properties of their connections (e.g. conduction delays, synaptic weights). Critically, if NMMs are combined with an appropriate forward model of how neural activity is expressed at the level of scalp electrodes, these parameters can even be estimated from empirical data and assessed statistically, using Bayesian inversion (
Kiebel et al. 2006) or filtering techniques (
Riera et al. 2007). However, other important questions are not easily addressed directly by NMMs. For example, it is not trivial (albeit feasible in an indirect way, see below) to model the effects of neuromodulatory transmitters, since the necessary anatomical infrastructure (e.g. transmitter-specific receptors) is below the spatial scale represented in NMMs. Questions like these are usually the domain of a third approach that uses large sets of interacting, individually modeled neurons. These are usually spiking neuron models, ranging from simple integrate-and-fire units to detailed compartmental models, which allow one to capture quite detailed aspects of neuronal dynamics, e.g. the effect that transmitter-specific ion channels or connections at different synaptic sites in the dendritic tree have on the population dynamics. However, due to the very large number of parameters involved and the strong dependencies between them, it is usually not possible to invert these models (i.e. fit them to empirical data and obtain meaningful parameter estimates). Instead, they can be used for simulations to generate predictions about the system's behavior in different domains of the parameter space (see, for example,
Husain et al. 2004 and
Deco et al. 2004).

In practice, the fact that the approaches briefly summarized above operate on different spatial scales of neuronal dynamics
^{5} means that the choice amongst them depends on the specific scientific question asked. Perhaps the most interesting challenge is to find ways of conceptually linking these approaches and bridging the scales, a challenge which was addressed by several presentations at both the Sendai and Barcelona workshops and which we will discuss further in the final section of this paper. For example, at Sendai, Olivier Bertrand (INSERM U280, Lyon, France) reported results from studies which compared the dynamics of oscillatory networks in humans measured at different spatial scales, i.e. by means of intracranial EEG and scalp EEG, respectively (
Bertrand & Tallon-Baudry 2000). In intracranial recordings of brain responses to visual and auditory stimuli, he found clear evidence for separate oscillatory processes in the beta and gamma bands. Specifically, in his experiments, beta oscillations tended to desynchronize when evoked responses and gamma oscillations were emerging, sometimes followed by a rebound of activity after gamma oscillations had returned to baseline. In contrast to the intracranial data, these oscillations were much more difficult to detect in scalp recordings with EEG and MEG. This could have been due to the existence of multiple oscillatory generators in the beta and gamma ranges: a measurable oscillatory signal may be found at the scalp level only during those periods when the generators are phase-synchronized.
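
The scalp-level argument can be illustrated with a toy calculation (our illustration, not an analysis from these studies): a sensor that sums the fields of two gamma generators sees a strong oscillation only when their relative phase is stable.

```python
import math

def scalp_rms(drift):
    """RMS of the sum of two 40 Hz generators sampled at 1 kHz;
    `drift` is the per-sample slippage (rad) of their relative phase."""
    n, f, fs = 1000, 40.0, 1000.0
    power = 0.0
    for i in range(n):
        t = i / fs
        s = math.sin(2 * math.pi * f * t) + math.sin(2 * math.pi * f * t + drift * i)
        power += s * s
    return math.sqrt(power / n)

locked = scalp_rms(0.0)     # phase-locked generators: fields add constructively
drifting = scalp_rms(0.05)  # drifting relative phase: fields partially cancel on average
```

With perfect locking the summed RMS is sqrt(2) times that of a single generator; with a drifting phase the cross-term averages out and the oscillation is much weaker at the sensor, even though both generators are equally active.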

Karl Friston (Wellcome Trust Centre for Neuroimaging, London) presented recent developments in Dynamic Causal Modeling (DCM), a general framework for making inferences about processes at the neural level given measured imaging data (see
Friston et al. 2003 for the first paper on DCM and
Stephan et al. 2007a for a recent review). For EEG/MEG data, for example, DCM is based on a nonlinear NMM of interacting cortical columns consisting of pyramidal cells, inhibitory interneurons and spiny stellate cells (
David et al. 2006). This model can be used for investigating a wide range of questions at different spatial and temporal scales. For example, one can probe the role of different neuron types and their connections for oscillatory activity and coherence (
David & Friston 2003), the impact of time constants or inter-regional conduction delays on steady-state frequency spectra (
Moran et al. 2007b) or the magnitude of synaptic strengths and spike-frequency adaptation during pathophysiological processes (
Moran et al. 2007a). By enabling statistical inference about (unobserved) neural processes at small spatial scales, DCM can thus provide mechanistic accounts of spatially large-scale phenomena, measured at the sensor level.
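
The elementary building block of such convolution-based NMMs is a second-order synaptic kernel: presynaptic firing is convolved with an impulse response v(t) = (H/tau) * t * exp(-t/tau), equivalently the ODE v'' = (H/tau)u - (2/tau)v' - v/tau^2, where H scales the response amplitude and tau its time constant. A sketch with typical excitatory values (H = 3.25 mV, tau = 10 ms); this is a generic illustration of the kernel form, not DCM itself:

```python
import math

def synaptic_response(H=3.25, tau=0.01, dt=1e-5, t_end=0.1):
    """Euler-integrate v'' = (H/tau)*u - (2/tau)*v' - v/tau**2 for an impulse input u."""
    v, w = 0.0, 0.0                        # postsynaptic potential and its derivative
    trace = []
    for i in range(int(t_end / dt)):
        u = 1.0 / dt if i == 0 else 0.0    # unit impulse at t = 0
        dw = (H / tau) * u - (2.0 / tau) * w - v / tau ** 2
        v, w = v + dt * w, w + dt * dw
        trace.append(v)
    return trace

tr = synaptic_response()
# analytic impulse response: v(t) = (H/tau) * t * exp(-t/tau), peaking at t = tau with value H/e
```

Because the lumped constants H and tau summarize receptor kinetics and synaptic density, estimating them from data is how such models reach down toward synaptic physiology.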

Regardless of the spatio-temporal scale of interest, a central question for all models of effective connectivity is how causal relationships amongst neuronal populations are best inferred mathematically. For example, DCM uses deterministic delay differential equations whose parameters are estimated from measured data using variational Bayesian inversion (
Friston et al. 2007). Two other speakers presented alternative approaches for characterizing effective connectivity. Tohru Ozaki (Institute of Statistical Mathematics, Japan) proposed to use innovation methods to explore causal relations based on a voxel-wise search strategy. He presented this method in the general context of heteroscedastic state space modelling and filtering techniques. Pedro Valdes-Sosa (Cuban Neuroscience Center, Havana) presented a methodology that involved the use of Granger causality on spatial manifolds. He proposed a multivariate autoregressive model for EEG/fMRI data and based its parameter estimation on a minorization-maximization (MM) algorithm (Valdes-Sosa et al. 2005), using a combination of different penalty functions to ensure a balance between sparseness and smoothness of cortical connectivity.
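
The Granger idea underlying such multivariate autoregressive approaches can be sketched in a few lines. This is a deliberately minimal bivariate, one-lag version with ordinary least squares on synthetic data, not the penalized high-dimensional estimator discussed above: x "Granger-causes" y if the past of x reduces the error of predicting y beyond what y's own past achieves.

```python
import math
import random

# Synthetic data: x drives y with one sample of delay; y does not drive x.
random.seed(1)
n = 4000
x, y = [0.0], [0.0]
for _ in range(n - 1):
    x.append(0.5 * x[-1] + random.gauss(0.0, 1.0))
    y.append(0.5 * y[-1] + 0.8 * x[-2] + random.gauss(0.0, 1.0))

def rss_ar(z):
    """Residual sum of squares of the OLS fit z_t = a * z_{t-1}."""
    num = sum(z[t] * z[t - 1] for t in range(1, len(z)))
    den = sum(z[t - 1] ** 2 for t in range(1, len(z)))
    a = num / den
    return sum((z[t] - a * z[t - 1]) ** 2 for t in range(1, len(z)))

def rss_ar_plus(z, u):
    """RSS of z_t = a*z_{t-1} + b*u_{t-1}, solving the 2x2 normal equations."""
    Szz = sum(z[t - 1] ** 2 for t in range(1, len(z)))
    Suu = sum(u[t - 1] ** 2 for t in range(1, len(z)))
    Szu = sum(z[t - 1] * u[t - 1] for t in range(1, len(z)))
    Tz = sum(z[t] * z[t - 1] for t in range(1, len(z)))
    Tu = sum(z[t] * u[t - 1] for t in range(1, len(z)))
    det = Szz * Suu - Szu ** 2
    a = (Tz * Suu - Tu * Szu) / det
    b = (Tu * Szz - Tz * Szu) / det
    return sum((z[t] - a * z[t - 1] - b * u[t - 1]) ** 2 for t in range(1, len(z)))

# Log-ratio of residual variances: > 0 means the other signal's past helps.
gc_x_to_y = math.log(rss_ar(y) / rss_ar_plus(y, x))
gc_y_to_x = math.log(rss_ar(x) / rss_ar_plus(x, y))
```

On this synthetic system, gc_x_to_y is clearly positive while gc_y_to_x stays near zero, recovering the directed influence built into the data.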

Michael Breakspear (University of New South Wales, Sydney, Australia) presented a neural field model
^{6} for multiscale spatio-temporal analyses of human epilepsy data. In the spatial domain, he explored the influence of global (between-population) coupling on local (within-population) dynamics. In the temporal domain, he compared modeling results to EEG data from patients with primary generalized seizures and demonstrated, using bifurcation analysis of the model, how cortical activity at different temporal scales was coupled in a nonlinear and dynamic fashion, leading to potential instabilities and seizures (
Breakspear et al. 2006b).
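
The logic of such a bifurcation analysis can be illustrated with a toy model: sweep a control parameter and record the post-transient oscillation amplitude; an abrupt jump marks the transition from quiescence to large-amplitude (seizure-like) activity. Here a single FitzHugh-Nagumo unit with its standard parameters stands in for the field model; this is a generic caricature, not Breakspear's equations:

```python
def amplitude(I, dt=0.01, n=60000, a=0.7, b=0.8, eps=0.08):
    """Post-transient peak-to-peak amplitude of v under constant drive I."""
    v, w = -1.0, -0.5
    lo, hi = float("inf"), float("-inf")
    for i in range(n):
        v, w = (v + dt * (v - v ** 3 / 3.0 - w + I),
                w + dt * eps * (v + a - b * w))
        if i > n // 2:                      # discard the transient
            lo, hi = min(lo, v), max(hi, v)
    return hi - lo

quiescent = amplitude(0.0)   # stable fixed point: near-zero amplitude
seizing = amplitude(0.5)     # past the Hopf bifurcation: large-amplitude oscillation
```

The qualitative point carries over to the field model: a small parameter change can move cortical dynamics across a bifurcation, so instability arises from the dynamics rather than from any change in the input.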

Gustavo Deco (Universitat Pompeu Fabra, Barcelona, Spain) presented a model of interacting cortical areas, each of which consisted of multiple populations of biophysically realistic integrate-and-fire neurons. Using two complementary analytical approaches, he investigated the neurophysiological mechanisms underlying biased competition during attention (
Deco & Rolls 2005) and decision-making (
Deco et al. 2007). In an analysis of stationary dynamics, he used a mean-field reduction, effectively treating the model as an NMM, to investigate how different operational regimes of the network depended on the values of various model parameters. Additionally, he investigated the nonstationary dynamic behavior of the neuronal spiking rates, using the full integrate-and-fire model (i.e. numerical integration without any mean-field reduction). Together, these two approaches enabled him to draw some rather fine-grained conclusions. For example, with regard to attention, the model explained why backward connections between cortical areas should be about 2.5 times weaker than the corresponding forward connections. Furthermore, this analysis showed that top-down attentional effects can be explained in terms of shifting the neurons' nonlinear activation function (i.e. firing rate as a function of input current). Thus, the model offered new insights into possible mechanisms of attention, going beyond the classical “biased competition” hypothesis, and showed that attention can be seen as a dynamical process that emerges implicitly from a neuronal multi-attractor network.
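
The activation-shift account of attention can be illustrated with a toy sigmoid (illustrative parameter values, not Deco's mean-field equations): a small top-down bias current shifts the effective activation function, which disproportionately boosts responses to weak, near-threshold stimuli while leaving responses to strong stimuli almost unchanged.

```python
import math

def rate(I, I_half=5.0, slope=1.0, r_max=100.0):
    """Sigmoidal activation: firing rate (Hz) as a function of input current (a.u.)."""
    return r_max / (1.0 + math.exp(-slope * (I - I_half)))

bias = 1.5                      # hypothetical attentional bias current
weak, strong = 3.0, 8.0         # stimulus-driven input currents
gain_weak = rate(weak + bias) / rate(weak)        # large boost near threshold
gain_strong = rate(strong + bias) / rate(strong)  # near saturation: little boost
```

An additive current thus produces an apparently multiplicative, stimulus-dependent gain, which is one way a dynamical account can subsume the classical biased-competition description.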

This work by Deco and colleagues demonstrates that there are important points of contact between NMMs operating on a mesoscopic scale and biophysically more detailed and fine-grained models, like ensembles of Hodgkin-Huxley or integrate-and-fire neurons. First, as in the example above, NMMs can be derived from a mean-field reduction of ensemble activity on a microscopic scale (cf.
Deco et al. 2005;
Loh et al. 2007). Second, given a careful parameterization of the model and suitable experimental manipulation, NMMs can indirectly assess certain aspects of neuronal dynamics whose structural substrates are located at a microscopic scale. As an example,
Liljenstrom & Hasselmo (1995) and
Moran et al. (2007a) have shown how NMMs can be used to indirectly investigate processes at a microscopic level, e.g. how specific changes in neurotransmission alter the spike-frequency adaptation of neurons. Third, models consisting of large ensembles of biophysically realistic neurons can be used to establish the construct validity of NMMs. For example,
Lee et al. (2006) used the detailed biophysical model of
Tagamets & Horwitz (1998) to generate synthetic fMRI data; subsequently, they verified that a simple NMM (i.e. DCM) was able to recover the mechanisms by which the data were generated. Finally, as pointed out in the presentation by Karl Friston mentioned above, one of the goals of the ongoing development of DCM is to construct models that bridge mesoscopic and microscopic scales. For example, such models could be based on a simplified variant of the biophysically grounded parameterization of Hodgkin-Huxley or integrate-and-fire models. One of the main challenges will be to find a suitable set of prior densities that avoid problems with parameter interdependencies and model inversion.
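
As a minimal illustration of what a mean-field reduction buys (a toy example under simplifying assumptions, not any of the models above): for a leaky integrate-and-fire neuron driven by constant suprathreshold current, the firing rate has a closed form, so the mean rate of a heterogeneous ensemble can be predicted without simulating a single spike.

```python
import math

TAU, V_TH, V_RESET = 0.02, 1.0, 0.0   # membrane time constant (s), threshold, reset

def rate_analytic(I):
    """Closed-form LIF firing rate (Hz) for constant drive I > V_TH (no noise, no refractoriness)."""
    return 1.0 / (TAU * math.log(I / (I - V_TH)))

def rate_simulated(I, dt=1e-5, t_end=5.0):
    """Brute-force spike counting for the same neuron."""
    v, spikes = 0.0, 0
    for _ in range(int(t_end / dt)):
        v += dt * (I - v) / TAU
        if v >= V_TH:
            v, spikes = V_RESET, spikes + 1
    return spikes / t_end

drives = [1.2, 1.5, 2.0, 3.0]         # hypothetical spread of drives across the ensemble
mean_field = sum(rate_analytic(I) for I in drives) / len(drives)
ensemble = sum(rate_simulated(I) for I in drives) / len(drives)
```

The two estimates agree to within a few percent here; real mean-field reductions of noisy, coupled ensembles are of course far more involved, but the trade-off is the same: analytic tractability and few parameters at the cost of microscopic detail.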