For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic or adapting responses, depolarizing or hyperpolarizing afterpotentials and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model’s rich behavior does not come from the differential equations, which are linear, but rather from complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms for which the firing times are obtained as a solution of a polynomial equation.
The leaky integrate-and-fire model for neurons is often used to model spiking neuronal networks as it captures some fundamental properties of neurons (integration, spikes), the model parameters are intuitive, and it can be implemented easily and efficiently in simulations. However, there are many properties of biological neurons that this model does not reproduce. Many generalizations of the integrate-and-fire model have been proposed (for a review, see Izhikevich, 2004). Like the original integrate-and-fire model, these generalizations are hybrid systems that consist of a set of ordinary differential equations together with a set of update rules. A well-known example (Izhikevich, 2003) increases the complexity of the differential equations, which, by generating several types of bifurcations, results in a model that reproduces a large number of observed spiking patterns. However, the introduction of these features comes at a cost: it is computationally more expensive to implement in clock-driven simulators than the standard integrate-and-fire model, and it is not easily integrated in event-driven simulators.
Event-driven algorithms (e.g., Mattia & Giudice, 2000) provide an efficient way to simulate spiking neuronal networks exactly (without approximation) if the state variables of a neuron can be computed from an arbitrary time in the past in the absence of events. Previous models have generally used integrate-and-fire neurons, since the time dependence of the state variable is analytically solvable. Once state variables are propagated, the solutions are used to compute the time of the next spike (event). This solution for event times depends on the model for synapses. Recently, Brette (2007) showed that for integrate-and-fire neurons with current synapses, the event times can be obtained from the solutions of a polynomial equation. It is possible to implement more complex neuronal models with event-driven algorithms using a lookup table (Ros, Carrillo, Ortigosa, Barbour, & Agis, 2006), but this requires interpolation and comes at the cost of losing precision. It has also recently been proven that some nonlinear integrate-and-fire models have analytical solutions for the time dependence of the state variables (Tonnelier, Belmabrouk, & Martinez, 2007). However, these solutions apply only to quadratic or exponential integrate-and-fire neurons, and when synapses with exponentially decaying currents are used, only one timescale is used. The quadratic integrate-and-fire model provides some additional features to a linear integrate-and-fire model: spike latency, bistability, and threshold variability. However, it lacks many other features that are desirable for realistic simulations (Izhikevich, 2004).
In order to model adaptation, a method frequently used is to add voltage-dependent currents (for a review, see Burkitt, 2006). While this is an accurate representation of the biological system, it usually prevents diagonalization (over reals) of the system of differential equations. Another option is to consider that the threshold depends on the time of the previous spike only (Jolivet, Lewis, & Gerstner, 2004). While being analytically tractable in the formalism of a spike response model, this form of threshold adaptation lacks the rich phenomenology resulting from accommodation to subthreshold stimuli.
To include the effects of adaptation, we use a voltage-dependent threshold (Hill, 1936) in our model. This can be viewed as applying the effects of voltage-dependent currents to the threshold rather than to the voltage. Dodla, Svirskis, and Rinzel (2006) have shown recently that a voltage-dependent threshold produces postinhibitory facilitation. We here demonstrate that models of this class allow a much richer phenomenology: in addition to adaptation, the voltage-dependent threshold produces accommodation, threshold variability, rebound spikes, tonic and phasic spiking, and other features of biological neurons. Perhaps more surprising, in a different regime of parameters, it can also produce hyperpolarization-induced activity or threshold latency.
To model afterpotentials, we rely on spike-induced currents (Hille, 1992). Burstiness is obtained as a consequence of depolarizing afterpotentials. It can be combined with the adaptation. Resonating (intrinsically oscillating) neurons are traditionally modeled based on subthreshold oscillations. Our model does not include subthreshold oscillations; however, neurons with a preferred frequency can be obtained by combining excitatory and inhibitory spike-induced currents.
In this study we introduce a model based on a diagonalizable set of linear differential equations, which produces a large repertoire of firing patterns. Since the solution of the system of differential equations is a sum of exponentials, it can be simulated using the same tools used for the leaky integrate-and-fire model. The complex firing patterns obtained from the model are a result of suitably chosen boundary conditions and update rules.
The state of the neuron is described by a set of variables: membrane voltage V, instantaneous threshold Θ, and an arbitrary number of internal currents Ij. The model consists of a linear differential equation and an update rule for each variable. The coefficients of the differential equations form a triangular matrix, which allows sequential solving of the set of equations. Internal currents are exponentially decaying and do not depend on other variables, the evolution of the membrane voltage depends on the internal currents, and the evolution of the instantaneous threshold depends on the membrane voltage. In the presence of a constant external current Ie, the time evolution of the state variables is

dIj/dt = −kj Ij,  j = 1, …, N,
C dV/dt = Ie + Σj Ij − G (V − EL),
dΘ/dt = a (V − EL) − b (Θ − Θ∞),  (2.1)

where C is the membrane capacitance, G the membrane conductance, EL the resting potential, Θ∞ the resting value of the threshold, kj the decay rates of the internal currents, a the coupling of the threshold to the membrane potential, and b the rate at which the threshold relaxes to Θ∞.
The update rule, which applies if V(t) = Θ(t), changes the values of the state variables. Any update rule can be used as long as after the update, the membrane potential V(t) is smaller than the instantaneous threshold potential Θ(t) (we refer to the set of state variables with V(t) < Θ(t) as the region of accepted values). The set of update rules we used is

Ij → Rj Ij + Aj,
V → Vr,
Θ → max(Θr, Θ),  (2.2)
where the parameters Rj, Aj, Vr, and Θr can be chosen freely, subject only to the constraint Θr > Vr. For the spike-induced currents, the choice of Rj = 0 leads to a constant update function (the value of the current after the spike is independent of the value of the current before the spike). For another choice, Rj = 1, the update function is additive. For all examples shown in this letter, we use at most two spike-induced currents (N ≤ 2 in equation 2.1).
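To make the hybrid structure concrete, the dynamics and update rules can be sketched in a few lines of Python. This is a minimal clock-driven sketch under assumed equation forms; the names EL and Theta_inf (resting potential and resting threshold) are assumptions, and the parameter values are illustrative, not those of Table 1.

```python
import numpy as np

# Minimal clock-driven sketch of the model. The ODE form and parameter names
# (EL, Theta_inf, etc.) are assumptions based on the description in the text:
# exponentially decaying spike-induced currents I_j, a leaky membrane driven
# by Ie + sum(I_j), and a threshold Theta that relaxes to Theta_inf while
# tracking V with gain a. Parameter values are illustrative.

def simulate(T=0.2, dt=1e-4, Ie=1.5e-9,
             C=1e-9, G=50e-9, EL=-0.070,            # capacitance, leak, rest
             a=0.0, b=10.0, Theta_inf=-0.050,       # threshold dynamics
             k=(200.0, 20.0),                       # decay rates of I_1, I_2
             R=(0.0, 1.0), A=(0.0, 0.0),            # update-rule parameters
             Vr=-0.070, Theta_r=-0.060):
    I = np.zeros(len(k))
    V, Theta = EL, Theta_inf
    spikes = []
    for step in range(int(T / dt)):
        # forward-Euler step of the linear ODEs
        dI = -np.asarray(k) * I
        dV = (Ie + I.sum() - G * (V - EL)) / C
        dTheta = a * (V - EL) - b * (Theta - Theta_inf)
        I, V, Theta = I + dt * dI, V + dt * dV, Theta + dt * dTheta
        if V >= Theta:                              # firing: apply update rule
            spikes.append(step * dt)
            I = np.asarray(R) * I + np.asarray(A)   # reset spike-induced currents
            V = Vr                                  # reset membrane potential
            Theta = max(Theta_r, Theta)             # lower-bounded threshold reset
    return spikes
```

With the defaults above (A = 0, a = 0) this reduces to a leaky integrate-and-fire neuron with a fixed threshold; nonzero A produces afterpotentials, and a depolarizing/hyperpolarizing pair of spike-induced currents yields bursting.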
The system of ordinary differential equations, equation 2.1, has one stationary solution:

Ij,st = 0,  Vst = EL + Ie/G,  Θst = a Ie/(b G) + Θ∞.  (3.1)
The Jacobian matrix for equation 2.1 has the eigenvalues −kj, −G/C, −b. If kj, G/C, and b are all positive, the stationary solution, equation 3.1, is a stable node. As we will see, this stationary solution dominates the phase space of equation 2.1.
If the values for kj, G/C, and b are all distinct, the solution to equation 2.1, with initial conditions Ij(0) = Ij0, V(0) = V0, Θ(0) = Θ0, is:

Ij(t) = Ij0 exp(−kj t),
V(t) = Vst + Σj [Ij0/(G − C kj)] exp(−kj t) + α exp(−(G/C) t),
Θ(t) = Θst + Σj [a Ij0/((G − C kj)(b − kj))] exp(−kj t) + [a α/(b − G/C)] exp(−(G/C) t) + β exp(−b t),  (3.2)

where α = V0 − Vst − Σj Ij0/(G − C kj) and β = Θ0 − Θst − Σj a Ij0/((G − C kj)(b − kj)) − a α/(b − G/C).
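Between spikes, the state at an arbitrary later time can thus be evaluated in closed form as a sum of exponentials. The following sketch assumes ODEs of the form dIj/dt = −kj Ij, C dV/dt = Ie + Σj Ij − G (V − EL), dΘ/dt = a (V − EL) − b (Θ − Θ∞), with EL and Θ∞ the assumed resting potential and resting threshold; the eigenvalues −kj, −G/C, and −b must all be distinct.

```python
import numpy as np

# Sketch of the closed-form (sum-of-exponentials) propagation between spikes.
# The parameter names (EL, Theta_inf) are assumptions; the eigenvalues
# -k_j, -G/C, -b must all be distinct for this form to hold.

def propagate(t, I0, V0, Theta0, Ie, C, G, EL, a, b, Theta_inf, k):
    k = np.asarray(k)
    I0 = np.asarray(I0)
    lamV = G / C
    Vst = EL + Ie / G                         # stationary membrane potential
    Theta_st = Theta_inf + a * Ie / (b * G)   # stationary threshold
    Aj = I0 / (G - C * k)                     # V amplitudes driven by each I_j
    alpha = V0 - Vst - Aj.sum()               # homogeneous V amplitude
    Bj = a * Aj / (b - k)                     # Theta response to each I_j term
    bv = a * alpha / (b - lamV)               # Theta response to the V transient
    beta = Theta0 - Theta_st - Bj.sum() - bv  # homogeneous Theta amplitude
    I = I0 * np.exp(-k * t)
    V = Vst + (Aj * np.exp(-k * t)).sum() + alpha * np.exp(-lamV * t)
    Theta = (Theta_st + (Bj * np.exp(-k * t)).sum()
             + bv * np.exp(-lamV * t) + beta * np.exp(-b * t))
    return I, V, Theta
```

Because the state at any later time follows from a single function evaluation, this is exactly the propagation step an event-driven simulator would perform between events.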
How can this simple, linear, diagonalizable system of ordinary differential equations reproduce a wide variety of complex neuronal behaviors? The answer relies on the restrictions applied on the variable values (the membrane potential has to be below threshold) and on the update rules (if the membrane potential reaches the threshold, the state variables are changed according to equation 2.2).
If the stationary solution, equation 3.1, is outside the region of accepted values, that is, if

Θst ≤ Vst,

the model will produce a tonic response. The sufficient condition for a tonic response is therefore

(b − a) Ie / (b G) ≥ Θ∞ − EL.  (3.3)
If a < b, the neuron has a tonic response to an excitatory current:

Ie ≥ b G (Θ∞ − EL) / (b − a).
If a > b, the neuron has a tonic response to an inhibitory current:

Ie ≤ b G (Θ∞ − EL) / (b − a).
If the stationary solution is inside the region of accepted values, the neuron is responsive (there are initial conditions for which the neuron will fire) if on the line V = Θ there are points at which the vector describing the evolution of the system points outward from the region of accepted values (toward V > Θ). For any point on the line V = Θ, the outward-pointing normal in the (I1, …, IN, V, Θ) coordinates is proportional to (0, …, 0, 1, −1). The condition for the neuron to be responsive is: there exist V, Ij such that

(Ie + Σj Ij)/C − (G/C + a − b)(V − EL) − b (Θ∞ − EL) > 0.  (3.4)
If a − b + G/C ≠ 0, there are values for V for which condition equation 3.4 is satisfied irrespective of the values for Ij. Therefore, if the stationary solution is inside the region of accepted values, depending on the initial conditions and on the update rules, the neuron can respond in a phasic or tonic fashion or not respond at all.
Brette (2007) has pointed out that since the equation V(t) = Θ(t) is a sum of exponentials in t, it can be reduced to a polynomial equation. Consider kj, G/C, and b to be rational numbers and k their greatest common divisor, such that nj = kj/k, ng = G/(Ck), and nb = b/k are all natural numbers. With the transformation x = exp(−kt), each exponential in equation 3.2 becomes an integer power of x (exp(−kj t), exp(−(G/C)t), and exp(−bt) become x^nj, x^ng, and x^nb, respectively), so the time of the next spike is obtained by numerically solving the polynomial equation D(x) = 0, where D(x) is V(t) − Θ(t) expressed in powers of x.
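The root-finding step can be sketched as follows; the function and argument names are illustrative. Writing V(t) − Θ(t) as c0 + Σi ci exp(−mi k t) with integer multiples mi, the substitution x = exp(−kt) yields a polynomial whose largest real root in (0, 1) corresponds to the earliest future crossing:

```python
import numpy as np

# Hedged sketch of the substitution x = exp(-k*t). Times t > 0 correspond to
# x in (0, 1), and the earliest threshold crossing is the largest such real
# root of the polynomial D(x). Names are illustrative.

def next_spike_time(c0, coeffs, multiples, k):
    """Solve c0 + sum_i coeffs[i] * x**multiples[i] = 0 with x = exp(-k*t)."""
    deg = max(multiples)
    poly = np.zeros(deg + 1)        # numpy.roots expects highest degree first
    poly[deg] = c0                  # constant term
    for c, m in zip(coeffs, multiples):
        poly[deg - m] += c
    roots = np.roots(poly)
    real = roots[np.abs(roots.imag) < 1e-12].real
    valid = real[(real > 0) & (real < 1)]
    if valid.size == 0:
        return None                 # no future threshold crossing
    return -np.log(valid.max()) / k
```

For a plain leaky integrate-and-fire neuron, V(t) − Θ = (V∞ − Θ) + (V0 − V∞) exp(−t/τ) gives a degree-1 polynomial, and the root reproduces the textbook crossing time τ ln((V∞ − V0)/(V∞ − Θ)).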
The leaky integrate-and-fire neuronal model produces tonic spiking in response to a depolarizing external current (see Figure 1A; the values of the parameters and external currents for all simulations are summarized in Table 1). With currents infinitesimally above threshold, the model neurons fire with an infinitesimal firing rate, as shown in Figure 1B. With an increase in input, the output of the model monotonically increases. A neuron with a continuous input-output curve is classified as class 1 excitable (Hodgkin, 1948). This is a result of the stationary solution for the membrane potential being near the fixed threshold. To reach the threshold, the system has to go through points that have infinitesimal time derivatives for the membrane potential.
If the coefficient a, which describes the dependence of the threshold on the membrane potential, is between 0 and b, several types of adaptive behaviors are obtained. If the external current is sufficiently large to satisfy equation 3.3, the neuron produces a tonic response with spike-frequency adaptation (see Figure 1C). For a smaller current, the neuron can produce phasic spiking. This simulation is presented both as a time evolution, Figure 1D, and in phase space, Figure 2A. There is a range of currents just below what is needed to satisfy equation 3.3 for which the neuronal response can be bistable (see Figure 1J). The neuron can respond tonically, or not respond at all, depending on initial conditions. A jump in the external current, divided by the neuron’s capacitance, to 1.5 V/s and then 1.7 V/s results in a tonic response. However, adaptation for 0.1 s at 1.5 V/s is sufficient to prevent any response to a subsequent 1.7 V/s input.
If the current is the minimum needed to satisfy equation 3.3, the points corresponding to equations 3.1 and 3.4 coincide. Depending on the parameter chosen for Θr, the neuron can produce either a Class 1 or a Class 2 response. A Class 2 response is characterized by a nonzero firing rate at threshold (as shown in Figure 1H). If applying the update rule at the stationary solution brings the system back to the stationary solution before the boundary is crossed, the neuron is Class 1.
A value for the coefficient a between 0 and b leads to accommodation: a sudden shift in the external current produces spikes, while a gradual rise to the same external current value does not (see Figure 1E). If, in addition, the recovery of the membrane potential is faster than the recovery of the threshold (G/C > b), it also leads to threshold variability: a subthreshold stimulation can produce a response if it follows a hyperpolarization (see Figure 1F). A more extreme version of threshold variability, in which a response is obtained in the absence of stimulation following hyperpolarization, can be obtained as well, as shown in Figure 1G. This model retains the integrative property of the leaky integrate-and-fire neuron (see Figure 1I).
A feature more difficult to reproduce is inhibition-induced firing (also called anode break). It characterizes a neuron that fires when hyperpolarized. This behavior can be obtained in our model by considering the effect of the membrane potential on the threshold potential to be larger than the rebound factor for the threshold potential, that is, a > b (see Figure 1K). For hyperpolarization-induced spiking, both the membrane potential and the instantaneous threshold potential are below resting potential. Thus, an update rule that leaves the threshold potential unchanged and resets the membrane potential to its resting value would leave the membrane potential above the threshold immediately following the update. One simple solution to this problem is to impose a lower bound on the update of the threshold potential, as in equation 2.2.
By modifying the parameters for the spike-induced currents, it is possible to produce afterpotentials that are either depolarizing or hyperpolarizing (see Figure 1Q). By choosing a short, large depolarizing afterpotential and a longer but weaker hyperpolarizing afterpotential, bursting behavior is introduced (see Figure 1M). For a burst to stop, the update rule for the hyperpolarizing spike-induced current cannot be constant; we therefore used constant updating for I1 and additive updating for I2. If we keep the same parameters for the voltage dependence of the threshold while adding the spike-induced currents, the external current that previously produced tonic spiking produces tonic bursting (see Figure 1M), the one that produced phasic spiking produces phasic bursting (see Figure 1N), and the one that resulted in a rebound spike produces a rebound burst (see Figure 1O). Using the parameters for threshold adaptation that result in hyperpolarization-induced firing, while adding the spike-induced currents, results in hyperpolarization-induced bursting (see Figure 1L).
When the amplitude of the spike-induced currents is decreased, the neuron reaches a mixed mode in which it initially bursts; however, it continues to respond tonically with single spikes (see Figure 1P).
With a short-term depolarizing spike-induced current and a long-term hyperpolarizing spike-induced current, tonic response without external input can be obtained if the hyperpolarizing current is sufficiently small (see Figure 1R). As for bursting, we used a constant update rule for the depolarizing current and an additive update rule for the hyperpolarizing current. A short pulse that produces one or a few spikes generates transitions from the down state to the up state. A similar or longer pulse, if it arrives during the up state, would cause an accumulation of hyperpolarizing currents, resulting in a transition to the down state.
Integration is an essential feature of neurons and is well reproduced by the integrate-and-fire neuronal model. However, there are neurons that preferentially respond to stimuli of a given frequency, perhaps most prominently in the hair cells of the peripheral auditory system (Crawford & Fettiplace, 1981; Fettiplace, 1987) but also in neocortex (Hutcheon & Puil, 1992). Such bandpass-like behavior has been suggested to be of use for processing frequency-modulated spike trains and segregating information contained in their oscillatory components from that represented by their mean firing rates (Niebur, Koch, & Rosin, 1993). To simulate frequency-selective neurons, we consider a short hyperpolarizing spike-induced current that will reduce responses at frequencies that are too high and a depolarizing spike-induced current that will enhance the response at the correct frequency. In the preferred frequency panel presented in Figure 1S, the frequencies lower than the preferred one are reduced by the threshold adaptation caused by the afterpotential at the preferred frequency. A similar result can be obtained by considering a third hyperpolarizing current with a longer timescale.
A feature that is relatively difficult to simulate using a linear model is spike latency: following a brief depolarizing pulse, the neuron fires not during the pulse but after it. We simulate spike latency in this model using negative threshold adaptation: with an increase in membrane potential, the threshold value decreases. The brief pulse will produce a depolarization, which is not sufficient to cross the threshold initially. However, if, following the depolarization, the threshold decreases faster than the membrane potential rebounds, it is possible to produce a delayed spike, as shown in Figure 1T.
Neurons have many degrees of freedom (e.g., voltages, ion concentrations, receptor states), and one of the central tasks of computational modeling is to identify the variables essential for understanding the behavior of the system under study. Among the point neuron models (those not taking into account explicitly the morphological complexity of the neuron), the crudest (e.g., McCulloch-Pitts, leaky and nonleaky integrate-and-fire) attempt to characterize the state of a neuron by one variable, usually thought of as the transmembrane voltage. While this approach has proven useful in many cases, the addition of one variable (as in the Fitzhugh-Nagumo, Morris-Lecar, and Izhikevich models, for example) enormously increases the behavioral space of the model.
Most of these models try to maximize the number of observed computational features that can be captured while keeping the number of state variables small. This usually requires the introduction of nonlinearities (Izhikevich, 2003; Brette & Gerstner, 2005; Touboul, 2008). In the approach presented here, we instead expand the integrate-and-fire model by using a slightly larger number of state variables but keeping the system of differential equations linear and diagonalizable. Therefore it can be simulated with algorithms very similar to the leaky integrate-and-fire model.
Treves (1993) modeled Ca2+-dependent opening of K+ channels as spike-induced changes in conductance and showed that this results in spike-frequency adaptation. While accurate in predicting spike-frequency adaptation (Liu & Wang, 2001), the mechanism is nonlinear. Another way to model spike-frequency adaptation is to consider a variable threshold (Holden, 1976; Tuckwell, 1978), with a threshold jump following a spike. To model spike-frequency adaptation, we used an accommodating threshold. The voltage-dependent (accommodating) threshold was introduced by Hill (1936) and used previously by Niebur and Koch (1994). Dodla et al. (2006) have shown that a leaky integrate-and-fire model with accommodating threshold shows postinhibitory facilitation and phasic spiking. In addition, we show that neurons with voltage-dependent threshold can show hyperpolarization-induced spiking and spike latency. We also show that since the equation for the threshold is linear and, together with the equation for membrane potential the system of equations is diagonalizable over reals, the time to the next spike can be obtained as a solution of a polynomial equation.
Another behavior requiring explanation is bursting activity. One biophysical mechanism leading to bursting is Ca2+ influx via the T-type voltage-gated calcium channels (Rinzel & Lee, 1987; Smith, Cox, Sherman, & Rinzel, 2000). In our model, bursting is obtained using spike-induced currents. To obtain a quantitative fit to the underlying biophysics, it is sufficient to estimate the total current through T-type voltage-gated calcium channels following a spike and the Ca2+-dependent currents it produces, and to fit these currents with an exponential decay. The initial value and the decay time are then used as parameters for the spike-induced current in the model presented in this article. Bursting may also be the result of persistent Na+ currents (Butera, Rinzel, & Smith, 1999), and the importance of this current for bursting in neocortical networks in vitro has been shown by van Drongelen et al. (2006). The spike-induced currents in our model are a natural implementation of the persistent Na+ current.
Spike-induced currents similar to those leading to bursting but smaller in magnitude lead to a mixed mode. A relative refractory period can be introduced by adding a fast hyperpolarizing spike-induced current, like the fast afterhyperpolarizing voltage-gated K+ current IAHP (Hille, 1992).
The phenomenology of our model is comparable to the widely used model introduced by Izhikevich (2003). One limitation of the linear model when compared to the model by Izhikevich (2003) is the incapacity to reproduce subthreshold oscillations. It is possible to mimic the effect of subthreshold oscillations by using multimodal afterpotentials caused by spike-induced currents. This gives the possibility of simulating neurons with preferred frequencies. In this model, a preferred spiking frequency is obtained from a resonance with the afterpotentials rather than from subthreshold oscillations.
An attractive property of our model is that features obtained from the voltage-dependent threshold, like a rebound spike, are essentially independent of those resulting from the spike-induced currents, like bursting. Thus, it is to some extent possible to combine these mechanisms and generate more complex patterns. For instance, we can choose parameters for the voltage dependence of the threshold that generate a rebound spike and combine them with parameters for spike-induced currents that produce bursting, to obtain a neuron producing a rebound burst. We also note that sometimes the same behavioral feature can result from modifications of different equations. For instance, spike-frequency adaptation can be obtained by adapting the threshold or, alternatively, by considering a long-term hyperpolarizing spike-induced current with additive updating.
This neuronal model can be implemented in network simulations using event-driven, grid-based, as well as hybrid (Morrison, Straube, Plesser, & Diesmann, 2007) algorithms, if the synapses are modeled as exponentially decaying currents. To characterize the time evolution, the synaptic currents can simply be added to the list of spike-induced currents in equation 3.2.
It is possible to increase the complexity of the model further without modifying any of the described algorithms by increasing the complexity of the update rules, equations 2.2. A simple example is to introduce dependence of the spiking-induced currents on the membrane potential before the firing in a conductance-like fashion. Since the impact of the complexity of the update rules on the complexity of the simulation is very small, this is an easy place to start if additional features need to be included in the simulation.
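As one hypothetical example of such an extension (the conductance-like parameters below are invented here for illustration and are not defined in the text):

```python
# Hypothetical sketch of a richer update rule. The names g_j and E_j are
# invented for illustration: the spike-induced current update depends on the
# membrane potential just before firing, in a conductance-like fashion:
#     I_j -> R_j * I_j + A_j + g_j * (E_j - V_pre)

def conductance_like_update(I, V_pre, R, A, g, E):
    return [Rj * Ij + Aj + gj * (Ej - V_pre)
            for Ij, Rj, Aj, gj, Ej in zip(I, R, A, g, E)]
```

Since such an update is applied only at spike times, the between-spike dynamics remain linear, and the same propagation and spike-time machinery applies unchanged.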
We thank Yi Dong for helpful discussions. This work was supported by NIH grants 5R01EY016281-02 (NEI) and NS43188-01A1 (NINDS).