In this note, we compute the first passage time probability density (FPTPD) for leaky integrate-and-fire neurons by solving Volterra integral equations. As mentioned, one important application of such maximum likelihood methods is optimal parameter search. Typically, parameter searches for neuron models start with a large noise level, so that the initial parameter set has a nonzero likelihood. The optimization algorithm (e.g., Nelder & Mead, 1965) then gradually tunes the noise down to physiological levels. A good parameter set should capture most of the variance and leave only a small portion as intrinsic noise. Accurate calculation of the likelihood at small noise levels is thus important for finding an acceptable parameter set. At low noise levels, the previously proposed Plesser-Tanaka-Paninski method (Plesser & Tanaka, 1997; Paninski et al., 2008), called for simplicity the gaussian method in this report, suffers from large numerical errors for all but the finest discretizations (see ), at which point, of course, the computation may demand excessive resources. Our method solves this problem with high accuracy and high computational efficiency.
Paninski et al. (2008) showed that the gradient of the likelihood can be calculated easily because a matrix equation was used to compute the FPTPD. We improved the numerical solution by calculating the mean probability current (see equation 3.2) instead of the probability current itself (see equation 2.4). Conceptually, the computation of the likelihood gradient is the same for our improved method; in particular, the same chain rule is used to calculate the gradient. The difference lies in the calculation of the gradient of the mean probability current: instead of the gradient of the gaussian function, the gradient of the erf function is calculated. Note also that the latter is in the convenient form of an exponential function.
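To make the last point concrete, the derivative of an erf difference with respect to the mean is itself a difference of plain exponentials (gaussian densities). The following minimal check uses arbitrary illustrative values for the bin edges, mean, and width; it is a sketch, not the model of the main text:

```python
import math

def bin_mass(a, b, mu, sigma):
    """Probability mass of a gaussian N(mu, sigma^2) inside the bin [a, b],
    written with erf as in the mean-current formulation."""
    s = sigma * math.sqrt(2.0)
    return 0.5 * (math.erf((b - mu) / s) - math.erf((a - mu) / s))

def bin_mass_dmu(a, b, mu, sigma):
    """Analytic derivative of bin_mass with respect to mu: a difference of
    exponentials, convenient for gradient-based likelihood optimization."""
    c = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return c * (math.exp(-(a - mu) ** 2 / (2 * sigma ** 2))
                - math.exp(-(b - mu) ** 2 / (2 * sigma ** 2)))

# Verify the analytic gradient against a central finite difference.
a, b, mu, sigma = 0.0, 0.4, 0.1, 0.3   # illustrative values only
h = 1e-6
fd = (bin_mass(a, b, mu + h, sigma) - bin_mass(a, b, mu - h, sigma)) / (2 * h)
assert abs(fd - bin_mass_dmu(a, b, mu, sigma)) < 1e-8
```

The same pattern applies to the derivatives with respect to the other model parameters via the chain rule.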
It would be possible to use a more accurate high-order block-by-block method for the integral calculation (Linz, 1985) to achieve better accuracy at higher efficiency, an approach we did not pursue further in this short note.5 We point out that the problems of the gaussian method at low noise are not solved by higher-order integration methods: they are due to undersampling of the probability flux within the integration bin, which, as a matter of principle, can be remedied within the gaussian method only by finer sampling, not by higher-order approximations.
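The undersampling effect can be demonstrated in a few lines. The sketch below uses illustrative parameters (not the model of the main text): it point-samples a narrow gaussian flux once per bin on a coarse grid, as the gaussian method effectively does, and compares the result with the bin masses obtained from the erf function, which integrate the flux exactly over each bin however coarse the grid:

```python
import math

def gauss_pdf(t, mu, sigma):
    """Gaussian probability density at t."""
    return math.exp(-(t - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def bin_mass(a, b, mu, sigma):
    """Exact gaussian mass in [a, b], via erf."""
    s = sigma * math.sqrt(2.0)
    return 0.5 * (math.erf((b - mu) / s) - math.erf((a - mu) / s))

mu, sigma = 0.55, 0.01        # flux much narrower than the bin width
dt, nbins = 0.1, 10           # coarse grid covering [0, 1]

# Gaussian-method style: sample the flux once per bin, multiply by dt.
sampled = sum(gauss_pdf(k * dt, mu, sigma) * dt for k in range(nbins))

# erf-based style: integrate the flux exactly over each bin.
exact = sum(bin_mass(k * dt, (k + 1) * dt, mu, sigma) for k in range(nbins))

print(sampled, exact)   # point sampling grossly underestimates the unit mass
assert exact > 0.999
assert sampled < 0.01
```

Refining dt shrinks the point-sampling error, but only at the cost of many more grid points, which is exactly the inefficiency described above.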
Are the noise levels considered here of biological relevance? Answering this question is complicated by the fact that there is no universal agreement on what part of the observed variability in neuronal activity is noise and what part is signal. If only the mean firing rate is considered important (the rate coding hypothesis), then variations of neuronal firing on timescales shorter than a few tens of milliseconds are considered noise. If the precise timing of neural spikes is assumed to be important (the temporal coding hypothesis), then such variations are part of the signal. We provide noise estimates for both cases below.
Three types of noise are encountered in essentially any neuronal system. The first is Johnson noise, caused by the thermal agitation of electric charge carriers. The second is channel noise, caused by the spontaneous opening and closing of transmembrane ion channels. Manwani and Koch (1999) show that channel noise is at least an order of magnitude larger than Johnson noise. We also consider a third type, which we call vesicle noise; it is caused by spontaneous vesicle fusion in synaptic terminals. Since each vesicle fusion will in general open (many) more than one channel, vesicle noise can be expected to dominate channel noise.6
There is a fourth type of noise, due to variation in the exact timing of the input.7 This synaptic noise was shown to dominate both Johnson noise and channel noise (Manwani & Koch, 1999), and since each successful synaptic event involves the release of at least one synaptic vesicle (in a failed synaptic event, there is no postsynaptic effect), it is likely that synaptic noise also dominates vesicle noise.
We estimate the synaptic noise by assuming that it results from Poisson-distributed synaptic inputs from other neurons. Let E0 be the number of presynaptic neurons, each with average firing rate λ. Within a time interval of length τ, the integration time of the neuron, the incoming number of spikes is a Poisson random variable with mean and variance N = E0λτ. Each input spike causes the influx of a charge Q. The sum of all these charges leads to an average current I = NQ/τ whose mean is E0λQ and whose variance is E0λQ²/τ. The standard deviation of the membrane voltage fluctuations caused by the input variability is then σp = (Q/C)√(E0λ/τ) = P√(E0λ/τ), where C is the capacitance of the neuron and P = Q/C. Since the AMPA receptor open time is considerably shorter than the membrane time constant, we assume that the charge is injected instantaneously; P can then be estimated by the maximal EPSP amplitude.8
One well-characterized system in which these parameters have been measured is the hippocampus. For CA1 neurons, which receive input from CA3, we use P = 0.13 mV (Sayer, Redman, & Andersen, 1989), τ = 15.3 ms (Brown, Fricke, & Perkel, 1981), E0 = 30,000 (Megías, Emri, Freund, & Gulyás, 2001), and λ = 1 Hz (Frerking, Schulte, Wiebe, & Stäubli, 2005), resulting in σp = 0.182 mV/√ms. For these values, the noise level in row 3 corresponds to σp = 0.511 mV/√ms (see note 1 for the units). We showed that the gaussian method fails to give reasonable results for noise at this level, which is considerably larger than the estimated synaptic noise.
For temporal coding, the variation in input spike times is part of the signal; thus, there is no synaptic noise as defined above, and the largest noise component is vesicle noise. Using a mean rate of spontaneous release of 0.282 Hz (Fiacco & McCarthy, 2004) and considering that the (spontaneous) miniature EPSPs are at most as large as evoked EPSPs, an upper bound of σp = 0.096 mV/√ms is obtained. This upper bound is more than five times smaller than the noise level discussed in the previous paragraph and clearly beyond the limit at which the gaussian method can be applied.
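Both estimates can be checked with a few lines of arithmetic. This is only a sketch: the scaling σp = P·√(E0·λ/τ) and the synapse count E0 = 30,000 are our reading of the estimate, chosen because they reproduce the values quoted in the text:

```python
import math

# Parameter values for CA1 pyramidal cells as quoted in the text.
# The scaling sigma_p = P * sqrt(E0 * lam / tau) and E0 = 30,000 are
# assumptions made for this sketch.
P = 0.13        # maximal EPSP amplitude, mV
tau = 15.3      # integration (membrane) time constant, ms
E0 = 30_000     # number of synaptic inputs

def sigma_p(rate_hz):
    """Voltage noise amplitude in mV/sqrt(ms) for a given input rate."""
    lam = rate_hz / 1000.0                  # convert Hz to 1/ms
    return P * math.sqrt(E0 * lam / tau)

syn = sigma_p(1.0)      # rate-coding case: evoked input at 1 Hz
ves = sigma_p(0.282)    # temporal-coding case: spontaneous release only

print(round(syn, 3), round(ves, 3))
assert abs(syn - 0.182) < 1e-3      # synaptic-noise estimate
assert ves < 0.511 / 5              # more than 5x below the tested level
```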
It is highly likely that at least some nervous systems require levels of precision well beyond our rather crude estimates. One, possibly extreme, example is the circuitry that controls the electric fields in weakly electric fish, which has been shown to operate at submicrosecond precision (Moortgat, Keller, Bullock, & Sejnowski, 1998). Use of the original gaussian method, without the improvement introduced in this note, would be very problematic for such systems.
Finally, we point out that this paradigm is by no means limited to equation 2.2. It can just as easily be applied to the generalized leaky integrate-and-fire neuron (Mihalas & Niebur, 2009), which reproduces a large number of observed neuronal behaviors. In fact, its applicability extends well beyond neural models: the algorithm can be used to compute first passage times through a constant threshold for all stochastic processes with linear dynamics, provided the noise is gaussian and does not depend explicitly on V, that is, for Ornstein-Uhlenbeck processes. Efficient solution of this vast class of theoretically and practically important problems should prove valuable to a large computational community.
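For any process in this class, first passage times through a constant threshold can always be estimated, if inefficiently, by direct simulation, which provides an independent check for integral-equation solutions. A minimal Monte Carlo sketch with an Euler-Maruyama discretization follows; all parameter values are illustrative and not taken from the main text:

```python
import math
import random

# Ornstein-Uhlenbeck process dV = (mu - V/tau) dt + sigma dW,
# first passage through a constant threshold (illustrative parameters).
tau, mu, sigma = 1.0, 1.5, 0.3
v_thresh, v_start = 1.0, 0.0
dt, t_max = 1e-3, 10.0

def first_passage(rng):
    """One Euler-Maruyama trajectory; returns the crossing time, or None
    if the threshold is not reached before t_max."""
    v, t = v_start, 0.0
    sq = sigma * math.sqrt(dt)
    while t < t_max:
        v += (mu - v / tau) * dt + sq * rng.gauss(0.0, 1.0)
        t += dt
        if v >= v_thresh:
            return t
    return None

rng = random.Random(0)
times = [first_passage(rng) for _ in range(200)]
times = [t for t in times if t is not None]
mean_fpt = sum(times) / len(times)
print(len(times), round(mean_fpt, 2))
```

Such a simulation needs many trajectories and a fine time step for a smooth density estimate, which is precisely what makes an accurate and efficient integral-equation solution valuable.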
The source code (Matlab) is available on request from the authors.