Home | About | Journals | Submit | Contact Us | Français |

**|**HHS Author Manuscripts**|**PMC2756789

Formats

Article sections

- Abstract
- 1 Introduction
- 2 The Quasi–Active System
- 3 Model Reduction of the Quasi-Active System
- 4 Balanced Truncation Model Reduction Results
- 5 IRKA Model Reduction Results
- 6 A Quasi-Integrate-and-Fire Model
- 7 Discussion
- References

Authors

Related links

J Comput Neurosci. Author manuscript; available in PMC 2010 October 1.

Published in final edited form as:

Published online 2009 January 27. doi: 10.1007/s10827-008-0134-2

PMCID: PMC2756789

NIHMSID: NIHMS129951

Anthony R. Kellems, Rice University, Tel.: (713)-348-3700, E-mail: ude.ecir@smellekt

The publisher's final edited version of this article is available at J Comput Neurosci

See other articles in PMC that cite the published article.

The accurate simulation of a neuron’s ability to integrate distributed synaptic input typically requires the simultaneous solution of tens of thousands of ordinary differential equations. For, in order to understand how a cell distinguishes between input patterns we apparently need a model that is biophysically accurate down to the space scale of a single spine, i.e., 1 *μ*m. We argue here that one can retain this highly detailed input structure while dramatically reducing the overall system dimension if one is content to accurately reproduce the associated membrane potential at a small number of places, e.g., at the site of action potential initiation, under subthreshold stimulation. The latter hypothesis permits us to approximate the active cell model with an associated quasi-active model, which in turn we reduce by both time-domain (Balanced Truncation) and frequency-domain (_{2} approximation of the transfer function) methods. We apply and contrast these methods on a suite of typical cells, achieving up to four orders of magnitude in dimension reduction and an associated speed-up in the simulation of dendritic democratization and resonance. We also append a threshold mechanism and indicate that this reduction has the potential to deliver an accurate quasi-integrate and fire model.

Accurate delineation of the single cell map from synaptic input to membrane potential at the site of action potential initiation requires large, complex models. The size is dictated by synapse density and total dendritic length. With approximately one synapse per micron on a cell with 5 millimeters of dendrite we arrive at 5000 compartments. The complexity arises from both geometric tapering and nonuniform distribution of a variety of channel types. With both transient and persistent *Na*^{+} and *K*^{+} currents, a low threshold *Ca*^{2+} current, a leak current and current activated upon hyperpolarization we reach 10 (1 voltage and 9 gating) equations per compartment. Altogether, we arrive at a coupled system of 50,000 ordinary differential equations.

Wilfrid Rall was the first to build such models and the first to contemplate their systematic reduction. In particular, he showed how cells with symmetric geometry, receiving symmetric synaptic input, satisfying the “3/2” law at each branch point, and possessing passive membranes, could be mapped onto a single straight passive cable (Rall, 1959). Although his assumptions were slightly relaxed by (Schierwagen, 1989) and (Poznanski, 1991), they remain too restrictive to be adopted by the general investigator of synaptic integration. If one is content to reproduce only spike shapes or rates then one can arrive at useful reduced models by simply lumping compartments together. (Bush and Sejnowski, 1993) and (Pinsky and Rinzel, 1994) for example, achieve their ends with merely nine and two compartments, respectively. Both models of course surrender spatial precision of individual synapses.

We here employ old and new tools from the field of linear dynamical systems to reduce models with tens of thousands of equations to models with tens of equations without sacrificing either precise synaptic placement or accuracy of the associated subthreshold membrane potential at the site of action potential initiation.

We begin, in §2, by specifying the full, nonlinear, nonuniform, branched, synaptically driven model. We compute the associated rest potential, linearize the full system about rest, and arrive at what Koch terms the quasi-active system (Koch, 1999). In §3 we apply two distinct methods (one in the time domain, the other, in the frequency domain) of model reduction. They share the ability to reduce the dimension of the “internal” dynamics without compromising the input-output map. In §4 and §5 we test and contrast these two strategies via investigations of both dendritic democratization and dendritic resonance. In §6 we append a threshold mechanism to our reduced model and arrive at a morphologically accurate integrate-and-fire model. We close in §7 with a discussion of limitations and extensions.

We consider dendritic neurons (see, e.g., Figure 1) with *D* branched dendrites meeting at the soma. The *d*th dendrite possesses *B _{d}* branches, and so the neuron posseses

Example of a toy neuron to demonstrate our labeling scheme. Branches are indexed using the Hines ordering (Hines, 1984), and junctions are labeled with italics. For this cell there are *D* = 3 dendrites and *J* = 4 junctions. Our labeling scheme implies, **...**

$$\mathcal{B}\equiv \sum _{d=1}^{D}{B}_{d}$$

branches. We denote by * _{b}* the length of branch

$$\begin{array}{l}{a}_{b}{C}_{m}{\partial}_{t}{v}_{b}=\frac{1}{2{R}_{i}}{\partial}_{x}({a}_{b}^{2}{\partial}_{x}{v}_{b})-{a}_{b}\sum _{c=1}^{C}{G}_{bc}(x)({v}_{b}-{E}_{c})\prod _{f=1}^{{F}_{c}}{w}_{\mathit{bcf}}^{{q}_{cf}}\\ -\frac{1}{2\pi}\sum _{s=1}^{{S}_{b}}{g}_{bs}(t)\delta (x-{x}_{bs})({v}_{b}-{E}_{bs})\end{array}$$

(1)

$${\partial}_{t}{w}_{\mathit{bcf}}=\frac{{w}_{cf,\infty}({v}_{b})-{w}_{\mathit{bcf}}}{{\tau}_{cf}({v}_{b})},\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}0<x<{\ell}_{b},\phantom{\rule{0.38889em}{0ex}}0<t.$$

(2)

Here *g _{bs}*(

These branch potentials interact at *J* junction points, where junction *J* denotes the soma. At junction *j* < *J* we denote by
${b}_{j}^{1}$ and
${b}_{j}^{2}$ the branch indices of the two children and by
${b}_{j}^{3}$ the branch index of the mother. Continuity of the potential at each junction requires

$${v}_{{b}_{j}^{1}}({\ell}_{{b}_{j}^{1}},t)={v}_{{b}_{j}^{2}}({\ell}_{{b}_{j}^{2}},t)={v}_{{b}_{j}^{3}}(0,t),\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}j=1,\dots ,J-1$$

(3)

while current balance there requires

$${a}_{{b}_{j}^{1}}^{2}({\ell}_{{b}_{j}^{1}}){\partial}_{x}{v}_{{b}_{j}^{1}}({\ell}_{{b}_{j}^{1}},t)+{a}_{{b}_{j}^{2}}^{2}({\ell}_{{b}_{j}^{2}}){\partial}_{x}{v}_{{b}_{j}^{2}}({\ell}_{{b}_{j}^{2}},t)={a}_{{b}_{j}^{3}}^{2}(0){\partial}_{x}{v}_{{b}_{j}^{3}}(0,t),\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}j=1,\dots ,J-1.$$

(4)

Our *D* dendrites join at the soma and associated branch indices are
${b}_{J}^{d}$, *d* = 1*,*…*, D*. Continuity of the potential at the soma then reads

$${v}_{{b}_{J}^{1}}({\ell}_{{b}_{J}^{1}},t)={v}_{{b}_{J}^{d}}({\ell}_{{b}_{J}^{d}},t),\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}d=2,\dots ,D$$

(5)

while current balance there requires

$$\begin{array}{l}{C}_{m}{\partial}_{t}{v}_{{b}_{J}^{1}}({\ell}_{{b}_{J}^{1}},t)=\frac{\pi}{{A}_{\sigma}{R}_{i}}\sum _{d=1}^{D}{a}_{{b}_{J}^{d}}^{2}({\ell}_{{b}_{J}^{d}}){\partial}_{x}{v}_{{b}_{J}^{d}}({\ell}_{{b}_{J}^{d}},t)\\ -\sum _{c=1}^{C}{G}_{\sigma c}({v}_{{b}_{J}^{1}}({\ell}_{{b}_{J}^{1}},t)-{E}_{c})\prod _{f=1}^{{F}_{c}}{w}_{\sigma cf}^{{q}_{cf}}(t)\\ -\frac{1}{{A}_{\sigma}}\sum _{s=1}^{{S}_{\sigma}}{g}_{\sigma s}(t)({v}_{{b}_{J}^{1}}({\ell}_{{b}_{J}^{1}},t)-{E}_{\sigma s})\end{array}$$

(6)

$${\partial}_{t}{w}_{\sigma cf}(t)=\frac{{w}_{cf,\infty}({v}_{{b}_{J}^{1}}({\ell}_{{b}_{J}^{1}},t))-{w}_{\sigma cf}(t)}{{\tau}_{cf}({v}_{{b}_{J}^{1}}({\ell}_{{b}_{J}^{1}},t))},\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}0<x<{\ell}_{b},\phantom{\rule{0.38889em}{0ex}}0<t.$$

(7)

where *σ* is the somatic index and *A _{σ}* is the surface area of the soma. Finally, we denote by the set of leaf indices, where a leaf is a branch with no children. We suppose that each leaf is sealed at its distal end, i.e.,

$${\partial}_{x}{v}_{b}(0,t)=0,\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}b\in \mathcal{L}.$$

(8)

The somatic voltage is thus
${v}_{\sigma}(t)\equiv {v}_{{b}_{J}^{1}}({\ell}_{{b}_{J}^{1}},t)$. Initially the neuron is at rest, implying that * _{t}v_{b}*(

$${v}_{b}(x,0)={\overline{v}}_{b}(x)$$

(9)

$${w}_{\mathit{bcf}}(x,0)={\overline{w}}_{\mathit{bcf}}(x)\equiv {w}_{cf,\infty}({\overline{v}}_{b}(x)).$$

(10)

Consider (1) for branch *b* at its resting potential * _{b}*. Small perturbations to this rest state ought to be well-represented by linear approximations to the differential equations (Koch, 1999). If

$$\begin{array}{l}{v}_{b}={\overline{v}}_{b}+\epsilon {\stackrel{\sim}{v}}_{b}+\mathcal{O}({\epsilon}^{2})\\ {w}_{\mathit{bcf}}={\overline{w}}_{\mathit{bcf}}+\epsilon {\stackrel{\sim}{w}}_{\mathit{bcf}}+\mathcal{O}({\epsilon}^{2}).\end{array}$$

Substituting these into (1), we construct a linearized model by solving for the perturbation terms * _{b}, _{bcf}* of order

$$\begin{array}{l}\epsilon {a}_{b}{C}_{m}{\partial}_{t}{\stackrel{\sim}{v}}_{b}=\epsilon \frac{1}{2{R}_{i}}{\partial}_{x}({a}_{b}^{2}{\partial}_{x}{\stackrel{\sim}{v}}_{b})-{a}_{b}\sum _{c=1}^{C}{G}_{bc}(x)({\overline{v}}_{b}+\epsilon {\stackrel{\sim}{v}}_{b}-{E}_{c})\prod _{f=1}^{{F}_{c}}{\left({\overline{w}}_{\mathit{bcf}}+\epsilon {\stackrel{\sim}{w}}_{\mathit{bcf}}(t)\right)}^{{q}_{cf}}\\ -\frac{1}{2\pi}\sum _{s=1}^{{S}_{b}}({\overline{g}}_{bs}+\epsilon {\stackrel{\sim}{g}}_{bs}(t))\delta (x-{x}_{bs})({\overline{v}}_{b}+\epsilon {\stackrel{\sim}{v}}_{b}-{E}_{bs})\end{array}$$

(11)

$$\epsilon {\partial}_{t}{\stackrel{\sim}{w}}_{\mathit{bcf}}(t)=\frac{{w}_{cf,\infty}({\overline{v}}_{b}+\epsilon {\stackrel{\sim}{v}}_{b})-({\overline{w}}_{\mathit{bcf}}+\epsilon {\stackrel{\sim}{w}}_{\mathit{bcf}})}{{\tau}_{cf}({\overline{v}}_{b}+\epsilon {\stackrel{\sim}{v}}_{b})}.$$

(12)

The initial conditions are now

$${\stackrel{\sim}{v}}_{b}(x,0)={\stackrel{\sim}{w}}_{\mathit{bcf}}(x,0)=0$$

while boundary conditions, because they are already linear, remain the same as in (3), (4), (8). The soma conditions contain nonlinear terms, but they may be linearized in the same manner as shown here.

Expanding *w _{cf}*

$${\partial}_{t}{\stackrel{\sim}{w}}_{\mathit{bcf}}(t)=\frac{{w}_{cf,\infty}^{\prime}({\overline{v}}_{b}){\stackrel{\sim}{v}}_{b}-{\stackrel{\sim}{w}}_{\mathit{bcf}}}{{\tau}_{cf}({\overline{v}}_{b})}.$$

In (11), if the product of gating variables is written as

$${F}_{bc}(\epsilon )=\prod _{f=1}^{{F}_{c}}{\left({\overline{w}}_{\mathit{bcf}}+\epsilon {\stackrel{\sim}{w}}_{\mathit{bcf}}(t)\right)}^{{q}_{cf}}$$

then differentiating with respect to *ε* yields

$${F}_{bc}^{\prime}(\epsilon )=\sum _{p=1}^{{F}_{c}}\left[{q}_{cp}{({\overline{w}}_{\mathit{bcp}}+\epsilon {\stackrel{\sim}{w}}_{\mathit{bcp}})}^{{q}_{cp}-1}\prod _{f=1,f\ne p}^{{F}_{c}}{\left({\overline{w}}_{\mathit{bcf}}+\epsilon {\stackrel{\sim}{w}}_{\mathit{bcf}}(t)\right)}^{{q}_{cf}}{\stackrel{\sim}{w}}_{\mathit{bcp}}\right].$$

The order 0 and order *ε* terms of *F _{bc}* are

$${F}_{bc}(0)=\prod _{f=1}^{{F}_{c}}{\overline{w}}_{\mathit{bcf}}^{{q}_{cf}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\text{and}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}{F}_{bc}^{\prime}(0)=\sum _{p=1}^{{F}_{c}}\left({q}_{cp}{\overline{w}}_{\mathit{bcp}}^{{q}_{cp}-1}\prod _{f=1,f\ne p}^{{F}_{c}}{\overline{w}}_{\mathit{bcf}}^{{q}_{cf}}{\stackrel{\sim}{w}}_{\mathit{bcp}}\right),$$

respectively. Hence the linearized ionic currents take the form

$${I}_{\text{ionic}}(x,{\stackrel{\sim}{v}}_{b},{\stackrel{\sim}{w}}_{b})=\sum _{c=1}^{C}{G}_{bc}(x)\left({F}_{bc}(0){\stackrel{\sim}{v}}_{b}+{F}_{bc}^{\prime}(0)({\overline{v}}_{b}-{E}_{c})\right).$$

The linearized synaptic input in (11) has the form

$${I}_{\text{synaptic}}(t,x,{\stackrel{\sim}{v}}_{b})=\frac{1}{2\pi}\sum _{s=1}^{{S}_{b}}\left({\stackrel{\sim}{g}}_{bs}(t)({\overline{v}}_{b}-{E}_{bs})+{\overline{g}}_{bs}{\stackrel{\sim}{v}}_{b}\right)\delta (x-{x}_{bs}).$$

To complete the linearization process, we equate the terms of order *ε* and arrive at

$$\begin{array}{l}{\partial}_{t}{\stackrel{\sim}{v}}_{b}={\mathcal{D}}_{b}{\stackrel{\sim}{v}}_{b}+{K}_{b}(x){\stackrel{\sim}{v}}_{b}+\sum _{c=1}^{C}\sum _{f=1}^{{F}_{c}}{d}_{\mathit{bcf}}(x){\stackrel{\sim}{w}}_{\mathit{bcf}}+{\eta}_{b}\\ {\partial}_{t}{\stackrel{\sim}{w}}_{\mathit{bcf}}(t)={\phi}_{\mathit{bcf}}(x){\stackrel{\sim}{v}}_{b}+{\psi}_{\mathit{bcf}}(x){\stackrel{\sim}{w}}_{\mathit{bcf}},\end{array}$$

where

$$\begin{array}{l}{\mathcal{D}}_{b}=\frac{1}{2{C}_{m}{R}_{i}{a}_{b}}{\partial}_{x}({a}_{b}^{2}{\partial}_{x}),\\ {K}_{b}(x)=-\frac{1}{{C}_{m}}\sum _{c=1}^{C}{G}_{bc}(x){F}_{bc}(0)-\frac{1}{2\pi {a}_{b}{C}_{m}}\sum _{s=1}^{{S}_{b}}{\overline{g}}_{bs}\delta (x-{x}_{bs})\\ {d}_{\mathit{bcf}}(x)=-\frac{1}{{C}_{m}}{G}_{bc}(x)({\overline{v}}_{b}-{E}_{c}){q}_{cf}{\overline{w}}_{\mathit{bcf}}^{{q}_{cf}-1}\prod _{p=1,p\ne f}^{{F}_{c}}{\overline{w}}_{\mathit{bcp}}^{{q}_{cp}}\\ {\eta}_{b}=-\frac{1}{2\pi {a}_{b}{C}_{m}}\sum _{s=1}^{{S}_{b}}{\stackrel{\sim}{g}}_{bs}(t)\delta (x-{x}_{bs})({\overline{v}}_{b}-{E}_{bs})\\ {\phi}_{\mathit{bcf}}(x)=\frac{{w}_{cf,\infty}^{\prime}({\overline{v}}_{b})}{{\tau}_{cf}({\overline{v}}_{b})},\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}{\psi}_{\mathit{bcf}}(x)=-\frac{1}{{\tau}_{cf}({\overline{v}}_{b})}.\end{array}$$

It is now apparent that this is a linear system for the *b*th branch, namely

$${\partial}_{t}\left(\begin{array}{c}{\stackrel{\sim}{v}}_{b}\\ {\stackrel{\sim}{w}}_{b11}\\ \vdots \\ {\stackrel{\sim}{w}}_{b1{F}_{1}}\\ {\stackrel{\sim}{w}}_{b21}\\ \vdots \\ {\stackrel{\sim}{w}}_{b2{F}_{2}}\\ \vdots \\ {\stackrel{\sim}{w}}_{bC1}\\ \vdots \\ {\stackrel{\sim}{w}}_{bC{F}_{C}}\end{array}\right)=\left(\begin{array}{ccccc}{\mathcal{D}}_{b}+{K}_{b}& {d}_{b11}& {d}_{b12}& \dots & {d}_{bC{F}_{C}}\\ {\phi}_{b11}& {\psi}_{b11}& \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}\\ {\phi}_{b12}& \phantom{\rule{0.16667em}{0ex}}& {\psi}_{b12}& \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}\\ \vdots & \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}& \ddots & \phantom{\rule{0.16667em}{0ex}}\\ {\phi}_{bC{F}_{C}}& \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}& {\psi}_{bC{F}_{C}}\end{array}\right)\left(\begin{array}{c}{\stackrel{\sim}{v}}_{b}\\ {\stackrel{\sim}{w}}_{b11}\\ \vdots \\ {\stackrel{\sim}{w}}_{b1{F}_{1}}\\ {\stackrel{\sim}{w}}_{b21}\\ \vdots \\ {\stackrel{\sim}{w}}_{b2{F}_{2}}\\ \vdots \\ {\stackrel{\sim}{w}}_{bC1}\\ \vdots \\ {\stackrel{\sim}{w}}_{bC{F}_{C}}\end{array}\right)+\left(\begin{array}{c}{\eta}_{b}\\ 0\\ \vdots \\ 0\end{array}\right),$$

or, more concisely,

$${\partial}_{t}{z}_{b}={\mathcal{Q}}_{b}{z}_{b}+{u}_{b}.$$

(13)

We discretize the neuron in space by dividing each branch into *γ _{b}* = ceil(

$$m=\sum _{c=1}^{C}{F}_{c}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\text{and}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}n=1+\sum _{b=1}^{\mathcal{B}}{\gamma}_{b}$$

then the discretized system has *N* = *n*(*m* + 1) variables. Restricting our attention to the *b*th branch, we compartmentalize the voltage by _{:,}_{b}^{γ}* ^{b}*, and similarly the gating variables, to obtain

$${\partial}_{t}{\text{z}}_{b}={\text{Q}}_{b}{\text{z}}_{b}+{\text{B}}_{b}{\text{u}}_{b}$$

where

$$\begin{array}{l}{\text{z}}_{b}=\left(\begin{array}{c}{\stackrel{\sim}{v}}_{:,b}\\ {\stackrel{\sim}{w}}_{:,b11}\\ \vdots \\ {\stackrel{\sim}{w}}_{:,bC{F}_{C}}\end{array}\right)\in {\mathbb{R}}^{{\gamma}_{b}(m+1)}\\ {\text{Q}}_{b}=\left(\begin{array}{ccccc}\text{diag}({K}_{:,b})& {d}_{:,b11}& {d}_{:,b12}& \dots & {d}_{:,bC{F}_{C}}\\ {\phi}_{:,b11}& {\psi}_{:,b11}& \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}\\ {\phi}_{:,b12}& \phantom{\rule{0.16667em}{0ex}}& {\psi}_{:,b12}& \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}\\ \vdots & \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}& \ddots & \phantom{\rule{0.16667em}{0ex}}\\ {\phi}_{:,bC{F}_{C}}& \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}& {\psi}_{:,bC{F}_{C}}\end{array}\right)\in {\mathbb{R}}^{{\gamma}_{b}(m+1)\times {\gamma}_{b}(m+1)}\\ {\text{B}}_{b}=\left(\begin{array}{c}I\\ 0\end{array}\right)\in {\mathbb{R}}^{{\gamma}_{b}(m+1)\times {\gamma}_{b}},\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}I\in {\mathbb{R}}^{{\gamma}_{b}\times {\gamma}_{b}},\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}{\text{u}}_{b}\in {\mathbb{R}}^{{\gamma}_{b}}.\end{array}$$

For example, if branch *b* receives synaptic input at *x _{b}*

$${\text{u}}_{b}=-\frac{1}{2\pi {a}_{b}{h}_{b}{C}_{m}}\left(\begin{array}{c}0\\ \vdots \\ 0\\ {g}_{b1}(t)({\overline{v}}_{b}({x}_{b1})-{E}_{b1})\\ 0\\ \vdots \\ 0\\ {g}_{b2}(t)({\overline{v}}_{b}({x}_{b2})-{E}_{b2})\\ 0\end{array}\right)$$

where *h _{b}* =

$$\begin{array}{l}\mathbf{Q}=\left(\begin{array}{cccc}{\text{Q}}_{1}& \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}\\ \phantom{\rule{0.16667em}{0ex}}& \ddots & \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}\\ \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}& {\text{Q}}_{\mathcal{B}}& \phantom{\rule{0.16667em}{0ex}}\\ \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}& {\text{Q}}_{\sigma}\end{array}\right)\in {\mathbb{R}}^{N\times N},\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\mathbf{z}=\left(\begin{array}{c}{\text{z}}_{1}\\ \vdots \\ {\text{z}}_{\mathcal{B}}\\ {\text{z}}_{\sigma}\end{array}\right)\in {\mathbb{R}}^{N},\\ \mathbf{B}=\left(\begin{array}{cccc}{\text{B}}_{1}& \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}\\ \phantom{\rule{0.16667em}{0ex}}& \ddots & \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}\\ \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}& {\text{B}}_{\mathcal{B}}& \phantom{\rule{0.16667em}{0ex}}\\ \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}& \phantom{\rule{0.16667em}{0ex}}& {\text{B}}_{\sigma}\end{array}\right)\in {\mathbb{R}}^{N\times n},\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\mathbf{u}=\left(\begin{array}{c}{\text{u}}_{1}\\ \vdots \\ {\text{u}}_{\mathcal{B}}\\ {\text{u}}_{\sigma}\end{array}\right)\in {\mathbb{R}}^{n}.\end{array}$$

and now coupling the branch conditions via the Hines matrix by **A** = **Q** + **H** we arrive at the compartmental model

$${\mathbf{z}}^{\prime}(t)=\mathbf{Az}(t)+\mathbf{Bu}(t).$$

(14)

We view **u** as the input to the neuron. The neuron’s output, **y**, is the voltage at the site of initiation of action potentials, which is expressed as

$$\mathbf{y}(t)=\mathbf{Cz}(t).$$

(15)

We suppose that the output is the soma potential, formally written as **y**(*t*) = **z**_{N}_{−}* _{m}*(

We begin with a classical time-domain method known as Balanced Truncation (BT) and then contrast this with a newer frequency-domain approach deemed the Iterative Rational Krylov Algorithm.

To balance a linear dynamical system is to transform it into one where the controllability and observability gramians coincide. As a result, and in a rigorous quantitative sense, the states that are difficult to reach are rarely observed. We present the method below. For the early history, and further details and applications, see (Kailath, 1980) and (Antoulas and Sorensen, 2001).

The controllability gramian and observability gramian are defined via

$$\mathcal{P}={\int}_{0}^{\infty}{e}^{\mathbf{A}t}\mathbf{B}{\mathbf{B}}^{\ast}{e}^{{\mathbf{A}}^{\ast}t}dt,\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\mathcal{Q}={\int}_{0}^{\infty}{e}^{{\mathbf{A}}^{\ast}t}{\mathbf{C}}^{\ast}\mathbf{C}{e}^{\mathbf{A}t}dt,$$

but typically are computed as solutions to the Lyapunov equations

$$\mathbf{A}\mathcal{P}+\mathcal{P}{\mathbf{A}}^{\ast}+\mathbf{B}{\mathbf{B}}^{\ast}=0,\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}{\mathbf{A}}^{\ast}\mathcal{Q}+\mathcal{Q}\mathbf{A}+{\mathbf{C}}^{\ast}\mathbf{C}=0.$$

We gather their Cholesky factors

$$\mathcal{P}=U{U}^{\ast},\phantom{\rule{0.38889em}{0ex}}\text{and}\phantom{\rule{0.16667em}{0ex}}\mathcal{Q}=L{L}^{\ast}.$$

and compute the singular value decomposition of the mixed product

$${U}^{\ast}L=Z\mathrm{\sum}{Y}^{\ast}.$$

Here *Σ* is a diagonal matrix whose entries are the eigenvalues of *U**
*U*, *Z* is an orthogonal matrix whose columns are the eigenvectors of *U**
*U*, and *Y* is an orthogonal matrix whose columns are the eigenvectors of *L**
*L* (for details on the SVD, see (Trefethen and Bau, 1997)). The diagonal elements of *Σ*, are nonnegative and in descending order and are known as the **Hankel Singular Values** (HSV’s) of the dynamical system described by (14) and (15). We now compose

$$T={\mathrm{\sum}}^{-1/2}{Y}^{\ast}{L}^{\ast},\phantom{\rule{0.38889em}{0ex}}\text{and}\phantom{\rule{0.16667em}{0ex}}{T}^{-1}=UZ{\mathrm{\sum}}^{-1/2},$$

and note that the transformed gramians

$$\widehat{\mathcal{P}}=T\mathcal{P}{T}^{\ast}\phantom{\rule{0.38889em}{0ex}}\text{and}\phantom{\rule{0.16667em}{0ex}}\widehat{\mathcal{Q}}={T}^{-\ast}\mathcal{Q}{T}^{-1}$$

are balanced and diagonal in the sense that

$$\widehat{\mathcal{P}}=\widehat{\mathcal{Q}}=\mathrm{\sum}.$$

Moreover they are the gramians of the transformed state, = *T* **z**, which itself is governed by the transformed dynamical system as

$$\frac{d\widehat{\mathbf{z}}(t)}{dt}=\widehat{\mathbf{A}}\widehat{\mathbf{z}}(t)+\widehat{\mathbf{B}}\mathbf{u}(t),\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\mathbf{y}(t)=\widehat{\mathbf{C}}\widehat{\mathbf{z}}(t)$$

(16)

where **Â** = *T* **A***T* ^{−1}, = *T* **B**, **Ĉ** = **C***T*^{−1}. Based on the decay of the singular values in *Σ*, we can construct a reduced model by using only the *k* largest singular values. This corresponds to approximating (16) with

$$\frac{d\mathit{\xi}(t)}{dt}={\widehat{\mathbf{A}}}_{11}\mathit{\xi}(t)+{\widehat{\mathbf{B}}}_{1}\mathbf{u}(t)\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\widehat{\mathbf{y}}(t)={\widehat{\mathbf{C}}}_{1}\mathit{\xi}(t),$$

(17)

where **Â** _{11} is the initial *k* × *k* submatrix of **Â**, _{1} is the first *k* rows of , and **Ĉ**_{1} is the first *k* columns of **Ĉ**.

The reduced state vector *ξ** ^{k}* has no apparent physiological sense until it is fed to the equation for

For small systems, Balanced Truncation is a clean, precise, and feasible method, but as the system dimension grows the cost of computing the gramians becomes prohibitive. Since the reduction comes after the gramians have been computed, this requires storage of dense *N* × *N* matrices before the reduction step, meaning that memory is also an issue.

Krylov methods, on the other hand, construct approximate reduced systems of a given dimension from the beginning. Instead of transforming the original system and then truncating, the Krylov process iteratively projects the dynamics of the original system onto a smaller subspace. This reduces the memory requirements significantly, since only matrix-vector products are used, and in turn drastically speeds up the reduction process. We now describe the main ideas behind this algorithm; for full details see (Gugercin et al., 2008).

Consider the quasi-active system given in (14) and (15). We construct two matrices, **V*** _{k}*,

$${\mathbf{W}}_{k}^{T}({\mathbf{V}}_{k}{\mathit{\xi}}^{\prime}(t)-\mathbf{A}{\mathbf{V}}_{k}\mathit{\xi}(t)-\mathbf{Bu}(t))=0$$

(18)

and

$$\text{Range}({\mathbf{V}}_{k})\cap \text{Null}({\mathbf{W}}_{k}^{T})=\{0\}.$$

(19)

From (19) we see that ${\mathbf{W}}_{k}^{T}{\mathbf{V}}_{k}$ is invertible. Hence we can use (18) to construct a reduced model

$$\frac{d\mathit{\xi}(t)}{dt}={\mathbf{A}}_{k}\mathit{\xi}(t)+{\mathbf{B}}_{k}\mathbf{u}(t),\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\widehat{\mathbf{y}}(t)={\mathbf{C}}_{k}\mathit{\xi}(t).$$

(20)

where

$${\mathbf{A}}_{k}={({\mathbf{W}}_{k}^{T}{\mathbf{V}}_{k})}^{-1}{\mathbf{W}}_{k}^{T}\mathbf{A}{\mathbf{V}}_{k},\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}{\mathbf{B}}_{k}={({\mathbf{W}}_{k}^{T}{\mathbf{V}}_{k})}^{-1}{\mathbf{W}}_{k}^{T}\mathbf{B},\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}{\mathbf{C}}_{k}=\mathbf{C}{\mathbf{V}}_{k}.$$

(21)

The reduced-order system is computed by finding **V*** _{k}* and

$$\underset{{\mathbf{V}}_{k},{\mathbf{W}}_{k}}{min}{\int}_{-\infty}^{\infty}\mid \mid {\mathbf{C}}^{T}{(i\omega \mathbf{I}-\mathbf{A})}^{-1}\mathbf{B}-{\mathbf{C}}_{k}^{T}{(i\omega \mathbf{I}-{\mathbf{A}}_{k})}^{-1}{\mathbf{B}}_{k}\mid {\mid}^{2}d\omega .$$

One strategy for solving this is to interpolate the full transfer function, to first order, at the negative of each of its poles. Since these poles are not generally known *a priori* and may be hard to compute, we make an initial guess and then iterate until convergence, indicating that we have arrived at the reduced system. This is achieved in a computationally efficient manner via the Iterative Rational Krylov Algorithm (IRKA), whose implementation details we do not discuss here but are found in (Gugercin et al., 2008).

We have written a MATLAB suite of functions that loads morphology and channel kinetics and distributions, constructs the associated quasi-active system and both its BT and IRKA reductions, and computes and displays the response of the 4 (nonlinear, quasi-active, BT, IRKA) models to random (in space and time) sequences of synaptic input. These codes are available from the authors upon request. All computations were performed on a Sun Ultra 20 computer with a 2.2 GHz AMD Opteron processor.

Morphologies, shown in Figure 2, were obtained from the Rice-Baylor archive (Martinez) and from NeuroMorpho.org (http://NeuroMorpho.org) (Ascoli, 2006), and then imported to our software suite, which lets the user visualize and simulate the neuron via GUI’s. Pyramidal and interneuron ion channel models were obtained from the literature and from ModelDB (http://senselab.med.yale.edu/modeldb) (Hines et al., 2004), as detailed in Tables 2 and 3 of the Appendix. Unless otherwise indicated, * _{bs}* = 0.

Renderings of the three morphologies used in this paper: A) a forked neuron; B) cell AR-1-20-04-A, a projection neuron from rat entorhinal cortex (Martinez); C) cell n408, a rat hippocampal cell from region CA1 (Pyapali et al., 1998)

Consider a forked neuron as shown in Figure 2A. Each of the three branches is 200 *μ*m long and is divided into 2 *μ*m-long segments. The root has radius 2 *μ*m while the leaves have radius 1 *μ*m. Ion channels with Hodgkin-Huxley kinetics (see Table 2) were uniformly distributed. This leads to a quasi-active system of dimension 1204. We computed BT matrices and found that the Hankel singular values decay quite rapidly (Figure 3A). Indeed, *σ _{n}* <

A) Hankel singular values for the forked neuron given a 2 *μ*m compartment size. Since there are 301 compartments, and the HH ion channels have 3 gating variables, the quasi-active system has 1204 variables, hence the BT system has the same number **...**

Uniform channel model and kinetics, which corresponds to the Hodgkin-Huxley squid giant axon parameters at 6.3°C (Hodgkin and Huxley, 1952)

Tonic synapses have a significant effect on the dynamics, but no effect on the accuracy of the reduced model versus the quasi-active one. We demonstrate the impact of tonic synapses by randomly choosing 10% of the compartments to have tonic components * _{bs}* = 0.2 nS, which changes the rest potential of the neuron from a uniform value of ≈ −64.9186 mV to one that is now spatially-varying and elevated (Figure 3D). Using the exact same stimulus as used for Figure 3B–C, we find that the neuron’s output is now more oscillatory, but the reduced system’s accuracy is unchanged (Figure 3E–F).

Computing the BT matrices required about 72 seconds, while the 30 ms simulation required only 0.02 seconds. At first glance this appears expensive when compared to the nonlinear and quasi-active simulation times, both requiring about 1.3 seconds. However, the BT matrix computation is a one-time cost, since these matrices can now be reused to facilitate simulation with their associated morphology.

In fact, it can be seen that the decay of the normalized Hankel singular values almost directly corresponds with the numerical accuracy achieved, as Figure 4 illustrates on a more realistic neuron. This result is not surprising, however, given that an error bound exists for BT model reduction (Antoulas and Sorensen, 2001). Therefore, the normalized Hankel singular values may be used as a reliable guide to achieving any desired numerical accuracy compared to the quasi-active system.

An immediate application for such low-dimensional systems is to accurately quantify how synaptic input scales with distance to the soma, also known as “dendritic democratization” (Magee and Cook, 2000) (Häusser, 2001) (Timofeeva et al., 2008). One standard form of synaptic input is an alpha function, which describes the input onto synapse *s* of branch *b* as

$${\stackrel{\sim}{g}}_{bs}(t)={\widehat{g}}_{bs}\frac{t-{t}_{\text{start}}}{\tau}exp\left(1-\frac{t-{t}_{\text{start}}}{\tau}\right),$$

(22)

where *t*_{start} is the time (in ms) of stimulus onset, and the time constant *τ* = 1 ms is the time at which the conductance reaches its maximum value of *ĝ _{bs}*. In order to know the strength required for a single synaptic input at a given location to produce a peak target depolarization at the soma, we need to run a parameter sweep for

Strengths *ĝ*_{bs} of alpha-function synaptic input needed at each compartment to get a 0.2 mV peak soma depolarization. Dots were computed using the BT system, while circles were computed using the nonlinear system. Notice that the conductance on **...**

For more realistic morphologies, and more complex channel models, the one-time cost of computing the BT matrices grows as (*n*^{3}), but the speed-up gained from using the resulting reduced system is substantial. As an example, neuron AR-1-20-04-A (see Figure 2B) was discretized into 1121 compartments, each with length ≈ 2 *μ*m. We included *I _{Na}* and

Non-uniform channel model and kinetics. *I*_{Na} and *I*_{K} use Hodgkin-Huxley type kinetics, while *I*_{A} uses Connor-Stevens type kinetics and a spatial distribution of conductance based on Hoffman’s work. For *G*(*x*), *x* is measured in *μ*m from the soma. **...**

Using this reduced model, we executed a parameter sweep for *ĝ _{bs}* for each compartment, taking a total of 42 minutes. To compare the output to that of the nonlinear model, we did a sweep for 7 compartments in each branch, taking 268 total minutes. The results indicate (see Figure 6) that the reduced system accurately tracked the scaling of proximal synaptic inputs while underestimating the scaling for distal conductances. The error in the distal computations is not due to our reduction scheme but rather to our initial quasi-active hypothesis. Regarding Figure 6, a value of

Neurons have been shown to respond preferentially to stimuli occurring at specific frequencies (Hutcheon and Yarom, 2000; Hasselmo et al., 2007; Ulrich, 2002), and quantifying how this resonant frequency varies with stimulus location is a difficult experimental task. However, our reduced model is well-suited to this type of simulation.

Acquiring the resonant frequency data is time-consuming if high frequency resolution is desired. Typical experimental studies stimulate the cell with a ZAP current, which has the form

$$I(t)=a\,\sin \left(b{t}^{c}+d\right)$$

(23)

for some constants *a, b, c,* and *d*, with *t* in ms. The resonant frequency can then be obtained by dividing the Fourier transform of the voltage by the Fourier transform of the input current (Puil et al., 1986). However, this limits the frequency resolution unless very long or detailed simulations are performed. On the other hand, brute force methods give better resolution, but are slow because they require many simulations to be performed using oscillatory currents, such as

$$I(t)={I}_{0}\left(0.5+0.5\,\sin \left(2\pi \omega (t-\widehat{t})/1000\right)\right),$$

(24)

where *I*_{0} is the peak amplitude, *t̂* is the time of stimulus onset (in ms), and *ω* is the frequency (in Hz).
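Both stimulus forms, and the FFT-ratio impedance estimate of (Puil et al., 1986), are easy to sketch; the constant values and function names here are illustrative, not the paper's:

```python
import numpy as np

def zap_current(t, a=1.0, b=2.0e-6, c=2.0, d=0.0):
    """ZAP current of Eq. (23): I(t) = a*sin(b*t^c + d), t in ms.
    The default constants are placeholders, not the paper's values."""
    return a * np.sin(b * t**c + d)

def oscillatory_current(t, I0, omega, t_hat=0.0):
    """Single-frequency current of Eq. (24); omega in Hz, t in ms."""
    return I0 * (0.5 + 0.5 * np.sin(2.0 * np.pi * omega * (t - t_hat) / 1000.0))

def impedance_profile(v, i, dt_ms):
    """|FFT(v)/FFT(i)| versus frequency in Hz; only bins where the
    input actually carries power are meaningful."""
    f = np.fft.rfftfreq(len(v), d=dt_ms / 1000.0)
    with np.errstate(invalid="ignore", divide="ignore"):
        Z = np.abs(np.fft.rfft(v) / np.fft.rfft(i))
    return f, Z
```

The resonant frequency is then read off as the argmax of the impedance profile over the band where the stimulus has power.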

Our reduced model is useful here because we can use the ZAP current to find a band in which the resonant frequency *ω*_{r} may lie, and then run a brute-force search with the oscillatory current in (24) to pinpoint this value, all in much less time than the full system requires.

With the BT method as a benchmark, we test the model reduction technique based on IRKA. First we demonstrate typical convergence of IRKA for our problems, and then we show that cells of much larger dimension can be tackled.

Consider neuron AR-1-20-04-A from Figure 2B. We compute the maximum absolute error in the soma voltage between the quasi-active and reduced systems using IRKA (Figure 8B), and we compare this to the error curve shown in Figure 4A. Notice that nearly 5-digit agreement occurs for a 25-dimensional system using IRKA. This is close to what we would expect from the BT method (20-dimensional system), but IRKA required only 5 seconds to compute the reduced system (Figure 8A), whereas BT required 4.6 hours. Thus IRKA attains similar accuracy in a fraction of the time needed for BT. Although the error for IRKA decreases neither as smoothly nor as rapidly with increasing *k* as it does for BT, the accuracy is more than sufficient for neuroscience applications.

A) Time to compute reduced system, and B) maximum absolute error between quasi-active and reduced system output using IRKA for cell AR-1-20-04-A (*N* = 6726)

The main difference between IRKA and BT is that IRKA computes a system of a given size, whereas BT computes full matrices that are then truncated to the desired size. If a reduced system of a different size is desired, IRKA must recompute the matrices, whereas BT can simply truncate. But the price we pay for BT's cheap size changes is far too high, considering that IRKA's results are just as good and are obtained in a fraction of the time. As an example, the reduced system from IRKA with *k* = 20 was used to run the same dendritic democratization parameter sweep as in Figure 6. The computed synaptic conductances for the BT and IRKA systems agree to nearly 5 digits for each compartment, indicating that IRKA is indeed computing the right reduced system.
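For intuition, here is a one-sided Galerkin sketch of the IRKA fixed-point iteration for the special symmetric SISO case *b* = *c*; the full algorithm of (Gugercin et al., 2008) handles general MIMO systems and is what is actually used in this setting. All names here are ours:

```python
import numpy as np

def irka_symmetric(A, b, k, max_iter=100, tol=1e-8):
    """One-sided IRKA sketch for a symmetric, stable SISO system
    dx/dt = Ax + bu, y = b'x (here W = V, so the projection is Galerkin
    and Ak = V'AV inherits negative definiteness, hence stability)."""
    n = A.shape[0]
    shifts = np.logspace(-1.0, 2.0, k)     # initial interpolation points
    for _ in range(max_iter):
        V = np.column_stack(
            [np.linalg.solve(s * np.eye(n) - A, b) for s in shifts])
        V, _ = np.linalg.qr(V)             # orthonormal basis for the subspace
        Ak = V.T @ A @ V
        new_shifts = np.sort(-np.linalg.eigvalsh(Ak))
        if np.max(np.abs(new_shifts - shifts)) < tol * np.max(new_shifts):
            shifts = new_shifts
            break                          # fixed point reached
        shifts = new_shifts
    return Ak, V.T @ b, V
```

Each iteration costs only *k* sparse solves against the full system, never a dense Lyapunov solve, which is why IRKA scales to the *N* > 10⁵ systems considered below. At convergence the interpolation points are the mirror images of the reduced poles, the first-order *H*₂-optimality condition.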

A clear advantage of IRKA is that it handles systems of much larger dimension than BT. We could not use BT for systems where *N* > 7000, due to lack of memory, and even if we could, the computation time would preclude any practical use for such large systems. IRKA does not suffer from this drawback, which translates into the ability to compute reduced models of cells with much finer discretizations and to reduce cells with much more complex dendritic structure.

As an example, we consider neuron n408 (Figure 2C), a rat hippocampal cell from region CA1 (Pyapali et al., 1998). We use a 2 *μ*m spatial discretization, yielding 6894 compartments. With a Connor-Stevens ion channel model this gives a total linear system size of *N* = 41364. Using IRKA we find that 5 digits of accuracy can be obtained using a system of dimension *k* ≈ 15, which is a 2700-fold reduction, or more than 3 orders of magnitude, in less than a minute (see Figure 9). If we use a finer discretization of *h* ≈ 0.5 *μ*m, we arrive at a system of size *N* = 165330. IRKA produces a 15-dimensional system that is accurate to nearly 5 digits, a monstrous 11000-fold reduction, or 4 orders of magnitude!

Quasi-active neuron models cannot spike, but adopting an integrate-and-fire (IAF) mechanism allows such models to emulate this behavior. Under our assumption that the soma is the site of action potential generation, we only need to check whether

$$\widehat{\mathbf{y}}(t)={\stackrel{\sim}{v}}_{\sigma}(t)\ge {V}_{\text{th}}$$

(25)

where *V*_{th} is some threshold voltage (recall that
${\stackrel{\sim}{v}}_{\sigma}(t)\equiv {\stackrel{\sim}{v}}_{1}({\ell}_{{b}_{J}^{1}},t)$). At each timestep during the simulation we check whether threshold was reached; if so, we instantaneously reset the soma voltage to rest and hold it there for a refractory period of *τ*_{ref} ms.
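A minimal sketch of this threshold-and-reset loop on the reduced system (forward Euler is used here for clarity; the names and the integration scheme are ours, not the paper's):

```python
import numpy as np

def quasi_iaf(Ak, Bk, Ck, u, dt, V_th, tau_ref):
    """Integrate-and-fire on the reduced quasi-active system.

    Ak (k x k), Bk (k x m), Ck (1 x k) : reduced system matrices
    u  (m x nt)                        : inputs at each timestep
    V_th (mV above rest), tau_ref (ms) : threshold and refractory period

    Quasi-active states are deviations from rest, so reset-to-rest is xi = 0.
    """
    k, nt = Ak.shape[0], u.shape[1]
    xi = np.zeros(k)
    y = np.zeros(nt)
    spikes = []
    hold_until = -np.inf
    for j in range(nt):
        t = j * dt
        if t < hold_until:                  # refractory: clamp at rest
            xi[:] = 0.0
        else:
            xi = xi + dt * (Ak @ xi + Bk @ u[:, j])
        y[j] = (Ck @ xi)[0]
        if y[j] >= V_th and t >= hold_until:
            spikes.append(t)                # record spike, reset, hold
            xi[:] = 0.0
            y[j] = 0.0
            hold_until = t + tau_ref
    return y, spikes
```

The only per-step cost is a *k*-dimensional matrix-vector product plus a scalar comparison, so the thresholding adds essentially nothing to the reduced simulation time.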

Reset values should be chosen to produce as much similarity as possible between the outputs of the active and quasi-active models while still being biophysically reasonable. However, the state variables of the reduced system have no biophysical significance until transformed into the observable **ŷ**(*t*). This motivates our choice of rest as the reset value for **ξ**(*t*).

Using the forked neuron as our test case, we implement the IAF mechanism using the non-uniform channel model. We discretized using *h* ≈1 *μ*m so that the nonlinear model had 3606 variables; IRKA computed a reduced model having only 10 variables. First we simulate using the nonlinear system, and then we compare this to the output from the reduced system simulation. The spikes generated from both systems are compared to determine if there are any matches. A match is said to occur when the reduced system’s spike occurs within *τ*_{ref} ms before or after a spike in the nonlinear system.
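The matching rule can be sketched as a greedy pass over the reduced system's spike times; this is our reading of the criterion, with each nonlinear spike matched at most once:

```python
def match_spikes(nonlin_spikes, reduced_spikes, tau_ref):
    """Count reduced-system spikes that fall within tau_ref ms (before or
    after) of a not-yet-matched nonlinear-system spike."""
    unmatched = list(nonlin_spikes)
    n_matched = 0
    for s in reduced_spikes:
        # find the first unmatched nonlinear spike within the tolerance window
        hit = next((u for u in unmatched if abs(u - s) <= tau_ref), None)
        if hit is not None:
            unmatched.remove(hit)
            n_matched += 1
    return n_matched
```

Matching each nonlinear spike at most once prevents a single burst in the reduced system from being counted as several coincidences.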

In order to quantify how well the mechanism captures spiking behavior, we compute the coincidence factor **Γ** (Kistler et al., 1997; Jolivet et al., 2006), defined as

$$\mathbf{\Gamma}=\frac{{N}_{\text{coinc}}-{N}_{\text{nonlin}}{N}_{\text{reduced}}({\tau}_{\text{ref}}/T)}{({N}_{\text{nonlin}}+{N}_{\text{reduced}})(1-{N}_{\text{nonlin}}({\tau}_{\text{ref}}/T))/2},$$

(26)

where *N*_{nonlin} and *N*_{reduced} are the number of spikes in the nonlinear and reduced models, respectively, and *T* is the length of the simulation. The coincidence factor measures how closely the spike train from the reduced model approximates that of the nonlinear model by comparing the number of coincident spikes, *N*_{coinc}, with the number of coincidences expected by chance. **Γ** is scaled so that **Γ** = 1 implies the spike trains are identical and **Γ** = 0 implies the coincidences are no better than chance.

However, **Γ** can be quite low even when most spikes from the reduced system are coincident. Thus, to better evaluate the effect of false positives, we introduce two metrics: percentage matched, the percentage of spikes from the nonlinear system that are also found in the reduced system; and percentage mismatched, the percentage of spikes from the reduced system that match no spike from the nonlinear system:

$$\%\phantom{\rule{0.16667em}{0ex}}\text{matched}=100\frac{\#\phantom{\rule{0.16667em}{0ex}}\text{matched}}{\#\phantom{\rule{0.16667em}{0ex}}\text{nonlinear}\phantom{\rule{0.16667em}{0ex}}\text{spikes}}$$

(27)

$$\%\phantom{\rule{0.16667em}{0ex}}\text{mismatched}=100\frac{\#\phantom{\rule{0.16667em}{0ex}}\text{unmatched}\phantom{\rule{0.16667em}{0ex}}\text{reduced}\phantom{\rule{0.16667em}{0ex}}\text{spikes}}{\#\phantom{\rule{0.16667em}{0ex}}\text{reduced}\phantom{\rule{0.16667em}{0ex}}\text{spikes}}.$$

(28)
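Equations (26)-(28) reduce to a few lines once the coincidence count is known; a sketch (names ours):

```python
def spike_metrics(N_coinc, N_nonlin, N_reduced, tau_ref, T):
    """Coincidence factor of Eq. (26) and the matched/mismatched
    percentages of Eqs. (27)-(28). tau_ref and T in the same units (ms)."""
    expected = N_nonlin * N_reduced * (tau_ref / T)   # chance coincidences
    gamma = (N_coinc - expected) / (
        (N_nonlin + N_reduced) * (1.0 - N_nonlin * (tau_ref / T)) / 2.0)
    pct_matched = 100.0 * N_coinc / N_nonlin
    pct_mismatched = 100.0 * (N_reduced - N_coinc) / N_reduced
    return gamma, pct_matched, pct_mismatched
```

Note that when every spike coincides (*N*_{coinc} = *N*_{nonlin} = *N*_{reduced}) the normalization in (26) gives exactly **Γ** = 1, and when *N*_{coinc} equals the chance level it gives **Γ** = 0, matching the interpretation in the text.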

We vary *V*_{th} between 8 and 20 mV, with at least 20 simulations at each threshold value, and we use *τ*_{ref} = 4 ms throughout. Each simulation lasts 1000 ms, during which alpha-function synaptic inputs arrive at random locations and at random times.

We ran two sets of simulations to assess the effect of low- and high-activity inputs. The low-activity set used 250 “strong” inputs per simulation, while the high-activity one used 1250 “weak” inputs. These inputs are realistically calibrated by computing *ĝ _{bs}* values for the reduced system (as in §4.2). These values are scaled to obtain synaptic conductances at each compartment that would give approximately 3 mV depolarizations at the soma for the strong inputs, but only 1 mV depolarizations for the weak ones, which agrees with typical values in Chapter 3 of (Traub and Miles, 1991) and (Thomson and Lamy, 2007).

As shown in Figure 10, low thresholds capture a large number of the nonlinear spikes, but also yield more mismatched spikes. Fewer mismatched spikes occur as *V*_{th} increases, but this also leads to greatly diminished numbers of spikes generated, and hence fewer nonlinear spikes captured. More mismatched spikes were observed in the “weak” input simulations, but more spikes were matched as well. The coincidence factors were also low for these simulations, reaching a peak of 0.43 at *V*_{th} = 10 mV for the “weak” input case and a peak of 0.52 at *V*_{th} = 8 mV for the “strong” input case.

Ratio of matched and mismatched spikes, as well as coincidence factor, as *V*_{th} increases for A) 250 “strong” inputs and B) 1250 “weak” inputs per simulation. For each *V*_{th} value we used *τ*_{ref} = 4 ms and ran at least 20 simulations.

The spike data is shown explicitly in the soma potentials of Figure 11. It is evident from these plots that the reduced system captures the subthreshold behaviors very well even for large depolarizations. Furthermore, the spike times computed by the reduced system tend to be very close to the actual spike times.

Of course one has no right to expect high accuracy when applying supra-threshold stimuli to a sub-threshold model. Nonetheless, there are a number of means by which our crude mechanism may be improved. If we fail to detect a somatic spike because it was generated in the dendrite, then it may pay to build the reduction so that it accurately tracks the quasi-active response at several, say *p*, distinct locations.

Proper investigation of this issue would address the questions of optimal placement of *p* observers, placement and calibration of threshold mechanisms at each site, and the trade of accuracy for speed as *p* is increased. We offer here only empirical evidence that such an investigation is indeed warranted.

In particular, we apply a multi-site IAF mechanism to the forked neuron subject to the same input streams that generated Figure 10. Since there are 3 branches and a soma, we set *p* = 4 and designate as observables the voltages at the soma and the midpoints of each branch. Threshold values are *V*_{th} = 22 mV at the leaf midpoints, *V*_{th} = 16 mV at the root midpoint, and *V*_{th} = 14 mV at the soma (these values were chosen by manual trial-and-error). Using the same discretization as in §6.1, IRKA computes a reduced model with *k* = 40 that is accurate to nearly 5 digits, and hence **A**_{k} ∈ ℝ^{40×40}, **B**_{k} ∈ ℝ^{40×61}, and **C**_{k} ∈ ℝ^{4×40}.
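The per-site check is then a single vectorized comparison against **C**_{k}**ξ**; the threshold values below are the ones quoted above, with an ordering of the observables that is our assumption:

```python
import numpy as np

# p = 4 observables: soma, root midpoint, and the two leaf midpoints
# (this ordering is our assumption; thresholds are the values in the text)
V_TH = np.array([14.0, 16.0, 22.0, 22.0])   # mV above rest

def crossed_sites(Ck, xi, v_th=V_TH):
    """Indices of observation sites whose voltage y = Ck @ xi meets its
    local threshold; a spike is registered whenever this is non-empty."""
    return np.flatnonzero(Ck @ xi >= v_th)
```

Because the check is a length-*p* comparison per timestep, adding sites costs almost nothing, consistent with the negligible change in simulation time reported below.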

Using the same input patterns as above, we find a significant increase in the accuracy of the spiking behavior (see Figure 12) with very little change in computational cost. For the strong input case we match 56% of the actual spikes and have only a 15% mismatch rate. For the weak input case, we get 65% matched with 13% mismatched. The coincidence factor improves in both cases: from 0.43 to 0.73 for the “weak” input case, and from 0.52 to 0.66 for the “strong” input case. Moreover, the difference in simulation time was negligible when compared to the thresholding mechanism of §6.1.

Raster diagrams showing spikes computed for the nonlinear system (×’s) and the reduced system (○’s) for A) “strong” and B) “weak” input cases demonstrate the improvement from a variable-threshold, multi-site mechanism.

Rather than simple thresholding we note that a number of investigations of single compartment models have achieved better spike capturing by incorporating more sophisticated firing mechanisms. For example, (Fourcaud-Trocmé et al., 2003) includes a voltage-dependent exponential term, while (Jolivet et al., 2006) and (Brette and Gerstner, 2005) consider adaptive thresholding. For the class of morphologies and channel distributions of the previous sections, adaptive thresholding, in our hands, did not produce a significant improvement in spike accuracy. Regarding the implementation of exponential integrate-and-fire we note that there is not a natural means by which to nonlinearize our reduced quasi-active system. More precisely, as we observed at the close of §3.1, the reduced state ** ξ** in (17) is governed by the small but dense pair

We have applied two distinct methods to the reduction of dimension of large scale single neuron models. We have demonstrated that in typical settings one may reduce models with tens of thousands of variables to models with merely tens of variables that still accurately track the somatic subthreshold response to spatially distributed synaptic inputs. In regimes where subthreshold stimuli dominate, such as questions of (proximal) dendritic democratization and resonance, our reduced model reveals in minutes what the full models require hours or days to simulate. Although there is no reason to trust our subthreshold integrator on suprathreshold stimuli, we have demonstrated that simply thresholding the reduced voltage at a number of observation points has the potential to capture the cell’s full spiking behavior. Further refinement of this approach holds the promise of bringing morphological specificity to the large scale network contributions of (Rudolph and Destexhe, 2006) with significantly fewer compartments than the approach proposed by (Markram, 2006).

The work in this paper is supported through the Sheafor/Lindsay Fund via the ERIT program at Rice’s Computer and Information Technology Institute (CITI), NSF grant DMS-0240058, and NIBIB Grant No. 1T32EB006350-01A1.

The following tables contain all the information pertaining to the ion channels and gating variable kinetics used in this paper. Table 2 is for the uniform channel model, which uses the Hodgkin-Huxley squid giant axon parameters. Table 3 is for the non-uniform channel model, whose non-uniformity comes from an A-type *K*^{+} current following Connor-Stevens kinetics and consistent with the graded channel distribution of (Hoffman et al., 1997).

- Antoulas AC, Sorensen DC. Approximation of large-scale dynamical systems: An overview. International J of Applied Math and Computer Science. 2001;11(5):1093–1121.
- Ascoli GA. Mobilizing the base of neuroscience data: the case of neuronal morphologies. Nature Reviews Neuroscience. 2006;7:318–324. [PubMed]
- Brette R, Gerstner W. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. J Neurophysiology. 2005;94:3637–3642. [PubMed]
- Bush PC, Sejnowski TJ. Reduced compartmental models of neocortical pyramidal cells. J Neurosci Methods. 1993;46:159–166. [PubMed]
- Connor JA, Stevens CF. Inward and delayed outward membrane currents in isolated neural somata under voltage clamp. J Physiol. 1971;213:1–19. [PubMed]
- Fourcaud-Trocmé N, Hansel D, van Vreeswijk C, Brunel N. How spike generation mechanisms determine the neuronal response to fluctuating inputs. J Neurosci. 2003;23:11628–11640. [PubMed]
- Gugercin S, Antoulas A, Beattie C. *H*_{2} model reduction for large-scale linear dynamical systems. SIAM J on Matrix Analysis and Applications. 2008;30:609–638.
- Hasselmo ME, Giocomo LM, Zilli EA. Grid cell firing may arise from interference of theta frequency membrane potential oscillations in single neurons. Hippocampus. 2007;17:1252–1271. [PMC free article] [PubMed]
- Häusser M. Synaptic function: Dendritic democracy. Current Biology. 2001;11:R10–R12. [PubMed]
- Hines M. Efficient computation of branched nerve equations. International J of Bio-Medical Computing. 1984;15:69–76. [PubMed]
- Hines ML, Morse T, Migliore M, Carnevale NT, Shepherd GM. Modeldb: A database to support computational neuroscience. J Comput Neurosci. 2004;17:7–11. [PubMed]
- Hodgkin AL, Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol. 1952;117:500–544. [PubMed]
- Hoffman DA, Magee JC, Colbert CM, Johnston D. K^{+} channel regulation of signal propagation in dendrites of hippocampal pyramidal neurons. Nature. 1997;387:869–875. [PubMed]
- The NeuroMorpho.org inventory. http://NeuroMorpho.org/. Accessed March 11, 2008.
- SenseLab ModelDB. http://senselab.med.yale.edu/modeldb/. Accessed March 11, 2008.
- Hutcheon B, Yarom Y. Resonance, oscillation and the intrinsic frequency preferences of neurons. Trends Neurosci. 2000;23:216–222. [PubMed]
- Jolivet R, Rauch A, Lüscher HR, Gerstner W. Predicting spike timing of neocortical pyramidal neurons by simple threshold models. J Comput Neurosci. 2006;21:35–49. [PubMed]
- Kailath T. Linear Systems. Prentice Hall; 1980.
- Kistler WM, Gerstner W, van Hemmen JL. Reduction of the Hodgkin-Huxley equations to a single-variable threshold model. Neural Computation. 1997;9:1015–1045.
- Koch C. Biophysics of Computation. Oxford University Press, Inc; 1999.
- Magee JC, Cook EP. Somatic epsp amplitude is independent of synapse location in hippocampal pyramidal neurons. Nat Neurosci. 2000;3:895–903. [PubMed]
- Manwani A, Koch C. Detecting and estimating signals in noisy cable structures, I: Neuronal noise sources. Neural Computation. 1999;11:1797–1829. [PubMed]
- Markram H. The blue brain project. Nature Rev Neuroscience. 2006;7:153–160. [PubMed]
- Martinez JO. Rice-baylor archive of neuronal morphology. [Accessed May 1, 2008]. http://www.caam.rice.edu/~cox/neuromart.
- Pinsky PF, Rinzel J. Intrinsic and network rhythmogenesis in a reduced Traub model for CA3 neurons. J Comput Neurosci. 1994;1:39–60. [PubMed]
- Poznanski R. A generalized tapering equivalent cable model for dendritic neurons. Bulletin of Mathematical Biology. 1991;53(3):457–467. [PubMed]
- Puil E, Gimbarzevsky B, Miura RM. Quantification of membrane properties of trigeminal root ganglion neurons in guinea pigs. J Neurophysiol. 1986;55:995–1016. [PubMed]
- Pyapali G, Sik A, Penttonen M, Buzsaki G, Turner D. Dendritic properties of hippocampal CA1 pyramidal neurons in the rat: Intracellular staining in vivo and in vitro. J of Comparative Neurology. 1998;391:335–352. [PubMed]
- Rall W. Branching dendritic trees and motoneuron membrane resistivity. Experimental Neurology. 1959;1:491–527. [PubMed]
- Rudolph M, Destexhe A. Analytical integrate-and-fire neuron models with conductance-based dynamics for event-driven simulation strategies. Neural Computation. 2006;18:2146–2210. [PubMed]
- Schierwagen AK. A non-uniform equivalent cable model of membrane voltage changes in a passive dendritic tree. J of Theoretical Biology. 1989;141:159–179. [PubMed]
- Thomson AM, Lamy C. Functional maps of neocortical local circuitry. Frontiers in Neuroscience. 2007;1:19–42. [PMC free article] [PubMed]
- Timofeeva Y, Cox SJ, Coombes S, Josíc K. Democratization in a passive dendritic tree: an analytical investigation. J Comput Neurosci. 2008;25:228–244. [PubMed]
- Traub RD, Miles R. Neuronal Networks of the Hippocampus. Cambridge University Press; 1991.
- Trefethen LN, Bau D. Numerical Linear Algebra. SIAM; 1997.
- Ulrich D. Dendritic resonance in rat neocortical pyramidal cells. J Neurophysiol. 2002;87:2753–2759. [PubMed]
