
Math Biosci. Author manuscript; available in PMC 2016 July 1.


PMCID: PMC4894014

NIHMSID: NIHMS787118

Corresponding author: Aslak Tveito, Email: aslak@simula.no

The publisher's final edited version of this article is available at Math Biosci.


Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature.

Numerous mechanisms in cells are fueled by the energy contained in concentration gradients that exist across the cell membrane. One such process is the electrical signaling that results from ionic fluxes carried by specialized membrane proteins, collectively termed ion channels. Cell membranes are densely populated by ion channels, which fluctuate between different states in a stochastic manner that depends on the charge difference across the membrane (the membrane potential). These fluctuations between the various states are commonly represented by continuous-time Markov models; see e.g. [1, 2, 3, 4]. Markov models have been successfully used for half a century and offer great flexibility for precisely describing the functional states of a channel. Given the structure and rates of the Markov model, it is straightforward to use simulations to study channel behavior. But the inverse problem, which seeks to infer the structure and rates of the Markov model from single-channel recordings, is much more challenging and remains a field of active research; see e.g. [5, 6, 7, 8, 9].

Single channel recording was first demonstrated in the seminal work by Sakmann and Neher; see [10, 11]. Many of the principles used to deduce the form and rates of Markov models from single channel data were developed by Colquhoun and Hawkes beginning almost forty years ago; see [12, 13]; they are classically summarized in chapter 18 of [14]. More recently, Qin, Auerbach and Sachs [15, 16] developed maximum likelihood estimation (MLE) approaches for defining the most probable hidden Markov model for a given dataset. These algorithms form the core of the open source QuB software package, which is available to the community and contains tools for automated Markov model construction and parameterization; see Nicolai and Sachs [17]. Furthermore, Markov chain Monte Carlo (MCMC) fitting has been used to solve the inversion problem for models of intracellular calcium channels, which poses the same mathematical problem as the gating of voltage-dependent channels; see [18, 19, 20, 6, 7, 8, 9]. While both the MLE and MCMC approaches have been used to good effect, each has specific shortcomings [7], and a robust method for determining hidden Markov model structure from single-channel data remains a priority for the field.

The problem of identifiability of a model based on recordings of conducting and non-conducting states is of great importance and has been addressed by many authors. Characterizations of identifiable models and non-identifiable models have been provided by e.g. Fredkin and Rice [21] and more recently by Siekmann et al. [7] and Hines et al. [8]. These results, however, address the classical problem based on voltage clamped data and the results are therefore not automatically applicable to the case under consideration in the present paper.

Prior efforts to address the problem of identifying the rates of Markov models have been based on conventional experimental procedures for recording single-channel activity. That is, by fitting the binary changes in transmembrane current associated with channel opening and closure under conditions where transmembrane potential is clamped by the experimenter. The purpose of the present paper is to describe a fundamentally different approach, which involves an analysis procedure for defining a hidden Markov model from measurements of time-varying transmembrane potential in current clamped single ion channels. To the best of our knowledge, this is the first attempt to identify Markov models based on such voltage traces of single channel data. A similar approach has previously been applied for identification of Markov models from voltage recordings made in whole cells; see e.g. Milescu et al. [22]. Following the tradition in the field, we present the method using simulated data. It turns out that the methodology described here also enables us to address the question of identifiability. We demonstrate use of the method to compute the rates of a Markov model, and we show how to determine whether the computed rates are unique.

Our aim is to devise an alternative method for inverting Markov models. The inversion will be based on time-traces of the transmembrane potential of single ion channels. The traces used in our analysis will be pseudo-experimental data generated by a dynamical model formulated in terms of an ordinary differential equation including a stochastic term. The stochastic term is governed by a Markov model with rate-functions expressing the probability of going from one state to another. The inversion problem is to determine these rate-functions based on recordings of the time-traces of the transmembrane potential.

By running numerous Monte Carlo simulations using the stochastic differential equation, we can compute probability density functions by gathering the simulation results in histograms. The same probability density functions can be computed by solving a deterministic system of partial differential equations; see e.g. Smith [2], Bressloff [3] or [4]. In this report we will use the simulation results based on the stochastic differential equations as pseudo-experimental data in terms of time-dependent voltage traces represented by histograms. The method of inversion is to adjust the parameters of the system of deterministic partial differential equations so that the solution of this system is as close as possible to the histograms representing the pseudo-experimental data. The adjustment will be performed in terms of minimizing a cost-function.

The method is described for Markov models of the potassium channel, but the method is quite general and can, in principle, be applied to Markov models of any ion channel.

Let us first consider a Markov model in its simplest possible form:

$$C\underset{\beta}{\overset{\alpha}{\rightleftarrows}}O.$$

(1)

Here *C* and *O* denote the closed and open states, respectively. The rates *α* and *β* represent the probability of leaving a state in the sense that, for a small time-step Δ*t*, we have

$$\alpha \mathrm{\Delta}t=\text{Prob}\phantom{\rule{0.16667em}{0ex}}[S(t+\mathrm{\Delta}t)=O\mid S(t)=C]$$

and

$$\beta \mathrm{\Delta}t=\text{Prob}\phantom{\rule{0.16667em}{0ex}}[S(t+\mathrm{\Delta}t)=C\mid S(t)=O],$$

respectively. The problem of inversion for this simple model is to find the rates *α* and *β* such that the Markov model represents the behavior of the channel as well as possible. It is well known that the probabilities *o* = *o*(*t*) and *c* = *c*(*t*) of being in the open (*O*) or closed (*C*) state evolve according to the following system of ordinary differential equations (see e.g. Keener and Sneyd [1]),

$$\begin{array}{l}{o}^{\prime}=\alpha c-\beta o,\\ {c}^{\prime}=\beta o-\alpha c.\end{array}$$

Since the probabilities sum to one, we can reduce this system to a single equation that is easily solved. Experimental data on the probability of being in the open or closed state can thus be used to determine the rates *α* and *β*.
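Substituting *c* = 1 − *o* gives the single equation *o*′ = *α*(1 − *o*) − *βo*, which has a closed-form solution. A minimal sketch (plain Python; the function name is ours):

```python
import math

def open_probability(alpha, beta, o0, t):
    """Closed-form solution of o' = alpha*(1 - o) - beta*o with o(0) = o0.

    The open probability relaxes to the steady state alpha/(alpha + beta)
    with time constant 1/(alpha + beta)."""
    o_inf = alpha / (alpha + beta)
    return o_inf + (o0 - o_inf) * math.exp(-(alpha + beta) * t)
```

Measuring the steady-state level *o*_{∞} and the relaxation time constant *τ* = 1/(*α* + *β*) from such data then yields *α* = *o*_{∞}/*τ* and *β* = (1 − *o*_{∞})/*τ*, which is one way the two rates can be recovered from open-probability recordings.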

In principle this approach is straightforward to generalize to much more complex Markov models involving many states. Generally, the system of ordinary differential equations governing the vector *p* of probabilities of occupying different states is given by

$${p}^{\prime}=Ap.$$

Here the matrix *A* contains the rates of the Markov model and the problem is to compute these rates. For a matrix with constant coefficients, the solution of this problem can, in principle, always be written in the form

$$p(t)=exp(tA){p}_{0},$$

where *p*_{0} is the vector of probabilities at time *t* = 0. In principle, this formula can be used to deduce the rates of the Markov model, but severe difficulties arise due to inherent instabilities in the inversion process. This approach to inversion is based on experimental data with clamped transmembrane potential. Our method is based on data where the transmembrane potential is recorded.
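For illustration, the formula *p*(*t*) = exp(*tA*)*p*_{0} can be evaluated for the two-state model (1) with constant rates. The sketch below (numpy; variable names are ours) computes the matrix exponential via the eigendecomposition of *A*:

```python
import numpy as np

alpha, beta = 2.0, 1.0              # constant rates (ms^-1)
# Generator of the two-state model (1), state order (o, c);
# each column sums to zero, so total probability is conserved.
A = np.array([[-beta,  alpha],
              [ beta, -alpha]])
p0 = np.array([0.0, 1.0])           # start in the closed state

# p(t) = exp(tA) p0 via A = V diag(w) V^{-1}
w, V = np.linalg.eig(A)
t = 10.0
p = (V @ np.diag(np.exp(t * w)) @ np.linalg.inv(V)).real @ p0
```

At *t* = 10 ms the solution is already close to the stationary distribution (*α*, *β*)/(*α* + *β*) = (2/3, 1/3).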

For a single channel, with the initial condition *v*(0) = *v*^{0}, the dynamics of the transmembrane potential can be modeled as follows (see e.g. [3, 2, 4]),

$$C{\overline{v}}^{\prime}=-\gamma {g}_{K}(\overline{v}-{v}_{K})-{g}_{S}\phantom{\rule{0.16667em}{0ex}}(\overline{v}-{v}_{S})-{I}_{0}$$

(2)

where *C* is the specific capacitance of the preparation, *v _{K}, v_{S}* are the Nernst potentials for potassium and for the seal current, respectively, *g_{K}* and *g_{S}* are the conductances of the channel and of the seal, *I*_{0} is the applied current, and *γ* is a stochastic variable taking the value 1 when the channel is open and 0 when it is closed.

In most computations, we used the following parameters:

$$C=1000\text{fF},\phantom{\rule{0.16667em}{0ex}}{g}_{K}=10\text{pS},{g}_{S}=100\text{pS},{v}_{K}=-360\text{mV},{v}_{S}=0\text{mV}.$$

In addition, the applied current will be specified in each simulation; it will lie in the interval 1000 to 3000 fA. If other parameter values are used, this will be stated in the text.
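A single run of the stochastic model (2) can be sketched as follows (our own Python translation, using the two-state gating of (1) with constant rates for simplicity; in the model the rates may be voltage dependent). Note that the units fF, pS, mV and fA are mutually consistent when time is measured in ms:

```python
import random

# Parameters from the text (fF, pS, mV, fA); time in ms.
C, gK, gS, vK, vS, I0 = 1000.0, 10.0, 100.0, -360.0, 0.0, 1000.0

def simulate_trace(alpha, beta, v0=-20.0, T=100.0, dt=0.001, seed=0):
    """Forward-Euler solve of C v' = -gamma*gK*(v - vK) - gS*(v - vS) - I0,
    where gamma jumps between 0 (closed) and 1 (open) with transition
    probabilities alpha*dt and beta*dt per step (Markov model (1))."""
    rng = random.Random(seed)
    v, gamma, trace = v0, 0, []
    for _ in range(int(T / dt)):
        u = rng.random()
        if gamma == 0 and u < alpha * dt:
            gamma = 1                      # closed -> open
        elif gamma == 1 and u < beta * dt:
            gamma = 0                      # open -> closed
        v += dt / C * (-gamma * gK * (v - vK) - gS * (v - vS) - I0)
        trace.append(v)
    return trace
```

With both rates set to zero the channel stays closed and the trace converges to −*I*_{0}/*g_{S}* = −10 mV; repeated runs with non-zero rates provide the voltage histograms used as pseudo-experimental data.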

Both voltage and current clamp recordings are based on the same fundamental recording arrangement in which a portion of cell membrane (and included channel or channels) is contained between two electrodes, and as such are elements in a circuit that can be made between the two electrodes. Feedback control can then be used to prescribe either the current passing between the two electrodes (current clamp), or the potential difference between them (voltage clamp). In the case of voltage clamp, the experimenter prescribes a voltage waveform and the feedback amplifier then applies the current required to maintain that control. Because single ion channels open and close in a binary and stochastic fashion the current required to maintain the prescribed voltage reflects those opening and closing events as sharp switch-like changes (Figure 1 Left). In current clamp, the feedback control is modified to prescribe a current waveform and the voltage between the two electrodes (and therefore across the membrane patch) is measured simultaneously. Because the gating of the membrane channels is subject to this voltage, channel opening and closing will occur and result in subsequent changes to measured voltage (as shown at Right in Figure 1). It has become conventional for single channel recordings to be conducted in voltage clamp mode because this provides a direct measurement of single channel current at the prescribed voltage, which is intuitive to experimenters and analysts alike. However, because channel transitions are stochastic and the rates at which individual channels transition between their closed, open, and other functional states (e.g. inactivated) are voltage-dependent, this approach also requires many sweeps at different test voltages to characterize the voltage-dependencies of these transitions. 
In principle, appropriately designed current clamp recordings should allow these constraints to be partially circumvented because voltage varies continuously with channel opening and closing (thus eliminating discontinuities implicit to choosing discrete test voltages). However, at this time we are not aware of any data that have been collected for single channels in this recording mode. In part it is likely that this is due to technical aspects in making the recordings - specifically the combination of patch capacitance, single channel current, and recording noise are likely to apply significant constraints. Additionally the analytic framework for interpreting such recordings is poorly defined, and this is a significant disincentive for attempting challenging experiments of this type. With this paper we seek to develop that framework. In Figure 1 the two protocols are illustrated.

By running the stochastic model (2) many times, we can summarize the results in probability density functions for the open and closed states. These probability densities can also be computed directly by solving a deterministic system of partial differential equations; see e.g. [3, 2, 4]. The probability of the channel being in the open state or closed state for voltages between *v* and *v* + Δ*v* is given, respectively, by

$${P}_{o}\{v<v(t)<v+\mathrm{\Delta}v\}={\int}_{v}^{v+\mathrm{\Delta}v}{\rho}_{o}(w,t)dw,$$

and

$${P}_{c}\{v<v(t)<v+\mathrm{\Delta}v\}={\int}_{v}^{v+\mathrm{\Delta}v}{\rho}_{c}(w,t)dw$$

where *ρ _{o}* and *ρ _{c}* are the probability density functions of the open and closed states, respectively. These densities satisfy the system

$$\begin{array}{r}\frac{\partial {\rho}_{o}}{\partial t}+\frac{\partial}{\partial v}\phantom{\rule{0.16667em}{0ex}}({a}_{o}{\rho}_{o})=\alpha {\rho}_{c}-\beta {\rho}_{o},\\ \frac{\partial {\rho}_{c}}{\partial t}+\frac{\partial}{\partial v}\phantom{\rule{0.16667em}{0ex}}({a}_{c}{\rho}_{c})=\beta {\rho}_{o}-\alpha {\rho}_{c},\end{array}$$

(3)

where

$${a}_{o}=-\frac{1}{C}\phantom{\rule{0.16667em}{0ex}}({g}_{K}\phantom{\rule{0.16667em}{0ex}}(v-{v}_{K})+{g}_{S}(v-{v}_{S})+{I}_{0}),$$

(4)

$${a}_{c}=-\frac{1}{C}\phantom{\rule{0.16667em}{0ex}}({g}_{S}\phantom{\rule{0.16667em}{0ex}}(v-{v}_{S})+{I}_{0}).$$

(5)

For a derivation of this system, see e.g. [2, 4]. Numerical methods for solving this system are discussed in detail in [4].

Boundary conditions of this model are set up such that probability does not leak across the boundary. Suppose we solve the system on the interval Ω = [*v*_{1}*, v*_{2}]; then we require that the flux is zero at both *v* = *v*_{1} and *v* = *v*_{2}. Here *v*_{1} and *v*_{2} define the lower and upper limits of the transmembrane potential governed by the model (2).
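A minimal explicit upwind discretization of system (3) with these zero-flux boundaries might look as follows (our own sketch; the production schemes discussed in [4] are more sophisticated). Setting the face fluxes at *v*_{1} and *v*_{2} to zero makes the scheme conserve total probability exactly:

```python
import numpy as np

def pde_step(rho_o, rho_c, a_o, a_c, alpha, beta, dv, dt):
    """One explicit upwind step of system (3) on a uniform mesh.

    a_o, a_c, alpha, beta are arrays evaluated at the cell centres;
    the flux a*rho is set to zero at both ends of the domain, so no
    probability leaks across the boundary."""
    def advect(rho, a):
        flux = np.zeros(len(rho) + 1)                  # fluxes at cell faces
        a_face = 0.5 * (a[:-1] + a[1:])
        up = np.where(a_face > 0, rho[:-1], rho[1:])   # upwind cell value
        flux[1:-1] = a_face * up                       # boundary fluxes stay zero
        return -(flux[1:] - flux[:-1]) / dv
    new_o = rho_o + dt * (advect(rho_o, a_o) + alpha * rho_c - beta * rho_o)
    new_c = rho_c + dt * (advect(rho_c, a_c) + beta * rho_o - alpha * rho_c)
    return new_o, new_c
```

Because the reaction terms cancel in the sum and the advective fluxes telescope, the total integral of *ρ_{o}* + *ρ_{c}* is unchanged by each step (up to rounding), mirroring the no-leak boundary condition.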

In order to find the domain Ω = [*v*_{1}*, v*_{2}] on which we want to solve the system (3), we need to derive bounds on the solution of the model (2). When the channel is open (*γ* = 1), the model reads

$$C{v}^{\prime}=-{g}_{K}(v-{v}_{K})-{g}_{S}\phantom{\rule{0.16667em}{0ex}}(v-{v}_{S})-{I}_{0}$$

(6)

and when the channel is closed (non-conducting, *γ* = 0), the model reads

$$C{v}^{\prime}=-{g}_{S}\phantom{\rule{0.16667em}{0ex}}(v-{v}_{S})-{I}_{0}.$$

(7)

Therefore, the solutions will either converge towards (conducting channel)

$${v}_{o}=\frac{{g}_{K}}{{g}_{K}+{g}_{S}}{v}_{K}-\frac{1}{{g}_{K}+{g}_{S}}{I}_{0}$$

(8)

or towards (non-conducting channel)

$${v}_{n}=-\frac{{I}_{0}}{{g}_{S}}$$

(9)

where we have used *v _{S}* = 0. Furthermore, if the initial condition lies between these two equilibria, the solution will remain between them. We note that

$${v}_{o}-{v}_{n}=\frac{{g}_{K}}{{g}_{S}({g}_{K}+{g}_{S})}\phantom{\rule{0.16667em}{0ex}}({I}_{0}+{g}_{S}{v}_{K})$$

which is negative for all values of *I*_{0} of interest, and therefore *v*_{1} = *v _{o}* and *v*_{2} = *v _{n}*.

In order to invert the measured data to find the coefficients of the Markov model, it is important to have data covering the values of the transmembrane potential that are of interest in the Markov model. This can be achieved by adjusting the applied current *I*_{0}, the initial condition *v*^{0}, or the equilibrium potential *v _{K}*.
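With the parameter values given earlier, the two equilibria, and hence the domain Ω, are easily computed; a small numeric check of (8) and (9) (variable names ours):

```python
gK, gS, vK, I0 = 10.0, 100.0, -360.0, 1000.0   # pS, pS, mV, fA; v_S = 0

v_o = gK / (gK + gS) * vK - I0 / (gK + gS)     # open-channel equilibrium (8)
v_n = -I0 / gS                                 # closed-channel equilibrium (9)
# v_o ~ -41.8 mV < v_n = -10 mV, so Omega = [v_1, v_2] = [v_o, v_n];
# increasing I0 shifts both equilibria to more negative potentials.
```

This also shows concretely how adjusting *I*_{0} moves the voltage range covered by the data.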

Suppose we have pseudo-experimental data generated by repeated runs of the model (2). These data can be combined in order to define the probability density functions for the open state and for the non-conducting states. In the special case of the two-state Markov model (see (1)) there is only one non-conducting state (the closed state *C*), but for more complex Markov models there may be many conducting or non-conducting states.

Real experimental data can only distinguish between conducting (open) states and non-conducting states (closed or inactivated states), and therefore we group the pseudo-experimental data into two distributions: ${\overline{\rho}}_{o}$ for the conducting states and ${\overline{\rho}}_{n}$ for the non-conducting states. The parameters of the Markov model, collected in a vector *r*, are then determined by minimizing the cost function

$$J(r)={\Vert \rho (r)-\overline{\rho}\Vert}^{2}.$$

(10)

Here *ρ* = *ρ*(*r*) = (*ρ _{o}*(*r*)*, ρ _{n}*(*r*)) is the solution of the deterministic system (3) for the parameter vector *r*, where *ρ _{n}* is the sum of the densities of the non-conducting states, and $\overline{\rho}=({\overline{\rho}}_{o},{\overline{\rho}}_{n})$ is the corresponding distribution obtained from the pseudo-experimental data.

The crux of our method is to compute the parameters *r* by minimizing the cost function *J* = *J*(*r*) given by (10). We will show in detailed examples below how this is done and how it works, but first we will characterize what we mean by an optimal solution.

In general, it is not at all clear that the rates of a Markov model can be computed based on observations of whether the channel is in the conducting or non-conducting mode. As pointed out in e.g. [7], the situation is often complicated, and the problem of deciding whether the rates can be identified is a topic of ongoing research. Siekmann et al. [7] use the MCMC method to detect non-identifiable models. We use a completely different approach, but our results coincide with their observations. In order to characterize invertible and non-invertible cases, we need to define invertibility in this context. Roughly speaking, we say that a model is invertible by our method if we can find a unique minimum of the cost function (10).

From classical calculus (see e.g. Apostol [23]), we may get an indication of the optimality of a given candidate solution *r*^{*} by invoking the properties of the function *J* in the vicinity of *r*^{*}. By Taylor's theorem, we have

$$J({r}^{\ast}+\epsilon r)=J({r}^{\ast})+\epsilon \nabla J({r}^{\ast})\xb7r+\frac{{\epsilon}^{2}}{2}{r}^{T}H({r}^{\ast})r+O({\epsilon}^{3})$$

(11)

for any vector *r* and small number *ε*. Here *H* is the Hessian matrix of the function *J*. Clearly, *H* is symmetric; assume in addition that it is positive definite. Then, for any vector *y*, we have

$${y}^{T}Hy\ge {\lambda}_{0}{\mid y\mid}^{2}$$

where *λ*_{0} *>* 0 is the smallest eigenvalue of the Hessian matrix *H*, and *|* · *|* is the usual Euclidean norm. Suppose that *r*^{*} is a stationary point of *J*, i.e. ∇*J*(*r*^{*}) = 0; then, for any non-zero vector *r* and small non-zero value *ε*, we have

$$J({r}^{\ast}+\epsilon r)\approx J({r}^{\ast})+\frac{1}{2}{\epsilon}^{2}{r}^{T}H({r}^{\ast})r\ge J({r}^{\ast})+\frac{{\lambda}_{0}}{2}{\epsilon}^{2}{\mid r\mid}^{2}>J({r}^{\ast}).$$

Since, for all small and non-zero values of *ε* and any non-zero vector *r*, we have *J*(*r*^{*} + *εr*) *> J*(*r*^{*})*,* the parameters given by *r*^{*} define a local minimum of the function *J*. We also note that the smallest eigenvalue is a measure of how well-defined the local minimum is. If the smallest eigenvalue is much larger than zero, any step *r* away from the minimum will increase the value of the function *J* considerably. Conversely, when the value of *λ*_{0} is very small, the value of *J* varies little around *r*^{*}, and a well-defined minimum is therefore hard to find. In the case of *λ*_{0} = 0, there is an associated (non-zero) eigenvector *r*_{0} such that *H*(*r*^{*})*r*_{0} = 0 and thus

$$J({r}^{\ast}+\epsilon {r}_{0})\approx J({r}^{\ast})+\frac{{\epsilon}^{2}}{2}{r}_{0}^{T}H({r}^{\ast}){r}_{0}=J({r}^{\ast}).$$

So given the local minimum *r*^{*}, we can add *εr*_{0} for a small value of *ε*, and the value of *J* is essentially unchanged; the local minimum *r*^{*} is not unique. It is also straightforward to see that if the smallest eigenvalue *λ*_{0} is negative (so that the matrix is not positive definite) at a stationary point *r*^{*}, that parameter vector is not a local minimum, since

$$J({r}^{\ast}+\epsilon {r}_{0})\approx J({r}^{\ast})+\frac{1}{2}{\epsilon}^{2}{r}_{0}^{T}H({r}^{\ast}){r}_{0}=J({r}^{\ast})+\frac{{\lambda}_{0}}{2}{\epsilon}^{2}{\mid {r}_{0}\mid}^{2}<J({r}^{\ast}).$$

Thus, if *r*^{*} is changed in the direction of the eigenvector *r*_{0} associated with the negative eigenvalue *λ*_{0}, the value of the function *J* is reduced, and therefore *r*^{*} is not a local minimum.

Based on the considerations above, we are in a position to discuss what we mean by identifiability of a Markov model. We will refer to a model as identifiable by our method if it satisfies three criteria:

1. *Stationary point.* If we use the Fminsearch algorithm (a Matlab function based on the Nelder-Mead algorithm) to find an optimal parameter vector *r*^{*}, this parameter vector must be a stationary point of the cost function *J* = *J*(*r*) in the sense that *|*∇*J*(*r*^{*})*|* ≈ 0.
2. *Local minimum.* The candidate optimal solution *r*^{*} must be a local minimum in the sense that all eigenvalues of the Hessian matrix *H* evaluated at *r*^{*} are strictly positive.
3. *Global minimum.* If the Fminsearch algorithm is run with different initial conditions and results in different local minima *r*, a candidate optimal solution *r*^{*} must have a value of the cost function *J* that is significantly lower than all the other candidates.

This is a pragmatic definition of identifiability, which is useful in practical computations. However, the computational procedure outlined here may still fail: it may identify a minimum that is not really the global minimum, and it may indicate that the solution is unique even if it is not. This unsatisfying state of affairs is due to the fact that minimization of complicated non-linear functions is very challenging.
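The eigenvalue test behind this definition can be illustrated on a toy cost function in which only the sum of two parameters is observable, so the minimum is a whole line rather than a point (all names here are ours; the finite-difference Hessian mimics the numerical procedure used in the computations):

```python
import numpy as np

def hessian_fd(J, r, h=1e-5):
    """Central finite-difference Hessian of J at r."""
    n = len(r)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            rpp = r.copy(); rpp[i] += h; rpp[j] += h
            rpm = r.copy(); rpm[i] += h; rpm[j] -= h
            rmp = r.copy(); rmp[i] -= h; rmp[j] += h
            rmm = r.copy(); rmm[i] -= h; rmm[j] -= h
            H[i, j] = (J(rpp) - J(rpm) - J(rmp) + J(rmm)) / (4 * h * h)
    return H

# Toy cost where only the sum r[0] + r[1] is observable:
J = lambda r: (r[0] + r[1] - 1.0) ** 2
H = hessian_fd(J, np.array([0.4, 0.6]))   # a stationary point of J
eigs = np.linalg.eigvalsh(H)              # ascending eigenvalues
```

The near-zero smallest eigenvalue flags the direction (1, −1) along which the parameters can be changed without affecting the cost, i.e. a non-identifiable parameter combination.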

In the procedure introduced above, it remains to define the norm used in (10) to measure the difference between the pseudo-experimental data and the probability density functions *ρ* computed by solving the system (3). The issue here is to compare a numerical solution defined on a uniform mesh, representing the solution of a system of partial differential equations of the form (3), with pseudo-experimental data based on repeated solves of the stochastic differential equation (2). We do the comparison by introducing an adaptive mesh that reflects the number of samples of the pseudo-experimental data in a given interval, and then we integrate the difference between the experimental data and the numerical solutions over each interval.

For *m >* 0 we define the vector *x* = [*x*_{0}*, x*_{1}*,* . . . , *x _{m}*] to represent the boundaries of the bins of an adaptive mesh chosen such that each bin contains the same number of samples of the pseudo-experimental data. With *m^{o}* and *m^{n}* bins for the open and non-conducting distributions, respectively, the norm in (10) is defined by

$${\Vert \rho -\overline{\rho}\Vert}^{2}=\frac{{\sum}_{i=1}^{{m}^{o}}{({\rho}_{o,i}-{\overline{\rho}}_{i}^{o})}^{2}+{\sum}_{i=1}^{{m}^{n}}{({\rho}_{c,i}-{\overline{\rho}}_{i}^{n})}^{2}}{{\sum}_{i=1}^{{m}^{o}}{({\overline{\rho}}_{i}^{o})}^{2}+{\sum}_{i=1}^{{m}^{n}}{({\overline{\rho}}_{i}^{n})}^{2}},$$

(12)

where

$${\rho}_{o,i}=\frac{1}{{x}_{i}^{o}-{x}_{i-1}^{o}}{\int}_{{x}_{i-1}^{o}}^{{x}_{i}^{o}}{\rho}_{o}(v;r)dv$$

and

$${\rho}_{c,i}=\frac{1}{{x}_{i}^{c}-{x}_{i-1}^{c}}{\int}_{{x}_{i-1}^{c}}^{{x}_{i}^{c}}{\rho}_{c}(v;r)dv.$$

Here *ρ _{o,i}* and *ρ _{c,i}* are the averages of the PDE solution over the bins of the adaptive mesh, and ${\overline{\rho}}_{i}^{o}$ and ${\overline{\rho}}_{i}^{n}$ denote the histogram values of the pseudo-experimental data in bin *i* for the open and non-conducting states, respectively.
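In code, the adaptive mesh and one of the two sums in the numerator of (12) might be assembled as follows (a sketch; `samples` are voltages from the Monte Carlo runs, `rho` is the PDE density on the uniform mesh `v_grid`, and all names are ours):

```python
import numpy as np

def binned_diff_sq(samples, rho, v_grid, m=10):
    """Sum over m adaptive bins of (rho_i - rho_bar_i)^2, where each bin
    holds the same number of samples. rho_i is the average of the PDE
    density over bin i, and rho_bar_i is the histogram density of the
    samples in that bin."""
    edges = np.quantile(samples, np.linspace(0.0, 1.0, m + 1))
    hist, _ = np.histogram(samples, bins=edges, density=True)
    rho_i = np.empty(m)
    for i in range(m):
        inside = (v_grid >= edges[i]) & (v_grid <= edges[i + 1])
        rho_i[i] = rho[inside].mean()
    return float(((rho_i - hist) ** 2).sum())
```

The full norm (12) applies this to both the open and the non-conducting distributions and normalizes by the squared norm of the data.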

The pseudo-experimental data used for inversion were generated by solving the stochastic equation (2) 100 times for 100 seconds using Δ*t* = 1/1000 ms, giving 10 billion samples. In the optimization, the first 10% of each run is discarded in order to obtain steady-state behaviour. Two different applied currents are used: *I*_{0} = 1000 fA and *I*_{0} = 3000 fA. For every inversion, 1000 initial guesses were evaluated, and the ten best cases (lowest value of *J*) were used as start vectors for Fminsearch (Matlab). The best of the ten solutions provided by Fminsearch was defined to be the optimal solution. All computations were performed on a standard desktop computer. The PDE problems typically took a few seconds to solve, and the overall optimization on the order of hours.
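The multi-start protocol can be sketched in a few lines (here with SciPy's Nelder-Mead, the analogue of Matlab's fminsearch, applied to a toy quadratic cost; the actual computations minimize (10) with 1000 guesses):

```python
import numpy as np
from scipy.optimize import minimize

def multistart(J, bounds, n_guess=200, n_polish=10, seed=0):
    """Evaluate J at n_guess random start vectors, polish the n_polish
    best with Nelder-Mead, and return the best polished result."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    guesses = rng.uniform(lo, hi, size=(n_guess, len(lo)))
    best = sorted(guesses, key=J)[:n_polish]
    fits = [minimize(J, g, method="Nelder-Mead") for g in best]
    return min(fits, key=lambda f: f.fun)

# Toy cost with a unique global minimum at r = (1, 2):
res = multistart(lambda r: (r[0] - 1.0) ** 2 + (r[1] - 2.0) ** 2,
                 bounds=(np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
```

Polishing only the best initial guesses keeps the number of expensive PDE solves manageable while still sampling the parameter space broadly.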

The eigenvalues of the Hessian matrix are computed using singular value decomposition (the *svd* function in Matlab). The Hessian matrix is computed using sixth-order numerical differentiation [24].

We will show results of inversion using the method described above for a series of Markov models. The inversion is based on the fact that the probability density functions of the open and the non-conducting states can be obtained both from pseudo-experimental data and by solving a system of partial differential equations. The parameters defining the Markov model are adjusted in order for the solution of the system of partial differential equations to be as similar to the pseudo-experimental data as possible. We will start by showing that the distribution computed by repeated runs of the stochastic model (2) (Monte Carlo simulations) and the solution of the deterministic system of partial differential equations converge as the mesh parameters are refined and the number of runs of the stochastic model is increased. This result is well known; see e.g. [2, 3, 4].

For ease of reference, we will call the solutions computed by solving the stochastic model (2) Monte Carlo simulations (MC), and the solutions of the associated deterministic system of partial differential equations (PDE) will be called the PDE-solution.

We consider the Markov model

$${C}_{1}\underset{\beta}{\overset{2\alpha}{\rightleftarrows}}{C}_{2}\underset{2\beta}{\overset{\alpha}{\rightleftarrows}}O.$$

(13)

Here the rates (ms^{−1}) are specified in the form

$$\alpha (v)=exp({a}_{1}+{a}_{2}v),\beta (v)=exp({b}_{1}+{b}_{2}v),$$

with *a*_{1} = −1.0, *a*_{2} = 0.02, *b*_{1} = −4.5, *b*_{2} = −0.05 and *I*_{0} = 3000 fA. The PDE model associated with this Markov model is given by

$$\begin{array}{ll}\hfill {\partial}_{t}{\rho}_{o}+{\partial}_{v}\phantom{\rule{0.16667em}{0ex}}({a}_{o}{\rho}_{o})& =-2\beta (v){\rho}_{o}+\alpha (v){\rho}_{{c}_{2}},\\ \hfill {\partial}_{t}{\rho}_{{c}_{2}}+{\partial}_{v}\phantom{\rule{0.16667em}{0ex}}({a}_{n}{\rho}_{{c}_{2}})& =2\beta (v){\rho}_{o}-(\beta (v)+\alpha (v)){\rho}_{{c}_{2}}+2\alpha (v){\rho}_{{c}_{1}},\\ \hfill {\partial}_{t}{\rho}_{{c}_{1}}+{\partial}_{v}\phantom{\rule{0.16667em}{0ex}}({a}_{n}{\rho}_{{c}_{1}})& =\beta (v){\rho}_{{c}_{2}}-2\alpha (v){\rho}_{{c}_{1}},\end{array}$$

(14)

where *ρ*_{o}, *ρ*_{c1} and *ρ*_{c2} denote the probability density functions of the states *O, C*_{1} and *C*_{2}, respectively, and the functions *a _{o}* and *a _{n}* are given by (4) and (5) (with *a _{n}* = *a _{c}*).

In Figure 2 we compare the results of the solution of the system (14) (red, solid lines) and the solution obtained running MC simulations (histograms) using the stochastic model (2).

Clearly, in order to be able to identify the rates of the Markov model, the model output must be sensitive to changes in the model parameters. For the Markov model (13) with the rates given above, we illustrate the sensitivity with respect to changes in the rates in Figure 3. The figure shows the steady-state probability distribution of the open state (left) and the non-conducting states (right). We observe that multiplying the parameter *b*_{1} by either 0.9 or 1.1 changes the stationary solution significantly. This indicates that it is possible to invert this model.

We first apply our inversion procedure to three Markov models with no loop. The forms of the models are defined in Table 1 and the results are given in Table 2. The Markov models are based on Scheme 4 in Zhou et al. [25]. The CCCCOI model is identical to their model except that the *β* rate has been altered to fit the standard Boltzmann form. In the CCCCO model, we have simply removed the inactivated state I, and in the CO model we have kept just one closed and one open state.

The table shows the correct parameters (True value) of the Markov models, the computed (inverted) parameters denoted by *r*^{*} for the model CCO, the value of the cost function *J*(*r*^{*}), the value of the norm *|*∇*J*(*r*^{*})*|*, and the value of the smallest eigenvalue *λ*_{0} of the Hessian matrix evaluated at *r*^{*}.

We observe that for all three Markov models (CO, CCCCO and CCCCOI) we are able to compute the rates based on the pseudo-experimental data. The accuracy of the computed parameters is quite good. At the optimum, the value of the cost function is quite small, and so is the norm of the gradient of the cost function, which means that we are close to a stationary point. Finally, the smallest eigenvalue is positive in all cases. These cases are identifiable according to the definition given above.

In the problems above, all parameters are involved in transitions between conducting and non-conducting states. The rates are also voltage dependent. Next, we try a problem where this is not the case:

$${C}_{1}\underset{{\beta}_{1}}{\overset{2{\alpha}_{1}}{\rightleftarrows}}{C}_{2}\underset{2{\beta}_{2}}{\overset{{\alpha}_{2}}{\rightleftarrows}}O$$

with *α _{i}*(*v*) = exp(*a*_{i1} + *a*_{i2}*v*) and *β _{i}*(*v*) = exp(*b*_{i1} + *b*_{i2}*v*) for *i* = 1, 2. Here the rates *α*_{1} and *β*_{1} describe transitions between two non-conducting states.

It is quite common to use Markov models including a loop of the form illustrated in Figure 4. However, Siekmann et al. [7] conclude that this model is not identifiable. They considered a case with constant rates (independent of the transmembrane potential), and their method is very different from ours. It is therefore interesting to see if the model is identifiable in our framework. The rates of the model are parameterized as follows:

$${\alpha}_{i}(v)=exp({a}_{i1}+{a}_{i2}v),\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}{\beta}_{i}(v)=exp({b}_{i1}+{b}_{i2}v),\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}i=1,2,3.$$

In order to test if the model is identifiable using our method, we can use data provided by the PDE model and evaluate the smallest eigenvalue. We find that ${\lambda}_{0}^{\ast}=2.798\xb7{10}^{-9}$, which indicates that this case is very hard to invert. Also, running the inversion procedure does not result in a converged solution.

We also consider a case where we have removed the voltage dependence of the rates, i.e. we have set *a*_{i2} = *b*_{i2} = 0 so that the rates are constant.

For a special case of the three-state model shown in Figure 4, we can show mathematically that it is impossible to identify the rates based on observations of (*ρ _{o}, ρ _{n}*). To see this, we first note that the associated PDE system is given by

$$\frac{\partial {\rho}_{o}}{\partial t}+\frac{\partial}{\partial v}\phantom{\rule{0.16667em}{0ex}}({a}_{o}{\rho}_{o})={\alpha}_{1}{\rho}_{c}-({\beta}_{1}+{\alpha}_{2}){\rho}_{o}+{\beta}_{2}{\rho}_{i},$$

(15)

$$\frac{\partial {\rho}_{c}}{\partial t}+\frac{\partial}{\partial v}\phantom{\rule{0.16667em}{0ex}}({a}_{c}{\rho}_{c})={\beta}_{1}{\rho}_{o}-({\alpha}_{1}+{\beta}_{3}){\rho}_{c}+{\alpha}_{3}{\rho}_{i},$$

(16)

$$\frac{\partial {\rho}_{i}}{\partial t}+\frac{\partial}{\partial v}\phantom{\rule{0.16667em}{0ex}}({a}_{c}{\rho}_{i})={\alpha}_{2}{\rho}_{o}-({\beta}_{2}+{\alpha}_{3}){\rho}_{i}+{\beta}_{3}{\rho}_{c}.$$

(17)

By adding the latter two equations of the system above, we find the system

$$\frac{\partial {\rho}_{o}}{\partial t}+\frac{\partial}{\partial v}\phantom{\rule{0.16667em}{0ex}}({a}_{o}{\rho}_{o})={\alpha}_{1}{\rho}_{c}-({\beta}_{1}+{\alpha}_{2}){\rho}_{o}+{\beta}_{2}{\rho}_{i},$$

(18)

$$\frac{\partial {\rho}_{n}}{\partial t}+\frac{\partial}{\partial v}\phantom{\rule{0.16667em}{0ex}}({a}_{c}{\rho}_{n})=({\beta}_{1}+{\alpha}_{2})\phantom{\rule{0.16667em}{0ex}}{\rho}_{o}-{\alpha}_{1}{\rho}_{c}-{\beta}_{2}{\rho}_{i},$$

(19)

where *ρ _{n}* = *ρ _{c}* + *ρ _{i}* is the probability density function of the non-conducting states. If we now assume that *α*_{1} = *β*_{2} = *μ*, the system takes the form

$$\frac{\partial {\rho}_{o}}{\partial t}+\frac{\partial}{\partial v}\phantom{\rule{0.16667em}{0ex}}({a}_{o}{\rho}_{o})=\mu {\rho}_{n}-({\beta}_{1}+{\alpha}_{2}){\rho}_{o}$$

(20)

$$\frac{\partial {\rho}_{n}}{\partial t}+\frac{\partial}{\partial v}\phantom{\rule{0.16667em}{0ex}}({a}_{c}{\rho}_{n})=({\beta}_{1}+{\alpha}_{2})\phantom{\rule{0.16667em}{0ex}}{\rho}_{o}-\mu {\rho}_{n}.$$

(21)

From this system we notice that we can compute *ρ_{o}* and *ρ_{n}* without any knowledge of the rates *α*_{3} and *β*_{3}, and that the rates *β*_{1} and *α*_{2} enter only through their sum. The rates connecting the closed and inactivated states therefore cannot be identified by observing (*ρ_{o}*, *ρ_{n}*).
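
This non-identifiability can be illustrated numerically. The sketch below (an illustration with made-up, voltage-independent rates, not the paper's PDE solver) builds the generator matrix of the three state loop model for two rate sets that share *μ* = *α*_{1} = *β*_{2} and the sum *β*_{1} + *α*_{2}, but differ in the loop rates *α*_{3}, *β*_{3} and in the split of *β*_{1} + *α*_{2}. The stationary probability of the open state is identical in both cases, consistent with the reduction to the system in *ρ_{o}* and *ρ_{n}*.

```python
import numpy as np

def stationary_open_probability(a1, b1, a2, b2, a3, b3):
    """Stationary P(open) for the three-state loop model C <-> O <-> I, I <-> C.

    State order (C, O, I); off-diagonal Q[i, j] is the rate from state i to
    state j, matching the reaction terms of the PDE system above.
    """
    Q = np.array([
        [-(a1 + b3), a1,         b3],          # C -> O at alpha1, C -> I at beta3
        [b1,         -(b1 + a2), a2],          # O -> C at beta1,  O -> I at alpha2
        [a3,         b2,         -(a3 + b2)],  # I -> C at alpha3, I -> O at beta2
    ])
    # Stationary distribution: pi Q = 0 together with sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi[1]  # probability of the open state

# Two rate sets with alpha1 = beta2 = mu = 0.5 and beta1 + alpha2 = 1.2,
# but different splits of (beta1, alpha2) and different loop rates.
p1 = stationary_open_probability(a1=0.5, b1=0.9, a2=0.3, b2=0.5, a3=0.2, b3=0.7)
p2 = stationary_open_probability(a1=0.5, b1=0.4, a2=0.8, b2=0.5, a3=1.3, b3=0.1)
print(p1, p2)  # identical: only mu and beta1 + alpha2 are visible in (rho_o, rho_n)
```

Both calls return *μ*/(*μ* + *β*_{1} + *α*_{2}) = 0.5/1.7, so any inversion based on the open/non-conducting distributions alone cannot separate these parameter sets.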

The size of the capacitance influences the identifiability of a Markov model. This is illustrated in Figure 5 for the models CO (upper left), CCCCO (upper right), CCCCOI (lower left) and the three state model including a loop (lower right). The computations are based on data generated by solving the PDE using the correct rates of the Markov model. We observe that for the CO, CCCCO and CCCCOI models, there is a range of capacitance values for which the model is invertible. The three state model with a loop, however, is not identifiable for any value of the capacitance.

The method introduced here is easy to work with for Markov models with few states, but in principle it can also be applied to more complex models. In Figure 6 we show a 17 state model taken from Zhou et al. [25]. The parameters of the model are given in Table 4, and the eigenvalues of the Hessian are given in Table 5. This model has many very small eigenvalues and is therefore clearly not identifiable by our method. In Table 5 we also show the change of the cost function when the parameter vector is perturbed by a small step along each eigenvector of the Hessian matrix. Each eigenvector is normalized to have Euclidean norm equal to one, and we take steps of length 10^{−5}, 10^{−3} and 0.1. We note that for the eigenvectors associated with small eigenvalues, the change of the cost function is very small when the parameter vector is changed. This is exactly as expected from the theory discussed above. Conversely, large steps in the direction of eigenvectors with larger eigenvalues result in significant changes of the cost function.
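
The eigenvector-probing procedure just described can be sketched as follows. Here a toy quadratic cost with one flat (non-identifiable) direction stands in for the paper's cost function (10), and the Hessian is approximated by standard central finite differences; the function names and parameters are illustrative, not from the paper.

```python
import numpy as np

def hessian_fd(J, r, h=1e-5):
    """Central finite-difference approximation of the Hessian of J at r."""
    n = len(r)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            rpp = r.copy(); rpp[i] += h; rpp[j] += h
            rpm = r.copy(); rpm[i] += h; rpm[j] -= h
            rmp = r.copy(); rmp[i] -= h; rmp[j] += h
            rmm = r.copy(); rmm[i] -= h; rmm[j] -= h
            H[i, j] = (J(rpp) - J(rpm) - J(rmp) + J(rmm)) / (4 * h * h)
    return H

# Toy cost with a flat direction along r[0] - r[1]: only the sum is identifiable.
J = lambda r: (r[0] + r[1] - 2.0) ** 2
r_star = np.array([1.0, 1.0])  # a minimizer: J(r_star) = 0

H = hessian_fd(J, r_star)
lam, W = np.linalg.eigh(H)  # eigenvalues ascending; columns of W are unit eigenvectors

# Probe the cost function along each eigenvector for several step lengths.
for step in (1e-5, 1e-3, 0.1):
    for k in range(len(lam)):
        dJ = J(r_star + step * W[:, k]) - J(r_star)
        print(f"step={step:g}  lambda={lam[k]:.2e}  dJ={dJ:.2e}")
```

Steps along the eigenvector with eigenvalue zero leave the cost unchanged, while steps along the eigenvector with eigenvalue 4 change it by roughly (step²/2)·4, mirroring the behavior reported in Table 5.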

The table defines the name and the form of three Markov models. The rates are written in the form *α*(*v*) = exp(*α*_{1} + *α*_{2}*v*), *β*(*v*) = exp(*b*_{1} + *b*_{2}*v*). The *O–I* transition is assumed to be voltage independent; *γ* = **...**

The table shows the correct parameters (True value) of the Markov models, the computed (inverted) parameters denoted by *r*^{*} for the models CO, CCCCO and CCCCOI, the value of the cost function *J*(*r*^{*}), the value of the norm *|∇J*(*r*^{*})*|* and the value **...**

In Table 6 we consider the reduced case where the *O*_{2}-state is removed from the model. The smallest eigenvalue does not change much, indicating that the *O*_{1}-*O*_{2} transition is not the most difficult one to identify.

The main purpose of this report is to propose a new method for computing rates of Markov models of voltage-gated ion channels. Other methods are based on experimental data obtained by clamping the transmembrane potential and recording whether the channel is in the conducting (open) or non-conducting (closed or inactivated) mode. Our method is based on repeated recordings of traces of the transmembrane potential of the single channel. The method proposed here also enables analysis of identifiability of the model and it may provide a way of quantifying the uncertainty of the resulting rates of the Markov model.

As pointed out by Siekmann et al. [7], it is straightforward to devise procedures that will provide the rates of Markov models based on experimental data. The difficulty involved is to obtain some measure of the reliability of the computed rates. Clearly, if two sets of completely different rates can provide the same results from a model, the model is not suited to give the insight we are ultimately looking for.

Our method is based on comparing probability density functions from experimental data with probability density functions generated by solving a system of partial differential equations. The experimental data in this study are generated by Monte Carlo (MC) simulations where a given Markov model governs the stochastic state of the ion channel. The challenge is then to tune the parameters of a system of partial differential equations (PDE) such that the solution of the PDE system resembles the solution generated by MC as closely as possible. The model is identifiable if different sets of rates for the Markov model give different probability density distributions.
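
To illustrate how such pseudo-experimental data can be produced, the sketch below simulates a single two-state (CO) channel coupled to a membrane equation and bins the visited voltages by channel state to obtain empirical probability densities. All parameter values here (rates, conductance, capacitance, applied current, Nernst potential) are made up for illustration and are not the values used in this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model constants (assumed, not from the paper)
C, g, vK, I0 = 1.0, 1.0, -85.0, 1.0        # capacitance, conductance, Nernst potential, current
alpha = lambda v: np.exp(-2.0 + 0.02 * v)  # C -> O opening rate
beta = lambda v: np.exp(-1.0 - 0.03 * v)   # O -> C closing rate

dt, n_steps = 0.01, 200_000
v, s = -80.0, 0                            # voltage; s = 1 if open, 0 if closed
vs, ss = np.empty(n_steps), np.empty(n_steps, dtype=int)

for k in range(n_steps):
    # Membrane equation: C dv/dt = I0 - g*s*(v - vK)
    v += dt / C * (I0 - g * s * (v - vK))
    # Stochastic gating: switch with probability rate*dt (forward Euler)
    if s == 0 and rng.random() < alpha(v) * dt:
        s = 1
    elif s == 1 and rng.random() < beta(v) * dt:
        s = 0
    vs[k], ss[k] = v, s

# Empirical probability densities of the voltage, conditioned on the channel state
bins = np.linspace(-90, 20, 56)
rho_o, _ = np.histogram(vs[ss == 1], bins=bins)
rho_c, _ = np.histogram(vs[ss == 0], bins=bins)
total = rho_o.sum() + rho_c.sum()
rho_o, rho_c = rho_o / total, rho_c / total  # joint mass over both states sums to one
```

The pair (rho_c, rho_o) plays the role of the measured probability density functions; the inversion then tunes the rates of the PDE system until its stationary densities match these histograms.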

We find that the simple CO, CCCCO and CCCCOI models are identifiable, whereas the looped model (see Figure 4) is not identifiable using our method. For the latter, one eigenvalue of the Hessian matrix is very small, and thus we may change the parameter vector *r* defining the Markov model by a step in the direction of the associated eigenvector and observe little or no change in the cost function.

For the three state Markov model with a loop (Figure 4), we give an example where the rates between the closed and inactivated states simply disappear from the model in the sense that we can compute the probability density distribution of the conducting (open) and non-conducting (closed plus inactivated) states without knowledge of these rates. The model is therefore clearly not identifiable in this particular case.

For the complex Markov model given in Figure 6, we find that eight eigenvalues are very small. We can change the parameter vector in the direction of any of the associated eigenvectors and observe very little change of the cost function.

Following Siekmann et al. [7], we have considered the Markov models shown in Figure 7. Here data are created using the model without a loop (left panel), and these data are used to invert the model with a loop (right panel). The MCMC method used by Siekmann et al. then, correctly, yields results consistent with the model without a loop. Our method similarly provides rates for which the transitions between the two open states are virtually zero.

All the observations regarding identifiability reported in the present paper are based on the particular cost function (10). This cost function can be changed in many different ways; we may vary the applied current *I*_{0} in numerous ways, we can add the time course of each trace of pseudo-experimental data, and we can alter the transmembrane concentration gradient and thus change the associated equilibrium (Nernst) potential of the potassium current. None of these changes have significantly altered the observations reported here, and we have therefore chosen to rely on the simplest version of the cost function.

As demonstrated above, a model is not identifiable if the smallest eigenvalue of the Hessian matrix is zero. In numerical computations, however, the smallest eigenvalue is never exactly zero, merely very small. By considering the Taylor series (11), it is evident that a very small eigenvalue implies that the cost function changes very little if the parameter vector is changed in the direction of the associated eigenvector. Therefore, we refer to a model as not identifiable when the smallest eigenvalue is sufficiently small. It may be possible to follow this line of reasoning one step further and use the size of the smallest eigenvalue to quantify more precisely the uncertainty of the rates computed by the method introduced here.
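
This argument can be made explicit. At a minimizer *r*^{*} the gradient of *J* vanishes, so for a unit eigenvector *w* of the Hessian *H* of *J* with eigenvalue *λ*, the Taylor series (11) gives

$$J({r}^{\ast}+\epsilon w)=J({r}^{\ast})+\frac{{\epsilon}^{2}}{2}\phantom{\rule{0.16667em}{0ex}}{w}^{T}Hw+O({\epsilon}^{3})=J({r}^{\ast})+\frac{{\epsilon}^{2}}{2}\phantom{\rule{0.16667em}{0ex}}\lambda +O({\epsilon}^{3}),$$

so for *λ* close to zero the cost function is flat to second order along *w*, and a perturbation of the rates in that direction cannot be distinguished from *r*^{*} by the data.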

The errors of the computed parameters can be caused by variations in the data, since the data are generated by a stochastic process, and by inaccuracies inherent in the method of inversion. If we perform the inversion based on data created by the deterministic system of partial differential equations, no randomness is involved, yet there are still errors in the computed parameters. This indicates that there are inherent inaccuracies in the method of inversion, as is very common for methods of solving inverse problems. One source of error is the numerical method used to solve the system of partial differential equations. This part of the error is typically reduced by mesh refinement, but extremely fine meshes are unattractive from a computational point of view. On the other hand, when stochastic data are used, the error is typically reduced by larger samples, and repeated inversion using the same data but different initial values yields the same results. This confirms that stochastic variations are also a source of error in the computed parameters.
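
The effect of mesh refinement on the discretization error can be illustrated with a standard first-order upwind scheme for a constant-coefficient model advection problem. This is a generic sketch, not the solver used in this study: halving the mesh size roughly halves the error.

```python
import numpy as np

def upwind_error(n, c=1.0, T=0.5):
    """Solve u_t + c u_x = 0 on [0, 1] (periodic) with first-order upwind,
    and return the max error against the exact translated solution at time T."""
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    dx = 1.0 / n
    dt = 0.4 * dx / c                    # CFL number 0.4 < 1: stable
    steps = int(round(T / dt))
    u = np.sin(2 * np.pi * x)
    for _ in range(steps):
        u = u - c * dt / dx * (u - np.roll(u, 1))  # upwind difference (c > 0)
    exact = np.sin(2 * np.pi * (x - c * steps * dt))
    return np.max(np.abs(u - exact))

e1, e2 = upwind_error(100), upwind_error(200)
print(e1, e2, e1 / e2)  # ratio close to 2: first-order convergence
```

The error ratio near 2 under mesh doubling reflects the O(Δv) accuracy of first-order upwinding; higher-order schemes or finer meshes reduce this error at increased computational cost.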

Although single channel data have in principle been available for forty years ([10, 11]), the majority of patch clamp data is still obtained from whole cell recordings. Our method is suitable for inverting data based on a single ion channel but is not directly applicable to data from whole cell recordings. For a single channel it is relevant to consider the probability density as a function of the transmembrane potential, and for an identifiable Markov model such distributions can clearly be used to identify the rates of the model. The whole cell situation is completely different, since thousands of single channels contribute to the transmembrane potential, which is therefore a deterministic entity even though the individual channels are stochastic.

1. Keener J, Sneyd J. Mathematical Physiology. Springer; 2009.

2. Smith Gregory D. Modeling the stochastic gating of ion channels. In: Fall Christopher P, Marland Eric S, Wagner John M, Tyson John J., editors. Computational Cell Biology, volume 20 of Interdisciplinary Applied Mathematics. chapter 11. Springer; 2002. pp. 285–319.

3. Bressloff Paul C. Stochastic processes in cell biology. Vol. 41. Springer; 2014.

4. Tveito Aslak, Lines Glenn Terje. Computing characterizations of drugs for ion channels and receptors using Markov models. Vol. 111. Springer; Heidelberg, Germany: 2016. volume LNCSE.

5. Shelley Christopher, Magleby Karl L. Linking Exponential Components to Kinetic States in Markov Models for Single-Channel Gating. The Journal of General Physiology. 2008 Jul;132(2):295–312. [PMC free article] [PubMed]

6. Siekmann Ivo, Wagner Larry E, II, Yule David, Fox Colin, Bryant David, Crampin Edmund J, Sneyd James. MCMC Estimation of Markov Models for Ion Channels. Biophysical Journal. 2011 Apr;100(8):1919–1929. [PubMed]

7. Siekmann Ivo, Sneyd James, Crampin Edmund J. MCMC Can Detect Nonidentifiable Models. Biophysical Journal. 2012 Dec;103(11):2275–2286. [PubMed]

8. Hines Keegan E, Middendorf Thomas R, Aldrich Richard W. Determination of parameter identifiability in nonlinear biophysical models: A bayesian approach. The Journal of general physiology. 2014;143(3):401–416. [PMC free article] [PubMed]

9. Csanády László. Statistical evaluation of ion-channel gating models based on distributions of log-likelihood ratios. Biophysical journal. 2006;90(10):3523–3545. [PubMed]

10. Neher Erwin, Sakmann Bert. Single-channel currents recorded from membrane of denervated frog muscle fibres. In: A Century of Nature: Twenty-One Discoveries that Changed Science and the World. 2010. p. 224.

11. Sakmann Bert, Neher Erwin. Patch clamp techniques for studying ionic channels in excitable membranes. Annual review of physiology. 1984;46(1):455–472. [PubMed]

12. Colquhoun D, Hawkes AG. Relaxation and fluctuations of membrane currents that flow through drug-operated channels. Proceedings of the Royal Society of London. Series B, Biological Sciences. 1977;199(1135):231–262. [PubMed]

13. Colquhoun D, Hawkes AG. On the stochastic properties of bursts of single ion channel openings and of clusters of bursts. Philosophical Transactions of the Royal Society London B. 1982;300:1–59. [PubMed]

14. Sakmann Bert, Neher Erwin., editors. Single-Channel Recording. 2. Springer; 1995.

15. Qin Feng, Auerbach Anthony, Sachs Frederick. Estimating single-channel kinetic parameters from idealized patch-clamp data containing missed events. Biophysical Journal. 1996 Jan;70:264–280. [PubMed]

16. Qin Feng, Auerbach Anthony, Sachs Frederick. A direct optimization approach to hidden Markov modeling for single channel kinetics. Biophysical Journal. 2000 Jan;79:1915–1927. [PubMed]

17. Nicolai Christopher, Sachs Frederick. Solving ion channel kinetics with the QuB software. Biophysical Reviews and Letters. 2013;8(3n4):191–211.

18. Ball FG, Davies SS. Statistical inference for a two-state Markov model of a single ion channel, incorporating time interval omission. Journal of the Royal Statistical Society. Series B (Methodological) 1995:269–287.

19. Rosales Rafael A. MCMC for hidden Markov models incorporating aggregation of states and filtering. Bulletin of Mathematical Biology. 2004;66(5):1173–1199. [PubMed]

20. Gin Elan, Falcke Martin, Wagner Larry E, Yule David I, Sneyd James. Markov chain Monte Carlo fitting of single-channel data from inositol trisphosphate receptors. Journal of Theoretical Biology. 2009 Apr;257(3):460–474. [PubMed]

21. Fredkin Donald R, Rice John A. On aggregated Markov processes. Journal of Applied Probability. 1986:208–214.

22. Milescu Lorin S, Yamanishi Tadashi, Ptak Krzysztof, Mogri Murtaza Z, Smith Jeffrey C. Real-time kinetic modeling of voltage-gated ion channels using dynamic clamp. Biophysical journal. 2008;95(1):66–87. [PubMed]

23. Apostol Tom M. Calculus. II. Wiley; 1969.

24. Fornberg Bengt. Generation of finite difference formulas on arbitrarily spaced grids. Mathematics of Computation. 1988;51(184):699–706.

25. Zhou Qinlian, Bett Glenna CL, Rasmusson Randall L. Markov Models of Use-Dependence and Reverse Use-Dependence during the Mouse Cardiac Action Potential. PLoS ONE. 2012 Aug;7(8):e42295. [PMC free article] [PubMed]
