Computer simulations of electrical behaviour in the whole ventricles have become commonplace during the last few years. The goals of this article are (i) to review the techniques that are currently employed to model cardiac electrical activity in the heart, discussing the strengths and weaknesses of the various approaches, and (ii) to implement a novel modelling approach, based on physiological reasoning, that lifts some of the restrictions imposed by current state-of-the-art ionic models. To illustrate the latter approach, the present study uses a recently developed ionic model of the ventricular myocyte that incorporates an excitation–contraction coupling and mitochondrial energetics model. A paradigm to bridge the vastly disparate spatial and temporal scales, from subcellular processes to the entire organ, and from sub-microseconds to minutes, is presented. Achieving sufficient computational efficiency is the key to success in the quest to develop multiscale realistic models that are expected to lead to better understanding of the mechanisms of arrhythmia induction following failure at the organelle level, and ultimately to the development of novel therapeutic applications.
Computer simulations of electrical behaviour in the whole ventricles (Xie et al. 2004; Rodriguez et al. 2005; Potse et al. 2006; Ten Tusscher et al. 2007; Ashihara et al. 2008) or atria (Harrild & Henriquez 2000; Vigmond et al. 2001; Virag et al. 2002; Seemann et al. 2006) have become commonplace during the last few years. However, even when powerful state-of-the-art computational resources are used, attempts to integrate behaviour from the protein scale of ion channels to the organ scale of cardiac arrhythmias remain enormously challenging, and typically include significant trade-offs between representations of different types of complexities (subcellular processes versus structural complexities). To achieve a computational efficiency in modelling the (patho)physiological processes in the heart that permits the exploration of the parameter space of interest, several simplifications are typically made.
Clearly, the ultimate goal of modelling is to accurately represent the interplay between the subsystems responsible for the primary functions of the heart, including the electrophysiological, Ca2+ handling, contractile and energetic components. This necessitates a significant level of complexity of the cell models, often achieved by implementing highly nonlinear control systems. Moreover, the ability to use simulations to gain a better understanding of cardiac pathophysiology requires a representation of the bidirectional feedback loops connecting the various subsystems. This presents additional computational concerns with respect to the coupling of processes that span very different temporal and spatial scales, described by equations with varying numerical stiffness.
A case in point is a recently described cell model, which combines mitochondrial bioenergetics with previously developed electrophysiological, Ca2+ handling and contractile models, referred to as the excitation–contraction coupling and mitochondrial energetics (ECME) model (Cortassa et al. 2006). This cell model was developed as a framework for studying how the failure of the mitochondrial network of the cardiomyocyte can lead to action potential shortening or complete electrical inexcitability as a result of the depletion of the high-energy phosphate pool and the activation of the ATP-sensitive K+ channels in the sarcolemma. Through a mechanism of reactive oxygen species (ROS)-induced ROS release (Cortassa et al. 2004), it was shown that the uncoupling of oxidative phosphorylation can induce oscillations of background K+ currents (Aon et al. 2003) that were proposed to contribute to electrical heterogeneity and arrhythmias in whole hearts exposed to ischaemia-reperfusion (Akar et al. 2005). Compounds that could inhibit mitochondrial depolarization also prevented the post-ischaemic arrhythmias, demonstrating that a failure at the level of the mitochondrion scales to cause macroscopic desynchronization of cardiac electrical propagation. Thus, there is a strong motivation for developing multiscale models to study excitation–contraction–bioenergetic coupling, spanning from the organelle to the entire organ. Because the ECME model takes into account both fast Markovian ion channel gating kinetics (microsecond relaxation times) and slow enzymatic reactions associated with the tricarboxylic acid cycle (second-long relaxation times), an ordinary differential equation (ODE) solver, which explicitly accounts for the wide range of time-scales involved, would be ideally suited to speed up computational time. The technical challenges of scaling up a model with a temporal range spanning six orders of magnitude are formidable.
The goals of this article are (i) to review the techniques that are currently employed to model cardiac electrical activity in the heart, discussing the strengths and weaknesses of the various approaches, and (ii) to implement a novel modelling approach, based on physiological reasoning rather than on mathematical rigour, that lifts some of the restrictions imposed by ionic models of the most current generation, such as the ECME model. This article thus presents an efficient multiscale model of ventricular arrhythmogenesis that incorporates mitochondrial ionic channels and mitochondrial energetics. Achieving sufficient computational efficiency is the key to success in the quest to develop multiscale realistic models that are expected to lead to better understanding of the mechanisms of arrhythmia induction following failure at the organelle level and, ultimately, to the development of novel therapeutic applications.
With a few exceptions (Wang et al. 1996), the cardiac bidomain equations, or a simplified version referred to as the monodomain equations, are typically used to model cardiac bioelectric phenomena at the tissue and organ level. The bidomain equations (Plonsey 1988) represent the homogenization of cardiac tissue, replacing discrete components of the intracellular and extracellular tissue matrix, such as cells and gap junctions, with a continuum. Each point in space is considered to exist in two domains, the intra- and extracellular; the domains are superimposed in space and separated from each other by the cellular membrane.
The following equations relate the intracellular potential ϕi to the extracellular potential ϕe, through the transmembrane current density Im:
where σ̄i and σ̄e are the intracellular and extracellular conductivity tensors, respectively; β is the surface-to-volume ratio of the cardiac cells; Itrans is the current density of the transmembrane stimulus; Cm is the membrane capacitance per unit area; Vm is the transmembrane voltage, defined as ϕi−ϕe; and Iion is the current density of the ionic current, which depends on the transmembrane voltage and other state variables represented by υ.
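For readability, the standard form of these relations is reproduced here; this is a reconstruction from the symbol definitions above, using the article's own equation numbering:

```latex
\begin{align}
\nabla\cdot(\bar{\sigma}_i\nabla\phi_i) &= \beta I_m, \tag{2.1}\\
\nabla\cdot(\bar{\sigma}_e\nabla\phi_e) &= -\beta I_m - I_{\mathrm{trans}}, \tag{2.2}\\
I_m &= C_m\frac{\partial V_m}{\partial t} + I_{\mathrm{ion}}(V_m,\upsilon), \tag{2.3}\\
V_m &= \phi_i - \phi_e. \tag{2.4}
\end{align}
```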
The bidomain conductivity tensors σ̄i and σ̄e are the result of a homogenization procedure (Henriquez 1993), through which the discrete nature of the tissue matrix is translated into a macroscopic representation of the conductive tissue properties. The conductivity tensors account, in an averaged macroscopic sense, for the observation that conduction velocity is faster along the direction of the fibre orientation, and slower in a direction transverse to it. In the intracellular space, this is a reflection of the discrete coupling network, where cell-to-cell connections (gap junctions) are found most frequently at the ends of a cell (along the long axis of the cell). Furthermore, in both intracellular and extracellular spaces, a preferred direction of conduction is a consequence of the cell geometry, which constrains current flow in both spaces. This property is referred to as anisotropy. The anisotropy ratios between the two domains have been shown to be unequal (Clerc 1976; Roberts et al. 1979; Roberts & Scher 1982; Roth 1997). This property has been demonstrated to be of major importance in the interaction between cardiac tissue and externally applied current (Sepulveda et al. 1989).
Different approaches to recast the bidomain equations have been suggested (Hooke et al. 1994). A particularly popular linear transformation is to add equations (2.1) and (2.2) and to use the definition of Vm according to equation (2.4) to retain ϕe and Vm, the two experimentally measurable quantities of interest, as the independent variables (Pollard et al. 1992):
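Adding equations (2.1) and (2.2) and eliminating ϕi via equation (2.4) yields the transformed system; the following is a reconstruction in standard form, numbered as in the article:

```latex
\begin{align}
\nabla\cdot\left((\bar{\sigma}_i+\bar{\sigma}_e)\nabla\phi_e\right) &= -\nabla\cdot(\bar{\sigma}_i\nabla V_m) - I_{\mathrm{trans}}, \tag{2.5}\\
\nabla\cdot(\bar{\sigma}_i\nabla V_m) + \nabla\cdot(\bar{\sigma}_i\nabla\phi_e) &= \beta\left(C_m\frac{\partial V_m}{\partial t} + I_{\mathrm{ion}}(V_m,\upsilon)\right). \tag{2.6}
\end{align}
```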
In the most general case, when the heart is immersed in a conductive fluid (e.g. a tissue bath in an experimental context or a surrounding torso to model in vivo scenarios), a Poisson problem has to be additionally solved,
where σb is the conductivity of the bath medium; and Ie is the volume density of the extracellular stimulus current, which is injected or withdrawn through electrodes located in the bath.
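The bath problem, reconstructed here in standard form with the article's numbering, reads:

```latex
\nabla\cdot(\sigma_b\nabla\phi_e) = -I_e. \tag{2.7}
```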
Boundary conditions at the tissue–bath interface enforce continuity of the normal component of the extracellular current and continuity of ϕe, whereas the intracellular domain is isolated there (the normal component of the intracellular current is zero). At the boundaries of the bath, the normal component of the extracellular current is zero. Without enforcing Dirichlet boundary conditions, the elliptic partial differential equation (PDE) components, i.e. equation (2.7), and the elliptic portion of the bidomain equations, i.e. equation (2.5), are singular, which may pose numerical difficulties. Typically, grounding electrodes are used in the extracellular domain, i.e. nodes in the mesh are chosen where ϕe is fixed at zero value, to circumvent this problem. For most scenarios of practical interest this is not a limitation since the validation of simulation results with experimental data requires the use of grounding electrodes to match an experimental set-up. With certain iterative solver techniques, for instance Krylov subspace methods such as conjugate gradients (CG), this is not necessarily required (Potse et al. 2006).
Different spatial discretization techniques, such as the finite-difference method (FDM; Skouibine et al. 2000), the finite-volume method (FVM; Harrild & Henriquez 1997; Trew et al. 2005), the interconnected cable model (Leon & Roberge 1990), and the finite-element method (FEM; Rogers & McCulloch 1994; Vigmond et al. 2002; Rodriguez et al. 2005; Ashihara et al. 2008), have been applied to the cardiac bidomain problem. Spatial discretization is mainly governed by the spatial extent of the depolarization wavefront. Assuming that the upstroke of the action potential lasts for approximately 1ms and that under physiological conditions the wavefront travels at a speed between 0.2 m s−1 (transverse conduction) and 0.7 m s−1 (longitudinal conduction), the spatial extent of the wavefront is in the range of 0.2–0.7mm. The grid resolution has to be smaller than the extent of the wavefront, otherwise spatial undersampling effects will influence the simulation results. In the following sections, ‘fine discretization’ will be used to refer to discretizations where the spatial resolution is well below the spatial extent of the wavefront, and ‘coarse’ if the resolution is well above it. A discretization step of 250μm could be considered as the limit between the two; however, it is not strict. Under pathological conditions, for instance, when a very slow decremental conduction is modelled, even a resolution of 100μm may be too coarse.
A more formal argument to estimate spatio-temporal discretization constraints is found by considering the Courant–Friedrichs–Lewy (CFL) condition (Courant et al. 1928). Briefly, assuming straight fibres and equal anisotropy ratios, the CFL condition, which governs the maximum possible time-step for stable integration using the forward Euler method, is approximated by Δt ≤ h²/(2(Dl+Dt)), where h is the grid spacing and Dζ = σ̃ζ/(βCm), with σ̃ζ = σiζσeζ/(σiζ+σeζ) being the harmonic mean of the intracellular and extracellular conductivities along (ζ=l) and across (ζ=t) the fibres. Assuming a standard value of 1400cm−1 for β and the conductivity values as published by Clerc (1976), the CFL condition predicts a maximum possible time-step of 45.5μs for a 100μm grid. For a 10μm grid Δt decreases to 0.45μs, which renders the forward Euler integration for the parabolic PDE computationally inefficient compared with an implicit scheme such as the Crank–Nicolson scheme discussed below.
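The quoted time-step limits can be checked directly. The sketch below is illustrative only; it assumes a two-dimensional sheet with harmonic-mean bulk conductivities along and across the fibres, as in the equal-anisotropy monodomain reduction:

```python
# CFL-type estimate for forward Euler integration of the parabolic bidomain PDE.
# Illustrative sketch (not the article's code): 2-D sheet, straight fibres,
# equal anisotropy ratios, harmonic-mean effective conductivities.

beta = 1400.0      # surface-to-volume ratio (cm^-1)
Cm   = 1.0e-6      # membrane capacitance (F/cm^2)

# Clerc (1976) conductivities (S/cm): longitudinal and transverse
sigma_il, sigma_el = 1.74e-3, 6.25e-3
sigma_it, sigma_et = 0.193e-3, 2.36e-3

def harmonic(si, se):
    """Effective monodomain conductivity for equal anisotropy ratios."""
    return si * se / (si + se)

# Diffusion coefficients (cm^2/s) along and across the fibres
D_l = harmonic(sigma_il, sigma_el) / (beta * Cm)
D_t = harmonic(sigma_it, sigma_et) / (beta * Cm)

def dt_max(h_cm):
    """Maximum stable forward Euler time-step on a grid of spacing h (cm)."""
    return h_cm ** 2 / (2.0 * (D_l + D_t))

print(dt_max(0.01) * 1e6)   # 100 um grid -> ~45.5 us
print(dt_max(0.001) * 1e6)  # 10 um grid  -> ~0.45 us
```

With these parameter values, the 100μm estimate reproduces the 45.5μs figure quoted above, and the h² scaling gives the hundredfold reduction at 10μm.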
Numerically, the bidomain equations can be solved as a coupled system (Vigmond et al. 2002), where transmembrane voltage and extracellular potential are solved for simultaneously. Alternatively, operator-splitting techniques are applied (Keener & Bogar 1998) to decouple the computing scheme into three components, such as an elliptic PDE, a parabolic PDE and a set of nonlinear ODEs (equations (2.9)–(2.12)). It has been shown that the decoupled scheme converges quickly towards the solution of the coupled scheme when a block Gauss–Seidel iteration is employed (Pennacchio & Simoncini 2002). However, in a number of studies, the components were essentially treated as independent (Weber dos Santos et al. 2004a,b; Sundnes et al. 2005; Austin et al. 2006; Plank et al. 2007); solutions were then found by leapfrogging between the decoupled components, where either Vm in equation (2.5) or ϕe in equation (2.6) is held constant. In Vigmond et al. (2002), it was found that, with small error tolerance, the difference between coupled and decoupled approaches is negligible.
Discretizing the decoupled bidomain equations leads to a three-step scheme, which involves the solution of the parabolic PDE, the elliptic PDE and the nonlinear system of ODEs at each time-step. In the simplest case, both the parabolic PDE and the nonlinear ODE systems can be solved with an explicit forward Euler scheme (Vigmond et al. 2002),
where ∇·(σ̄ξ∇) is the discretized operator, with ξ being either i (intracellular) or e (extracellular); Δt is the time-step; and V^k, ϕe^k and υ^k are the temporal discretizations of Vm, ϕe and υ, respectively, at the time instant kΔt.
When the computational mesh is of fine discretization and thus the integration time-step of the parabolic PDE has to be reduced substantially to maintain stability, it is advantageous to employ the more expensive semi-implicit Crank–Nicolson scheme instead of the forward Euler scheme to solve the parabolic PDE.
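The stability contrast between the two schemes can be illustrated with a minimal one-dimensional diffusion sketch (illustrative parameters, not the article's set-up): above the explicit limit, forward Euler diverges while Crank–Nicolson remains bounded.

```python
import numpy as np

# 1-D diffusion u_t = D u_xx with Dirichlet boundaries: forward Euler blows up
# above the stability limit dt <= dx^2/(2D), whereas the semi-implicit
# Crank-Nicolson scheme remains stable. Illustrative sketch only.

N, dx, D = 50, 0.01, 1.0           # nodes, spacing (cm), diffusivity (cm^2/s)
L = (np.diag(-2.0 * np.ones(N)) +
     np.diag(np.ones(N - 1), 1) +
     np.diag(np.ones(N - 1), -1))  # discrete Laplacian stencil (times 1/dx^2)

dt = 4.0 * dx**2 / (2.0 * D)       # four times the explicit stability limit
r = D * dt / dx**2

u0 = np.zeros(N); u0[N // 2] = 1.0  # spike initial condition

# Forward Euler: u <- u + r L u
u_fe = u0.copy()
for _ in range(20):
    u_fe = u_fe + r * (L @ u_fe)

# Crank-Nicolson: (I - r/2 L) u_new = (I + r/2 L) u
I = np.eye(N)
A, B = I - 0.5 * r * L, I + 0.5 * r * L
u_cn = u0.copy()
for _ in range(20):
    u_cn = np.linalg.solve(A, B @ u_cn)

print(np.abs(u_fe).max())  # grows without bound
print(np.abs(u_cn).max())  # stays bounded
```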
The PDEs are solved most efficiently with direct methods; however, such methods are limited to small grids only (Vigmond et al. 2002; Plank et al. 2007) because otherwise memory demands increase quickly, which, in turn, significantly increases the required number of operations per solver step. Although direct methods have been implemented to run in parallel environments (Amestoy et al. 2001; Li & Demmel 2003), they are typically harder to parallelize owing to their fine-grained, communication-intense parallelism. For large systems, iterative methods are mandatory: both the memory requirements and the operation counts of direct solvers grow superlinearly with the system size N, whereas iterative solvers require memory that grows only linearly with N and an operation count that, for a given error tolerance ϵ, grows far more slowly than that of a direct factorization (Meurant 1999). In general, linear systems with more than a few hundred thousand unknowns can be considered as large. For instance, in Plank et al. (2007) a rabbit ventricular geometry immersed in a bath was discretized, resulting in 0.87 million unknowns. The linear system associated with this set-up could not be factorized on a desktop computer with 8GB of RAM available.
When executing bidomain simulations on sequential computers, the main computational burden can be attributed to the solution of the elliptic problem and the set of ODEs (i.e. the ionic model). The complexity of an ionic model is determined by the number of state variables in the model and the stiffness of the ODE system. With simple ionic models, among which are phenomenological models such as FitzHugh–Nagumo (FitzHugh 1969) and its derivatives as well as electrophysiology-based (Hodgkin–Huxley-type) ionic models with a small number of state variables and with few or no intracellular compartments and fluxes between them (Drouhard & Roberge 1987), the elliptic problem contributes more than 90 per cent to the overall workload. By contrast, for recent ionic models involving a large number (20–60) of state variables and very stiff ODEs as well as Markov state models (Iyer et al. 2004; Saucerman et al. 2004; Cortassa et al. 2006; Flaim et al. 2007; Mahajan et al. 2008), the ODE solution may even dominate the computations.
The parabolic problem is typically less of a concern. On coarser meshes, simple forward Euler steps can be employed to update Vm. In this case, the contributions of the diffusional component (i.e. the PDE) and the local membrane component to changes in Vm can be updated separately, which renders the PDE linear. On finer grids, semi-implicit Crank–Nicolson schemes perform well. Even when relatively cheap iterative solvers are employed, the parabolic portion can be updated efficiently due to the diagonal dominance of the linear system.
For large systems, parallel computing approaches are necessary to reduce execution times. The parallel computing context alleviates the problem of solving the set of ODEs. State variables in an ionic model do not diffuse, which qualifies the ODEs as an embarrassingly parallel problem. No communication between processors is required to update the state variables and thus the parallel scaling of the ODE portion is linear. The parabolic problem is efficiently solved in parallel as well. Either only a forward Euler step is required (essentially a matrix–vector product, for which good scalability is expected), or the well-posed diagonally dominant linear system is solved efficiently with relatively cheap iterative methods, such as preconditioned CG. Typically, with an incomplete LU (ILU) preconditioner for the iterative CG solver, the parabolic problem can be solved in less than 10 iterations.
The elliptic PDE is the most challenging problem. Standard iterative solvers such as ILU–CG typically require several hundreds of iterations to converge (Plank et al. 2007), which makes this solution significantly more expensive than that of the parabolic system, although both systems share the same sparsity pattern. The parallel scaling of standard iterative solvers is fairly good. For instance, a parallel CG solver preconditioned with block Jacobi, using ILU(0) (an incomplete LU factorization with zero fill-in that preserves the sparsity pattern of the original matrix) as the sub-block preconditioner, achieved a parallel efficiency of approximately 92 per cent over the test range of up to 64 processors (Plank et al. 2007). With smaller numbers of processors, ILU(N) with N levels of fill-in tends to be more efficient; however, with an increasing number of processors, the efficiency of the preconditioning deteriorates since the preconditioner is applied only to the main diagonal block. This deterioration can be avoided by employing overlapping block preconditioners, such as alternating Schwarz methods; however, this increases the burden on the network interconnect owing to the increase in communication caused by the increased number of off-processor elements involved (Toselli & Widlund 2005). If the hardware lacks low-latency interconnects, this is undesirable since parallel efficiency levels off quickly with increasing number of processors.
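The gap between the cheap parabolic solve and the expensive elliptic solve can be reproduced with a toy experiment: unpreconditioned conjugate gradients applied to a one-dimensional Laplacian (elliptic-like, ill-conditioned) and to a shifted, diagonally dominant version of the same matrix (parabolic-like). Matrix size and tolerance are illustrative.

```python
import numpy as np

def cg(A, b, tol=1e-8, maxiter=10000):
    """Plain conjugate gradients; returns the solution and iteration count."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    nb = np.linalg.norm(b)
    it = 0
    while np.sqrt(rs) > tol * nb and it < maxiter:
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
        it += 1
    return x, it

N = 200
K = (np.diag(2.0 * np.ones(N)) -
     np.diag(np.ones(N - 1), 1) -
     np.diag(np.ones(N - 1), -1))   # 1-D Laplacian: ill-conditioned, "elliptic"
P = np.eye(N) + 0.2 * K             # shifted: diagonally dominant, "parabolic"

rng = np.random.default_rng(0)
b = rng.standard_normal(N)

_, it_elliptic = cg(K, b)
_, it_parabolic = cg(P, b)
print(it_parabolic, it_elliptic)    # parabolic-like solve needs far fewer iterations
```

Both matrices share the same sparsity pattern, yet the diagonally dominant system converges in a handful of iterations while the Laplacian needs orders of magnitude more, mirroring the behaviour described above.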
It has been demonstrated in several recent studies (Weber dos Santos et al. 2004a,b; Austin et al. 2006; Plank et al. 2007) that multilevel preconditioners for CG methods both significantly improve the overall performance and exhibit reasonable parallel efficiency (better than 80%) for up to 128 processors. A generally applicable algebraic multigrid preconditioner (AMG) in conjunction with an iterative Krylov solver reduces the number of iterations per solver step by almost two orders of magnitude compared with ILU–CG (Plank et al. 2007). Although a single iteration with AMG is significantly more expensive than with ILU, the reduction in the number of iterations clearly favours a multilevel approach. For instance, in Plank et al. (2007) a speedup of 6 was reported. Using AMG–CG constitutes to date the most efficient method for solving the elliptic portion of the bidomain equations. Besides its computational efficiency, a major advantage of the AMG approach is that it is applied straightforwardly to unstructured grids, avoiding the complex task of explicitly generating nested representations of the geometry at different levels of resolution, as required by the geometric versions of multilevel preconditioners. Unstructured grids allow smooth representations of the organ boundaries, which is of particular importance in simulations of cardiac defibrillation, where geometric discretizations with jagged boundaries inevitably evoke artificial surface polarizations. A more extensive review of numerical techniques to solve the cardiac bidomain equations can be found in Vigmond et al. (2008).
Substantial gains in computational efficiency can be achieved by explicitly taking into account the fact that high spatial and temporal resolution is needed only in the immediate vicinity of a propagating wavefront, where fast transitions in time are translated into steep gradients in space. Typically, steep wavefronts encompass a small portion of the overall domain, although this portion increases substantially when re-entrant activation patterns, particularly fibrillatory activity, are being modelled. Spatially adaptive methods that dynamically adjust grid resolution hold promise in increasing computational efficiency. Cherry et al. (2003) reported significant performance gains with an adaptive mesh refinement technique; however, the method, although very well suited for FDM and explicit stencils, is not easily modifiable to accommodate FEM or FVM discretizations and implicit stencils. A new class of methods loosely referred to as mortar FEM techniques (Pennacchio 2004) will possibly allow the use of FEM discretizations with spatial adaptivity; however, implementations using unstructured grids have not been reported yet.
Alternatively, simpler domain composition methods can be employed, where the region of interest is divided into several subdomains; each subdomain is then integrated at a different rate, the latter being a function of the ongoing activity in the domain itself and in the adjacent domains. However, such techniques introduce a significant bookkeeping overhead. Whether the reduction in floating point operations compensates for these extra costs, and to what degree, remains inconclusive. For instance, in Quan et al. (1998) a speedup of 17 was reported, whereas in Vigmond & Leon (1999) a speedup of only 3 could be achieved. Using different temporal resolutions on a per-domain basis inevitably leads to load balancing issues for codes executed in parallel. That is, even though in a particular domain computations are executed quickly, the slowest domain dominates the overall execution time. A possible but non-trivial solution to address this problem is to migrate vertices between parallel partitions, although to our knowledge there are no simulator codes available today that are capable of achieving this in a robust manner.
The dynamics of the processes that underlie the action potential in a cardiac cell are typically described by a set of ODEs. These are initial-value problems of the form
where y is a vector and f is a vector-valued function. The solution to the above equation is typically found as
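In standard notation, the initial-value problem and its formal solution (a reconstruction, numbered as in the article) read:

```latex
\begin{align}
\frac{\mathrm{d}\mathbf{y}}{\mathrm{d}t} &= \mathbf{f}(t,\mathbf{y}), \qquad \mathbf{y}(t_0)=\mathbf{y}_0, \tag{3.1}\\
\mathbf{y}(t+\Delta t) &= \mathbf{y}(t) + \int_{t}^{t+\Delta t}\mathbf{f}\bigl(\tau,\mathbf{y}(\tau)\bigr)\,\mathrm{d}\tau. \tag{3.2}
\end{align}
```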
Different approaches can be used to approximate the integral over the nonlinear vector function f. Here, all the components of the vector y are updated simultaneously, with all the components of f integrated at once. General-purpose packages incorporating a large set of standard integrators are currently available and allow a convenient integration in vector form (Cohen & Hindmarsh 1996). Standard integration techniques include explicit (or forward) and implicit (or backward) methods. Explicit methods are popular since they are easy to implement; however, the simplest of them, the forward Euler method, is only of first order, which often results in insufficient accuracy. Approaches to overcome this weakness include the use of several previously computed solutions (multistep methods) or additional intermediate solutions within the interval (Runge–Kutta methods) to update the current solution. More sophisticated implicit methods, such as the backward differentiation formulae (BDF) or implicit Runge–Kutta methods (e.g. Rosenbrock methods; Rosenbrock 1963), have superior stability properties and allow larger time-steps; however, they are computationally expensive and, in general, difficult to implement robustly. BDF methods require the iterative solution of a nonlinear system of equations at each step, by either fixed-point methods or variants of the Newton–Raphson method. In this case, Jacobian matrices have to be either analytically determined and repeatedly evaluated, or numerically approximated (Hairer & Wanner 2004). Many currently used integrators incorporate advanced features such as variable time-stepping and error control, where the time-step is chosen such that the local error per step remains below a prescribed tolerance.
For the particular purpose of integrating ODEs representing the myocardial action potential, standard methods have not been established and many different techniques are currently employed. Overall, although it is generally believed that implicit methods, owing to the stiffness of membrane kinetic variables, are advantageous and should be preferred over forward methods, the vast majority of implementations rely on forward methods.
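The stiffness argument is easy to demonstrate on a scalar test problem. The sketch below uses an illustrative linear ODE (not taken from any ionic model) and compares the forward and backward Euler methods at a time-step well above the explicit stability limit:

```python
# Stiff linear test ODE: dy/dt = -k (y - 1), with k = 1000 (a 1 ms relaxation
# time if t is in seconds). Forward Euler is stable only for dt < 2/k, whereas
# backward Euler is unconditionally stable. Illustrative sketch only.

k, dt, steps = 1000.0, 0.01, 50    # dt is 5x the explicit stability limit
y_exact = 1.0                       # steady state, approached from y(0) = 0

y_fe = 0.0
for _ in range(steps):
    y_fe = y_fe + dt * (-k * (y_fe - 1.0))        # forward Euler: diverges

y_be = 0.0
for _ in range(steps):
    y_be = (y_be + dt * k * 1.0) / (1.0 + dt * k)  # backward Euler: stable

print(abs(y_fe))            # huge
print(abs(y_be - y_exact))  # tiny
```

For this linear problem the backward Euler update can be solved in closed form; in a full ionic model the same step requires a Newton iteration, which is the cost referred to above.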
Although the use of standard integrators is convenient for single-cell action potential simulations, it becomes cumbersome when ionic model integrators are coupled with PDE solvers for calculation of the potentials at the tissue and organ level. Therefore, as described in Maclachlan et al. (2007), in the field of cardiac modelling, it is common to split the vector formulation in equation (3.1) into a set of equations, where each ODE is integrated separately. In this component form, a system of coupled ODEs with N state variables decouples into N ODEs of the form
where fi refers to the component i of the vector function f (Maclachlan et al. 2007). Any standard method discussed above is well suited for component-wise integration as well. The application of implicit BDF methods is easier with component-wise integration since only scalar Newton iterations are required.
Possibly the most popular component-wise integration approach was proposed by Rush & Larsen (1978), and it has been widely applied (Vigmond & Leon 1999; Vigmond et al. 2002; Ten Tusscher & Panfilov 2006b) ever since. When the set of ODEs is decoupled and integrated sequentially in a component form, each state variable yi is updated, with all other state variables yj (j≠i) held constant. Many of the ODEs comprising an ionic model, typically all gating equations in Hodgkin–Huxley-type models, but also ODEs in Markov state formulations, can be written in the form
where A and B depend on the remaining state variables, which are considered constant during an integration step. Hence, the nonlinear ODE reduces to a linear ODE with constant coefficients, for which an analytic solution can be found. With A and B being constant, the analytical solution is
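With A and B held constant over the step, the generic linear form and its exact update (reconstructed here, using the article's equation numbering) are:

```latex
\begin{align}
\frac{\mathrm{d}y_i}{\mathrm{d}t} &= A + B\,y_i, \tag{3.4}\\
y_i^{k+1} &= y_i^{k}\,e^{B\Delta t} + \frac{A}{B}\left(e^{B\Delta t}-1\right). \tag{3.5}
\end{align}
```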
This analytical solution is then used to compute the solution at the next time-step. Numerical analysis has revealed that the Rush–Larsen method offers significant stability and accuracy benefits over the forward Euler method (Maclachlan et al. 2007). Any portions fi of the vector function that cannot be written in a linear form need to be integrated in the fully nonlinear form. In the original Rush–Larsen formulation, the forward Euler method was applied to solve the fully nonlinear form. Therefore, the Rush–Larsen method has been classified as a non-standard FDM with forward Euler (NSFD w/FE; Maclachlan et al. 2007). Several suggestions have been made to further improve the method and overcome its weaknesses (Maclachlan et al. 2007). First, stability issues due to the use of the forward Euler method for the nonlinear components could be addressed by the use of implicit methods. Second, the accuracy of the method, originally first order, could be improved to second order by a more expensive scheme involving the computation of midpoints. Although it has been shown that such elaborate approaches yield additional benefits for some ionic models, for instance the Courtemanche model of the human atrial cell (Courtemanche et al. 1998), where stiffness properties are less critical, no performance benefit, or even a performance penalty, has been observed with models that have pronounced stiffness properties, such as the canine ventricular ionic model by Winslow et al. (1999).
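The practical difference is easy to see for a single Hodgkin–Huxley-type gate held at a fixed voltage, where the Rush–Larsen update is exact while the forward Euler method loses stability for Δt > 2τ. The gate parameters below are illustrative only:

```python
import math

# A gating variable obeys dy/dt = (y_inf - y)/tau. At fixed voltage, y_inf and
# tau are constants, so the Rush-Larsen update is exact, whereas forward Euler
# is unstable once dt exceeds 2*tau. Illustrative parameters only.

y_inf, tau = 0.8, 0.5     # dimensionless steady state, time constant (ms)
dt, steps = 1.5, 40       # time-step (ms): three times the forward Euler limit

a = math.exp(-dt / tau)   # exact per-step decay factor

y_rl = 0.0
y_fe = 0.0
for _ in range(steps):
    y_rl = y_inf + (y_rl - y_inf) * a          # Rush-Larsen: exact here
    y_fe = y_fe + dt * (y_inf - y_fe) / tau    # forward Euler: diverges

print(y_rl)       # settles at y_inf = 0.8, always within [0, 1]
print(abs(y_fe))  # grows without bound
```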
Regardless of the particular method, the expense in ionic model integration is in the evaluation of f(t, y), requiring the calculation of hundreds of computationally expensive expressions (mostly exponential and logarithmic functions). Many of these expressions depend on a single variable or they can be recast into products of terms that depend on a single variable only. The physiological range of these variables is typically well known a priori, thus it is convenient to precompute and tabulate the expensive terms and then simply look up their values when required. For instance, assuming that in equation (3.5) the terms A/B and e^{BΔt} depend only on one state variable yj and that no adaptive time-stepping scheme is employed, i.e. Δt is constant throughout the simulation, then these terms can be precomputed as functions of yj and used for table lookup (TLU). As shown in Vigmond et al. (2003), recasting equation (3.5) leads to a more efficient TLU scheme,
where one yj-dependent lookup operation is required to determine the values of the two tabulated coefficients, e^{BΔt} and (A/B)(e^{BΔt}−1). The solution of the ODE can then be updated with one multiplication and one sum operation only (see algorithm 3.1).
NSFD w/FE integration of the linear part of the vector function f using TLU.
The TLU, in its most efficient form, requires one multiplication and one typecast operation only (nearest-neighbour TLU) or two multiplications and two sum operations (TLU with interpolation). Furthermore, if data structures are chosen properly for lookup tables and state variables, excellent cache performance can be achieved with this method.
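A minimal sketch of such a scheme is given below. The voltage-dependent rate functions are made-up illustrative shapes standing in for a real model's y∞(V) and τ(V); the fixed time-step is a prerequisite for tabulation:

```python
import math

# Table lookup (TLU) for the Rush-Larsen gate update y <- a(V) + b(V)*y, with
# b(V) = exp(-dt/tau(V)) and a(V) = y_inf(V)*(1 - b(V)) precomputed over the
# physiological voltage range. The y_inf/tau shapes below are placeholders
# for a real model's rate functions (illustrative sketch only).

def y_inf(V):  # illustrative sigmoidal steady state
    return 1.0 / (1.0 + math.exp(-(V + 40.0) / 5.0))

def tau(V):    # illustrative voltage-dependent time constant (ms)
    return 1.0 + 4.0 * math.exp(-((V + 40.0) / 30.0) ** 2)

dt = 0.02                                  # ms, fixed time-step
V_min, V_max, dV = -100.0, 50.0, 0.05      # table range and resolution (mV)
n = int((V_max - V_min) / dV) + 1
tab_b = [math.exp(-dt / tau(V_min + i * dV)) for i in range(n)]
tab_a = [y_inf(V_min + i * dV) * (1.0 - tab_b[i]) for i in range(n)]

def update_tlu(y, V):
    """One typecast, one multiplication and one addition per update."""
    i = int((V - V_min) / dV)   # nearest-lower-neighbour lookup
    return tab_a[i] + tab_b[i] * y

def update_direct(y, V):
    """Reference update evaluating the exponential directly."""
    b = math.exp(-dt / tau(V))
    return y_inf(V) * (1.0 - b) + b * y

y, V = 0.3, -37.34
print(abs(update_tlu(y, V) - update_direct(y, V)))  # small table-resolution error
```

The residual error is governed by the table resolution dV; interpolated TLU trades the extra multiplications and sums mentioned above for a smaller error at the same table size.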
On a current standard desktop computer, single-cell ionic model simulations, no matter how complex the particular ionic model is, can be executed faster than real time and memory requirements are negligible. With regard to organ-level simulations, however, real time is orders of magnitude too slow for a satisfactory performance. For instance, assuming that a human heart is sufficiently finely discretized by 10 million vertices, with real-time performance the simulation of one second of activity would last for approximately 116 days. Using supercomputers with tens or hundreds of CPUs scales the simulation time down to a few hours. However, considering that this is only the ODE part of a tissue-level simulation and that only a time interval of one second is simulated, it becomes apparent that, even on supercomputers, real-time performance is much too slow to allow meaningful computational studies, where a large parameter space is explored and where simulations of several seconds to minutes of electrophysiological activity are desired.
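The back-of-the-envelope figures above follow directly from the real-time assumption, here with an assumed count of 500 CPUs and ideal linear scaling:

```python
# If one cell integrates in real time, 10 million nodes for 1 s of simulated
# activity cost ~1e7 CPU-seconds on a single core (illustrative arithmetic;
# the 500-CPU count and ideal scaling are assumptions for the estimate).
n_nodes = 10_000_000
cpu_seconds = n_nodes * 1.0                     # 1 s of simulated activity
days_serial = cpu_seconds / 86_400.0
hours_on_500_cpus = cpu_seconds / 500 / 3600.0  # ideal linear scaling

print(round(days_serial))        # ~116 days on one core
print(hours_on_500_cpus)         # a few hours on hundreds of cores
```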
Clearly, performance plays a key role in tissue/organ simulations whereas it is of minor importance for single-cell simulations. Therefore, ionic model implementations used in tissue-/organ-level simulations are typically not straightforward extensions of single-cell codes. Ionic state variables in tissue models become vectors, memory demands increase, and the flexibility needed to allow implementation of spatial heterogeneities in model parameters or the use of different ionic models with different parameter settings in one organ model requires a careful software engineering design.
At the tissue/organ level, there are specific issues that prevent concepts successfully applied at the single-cell level from being carried over to tissue simulations without major adjustments. These issues can be summarized as follows.
Although computationally efficient implementations of the NSFD w/FE method continue to be an excellent choice for many recent ionic models (Mahajan et al. 2008), for others, such as the ECME ionic model (Cortassa et al. 2006), their application becomes impracticable. For instance, with the fastest extended NSFD method implementation to date (Maclachlan et al. 2007), it took 25.77s to simulate 500ms of activity in a single cell using the Winslow et al. (1999) canine ventricular model (no specific detail regarding the implementation was provided). The same study showed that fully implicit methods indeed allow much larger time-steps, but the overall performance was not better and the accuracy was strikingly inferior to that of NSFD w/FE.
The reason why some recent models have imposed a much higher burden on the ODE integrators than others stems from the current trend to develop more realistic (and hence, predictive) mechanistic formulations. This has led to the incorporation of processes whose dynamics occur on a faster time-scale and require a large number of state variables to faithfully reproduce experimentally observed phenomena. This is the case for formulations where Hodgkin–Huxley-type ionic channel representations have been replaced by Markov process descriptions (Clancy & Rudy 1999). Taking this approach has led to the development of a series of integrative cardiac myocyte models that can explain the mechanisms underlying observations, such as action potential prolongation in heart failure (Winslow et al. 1999), the coexistence of high gain and graded sarcoplasmic reticulum Ca release (Greenstein & Winslow 2002), the role of beta-adrenergic stimulation on excitation–contraction coupling (Greenstein et al. 2004), and how properties of the cardiac dyad influence excitation–contraction coupling gain (Greenstein et al. 2006).
The wider range of time-scales captured in such ionic models increases the overall model stiffness primarily owing to the very fast time constants that govern intermediate states of these Markov state models. In a number of these ionic models (Jafri et al. 1998; Winslow et al. 1999; Cortassa et al. 2006; Greenstein et al. 2006), the fastest processes are linked to formulations describing the intrinsically fast dynamics of the L-type calcium channel and the ryanodine receptor, RyR, as well as the calcium dynamics in the dyadic space, i.e. the restricted space between the membrane of the T-tubules, where the L-type calcium channels reside predominantly, and the co-localized RyR in the membrane of the junctional sarcoplasmic reticulum. The positive feedback loop between the L-type calcium currents and the calcium release controlled by RyR is referred to as calcium-induced calcium release (CICR). Descriptions of CICR processes stiffen the ODE system considerably, as illustrated in figure 2. The time-scales governing CICR are often in the sub-microsecond range. Using the NSFD w/FE method enforces a time-step as small as 0.2μs to successfully integrate these equations.
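The effect of such fast rate constants on an explicit integrator can be illustrated with a minimal sketch (the rate constant below is a hypothetical stand-in for a fast Markov transition, not a value from any of the cited models): forward Euler applied to dy/dt = -k*y is stable only for Δt < 2/k, so a single sub-microsecond process dictates the time-step of the entire coupled system.

```python
k = 5.0e6            # 1/s, i.e. a time constant of 0.2 us (hypothetical)
dt_limit = 2.0 / k   # forward Euler stability bound: 0.4 us

def euler_decay(y0, dt, n_steps, k=k):
    # forward Euler on dy/dt = -k*y; amplification factor per step is (1 - k*dt)
    y = y0
    for _ in range(n_steps):
        y += dt * (-k * y)
    return y

stable   = euler_decay(1.0, 0.1e-6, 50)   # dt = 0.1 us: |1 - k*dt| = 0.5, decays
unstable = euler_decay(1.0, 1.0e-6, 50)   # dt = 1.0 us: |1 - k*dt| = 4, blows up
```

Note that the stability bound applies even though the exact solution is a harmless exponential decay; this is precisely why the fast CICR transitions force a 0.2μs step on the whole ODE system.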
To avoid issues with ODE integration that are related to the stiffness of the system of equations resulting from detailed descriptions of CICR, a simplified phenomenological description could be used. Such descriptions employ a simple set of equations that reproduce the profile of a typical calcium-release signal with the time course of the release ‘hard coded’ into the equations. However, because CICR is the process that is at the very core of myocyte contractile function, and because understanding how alterations in CICR impact integrative function in both normal and diseased cells is a fundamental goal of building mathematical models, the use of phenomenological descriptions of CICR or any subcellular process hinders progress towards elucidating the integrative mechanisms at play in the cardiac myocyte. In order to be considered predictive, ionic models must be developed that accurately capture the underlying biophysical mechanisms of experimentally observed phenomena.
In the following sections, we present a novel ODE integration scheme that builds on the NSFD w/FE method but overcomes its limitations to improve computational efficiency and thus enable the use of ionic models such as the ECME model in tissue- and organ-level simulations. The needed gains in efficiency are obtained by taking advantage of two key observations.
In this section, we present the general TMSD approach, which we later apply to the ECME model to provide a specific illustration of its implementation. The essence of the approach is that state variables belonging to slower groups are treated as constant while faster groups are integrated, as long as the difference in time-scales between the groups is sufficiently large. In applying the temporal multiscale decoupling approach, state variables that act on a similar time-scale are grouped together, and blockwise updates of the different groups are performed at different update frequencies without any appreciable impact on the overall solution. However, special care is taken with equations in which the component functions fi depend on fluxes. In this case, fluxes are integrated and adjusted to a particular update frequency when used outside of the group in which they were computed. Often such groups can be mapped onto subsystems of the cell, for instance the mitochondria in the ECME model (Cortassa et al. 2006). Temporal changes of state variables in this subsystem occur on time-scales that are at least three orders of magnitude slower than those of the RyR subsystem, and thus infrequent updates of the state variables in this subsystem can be executed without any loss of information. Since updates of slow variables require the same expensive evaluations of a group of functions fn of the vector function f, not updating all state variables at the same rate (figure 2) leads to significant savings in execution time.
The wide range of time-scales involved in these ionic models can be exploited in a systematic manner to arrive at more efficient integration schemes, where different components of the state vector are decoupled according to a temporal multiscale modelling approach. In the simplest case, the state vector y is split into M portions and each partial state vector ym is integrated either with the same time-step Δt, when fixed time-step integration schemes are employed, or if an adaptive integration method with a varying internal time-step is used, the integrator output is synchronized at a desired Δt. With respect to the ODE integration frameworks presented in the review above, each subvector can then be integrated either in the standard form or component-wise in the non-standard form. Such decoupling has several benefits.
A formal description of the TMSD method for three groups is provided in algorithm 4.1. Each group is integrated using a different time-step with every outer time-step being a multiple of the time-step of the innermost loop.
TMSD integration of a system of ODEs using time-steps Δt0, Δt1 and Δt2. Within each loop, state variables belonging to the slower groups are considered constant. Primed and double-primed fluxes J′ and J″ are used to update state variables integrated with a larger time-step in an outer loop.
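The nested-loop structure of algorithm 4.1 can be sketched as follows on a toy three-group linear exchange system (the rate constants and dynamics are illustrative, not taken from the ECME model; only the loop nesting and the flux bookkeeping mirror the algorithm):

```python
DT0, DT1, DT2 = 2.5e-6, 20.0e-6, 100.0e-6    # group time-steps (s)
N1, N2 = round(DT1 / DT0), round(DT2 / DT1)  # nesting ratios: 8 and 5
K_FM, K_MS = 1.0e4, 1.0e2                    # exchange rates (1/s), hypothetical

def tmsd_step(y_fast, y_mid, y_slow):
    """Advance all three groups by one outer time-step DT2."""
    J_ms_acc = 0.0
    for _ in range(N2):                      # middle loop, step DT1
        J_fm_acc = 0.0
        for _ in range(N1):                  # inner loop, step DT0
            J_fm = K_FM * (y_mid - y_fast)   # flux from middle into fast group
            y_fast += DT0 * J_fm             # y_mid and y_slow held constant
            J_fm_acc += DT0 * J_fm           # accumulate for the outer update
        J_ms = K_MS * (y_slow - y_mid)       # flux from slow into middle group
        y_mid += -J_fm_acc + DT1 * J_ms      # apply integrated fast flux J'
        J_ms_acc += DT1 * J_ms
    y_slow -= J_ms_acc                       # apply integrated middle flux J''
    return y_fast, y_mid, y_slow

y = (0.0, 1.0, 2.0)
for _ in range(1000):                        # 1000 outer steps = 0.1 s
    y = tmsd_step(*y)
# mass exchanged between the groups is conserved exactly by the flux
# bookkeeping, and all three groups relax towards the common equilibrium
```

Because the fast flux is accumulated over the inner loop and applied in integrated form to the middle group, no mass is created or lost at the group boundaries, which is the role of the primed fluxes in the algorithm.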
The main reason for the increased computational cost of the current state-of-the-art ionic models that use Markov chains is their stiffness and, to a much lesser extent, the large number of state variables. With Markov state models, the stiffness of the governing equations may vary substantially depending on the state vector y, since the Jacobian is not a constant but a function of y. That is, during a particular interval, a problem may be stiff, but it may be non-stiff during other intervals. For instance, with the ECME model, stiffness seems to be a problem only for a few microseconds during the onset and upstroke of the calcium transient in the dyad; other than that, the ECME model can be treated as other models are, with the minor difference that the number of state variables is two to three times higher than that of most standard ionic models. During the phases of pronounced stiffness, the integration time-step has to be reduced to values as small as 0.2μs, even when expensive implicit ODE integrators are employed (Maclachlan et al. 2007).
In other words, this problem can be circumvented neither with more elaborate ODE integrators nor with the TMSD approach, since the inner loop of the TMSD integrator is tied to the same small time-step. Depending on the number of Markov state variables relative to the overall size of the set of ODEs, the inner loop may impose a computational burden that renders organ-level applications computationally intractable or at least extremely expensive. In such cases, we suggest that the equations themselves be modified to reduce the stiffness of the system without changing the solution. This is again achieved by taking advantage of the separation of time-scales using the rationale as follows.
As part of this study, the suggested TMSD ODE numerical integration technique with dynamic reformulation of ODEs was implemented for the ECME model and validated against the well-established integrator used in the original paper (Cortassa et al. 2006); the latter relies on an implicit BDF method provided through the CVODE library (Cohen & Hindmarsh 1996). Our validated TMSD code used three different groups, which we integrated at time-steps of 2.5, 20 and 100μs. The smallest time-step was applied to integrate the equations related to the dyadic calcium concentration and the Markov state models representing the RyR and the L-type calcium current. Standard Hodgkin–Huxley-type gating variables and other ionic fluxes were integrated at a time-step of 20μs using the NSFD w/FE integrator. The very slow processes linked to mitochondrial metabolism and force generation were integrated with the largest time-step of 100μs.
Besides using the TMSD approach, dynamic reformulation was applied to the equations for the RyR and the dyadic calcium concentration, but not to the Markov state model representing the L-type calcium current. Details of the procedure are illustrated in figure 3. Formulations for the RyR subsystem were switched based on the transition rates governed by the dyadic calcium level. For dyadic calcium below a threshold level, the full-blown Markov state model of the RyR (formulation A, figure 3a) was employed and the ODE that governs the calcium flux balance in this compartment was integrated to update the dyadic calcium concentration. Since there are no fast transients in this regime, large time-steps could be implemented to speed up execution time. Fast transients in the RyR states are triggered by the quick rise in dyadic calcium. As soon as the dyadic calcium concentration rose above the threshold level, rapid equilibrium assumptions were made, and a simplified model was employed to update the RyR states (formulation B, figure 3b). A combined state substituted the two individual states connected by the fast transition, with the probability of each original state relative to the new combined state determined by a partition coefficient α computed from the fast forward and backward transition rates. The threshold value for switching between the formulations was chosen in such a way that the rate coefficients governing the fast transitions were at least 10 times faster than all other processes. Specifically, we used a threshold rate of 100ms−1, which translated into a threshold dyadic calcium concentration of 0.095mM.
When switching to formulation B, further numerical stability benefits were gained by replacing, with an algebraic equation, the ODE that governs the dyadic calcium transient as a function of the fluxes into and out of the dyad. This was accomplished by assuming that the dyadic calcium concentration equilibrates rapidly, i.e. the Ca2+ influx into the dyadic space and the Ca2+ efflux out of it are equal. Setting the net flux to zero in this way reduces the ODE governing the dyadic calcium transient to an algebraic equation. This assumption is well justified because Ca2+ diffusion out of the dyad occurs far more rapidly than the L-type Ca2+ channel and RyR gating processes (Hinch et al. 2004; Greenstein et al. 2006). As evidenced by figure 3c, this modification does not lead to any differences in results (compare traces A and B in figure 3c), but allowed a much larger time-step of 2.5μs; otherwise, for stability reasons, the integrator was forced to a small time-step of 0.2μs. However, for dyadic calcium concentrations close to the base level, employing the algebraic equation results in small deviations. In this case, the onset of the calcium transient occurs approximately 5μs earlier than in the original model (compare traces A and B with B* in figure 3c).
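The rapid-equilibrium reduction described above can be sketched in miniature (the flux forms and parameter values below are hypothetical; only the substitution of the algebraic steady state for the stiff ODE mirrors the procedure):

```python
# Stiff ODE for a dyad-like compartment:
#     dCa_d/dt = (J_in - J_out) / V_d,   with J_out = K_EFF * (Ca_d - Ca_i).
# When efflux is much faster than the gating processes feeding J_in, the ODE
# can be replaced by the algebraic steady state J_in = J_out.

K_EFF = 5.0e3    # efflux rate (1/s), hypothetical
V_D = 1.0        # compartment volume, normalized

def ca_dyad_ode_step(ca_d, ca_i, j_in, dt):
    # explicit integration of the stiff ODE: stability requires dt << 1/K_EFF
    return ca_d + dt * (j_in - K_EFF * (ca_d - ca_i)) / V_D

def ca_dyad_algebraic(ca_i, j_in):
    # rapid-equilibrium reduction: solve J_in = K_EFF * (Ca_d - Ca_i) for Ca_d
    return ca_i + j_in / K_EFF

# with constant forcing, the ODE relaxes onto the algebraic solution
ca_d, ca_i, j_in = 0.1, 0.1, 50.0
for _ in range(10000):
    ca_d = ca_dyad_ode_step(ca_d, ca_i, j_in, dt=2.0e-5)
# ca_d and ca_dyad_algebraic(ca_i, j_in) now agree
```

The algebraic form removes the fast eigenvalue from the system entirely, which is why the larger 2.5μs step becomes stable; the small onset error near the base level arises because the true transient takes a few microseconds to reach this equilibrium.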
Finally, longer single-cell simulations, over 1min, were executed to ensure the long-term stability of the scheme. It was established, for all 50 state variables of the model, that the results were indiscernible from those obtained with the reference implementation.
The gain in execution speed obtained by using TMSD with the dynamic reformulation of several ODEs versus the classical NSFD w/FE scheme turned out to be at least two orders of magnitude. Although the exact speedup depended on several factors, such as platform, compiler, optimization and implementation details, differences in execution time between a straightforward NSFD w/FE implementation of the ECME model, using an integration time-step of 0.2μs, and the TMSD integrator with dynamic reformulation reached a factor of up to 285. This is mainly attributed to gains in the maximum stable time-step size that can be used to integrate the RyR state equations, while less frequent updates of state variables related to the metabolic state and force generation helped to reduce execution times further by reducing the number of expensive evaluations of the function f.
To assess to what degree the gain in efficiency in the ECME model was sufficient to allow simulations at the tissue and organ levels, we carried out benchmark simulations in tissue consisting of one million cells. The benchmark was executed without coupling the cells, i.e. the parabolic PDE was not solved and no spatial domain was discretized. This problem size was chosen (i) by considering that one million degrees of freedom suffice to discretize any small mammalian heart up to the size of a rabbit at a sufficiently fine spatial resolution (approx. 200μm) and (ii) to ensure that realistic organ-level simulation is tested where cache performance will play an important role.
Execution times to simulate one second of activity were measured on a standard desktop computer with four CPUs for the ECME model and several other widely used models, from a simple ventricular model with seven state variables, the MBRDR model (Drouhard & Roberge 1987), through the Courtemanche model of the human atrial action potential (Courtemanche et al. 1998) and the Luo–Rudy guinea-pig ventricular model in its latest revision (Faber & Rudy 2000; Luo & Rudy 2006), to the most recent Markov state model of the rabbit ventricular cell by Mahajan et al. (2008). For all models other than the ECME, we used the NSFD w/FE method with the acceleration techniques described in the review portion of this article. Although some of these models could be integrated with significantly larger time-steps in single-cell mode, all models were integrated with a time-step of 20μs, assuming that this value is a constraint imposed by the CFL condition of the parabolic PDE.
Benchmark results are summarized in table 1. Even with all the optimizations implemented, the ECME model is, by a wide margin, the computationally most expensive of all the models in this benchmark. Nonetheless, the performance is sufficient to investigate phenomena occurring on the time-scale of several seconds to minutes when simulations are executed on a supercomputer. Since the ODE computations scale linearly on a parallel computer, the time spent on solving the ODEs can be scaled down to execute the computations within a reasonable time. For instance, using 512 processors, the execution time will be reduced by a factor of 128 relative to this benchmark. That is, 1s of activity can be simulated in 2.34min, and 1min of activity would be executed in 140min.
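The scaling estimate follows from simple arithmetic (ideal linear scaling of the ODE work is the stated assumption):

```python
# The benchmark ran on 4 CPUs; moving to 512 processors under ideal linear
# scaling reduces the ODE execution time by 512 / 4 = 128x.
benchmark_cpus, target_cpus = 4, 512
speedup = target_cpus / benchmark_cpus
print(speedup)                                  # 128.0

# 2.34 min per simulated second then implies the figure for one minute:
minutes_per_simulated_second = 2.34
minutes_per_simulated_minute = minutes_per_simulated_second * 60
print(round(minutes_per_simulated_minute, 1))   # 140.4, i.e. ~140 min
```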
Finally, we coupled the ECME model with the implemented TMSD ODE integration scheme with dynamic reformulation to tissue-/organ-level bidomain codes to conduct true organ-level simulations. A guinea-pig-size ventricular geometry model, which represents the rabbit ventricular geometry used in our previous studies (Trayanova et al. 2002; Rodriguez et al. 2005; Ashihara et al. 2008) scaled to half its size, was chosen to facilitate the execution of the test on a standard desktop computer with reasonable performance. The discretization of the myocardial volume at an average element size of approximately 200μm led to approximately 75 000 degrees of freedom. The guinea-pig ventricles were paced at the apex to initiate a simple activation sequence (figure 4). Using a current desktop computer (Dual Opteron X2), the simulation of 1ms of activity was executed in approximately 1.2s for a monodomain formulation and in approximately 5s for a bidomain formulation. That is, in 1 hour of execution time on a standard desktop computer, approximately 3s of activity could be simulated with a monodomain formulation and 0.72s with a bidomain formulation. These numbers clearly demonstrate the potential of the new methods presented in this study. Again, when parallel computers are employed to execute simulations, much longer observation periods, of the order of tens of seconds to minutes, can be studied with ionic models as complex as the ECME model within a reasonable time.
This article has presented a review of various numerical techniques to model electrical activity in the heart, spanning the spatial scales from subcellular mechanisms to the entire organ. Furthermore, a new version of the NSFD w/FE method was developed, which takes advantage of the very different time-scales of the processes represented in recent ionic models of very high complexity, such as the ECME model, to obtain substantial increases in computational efficiency. In Maclachlan et al. (2007), it was reported that the simulation of 500ms activity in a single cell modelled by the Winslow et al. (1999) canine ventricular model lasted for 25s. The ECME model used in this study employed the same set of equations to represent the L-type calcium current and RyR subsystem, which are by far the computationally most expensive components of the model. Although direct comparison is difficult, the ECME model can be judged as computationally more expensive than the canine model by Winslow et al. The performance we obtained in this study for the ECME model translates into 36ms to simulate 500ms activity in a single cell and, thus, is significantly more efficient than the standard NSFD w/FE method.
This is the first attempt at an implementation based on the TMSD ODE integration method with dynamic reformulation of ODEs. Although the performance is satisfactory and opens new avenues to employ highly detailed cell models in organ-level simulations, this is a first step only and further significant performance improvements can be expected. The decomposition of the set of ODEs into groups and the maximum possible time-step for each group cannot be determined automatically and need manual tuning. In general, this is a significant disadvantage of the TMSD technique, since intimate knowledge of the model is required to tune the method for best performance. Decoupling an ionic model into different subsystems requires a non-trivial effort and careful examination of the maximum rates of change of different state variables. The effort to develop ionic models based on the suggested methods is further complicated by the fact that at least two implementations of an ionic model are required: a very accurate reference implementation, such as an implicit BDF integrator in the standard form, and the TMSD implementation itself. At the current stage of development, the method is sufficiently efficient to allow parameter sweeps in organ modelling studies of small mammalian hearts, even when high-performance computing (HPC) environments are not available. To simulate larger mammalian hearts, or for questions that require the use of the bidomain formulation, the use of HPC hardware is mandatory. Today, with the availability of HPC environments, there are virtually no limitations in using highly detailed Markov chain ionic models, and the testing of various simulation scenarios as well as explorations of parameter spaces indeed appear feasible.
This work was supported by Marie Curie Fellowship MC-OIF 040190 funded by the European Commission and SFB grant F3210-N18 from the Austrian Science Fund (FWF) to G.P.; NIH grants R37-HL54598, R33-HL87345 and P01-HL081427 to B.O'R.; NIH grants R33 HL87345, N01-HV28180, PO1 HL081427 and PO1-HL077180 to R.L.W.; and NIH grants R01-HL063195, R01-HL082729 and R01-HL067322 and NSF grant CBET-0601935 to N.A.T.
One contribution of 11 to a Theme Issue ‘The virtual physiological human: building a framework for computational biomedicine II’.