Qualitative analysis of a proto-model
Analysis of sensitivity with respect to initial species concentrations provides a criterion for the qualitative correctness of a cell signaling model. Sensitivity analysis assesses how changes in model inputs contribute to model output variability, and its ability to deduce model input-output relationships makes sensitivity analysis one of the critical parts of model development, verification, and evaluation. Changes in initial species concentrations can mimic the effects of mutations or changes in the expression level of the molecular players involved, and the sensitivity of the model output to changes in initial species concentrations should match the expected change in system behavior.
The simplest and most widely used sensitivity analysis method is the gradient-based index, Sij = ∂yi/∂pj, where the model outputs and inputs are represented as yi and pj, respectively. This method is often called local sensitivity because it accurately reflects output variability only near a given nominal input value, pj*. However, most kinetic parameters are quite uncertain, and a range rather than a single parameter value is available, either from the literature or from biophysical constraints on the reactions. Thus, "model-independent", or more precisely parameter-independent, global sensitivity analysis techniques have generated great interest [10]. Averaging local sensitivities over a range of plausible values for the uncertain parameters is one possible approach to global sensitivity. Local sensitivities are calculated for multiple parameter choices, selected randomly or regularly within the parameter ranges. The sensitivities for those parameter choices, integrated over the time interval of interest in the monitoring of the output, ∫|Sij|dt, are then averaged to determine the global sensitivity [17].
]. Importantly, because integration over time and averaging are necessary, a compromise must be made between accurately calculating the magnitude of the effects by using absolute sensitivity values, and assessing the directionality of the effects, by maintaining the sign of the values.
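As a concrete illustration, the averaging procedure described above can be sketched as follows. The model, parameter ranges, and sample count here are hypothetical placeholders rather than those of the TRAIL model; the sketch only shows the mechanics of sampling parameter sets, computing finite-difference local sensitivities along a time course, integrating their absolute values over time, and averaging.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model with one output and two uncertain parameters:
# y(t; p) = p1 * exp(-p2 * t).  (Stands in for an ODE model's time course.)
def model(t, p):
    return p[0] * np.exp(-p[1] * t)

t = np.linspace(0.0, 10.0, 201)
dt = t[1] - t[0]

def local_sensitivities(p, rel_step=1e-4):
    """Central finite-difference local sensitivities dy/dp_j along the time course."""
    S = []
    for j in range(len(p)):
        h = rel_step * p[j]
        up, dn = p.copy(), p.copy()
        up[j] += h
        dn[j] -= h
        S.append((model(t, up) - model(t, dn)) / (2.0 * h))
    return np.array(S)                      # shape (n_params, n_times)

# Plausible ranges for the uncertain parameters (assumed for illustration).
lo, hi = np.array([0.5, 0.1]), np.array([2.0, 1.0])

n_samples = 500
G = np.zeros(2)
for _ in range(n_samples):
    p = rng.uniform(lo, hi)
    # Integrate |S_ij| over the monitored time interval, then accumulate.
    G += np.abs(local_sensitivities(p)).sum(axis=1) * dt
G /= n_samples                              # averaged global sensitivity index

print({name: round(float(g), 3) for name, g in zip(["p1", "p2"], G)})
```

Note that taking absolute values inside the time integral measures the magnitude of each effect; keeping the signed values instead would preserve directionality, which is the compromise discussed above.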
The model on which we applied our methodology simulates the response of a single cell to TRAIL, a protein ligand that triggers the process of programmed cell death, or apoptosis. This model of the TRAIL-induced cell death signaling network encompasses the activation of initiator (caspase-8 or C8) and effector (caspase-3 or C3) caspases, the onset of mitochondrial outer membrane permeabilization, and the death of the cell, as marked by cleavage of the caspase-3 substrate, PARP. According to a recent study [7], extrinsic apoptosis shows a specific behavior of all-or-none effector caspase activation at the single-cell level. As the authors termed it, the process shows "variable-delay, snap-action" kinetics: a long, variable delay between TRAIL stimulation and effector caspase activation is followed by rapid and sudden progression to completion. The original model is composed of 58 ordinary differential equations based on mass-action kinetics. Eighteen of the 58 protein species have non-zero initial concentrations, and 70 rate constants regulate the reactions in the model network. The original parameters were determined from the literature and by manual fitting. In this study, we applied our methods to analyze qualitative properties of the model and to fit the model to dynamic quantitative experimental data in a systematic and computationally efficient way. Hereafter, the original model in [7] will be referred to as the manually calibrated model, to distinguish it from our improved model.
In our analysis, cleavage of PARP is the key output; because the process is all-or-none, if >50% of PARP is cleaved, it is eventually all cleaved and thus a simulated cell is deemed dead at 50% cleaved PARP (see Methods). We first evaluated sensitivities of the cleavage of PARP with respect to changes in initial species concentrations, sampling over a range of plausible parameter values (range described in Methods). In this case, instead of averaging the sensitivities over the sampled range of parameter values, we plotted their distributions in a box plot, to preserve the directionality of the effect in the sign of the sensitivities (Figure ). We interpreted the results in three ways. First, proteins with positive sensitivity would promote PARP cleavage and thus were pro-apoptotic, and by corollary, proteins with negative sensitivity would repress PARP cleavage and had an anti-apoptotic effect. The pro- or anti-apoptotic nature of TRAIL-induced signaling proteins has been identified in the literature and should be encoded properly in the mathematical model. For instance, XIAP (an inhibitor of caspase-3) is a well known anti-apoptotic player, and the sensitivity of PARP cleavage with respect to XIAP was correctly shown to be negative (Figure ). Conversely, Smac, when released from the mitochondria, inhibits XIAP and thus the positive sign of sensitivity with respect to the mitochondrial store of Smac, Smacm, agrees with its pro-apoptotic nature (Figure ). By similarly assessing the sign of the sensitivity of each protein, the TRAIL-induced cell death proto-model could be validated.
Distribution of sensitivity of PARP cleavage. Each box plot shows the distribution of the sensitivity of PARP cleavage with respect to changes in non-zero initial species concentrations, as determined by averaging local gradient-based sensitivities.
Second, the absolute value of the sensitivity provides a measure of how strongly the perturbation of a single species' concentration affects the model output. The sensitivity with respect to perturbation of XIAP was found to be relatively high on average, implying that the model output can be changed dramatically by small changes in XIAP (Figure ). This prediction is supported by biological evidence that XIAP directly inhibits the enzymatic activity of caspases and that the degree of inhibition is highly dependent on the concentration of XIAP [18]. Cleavage of PARP is insensitive to the concentration of caspase-6 (C6), in agreement with experiments in which reducing the expression of caspase-6 by ~90% did not affect TRAIL-induced cell death (Figure ; [7]). Overall, our sensitivity analysis agreed with the known effects of varying protein concentrations. If, however, the signs or strengths of the sensitivities in our analysis had not agreed with experimental results, the model construction would have had to be re-examined. Modification of the model and this qualitative analysis would be done iteratively until a satisfactory result could be reached. Although the TRAIL model study does not provide us with an example of failure of qualitative agreement at this step of the procedure, it is still worth noting that qualitative agreement with known experimental system behavior can be a strong preliminary criterion for adequacy of the model structure. In effect, it sets a minimal qualification that must be met before more computationally intensive methods are applied to improve the proto-model by quantitative fitting.
In a third type of assessment of the results of our sensitivity analysis of PARP cleavage, we analyzed the influence of the uncertainty of the rate constants on the sensitivity with respect to initial species concentrations. Sensitivities that are not affected by parameter values will have narrow distributions, and consequently their sensitivity values are very reliable. The sensitivities related to the perturbation of some species, such as XIAP and caspase-8, were found to be broadly distributed and thus relatively uncertain (Figure ). Particularly interesting is the fact that the sensitivity of PARP cleavage to caspase-8 is negative in some cases, even though caspase-8 is known to have a pro-apoptotic function, invalidating those parameter sets.
Dominant parameter selection
When global sensitivities are determined by averaging local sensitivities, as we did above, no assumptions are made about the relationships between input parameters and output variables, so this method is applicable to most nonlinear and non-monotonic problems. For the model of TRAIL-induced apoptosis, the significance of each rate parameter to the total model output variation can be identified by global sensitivity analysis in a matrix of 70 parameters (inputs) by 58 variables (molecular species, here the outputs) (Figure ). The height of each bar represents the global parameter sensitivity of the corresponding species concentration with respect to changes in the reaction rate constant, or parameter. We observed that certain rate constants, such as p(1), the rate of complex formation between TRAIL and free, inactive receptor, p(29), the rate of dissociation of TRAIL and receptor, or p(57), the rate of dissociation of the activated TRAIL-receptor complex, can influence most protein concentration outputs. Therefore, some of the parameters involved in the reactions for activating the receptor complex are critical to the quantitative description of most downstream molecular species. Meanwhile, other parameters have nearly zero sensitivity and thus do not affect any species concentration. Two of these parameters correspond to the reactions for dissociating the complex of active caspase-8 and inactive caspase-3 (p(33)), and the complex of cytochrome c and the mitochondrial pores (p(48)). While these parameters will be difficult to constrain with any time-series data, our analysis shows that their values should not impact model behavior. For TRAIL-induced apoptosis, the experimental data to which we aim to fit the model describe the cleavage of PARP, which marks the activation of caspase-3 and cell death. Therefore, we compared the 70 rate parameters based on the sensitivity of cleaved PARP (Figure , top) and observed that eight parameters had a large impact (Table ).
Figure 3 Global sensitivity matrix of TRAIL-induced cell death model. The height of bars represents the global sensitivity for all 58 model outputs with respect to change in 70 kinetic reaction rate constants, as determined by average of local gradient-based sensitivities.
Figure 4 Comparison of different global sensitivity algorithms. Three different sensitivities, which are the average of local sensitivities, Sobol's first-order sensitivity, and Sobol's total-effect sensitivity, are compared. (a) Bar graph showing the global sensitivity …
Comparison of results from different global sensitivity indices
Importantly, there are often biologically meaningful quantities of interest for which partial derivatives cannot be defined, and these may be the outputs for which the dominant parameters need to be identified. For example, for TRAIL-induced apoptosis we can define biologically meaningful features of the dynamic behavior of cell death. One example is the delay time (tdelay) that measures how long it takes from the time of TRAIL addition to the time at which 50% of PARP is cleaved. Another is the switching time (tswitch), which measures the rapidity of PARP cleavage after caspase-3 (C3) activation. These features are variables that are discontinuous with respect to input parameter variation, and to determine the dominant parameters in controlling tdelay, we therefore explored other sensitivity analysis methods to replace gradient-based sensitivity analysis.
Variance-based sensitivity methods form another category of global sensitivity analysis. In these methods, the variance of a model output is decomposed into partial variances contributed by individual model input variations, and the sensitivity indices are derived from the ratio of each partial variance to the total variance of the model output. Among the several variance-based sensitivity methods, we adopted Sobol's method [11] to analyze the TRAIL-induced apoptosis model. Sobol's method generates two kinds of sensitivity indices. One is the first-order sensitivity, which measures the fractional contribution of a single input to the variance of the output, neglecting any interactions with other model inputs by maintaining them at constant values. The other, a true global sensitivity, is the total-effect sensitivity: the sum of all the sensitivities involving the model input of interest over the full range of parameter values explored. These two sensitivity indices were computed simultaneously by a Monte Carlo method, and the results are summarized in Figure . The computational cost of sensitivity analysis varies widely by method, as shown in Table . Sobol's method requires more computation (100,000 cases of randomly selected parameter sets) to satisfy the convergence of the Monte Carlo approximation, while the average-of-local-sensitivities method converges with 2,000 sets of parameter values.
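The two Sobol indices can be estimated with standard Monte Carlo estimators. The sketch below uses a deliberately simple additive toy model (not the apoptosis model) whose analytical indices are known (S1 = 0.8 and 0.2 for the two inputs), so the estimators can be checked; the matrix-recombination scheme and estimator formulas follow the usual Saltelli-style construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model with known variance decomposition (hypothetical): y = 2*x1 + x2,
# with x1, x2 ~ U(0, 1).  Analytically, S1 = (4/12)/(5/12) = 0.8 and 0.2.
def model(X):
    return 2.0 * X[:, 0] + X[:, 1]

d, N = 2, 100_000
A = rng.uniform(size=(N, d))      # two independent sample matrices
B = rng.uniform(size=(N, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

S1, ST = np.zeros(d), np.zeros(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]           # matrix A with column i taken from B
    fABi = model(ABi)
    # Saltelli-style Monte Carlo estimators:
    S1[i] = np.mean(fB * (fABi - fA)) / var        # first-order index
    ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # total-effect index

print("S1 =", S1.round(3), "ST =", ST.round(3))
```

Because the toy model is purely additive, the first-order and total-effect indices coincide; for a model with interactions, such as the apoptosis network, ST would exceed S1 for the interacting inputs.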
To determine which parameters dominate the control of PARP cleavage dynamics and tdelay, the model parameters were ranked from highest to lowest sensitivity amplitude (Figure ), and the eight most dominant parameters from each of the three sensitivity indices are listed in Table . The parameters that are commonly selected by all three methods are bolded, and those selected by two are underlined; the nomenclature of the parameters follows that of Albeck et al. [7]. For example, k9, the forward rate constant of PARP cleavage by caspase-3, is ranked within the eight dominant parameters by all three sensitivity indices. k3, relevant to caspase-8 activation, and the rate constant for death ligand binding to the receptor are also dominant by all three methods. Even though all the reactions in the network play a role in cell death signaling, the sets of reaction rate constants listed in Table were identified as the most critical in regulating the dynamics of PARP cleavage. This prediction, that reactions relevant to caspase-8 activation are critical in regulating the delay time to death, was arrived at by our computational sensitivity analysis, but, importantly, it is supported by experimental evidence: the reactions involved in caspase-8 substrate cleavage strongly influence tdelay [7].
Once the ranking of parameters has been determined, the next question is how many parameters to target during calibration to accurately capture network behavior. While there are no general and definitive criteria, it should be noted that estimating too many parameters increases the number of degrees of freedom and the probability that inadequate local optima are detected. On the other hand, choosing too few parameters decreases fitting performance as well as the reliability of the optimal solution. To address this trade-off, we used the ranked parameters to determine the optimal cut-off for the calibration of the model of TRAIL-induced cell death. In Figure , the 70 parameters on the x-axis were ordered by their ranking number (determined from Figure ). We observed that the sensitivities dropped off sharply after a few steps: the ranked sensitivities generated L-shaped curves. For all three sensitivity algorithms, the parameter with the 8th highest sensitivity lay approximately at the corner between the vertical and horizontal parts of the curve. This analysis suggested that, for this particular model, the eight most sensitive parameters can cover much of the variation in PARP cleavage and should be sufficient to include in model calibration.
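One simple way to locate the corner of such an L-shaped curve is to look for the largest drop between consecutive ranked sensitivities. The sensitivity values below are invented for illustration; only the cutoff heuristic itself is the point.

```python
import numpy as np

# Hypothetical ranked global sensitivities for 15 rate constants: a few
# dominant values followed by a long, nearly flat tail (an L-shaped curve).
sens = np.array([9.1, 7.4, 5.2, 4.8, 3.9, 3.1, 2.6, 2.2,
                 0.15, 0.12, 0.1, 0.08, 0.05, 0.02, 0.01])
ranked = np.sort(sens)[::-1]

# Cutoff heuristic: break at the largest ratio between consecutive ranked
# sensitivities, i.e. at the "corner" of the L.
ratios = ranked[:-1] / ranked[1:]
cutoff = int(np.argmax(ratios)) + 1   # number of parameters to keep

print(f"keep top {cutoff} parameters for calibration")
# → keep top 8 parameters for calibration
```

Other cutoff rules (e.g., keeping enough parameters to cover a fixed fraction of the total sensitivity) are equally possible; for a curve with a sharp corner they will tend to agree.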
Parameter estimation by global optimization
Most models of biological processes are non-linear, and thus model parameter estimation is a complex problem that can have multiple solutions. To avoid potentially poor decisions caused by identification of local optima, it is essential to search for the global solution. Global optimization methods are roughly categorized into deterministic and stochastic approaches. A conceptual illustration of these two approaches is given in Figure . Here, the 2-dimensional parameter space of two rate constants (k8 and k9, in this example) was explored. As the contour of the objective function shows, there exists a valley-shaped optimum in the lower part of the parameter space. It is interesting that this characteristic contour of the objective function is relevant to the discussion of dominant parameters in the sensitivity analyses. The parameter k8 was ranked among the eight most influential by only one type of sensitivity analysis (average of local sensitivities), while k9 was ranked by all three (Table ). It is thus expected that perturbations of k9 affect the model output more strongly than changes in k8 do. The valley-shaped contour of the objective function in the k8 vs. k9 parameter space indeed supports this idea, because the slope is much steeper along the k9 axis than along the k8 axis.
Figure 5 Deterministic and stochastic sampling of 2D parameter space. Plots show the initial and final parameter value combinations on the parameter space for k8 and k9 during global optimization process by (a) deterministic multi-start method and (b) evolutionary …
Among the various approaches to global parameter estimation, the simplest is the deterministic multi-start method, in which a large number of local estimations start from different initial parameter combinations (Figure ; red circles). The logarithmic space of parameters is divided uniformly into a grid, and a deterministic local estimation starts from every grid point, comparing the fit of nearby points. Because the entire parameter space is explored, the likelihood of finding the global optimum is high, as long as the grid samples the space sufficiently well. In Figure , parameter sets starting from the initial grid points converge to points aligned along the valley after the local estimations have terminated. However, the computational load increases exponentially with the number of parameters, as dimensions are added to the sampling grid. To overcome this difficulty, random sampling in a Latin hypercube of parameter space [20] or parallel computing with cluster processors could be utilized.
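A minimal sketch of the multi-start idea follows, using a toy two-parameter objective (Himmelblau's function, which has several minima) in place of the model-vs-data objective, and a simple compass (pattern) search standing in for a production local optimizer:

```python
import itertools
import numpy as np

# Toy objective with multiple minima (a hypothetical stand-in for the
# model-vs-data objective in a k8/k9-like 2D space): Himmelblau's function.
def f(x):
    return (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2

def local_search(x0, step=1.0, tol=1e-8):
    """Simple compass search: probe +-step along each axis, shrink the step
    whenever no axis move improves the objective."""
    x, fx = np.array(x0, float), f(x0)
    while step > tol:
        moved = False
        for d in (np.array([1, 0]), np.array([-1, 0]),
                  np.array([0, 1]), np.array([0, -1])):
            cand = x + step * d
            if f(cand) < fx:
                x, fx, moved = cand, f(cand), True
        if not moved:
            step *= 0.5
    return x, fx

# Multi-start: launch a local search from every point of a uniform grid.
grid = np.linspace(-5, 5, 7)
results = [local_search((a, b)) for a, b in itertools.product(grid, grid)]
best_x, best_f = min(results, key=lambda r: r[1])
print("best objective found:", best_f)
```

The 7x7 grid here already costs 49 local searches; with eight parameters and three grid points per axis, as in the TRAIL study, the count is 3^8 = 6561, which is why the dimensionality of the grid dominates the cost.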
Stochastic methods, on the other hand, can find the global solution with relatively less computational effort. These methods start with parameter values that are randomly sampled in parameter space and, according to a set of rules, explore new solutions in the neighborhood of the current point, repeating until no further improvement of fit is found. Genetic algorithms and simulated annealing are well-known examples of stochastic methods [21]. In a comparative study of various optimization methods, the Stochastic Ranking Evolutionary Strategy (SRES) showed the best performance [15]. In SRES, a "population" composed of randomly selected "elements", or sets of parameter values, is generated. The elements are ranked by their fit to the data using a bubble-sort procedure [22]. Only highly ranked elements are retained as ancestors for the next generation, which are used to probabilistically produce a new population of random elements with a better fit, on average. The source code of SRES is available in the public domain [23].
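The selection-mutation loop of an evolution strategy can be sketched as below. This is a deliberately stripped-down (mu, lambda) strategy on a toy objective: it ranks purely by objective value, whereas real SRES uses a stochastic bubble-sort ranking that also balances constraint violations; the population sizes echo those used later in this study but the objective is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for the model-vs-data objective: the sphere function.
def objective(p):
    return float(np.sum(p**2))

# Simplified (mu, lambda) evolution strategy: rank the population, keep the
# best elements as parents, and mutate them to produce the next generation.
n_dim, pop_size, n_parents, n_gen, sigma = 8, 200, 30, 60, 0.5
pop = rng.uniform(-5, 5, size=(pop_size, n_dim))   # initial random elements

for gen in range(n_gen):
    order = np.argsort([objective(p) for p in pop])
    parents = pop[order[:n_parents]]               # best-ranked elements
    idx = rng.integers(0, n_parents, size=pop_size)
    pop = parents[idx] + rng.normal(0, sigma, size=(pop_size, n_dim))
    sigma *= 0.97                                  # slowly tighten the search

best = min(objective(p) for p in pop)
print("best objective after evolution:", round(best, 4))
```

Note the characteristic behavior discussed in the text: early generations improve quickly, but once the population has collapsed around an optimum, further generations refine the fit only slowly, which motivates handing off to a local method.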
For the model of TRAIL-induced cell death, we compared the performances of the deterministic multi-start method and SRES in a global optimization of the eight most dominant parameters identified by the average of local sensitivities. For the multi-start method, local estimations started from the lower bound, middle value, and upper bound of the range of each parameter, so that the total number of cases was 6561 (= 3^8). Out of the 6561 local estimations, 6550 successfully detected their adjacent optimum solutions, while 11 failed due to poor initial guess values. In Figure , the results of all the local estimations were sorted according to the magnitude of their objective function (see Methods for definition); every point in the curve indicates an individual local optimum; the best fit had an objective function of 0.2435. The optimal parameter values are listed in Additional File 1, and fits to data are shown for a local optimum and a global optimum case (Figure and , respectively).
Figure 6 Quantitative model calibration by multi-start method. (a) Plot showing the multiple optimal solutions obtained by deterministic multi-start global optimization and sorted according to the magnitude of the objective function. (b, c) Plot comparing the …
It is not surprising that many local minima were detected using a multi-start method, because nonlinear and complex models like cell signaling networks usually exhibit objective function surfaces with multiple local optima. The plateaus near the global optimum and around objective function values of 40 and 100 in Figure could be due to: 1) a wide well on the hypothetical surface of parameter space, such that estimations from many nearby starting points converge to a single minimal solution, allowing us to easily arrive at the optimal solution, or 2) valley-shaped local optima on the surface of the objective function. Because a wide well on the surface of parameter space is rare in network models, the most likely cause of the plateaus was valley-shaped optima. Along a valley, solutions may be distinct if they are located far apart from one another, but they nevertheless fit the model similarly well. Ideally, when constructing predictive models, this situation should be avoided by reducing valleys to more focused wells on the surface of parameter space by adding constraints to the optimization problem.
Despite its ability to find good fits to the data, the multi-start method had the critical drawback of having a heavy computational load (Table ). As an alternative, optimization was significantly accelerated by using SRES (Table ). For SRES, the initial population of parameter value combinations, or "elements", was generated by random selection from a uniform distribution over the 8-dimensional parameter space with the boundaries described in Methods. The population was composed of 200 individuals and, for each round, 30 individuals with best fits were defined as parents for the next generation. To decide when to terminate the optimization, we posed as a requirement a minimum of a double-digit decline in the objective function value from the first generation, and the estimation was stopped at the 33rd
generation, after ~2 h of computation time. Using this fast method, we compared the fits obtained by including each of the three sets of dominant parameters obtained by the different sensitivity analysis methods. We found that the set of parameters identified by the average of local sensitivities was the best, although none of the fits obtained was as good as that obtained with the deterministic multi-start method (Tables , ). If, in the case of the parameters identified with the average of local sensitivities, we allowed the evolution to proceed further, the fit did improve, but very slowly, while the CPU time increased significantly (Table ). To obtain a goodness-of-fit equivalent to that achieved with the deterministic multi-start method, we implemented a previously described hybrid method [24]. Using this method, the optimization was carried out in two sequential phases: first, a solution in the vicinity of the global optimum was rapidly reached by the SRES method and, second, the solution was refined by a fast local estimation method until a pre-defined tolerance was satisfied (see Methods). As Table shows, this hybrid method could fit the model in much less computation time than the deterministic multi-start method, with an objective function as good as that obtained with the laborious multi-start method. Generally, the choice of optimization method depends not only on the model type but also on resource availability and approximation tolerance, and each method may perform differently on different models. For the model in this study, the combination of SRES and local estimation performed the most efficient survey of the parameter space among the global optimization approaches tested. This efficiency was due to its combination of rapid stochastic surveying of the whole space and deterministic searching within local regions.
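The two-phase idea can be sketched on a toy objective: a coarse stochastic phase (standing in for SRES) narrows the search to the neighborhood of an optimum, and a deterministic local phase polishes the solution to a tight tolerance. All functions and numbers here are illustrative, not those of the TRAIL model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical stand-in for the model-vs-data objective: Himmelblau's function.
def f(x):
    return (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2

# Phase 1: coarse stochastic search (an SRES-like select-and-mutate loop)
# quickly gets the search into the basin of an optimum.
pop = rng.uniform(-5, 5, size=(200, 2))
for _ in range(15):
    pop = pop[np.argsort([f(p) for p in pop])[:30]]                 # select
    pop = np.repeat(pop, 7, axis=0) + rng.normal(0, 0.3, (210, 2))  # mutate
x = min(pop, key=f)

# Phase 2: deterministic local refinement (compass search) polishes the
# solution until a pre-defined tolerance on the step size is satisfied.
step = 0.1
while step > 1e-10:
    probes = [x + step * d for d in
              (np.array([1, 0]), np.array([-1, 0]),
               np.array([0, 1]), np.array([0, -1]))]
    best = min(probes, key=f)
    if f(best) < f(x):
        x = best
    else:
        step *= 0.5

print("refined objective:", f(x))
```

The division of labor mirrors the text: the stochastic phase is cheap per iteration and explores globally, while the local phase converges quickly once started near the optimum.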
Comparison of estimation performance by different optimization algorithms
Influence of dominant parameter choice on optimization
To validate our choice of eight dominant parameters to estimate, we examined goodness-of-fit and computational cost while varying the number of parameters to be estimated for the two deterministic optimization methods, where the number of parameters optimized has the greatest impact on computation time. In Figure , we show CPU time for the multi-start search and optimal objective function values as a function of the number of parameters estimated, for both the local search and the multi-start search. The fit to the data at the global optimum solution detected by the multi-start search improved with an increasing number of parameters, reaching a plateau at eight parameters, while the computational cost increased exponentially. Importantly, the performance of the local search deteriorated significantly when the number of parameters increased. This is because when the local search starts from a poor initial guess, the chance of arriving at local optimum solutions with poor fitting performance increases, and with a larger parameter space to sample, the local search is more likely to start from a poor initial guess. This result shows how important it is to apply a global, or hybrid, optimization algorithm to obtain the best fits (Table ), or to adequately limit the search space when using a local search. Overall, the good performance (using a multi-start search) and affordable computational cost led us to conclude that choosing the eight parameters identified as dominant in the global sensitivity analysis for quantitative model fitting was indeed an appropriate compromise. Fitting more than eight parameters for the TRAIL model optimization would yield only a small improvement in fit, at much greater computational cost. The number of dominant parameters in a particular model will certainly depend on its size and complexity, but the sensitivity analysis-based method described above allows their identification.
Figure 7 Effect of number of parameters on model calibration performance. Effect of varying the number of parameters included in model calibration on model fit, considering local or multi-start deterministic search and CPU usage time for global multi-start optimization.
Finally, Figure shows how much the model improved using our method relative to the manual calibration used in the original study [7]. It is noteworthy that adjustment of a few important parameters could substantially improve agreement between model output and experimental data. The systematic methodology makes identifying and estimating those important parameters straightforward, in contrast to manual calibration, which is inevitably labor-intensive and time-consuming and offers less guarantee of a successful model fit.
Figure 8 Improvement of model fitting by systematic calibration process. (a) Model fitting with the basal rate constants, which were manually calibrated. (b) Model fitting with the rate constants estimated by the systematic calibration process.