Figure 1 depicts exemplars of the two classes that were the targets of the fitting. To demonstrate the intrinsic variability that these neurons exhibit in response to a repeated (frozen) input, two repetitions of the same 2 s long, 150 pA depolarizing current are displayed for each class. Note the difference between the two traces within each class, even though exactly the same stimulus was given in both cases. Our fitting method therefore aims at capturing the main features of these responses rather than targeting a specific trace (see Methods).
The six features chosen in the present study to characterize the firing patterns are depicted in Figure 2. Their corresponding experimental target values (mean ± SD) for the two classes are summarized in Table 1. As can be seen in this table, the values of some of these features are clearly distinct between the two classes, supporting the possibility that the fitting procedure will give rise to two different models (with different sets of maximal conductances for the different ion channel currents chosen, see Methods) for these two classes.
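As a concrete illustration of one such firing-pattern feature, an accommodation index can be computed from inter-spike intervals (ISIs). The definition sketched below is a common one and is an assumption on our part; the paper's exact formula is given in its Methods, not in this excerpt, and the function name and the skip parameter `k` are illustrative.

```python
def accommodation_index(spike_times, k=1):
    """Average normalized difference between consecutive ISIs.

    ASSUMED definition (not quoted from this paper's Methods):
    positive values mean ISIs lengthen through the train
    (accommodation); the first k ISIs are skipped to exclude
    transient onset behavior.
    """
    isis = [t1 - t0 for t0, t1 in zip(spike_times, spike_times[1:])]
    if len(isis) <= k + 1:
        return 0.0
    terms = [
        (isis[i] - isis[i - 1]) / (isis[i] + isis[i - 1])
        for i in range(k + 1, len(isis))
    ]
    return sum(terms) / len(terms)

# An accommodating train: ISIs lengthen, so the index is positive.
accommodating = [0.0, 0.05, 0.12, 0.21, 0.32, 0.45]
assert accommodation_index(accommodating) > 0
```

A perfectly regular train yields an index of (numerically) zero, which is what makes the feature discriminative between regular fast-spiking and accommodating responses.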
Table 2 shows the mean and variability of the features for the neuron selected as the exemplar of the accommodating class (AC 5), alongside four other neurons of the same class. Also given for each feature are the class statistics, obtained by taking the mean of each of the five neurons as a single sample. As can be seen, the exemplar represents the class rather faithfully, with somewhat more pronounced accommodation.
All that the algorithm requires are the target statistics – the mean and SD of the different features. Thus, one can create a model based on the statistics of each individual cell separately, or fit the general class statistics directly. We find that the algorithm managed to fit all of the single-neuron statistics presented here, as well as the class statistics. Table 3 shows the equivalent of Table 2 for the fast spiking class. As can be seen, the spike rate feature is quite variable in this class.
In Figure 3, the results of our fitting procedure are demonstrated for the exemplars of the two electrical classes (AC 5, FS 5). In Figure 3A, we plot the progression of error values with generation for the accommodating behavior. The error (y-axis) is composed of the sum of errors for six features and is thus six-dimensional. To simplify the graph, the error values for the six features are weighted equally and summed to yield a single error value. The graph plots the error value of the organism with the minimal error out of the 300 organisms evaluated in each generation. Three repetitions of the fitting with different random seeds are shown. In all three cases, the final iterations converge to a similar error value ranging between 5 and 7 (red arrow at right). This means that, on average, every evaluated feature of the best model fell within one SD of the experimental traces.
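The error bookkeeping described above – per-feature errors expressed in SD units, summed with equal weights, and the best of the 300 organisms reported per generation – can be sketched as follows. The function names are illustrative, not taken from the paper's code.

```python
def feature_errors(model_features, target_mean, target_sd):
    """Error of each feature in SD units: |model - mean| / SD."""
    return [abs(m - mu) / sd
            for m, mu, sd in zip(model_features, target_mean, target_sd)]

def best_of_generation(population_features, target_mean, target_sd):
    """Index and summed error of the organism with the smallest
    equally weighted sum of its feature errors."""
    sums = [sum(feature_errors(f, target_mean, target_sd))
            for f in population_features]
    best = min(range(len(sums)), key=sums.__getitem__)
    return best, sums[best]
```

With six features, a summed error of 6 corresponds to an average deviation of one SD per feature, which is why a final summed error between 5 and 7 means the best model lies, on average, within one SD of the experimental statistics.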
Figure 3 Fitting of model to experimental traces using the multi-objective optimization procedure. (A) Convergence of summed errors for the accommodating behavior with generation. Errors for the six objectives (in SD units) are weighted equally and summed to a single …
Figure 3B shows the response of the model that had the lowest error value at the last iteration to a 225 ms long depolarizing current pulse, allowing a visual impression of the fit (experimental response in red, model in green). As can be seen, though the model was never directly fit to the raw voltage traces, the resulting voltage trace closely resembles the experimental one. Indeed, zooming into these traces (Figure 3C) shows that the general shape of the AP (its height, width, AHP depth) is very similar between model and experiment. The same close fit between model and experiment is also obtained for the other class – the fast spiking interneuron (Figures 3D–F).
Regarding the quality of the fit, one may see that the algorithm manages to find a good, but not perfect, match to both target interneuron electrical behaviors. As stated previously, when one wishes to fit experimental data with a limited set of channels, not finding a perfect fit is hardly surprising. This could be due to inaccuracies in the experimentally derived dynamics of the channels, or to the fact that the real cell might contain on the order of tens of channel types distributed inhomogeneously across the surface of the dendrite, whereas the model contains only 10 channels distributed over the soma and a passive leak current in the dendrite and axon. It is also important to note that there are a few significant deviations between the model derived from the feature-based error and the experimental traces. This is most marked in the height of the first spike (see, for instance, Figures 3B and E). This could have been addressed by modifying the details of the experimentally derived Nat channel dynamics, but we chose not to do so.
Using MOO, we are interested in exploring the best possible trade-offs between objectives as found at the end of the fitting procedure. To visualize this, the error value of each of the M different objectives should be plotted against the errors in the other objectives. In Figure 4, we project this M-dimensional error space onto a two-dimensional space, choosing two examples (Figures 4A and B) out of the M choose 2 possible pairs. The 300 organisms (parameter sets) from the fitting of the accommodating behavior at the final iteration (number 1000) are shown. The Pareto front (black dashed line) is obtained by finding the solutions that do not dominate each other (for definition, see Methods) – and thus represent the best trade-offs – and connecting their error values by a line (in two-dimensional space, or by the equivalent manifold in higher-dimensional space).
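The non-domination criterion used to construct the front can be sketched in a few lines. This is a generic illustration of Pareto dominance for minimization problems, not the paper's implementation; the function names are ours.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives are errors to minimize)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of error vectors."""
    return [p for i, p in enumerate(points)
            if not any(dominates(q, p)
                       for j, q in enumerate(points) if j != i)]

# Of five error vectors, only the mutually non-dominated ones survive:
front = pareto_front([(1, 3), (2, 2), (3, 1), (2, 3), (3, 3)])
# front -> [(1, 3), (2, 2), (3, 1)]
```

Here (2, 3) and (3, 3) are dominated by (2, 2) and therefore excluded; the three remaining points each trade one objective off against the other.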
The goal of the fitting procedure was to minimize each of the error values. Though the full error, and hence also the Pareto front, resides in six-dimensional space, in order to visualize the front let us consider the full error to consist solely of the two-dimensional projection under consideration. Under that assumption, the best result in the two-dimensional plot would be points whose values are as close to zero as possible in both the x and y dimensions. If a value of zero cannot be reached simultaneously in both dimensions, then the exact nature of the trade-offs between the two plotted error functions, i.e., the shape of the Pareto front, becomes of interest. While the shape of the front can be quite arbitrary, it is useful to consider two boundary cases in two dimensions. One is the case in which there is no effective trade-off between the two objectives (accommodation index vs. AP overshoot, Figure 4A). In this case, in the left part of Figure 4A, the Pareto front is almost parallel to the y-axis. Thus, for the lowest value of the error in the AP overshoot (the x-axis) there is a range of both high and low error values for the accommodation index (y-axis). Naturally, the desired value would be the one with the lowest error on the y-axis. The same holds, mutatis mutandis, for the x-axis, as the Pareto front in its lower part is parallel to the x-axis. Hence, the best solution is clearly that in the lower left corner (black arrow). Note that since a minimum in both error values can be achieved simultaneously, there is no effective trade-off between the two objectives. In this case, the lower left point of the Pareto front would also be the optimal solution for weighted sums of the two objectives, even when the weighting of the two errors is not identical.
In contrast, the Pareto front in Figure 4B (accommodation index vs. spike rate) is not parallel to the axes. Thus, some of the points along its perimeter will have lower values of one feature but higher values of the other (e.g., the points marked by the three arrows). As the minimal value of one feature can be achieved only by accepting a value of the other objective higher than its minimum, some trade-off exists between these two features. Therefore, different decisions on the relative importance of the two features will result in different points on the Pareto front being considered the most desirable model. Accordingly, if one wishes to sum the two errors and sort the solutions on the Pareto front by the value of this sum, different weightings of the two objectives will result in different models being considered the minimum of the sum. The black arrow in Figure 4B marks the point considered the minimum under an equal weighting. Alternatively, putting more emphasis on the error in spike rate (x-axis) would select a point such as that marked by the light green arrow (top-left). Conversely, weighting the error in accommodation (y-axis) more heavily would favor a point such as that marked by the dark green arrow (bottom-right).
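This selection of a single model from the front by a weighted sum can be sketched directly; the small numerical front below is invented for illustration and is not the paper's data.

```python
def pick_by_weights(front, weights):
    """Select the point on the Pareto front that minimizes the
    weighted sum of its error components."""
    return min(front, key=lambda p: sum(w * e for w, e in zip(weights, p)))

# A toy two-objective front (x-error, y-error):
front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]

pick_by_weights(front, (1, 1))  # equal weighting -> (2.0, 2.0)
pick_by_weights(front, (3, 1))  # stress x-error  -> (1.0, 4.0)
pick_by_weights(front, (1, 3))  # stress y-error  -> (4.0, 1.0)
```

Changing the weights slides the selected optimum along the front, exactly the behavior illustrated by the light and dark green arrows.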
Figures 4C and D serve to demonstrate the effect of selecting different preferences regarding the two features. Figure 4C shows the response of the model corresponding to the error values marked by the light green arrow in Figure 4B (upper-left), and Figure 4D shows the model response corresponding to the error marked by the dark green arrow (lower-right), both to a 2 s long, 150 pA depolarizing step current. As can be seen, spike accommodation in the light green model trace (Figure 4C) is less pronounced than in the dark green model trace (Figure 4D). This reflects the fact that the error value for accommodation of the dark green model is smaller than that of the light green model. On the other hand, in the light green model, more weight was given to the spike rate feature. Indeed, the light green trace has 14 APs (exactly matching the experimental mean rate of seven spikes per second), whereas the dark green trace has only 13 APs. Note that variability similar to that depicted in these two model-generated green traces may be found in the experimental traces for the same cell and the same depolarizing step. The light green model trace is more reminiscent of the first experimental trace of the accommodating cell (Figure 1A, upper-left), whereas the dark green trace is more similar to the second (Figure 1A, upper-right).
Figure 5 portrays the spread of model parameter values at the end of the fitting for both electrical classes (red – accommodating; blue – fast-spiking). Each of the 300 parameter sets (each composed of 12 parameters – the maximal conductances of the different ion channels) at the final iteration of a fitting that passes a quality criterion is represented as a circle. The criterion in this case was an error of less than 2 SD in each feature. Each circle is plotted at the point corresponding to the value of the selected channel conductance, normalized by the range allowed for that conductance (see Methods). Since there are 12 parameters, it is difficult to visualize their location in the full 12-dimensional space. Thus, for illustration, we project the parameter values of all models onto a three-dimensional subspace (Figures 5A and B) and onto a one-dimensional space for each of the 12 parameters (Figure 5C). Note that many solutions overlap; hence, the number of circles (for each firing class) might seem smaller than 300. In Figure 5D, a subset of only seven of the solutions depicted in Figure 5C is displayed, with differently colored lines connecting the parameter values of each acceptable solution.
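The two operations described here – mapping each conductance onto its allowed search range and accepting only solutions whose every feature error is below 2 SD – can be sketched as follows (function names are illustrative, not from the paper's code):

```python
def normalize(g, g_min, g_max):
    """Map a maximal conductance onto [0, 1] within the range
    allowed for that channel in the search."""
    return (g - g_min) / (g_max - g_min)

def acceptable(errors_sd, threshold=2.0):
    """Acceptance criterion from the text: EVERY feature error
    (in SD units) must fall below the threshold, not merely the
    summed error."""
    return all(e < threshold for e in errors_sd)
```

Requiring every objective to pass, rather than the sum, prevents one well-fit feature from masking a badly fit one.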
Figure 5 Parameter values of acceptable solutions. A–B. Each of the 300 parameter sets at the final iteration of a fitting attempt that has an error of less than 2 SD in each objective is represented as a circle. The circle is plotted at the point corresponding …
A few observations can be made from Figures 5A–C. First, there are many combinations of parameters that give rise to acceptable solutions (i.e., non-uniqueness; see Golowasch et al., 2002; Keren et al., 2005; Prinz et al., 2003, and below). Second, confinement of the parameters of the two modeled classes (red vs. blue) to segregated regions of parameter space can be seen both in the three-dimensional space (Figure 5A) and even in some of the one-dimensional projections (for instance, Nat in Figure 5C). For other subspaces, in contrast, both in three dimensions (Figure 5B) and in single dimensions (for instance, Im in Figure 5C), the regions corresponding to the two electrical classes are more intermixed. Third, for some channel types (e.g., SK or Im), successful solutions appear all across the parameter range, whereas for other channel types (e.g., Ih) successful solutions are restricted to a limited range of parameter values.
Figure 6 portrays the experimental (red) versus model (green) variability. The variability of the in vitro neurons is most likely due to the stochastic nature of the ion channels (Mainen and Sejnowski, 1995; Schneidman et al., 1998). In contrast, the in silico neurons are deterministic and have no internal variability. Yet, if one considers the full group of models generated to fit one cell, differences in the channel conductances may bring about similar variability. Thus, even though the sources of the variability are disparate, the range of models may be able to capture the experimental variability. As the number of repetitions performed experimentally was low, we normalize the values of the different repetitions of each cell to the mean and SD of that cell, and pool all five cells together. As can be seen in Figure 6, the models (green circles) manage to capture nearly the entire range of experimental variability (red circles) with minimal bias.
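The per-cell normalization and pooling step amounts to z-scoring each cell's repetitions against that cell's own statistics. A minimal sketch, assuming a sample SD (n − 1 denominator) and with the function name our own:

```python
def zscore_pool(cells):
    """Normalize each repetition of a feature to its own cell's mean
    and sample SD, then pool all cells' repetitions together.

    `cells` is a list of lists: one list of repeated feature values
    per cell (few repetitions each, as in the experiment).
    """
    pooled = []
    for reps in cells:
        n = len(reps)
        mean = sum(reps) / n
        sd = (sum((r - mean) ** 2 for r in reps) / (n - 1)) ** 0.5
        pooled.extend((r - mean) / sd for r in reps)
    return pooled
```

Because every cell is centered on its own mean, cells with very different absolute feature values (e.g., the variable spike rates of the fast spiking class) can be compared on a common SD scale.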
Figure 6 Experimental versus model variability – accommodating neurons. For each of the five cells shown in Table 2, the values of each feature for the 225 pA step current are extracted for all repetitions. The mean of the feature …