Accurately predicting loop structures is important for understanding the functions of many proteins. To obtain loop models with high accuracy, efficiently sampling the loop conformation space to discover reasonable structures is a critical step. In loop conformation sampling, coarse-grain energy (scoring) functions coupled with reduced protein representations are often used to reduce the number of degrees of freedom as well as the sampling computation time. However, because reduced representations treat many factors implicitly, coarse-grain scoring functions can be insensitive and inaccurate, which may mislead the sampling process and consequently cause important loop conformations to be ignored.
In this paper, we present a new computational sampling approach to obtain reasonable loop backbone models, called the Pareto Optimal Sampling (POS) method. The rationale of the POS method is to sample the function space of multiple, carefully selected scoring functions to discover an ensemble of diversified structures yielding Pareto optimality over all sampled conformations. The POS method can tolerate insensitivity and inaccuracy in individual scoring functions and thereby lead to significant accuracy improvement in loop structure prediction. We apply the POS method to a set of 4- to 12-residue loop targets using a function space composed of backbone-only Rosetta, DFIRE, and a triplet backbone dihedral potential developed in our lab. Our computational results show that in 501 out of 502 targets, the model sets generated by POS contain structure models within subangstrom resolution. Moreover, when the all-atom models are generated by local optimization from the backbone models and are ranked by our recently developed Pareto Optimal Consensus (POC) method, the top-ranked models have Root Mean Square Deviation (RMSD) less than 1A in 96.8%, 84.1%, and 72.2% of the short (4~6 residues), medium (7~9 residues), and long (10~12 residues) targets, respectively. Similar sampling effectiveness is also found on a set of 13-residue loop targets.
Prediction of loop region conformations in proteins is important in structural biology for its wide applications in homology modeling1, segment definition2, protein design3, characterization of protein functions4,5, and ion channel simulation6,7. Loop structure prediction without the use of a structural template typically involves several phases: sampling, clustering, refining, and ranking8,9. In the sampling phase, the loop conformation space is explored to produce a large ensemble of reasonable, coarse-grain models. Representative models are then selected from the ensemble using a clustering algorithm to reduce redundancy. These representative models are used to build fine-grain models in the refining phase. Finally, in the ranking phase, the models are assessed and the top-ranked ones are reported as the predicted results. The sampling phase is of particular importance – if the sampling process cannot reach conformations close enough to the native, it is unlikely that a high-resolution near-native model can be obtained in the refining phase.
Although significantly smaller than that of a complete protein molecule, the loop conformation space still poses a sampling challenge for loops of nontrivial length. To reduce the number of degrees of freedom and achieve sampling efficiency, coarse-grain approaches are often employed to construct loop conformations, in which the loop peptide is modeled using a reduced representation. A popular approach is to approximately represent loop backbone conformations using a set of (φ, ψ) torsion angles8–14. Alternatively, fragment libraries15,16 with backbone atoms and a few side chain atoms can also be used to build up loop backbones. Loop closure constraints are satisfied by finding analytical solutions46,47, or by using random tweak17, inverse kinematics18, or Cyclic Coordinate Descent (CCD)19. The recent kinematic closure approach developed by Mandell et al.20 can lead to loop buildups with subangstrom resolution in long loops.
Coupled with reduced representations in loop modeling, coarse-grain energy (scoring) functions have been developed to guide the sampling process toward native-like conformations. For example, Zhang et al.22 developed a soft-sphere potential to rapidly construct loops. Rohl et al.15 modeled loops by optimizing the Rosetta score using fragment buildup. We also developed a statistics-based potential22 to assess loop torsions based on the distribution of adjacent φ-ψ backbone torsion angle pairs in the context of all possible residues as derived from structural data in a loop library. Jacobson et al.8 built loop samples using a high-resolution rotamer library and then filtered them with various heuristic scores and criteria. On the other hand, all-atom statistics- or physics-based energy (scoring) functions are used in the refining and ranking phases. DFIRE, a statistics-based scoring function, has proven successful in loop selection26. Cui et al. developed a grid-based force field, with which their Monte Carlo sampling approach obtained solutions within 1.8A of the native in a test set of 14 protein loops48. Jacobson et al.8, Zhu et al.9, Rapp and Friesner12, de Bakker et al.13, Felts et al.14, and Rapp et al.23 modeled loops using physics-based potentials such as CHARMM29, AMBER30, and OPLS-AA31 with various solvent models. Other terms can also be incorporated to improve the accuracy of the scoring functions. For example, Xiang et al.11 developed a combined energy function with force-field energy and RMSD (Root Mean Square Deviation)-dependent terms, while Fogolari and Tosatto24 took advantage of the concept of “colony energy” by considering the loop entropy (an important component in flexible loops) as part of the total free energy.
The scoring functions play a critical role in loop modeling. One problem with coarse-grain scoring functions is their potential inaccuracy: because many factors are treated implicitly, the coarse-grain score may not faithfully reflect the true energy surface. Although not always present, distortion introduced by coarse-grain scoring functions can mislead the sampling process and thus cause important conformations to be ignored. Even all-atom scoring functions have the insensitivity problem, i.e., the native or near-native conformations often do not exhibit the lowest scores. Scoring function inaccuracy and insensitivity have been shown and discussed in several loop modeling studies8,10,11,25,27 with a variety of energy functions.
In this article, we report a Pareto Optimal Sampling (POS) method for sampling the loop conformation space. The fundamental idea of the POS method is to build a function space composed of multiple, carefully selected scoring functions and then sample an ensemble of diversified conformations yielding Pareto optimality. In a certain sense, the POS method can be thought of as a super consensus method – instead of finding a conformation optimal in one scoring function or one particular combination of scoring functions, the POS method intends to cover all structurally diversified conformations with optimal consensus in every possible positive score combination. This is achieved by obtaining diverse coverage of the Pareto optimal front of the function space composed of multiple scoring functions. The POS-sampled conformations are expected to tolerate insensitivity in individual scoring functions, particularly coarse-grain ones, and thus increase the chance of discovering good, native-like conformations. We apply the POS method to a set of 4~12-residue loop targets and compare with the decoy sets generated by Jacobson et al.8 using a hierarchical all-atom prediction approach. The loops in Jacobson's decoy sets are regarded as “difficult” targets26,33, and Jacobson's decoy sets have been frequently used as a benchmark for loop prediction26,33,34. In addition to the loop targets in Jacobson's decoy sets, we compare POS sampling with recent results9 on a set of 13-residue targets. Theoretical comparisons between POS and other consensus methods are also discussed.
The theoretical foundation of the POS method is Pareto optimality36, whose definition is based on the dominance relationship. Without loss of generality, assuming that minimization is the optimization goal for all scoring functions f1, …, fd, a conformation u is said to dominate another conformation v (u ≺ v) if both conditions i) and ii) are satisfied: i) u is no worse than v in every scoring function, i.e., fi(u) ≤ fi(v) for all i = 1, …, d; and ii) u is strictly better than v in at least one scoring function, i.e., fj(u) < fj(v) for some j.
By definition, the conformations which are not dominated by any other conformations in the conformation set form the Pareto-optimal solution set. The complete set of Pareto-optimal solutions is referred to as the Pareto-optimal front.
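The dominance test and the resulting Pareto-optimal front can be sketched in a few lines of Python; this is an illustrative sketch, not the authors' implementation, and each conformation is represented only by its tuple of scores (minimization assumed, as above):

```python
def dominates(u_scores, v_scores):
    """Return True if u dominates v: u is no worse than v in every
    scoring function and strictly better in at least one (minimization)."""
    no_worse = all(u <= v for u, v in zip(u_scores, v_scores))
    strictly_better = any(u < v for u, v in zip(u_scores, v_scores))
    return no_worse and strictly_better

def pareto_front(population):
    """Conformations not dominated by any other form the Pareto-optimal front."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]
```

For example, among the score tuples (1, 3), (3, 1), (2, 2), and (3, 3), the first three are mutually non-dominated and form the front, while (3, 3) is dominated by (2, 2).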
We employ three scoring functions, backbone Rosetta, DFIRE, and Triplet, to form the function space for loop sampling in POS. All of these scoring functions have demonstrated their effectiveness in loop modeling. The Rosetta scoring function is a complex combination of physics- and statistics-based energy terms and is one of the most successful tools in predicting the overall backbone fold for protein domains that lack any detectable structural analogs in the PDB38,39. The Rosetta program has also demonstrated success in predicting protein loops15,20. DFIRE is a statistics-based potential derived from a high-resolution protein structure data set. DFIRE has previously proven successful in loop selection26 as well as in filtering out unreasonable models34. The Triplet torsion angle scoring function developed in our lab is based on the distribution of adjacent φ-ψ backbone torsion angle pairs in the context of all possible adjacent residue triplets as derived from the coil library37. The Triplet scoring function has shown its effectiveness in distinguishing native loops from erroneous ones22. Because they are developed using different methods and data sets, there is little correlation among these three scoring functions.
In most optimization methods used in protein structure modeling, the goal is to minimize a single scoring (energy) function to find the solution with the lowest energy. In POS sampling involving multiple scoring functions, there are two equally important goals: 1) sampling a set of solutions near the Pareto optimal front; and 2) obtaining diverse coverage of solutions near the Pareto optimal front. POS sampling also differs from classical multi-objective optimization (MOO)36 in that POS is interested not only in the conformations at the Pareto optimal front, but also in structurally diversified conformations near it.
The POS method is a population-based approach to sampling loop backbone conformations in the function space of multiple scoring functions. Initially, a population P with N loop conformations, C1, …, CN, is randomly generated. Each loop conformation Ci with n residues is represented by a vector (θ1, …, θ2n) containing the backbone dihedral angles (φ1, ψ1, …, φn, ψn). The ω dihedral angles are kept constant at their average value of 180°, and the bond angles and bond lengths also remain constant. A fitness function is designed to measure the dominance relationship and popularity of a conformation in the population. A differential evolution-style41 crossover for continuous torsion angles is developed to generate new conformations satisfying the loop closure condition. Adaptive Monte Carlo is used to determine acceptance of new conformations based on the Metropolis rule. The population is updated by the Metropolis transition, and only the top-ranked conformations are selected for reproduction, which evolves the population toward the Pareto optimal front. Moreover, the similarities between conformations in the population can be exploited, allowing the sampling process to discriminate against popular conformations so as to achieve solution diversity.
The POS sampling program uses a fitness assignment scheme based on the number of external non-dominated points40. Consider a population P with N individual conformations, C1, …, CN. The fitness calculation is based on the strength si of each non-dominated conformation Ci, defined as the proportion of conformations in P dominated by Ci. The fitness of an individual Ci is then defined (lower values being better) as fitness(Ci) = si if Ci is non-dominated, and fitness(Ci) = 1 + Σj: Cj dominates Ci sj otherwise.
The conformations with fitness less than 1.0 are the non-dominated ones in the population. Among the non-dominated conformations, the fitness function achieves diversity by favoring those that dominate fewer conformations and penalizing popular conformations that dominate many40. Among the dominated conformations, preference is given to those dominated by fewer, weaker conformations. Conformations in a population are sorted by fitness, and only the top m are allowed to reproduce in POS.
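Assuming the SPEA-style fitness assignment of ref. 40 described above (strengths for non-dominated conformations, one plus the dominators' strengths for dominated ones), a minimal Python sketch might look like the following; the function names are illustrative:

```python
def dominates(u, v):
    """u dominates v (minimization): no worse in every score, better in one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def assign_fitness(scores):
    """SPEA-style fitness (lower is better).  strength s_i = fraction of the
    population dominated by C_i; a non-dominated C_i gets fitness s_i (< 1),
    a dominated C_i gets 1 + the sum of the strengths of its dominators."""
    n = len(scores)
    strength = [sum(dominates(scores[i], scores[j]) for j in range(n)) / n
                for i in range(n)]
    fitness = []
    for i in range(n):
        dominators = [j for j in range(n) if dominates(scores[j], scores[i])]
        if dominators:
            fitness.append(1.0 + sum(strength[j] for j in dominators))
        else:
            fitness.append(strength[i])
    return fitness
```

With the score tuples (1, 3), (3, 1), (2, 2), and (3, 3), the first three (non-dominated) receive fitness 0.25 each, while (3, 3), dominated by all of them, receives 1.75.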
Differential Evolution (DE)41 is used in POS sampling to produce new conformations by crossing over conformations selected from the current population. We adopt the “DE/rand/2/exp” scheme listed in DE41, a robust scheme particularly suitable for scoring functions with complicated landscapes. In the “DE/rand/2/exp” scheme, for each conformation Ci(θ1, …, θ2n), a mutant vector Vi is formed by Vi = Cr1 + F·(Cr2 − Cr3) + F·(Cr4 − Cr5),
where i, r1, r2, r3, r4, and r5 are mutually distinct random integers in the interval [1, m], m refers to the number of top conformations selected for reproduction, and F > 0 is a tunable amplification control constant as described in DE41. Other forms of mutant vector generation are also provided in DE41. Then, a new conformation Ci′(θ1′, …, θ2n′) is generated by the crossover operation on Vi and Ci:
θj′ = vj for j = ⟨k⟩2n, ⟨k+1⟩2n, …, ⟨k+L⟩2n, and θj′ = θj otherwise, where ⟨·⟩2n denotes the modulo operation with modulus 2n, k is a randomly generated integer from the interval [0, 2n−1], L is an integer drawn from [0, n−1] with probability Pr(L = l) = (CR)^l, and CR ∈ [0, 1) is the crossover probability. Practical advice suggests that CR = 0.9 and F = 0.8 are suitable choices for the DE scheme41. These parameter values are also adopted in our sampling algorithm.
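A sketch of the “DE/rand/2/exp” operator under the parameter choices above (F = 0.8, CR = 0.9); the population layout and helper name are illustrative rather than the authors' code, and the run-length draw is the usual geometric scheme:

```python
import random

def de_rand2_exp(population, i, F=0.8, CR=0.9):
    """'DE/rand/2/exp' sketch: mutant V = C_r1 + F*(C_r2 - C_r3) + F*(C_r4 - C_r5),
    followed by exponential crossover copying a contiguous (modulo-2n) run of
    mutant angles, of geometrically distributed length, into C_i."""
    m, dim = len(population), len(population[i])
    # i, r1..r5 mutually distinct, drawn from the m conformations kept for reproduction
    r1, r2, r3, r4, r5 = random.sample([j for j in range(m) if j != i], 5)
    V = [population[r1][d]
         + F * (population[r2][d] - population[r3][d])
         + F * (population[r4][d] - population[r5][d]) for d in range(dim)]
    child = list(population[i])
    k = random.randrange(dim)                # random start position
    L = 1
    while L < dim and random.random() < CR:  # run length ~ Pr(L = l) proportional to CR**l
        L += 1
    for offset in range(L):
        child[(k + offset) % dim] = V[(k + offset) % dim]  # modulo-2n indexing
    return child
```

Note that the population must keep at least six conformations so that i, r1, …, r5 can be mutually distinct.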
After DE crossover, the loop closure condition of the new conformation is usually violated. The greedy-heuristic Cyclic Coordinate Descent (CCD) algorithm19 is applied to each new conformation to close the loop. To minimize distortion of the newly crossed-over section, CCD starts from the torsion angle next to the crossover segment. Figure 1 illustrates the procedure of using DE-style crossover to generate a new loop conformation.
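The CCD iteration itself is easy to illustrate in a planar toy setting: each joint angle is rotated in turn so that the chain's free end moves toward the closure target. The 2-D unit-link chain below is a deliberate simplification (real loop closure operates on φ/ψ torsions in 3-D), intended only to show the greedy per-joint update:

```python
import math

def ccd_close_2d(angles, link=1.0, target=(2.0, 0.0), iters=100, tol=1e-3):
    """Toy planar CCD: rotate each joint so the chain's free end moves
    toward the fixed closure target (2-D sketch of the 3-D algorithm)."""
    angles = list(angles)
    n = len(angles)

    def joints(a):
        """Forward kinematics: positions of all joints plus the free end."""
        pts, x, y, t = [(0.0, 0.0)], 0.0, 0.0, 0.0
        for ang in a:
            t += ang
            x, y = x + link * math.cos(t), y + link * math.sin(t)
            pts.append((x, y))
        return pts

    for _ in range(iters):
        for j in range(n - 1, -1, -1):
            pts = joints(angles)
            end, pivot = pts[-1], pts[j]
            # Rotate joint j so the pivot->end direction aligns with pivot->target,
            # the rotation that brings the end closest to the target.
            a1 = math.atan2(end[1] - pivot[1], end[0] - pivot[0])
            a2 = math.atan2(target[1] - pivot[1], target[0] - pivot[0])
            angles[j] += a2 - a1
        ex, ey = joints(angles)[-1]
        if math.hypot(ex - target[0], ey - target[1]) < tol:
            break
    return angles
```

For a reachable target, a few sweeps typically bring the end-to-target distance well under the tolerance.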
In POS sampling, Monte Carlo moves based on the Metropolis transition are used to determine acceptance of newly generated conformations into the population. The transition probability depends only on the change in the fitness function value. The Metropolis-Hastings ratio is calculated as r = exp(−[fitness(Ci′) − fitness(Ci)] / T), and the new conformation Ci′ generated by DE is accepted with probability p = min(1, r).
T is the simulated temperature, which is used to control the acceptance rate of the Metropolis transitions. Studies43,44,49 in the Monte Carlo literature have shown that a Markov Chain Monte Carlo (MCMC) sampler yields the best performance with an acceptance rate of around 20%. In this paper, we maintain an acceptance rate of 15%~25% by adjusting T in each iteration: when the acceptance rate falls below 15%, T is increased by 10%; when it exceeds 25%, T is decreased by 10%.
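The Metropolis acceptance rule and the 10% temperature adjustment can be sketched as follows, assuming (as described above) that acceptance depends only on the fitness change; the 15%~25% band is taken from the text and the function names are illustrative:

```python
import math
import random

def metropolis_accept(fit_old, fit_new, T):
    """Metropolis rule on the change in fitness (lower fitness is better)."""
    if fit_new <= fit_old:
        return True
    return random.random() < math.exp(-(fit_new - fit_old) / T)

def adapt_temperature(T, accepted, proposed, low=0.15, high=0.25):
    """Nudge T by 10% to keep the acceptance rate inside [low, high]."""
    rate = accepted / max(proposed, 1)
    if rate < low:
        return T * 1.10   # too few acceptances: raise T to accept more
    if rate > high:
        return T * 0.90   # too many acceptances: lower T to accept less
    return T
```

Because T only rescales the fitness difference, this adaptation trades strict detailed balance for a controlled acceptance rate, as discussed next.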
Using DE in population-based MCMC has been proved to preserve detailed balance55. However, controlling the acceptance rate by adjusting the simulated temperature T in POS may break the detailed balance of the underlying MCMC, and thus the resulting samples may not strictly follow the Boltzmann distribution. Nevertheless, complying with the detailed balance condition is not necessary in this loop modeling application, since our goal is to obtain good coverage of loop conformations near the Pareto optimal front.
We evaluate the convergence of the loop conformation sampling by measuring the hypervolume indicator42. For a population P(C1, …, Ck, …, CN), the hypervolume indicator HYP(P) relative to a reference point R is defined as HYP(P) = Λ(∪k=1..N VOL(Sk, R)), where Λ denotes the Lebesgue measure and Sk is the corresponding point of conformation Ck in the scoring function space. VOL(U, V) is the hyper-rectangle spanned by two points U and V in the scoring function space, whose Lebesgue measure is Πi |ui − vi|. The reference point R can be arbitrarily selected in the scoring function space such that ri ≥ max fi for all scoring functions f1, …, fi, …, fd; for convenience, we build R from the worst observed scoring function values.
The hypervolume indicator measures the volume dominated by the non-dominated solutions. Generally, the dominated solutions do not contribute to the hypervolume indicator, and the worst-case complexity of computing the hypervolume is O(M^(d−1)), where d is the number of scoring functions and M is the number of non-dominated solutions. For large d, calculating the hypervolume is very costly. However, our loop conformation sampling program uses only three scoring functions, so calculating the hypervolume indicator does not pose significant overhead to the overall sampling computation. The hypervolume indicator value is monitored during the sampling process. Since POS is based on MCMC transitions, we can calculate the autocorrelation function ΓHYP(t) and evaluate the integrated autocorrelation time (IAT), τHYP, to estimate the number of iterations needed to achieve convergence. The autocorrelation function is defined as ΓHYP(t) = E[(HYP(Ps) − μ)(HYP(Ps+t) − μ)] / σ², where HYP(Pt) refers to the hypervolume indicator of the POS population P at time t, and μ and σ² are the mean and variance of the hypervolume trace. The IAT can then be estimated by τHYP = 1 + 2 Σt≥1 ΓHYP(t).
The MCMC literature45 indicates that the number of iterations, NUM, for the sampler should satisfy NUM >> τHYP.
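For illustration, a hypervolume computation and a simple truncated-sum IAT estimate might look like the sketch below. The paper uses d = 3 scoring functions; the 2-D sweep and the truncate-at-first-non-positive-lag rule here are simplifications chosen to keep the sketch short:

```python
def hypervolume_2d(points, ref):
    """Dominated area of a 2-D point set relative to reference point `ref`
    (minimization), via a sweep over the non-dominated points sorted on the
    first score."""
    front = [p for p in points
             if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]
    front.sort()
    area, prev_y = 0.0, ref[1]
    for x, y in front:
        area += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return area

def integrated_autocorrelation_time(trace, max_lag=None):
    """tau = 1 + 2 * sum_t Gamma(t), truncated at the first non-positive lag;
    Gamma is the normalised autocorrelation of the hypervolume trace."""
    n = len(trace)
    mean = sum(trace) / n
    var = sum((s - mean) ** 2 for s in trace) / n
    if var == 0.0:
        return 1.0   # constant trace: no correlation structure
    tau = 1.0
    for t in range(1, max_lag or n // 2):
        gamma = sum((trace[s] - mean) * (trace[s + t] - mean)
                    for s in range(n - t)) / ((n - t) * var)
        if gamma <= 0:
            break
        tau += 2.0 * gamma
    return tau
```

For the front {(1, 3), (2, 2), (3, 1)} with reference point (4, 4), the dominated area is 6; the dominated point (3, 3) contributes nothing.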
Moreover, due to the large conformation space, a single sampling run may not provide sufficient coverage of the diversified conformations near the Pareto optimal front. We therefore repeat the sampling procedure with different random number sequences until the overall hypervolume indicator of all generated models stops growing. The output models from these sampling runs form the backbone model set.
Figure 2 shows multiple-scoring-function coordinate plots of the model sets for targets 1alc(34:41), 1fnd(262:269), 1ptf(65:71), and 1onc(70:78) obtained by the POS method. All scores are linearly normalized to [0, 1]. As shown in 2(a), 2(b), and 2(c), respectively, the models yielding the lowest scores in DFIRE, Triplet, and Rosetta have the best RMSD values (< 0.5A), while those with the lowest scores in the other two scoring functions deviate far from the native, with RMSD > 2.0A. This strongly indicates that none of DFIRE, Triplet, and Rosetta is a perfect scoring function – each may be effective in some targets while failing in others. More often, as shown in the example in 2(d), the native-like models do not exhibit the minimum score in any one of these scoring functions, but exhibit Pareto optimality in a certain combination of scoring functions. Our computational results on the 502 targets in Jacobson's decoy set show that, in 41 (8.2%) and 22 (4.4%) targets, the Pareto optimal models with the best RMSD values in the POS model set are at least 0.5A and 1.0A better in RMSD, respectively, than the models with the lowest DFIRE, Triplet, or Rosetta scores. Ideally, the POS method intends to capture all structurally diversified conformations on the Pareto optimal front in the function space of the scoring functions.
After sampling using POS, three major steps are applied to the generated backbone model sets to obtain the final all-atom prediction results. First, the backbone model set is clustered with a 0.5A cutoff and one representative candidate is selected for each cluster. Second, the PLOP package8 is used to add side chains to the representative backbone models and then perform local optimization using the OPLS-AA force field with the SGB solvent model to generate the all-atom model set. Finally, we use our recently developed Pareto Optimal Consensus (POC) method27 to rank the all-atom models. Five scoring functions, including all-atom Rosetta, DFIRE, DOPE, OPLS-AA, and Triplet, are used to form the function space in POC. The top-ranked models are selected as the predicted models.
Figure 3 compares the average RMSD of the best models as well as the model set sizes in the POS model set and Jacobson's decoy set. For the short (4~6 res) and medium (7~9 res) targets, the POS model set is on average smaller than Jacobson's, while for the long targets (10~12 res), the average sizes of the two model sets are approximately the same. Comparing model resolution, it is important to notice that the average RMSDs of the best models in the POS model set are lower than those in Jacobson's decoy set, particularly for medium and long targets. If models within 1A of the native are considered native-like, Table 1 lists all targets in which either the POS model set or Jacobson's decoy set does not contain at least one native-like model. In 20 out of 502 targets, Jacobson's decoy set misses a native-like model. In contrast, the model set generated by the POS method has only one miss, in target 1poa(79:83), which demonstrates the effectiveness of Pareto optimality-based sampling integrating multiple scoring functions.
Target 1poa(79:83) has some particularities that make it hard to predict, especially for statistical potentials, which are trained mostly on “regular” structures. Normally, a loop's backbone folds by packing against the rest of the protein. The loop usually has buried hydrophobic residues that help its folding, while the polar residues are exposed to the solvent. By contrast, 1poa(79:83) has no hydrophobic residues, and a polar asparagine residue is partially buried, forming two favorable hydrogen bonds inside the protein (black dashed lines in Figure 4). Also, the loop's backbone is solvated and flexible (with two successive glycine residues). The stability of this structure is aided by two disulfide bridges (yellow bars in Figure 4) formed by cysteine residues situated just near the two ends of the loop, which is also a rare situation.
In addition to the sensitivity problem the scoring functions might have, the 1poa(79:83) case poses a sampling problem. Our sampling approach is hierarchical: good backbone conformations are found first, and the side chains are added and optimized later. In the case of 1poa(79:83), however, the asparagine side chain is the key to the loop's structure formation, and the backbone adapts to the side chain's conformation. As a result, backbone-first sampling approaches have difficulty finding structures in the vicinity of the native.
In addition to the targets listed in Jacobson's decoy set, we also apply the POS method to the set of 13-residue targets listed in Zhu et al.9. Table 2 compares the sampling results. For ease of comparison, in Table 2 we use superimposed RMSD instead of regular RMSD, as Zhu et al. do. The POS method yields significantly lower superimposed RMSD than Zhu et al. In 9 targets, including 1ojq(A167:A179), 1ock(A43:A55), 1bhe(121:133), 1cnv(110:122), 2hlc(A91:A103), 1ako(203:215), 1g8f(A72:A84), 1f46(A64:A76), and 1jp4(A153:A165), POS produces models within 1A resolution while the sampling results of Zhu et al. are 1A or worse.
There are three targets, 1eok(A147:A159), 1hxh(A87:A99), and 1qqp(2_161:2_173), where the POS method is significantly worse than Zhu et al.'s approach. Our further analysis finds that the native loop of 1eok(A147:A159) interacts with two sulfate ions and an asymmetric unit (Figure 5). The native loops of 1hxh(A87:A99) and 1qqp(2_161:2_173) interact deeply with external chains, as shown in Figure 6. Because the scoring functions we use in the POS method make no assumption about interactions with ligands, other asymmetric units, or external chains, the native or native-like loops are not scored well by DFIRE, Triplet, Rosetta, or any combination of these scoring functions. As a result, the native-like conformations in these targets are not present on the Pareto optimal front of the function space, and the POS method is likely to miss conformations close to the native.
To identify the best-quality models within the model set, we use our recently developed Pareto Optimal Consensus (POC) method27 for model ranking. In POC ranking, we evaluate models with all-atom scoring functions including Rosetta, DOPE, DDFIRE, I-TASSER, and OPLS-AA/SGB as well as Triplet, identify the models at the Pareto optimal front, and then rank them based on their fuzzy dominance relationship to the rest of the models in the model set. Table 3 lists the average RMSD of the top-ranked model as well as the best of the top-5-ranked models in the POS model set and Jacobson's decoy set. One can notice that in short, medium, and long loop targets, there is a 0.05A~0.2A shift toward the native in the model set generated by the POS method compared with Jacobson's decoy set. This is because, in quite a few targets such as those listed in Table 1, the POS model set contains models with significantly higher resolution than those in Jacobson's decoy set. These models are identified as top models in the POC ranking process. Correspondingly, the POS method also leads to improved percentages of top-ranked and top-5-ranked models with RMSD less than 1A.
Various methods have been developed to take advantage of multiple scoring functions to improve protein modeling accuracy. One approach is to combine models generated by sampling guided by different scoring functions. For example, Fujitsuka et al.35 mixed models generated separately by SimFold and Rosetta and obtained a higher prediction success rate than either individual prediction.
We use a conceptual scenario illustrating a function space of two scoring functions S1 and S2, shown in Figure 7, to compare POS with sampling of individual scoring functions. Successfully sampling an individual scoring function typically leads to clustered solutions near its corresponding global minimum, as shown in Figure 7. However, some Pareto optimal solutions that compromise between S1 and S2 will likely be ignored by the sampling process due to their higher scores compared to those near the global minimum. In contrast, POS intends to explore not only solutions near the respective global minima of the individual scoring functions, but also the other Pareto optimal solutions in the multiple scoring function space.
A more popular approach is to obtain a consensus scoring function by combining multiple scores via machine learning. To name a few examples, Qiu et al.28 used Support Vector Regression (SVR) to combine various terms into a consensus scoring function for ranking protein models. Gao et al.29 combined independent latent servers by reducing server correlation, which achieved significant accuracy improvement in contact prediction. We here compare POS with a consensus scoring function generated by the weighted-sum approach to show theoretically that certain Pareto optimal solutions may be unreachable by a weighted-sum consensus scoring function.
In the weighted-sum approach, weights are assigned to the various scoring functions and a consensus scoring function is built by linearly combining the weighted score terms. Figure 8 shows a conceptual scenario of a nonconvex scoring function space of two scoring functions S1 and S2. When a set of weights is selected, a contour line is formed, and the minimum of the weighted-sum function corresponds to a solution on the Pareto-optimal front, namely a tangent point of the contour line and the feasible solution space. However, no contour line can produce a tangent point with the feasible solution space in the region BC: before a tangent point is reached in BC, the contour line becomes tangent at another point in AB or CD, which yields a lower weighted-sum value. In other words, in weighted-sum optimization, solutions in AB or CD drive the optimization process away from solutions in BC. Even if a nonlinear combination is used to integrate the score terms, some regions of the Pareto optimal front may still be unreachable. Compared to a consensus scoring function combining multiple scores, the POS method can explore solutions on both concave and convex Pareto optimal fronts, which can potentially lead to broader exploration of the loop conformation space.
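The unreachable-region argument can be checked numerically. In the hypothetical two-score landscape below, the front is concave (points on the unit-circle arc): every weighted sum attains its minimum at one of the two endpoints, so the interior of the front is unreachable by any weight choice, while a plain dominance filter retains the whole front:

```python
import math

# Eleven points on a concave Pareto front: the unit-circle arc s1^2 + s2^2 = 1
# (a hypothetical two-score landscape, both scores minimized).
front = [(math.cos(math.pi / 2 * k / 10), math.sin(math.pi / 2 * k / 10))
         for k in range(11)]

def weighted_sum_argmin(points, w):
    """Index of the point minimizing the consensus score w*s1 + (1 - w)*s2."""
    return min(range(len(points)),
               key=lambda i: w * points[i][0] + (1 - w) * points[i][1])

def nondominated(points):
    """Plain dominance filter: keep points no other point dominates."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

# Sweep all weights: every weighted-sum optimum is one of the two endpoints,
# so the interior of the concave front (region BC in Figure 8) is unreachable,
# while the dominance filter keeps the entire front.
winners = {weighted_sum_argmin(front, w / 20) for w in range(21)}
```

On the arc, w·cos(t) + (1−w)·sin(t) has a single interior maximum, so its minimum over [0, π/2] always falls at an endpoint, which is exactly the tangent-point argument above.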
Nonconvex Pareto optimal fronts are common in practical problems involving multiple scoring functions50. Nonconvexity in the scoring functions and/or in the feasible solution region can lead to the formation of a nonconvex Pareto optimal front36, although it is not a sufficient condition. Proteins are generally thought to have a rough, funnel-like folding energy landscape with many deep local minima52. Thus, scoring functions approximating the protein energy are clearly nonconvex. Moreover, Ramachandran plots suggest that certain torsion angle combinations in protein conformations are infeasible, so the feasible solution regions are often nonconvex51. As a result, the Pareto optimal front formed by multiple scoring functions can, and likely does, have nonconvex regions.
The selection of scoring functions is important in the POS method. We select the scoring functions for loop conformation sampling based on three criteria. First, the selected scoring functions should have reasonable accuracy. A poor scoring function may complicate the sampling process and bring erroneous models into the final solution set, although it will not prevent good models from being included when good scoring functions are present. The three scoring functions we selected in POS, Rosetta, DFIRE, and Triplet, have demonstrated their accuracy in loop modeling, selection, and/or filtering15,20,26,34,37. Second, the selected scoring functions should exhibit low correlation. Incorporating additional, highly correlated scoring functions into the POS scheme contributes little to sampling but increases the overall computation time, because scoring function calculation is usually costly. Table 4 shows the average Intraclass Correlation Coefficient (ICC)55 of pairwise Rosetta, DFIRE, and Triplet scores on the targets in Jacobson's decoy set. One can see that Rosetta-DFIRE has stronger correlation than Rosetta-Triplet and DFIRE-Triplet. This is because Triplet is a scoring function in torsion space and takes no atom-atom interactions into consideration. Moreover, the scoring functions have stronger correlations in short targets than in long ones. Overall, the correlation among these scoring functions is low. It is important to notice that although their overall correlation is low, the selected scoring functions likely agree on good models with reasonable structures by assigning them low scores, due to their reasonably good accuracy. Figure 9 shows an example in 1nwp(A84:A91), where Rosetta and DFIRE have relatively low overall correlation (ICC = 0.165), with inconsistency particularly among poor models, but show strong agreement on native-like models.
Finally, the number of scoring functions should be limited. The number of scoring functions strongly influences the performance of population-based multi-scoring-function optimization/sampling methods53. As the number of scoring functions increases, the number of possible solutions at the Pareto optimal front generally grows exponentially; as a result, sampling the solutions at the Pareto optimal front requires a significantly larger population and more computation time to reach convergence.
The typical computation time for our serial program to model a 12-residue loop, including sampling, local optimization, and selection, ranges from 4 to 20 hours on an Intel Xeon 2.13GHz processor. Given the computational results shown in Section 4, we consider our selection of scoring functions effective.
By sampling diversified conformations in the function space composed of Rosetta, DFIRE, and Triplet, our newly developed POS method is shown to be effective in exploring the loop conformation space. Theoretical analyses also show that sampling solutions near the Pareto optimal front can lead to potentially broader exploration of the loop conformation space than sampling individual scoring functions or a consensus function. Consequently, the POS method produces effective model sets covering near-native conformations for the targets in Jacobson's decoy set as well as those listed in Zhu's paper. This effective sampling directly leads to resolution improvement in the top-ranked and top-5-ranked all-atom models.
One minor disadvantage of the POS method is its computational cost compared to sampling a single scoring function, since multiple scoring functions must be evaluated. Scoring function evaluation is usually the most costly component of a protein modeling program. However, the computations of the multiple scoring functions are independent and can easily be parallelized on modern high-performance computing platforms. More interestingly, a recent study by Handl et al.32 shows that optimizing multiple energy function components can even help reach the global energy minimum in protein structure prediction by reducing both the number of local minima and the local-minima escape time.
Our future research directions include incorporating side chain atoms into sampling and extending our Triplet scoring function by considering χ angles starting from χ1. This may further enhance the accuracy of loop models using the POS method. We are also interested in applying the POS method to other protein modeling applications, such as protein folding, protein-ligand docking, and protein-protein interactions.
We acknowledge support from NIH grants 5PN2EY016570-06 and 5R01NS063405-02 and from NSF grants 0835718, 0829382, and 1066471.