J Chem Phys. Author manuscript; available in PMC 2017 April 21.
PMCID: PMC5087108
NIHMSID: NIHMS819324

Bayesian Calibration of Coarse-Grained Forces: Efficiently Addressing Transferability

Abstract

Generating and calibrating forces that are transferable across a range of state-points remains a challenging task in coarse-grained (CG) molecular dynamics (MD). In this work, we present a coarse-graining workflow, inspired by ideas from uncertainty quantification and numerical analysis, to address this problem. The key idea behind our approach is to introduce a Bayesian correction algorithm that uses functional derivatives of CG simulations to rapidly and inexpensively recalibrate initial estimates f0 of forces anchored by standard methods such as Force-Matching (FM). Taking density-temperature relationships as a running example, we demonstrate that this algorithm, in concert with various interpolation schemes, can be used to efficiently compute physically reasonable force curves on a fine grid of state-points. Importantly, we show that our work flow is robust to several choices available to the modeler, including the interpolation schemes and tools used to construct f0. In a related vein, we also demonstrate that our approach can speed up coarse-graining by reducing the number of atomistic simulations needed as inputs to standard methods for generating CG forces.

I. INTRODUCTION

In recent years, the high-throughput capabilities of molecular dynamics (MD) simulations have attracted the interest of companies involved in materials research and development.1-3 In such settings, these tools are becoming an integral component of the decision-making process, given their ability to rapidly search design spaces for novel systems that warrant further experimental study. While it is generally agreed that this procedure can significantly reduce material development costs, a key theme has repeatedly emerged from practical implementations thereof: in order to be informative for commercial applications, computational models require techniques that can rapidly calibrate inputs and assess uncertainties in simulated predictions.2-4

In the context of coarse-grained (CG) MD, this theme has proven to be especially germane. In particular, high-throughput applications generally require the ability to rapidly customize the level of coarse-graining for arbitrary systems across a range of state points (e.g. temperatures); inexpensive and robust calibration techniques are critical for enabling this degree of flexibility. Often, the key input to be determined is the CG interparticle forcefield f(r, q) as a function of separation r and state-point q, and many techniques have been developed to compute these functions from thermodynamic considerations of atomistic systems.5-13 However, such methods typically draw information from a limited number of state-points, and the resulting forces can yield incorrect predictions for even small changes away from the calibration conditions (see Fig. 1). This has led several authors to consider the following questions:14-25

FIG. 1
Comparison of coarse-grained density-temperature relation (solid line) and atomistic results (symbols) for polystyrene. To generate atomistic data, we simulated a roughly 8000 atom system for 10 ns at anchor temperatures T = 100 K, 200 K, ..., 800 K and averaged ...
  1. Is there a general and inexpensive method for calibrating f(r, q) so that a CG simulation yields the same predictions as a corresponding atomistic simulation on a large interval of state-points?
  2. Can uncertainty quantification (UQ) be used to speed up the CG process?

In this paper, we propose a workflow that addresses these questions through a combination of techniques inspired by Bayesian probability, uncertainty quantification, and numerical analysis. The main steps in this workflow are:

  1. building an analytical, target model T(q) to describe the predictions that the CG simulations should reproduce;
  2. postulating an initial or guess set of forces f0(r, q);
  3. iteratively correcting f0 via a pointwise (in q) but inexpensive Bayesian algorithm that
    1. estimates functional derivatives δS/δf [f0, q] of the CG prediction S[f0, q], and
    2. uses these derivatives to correct f0 by comparing CG simulations to the target.

Importantly, this workflow overcomes the difficulties of generating transferable forces via the low cost of Step III, which can quickly calibrate f0 constructed by non-robust but cheap interpolations in state-space. Moreover, this approach is meant to complement existing methods for coarse-graining insofar as it can use their outputs to inform these initial guesses. In contrast to previous works, our procedure also yields a probability P[f; q] expressing our belief that a given f is a physically plausible and well-calibrated input to the CG simulations.26 By means of this P, we can compute an average (or most-plausible) estimate f(r, q) for which the difference S[f(r, q), q] − T(q) can be rendered statistically insignificant by iteratively applying Step III.

A central idea that facilitates our analysis is a spectral decomposition of the forces via

f(r, q) = \sum_{m=1}^{M} w_m(q)\, \phi_m(r) = \mathbf{w} \cdot \boldsymbol{\phi},
(1)

where ϕm(r) are a set of orthogonal basis functions, wm(q) are q dependent weights, M is a cutoff selected according to the needs of the modeler, and bold letters denote vector representations of the corresponding variables. Several notable works have addressed issues related to development and practical implementation of basis functions ϕ; see, for example, Refs. 13, 27, and 28. Here we do not advocate for one choice or another, but rather emphasize that appropriately selecting ϕ allows one to represent f using a modest number M ≲ O(10) of weights. This, in combination with the low cost of running CG simulations, allows us to numerically approximate enough functional derivatives

\frac{\delta S}{\delta f} \approx \frac{S[\mathbf{w} + \varepsilon\, \Delta\mathbf{w}] - S[\mathbf{w}]}{\varepsilon}, \qquad \varepsilon \ll 1
(2)

(for a given direction Δw) to alter the weights in systematic and meaningful ways.
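To make these two ingredients concrete, the following Python sketch shows one way to implement the spectral decomposition of Eq. (1) with a Chebyshev basis and the finite-difference derivative of Eq. (2). This is an illustration only: the function cg_simulation is a hypothetical stand-in for an actual CG MD run that returns a scalar prediction, and the grid, mode count, and helper names are our own choices rather than the authors'.

```python
import numpy as np

M = 10                                    # number of retained modes
r = np.linspace(0.0, 1.0, 500)            # non-dimensional separations, 0 <= r <= 1
x = 2.0 * r - 1.0                         # map [0, 1] -> [-1, 1] for Chebyshev T_m

def basis(x, M):
    """(M, n_r) matrix of Chebyshev polynomials T_m evaluated on the grid."""
    return np.array([np.polynomial.chebyshev.Chebyshev.basis(m)(x) for m in range(M)])

def decompose(f_tab, x, M):
    """Least-squares weights w such that f(r) ~ sum_m w_m phi_m(r), cf. Eq. (1)."""
    w, *_ = np.linalg.lstsq(basis(x, M).T, f_tab, rcond=None)
    return w

def reconstruct(w, x):
    """Rebuild the force curve from its weights."""
    return basis(x, len(w)).T @ w

def d_prediction_d_mode(cg_simulation, w, j, eps=1e-2):
    """Finite-difference estimate of dS/dw_j along phi_j, cf. Eq. (2)."""
    dw = np.zeros_like(w)
    dw[j] = eps
    return (cg_simulation(w + dw) - cg_simulation(w)) / eps
```

With roughly ten modes, estimating all M derivatives in this way costs only M + 1 CG simulations per state-point.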

As we have suggested, though, a key difficulty repeatedly appears: it is generally too expensive to realize any simulated quantity at a dense set of state-points. The problem is often most pronounced when generating individual f0 by CG methods such as Force Matching (FM), which can take thousands of CPU hours; similar problems arise when estimating functional derivatives for large M. As a result, a frequent and unavoidable task in our analysis is to interpolate weights and derivatives between so-called anchor points in the variable q. Importantly, we desire that these interpolations yield an estimate of their uncertainties in order to avoid overstating our confidence in f(r, q). In what follows, we make extensive use of Gaussian-process regression (GPR). We refer the reader to Refs. 29-31 for excellent reviews of this subject.

The problem of non-transferable force fields has been considered by several authors; see, for example, Refs. 7, 9, 12, 14-18, 21-25, 32-36 and Sec. IV A. In many instances, these works have been successful in computing interparticle potentials that correctly predict material properties across a range of state points. To the best of our knowledge, however, these methods are limited in at least one of the following ways: they (i) cannot be calibrated to arbitrary reference data; (ii) are tailored for specific systems and/or material properties; (iii) lack a systematic correction algorithm to update force curves that yield incorrect predictions. Here our goal is to propose a generic, inexpensive, and robust calibration method whose usefulness does not strongly depend on either the system at hand or the details of coarse-graining. By making uncertainty quantification an integral part of the workflow, we also emphasize the subjective nature of coarse-graining, thereby uncovering decision-making elements of the problem. Within this framework, the CG methods advocated by the aforementioned references can be incorporated to inform such decisions in a systematic way.

We mention a few limitations of our analysis. For one, estimates of f (r, q) only yield CG predictions that are as good as our knowledge of the target properties in the first place. Given that the latter are never known with infinite precision, there is inherent uncertainty in our assessment of what constitutes a calibrated force curve. Moreover, the Bayesian algorithm may not necessarily calibrate f against all properties of interest, even though the procedure is meant to be general in nature. This question is difficult to address insofar as its answer depends on how smoothly the CG predictions change relative to wm. We expand on these and further limitations in the following sections.

The rest of the paper is organized as follows. In Sec. II, we present the main ideas and key equations behind our calibration workflow. We begin by assuming an initial set of force curves and show how corrections are computed. In Sec. III, we consider stability of the correction algorithm to changes in the underlying inputs, e.g. the interpolation methods used in Sec. II. Section IV frames our analysis in the context of past works and more fully considers extensions and limitations of our approach. Throughout, we illustrate aspects of the analysis via a running example of calibrating density-temperature relations for CG polystyrene. Appendices A-D provide relevant information on simulation techniques, FM and the coarse-grained system, building target models, and interpolation procedures.

A. Notation and conventions

We review notation and other conventions used throughout this work.

  • The capital letters N, M, Q, S denote positive integers.
  • The letter q denotes a state-point such as temperature or pressure. In our running examples throughout, we will take q to be the temperature T.
  • Non-bold letters are scalars if no argument or a non-bold (scalar) argument is indicated. Examples include q, w, and w(q).
  • Bold letters q = (q1,…,qQ) and w = (w1,…,wM) = (w1(q),…,wM(q)) = w(q) denote vectors. In particular, q is a vector of state-points, while w(q) = (w1(q),…,wM(q)) denotes the weights in Eq. (1) evaluated at a fixed q.
  • In contrast to normal conventions, we allow a scalar function of a scalar argument to also take a vector argument. In such cases, this denotes the vector whose components are the scalar function evaluated at each element of the vector. That is, w(q) = (w(q1) ,…,w(qQ)) if q = (q1,…,qQ).
  • We use k(q, q′) to denote a scalar function of two variables. If we replace the arguments with Q-dimensional vectors q and q′, the object k(q, q′) is a Q × Q matrix
    k(\mathbf{q}, \mathbf{q}') = \begin{bmatrix} k(q_1, q'_1) & k(q_1, q'_2) & \cdots & k(q_1, q'_Q) \\ k(q_2, q'_1) & k(q_2, q'_2) & \cdots & k(q_2, q'_Q) \\ \vdots & \vdots & \ddots & \vdots \\ k(q_Q, q'_1) & k(q_Q, q'_2) & \cdots & k(q_Q, q'_Q) \end{bmatrix}.
    (3)
    Likewise, when one argument is a scalar q and the other is the vector q, k(q, q) denotes the corresponding row or column vector formed by pairing the scalar q with the elements of the vector q.
  • We indicate elements of a vector v or matrix Σ with the subscripted, non-bold versions vi and Σi,j. When order of operations is otherwise unclear, we use the notation [v]j and [Σ]i,j; see immediately below.
  • The use of a Roman T as a superscript denotes the transpose, not the temperature as an exponent. That is, [ΣT]i,j = Σj,i for matrix Σ.
  • Unless otherwise noted, we distinguish the meaning of indices according to the parent symbol as follows.
    • qk refers to the kth state-point in a collection thereof.
    • wm refers to the mth mode in Eq. (1).
    • Importantly, wn, fn, Pn, and variations thereof refer to the nth updates of (or corrections to) the weights, forces, and probabilities.
    • The double-indexed symbol wm,n(q) denotes the nth correction of the mth mode evaluated at q.
    • Superscripts in braces, such as {i} in w{i}, denote the ith independent, identically distributed (i.i.d.) realization of a random quantity.
  • We sometimes denote a probability over forces using P[f; q]. In this expression, P depends on q but is not normalized with respect to this variable.

II. CALIBRATION WORKFLOW

A. Goals, tasks, and the Bayesian framework: a conceptual overview

Stated imprecisely, our goal is to calibrate f on an interval of state-points by ensuring that it (a) is thermodynamically consistent with an atomistic model, and (b) yields the same predictions as some target T(q), e.g. experimental data or atomistic simulations. Given infinite computational resources, any of the techniques advocated by Refs. 5-9, 12, 15-18, 21-23, and 32-34 determine functions f0 that satisfy these criteria pointwise in q; thus the problem is solved in theory.

In practice, however, several difficulties arise. First, coarse-graining methods generally rely on exhaustive sampling of configurations from fully atomistic simulations. This sampling is expensive, and often one can only realize such f0 at a small set of anchor points qk. Interpolation is sometimes used to overcome this limitation, but such techniques suffer from a second issue: CG simulations may be sensitive to minor changes in f. When so, reasonable interpolations of f0 can yield incorrect predictions at intermediate q. A third problem complicates the picture further: coarse-graining computations are rarely converged, so that non-negligible uncertainties accompany estimates f0. Thus, as we show, even our anchor points may yield incorrect predictions, sometimes by a factor of two or more.

These observations suggest the need to augment a typical calibration procedure with systematic correction and uncertainty quantification (UQ) steps that address the limitations of interpolation. To achieve this, we begin by invoking a Bayesian perspective wherein we model our belief that a given f is the best choice to satisfy the criteria (a) and (b) above. Operationally, we ask: what is the probability P[f; q] that f is well-calibrated at state-point q? We caution that this probability should not be understood in the frequentist sense; i.e. there is no notion of a random draw from a set of optimal force curves. Rather, P quantifies belief in a way that allows for analytical and systematic treatment.37 This is useful insofar as we can encode information about our initial estimates f0 (computed, e.g. from coarse-graining or interpolation) and their uncertainties in a prior probability P0[f; q], which we update by comparing CG simulations with the target model. In distinctly Bayesian language, we compute a posterior P1[f; q] via a relation of the form

P_{n+1}[f; q] \propto P[T(q) \mid f]\, P_n[f; q] = L(f \mid \delta, q)\, P_n[f; q]
(4)

where P[T (q)|f] = L(f|δ, q) is the probability that f yields the target predictions T (q), or equivalently a likelihood function L expressing our belief that f is well calibrated, given that its CG predictions differ from the target by an amount δ(q). Importantly, this procedure can be iterated indefinitely by treating the posterior as a new prior and re-updating according to Eq. (4). See Fig. 2 for a flowchart of this correction loop. Within this loop, it is straightforward to extract an average fn from Pn, which can be input into the CG simulations when a specific realization is needed. Henceforth, we refer to any initial estimate of forces entering the Bayesian correction loop as f0, irrespective of how it is computed.
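As an illustration only, the loop below sketches how the update in Eq. (4) might be iterated at a single state-point, anticipating the closed-form posterior mean derived in Sec. II D for the special case of a Gaussian prior and well-characterized sensitivities (D = 0). The callable run_cg and the inputs target and sigma_T are hypothetical placeholders for a CG simulation, the target model T(q), and its uncertainty.

```python
import numpy as np

def correction_loop(w0, Sigma, run_cg, target, sigma_T, n_iter=2, eps=1e-2):
    """Iterate the Bayesian update of Eq. (4) at a single state-point q."""
    w = np.array(w0, dtype=float)
    for _ in range(n_iter):
        S0 = run_cg(w)                                  # CG prediction S[w, q]
        delta = target - S0                             # mismatch with the target model
        # finite-difference sensitivities s_j = dS/dw_j, cf. Eq. (15)
        s = np.array([(run_cg(w + eps * e) - S0) / eps for e in np.eye(len(w))])
        # posterior mean for D = 0 (Sec. II D); this anticipates the Eq. (22) update
        w = w + delta * (Sigma @ s) / (sigma_T**2 + s @ Sigma @ s)
    return w
```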

FIG. 2
Process (solid black arrow) and information (dotted red arrow) flow in the Bayesian correction algorithm, corresponding to Step III outlined in Sec. I; see also Fig. 3. Symbol colors are used as follows: green denotes atomistic simulations, analysis thereof, ...

In order to generate transferable forces, we propose to use this correction loop twice as follows (see also Secs. II B – II D). First, assume that T (q) has been constructed. Second, use some preferred CG method (such as FM) to estimate f0 at a small set of anchor points qk and correct these estimates according to the algorithm in Fig. 2. Third, interpolate the resulting force curves in state-space between the anchors, and again apply the correction algorithm to update forces pointwise. Note that the only significant difference between the correction loops amounts to how we compute an initial estimate f0; see Fig. 3. We correct the anchor points first in order to render the interpolations more stable.

FIG. 3
Process (solid black arrow) and information (dotted red arrow) flow associated with our calibration work flow. Rectangles encompassing many symbols group the corresponding work flow components by the section where they are discussed. Symbol colors have ...

In Secs. II B–II D, we discuss in more detail the mathematics needed to use Eq. (4). Specifically, we outline the procedure by which the ingredients P0, L, and P1 can be constructed for one iteration of the Bayesian correction and show results for the workflows in Figs. 2 and 3. Before continuing, however, a few comments are in order.

First, we can draw a close analogy between Eq. (4) and other well known optimization techniques. Specifically, in Sec. II C we will adopt the perspective that L is the probabilistic counterpart of a Newton's method update equation of the form “f_{n+1} − f_n = −(S[f_n] − T)/S′[f_n],” where, informally, S′ is a derivative with respect to f and the quotes emphasize that this equation is not expressed rigorously. As Newton's method solves g(x0 + Δx) = 0 (for arbitrary g) by computing corrections Δx to an initial guess x0, similarly L weights changes Δf according to the likelihood that S[f0 + Δf, q] − T(q) = 0. In perfect correspondence, P0 is the probabilistic counterpart of an initial guess, and the product P1 ∝ L P0 ensures that the posterior P1 is (loosely speaking) localized close to the corresponding f0. In Sec. II D, we make these ideas more precise by showing that, for specific choices of priors and likelihoods, the mean f1 of P1 is the solution to a constrained optimization problem whose metric ||f − f0|| is weighted according to the uncertainty in f0.

Second, we assumed that an analytical or surrogate model T (q) is already known. While the construction of such models is non-trivial, many well-known techniques have been developed to accomplish similar tasks; moreover, the formulation of T (q) is often problem dependent. For these reasons, we leave the discussion of surrogate building for Appendix C. We note, however, that it is useful to express T in the form

T(q) = \bar{T}(q) + \eta_T(q),
(5)

where T̄ is the average value of T and ηT(q) is a random process accounting for uncertainty through its variance σ_T² = Var[ηT(q)]. Figure 4 provides an example of what Eq. (5) could look like for density-versus-temperature data. The yellow stripe indicates our level of uncertainty in the black curve, which is fit to atomistic simulation data (symbols in Fig. 4). For the purposes of calibrating f, any force curve that yields CG predictions within the yellow stripe is statistically consistent with T, and thus acceptable. This highlights the usefulness of adopting the probabilistic framework inherent in Eq. (4): we avoid overcorrecting our force curves to match predictions that may be unreliable or have large uncertainties.

FIG. 4
Example of a surrogate model T(q) = ρa(T) for atomistic density-temperature relations ρa(T). The symbols denote atomistic simulation results, while the black curve is the hyperbola fit to the data. The inset shows the underlying probability ...

Third, when the total number of modes M is finite, it is convenient to express P[f; q] = P[w; q] in terms of a vector w of weights. The equivalence of these expressions follows from Eq. (1), since a force curve can be reconstructed from knowledge of the wm(q) alone. In general, it is desirable that M be as small as possible to reduce the work associated with computing P. In the examples that follow, a Chebyshev polynomial basis set allows us to truncate Eq. (1) at values of M ≲ O(10) when modeling quantities such as the density.

Fourth, we have noted that Eq. (4) operates pointwise in q; that is, the Bayesian update is agnostic to whether P0 is constructed from CG forces f0 or interpolations. For this reason, we combine the discussions of interpolation and building priors in the next section. Gaussian processes, in particular, provide a natural means of accomplishing this task.

B. Building a prior

From a mathematical perspective, we find it useful to view a non-ideal (or finite-resource) CG algorithm as generating i.i.d. realizations from the statistical model

f_0(r, q) = \bar{f}_0(r, q) + \eta_f(r, q),
(6)

where f̄0(r, q) is a deterministic force curve coming from an ideal (or infinite-resource) CG algorithm, and ηf(r, q) is a zero-mean random process. In the context of FM used in our running examples, ηf accounts for the (sometimes neglected) fact that the f0 are constructed from atomistic simulations that are finite in both size and time. As a result, considerable thermal noise propagates into the coarse-graining computations, so that realizations f0{i} differ from one another; see the last paragraph of Sec. IV A (as well as Appendix B) for a discussion of why this uncertainty matters. We expect that related arguments justify Eq. (6) as a model for the output of other CG methods such as iterative Boltzmann inversion (IBI), etc.38

Assuming orthogonality of the basis functions, we formally decompose f0(r, q) according to Eq. (1) to yield a set of random weights w0(q) whose average values E[w0(q)] = w0(q) reconstruct f0(r, q). We also assume that for fixed q, the functional form of each f0{i}(r,q) is given on the entire domain 0 ≤ r ≤ 1, where r is a non-dimensionalized separation, and r = 1 is a cutoff distance beyond which particles do not interact;39 see Appendix A for details on non-dimensionalization.

In order to generate P0 for the first correction loop (cf. Fig. 3), we generate N ≥ 1 realizations of f0{i} at anchor points qk in a vector q = (q1, q2, …, qQ) of Q state-points. Given these, we construct the sample mean

\bar{\mathbf{w}}_0(q_k) = \frac{1}{N} \sum_{i=1}^{N} \mathbf{w}_0^{\{i\}}(q_k)
(7)

(which is random!) and, when we can compute it, the sample covariance,

\mathrm{Cov}[w_{m,0}(q_k), w_{m',0}(q_k)] \approx \frac{1}{N(N-1)} \sum_{i=1}^{N} \bigl[ w_{m,0}^{\{i\}}(q_k) - \bar{w}_{m,0}(q_k) \bigr] \bigl[ w_{m',0}^{\{i\}}(q_k) - \bar{w}_{m',0}(q_k) \bigr],
(8)

which quantifies correlations between modes and our uncertainty in w0 when approximating this quantity with w̄0. Given Eqs. (7) and (8), we then postulate that a reasonable prior at each anchor point is

P_0[\mathbf{w}; q_k] = \frac{1}{\sqrt{(2\pi)^M |C|}} \exp\!\left[ -\frac{[\mathbf{w} - \bar{\mathbf{w}}_0(q_k)]^{\mathrm{T}} C^{-1} [\mathbf{w} - \bar{\mathbf{w}}_0(q_k)]}{2} \right],
(9)

where C = Cov[w0(qk), w0(qk)] and the shorthand w̄0 = w̄0(qk) is defined for later convenience.41 Note that Eq. (9) is just the Gaussian probability density with a mean and covariance computed from Eqs. (7) and (8).
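A minimal sketch of this prior construction, Eqs. (7)-(9), assuming the N force-curve realizations at an anchor point have already been decomposed into an (N, M) array of weights (hypothetical input):

```python
import numpy as np

def anchor_prior(w0_samples):
    """w0_samples: (N, M) array of weight realizations at one anchor point q_k."""
    N = w0_samples.shape[0]
    w_bar = w0_samples.mean(axis=0)                     # sample mean, Eq. (7)
    dev = w0_samples - w_bar
    C = dev.T @ dev / (N * (N - 1))                     # covariance of the mean, Eq. (8)
    return w_bar, C

def log_prior(w, w_bar, C):
    """Log of the Gaussian prior in Eq. (9), up to its normalization constant."""
    dw = w - w_bar
    return -0.5 * dw @ np.linalg.solve(C, dw)
```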

In order to generate P0 for input into the second correction loop of Fig. 3, we must interpolate the weights in q. In general, we would like for P0 to account for additional uncertainty that arises when q is far from any of the anchor points qk. To achieve this, we use GPR, a common tool in the machine learning and uncertainty quantification communities;29-31 cf. App. D for a brief review of the key ideas. What we wish to emphasize is that GPR outputs multivariate Gaussian probabilities of the form

P_0[\mathbf{w}; q] = \frac{1}{\sqrt{(2\pi)^M |\Sigma|}} \exp\!\left\{ -\frac{[\mathbf{w} - \bar{\mathbf{w}}_0(q)]^{\mathrm{T}} \Sigma^{-1} [\mathbf{w} - \bar{\mathbf{w}}_0(q)]}{2} \right\}
(10)

where Σ is a covariance matrix, |Σ| is the corresponding determinant, and the w0(q) are linear combinations of input weights (typically output by the first correction loop) at the anchor points; see also Eqs. (D5) and (D6) in Appendix D. In both Eqs. (9) and (10), the motivation for using Gaussian probabilities arises from the observation that realizations f0{i} are themselves averages constructed from many i.i.d. samples of a thermodynamic ensemble. As such, it is reasonable to assume that a Central Limit Theorem (and hence a normal distribution) accurately describes the fluctuations of f0{i} about their mean.

In the special case that the weights wm(q) are uncorrelated, the covariance matrix becomes diagonal (i.e. Σ_{m,n} = λ_m² δ_{m,n}), and Eq. (10) takes the simple form

P0[w;q]=m=1MP[wm;q],
(11)
P[w_m; q] = \frac{1}{\sqrt{2\pi \lambda_m^2(q)}} \exp\!\left\{ -\frac{(w_m - \bar{w}_{m,0}(q))^2}{2 \lambda_m^2(q)} \right\}.
(12)

While Eq. (11) neglects dependency between weights, we show in Sec. II D that this example provides an important connection between Eq. (4) and other well known optimization techniques.

In Fig. 5, we illustrate priors calculated according to Eq. (12) for an initial set of force curves f0{i} that were generated using the FM method. For eight temperatures q = (100 K, 200 K, …, 800 K), we generated 100 realizations f0{i}(r), which were used to construct sample means and variances according to Eqs. (7) and (8); see the top sub-plots, where ϕ are the Chebyshev polynomials and the number of modes M = 10 (only 7 are shown). The bottom-left sub-plot shows estimates w̄0 and their uncertainties according to Eq. (12), where we used the output of the first correction loop (iterated twice) as inputs to the GPR. In the bottom-right sub-plot, we reconstruct the mean force curves from these average weights according to Eq. (1). See Appendix D for more details on how these computations were performed.

FIG. 5
Raw and interpolated weights, along with corresponding uncertainties and force curves. Upper left: average weights w0(T ) at temperatures T = 100 K, 200 K, ..., 800 K and computed according to Eq. (7). The dashed lines are included to ...

We end this section with the caveat that, in general, there are many ways to construct reasonable priors. In essence, the task amounts to one of modeling, and this science is well developed; see, for example, Chapter 3 of Ref. 42. Our analysis is meant to be exploratory, and it is likely that more advanced interpolation methods based on statistical mechanics (e.g. as in Ref. 15) could lead to more suitable priors than we derive here. Nonetheless, we believe the usefulness of our approach here is borne out by the results of the following sections and omit more in-depth studies as being beyond the scope of this work.

C. Sensitivity analysis and the likelihood function

The key idea underlying the construction of our likelihood function is to invoke a linear correction equation based on Taylor's theorem and Newton's method. To motivate this formalism, fix q, denote an initial set of weights as w0, and temporarily assume that T (q) is deterministic.43 Provided T (q) ≠ S [w0, q], we can estimate new weights w via an equation of the form

T(q) - S[\bar{\mathbf{w}}_0, q] = (\mathbf{w} - \bar{\mathbf{w}}_0) \cdot \nabla_{\mathbf{w}} S[\mathbf{w}, q] \big|_{\mathbf{w} = \bar{\mathbf{w}}_0} + O(|\mathbf{w} - \bar{\mathbf{w}}_0|^2)
(13)

where ∇_w S[w, q]|_{w = w̄0} is the gradient of the CG simulations with respect to the weights at w̄0, and terms O(|w − w̄0|²) are assumed to be negligible.44 Equation (13) is equivalent to requiring that S[w, q] = T(q) under a linear approximation of the CG simulations.

In order to make use of Eq. (13), we must estimate the functional derivatives ∇_w S[w, q], which we also refer to as the linear sensitivities. Let s = (s1, s2, …, sD)^T denote the vector of such sensitivities at fixed q. We formally define these quantities via

s_j(f, q) = \lim_{\varepsilon \to 0} \frac{S[f(r) + \varepsilon \phi_j(r), q] - S[f(r), q]}{\varepsilon}.
(14)

In general, it is impossible to evaluate this limit analytically, so we resort to approximating

s_j(f, q) \approx \frac{S[f(r) + \varepsilon \phi_j(r), q] - S[f(r), q]}{\varepsilon},
(15)

for ε ≪ 1. In practice, evaluating Eq. (15) simply amounts to running CG simulations with the inputs f(r) + εϕj(r) and computing the relevant finite differences. Keeping M = O(10) modes, we are able to estimate the functional derivatives at a modestly large number O(10) of anchor points qk ∈ [qmin, qmax] using a few hundred CPU hours and ε = 0.01. See Fig. 6 for sensitivities of density-versus-temperature simulations.
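In practice this amounts to perturbing the tabulated force along each basis function and re-running the CG simulation, roughly as in the sketch below; run_cg_with_table is a hypothetical wrapper around the MD engine and is not part of the authors' code.

```python
import numpy as np

def sensitivities(f_tab, Phi, run_cg_with_table, eps=0.01):
    """f_tab: tabulated force on the r-grid; Phi: (M, n_r) basis functions on that grid."""
    S0 = run_cg_with_table(f_tab)                        # unperturbed prediction S[f, q]
    s = np.empty(Phi.shape[0])
    for j, phi_j in enumerate(Phi):
        S_pert = run_cg_with_table(f_tab + eps * phi_j)  # simulate with f + eps*phi_j
        s[j] = (S_pert - S0) / eps                       # finite difference, Eq. (15)
    return s
```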

FIG. 6
Raw sensitivities and interpolations thereof. Top plot: Sensitivities for 8 temperatures as a function of mode number. The dotted lines are meant to aid the eye in seeing trends; they do not indicate values of sensitivities between integer modes. Note ...

When computing sensitivities for the second correction loop of Fig. 3, it may be necessary to interpolate their values between a grid of anchor points. We omit discussion of this interpolation, as it can be done by the same steps that lead to the probabilistic models of w0 and T; see Appendix D. The bottom sub-plot of Figure 6 shows an example of interpolated sensitivities for our running example of density-versus-temperature relations. Note that while we use the same anchor points as in Sec. II B, this is done entirely for convenience. In principle any set of anchors is acceptable so long as they are chosen where one has estimates of w0. As in the case of T (q), it is useful to express the sensitivities via an expression of the form

\mathbf{s}(q) = \bar{\mathbf{s}}(q) + D\, \boldsymbol{\eta}_s(q),
(16)

where ηs(q) = (η1(q), η2(q), …, ηS(q))^T is a vector of i.i.d. Gaussian random variables, S is the number of anchor points, and D is an S × S matrix (determined during GPR; cf. Appendix D) that describes correlations between the sensitivities. The probability density for s(q) is identical to Eq. (10), with w̄0(q) replaced by s̄(q) and Σ replaced by DD^T. In Fig. 6, we take D to be diagonal with respect to mode number and assume that the covariance between state-points is given by Eq. (D2) when used in the second correction loop.

In order to construct a likelihood function, we first re-write Eq. (13) by replacing deterministic quantities with their probabilistic counterparts. Defining Δw0 := w − w̄0, we find that up to first order, corrections must satisfy

\delta - \Delta\mathbf{w}_0 \cdot \bar{\mathbf{s}}(q) = -\eta_T(q) + \Delta\mathbf{w}_0 \cdot D\, \boldsymbol{\eta}_s(q),
(17)

where δ = T(q) − S[w̄0, q]; cf. Eq. (5). The above expression states that we can effect a change δ in the CG predictions via Δw0, but subject to the fact that our target T and sensitivities are known imprecisely. We identify a likelihood function by noting that the left-hand side of Eq. (17) is normally distributed with mean zero and variance σ_T² + (Δw0 · D)²; viz.

L[\Delta\mathbf{w}_0 \mid \delta] := P[\delta \mid \Delta\mathbf{w}_0] = \frac{1}{\sqrt{2\pi[\sigma_T^2 + (\Delta\mathbf{w}_0 \cdot D)^2]}} \exp\!\left[ -\frac{(\delta - \Delta\mathbf{w}_0 \cdot \bar{\mathbf{s}})^2}{2[\sigma_T^2 + (\Delta\mathbf{w}_0 \cdot D)^2]} \right],
(18)

where we equate the likelihood with P[δ |Δw0], the probability of effecting a change δ in the CG predictions conditioned on Δw0.
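For reference, a sketch of the (log) likelihood in Eq. (18); following Eq. (17), the shorthand (Δw0·D)² is interpreted here as Δw0ᵀ D Dᵀ Δw0, which is the variance contributed by the sensitivity uncertainty. Variable names are illustrative.

```python
import numpy as np

def log_likelihood(dw, delta, s_bar, D, sigma_T2):
    """dw = w - w0_bar; delta = T(q) - S[w0_bar, q]; s_bar = mean sensitivities."""
    var = sigma_T2 + dw @ (D @ D.T) @ dw     # variance of delta - dw . s_bar, cf. Eq. (17)
    resid = delta - dw @ s_bar
    return -0.5 * np.log(2.0 * np.pi * var) - 0.5 * resid**2 / var
```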

D. Bayesian update: construction and interpretation

Given the prior probability in Eq. (10) and likelihood function in Eq. (18), Bayes' theorem yields

P_1[\mathbf{w}; q] \propto P_0[\mathbf{w}; q]\, L[\Delta\mathbf{w}_0(q)] \propto \frac{1}{\sqrt{2\pi[\sigma_T^2 + (\Delta\mathbf{w} \cdot D)^2]}} \exp\!\left\{ -\frac{1}{2} \left[ \frac{(\delta - \Delta\mathbf{w} \cdot \bar{\mathbf{s}})^2}{\sigma_T^2 + (\Delta\mathbf{w} \cdot D)^2} + \Delta\mathbf{w}^{\mathrm{T}} \Sigma^{-1} \Delta\mathbf{w} \right] \right\}
(19)

where a constant of proportionality is required to ensure that P1 is normalized. Several comments are in order. For one, δ, D, and σT are all functions of q. Second, the general form of Eq. (19) makes it difficult to analytically compute the mean ∫ w P1[w] dw and associated covariances, which are likely to be of most interest to modelers. Nonetheless, it is still possible to understand key properties of P1, which can lead to simplifications and/or aid in numerical integration. In particular, P1 becomes large when both the square root and exponent are close to zero. Minimizing the former entails orienting Δw0 in directions for which the sensitivities are well characterized (i.e. the corresponding eigenvalues of D are small). Maximizing the exponent (which is always non-positive) requires that |Δw0| be small relative to the eigenvalues of Σ and oriented in directions of high sensitivity. From these observations, we conclude that P1 is a compromise seeking the smallest change from the original w̄0 while maximizing the effect on S.

In practical applications, the low cost of CG simulations may allow one to characterize the sensitivities s sufficiently well so that we can approximate D = 0. In this case, the posterior takes the simple form of a Gaussian

P_1[\mathbf{w}] \propto \exp\!\left[ -\tfrac{1}{2} (\mathbf{w} - \bar{\mathbf{w}}_1)^{\mathrm{T}} \Lambda (\mathbf{w} - \bar{\mathbf{w}}_1) \right]
(20)

\Lambda_{i,j} = [\Sigma^{-1}]_{i,j} + \frac{s_i s_j}{\sigma_T^2}
(21)

\bar{\mathbf{w}}_1 = \bar{\mathbf{w}}_0 + \frac{\delta}{\sigma_T^2} \Lambda^{-1} \mathbf{s}, \qquad \bar{w}_{1,i} = \bar{w}_{0,i} + \frac{\delta\, [\Sigma \mathbf{s}]_i}{\sigma_T^2 + \mathbf{s}^{\mathrm{T}} \Sigma \mathbf{s}},
(22)

where the w̄1,i are computed with the aid of the formula

[\Lambda^{-1}]_{i,j} = \Sigma_{i,j} - \frac{[\Sigma \mathbf{s}]_i [\Sigma \mathbf{s}]_j}{\sigma_T^2 + \mathbf{s}^{\mathrm{T}} \Sigma \mathbf{s}},
(23)

due to Miller.45 The mean weights w̄1 should yield improved predictions when used in place of w̄0 in the CG simulations.
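The D = 0 update can be written compactly; the sketch below returns both the posterior mean of Eq. (22) and the posterior covariance Λ⁻¹ of Eq. (23). Variable names are illustrative, and the inputs correspond to the quantities defined above at a fixed state-point.

```python
import numpy as np

def posterior_update(w0_bar, Sigma, s, delta, sigma_T2):
    """Posterior mean (Eq. 22) and covariance (Eq. 23) when D = 0."""
    Ss = Sigma @ s
    denom = sigma_T2 + s @ Ss
    w1_bar = w0_bar + delta * Ss / denom                # Eq. (22)
    Lambda_inv = Sigma - np.outer(Ss, Ss) / denom       # Miller's formula, Eq. (23)
    return w1_bar, Lambda_inv
```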

We highlight several features of Eq. (22). For one, using the prior in Eq. (12) reduces w̄1 to the much simpler

\bar{w}_{1,i} - \bar{w}_{0,i} = \frac{\delta\, s_i \lambda_i^2}{\sigma_T^2 + (\mathbf{s} \cdot \boldsymbol{\lambda})^2}.
(24)

If we further suppose that a single sensitivity si or uncertainty λi² is much greater than all other terms, then the corresponding difference w̄1,i − w̄0,i dominates the vector w̄1 − w̄0. In these limits, w̄1 modifies only (i) the most sensitive element of w̄0 for large si, or (ii) the most uncertain element if λi² dominates. Furthermore, it is possible to show that in the limit σ_T² → 0, w̄1 is the solution to the constrained optimization problem wherein we seek to minimize the norm

L_{2,\lambda}(\mathbf{w} - \bar{\mathbf{w}}_0) = \sum_i \frac{([\mathbf{w} - \bar{\mathbf{w}}_0]_i)^2}{\lambda_i^2}
(25)

subject to the constraint that Δw0 · s = δ. Equation (25) is simply the Euclidean norm of the vector Δw0 with each term reduced according to the uncertainty with which we know w0 in the prior P0.

When σT2 is finite, Eq. (24) also optimizes Eq. (25) subject to the modified constraint

\Delta\mathbf{w}_0 \cdot \mathbf{s} = \frac{\delta\, (\mathbf{s} \cdot \boldsymbol{\lambda})^2}{\sigma_T^2 + (\mathbf{s} \cdot \boldsymbol{\lambda})^2} \le \delta.
(26)

Again, limiting cases bring clarity to this equation. In particular, when σ_T² ≫ (s·λ)², one finds that Δw0 ≈ 0; i.e. the uncertainty in the atomistic results is too high to justify forcing the CG simulations to agree with the surrogate. On the other hand, when σ_T² ≪ (s·λ)², the sensitivities and/or uncertainties λ are so large that we incur little inconsistency with the prior by requiring exact equality between the CG predictions and their mean atomistic counterparts.

Figure 7 shows the results of correcting force curves for our running example of density-temperature data first discussed in Figs. 5 and 6. We used our training data at q = (100 K, 200 K, …, 800 K) to estimate the mean weights and their uncertainty via Eqs. (7) and (8), which are inputs to the first correction loop in Fig. 3. Priors were computed according to Eq. (9), and sensitivities were recomputed for each iteration. After two passes through this loop, we interpolated the posterior mean forces at 10 K intervals and used these as inputs to the second correction loop in Fig. 3. We assumed independence of the modes to generate an interpolated prior according to Eq. (11) and set μ0 = 0 and p = 2, which are inputs to the GPR [cf. Eqs. (D1) and (D2) in Appendix D]. Approximating D = 0, we were able to generate well-calibrated forces after only one iteration through the second correction loop.

FIG. 7
Illustration of the workflow according to Fig. 3. To generate this plot, we proceeded as follows. First, we generated prior force curves according to Eq. (9) at 8 anchor temperatures. Then, we applied Eq. (4) to update the anchor points individually in ...

III. DEPENDENCE ON UNDERLYING ASSUMPTIONS AND ACCELERATED COARSE-GRAINING

As is frequently the case with numerical optimization recipes, the Bayesian correction algorithm requires certain assumptions and inputs for practical implementation, e.g. hyper-parameters (i.e. inputs to GPR) and prior probabilities. As many of these quantities are not typically encountered in the context of coarse-graining, we explore the dependence of our method on its underlying assumptions (Secs. III A and III B). A key theme repeatedly appears: knowledge of the target data is more important than details of the prior. In this vein, we also demonstrate how Eq. (4) can accelerate coarse-graining by correcting initial force curves that are constructed by force matching undersampled atomistic simulations (Sec. III C).

A. Dependence on basis set

Formally speaking, the posterior probability P1[f ; q] that we recover from Eq. (4) should be independent of the basis set ϕ, provided (i) the latter is complete (in a mathematical sense), (ii) M → ∞, and (iii) the linear approximation in Eq. (13) is valid. From a practical standpoint, however, P1 will depend on the size of M and how well a given basis set captures relevant features of the CG forces. While it is beyond the scope of this work to present general statements about convergence, we do demonstrate that calibrating an initial set of force curves with different ϕ can lead to approximately the same CG predictions and posterior mean.

Here we consider two additional bases: Bessel functions with M = 15 modes and Legendre polynomials with M = 10 modes. For each set, we use our training force curves at q = (100 K, 200 K, …, 800 K) and update the modes according to the procedure underlying Figs. 3 and 7; see Sec. II D. In Fig. 8, we show the results of these computations. In both cases, we only require a few updates of the anchor points, followed by one update of the interpolated curves, to yield good agreement with the target data. Thus for the three basis sets considered (including Chebyshev polynomials in Fig. 7), the algorithm converges with roughly the same amount of computational work.

FIG. 8
CG predictions according to different basis sets and calibrated using the work flow in Fig. 3. Bessel function results are indicated by × and dashed lines; Legendre polynomials are indicated by squares and solid lines. Note that despite minor ...

B. Dependence on interpolation method

In this section, we verify that corrections according to Eq. (4) are relatively stable to changes in the hyperparameters (i.e. underlying assumptions) of the GPR. We fix {ϕm} to be the Chebyshev polynomials, set M = 10, and consider four additional interpolation schemes corresponding to the following hyperparameter choices (see also Appendix D):

  • μ0 = 0 and p = 1,
  • μ0 = 0 and p = 1.5,
  • μ0 = bT + c and p = 2,
  • μ0 = aT2 + bT + c and p = 2,

where p is the exponent appearing in Eq. (D2), μ0 is a prior assumption of the functional form of each weight as a function of temperature, and a, b, and c are free parameters given by a least squares fit to the posterior mean weights output by the first correction loop of Fig. 3.

Given these interpolation methods, we then perform the same steps as in the previous section and Sec. II D, interpolating weights as a function of T, computing sensitivities and updating force curves at 10 K increments. As before, we approximate D = 0. Figure 9 shows the results of this process. While each interpolation scheme yields somewhat different predictions, the interpolated forces all converge to acceptable targets within a few updates.

FIG. 9
CG predictions arising from different interpolations of the Chebyshev weights. The anchor points were computed in the same way as Fig. 7 and are not shown. The main figure shows densities computed from interpolations of the posterior mean weights output ...

In this figure, it is interesting to note that the linear interpolation (μ0 = bT +c) behaves far worse than the others. While it is not clear why this is so, our work flow nonetheless succeeds in calibrating the underlying forces. This example points to the difficulty of interpolation and need for robust work flows. In such cases, a reasonable approach is to accept the ad hoc nature of a given choice and attempt to systematically account for and correct its effects.

C. Accelerated coarse-graining

In the previous section, we considered how arbitrary changes in the prior affect the correction algorithm. Here we examine more closely the special case in which we change P0 through the number N of force curves realized at each qk. Specifically, we demonstrate that it is possible to find a reasonable f by constructing priors with one tenth of the atomistic simulation resources used in the FM algorithm.

Figure 10 illustrates the results of this task. To generate this data, we followed the steps leading to Fig. 7, but using one tenth of the i.i.d. force curves f0{i} available at each of the anchor temperatures when constructing the prior. The target data was kept the same. To arrive at convergence, we require more iterations of the update procedure applied to the interpolated forces. However, given the computational cost associated with running atomistic simulations, significantly fewer resources are required to calibrate f as compared to Fig. 7. In particular, each realization of f0{i} takes roughly 40 CPU hours to generate, while each CG simulation takes ≈ 10 CPU hours. Thus, Fig. 7 required about 32,000 CPU hours to generate all the f0{i} and roughly 4,500 CPU hours to run CG simulations, for a total of 36,500 CPU hours.46 In contrast, Fig. 10 required 3,200 CPU hours to generate the f0{i} and 8,100 CPU hours for CG simulations, for a total of about 11,300 CPU hours, less than a third of the cost for Fig. 7.

FIG. 10
Correction algorithm applied to a prior P0 constructed from one tenth of the force curves in Fig. 7. Although we must run more CG simulations, the algorithm converges to forces that yield a reasonable density-temperature curve.

IV. DISCUSSION

A. Comparison with past works

Given the usefulness of CG simulations, it is not surprising that a vast amount of literature has been devoted to the subject of constructing and calibrating coarse-grained force fields; see, e.g. Refs. 7, 10, 11, 15-18, 20, 22, 32, 33, and 47. To the best of our knowledge, however, none of these works implement uncertainty quantification as a widespread and integral part of their analysis. Here we wish to emphasize that UQ can play a central role in these analyses by (i) explicitly revealing the subjective aspects of calibration, (ii) recasting the latter as a decision-making problem, and (iii) providing systematic and robust methods for choosing calibrated forces.

To be more specific, the Bayesian framework illustrates that infinitely many plausible force curves yield the target predictions. This degeneracy (in a mathematical sense) is perhaps already foreshadowed by the variety of CG methods that have been successfully developed: empirical Lennard-Jones (LJ) potentials,11,18,47 iterative Boltzmann inversion (IBI),10,16,17 FM,5,6 etc. But even within the context of a single method, statistical under-sampling, numerical error, and uncertainty in target data suggest that it is difficult, if not impossible, to identify a unique, optimal force for a given coarse-graining procedure. Thus, any method for determining force curves involves a subjective choice by the modeler at some level.

Implementation of UQ throughout a CG procedure is useful in that it assesses the extent to which certain choices are more reasonable than others. This is illustrated by Eq. (25), wherein updates Δw are computed by minimizing the norm

L_{2,\lambda}(\Delta\mathbf{w}) = \sum_i \frac{([\Delta\mathbf{w}]_i)^2}{\lambda_i^2}.
(27)

Within the Bayesian framework, changing modes with small uncertainties would be heavily penalized, and hence implausible. We may interpret this as a compatibility statement: if a given weight output by the CG algorithm is very certain, a force curve with a very different value of this weight is inconsistent with the underlying assumptions or physics of coarse-graining. Thus, the simplest method of calibration (e.g. changing a single mode) could potentially be the least plausible.

Along related lines, our correction algorithm is robust, which is of particular importance when generating transferable potentials; see, e.g. Refs. 10, 16-18. These works showed that IBI typically outputs non-transferable forces, and other authors found that individually tailored methods (e.g. empirical potentials or specific interpolation methods) were required to overcome this limitation.11,15,18,20,22,47 Figure 6 illustrates a possible culprit: large sensitivities can amplify small uncertainties in a given mode into wildly incorrect CG predictions. To highlight this issue, consider that for the Chebyshev basis at 800 K, an uncertainty of ±2 GN/mol in the first mode (approximately 2.5% of the weight, corresponding to a vertical translation of the force) leads to a variation of roughly ±0.25 g/cm3 (about ±30%!) in the density.48 Thus, small differences in reasonable interpolation methods can still lead to vastly different CG predictions. By updating forces according to Eq. (4), we both (i) acknowledge this problem, while (ii) providing a systematic method for overcoming it; see Figs. 8-10.

B. Extensions

Given its similarity to Newton's method, our interpretation and formulation of Eq. (4) lends itself to several extensions. In particular, decades of research have produced a variety of higher-order update equations that are known to have faster rates of convergence than Eq. (13); e.g. see Ref. 49. Generally speaking, it should be straightforward to write expressions equivalent to our likelihood L using any of these updates. In many instances, however, the corresponding algorithms may require running additional CG simulations in order to compute changes in sensitivities or higher order corrections to the linear approximation. Moreover, the resulting likelihoods may not be Gaussian, making it harder to analytically compute posterior means. Nonetheless, such extensions could provide promising improvements over the simple linear approximation that we invoke.

A second extension of this work is to enable calibration of multiple physical properties simultaneously. Within the context of the linear approximation, it should be relatively straightforward to solve such a problem. In particular, one can define a sensitivity vector sk for the kth physical property of interest and construct the corresponding likelihood Lk from sk and the difference between the CG and atomistic results. Assuming, for example, independence of these sensitivities, Eq. (4) takes the form

P_1[\mathbf{w}, q] \propto P_0[\mathbf{w}, q] \prod_k L_k[\delta_k, q].
(28)
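One hedged sketch of how Eq. (28) could be used in practice: if each property has a Gaussian likelihood with sensitivity row s_k, target variance σ_k², and D = 0, the posterior mean follows from a standard linear-Gaussian update. This block formula is our own generalization for illustration, not given explicitly in the text; it reduces to Eq. (22) when only one property is calibrated.

```python
import numpy as np

def multi_property_update(w0_bar, Sigma, S_mat, delta, sigma2):
    """S_mat: (K, M) sensitivities; delta: (K,) mismatches; sigma2: (K,) target variances."""
    R = np.diag(sigma2)                                                 # independent property noise
    G = Sigma @ S_mat.T @ np.linalg.inv(S_mat @ Sigma @ S_mat.T + R)    # gain matrix
    w1_bar = w0_bar + G @ delta                                         # corrected mean weights
    Sigma1 = Sigma - G @ S_mat @ Sigma                                  # posterior covariance
    return w1_bar, Sigma1
```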

Our analysis is also easily generalized to the case in which the target data and sensitivities are both known precisely by reinterpreting likelihoods as Dirac delta functions. In a similar fashion, the above formulas can be generalized to multi-dimensional state points (e.g. temperature - pressure pairs) by recasting Eq. (13) in terms of the appropriate functional derivatives.

C. Limitations

A key limitation of our analysis is the need to run a large number of CG simulations. Oftentimes, these simulations are so inexpensive that the practical limitations of the method are negligible. However, in certain cases the cost of a CG simulation may be high enough that perhaps only O(100) CG runs are available. In this case, the accuracy and/or efficiency of the method is limited by the available resources.

A related issue arises from the fact that, in general, CG simulations are not linear functions of the weights. Figure 7 illustrates the consequences of this; specifically, the correction algorithm sometimes over- or under-corrects the CG predictions, requiring additional iterations of the Bayesian update to fully optimize the forces. While this is a common occurrence in numerical root finders that use, e.g. Newton's method, such algorithms often benefit from the extremely low cost [often less than O(0.01) s per evaluation] of the underlying function. Given that our function is an entire simulation, the impact of a poor guess f0 on efficiency cannot be overstated; see Sec. III C. As suggested in the previous section, higher-order methods could also mitigate this issue.

Acknowledgments

All authors thank Debra Audus, Timothy Burns, Andrew Dienstfrey, and Anthony Kearsley for useful comments during preparation of the manuscript. TR thanks Sergei Izvekov (Army Research Lab) for the use of his FM code implemented on the Garnet cluster located at the ERDC DSRC. Part of this work was completed while PP was a postdoctoral fellow at the Institute for Mathematics and its Applications (IMA) under a grant from NIST to the IMA. The IMA is a National Science Foundation Math Institute funded under award DMS-0931945.

Appendix A: Simulation Details

The reference system used to construct all of the figures consisted of 10 chains of 50 monomers of atomistically resolved atactic polystyrene. To generate an initial configuration we used the amorphous builder within the commercial package MAPS by Scienomics.50,51 All molecular dynamics simulations were performed with the open source package LAMMPS;52 temperature and pressure were maintained with a Nose-Hoover thermostat and barostat with coupling constants of 0.1 ps and 1.0 ps, respectively. The forcefield applied to the atomistic system was PCFF53 with a cutoff of 1.2 nm for both Lennard-Jones (van der Waals) and Coulombic nonbonded interactions. Long-range electrostatic interactions were calculated using the particle-particle particle-mesh method with a precision of 1.0 × 10−6. Due to ease of implementation, the tabular force fields generated by the Bayesian calibration found in this work (e.g. Fig. 5) were calculated using dimensionless separations. Because very few interactions between CG sites in the atomistic reference system occurred at distances less than 0.48 nm, this separation was defined as r = 0. To determine forces at separations less than 0.48 nm, the first 600 values were fit to a linear function and extrapolated to 0 nm. The spacing between force values in real units was 5.0 × 10−5 nm. A real space cutoff of 1.4 nm was used in all CG simulations, corresponding to the dimensionless r = 1. Bond and bond angle potentials were derived using Boltzmann inversion at 500 K and applied to all temperatures.

To relax the initial atomistic configuration, we first minimized the energy of the amorphous structure using the Polak-Ribiere conjugate gradient method with a force tolerance of 1.0 × 10−8. Four consecutive NVT simulations at 298 K were then performed for 5.0 × 105 timesteps each, with a timestep of 0.001 fs, 0.01 fs, 0.1 fs, and 1.0 fs, respectively. All following simulations were performed with a timestep of 1.0 fs. A 0.5 ns NVT simulation that continuously increased the temperature from 298 K to 600 K was then performed. Finally, a 2 ns NPT simulation at 600 K was run to ensure an equilibrated structure. To obtain initial configurations for the higher temperatures investigated in this study, we further equilibrated for 0.5 ns at those temperatures. To obtain initial configurations for the lower temperatures we decreased the temperature continuously for 5 ns and then equilibrated for another 0.5 ns. After equilibration, a production run at each temperature was performed for 12 ns with coordinates, velocities, and forces recorded every 1 picosecond. This capture rate determines the number of frames used in the FM averaging of Eq. (B3) in Appendix B.

All CG simulations were performed in the NPT ensemble with a timestep of 1.0 fs. Initial configurations were generated from a direct mapping of the atomistic system at each temperature. Equilibration consisted of 100 ps, followed by a 10 ns production run.

Appendix B: Force Matching

The force matching method applied to coarse-grained systems has been described in detail elsewhere.5,6 Here we only outline specifics as they pertain to the initial guess for the CG force curves.

Force matching yields the optimal (in the least-squares sense) pairwise decomposition of the atomistic free-energy surface in CG coordinates. The result is a series of tabulated, functional-form-free force curves f0{i}(r, T) that we process via the workflow illustrated in Fig. 3.

Given an ensemble of atomic positions, momenta, and forces, rj, pj, and fj, define the following mappings:

\mathbf{R}_J = \sum_{j=1}^{a_J} c_{jJ}\, \mathbf{r}_j, \qquad \mathbf{P}_J = \sum_{j=1}^{a_J} \mathbf{p}_j, \qquad \mathbf{F}_J^{\mathrm{atm}} = \sum_{j=1}^{a_J} \mathbf{f}_j,
(B1)

where aJ is the number of atoms in the Jth group (in this case 16) and cjJ = mj/MJ is the ratio of an atom's mass to the CG total mass; see Fig. 11. Next, we postulate that the total force on the Jth group can be expressed in the form

FIG. 11
Illustration of CG beads compared to their atomistic counterparts.

\mathbf{f}_J^{\mathrm{FM}}(\Omega) = \sum_{K \neq J} f^{\mathrm{FM}}(R_{JK}, \Omega)\, \mathbf{n}_{JK},
(B2)

where RJK = |RJK| = |RJ − RK|, nJK = RJK/RJK, and fFM(r, Ω) is a yet-to-be-determined function of some free parameters Ω. Here we represent fFM as a cubic spline on a large collection of points that mesh the r-domain out to the cutoff distance; thus Ω corresponds to the spline parameters. For N total CG sites, we then determine fFM by minimizing

\chi(\Omega) = \left\langle \sum_{J=1}^{N} \bigl| \mathbf{F}_J^{\mathrm{atm}} - \mathbf{f}_J^{\mathrm{FM}}(\Omega) \bigr|^2 \right\rangle,
(B3)

subject to the constraint that the CG force field also reproduce the atomistic virial, viz.

\left\langle \sum_{J < K} f^{\mathrm{FM}}(R_{JK}, \Omega)\, R_{JK} \right\rangle = 3 V P_{\mathrm{inst}},
(B4)

where Pinst is the instantaneous pressure of the reference configuration at a volume V. In Eq. (B3), the averaging (indicated by brackets ⟨·⟩) is taken with respect to all of the frames that are saved during the atomistic simulation. The output fFM(r, Ω) of optimizing Eq. (B3) is a realization f0{i} of Eq. (6). In order to compute multiple such realizations, we iterate this procedure using new intervals (or trajectories) of atomistic simulations to compute the averages in Eq. (B3).
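A simplified sketch of the FM optimization in Eq. (B3): expanding f^FM in a linear basis (hat functions on the force grid here, rather than the cubic splines used in the text) turns the minimization into an ordinary linear least-squares problem. Periodic boundaries and the virial constraint of Eq. (B4) are omitted for brevity, and all names are illustrative.

```python
import numpy as np

def hat_weights(r, r_grid):
    """Linear-interpolation ('hat function') weights of a separation r on the force grid."""
    w = np.zeros(len(r_grid))
    i = np.searchsorted(r_grid, r) - 1
    if 0 <= i < len(r_grid) - 1:
        t = (r - r_grid[i]) / (r_grid[i + 1] - r_grid[i])
        w[i], w[i + 1] = 1.0 - t, t
    return w

def force_match(frames, r_grid, r_cut):
    """frames: list of (R, F_atm) with R (N, 3) CG positions and F_atm (N, 3) mapped forces."""
    rows, rhs = [], []
    for R, F_atm in frames:
        N = len(R)
        for J in range(N):
            row = np.zeros((3, len(r_grid)))
            for K in range(N):
                if K == J:
                    continue
                d = R[J] - R[K]
                r = np.linalg.norm(d)
                if r < r_cut:
                    row += np.outer(d / r, hat_weights(r, r_grid))  # n_JK times basis values
            rows.append(row)
            rhs.append(F_atm[J])
    A = np.concatenate(rows, axis=0)            # design matrix: one block per site and frame
    b = np.concatenate(rhs, axis=0)             # mapped atomistic forces, Eq. (B1)
    f_tab, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f_tab                                # tabulated pair force on r_grid
```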

Appendix C: Surrogate building and target models

In general, there is no universal technique for constructing a surrogate or target model; details often depend on the problem at hand. However, certain generic features are useful: (i) T(q) should provide estimates of the quantity of interest at all state-points of interest; (ii) such estimates should also have a quantified uncertainty; (iii) T(q) should be quick to evaluate, so that differences δ(q) are easy to compute.

In the case of simulated density-vs-temperature curves, we construct a surrogate by fitting a hyperbola to atomistic data.54 The motivation for representing the data via hyperbolas stems from the observation that, when a polymer system is deep within the glassy or rubbery regimes, density-temperature relationships ρ(T) should be approximately linear. The asymptotes of a hyperbola can capture this bilinear behavior, while its transition region can model intermediate temperatures. If we let γ denote the parameters of a hyperbola, we determine this quantity for our data by least-squares fit. For more details, see Ref. 54.

Given a hyperbola parameterized by γ, we model uncertainty in the atomistic predictions by a simple stochastic process. Specifically, letting ρH(γ, T) denote the densities predicted by the hyperbola, we assume that the atomistic data ρa(T) can be modeled via

\rho_a(T) = \rho_H(\gamma, T) + \eta_a(T),
(C1)

where ηa(T) is a zero-mean, delta-correlated Gaussian random process. In other words, we assume that ηa(T) is uncorrelated across temperatures and has a fixed variance Var[ηa(T)] = σ_T². In order to estimate σ_T², we compute the residuals R(Tk) = ρH(γ, Tk) − ρa(Tk) at the simulated anchor temperatures Tk and identify

\sigma_T^2 = \frac{1}{Q} \sum_{k=1}^{Q} R(T_k)^2
(C2)

for Q anchor points. Given this parameter, we identify our target T(T) = ρa(T) according to Eq. (C1).
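A sketch of this surrogate construction, Eqs. (C1)-(C2), is given below. The particular hyperbola parameterization rho_H, with two linear asymptotes joined over a width c, is our own assumption for illustration; Ref. 54 gives the form actually used by the authors.

```python
import numpy as np
from scipy.optimize import curve_fit

def rho_H(T, rho0, T0, a, b, c):
    """Generic hyperbola with two linear asymptotes meeting near T0 over a width c."""
    dT = T - T0
    return rho0 + a * dT - b * np.sqrt(dT**2 + c**2)

def build_surrogate(T_anchor, rho_anchor, p0):
    """Fit the hyperbola parameters gamma and estimate sigma_T^2 from the residuals."""
    gamma, _ = curve_fit(rho_H, T_anchor, rho_anchor, p0=p0)   # least-squares fit
    resid = rho_H(T_anchor, *gamma) - rho_anchor               # R(T_k) in Eq. (C2)
    sigma_T2 = np.mean(resid**2)                               # Eq. (C2)
    return gamma, sigma_T2
```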

We note in passing that this simple model neglects effects (e.g. correlations) that could be represented by more sophisticated models. Here we do not pursue such models since our focus is on developing tools for updating CG force curves.

Appendix D: Gaussian-process regression

In the UQ and machine learning communities, Gaussian-process regression is a well-known set of techniques that can interpolate data while simultaneously assessing confidence in the interpolated results. As there are many references that provide comprehensive reviews of the subject, we only introduce the main ideas and highlight issues that are relevant for constructing force curves.29-31 We confine our discussion to a simple example of interpolating a single weight as a function of temperature, although the formulas we discuss are easily generalized to more complicated scenarios.

To begin, let q = (q1, …, qQ) be a vector of Q state-points and w(q) = (w(q1),…, w(qQ)) correspond to an average weight estimated at individual qk. The starting point of GPR is to assume that the w(q) are points sampled from a Gaussian stochastic process, i.e. a random function 𝖜(q) whose joint probability over finitely many values of q is multivariate Gaussian. Mathematically, this means that the probability density of 𝖜 taking the values w(q) is given by

P[\mathbf{w}(\mathbf{q})] = \frac{1}{(2\pi)^{Q/2} \sqrt{|K(\mathbf{q}, \mathbf{q})|}} \exp\!\left\{ -\frac{1}{2} [\mathbf{w}(\mathbf{q}) - \boldsymbol{\mu}(\mathbf{q})]^{\mathrm{T}} K(\mathbf{q}, \mathbf{q})^{-1} [\mathbf{w}(\mathbf{q}) - \boldsymbol{\mu}(\mathbf{q})] \right\},
(D1)

where μ(q) is the average behavior of 𝖜 as a function of q and K(q, q′) is a covariance matrix element indicating the extent to which 𝖜(q) is correlated with 𝖜(q′). Loosely speaking, the goal of GPR is to use the data (q, w(q)) to determine the functions μ and K. In effect, this amounts to finding the stochastic process 𝖜 that is consistent with the available measurements.

Now, it is generally impossible for data to effect a regression by itself; we must invoke some additional or outside information. In the case of GPR, one postulates (i) a prior guess μ0(q) for the mean, and (ii) a functional form of the covariance K(q, q′; ψ), parameterized by some unknown constants ψ. These constants are often referred to as hyperparameters. For all of the regressions in this work, we use an altered form of the squared-exponential covariance function

K(q, q'; \sigma, \ell, p) = \sigma^2\, q q' \exp\!\left[ -\frac{|q - q'|^p}{\ell^2} \right],
(D2)

where 1 ≤ p ≤ 2, σ², and ℓ² are free parameters. The exponent p alters the smoothness properties of the underlying random functions, while the term qq′ is motivated by the observation that MD predictions are noisier at higher temperatures [recall K is applied to our running example of ρ(T)]. The parameter σ² determines the scale of the overall noise, and ℓ determines the temperature scale over which different observations are correlated. In this work, we manually control the values of p and μ0(q), but determine σ and ℓ by means of a maximum likelihood analysis, which works as follows. First define Λ = diag(Var[w(q)]) to be a diagonal matrix of the uncertainties in w(q) [e.g. according to Eq. (8)]. Given this, we replace K with K + Λ in Eq. (D1) and maximize the probability of finding our data q, w(q) as a function of these two parameters (assuming that p and μ0 are fixed). Including the term Λ accounts for the uncertainties in the sample estimates of w(q) when computing the hyperparameters.

Given the covariance function K(q, q′), it is a straightforward matter to estimate μ(q) and Var[w(q)] at a state-point q of interest. Specifically, define q̀ = (q1, …, qQ, q)^T as the vector of state-points q augmented by the q of interest, and let ẁ = (w(q1), …, w(qQ), w(q)) be the corresponding vector for w. The joint probability of finding (q̀, ẁ) is

P[\grave{w}(\grave{q})] = \frac{1}{(2\pi)^{(1+Q)/2}\,|\grave{K}(\grave{q},\grave{q})|^{1/2}}\exp\left\{-\frac{1}{2}\left[\grave{w}(\grave{q})-\mu_0(\grave{q})\right]^{T}\grave{K}(\grave{q},\grave{q})^{-1}\left[\grave{w}(\grave{q})-\mu_0(\grave{q})\right]\right\}
(D3)

where K̀(qk, qk) = K(qk, qk) + Var[w(qk)] for qk ≠ q and K̀ = K for all other pairs of state-points. Given that w(q) is the only unknown quantity appearing in the above expression, we can rewrite the probability in terms of this single variable to find

P[w(q)] \propto \exp\left\{-\frac{\left[w(q)-\mu_1(q)\right]^{2}}{2\,\mathrm{Var}[w(q)]}\right\}
(D4)

where

\mu_1(q) = \mu_0(q) + K(q,q)\left[K(q,q)+\Lambda\right]^{-1}\left[w(q)-\mu_0(q)\right]
(D5)
\mathrm{Var}[w(q)] = K(q,q) - K(q,q)\left[K(q,q)+\Lambda\right]^{-1}K(q,q)
(D6)

Note that this approach updates the prior μ0(q) by taking into account correlations in the data relative to our initial guess for the functional form. Also, when Λ → ∞, note that μ1 → μ0; that is, data with large uncertainties provide us with no information about how we should update our prior μ0.
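The posterior update of Eqs. (D5) and (D6) can be written in a few lines; the sketch below continues the previous one (it reuses kernel, q, w, mu0, Lam, and the fitted hyperparameters), and the evaluation temperature of 400 K is purely illustrative.

def gp_predict(q_star, q, w, mu0, mu0_star, Lam, sigma, ell, p):
    # Eqs. (D5)-(D6): posterior mean and variance of the weight at q_star.
    K_qq = kernel(q, q, sigma, ell, p) + Lam             # K(q, q) + Lambda
    k_star = kernel(q_star, q, sigma, ell, p).ravel()    # covariances with q_star
    k_ss = kernel(q_star, q_star, sigma, ell, p).item()  # prior variance at q_star
    alpha = np.linalg.solve(K_qq, w - mu0)
    mu1 = mu0_star + k_star @ alpha                      # Eq. (D5)
    var = k_ss - k_star @ np.linalg.solve(K_qq, k_star)  # Eq. (D6)
    return mu1, var

# Interpolate the weight at an intermediate temperature (illustrative value of 400 K).
mu1_400, var_400 = gp_predict(400.0, q, w, mu0, w.mean(), Lam,
                              sigma_hat, ell_hat, p)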

It is straightforward to generalize these results to the case where the w are correlated across modes and/or additional information about uncertainties in our estimates of w is brought to bear. We refer the reader to Ref. 29 for more information.

Footnotes

This work is a contribution of the National Institute of Standards and Technology and is not subject to copyright in the United States.

References

1. Li C, Strachan A. J Polym Sci, Part B: Polym Phys. 2015;53:103.
2. Allison J, Collins P, Spanos G, editors. Proceedings of the 1st World Congress on Integrated Computational Materials Engineering (ICME); John Wiley & Sons, Inc; 2011.
3. Li M, Campbell C, Thornton K, Holm E, Gumbsch P, editors. Proceedings of the 2nd World Congress on Integrated Computational Materials Engineering (ICME); John Wiley & Sons, Inc; 2013.
4. Dienstfrey A, Phelan FR, Jr, Christensen S, Strachan A, Santosa F, Boisvert R. Journal of the Minerals Metals and Materials Society (JOM) 2014;66:1342.
5. Izvekov S, Voth G. J Chem Phys. 2005;123:134105. [PubMed]
6. Izvekov S, Voth G. J Phys Chem B. 2005;109:2469. [PubMed]
7. Moore TC, Iacovella CR, McCabe C. J Chem Phys. 2014;140:224104. [PubMed]
8. Bayramoglu B, Faller R. Macromolecules. 2012;45:9205.
9. Izvekov S, Chung PW, Rice BM. J Chem Phys. 2010;133:064109. [PubMed]
10. Bayramoglu B, Faller R. Macromolecules. 2013;46:7957.
11. Hsu DD, Xia W, Arturo SG, Keten S. J Chem Theory Comput. 2014;10:2514. [PubMed]
12. D'Adamo G, Pelissetto A, Pierleoni C. Soft Matter. 2012;8:5151.
13. Das A, Andersen HC. J Chem Phys. 2012;136:194114. [PubMed]
14. Jacobson LC, Kirby RM, Molinero V. J Phys Chem B. 2014;118:8190. [PubMed]
15. Krishna V, Noid WG, Voth GA. J Chem Phys. 2009;131:024103. [PubMed]
16. Carbone P, Varzaneh HAK, Chen X, Müller-Plathe F. J Chem Phys. 2008;128:064904. [PubMed]
17. Ghosh J, Faller R. Mol Simul. 2007;33:759.
18. Maerzke KA, Siepmann JI. J Phys Chem B. 2011;115:3452. [PubMed]
19. Brini E, Herbers CR, Deichmann G, van der Vegt NFA. Phys Chem Chem Phys. 2012;14:11896. [PubMed]
20. Farah K, Fogarty AC, Boehm MC, Müller-Plathe F. Phys Chem Chem Phys. 2011;13:2894. [PubMed]
21. Martin MG, Siepmann JI. J Phys Chem B. 1998;102:2569.
22. Qian H-J, Carbone P, Chen X, Karimi-Varzaneh HA, Liew CC, Müller-Plathe F. Macromolecules. 2008;41:9919.
23. Ganguly P, van der Vegt NFA. J Chem Theory Comput. 2013;9:5247. [PubMed]
24. Hsu DD, Xia W, Arturo SG, Keten S. J Chem Theory Comput. 2014;10:2514. [PubMed]
25. Hsu DD, Xia W, Arturo SG, Keten S. Macromolecules. 2015;48:3057.
26. The Bayesian interpretation of probabilities is distinct from ones based on ideas of chance and random draws; see, for example, Ref. 55. For a good example of Bayesian calibration techniques applied to computer models, see Ref. 56 and the references contained therein. See Sec. II A for more discussion.
27. Das A, Andersen HC. J Chem Phys. 2012;136:194113. [PubMed]
28. Das A, Andersen HC. J Chem Phys. 2009;131:034102. [PubMed]
29. Rasmussen CE, Williams CKI. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press; 2005.
30. Santner TJ, Williams B, Notz W. The Design and Analysis of Computer Experiments. Springer-Verlag; 2003. p. 283.
31. Sacks J, Welch WJ, Mitchell TJ, Wynn HP. Stat Sci. 1989;4:409.
32. Potestio R, Peter C, Kremer K. Entropy. 2014;16:4199.
33. Peter C, Kremer K. Faraday Discuss. 2010;144:9. [PubMed]
34. Louis A. J Phys Condens Matter. 2002;14:9187.
35. Lu J, Qiu Y, Baron R, Molinero V. J Chem Theory Comput. 2014;10:4104. [PubMed]
36. Lobanova O, Avendaño C, Lafitte T, Müller EA, Jackson G. Mol Phys. 2015;113:1228.
37. Several additional observations suggest that this perspective is appropriate: (i) there are many reasonable coarse-graining methods for computing f0; (ii) it is often difficult to identify a best interpolant; and, as we show, (iii) many plausible force curves can yield the same target predictions. Thus, calibration of CG forces already requires subjective decision-making, which we only seek to acknowledge and systematize.
38. In the unusual case that ηf → 0, i.e. the reference force is known to arbitrary accuracy, one can take it as f0, which leads to Dirac delta-function priors following the procedure in this section.
39. Generally speaking, an arbitrary CG algorithm may yield f(r, q) at a finite, but large number of discretized points on the r domain. Here we assume that the spacing between such points is so small that the weights in Eq. (1) can be computed to arbitrary precision using numerical approximations of the appropriate projection integrals.
40. For IBI, it is reasonable that N = 1, so that we might need to invoke different methods for determining uncertainties computed in Eq. (8).
41. We use w0 to denote the mean of a probability density in general so that we do not have to write separate versions of Eq. (4) for the first and second correction loops in Fig. 3.
42. Berger J. Springer Series in Statistics. Springer; 1985. Statistical Decision Theory and Bayesian Analysis.
43. Given that CG simulations are inexpensive, we assume that their outputs are always deterministic.
44. Equation (13) does not specify a unique w whenever M ≥ 2; in such cases, one could use the Moore-Penrose pseudoinverse to solve for w. The Bayesian update that we describe is a variant of this procedure.
45. Miller KS. Math Mag. 1981;54:67.
46. We approximate 900 CPU hours for each iteration of the CG simulations, be it an anchor point update, computation with interpolated forces, or computation with corrected interpolated forces.
47. Mognetti BM, Virnau P, Yelash L, Paul W, Binder K, Müller M, MacDowell LG. J Chem Phys. 2009;130:044101. [PubMed]
48. In this light, it is perhaps no surprise that transferable forces are notoriously difficult to calibrate.
49. Stoer J, Bartels R, Gautschi W, Bulirsch R, Witzgall C. Introduction to Numerical Analysis, Texts in Applied Mathematics. Springer; New York: 2002.
50. Scienomics, MAPS platform, version 3.4, 2014, France.
51. Certain commercial equipment, instruments, or materials are identified in this paper in order to specify the experimental procedure adequately. Such identification is not intended to imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it intended to imply that the materials or equipment identified are necessarily the best available for the purpose.
52. Plimpton S. J Comput Phys. 1995;117:1.
53. Sun H, Mumby S, Maple J, Hagler A. J Am Chem Soc. 1994;116:2978.
54. Patrone PN, Dienstfrey A, Browning AR, Tucker S, Christensen S. Submitted to Polymer. 2015
55. Gelman A, Carlin J, Stern H, Dunson D, Vehtari A, Rubin D. Bayesian Data Analysis. 3. Chapman & Hall/CRC Texts in Statistical Science, Taylor & Francis; 2013.
56. Kennedy MC, O'Hagan A. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 2001;63:425.