Technometrics. Author manuscript; available in PMC 2010 July 2.
Published in final edited form as:
Technometrics. 2010 February; 52(1): 52–66.
doi:  10.1198/TECH.2009.08104
PMCID: PMC2895940
NIHMSID: NIHMS206194

Online Prediction Under Model Uncertainty via Dynamic Model Averaging: Application to a Cold Rolling Mill

Abstract

We consider the problem of online prediction when it is uncertain what the best prediction model to use is. We develop a method called Dynamic Model Averaging (DMA) in which a state space model for the parameters of each model is combined with a Markov chain model for the correct model. This allows the “correct” model to vary over time. The state space and Markov chain models are both specified in terms of forgetting, leading to a highly parsimonious representation. As a special case, when the model and parameters do not change, DMA is a recursive implementation of standard Bayesian model averaging, which we call recursive model averaging. The method is applied to the problem of predicting the output strip thickness for a cold rolling mill, where the output is measured with a time delay. We found that when only a small number of physically motivated models were considered and one was clearly best, the method quickly converged to the best model, and the cost of model uncertainty was small; indeed DMA performed slightly better than the best physical model. When model uncertainty and the number of models considered were large, our method ensured that the penalty for model uncertainty was small. At the beginning of the process, when control is most difficult, we found that DMA over a large model space led to better predictions than the single best performing physically motivated model. We also applied the method to several simulated examples, and found that it recovered both constant and time-varying regression parameters and model specifications quite well.

1. INTRODUCTION

We consider the problem of online prediction when it is uncertain what the best prediction model to use is. Online prediction is often done for control purposes and typically uses a physically based model of the system. Often, however, there are several possible models and it is not clear which is the best one to use.

To address this problem, we develop a new method called Dynamic Model Averaging (DMA) that incorporates model uncertainty in a dynamic way. This combines a state–space model for the parameters of each of the candidate models of the system with a Markov chain model for the best model. Both the state space and Markov chain models are estimated recursively, allowing the identity of the best model to change over time, and both are specified using versions of forgetting, which allows a highly parsimonious representation. The predictive distribution of future system outputs is a mixture distribution with one component for each physical model considered, and so the best prediction is a weighted average of the best predictions from the different models.

The physical theory underlying the prediction or control problem is often somewhat weak, and may be limited essentially to knowing what the inputs are that could potentially influence the output of a system. In that case we consider a model space consisting of all possible combinations of inputs that are not excluded by physical considerations.

The DMA methodology combines various existing ideas, notably Bayesian model averaging, hidden Markov models, and forgetting in state–space modeling. Bayesian model averaging (BMA) (Leamer 1978; Raftery 1988; Hoeting et al. 1999; Clyde and George 2004) is an established methodology for statistical inference from static datasets in the presence of model uncertainty, and has been particularly well developed for linear regression when there is uncertainty about which variables to include (Raftery, Madigan, and Hoeting 1997; Fernández, Ley, and Steel 2001; Eicher, Papageorgiou, and Raftery 2009). BMA usually addresses the problem of uncertainty about variable selection in regression by averaging over all possible combinations of regressors that are not excluded by physical considerations. BMA is restricted to static problems, however. An extension to dynamic updating problems was proposed by Raftery et al. (2005) in the context of probabilistic weather forecasting, using a sliding window estimation period consisting of a specified previous number of days. DMA is a recursive updating method rather than a windowing one.

The idea of the hidden Markov model is that there is an underlying unobserved discrete-valued process whose value affects the system state and which evolves according to a Markov chain. The idea seems first to have been proposed independently by Ackerson and Fu (1970) for the case where the noise in a Kalman filter is a Gaussian mixture, and by Harrison and Stevens (1971) for modeling time series that can have outliers and jumps in level and trend. The latter has been extended to the dynamic linear model (Harrison and Stevens 1976; West and Harrison 1989) and the multiprocess Kalman filter (Smith and West 1983). The basic idea has since been widely used, often under different names, in different disciplines including speech recognition (Rabiner 1989) and genomics (Eddy 1998, 2004). In economics, the Markov switching model of Hamilton (1989) is widely used for time series in which the autoregressive parameters switch between different regimes. Markov switching models are also widely used for tracking moving or maneuvering objects, particularly in aerospace engineering (Li and Jilkov 2005). An important unifying framework is the conditional dynamic linear model (CDLM) of Chen and Liu (2000), which includes several earlier proposals as special cases.

To specify DMA, we postulate the existence of a hidden Markov chain on the model space. This differs from many other hidden Markov applications because the definition of the state itself, and not just its value, depends on the current value of the chain. As a result, our method is not a special case of the CDLM, although it is related to it.

One of the difficulties with hidden Markov models is the need to make inference about the full sequence of hidden values of the chain, and the resulting computational explosion. Various approximations have been proposed, including finite-memory approximations (Ackerson and Fu 1970; West and Harrison 1989), the interacting multiple model (IMM) algorithm of Blom and Bar-Shalom (1988), which is popular in tracking applications (Mazor et al. 1998), the particle filter (Gordon, Salmond, and Smith 1993), also known as sequential importance sampling, and the ensemble Kalman filter (Evensen 1994). The fact that in our case the state vector has a different definition for each candidate model allows us to use a very simple approximation in which each model is updated individually at each time point. We show that this is equivalent to an age-weighted version of BMA, which makes it intuitively appealing in its own right.

As a special case, when the model and parameters do not change, DMA is a recursive implementation of standard Bayesian model averaging, which we call recursive model averaging (RMA).

In many previous hidden Markov applications there is considerable information about the evolution of the state. In the rolling mill application that motivated our work, there is considerable physical knowledge about the rolling mill itself, but little physical knowledge about how the regression model and its parameters (which define the system state in our setup) are likely to evolve. All that can be assumed is that the parameters are likely to evolve gradually in time, and that the model is likely to change infrequently. As a result, rather than specify the state–space model fully, which is demanding and for which adequate information is not available, we specify the evolution of the parameters and the model by exponential forgetting (Fagin 1964; Jazwinsky 1970; Kulhavý and Zarrop 1993).

We illustrate the methodology by applying it to the prediction of the outgoing strip thickness of a cold rolling mill, ultimately for the purpose of either human or automatic control. The system input and three adjustable control parameters are measured, and the system output is measured with a time delay. While there is considerable understanding of the physics of the rolling mill, three plausible models have been proposed, of which two are physically motivated. We first implemented DMA for these three models, one of which turned out to be clearly best. DMA converged rapidly to the best model, and the performance of DMA was almost the same as that of the best model—DMA allowed us to avoid paying a penalty for being uncertain about model structure.

We then extended the set of models to allow for the possibility of each of the inputs having an effect or not, yielding a much bigger model space with 17 models. In this case we found that DMA performed better than the best physical model in the initial, most unstable period when control is most difficult, and comparably over the rest of the period, when prediction errors were generally stable. Thus, even in this case with a larger model space, DMA allowed us to avoid paying a penalty for model uncertainty, and indeed allowed us to use the model uncertainty to increase the stability of the predictions in the early unstable period.

To assess how well the method recovered time-varying and constant regression parameter values and model specifications, we also applied it to four simulated examples closely modeled on the rolling mill problem.

The rest of the paper is organized as follows. In Section 2 we describe the rolling mill problem that motivated our research. In Section 3 we describe the dynamic model averaging methodology, and in Section 4 we show the results of single-model and DMA prediction for the rolling mill. In Section 5 we give the results of applying our method to four simulated examples. In Section 6 we discuss limitations, possible extensions and alternatives to the proposed methodology.

2. THE ROLLING MILL PROBLEM

The problem that motivated our methodology is that of predicting the output of a reversing cold rolling mill. A cold rolling mill is a machine used for reducing the thickness of metal strips; a schematic drawing is shown in Figure 1. Metal is passed through a gap and subjected to the rolling force. The strip thickness is measured by diamond-tipped contact meters on both sides of the rolling mill, providing measurements of the input and output thickness. A target thickness is defined, and this needs to be achieved with an accuracy that depends on the nominal thickness; in our case a typical tolerance was ±10 microns. We are dealing here with a reversing rolling mill, which reduces the strip thickness in several passes, alternating the rolling direction.

Figure 1
Schematic drawing of a reversing cold rolling mill.

The output that we want to predict is the output strip thickness which is, under normal conditions, securely controlled using automatic gauge control (Ettler and Jirovský 1991). This control method can be improved manually by actions of the operators, potentially supported by a decision-support system (Quinn et al. 2003; Ettler, Kárný, and Guy, 2005).

Reliable thickness prediction can improve control, particularly under adverse conditions that the automatic gauge control system is not designed to handle. These can arise at the beginning of a pass of the material through the rolling mill, when the thickness meters often do not operate for a period because there is a danger that they may be damaged. Adverse conditions can also arise for welded strips and for strip parts with uneven surfaces.

Important variables that can be adjusted to achieve the target are the size of the rolling gap governed by the roll positioning system, the rolling force, the input and output rolling speeds and—possibly—strip tensions. Rolling forces can be of the order of 10^6 N, and rolling speeds of the order of 0.1–8.0 m/s. The rolling gap cannot be reliably measured, but the roll position is available instead, measured against the mill frame. This position differs from the gap mainly because of frame elongation during rolling.

Modern rolling mill control systems allow every sample processed to be archived. Data collection is triggered by the strip movement. In our case, data samples are recorded every 4 cm, so that the sampling period varies according to the strip speed. For instance, if the speed is 1 m/s, the sampling period is 40 milliseconds. This imposes constraints on the complexities of online data processing, which must be completed well within the sampling period. These constraints are a key element of the problem, and imply that the methods used must be computationally efficient.

Our goal is to predict the output thickness for the piece of the strip just leaving the rolling gap. We wish to do this online, either to provide guidance to human operators controlling the system in real time in situations that preclude the use of automatic gauge control, or as an additional input to the control system. This is essentially a regression problem. However, an important complicating aspect of the data collection system is the time delay problem, illustrated in Figure 2.

Figure 2
Measurement time delay. A measurement is made every time the material moves by a length Δ, equal to one sample. The output thickness currently at the gap is not measured until the material has moved a distance L beyond the gap. The measurement ...

Current values of the regressors are available for the piece of the strip in the rolling gap. However, the system output—the output strip thickness—is measured only with a delay d, which in our case was d = 24 samples. The task thus consists of predicting the output thickness, given the input thickness and other measured data at the gap. But data for estimating the regression relationship are available only with a delay d.

The data that we analyze come from a strip of length about 750 m that yielded 19,058 samples. The roll position was controlled by automatic gauge control with feedback. The data from the first 1000 samples are shown in Figure 3. Figure 3(a) shows the deviations of the input and output thicknesses from their nominal values, which are 1398 and 1180 microns, respectively. (The input was about 200 microns thicker than the desired output, allowing a margin for the rolling mill to do its work of thinning the material.) Initially the output was too thick, and gradually converged to a relatively stable situation after about 600 samples. Figure 3(b) shows the roll position, Figure 3(c) the ratio of the input to the output rolling speeds, and Figure 3(d) the rolling force. The gradual adjustment of the controlled variable by the feedback controller can be seen. The two outliers in the rolling speed ratio at samples 42 and 43 provide a challenge to the system.

Figure 3
Data from the rolling mill: first 1000 samples. (a) Input and output thickness deviations from their nominal values (in microns); (b) roll position (in microns); (c) ratio of input to output rolling speeds; (d) rolling force at the gap (in MN).

3. DYNAMIC MODEL AVERAGING (DMA)

We first consider the situation where a single, physically based regression-type model is available for prediction. We will review a standard state–space model with forgetting for adaptive estimation of the regression parameters and prediction with time delay. This is essentially standard Kalman filtering, but we review it here to fix ideas and notation. We will then address the situation where there is uncertainty about the model structure and several plausible models are available. For concreteness, we will use the rolling mill terminology.

3.1 The One-Model Case

Let yt be the deviation of the output thickness of sample t from its target value, and let xt = (xtj: j = 1,…, ν) be the corresponding vector of inputs. Typically xt1 = 1, corresponding to the regression intercept, and xt will also include the input thickness and other inputs and possibly also a past value of the output; the latter would lead to an ARX model (Ljung 1987).

The observation equation is then

y_t = x_t^T θ_t + ε_t,   (1)

where a superscript T denotes matrix transpose, θ_t is a vector of regression parameters, and the innovations ε_t are distributed as ε_t ~ iid N(0, V), where N(μ, σ²) denotes the normal distribution with mean μ and variance σ². The regression parameters θ_t are allowed to evolve according to the state equation

θ_t = θ_{t−1} + δ_t,   (2)

where the state innovations δ_t are distributed as δ_t ~ ind N(0, W_t).

Inference is done recursively using Kalman filter updating. Suppose that θ_{t−1} | Y_{t−1} ~ N(θ̂_{t−1}, Σ_{t−1}), where Y_{t−1} = {y_1, …, y_{t−1}}. Then

θ_t | Y_{t−1} ~ N(θ̂_{t−1}, R_t),   (3)

where

R_t = Σ_{t−1} + W_t.   (4)

Equation (3) is called the prediction equation.

Specifying the ν × ν matrix Wt completely is demanding, and often little information for doing so is available. Instead we follow Fagin (1964) and Jazwinsky (1970) and specify it using a form of forgetting. This consists of replacing (4) by

R_t = λ^{−1} Σ_{t−1},   (5)

where λ is called the forgetting factor and is typically slightly below 1. The resulting model is a properly defined state–space model, with W_t = a Σ_{t−1}, where a = λ^{−1} − 1. As pointed out, for example, by Hannan, McDougall, and Poskitt (1989), estimation in this model is essentially age-weighted estimation in which an observation i time points old has weight λ^i, and the effective amount of data used for estimation, or the effective window size, is h = 1/(1 − λ); for example, λ = 0.99 corresponds to an effective window of about 100 samples. This suggests that forgetting may be roughly comparable to windowing (Jazwinsky 1970), in which the last h available samples are used for estimation and are equally weighted. A windowing method like this was used in the multimodel context by Raftery et al. (2005).

Inference is completed by the updating equation,

θ_t | Y_t ~ N(θ̂_t, Σ_t).   (6)

In (6),

θ̂_t = θ̂_{t−1} + R_t x_t (V + x_t^T R_t x_t)^{−1} e_t,   (7)

where e_t = y_t − x_t^T θ̂_{t−1} is the one-step-ahead prediction error, and

Σ_t = R_t − R_t x_t (V + x_t^T R_t x_t)^{−1} x_t^T R_t.   (8)

The inference process is repeated recursively as the measurements on a new sample become available. It is initialized by specifying θ̂_0 and Σ_0; we will discuss their values for the rolling mill problem in Section 4.
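
To make the recursion concrete, the following is a minimal sketch in Python/NumPy of one prediction and updating cycle, Equations (3)–(8), using the forgetting form (5). The function and variable names are ours, not taken from the paper, and the observation innovation variance V is treated as known here; its recursive estimation is described below.

```python
import numpy as np

def kalman_forgetting_step(theta_hat, Sigma, x, y, V, lam=0.99):
    """One prediction/updating cycle of the state-space recursion (3)-(8).

    theta_hat : current parameter estimate, shape (nu,)
    Sigma     : current posterior covariance, shape (nu, nu)
    x         : regressor vector for the new sample, shape (nu,)
    y         : observed output for the new sample (scalar)
    V         : observation innovation variance (assumed known here)
    lam       : forgetting factor, slightly below 1
    """
    R = Sigma / lam                        # prediction step with forgetting, Eq. (5)
    y_pred = x @ theta_hat                 # one-step-ahead prediction
    e = y - y_pred                         # prediction error e_t
    S = V + x @ R @ x                      # predictive variance of y_t
    gain = R @ x / S                       # Kalman gain
    theta_new = theta_hat + gain * e       # updating step, Eq. (7)
    Sigma_new = R - np.outer(gain, x @ R)  # updating step, Eq. (8)
    return theta_new, Sigma_new, y_pred, e
```

For the delayed prediction described below, one simply applies the most recently updated parameter estimate, which lags the current sample because of the measurement delay.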

The information available for predicting yt is d samples old because of the time-delay problem, and so we use

ŷ_t = x_t^T θ̂_{t−d−1}.   (9)

The observation innovation variance V needs to be specified by the user. Here we estimate it by a recursive method of moments estimator, using the fact that the one-step-ahead predictive distribution of y_t is given by

y_t | Y_{t−1} ~ N(x_t^T θ̂_{t−1}, V + x_t^T R_t x_t).   (10)

It follows that

V_t^* = (1/t) \sum_{r=1}^{t} [ (y_r − x_r^T θ̂_{r−1})^2 − x_r^T R_r x_r ]

is a consistent estimator of V in the sense that V_t^* → V as t → ∞ in probability under the model defined by (1) and (2). It is not guaranteed, however, that V_t^* > 0. This leads us to define the recursive moment estimator

V̂_t = A_t if A_t > 0, and V̂_t = V̂_{t−1} otherwise,

where

A_t = ((t − 1)/t) V̂_{t−1} + (1/t) (e_t^2 − x_t^T R_t x_t).
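
The variance recursion can be run alongside the parameter recursion sketched above; a minimal sketch under the same assumptions, with names that are again ours:

```python
def update_V_hat(V_hat, e, x, R, t):
    """Recursive method-of-moments update of the observation variance.

    Computes A_t = ((t-1)/t) V_hat + (1/t)(e**2 - x'Rx) and keeps the
    previous estimate whenever A_t is not positive.
    """
    A = ((t - 1) / t) * V_hat + (e ** 2 - x @ R @ x) / t
    return A if A > 0 else V_hat
```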

3.2 The Multi-Model Case

We now consider the case where multiple models, M1,…, MK are considered and there is uncertainty about which one is best. We assume that each model can be expressed in the form of Equations (1) and (2), where the state vector, θt, and the predictor vector, xt, for each model are different. They can be of different dimensions and need not overlap. The quantities specific to model Mk are denoted by a superscript (k). We let Lt = k if the process is governed by model Mk at time t. We rewrite (1) and (2) for the multimodel case as follows:

y_t | L_t = k ~ N(x_t^{(k)T} θ_t^{(k)}, V^{(k)}),   (11)

θ_t^{(k)} | L_t = k ~ N(θ_{t−1}^{(k)}, W_t^{(k)}).   (12)

We assume that the model governing the system changes infrequently, and that its evolution is determined by a K × K transition matrix Q = (q_{kℓ}), where q_{kℓ} = P[L_t = ℓ | L_{t−1} = k]. The transition matrix Q has to be specified by the user, which can be onerous when the number of models is large. Instead we again avoid this problem by specifying the transition matrix implicitly using forgetting.

Estimation of θ_t^{(k)} in general involves a mixture of K^t terms, one for each possible value of the model path L^t = {L_1, …, L_t}, as described by Ackerson and Fu (1970) and Chen and Liu (2000), for example. This number of terms rapidly becomes enormous, making direct evaluation infeasible, and many different approximations have been described in the literature. Here we are interested in prediction of the system output, y_t, given Y_{t−1}, and this depends on θ_t^{(k)} only conditionally on L_t = k. This leads to a simple approximation that works well in this case, consisting of simply updating θ_t^{(k)} conditionally on L_t = k for each sample.

In this multimodel setup, the underlying state consists of the pair (θ_t, L_t), where θ_t = (θ_t^{(1)}, …, θ_t^{(K)}). The quantity θ_t^{(k)} is defined only when L_t = k, and so the probability distribution of (θ_t, L_t) can be written

p(θ_t, L_t) = \sum_{k=1}^{K} p(θ_t^{(k)} | L_t = k) p(L_t = k).   (13)

The distribution in (13) is what we will update as new data become available.

Estimation thus proceeds analogously to the one-model case, consisting of a prediction step and an updating step. Suppose that we know the conditional distribution of the state at time (t − 1) given the data up to that time, namely

p(θ_{t−1}, L_{t−1} | Y_{t−1}) = \sum_{k=1}^{K} p(θ_{t−1}^{(k)} | L_{t−1} = k, Y_{t−1}) × p(L_{t−1} = k | Y_{t−1}),   (14)

where the conditional distribution of θ_{t−1}^{(k)} is approximated by a normal distribution, so that

θ_{t−1}^{(k)} | L_{t−1} = k, Y_{t−1} ~ N(θ̂_{t−1}^{(k)}, Σ_{t−1}^{(k)}).   (15)

The prediction step then involves two parts: prediction of the model indicator, L_t, via the model prediction equation, and conditional prediction of the parameter, θ_t^{(k)}, given that L_t = k, via the parameter prediction equation. We first consider prediction of the model indicator, L_t. Let π_{t−1|t−1,ℓ} = P[L_{t−1} = ℓ | Y_{t−1}]. Then the model prediction equation is

π_{t|t−1,k} ≡ P[L_t = k | Y_{t−1}] = \sum_{ℓ=1}^{K} π_{t−1|t−1,ℓ} q_{ℓk}.   (16)

To avoid having to explicitly specify the transition matrix, with its K^2 elements, we replace (16) by

π_{t|t−1,k} = (π_{t−1|t−1,k}^α + c) / \sum_{ℓ=1}^{K} (π_{t−1|t−1,ℓ}^α + c).   (17)

In (17), the exponent α is a forgetting factor, which will typically be slightly less than 1, and c is a small positive number, introduced to avoid a model probability being brought to machine zero by aberrant observations; we set this to c = 0.001/K. Equation (17) increases the uncertainty by flattening the distribution of Lt. This is a slight generalization of the multiparameter power steady model introduced independently by Peterka (1981) and Smith (1981), generalizing the one-dimensional steady model of Smith (1979). Although the resulting model does not specify the transition matrix of the Markov chain explicitly, Smith and Miller (1986) argued that this is not a defect of the model, since the data provide information about πt−1|t−1,k and πt|t−1,k, but no additional information about Q. Exponential forgetting has been used for updating discrete probabilities in a different two-hypothesis context by Kárný and Andrýsek (2009).
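
A minimal sketch of the flattening step (17) for the vector of current model probabilities, with hypothetical names of our own choosing:

```python
import numpy as np

def flatten_model_probs(pi, alpha=0.99, c_total=0.001):
    """Model prediction step, Eq. (17): raise each probability to the power
    alpha, add the small constant c = c_total / K, and renormalize."""
    K = len(pi)
    w = pi ** alpha + c_total / K
    return w / w.sum()
```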

With forgetting, the parameter prediction equation is

θ_t^{(k)} | L_t = k, Y_{t−1} ~ N(θ̂_{t−1}^{(k)}, R_t^{(k)}),   (18)

where R_t^{(k)} = λ^{−1} Σ_{t−1}^{(k)}.

We now consider the updating step, which again has two parts, model updating and parameter updating. The model updating equation is

π_{t|t,k} = ω_{tk} / \sum_{ℓ=1}^{K} ω_{tℓ},   (19)

where

ω_{tℓ} = π_{t|t−1,ℓ} f_ℓ(y_t | Y_{t−1}).   (20)

In (20), f_ℓ(y_t | Y_{t−1}) is the density of a N(x_t^{(ℓ)T} θ̂_{t−1}^{(ℓ)}, V^{(ℓ)} + x_t^{(ℓ)T} R_t^{(ℓ)} x_t^{(ℓ)}) distribution, evaluated at y_t.

The parameter updating equation is

θ_t^{(k)} | L_t = k, Y_t ~ N(θ̂_t^{(k)}, Σ_t^{(k)}),   (21)

where θ̂_t^{(k)} is given by (7) and Σ_t^{(k)} by (8), in each case with the superscript (k) added to all quantities. This process is then iterated as each new sample becomes available. It is initialized by setting π_{0|0,ℓ} = 1/K for ℓ = 1, …, K, and assigning values to θ̂_0^{(k)} and Σ_0^{(k)}.

The model-averaged one-step-ahead prediction of the system output, yt, is then

ŷ_t^{DMA} = \sum_{k=1}^{K} π_{t|t−1,k} ŷ_t^{(k)} = \sum_{k=1}^{K} π_{t|t−1,k} x_t^{(k)T} θ̂_{t−1}^{(k)}.   (22)

Thus the multimodel prediction of y_t is a weighted average of the model-specific predictions ŷ_t^{(k)}, where the weights are equal to the posterior predictive model probabilities for sample t, π_{t|t−1,k}. For the rolling mill problem with delayed prediction, we take the predicted value of y_t to be

ŷ_t^{DMA} = \sum_{k=1}^{K} π_{t−d|t−d−1,k} ŷ_t^{(k)} = \sum_{k=1}^{K} π_{t−d|t−d−1,k} x_t^{(k)T} θ̂_{t−d−1}^{(k)}.   (23)

One could also project the model probabilities into the future and allow for model transitions during the delay period by replacing π_{t−d|t−d−1,k} in (23) by π_{t−d|t−d−1,k}^{α^d}, but we do not do this here.
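
Putting the pieces together, the sketch below carries out one DMA cycle over K candidate models: the model prediction step (17), the parameter prediction step (18), the model and parameter updating steps (19)–(21), and the model-averaged prediction (22). It is a sketch only, with names of our own choosing; the per-model observation variances V^{(k)} are taken as given, and the delayed prediction (23) would use the same quantities lagged by d + 1 samples.

```python
import numpy as np
from scipy.stats import norm

def dma_step(models, pi, x_full, y, alpha=0.99, lam=0.99, c_total=0.001):
    """One DMA cycle.

    models : list of dicts with keys 'idx' (positions of the regressors used
             by the model within x_full), 'theta', 'Sigma', and 'V'
    pi     : current model probabilities pi_{t-1|t-1} (numpy array)
    x_full : full regressor vector for the new sample
    y      : newly observed output
    """
    K = len(models)
    # Model prediction step, Eq. (17).
    pi_pred = pi ** alpha + c_total / K
    pi_pred /= pi_pred.sum()

    y_hat = np.zeros(K)                  # per-model predictions
    dens = np.zeros(K)                   # predictive densities f_k(y_t | Y_{t-1})
    for k, m in enumerate(models):
        x = x_full[m['idx']]
        R = m['Sigma'] / lam             # parameter prediction step, Eq. (18)
        y_hat[k] = x @ m['theta']
        S = m['V'] + x @ R @ x           # predictive variance for model k
        dens[k] = norm.pdf(y, loc=y_hat[k], scale=np.sqrt(S))
        gain = R @ x / S                 # parameter updating step, Eq. (21)
        m['theta'] = m['theta'] + gain * (y - y_hat[k])
        m['Sigma'] = R - np.outer(gain, x @ R)

    y_dma = pi_pred @ y_hat              # DMA prediction, Eq. (22)
    w = pi_pred * dens                   # model updating step, Eqs. (19)-(20)
    pi_new = w / w.sum()
    return pi_new, y_dma
```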

We call the method dynamic model averaging (DMA), by analogy with Bayesian model averaging for the static linear regression model (Raftery, Madigan, and Hoeting 1997).

3.3 Connection to Static Bayesian Model Averaging

Standard Bayesian model averaging (BMA) addresses the static situation where the correct model Mk and its parameter θ(k) are taken to be fixed but unknown. In that situation, the BMA predictive distribution of yn+d given Yn is

p(y_{n+d} | Y_n) = \sum_{k=1}^{K} p(y_{n+d} | Y_n, M_k) p(M_k | Y_n),

where p(Mk|Yn) is the posterior model probability of Mk. If, as here, all models have equal prior probabilities, this is given by

p(M_k | Y_n) = p(Y_n | M_k) / \sum_{ℓ=1}^{K} p(Y_n | M_ℓ),

where p(Y_n | M_k) = ∫ p(Y_n | θ^{(k)}, M_k) p(θ^{(k)} | M_k) dθ^{(k)} is the integrated likelihood, obtained by integrating the product of the likelihood, p(Y_n | θ^{(k)}, M_k), and the prior, p(θ^{(k)} | M_k), over the parameter space. See Hoeting et al. (1999) and Clyde and George (2004) for reviews of BMA.

Dawid (1984) pointed out that the integrated likelihood can also be written as follows:

p(Y_n | M_k) = \prod_{t=1}^{n} p(y_t | Y_{t−1}, M_k),   (24)

with Y_0 defined as the null set. The posterior model probabilities in BMA can be expressed using Bayes factors for pairwise comparisons. The Bayes factor for M_k against M_ℓ is defined as the ratio of integrated likelihoods, B_{kℓ} = p(Y_n | M_k)/p(Y_n | M_ℓ) (Kass and Raftery 1995). It follows from (24) that the log Bayes factor can be decomposed as

log B_{kℓ} = \sum_{t=1}^{n} log B_{kℓ,t},   (25)

where B_{kℓ,t} = p(y_t | Y_{t−1}, M_k)/p(y_t | Y_{t−1}, M_ℓ) is the sample-specific Bayes factor for sample t.

In the dynamic setup considered in the rest of the paper, it follows from (17), (19), and (20) that when c = 0 in (17),

log (π_{n|n,k} / π_{n|n,ℓ}) = \sum_{t=1}^{n} α^{n−t} log B_{kℓ,t},   (26)

where B_{kℓ,t} is defined as in (25). Thus our setup leads to the ratio of posterior model probabilities at time n being equal to an exponentially age-weighted sum of sample-specific Bayes factors, which is intuitively appealing.
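
To see why (26) holds, the two updating steps can be combined into a single recursion for the log posterior odds; a short derivation sketch in LaTeX, identifying the sample-specific Bayes factor with the ratio of one-step predictive densities:

```latex
% With c = 0, (17) gives \pi_{t|t-1,k} \propto \pi_{t-1|t-1,k}^{\alpha},
% and (19)-(20) give \pi_{t|t,k} \propto \pi_{t|t-1,k} f_k(y_t \mid Y_{t-1}), so
\log\frac{\pi_{t|t,k}}{\pi_{t|t,\ell}}
  = \alpha \log\frac{\pi_{t-1|t-1,k}}{\pi_{t-1|t-1,\ell}} + \log B_{k\ell,t},
\qquad
B_{k\ell,t} = \frac{f_k(y_t \mid Y_{t-1})}{f_\ell(y_t \mid Y_{t-1})}.
% Iterating from equal initial probabilities \pi_{0|0,k} = \pi_{0|0,\ell}
% telescopes to Eq. (26):
\log\frac{\pi_{n|n,k}}{\pi_{n|n,\ell}} = \sum_{t=1}^{n} \alpha^{\,n-t} \log B_{k\ell,t}.
```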

When α = λ = 1 there is no forgetting and we recover the solution for the static situation as a special case of DMA, as (25) and (26) are then equivalent. In this case, the method is recursive but not dynamic, and could be called recursive model averaging (RMA). This may have computational advantages over the standard computational methods for BMA. To our knowledge this also is new in the literature.

4. RESULTS FOR THE ROLLING MILL

For the rolling mill problem, the four candidate predictor variables of the deviation of the output thickness from its target value for sample t, yt, are:

  • ut: the deviation of the input thickness from its nominal value for sample t,
  • vt: the roll position (in microns),
  • wt: the ratio of the output rolling speed to the input rolling speed, and
  • zt: the rolling force applied to sample t.

In previous work, three models have been used, the first two of which are physically motivated (Ettler, Kárný, and Nedoma 2007). The third model was based on exploratory empirical work. All three have the regression form (1).

The first of these models is based on the gauge meter principle, which has been used in this area for several decades (e.g., Grimble 2006). It accounts for the stretching of the mill housing during rolling. The output thickness satisfies the equation

Y_t = v_t + f(z_t),   (27)

where f(z) is the nonlinear stretch function, depending on the rolling force z. If the stretch function is approximated by a linear function of the rolling force, the gauge meter principle leads to predicting output thickness as a linear function of the roll position and the rolling force. In our implementation of this model we generalize Equation (27) by allowing v_t to have a coefficient different from 1, yielding a more flexible model.

The second physically motivated model is based on the mass flow principle, which is commonly used for rolling mill control (e.g., Maxwell 1973). This follows from the continuity of material flow through the mill. It says that the ratio of input thickness to output thickness is equal to the ratio of output rolling speed to input rolling speed. This implies that output thickness can be predicted using the product of the input thickness and the ratio of the speeds. The ratio of the speeds is also included as a regressor in its own right to allow for the fact that a constant has been subtracted from the input thickness and to give some additional flexibility.

The predictors to which the three models correspond are as follows:

M_1: x_t^{(1)} = (1, v_t, z_t),   M_2: x_t^{(2)} = (1, w_t, u_t w_t),   M_3: x_t^{(3)} = (1, u_t, v_t, w_t).

We considered prediction using each of these models individually, and DMA based on the three models.

The theory underlying the physically based models does not exclude the possibility of removing any one of the four predictors from the model. To explore what happens when the model space is larger, we therefore considered all possible combinations of the four predictors; this yielded 2^4 = 16 models. We considered DMA with 17 models: these 16 models, together with model M2, which also includes the interaction term (u_t w_t). The full list of models is shown in Table 1.
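
This model space can be generated mechanically; a brief sketch (hypothetical names, in the same Python setting as the earlier sketches) that enumerates the 2^4 = 16 predictor subsets and appends the mass flow model M2 with its interaction term:

```python
from itertools import combinations

predictors = ['u', 'v', 'w', 'z']      # candidate regressors
# All 2^4 = 16 subsets, including the intercept-only model.
subsets = [list(s) for r in range(5) for s in combinations(predictors, r)]
models = [['1'] + s for s in subsets]  # every model keeps the intercept
models.append(['1', 'w', 'u*w'])       # M2: mass flow model with interaction
assert len(models) == 17
```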

Table 1
List of Prediction Models Used. The first three models are the physically motivated ones. The remaining 14 are the empirical models considered. See text for the definitions of the variables

To initialize the process, θ̂_0^{(k)} and Σ_0^{(k)} need to be specified for each model M_k. In general this should be done using external information, including the machine specifications and data on the rolling mill from other strips that have been run through; such data will generally accumulate rapidly. We did not have such data readily available, and to approximate this we used the data themselves to specify a prior distribution that was spread out relative to the precision available in the data. We specified θ̂_0^{(k)} = 0 for each k, and Σ_0^{(k)} = diag(s_1^{2(k)}, …, s_{ν_k}^{2(k)}). We used s_j^{2(k)} = Var(y_t)/Var(x_{t,j}^{(k)}) for j = 2, …, ν_k. The rationale for this is that, in linear regression, a regression coefficient for an input variable X is likely to be less than the standard deviation of the system output divided by the standard deviation of X. This is always the case when there is just one input variable, by the Cauchy–Schwarz inequality, and empirical evidence suggests that it holds more generally; see, for example, figure 1 in Raftery, Madigan, and Hoeting (1997).

The prior variance of the intercept term, s_1^{2(k)}, is trickier. We fitted a static linear regression model to the full dataset and used s_1^{2(k)} = β̂_0^2 + Var(y_t), where β̂_0 is the estimated intercept. The resulting initial distributions proved to be amply spread out relative to the estimates, and the results were not sensitive to reasonable modifications to them.
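
A minimal sketch of this data-based initialization for a single model; the names are ours, and in practice the quantities would preferably come from machine specifications or previously rolled strips rather than from the data being predicted:

```python
import numpy as np

def init_prior(y, X, beta0_hat):
    """Prior mean and covariance for one model.

    y         : output series, used only to set the prior spread
    X         : regressor matrix for this model, first column all ones
    beta0_hat : intercept from a static regression fit, used for s_1^2
    """
    nu = X.shape[1]
    s2 = np.empty(nu)
    s2[0] = beta0_hat ** 2 + np.var(y)          # intercept prior variance
    for j in range(1, nu):
        s2[j] = np.var(y) / np.var(X[:, j])     # slope prior variances
    return np.zeros(nu), np.diag(s2)            # theta_0_hat, Sigma_0
```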

We also needed to specify the forgetting factors, λ for the parameters and α for the models. In experiments we found that there was no gain from using different forgetting factors, and so we used λ = α, leaving just one forgetting factor to be specified. We used λ = α = 0.99 for all our experiments. We found that our results were relatively insensitive to modest changes in this value, in the range 0.97 to 0.995.

The evolution of posterior model probabilities for the three-model case is shown in Figure 4. This is a case where one of the models (M3) is clearly superior to the other two (M1 and M2). DMA quickly picks this up, and from Figure 4(a) we see that the posterior probability of M3 rapidly grows close to 1 once enough data have been collected to allow a comparison. However, this is not an absorbing state, and the other models occasionally become important, as can be seen from the plot for all 19,058 samples in Figure 4(b). Thus DMA is similar but not identical to prediction based solely on M3 in this case.

Figure 4
Posterior model probabilities for the three initially considered models. The correspondence between models and line types is the same in both plots. (a) Samples 26–200; (b) all samples, 26–19,058.

The relative performance of the different methods is shown in Table 2. For all the better methods, the prediction errors had stabilized by around sample 200. A key issue for controlling the rolling mill is how quickly prediction stabilizes after the initial transient situation. Thus we report performance results for the initial period, samples 26–200 (the first 25 samples cannot be predicted using the inputs because of the time delay of 24 samples), and the remaining samples 201–19,058. We report the mean squared value of the prediction error, MSE, the maximum absolute prediction error, MaxAE, and the number of samples for which the prediction error was greater than the desired tolerance of 10 microns.

Table 2
Sample statistics of prediction errors

Table 2 shows that M3 was much better than M1 or M2 for both periods, particularly the initial period. For the initial period, DMA with three models was slightly better than M3 on all three criteria we report, while for the stable period DMA was essentially the same as M3. Thus DMA allows us to avoid paying a penalty for our uncertainty about model structure in this case. Indeed, it even yields slightly more stable predictions in the initial unstable period, perhaps because it allows some weight to be given to the simpler models, M1 and M2 at the very beginning, before enough data has accumulated to estimate the more complex M3 accurately.

We now consider what happens when the space of candidate models is much larger, and 17 models are considered. Figure 5(a) shows the evolution of the posterior model probabilities in the initial unstable period. Only four models (M3, M15, M16, and M17) have more than negligible weight past the first 25 samples or so, and so, even with a large model space, DMA yields a relatively parsimonious solution. In the initial unstable period, the relatively simple model M15 had high weight, and in the later stable period, the more complex models M16 and M17 had more weight, as can be seen from Figure 5(b).

Figure 5
Posterior model probabilities for all 17 models. The legends are the same in both plots. (a) Samples 1–175; (b) samples 1–2000.

Table 2 shows, strikingly, that DMA with 17 models achieved significantly better performance in the initial unstable period than either M3 or DMA with three models, on all three criteria. This may be because it allows weight to be put on simple, parsimonious models in the early period, before enough stable data have accumulated to estimate more complex models reliably. In the later stable period, DMA with 17 models did slightly better than both DMA with three models and M3 on its own. Thus DMA yielded clear gains in the initial unstable period and smaller ones in the later stable period. It allowed us to avoid paying a price for model uncertainty, even when the model space was larger. Overall, including all possible combinations of regressors led to better performance.

In order to investigate why DMA with 17 models did so well in the initial unstable period, Figure 6(a) shows the prediction errors for M3 and DMA with 17 models. Figure 6(b) shows the absolute prediction error for DMA minus the absolute prediction error for M3 (so that positive values correspond to DMA doing better). Up to sample 50, there are large positive values, and in samples 50–100 there are consistent nonnegligible positive values. In samples 100–200 the differences are much smaller. This provides support for our conjecture: in essence, DMA does better in the initial unstable period because it is more adaptive than a single model, even a good one such as M3, and being adaptive is more important during the initial unstable period than later.

Figure 6
Comparison of prediction errors for the best of the three initially considered models and for DMA: (a) Prediction errors for Model 3, and for DMA based on all 17 models. (b) Absolute prediction errors for Model 3 minus absolute prediction errors for DMA. ...

It is important for a real-time application such as this that methods run fast. Our experiments were run in the interpreted statistical language R on a 2005 Apple Powerbook G4 laptop. Computer time scaled roughly linearly with the number of samples times the number of models. DMA took about 2 milliseconds per model per sample. It seems reasonable to expect that running it on newer hardware would speed it up by a factor of at least 4, and that implementing the method in a more efficient, compiled language would speed it up by a factor of 10 or more, suggesting speeds of about 0.05 milliseconds per model per sample with good hardware and software in 2008. If one requires that computations take no more than 20 milliseconds per sample, this suggests that with an efficient implementation, DMA could be used for the rolling mill with up to about 400 models. It is thus well within the range of practical application. Note that these time constraints preclude the use of methods that are much more computer-intensive than the ones described here.

5. SIMULATED EXAMPLES

In the previous section, we assessed our method on the basis of its predictive performance for the rolling mill data, which is what matters most for the application at hand. We now use four simulated examples, closely modeled on the rolling mill data, to assess the method’s ability to recover the generating mechanism, including whether it favors the right model and can track changing parameter values and model specifications.

In all four simulated examples, the length of the dataset (19,058 samples) and the predictor variables, ut, vt, wt, and zt are the same as in the rolling mill example. Only the system output, yt, is different.

In the first three simulated examples, the generating model remains constant over time. It is Model 15, which includes ut and vt. However, the analyst does not know what the generating model is, and so there is model uncertainty. In the first example, the regression parameters also remain constant over time. In the second example, a regression parameter changes smoothly over time, and in the third example a regression parameter changes abruptly.

The first three simulation examples are as follows, where βut, βvt, βwt, and βzt are the regression parameters for ut, vt, wt, and zt for sample t (a data-generating sketch follows the list):

  • Simulation 1 (constant parameters): βut = 0.35, βvt = 0.8, βwt = βzt = 0.
  • Simulation 2 (smooth change in βut):
    βut = 0.6 − 0.4t/12,000 if t < 12,000, and βut = −0.2 + 0.4t/12,000 otherwise;
    βvt = 0.8, βwt = βzt = 0.
  • Simulation 3 (abrupt change in βut at sample 12,000): βut = 0.6 for t < 12,000, βut = 0.2 for t ≥ 12,000, βvt = 0.8, βwt = βzt = 0.
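
A minimal sketch of how such data could be generated from the rolling mill predictors, assuming the predictor series u, v, w, z are available as arrays of length 19,058; the noise standard deviation sigma is our assumption (the text does not report the value used), and the intercept is taken as zero:

```python
import numpy as np

def simulate_output(u, v, w, z, scenario, sigma=1.0, rng=None):
    """Generate y_t for Simulations 1-3; sigma is an assumed noise level."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(u)
    t = np.arange(1, n + 1)
    if scenario == 1:                              # constant parameters
        beta_u = np.full(n, 0.35)
    elif scenario == 2:                            # smooth change in beta_u
        beta_u = np.where(t < 12_000, 0.6 - 0.4 * t / 12_000,
                          -0.2 + 0.4 * t / 12_000)
    else:                                          # abrupt change at t = 12,000
        beta_u = np.where(t < 12_000, 0.6, 0.2)
    beta_v = 0.8                                   # w and z have zero coefficients
    return beta_u * u + beta_v * v + sigma * rng.standard_normal(n)
```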

The results for the first simulated example are shown in Figure 7. The upper plot shows estimates and 95% confidence intervals for βut and the lower plot shows the posterior model probabilities for samples 11,000–13,000. The estimates of βut remain close to the true value, and the confidence interval contains the true value most (99.6%) of the time. The high coverage of the confidence intervals reflects the fact that the model anticipates changes in the regression parameter, so the intervals are wider than needed to capture a constant parameter.

Figure 7
Simulated example 1. Upper plot: Estimates (solid line), 95% confidence intervals (dotted) and true values (thick black) of the regression parameter for ut. Lower plot: Posterior model probabilities for samples 11,000–13,000. The true model is ...

Model 15 is the true generating model, and its posterior model probability is shown by the solid lines in the lower plot of Figure 7. It has the highest posterior probability among all the 17 models most (73%) of the time. Models 3, 16, and 17 have nonnegligible posterior probabilities much of the time also. Model 15 is nested within each of Models 3, 16, and 17, and so they are also “correct,” in the sense that they include the generating mechanism as a special case. The other 13 models are incorrect in that they do not include the generating mechanism. The posterior model probabilities of the 13 incorrect models are all shown in grey, and are small throughout.

For Simulation 2, the method tracks the value of the smoothly changing βut well, as shown in Figure 8. For Simulation 3, the method adapts quickly to the abrupt change at sample 12,000 (Figure 9). In both these cases, the posterior model probabilities perform well, similarly to Simulation 1.

Figure 8
Simulated example 2. Upper plot: Estimates (solid line), 95% confidence intervals (dotted) and true values (thick black) of the regression parameter for ut. Lower plot: Posterior model probabilities for samples 11,000–13,000. The true model is ...
Figure 9
Simulated example 3. Upper plot: Estimates (solid line), 95% confidence intervals (dotted) and true values (thick black) of the regression parameter for ut. Lower plot: Posterior model probabilities for samples 11,000–13,000. The true model is ...

Simulation 4 features a change of generating model at sample 12,000, from Model 15 containing ut and vt, to Model 3, which also includes wt. Figure 10 shows the results. The upper plot shows that the change in βwt from 0 to 50 at sample 12,000 is well captured. The lower plot shows the posterior model probabilities. Model 15 (solid) was the generating model for the first 12,000 samples, and it had the highest posterior probability for most (69%) of these samples. Model 3 (dashed) was the generating model for the remaining 7058 samples, and it had the highest posterior probability for most (65%) of these samples.

Figure 10
Simulated example 4: Upper plot: Estimates (solid line), 95% confidence intervals (dotted), and true values (thick solid) of the regression parameter for ut. Lower plot: Posterior model probabilities for samples 11,000–13,000. The true model up ...

Another way of looking at the posterior model probabilities is given by Figure 11. This shows the log posterior odds for Model 3 (containing ut, vt, and wt) against Model 15 (containing just ut and vt). This can thus be viewed as showing a measure of the evidence for an effect of wt; when it is positive it favors an effect, and when it is negative it favors no effect. In the first 12,000 samples, when there is no effect of wt, the posterior odds favor the no-effect hypothesis 81% of the time, and in the remaining samples, when there is an effect of wt, the posterior odds favor the hypothesis of an effect 79% of the time.

Figure 11
Simulation example 4: Log posterior odds for Model 3 (ut, vt, wt) against Model 15 (ut, vt) for samples 11,000–13,000. When the log posterior odds are below zero (shown by the thin horizontal line) Model 15 is favored, and when they are above ...

It is conventional to view Bayesian posterior odds as providing evidence “worth no more than a bare mention” when they are between 1/3 and 3, namely when the log posterior odds is less than 1.1 in absolute value (Jeffreys 1961; Kass and Raftery 1995). By this measure, the posterior odds made the wrong choice less than 1% of the time.

6. DISCUSSION

We have introduced a new method for real-time prediction of system outputs from inputs in the presence of model uncertainty, called dynamic model averaging (DMA). It combines a state–space model for the parameters of each candidate regression model with a Markov chain describing how the model governing the system switches. The combined model is estimated recursively. In experiments with data from a cold rolling mill with measurement time delay, we found that DMA led to improved performance relative to the best model considered in previous work in the initial unstable period, and that it allowed us to avoid paying a penalty for not knowing the correct model even when model uncertainty was considerable. Including all possible combinations of predictors gave better results than restricting ourselves to a small number of physically motivated models. Four simulated examples modeled on the rolling mill data indicated that the method was able to track both time-varying and constant regression parameters and model specifications quite successfully.

The procedure is largely automatic: the only user-specified inputs required are the forgetting factor and the prior mean and variance, which can be based on machine specifications and previous data. It would be possible to estimate the forgetting factor from external data by choosing it so as to optimize performance. It would also be possible to estimate it online so as to optimize predictive performance over past samples, but this would be more expensive computationally than the current method and would effectively preclude its use in the present application. Another possibility, suboptimal but computationally feasible, would be to select a small predefined grid of values λ_j for the forgetting factor and make the model indices equal to pairs (k, j). Then our proposed procedure would be applicable, albeit with a larger model space. We found that the method’s performance was relatively insensitive to reasonable changes in the forgetting factor.

One question that arises is how to check that the model fits the data. For our purposes, the most relevant aspect of the model is the predictive distribution f(y_t | Y_{t−1}) = \sum_{k=1}^{K} π_{t|t−1,k} f_k(y_t | Y_{t−1}), where the notation is as in (20). One could detect deviations from this in various ways. One way would be to form the residuals (y_t − ŷ_t^{DMA}) using (22) and carry out standard residual analysis on them. A refinement would be to form standardized residuals by dividing these residuals by their predictive standard deviation, namely the standard deviation of the model-averaged predictive distribution, the mixture distribution whose mean is given by the right-hand side of Equation (22). This is the square root of the model-averaged variance (Hoeting et al. 1999, p. 383). A further refinement would lead to assessing deviations from the model in terms of the observations and the predictive distribution itself, using the tools discussed by Gneiting, Balabdaoui, and Raftery (2007).
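
A minimal sketch of the standardized-residual computation just described, using the variance of the mixture predictive distribution (the model-averaged variance of Hoeting et al. 1999); the names are ours and the per-model predictive moments are assumed to have been stored during the DMA recursion:

```python
import numpy as np

def standardized_residual(y, pi_pred, y_hat, s2):
    """Residual of y against the DMA mixture predictive distribution.

    pi_pred : model weights pi_{t|t-1,k}
    y_hat   : per-model predictions x_t^(k)T theta_hat_{t-1}^(k)
    s2      : per-model predictive variances V^(k) + x_t^(k)T R_t^(k) x_t^(k)
    """
    y_dma = pi_pred @ y_hat
    # Mixture variance = weighted within-model variance + between-model spread.
    var_mix = pi_pred @ s2 + pi_pred @ (y_hat - y_dma) ** 2
    return (y - y_dma) / np.sqrt(var_mix)
```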

We have applied our method to model spaces of three and 17 models. For much bigger model spaces, however, it may not be feasible to run all the models in parallel. Such large model spaces do arise in regression problems, for example, with moderate to large numbers of candidate regressors where all possible combinations are considered. It would be possible to modify the method for this kind of situation. One way to do this would be via an “Occam’s window” approach (Madigan and Raftery 1994), in which only the current best model and other models whose posterior model probability is not too much less than that of the best model are “active,” and are updated. When the posterior probability of a model relative to the best model falls below a threshold, it is removed from the active group. Inactive models are periodically assessed, and if their predictive performance is good enough, they are brought into the active group. Methods along these lines have been proposed in other contexts under the name of model set adaptation (Li 2005; Li, Zhao, and Li 2005).

We have used exponential forgetting for the parameters of each model (Fagin 1964; Jazwinsky 1970), as specified by (5). If this is used as the basis for an automatic control procedure, however, there is a risk that Σt may become degenerate because the control process itself can induce high correlations between system inputs and output, and hence high posterior correlations between model parameters, which could lead to singularity or near-singularity of Σt. To avoid this, Kulhavý and Zarrop (1993) proposed “Bayesian forgetting,” in which a prior distribution is added to the recursion at each iteration. If the prior distribution is Gaussian, this would amount to adding the prior covariance matrix to Rt in (5), thus essentially regularizing the updating process. We applied this in the present context and it made no difference for our data. However, it could be worthwhile for a linearly controlled system.

Various alternatives to the present approach have been proposed. Raftery et al. (2005) proposed a windowed version of Bayesian model averaging, in which the predictive distribution is a mixture with one component per candidate model, and is estimated based on a sliding window of past observations. Ettler, Kárný, and Nedoma (2007) proposed several methods including the predictors-as-regressors (PR) approach, which consists of recursively estimating each candidate model as above, and then running another recursive estimation with the predictors from each model as regressors. Practical aspects of the approach were elaborated in Ettler and Andrýsek (2007).

The DMA model given by (11) and (12) that underlies our work is related to the conditional dynamic linear model (CDLM) (Ackerson and Fu 1970; Harrison and Stevens 1971; Chen and Liu 2000) that has dominated work on adaptive hybrid estimation of systems that evolve according to a Kalman filter model conditionally on an unobserved discrete process. However, it is not a special case of the CDLM, because the form of the state vector θt(k), and not just its value, depends on the model Mk. The CDLM would be given by (11) and (12) with θt(k) replaced by θt, where θt is the same for all models Mk. It could be argued that DMA could be recast as a CDLM by specifying xt to be the union of all regressors considered, and θt to be the set of regression coefficients for these regressors. In Equations (11) and (12), the νk-vector xt(k) would then be replaced by a ν-vector with zeros for the regressors that are not present in Mk.

The difficulty with this is that the state Equation (12), now in the form θ_t | L_t = k ~ N(θ_{t−1}, W_t^{(k)}), would no longer be realistic. The reason is that when the model changes, for example from a model with two correlated regressors to one with just one of the two regressors, then there is likely to be a big jump in θ_t, not a gradual change. To be realistic, the state equation would thus have to involve both L_t and L_{t−1}, which would be unwieldy when the number of models is not small. Our formulation avoids this difficulty.

ACKNOWLEDGMENTS

Raftery’s research was supported by National Science Foundation grant ATM 0724721, the Joint Ensemble Forecasting System (JEFS) under subcontract S06-47225 from the University Corporation for Atmospheric Research (UCAR), and the DoD Multidisciplinary Research Initiative (MURI) administered by the Office of Naval Research under grant N00014-01-10745. Kárný’s research was supported by grant number 1ET 100 750 401 from the Academy of Sciences of the Czech Republic, “Bayesian adaptive distributed decision making,” and by grant number 1M0572 from the Czech Ministry of Education, Youth and Sports (MSMT), “Research Center DAR.” The authors are grateful to Josef Andrýsek for helpful research assistance, and to the editor, the associate editor, and two anonymous referees for very helpful comments that improved the manuscript. Part of this research was carried out while Raftery was visiting ÚTIA.

Contributor Information

Adrian E. Raftery, University of Washington, Seattle, WA 98195-4322, (raftery@u.washington.edu).

Miroslav Kárný, Department of Adaptive Systems, Institute of Information Theory and Automation (ÚTIA), Czech Academy of Sciences, Prague, Czech Republic, (school@utia.cas.cz)

Pavel Ettler, COMPUREG Plzeň, s.r.o., 30634 Plzeň, Czech Republic, (ettler@compureg.cz)

REFERENCES

  • Ackerson GA, Fu KS. On State Estimation in Switching Environments. IEEE Transactions on Automatic Control. 1970;15:10–17.
  • Blom HAP, Bar-Shalom Y. The Interacting Multiple Model Algorithm for Systems With Markovian Switching Coefficients. IEEE Transactions on Automatic Control. 1988;33:780–783.
  • Chen R, Liu JS. Mixture Kalman Filters. Journal of the Royal Statistical Society, Ser. B. 2000;62:493–508.
  • Clyde M, George EI. Model Uncertainty. Statistical Science. 2004;19:81–94.
  • Dawid AP. Present Position and Potential Developments: Some Personal Views: Statistical Theory: The Prequential Approach. Journal of the Royal Statistical Society, Ser. A. 1984;147:278–292.
  • Eddy SR. Profile Hidden Markov Models. Bioinformatics. 1998;14:755–763.
  • Eddy SR. What Is a Hidden Markov Model? Nature Biotechnology. 2004;22:1315–1316.
  • Eicher T, Papageorgiou C, Raftery AE. Determining Growth Determinants: Default Priors and Predictive Performance in Bayesian Model Averaging. Journal of Applied Econometrics. 2009;24, to appear.
  • Ettler P, Andrýsek J. Mixing Models to Improve Gauge Prediction for Cold Rolling Mills. In: Preprints of the 12th IFAC Symposium on Automation in Mining, Mineral and Metal Processing; Québec City: Université Laval; 2007.
  • Ettler P, Jirovský F. Digital Controllers for Škoda Rolling Mills. In: Warwick K, Kárný M, Halousková A, editors. Advanced Methods in Adaptive Control for Industrial Application. Lecture Notes in Control and Information Sciences. Vol. 158. Berlin: Springer-Verlag; 1991. pp. 31–35.
  • Ettler P, Kárný M, Guy TV. Bayes for Rolling Mills: From Parameter Estimation to Decision Support. In: Proceedings of the 16th IFAC World Congress; Amsterdam: Elsevier; 2005.
  • Ettler P, Kárný M, Nedoma P. Model Mixing for Long-Term Extrapolation. In: Proceedings of the 6th EUROSIM Congress on Modelling and Simulation; Ljubljana: EUROSIM; 2007.
  • Evensen G. Sequential Data Assimilation With Nonlinear Quasi-Geostrophic Model Using Monte Carlo Methods to Forecast Error Statistics. Journal of Geophysical Research. 1994;99:143–162.
  • Fagin SL. Recursive Linear Regression Theory, Optimal Filter Theory, and Error Analyses of Optimal Systems. IEEE International Convention Record. 1964;(Part 1):216–240.
  • Fernández C, Ley E, Steel MFJ. Benchmark Priors for Bayesian Model Averaging. Journal of Econometrics. 2001;100:381–427.
  • Gneiting T, Balabdaoui F, Raftery AE. Probabilistic Forecasts, Calibration and Sharpness. Journal of the Royal Statistical Society, Ser. B. 2007;69:243–268.
  • Gordon NJ, Salmond DJ, Smith AFM. Novel Approach to Nonlinear/Non-Gaussian Bayesian State Estimation. IEE Proceedings–F. 1993;140:107–113.
  • Grimble MJ. Robust Industrial Control Systems: Optimal Design Approach for Polynomial Systems. Wiley; 2006.
  • Hamilton JD. A New Approach to the Economic Analysis of Non-stationary Time-Series and the Business Cycle. Econometrica. 1989;57:357–384.
  • Hannan EJ, McDougall AJ, Poskitt DS. Recursive Estimation of Autoregressions. Journal of the Royal Statistical Society, Ser. B. 1989;51:217–233.
  • Harrison PJ, Stevens CF. Bayesian Approach to Short-Term Forecasting. Operational Research Quarterly. 1971;22:341–362.
  • Harrison PJ, Stevens CF. Bayesian Forecasting. Journal of the Royal Statistical Society, Ser. B. 1976;38:205–247.
  • Hoeting JA, Madigan D, Raftery AE, Volinsky CT. Bayesian Model Averaging: A Tutorial (with discussion). Statistical Science. 1999;14:382–417.
  • Jazwinsky AW. Stochastic Processes and Filtering Theory. New York: Academic Press; 1970.
  • Jeffreys H. Theory of Probability. 3rd ed. Oxford: Clarendon Press; 1961.
  • Kárný M, Andrýsek J. Use of Kullback–Leibler Divergence for Forgetting. International Journal of Adaptive Control and Signal Processing. 2009;23, to appear.
  • Kass RE, Raftery AE. Bayes Factors. Journal of the American Statistical Association. 1995;90:773–795.
  • Kulhavý R, Zarrop MB. On a General Concept of Forgetting. International Journal of Control. 1993;58:905–924.
  • Leamer EE. Specification Searches: Ad hoc Inference With Nonexperimental Data. New York: Wiley; 1978.
  • Li XR. Multiple-Model Estimation With Variable Structure—Part II: Model-Set Adaptation. IEEE Transactions on Automatic Control. 2005;45:2047–2060.
  • Li XR, Jilkov VP. Survey of Maneuvering Target Tracking. Part V: Multiple-Model Methods. IEEE Transactions on Aerospace and Electronic Systems. 2005;41:1255–1321.
  • Li XR, Zhao ZL, Li XB. General Model-Set Design Methods for Multiple-Model Approach. IEEE Transactions on Automatic Control. 2005;50:1260–1276.
  • Ljung L. System Identification: Theory for the User. Englewood Cliffs, NJ: Prentice-Hall; 1987.
  • Madigan D, Raftery AE. Model Selection and Accounting for Model Uncertainty in Graphical Models Using Occam’s Window. Journal of the American Statistical Association. 1994;89:1535–1546.
  • Maxwell HS. Patent Number 3762194: Constant Speed Driven Continuous Rolling Mill. 1973. Assignee: General Electric Company.
  • Mazor E, Averbuch A, Bar-Shalom Y, Dayan J. Interacting Multiple Model Methods in Target Tracking: A Survey. IEEE Transactions on Aerospace and Electronic Systems. 1998;34:103–123.
  • Peterka V. Bayesian System Identification. In: Eykhoff P, editor. Trends and Progress in System Identification. Oxford: Pergamon Press; 1981. pp. 239–304.
  • Quinn A, Ettler P, Jirsa L, Nagy I, Nedoma P. Probabilistic Advisory Systems for Data-Intensive Applications. International Journal of Adaptive Control and Signal Processing. 2003;17:133–148.
  • Rabiner LR. A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proceedings of the IEEE. 1989;77:257–286.
  • Raftery AE. Approximate Bayes Factors for Generalized Linear Models. Technical Report 121, Department of Statistics, University of Washington, Seattle; 1988.
  • Raftery AE, Gneiting T, Balabdaoui F, Polakowski M. Using Bayesian Model Averaging to Calibrate Forecast Ensembles. Monthly Weather Review. 2005;133:1155–1174.
  • Raftery AE, Madigan D, Hoeting JA. Model Selection and Accounting for Model Uncertainty in Linear Regression Models. Journal of the American Statistical Association. 1997;92:179–191.
  • Smith AFM, West M. Monitoring Renal Transplants: An Application of the Multiprocess Kalman Filter. Biometrics. 1983;39:867–878.
  • Smith JQ. Generalization of the Bayesian Steady Forecasting Model. Journal of the Royal Statistical Society, Ser. B. 1979;41:375–387.
  • Smith JQ. The Multiparameter Steady Model. Journal of the Royal Statistical Society, Ser. B. 1981;43:256–260.
  • Smith RL, Miller JE. A Non-Gaussian State Space Model and Application to Prediction of Records. Journal of the Royal Statistical Society, Ser. B. 1986;48:79–88.
  • West M, Harrison PJ. Bayesian Forecasting and Dynamic Models. New York: Springer-Verlag; 1989.