Proc Am Control Conf. Author manuscript; available in PMC 2010 September 8.
Published in final edited form as:
Proc Am Control Conf. 2010 June 30; 2010: 6286–6292.
PMCID: PMC2935661
NIHMSID: NIHMS209912

A Novel Model Predictive Control Formulation for Hybrid Systems With Application to Adaptive Behavioral Interventions

Abstract

This paper presents a novel model predictive control (MPC) formulation for linear hybrid systems. The algorithm relies on a multiple-degree-of-freedom formulation that enables the user to adjust the speed of setpoint tracking, measured disturbance rejection and unmeasured disturbance rejection independently in the closed-loop system. Consequently, controller tuning is more flexible and intuitive than relying on move suppression weights as traditionally used in MPC schemes. The formulation is motivated by the need to achieve robust performance when using the algorithm in emerging applications, for instance, as a decision policy for adaptive, time-varying interventions used in behavioral health. The proposed algorithm is demonstrated on a hypothetical adaptive intervention problem inspired by the Fast Track program, a real-life preventive intervention for improving parental function and reducing conduct disorder in at-risk children. Simulation results in the presence of simultaneous disturbances and significant plant-model mismatch are presented. These demonstrate that a hybrid MPC-based approach for this class of interventions can be tuned for desired performance under demanding conditions that resemble the participant variability experienced in practice when applying an adaptive intervention to a population.

Index Terms: Hybrid systems, Model predictive control, Robust performance, Behavioral interventions

I. INTRODUCTION

Hybrid systems are characterized by interactions between continuous and discrete dynamics. The term hybrid has also been used to describe processes that involve continuous dynamics and discrete (logical) decisions [1], [2]. Applications of hybrid systems have arisen in manufacturing systems, automobile control, and process control, among others. In recent years, significant emphasis has been given to modeling, identification and control of linear and nonlinear hybrid systems (e.g., see [2] and references therein). However, less attention has been given to means for robustifying these algorithms and to how such hybrid control systems can be tuned to tolerate uncertainty. This paper focuses on these issues for hybrid systems.

Achieving desired performance under significant uncertainty in hybrid systems is an important consideration for many practical applications. Consider, for example, inventory management in supply chains, where factory production may occur at discrete levels. The dynamics of these systems can be highly uncertain [3]. Another example is adaptive interventions in behavioral health, which are receiving increasing attention as a means to address the prevention and treatment of chronic, relapsing disorders such as drug abuse [4]. In an adaptive intervention, dosages of intervention components are assigned based on the values of tailoring variables that reflect some measure of outcome or adherence. Recent work has shown the relationship between forms of adaptive interventions and feedback control systems [5]. In practice, these problems are hybrid in nature because dosages of intervention components correspond to discrete values. These interventions have to be implemented on a population or cohort that may display significant levels of interindividual variability. Hence a novel problem formulation is necessary in order to ensure that the decision policy makes appropriate decisions for all members of a population, without demanding excessive modeling effort.

This paper presents a novel formulation of model predictive control (MPC) for linear hybrid systems, which is amenable to achieving robust performance. The proposed scheme extends the use of the mixed logical dynamical (MLD) framework [1] to the control of hybrid systems subjected to both measured and unmeasured disturbances. The formulation offers multiple-degree-of-freedom tuning for the control of linear hybrid systems that handles measured as well as unmeasured disturbances. This multiple-degree-of-freedom formulation is patterned after the work of Wang and Rivera [3], which relies on a filter parametrization proposed by Lee and Yu [6] for unmeasured disturbance rejection and anticipation of measured disturbances in non-hybrid discrete-time systems. The formulation provides tuning parameters for adjusting the speeds of setpoint tracking and disturbance rejection independently for each output of the closed-loop system. Systems with and without integrating dynamics are considered to allow for a broader base of applications.

The paper is organized as follows: Section 2 develops the MPC formulation with the multiple-degree-of-freedom tuning. A case study problem of a hypothetical adaptive intervention based on the Fast Track program, its hybrid modeling and simulation results are discussed in Section 3. A summary and conclusions are presented in Section 4.

II. MODEL PREDICTIVE CONTROL FOR HYBRID SYSTEMS

Model predictive control (MPC) is a form of closed-loop control where the current value of the manipulated variables is determined online as the solution of an optimal control problem over a horizon of given length. The behavior of the system over the horizon is predicted with a model, using the current plant state estimate as the initial state for this prediction. When information about the plant state becomes available at the next sampling instant, the model is updated and the optimization is repeated over a shifted horizon. The ability to systematically include constraints, the capability to handle plants with multiple inputs and outputs, the flexibility given to the user to define a cost function, and its disturbance rejection properties have made MPC an attractive technique in the process industries, and beyond [7].
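
To make the receding-horizon idea concrete, the following minimal sketch (illustrative only: the scalar plant, horizon, cost and discrete input levels are arbitrary and unrelated to the case studies later in the paper) enumerates all input sequences over a short horizon, applies only the first move, and repeats at the next sampling instant:

# Minimal receding-horizon sketch: a scalar plant y(k+1) = y(k) + K*u(k) - d with
# the input restricted to a few discrete levels, so the finite-horizon problem can
# be solved by simple enumeration. All values are illustrative.
import itertools
import numpy as np

K, d, r = 0.9, 0.2, 5.0            # gain, constant disturbance, setpoint
levels = [0.0, 0.5, 1.0]           # admissible discrete input values
p = 4                              # short prediction horizon

def best_input_sequence(y0):
    """Enumerate all input sequences over the horizon and keep the cheapest one."""
    best, best_cost = None, np.inf
    for seq in itertools.product(levels, repeat=p):
        y, cost = y0, 0.0
        for u in seq:
            y = y + K * u - d              # model prediction
            cost += (y - r) ** 2           # quadratic tracking cost
        if cost < best_cost:
            best, best_cost = seq, cost
    return best

y = 0.0
for k in range(20):
    u = best_input_sequence(y)[0]          # apply only the first move
    y = y + K * u - d                      # "plant" responds (here identical to the model)
    print(k, u, round(y, 3))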

A. Controller Model

The MPC controller presented in this paper uses the MLD-based model framework [1] described as follows,

x(k+1) = A x(k) + B_1 u(k) + B_2 δ(k) + B_3 z(k) + B_d d(k)
(1)

y(k+1) = C x(k+1) + d′(k+1) + ν(k)
(2)

E_5 ≤ E_2 δ(k) + E_3 z(k) + E_4 y(k) + E_1 u(k) + E_d d(k)
(3)

where x and u represent the states (both discrete and continuous) and inputs (both discrete and continuous) of the system, y is a vector of outputs, and d, d′ and ν represent measured disturbances, unmeasured disturbances and measurement noise, respectively. δ and z are discrete and continuous auxiliary variables that are introduced in order to convert logical/discrete decisions into their equivalent linear inequality constraints, summarized in (3) (for details, see [1]). The framework permits the user to include and prioritize constraints and to incorporate heuristic rules in the description of the model. Because disturbances are an inherent part of any process, it is necessary to incorporate them in the controller model that defines the control system. Equations (1)–(3) are the MLD framework of [1], modified by incorporating measured and unmeasured disturbances. The model lumps the effect of all unmeasured disturbances on the outputs only, which is common practice in the process control literature [3], [6]. We consider d′, the unmeasured disturbance, to be a stochastic signal, described as follows,

x_w(k+1) = A_w x_w(k) + B_w w(k)
(4)

d′(k+1) = C_w x_w(k+1)
(5)

where A_w has all eigenvalues inside the unit circle and w(k) is a vector of integrated white noise. Here it is assumed that the disturbance effects are uncorrelated; thus B_w = C_w = I and A_w = diag{α_1, …, α_ny}, where ny is the number of outputs. To take advantage of the well-understood properties of white noise, the disturbance and system models are written in difference form and augmented as follows,

X(k+1) = 𝒜 X(k) + ℬ_1 Δu(k) + ℬ_2 Δδ(k) + ℬ_3 Δz(k) + ℬ_d Δd(k) + ℬ_w Δw(k)
(6)

y(k+1) = 𝒞 X(k+1)
(7)

where,

X(k) = [Δx^T(k)  Δx_w^T(k)  y^T(k)]^T

𝒜 = [ A     0     0
      0     A_w   0
      CA    A_w   I ]

ℬ_i = [ B_i
        0
        C B_i ],   i = 1, 2, 3, d

ℬ_w = [ 0
        I
        I ]

𝒞 = [ 0   0   I ]

Here Δ(·)(k) = (·)(k) − (·)(k − 1) for any signal, and Δw(k) is a white noise sequence.
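
As an illustration of the augmentation in (6)–(7), the following sketch assembles 𝒜, ℬ_i, ℬ_w and 𝒞 from A, B_i, C and A_w with numpy, under the stated assumption B_w = C_w = I; the example system at the end is arbitrary.

# Sketch: building the augmented (difference-form) model matrices of (6)-(7).
# Assumes B_w = C_w = I, as in the text; dimensions are inferred from A and C.
import numpy as np

def augment(A, B_list, C, alpha):
    """Return calA, [calB_i], calB_w, calC for the augmented state [dx; dx_w; y]."""
    nx, ny = A.shape[0], C.shape[0]
    Aw = np.diag(alpha)                                   # Aw = diag{alpha_1, ..., alpha_ny}
    calA = np.block([
        [A,                  np.zeros((nx, ny)), np.zeros((nx, ny))],
        [np.zeros((ny, nx)), Aw,                 np.zeros((ny, ny))],
        [C @ A,              Aw,                 np.eye(ny)]])
    calB = [np.vstack([B, np.zeros((ny, B.shape[1])), C @ B]) for B in B_list]
    calBw = np.vstack([np.zeros((nx, ny)), np.eye(ny), np.eye(ny)])
    calC = np.hstack([np.zeros((ny, nx)), np.zeros((ny, ny)), np.eye(ny)])
    return calA, calB, calBw, calC

# Arbitrary 2-state, 1-input, 1-output example:
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B1 = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
calA, calB, calBw, calC = augment(A, [B1], C, alpha=[0.5])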

B. MPC Problem

In this work, we use a quadratic cost function of the form,

min over {u(k+i)}_{i=0}^{m−1}, {δ(k+i)}_{i=0}^{p−1}, {z(k+i)}_{i=0}^{p−1} of

J ≜ Σ_{i=1}^{p} ‖Q_y (y(k+i) − y_r)‖² + Σ_{i=0}^{m−1} ‖Q_Δu Δu(k+i)‖² + Σ_{i=0}^{m−1} ‖Q_u (u(k+i) − u_r)‖²
    + Σ_{i=0}^{p−1} ‖Q_δ (δ(k+i) − δ_r)‖² + Σ_{i=0}^{p−1} ‖Q_z (z(k+i) − z_r)‖²
(8)

subject to the mixed integer constraints of (3) and various process and safety constraints,

y_min ≤ y(k+i) ≤ y_max,   1 ≤ i ≤ p
(9)

u_min ≤ u(k+i) ≤ u_max,   0 ≤ i ≤ m − 1
(10)

Δu_min ≤ Δu(k+i) ≤ Δu_max,   0 ≤ i ≤ m − 1
(11)

p is the prediction horizon and m is the control horizon. u_min, u_max and y_min, y_max are lower and upper bounds on the inputs and outputs, respectively. (·)_r denotes a reference trajectory and ‖·‖ the 2-norm. Q_y, Q_Δu, Q_u, Q_δ and Q_z are penalty weights on the control error, move size, control signal, auxiliary binary variables and auxiliary continuous variables, respectively. The objective function in (8) follows that of [1] with an additional move suppression term. Moreover, the objective function (8) and the linear model (1)–(3) are governed by both binary and continuous variables. Thus, (8), together with the linear system dynamics and mixed integer constraints (1)–(3) and the linear inequality constraints (9)–(11), forms a mixed integer quadratic program (MIQP). The size of this MIQP varies with the settings for the prediction and control horizons.
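
The optimization described above is an MIQP: a quadratic objective in mixed binary and continuous decision variables subject to linear inequality constraints. As a minimal, self-contained illustration (not the controller itself: the problem data below are arbitrary, and a mixed-integer-capable backend such as CPLEX, GUROBI or SCIP must be available to cvxpy), a problem of this structure can be posed and solved as follows:

# Illustrative only: a tiny MIQP with the same structure as the condensed problem,
# i.e. a quadratic cost over continuous and binary variables with linear constraints.
import numpy as np
import cvxpy as cp

u = cp.Variable(2)                        # continuous decisions (e.g. inputs / auxiliary z)
delta = cp.Variable(2, boolean=True)      # binary auxiliary decisions
xi = cp.hstack([u, delta])                # stacked decision vector

H = np.diag([2.0, 2.0, 1.0, 1.0])         # positive semidefinite Hessian (arbitrary)
G = np.array([-4.0, -2.0, 0.5, 0.5])      # linear cost term (arbitrary)
S = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
b = np.array([3.0, 1.0])                  # S xi <= b

prob = cp.Problem(cp.Minimize(0.5 * cp.quad_form(xi, H) + G @ xi), [S @ xi <= b])
prob.solve()                              # dispatched to an installed mixed-integer solver
print(prob.status, xi.value)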

The MPC problem described by (8)–(11) requires future predictions of the outputs and of the mixed integer constraints in (3). These predictions can be obtained by propagating (6), (7) and (3) p steps into the future, which results in the following prediction equations,

𝒴(k+1) = Φ X(k) + ℋ_1 𝒰(k) + ℋ_2 δ̄(k) + ℋ_3 Ƶ(k) + ℋ_d 𝒟(k) − H_11 u(k−1) − H_21 δ(k−1) − H_31 z(k−1) − H_d1 d(k−1)
(12)

Ē_5 ≤ Ē_2 δ̄(k) + Ē_3 Ƶ(k) + Ē_1 𝒰(k) + Ē_4 𝒴(k) + Ē_d 𝒟(k)
(13)

Equation (13) can be further simplified by substituting for 𝒴(k) and rewritten as,

ε_5 ≤ ε_2 δ̄(k) + ε_3 Ƶ(k) + ε_1 𝒰(k) + ε_4 X(k) + ε_d 𝒟(k) − ε_41 u(k−1) − ε_42 δ(k−1) − ε_43 z(k−1) − ε_4d d(k−1)
(14)

where 𝒴(k + 1), 𝒰, δ̄, Ƶ and 𝒟 are future values of the outputs, inputs, auxiliary binary variables, auxiliary continuous variables and measured disturbances, as given below:

𝒴(k+1) = [y^T(k+1)  y^T(k+2)  ⋯  y^T(k+p)]^T
(15)

𝒰(k) = [u^T(k)  ⋯  u^T(k+m−1)]^T
(16)

δ̄(k) = [δ^T(k)  ⋯  δ^T(k+p−1)]^T
(17)

Ƶ(k) = [z^T(k)  ⋯  z^T(k+p−1)]^T
(18)

𝒟(k) = [d^T(k)  ⋯  d^T(k+p−1)]^T
(19)

and Φ, ℋ_*, H_*1, ε_* and Ē_* are the appropriate coefficient matrices, which can be generated using (3), (6) and (7) and are given in Appendix-I.

It should be noted that 𝒰, δ̄ and Ƶ are the decision variables of the MPC problem described in (8)–(11) and can be found by an MIQP optimizer. 𝒟 is a forecast of the measured disturbances provided externally to the algorithm. Using (12)–(19), the MPC problem in (8)–(11) can be converted into a standard mixed integer quadratic program (MIQP) as follows:

min_ξ  J ≜ (1/2) ξ^T ℋ ξ + 𝒢^T ξ
(20)

𝒮 ξ ≤ b
(21)

where ξ = [𝒰(k)^T  δ̄(k)^T  Ƶ(k)^T]^T is the vector of decision variables. The Hessian matrix ℋ, the gradient 𝒢 and the constraint matrices 𝒮 and b can be obtained using (A.3)–(A.14); the details are omitted for the sake of brevity. This problem can then be solved by any available MIQP solver; in this work we have used the TOMLAB/CPLEX solver. The algorithm requires externally generated reference trajectories, forecasts of measured disturbances, and estimates of the (disturbance-free) initial states X(k), all of which influence the robust performance of the proposed formulation; this is discussed next.

C. Three-Degree-of-Freedom Tuning Parameters

The MPC formulation presented earlier is facilitated by three-degree-of-freedom tuning, in which the performance requirements associated with setpoint tracking, unmeasured disturbance rejection and anticipated measured disturbance rejection can be adjusted independently. This multiple-degree-of-freedom functionality affects the controller response more intuitively and conveniently than traditional penalty weights, as follows:

(I) Reference Trajectory and Setpoint Tracking

The output reference trajectory 𝒴_r is generated using a filter as follows,

y_r^j(k+i) = f(q, α_r^j) y_target^j,   1 ≤ j ≤ ny,   1 ≤ i ≤ p
(22)

where f(q, α_r^j) is a Type-I discrete-time filter for the jth reference, which can be written as [8]:

f(q, α_r^j) = (1 − α_r^j) q / (q − α_r^j),   1 ≤ j ≤ ny
(23)

The setpoint tracking speed can be adjusted by choosing α_r^j in [0, 1) for each individual output: the smaller the value of α_r^j, the faster the response for that setpoint. Thus, setpoint tracking speed can be adjusted for each output individually, which is more intuitive than adjusting move suppression weights. Note that the reference trajectories for the inputs and the auxiliary binary and continuous variables (i.e. 𝒰_r, δ̄_r and Ƶ_r) are held constant over the prediction horizon at their respective predefined target values.
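
As a brief illustration of (22)–(23), the reference trajectory for one output can be generated by the equivalent first-order recursion below. This is only a sketch: initializing the recursion at the current output (so that the trajectory moves from y(k) toward the target) is an assumption of the example, and the numbers are arbitrary.

# Sketch: Type-I reference trajectory of (22)-(23) for a single output.
# The filter (1 - alpha) q / (q - alpha) acting on a constant target reduces to
# the first-order recursion below; alpha_r in [0, 1) sets the tracking speed.
import numpy as np

def reference_trajectory(y_now, y_target, alpha_r, p):
    """Return y_r(k+1), ..., y_r(k+p), starting the recursion at the current output."""
    traj = np.empty(p)
    y = y_now
    for i in range(p):
        y = alpha_r * y + (1.0 - alpha_r) * y_target
        traj[i] = y
    return traj

print(reference_trajectory(y_now=10.0, y_target=50.0, alpha_r=0.9, p=5))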

(II) Measured Disturbance Rejection

It was noted earlier that the proposed formulation uses an externally generated forecast of the measured disturbances, as in (19). The speed with which each measured disturbance is rejected can be adjusted independently by using a filter f(q, α_d^j), 1 ≤ j ≤ ndist, for each measured disturbance, where ndist is the number of measured disturbances and α_d^j is a tuning parameter in [0, 1) for the jth measured disturbance. The smaller α_d^j, the faster that disturbance is rejected. The filtered forecast is then used as the anticipated measured disturbance signal. The filter f(q, α_d^j) can be chosen as Type-I or Type-II (asymptotically tracking a step or a ramp, respectively [8]), according to the nature of the system dynamics; for example, a Type-II filter should be used for integrating system dynamics [3]. The Type-I filter structure is given in (23), and the Type-II filter structure can be written as follows [8]:

f(q, α_d^j) = (β_0 + β_1 q^{−1} + ⋯ + β_ω q^{−ω}) × (1 − α_d^j) q / (q − α_d^j)
(24)

β_k = 6 k α_d^j / [(1 − α_d^j) ω (ω+1)(2ω+1)],   1 ≤ k ≤ ω
(25)

β_0 = 1 − (β_1 + ⋯ + β_ω)
(26)
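
For completeness, the Type-II filter coefficients in (25)–(26) can be computed as in the short sketch below, which assumes the form of (25) as written above; the values of α_d and ω are arbitrary.

# Sketch: coefficients of the Type-II filter (24)-(26) for one measured-disturbance
# channel, assuming the expression for beta_k given in (25).
import numpy as np

def type2_filter_coeffs(alpha_d, omega):
    k = np.arange(1, omega + 1)
    beta = 6.0 * k * alpha_d / ((1.0 - alpha_d) * omega * (omega + 1) * (2 * omega + 1))
    beta0 = 1.0 - beta.sum()                 # eq. (26)
    return beta0, beta                       # beta_0 and beta_1 ... beta_omega

beta0, beta = type2_filter_coeffs(alpha_d=0.8, omega=5)
print(beta0, beta)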

(III) Unmeasured Disturbance Rejection and State Estimation

As discussed earlier, the proposed MPC scheme requires the initial state X(k) of the augmented system at each time instant. The states can be estimated from the current measurements y(k), while rejecting the unmeasured disturbance, using a Kalman filter as follows:

X(k|k−1) = 𝒜 X(k−1|k−1) + ℬ_1 Δu(k−1) + ℬ_2 Δδ(k−1) + ℬ_3 Δz(k−1) + ℬ_d Δd(k−1)
(27)

X(k|k) = X(k|k−1) + K_f (y(k) − 𝒞 X(k|k−1))
(28)

Here K_f is the filter gain, an optimal value of which can be found by solving an algebraic Riccati equation. However, this requires the covariance matrices of the unmeasured disturbance and measurement noise, which may not be known accurately. Therefore, following Lee and Yu [6] (for A_w = diag{α_1, …, α_ny}), we use the following parametrization of the filter gain,

K_f = [ 0
        F_b
        F_a ]
(29)

where

F_a = diag{(f_a)_1, …, (f_a)_ny}
(30)

F_b = diag{(f_b)_1, …, (f_b)_ny}
(31)

(f_b)_j = (f_a)_j² / (1 + α_j − α_j (f_a)_j),   1 ≤ j ≤ ny
(32)

(f_a)_j is a tuning parameter that lies between 0 and 1. The unmeasured disturbances are rejected using the state observer described by (27)–(32), and the speed of rejection is governed by the tuning parameter (f_a)_j. As (f_a)_j approaches zero, the state estimator increasingly ignores the prediction error correction, and the control solution is determined mainly by the deterministic model (27) and the feedforward anticipation signal. On the other hand, the state estimator tries to compensate for all of the prediction error as (f_a)_j approaches 1; in this case the controller becomes extremely aggressive. Thus, by adjusting (f_a)_j one may influence each output individually, which is more intuitive than tuning move suppression weights in a traditional MPC formulation. It should be noted that the MLD model presented in (1)–(3) can also accommodate discrete outputs; however, in this work we assume that all outputs are continuous. This assumption enables the use of the parameterized filter gain K_f in (29).
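
The filter-gain parametrization (29)–(32) and the observer recursion (27)–(28) can be sketched as follows. This is illustrative only: for brevity the update keeps only the input term of (27), and the tuning values and dimensions are arbitrary.

# Sketch: filter gain K_f from (29)-(32) and one observer update of (27)-(28).
import numpy as np

def filter_gain(f_a, alpha, nx):
    """f_a and alpha are length-ny arrays with entries in (0, 1); nx is the plant state dimension."""
    f_a, alpha = np.asarray(f_a), np.asarray(alpha)
    f_b = f_a**2 / (1.0 + alpha - alpha * f_a)        # eq. (32)
    ny = len(f_a)
    return np.vstack([np.zeros((nx, ny)), np.diag(f_b), np.diag(f_a)])   # eq. (29)

def observer_update(X_prev, du, calA, calB1, calC, Kf, y_meas):
    """One step of (27)-(28); only the Delta-u term of (27) is kept for brevity."""
    X_pred = calA @ X_prev + calB1 @ du               # eq. (27), input term only
    return X_pred + Kf @ (y_meas - calC @ X_pred)     # eq. (28)

Kf = filter_gain(f_a=[0.3], alpha=[0.5], nx=2)        # arbitrary single-output example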

D. Example: Production-Inventory System

As an illustration of the algorithm we consider a production-inventory system, the basic node of a supply chain [9]. This system can be modeled using a fluid analogy in which the inventory is represented as a tank containing fluid, while the factory is represented as a pipe with throughput time θ + 1 sampling instants and yield K. The net stock y(k), representing the fluid level, is the controlled variable and the factory starts u(k) (input pipe flow) are the manipulated variable, while the outflow represents demand d(k), composed of an unforecasted component d_u(k) and a forecasted component d_f(k − θ_f), with θ_f representing the forecast time. The dynamics of this system are given as follows,

y(k+1) = y(k) + K u(k−θ) − d(k)
(33)

where d(k) = d_f(k − θ_f) + d_u(k) is the total customer demand. The operational goal of this system is to meet the customer demand d(k) while maintaining the net stock inventory at a predefined target. This can be accomplished by manipulating the factory starts u(k) and by feedforward compensation of the forecasted demand d_f(k). The manipulated input u(k) is restricted to only four values: 0, 33.33, 66.66 and 100. Thus, the plant in (33) is a hybrid system with a discrete input and a continuous output y(k). In this case we consider a sampling interval of T = 1 day, θ_f = p = 30 days, system gain K = 0.9 and a throughput time of 4 days.
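
A compact simulation sketch of the production-inventory dynamics (33) with the quantized factory starts described above is given below. The naive rule used to pick the starts (rounding an "ideal" start to the nearest admissible level) is purely illustrative and is not the hybrid MPC law of this paper; the demand statistics and initial conditions are arbitrary.

# Sketch: simulating (33) with factory starts restricted to {0, 33.33, 66.66, 100}.
# The quantized decision rule below is illustrative only (not the MPC policy).
import numpy as np

K, theta = 0.9, 3                          # yield K and delay theta (throughput time theta + 1 = 4 days)
levels = np.array([0.0, 33.33, 66.66, 100.0])
target, horizon = 400.0, 60

y = 300.0                                  # initial net stock
u_hist = [0.0] * theta                     # starts already in the pipeline
rng = np.random.default_rng(0)
for k in range(horizon):
    d = 50.0 + rng.normal(0.0, 5.0)        # demand: forecasted part plus unforecasted noise
    ideal = (target - y + d) / K           # start that would ideally restore the target
    u = levels[np.argmin(np.abs(levels - ideal))]    # quantize to an admissible level
    u_hist.append(u)
    y = y + K * u_hist[-theta - 1] - d     # eq. (33): delayed, scaled starts minus demand
    print(k, u, round(y, 1))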

Fig. 1 shows how the parameters α_r, α_d and f_a can be manipulated to affect the nominal speed of response, and hence the robustness, of the algorithm. The solid and dashed lines represent the controller performance for two different settings of α_r, α_d and f_a, as noted in the figure caption. It can be seen that the speeds of setpoint tracking and measured disturbance rejection are inversely related to the values of α_r and α_d, respectively, while the speed of unmeasured disturbance rejection increases with the value of f_a.

Fig. 1
Evaluation of tuning parameters, inventory example: (i) Dashed line: αr = 0, αd = 0., fa = 1 (ii) Solid line: αr = 0.9, αd = 0.9, fa = 0.3. A measured disturbance df (k) and an unmeasured disturbance du(k) enter at day ...

III. CASE STUDY: ADAPTIVE INTERVENTIONS

As a representative case study of a time-varying adaptive behavioral intervention we examine the Fast Track program [10]. Fast Track was a multi-year, multi-component program designed to prevent conduct disorders in at-risk children. Youth showing conduct disorder are at increased risk for incarceration, injury, depression, substance abuse, and death by homicide or suicide. In Fast Track, some intervention components were delivered universally to all participants, while other specialized components were delivered adaptively. In this paper we focus on a hypothetical adaptive intervention described in [4] for assigning family counseling, which was provided to families on the basis of parental functioning. There are several possible levels of intensity, or doses, of family counseling. The idea is to vary the doses of family counseling depending on the needs of the family, in order to avoid providing an insufficient amount of counseling for very troubled families, or wasting counseling resources on families that may not need them or be stigmatized by excessive counseling. The decision about which dose of counseling to offer each family is based primarily on the family’s level of functioning, assessed by a family functioning questionnaire completed by the parents. As described in [4], based on the questionnaire and the clinician’s assessment, family functioning is determined to fall in one of the following categories: very poor, poor, near threshold, or at/above threshold. A corresponding decision rule that can be applied is as follows: families with very poor functioning are given weekly counseling; families with poor functioning are given biweekly counseling; families with near threshold functioning are given monthly counseling; and families at or above threshold are given no counseling. Family functioning is reassessed at a review interval of three months, at which time the intervention dosage may change. This goes on for three years, with twelve opportunities for a dose of family counseling to be assigned.

Rivera et al. [5] analyzed the intervention by means of a fluid analogy, as described in Fig. 2. Parental function PF(k) is treated as liquid in a tank, which is depleted by exogenous disturbances D(k). The tank is replenished by the intervention I(k), which is the manipulated variable. The fluid analogy enables the development of a mathematical model of the open-loop dynamics of the intervention using the principle of conservation of mass. The model consists of the following difference equations, which relate parental function PF(k) to the intervention I(k):

PF(k+T) = PF(k) + K_I I(k−θ) − D(k)
(34)

D(k) = Σ_{i=1}^{n_d} D_i(k)
(35)

PF_meas(k) = PF(k) + N(k)
(36)

PF(k) is the parental function, I(k) is the intervention dosage (frequency of counselor home visits), K_I is the intervention gain, θ represents the time delay between the intervention and its effect on parental function, PF_meas(k) is the measured parental function, D(k) is the source of parental function depletion and N(k) represents measurement noise. The frequency of counselor visits is restricted: visits can only occur weekly, biweekly, monthly, or not at all. This requires the intervention I(k) to take only four values: 0, I_weekly, I_biweekly and I_monthly. Thus, the problem has inherent discreteness along with continuous dynamics and can be characterized as a hybrid dynamical system. The parental function system may be modeled using the MLD framework. In order to capture the discreteness in the intervention, four binary auxiliary variables δ_1, δ_2, δ_3, δ_4 and four continuous auxiliary variables I_1, I_2, I_3, I_4 are introduced (a sketch of this encoding is given below); the detailed description of the MLD model is not presented here for the sake of brevity. As noted in the Introduction, there exists variability in the dynamics among families participating in the intervention, and this needs to be addressed when using control algorithms to assign dosages in an adaptive intervention.
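
As a sketch of the encoding referred to above (consistent with, though not necessarily identical to, the authors' detailed MLD model), the dosage restriction can be written as

δ_1(k) + δ_2(k) + δ_3(k) + δ_4(k) = 1,   δ_i(k) ∈ {0, 1}
I(k) = I_1(k) + I_2(k) + I_3(k) + I_4(k)
I_1(k) = I_weekly δ_1(k),   I_2(k) = I_biweekly δ_2(k),   I_3(k) = I_monthly δ_3(k),   I_4(k) = 0 · δ_4(k)

so that exactly one dosage level is selected at each review interval. Because the dosage levels are constants these relations are linear, and in the general MLD form such logical conditions are rewritten as inequality constraints of the type collected in (3).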

Fig. 2
Fluid analogy corresponding to the hypothetical adaptive intervention. Parental function PF(k) is treated as material (inventory) in a tank, which is depleted by disturbances D(k) and replenished by intervention dosage I(k), which is the manipulated variable. ...

In the case study the controller is based on a nominal model (K_I = 0.15 and θ = 0) with prediction horizon p = 30 and control horizon m = 10. The TOMLAB/CPLEX solver is used to solve the resulting MIQP optimization problem. Figure 3(a) documents the simulation results of the proposed MPC formulation for various levels of gain and delay mismatch in the presence of a setpoint change in parental function to 50% and a simultaneous step unmeasured disturbance D(k) = 5. Tuning parameters Q_y = 1, Q_Δu = Q_u = Q_δ = Q_z = 0 and (α_r, f_a) = (0, 0.3) are used for all cases. From many possible cases, six cases involving gain and delay mismatch (ΔK_I, Δθ) are presented in this paper: case 1: (0, 0; nominal model, no mismatch), case 2: (21%, 0), case 3: (−15%, 0), case 4: (0%, 1), case 5: (−15%, 1) and case 6: (21%, 1). Various characteristics of integrating system dynamics are reflected in these results. For example, a negative gain mismatch leads to a sluggish response (case 3) while a positive gain mismatch yields a faster response (case 2) than nominal (case 1). On the other hand, the mismatch in time delay is responsible for the overshoot in case 6. In spite of these variations, the hybrid controller produces stable responses with the desired setpoint tracking for all cases presented, which is evidence of satisfactory performance of the proposed algorithm in the presence of significant plant-model mismatch. Figure 3(b) shows the corresponding simulation results using an MPC formulation that relies on a constant plant-model mismatch over the prediction horizon and uses Q_y = 1, Q_Δu = 1, Q_u = Q_δ = Q_z = 0 as tuning parameters.

Figure 4 documents the sum of the 2-norm of the parental function deviation (from the parental function goal) and of the required interventions over a time horizon of 36 months for each of the above cases (asterisks '*'), compared with the results obtained using the MPC formulation that relies on a constant plant-model mismatch over the prediction horizon (circles 'o'). For the proposed algorithm, the mean and variance of the sum of the 2-norm of the error are 2.17 × 10^3 and 3.36 × 10^5, respectively; using the MPC formulation with constant plant-model mismatch these values are 3.74 × 10^3 and 1.38 × 10^6, respectively. Thus, the proposed algorithm offers more uniform performance than the MPC formulation that assumes constant plant-model mismatch while dealing with significant uncertainty. In addition, from Figure 4 one can see that the proposed formulation uses less intervention energy in all cases, with mean and variance 4.79 × 10^4 and 1.70 × 10^8, as compared to the MPC formulation with constant plant-model mismatch, which yields a mean of 5.63 × 10^4 with variance 3.74 × 10^8. The corresponding 2-norm values for the IF-THEN based decision policy [5] are also shown in Fig. 4 (plus signs '+'). This policy requires the least intervention energy of all controllers considered, with mean and variance 3.98 × 10^4 and 6.19 × 10^7; however, the mean and variance of the sum of the 2-norm of the error are 2.87 × 10^3 and 1.52 × 10^6, which are much higher than for the proposed MPC formulation. In addition, the parental function responses exhibit significant offset for the first five participant cases (not shown for brevity). Thus, the proposed formulation assigns intervention dosages more efficiently in the presence of uncertainty and reduces the waste of intervention resources, while driving parental function to the desired goal.

Fig. 3
Evaluation of the hybrid MPC controller for the counselor visits-parental function intervention under conditions of gain and delay mismatch. A setpoint change from 10% to 50% parental function with simultaneous step disturbance D(k) = 5 are evaluated ...
Fig. 4
Comparison of the sum of the 2-norm for the control error e(k) = PF(k) − R(k) and intervention I(k) signals over time horizon using the proposed MPC formulation (‘*’); MPC formulation assuming a constant plant-model mismatch over ...

IV. SUMMARY

Applications of hybrid systems are becoming increasingly common in many fields. Recently, control engineering principles have been proposed for adaptive behavioral interventions [5]; these systems are naturally hybrid in nature. In this work, a novel MPC formulation for linear hybrid systems and its application to an adaptive behavioral intervention are presented. The formulation uses the MLD framework, modified to include measured and unmeasured disturbances, which play an important role in the performance of any control system. The formulation offers multiple-degree-of-freedom tuning parameters that set the speed of disturbance rejection and setpoint tracking for each output individually; this has both intuitive and practical appeal. The applicability and efficiency of the proposed MPC formulation are demonstrated on a production-inventory problem and a hypothetical intervention problem intended for improving parental function in at-risk children. From the simulation results, it can be concluded that the proposed MPC scheme can handle significant plant-model mismatch and disturbances while offering robust performance. The algorithm can be useful in the context of applying control engineering-based interventions to populations or cohort groups displaying variability in model parameters. Allowing the model to be adjusted to the characteristics of individual participant families can improve the performance of these policies to an even greater extent.

ACKNOWLEDGMENTS

Support for this work has been provided by the Office of Behavioral and Social Sciences Research (OBSSR) of the National Institutes of Health and the National Institute on Drug Abuse (NIDA) through grants K25 DA021173 and R21 DA024266. Insight provided by Linda M. Collins (Methodology Center, Penn State University) and Susan A. Murphy (University of Michigan) towards better understanding adaptive behavioral interventions is greatly appreciated.

APPENDIX-I

ℋ_1 =
[ 𝒞ℬ_1                        0                            ⋯   0
  𝒞𝒜ℬ_1 − 𝒞ℬ_1                𝒞ℬ_1                         ⋯   0
  𝒞𝒜²ℬ_1 − 𝒞𝒜ℬ_1              𝒞𝒜ℬ_1 − 𝒞ℬ_1                 ⋯   0
  ⋮                            ⋮                                 ⋮
  𝒞𝒜^{m−1}ℬ_1 − 𝒞𝒜^{m−2}ℬ_1    𝒞𝒜^{m−2}ℬ_1 − 𝒞𝒜^{m−3}ℬ_1     ⋯   𝒞ℬ_1
  𝒞𝒜^m ℬ_1 − 𝒞𝒜^{m−1}ℬ_1       𝒞𝒜^{m−1}ℬ_1 − 𝒞𝒜^{m−2}ℬ_1     ⋯   𝒞𝒜ℬ_1
  ⋮                            ⋮                                 ⋮
  𝒞𝒜^{p−1}ℬ_1 − 𝒞𝒜^{p−2}ℬ_1    𝒞𝒜^{p−2}ℬ_1 − 𝒞𝒜^{p−3}ℬ_1     ⋯   𝒞𝒜^{p−m}ℬ_1 ]
(A.1)

ℋ_i =
[ 𝒞ℬ_i                        0                            ⋯   0
  𝒞𝒜ℬ_i − 𝒞ℬ_i                𝒞ℬ_i                         ⋯   0
  𝒞𝒜²ℬ_i − 𝒞𝒜ℬ_i              𝒞𝒜ℬ_i − 𝒞ℬ_i                 ⋯   0
  ⋮                            ⋮                                 ⋮
  𝒞𝒜^{p−1}ℬ_i − 𝒞𝒜^{p−2}ℬ_i    𝒞𝒜^{p−2}ℬ_i − 𝒞𝒜^{p−3}ℬ_i     ⋯   𝒞ℬ_i ],   i = 2, 3, d
(A.2)

Φ = [(𝒞𝒜)^T  (𝒞𝒜²)^T  ⋯  (𝒞𝒜^p)^T]^T
(A.3)

H_i1 = [(𝒞ℬ_i)^T  (𝒞𝒜ℬ_i)^T  (𝒞𝒜²ℬ_i)^T  ⋯  (𝒞𝒜^{p−1}ℬ_i)^T]^T,   i = 1, 2, 3, d
(A.4)

Ē_i = diag{E_i, …, E_i},   i = 2, 3, d
(A.5)

Ē_4 = diag{E_4, …, E_4}
(A.6)

Ē_5 = [E_5^T  E_5^T  ⋯  E_5^T]^T
(A.7)

Ē_1 =
[ E_1   0     ⋯   0
  0     E_1   ⋯   0
  ⋮           ⋱   ⋮
  0     0     ⋯   E_1
  ⋮               ⋮
  0     0     ⋯   E_1 ]
(A.8)

ε_i = Ē_4 ℋ̄_i + Ē_i,   i = 1, 2, 3, d
(A.9)

ε_4 = Ē_4 Φ̄
(A.10)

ε_4i = Ē_4 ℋ̄_i1,   i = 1, 2, 3, d
(A.11)

ε_5 = Ē_5
(A.12)

ℋ̄_j = [ [0]_ny
         ℋ_j(1:(p−1)ny, :) ],   j = 1, 2, 3, d, 11, 21, 31, d1
(A.13)

Φ̄ = [ 𝒞
       Φ(1:(p−1)ny, :) ]
(A.14)

Here ny is the number of outputs, [0]_ny denotes a matrix with ny rows whose elements are all zero, and *(1:(p−1)ny, :) should be read as rows 1 through (p−1)ny of the matrix *, with all columns.
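
As an illustration of these definitions, the prediction matrices Φ, H_i1 and ℋ_i can be constructed numerically as in the sketch below. The block pattern follows (A.1)–(A.4); the demonstration matrices, horizons and dimensions are arbitrary.

# Sketch: numerical construction of Phi, H_i1 and the H_i prediction matrices of
# (A.1)-(A.4) from the augmented matrices calA, calB, calC.
import numpy as np

def phi_matrix(calA, calC, p):
    return np.vstack([calC @ np.linalg.matrix_power(calA, i) for i in range(1, p + 1)])     # (A.3)

def h_i1_matrix(calA, calB, calC, p):
    return np.vstack([calC @ np.linalg.matrix_power(calA, i) @ calB for i in range(p)])     # (A.4)

def h_matrix(calA, calB, calC, p, n_cols, held_after=None):
    """Block matrix of (A.1) (n_cols = m, held_after = m) or (A.2) (n_cols = p, held_after = None)."""
    ny, nb = calC.shape[0], calB.shape[1]
    Apow = [np.linalg.matrix_power(calA, i) for i in range(p + 1)]
    H = np.zeros((p * ny, n_cols * nb))
    for i in range(1, p + 1):                       # prediction step (block row)
        for j in range(1, min(i, n_cols) + 1):      # decision instant (block column)
            if held_after is not None and j == held_after and i > held_after:
                blk = calC @ Apow[i - held_after] @ calB          # input held after m moves
            elif j == i:
                blk = calC @ calB
            else:
                blk = calC @ (Apow[i - j] - Apow[i - j - 1]) @ calB   # difference (Delta-u) form
            H[(i - 1) * ny:i * ny, (j - 1) * nb:j * nb] = blk
    return H

# Arbitrary demonstration:
calA = np.array([[1.0, 0.1], [0.0, 0.9]])
calB1 = np.array([[0.0], [1.0]])
calC = np.array([[1.0, 0.0]])
Phi = phi_matrix(calA, calC, p=5)
H1 = h_matrix(calA, calB1, calC, p=5, n_cols=3, held_after=3)     # pattern of (A.1) with m = 3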

REFERENCES

1. Bemporad Alberto, Morari Manfred. Control of systems integrating logic, dynamics, and constraints. Automatica. 1999;35(3):407–427.
2. Nandola Naresh N, Bhartiya Sharad. A multiple model approach for predictive control of nonlinear hybrid systems. Journal of Process Control. 2008;18(2):131–148.
3. Wang Wenlin, Rivera DE. Model predictive control for tactical decision-making in semiconductor manufacturing supply chain management. IEEE Transactions on Control Systems Technology. 2008;16(5):841–855.
4. Collins LM, Murphy SA, Bierman KL. A conceptual framework for adaptive preventive interventions. Prevention science. 2004;5(3):185–196. [PubMed]
5. Rivera Daniel E, Pew Michael D, Collins Linda M. Using engineering control principles to inform the design of adaptive interventions: a conceptual introduction. Drug and Alcohol Dependence. 2007;88(2):S31–S40. [PMC free article] [PubMed]
6. Lee JH, Yu ZH. Tuning of model predictive controllers for robust performance. Computers & Chemical Engineering. 1994;18(1):15–37.
7. Prett DM, Garcia CE. Fundamental Process Control. Butterworths Series in Chemical Engineering. Butterworth Publishers; 1988.
8. Morari M, Zafiriou E. Robust Process Control. Englewood Cliffs, NJ: Prentice-Hall; 1989.
9. Schwartz Jay D, Wang Wenlin, Rivera Daniel E. Simulation-based optimization of process control policies for inventory management in supply chains. Automatica. 2006;42(8):1311–1320.
10. Conduct Problems Prevention Research Group. A developmental and clinical model for the prevention of conduct disorders: The Fast Track program. Development and Psychopathology. 1992;4:509–528.