
Article sections

- Abstract
- 1 Introduction
- 2 Literature Review
- 3 Discrete-Time Markov Models for Spread of an Infectious Disease
- 4 Discrete-Time Markov Models for SIS, SIR and Their Derivatives
- 5 Reducing the State Space through State Aggregation
- 6 An Illustrative Example: SIR Model
- 7 Conclusion
- Supplementary Material
- References


Eur J Oper Res. Author manuscript; available in PMC 2012 December 16.

Published in final edited form as:

Eur J Oper Res. 2011 December 16; 215(3): 679–687.

doi: 10.1016/j.ejor.2011.07.016

PMCID: PMC3182455

NIHMSID: NIHMS315337

Reza Yaesoubi, Harvard School of Public Health - Department of Epidemiology, 677 Huntington Ave., Boston, MA 02115, U.S.A;

Ted Cohen: tcohen@hsph.harvard.edu

Corresponding author: Phone number: +1 (919) 247-2575, Fax number: +1 (617) 566-7805, Email: reza.yaesoubi@gmail.com


We propose a class of mathematical models for the transmission of infectious diseases in large populations. This class of models, which generalizes the existing discrete-time Markov chain models of infectious diseases, is compatible with efficient dynamic optimization techniques to assist real-time selection and modification of public health interventions in response to evolving epidemiological situations and changing availability of information and medical resources. While retaining the strength of existing classes of mathematical models in their ability to represent the within-host natural history of disease and between-host transmission dynamics, the proposed models possess two advantages over previous models: (1) these models can be used to generate optimal dynamic health policies for controlling the spread of infectious diseases, and (2) these models are able to approximate the spread of the disease in relatively large populations with a limited state space size and computation time.

The appearance of novel human pathogens (e.g. H1N1 and H5N1 influenza, SARS) and the persistent circulation of infectious diseases in communities (e.g. HIV and tuberculosis), have stimulated efforts to develop *dynamic health policies* for controlling the spread of infectious diseases. Dynamic health policies make *real-time* recommendations in response to changing population characteristics (e.g. disease prevalence, proportion of individuals that are immune), disease characteristics (e.g. infectivity, antimicrobial resistance), and resource constraints (e.g. vaccines, antimicrobial drugs, personnel, and budget) (Wallinga et al., 2010; Merl et al., 2009; Ludkovski and Niemi, 2010).

Most existing approaches for identifying optimal policies for infectious disease control use mathematical or simulation models of disease spread as a basis for comparing the performance of a number of *pre-determined* health policies (Dimitrov et al., 2009; Goldstein et al., 2010; Halloran et al., 2008). Although these approaches allow for projection of the potential impact of different interventions, they are not generally structured to assist dynamic decision making as real-time data on disease spread become available during an epidemic.

In contrast, a few studies have proposed new methods to find exact or approximate optimal dynamic health policies for epidemics. Wallinga et al. (2010) developed a framework for finding the optimal allocation schemes as new observations accrue in the initial phase of an emerging epidemic. These recommendations, however, are dependent on estimates of *R*_{0} which poses inherent drawbacks in dynamic decision making in stochastic environments (refer to Larson (2007) for detailed discussion on the limitations of *R*_{0}). Lefevre (1981) used a continuous-time Markov decision model, Merl et al. (2009) developed a statistical framework and Ludkovski and Niemi (2010) developed a simulation-based model for dynamic determination of optimal policies for emerging epidemics.

Since the transmission of infectious diseases is a stochastic process, optimal dynamic health policies for limiting disease spread can potentially be determined through dynamic programming techniques (Bertsekas, 2005) (or approximate dynamic programming (Powell, 2007)). These methods have proven to be efficient and effective for dynamic decision making in a wide variety of areas (e.g., medical treatment optimization (Schaefer et al., 2005), economics (Van and Dana, 2003), operations research (Winston, 2003; Bertsekas, 2005; Powell, 2007)). Yet, the use of these techniques for assisting the selection of infectious disease control strategies is limited (to the best of our knowledge, Lefevre (1981), Ludkovski and Niemi (2010) and Ge et al. (2010) are the only examples). In part, this may reflect the failure of existing models of infectious diseases to satisfy the requirements of dynamic optimization techniques.

There are two main features of infectious disease transmission processes that pose challenges for the use of dynamic optimization methods:

- **Prohibitively large state space**: The state of an epidemic is usually described by the number of individuals in each disease compartment (e.g. susceptible, infectious, recovered). Therefore, as the population size grows, the size of the state space increases rapidly. For instance, in a closed population of size *N*, the size of the state space of a simple epidemic susceptible-infectious-removed (SIR) model will be *N*(*N* + 1)/2. Such enormous state spaces cause dynamic programming methods to lose their efficiency very rapidly.
- **Unobservability of state**: The within-host natural history of an infectious disease can be complicated and variable; this, coupled with limited availability of or access to diagnostic tests, results in uncertainty about the true state of the epidemic at any point in time. For example, for infectious diseases with a longer period of incubation than latency, infectious individuals may be asymptomatic and hence unlikely to be diagnosed for a variable amount of time. Although the number of symptomatic infectious individuals may be observable, asymptomatic infectious individuals will be hard to detect. This means that the overall state of the epidemic cannot be measured accurately. Therefore, in order to utilize (approximate) dynamic programming, the model of disease spread should be structured such that a probability *belief* can be formed about the state of disease spread using real-time data.
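To make the first point concrete, a few lines of Python (ours, not from the paper) tabulate the quoted SIR state-space size *N*(*N* + 1)/2 as *N* grows:

```python
# Illustration of how fast the SIR state-space size N*(N+1)/2 quoted above
# grows with the population size N (function name is ours).
def sir_state_space_size(N: int) -> int:
    return N * (N + 1) // 2

for N in (10, 100, 1000, 10000):
    print(N, sir_state_space_size(N))
# e.g. N = 10000 already gives about 5 * 10^7 states
```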

In this paper, we propose a class of mathematical models that retains the strength of existing modeling approaches (e.g. representation of the within-host natural history of disease and between-host transmission dynamics) while permitting the efficient use of dynamic programming techniques to develop optimal dynamic health policies for disease control. The state space in this class of models can be reduced by state aggregation while maintaining a desired level of accuracy for the disease spread model. Also, the proposed class of models provides a structure for formation of probability beliefs about the actual state of disease spread based on observed data.

In §2, we review a number of available infectious disease models and highlight the manner in which these models fail to meet the requirements of dynamic optimization. In §3, we describe a procedure to construct the proposed class of discrete-time Markov models for infectious disease transmission. Section 4 demonstrates how the proposed framework can be employed in constructing Markov models for susceptible-infectious-susceptible (SIS) and SIR models as well as their derivatives. Section 5 discusses a method to reduce the state space of Markov models of infectious diseases through state aggregation. Finally, using the proposed class of models, we present an illustrative SIR model for an influenza outbreak in an English boarding school.

Although infectious diseases generally spread in a stochastic fashion, deterministic models are commonly used as tools for studying epidemic behavior (Anderson and May, 1992; Hethcote, 2000). These deterministic models have been very useful in understanding the dynamics of infectious disease, estimating important epidemiologic parameters (e.g. basic reproductive numbers), and determining targets for disease control (e.g. critical proportions of the population to immunize). However, if the aim of a model is to help develop a dynamic health policy (that is a policy that can recommend switching interventions based on real-time observations about the epidemic state), it is not useful to consider models that produce epidemics with deterministic trajectories. If the epidemic trajectory could be known with certainty, it is possible to determine an optimal series of interventions at baseline and thus the motivation for dynamic decision-making is lost. Therefore, finding optimal dynamic health policies requires the use of stochastic models of infectious disease spread.

Many stochastic models of infectious diseases utilize non-negative integer-valued Markov processes in continuous or discrete time. To date, most stochastic models of infectious diseases have been based on a continuous-time Markov chain (Jacquez and O’Neill, 1991; Jacquez and Simon, 1993; Nåsell, 2002; Keeling and Ross, 2008). In these Markov models, the state of the process is defined as the *number* of individuals that are susceptible, infected, etc. Therefore, for a population of any reasonable size, the number of Kolmogorov’s differential equations describing the disease spread is prohibitively large. For example, in a population of *N* individuals, to model SIS dynamics by a continuous-time Markov chain, *N* + 1 differential equations are needed, and to model SIR dynamics, (*N* + 1)(*N* + 2)/2 differential equations are required (Keeling and Ross, 2008). In addition, dynamic optimization over a continuous-time model can be rather challenging, resulting in policies which are not convenient to implement in practice, and limited to very simple models of infectious diseases (Lefevre, 1981).

Most discrete-time Markov models assume that the time step is sufficiently small so that only *one* change in state is possible during the time step. A change may be a birth or death of a susceptible or infected individual, recovery of an infected individual, an infection of a susceptible individual, etc. (Allen, 1994; Allen and Burgin, 2000; Castillo-Chavez and Yakubu, 2001). As such, the transition probabilities obtained from these models simply approximate the transition probabilities in a continuous-time Markov jump process (Allen and Burgin, 2000), and cannot be used to build the transition probability matrix of the associated discrete-time Markov chain, which is required for dynamic optimization methods.

There are important historical examples of discrete-time Markov chain models for infectious diseases. The Reed-Frost and Greenwood models are probably the best-known discrete-time stochastic epidemic models (Abbey, 1952; Greenwood, 1931). In these Markov models, the state of the disease spread is defined as the *number* of individuals that are susceptible, infected, etc. Therefore, similar to the continuous-time Markov case, as the population size grows, the size of the transition probability matrices becomes prohibitively large. In addition to requiring a large state space, these discrete-time Markov chains have generally been used to describe the spread of a pathogen with an infectious period that is relatively short in comparison with the latent period (Daley and Gani, 1999). This assumption is violated for many diseases with complex natural histories.

In this section, we discuss the construction of a generalized class of discrete-time Markov models of infectious diseases that fulfill the requirements of dynamic programming techniques. We then show how the parameters of the proposed class of models can be estimated and belief states can be formed using the available data.

To illustrate the steps required to construct discrete-time Markov models of infectious disease spread, we consider a hypothetical infectious disease with a natural history that can be adequately summarized with *M* serial classes (see Figure 1).

At a given time *t*, we denote the number of individuals in class *C*_{i} by *X*_{Ci}(*t*). For a closed population of size *N*, these variables satisfy the *dynamics state equation*:

$$\sum _{i=1}^{M}{X}_{{C}_{i}}(t)=N.$$

(1)

By Eq.1, the disease state is fully identified if we know *M* − 1 variables of {*X*_{C1}(*t*), *X*_{C2}(*t*), …, *X*_{CM}(*t*)}.

The state of the system changes as *events* occur, such as births or deaths of susceptibles, transmission episodes, recoveries or deaths of infectives, etc. We call these events the *dynamics driving events*. In Figure 1, the driving event from class *C*_{i−1} to class *C*_{i} is represented by the random variable *C*_{i}(*t*), the number of individuals that move from class *C*_{i−1} to class *C*_{i} during the interval [*t*, *t* + Δ*t*].

The random variables *C*_{2}(*t*), *C*_{3}(*t*), …, *C*_{M}(*t*) are referred to as the *driving event random variables*.

The second step in constructing the discrete-time Markov model is to find the probability distribution of each of the driving event random variables conditional on the state of the disease at time *t*: *P*_{Ci(t)}(·|*X*_{C1}(*t*), *X*_{C2}(*t*), …, *X*_{CM}(*t*)), for *i* ∈ {2, 3, …, *M*}. Assuming that the driving event random variables are mutually independent conditional on the state of the disease at time *t*, their joint probability mass function is:

$${P}_{({C}_{2}(t),{C}_{3}(t),\dots ,{C}_{M}(t))}(({c}_{2},{c}_{3},\dots ,{c}_{M})\mid {X}_{{C}_{1}}(t),{X}_{{C}_{2}}(t),\dots ,{X}_{{C}_{M}}(t))=\prod _{i=2}^{M}{P}_{{C}_{i}(t)}({c}_{i}\mid {X}_{{C}_{1}}(t),\dots ,{X}_{{C}_{M}}(t)).$$

(2)

As we will see later, it is usually straightforward to find the probability function of each driving event random variable, and in consequence the joint probability mass function (2).
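As a concrete sketch of the factorization (2), the following toy example (assumptions ours: *M* = 3 classes, binomial driving-event distributions with hypothetical rates, and an arbitrary state) checks that the product of the conditional pmfs is itself a proper joint pmf:

```python
# Sketch of Eq. (2): under conditional independence, the joint pmf of the
# driving events is the product of their individual conditional pmfs.
# The binomial event distributions and state values are illustrative assumptions.
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k) if 0 <= k <= n else 0.0

x_C1, x_C2, x_C3 = 8, 3, 0     # hypothetical state X(t) for an M = 3 model

def joint_pmf(c2, c3, q=0.2, rho=0.5):
    # P_{C2(t)}(c2 | state) * P_{C3(t)}(c3 | state), as in Eq. (2)
    return binom_pmf(c2, x_C1, q) * binom_pmf(c3, x_C2, rho)

# The joint pmf sums to one over the support of (C2(t), C3(t)).
total = sum(joint_pmf(c2, c3) for c2 in range(x_C1 + 1) for c3 in range(x_C2 + 1))
print(round(total, 10))
```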

To construct the discrete-time Markov model, we then calculate the transition probability matrix whose rows represent the probability distribution of the disease state at time *t* + Δ*t*, given the state of the disease at time *t*:

$$Pr\{({X}_{{C}_{1}}(t+\mathrm{\Delta}t),{X}_{{C}_{2}}(t+\mathrm{\Delta}t),\dots ,{X}_{{C}_{M}}(t+\mathrm{\Delta}t))=({x}_{1},{x}_{2},\dots ,{x}_{M})\mid {X}_{{C}_{1}}(t),{X}_{{C}_{2}}(t),\dots ,{X}_{{C}_{M}}(t)\}.$$

(3)

In calculating the transition probabilities (3), we first need to find a way to relate the joint random variable (*X*_{C1}(*t* + Δ*t*), *X*_{C2}(*t* + Δ*t*), …, *X*_{CM}(*t* + Δ*t*)) to the driving event random variables (*C*_{2}(*t*), *C*_{3}(*t*), …, *C*_{M}(*t*)).

The set of *dynamics driving constraints* summarizes the relationships among the driving events during interval [*t*, *t*+Δ*t*] and the state of the disease at time *t* and *t*+Δ*t*. For the disease in Figure 1 with a fixed population size, the dynamics driving constraints are:

$${X}_{{C}_{1}}(t+\mathrm{\Delta}t)={X}_{{C}_{1}}(t)-{C}_{2}(t),$$

(4)

$${X}_{{C}_{i}}(t+\mathrm{\Delta}t)={X}_{{C}_{i}}(t)+{C}_{i}(t)-{C}_{i+1}(t),\phantom{\rule{0.16667em}{0ex}}\text{for}\phantom{\rule{0.16667em}{0ex}}i\in \{2,3,\dots ,M-1\},$$

(5)

$${X}_{{C}_{M}}(t+\mathrm{\Delta}t)={X}_{{C}_{M}}(t)+{C}_{M}(t).$$

(6)

These equations specify the new values of (*X*_{C1}(*t* + Δ*t*), *X*_{C2}(*t* + Δ*t*), …, *X*_{CM}(*t* + Δ*t*)) in terms of the state of the disease at time *t* and the driving events occurring during the interval [*t*, *t* + Δ*t*]. Note that summing equations (4)–(5) and using the dynamics state equation (1) yields equation (6).

Therefore, constraint (6) is redundant and can be dropped. By solving the set of equations (4)–(5) for *C*_{i}(*t*), we obtain the *dynamics driving equations*:

$${C}_{i}(t)=\sum _{j=1}^{i-1}({X}_{{C}_{j}}(t)-{X}_{{C}_{j}}(t+\mathrm{\Delta}t)),\phantom{\rule{0.16667em}{0ex}}\text{for}\phantom{\rule{0.16667em}{0ex}}i\in \{2,3,\dots ,M\}.$$

(7)

Given that the random variables *C*_{i}(*t*) must satisfy 0 ≤ *C*_{i}(*t*) ≤ *X*_{C(i−1)}(*t*), the dynamics driving equations (7) yield the following *dynamics feasibility constraints*:

$$0\le \sum _{j=1}^{i-1}({X}_{{C}_{j}}(t)-{X}_{{C}_{j}}(t+\mathrm{\Delta}t))\le {X}_{{C}_{i-1}}(t),\phantom{\rule{0.16667em}{0ex}}\text{for}\phantom{\rule{0.16667em}{0ex}}i\in \{2,3,\dots ,M\}.$$

(8)

Therefore, in finding the transition probabilities (3), the support of the random variable (*X*_{C1}(*t* + Δ*t*), *X*_{C2}(*t* + Δ*t*), …, *X*_{CM}(*t* + Δ*t*)) is:

$${\mathrm{\Omega}}_{X(t)}=\{({x}_{1},{x}_{2},\dots ,{x}_{M})\in {\mathbb{N}}^{M}\mid 0\le \sum _{j=1}^{i-1}({X}_{{C}_{j}}(t)-{x}_{j})\le {X}_{{C}_{i-1}}(t),i\in \{2,3,\dots ,M\}\}$$

(9)

and the relationship between the random variables (*X*_{C1}(*t* + Δ*t*), *X*_{C2}(*t* + Δ*t*), …, *X*_{CM}(*t* + Δ*t*)) and the driving event random variables (*C*_{2}(*t*), …, *C*_{M}(*t*)) is given by the dynamics driving equations (7).

Thus, by using the probability mass function (2) and the set of dynamics driving equations (7), the transition probabilities (3) can be calculated by:

$$\begin{array}{l}Pr\{({X}_{{C}_{1}}(t+\mathrm{\Delta}t),\dots ,{X}_{{C}_{M}}(t+\mathrm{\Delta}t))=({x}_{1},\dots ,{x}_{M})\mid {X}_{{C}_{1}}(t),\dots ,{X}_{{C}_{M}}(t)\}\\ =\{\begin{array}{l}{P}_{({C}_{2}(t),{C}_{3}(t),\dots ,{C}_{M}(t))}((\sum _{j=1}^{1}({X}_{{C}_{j}}(t)-{x}_{j}),\dots ,\sum _{j=1}^{M-1}({X}_{{C}_{j}}(t)-{x}_{j})\mid {X}_{{C}_{1}}(t),\dots ,{X}_{{C}_{M}}(t)),\text{if}\phantom{\rule{0.16667em}{0ex}}({x}_{1},\dots ,{x}_{M})\in {\mathrm{\Omega}}_{X(t)},\hfill \\ 0,\text{otherwise}.\hfill \end{array}\end{array}$$

(10)
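The construction in Eqs. (7)–(10) can be sketched in a few lines for an *M* = 3 serial-class model (the binomial driving-event distributions with hypothetical rates are our own assumptions):

```python
# Sketch of Eqs. (7)-(10) for a small M = 3 serial-class example: recover the
# driving events from a candidate next state via Eq. (7), check the feasibility
# constraints (8), and evaluate the transition probability (10). The binomial
# event distributions (with rates q, rho) are our illustrative assumptions.
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k) if 0 <= k <= n else 0.0

def transition_prob(x, y, q=0.2, rho=0.5):
    """Pr{X(t + dt) = y | X(t) = x} for classes (C1, C2, C3)."""
    M = len(x)
    # Eq. (7): C_i(t) = sum_{j=1}^{i-1} (x_j - y_j); here c[1] = C_2, c[2] = C_3
    c = [None] + [sum(x[j] - y[j] for j in range(i)) for i in range(1, M)]
    # Population conservation plus Eq. (8): 0 <= C_i(t) <= X_{C_{i-1}}(t)
    if sum(y) != sum(x) or any(not 0 <= c[i] <= x[i - 1] for i in range(1, M)):
        return 0.0
    # Eq. (10) via the factorized joint pmf (2)
    return binom_pmf(c[1], x[0], q) * binom_pmf(c[2], x[1], rho)

x0 = (8, 3, 0)
N = sum(x0)
states = [(s, i, N - s - i) for s in range(N + 1) for i in range(N - s + 1)]
total = sum(transition_prob(x0, y) for y in states)
print(round(total, 10))  # the row of the transition matrix sums to 1
```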

To summarize, the discrete-time Markov model can be constructed through the following steps:

1. Define the disease classes *C*_{1}, *C*_{2}, …, *C*_{M} and the state variables *X*_{Ci}(*t*) satisfying the dynamics state equation (1).
2. Find the probability distribution of each driving event random variable conditional on the state of the disease at time *t*, and form the joint probability mass function (2).
3. Form the dynamics driving constraints (4)–(6) and solve them to obtain the dynamics driving equations (7).
4. Derive the dynamics feasibility constraints (8) and the probability support (9).
5. Calculate the transition probabilities (10).

Data accrued over the course of an epidemic permits estimation (and updating) of model parameters. In this subsection, we discuss how a maximum likelihood method can be used to estimate the parameters of our proposed class of infectious disease models.

Suppose that we want to estimate the parameters **ω** = (*ω*_{1}, *ω*_{2}, …, *ω*_{Z}) of the disease spread model. We assume that only a subset *O* ⊆ {2, 3, …, *M*} of the driving events is observable; let *D*(*t*) denote the vector of observable driving events during period *t*, and *D̂*_{t} denote its observed value.

Suppose that the initial state of disease spread is observable and denoted by $\widehat{X}(1)=({x}_{1}^{1},{x}_{2}^{1},\dots ,{x}_{M}^{1})$, and that during *T* periods {1, 2, …, *T*} observations **D̂** = (*D̂*_{1}, *D̂*_{2}, …, *D̂*_{T}) are gathered. The likelihood of observing these data is:

$$L({\omega}_{1},\dots ,{\omega}_{Z};{\widehat{D}}_{1},{\widehat{D}}_{2},\dots ,{\widehat{D}}_{T},\widehat{X}(1))=Pr\{D(1)={\widehat{D}}_{1},D(2)={\widehat{D}}_{2},\dots ,D(T)={\widehat{D}}_{T}\mid \mathit{\omega},\widehat{X}(1)\}$$

(11)

Now, the model parameters **ω** = (*ω*_{1}, *ω*_{2}, …, *ω*_{Z}) can be estimated by maximizing the likelihood function (11). Using the chain rule of conditional probability, the likelihood function can be written as:

$$\begin{array}{l}L({\omega}_{1},\dots ,{\omega}_{Z};{\widehat{D}}_{1},{\widehat{D}}_{2},\dots ,{\widehat{D}}_{T},\widehat{X}(1))\\ =Pr\{D(T)={\widehat{D}}_{T}\mid D(1)={\widehat{D}}_{1},D(2)={\widehat{D}}_{2},\dots ,D(T-1)={\widehat{D}}_{T-1};\mathit{\omega},\widehat{X}(1)\}\\ \times Pr\{D(T-1)={\widehat{D}}_{T-1}\mid D(1)={\widehat{D}}_{1},D(2)={\widehat{D}}_{2},\dots ,D(T-2)={\widehat{D}}_{T-2};\mathit{\omega},\widehat{X}(1)\}\\ \times \dots \times Pr\{D(2)={\widehat{D}}_{2}\mid D(1)={\widehat{D}}_{1};\mathit{\omega},\widehat{X}(1)\}\times Pr\{D(1)={\widehat{D}}_{1}\mid \mathit{\omega},\widehat{X}(1)\}.\end{array}$$

(12)
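A toy numerical sketch of this maximum-likelihood idea (all data values are hypothetical; the chain is assumed fully observed, so each one-step term in the product is a simple binomial pmf, with the infection probability taken to have the SIS form 1 − *e*^{−*λ*Δ*t* *β*(*t*)*r*} derived in §4):

```python
# A toy maximum-likelihood sketch in the spirit of Eqs. (11)-(12): with a fully
# observed chain, the likelihood factorizes into one-step terms, here binomial
# pmfs for observed infection counts. All data values, and the use of the SIS
# expression for the infection probability, are our illustrative assumptions.
from math import comb, exp, log

N, lam_dt = 50, 2.0
x_s = [45, 40, 33]        # hypothetical susceptible counts at successive periods
new_inf = [5, 7, 5]       # hypothetical observed infection counts per period

def log_lik(r):
    ll = 0.0
    for s, i in zip(x_s, new_inf):
        q = 1.0 - exp(-lam_dt * (1.0 - s / N) * r)   # beta(t) = 1 - X_S/N
        ll += log(comb(s, i)) + i * log(q) + (s - i) * log(1.0 - q)
    return ll

# One-dimensional grid search for the transmission probability r
r_hat = max((k / 1000 for k in range(1, 1000)), key=log_lik)
print(r_hat)
```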

In order to calculate the likelihood function (12) we define the following notation:

- *X*(*t*) = (*X*_{C1}(*t*), *X*_{C2}(*t*), …, *X*_{CM}(*t*)): random vector representing the state of the disease spread at time *t*.
- *π*_{t}(*x*_{1}, *x*_{2}, …, *x*_{M}; **ω**, *X̂*(1)): probability that at time *t* the state of the disease spread is *X*(*t*) = (*x*_{1}, *x*_{2}, …, *x*_{M}), given the model parameters **ω** = (*ω*_{1}, *ω*_{2}, …, *ω*_{Z}) and the initial state $\widehat{X}(1)=({x}_{1}^{1},{x}_{2}^{1},\dots ,{x}_{M}^{1})$.
- *Ω̂*_{t}: set of all states that the disease spread may visit at time *t* given the observations **D̂** = (*D̂*_{1}, *D̂*_{2}, …, *D̂*_{t−1}).

Now, given the observations **D̂** = (*D̂*_{1}, *D̂*_{2}, …, *D̂*_{t−1}), the term

$$Pr\{D(t)={\widehat{D}}_{t}\mid D(1)={\widehat{D}}_{1},D(2)={\widehat{D}}_{2},\dots ,D(t-1)={\widehat{D}}_{t-1};\mathit{\omega},\widehat{X}(1)\}$$

in likelihood function (12) is calculated as follows:

$$\begin{array}{l}Pr\{D(t)={\widehat{D}}_{t}\mid D(1)={\widehat{D}}_{1},D(2)={\widehat{D}}_{2},\dots ,D(t-1)={\widehat{D}}_{t-1};\mathit{\omega},\widehat{X}(1)\}\\ =\sum _{({x}_{1},{x}_{2},\dots {x}_{M})\in {\widehat{\mathrm{\Omega}}}_{t}}Pr\{D(t)={\widehat{D}}_{t}\mid X(t)=({x}_{1},{x}_{2},\dots {x}_{M})\}\frac{{\pi}_{t}({x}_{1},{x}_{2},\dots {x}_{M};\mathit{\omega},\widehat{X}(1))}{{\displaystyle \sum _{({x}_{1},{x}_{2},\dots {x}_{M})\in {\widehat{\mathrm{\Omega}}}_{t}}}{\pi}_{t}({x}_{1},{x}_{2},\dots {x}_{M};\mathit{\omega},\widehat{X}(1))}.\end{array}$$

(13)

It only remains to establish a procedure to update the set *Ω̂*_{t} as observations accrue. For time *t* = 1, since the initial state is observable, we have:

$${\widehat{\mathrm{\Omega}}}_{1}=\{({x}_{1},{x}_{2},\dots {x}_{M})\in {\mathbb{N}}^{M}\mid ({x}_{1},{x}_{2},\dots {x}_{M})=({x}_{1}^{1},{x}_{2}^{1},\dots {x}_{M}^{1})\}.$$

For time *t* > 1, the set *Ω̂*_{t} can be determined recursively from the set *Ω̂*_{t−1}:

$$\begin{array}{l}{\widehat{\mathrm{\Omega}}}_{t}=\{({x}_{1},{x}_{2},\dots ,{x}_{M})\in {\mathbb{N}}^{M}\mid \exists ({y}_{1},{y}_{2},\dots ,{y}_{M})\in {\widehat{\mathrm{\Omega}}}_{t-1},\\ 0\le \sum _{j=1}^{i-1}({y}_{j}-{x}_{j})\le {y}_{i-1},i\in \{2,3,\dots ,M\},\phantom{\rule{0.38889em}{0ex}}\text{and}\phantom{\rule{0.16667em}{0ex}}{\widehat{C}}_{i}(t-1)=\sum _{j=1}^{i-1}({y}_{j}-{x}_{j}),i\in O\}.\end{array}$$

We note that for some special cases, the number of individuals in some classes (compartments) may also be observable. For example, if all individuals remain symptomatic throughout their infectious period, the prevalence of disease can be measured at any time. While in this section we assumed that the only observable data on the disease spread come from a set of driving events, the likelihood function (11) and analysis presented above can be modified when a set of disease classes (compartments) is also observable.
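For a minimal illustration of this recursion, consider an *M* = 2 serial-class model in which the single driving event *C*_{2}(*t*) is observable (*O* = {2}); names and numbers below are our own. With every event observed the feasible set stays a singleton, whereas unobserved events would make it grow:

```python
# Sketch of the recursive update of the feasible-state set for a small M = 2
# serial-class model, assuming the single driving event C_2(t) is observable
# (O = {2}); all names and numbers are our illustrative assumptions.
def update_feasible_set(prev_set, observed_c2, N):
    """States (x1, x2) reachable from prev_set that are consistent with the
    observed flow C_2(t - 1) = observed_c2 (Eq. (7) with M = 2)."""
    new_set = set()
    for (y1, y2) in prev_set:
        if 0 <= observed_c2 <= y1:        # feasibility: 0 <= C_2 <= y1
            x1 = y1 - observed_c2
            new_set.add((x1, N - x1))
    return new_set

N = 10
omega = {(9, 1)}                          # observed initial state X(1)
for c2 in (2, 3, 1):                      # hypothetical observed flows per period
    omega = update_feasible_set(omega, c2, N)
print(omega)  # a single consistent state remains when every event is observed
```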

In the previous subsection, we demonstrated how model parameters can be updated by using the observations **D̂** = (*D̂*_{1}, *D̂*_{2}, …, *D̂*_{T}) gathered during the first *T* periods. These observations can also be used to form a probability belief about the state of the disease spread at time *T* + 1:

$$\begin{array}{l}Pr\{X(T+1)=({x}_{1},{x}_{2},\dots {x}_{M});\mathit{\omega},\widehat{\mathbf{D}},\widehat{X}(1))\\ =\{\begin{array}{l}\frac{{\pi}_{t}({x}_{1},{x}_{2},\dots {x}_{M};\mathit{\omega},\widehat{X}(1))}{{\displaystyle \sum _{({y}_{1},{y}_{2},\dots {y}_{M})\in {\widehat{\mathrm{\Omega}}}_{T}}}{\pi}_{t}({y}_{1},{y}_{2},\dots {y}_{M};\mathit{\omega},\widehat{X}(1))},\text{if}({x}_{1},{x}_{2},\dots {x}_{M})\in {\widehat{\mathrm{\Omega}}}_{T},\hfill \\ 0,\text{otherwise},\hfill \end{array}\end{array}$$

where *π*_{t}(*x*_{1}, *x*_{2}, …, *x*_{M}; **ω**, *X̂*(1)) is computed using the estimated model parameters **ω** and the initial state *X̂*(1).

Discrete-time Markov chain models are typically used for pathogens with relatively short (and fixed) durations of infectiousness (Daley and Gani, 1999; Larson, 2007); in these models, the time step (Δ*t*) is usually set to the duration of the infectious period. In §4.1 we develop a discrete-time Markov SIS model incorporating this convention to demonstrate that this model is a special case (a member) of our proposed class of infectious disease models. For the purpose of dynamic decision making, however, we require a time step Δ*t* that represents the interval between two consecutive decision epochs, which is determined by the decision maker and will depend on the context of the problem being studied. Hence, models for disease transmission that will support dynamic decision-making should be flexible enough to incorporate different Δ*t*, varying over a reasonable range where transition probabilities are still reasonably accurate. Accordingly, in subsequent subsections the time steps Δ*t* can be set by the decision maker.

To denote the classes, we use conventional notation: Upon birth, an individual enters into the susceptible class S. If contact between a susceptible and an infective occurs and results in the transmission of infection, the susceptible moves to exposed class E; individuals in this class are in a latent period and are infected but not yet infectious. When the latent period ends, the individual enters the infective class I and is capable of transmitting the infection. If infection results in permanent immunity, an individual cured from infection enters the recovered class R, otherwise the individual moves back to the susceptible class S.

For each model presented, we assume that members of the population become infected only through contact with other infectious members, and that the contacts of each individual during the interval [*t*, *t* + Δ*t*] occur according to a homogeneous Poisson process with rate *λ* (so that the number of contacts during the interval has a Poisson distribution with mean *λ*Δ*t*). We initially assume a closed population of size *N*. In §4.3, we discuss extensions allowing for birth and death.

SIS models are simple models in which individuals move from the susceptible class to the infective class and then back to the susceptible class upon recovery; there is no immunity conferred by a previous infection (Hethcote, 2008). We assume that the infectious period is fixed at length Δ*t*; that is, an infective at time *t* remains infectious over the interval [*t*, *t* + Δ*t*] but will be diagnosed and effectively treated at the end of this period. Such a person reenters the population as susceptible at time *t* + Δ*t*. The SIS model consists of two classes: the susceptibles *C*_{1} = *S*, and the infectives *C*_{2} = *I*. Let *X*_{S}(*t*) and *X*_{I}(*t*) denote the number of susceptible and infectious individuals at time *t*, respectively.

For a population with fixed size *N*, the dynamics state equation will be *X*_{S}(*t*) + *X*_{I}(*t*) = *N*; therefore, the state of the disease spread is fully identified by *X*_{S}(*t*) alone.

In Step 2, we find the probability distribution of the driving event *I*(*t*) conditional on the state of the disease at time *t*, i.e., *P*_{I(t)}(·|*X*_{S}(*t*)). To this end, we define:

- *r*(*t*): probability that a susceptible person becomes infected upon contact with an infectious individual.
- *β*(*t*): probability that the next interaction of a random susceptible person is with an infectious person.
- *q*(*t*): overall probability that a susceptible person becomes infected during the interval [*t*, *t* + Δ*t*].

Both *r*(*t*) and *β*(*t*) can be decision variables, affected by “hygienic interventions” (reducing the chance of transmission given contact between infectious and susceptible individuals) and “social distancing” (reducing the likelihood of contact between susceptible and infectious individuals), respectively. When social distancing has not been used and mixing is homogeneous, *β*(*t*) is equal to 1 − *X*_{S}(*t*)/*N*. Conditioning on the number of contacts a susceptible person makes during the interval [*t*, *t* + Δ*t*], the probability *q*(*t*) can be calculated as:

$$\begin{array}{l}q(t)=\sum _{n=0}^{\infty}\frac{{e}^{-\lambda \mathrm{\Delta}t}{(\lambda \mathrm{\Delta}t)}^{n}}{n!}\left(\sum _{j=0}^{n}\left(\begin{array}{c}n\\ j\end{array}\right)\beta {(t)}^{j}{(1-\beta (t))}^{n-j}(1-{(1-r(t))}^{j})\right)\\ =1-\sum _{n=0}^{\infty}\frac{{e}^{-\lambda \mathrm{\Delta}t}{(\lambda \mathrm{\Delta}t)}^{n}}{n!}\left(\sum _{j=0}^{n}\left(\begin{array}{c}n\\ j\end{array}\right)\beta {(t)}^{j}{(1-\beta (t))}^{n-j}{(1-r(t))}^{j}\right).\end{array}$$

(14)

The expression
${\sum}_{j=0}^{n}\left(\begin{array}{c}n\\ j\end{array}\right)\beta {(t)}^{j}{(1-\beta (t))}^{n-j}{(1-r(t))}^{j}$ is the *z*-transform of the binomial distribution (*n*, *β*(*t*)) for *z* = 1 − *r*(*t*), and is equal to [1 − *β*(*t*)+*β*(*t*)(1 − *r*(*t*))]* ^{n}*. Therefore, Eq.14 results in:

$$q(t)=1-\sum _{n=0}^{\infty}\frac{{e}^{-\lambda \mathrm{\Delta}t}{(\lambda \mathrm{\Delta}t)}^{n}}{n!}{(1-\beta (t)r(t))}^{n}.$$

(15)

In Eq.15, the expression
${\sum}_{n=0}^{\infty}\frac{{e}^{-\lambda \mathrm{\Delta}t}{(\lambda \mathrm{\Delta}t)}^{n}}{n!}{(1-\beta (t)r(t))}^{n}$ is the *z*-transform of the Poisson distribution with rate *λ*Δ*t* for *z* = 1 − *β*(*t*)*r*(*t*), and hence, Eq.15 results in:

$$q(t)=1-{e}^{-\lambda \mathrm{\Delta}t\beta (t)r(t)}.$$

(16)
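The collapse from the double sum (14) to the closed form (16) can be checked numerically; the script below (parameter values are our own) truncates the Poisson sum at *n* = 60, where the tail is negligible:

```python
# Numeric check (ours) that the closed form in Eq. (16) matches the double
# sum in Eq. (14), truncated at n = 60 where the Poisson tail is negligible.
from math import comb, exp, factorial

lam_dt, beta, r = 1.5, 0.3, 0.4    # hypothetical parameter values

q_series = 1.0 - sum(
    exp(-lam_dt) * lam_dt**n / factorial(n)
    * sum(comb(n, j) * beta**j * (1 - beta)**(n - j) * (1 - r)**j
          for j in range(n + 1))
    for n in range(61)
)
q_closed = 1.0 - exp(-lam_dt * beta * r)   # Eq. (16)

print(q_series, q_closed)   # the two values agree
```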

When the probability that a random person from the *X*_{S}(*t*) susceptibles becomes infected is *q*(*t*), independently of the other susceptibles, the number of new infections *I*(*t*) during the interval [*t*, *t* + Δ*t*] has a binomial distribution with *X*_{S}(*t*) trials and success probability *q*(*t*):

$${P}_{I(t)}(i\mid {X}_{S}(t))=\{\begin{array}{l}\left(\begin{array}{c}{X}_{S}(t)\\ i\end{array}\right)q{(t)}^{i}{(1-q(t))}^{{X}_{S}(t)-i},\text{for}\phantom{\rule{0.16667em}{0ex}}0\le i\le {X}_{S}(t),\hfill \\ 0,\phantom{\rule{0.38889em}{0ex}}\text{otherwise}.\hfill \end{array}$$

(17)

Note that for an SIS model, the state *X*_{S}(*t*) = *N* (i.e., no infectives in the population) is an absorbing state. For this state, the probability function (17) reduces to:

$${P}_{I(t)}(i\mid {X}_{S}(t)=N)=\{\begin{array}{ll}1,\hfill & \text{for}\phantom{\rule{0.16667em}{0ex}}i=0,\hfill \\ 0,\hfill & \text{otherwise}.\hfill \end{array}$$

Next, we find the probability mass function for the driving event *S*(*t*). Since, by assumption, all the infectives at time *t*, *X*_{I}(*t*) = *N* − *X*_{S}(*t*) in number, are diagnosed and treated by the end of the interval and reenter the susceptible class, we have:

$${P}_{S(t)}(s\mid {X}_{S}(t))=\{\begin{array}{ll}1,\hfill & \text{for}\phantom{\rule{0.16667em}{0ex}}s=N-{X}_{S}(t),\hfill \\ 0,\hfill & \text{otherwise}.\hfill \end{array}$$

(18)

In Step 3, we form the dynamics driving and feasibility constraints. For an SIS model, the number of susceptibles at time *t* + Δ*t* satisfies *X*_{S}(*t* + Δ*t*) = *X*_{S}(*t*) − *I*(*t*) + *S*(*t*). Solving for *I*(*t*) and imposing 0 ≤ *I*(*t*) ≤ *X*_{S}(*t*) yields:

$$I(t)={X}_{S}(t)-{X}_{S}(t+\mathrm{\Delta}t)+S(t),\text{and}$$

(19)

$$0\le {X}_{S}(t)-{X}_{S}(t+\mathrm{\Delta}t)+S(t)\le {X}_{S}(t).$$

(20)

Since, by assumption, all the infectives at time *t* will be removed by time *t* + Δ*t*, we have *S*(*t*) = *X*_{I}(*t*) = *N* − *X*_{S}(*t*), and constraints (19)–(20) become:

$$I(t)=N-{X}_{S}(t+\mathrm{\Delta}t),\phantom{\rule{0.38889em}{0ex}}\text{and}$$

(21)

$$0\le {X}_{S}(t+\mathrm{\Delta}t)\le N.$$

(22)

To calculate the probability support (9), note that by probability function (17) we have 0 ≤ *I*(*t*) ≤ *X*_{S}(*t*); combining this with Eq. (21), the support of *X*_{S}(*t* + Δ*t*) is:

$${\mathrm{\Omega}}_{{X}_{S}(t)}=\{x\in \mathbb{N}\mid N-{X}_{S}(t)\le x\le N\}.$$

(23)

The transition probability matrix of the Markov chain {*X*_{S}(*t*)} can now be constructed as:

$$P\{{X}_{S}(t+\mathrm{\Delta}t)=x\mid {X}_{S}(t)\}=\{\begin{array}{l}{P}_{I(t)}(N-x\mid {X}_{S}(t)),\text{if}\phantom{\rule{0.16667em}{0ex}}N-{X}_{S}(t)\le x\le N,\hfill \\ 0,\text{otherwise},\hfill \end{array}$$

(24)

where the probability function *P*_{I(t)}(·|*X*_{S}(*t*)) is given by Eq. (17).
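Putting Eqs. (16), (17) and (24) together, a short sketch (population size and parameter values are our own assumptions) builds the full SIS transition matrix and verifies that every row is a probability distribution:

```python
# Sketch of the SIS transition matrix in Eq. (24): rows indexed by X_S(t),
# columns by X_S(t + dt), with P_I(t) from Eq. (17) and q(t) from Eq. (16).
# Population size and parameter values are hypothetical.
from math import comb, exp

N, lam_dt, r = 20, 2.0, 0.3

def q(x_s):
    beta = 1.0 - x_s / N                       # homogeneous mixing, no distancing
    return 1.0 - exp(-lam_dt * beta * r)       # Eq. (16)

def p_infect(i, x_s):                          # Eq. (17)
    return comb(x_s, i) * q(x_s)**i * (1 - q(x_s))**(x_s - i) if 0 <= i <= x_s else 0.0

# Eq. (24): Pr{X_S(t + dt) = x | X_S(t)} = P_I(N - x | X_S(t)) on N - X_S(t) <= x <= N
P = [[p_infect(N - x, x_s) if N - x_s <= x <= N else 0.0
      for x in range(N + 1)] for x_s in range(N + 1)]

print(all(abs(sum(row) - 1.0) < 1e-9 for row in P))  # every row sums to one
```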

If individuals that recover from infection acquire permanent immunity, then an SIR model can be used to represent disease spread (Hethcote, 2008). Here we assume that the infectious period is exponentially distributed with mean duration 1/*μ*_{I}. Let *X*_{S}(*t*), *X*_{I}(*t*), and *X*_{R}(*t*) denote the number of susceptible, infectious, and recovered individuals at time *t*, respectively.

In the SIR model for a population of fixed size *N*, the dynamics state equation is *X*_{S}(*t*) + *X*_{I}(*t*) + *X*_{R}(*t*) = *N*; hence, the state of the disease spread is fully identified by the pair (*X*_{S}(*t*), *X*_{I}(*t*)).

The probability distribution of the driving event *I*(*t*) conditional on the state (*X*_{S}(*t*), *X*_{I}(*t*)) is the same as in the SIS model:

$${P}_{I(t)}(i\mid {X}_{S}(t),{X}_{I}(t))=\{\begin{array}{l}\left(\begin{array}{c}{X}_{S}(t)\\ i\end{array}\right)q{(t)}^{i}{(1-q(t))}^{{X}_{S}(t)-i},\text{for}\phantom{\rule{0.16667em}{0ex}}0\le i\le {X}_{S}(t),\hfill \\ 0,\text{otherwise}.\hfill \end{array}$$

(25)

To calculate the probability distribution of the driving event *R*(*t*) conditional on the state (*X*_{S}(*t*), *X*_{I}(*t*)), we define:

- *ρ*(*t*): probability that an infective at time *t* recovers by time *t* + Δ*t*.

Since we assume that the duration of infectiousness is exponentially distributed with a mean length of 1/*μ*_{I}, we have *ρ*(*t*) = 1 − *e*^{−*μ*_{I}Δ*t*}.

The number of infectives at time *t* who recover during the period [*t*, *t* + Δ*t*] will then have a binomial distribution with success probability *ρ*(*t*) and *X*_{I}(*t*) trials:

$${P}_{R(t)}(r\mid {X}_{S}(t),{X}_{I}(t))=\{\begin{array}{l}\left(\begin{array}{c}{X}_{I}(t)\\ r\end{array}\right)\rho {(t)}^{r}{(1-\rho (t))}^{{X}_{I}(t)-r},\text{for}\phantom{\rule{0.16667em}{0ex}}0\le r\le {X}_{I}(t),\hfill \\ 0,\text{otherwise}.\hfill \end{array}$$

(26)
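As a side check on the recovery probability *ρ*(*t*) = 1 − *e*^{−*μ*_{I}Δ*t*} used above (the script and its parameter values are ours): the number of periods an infective remains infectious is geometric with mean 1/*ρ*(*t*), so the expected infectious duration Δ*t*/*ρ*(*t*) approaches the continuous-time mean 1/*μ*_{I} as Δ*t* shrinks:

```python
# A small check (ours): with rho(t) = 1 - exp(-mu_I * dt), the number of
# periods an infective stays infectious is geometric with mean 1/rho(t), so
# the expected duration dt / rho(t) approaches the continuous mean 1/mu_I.
from math import exp

mu_I = 0.5                               # recovery rate; mean duration 1/mu_I = 2.0
for dt in (1.0, 0.1, 0.01):
    rho = 1.0 - exp(-mu_I * dt)
    print(dt, dt / rho)                  # approaches 2.0 as dt shrinks
```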

Note that for an SIR model, the states (*X*_{S}(*t*), *X*_{I}(*t*)) = (*x*_{S}, 0) are absorbing states. For these states, the probability function (25) is defined as:

$${P}_{I(t)}(i\mid ({X}_{S}(t),{X}_{I}(t))=({x}_{S},0))=\{\begin{array}{ll}1,\hfill & \text{for}\phantom{\rule{0.16667em}{0ex}}i=0,\hfill \\ 0,\hfill & \text{otherwise},\hfill \end{array}$$

and the probability function (26) is defined as:

$${P}_{R(t)}(r\mid ({X}_{S}(t),{X}_{I}(t))=({x}_{S},0))=\{\begin{array}{ll}1,\hfill & \text{for}\phantom{\rule{0.16667em}{0ex}}r=0,\hfill \\ 0,\hfill & \text{otherwise}.\hfill \end{array}$$

The dynamics driving equations of the SIR model can be obtained from Eq. (7):

$$I(t)={X}_{S}(t)-{X}_{S}(t+\mathrm{\Delta}t),\phantom{\rule{0.38889em}{0ex}}\text{and}$$

(27)

$$R(t)={X}_{S}(t)-{X}_{S}(t+\mathrm{\Delta}t)+{X}_{I}(t)-{X}_{I}(t+\mathrm{\Delta}t).$$

(28)

The dynamics feasibility constraints for the SIR model can be obtained by Eq.8:

$$0\le {X}_{S}(t)-{X}_{S}(t+\mathrm{\Delta}t)\le {X}_{S}(t),\phantom{\rule{0.38889em}{0ex}}\text{and}$$

(29)

$$0\le {X}_{S}(t)-{X}_{S}(t+\mathrm{\Delta}t)+{X}_{I}(t)-{X}_{I}(t+\mathrm{\Delta}t)\le {X}_{I}(t).$$

(30)

Constraints (29)–(30) are equivalent to:

$$\begin{array}{l}0\le {X}_{S}(t+\mathrm{\Delta}t)\le {X}_{S}(t),\phantom{\rule{0.38889em}{0ex}}\text{and}\\ {X}_{S}(t)\le {X}_{S}(t+\mathrm{\Delta}t)+{X}_{I}(t+\mathrm{\Delta}t)\le {X}_{S}(t)+{X}_{I}(t).\end{array}$$

Therefore, the probability support (9) will be:

$${\mathrm{\Omega}}_{({X}_{S}(t),{X}_{I}(t))}=\{({x}_{S},{x}_{I})\in {\mathbb{N}}^{2}\mid 0\le {x}_{S}\le {X}_{S}(t),{X}_{S}(t)\le {x}_{S}+{x}_{I}\le {X}_{S}(t)+{X}_{I}(t)\}.$$

(31)

By the dynamics driving equations (27)–(28) and the probability support (31), the transition probability Pr{(*X*_{S}(*t* + Δ*t*), *X*_{I}(*t* + Δ*t*)) = (*x*_{S}, *x*_{I}) | *X*_{S}(*t*), *X*_{I}(*t*)} can be calculated as:

$$\begin{array}{l}Pr\{({X}_{S}(t+\mathrm{\Delta}t),{X}_{I}(t+\mathrm{\Delta}t))=({x}_{S},{x}_{I})\mid {X}_{S}(t),{X}_{I}(t)\}\\ =\{\begin{array}{l}\begin{array}{c}Pr\{(I(t),R(t))=({X}_{S}(t)-{x}_{S},{X}_{S}(t)+{X}_{I}(t)-{x}_{S}-{x}_{I})\mid {X}_{S}(t),{X}_{I}(t)\},\\ \text{for}\phantom{\rule{0.16667em}{0ex}}0\le {x}_{S}\le {X}_{S}(t),{X}_{S}(t)\le {x}_{S}+{x}_{I}\le {X}_{S}(t)+{X}_{I}(t),\end{array}\hfill \\ 0,\text{otherwise}.\hfill \end{array}\end{array}$$

(32)

By the assumption that *I*(*t*) and *R*(*t*) are independent, the probability function (32) results in:

$$\begin{array}{l}Pr\{({X}_{S}(t+\mathrm{\Delta}t),{X}_{I}(t+\mathrm{\Delta}t))=({x}_{S},{x}_{I})\mid {X}_{S}(t),{X}_{I}(t)\}\\ =\{\begin{array}{l}\begin{array}{c}{P}_{I(t)}({X}_{S}(t)-{x}_{S}\mid {X}_{S}(t),{X}_{I}(t))\phantom{\rule{0.16667em}{0ex}}{P}_{R(t)}({X}_{S}(t)+{X}_{I}(t)-{x}_{S}-{x}_{I}\mid {X}_{S}(t),{X}_{I}(t)),\\ \text{for}\phantom{\rule{0.16667em}{0ex}}0\le {x}_{S}\le {X}_{S}(t),{X}_{S}(t)\le {x}_{S}+{x}_{I}\le {X}_{S}(t)+{X}_{I}(t),\end{array}\hfill \\ 0,\text{otherwise}.\hfill \end{array}\end{array}$$

(33)
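The structure of Eq. (33) can be sketched in code. The binomial forms assumed below for the driving events *I*(*t*) and *R*(*t*) are illustrative placeholders; the model in §4.2 derives the actual probability functions from the contact rate, infectivity, and recovery rate.

```python
from math import comb

def sir_transition_prob(xs_next, xi_next, xs, xi, p_inf, p_rec):
    """Transition probability in the form of Eq. (33), assuming (as an
    illustration only) binomial driving events:
      I(t) ~ Binomial(X_S(t), p_inf)  -- new infections
      R(t) ~ Binomial(X_I(t), p_rec)  -- new removals
    p_inf and p_rec are assumed per-period probabilities."""
    i = xs - xs_next                 # I(t): drop in susceptibles, Eq. (27)
    r = xs + xi - xs_next - xi_next  # R(t): Eq. (28)
    # feasibility, i.e. support (31): 0 <= I(t) <= X_S(t), 0 <= R(t) <= X_I(t)
    if not (0 <= i <= xs and 0 <= r <= xi):
        return 0.0
    p_i = comb(xs, i) * p_inf**i * (1 - p_inf)**(xs - i)
    p_r = comb(xi, r) * p_rec**r * (1 - p_rec)**(xi - r)
    return p_i * p_r  # independence of I(t) and R(t), as in Eq. (33)

# sanity check: the kernel row sums to one over the support
xs, xi = 10, 5
total = sum(sir_transition_prob(a, b, xs, xi, 0.3, 0.4)
            for a in range(xs + 1) for b in range(xs + xi + 1))
```

Summing over all feasible next states recovers one, confirming that support (31) and the independence factorization define a proper probability distribution.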

The proposed models can be easily extended to capture the spread of diseases with more complex natural histories. For example, for diseases such as tuberculosis that are characterized by latent (non-infectious) periods after infection, models will often also include an exposed class E, to which infected individuals progress prior to becoming infectious. These SEIR models include four serial classes and can be easily constructed through the five steps outlined in §3.
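As a rough illustration of such an extension (a chain-binomial sketch, not the paper's exact construction), one period of an SEIR update might look as follows; the per-period probabilities `p_inf`, `p_prog`, and `p_rec` are assumed constants here, whereas in the full model the infection probability would depend on the current number of infectives.

```python
import random

def seir_step(s, e, i, r, p_inf, p_prog, p_rec, rng):
    """One period of an illustrative chain-binomial SEIR model:
    S -> E (new exposures), E -> I (progression to infectiousness),
    I -> R (removal); each transition is a binomial draw over the
    class being left."""
    new_e = sum(rng.random() < p_inf for _ in range(s))   # new exposures
    new_i = sum(rng.random() < p_prog for _ in range(e))  # become infectious
    new_r = sum(rng.random() < p_rec for _ in range(i))   # removals
    return s - new_e, e + new_e - new_i, i + new_i - new_r, r + new_r
```

Iterating `seir_step` produces one sample path of the four serial classes; the population size is conserved at every step.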

If birth is assumed to occur according to a Poisson distribution with rate *λ*_{B}, it can be incorporated into the disease dynamics by adding a birth event to the set of driving events in each period.

Individuals may also be expected to have different rates of exit from particular classes. For example, consider a disease in which *α*_{k}, *k* = 1, …, *K*, denotes the proportion of infected individuals who follow the *k*-th branch of the disease's natural history (Figure 2).

In Figure 2, the joint probability function of the driving events (*I*_{1}(*t*), …, *I*_{K}(*t*)) can be obtained by allocating each new infection to branch *k* with probability *α*_{k}, which yields a multinomial distribution over the branches.

Finally, the assumption of exponentially distributed stay time in a class might be violated for some classes; as an alternative, the modeler may choose a Gamma or an empirical distribution (Wearing et al., 2005). A Gamma distribution with rate parameter *μ* and an integer shape parameter *K* can be modeled by adding *K* serial classes, each with stay time exponentially distributed with rate *μ*. An empirical distribution may be modeled by adding a number of parallel classes (branches) to the model, as in Figure 2.
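This method of stages can be checked by simulation. The sketch below (with assumed parameter values) verifies that the total stay time across *K* serial exponential classes has the Gamma mean *K*/*μ*:

```python
import random

def stay_time(k, mu, rng):
    """Total time spent traversing k serial classes, each with an
    exponentially distributed stay time of rate mu; the sum is
    Gamma-distributed with shape k and rate mu (mean k/mu)."""
    return sum(rng.expovariate(mu) for _ in range(k))

rng = random.Random(1)
k, mu, n = 3, 0.5, 50_000           # assumed values for illustration
mean = sum(stay_time(k, mu, rng) for _ in range(n)) / n  # ~ k/mu = 6.0
```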

For a population of size *N*, the transition probability matrix of the Markov chain {(*X*_{C1}(*t*), …, *X*_{CM}(*t*)): *t* = 1, 2, …} can become prohibitively large. To reduce the state space, the proportion of the population in each class *C*_{m} can be discretized using control points ${b}_{0}^{{C}_{m}}<{b}_{1}^{{C}_{m}}<\cdots$ on [0, 1]; the aggregated chain {(Θ_{C1}(*t*), …, Θ_{CM}(*t*)): *t* = 1, 2, …} then tracks the interval into which each proportion falls.

The transition probabilities for the Markov chain {(Θ_{C1}(*t*), …, Θ_{CM}(*t*)): *t* = 1, 2, …} can now be calculated by:

$$\begin{array}{l}Pr\{({\mathrm{\Theta}}_{{C}_{1}}(t+\mathrm{\Delta}t),\dots ,{\mathrm{\Theta}}_{{C}_{M}}(t+\mathrm{\Delta}t))=({\theta}_{{k}_{1}}^{{C}_{1}},\dots ,{\theta}_{{k}_{M}}^{{C}_{M}})\mid ({\mathrm{\Theta}}_{{C}_{1}}(t),\dots ,{\mathrm{\Theta}}_{{C}_{M}}(t))\}\\ =\sum _{({x}_{1},\dots ,{x}_{M})\in {\mathrm{\Omega}}_{\mathrm{\Theta}(t)}}{P}_{({X}_{{C}_{1}}(t+\mathrm{\Delta}t),\dots ,{X}_{{C}_{M}}(t+\mathrm{\Delta}t))}(({x}_{1},\dots ,{x}_{M})\mid \lfloor N{\mathrm{\Theta}}_{{C}_{1}}(t)\rfloor ,\dots ,\lfloor N{\mathrm{\Theta}}_{{C}_{M}}(t)\rfloor )\end{array}$$

(34)

where the probability mass function of (*X*_{C1}(*t* + Δ*t*), …, *X*_{CM}(*t* + Δ*t*)) is the one developed in §3, and the set Ω_{Θ(*t*)} is defined as:

$${\mathrm{\Omega}}_{\mathrm{\Theta}(t)}=\{({x}_{1},\dots ,{x}_{M})\in {\mathbb{N}}^{M}\mid \lceil {Nb}_{{k}_{1}-1}^{{C}_{1}}\rceil \le {x}_{1}\le \lfloor {Nb}_{{k}_{1}}^{{C}_{1}}\rfloor ,\dots ,\lceil {Nb}_{{k}_{M}-1}^{{C}_{M}}\rceil \le {x}_{M}\le \lfloor {Nb}_{{k}_{M}}^{{C}_{M}}\rfloor \}.$$
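A sketch of how (34) aggregates an exact transition pmf over a grid cell, shown for a single class (*M* = 1). Both the toy binomial pmf and the control points below are assumptions chosen purely for illustration.

```python
from math import comb, ceil, floor

def aggregated_transition(cell, x_now, pmf, n, grid):
    """Probability (34) for a single class (M = 1): sum the exact
    one-period transition pmf over the counts x falling in the
    grid cell, i.e. ceil(N*b_{cell-1}) <= x <= floor(N*b_cell)."""
    lo = ceil(n * grid[cell - 1])
    hi = floor(n * grid[cell])
    return sum(pmf(x, x_now) for x in range(lo, hi + 1))

def pmf(x, x_now):
    """Toy exact pmf (an assumed placeholder): each of x_now
    individuals stays in the class independently w.p. 0.9."""
    if not 0 <= x <= x_now:
        return 0.0
    return comb(x_now, x) * 0.9**x * 0.1**(x_now - x)

n = 100
# control points chosen off the integer lattice so that the cells
# [0,25], [26,50], [51,75], [76,100] partition the counts 0..N
grid = [0.0, 0.255, 0.505, 0.755, 1.0]
total = sum(aggregated_transition(k, 80, pmf, n, grid) for k in range(1, 5))
```

Because the cells partition the range of counts, the aggregated probabilities over all cells sum to one; with control points that land exactly on multiples of 1/*N*, the ceiling/floor bounds in (34) can make adjacent cells share a boundary count, which is the kind of boundary effect discussed next.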

While the probability for some state transitions calculated by (34) may be positive, in some situations the probability of moving from state (Θ_{C1}(*t*), …, Θ_{CM}(*t*)) to (Θ_{C1}(*t* + Δ*t*), …, Θ_{CM}(*t* + Δ*t*)) *should* be set to zero in the transition probability matrix of the Markov chain {(Θ_{C1}(*t*), …, Θ_{CM}(*t*)): *t* = 1, 2, …}. This situation may occur when transitions from state (Θ_{C1}(*t*), …, Θ_{CM}(*t*)) to (Θ_{C1}(*t* + Δ*t*), …, Θ_{CM}(*t* + Δ*t*)) are not possible (for instance, in a SIR model with a closed population, the number of susceptibles at time *t* + Δ*t* cannot exceed the number of susceptibles at time *t*) but the probability support Ω_{X(t)} in (10) and the set Ω_{Θ(t)} in (34) intersect, i.e., Ω_{X(t)} ∩ Ω_{Θ(t)} ≠ ∅, which may result in a positive transition probability in (34). This can arise as a result of approximating the original Markov chain {(*X*_{C1}(*t*), …, *X*_{CM}(*t*)): *t* = 1, 2, …} by the aggregated chain.

In this section, we use the SIR model developed in §4.2 to capture an influenza outbreak in an English boarding school reported in (Anonymous, 1978) and recently used by Merl et al. (2009) and Ludkovski and Niemi (2010). The population consisted of *N* = 763 students and the infection was believed to be introduced by one student returning from Asia. The situation satisfies many requirements of a simple SIR model, particularly since no specific intervention was employed during the outbreak.

Given the population size *N* = 763, the SIR Markov model in §4.2 will have *N*(*N* + 1)/2 = 291,466 states. Here we show how the state aggregation method described in §5 can be used to approximate this Markov chain. We also investigate the effect of such approximation on the power of the model in fitting the observed data. The influenza spread model described in §4.2 has three parameters: the contact rate *λ*, infectivity *r*, and the mean recovery time 1/*μ*. Throughout this section, we use maximum likelihood estimation (as discussed in §3.2) to find these parameters.

To construct the approximate Markov chain {(Θ_{S}(*t*), Θ_{I}(*t*)): *t* = 1, 2, …}, we discretize the proportions of susceptibles and infectives with a coarse set of control points; Figure 3(a) displays the number of students infected and the expected prevalence of influenza provided by the resulting model.

Although the *expected* number of infectives provided by the Markov model in Figure 3(a) fits the data reasonably well, the variance of this model output is substantial. A tighter fit can be obtained by selecting a finer grid for model approximation. Figure 3(b) displays the number of students infected and the expected prevalence of influenza provided by the model when the proportions of susceptibles and infectives have been discretized by control points {0, 0.02, 0.04, 0.06, …, 0.98, 1}. The generated Markov chain has 1275 states (a 99.56% reduction in the state space). The model’s parameters are estimated to be *λ* = 15 contacts per day, *r* = 11.5%, and 1/*μ* = 1.8 days.
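The state counts quoted above can be reproduced under one interpretive assumption, namely that states with no infectives are excluded (the paper states only the totals):

```python
def sir_state_count(m):
    """Number of (x_S, x_I) pairs with x_S + x_I <= m and x_I >= 1,
    which equals m(m+1)/2 -- matching the quoted N(N+1)/2 figure if
    disease-free states are excluded (an interpretive assumption)."""
    return sum(m - x_s for x_s in range(m))

full = sir_state_count(763)   # exact chain for N = 763 -> 291,466 states
coarse = sir_state_count(50)  # 50 intervals from {0, 0.02, ..., 1} -> 1275
reduction = 100 * (1 - coarse / full)  # ~99.56% reduction
```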

Note that in both Figure 3(a) and Figure 3(b), the model is not able to perfectly capture the observed prevalence at time *t* = 1. This occurs because the number of infectives at time *t* = 1, *X*_{I}(1) = 0, is too low to be captured by the approximate Markov chain {(Θ_{S}(*t*), Θ_{I}(*t*)): *t* = 1, 2, …} at these grid resolutions.

We note that the grid size in this illustrative example is arbitrary and was chosen solely to demonstrate the effect on model outputs. Choice of grid size will ideally reflect the intended use of the model. For example, to predict the magnitude of an outbreak, grid size may be selected to minimize the variance of the prediction, whereas, to determine health policies for controlling an outbreak, grid size may be selected to minimize the expected decision errors caused by approximation.

Table 1 summarizes the effect of grid choice on parameter estimation (see §3.2) and the predictive power of the model (see §3.3) when fitted to an initial set of observations. In this example, parameter estimates are not very sensitive to grid choice; however, choosing a finer grid significantly lowers the standard deviations of the predicted values (e.g., *X*_{I}(5)).

Finally, to demonstrate the computational burden of the proposed SIR model, we compare the times needed to build the model for different population sizes in Figure 4. The SIR models here use the same parameter values as the model of Figure 3(b), with a relatively fine grid {0, 0.025, 0.05, 0.075, …, 0.975, 1} for both the number of susceptibles and infectives.

Over the past several years, substantial effort has been devoted to developing methods that facilitate real-time decision making over the course of an epidemic as new information becomes available. Dynamic programming (Winston, 2003; Bertsekas, 2005), a technique that aims to aid decision making in changing environments, has been successfully applied to problems with similar characteristics (Schaefer et al., 2005; Van and Dana, 2003; Winston, 2003). To date, however, dynamic programming techniques have not been exploited in efforts to aid policy choices for the control of emerging or persistent infectious diseases. One reason for this may be that currently available infectious disease models do not generally satisfy the requirements of these techniques, as discussed in §2.

In this paper, we proposed a class of models for the spread of infectious diseases in relatively large populations. This class of models generalizes the discrete-time Markov chain models of Reed-Frost (Abbey, 1952) and Greenwood (Greenwood, 1931), which have been used to describe the spread of pathogens with infectious periods that are relatively short in comparison with their latent periods (Daley and Gani, 1999). The proposed models possess two advantages over existing models of infectious diseases: (1) they can be effectively used by dynamic optimization methods to select *optimal dynamic health policies*, and (2) they are able to approximate the spread of the disease in relatively large populations with a limited state space size and a reasonable degree of accuracy and computation time. In a related paper, we have demonstrated how the proposed class of models can be employed by an MDP to generate stationary optimal health policies for controlling a simplified influenza epidemic (Yaesoubi and Cohen, 2010).

Although the primary motivation for developing the proposed class of Markov models was to use them in generating dynamic health policies, the framework developed here extends the application of stochastic mathematical models to relatively large populations. Current stochastic mathematical models of infectious diseases have been limited to small- and moderate-size populations (Keeling and Ross, 2008), and hence, to date, computer simulation has been the primary modeling approach for capturing the *stochastic* dynamics of infectious disease spread in large populations.

The primary disadvantage of a computer simulation model is the need for a large number of replications to determine the stochastic behavior of the disease spread. The class of Markov models proposed here provides a mathematical framework to capture the stochastic dynamics of infectious disease spread in relatively large populations while allowing the modeler to incorporate the desired level of complexity in the representation of the within-host natural history of disease and between-host transmission dynamics. If, however, disease spread exhibits negligible stochasticity, deterministic models would be preferred to the stochastic framework proposed in this paper.

The main disadvantage of our proposed framework is the need to use *grids* to reduce the state space of the Markov model when modeling disease spread in large populations. Determining the grid size that minimizes approximation errors and developing alternative approximation techniques are promising topics for future research.

- We consider dynamic programming for the optimal control of infectious spreads.
- The major limitations of existing infectious disease models are discussed.
- We propose a class of models which can be employed by DP or approximate DP.
- We demonstrate the ability of these models to fit data from an emerging epidemic.

The authors would like to thank Marc Lipsitch for his comments and suggestions. The work is supported by NIH grants DP2OD006663 and U54GM088558. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Office of the Director of the National Institutes of Health and the National Institute of General Medical Sciences of the National Institutes of Health.

^{1}The support of a distribution is the smallest closed set whose complement has probability zero.

^{2}By an alternative reasoning, since all the infectives at time *t* will be removed at time *t* + 1, infectives at time *t* + Δ*t* are only those who were infected during the period [*t*, *t* + Δ*t*]; that is, *N* − *X*_{S}(*t* + Δ*t*).



Ted Cohen, Brigham and Women’s Hospital - Division of Global Health Equity, Harvard School of Public Health - Department of Epidemiology, 641 Huntington Ave., Boston, MA 02115, U.S.A.

- Abbey Helen. An examination of the Reed-Frost theory of epidemics. Human Biology. 1952;24(3):201–233. [PubMed]
- Allen Linda JS. Some discrete-time SI, SIR, and SIS epidemic models. Mathematical biosciences. 1994;124(1):83–105. [PubMed]
- Allen Linda JS, Burgin Amy M. Comparison of deterministic and stochastic SIS and SIR models in discrete time. Mathematical Biosciences. 2000;163:1–33. [PubMed]
- Anderson Roy M, May Robert M. Infectious Diseases of Humans: Dynamics and Control. Oxford University Press; 1992.
- Anonymous. Influenza in a boarding school. British Medical Journal. 1978:587. [PubMed]
- Bertsekas Dimitri P. Dynamic Programming and Optimal Control. 3. Vol. 1. Athena Scientific; 2005.
- Castillo-Chavez Carlos, Yakubu Abdul-Aziz. Discrete-time SIS models with complex dynamics. Nonlinear Analysis-Theory Methods and Applications. 2001;47(7):4753–4762.
- Daley Daryl J, Gani Joseph M. Epidemic Modelling: An Introduction. Cambridge University Press; Cambridge; New York: 1999.
- Dimitrov N, Goll S, Meyers LA, Pourbohloul B, Hupert N. Optimizing tactics for use of the US antiviral strategic national stockpile for pandemic (H1N1) Influenza, 2009. PLoS Curr Influenza 2009 [PMC free article] [PubMed]
- Ge L, Kristensen AR, Mourits MC, Huirne RB. A new decision support framework for managing foot-and-mouth disease epidemics. Annals of Operations Research. 2010:1–14.
- Goldstein E, Apolloni A, Lewis B, Miller JC, Macauley M, Eubank S, Lipsitch M, Wallinga J. Distribution of vaccine/antivirals and the ‘least spread line’ in a stratified population. Journal of the Royal Society Interface. 2010;7(46):755–764. [PMC free article] [PubMed]
- Greenwood M. On the statistical measure of infectiousness. The Journal of Hygiene. 1931;31(3):336–351. [PMC free article] [PubMed]
- Halloran ME, Ferguson NM, Eubank S, Longini IM, Cummings DAT, Lewis B, Xu S, Fraser C, Vullikanti A, Germann TC, et al. Modeling targeted layered containment of an influenza pandemic in the United States. Proceedings of the National Academy of Sciences. 2008;105(12):4639–4644. [PubMed]
- Hethcote Herbert W. The basic epidemiology models: models, expressions for *R*_{0}, parameter estimation, and applications. In: Mathematical Understanding of Infectious Disease Dynamics. World Scientific Publishing Company; 2008. pp. 1–61.
- Hethcote HW. The mathematics of infectious diseases. SIAM Review. 2000;42(4):599–653.
- Jacquez John A, O’Neill Philip. Reproduction numbers and thresholds in stochastic epidemic models i. homogeneous populations. Mathematical Biosciences. 1991;107(2):161–186. [PubMed]
- Jacquez John A, Simon CP. The stochastic SI model with recruitment and deaths I. Comparison with the closed SIS model. Mathematical Biosciences. 1993;117(1–2):77–125. [PubMed]
- Keeling MJ, Ross JV. On methods for studying stochastic disease dynamics. J R Soc Interface. 2008;5:171–181. [PMC free article] [PubMed]
- Larson Richard C. Simple models of influenza progression within a heterogeneous population. Operations Research. 2007;55(3):399–412.
- Lefevre C. Optimal control of a birth and death epidemic process. Operations Research. 1981;29(5):971–982. [PubMed]
- Ludkovski Michael, Niemi Jarad. Optimal dynamic policies for influenza management. Statistical Communications in Infectious Diseases. 2010;2(1)
- Merl D, Johnson LR, Gramacy RB, Mangel M. A statistical framework for the adaptive management of epidemiological interventions. PLoS One. 2009;4(6):e5087. [PMC free article] [PubMed]
- Nåsell Ingemar. Stochastic models of some endemic infections. Mathematical Biosciences. 2002;179(1):1–19. [PubMed]
- Powell WB. Approximate Dynamic Programming: Solving the curses of dimensionality. Wiley-Interscience; 2007.
- Schaefer A, Bailey M, Shechter S, Roberts M. Modeling medical treatment using markov decision processes. Vol. 23. Springer; New York: 2005. Operations Research and Health Care -A Handbook of Methods and Applications; pp. 593–612.
- Van Cuong Le, Dana Rose-Anne. Dynamic Programming in Economics. Kluwer Academic Publishers; Boston: 2003.
- Wallinga J, van Boven M, Lipsitch M. Optimizing infectious disease interventions during an emerging epidemic. Proceedings of the National Academy of Sciences. 2010;107(2):923–928. [PubMed]
- Wearing HJ, Rohani P, Keeling MJ. Appropriate models for the management of infectious diseases. PLoS Medicine. 2005;2(7):621. [PMC free article] [PubMed]
- Winston Wayne L. Operations Research: Applications and Algorithms. 4. Duxbury Press; 2003.
- Yaesoubi Reza, Cohen Ted. Dynamic Health Policies for Controlling the Spread of Emerging Infections: Influenza as an Example. 2010. Submitted. [PMC free article] [PubMed]
