
Health Care Manag Sci. Author manuscript; available in PMC 2014 June 1.

Published online 2012 October 23. doi: 10.1007/s10729-012-9215-x

PMCID: PMC3574630

NIHMSID: NIHMS416957

George Miller, Altarum Institute, 3520 Green Court, Suite 300, Ann Arbor, MI 48105, USA (george.miller@altarum.org)

The publisher's final edited version of this article is available at Health Care Manag Sci.

It is widely believed that the US health care system needs to transition from a culture of reactive treatment of disease to one of proactive prevention. As a tool for understanding the appropriate allocation of spending to prevention versus treatment (including research into improved prevention and treatment), a simple Markov model is used to represent the flow of individuals among states of health, where the transition rates are governed by the magnitude of appropriately-lagged expenditures in each of these categories. The model estimates the discounted cost and discounted effectiveness (measured in quality adjusted life years or QALYs) associated with a given spending mix, and it allows computing the marginal cost-effectiveness associated with additional spending in a category. We apply the model to explore interactions of alternative investments in cardiovascular disease (CVD) and to identify an optimal spending mix. Under the assumptions of our model structure, we find that the marginal cost-effectiveness of prevention of CVD varies with changes in spending on treatment (and vice versa), and that the optimal mix of CVD spending (i.e., the spending mix that maximizes the overall QALYs achieved) would, indeed, shift spending from treatment to prevention.

It is frequently claimed that spending on prevention in the US accounts for only 3 % of national health expenditures, representing an inappropriate emphasis on treatment over prevention. In reality, the 3 % figure appears to understate prevention spending: depending on what is counted as prevention, prevention spending approaches 9 % of national health expenditures [1]. But is this enough? Would we as a nation be healthier if we shifted some spending from treatment of existing disease to prevention?

As has been noted elsewhere [2], this question ignores the fact that there are opportunities to improve the health of the nation by shifting resources from less cost-effective interventions to more cost-effective ones both within and between prevention and treatment. However, our intention is to investigate conditions under which shifting spending between treatment interventions with “typical” cost-effectiveness and prevention interventions with “typical” cost-effectiveness would improve health. We make this concept more precise in what follows.

To address this question, we have developed a model to estimate the cost-effectiveness of alternative spending streams for disease treatment and prevention, and for research into new treatment and prevention interventions. We have exercised the model to develop insights into the optimal spending mix for prevention and treatment of cardiovascular disease (CVD) and into the ways in which investments in prevention and treatment interact. We consider CVD prevention to include both primary and secondary prevention, using standard definitions (see, for example, [1]): primary prevention consists of interventions to prevent the occurrence of disease or disability, while secondary prevention consists of interventions to detect and arrest disease or disability in its early asymptomatic stages. This means, for example, that spending to control hypertension and hyperlipidemia in patients without diagnosed CVD is considered prevention, while similar spending on CVD patients is treatment.

A number of models have been developed to investigate the prevention and treatment of CVD. In a systematic review of such models, Unal et al. [3] identify 42 models employing methods such as simulation, Markov or cell-based structures, and life table analysis. These models and other methods have been used in numerous studies of the effectiveness of alternative CVD treatment and prevention interventions. For example, Maciosek et al. [4] used the results of previous cost-effectiveness studies to prioritize clinical preventive services for a variety of diseases, including CVD. Among their conclusions was that aspirin use and smoking cessation efforts are two of the highest priority preventive measures in terms of their cost-effectiveness and reduction of clinically-preventable burden. Ford et al. [5] used the IMPACT mortality model to identify the relative impact of alternative treatments and changes in risk factors (total cholesterol, systolic blood pressure, smoking prevalence, physical activity, body-mass index, and diabetes prevalence) on the observed decline in US deaths from coronary heart disease between 1980 and 2000. They conclude that risk factor reduction accounted for approximately half of the decline, with the other half attributable to medical therapies. Unal et al. [6] describe a similar study of the reduction in coronary heart disease in the United Kingdom. Among their conclusions is that nearly half of the observed decline in deaths could be attributed to smoking cessation. Kahn et al. [7] conducted simulations with the Archimedes model to establish the effects of 11 preventive measures (involving aspirin administration, cholesterol reduction, blood pressure reduction, control of glucose levels in diabetics, smoking cessation, and weight reduction) on the morbidity, mortality, and costs associated with CVD.

These and other studies contribute to an improved understanding of the relative merits of currently-available alternatives for treating and preventing CVD. Our work is designed to complement these contributions by investigating two areas (and their interactions) that these studies do not explicitly address: (1) the tradeoffs between emphasis on treatment and on prevention of CVD in order to establish an ideal prevention-treatment mix, and (2) the effects of research into new and improved interventions for prevention and treatment of CVD on downstream costs and effectiveness and on the ideal prevention-treatment mix. Our model was developed to generate insights to help educate the intuition of policy analysts regarding these interactions and tradeoffs. It was therefore deliberately designed with a simple structure that allows interpretation and understanding of the dynamics that drive its results. Unlike the above models, it was not designed with the detail necessary to explore the effects of specific interventions, nor was it designed to generate precise recommendations regarding an optimal mix of prevention and treatment spending.

Some simple models have previously been developed to address the tradeoffs between treatment and prevention. Our model belongs to this category of more aggregate models, which provide general insights not easily gleaned from more detailed models such as those discussed above. Russell [8] describes a simple relationship to show how the cost-effectiveness of prevention changes with the introduction of a new treatment therapy. Homer and Hirsch [9] develop a simple systems dynamics model of chronic disease prevention which they use to illustrate the effects of different levels of investment in “onset prevention” versus “complications prevention”. Heffley [10] uses a simple Markov model and optimization theory to identify the optimal allocation of resources between treatment and prevention. Heffley’s model is most similar to ours, but its structure differs in important ways (e.g., his model allows for cures, while ours represents chronic conditions that can be controlled but not cured; his model does not capture deaths whereas ours includes deaths from CVD and from other causes). However, his use of a Markov structure and the form of his equations representing the impact of prevention and treatment expenditures on transitions among disease states within this structure are very similar to our approach. All of these simple tradeoff models were developed for illustrative purposes rather than detailed research, none of them has been used to study prevention and treatment of CVD, and none of them explicitly addresses the allocation of resources to research into new treatment and prevention alternatives. However, they provide a useful starting point for the research described here.

We use a simple Markov model to represent the flow of a homogeneous population from birth through a healthy state, a single diseased state, and death (Fig. 1). Population flows are represented with equations that relate appropriately-lagged spending on prevention interventions, treatment interventions, prevention research, and treatment research to transition rates from the healthy state to the diseased state and from the diseased state to death from CVD. (Non-CVD deaths are represented with constant input mortality rates.) The impact of these equations is illustrated in Fig. 2 for spending on treatment interventions and treatment research. For a given research investment level, intervention spending produces diminishing returns as it increases, and the impact of additional spending is assessed with respect to current spending levels. Thus, a 10 % increase in treatment spending from its current level will cause the death rate to decline from .030 to .027. Though not shown in the exhibit, treatment spending also affects the average morbidity level of the sick population (again, with diminishing returns), measured as the annual fraction of a quality adjusted life year (QALY) accrued by the average sick patient (where a healthy individual accrues 1.0 QALY per year). As shown in the figure, a specific level of research spending causes the intervention spending curve to shift down and to the left (for intervention spending greater than 0). The magnitude of this shift also exhibits diminishing returns as this research spending level increases. For given investment streams, the model produces time histories of the sizes of the healthy and diseased populations, total discounted QALYs associated with the investments, total discounted expenditures associated with the investments, and the resultant cost-effectiveness of the investments (discounted cost associated with an investment per discounted QALY saved). 
The Appendix describes the model structure, including the forms of the equations, in detail.
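The annual population flows described above can be sketched as a simple cohort recursion. The following is a minimal illustration only: every parameter value is a placeholder, not a calibrated input from the paper, and the actual model (implemented in an Excel spreadsheet) uses the equation forms given in the Appendix to link lagged spending to the transition rates.

```python
# Minimal sketch of the two-state Markov cohort flow: healthy -> sick -> death,
# with annual "births" of new healthy individuals and constant non-CVD death
# rates. All parameter values are illustrative placeholders.

def simulate(years=100, births=4.5e6, healthy0=100e6, sick0=20e6,
             incidence=0.01, cvd_death=0.030, other_death=0.02,
             sick_qaly=0.75, discount=0.03):
    """Track the healthy and sick populations and accumulate discounted QALYs.

    A healthy person accrues 1.0 QALY per year; a sick person accrues
    sick_qaly. QALYs in year t are discounted by (1 + discount)**-t.
    """
    healthy, sick, total_qalys = healthy0, sick0, 0.0
    for t in range(years):
        df = (1 + discount) ** -t                 # discount factor for year t
        total_qalys += df * (healthy * 1.0 + sick * sick_qaly)
        new_sick = healthy * incidence            # healthy -> sick transitions
        healthy = healthy * (1 - incidence - other_death) + births
        sick = sick * (1 - cvd_death - other_death) + new_sick
    return healthy, sick, total_qalys
```

In the paper's model the `incidence`, `cvd_death`, and `sick_qaly` values are not constants but functions of appropriately-lagged prevention, treatment, and research spending, with diminishing returns.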

The model, which is implemented in an Excel spreadsheet, either can be run with input investment streams or can find the optimal mix of prevention and treatment spending. In the descriptive mode, updates to the investment streams (or other inputs) result in automatic re-computation of the simulated time history of spending, annual population in the healthy and sick states, annual deaths of each type, and average morbidity of the sick population. Optimization of the spending mix begins with a fixed total amount of per-capita spending per year and repeats these computations in an iterative search until the model finds the fixed annual fraction of the resultant total spending to be applied to prevention in order to maximize total discounted QALYs accrued over an input time horizon.
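The optimization mode described above amounts to a one-dimensional search over the fixed annual fraction of spending allocated to prevention. The sketch below illustrates the idea with a hypothetical concave objective standing in for total discounted QALYs; the weights (3.0, 5.0) and budget are invented for illustration and are not the model's calibrated values.

```python
# Sketch of the optimization mode: find the fixed annual fraction of total
# per-capita spending allocated to prevention that maximizes total discounted
# QALYs. The objective below is a hypothetical stand-in with diminishing
# returns to each spending category, not the paper's calibrated model.

import math

def total_qalys(prev_frac, budget=2000.0):
    """Hypothetical objective: concave (log) returns to each category."""
    prev = prev_frac * budget
    treat = (1 - prev_frac) * budget
    return 3.0 * math.log1p(prev) + 5.0 * math.log1p(treat)

def optimize(objective, lo=0.0, hi=1.0, steps=1000):
    """Simple grid search, mirroring the model's iterative search."""
    return max((lo + (hi - lo) * i / steps for i in range(steps + 1)),
               key=objective)

best = optimize(total_qalys)  # optimal prevention fraction of spending
```

Because both categories exhibit diminishing returns, the objective is concave in the prevention fraction and the search finds an interior optimum.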

The model was designed with a relatively simple structure in order to promote qualitative understanding of the complex interactions of spending on prevention, treatment, and research. This simple structure does not, however, allow for accurate quantitative predictions about the magnitude of changes in morbidity and mortality associated with specific changes in the allocation of spending. Results described in Section 3 should be interpreted with this limitation in mind.

We have populated the model with data representing prevention and treatment of CVD (Table 1). We include within CVD coronary heart disease, stroke, heart failure, peripheral artery disease, and arterial embolisms and thromboses (ICD-10 I20–25, I50, I60–70, I73–74). Starting in the base year of 2009 (year 0 in the model), we track the US population over the age of 45 years, including annual “births” of new, healthy (i.e., free of CVD) 45-year-olds, transitions from this healthy state to sick (with CVD), and deaths from either the healthy or the sick state. Baseline incidence and death rates, per-capita spending levels, parameters representing the effectiveness of spending on prevention and treatment, and time lags representing the delay between research expenditures and the time at which the results of research are realized in improved effectiveness of prevention or treatment are set to represent as closely as possible recent history in the US. We produce model results over a 100 year horizon, and we discount future expenditures and QALYs at 3 % per year.

Our methods for estimating baseline CVD spending are described in [11, 12]. (Spending on prevention of CVD is considerably higher than for most conditions because it includes significant spending on hypertension and hyperlipidemia in patients without diagnosed CVD.) Note that spending is input as an annual per-capita value, so that total spending will vary with the size of the population. Prevention expenditures are assumed to take effect after a 10 year lag, representing the time between initial application of a prevention intervention and the time at which the prevented condition might have occurred in the absence of the preventive measure. A 10 year lag might correspond, for example, to the delay associated with the use of drugs to control hypertension by 50 year olds. Because the magnitude of this lag varies by type of prevention, our analysis includes varying its value parametrically.

The effectiveness of spending on prevention and treatment is based on our analysis [12] of CVD-specific data from the Tufts Cost-Effectiveness Analysis Registry [13]. The Appendix describes our method for converting these values to parameters of the equations that describe the impact of spending on CVD sickness and death rates.

The bounds on QALYs per sick person-year, which were inferred from the work of Dyer et al. [14], represent average morbidity levels in the presence of no treatment spending (lower bound) and unlimited spending (upper bound). These bounds were not approached in any of our model runs; in the base case described below, QALYs per sick person-year after 100 years was 0.75. While we recognize that some individuals in our “healthy” population will have some level of morbidity due to conditions other than CVD, we represent their average morbidity level with a QALY value of 1.0. This overstatement has little impact on our overall results, because all reported cost-effectiveness results are presented as differences from a reference case, where the result of interest is the difference in QALYs achieved between the two cases.

Although there is substantial evidence that past medical research has had a significant impact on morbidity and mortality (e.g., [15–17]), we could find no reliable data describing the magnitude of the relationship between research spending and subsequent effectiveness of CVD prevention and treatment. For this reason (and for other reasons briefly discussed in Section 4), model runs that include either prevention or treatment research are assumed to generate 100 million additional (discounted) QALYs over a 100 year horizon. These values were selected merely to demonstrate the impact of a successful research program on the cost-effectiveness of prevention and treatment of CVD. However, our analysis suggests that they are lower than the impact of cholesterol reduction as a risk factor on CVD mortality between 1980 and 2000 that is reported by Ford et al. [5], which is largely the result of the introduction of statins. They also appear to be consistent with the impact of new laboratory procedures introduced between 1990 and 1998 on subsequent life-years saved (though not associated only with CVD), as reported by Lichtenstein [18]. Our derivation of the magnitude of the lag between initiation of an ultimately successful research program and broad clinical use of its results is described in [12].

Results in the following section are produced by varying these parameter values selectively.

Running the model with the data described in Table 1, but with no research spending, produces a set of baseline results with which we compare various other model runs. This baseline run retains the input allocation of 28.1 % of spending to prevention and yields a marginal treatment cost-effectiveness of $20,550 per QALY and a marginal prevention cost-effectiveness of $16,918 per QALY. Figure 3 illustrates the resultant time history of the healthy, sick, and total population over the model’s 100 year time horizon.

We do not expect the model, with its many simplifications, to produce highly accurate population, morbidity, and mortality forecasts: as noted earlier, the model was developed to explore the dynamics of alternative spending streams rather than to predict the effects of this spending precisely. As shown in Fig. 3, the model’s projection of overall growth in the 45+ population is slightly higher than projections by the US Census Bureau [19] until 2025, when the two forecasts are equal. For subsequent years until 2050 (the last year of the Census forecast), the model projects a slightly slower growth in the population, and is 8.7 % lower than the Census projection in 2050 (170.7 million versus 187.0 million). Among the reasons for this latter discrepancy are that our model does not represent net internal migration and uses a constant annual “birth” rate of 4.5 million 45-year-olds (this rate grows to 5.3 million in the Census projections). The model forecasts a growth rate in deaths from CVD of 82 % between 2010 and 2050; this compares favorably with the forecast by Foot et al. [20] of the growth in the heart disease death rate of 83 %. (However, this latter forecast excludes stroke, which is included in the model.) Heidenreich et al. [21] forecasts a growth from 2010 to 2030 in the prevalence of coronary heart disease, heart failure, and stroke of 16.6 %, 25 %, and 24.9 %, respectively. Their numbers correspond to an overall growth in prevalence of all three diseases between 16.6 % and 19.1 %, depending on the degree to which more than one of these conditions is present in an individual. In comparison, the model forecasts that overall CVD prevalence will grow by 25.5 % over the same period. Thus, while the model does not replicate other forecasts, it is reasonably consistent with them.

To investigate the interaction between the cost-effectiveness of prevention and of treatment spending, we made a number of model runs in which we varied the spending on prevention (or treatment) and computed the marginal cost-effectiveness of treatment (prevention), using the standard definition of cost-effectiveness, as described by Gold et al. [22]. By marginal cost-effectiveness, we mean the slope of the cost-effectiveness curve with respect to a change in treatment (or prevention) spending. We estimate this slope by adding a very small amount of annual treatment (prevention) spending and computing the cost-effectiveness of this spending as (C1 − C0)/(Q1 − Q0), where C1 − C0 is the small incremental change in discounted treatment (prevention) spending, and Q1 − Q0 is the resultant change in discounted QALYs realized over the model’s 100 year time horizon. Results are summarized in Fig. 4.

Marginal cost-effectiveness of additional treatment (prevention) spending as a function of prevention (treatment) spending level
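The finite-difference computation of marginal cost-effectiveness can be sketched as follows. Here `run_model` is a hypothetical stand-in for the full simulation, with invented coefficients chosen only to exhibit diminishing returns; the real computation perturbs the spreadsheet model's spending inputs.

```python
# Sketch of the marginal cost-effectiveness computation (C1 - C0)/(Q1 - Q0):
# perturb annual spending in one category by a small amount eps and take the
# ratio of the change in discounted cost to the change in discounted QALYs.
# run_model is a hypothetical stand-in, not the paper's model.

import math

def run_model(treat_spend):
    """Return (discounted cost, discounted QALYs) for a per-capita spending
    level; concave QALY response represents diminishing returns."""
    cost = treat_spend * 25.0                      # discounted spending stream
    qalys = 1e6 * math.log1p(treat_spend / 100.0)  # concave health returns
    return cost, qalys

def marginal_ce(spend, eps=1.0):
    """Finite-difference slope: dollars per additional QALY at this level."""
    c0, q0 = run_model(spend)
    c1, q1 = run_model(spend + eps)
    return (c1 - c0) / (q1 - q0)
```

With a concave QALY response, the marginal cost-effectiveness ratio grows (worsens) as the spending level rises, which is the diminishing-returns behavior discussed in the text.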

As the figure illustrates, additional spending on treatment causes the cost-effectiveness of prevention to worsen slightly (become larger). For example, a $500 per-capita increase in treatment spending from $1300 to $1800 results in a $1,309/QALY increase in the marginal cost-effectiveness of prevention, from $16,356 to $17,665. This impact illustrates how better disease treatment can reduce the value of preventing the condition. Similarly, better prevention causes the value of treatment to lessen. (In the extreme, of course, preventing the condition entirely would make treatment expenditures worthless).^{1} The relatively small magnitudes of these impacts are related to how rapidly returns (such as changes in the sickness and death rates) diminish with additional spending. Because we have little empirical evidence to support the model’s representation of these rates of diminishing returns, these magnitudes should be the subject of further research. However, our results illustrate that the cost-effectiveness of additional spending on prevention depends on current capabilities to treat (and vice versa). Because these capabilities change over time, decision makers should be aware that published estimates of the cost-effectiveness of an intervention do not necessarily reflect the intervention’s future value.

It is sometimes argued that the generally accepted practice [22] of discounting both cost and QALYs in cost-effectiveness analysis at the same discount rate tends to bias such analyses in favor of treatment interventions over prevention [23]. This is because prevention expenditures tend to produce results after a longer time delay (and the resultant effectiveness is therefore more heavily discounted) than with treatment spending. This effect grows as the discount rate increases. However, recent discount rate guidelines from the federal government recommend the use of discount rates that are lower than the commonly used value of 3 % [24]. To illustrate the effect of changing the discount rate, Fig. 5 shows its impact on the marginal cost-effectiveness of treatment and prevention for our CVD example, in which we assume that treatment effects occur immediately after the expenditure is made, whereas the impact of prevention spending is realized after a 10 year delay.

As the exhibit shows, lowering the discount rate causes the marginal cost-effectiveness of both prevention and treatment to decrease (because either type of spending has downstream benefits that are discounted less as the discount rate decreases), but it decreases more rapidly for prevention. As a result, spending to prevent CVD appears more cost-effective than treatment only if the discount rate does not exceed roughly 4 %. Future use in cost-effectiveness analyses of discount rates lower than 3 % should cause prevention interventions to fare more favorably when compared with treatment interventions.
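The discounting penalty on lagged prevention benefits is easy to see with a present-value calculation. The numbers below are a freestanding illustration of the mechanism, not outputs of the model: a QALY realized immediately is worth 1.0 today regardless of the discount rate, while one realized after a 10 year lag is discounted, and the penalty grows with the rate.

```python
# Present value of one QALY realized after a lag, at a given discount rate.
# Illustrates why equal discounting of costs and QALYs disadvantages
# prevention, whose benefits arrive after a lag.

def present_value(qalys, lag, rate):
    return qalys / (1 + rate) ** lag

pv_3pct = present_value(1.0, lag=10, rate=0.03)   # ~0.744
pv_7pct = present_value(1.0, lag=10, rate=0.07)   # ~0.508
```

At 3 % a 10-year-lagged QALY retains about three quarters of its value; at 7 % it retains about half, so higher discount rates tilt the comparison toward treatment, whose effects are assumed immediate.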

To investigate the extent to which the current mix of expenditures between prevention and treatment is appropriate from a societal perspective, we used the model to identify the percent of annual per-capita spending that should be allocated to prevention to maximize the overall effectiveness of prevention and treatment expenditures. More precisely, we fixed per-capita annual CVD spending on prevention and treatment combined to current levels and found the fixed percentage annual split between prevention and treatment of the resultant total funding that maximizes the total number of (discounted) QALYs realized during the model’s 100 year time horizon. Results associated with this optimal allocation of expenditures are compared with base case values in Table 2.

The table indicates that it would be optimal (in the sense of our computations) to increase annual spending on prevention from 28.1 % to 37.7 % of total spending, possibly helping to validate the concern of some that prevention is underfunded compared to treatment. (Note, however, that the limited empirical basis for some aspects of the model’s equations means that this precise optimal amount should not be taken as a recommendation.) Such a shift would increase the size of the healthy population at 100 years by 6.6 million and the size of the total population by 3.3 million. The size of the sick population would decline by 3.4 million for two reasons: the increased spending on prevention would reduce the CVD incidence rate, and the diversion of expenditures from treatment to prevention would cause the CVD death rate to increase. The average morbidity level of the sick population after 100 years would be 0.74 QALYs per person-year, slightly lower than the base case value of 0.75. Because of diminishing returns, the increased spending on prevention would cause the marginal cost-effectiveness for prevention to increase (worsen), while decreased spending on treatment would cause the marginal cost-effectiveness of treatment to decrease (improve), so that the two types of investments would have nearly equal cost-effectiveness. (Note that optimal allocation of a fixed budget between prevention and treatment would cause the marginal cost-effectiveness of prevention to exactly equal that of treatment. However, our optimization scheme allocates fixed per-capita spending between prevention and treatment. Because the size of the population, and therefore the total expenditures being allocated, varies as the spending mix varies, our formulation does not produce equal marginal cost-effectiveness values at the optimal mix.)

Figure 6 shows how the discounted total QALYs (accrued over the model’s 100 year horizon), the prevention marginal cost effectiveness, and the treatment marginal cost effectiveness vary as the spending mix deviates from optimal. As indicated in Table 2, total discounted QALYs are maximized when prevention expenditures increase from 28.1 % to 37.7 % of expenditures, at which point the two marginal cost-effectiveness values are nearly equal. Because of diminishing returns, the marginal cost-effectiveness of prevention increases (worsens) as the prevention share of expenditures increases, while the marginal cost-effectiveness of treatment decreases (improves).

Unlike treatment of existing disease, the effectiveness of prevention spending is usually realized after a significant lag following the investment in prevention. (In contrast, our analysis assumes that no lag is associated with realizing the effectiveness associated with treatment spending.) The duration of this lag depends on the nature of the preventive intervention. For example, the reduction of incidence of CVD associated with a program to discourage smoking among teenagers will occur with a much greater lag than the reduction associated with the use of statins by a population of 50 year olds. The impact of the duration of this lag on the optimal mix of spending between treatment and prevention is shown in Fig. 7.

With a 3 % discount rate, the optimal percent of spending on prevention ranges from 50 % if there is no lag to 0 % as the lag approaches 35 years. For comparison, our base case (un-optimized) spending mix allocates 28.1 % of annual spending to prevention and assumes a 10 year lag before prevention spending has an impact; as noted in the previous section, the optimal mix with a 10 year lag is 37.7 %. It has been argued that identifying near-term benefits of prevention (such as the impact of smoking cessation on reducing the incidence of low birth weight, in addition to its longer-term benefits in reducing CVD and other conditions) will help reinforce prevention’s value. This analysis illustrates one rationale for that argument.

For reasons noted earlier, the impact of the prevention lag depends on the rate used to discount future costs and effectiveness. Figure 8 illustrates the extent of this impact. The figure indicates the optimal mix of spending between prevention and treatment as a function of the discount rate for three alternative lags in the time until prevention expenditures have an impact. In general, the optimal percent of spending allocated to prevention declines as either the discount rate or the lag increases (although, as noted earlier, the precise magnitude of this effect might differ somewhat from that predicted by the model). Thus, the relative value of prevention interventions with near-term benefits (discussed in the previous section) declines with a reduction in the discount rate.

A significant issue in cost-effectiveness analysis involves establishing the time horizon over which a new intervention is assumed to have an impact (sometimes referred to as the analytic horizon). On the one hand, a long time horizon ignores the possibility that future technology will make current interventions obsolete, or that future population changes will make projections of the costs and benefits of current interventions inaccurate. On the other hand, a short horizon neglects downstream costs and benefits that will accrue from near-term application of a currently available intervention. For example, the Congressional Budget Office’s current cost projection methods have been criticized for their mandated use of a 10 year horizon, which captures near-term intervention costs but not their effects on long-term costs [25]. Figure 9 indicates the effect of the time horizon on the optimal allocation of spending to prevention for our scenario, and illustrates that adoption of a relatively short horizon tends to favor treatment over prevention, largely because of the time lag before which prevention expenditures become effective.

A research breakthrough in either prevention or treatment can have an impact on the cost-effectiveness of additional expenditures in either type of intervention and can change the optimal mix of spending between the two. To investigate this impact, we hypothesize a successful prevention (or treatment) research program that begins at the start of our model run. Based on our analysis of data in the literature [26–28], we assume that the lag from initiation of the research until its results are in active clinical use is 23 years. This includes a 5 year preclinical phase, a 7 year clinical phase, a 2 year licensing phase, and 9 years for diffusion of the new intervention into common practice. (In reality, of course, each of these phases has a time distribution, resulting in a random time from initiation of research until adoption of its results; for simplicity, we assume this lag has a fixed duration.) In the absence of more specific data, we assume that either type of research breakthrough generates 100 million additional discounted QALYs over our model’s 100 year horizon. Table 3 presents the results of this exercise both for our current (un-optimized) spending pattern and for the optimal allocation of spending between treatment and prevention. Note that these results exclude any expenditure to fund the research itself.
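The 23 year research-to-adoption lag decomposes into the phases stated above, and combining it with the 3 % discount rate shows how heavily downstream research benefits are discounted. The phase durations below are those given in the text; the discount-factor calculation is our own illustration.

```python
# The assumed research-to-adoption lag decomposes into phases (durations from
# the text). At a 3% discount rate, benefits realized after this lag are
# discounted to roughly half their face value.

phases = {"preclinical": 5, "clinical": 7, "licensing": 2, "diffusion": 9}
total_lag = sum(phases.values())              # 23 years
discount_factor = (1 + 0.03) ** -total_lag    # ~0.507
```

For prevention research, the additional 10 year prevention lag compounds this effect, which is visible in the delayed divergence of the prevention research curve in Fig. 10.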

Before optimization (where 28.1 % of expenditures are allocated to prevention each year), treatment research causes both the healthy and the sick population to increase over the base case, resulting in 4.4 % growth of the total population at year 100. Growth in the sick population results from the reduced CVD death rate caused by the treatment research; the small growth in the healthy population results from the increase in per capita prevention expenditures (and resultant decrease in the sickness rate) associated with the additional total expenditures generated by the larger overall population. (Recall that annual per-capita, not total, expenditures are fixed in the model.) The marginal cost-effectiveness of treatment improves (a direct effect of the treatment research effort), while the marginal cost-effectiveness of prevention becomes somewhat worse. This latter effect is the same as we observed in Section 3.2.

Before optimization, prevention research also causes the healthy population to increase and the sick population to decline (both primarily because of a reduction in the sickness rate caused by the research), resulting in a net 10.8 % increase in the total population at year 100. The marginal cost-effectiveness of prevention improves, while the marginal cost-effectiveness of treatment worsens.

Thus, while our two research examples produce the same overall effectiveness (as measured in additional QALYs achieved), they have substantially different impacts on the healthy and sick populations: each class of research causes the size of the target population (healthy or sick) and the total population to increase, but prevention research has the advantage of causing the sick population to decline, while treatment research decreases the severity of the illness for a larger sick population.

Maximizing total QALYs is achieved at a lower allocation of resources to prevention in the presence of treatment research (30.7 % rather than 37.7 %) and at a higher allocation to prevention in the presence of prevention research (47.0 %). With treatment research, the shift to more treatment spending in the presence of greater treatment effectiveness causes a decline in the size of the healthy population at 100 years, but the size of the sick population grows significantly (by 22.6 %) because treatment spending is both higher and more effective at reducing the CVD death rate. With prevention research, the greater effectiveness and magnitude of prevention spending causes a 25.9 % increase in the healthy population, a drop in the sick population, and a net increase in the total population of 15.5 %. As with our earlier optimization runs, optimization in the presence of either type of research causes the marginal cost-effectiveness of treatment and of prevention to approach each other in value.

It is interesting to compare the population trajectories over time for these cases. Figure 10 shows the time history of the total population for the non-optimized research runs. (The curves have very similar shapes for the optimized runs.) Note that all three runs produce identical populations until the end of the 23-year research lag, after which the treatment research case begins to show an increase in the population. After the additional 10-year prevention lag, the prevention research case begins to diverge from the base case, with its population eventually exceeding that of the treatment case. These time trajectories illustrate the importance of considering time delays in assessing the impact of research expenditures, especially for prevention research, with its typical additional delay after the results of research have begun to be used.

Our model of the impacts of CVD prevention and treatment spending contains many simplifications: homogeneous healthy and sick populations, a constant annual “birth” rate, generic treatment of multiple cardiovascular diseases as a single condition, no distinction among specific prevention or treatment interventions, deterministic treatment of lags, and no growth of per-capita spending over time (which is inconsistent with historical CVD spending [11]). The model shows the directions of various effects that result primarily from diminishing returns assumptions. We are comfortable with these assumptions and therefore are comfortable with the directions of the effects. However, we did not conduct empirical research into the rates at which returns diminish, so the sizes of these effects are not empirically based.

Although our model incorporates the impact of research spending on the effectiveness of subsequent spending on prevention and treatment, a lack of data describing that impact, as well as other technical issues, precluded including it in our analysis. (Instead, we hypothesized a research breakthrough of a specific magnitude and explored its effects.) Among the other technical issues is the need to characterize research spending over time in a way that appropriately captures its downstream effects. Another issue relates to the model’s use of a fixed production function for research findings that incorporates diminishing returns: over time, as cumulative research spending increases, this function specifies that the returns to an additional dollar of research decrease. Using this function, we discovered that it is optimal to front-load research spending until the value of an additional dollar falls so low that it is better used elsewhere (for example, for direct prevention or treatment interventions). Once this point has been reached, research spending essentially ceases altogether, because the value of an additional dollar of research has permanently fallen below that of competing uses. To justify the regular annual spending on research that occurs in the real world, the model would need to change. For example, one could specify that the production function for health care research findings is not fixed but shifts upward each year due to continued investments elsewhere in pure research; the value of an additional dollar of health care research spent this year would then exceed that of a dollar spent last year because it makes use of new knowledge gained from ongoing pure research. (See, for example, the disaggregated research production function proposed by Tassey [29].)
There are other ways to modify the research production function that justify regular annual spending, but this is a very complex area, and more study is needed in order to determine the best specification for our model.

In spite of these limitations, the model produces results that are reasonably consistent with other projections of population growth and growth in CVD prevalence and death rates. At the same time, the model’s simplicity supports its use in describing and understanding the complex interactions associated with alternative spending streams for prevention and treatment of CVD, and the impacts of advances in research to improve the efficacy of such spending. We have found that:

- The cost-effectiveness of prevention (or treatment) of CVD varies with changes in spending on treatment (prevention).
- The optimal mix of CVD spending (i.e., the spending mix that maximizes the overall QALYs achieved) requires a shift in spending from treatment to prevention.
- The estimated cost-effectiveness and optimal mix of prevention depend significantly on assumptions used in the underlying analysis, including the discount rate used, the analysis time horizon, and the lag before preventive measures take effect.
- A research breakthrough in prevention (or treatment) causes overall effectiveness and the marginal cost-effectiveness of prevention (treatment) to improve as expected, but the marginal cost-effectiveness of treatment (prevention) tends to decline. While each class of research results in an increase in the size of the total population, prevention research causes the sick population to decrease, while treatment research decreases the severity of the illness for a larger sick population.

These results have implications for the allocation of spending between prevention and treatment of CVD, the funding of CVD research, and the methods used to assess the cost-effectiveness of specific interventions. Work continues to improve our understanding of these important topics.

This research was supported by Award Number R21HL098874 from the National Heart, Lung, and Blood Institute (NHLBI). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NHLBI or the National Institutes of Health.

The Prevention/Treatment Tradeoff Model (PTM) is an Excel-based mathematical model that we developed for investigating the impacts of healthcare spending changes on multiple output measures related to the effects of disease. The disease of interest may be single and specific (e.g., diabetes), may encompass a group of several related diseases (e.g., cardiovascular diseases, including stroke, myocardial infarction, etc.), or may be entirely generic (as in the case of modeling all chronic disease). Examples of output measures include the number of deaths per year, the number of QALYs gained, and cost-effectiveness measures. The model can maximize or minimize one or more output measures and can therefore be used to design a spending plan that is optimal in some sense.

PTM models an infinitely divisible population whose constituents are subpopulations of non-integer size; these transition among three health states – healthy, sick, and dead – from year to year. The model is deterministic in that the transition rates are not randomly generated, but rather are precisely determined through mathematical functions taking spending and health-state populations as inputs. The model can also be considered Markovian, since the state-transition history of a subpopulation (SP) is ignored; only the SP’s current health state is considered in calculating transition rates to other states. Unlike many Markov processes, however, this one is not time-homogeneous: the rates assigned to various states may change over time as spending fluctuates and SPs within states grow or shrink in size. (Note that we use the shorthand terminology “rate” to mean the yearly probability of transition, not the expected number of transitions per time unit.)
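The yearly update implied by this description can be sketched in a few lines. The following Python fragment is an illustrative reconstruction, not the authors' Excel implementation; all population sizes, rates, and the birth inflow are invented for the example:

```python
# Illustrative sketch of one yearly update of the deterministic three-state
# model (healthy -> sick -> dead). "Rate" means the yearly probability of
# transition, as in the text; subpopulation sizes need not be integer.

def step(healthy, sick, sickness_rate, death_rate, births):
    """Advance the healthy and sick subpopulations by one year."""
    newly_sick = healthy * sickness_rate     # healthy -> sick transitions
    deaths = sick * death_rate               # sick -> dead (absorbing state)
    healthy = healthy - newly_sick + births  # constant annual "birth" inflow
    sick = sick + newly_sick - deaths        # no recovery once sick
    return healthy, sick

# Three years with fixed (hypothetical) rates:
h, s = 1000.0, 100.0
for _ in range(3):
    h, s = step(h, s, sickness_rate=0.02, death_rate=0.05, births=15.0)
```

In the full model the two rates would be recomputed each year from lagged spending, which is what makes the process non-time-homogeneous.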

Many simplifying assumptions are made in this model, not because they are necessary to make its construction possible, but because of unexpected subtleties and complexities encountered in the output analyses of even this “simple” formulation. Adding extra complexity could potentially obfuscate the roles played by the more basic elements of the model and make determining a realistic set of input data more challenging, while adding little value to the analysis. Chief among these simplifications are a single sickness state, no transitions back to healthy once sick (corresponding approximately to many chronic diseases), and no effects of aging (i.e., each SP has groupwise transition rates independent of its age-demographic makeup). Once the model’s output is better understood, each of these simplifications can be addressed in turn to examine its contributory effects.

As the name of the model suggests, both prevention and treatment of disease are considered. Prevention spending affects the rate of transition from the healthy state to the sick state, while treatment affects the transition rate from the sick state to the death state. Spending is further divided into two additional categories: intervention and research. Intervention directly controls transition rates, while research affects the extent to which intervention dollars have an impact.

PTM uses spending per capita to calculate intervention effects on transition rates and total spending for research effects. Intervention spending is transient and must continually occur to have an effect, whereas research spending is cumulative over time (once research occurs, it is “remembered” from that point forward). Additionally, all four types of spending – prevention research (PR), prevention intervention (PI), treatment research (TR), and treatment intervention (TI) – are subject to user-defined lags that indicate the time interval between commitment of funds and their ultimate effects on transition rates. (A lag of zero would indicate instantaneous effects.) Figure 1 (presented earlier) shows the basic flow of the model.

Clearly, the heart of the model lies in the functions that assign transition rates. Without loss of generality, we will discuss some properties of these functions by referring to the function that sets the rate of transition from healthy to sick (i.e., the sickness rate). There are several properties that one would wish to guarantee, including:

1. Monotonicity: as intervention spending per capita increases, the sickness rate should only decrease.
2. Diminishing returns: as intervention spending per capita increases, the change in sickness rate per dollar spent should lessen.
3. The sickness rate should approach a nonnegative asymptote. A strictly positive asymptote would indicate a disease that cannot be eradicated even with infinite spending, given current research levels. A disease that could be eradicated would require a zero rate at some non-infinite spending; however, for model simplicity this is approximated by a zero asymptote.
4. Research spending should increase the purchasing power of each intervention dollar as well as decrease the asymptote mentioned above (while keeping it above zero). For simplicity, a single factor may be used for both purposes.

A simple function that satisfies properties 1 through 3 above is based on an exponential form:

$$r^{s}(t)=r_{0}^{s}\left[\left(1-\frac{r_{\infty}^{s}}{r_{0}^{s}}\right)\exp\!\left(-b^{s}\,\frac{x^{PI}(t-L_{PI})}{h(t-L_{PI})}\right)+\frac{r_{\infty}^{s}}{r_{0}^{s}}\right]=r_{0}^{s}\,f^{s}\!\left(y^{PI}(t)\right)$$

where *r^{s}*(*t*) is the sickness rate in year *t*, *r*_{0}^{s} is the rate at zero prevention spending, *r*_{∞}^{s} is the asymptotic rate approached as spending grows without bound, *x^{PI}*(*t*) is prevention intervention spending in year *t*, *h*(*t*) is the size of the healthy population, and *L_{PI}* is the prevention intervention lag, with

$$y^{PI}(t)=\frac{x^{PI}(t-L_{PI})}{h(t-L_{PI})},\qquad b^{s}=-\frac{1}{y_{c}^{PI}}\,\ln\!\left(\frac{r_{\infty}^{s}-r_{c}^{s}}{r_{\infty}^{s}-r_{0}^{s}}\right)$$

where *c*-subscripted parameters represent current “real-world” values. A similar function governs the death rate, with suitable modifications to superscripts (*d* replacing *s*) and using the sick population in the denominator of the exponential term rather than the healthy population. For the rest of this discussion, the *s* superscripts will be omitted to reduce clutter, with the understanding that we are referring to the sickness rate equation.

The above defines an exponential function that passes through two particular points, namely the sickness rate at zero spending *r*_{0} and the rate at current spending *r_{c}*, and approaches the asymptotic rate *r*_{∞} as spending per capita grows without bound.
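Under this parameterization, the calibration of *b* and the resulting curve can be sketched as follows; the parameter values are invented for illustration and are not the paper's calibrated CVD inputs:

```python
import math

# Sketch of the exponential sickness-rate function; all parameter values
# below are hypothetical (not the paper's CVD calibration).
r0, r_inf, r_c = 0.05, 0.01, 0.03  # rates at zero, infinite, and current spending
y_c = 500.0                        # current prevention spending per healthy person

# Calibrate b so the curve passes through the current point (y_c, r_c):
b = -(1.0 / y_c) * math.log((r_inf - r_c) / (r_inf - r0))

def sickness_rate(y):
    """Sickness rate as a function of prevention spending per healthy person y."""
    return r0 * ((1.0 - r_inf / r0) * math.exp(-b * y) + r_inf / r0)
```

By construction the curve returns *r*_{0} at zero spending, *r_{c}* at current spending, and decays monotonically toward *r*_{∞}, exhibiting the monotonicity and diminishing-returns properties listed above.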

To address property 4, we introduce a prevention research factor that will be inserted into the rate function previously given. Let ${g}_{\text{inc}}^{PR}$ denote an increment value for this factor and let ${x}_{\text{inc}}^{PR}$ be an accompanying spending amount. Together, these determine the effect of research spending on the value of the research factor through the relation

$$g^{PR}(t)=1+\frac{g_{\text{inc}}^{PR}}{x_{\text{inc}}^{PR}}\left(x_{\text{base}}^{PR}+\sum_{i=0}^{t}x^{PR}(i-L_{PR}-L_{PI})\right)$$

where ${x}_{\text{base}}^{PR}$ is the baseline amount of research funding assumed to have taken effect by the beginning of the modeling period (often assumed for simplicity to be zero). The interpretation of this relation is that for every ${x}_{\text{inc}}^{PR}$ dollars spent on research, the factor is increased by ${g}_{\text{inc}}^{PR}$ after the appropriate lag period. Note that lags for both research and intervention must be taken into account, since the effects of research spending do not manifest themselves until 1) the research lag is completed, allowing intervention money spent from that point forward to take advantage of the new technology, and 2) the intervention spending itself takes effect, only after its own additional lag. The research factor is inserted into the rate equation thus:

$$r(t)=r_{0}\left[\left(1-\frac{r_{\infty}/r_{0}}{g^{PR}(t)}\right)\exp\!\left(-g^{PR}(t)\,b\,y^{PI}(t)\right)+\frac{r_{\infty}/r_{0}}{g^{PR}(t)}\right]=r_{0}\,f\!\left(y^{PI}(t),\,x^{PR}(t-L_{PR}-L_{PI}),\,\dots,\,x^{PR}(-L_{PR}-L_{PI}),\,x_{\text{base}}^{PR}\right)$$

where *g^{PR}*(*t*) both multiplies the exponent, increasing the purchasing power of each intervention dollar, and divides the asymptotic term, lowering the asymptote while keeping it above zero, thereby satisfying property 4.
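A minimal sketch of this lagged, cumulative research factor follows; the lags, increment values, and spending schedule are all hypothetical:

```python
# Sketch of the cumulative research factor g^PR(t) with the combined
# research and intervention lags; all parameter values are hypothetical.
g_inc, x_inc = 0.10, 50.0  # factor grows by g_inc per x_inc research dollars
x_base = 0.0               # baseline research assumed already in effect
L_PR, L_PI = 3, 2          # research and intervention lags, in years

def research_factor(t, spending):
    """g^PR(t): only research committed by year t - L_PR - L_PI has taken effect.

    `spending` maps year -> research dollars committed that year."""
    effective = sum(spending.get(i - L_PR - L_PI, 0.0) for i in range(t + 1))
    return 1.0 + (g_inc / x_inc) * (x_base + effective)

spending = {0: 100.0, 1: 100.0}  # research dollars committed in years 0 and 1
```

With these numbers, the factor stays at 1.0 through year 4, then rises as each year's committed dollars clear the five-year combined lag, and plateaus once all committed spending has taken effect.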

In determining the number of QALYs generated by a spending stream, it is necessary to specify the number of QALYs each sick person generates in 1 year (a healthy person generates 1 QALY). It seems reasonable that, as spending per sick person increases, the number of QALYs per sick person (QPS) would also increase, from some base level corresponding to no treatment at all to some upper level corresponding to unlimited spending. Furthermore, it seems plausible that the QPS value would approach this upper bound asymptotically. In addition, the QPS upper bound should be allowed to increase as treatment research money is spent (while never being allowed to exceed 1). To satisfy these requirements, the following function is used in the model to determine the QPS value in a given year:

$$\mathit{QPS}(t)=\mathit{QPS}_{L}+\left[\left(1-\frac{1-\mathit{QPS}_{U}}{g^{PR}(t)}\right)-\mathit{QPS}_{L}\right]\left(\frac{r_{t}^{d}-r_{0}^{d}}{r_{\infty}^{d}/g^{PR}(t)-r_{0}^{d}}\right)$$

where *QPS*(*t*) is the QPS measure in year *t*, *QPS_{L}* and *QPS_{U}* are the lower and upper bounds on QPS, and *r_{t}^{d}*, *r_{0}^{d}*, and *r*_{∞}^{d} are the death rate in year *t*, the death rate at zero treatment spending, and the asymptotic death rate, respectively.
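A sketch of this QPS function, with illustrative bounds and death-rate parameters rather than the paper's calibrated values:

```python
# Sketch of the QALYs-per-sick-person (QPS) function; the bounds and
# death-rate parameters below are hypothetical.
def qalys_per_sick(r_d, g=1.0, qps_L=0.5, qps_U=0.9, r_d0=0.10, r_dinf=0.02):
    """QPS as a function of the current death rate r_d.

    r_d0 is the death rate at zero treatment spending, r_dinf its asymptote,
    and g is the research factor, which raises the effective upper bound on
    QPS (toward, but never above, 1) and lowers the effective asymptote."""
    upper = 1.0 - (1.0 - qps_U) / g            # effective upper bound on QPS
    frac = (r_d - r_d0) / (r_dinf / g - r_d0)  # 0 at no spending, 1 in the limit
    return qps_L + (upper - qps_L) * frac
```

The fraction ties QPS to the death rate already computed by the model: when spending drives the death rate toward its asymptote, QPS approaches its upper bound.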

As mentioned above, a useful capability of the model is to determine an optimal mix of spending between prevention and treatment. One obvious optimization goal is to maximize the number of discounted QALYs generated by a fixed spending stream. In the baseline for this study, the total amount of per-capita spending is fixed at $2118 per person per year with 28 % of that dedicated to prevention – the current observed real-world values for CVD spending. For the purposes of optimization, however, we are free to divide that amount between prevention and treatment. Each year, some proportion *p* of all intervention spending is allocated to prevention interventions, and *1-p* to treatment interventions. The value of *p* remains fixed from year to year, and the goal of the optimization is to find its best value. The model is able to perform an iterative search over all possible values of *p* to determine the QALY-maximizing optimum. Through an extension of the iterative search into a higher dimensional search space, the model can also determine the optimal apportioning of PI, TI, PR, and TR spending.
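The one-dimensional search over *p* can be illustrated with a toy grid search. The concave benefit curves below are arbitrary stand-ins exhibiting diminishing returns, not the PTM's rate functions; only the $2118 per-capita budget comes from the text:

```python
import math

# Toy grid search for the QALY-maximizing prevention share p.
BUDGET = 2118.0  # total per-capita spending per year, as in the baseline

def qalys(p):
    """Stand-in objective: concave (diminishing-returns) benefits from each
    spending category; the coefficients are invented for illustration."""
    prevention = p * BUDGET
    treatment = (1.0 - p) * BUDGET
    return math.log1p(0.004 * prevention) + math.log1p(0.002 * treatment)

# Exhaustive search over p in {0, 0.001, ..., 1.000}:
best_p = max((i / 1000.0 for i in range(1001)), key=qalys)
```

At the grid optimum the marginal benefits of the two categories are (nearly) equal, mirroring the paper's observation that optimization drives the marginal cost-effectiveness of treatment and prevention toward each other.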

^{1}These runs are constructed in such a way that the decline in the sick population associated with increased prevention spending results in an increase in treatment spending per sick person. As a result, diminishing returns produce an increase in the marginal cost-effectiveness of treatment, as shown in the figure. If, instead, treatment spending per sick person is held constant as the sick population changes, increased prevention spending has no effect on the marginal cost-effectiveness of treatment spending.

**Conflict of interest** The authors declare that they have no conflict of interest.

1. Miller G, Roehrig C, Hughes-Cromwick P, Turner A. What is currently spent on prevention as compared to treatment? In: Faust HS, Menzel PT, editors. Prevention vs. treatment: what’s the right balance? Oxford University Press; 2011.

2. Cohen J, Neumann P, Weinstein M. Does preventive care save money? Health economics and the presidential candidates. N Engl J Med. 2008;358(7):661–663. [PubMed]

3. Unal B, Capewell S, Critchley JA. Coronary heart disease policy models: a systematic review. BMC Public Health. 2006;6:213. [PMC free article] [PubMed]

4. Maciosek MV, Coffield AB, Edwards NM, Flottemesch TJ, Goodman MJ, Solberg LI. Priorities among effective clinical preventive services: results of a systematic review and analysis. Am J Prev Med. 2006;31(1):52–61. [PubMed]

5. Ford ES, Ajani UA, Croft JB, Critchley JA, Labarthe DR, Kottke TE, Giles WH, Capewell S. Explaining the decrease in US deaths from coronary disease, 1980–2000. N Engl J Med. 2007;356 (23):2388–2398. [PubMed]

6. Unal B, Critchley JA, Capewell S. Explaining the decline in coronary heart disease mortality in England and Wales between 1981 and 2000. Circulation. 2004;109:1101–1107. [PubMed]

7. Kahn R, Robertson RM, Smith R, Eddy D. The impact of prevention on reducing the burden of cardiovascular disease. Circulation. 2008;118:576–585. [PubMed]

8. Russell LB. How treatment advances affect prevention’s cost-effectiveness: implications for the funding of medical research. Med Decis Making. 2000;20:352–354. [PubMed]

9. Homer JB, Hirsch GB. System dynamics modeling for public health: background and opportunities. Am J Public Health. 2006;96(3):452–458. [PubMed]

10. Heffley DR. Allocating health expenditures to treatment and prevention. J Health Econ. 1982;1:265–290. [PubMed]

11. Miller G, Hughes-Cromwick P, Roehrig C. National spending on cardiovascular disease, 1996–2008. J Am Coll Cardiol. 2011;58 (19):2017–2019. [PMC free article] [PubMed]

12. Altarum Institute. Systems Science Methods for Addressing the Cardiovascular Disease Prevention–Treatment Tradeoff. Final report of grant number R21HL098874 from the National Heart, Lung, and Blood Institute (available from authors); 2012.

13. Center for the Evaluation of Value and Risk in Health, Institute for Clinical Research and Health Policy Studies, Tufts Medical Center. Cost-Effectiveness Analysis Registry. http://www.cearegistry.org. Accessed 28 April 2012.

14. Dyer MTD, Goldsmith KA, Sharples LS, Buxton MJ. A review of health utilities using the EQ-5D in studies of cardiovascular disease. Health Qual Life Outcomes. 2010;8:13. [PMC free article] [PubMed]

15. Congressional Budget Office. Federal Support for Research and Development. June 2007.

16. Lichtenberg F. The Impact of New Laboratory Procedures and Other Medical Innovations on the Health of Americans, 1990–2003: Evidence from Longitudinal, Disease-Level Data. National Bureau of Economic Research Working Paper No 12120; March 2006.

17. Lichtenberg F. Has Medical Innovation Reduced Cancer Mortality? National Bureau of Economic Research Working Paper No 15880; April 2010.

18. National Center for Health Statistics. Summary Health Statistics for US Adults: National Health Interview Survey, 2009. Vital and Health Statistics, Series 10, Number 249. US Department of Health and Human Services; December 2010.

19. US Census Bureau, Population Division. US Population Projections. 2008. http://www.census.gov/population/www/projections/downloadablefiles.html. Accessed 28 April 2012.

20. Foot DK, Lewis RP, Pearson TA, Beller GA. Demographics and cardiology, 1950–2050. J Am Coll Cardiol. 2005;35:1067–1081. [PubMed]

21. Heidenreich PA, Trogdon JG, Khavjou OA, Butler J, Dracup K, Ezekowitz MD, Finkelstein EA, Hong Y, Johnston SC, Khera A, Lloyd-Jones DM, Nelson SA, Nichol G, Orenstein D, Wilson PWF, Woo YJ. Forecasting the future of cardiovascular disease in the United States: a policy statement from the American Heart Association. Circulation. 2011;123:933–944. [PubMed]

22. Gold M, Russell L, Siegel J, Weinstein M. Cost-Effectiveness in Health and Medicine. Oxford University Press; 1996.

23. Menzel PT. Should the value of future health benefits be time-discounted? In: Faust HS, Menzel PT, editors. Prevention vs Treatment: What’s the Right Balance? Oxford University Press; 2011.

24. Office of Management and Budget. Discount rates for cost-effectiveness, lease purchase, and related analyses. OMB Circular No A-94, Appendix C; December 2011.

25. Huang ES, Basu A, O’Grady MJ, Capretta JC. Using clinical information to project federal health care spending. Health Aff. 2009;28(5):w978–w990. [PubMed]

26. US Government Accountability Office. New drug development: science, business, regulatory, and intellectual property issues cited as hampering drug development efforts. GAO-07-49; November 2006.

27. Congressional Budget Office. Research and Development in the Pharmaceutical Industry. United States Congress; October 2006.

28. Skinner J, Staiger D. Technology Diffusion and Productivity Growth in Health Care. National Bureau of Economic Research Working Paper 14865; April 2009.

29. Tassey G. The disaggregated technology production function: a new model of university and corporate research. Res Policy. 2005;34:287–303.

30. Minino AM. Death in the United States, 2009. NCHS Data Brief No 64. National Center for Health Statistics; July 2011. [PubMed]

31. Roger VL, Go AS, Lloyd-Jones DM, Adams RJ, Berry JD, Brown TM, Carnethon MR, Dai S, de Simone G, Ford ES, Fox CS, Fullerton HJ, Gillespie C, Greenlund KJ, Hailpern SM, Heit JA, Ho PM, Howard VJ, Kissela BM, Kittner SJ, Lackland DT, Lichtman JH, Lisabeth LD, Makuc DM, Marcus GM, Marelli A, Matchar DB, McDermott MM, Meigs JB, Moy CS, Mozaffarian D, Mussolino ME, Nichol G, Paynter NP, Rosamond WD, Sorlie PD, Stafford RS, Turan TN, Turner MB, Wong ND, Wylie-Rosett J. Heart disease and stroke statistics – 2011 update: a report from the American Heart Association. Circulation. 2011;123:e18–e209. [PubMed]
