Model types range from simple 'proportionate outcomes' spreadsheet models to complex dynamic transmission models (see Table ). Complex models may generate more accurate results and address more questions, but they require additional data, time, effort, and expertise. Using them may create dependence on external consultants or consume scarce local expertise. However, they may also engage local experts with strong analytical skills in the public health policy-making process.
The first consideration must be to ensure that model results increase the probability of a decision aligned with the decision-maker's preferences. Oversimplified models may make poor predictions, leading to worse policy choices than would have occurred in the absence of model-based evidence. Hence, sometimes only a sophisticated model is appropriate. For example, if a static model (which ignores herd immunity) suggests that vaccination is not cost-effective, then vaccination should not be rejected without first testing the conclusion with a dynamic model, which will capture more of the benefits. Similarly, investigating the relative value of bivalent and quadrivalent vaccines, or the precise upper age limit for catch-up vaccination, requires sophisticated models calibrated with good data.
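The static-versus-dynamic distinction can be made concrete with a toy calculation. The sketch below (all parameter values are hypothetical and chosen only for illustration, not drawn from any HPV study) contrasts a static 'proportionate outcomes' estimate, which credits only directly protected vaccinees, with a minimal SIS transmission model in which vaccine-induced immunity also shrinks the susceptible pool and hence transmission to the unvaccinated.

```python
def static_prevalence(baseline_prevalence, coverage, efficacy):
    """Static model: vaccinated individuals are protected, but prevalence
    among the unvaccinated is assumed unchanged (no herd effect)."""
    return baseline_prevalence * (1.0 - coverage * efficacy)

def dynamic_prevalence(beta, gamma, coverage, efficacy,
                       steps=100_000, dt=0.01):
    """Deterministic SIS model with an all-or-nothing vaccine: a fraction
    coverage * efficacy of the population is immune, which reduces
    transmission to everyone else (herd protection)."""
    immune = coverage * efficacy
    i = 0.01  # initial infectious fraction
    for _ in range(steps):
        s = 1.0 - immune - i
        i += (beta * s * i - gamma * i) * dt
        i = max(i, 0.0)
    return i

# Hypothetical transmission parameters giving R0 = beta / gamma = 2.
beta, gamma = 0.4, 0.2
baseline = dynamic_prevalence(beta, gamma, coverage=0.0, efficacy=0.0)
static = static_prevalence(baseline, coverage=0.5, efficacy=0.9)
dynamic = dynamic_prevalence(beta, gamma, coverage=0.5, efficacy=0.9)
print(f"baseline prevalence:      {baseline:.3f}")   # ~0.50
print(f"static-model prediction:  {static:.3f}")     # ~0.275
print(f"dynamic-model prediction: {dynamic:.3f}")    # ~0.05
```

Because the dynamic model captures indirect protection, it predicts a much larger reduction in prevalence than the static model for the same vaccine; this is why a static "not cost-effective" finding should be re-examined dynamically before vaccination is rejected.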
A second consideration is the trade-off between the incremental informational value of a more sophisticated modeling approach and the additional time and effort it demands. A model should be as parsimonious as possible: it should capture effects vital to understanding the policy question (such as herd protection when examining the effects of vaccinating boys on the incidence of disease in girls), but equally should not carry unnecessary detail [26]. Determining whether particular effects are important for decision-making may itself require a more complex model, used to investigate whether the results of the simpler model are biased. Often, analysts can draw on a repertoire of simple model structures that have been validated against more complex models and peer-reviewed; for instance, a range of HPV models were recently compared using a standardized input dataset [6]. Analysts can also draw on methodological work investigating the effect of different simplifications to model structure [20]. However, methodological investigations conducted in middle- and high-income settings may not reflect the behavioral determinants and disease co-factors found in low-income settings.
The third consideration is the data requirement. Even simple models require demographic, epidemiological, clinical, and economic data (Table ). Ideally, these should be local data from large population-based cohorts or adequately powered trials. When such data are unavailable, less reliable sources must be used, such as cancer registries and health care utilization reports (which may be incomplete and hence biased), or data extrapolated from other countries. Complex models may have greater data requirements, so data shortcomings may compromise their benefits by introducing additional uncertainty. For example, transmission models may require data on sexual behavior, although such data may already have been collected to investigate HIV control strategies [27].
However, simpler models do not guarantee reduced uncertainty, and may indeed increase it, because certain aspects of the disease or intervention are not explicitly incorporated. Hence, policy-makers short on both data and technical expertise could focus on more basic questions that can be robustly answered with simpler models, and should consider whether their policy decision can also be informed by adapting insights from other settings in which sophisticated modeling has been performed with richer data. For example, countries that have yet to introduce HPV vaccination should first assess the cost-effectiveness of routine vaccination of girls in early adolescence, which can often be addressed using relatively simple models (see Question 4 above). Assessing the cost-effectiveness of male vaccination requires more technically demanding analyses, so countries may need to draw insights from existing studies in high-income [8] and middle-income [16] countries, which suggest that initially focusing on increasing vaccine uptake in girls is likely to be a more efficient way to reduce cervical-cancer incidence. In some cases, however, conclusions drawn in other settings may not apply to low-income countries because of differences in sexual behavior (such as partnership concurrency), demographic structure, HPV type distribution, availability of screening and treatment, and co-factors such as HIV infection.
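The link between weak input data and uncertain conclusions can be illustrated with a probabilistic sensitivity analysis in miniature. The sketch below is purely hypothetical: the parameter ranges and the crude cost-per-case-averted calculation are invented for illustration, but the mechanism is general, in that wider input ranges (weaker data) translate directly into a wider spread of results.

```python
import random

def cost_per_case_averted(coverage, efficacy, baseline_risk,
                          cost_per_dose, doses=2):
    """Crude per-capita cost divided by per-capita cases averted."""
    averted = baseline_risk * coverage * efficacy
    total_cost = coverage * cost_per_dose * doses
    return total_cost / averted

random.seed(1)  # reproducible draws
samples = []
for _ in range(10_000):
    # Uncertain inputs sampled from hypothetical ranges; a setting with
    # better data would justify narrower ranges here.
    efficacy = random.uniform(0.70, 0.95)
    baseline = random.uniform(0.005, 0.02)   # lifetime risk, hypothetical
    samples.append(cost_per_case_averted(0.8, efficacy, baseline,
                                         cost_per_dose=10.0))

samples.sort()
median = samples[len(samples) // 2]
lo = samples[int(0.025 * len(samples))]
hi = samples[int(0.975 * len(samples))]
print(f"cost per case averted: {median:.0f} (95% interval {lo:.0f}-{hi:.0f})")
```

The reported interval, rather than the point estimate alone, is what tells a decision-maker whether the available data are good enough to support the choice at hand.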
A fourth consideration is the objective, the intended audience, and the ultimate use of the results of the modeling exercise. For instance, using a simple model developed by in-country analysts may build internal capacity for policy modeling and cost-effectiveness analysis. If capacity to develop even a simple model de novo is lacking, local analysts may be advised by external experts or may adapt a model developed overseas. For example, models of HPV vaccination in Thailand [14] and South Africa [29] were developed by independent in-country analysts adapting an earlier model from the USA [30]. Many modeling groups are open to adapting their existing models to new settings in collaboration with in-country analysts [6]. Another alternative is to develop a regional network of expertise in using a particular model, such as the ProVac initiative in Latin America [4] and the PREHDICT (Prevention Strategies for HPV-related Diseases in European Countries) initiative in Europe [31]. Over time, developing in-country capacity may facilitate more informed policy analyses and greater use of evidence in decision-making. Even a model developed overseas may strengthen capacity if it is parameterized and interpreted by in-country analysts in such a way that they gain knowledge of its design. As these analysts grow more experienced, more complex features such as sexual partnerships, demographic change, and co-factors such as HIV infection can be added. Previous experience from HIV modeling suggests that it may be efficient to build simple models with sufficient flexibility to incorporate added levels of complexity as new questions, evidence, and capabilities emerge [32].
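One way to read the "simple but flexible" recommendation in software terms is to keep the model core fixed and let added complexity plug in as optional components. The sketch below is a hypothetical illustration (all names and numbers are invented): a basic cohort-risk calculation is written so that a co-factor layer, such as elevated susceptibility associated with HIV, can be bolted on later without rewriting the core.

```python
from typing import Callable, List

def annual_hazard(base_hazard: float,
                  modifiers: List[Callable[[float], float]]) -> float:
    """Apply each optional modifier layer in turn to the base hazard."""
    h = base_hazard
    for modify in modifiers:
        h = modify(h)
    return min(h, 1.0)

def lifetime_risk(hazard: float, years: int) -> float:
    """Probability of ever being infected over `years` at a fixed hazard."""
    return 1.0 - (1.0 - hazard) ** years

# Version 1: the simple model, no added layers.
simple = lifetime_risk(annual_hazard(0.05, []), years=30)

# Version 2: the same core, with a co-factor layer added later
# (a hypothetical doubling of the hazard, e.g. for HIV co-infection).
def hiv_cofactor(h: float) -> float:
    return h * 2.0

with_cofactor = lifetime_risk(annual_hazard(0.05, [hiv_cofactor]), years=30)
print(f"simple: {simple:.3f}, with co-factor: {with_cofactor:.3f}")
```

Structuring a first, simple model this way lets in-country analysts add sexual partnerships, demographic change, or co-factors as separate layers when new questions and data arrive, rather than rebuilding the model from scratch.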
In addition, models constructed locally (or in partnership with foreign experts) help to engage policy-makers and program managers throughout the analytic process, potentially leading to a deeper understanding of the local drivers of health impact and cost, highlighting the value of data, and increasing awareness of data gaps. Similar engagement might be achieved when a complex model is accompanied by an easy-to-use interface that local stakeholders can parameterize themselves. However, results from simple models may be met with less skepticism because their structure, inputs, and assumptions are generally more transparent to end users. Lastly, policy-makers and program managers may feel more ownership of analyses conducted locally, so locally produced results may weigh more heavily in the decision-making process.