
Calibration Methods Used in Cancer Simulation Models and Suggested Reporting Guidelines

Natasha K. Stout, Ph.D.,1,2 Amy B. Knudsen, Ph.D.,3,4 Chung Yin (Joey) Kong, Ph.D.,3,4 Pamela M. McMahon, Ph.D.,3,4 and G. Scott Gazelle, MD, Ph.D., MPH2,3,4

Abstract

Background

Increasingly, computer simulation models are used for economic and policy evaluation in cancer prevention and control. A model’s predictions of key outcomes, such as screening effectiveness, depend on the values of unobservable natural history parameters. Calibration is the process of determining the values of unobservable parameters by constraining model output to replicate observed data. Because there are many approaches to model calibration and little consensus on best practices, we surveyed the literature to catalogue the use and reporting of these methods in cancer simulation models.

Methods

We conducted a MEDLINE search (1980 through 2006) for articles on cancer screening models and supplemented the search results with articles from our personal reference databases. For each article, two authors independently abstracted pre-determined items using a standard form. Data items included cancer site, model type, methods used to determine unobservable parameter values, and description of any calibration protocol. All authors reached consensus on items of disagreement. Reviews and non-cancer models were excluded. Articles describing analytical models that estimate parameters with statistical approaches (e.g., maximum likelihood) were catalogued separately. Models that included unobservable parameters were analyzed and classified by whether calibration methods were reported and, if so, by the methods used.

Results

The review process yielded 154 articles that met our inclusion criteria; of these, we concluded that 131 may have used calibration methods to determine model parameters. Although the term “calibration” was not always used, descriptions of calibration or “model fitting” were found in 50% (n=66) of the articles, with an additional 16% (n=21) providing a reference to methods. Calibration target data were identified in nearly all of these articles. Other methodologic details, such as the goodness-of-fit metric, were discussed in 54% (n=47 of 87) of the articles reporting calibration methods, while few details were provided on the algorithms used to search the parameter space.

Conclusions

Our review shows that the use of these modeling methods is increasing, although thorough descriptions of calibration procedures are rare in the published literature on cancer screening models. Calibration is a key component of model development and is central to the validity and credibility of subsequent analyses and inferences drawn from model predictions. To aid peer review and facilitate discussion of modeling methods, we propose a standardized Calibration Reporting Checklist for model documentation.

Introduction

Increasingly, mathematical and computer models are used for economic evaluation of cancer prevention and control policies.[1–5] These models fill an important role in policy making as they are able to synthesize data from multiple sources and estimate the effects of interventions in situations when clinical trials may not be feasible because of time, cost, and/or ethical considerations.[6] The National Cancer Institute’s Cancer Intervention and Surveillance Modeling Network (CISNET) recently spurred growth in modeling efforts by funding the development of over 18 models of breast, prostate, colon and lung cancers built to investigate prevention, screening and treatment policy questions in the United States (http://www.cisnet.cancer.gov/). In addition to CISNET, numerous other cancer simulation models have been developed including those for cervical, ovarian and gastric cancers.[7–11]

In general, disease natural history models use a “systems” approach to simulate the underlying course of disease in individuals and project the overall effect of disease on health in a population. Many of these models are quite complex, describing both unobservable and observable portions of the natural history at an individual level. Capturing the mechanism of disease onset and growth in simulated individuals involves specification and determination of unknown model parameters, many of which cannot be directly informed by data as none may exist. One method for parameter determination is calibration.

Formally, model calibration is the process of determining parameter values such that model output replicates empirical data.[6, 12–14] It is performed by comparing the model output produced by different input parameter sets with existing data to identify the parameter set(s) whose output best corresponds to those data. This is often a complex task, and there has been little consensus on best practice. Calibration, often termed “model fitting”, may be distinguished from other methods of parameter determination such as direct “estimation” of model parameters.[13] In estimation methods, model parameters are estimated in a process separate from the model itself, and the overall fit of model output to data is not considered in parameter determination.
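Stated schematically (our notation; not drawn from any of the reviewed articles): let $M(\theta)$ denote the model output generated under an unobservable parameter vector $\theta$, and let $y$ denote the calibration target data. Calibration seeks

$$\hat{\theta} = \arg\min_{\theta \in \Theta} \; d\bigl(M(\theta),\, y\bigr),$$

where $\Theta$ is the space of plausible parameter values and $d(\cdot,\cdot)$ is a goodness-of-fit metric, for example a sum of squared differences or a negative log-likelihood. Direct estimation, by contrast, fits $\theta$ to data in a separate statistical procedure, without evaluating how well the full model output matches the target data.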

Calibration is a key component of model development and, together with validation, establishes the credibility of modeling results. Models are often criticized for being “black boxes” whose documentation lacks transparency. If modeling is to gain strength as a tool for informing health policy, it is critical that the assumptions, structure, input data, and parameter estimation methods, including calibration, are well documented and made available to “consumers” of these models. To understand how calibration methods are currently described, we surveyed the literature to catalogue these methods as they are used in cancer simulation models. We then propose a framework for reporting the calibration methods used in cancer models and disease simulation models in general.

Methods

We conducted a focused qualitative review of the literature on calibration methods in cancer simulation models.

Data Sources

Multiple sources were used to identify relevant published literature. We used MEDLINE to identify English-language articles published in the years 1980 through 2006. The following US National Library of Medicine “Medical Subject Headings” (MeSH) and keywords were used in the search: cancer, neoplasm, simulation, computer simulation, natural history, and mass screening. The search results were supplemented by reviewing the reference lists from the articles identified in the search and from articles in our personal reference databases. The full listing of articles retrieved by the search is available from the authors.

Study Selection

Study inclusion criteria consisted of articles describing models that explicitly captured the mechanism of the underlying natural history of cancer. Reviews and commentaries were excluded, as were articles describing models of diseases other than cancer. Also excluded were articles unanimously judged by the authors to describe models with no natural history component, including models that begin from the point of cancer detection. We further distinguished purely analytic models that use statistical inference to estimate parameters from microsimulation models that use calibration to determine underlying parameter values. For analytic models, direct estimation of the unknown natural history parameters from observed data is typically done independently of the model itself.[13] We note that some modelers use a “hybrid” approach, with direct estimation of some parameters and subsequent calibration of others.

Data Abstraction and Analysis

For each article, two investigators independently abstracted pre-determined data elements using a standard form (Appendix). Data elements abstracted included cancer site, type of simulation model, methods used for determination of unobservable parameter values, description of calibration protocol (if any), and whether model validation was mentioned. All investigators reached consensus on items of disagreement. For articles that provided no calibration information but that referenced previous publication(s) with the same model, information from the prior publication(s) was used to supplement the information reported in the primary article. Review data were summarized using descriptive statistics.

Calibration protocols were characterized by the descriptions of five components: calibration targets, goodness-of-fit metric, search algorithm, acceptance criteria, and stopping rule. A “target” refers to observed data the model attempts to replicate during calibration. The goodness-of-fit (GOF) metric is the quantitative measure of the model’s fit or ability to replicate target data for a particular set of parameter values. The search algorithm is the method for selecting alternative model parameter values to evaluate. The acceptance criteria specify the satisfactory or acceptable levels of fit based on the GOF metric for a particular set of parameter values. Finally, the stopping rule describes the rationale for ending the search procedure and calibration process as a whole. These components are described in more detail in Table 1.

Table 1
Glossary of Calibration Components
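To make these five components concrete, the following minimal Python sketch wires them into a generic calibration loop. The model, targets, parameter ranges, and thresholds are hypothetical placeholders, not values or methods taken from any reviewed article.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_model(params):
    """Hypothetical stand-in for a natural history simulation model.

    A real model would simulate disease onset and progression in
    individuals; here we simply return two summary outputs.
    """
    onset_rate, progression_rate = params
    return np.array([onset_rate * 100.0, progression_rate / onset_rate])

# 1. Calibration targets: observed data the model should replicate
#    (e.g., incidence per 100,000 and a stage-distribution summary).
targets = np.array([12.0, 3.5])
target_sds = np.array([1.0, 0.4])

def goodness_of_fit(predicted):
    # 2. Goodness-of-fit metric: a chi-square-like distance, summed
    #    over targets (lower is better).
    return float(np.sum(((predicted - targets) / target_sds) ** 2))

# 3. Search algorithm: random sampling over plausible parameter ranges.
lower = np.array([0.05, 0.1])
upper = np.array([0.50, 2.0])

best_gof, best_params = np.inf, None
accepted = []
# 5. Stopping rule: here, a fixed budget of model evaluations.
for _ in range(10_000):
    candidate = rng.uniform(lower, upper)
    gof = goodness_of_fit(run_model(candidate))
    if gof < best_gof:
        best_gof, best_params = gof, candidate
    # 4. Acceptance criterion: retain every parameter set whose fit
    #    falls below a pre-specified threshold.
    if gof < 4.0:
        accepted.append(candidate)

print("best-fitting parameters:", best_params, "GOF:", best_gof)
print("parameter sets meeting the acceptance criterion:", len(accepted))
```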

Results

The MEDLINE search and personal databases yielded 169 unique articles. We excluded 18 articles on the basis of the title and/or abstract (5 reviews, editorials, or commentaries; 1 meta-analysis; 3 articles about a disease other than cancer; and 9 cancer-related articles that did not report on a model of the natural history of the disease). We reviewed and abstracted data from the remaining 151 articles and excluded an additional 28 articles (5 review articles, editorials, or commentaries; 1 article reporting the results of a database analysis; 4 methodological papers using general models for which calibration was not necessary; and 18 articles reporting on models that did not track the underlying natural history of cancer or that did not evaluate cancer screening programs). One hundred twenty-three articles met the criteria for inclusion. From the references to prior publications in these articles, we identified 31 additional articles that met the inclusion criteria, yielding a total of 154 articles.

Model Characteristics

The number of published articles describing cancer simulation models increased substantially over the study period (Figure 1). Articles describing models of the natural history of breast or cervical cancers were the most prevalent (Table 2). Note that multiple articles using the same model were counted separately.

Figure 1
Number of published articles describing cancer simulation models by year, 1980–2006
Table 2
Published articles describing cancer simulation models by cancer site

Of these 154 articles, 23 used purely analytic methods to directly estimate parameter values[27, 40, 45, 50, 54, 59, 62–66, 69, 73, 74, 78, 95, 123, 124, 149, 150, 161, 174, 175], leaving 131 articles for which calibration may have been used to determine at least some unknown model parameter values. This subset includes articles that may have used a hybrid or combination approach of both analytic and calibration methods for parameter determination (for example, see references [49, 70, 71]). Subsequent summary statistics on the proportions of articles reporting calibration details are based on these 131 articles.

Calibration Details

We found that 66 articles discussed or alluded to calibration in the description of the model in the article itself. In some, calibration was explicitly mentioned in the text (for example, see references [48, 61, 120]) while in others we inferred calibration was conducted by authors’ use of terms such as “model fitting” or “model identification” (for example, see references [11, 93, 135]). An additional 21 articles did not explicitly state or imply that calibration was used but did provide references to a prior publication in which calibration methods were mentioned.[29, 34, 37, 38, 41, 42, 47, 68, 79, 82, 84, 94, 98, 104, 106, 111, 113, 119, 121, 128, 136] Thus 87 articles (66%) provided some documentation for the calibration protocol used in developing the model.

Of the 87 articles that discussed or provided a reference to calibration methods, 95% (83 of 87) made at least some mention of the data used as calibration targets. Targets included data from cancer registries, observational studies, and clinical trials. The vast majority of the models were calibrated to multiple targets, although in most cases it was unclear if they were calibrated to these data simultaneously or in stages.

Goodness-of-fit metrics were either explicitly or implicitly described in 54% (47 of 87) of the articles that discussed calibration. Visual assessment of fit, a qualitative goodness-of-fit metric, was used in 20 articles, although its use was typically inferred from the article text rather than explicitly stated. Of the 27 articles that reported quantitative methods, two used likelihood-based measures [39, 77] and 25 used distance measures such as absolute or relative differences (for example, see references [43, 48, 61, 71, 81, 135]). The majority of articles did not describe how goodness-of-fit metrics for individual targets were combined to yield an overall goodness-of-fit measure for the model parameters under calibration.
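To illustrate the two quantitative families we encountered, the sketch below (with invented numbers, not data from any reviewed model) computes a distance-based and a likelihood-based goodness-of-fit for two target types and then combines them with an explicit, necessarily arbitrary, weighting, the step most articles left undescribed.

```python
import numpy as np

# Hypothetical targets: age-specific incidence (per 100,000) and
# observed stage-distribution counts, with model predictions for each.
obs_incidence = np.array([8.0, 15.0, 22.0])
pred_incidence = np.array([7.2, 15.9, 20.5])

obs_stage_counts = np.array([40, 35, 25])
pred_stage_probs = np.array([0.42, 0.36, 0.22])

# Distance-based GOF: sum of squared relative differences (lower is better).
gof_distance = float(np.sum(((pred_incidence - obs_incidence) / obs_incidence) ** 2))

# Likelihood-based GOF: multinomial log-likelihood of the observed stage
# counts under the model's predicted probabilities (higher is better,
# up to an additive constant).
gof_loglik = float(np.sum(obs_stage_counts * np.log(pred_stage_probs)))

# Combining per-target metrics into one overall measure requires an
# explicit weighting choice; equal weights are shown here.
overall = 1.0 * gof_distance - 1.0 * gof_loglik  # lower is better
print(gof_distance, gof_loglik, overall)
```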

Search algorithms were generally not well described in the articles we reviewed. We inferred that some used informal “trial-and-error” approaches (for example, see reference [101]), while others specified “systematic variation” of model parameter values and provided no further description (for example, see references [7, 36, 81]). When further specified, formal search algorithms included grid search (for example, see references [58, 61, 71, 80]) and random sampling (for example, see references [31, 48]), as well as directed or iterative search methods. Directed search methods use computer algorithms and numerical approximation techniques to identify points in the parameter space likely to lead to successively better model fits. Methods used included the Nelder-Mead algorithm (for example, see references [39, 164]) as well as a variety of other optimization algorithms from engineering (for example, see references [39, 77, 156]).
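The contrast between an exhaustive grid search and a directed search can be sketched as follows, using SciPy’s Nelder-Mead implementation and a toy goodness-of-fit surface in place of a real simulation model (all names and values are illustrative assumptions).

```python
import itertools
import numpy as np
from scipy.optimize import minimize

def gof(params):
    """Toy goodness-of-fit surface with its minimum at (0.2, 1.1)."""
    onset, progression = params
    return (onset - 0.2) ** 2 + (progression - 1.1) ** 2

# Grid search: evaluate the fit at every point of a parameter lattice.
grid = itertools.product(np.linspace(0.05, 0.5, 20), np.linspace(0.1, 2.0, 20))
best_grid_point = min(grid, key=gof)

# Directed search: the Nelder-Mead simplex algorithm moves iteratively
# toward better-fitting regions without requiring derivatives.
result = minimize(gof, x0=np.array([0.4, 0.3]), method="Nelder-Mead")

print("grid search best point:", best_grid_point)
print("Nelder-Mead best point:", result.x, "GOF:", result.fun)
```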

Criteria for identifying parameter sets that provide an acceptable model fit were rarely described in the articles reviewed. Stopping criteria were also not well documented. With few exceptions, modelers ultimately accepted a single best-fitting parameter set for model analyses rather than accepting multiple parameter sets to form a posterior distribution that can capture parameter uncertainty.[48]
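The practical difference between these two choices can be sketched as follows: a single best-fitting set supports only a point estimate of model outcomes, whereas retaining every set that meets an acceptance threshold yields an empirical distribution over parameters (the fit function and threshold below are, again, toy stand-ins).

```python
import numpy as np

rng = np.random.default_rng(1)

def gof(params):
    """Toy goodness-of-fit with a minimum at (0.2, 1.1); lower is better."""
    return float(np.sum((params - np.array([0.2, 1.1])) ** 2))

# Draw many candidate parameter sets from plausible ranges.
candidates = rng.uniform([0.05, 0.1], [0.5, 2.0], size=(50_000, 2))
scores = np.array([gof(c) for c in candidates])

# A single best-fitting set, as most reviewed articles used ...
best = candidates[np.argmin(scores)]

# ... versus all sets meeting an acceptance threshold, whose spread
# carries parameter uncertainty into subsequent model analyses.
accepted = candidates[scores < 0.05]

print("single best-fitting set:", best)
print("accepted sets:", accepted.shape[0],
      "mean:", accepted.mean(axis=0),
      "std:", accepted.std(axis=0))
```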

Model validation methods or model validity were mentioned in 52% (80 of 154) of the articles although details as to how validity was assessed were provided in only a few (for example, see references [10, 83]).

Discussion

Our descriptive review indicates that while an array of techniques is used for model calibration, little attention has been paid to the documentation of these methods in the cancer simulation literature. Calibration is a necessary component of models that simulate an unobserved disease process, as it helps ensure the validity and credibility of inferences drawn from model predictions. We found that the calibration techniques used in cancer screening models vary in complexity and rigor, ranging from “trial-and-error” searching with subjective visual assessment of model fit to directed search algorithms with objective quantitative assessments of fit.

Our review also indicates that the depth of documentation of calibration methods in the cancer simulation literature is highly variable. We expected methods to be well documented in all published papers. In fact, descriptions of calibration methods, when included in the articles, ranged in detail and were often only a few sentences in length. In most equivocal cases, details were not reported or the terminology was considered too imprecise to categorize the methods. The very real possibility that we misclassified the methods used in these models underscores the need for clearer and more consistent reporting. We recognize that journal word count limits may preclude detailed descriptions of calibration protocols and that the technical details of model calibration may be difficult to publish as stand-alone articles because of their specialized nature. However, clear documentation is critical for both modelers and consumers of model results.

To aid the documentation process, we propose a Calibration Reporting Checklist be adopted as standard practice for modelers (Figure 2). The checklist ensures reporting of details regarding the data, methods and sources for the calibration targets, goodness-of-fit metric(s), search algorithm(s), acceptance criteria, and stopping rules. By encouraging more complete and consistent documentation of all components of calibration, the checklist would aid in the peer-review process. Additionally, with improved transparency, modeling may be less frequently viewed as a “black box” process.[177]

Figure 2
Calibration Reporting Checklist

The use of the checklist will also provide a means of disseminating existing and new calibration methodology from other fields, such as engineering and environmental science, to the disease modeling community. Further, it can facilitate comparisons of methods across models, which are critical for collaborative modeling projects such as CISNET. However, there are few rules of thumb to guide the choice of methods for any particular model; at present, the process of calibration is often an art rather than a science. Open research questions remain about the quality and appropriateness of alternative calibration methods. Direct comparisons of methods both within and across models are needed to understand whether different methods could lead to different calibration results. These types of comparisons will be important next steps for the advancement of model calibration methodology as well as disease simulation modeling in general.

Our focus on cancer screening models for this review does not allow conclusions about the reporting of calibration methods used in models of other diseases. However, similar reporting issues most likely exist in other disease models, and the checklist would be applicable for these as well. Because of the imperfect nature of article indexing, our search may have missed some relevant articles. For example, the term “natural history” only became a US National Library of Medicine MeSH heading in 1996 and is a broad term. We attempted to mitigate the potential loss of articles by performing a keyword search and including articles from our personal reprints and those referenced in the retrieved articles. As illustrated by our survey of the literature, this is a growing field, with greater numbers of disease modeling articles published each year. We are aware of several recent articles describing details of calibration methods,[26, 178, 179] but there are likely additional examples published after our search period.

Disease simulation modeling plays an important role in health policy analysis. Our review indicates the application of such models in cancer screening has become more widespread especially within the past decade. The use of disease simulation modeling is likely to expand especially in light of new initiatives in comparative effectiveness research. In addressing policy questions, these models synthesize biologic, epidemiologic and economic data from diverse sources. With their increasing use for policy making, the models themselves will face additional scrutiny and therefore careful documentation of methods will be critical for establishing credibility. While there have been efforts to standardize methods for decision analytic modeling in economic evaluation,[177, 180] less attention has been paid to the calibration of these models. Although questions remain about the best calibration practices, the use of the Calibration Reporting Checklist would begin the standardization process, provide a basis for more transparent comparisons across models, and facilitate important discussions about methods.

Supplementary Material

Appendix

Acknowledgments

The authors gratefully acknowledge the support of Drs. Eric (Rocky) Feuer and Karen Kuntz and members of the NCI Cancer Intervention and Surveillance Modeling Network. This work was supported in part by grants from the National Cancer Institute: F32 CA1259842 (NKS), R25 CA92203 (ABK), K99 126147 (PMM, CYK) and R01 97337 (GSG, PMM, CYK). The funding agreements ensured the authors’ independence in designing the study, collecting, analyzing and interpreting the data, writing, and publishing the report. An earlier version of this work was presented at the 2007 Society for Medical Decision Making Annual Meeting.

Footnotes

Publisher's Disclaimer: This is the prepublication, author-produced version of a manuscript accepted for publication in PharmacoEconomics. This version does not include post-acceptance editing and formatting. The definitive publisher-authorized version, PharmacoEconomics. 2009;27(7):533–545, is available online at: http://adisonline.com/pharmacoeconomics/.

References

1. Ramsey SD, McIntosh M, Etzioni RD, et al. Simulation Modeling of Outcomes and Cost-Effectiveness. Hematology/Oncology Clinics of North America. 2000 Aug;14( 4):925–38. [PubMed]
2. Knudsen AB, McMahon PM, Gazelle GS. Use of Modeling to Evaluate the Cost-Effectiveness of Cancer Screening Programs. J Clin Oncol. 2007;25:203–8. [PubMed]
3. Feuer EJ, Etzioni RD, Cronin KA, et al. The Use of Modeling to Understand the Impact of Screening on US Mortality: Examples from Mammography and PSA Testing. Stat Methods Med Res. 2004 Dec;13:421–42. [PubMed]
4. Goldie SJ. Chapter 15: Public Health Policy and Cost-Effectiveness Analysis. J Natl Cancer Inst Monogr. 2003;31:102–10. [PubMed]
5. Goldie SJ, Goldhaber-Fiebert JD, Garnett G. Chapter 18: Public Health Policy for Cervical Cancer Prevention: The Role of Decision Science, Economic Evaluation, and Mathematical Modeling. Vaccine. 2006 Aug 31;24(S3):155–63. [PubMed]
6. Weinstein MC. Recent Developments in Decision-Analytic Modelling for Economic Evaluation. Pharmacoeconomics. 2006;24( 11):1043–53. [PubMed]
7. Goldie SJ, Grima D, Kohli M, et al. A Comprehensive Natural History Model of HPV Infection and Cervical Cancer to Estimate the Clinical Impact of a Prophylactic HPV-16/18 Vaccine. Int J Cancer. 2003;106:896–904. [PubMed]
8. Yeh JM, Kuntz KM, Ezzati M, et al. Development of an Empirically Calibrated Model of Gastric Cancer in Two High-Risk Countries. Cancer Epidemiology, Biomarkers and Prevention. 2008 May 1;17( 5):1179–87. [PubMed]
9. Mandelblatt JS, Lawrence WF, Womack SM, et al. Benefits and costs of using HPV testing to screen for cervical cancer. JAMA. 2002 May 8;287( 18):2372–81. [PubMed]
10. Urban N, Drescher C, Etzioni R, et al. Use of a stochastic simulation model to identify an efficient protocol for ovarian cancer screening. Control Clin Trials. 1997 Jun;18( 3):251–70. [PubMed]
11. Myers ER, McCrory DC, Nanda K, et al. Mathematical Model for the Natural History of Human Papillomavirus Infection and Cervical Carcinogenesis. Am J Epidemiol. 2000 Jun 15;151(12):1158–71. [PubMed]
12. Law AM, Kelton WD. Simulation Modeling and Analysis. 3. Boston: McGraw-Hill; 2000.
13. Clarke LD, Plevritis SK, Boer R, et al. A Comparative Review of CISNET Breast Models Used To Analyze U.S. Breast Cancer Incidence and Mortality Trends. J Natl Cancer Inst Monogr. 2006;36:96–105. [PubMed]
14. Banks J, editor. Handbook of Simulation: Principles, Methodology, Advances, Applications, and Practice. John Wiley & Sons, Inc; 1998.
15. Bruning JL, Kintz BL. Computational Handbook of Statistics. 4. Boston: Allyn & Bacon; 1997.
16. Nelder JA, Mead R. A Simplex Method for Function Minimization. Computer Journal. 1965;7( 4):308–13.
17. Kirkpatrick S, Gelatt CD, Vecchi MP. Optimization by Simulated Annealing. Science. 1983;220( 4598):671–80. [PubMed]
18. Press WH, Teukolsky SA, Vetterling WT, et al. Numerical Recipes in C++ 2. New York: Cambridge University Press; 2002.
19. Wong DF, Leong HW, Liu CL. Simulated Annealing for VLSI Design. Boston: Kluwer Academic Publishers; 1988.
20. Holland JH. Adaptation in Natural and Artificial Systems. Ann Arbor: University of Michigan Press; 1975.
21. Goldberg DE. Genetic Algorithms in Search, Optimization and Machine Learning. Boston: Kluwer Academic Publishers; 1989.
22. Manikas TW, Cain JT. Genetic Algorithms vs. Simulated Annealing: a comparison of approaches for solving the circuit partitioning problem. Pittsburgh: University of Pittsburgh; 1996.
23. Ingber L, Rosen B. Genetic Algorithms and Very Fast Simulated Annealing: A Comparison. Mathematical and Computer Modelling. 1992;16(11):87–100.
24. Glover F. Tabu Search — Part I. ORSA Journal on Computing. 1989;1( 3):190–206.
25. Glover F. Tabu Search — Part II. ORSA Journal on Computing. 1990;2( 1):4–32.
26. Kong CY, McMahon PM, Gazelle GS. Calibration of Disease Simulation Models Using an Engineering Approach. Value Health. doi: 10.1111/j.1524-4733.2008.00484.x. Published online: Jan 12 2009. [PMC free article] [PubMed] [Cross Ref]
27. Baker RD. Use of a mathematical model to evaluate breast cancer screening policy. Health Care Manage Sci. 1998 Oct;1( 2):103–13. [PubMed]
28. Beckett JR, Kotre CJ, Michaelson JS. Analysis of benefit:risk ratio and mortality reduction for the UK Breast Screening Programme. Br J Radiol. 2003 May;76( 905):309–20. [PubMed]
29. Beemsterboer PM, Warmerdam PG, Boer R, et al. Radiation risk of mammography related to benefit in screening programmes: a favourable balance? J Med Screen. 1998;5( 2):81–7. [PubMed]
30. Berry DA, Cronin KA, Plevritis SK, et al. Effect of Screening and Adjuvant Treatment on Mortality from Breast Cancer. N Engl J Med. 2005 Oct 27;353( 17):1784–92. [PubMed]
31. Berry DA, Inoue L, Shen Y, et al. Modeling the impact of treatment and screening on U.S. breast cancer mortality: a Bayesian approach. J Natl Cancer Inst Monogr. 2006;36:30–6. [PubMed]
32. Blanchard K, Colbert JA, Puri D, et al. Mammographic screening: patterns of use and estimated impact on breast carcinoma survival. Cancer. 2004 Aug 1;101( 3):495–507. [PubMed]
33. Boer R, de Koning H, Threlfall A, et al. Cost effectiveness of shortening screening interval or extending age range of NHS breast screening programme: computer simulation study. BMJ. 1998 Aug 8;317( 7155):376–9. [PMC free article] [PubMed]
34. Boer R, de Koning H, van Oortmarssen G, et al. Stage distribution at first and repeat examinations in breast cancer screening. J Med Screen. 1999;6( 3):132–8. [PubMed]
35. Carter R, Glasziou P, van Oortmarssen G, et al. Cost-effectiveness of mammographic screening in Australia. Aust J Public Health. 1993 Mar;17( 1):42–50. [PubMed]
36. Carter KJ, Castro F, Kessler E, et al. A computer model for the study of breast cancer. Comput Biol Med. 2003 Jul;33:345–60. [PubMed]
37. Carter KJ, Castro F, Kessler E, et al. Simulation of breast cancer screening: quality assessment of two protocols. J Healthc Qual. 2004 Nov-Dec;26(6):31–8. [PubMed]
38. Castro F, Carter KJ, Kessler E, et al. The relation of breast cancer staging to screening protocol compliance: a computer simulation study. Comput Biol Med. 2005 Feb;35( 2):91–101. [PubMed]
39. Chia YL, Salzman P, Plevritis SK, et al. Simulation-based parameter estimation for complex models: a breast cancer natural history modelling illustration. Stat Methods Med Res. 2004 Dec;13( 6):507–24. [PubMed]
40. Cong XJ, Shen Y, Miller AB. Estimation of age-specific sensitivity and sojourn time in breast cancer screening studies. Stat Med. 2005 Oct 30;24( 20):3123–38. [PubMed]
41. Connor RJ, Boer R, Prorok PC, et al. Investigation of design and bias issues in case-control studies of cancer screening using microsimulation. Am J Epidemiol. 2000 May 15;151( 10):991–8. [PubMed]
42. de Koning HJ, van Ineveld BM, van Oortmarssen GJ, et al. Breast Cancer Screening and Cost-Effectiveness; Policy Alternatives, Quality of Life Considerations and the Possible Impact of Uncertain Factors. Int J Cancer. 1991 Oct 21;49:531–7. [PubMed]
43. de Koning HJ, Boer R, Warmerdam PG, et al. Quantitative interpretation of age-specific mortality reductions from the Swedish breast cancer-screening trials. J Natl Cancer Inst. 1995 Aug 16;87( 16):1217–23. [PubMed]
44. Eddy DM. Screening for Breast Cancer. Ann Intern Med. 1989;111( 5):389–99. [PubMed]
45. Feldstein M, Zelen M. Inferring the Natural Time History of Breast Cancer: Implications for Tumor Growth Rate and Early Detection. Breast Cancer Res Treat. 1984;4:3–10. [PubMed]
46. Fett MJ. Computer modelling of the Swedish two county trial of mammographic screening and trade offs between participation and screening interval. J Med Screen. 2001;8( 1):39–45. [PubMed]
47. Fracheboud J, Groenewoud JH, Boer R, et al. Seventy-five years is an appropriate upper age limit for population-based mammography screening. Int J Cancer. 2006 Apr 15;118( 8):2020–5. [PubMed]
48. Fryback DG, Stout NK, Rosenberg MA, et al. The Wisconsin Breast Cancer Epidemiology Simulation Model. J Natl Cancer Inst Monogr. 2006;36:37–47. [PubMed]
49. Hanin LG, Miller A, Zorin AV, et al. The University of Rochester Model of Breast Cancer Detection and Survival. J Natl Cancer Inst Monogr. 2006;36:66–78. [PubMed]
50. Hsieh HJ, Chen TH, Chang SH. Assessing chronic disease progression using non-homogeneous exponential regression Markov models: an illustration using a selective breast cancer screening in Taiwan. Stat Med. 2002 Nov 30;21( 22):3369–82. [PubMed]
51. Hunter DJ, Drake SM, Shortt SE, et al. Simulation modeling of change to breast cancer detection age eligibility recommendations in Ontario, 2002–2021. Cancer Detect Prev. 2004;28( 6):453–60. [PubMed]
52. Jacobi CE, Jonker MA, Nagelkerke NJ, et al. Prevalence of family histories of breast cancer in the general population and the incidence of related seeking of health care. J Med Genet. 2003 Jul;40( 7):e83. [PMC free article] [PubMed]
53. Jacobi CE, Nagelkerke NJ, van Houwelingen JH, et al. Breast cancer screening, outside the population-screening program, of women from breast cancer families without proven BRCA1/BRCA2 mutations: a simulation study. Cancer Epidemiol Biomarkers Prev. 2006 Mar;15( 3):429–36. [PubMed]
54. Jansen JT, Zoetelief J. MBS: a model for risk benefit analysis of breast cancer screening. Br J Radiol. 1995 Feb;68( 806):141–9. [PubMed]
55. Jansen JT, Zoetelief J. Assessment of lifetime gained as a result of mammographic breast cancer screening using a computer model. Br J Radiol. 1997 Jun;70( 834):619–28. [PubMed]
56. Jansen JT, Zoetelief J. Optimisation of mammographic breast cancer screening using a computer simulation model. Eur J Radiol. 1997 Feb;24( 2):137–44. [PubMed]
57. Knox EG. Evaluation of a proposed breast cancer screening regimen. BMJ. 1988 Sep 10;297( 6649):650–4. [PMC free article] [PubMed]
58. Koscielny S, Tubiana M, Valleron AJ. A simulation model of the natural history of human breast cancer. Br J Cancer. 1985 Oct;52( 4):515–24. [PMC free article] [PubMed]
59. Lee S, Zelen M. A Stochastic Model for Predicting the Mortality of Breast Cancer. J Natl Cancer Inst Monogr. 2006;36:79–86. [PubMed]
60. Mandelblatt JS, Schechter CB, Yabroff KR, et al. Benefits and costs of interventions to improve breast cancer outcomes in African American women. J Clin Oncol. 2004 Jul 1;22( 13):2554–66. [PubMed]
61. Mandelblatt J, Schechter CB, Lawrence W, et al. The SPECTRUM population model of the impact of screening and treatment on U.S. breast cancer trends from 1975 to 2000: principles and practice of the model methods. J Natl Cancer Inst Monogr. 2006;36:47–55. [PubMed]
62. Manton KG, Stallard E. Demographics (1950–1987) of breast cancer in birth cohorts of older women. J Gerontol. 1992 Nov;47(Spec No):32–42. [PubMed]
63. Michaelson JS, Halpern E, Kopans DB. Breast cancer: computer simulation method for estimating optimal intervals for screening. Radiology. 1999 Aug;212( 2):551–60. [PubMed]
64. Michaelson JS, Satija S, Moore R, et al. Estimates of Breast Cancer Growth Rate and Sojourn Time from Screening Database Information. Journal of Women’s Imaging. 2003;5( 1):11–9.
65. Michaelson JS, Satija S, Moore R, et al. Estimates of the Sizes at Which Breast Cancers Become Detectable on Mammographic and Clinical Grounds. Journal of Women’s Imaging. 2003;5( 1):3–10.
66. Myles JP, Nixon RM, Duffy SW, et al. Bayesian evaluation of breast cancer screening using data from two studies. Stat Med. 2003 May 30;22(10):1661–74. [PubMed]
67. Okubo I, Glick H, Frumkin H, et al. Cost-effectiveness analysis of mass screening for breast cancer in Japan. Cancer. 1991 Apr 15;67( 8):2021–9. [PubMed]
68. Paci E, Boer R, Zappa M, et al. A model-based prediction of the impact on reduction in mortality by a breast cancer screening programme in the city of Florence, Italy. Eur J Cancer. 1995;31A (3):348–53. [PubMed]
69. Plevritis SK. A mathematical algorithm that computes breast cancer sizes and doubling times detected by screening. Math Biosci. 2001 Jun;171( 2):155–78. [PubMed]
70. Plevritis SK, Kurian AW, Sigal BM, et al. Cost-effectiveness of screening BRCA1/2 mutation carriers with breast magnetic resonance imaging. JAMA. 2006 May 24;295( 20):2374–84. [PubMed]
71. Plevritis SK, Sigal BM, Salzman P, et al. A stochastic simulation model of U.S. breast cancer mortality trends from 1975 to 2000. J Natl Cancer Inst Monogr. 2006;36:86–95. [PubMed]
72. Plevritis SK, Salzman P, Sigal BM, et al. A natural history model of stage progression applied to breast cancer. Statistics in Medicine. 2006;26( 3):581–95. [PubMed]
73. Shen Y, Huang X. Nonparametric estimation of asymptomatic duration from a randomized prospective cancer screening trial. Biometrics. 2005 Dec;61( 4):992–9. [PubMed]
74. Shen Y, Zelen M. Robust modeling in screening studies: estimation of sensitivity and preclinical sojourn time distribution. Biostatistics. 2005 Oct;6( 4):604–14. [PubMed]
75. Stout NK, Rosenberg MA, Trentham-Dietz A, et al. Retrospective cost-effectiveness analysis of screening mammography. J Natl Cancer Inst. 2006 Jun 7;98( 11):774–82. [PubMed]
76. Szeto KL, Devlin NJ. The cost-effectiveness of mammography screening: evidence from a microsimulation model for New Zealand. Health Policy. 1996 Nov;38( 2):101–15. [PubMed]
77. Tan SYGL, van Oortmarssen GJ, de Koning HJ, et al. The MISCAN-Fadia Continuous Tumor Growth Model for Breast Cancer. J Natl Cancer Inst Monogr. 2006;36:56–65. [PubMed]
78. Tubiana M, Koscielny S. The natural history of breast cancer: implications for a screening strategy. Int J Radiat Oncol Biol Phys. 1990 Nov;19( 5):1117–20. [PubMed]
79. van der Maas PJ, de Koning HJ, van Ineveld BM, et al. The cost-effectiveness of breast cancer screening. Int J Cancer. 1989 Jun 15;43( 6):1055–60. [PubMed]
80. van Oortmarssen GJ, Habbema JD, Lubbe JT, et al. A model-based analysis of the HIP project for breast cancer screening. Int J Cancer. 1990 Aug 15;46( 2):207–13. [PubMed]
81. van Oortmarssen GJ, Habbema JD, van der Maas PJ, et al. A model for breast cancer screening. Cancer. 1990 Oct 1;66( 7):1601–12. [PubMed]
82. Vervoort MM, Draisma G, Fracheboud J, et al. Trends in the usage of adjuvant systemic therapy for breast cancer in the Netherlands and its effect on mortality. Br J Cancer. 2004 Jul 19;91( 2):242–7. [PMC free article] [PubMed]
83. Berkhof J, de Bruijne MC, Zielinski GD, et al. Natural history and screening model for high-risk human papillomavirus infection, neoplasia and cervical cancer in the Netherlands. Int J Cancer. 2005 Jun 10;115( 2):268–75. [PubMed]
84. Berkhof J, de Bruijne MC, Zielinski GD, et al. Evaluation of cervical screening strategies with adjunct high-risk human papillomavirus testing for women with borderline or mild dyskaryosis. Int J Cancer. 2006 Apr 1;118( 7):1759–68. [PubMed]
85. Eddy DM. The frequency of cervical cancer screening. Comparison of a mathematical model with empirical data. Cancer. 1987 Sep 1;60( 5):1117–22. [PubMed]
86. Eddy DM. Screening for cervical cancer. Ann Intern Med. 1990 Aug 1;113( 3):214–26. [PubMed]
87. Goldie SJ, Weinstein MC, Kuntz KM, et al. The costs, clinical benefits, and cost-effectiveness of screening for cervical cancer in HIV-infected women. Ann Intern Med. 1999 Jan 19;130( 2):97–107. [PubMed]
88. Goldie SJ, Kuhn L, Denny L, et al. Policy analysis of cervical cancer screening strategies in low-resource settings: clinical benefits and cost-effectiveness. JAMA. 2001 Jun 27;285( 24):3107–15. [PubMed]
89. Goldie SJ, Kim JJ, Wright TCJ. Cost-Effectiveness of Human Papillomavirus DNA Testing for Cervical Cancer Screening in Women Aged 30 Years or More. Obstet Gynecol. 2004 Apr;103( 4):619–31. [PubMed]
90. Goldie SJ, Kohli M, Grima D, et al. Projected clinical benefits and cost-effectiveness of a human papillomavirus 16/18 vaccine. J Natl Cancer Inst. 2004 Apr 21;96( 8):604–15. [PubMed]
91. Goldie SJ, Gaffikin L, Goldhaber-Fiebert JD, et al. Cost-Effectiveness of Cervical-Cancer Screening in Five Developing Countries. N Engl J Med. 2005 Nov 17;353( 20):2158–68. [PubMed]
92. Gustafsson L, Adami HO. Natural history of cervical neoplasia: consistent results obtained by an identification technique. Br J Cancer. 1989 Jul;60( 1):132–41. [PMC free article] [PubMed]
93. Gustafsson L, Adami HO. Cytologic screening for cancer of the uterine cervix in Sweden evaluated by identification and simulation. Br J Cancer. 1990 Jun;61( 6):903–8. [PMC free article] [PubMed]
94. Gustafsson L, Adami HO. Optimization of cervical cancer screening. Cancer Causes Control. 1992 Mar;3( 2):125–36. [PubMed]
95. Gyrd-Hansen D, Holund B, Andersen P. A cost-effectiveness analysis of cervical cancer screening: health policy implications. Health Policy. 1995 Oct;34( 1):35–51. [PubMed]
96. Habbema JD, van Oortmarssen GJ, Lubbe JT, et al. Model building on the basis of Dutch cervical cancer screening data. Maturitas. 1985 May;7( 1):11–20. [PubMed]
97. Habbema JDF, Lubbe JTN, van Oortmarssen GJ, et al. A simulation approach to cost-effectiveness and cost-benefit calculations of screening for the early detection of disease. European Journal of Operations Research. 1987;29( 2):159–66.
98. Helfand M, O’Connor GT, Zimmer-Gembeck M, et al. Effect of the Clinical Laboratory Improvement Amendments of 1988 (CLIA ‘88) on the incidence of invasive cervical cancer. Med Care. 1992 Dec;30( 12):1067–82. [PubMed]
99. Kim JJ, Wright TCJ, Goldie SJ. Cost-Effectiveness of Alternative Triage Strategies for Atypical Squamous Cells of Undetermined Significance. JAMA. 2002 May 8;287( 18):2382–90. [PubMed]
100. Kim JJ, Leung GM, Woo PP, et al. Cost-effectiveness of organized versus opportunistic cervical cytology screening in Hong Kong. J Public Health (Oxf) 2004 Jun;26( 2):130–7. [PubMed]
101. Knox EG. A simulation system for screening procedures. In: McLachlan G, editor. Future and Present Indicatives, Problems and Progress in Medical Care. Nuffield Provincial Hospitals Trust; 1973. pp. 17–55.
102. Koong SL, Yen AM, Chen TH. Efficacy and cost-effectiveness of nationwide cervical cancer screening in Taiwan. J Med Screen. 2006;13(Suppl 1):S44–7. [PubMed]
103. Koopmanschap MA, Lubbe KT, van Oortmarssen GJ, et al. Economic aspects of cervical cancer screening. Soc Sci Med. 1990;30( 10):1081–7. [PubMed]
104. Kulasingam SL, Myers ER, Lawson HW, et al. Cost-effectiveness of Extending Cervical Cancer Screening Intervals Among Women with Prior Normal Pap Tests. Obstet Gynecol. 2006 Feb;107( 2 Part 1):321–8. [PubMed]
105. Mandelblatt JS, Lawrence WF, Gaffikin L, et al. Costs and benefits of different strategies to screen for cervical cancer in less-developed countries. J Natl Cancer Inst. 2002 Oct 2;94( 19):1469–83. [PubMed]
106. Mandelblatt J, Lawrence W, Yi B, et al. The Balance of Harms, Benefits, and Costs of Screening for Cervical Cancer in Older Women. Arch Intern Med. 2004 Feb 9;164:245–7. [PubMed]
107. Matsunaga G, Tsuji I, Sato S, et al. Cost-effective analysis of mass screening for cervical cancer in Japan. J Epidemiol. 1997 Sep;7( 3):135–41. [PubMed]
108. Office of Technology Assessment. The Costs and Effectiveness of Screening for Cervical Cancer in Elderly Women-Background Paper, OTA-BP-H-65. Washington, DC: U.S. Congress; 1990.
109. Parkin DM. A computer simulation model for the practical planning of cervical cancer screening programmes. Br J Cancer. 1985 Apr;51( 4):551–68. [PMC free article] [PubMed]
110. Parkin DM, Moss SM. An evaluation of screening policies for cervical cancer in England and Wales using a computer simulation model. J Epidemiol Community Health. 1986 Jun;40( 2):143–53. [PMC free article] [PubMed]
111. Radensky PW, Mango LJ. Interactive neural-network-assisted screening. An economic assessment. Acta Cytol. 1998 Jan-Feb;42(1):246–52. [PubMed]
112. Sato S, Matunaga G, Tsuji I, et al. Determining the cost-effectiveness of mass screening for cervical cancer using common analytic models. Acta Cytol. 1999 Nov-Dec;43(6):1006–14. [PubMed]
113. Sawaya GF, McConnell KJ, Kulasingam SL, et al. Risk of Cervical Cancer Associated with Extending the Interval between Cervical-Cancer Screenings. N Engl J Med. 2003 Oct 16;349( 16):1501–9. [PubMed]
114. Schechter CB. Cost-effectiveness of rescreening conventionally prepared cervical smears by PAPNET testing. Acta Cytol. 1996 Nov-Dec;40(6):1272–82. [PubMed]
115. Sherlaw-Johnson C, Gallivan S, Jenkins D. Withdrawing Low Risk Women From Cervical Screening Programmes: Mathematical Modelling Study. BMJ. 1999 Feb 6;318( 7180):356–60. [PMC free article] [PubMed]
116. Sherlaw-Johnson C, Philips Z. An Evaluation of Liquid-based Cytology and Human Papillomavirus Testing Within the UK Cervical Cancer Screening Programme. Br J Cancer. 2004 Jul 5;91( 1):84–91. [PMC free article] [PubMed]
117. Shun-Zhang Y, Miller AB, Sherman GJ. Optimising the age, number of tests, and test interval for cervical screening in Canada. J Epidemiol Community Health. 1982 Mar;36( 1):1–10. [PMC free article] [PubMed]
118. Sreenivas V, Prabhakar AK, Ravi R, et al. A simulation approach for estimating the loss of womanyears due to cervical cancer and probability of developing cervical cancer. Neoplasma. 1989;36( 5):623–7. [PubMed]
119. van Ballegooijen M, van den Akker-van Marle E, Patnick J, et al. Overview of important cervical cancer screening process values in European Union (EU) countries, and tentative predictions of the corresponding effectiveness and cost-effectiveness. Eur J Cancer. 2000 Nov;36( 17):2177–88. [PubMed]
120. van den Akker-van Marle ME, van Ballegooijen M, van Oortmarssen GJ, et al. Cost-effectiveness of cervical cancer screening: comparison of screening policies. J Natl Cancer Inst. 2002 Feb 6;94( 3):193–204. [PubMed]
121. Clemen RT, Lacke CJ. Analysis of colorectal cancer screening regimens. Health Care Manage Sci. 2001 Dec;4( 4):257–67. [PubMed]
122. Frazier AL, Colditz GA, Fuchs CS, et al. Cost-effectiveness of screening for colorectal cancer in the general population. JAMA. 2000 Oct 18;284( 15):1954–61. [PubMed]
123. Gyrd-Hansen D, Søgaard J, Kronborg O. Analysis of screening data: colorectal cancer. Int J Epidemiol. 1997 Dec;26( 6):1172–81. [PubMed]
124. Gyrd-Hansen D, Søgaard J, Kronborg O. Colorectal cancer screening: efficiency and effectiveness. Health Econ. 1998 Feb;7( 1):9–20. [PubMed]
125. Haug U, Brenner H. A simulation model for colorectal cancer screening: potential of stool tests with various performance characteristics compared with screening colonoscopy. Cancer Epidemiol Biomarkers Prev. 2005 Feb;14( 2):422–8. [PubMed]
126. Khandker RK, Dulski JD, Kilpatrick JB, et al. A decision model and cost-effectiveness analysis of colorectal cancer screening and surveillance guidelines for average-risk adults. Int J Technol Assess Health Care. 2000 Summer;16( 3):799–810. [PubMed]
127. Ladabaum U, Chopra CL, Huang G, et al. Aspirin as an adjunct to screening for prevention of sporadic colorectal cancer. A cost-effectiveness analysis. Ann Intern Med. 2001 Nov 6;135( 9):769–81. [PubMed]
128. Ladabaum U, Scheiman JM, Fendrick AM. Potential effect of cyclooxygenase-2-specific inhibitors on the prevention of colorectal cancer: a cost-effectiveness analysis. Am J Med. 2003 May;114( 7):546–5. [PubMed]
129. Lejeune C, Arveux P, Dancourt V, et al. A simulation model for evaluating the medical and economic outcomes of screening strategies for colorectal cancer. Eur J Cancer Prev. 2003 Feb;12( 1):77–84. [PubMed]
130. Lejeune C, Arveux P, Dancourt V, et al. Cost-effectiveness analysis of fecal occult blood screening for colorectal cancer. Int J Technol Assess Health Care. 2004 Fall;20( 4):434–9. [PubMed]
131. Loeve F, Boer R, van Oortmarssen GJ, et al. The MISCAN-COLON simulation model for the evaluation of colorectal cancer screening. Comput Biomed Res. 1999 Feb;32( 1):13–33. [PubMed]
132. Loeve F, Brown ML, Boer R, et al. Endoscopic colorectal cancer screening: a cost-saving analysis. J Natl Cancer Inst. 2000 Apr 5;92( 7):557–63. [PubMed]
133. Loeve F, Boer R, van Oortmarssen GJ, et al. Impact of systematic false-negative test results on the performance of faecal occult blood screening. Eur J Cancer. 2001 May;37( 7):912–7. [PubMed]
134. Neilson AR, Whynes DK. Cost-effectiveness of screening for colorectal cancer: a simulation model. IMA J Math Appl Med Biol. 1995 Sep-Dec;12(3–4):355–67. [PubMed]
135. Ness RM, Holmes AM, Klein R, et al. Cost-utility of one-time colonoscopic screening for colorectal cancer at various ages. Am J Gastroenterol. 2000 Jul;95( 7):1800–11. [PubMed]
136. Song K, Fendrick AM, Ladabaum U. Fecal DNA testing compared with conventional colorectal cancer screening methods: a decision analysis. Gastroenterology. 2004 May;126( 5):1270–9. [PubMed]
137. Vijan S, Hwang EW, Hofer TP, et al. Which colon cancer screening test? A comparison of costs, effectiveness, and compliance. Am J Med. 2001 Dec 1;111( 8):593–601. [PubMed]
138. Wagner JL, Herdman RC, Wadhwa S. Cost effectiveness of colorectal cancer screening in the elderly. Ann Intern Med. 1991 Nov 15;115( 10):807–17. [PubMed]
139. Wagner JL, Tunis S, Brown M, et al. Cost-Effectiveness of Colorectal Cancer Screening in Average-Risk Adults. In: Young G, Rosen P, Levin B, editors. Prevention and Early Detection of Colorectal Cancer. Philadelphia: Saunders; 1996. pp. 321–56.
140. Whynes DK, Neilson AR, Walker AR, et al. Faecal occult blood screening for colorectal cancer: is it cost-effective? Health Econ. 1998 Feb;7( 1):21–9. [PubMed]
141. Wong JM, Yen MF, Lai MS, et al. Progression rates of colorectal cancer by Dukes’ stage in a high-risk group: analysis of selective colorectal cancer screening. Cancer J. 2004 May-Jun;10(3):160–9. [PubMed]
142. Yang KC, Liao CS, Chiu YH, et al. Colorectal cancer screening with faecal occult blood test within a multiple disease screening programme: an experience from Keelung, Taiwan. J Med Screen. 2006;13(Suppl 1):S8–13. [PubMed]
143. Garside R, Pitt M, Somerville M, et al. Surveillance of Barrett’s oesophagus: exploring the uncertainty through systematic review, expert workshop and economic modelling. Health Technology Assessment (Winchester, England) 2006 Mar;10( 8):1–142. [PubMed]
144. Davies R, Crabbe D, Roderick P, et al. A simulation to evaluate screening for Helicobacter pylori infection in the prevention of peptic ulcers and gastric cancers. Health Care Manage Sci. 2002 Nov;5( 4):249–58. [PubMed]
145. Fendrick AM, Chernew ME, Hirth RA, et al. Clinical and economic effects of population-based Helicobacter pylori screening to prevent gastric cancer. Arch Intern Med. 1999 Jan 25;159( 2):142–8. [PubMed]
146. Roderick P, Davies R, Raftery J, et al. The cost-effectiveness of screening for Helicobacter pylori to reduce mortality and morbidity from gastric cancer and peptic ulcer disease: a discrete-event simulation model. Health Technology Assessment (Winchester, England) 2003;7( 6):1–86. [PubMed]
147. Roderick P, Davies R, Raftery J, et al. Cost-effectiveness of population screening for Helicobacter pylori in preventing gastric cancer and peptic ulcer disease, using simulation. J Med Screen. 2003;10( 3):148–56. [PubMed]
148. Das P, Ng AK, Earle CC, et al. Computed tomography screening for lung cancer in Hodgkin’s lymphoma survivors: decision analysis and cost-effectiveness analysis. Ann Oncol. 2006 May;17( 5):785–93. [PubMed]
149. Flehinger BJ, Kimmel M. The natural history of lung cancer in a periodically screened population. Biometrics. 1987 Mar;43( 1):127–44. [PubMed]
150. Flehinger BJ, Kimmel M, Melamed MR. Natural history of adenocarcinoma-large cell carcinoma of the lung: conclusions from screening programs in New York and Baltimore. J Natl Cancer Inst. 1988 May 4;80( 5):337–44. [PubMed]
151. Flehinger BJ, Kimmel M, Polyak T, et al. Screening for lung cancer. The Mayo Lung Project revisited. Cancer. 1993 Sep 1;72( 5):1573–80. [PubMed]
152. Gorlova OY, Kimmel M, Henschke C. Modeling of long-term screening for lung carcinoma. Cancer. 2001 Sep 15;92( 6):1531–40. [PubMed]
153. Mahadevia PJ, Fleisher LA, Frick KD, et al. Lung cancer screening with helical computed tomography in older adult smokers: a decision and cost-effectiveness analysis. JAMA. 2003 Jan 15;289( 3):313–22. [PubMed]
154. Marshall D, Simpson KN, Earle CC, et al. Potential cost-effectiveness of one-time screening for lung cancer (LC) in a high risk cohort. Lung Cancer. 2001 Jun;32( 3):227–36. [PubMed]
155. Yamaguchi N, Tamura Y, Sobue T, et al. Evaluation of cancer prevention strategies by computerized simulation model: an approach to lung cancer. Cancer Causes Control. 1991 May;2( 3):147–55. [PubMed]
156. Yamaguchi N, Mizuno S, Akiba S, et al. A 50-year projection of lung cancer deaths among Japanese males and potential impact evaluation of anti-smoking measures and screening using a computerized simulation model. Jpn J Cancer Res. 1992 Mar;83( 3):251–7. [PubMed]
157. Yamaguchi N, Tamura Y, Sobue T, et al. Evaluation of cancer prevention strategies by computerized simulation model: methodological issues. Environ Health Perspect. 1994 Nov;102(Suppl 8):67–71. [PMC free article] [PubMed]
158. Girgis A, Clarke P, Burton RC, et al. Screening for melanoma by primary health care physicians: a cost-effectiveness analysis. J Med Screen. 1996;3( 1):47–53. [PubMed]
159. Downer MC, Jullien JA, Speight PM. An interim determination of health gain from oral cancer and precancer screening: 2. Developing a model of population screening. Community Dent Health. 1997 Dec;14( 4):227–32. [PubMed]
160. Myers ER, Havrilesky LJ, Kulasingam SL, et al. Genomic tests for ovarian cancer detection and management. Evidence Report/Technology Assessment. 2006 Oct;(145):1–100. [PubMed]
161. Skates SJ, Singer DE. Quantifying the potential benefit of CA 125 screening for ovarian cancer. J Clin Epidemiol. 1991;44( 4–5):365–80. [PubMed]
162. Skates SJ, Pauler DK, Jacobs IJ. Screening based on the risk of cancer calculation from Bayesian hierarchical change point and mixture models of longitudinal markers. Journal of the American Statistical Association. 2001;96( 454):429–39.
163. Cowen ME, Chartrand M, Weitzel WF. A Markov model of the natural history of prostate cancer. J Clin Epidemiol. 1994 Jan;47( 1):3–21. [PubMed]
164. Draisma G, Boer R, Otto SJ, et al. Lead times and overdetection due to prostate-specific antigen screening: estimates from the European Randomized Study of Screening for Prostate Cancer. J Natl Cancer Inst. 2003 Jun 18;95( 12):868–78. [PubMed]
165. Draisma G, De Koning HJ. MISCAN: estimating lead-time and over-detection by simulation. BJU Int. 2003 Dec;92(Suppl 2):106–11. [PubMed]
166. Draisma G, Postma R, Schroder FH, et al. Gleason score, age and screening: modeling dedifferentiation in prostate cancer. Int J Cancer. 2006 Nov 15;119( 10):2366–71. [PubMed]
167. Etzioni R, Cha R, Cowen ME. Serial prostate specific antigen screening for prostate cancer: a computer model evaluates competing strategies. J Urol. 1999 Sep;162( 3 Pt 1):741–8. [PubMed]
168. Etzioni R, Legler JM, Feuer EJ, et al. Cancer surveillance series: interpreting trends in prostate cancer--part III: Quantifying the link between population prostate-specific antigen testing and recent declines in prostate cancer mortality. J Natl Cancer Inst. 1999 Jun 16;91( 12):1033–9. [PubMed]
169. Etzioni R, Penson DF, Legler JM, et al. Overdiagnosis due to prostate-specific antigen screening: lessons from U.S. prostate cancer incidence trends. J Natl Cancer Inst. 2002 Jul 3;94( 13):981–90. [PubMed]
170. Krahn MD, Mahoney JE, Eckman MH, et al. Screening for prostate cancer. A decision analytic view. JAMA. 1994 Sep 14;272( 10):773–80. [PubMed]
171. Parker C, Muston D, Melia J, et al. A model of the natural history of screen-detected prostate cancer, and the effect of radical treatment on overall survival. Br J Cancer. 2006 May 22;94( 10):1361–8. [PMC free article] [PubMed]
172. Ross KS, Carter HB, Pearson JD, et al. Comparative efficiency of prostate-specific antigen screening strategies for prostate cancer detection. JAMA. 2000 Sep 20;284( 11):1399–405. [PubMed]
173. Ross KS, Guess HA, Carter HB. Estimation of treatment benefits when PSA screening for prostate cancer is discontinued at different ages. Urology. 2005 Nov;66( 5):1038–42. [PubMed]
174. Tsodikov A, Szabo A, Wegelin J. A population model of prostate cancer incidence. Stat Med. 2006 Aug 30;25( 16):2846–66. [PubMed]
175. Kimmel M, Flehinger BJ. Nonparametric estimation of the size-metastasis relationship in solid cancers. Biometrics. 1991 Sep;47( 3):987–1004. [PubMed]
176. Wang PE, Wang TT, Chiu YH, et al. Evolution of multiple disease screening in Keelung: a model for community involvement in health interventions? J Med Screen. 2006;13(Suppl 1):S54–8. [PubMed]
177. Garrison LP. The ISPOR Good Practice Modeling Principles: A Sensible Approach: Be Transparent, Be Reasonable. Value Health. 2003 Jan-Feb;6(1):6–8. [PubMed]
178. Kim JJ, Kuntz KM, Stout NK, et al. Multiparameter Calibration of a Natural History Model of Cervical Cancer. Am J Epidemiol. 2007 July 15;166(2):137–50. [PubMed]
179. Goldhaber-Fiebert JD, Stout NK, Ortendahl J, et al. Modeling Human Papillomavirus and Cervical Cancer in the United States for Analyses of Screening and Vaccination. Popul Health Metr. 2007;5( 1):11. [PMC free article] [PubMed]
180. Weinstein MC, O’Brien B, Hornberger J, et al. Principles of Good Practice for Decision Analytic Modeling in Health-Care Evaluation: Report of the ISPOR Task Force on Good Research Practices--Modeling Studies. Value Health. 2003 Jan-Feb;6(1):9–17. [PubMed]