1.  Cost-effective Diagnostic Checklists for Meningitis in Resource Limited Settings 
Background
Checklists can standardize patient care, reduce errors, and improve health outcomes. For meningitis in resource-limited settings, with high patient loads and limited financial resources, CNS diagnostic algorithms may be useful to guide diagnosis and treatment. However, the cost-effectiveness of such algorithms is unknown.
Methods
We used decision analysis methodology to evaluate the costs, diagnostic yield, and cost-effectiveness of diagnostic strategies for adults with suspected meningitis in resource limited settings with moderate/high HIV prevalence. We considered three strategies: 1) comprehensive “shotgun” approach of utilizing all routine tests; 2) “stepwise” strategy with tests performed in a specific order with additional TB diagnostics; 3) “minimalist” strategy of sequential ordering of high-yield tests only. Each strategy resulted in one of four meningitis diagnoses: bacterial (4%), cryptococcal (59%), TB (8%), or other (aseptic) meningitis (29%). In model development, we utilized prevalence data from two Ugandan sites and published data on test performance. We validated the strategies with data from Malawi, South Africa, and Zimbabwe.
Results
The current comprehensive testing strategy resulted in 93.3% correct meningitis diagnoses costing $32.00/patient. A stepwise strategy had 93.8% correct diagnoses costing an average of $9.72/patient, and a minimalist strategy had 91.1% correct diagnoses costing an average of $6.17/patient. The incremental cost-effectiveness ratio was $133 per additional correct diagnosis for the stepwise over the minimalist strategy.
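The reported ratio can be reproduced from the per-patient costs and diagnostic yields quoted above. The short sketch below (values copied from this abstract; variable names are illustrative only) shows the arithmetic; the small gap between roughly $131 and the published $133 reflects rounding of the published inputs.

```python
# Illustrative check of the incremental cost-effectiveness ratio (ICER)
# using the per-patient costs and diagnostic yields quoted in the abstract.
cost_minimalist, yield_minimalist = 6.17, 0.911   # $/patient, proportion correctly diagnosed
cost_stepwise,   yield_stepwise   = 9.72, 0.938

icer = (cost_stepwise - cost_minimalist) / (yield_stepwise - yield_minimalist)
print(f"ICER ~ ${icer:.0f} per additional correct diagnosis")  # ~$131; reported as $133
```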
Conclusions
Through strategically choosing the order and type of testing coupled with disease prevalence rates, algorithms can deliver more care more efficiently. The algorithms presented herein are generalizable to East Africa and Southern Africa.
doi:10.1097/QAI.0b013e31828e1e56
PMCID: PMC3683123  PMID: 23466647
Meningitis/DI; Meningitis/EC; Meningitis/EP; Cost Analysis; Diagnostic Techniques and Procedures/CF; Diagnosis Differential; Cryptococcal Meningitis; TB Meningitis; Bacterial Meningitis; Humans
2.  Derivation of Background Mortality by Smoking and Obesity in Cancer Simulation Models 
BACKGROUND
Simulation models designed to evaluate cancer prevention strategies make assumptions about background mortality, the competing risk of death from causes other than the cancer being studied. Researchers often use the U.S. life tables and assume homogeneous other-cause mortality rates. However, this can lead to bias because common risk factors such as smoking and obesity also predispose individuals to death from other causes such as cardiovascular disease.
METHODS
We obtained calendar year-, age-, and sex-specific other-cause mortality rates by removing deaths due to a specific cancer from U.S. all-cause life tables. Prevalence across 12 risk factor groups (3 smoking categories [never, past, and current smoker] and 4 body mass index (BMI) categories [<25, 25-30, 30-35, and 35+ kg/m2]) was estimated from national surveys (National Health and Nutrition Examination Surveys [NHANES], 1971-2004). Using NHANES linked mortality data, we estimated hazard ratios for death by BMI/smoking group using a Poisson regression model. Finally, we combined these results to create 12 sets of BMI- and smoking-specific other-cause life tables for U.S. adults aged 40 and older that can be used in simulation models of lung, colorectal, or breast cancer.
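The final combination step, splitting an overall other-cause mortality rate into risk-group-specific rates using group prevalences and hazard ratios, can be sketched schematically. This is not the authors' code; the groups, prevalences, and hazard ratios below are hypothetical placeholders, and the only element taken from the approach described here is the constraint that prevalence-weighted group rates reproduce the overall rate.

```python
# Schematic only: partition an overall other-cause mortality rate into
# risk-group-specific rates such that the prevalence-weighted average
# reproduces the overall rate. Groups, prevalences, and hazard ratios
# below are hypothetical placeholders.
def group_specific_rates(overall_rate, prevalence, hazard_ratio):
    baseline = overall_rate / sum(prevalence[g] * hazard_ratio[g] for g in prevalence)
    return {g: baseline * hazard_ratio[g] for g in prevalence}

rates = group_specific_rates(
    overall_rate=0.02,  # e.g., annual other-cause mortality for one age/sex stratum
    prevalence={"never smoker, BMI <25": 0.6, "current smoker, BMI 30-35": 0.4},
    hazard_ratio={"never smoker, BMI <25": 1.0, "current smoker, BMI 30-35": 2.5},
)
print(rates)
```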
RESULTS
We found substantial differences in background mortality when accounting for BMI and smoking. Ignoring this heterogeneity in background mortality in cancer simulation models can underestimate the competing risk of death for higher-risk individuals (e.g., 60-year-old white obese male smokers) by as much as 45%.
CONCLUSION
Not properly accounting for competing risks of death may introduce bias when using simulation modeling to evaluate population health strategies for prevention, screening, or treatment. Further research is warranted on how these biases may impact cancer screening strategies targeted to high-risk individuals.
doi:10.1177/0272989X12458725
PMCID: PMC3663442  PMID: 23132901
3.  How should individuals with a false-positive fecal occult blood test for colorectal cancer be managed? A decision analysis 
Several industrialized nations recommend fecal occult blood testing (FOBT) to screen for colorectal cancer (CRC), but corresponding screening guidelines do not specify how individuals with a prior false positive FOBT result (fpFOBT) should be managed in terms of subsequent CRC screening. Accordingly, we conducted a decision analysis to compare different strategies for managing such individuals.
We used a previously developed CRC microsimulation model, SimCRC, to calculate life-years and the lifetime number of colonoscopies (as a measure of required resources) for a cohort of 50-year-olds to whom FOBT-based CRC screening is offered annually from age 50 to 75. We compared three management strategies for individuals with a prior fpFOBT: 1) resume screening in 10 years with 10-yearly colonoscopy (SwitchCol_long); 2) resume screening in 1 year with annual FOBT (ContinueFOBT_Short); and 3) resume screening in 10 years (i.e., the recommended interval following a negative colonoscopy) with annual FOBT (ContinueFOBT_long). We performed sensitivity analyses on various parameters and assumptions.
When using different management strategies for individuals with a prior fpFOBT the variation in the number of life-years gained relative to no screening was less than 2%, while the variation in the lifetime number of colonoscopies was 23% (percentages are calculated as the maximum difference across strategies divided by the lowest number across strategies). The ContinueFOBT_long strategy showed the lowest lifetime number of colonoscopies per life-year gained even when key assumptions were varied.
In conclusion, the ContinueFOBT_long strategy was advantageous regarding both clinical benefit and required resources. Specifying an appropriate management strategy for individuals with a prior fpFOBT may substantially reduce required resources within a FOBT-based CRC screening program without limiting its effectiveness.
doi:10.1002/ijc.27463
PMCID: PMC3693764  PMID: 22307927
colorectal cancer; screening; fecal occult blood
4.  Lymph Node Evaluation for Colon Cancer in an Era of Quality Guidelines: Who Improves? 
Journal of Oncology Practice  2013;9(4):e164-e171.
The implementation of lymph node evaluation guidelines has been accepted gradually into practice but has been adopted more quickly in the treatment of higher risk patients.
Introduction:
In the 1990s, several organizations began recommending evaluation of ≥ 12 lymph nodes during colon resection because of its association with improved survival. We examined the practice implications of multispecialty quality guidelines over the past 20 years recommending evaluation of ≥ 12 lymph nodes during colon resection for adequate staging.
Materials and Methods:
We used the 1988 to 2009 Surveillance, Epidemiology, and End Results program to conduct a retrospective observational cohort study of 90,203 surgically treated patients with colon cancer. We used Cochran-Armitage tests to examine trends in lymph node examination over time and multivariate logistic regression to identify patient characteristics associated with guideline-recommended lymph node evaluation.
Results:
The introduction of practice guidelines was associated with gradual increases in guideline-recommended lymph node evaluation. From 1988 to 1990, 34% of patients had > 12 lymph nodes evaluated, increasing to 38% in 1994 to 1996 and to > 75% from 2006 to 2009. Younger, white patients and those with more-extensive bowel penetration (T3/4 nonmetastatic) and high tumor grade saw more-rapid increases in lymph node evaluation (P < .001). Multivariate analyses demonstrated a significant interaction between year of diagnosis and both T stage and grade, indicating that those with higher T stage and higher grade were more likely to receive guideline-recommended care earlier.
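A logistic regression with diagnosis-year-by-stage and diagnosis-year-by-grade interactions, as described in the Methods, could be specified along the following lines. The data file and column names (colon_cohort.csv, adequate_nodes, year_dx, t_stage, grade, age, race) are hypothetical stand-ins; the snippet only illustrates the model form, not the SEER analysis itself.

```python
# Sketch of a multivariate logistic regression with diagnosis-year x T-stage
# and diagnosis-year x grade interactions. All file and column names refer to
# a hypothetical analysis data set, not to SEER itself.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("colon_cohort.csv")  # hypothetical extract: one row per patient

model = smf.logit(
    "adequate_nodes ~ year_dx * C(t_stage) + year_dx * C(grade) + age + C(race)",
    data=df,
).fit()
print(model.summary())
```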
Conclusion:
The implementation of lymph node evaluation guidelines was accepted gradually into practice but adopted more quickly among higher risk patients. By identifying patients who are least likely to receive guideline-recommended care, these findings present a starting point for promoting targeted improvements in cancer care and further understanding underlying contributors to these disparities.
doi:10.1200/JOP.2012.000812
PMCID: PMC3710184  PMID: 23942934
5.  Contribution of H. pylori and Smoking Trends to US Incidence of Intestinal-Type Noncardia Gastric Adenocarcinoma: A Microsimulation Model 
PLoS Medicine  2013;10(5):e1001451.
Jennifer Yeh and colleagues examine the contribution of Helicobacter pylori and smoking trends to the incidence of past and future intestinal-type noncardia gastric adenocarcinoma.
Please see later in the article for the Editors' Summary
Background
Although gastric cancer has declined dramatically in the US, the disease remains the second leading cause of cancer mortality worldwide. A better understanding of reasons for the decline can provide important insights into effective preventive strategies. We sought to estimate the contribution of risk factor trends on past and future intestinal-type noncardia gastric adenocarcinoma (NCGA) incidence.
Methods and Findings
We developed a population-based microsimulation model of intestinal-type NCGA and calibrated it to US epidemiologic data on precancerous lesions and cancer. The model explicitly incorporated the impact of Helicobacter pylori and smoking on disease natural history, for which birth cohort-specific trends were derived from the National Health and Nutrition Examination Survey (NHANES) and National Health Interview Survey (NHIS). Between 1978 and 2008, the model estimated that intestinal-type NCGA incidence declined 60% from 11.0 to 4.4 per 100,000 men, a <3% discrepancy from national statistics. H. pylori and smoking trends combined accounted for 47% (range = 30%–58%) of the observed decline. With no tobacco control, incidence would have declined only 56%, suggesting that lower smoking initiation and higher cessation rates observed after the 1960s accelerated the relative decline in cancer incidence by 7% (range = 0%–21%). With continued risk factor trends, incidence is projected to decline an additional 47% between 2008 and 2040, the majority of which will be attributable to H. pylori and smoking (81%; range = 61%–100%). Limitations include modeling all other risk factors as a single combined factor and restricting the analysis to men.
Conclusions
Trends in modifiable risk factors explain a significant proportion of the decline of intestinal-type NCGA incidence in the US, and are projected to continue. Although past tobacco control efforts have hastened the decline, full benefits will take decades to be realized, and further discouragement of smoking and reduction of H. pylori should be priorities for gastric cancer control efforts.
Editors' Summary
Background
Cancer of the stomach (gastric cancer) is responsible for a tenth of all cancer deaths worldwide, with an estimated 700,000 people dying from this malignancy every year, making it the second most common cause of global cancer-related deaths after lung cancer. Unfortunately, projections of the global burden of this disease estimate that deaths from gastric cancer will double by 2030. Gastric cancer has a poor prognosis, with only a quarter of people with this type of cancer surviving more than five years. In order to reduce deaths, it is therefore of utmost importance to identify and reduce the modifiable risk factors associated with gastric cancer. Smoking and chronic gastric infection with the bacterium Helicobacter pylori (H. pylori) are known to be two common modifiable risk factors for gastric cancer, particularly for a type of gastric cancer called intestinal-type noncardia gastric adenocarcinoma (NCGA), which occurs at the distal end of the stomach and accounts for more than half of all cases of gastric cancer in US men.
Why Was This Study Done?
H. pylori initiates a precancerous process, and so infection with this bacterium can increase intestinal-type NCGA risk by as much as 6-fold, while smoking doubles cancer risk by accelerating the progression of existing lesions. Changes in these two risk factors over the past century (especially following the US Surgeon General's Report on Smoking and Health in 1964) have led to a dramatic decline in the rates of gastric cancer in US men. Understanding the combined effects of underlying risk factor trends on health outcomes for intestinal-type NCGA at the population level can help to predict future cancer trends and burden in the US. So in this study, the researchers used a mathematical model to estimate the contribution of H. pylori and smoking trends to the decline in intestinal-type NCGA incidence in US men.
What Did the Researchers Do and Find?
The researchers used birth cohorts derived from data in two national databases, the National Health and Nutrition Examination Survey (NHANES) and the National Health Interview Survey (NHIS), to develop a population-based model of intestinal-type NCGA. To ensure model predictions were consistent with epidemiologic data, the researchers calibrated the model to data on cancer and precancerous lesions and, using the model, projected population outcomes between 1978 and 2040 for a base-case scenario (in which all risk factor trends were allowed to vary over time). The researchers then evaluated alternative risk factor scenarios to provide insights on the potential benefit of past and future efforts to control gastric cancer.
Using these methods, the researchers estimated that the incidence of intestinal-type NCGA (standardized by age) fell from 11.0 to 4.4 per 100,000 men between 1978 and 2008, a drop of 60%. When the researchers incorporated only H. pylori prevalence and smoking trends into the model (both of which fell dramatically over the time period) they found that intestinal-type NCGA incidence fell by only 28% (from 12.7 to 9.2 per 100,000 men), suggesting that H. pylori and smoking trends are responsible for 47% of the observed decline. The researchers found that H. pylori trends alone were responsible for 43% of the decrease in cancer but smoking trends were responsible for only a 3% drop. The researchers also found evidence that after the 1960s, observed trends in lower smoking initiation and higher cessation accelerated the decline in intestinal-type NCGA incidence by 7%. Finally, the researchers found that intestinal-type NCGA incidence is projected to decline an additional 47% between 2008 and 2040 (4.4 to 2.3 per 100,000 men) with H. pylori and smoking trends accounting for more than 80% of the observed fall.
What Do These Findings Mean?
These findings suggest that, combined with a fall in smoking rates, almost half of the observed fall in rates of intestinal-type NCGA cancer in US men between 1978 and 2008 was attributable to the decline in infection rates of H. pylori. Rates of this cancer are projected to continue to fall through 2040, with trends in both H. pylori infection and smoking accounting for more than 80% of the projected fall, highlighting the importance of the relationship between risk factor changes over time and long-term reductions in cancer rates. This study is limited by the assumptions made in the model and in that it examined only one type of gastric cancer and excluded women. Nevertheless, this modeling study highlights that continued efforts to reduce rates of smoking and H. pylori infection will help to reduce rates of gastric cancer.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001451.
The National Cancer Institute gives detailed information about gastric cancer
The Gastric Cancer Foundation has information on gastric cancer for patients and professionals
Cancer Research UK explains types of gastric cancer
doi:10.1371/journal.pmed.1001451
PMCID: PMC3660292  PMID: 23700390
6.  Contribution of screening and survival differences to racial disparities in colorectal cancer rates 
Background
Considerable disparities exist in colorectal cancer (CRC) incidence and mortality rates between blacks and whites in the US. We estimated how much of these disparities could be explained by differences in CRC screening and stage-specific relative CRC survival.
Methods
We used the MISCAN-Colon microsimulation model to estimate CRC incidence and mortality rates in blacks aged 50 years and older from 1975 to 2007, assuming they had: 1) the same trends in screening rates as whites instead of observed screening rates (incidence and mortality); 2) the same trends in stage-specific relative CRC survival rates as whites instead of observed rates (mortality only); and 3) a combination of both. The racial disparities in CRC incidence and mortality rates attributable to differences in screening and/or stage-specific relative CRC survival were then calculated by comparing rates from these scenarios to the observed black rates.
Results
Differences in screening accounted for 42% of the disparity in CRC incidence and 19% of the disparity in CRC mortality between blacks and whites. Differences in stage-specific relative CRC survival accounted for 36% of the disparity in CRC mortality. Together, screening and survival explained a little over 50% of the disparity in CRC mortality between blacks and whites.
Conclusion
Differences in screening and relative CRC survival are responsible for a considerable proportion of the observed disparities in CRC incidence and mortality rates between blacks and whites.
Impact
Enabling blacks to achieve equal access to care as whites could substantially reduce the racial disparities in CRC burden.
doi:10.1158/1055-9965.EPI-12-0023
PMCID: PMC3531983  PMID: 22514249
Colorectal neoplasms; healthcare disparities; early detection of cancer; survival rate; computer simulation
7.  Stool DNA Testing to Screen for Colorectal Cancer in the Medicare Population 
Annals of internal medicine  2010;153(6):368-377.
Background
Centers for Medicare and Medicaid Services (CMS) considered whether to reimburse stool DNA testing for colorectal cancer screening among Medicare enrollees.
Objective
To evaluate the conditions under which stool DNA testing could be cost-effective compared with the colorectal cancer screening tests currently reimbursed by CMS.
Design
Comparative microsimulation modeling study using two independently-developed models.
Data Sources
Derived from literature.
Target Population
65-year-old (Medicare-eligible) individuals; 50-year-old individuals in a sensitivity analysis.
Time Horizon
Lifetime.
Perspective
Third-party payer.
Interventions
Stool DNA test every 3 or 5 years in comparison to currently-recommended colorectal cancer screening strategies.
Outcome Measures
Life expectancy, lifetime costs, incremental cost-effectiveness ratios, threshold costs.
Results of Base-Case Analysis
Assuming a cost of $350 per test, strategies of stool DNA testing every 3 or 5 years yielded fewer life-years and higher costs than the currently recommended colorectal cancer screening strategies.
Results of Threshold Analysis
Screening with the stool DNA test would be cost-effective at a per-test cost of $40 to $60 for stool DNA testing every 3 years, depending on the simulation model used. There were no levels of sensitivity and specificity at which stool DNA testing would be cost-effective at its current cost of $350 per test. Stool DNA testing at 3-year intervals would be cost-effective at a cost of $350 per test if relative adherence with stool DNA testing were at least 50% better than with other screening tests.
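The threshold analysis amounts to solving for the per-test price at which a strategy's incremental cost-effectiveness ratio reaches an acceptable level. A minimal sketch of that algebra is below; the willingness-to-pay value and incremental quantities are hypothetical placeholders, not outputs of the two models.

```python
# Illustrative threshold-cost calculation (hypothetical numbers, not the models' outputs).
# If a strategy's incremental cost over the comparator is (n_tests * c + other_costs)
# and it gains delta_ly life-years, the per-test cost c at which its ICER equals a
# willingness-to-pay (WTP) threshold solves (n_tests * c + other_costs) / delta_ly = WTP.
def threshold_test_cost(wtp, delta_ly, other_costs, n_tests):
    return (wtp * delta_ly - other_costs) / n_tests

# Hypothetical inputs per 1,000 screened persons:
print(threshold_test_cost(wtp=50_000, delta_ly=20, other_costs=600_000, n_tests=8_000))
```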
Results of Sensitivity Analysis
None of the above mentioned results changed significantly when considering a 50-year old cohort.
Limitations
We did not model pathways other than the traditional adenoma-carcinoma sequence.
Conclusions
Only if a significant reduction can be made to the test cost or if its availability would entice a large fraction of otherwise unscreened persons to be screened will stool DNA testing be a cost-effective alternative for colorectal cancer screening.
Primary Funding Source
Agency for Healthcare Research and Quality
doi:10.1059/0003-4819-153-6-201009210-00004
PMCID: PMC3578600  PMID: 20855801
8.  Clarifying differences in natural history between models of screening: The case of colorectal cancer 
Background
Microsimulation models are important decision support tools for screening. However, their complexity creates a barrier, making it difficult to understand models and, as a result, limiting realization of their full potential. Therefore, it is important to develop documentation that clarifies assumptions. We demonstrate this problem and explore a solution for the natural history, using three independently developed colorectal cancer screening models.
Methods
We begin by projecting the cost-effectiveness of colonoscopy screening for the three microsimulation models. Next, we provide a conventional presentation of each of them, including information that would usually be published with a decision analysis. Finally, for the three models, we provide the simulated reduction in clinical cancer incidence following a one-time complete removal of adenomas and preclinical cancers. We denote this measure as maximum clinical incidence reduction (MCLIR).
Results
There are considerable between-model differences in projected effectiveness. Conventional documentation describes model structure and associated parameter values. Given only this information, it is very difficult to compare models, largely because differences in structure make parameter values incomparable. In contrast, the MCLIR clearly shows the differences in assumptions on the key issue of the natural history: the dwell time of progressive preclinical disease, explaining between-model differences in projected effectiveness.
Conclusions
The simulated “maximum clinical incidence reduction” adds insight into dwell time, the critical characteristic of the natural history of disease, and how it differs between models. Inclusion of the MCLIR as a standard description would clarify the implications of assumptions for models applied to screening questions.
doi:10.1177/0272989X11408915
PMCID: PMC3531980  PMID: 21673187
9.  Modeling the Potential Impact of a Prescription Drug Copayment Increase on the Adult Asthmatic Medicaid Population 
Objectives
The Commonwealth of Massachusetts increased the copayment for prescription drugs by $1.50 for Medicaid (MassHealth) beneficiaries in 2003. We sought to determine the likely health outcomes and cost shifts attributable to this copayment increase using the example of inhaled corticosteroids (ICS) use among adult asthmatic Medicaid beneficiaries.
Method
We compared the predicted costs and health outcomes projected over a 1-year time horizon with and without the increase in copayment from the perspectives of MassHealth, providers, pharmacies, and MassHealth beneficiaries by employing a decision-analytic simulation model.
Results
In a target population of 17,500 adult asthmatics, increasing copayments from 50¢ to $2.00 would result in an additional 646 acute events per year, caused by increased drug nonadherence. Annual combined net savings for the state and federal governments would be $2.10 million. Projected MassHealth savings are attributable to both decreased drug utilization and lower pharmacy reimbursement rates; these more than offset the additional costs of more frequent acute exacerbations. Pharmacies would lose $1.98 million in net revenues, MassHealth beneficiaries would pay an additional $0.28 million, and providers would receive an additional $0.16 million.
Conclusion
Over its first year of implementation, the increase in the prescription drug copayment is expected to produce more frequent acute exacerbations among asthmatic MassHealth beneficiaries who use ICS and to shift the financial burden from government to other stakeholders.
doi:10.1111/j.1524-4733.2007.00219.x
PMCID: PMC3476042  PMID: 18237365
asthma; copayment; medicaid; prescription drug
10.  Cost-effectiveness of omalizumab in adults with severe asthma: Results from the Asthma Policy Model 
Background
Omalizumab (trade name Xolair) is approved by the US Food and Drug Administration for treatment of moderate-to-severe allergic asthma. Given the high acquisition cost of omalizumab, its role and cost-effectiveness in disease management require definition.
Objective
We sought to identify the clinical and economic circumstances under which omalizumab might or might not be a cost-effective option by using a mathematical model.
Methods
We merged published data on clinical and economic outcomes (including acute event incidence, frequency/severity of hospitalizations, and health-related quality of life) to project 10-year costs, quality-adjusted life years (QALYs), and cost-effectiveness of treatment with omalizumab in addition to inhaled corticosteroids. Sensitivity analyses were conducted by using input data ranges from a variety of sources (published clinical trials and observational databases).
Results
For patients with baseline acute event rates, omalizumab conferred an additional 1.7 quality-adjusted months at an incremental cost of $131,000 over a 10-year planning horizon, implying a cost-effectiveness ratio of $821,000 per QALY gained. For patients with 5 times the baseline acute event rate, the cost-effectiveness ratio was $491,000 per QALY gained. The projected cost-effectiveness ratio could fall within a range of other programs that are widely considered to be cost-effective if the cost of omalizumab decreases to less than $200.
Conclusion
Omalizumab is not cost-effective for most patients with severe asthma. The projected cost-effectiveness ratios could fall within a favorable range if the cost of omalizumab decreases significantly.
Clinical implications
Based on the high cost of omalizumab, it is especially important that clinicians explore alternative medications for asthma before initiating omalizumab.
doi:10.1016/j.jaci.2007.07.055
PMCID: PMC3476046  PMID: 17904628
Omalizumab; cost-effectiveness; asthma; anti-IgE
11.  Radiation-related cancer risks from CT colonography screening: a risk-benefit analysis 
Objective
The purpose of this study was to estimate the ratio of cancers prevented to induced (benefit-risk ratio) for CT colonography screening every five years from age 50-80.
Materials and methods
Radiation-related cancer risk was estimated using risk projection models based on the National Research Council's BEIR VII committee's report and screening protocols from the American College of Radiology Imaging Network's National CT Colonography Trial. Uncertainty limits (UL) were estimated using Monte-Carlo simulation methods. Comparative modelling with three colorectal cancer microsimulation models was used to estimate the potential reduction in colorectal cancer cases and deaths.
Results
The estimated mean effective dose per CT colonography screen was 8 mSv for females and 7 mSv for males. The estimated number of radiation-related cancers from CT colonography screening every 5 years from age 50 to 80 was 150 cases per 100,000 individuals (95% UL: 80-280) for males and females. The estimated number of colorectal cancers prevented by CT colonography screening every 5 years from age 50 to 80 ranged across the three microsimulation models from 3580 to 5190 per 100,000, yielding a benefit-risk ratio that varied from 24:1 (95% UL: 13:1-45:1) to 35:1 (95% UL: 19:1-65:1). The benefit-risk ratio for cancer deaths was even higher than the ratio for cancer cases. Inclusion of radiation-related cancer risks from CT scans performed to follow up extracolonic findings did not materially alter the results.
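The benefit-risk ratios quoted above are the modeled cancers prevented divided by the estimated radiation-induced cancers; a quick check using the figures in this abstract (rounded to whole ratios):

```python
# Benefit-risk ratio = cancers prevented / radiation-induced cancers,
# per 100,000 individuals screened every 5 years from age 50 to 80.
induced = 150                                 # radiation-related cancers (mean estimate)
prevented_low, prevented_high = 3580, 5190    # range across the three microsimulation models

print(round(prevented_low / induced))   # ~24 -> 24:1
print(round(prevented_high / induced))  # ~35 -> 35:1
```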
Conclusions
Concerns have been raised about recommending CT colonography as a routine screening tool because of the potential harms, including the radiation risks. Based on these models the benefits from CT colonography screening every five years from age 50-80 clearly outweigh the radiation risks.
doi:10.2214/AJR.10.4907
PMCID: PMC3470483  PMID: 21427330
12.  A systematic comparison of microsimulation models of colorectal cancer: the role of assumptions about adenoma progression 
Background
As the complexity of microsimulation models increases, concerns about model transparency are heightened.
Methods
We conducted model “experiments” to explore the impact of variations in “deep” model parameters using three colorectal cancer (CRC) models. All natural history models were calibrated to match observed data on adenoma prevalence and cancer incidence, but varied in their underlying specification of the adenoma-carcinoma process. We projected CRC incidence among individuals with an underlying adenoma or preclinical cancer vs. those without any underlying condition and examined the impact of removing adenomas. We calculated the percentage of simulated CRC cases arising from adenomas that developed within 10 or 20 years prior to cancer diagnosis, and estimated dwell time – defined as the time from the development of an adenoma to symptom-detected cancer in the absence of screening among individuals with a CRC diagnosis.
Results
The 20-year CRC incidence among 55-year-old individuals with an adenoma or preclinical cancer was 7 to 75 times greater than in the condition-free group. The removal of all adenomas among the subgroup with an underlying adenoma or cancer resulted in a reduction of 30% to 89% in cumulative incidence. Among CRCs diagnosed at age 65, the proportion arising from adenomas formed within 10 years ranged between 4% and 67%. The mean dwell time varied from 10.6 years to 25.8 years.
Conclusions
Models that all match observed data on adenoma prevalence and cancer incidence can produce quite different dwell times and very different answers with respect to the effectiveness of interventions. When conducting applied analyses to inform policy, using multiple models provides a sensitivity analysis on key (unobserved) “deep” model parameters and can provide guidance about specific areas in need of additional research and validation.
doi:10.1177/0272989X11408730
PMCID: PMC3424513  PMID: 21673186
13.  Projecting the Clinical Benefits of Adjuvant Radiotherapy versus Observation and Selective Salvage Radiotherapy after Radical Prostatectomy: A Decision Analysis 
Our purpose was to project and compare clinical and quality-adjusted life year (QALY) outcomes of adjuvant radiotherapy (ART) vs. salvage radiotherapy (SRT) after radical prostatectomy for men with locally advanced prostate cancer.
We constructed a Markov model to simulate the randomized studies of observation vs. ART, assuming 75% of observation patients would receive SRT at prostate specific antigen (PSA) recurrence. Transition probabilities and utility inputs were drawn from randomized trials of ART and cohort studies of SRT. We projected 10-year PSA recurrence-free survival, metastasis-free survival and overall survival.
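A Markov cohort model of this general kind can be illustrated with a short transition-matrix trace. The states, annual cycle, and transition probabilities below are hypothetical placeholders chosen for illustration; they are not the inputs drawn from the ART trials and SRT cohort studies.

```python
# Minimal Markov cohort sketch (hypothetical numbers, annual cycles).
# States: no evidence of disease (NED), PSA recurrence, metastasis, dead.
import numpy as np

P = np.array([
    # to:  NED   PSA rec  Mets   Dead
    [0.92, 0.05,  0.01,  0.02],   # from NED
    [0.00, 0.88,  0.09,  0.03],   # from PSA recurrence
    [0.00, 0.00,  0.80,  0.20],   # from metastasis
    [0.00, 0.00,  0.00,  1.00],   # from dead (absorbing)
])

state = np.array([1.0, 0.0, 0.0, 0.0])   # cohort starts post-prostatectomy, NED
for year in range(10):
    state = state @ P                     # advance the cohort one annual cycle

print(f"10-year overall survival ~ {1 - state[3]:.2f}")
```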
We found that observation with selective SRT yielded slightly worse outcomes than ART for post-RT PSA recurrence-free survival (47% and 52%), metastasis-free survival (69% and 70%) and overall survival (72% and 73%). Findings were robust to sensitivity analyses. After adjusting for the disutility of RT, observation plus SRT yielded better QALYs at 10 years than ART (6.80 and 6.13 QALYs).
Thus, observation plus SRT may be optimal for men likely to comply with surveillance who wish to minimize treatment side effects. These findings reflect outcomes for the average patient given the current level of evidence and are meant to help inform current decision-making as we await future clinical studies of comparative effectiveness.
doi:10.1038/pcan.2011.27
PMCID: PMC3156938  PMID: 21691281
prostate cancer; radiotherapy; decision analysis
14.  Cost-Effectiveness of Computed Tomographic Colonography Screening for Colorectal Cancer in the Medicare Population 
Background
The Centers for Medicare and Medicaid Services (CMS) considered whether to reimburse computed tomographic colonography (CTC) for colorectal cancer screening of Medicare enrollees. To help inform its decision, we evaluated the reimbursement rate at which CTC screening could be cost-effective compared with the colorectal cancer screening tests that are currently reimbursed by CMS and are included in most colorectal cancer screening guidelines, namely annual fecal occult blood test (FOBT), flexible sigmoidoscopy every 5 years, flexible sigmoidoscopy every 5 years in conjunction with annual FOBT, and colonoscopy every 10 years.
Methods
We used three independently developed microsimulation models to assess the health outcomes and costs associated with CTC screening and with currently reimbursed colorectal cancer screening tests among the average-risk Medicare population. We assumed that CTC was performed every 5 years (using test characteristics from either a Department of Defense CTC study or the National CTC Trial) and that individuals with findings of 6 mm or larger were referred to colonoscopy. We computed incremental cost-effectiveness ratios for the currently reimbursed screening tests and calculated the maximum cost per scan (ie, the threshold cost) for the CTC strategy to lie on the efficient frontier. Sensitivity analyses were performed on key parameters and assumptions.
Results
Assuming perfect adherence with all tests, the undiscounted number of life-years gained from CTC screening ranged from 143 to 178 per 1000 65-year-olds, which was slightly less than the number of life-years gained from 10-yearly colonoscopy (152–185 per 1000 65-year-olds) and comparable to that from 5-yearly sigmoidoscopy with annual FOBT (149–177 per 1000 65-year-olds). If CTC screening were reimbursed at $488 per scan (slightly less than the reimbursement for a colonoscopy without polypectomy), it would be the most costly strategy. CTC screening could be cost-effective at $108–$205 per scan, depending on the microsimulation model used. Sensitivity analyses showed that if relative adherence to CTC screening were 25% higher than adherence to other tests, it could be cost-effective if reimbursed at $488 per scan.
Conclusions
CTC could be a cost-effective option for colorectal cancer screening among Medicare enrollees if the reimbursement rate per scan is substantially less than that for colonoscopy or if a large proportion of otherwise unscreened persons were to undergo screening by CTC.
doi:10.1093/jnci/djq242
PMCID: PMC2923219  PMID: 20664028
15.  Cost-Effectiveness of Treatment and Endoscopic Surveillance of Precancerous Lesions to Prevent Gastric Cancer 
Cancer  2010;116(12):2941-2953.
Background
While surveillance for Barrett’s esophagus and other gastrointestinal precancerous conditions is recommended, no analogous guidelines exist for gastric lesions. We sought to estimate the clinical benefits and cost-effectiveness of treatment and endoscopic surveillance to prevent gastric cancer.
Methods
We developed a state-transition decision model for a cohort of U.S. men with a recent incidental diagnosis of gastric precancerous lesions (dysplasia, intestinal metaplasia, or atrophy). Strategies included (1) no treatment or surveillance and (2) referral for treatment and surveillance, and varied by treatment for dysplastic and cancerous lesions (surgery or endoscopic mucosal resection [EMR]) and surveillance frequency (none, or every 10, 5, or 1 years). We restrict the term ‘post-treatment surveillance’ to surveillance in individuals after treatment. Data were based on published literature and databases. Outcomes included lifetime gastric cancer risk, quality-adjusted life expectancy, lifetime costs, and incremental cost-effectiveness ratios.
Results
For a 50-year-old cohort of men with dysplasia, the lifetime gastric cancer risk was 5.9%. EMR with annual surveillance reduced lifetime cancer risk by 90% and cost $39,800 per quality-adjusted life-year (QALY). Addition of post-treatment surveillance every 10 years provided little incremental benefit (~5%) but cost >$1 million per QALY. Results were most sensitive to surgical risks and the proportion of lesions completely removed with EMR.
Conclusions
EMR with surveillance every 1 to 5 years for gastric dysplasia is promising for secondary cancer prevention, and has a cost-effectiveness ratio that would be considered attractive in the U.S. Endoscopic surveillance of less advanced lesions does not appear to be cost-effective, except possibly for immigrants from high-risk countries.
doi:10.1002/cncr.25030
PMCID: PMC2946062  PMID: 20564399
gastric cancer; surveillance; secondary prevention; cost-effectiveness; outcomes research
16.  Cost-Effectiveness of Cervical Cancer Screening With Human Papillomavirus DNA Testing and HPV-16,18 Vaccination 
Background
The availability of human papillomavirus (HPV) DNA testing and vaccination against HPV types 16 and 18 (HPV-16,18) motivates questions about the cost-effectiveness of cervical cancer prevention in the United States for unvaccinated older women and for girls eligible for vaccination.
Methods
An empirically calibrated model was used to assess the quality-adjusted life years (QALYs), lifetime costs, and incremental cost-effectiveness ratios (2004 US dollars per QALY) of screening, vaccination of preadolescent girls, and vaccination combined with screening. Screening varied by initiation age (18, 21, or 25 years), interval (every 1, 2, 3, or 5 years), and test (HPV DNA testing of cervical specimens or cytologic evaluation of cervical cells with a Pap test). Testing strategies included: 1) cytology followed by HPV DNA testing for equivocal cytologic results (cytology with HPV test triage); 2) HPV DNA testing followed by cytology for positive HPV DNA results (HPV test with cytology triage); and 3) combined HPV DNA testing and cytology. Strategies were permitted to switch once at age 25, 30, or 35 years.
Results
For unvaccinated women, triennial cytology with HPV test triage, beginning by age 21 years and switching to HPV testing with cytology triage at age 30 years, cost $78,000 per QALY compared with the next best strategy. For girls vaccinated before age 12 years, this same strategy, beginning at age 25 years and switching at age 35 years, cost $41,000 per QALY with screening every 5 years and $188,000 per QALY with triennial screening, each compared with the next best strategy. These strategies were more effective and cost-effective than screening women of all ages with cytology alone or cytology with HPV triage annually or biennially.
Conclusions
For both vaccinated and unvaccinated women, age-based screening by use of HPV DNA testing as a triage test for equivocal results in younger women and as a primary screening test in older women is expected to be more cost-effective than current screening recommendations.
doi:10.1093/jnci/djn019
PMCID: PMC3099548  PMID: 18314477
17.  BIAS ASSOCIATED WITH FAILING TO INCORPORATE DEPENDENCE ON EVENT HISTORY IN MARKOV MODELS 
Purpose
When using state-transition Markov models to simulate risk of recurrent events over time, incorporating dependence on higher numbers of prior episodes can increase model complexity, yet failing to capture this event history may bias model outcomes. This analysis assessed the tradeoffs between model bias and complexity when evaluating risks of recurrent events in Markov models.
Methods
We developed a generic episode/relapse Markov cohort model, defining bias as the percentage change in events prevented with two hypothetical interventions (prevention and treatment) when incorporating 0–9 prior episodes in relapse risk, versus a model with 10 such episodes. We evaluated magnitude and sign of bias as a function of event and recovery risks, disease-specific mortality, and risk function.
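One common way to let relapse risk depend on event history in a cohort model is to expand the well state by the number of prior episodes, capped at a maximum tracked count. The sketch below illustrates that device and how truncating the tracked history shifts projected episode counts; the risks are hypothetical, episodes are assumed to resolve within one cycle to keep the sketch short, and this is not the paper's generic episode/relapse model or its bias measure.

```python
# Sketch: track the 'well' state by number of prior episodes (capped at max_tracked)
# so that relapse risk can depend on event history. All numbers are hypothetical.
def expected_episodes(max_tracked, base_risk=0.10, increment=0.03,
                      mortality=0.02, cycles=50):
    well = [1.0] + [0.0] * max_tracked   # well[i] = fraction alive with i prior episodes
    episodes = 0.0
    for _ in range(cycles):
        nxt = [0.0] * (max_tracked + 1)
        for i, w in enumerate(well):
            risk = base_risk + increment * i          # history-dependent relapse risk
            alive = w * (1 - mortality)
            episodes += alive * risk
            nxt[min(i + 1, max_tracked)] += alive * risk   # relapse -> one more prior episode
            nxt[i] += alive * (1 - risk)                   # no relapse this cycle
        well = nxt
    return episodes

# Effect of truncating tracked event history, relative to tracking 10 prior episodes.
full = expected_episodes(max_tracked=10)
for k in (0, 2, 4):
    diff = 100 * (expected_episodes(max_tracked=k) - full) / full
    print(f"tracking {k} prior episodes: {diff:+.1f}% difference in projected episodes")
```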
Results
Bias was positive in the base case for a prevention strategy, indicating that failing to fully incorporate dependence on event history overestimated the predicted impact of prevention. For treatment, the bias was negative, indicating an underestimated benefit. Bias approached zero as the number of tracked prior episodes increased, and average bias over 10 tracked episodes was greater with exponential than with linear functions of relapse risk and with treatment than with prevention strategies. With linear and exponential risk functions, absolute bias reached 33% and 78%, respectively, in prevention, and 52% and 85% in treatment.
Conclusion
Failing to incorporate dependence on prior event history in subsequent relapse risk in Markov models can greatly impact model outcomes, overestimating the impact of prevention and treatment strategies by up to 85%, and underestimating impact in some treatment models by up to 20%. When at least four prior episodes are incorporated, bias does not exceed 26% in prevention or 11% in treatment.
doi:10.1177/0272989X10363480
PMCID: PMC3086820  PMID: 20400728
18.  Alzheimer's Disease-Like Phenotype Associated With the c.154delA Mutation in Progranulin 
Archives of neurology  2010;67(2):171-177.
Objective
To characterize a kindred with a familial neurodegenerative disorder associated with a mutation in progranulin (PGRN), emphasizing the unique clinical features in this kindred.
Design
Clinical, radiologic, pathologic, and genetic characterization of a kindred with a familial neurodegenerative disorder.
Setting
Multispecialty group academic medical center.
Patients
Affected members of a kindred with dementia, with or without parkinsonism, associated with a unique mutation in PGRN.
Main Outcome Measure
Genotype-phenotype correlation.
Results
Ten affected individuals were identified, among whom six presented with initial amnestic complaints resulting in initial diagnoses of AD or amnestic mild cognitive impairment (MCI). A minority of individuals presented with features characteristic of FTD. The ages of onset of generation II (mean 75.8 years, range 69-80 years) were far greater than those of generation III (mean 60.7 years, range 55-66 years). The pattern of cerebral atrophy varied widely among affected individuals. Neuropathology in six individuals showed frontotemporal lobar degeneration with ubiquitin positive neuronal cytoplasmic and intranuclear inclusions (FTLD-U + NII). PGRN analysis revealed a single base pair deletion in exon 2 (c.154delA), causing a frameshift (p.Thr52Hisfs×2) and therefore creation of a premature termination codon and likely null allele.
Conclusions
We describe a large kindred in which the majority of affected individuals had clinical presentations resembling AD or amnestic MCI in association with a mutation in PGRN and underlying FTLD-U + NII neuropathology. This is in distinct contrast to previously reported kindreds, where clinical presentations have typically been within the spectrum of FTLD. The basis for the large difference in age of onset between generations will require further study.
doi:10.1001/archneurol.2010.113
PMCID: PMC2902004  PMID: 20142525
MRI; progranulin; frontotemporal dementia; PGRN
19.  Effects of Helicobacter pylori infection and smoking on gastric cancer incidence in China: a population-level analysis of trends and projections 
Cancer causes & control : CCC  2009;20(10):2021-2029.
Objective
Although gastric cancer incidence is declining in China, trends may differ from historical patterns in developed countries. Our aim was to (1) retrospectively estimate the effects of Helicobacter pylori (H. pylori) and smoking on past gastric cancer incidence and (2) project how interventions on these two risk factors can reduce future incidence.
Methods
We used a population-based model of intestinal-type gastric cancer to estimate gastric cancer incidence between 1985 and 2050. Disease and risk factor data in the model were from community-based epidemiological studies and national prevalence surveys.
Results
Between 1985 and 2005, age-standardized gastric cancer incidence among Chinese men declined from 30.8 to 27.2 per 100,000 (12%); trends in H. pylori and smoking prevalences accounted for >30% of overall decline. If past risk factor trends continue, gastric cancer incidence will decline an additional 30% by 2050. Yet, annual cases will increase from 116,000 to 201,000 due to population growth and aging. Assuming that H. pylori prevention/treatment and tobacco control are implemented in 2010, the decline in gastric cancer incidence is projected to increase to 33% with universal H. pylori treatment for 20-year-olds, 42% for a hypothetical childhood H. pylori vaccine, and 34% for aggressive tobacco control.
Conclusions
The decline in gastric cancer incidence has been slower than in developed countries and will be offset by population growth and aging. Public health interventions should be implemented to reduce the total number of cases.
doi:10.1007/s10552-009-9397-9
PMCID: PMC2904855  PMID: 19642005
Gastric cancer; Helicobacter pylori; Smoking; Cancer prevention; China
20.  Cost Effectiveness and Screening Interval of Lipid Screening in Hodgkin's Lymphoma Survivors 
Journal of Clinical Oncology  2009;27(32):5383-5389.
Purpose
Survivors of Hodgkin's lymphoma (HL) who received mediastinal irradiation have an increased risk of coronary heart disease. We evaluated the cost effectiveness of lipid screening in survivors of HL and compared different screening intervals.
Methods
We developed a decision-analytic model to evaluate lipid screening in a hypothetical cohort of 30-year-old survivors of HL who survived 5 years after mediastinal irradiation. We compared the following strategies: no screening, and screening at 1-, 3-, 5-, or 7-year intervals. Screen-positive patients were treated with statins. Markov models were used to calculate life expectancy, quality-adjusted life expectancy, and lifetime costs. Baseline probabilities, transition probabilities, and utilities were derived from published studies and US population data. Costs were estimated from Medicare fee schedules and the medical literature. Sensitivity analyses were performed.
Results
Using an incremental cost-effectiveness ratio (ICER) threshold of $100,000 per quality-adjusted life-year (QALY) saved, lipid screening at every interval was cost effective relative to a strategy of no screening. When comparing screening intervals, a 3-year interval was cost effective relative to a 5-year interval, but annual screening, relative to screening every 3 years, had an ICER of more than $100,000/QALY saved. Factors with the most influence on the results included risk of cardiac events/death after HL, efficacy of statins in reducing cardiac events/death, and costs of statins.
Conclusion
Lipid screening in survivors of HL, with statin therapy for screen-positive patients, improves survival and is cost effective. A screening interval of 3 years seems reasonable in the long-term follow-up of survivors of HL.
doi:10.1200/JCO.2009.22.8460
PMCID: PMC2868601  PMID: 19752333
21.  Exploring the cost-effectiveness of Helicobacter pylori screening to prevent gastric cancer in China in anticipation of clinical trial results 
Gastric cancer is the second leading cause of cancer-related deaths worldwide. Treatment for Helicobacter pylori infection, the leading causal risk factor, can reduce disease progression, but the long-term impact on cancer incidence is uncertain. Using the best available data, we estimated the potential health benefits and economic consequences associated with H. pylori screening in a high-risk region of China. An empirically calibrated model of gastric cancer was used to project the reduction in lifetime cancer risk, life expectancy, and costs associated with (i) single lifetime screening (age 20, 30 or 40); (ii) single lifetime screening followed by rescreening individuals with negative results; and (iii) universal treatment for H. pylori (age 20, 30 or 40). Data were from the published literature and national and international databases. Screening and treatment for H. pylori at age 20 reduced the mean lifetime cancer risk by 14.5% (men) to 26.6% (women) and cost less than $1,500 per year of life saved (YLS) compared to no screening. Rescreening individuals with negative results and targeting older ages was less cost-effective. Universal treatment provided an additional 1.5% to 2.3% reduction in risk, but incremental cost-effectiveness ratios exceeded $2,500 per YLS. Screening young adults for H. pylori could prevent one in every 4 to 6 cases of gastric cancer in China and would be considered cost-effective using the GDP per capita threshold. These results illustrate the potential promise of a gastric cancer screening program and provide rationale for urgent clinical studies to move the prevention agenda forward.
doi:10.1002/ijc.23864
PMCID: PMC2597699  PMID: 18823009
simulation model; cost-effectiveness; Helicobacter pylori; gastric cancer
22.  Evaluating Test Strategies for Colorectal Cancer Screening: A Decision Analysis for the U.S. Preventive Services Task Force 
Annals of internal medicine  2008;149(9):659-669.
Background
The U.S. Preventive Services Task Force requested a decision analysis to inform their update of the recommendations for colorectal cancer screening.
Objective
To assess life-years gained and colonoscopy requirements for colorectal cancer screening strategies and identify a set of recommendable screening strategies.
Design
Decision analysis using 2 colorectal cancer microsimulation models from the Cancer Intervention and Surveillance Modeling Network.
Data Sources
Derived from the literature.
Target Population
U.S. average-risk 40-year-old population.
Perspective
Societal.
Time Horizon
Lifetime.
Interventions
Fecal occult blood tests (FOBTs), flexible sigmoidoscopy, or colonoscopy screening beginning at age 40, 50, or 60 years and stopping at age 75 or 85 years, with screening intervals of 1, 2, or 3 years for FOBT and 5, 10, or 20 years for sigmoidoscopy and colonoscopy.
Outcome Measures
Number of life-years gained compared with no screening and number of screening tests required.
Results of Base-Case Analysis
Beginning screening at age 50 years was consistently better than at age 60. Decreasing the stop age from 85 to 75 years decreased life-years gained by 1% to 4%, whereas colonoscopy use decreased by 4% to 15%. Assuming equally high adherence, 4 strategies provided similar life-years gained: colonoscopy every 10 years, annual Hemoccult SENSA (Beckman Coulter, Fullerton, California) testing or fecal immunochemical testing, and sensitive FOBT every 2 to 3 years with 5-yearly sigmoidoscopy. Hemoccult II and flexible sigmoidoscopy every 5 years alone were less effective.
Results of Sensitivity Analysis
The results were most sensitive to beginning screening at age 40 years.
Limitations
The stopping age for screening was based only on chronological age.
Conclusions
The findings support colorectal cancer screening from ages 50 to 75 years with the following: colonoscopy every 10 years, annual screening with a sensitive FOBT, or high-sensitivity FOBT every 2 to 3 years with flexible sigmoidoscopy every 5 years.
PMCID: PMC2731975  PMID: 18838717
23.  Modeling human papillomavirus and cervical cancer in the United States for analyses of screening and vaccination 
Background
To provide quantitative insight into current U.S. policy choices for cervical cancer prevention, we developed a model of human papillomavirus (HPV) and cervical cancer, explicitly incorporating uncertainty about the natural history of disease.
Methods
We developed a stochastic microsimulation of cervical cancer that distinguishes different HPV types by their incidence, clearance, persistence, and progression. Input parameter sets were sampled randomly from uniform distributions, and simulations undertaken with each set. Through systematic reviews and formal data synthesis, we established multiple epidemiologic targets for model calibration, including age-specific prevalence of HPV by type, age-specific prevalence of cervical intraepithelial neoplasia (CIN), HPV type distribution within CIN and cancer, and age-specific cancer incidence. For each set of sampled input parameters, likelihood-based goodness-of-fit (GOF) scores were computed based on comparisons between model-predicted outcomes and calibration targets. Using 50 randomly resampled, good-fitting parameter sets, we assessed the external consistency and face validity of the model, comparing predicted screening outcomes to independent data. To illustrate the advantage of this approach in reflecting parameter uncertainty, we used the 50 sets to project the distribution of health outcomes in U.S. women under different cervical cancer prevention strategies.
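The calibration loop described here, sampling input sets from uniform ranges and scoring each against epidemiologic targets with a likelihood-based goodness-of-fit measure, can be sketched as follows. run_model(), the parameter ranges, and the targets are hypothetical stand-ins for the actual natural-history model and calibration data.

```python
# Schematic calibration loop: sample parameter sets from uniform priors, score each
# against calibration targets with a likelihood-based GOF, and keep the best fits.
# run_model() and the ranges/targets below are hypothetical placeholders.
import random

PARAM_RANGES = {"hpv_clearance": (0.3, 0.9), "cin_progression": (0.01, 0.20)}
TARGETS = [  # (model output key, observed value, standard error)
    ("hpv_prev_age30", 0.25, 0.02),
    ("cancer_inc_age45", 0.00015, 0.00003),
]

def run_model(params):
    # Placeholder for the microsimulation; returns predicted target quantities.
    return {"hpv_prev_age30": 0.8 * params["hpv_clearance"] * 0.5,
            "cancer_inc_age45": 0.001 * params["cin_progression"]}

def gof(params):
    pred = run_model(params)
    # Normal log-likelihood of the observed targets given the model predictions.
    return sum(-0.5 * ((obs - pred[key]) / se) ** 2 for key, obs, se in TARGETS)

samples = [{k: random.uniform(*r) for k, r in PARAM_RANGES.items()} for _ in range(10_000)]
good_fits = sorted(samples, key=gof, reverse=True)[:50]   # retain best-fitting sets
```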
Results
Approximately 200 good-fitting parameter sets were identified from 1,000,000 simulated sets. Modeled screening outcomes were externally consistent with results from multiple independent data sources. Based on 50 good-fitting parameter sets, the expected reductions in lifetime risk of cancer with annual or biennial screening were 76% (range across 50 sets: 69–82%) and 69% (60–77%), respectively. The reduction from vaccination alone was 75%, although it ranged from 60% to 88%, reflecting considerable parameter uncertainty about the natural history of type-specific HPV infection. The uncertainty surrounding the model-predicted reduction in cervical cancer incidence narrowed substantially when vaccination was combined with every-5-year screening, with a mean reduction of 89% and range of 83% to 95%.
Conclusion
We demonstrate an approach to parameterization, calibration and performance evaluation for a U.S. cervical cancer microsimulation model intended to provide qualitative and quantitative inputs into decisions that must be taken before long-term data on vaccination outcomes become available. This approach allows for a rigorous and comprehensive description of policy-relevant uncertainty about health outcomes under alternative cancer prevention strategies. The model provides a tool that can accommodate new information, and can be modified as needed, to iteratively assess the expected benefits, costs, and cost-effectiveness of different policies in the U.S.
doi:10.1186/1478-7954-5-11
PMCID: PMC2213637  PMID: 17967185
24.  Clinical and Echocardiographic Correlates of Health Status in Patients with Acute Chest Pain 
OBJECTIVE
To assess the ability of echocardiographic data to predict important functional status outcomes in patients with chest pain.
DESIGN
Prospective cohort study.
SETTING
A large, urban teaching hospital.
PATIENTS
Three hundred thirty-three patients admitted from the Emergency Department for evaluation of chest pain.
MEASUREMENTS AND MAIN RESULTS
Patients underwent two-dimensional and Doppler echocardiography as well as a face-to-face interview during their initial hospitalization and a telephone interview 1 year thereafter. The interview included the Medical Outcomes Study 36-Item Short Form (SF-36) health inventory, a generic health status instrument with a physical function subscale. The relation between clinical and echocardiographic factors and functional status was explored by univariable and multivariable linear regression and logistic regression analyses. Multiple clinical and echocardiographic factors correlated significantly with functional status measures at 1 year. For the SF-36 score at 1 year, age, male gender, white race, the presence of rales, and a comorbidity score were independent predictors in multivariate analysis; echocardiographic findings of severe left ventricular dysfunction (parameter estimate [PE] −27.6; 95% confidence interval [CI] −43.1, −12.2) and aortic insufficiency (PE −16.7; 95% CI −26.4, −7.0) added independent predictive information. Explanatory power (r2) for models using clinical and demographic variables was 0.27 and increased to 0.35 after inclusion of echocardiographic data. Results in the subset of patients (n = 148) with acute coronary syndromes such as unstable angina or myocardial infarction were qualitatively similar. Selected factors (rales on examination, electrocardiographic changes suggestive of ischemia, and moderate to severe mitral regurgitation) also predicted which patients would die or have a decline in their functional status. In multivariate analysis, only rales remained an independent predictor of poor outcome (odds ratio 2.4; 95% CI 1.2, 4.5).
CONCLUSIONS
Echocardiographic data are correlated with measures of functional status in patients with chest pain, but the ability to predict future functional status from clinical or echocardiographic information is limited. Because functional status cannot be predicted adequately from either patients' characteristics or echocardiographic testing, it must be assessed directly.
doi:10.1046/j.1525-1497.1997.07160.x
PMCID: PMC1497201  PMID: 9436894
chest pain; echocardiography; functional status; prognosis
25.  Network Meta-analysis of Margin Threshold for Women With Ductal Carcinoma In Situ 
Background
Negative margins are associated with reduced risk of ipsilateral breast tumor recurrence (IBTR) for women with ductal carcinoma in situ (DCIS) treated with breast-conserving surgery (BCS). However, there is no consensus about the best minimum margin width.
Methods
We searched the PubMed database for studies of DCIS published in English between January 1970 and July 2010 and examined the relationship between IBTR and margin status after BCS for DCIS. Women with DCIS were stratified into two groups, BCS with or without radiotherapy. We used frequentist and Bayesian approaches to estimate the odds ratios (OR) of IBTR for groups with negative margins and positive margins. We further examined specific margin thresholds using mixed treatment comparisons and meta-regression techniques. All statistical tests were two-sided.
Results
We identified 21 studies published in 24 articles. A total of 1066 IBTR events occurred in 7564 patients, including BCS alone (565 IBTR events in 3098 patients) and BCS with radiotherapy (501 IBTR events in 4466 patients). Compared with positive margins, negative margins were associated with reduced risk of IBTR in patients with radiotherapy (OR = 0.46, 95% credible interval [CrI] = 0.35 to 0.59), and in patients without radiotherapy (OR = 0.34, 95% CrI = 0.24 to 0.47). Compared with patients with positive margins, the risk of IBTR for patients with negative margins was smaller (negative margin >0 mm, OR = 0.45, 95% CrI = 0.38 to 0.53; >2 mm, OR = 0.38, 95% CrI = 0.28 to 0.51; >5 mm, OR = 0.55, 95% CrI = 0.15 to 1.30; and >10 mm, OR = 0.17, 95% CrI = 0.12 to 0.24). Compared with a negative margin greater than 2 mm, a negative margin of at least 10 mm was associated with a lower risk of IBTR (OR = 0.46, 95% CrI = 0.29 to 0.69). We found a probability of .96 that a negative margin threshold greater than 10 mm is the best option compared with other margin thresholds.
Conclusions
Negative surgical margins should be obtained for DCIS patients after BCS regardless of radiotherapy. Within cosmetic constraints, surgeons should attempt to achieve the widest negative margins possible at the first operation. More studies are needed to understand whether margin thresholds greater than 10 mm are warranted.
doi:10.1093/jnci/djs142
PMCID: PMC3916966  PMID: 22440677
