1.  The Health System and Population Health Implications of Large-Scale Diabetes Screening in India: A Microsimulation Model of Alternative Approaches 
PLoS Medicine  2015;12(5):e1001827.
Background
Like a growing number of rapidly developing countries, India has begun to develop a system for large-scale community-based screening for diabetes. We sought to identify the implications of using alternative screening instruments to detect people with undiagnosed type 2 diabetes among diverse populations across India.
Methods and Findings
We developed and validated a microsimulation model that incorporated data from 58 studies from across the country into a nationally representative sample of Indians aged 25–65 years. We estimated the diagnostic and health system implications of three major survey-based screening instruments and random glucometer-based screening. Of the 567 million Indians eligible for screening, depending on which of four screening approaches is utilized, between 158 and 306 million would be expected to screen as “high risk” for type 2 diabetes, and be referred for confirmatory testing. Between 26 million and 37 million of these people would be expected to meet international diagnostic criteria for diabetes, but between 126 million and 273 million would be “false positives.” The ratio of false positives to true positives varied from 3.9 (when using random glucose screening) to 8.2 (when using a survey-based screening instrument) in our model. The cost per case found would be expected to be from US$5.28 (when using random glucose screening) to US$17.06 (when using a survey-based screening instrument), representing a total cost of between US$169 million and US$567 million. The major limitation of our analysis is its dependence on published cohort studies that are unlikely to fully capture the poorest and most rural areas of the country. Because these areas are thought to have the lowest diabetes prevalence, this may result in overestimation of the efficacy and health benefits of screening.
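The headline figures above are internally consistent: the cost per case found is the total program cost divided by the number of true positives, and the false-positive burden is the FP:TP ratio. A minimal sketch in Python, using illustrative round numbers taken from the random glucometer-based scenario in the abstract:

```python
# Back-of-envelope check of the screening figures quoted above.
# Inputs are illustrative round numbers from the abstract's
# random glucometer-based scenario, not the model's exact outputs.
def screening_summary(true_pos, false_pos, total_cost):
    """Return the FP:TP ratio and the cost per true case found."""
    ratio = false_pos / true_pos
    cost_per_case = total_cost / true_pos
    return ratio, cost_per_case

# ~32M true positives, ~126M false positives, ~US$169M total cost.
ratio, cost = screening_summary(32e6, 126e6, 169e6)
print(f"FP:TP ratio ≈ {ratio:.1f}, cost per case ≈ ${cost:.2f}")
# → FP:TP ratio ≈ 3.9, cost per case ≈ $5.28
```

The same arithmetic reproduces the survey-based end of the range: US$567 million spread over roughly 33 million true positives gives about US$17 per case found.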
Conclusions
Large-scale community-based screening is anticipated to produce a large number of false-positive results, particularly if using currently available survey-based screening instruments. Resource allocators should consider the health system burden of screening and confirmatory testing when instituting large-scale community-based screening for diabetes.
Sanjay Basu and colleagues estimate the benefits and costs of scaling up survey- or glucometer-based diabetes screening across India’s diverse populations.
Editors' Summary
Background
Worldwide, 387 million people have diabetes, a chronic condition characterized by high levels of glucose (sugar) in the blood. Blood sugar levels are usually controlled by insulin, a hormone released by the pancreas after meals. In people with type 2 diabetes (the most common type of diabetes), blood sugar control fails because the fat and muscle cells that normally respond to insulin by removing excess sugar from the blood become less responsive to insulin. Risk factors for diabetes include being overweight, having a large waist, being physically inactive, and having a family history of diabetes. The symptoms of diabetes, which develop slowly, include excessive urination at night and unexplained weight loss. Type 2 diabetes can usually be controlled initially with diet and exercise and with antidiabetic drugs such as metformin and sulfonylureas, but many patients eventually need insulin injections. Long-term complications of diabetes, which include an increased risk of heart disease and stroke, reduce the life expectancy of people with diabetes by about 10 years compared to people without diabetes.
Why Was This Study Done?
Diabetes is becoming increasingly common, particularly in rapidly developing countries, but most people with diabetes in these countries are unaware that they have the condition. Because the risk of developing diabetic complications is reduced by careful blood sugar control, it is important to identify and treat anyone who has diabetes as early as possible. Some rapidly developing countries are therefore beginning to develop systems for large-scale community-based screening for diabetes (even though the UK has recently decided against such screening). In India, for example, more than 53 million adults living in rural and urban communities have already been screened using either questionnaires designed to provide a risk score (survey-based screening) or random blood glucose testing (glucometer-based screening). People who are identified as “high risk” using these approaches are referred for fasting blood glucose tests to confirm the diagnosis. Although the Indian government plans to expand this screening program, no data have been collected to track its performance. Here, the researchers develop a microsimulation model (a computer model that operates at the level of individuals) to investigate the implications of using alternative screening instruments to identify people with undetected diabetes across diverse populations in India.
What Did the Researchers Do and Find?
The researchers constructed a synthetic nationally representative population of Indians aged 25–65 years using data from 58 sub-national studies. They then used their microsimulation model to estimate the diagnostic and health system implications of using three survey-based screening instruments and glucometer-based screening to identify individuals in this population with diabetes. Depending on which approach was used for screening, between 158 million and 306 million of the 567 million Indians eligible for screening would be classified as high risk for diabetes and would be referred for confirmatory testing, according to the model. However, between 126 million and 273 million of these high-risk individuals would be false positives; only between 26 million and 37 million of these individuals would meet the international diagnostic criteria for diabetes (true positives). The researchers estimate that the cost per case found would vary from US$5.28 (when using random glucose screening) to US$17.06 (when using a survey-based screening instrument). Finally, they estimate that the total cost for screening the eligible population would be between US$169 and US$567 million.
What Do These Findings Mean?
Established criteria for implementing screening programs specify that such programs should use reliable instruments that detect a large proportion of true cases (high sensitivity) and that have a low rate of false positives (high specificity). Screening programs should also offer significant therapeutic benefits to individuals diagnosed through screening. The findings of this study suggest that large-scale community-based screening for diabetes in India using the currently available screening instruments is unlikely to meet these criteria. Indeed, because the data used to construct the synthetic population came from published studies that did not capture the situation in the poorest, most rural areas of India, where the proportion of the population with diabetes is thought to be lowest, these findings may overestimate the efficacy and health benefits of screening. The researchers suggest, therefore, that an approach that focuses on symptom-based screening and on improvements in the treatment of already diagnosed individuals might be a more sensible path for India to take to deal with its burgeoning diabetes epidemic than community-based mass screening.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001827.
The US National Diabetes Information Clearinghouse provides information about diabetes for patients, healthcare professionals, and the general public (in English and Spanish)
The UK National Health Service Choices website provides information for patients and caregivers about type 2 diabetes and about living with diabetes; it also provides people’s stories about diabetes
The charity Diabetes UK provides detailed information for patients and caregivers in several languages
The UK-based non-profit organization HealthTalkOnline has interviews with people about their experiences of diabetes
MedlinePlus provides links to further resources and advice about diabetes (in English and Spanish)
A statement from the UK National Screening Committee on diabetes screening in adults is available
doi:10.1371/journal.pmed.1001827
PMCID: PMC4437977  PMID: 25992895
2.  Improving diabetes prevention with benefit based tailored treatment: risk based reanalysis of Diabetes Prevention Program 
Objective To determine whether some participants in the Diabetes Prevention Program were more or less likely to benefit from metformin or a structured lifestyle modification program.
Design Post hoc analysis of the Diabetes Prevention Program, a randomized controlled trial.
Setting Ambulatory care patients.
Participants 3060 people without diabetes but with evidence of impaired glucose metabolism.
Intervention Intervention groups received metformin or a lifestyle modification program with the goals of weight loss and physical activity.
Main outcome measure Development of diabetes, stratified by the risk of developing diabetes according to a diabetes risk prediction model.
Results Of the 3081 participants with impaired glucose metabolism at baseline, 655 (21%) progressed to diabetes over a median 2.8 years’ follow-up. The diabetes risk model had good discrimination (C statistic=0.73) and calibration. Although the lifestyle intervention provided a sixfold greater absolute risk reduction in the highest risk quarter than in the lowest risk quarter, patients in the lowest risk quarter still received substantial benefit (three year absolute risk reduction 4.9% v 28.3% in highest risk quarter; numbers needed to treat of 20.4 and 3.5, respectively). The benefit of metformin, however, was seen almost entirely in patients in the top quarter of risk of diabetes. No benefit was seen in the lowest risk quarter. Participants in the highest risk quarter averaged a 21.4% three year absolute risk reduction (number needed to treat 4.6).
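The numbers needed to treat quoted in the results follow directly from the three-year absolute risk reductions, since NNT is the reciprocal of the ARR. A quick check in Python, using the figures from the abstract:

```python
# NNT (number needed to treat) is the reciprocal of the absolute
# risk reduction (ARR); figures below are from the results above.
def nnt(arr):
    """Number needed to treat for an absolute risk reduction `arr`."""
    return 1 / arr

print(round(nnt(0.049), 1))  # lifestyle arm, lowest risk quarter → 20.4
print(round(nnt(0.283), 1))  # lifestyle arm, highest risk quarter → 3.5
```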
Conclusions Patients at high risk of diabetes have substantial variation in their likelihood of receiving benefit from diabetes prevention treatments. Using this knowledge could decrease overtreatment and make prevention of diabetes far more efficient, effective, and patient centered, provided that decision making is based on an accurate risk prediction tool.
doi:10.1136/bmj.h454
PMCID: PMC4353279  PMID: 25697494
3.  Moneyball, Gambling and the New Cholesterol Guidelines 
doi:10.1161/CIRCOUTCOMES.114.000876
PMCID: PMC4026096  PMID: 24594549
prevention; cholesterol-lowering drugs; guideline
4.  The effect of patients’ risks and preferences on health gains with glucose lowering in type 2 diabetes 
JAMA internal medicine  2014;174(8):1227-1234.
Importance
Type 2 diabetes is common, and treatment of blood glucose is a mainstay of diabetes management. However, the benefits of intensive glucose treatment take many years to manifest, while treatment burden begins immediately. Because guidelines often fail to consider treatment burden, many patients with diabetes may be overtreated.
Objective
We examined how treatment burden affects the benefits of intensive vs. moderate glycemic control in patients with type 2 diabetes.
Design
We estimated the effects of A1c reduction on diabetes outcomes and overall quality-adjusted life years (QALYs) using a Markov simulation model. Model probabilities were based on estimates from randomized trials and observational studies.
Setting
US adults with type 2 diabetes
Participants
Simulated patients based on patients with type 2 diabetes drawn from the National Health and Nutrition Examination Survey (NHANES).
Interventions
Glucose lowering with oral agents or insulin in type 2 diabetes
Main Outcome measures
QALYs and reduction in risk of microvascular and cardiovascular diabetes complications.
Results
Assuming a low treatment burden (0.001, or 0.4 lost days per year), treatment that lowers A1c by 1 point provided benefits ranging from 0.77–0.91 QALYs for patients diagnosed at age 45 to 0.08–0.10 QALYs for those diagnosed at age 75. An increase in treatment burden (0.01, or 3.7 days lost per year) resulted in A1c lowering causing more harm than benefit in those aged 75. Across all ages, patients who view treatment as more burdensome (0.025–0.05) experienced a net loss in QALYs from treatments to lower A1c.
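The day-equivalents quoted for each treatment burden come from converting an annual QALY decrement into days of healthy life lost per year. A quick check in Python:

```python
# Convert the annual treatment disutilities (in QALYs) quoted in the
# results above into days of healthy life lost per year.
DAYS_PER_YEAR = 365.25

for disutility in (0.001, 0.01):
    days = disutility * DAYS_PER_YEAR
    print(f"{disutility} QALY/year ≈ {days:.1f} days lost/year")
# → 0.001 QALY/year ≈ 0.4 days lost/year
# → 0.01 QALY/year ≈ 3.7 days lost/year
```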
Conclusions
Improving glycemic control can provide substantial benefits, especially for younger patients; however, for most patients over age 50 with an A1c below 9% on metformin, further glycemic treatment usually offers at most modest benefits. Further, the magnitude of benefit is enormously sensitive to patients’ views of the treatment burden, and even very small adverse treatment effects result in net harm in older patients. The current approach of broadly advocating intensive glycemic control for millions of patients should be reconsidered; instead, treatment of A1c values below 9% should be individualized based on estimates of benefit weighed against the patient’s views of the burdens of treatment.
doi:10.1001/jamainternmed.2014.2894
PMCID: PMC4299865  PMID: 24979148
5.  Using internally developed risk models to assess heterogeneity in treatment effects in clinical trials 
Background
Recent proposals suggest that risk-stratified analyses of clinical trials be routinely performed to better enable tailoring of treatment decisions to individuals. Trial data can be stratified using externally developed risk models (e.g. Framingham risk score), but such models are not always available. We sought to determine whether internally developed risk models, developed directly on trial data, introduce bias compared to external models.
Methods and Results
We simulated a large patient population with known risk factors and outcomes. Clinical trials were then simulated by repeatedly drawing from the patient population assuming a specified relative treatment effect in the experimental arm, which either did or did not vary according to a subject's baseline risk. For each simulated trial, two internal risk models were developed on either the control population only (internal controls only, ICO) or on the whole trial population blinded to treatment (internal whole trial, IWT). Bias was estimated for the internal models by comparing treatment effect predictions to predictions from the external model.
Under all treatment assumptions, internal models introduced only modest bias compared to external models. The magnitude of these biases was slightly smaller for IWT models than for ICO models. IWT models were also slightly less sensitive to bias introduced by overfitting and less likely than ICO models to falsely identify variability in treatment effect across the risk spectrum.
Conclusions
Appropriately developed internal models produce relatively unbiased estimates of treatment effect across the spectrum of risk. When estimating treatment effect, internally developed risk models using both treatment arms should, in general, be preferred to models developed on the control population.
doi:10.1161/CIRCOUTCOMES.113.000497
PMCID: PMC3957096  PMID: 24425710
clinical trials; modeling
6.  Association Between Hospital Case Volume and the Use of Bronchoscopy and Esophagoscopy During Head and Neck Cancer Diagnostic Evaluation 
Cancer  2013;120(1):10.1002/cncr.28379.
Background
There are no clinical guidelines on best practices for the use of bronchoscopy and esophagoscopy in diagnosing head and neck cancer. This retrospective cohort study examined variation in the use of bronchoscopy and esophagoscopy across hospitals in Michigan.
Patients and Methods
We identified 17,828 head and neck cancer patients in the 2006–2010 Michigan State Ambulatory Surgery Databases. We used hierarchical, mixed-effect logistic regression to examine whether a hospital’s risk-adjusted rate of concurrent bronchoscopy or esophagoscopy was associated with its case volume (<100, 100–999, or ≥1000 cases/hospital) for those undergoing diagnostic laryngoscopy.
Results
Of 9,218 patients undergoing diagnostic laryngoscopy, 1,191 (12.9%) received concurrent bronchoscopy and 1,675 (18.2%) underwent concurrent esophagoscopy. The median hospital rate of bronchoscopy was 2.7% (range 0–61.1%), and low-volume (OR 27.1 [95% CI 1.9, 390.7]) and medium-volume (OR 28.1 [95% CI 2.0, 399.0]) hospitals were more likely to perform concurrent bronchoscopy compared to high-volume hospitals. The median hospital rate of esophagoscopy was 5.1% (range 0–47.1%), and low-volume (OR 9.8 [95% CI 1.5, 63.7]) and medium-volume (OR 8.5 [95% CI 1.3, 55.0]) hospitals were significantly more likely to perform concurrent esophagoscopy relative to high-volume hospitals.
Conclusions
Head and neck cancer patients undergoing diagnostic laryngoscopy are much more likely to undergo concurrent bronchoscopy and esophagoscopy at low- and medium-volume hospitals than at high-volume hospitals. Whether this represents over-use of concurrent procedures or appropriate care that leads to earlier diagnosis and better outcomes merits further investigation.
doi:10.1002/cncr.28379
PMCID: PMC3867538  PMID: 24114146
otolaryngology; endoscopy; head and neck cancer; diagnostic techniques and procedures; hospital volume; SASD; Michigan
7.  The Effect of Pre-PPACA Medicaid Eligibility Expansion in New York State on Access to Specialty Surgical Care 
Medical care  2014;52(9):790-795.
Background
Critics argue that expanding health insurance coverage through Medicaid may not result in improved access to care. The ACA provides reimbursement incentives aimed at improving access to primary care services for new Medicaid beneficiaries; however, there are no such incentives for specialty services. Using the natural experiment of Medicaid expansion in New York State in October 2001, we examined whether Medicaid expansion increased access to common musculoskeletal procedures for Medicaid beneficiaries.
Methods
From the State Inpatient Database for New York State, we identified patients aged 19–64 years who received lower extremity large joint replacement, spine procedures, or upper/lower extremity fracture/dislocation repair from January 1998 to December 2006. We used interrupted time series analysis to evaluate the association between Medicaid expansion and trends in the relative and absolute number of Medicaid beneficiaries who received these musculoskeletal procedures.
Results
Prior to Medicaid expansion, we observed a slight but steady temporal decline in the proportion of musculoskeletal surgical patients who were Medicaid beneficiaries. Following expansion this trend reversed and by 5 years after Medicaid expansion, the proportion of musculoskeletal surgical patients who were Medicaid beneficiaries was 4.7 percentage points (95% CI 3.9, 5.5) higher than expected based on the pre-expansion time trend.
Conclusions
Medicaid expansion in New York State significantly improved access to common musculoskeletal procedures for Medicaid beneficiaries.
doi:10.1097/MLR.0000000000000175
PMCID: PMC4262819  PMID: 24984209
access; Medicaid; specialist; specialty; surgical
8.  Modeling Test and Treatment Strategies for Presymptomatic Alzheimer Disease 
PLoS ONE  2014;9(12):e114339.
Objectives
In this study, we developed a model of presymptomatic treatment of Alzheimer disease (AD) after a screening diagnostic evaluation and explored the circumstances required for an AD prevention treatment to produce aggregate net population benefit.
Methods
Monte Carlo simulation methods were used to estimate outcomes in a simulated population derived from data on AD incidence and mortality. A wide variety of treatment parameters were explored. Net population benefit was estimated in aggregated QALYs. Sensitivity analyses were performed by individually varying the primary parameters.
Findings
In the base-case scenario, treatment effects were uniformly positive, and net benefits increased with increasing age at screening. A highly efficacious treatment (i.e. relative risk 0.6) modeled in the base-case is estimated to save 20 QALYs per 1000 patients screened and 221 QALYs per 1000 patients treated.
Conclusions
Highly efficacious presymptomatic screen-and-treat strategies for AD are likely to produce substantial aggregate population benefits that are likely greater than the benefits of aspirin in primary prevention of moderate-risk cardiovascular disease (28 QALYs per 1000 patients treated), even in the context of an imperfect treatment delivery environment.
doi:10.1371/journal.pone.0114339
PMCID: PMC4256252  PMID: 25474698
11.  Improved cardiovascular risk prediction using nonparametric regression and electronic health record data 
Medical care  2013;51(3):251-258.
Background
Use of the electronic health record (EHR) is expected to increase rapidly in the near future, yet little research exists on whether analyzing internal EHR data using flexible, adaptive statistical methods could improve clinical risk prediction. Extensive implementation of EHR in the Veterans Health Administration (VHA) provides an opportunity for exploration.
Objectives
To compare the performance of various approaches for predicting risk of cerebro- and cardiovascular (CCV) death, using traditional risk predictors versus more comprehensive EHR data.
Research Design
Retrospective cohort study. We identified all VHA patients without recent CCV events treated at twelve facilities from 2003 to 2007, and predicted risk using the Framingham risk score (FRS), logistic regression, generalized additive modeling, and gradient tree boosting.
Measures
The outcome was CCV-related death within five years. We assessed each method's predictive performance with the area under the ROC curve (AUC), the Hosmer-Lemeshow goodness-of-fit test, plots of estimated risk, and reclassification tables, using cross-validation to penalize over-fitting.
Results
Regression methods outperformed the FRS, even with the same predictors (AUC increased from 71% to 73% and calibration also improved). Even better performance was attained in models using additional EHR-derived predictor variables (AUC increased to 78% and net reclassification improvement was as large as 0.29). Nonparametric regression further improved calibration and discrimination compared to logistic regression.
Conclusions
Despite the EHR lacking some risk factors and its imperfect data quality, healthcare systems may be able to substantially improve risk prediction for their patients by using internally-developed EHR-derived models and flexible statistical methodology.
doi:10.1097/MLR.0b013e31827da594
PMCID: PMC4081533  PMID: 23269109
12.  Individual and Population Benefits of Daily Aspirin Therapy: A Proposal for Personalizing National Guidelines 
Background
Clinical practice guidelines that help clinicians and patients understand the magnitude of expected individual risks and benefits would help patient-centered decision-making and prioritization of care. We assessed the net benefit from daily aspirin in individuals to estimate the individual and public health implications of a more individualized decision-making approach.
Methods and Results
We used data from the National Health and Nutrition Examination Survey (NHANES) representing all U.S. persons aged 30 to 85 years with no history of myocardial infarction and applied a Markov model based on randomized evidence and published literature to estimate lifetime effects of aspirin treatment in quality-adjusted life years (QALYs). We show that treatment benefit varies greatly by an individual's cardiovascular disease (CVD) risk. Almost all adults have fewer major clinical events on aspirin, but for most, events prevented would be so rare that even a very small distaste for aspirin use would make treatment inappropriate. With minimal dislike of aspirin use (disutility = 0.005 QALY per year), only those with a 10-year cardiac event risk greater than 6.1% would have a net benefit. A disutility of 0.01 QALY moves this benefit cut-point to 10.6%. Multiple factors altered the absolute benefit of aspirin, but the strong relationship between CVD risk and magnitude of benefit was robust.
Conclusions
The benefits of aspirin therapy depend substantially on an individual's risk of CVD and adverse treatment effects. Understanding who benefits from aspirin use and how much can help clinicians and patients develop a more patient-centered approach to preventive therapy.
doi:10.1161/CIRCOUTCOMES.110.959239
PMCID: PMC4039386  PMID: 21487091
aspirin; prevention; risk factors; shared decision-making
13.  Does Rectal Indomethacin Eliminate the Need for Prophylactic Pancreatic Stent Placement in Patients Undergoing High-Risk ERCP? Post hoc Efficacy and Cost-Benefit Analyses Using Prospective Clinical Trial Data 
OBJECTIVES
A recent large-scale randomized controlled trial (RCT) demonstrated that rectal indomethacin administration is effective in addition to pancreatic stent placement (PSP) for preventing post-endoscopic retrograde cholangiopancreatography (ERCP) pancreatitis (PEP) in high-risk cases. We performed a post hoc analysis of this RCT to explore whether rectal indomethacin can replace PSP in the prevention of PEP and to estimate the potential cost savings of such an approach.
METHODS
We retrospectively classified RCT subjects into four prevention groups: (1) no prophylaxis, (2) PSP alone, (3) rectal indomethacin alone, and (4) the combination of PSP and indomethacin. Multivariable logistic regression was used to adjust for imbalances in the prevalence of risk factors for PEP between the groups. Based on these adjusted PEP rates, we conducted an economic analysis comparing the costs associated with PEP prevention strategies employing rectal indomethacin alone, PSP alone, or the combination of both.
RESULTS
After adjusting for risk using two different logistic regression models, rectal indomethacin alone appeared to be more effective for preventing PEP than no prophylaxis, PSP alone, and the combination of indomethacin and PSP. Economic analysis revealed that indomethacin alone was a cost-saving strategy in 96% of Monte Carlo trials. A prevention strategy employing rectal indomethacin alone could save approximately $150 million annually in the United States compared with a strategy of PSP alone, and $85 million compared with a strategy of indomethacin and PSP.
CONCLUSIONS
This hypothesis-generating study suggests that prophylactic rectal indomethacin could replace PSP in patients undergoing high-risk ERCP, potentially improving clinical outcomes and reducing healthcare costs. An RCT comparing rectal indomethacin alone vs. indomethacin plus PSP is needed.
doi:10.1038/ajg.2012.442
PMCID: PMC3947644  PMID: 23295278
14.  Improving the Reliability of Physician “Report Cards” 
Medical care  2013;51(3):266-274.
Background
Performance measures are widely used to profile primary care physicians (PCPs), but their reliability is often limited by small sample sizes. We evaluated the reliability of individual PCP profiles and whether it can be improved by combining measures into composites or by profiling practice groups.
Methods
We performed a cross-sectional analysis of electronic health record data for patients with diabetes (DM), congestive heart failure (CHF), ischemic vascular disease (IVD), or eligible for preventive care services seen by a PCP within a large, integrated healthcare system between April 2009 and May 2010. We evaluated performance on 14 measures of DM care, 9 of CHF, 7 of IVD, and 4 of preventive care.
Results
There were 51,771 patients seen by 163 physicians in 17 clinics. Few PCPs (0% to 60%, depending on the measure) could be profiled with 80% reliability using single process or intermediate-outcome measures. Combining measures into single-disease composites improved reliability for DM and preventive care, with 74.5% and 76.7% of PCPs having sufficient panel sizes, but composites remained unreliable for CHF and IVD. A single overall composite allowed 85.3% of PCPs to be reliably profiled. Aggregating PCPs into practice groups (3 to 21 PCPs per group) did not improve reliability in most cases due to little between-group practice variation.
Conclusion
Single measures rarely differentiate between individual PCPs or groups of PCPs reliably. Combining measures into single- or multi-disease composites can improve reliability for some common conditions, but not all. Assessing PCP practice groups within a single healthcare system, rather than individual PCPs, did not substantially improve reliability.
doi:10.1097/MLR.0b013e31827da99c
PMCID: PMC3669898  PMID: 23295578
quality measurement; reliability; physician profiling
15.  Physician Practices and Readiness for Medical Home Reforms: Policy, Pitfalls, and Possibilities 
Health Services Research  2012;47(1 Pt 2):486-508.
Objective
To determine the proportion of physician practices in the United States that currently meets medical home criteria.
Data Source/Study Setting
2007 and 2008 National Ambulatory Medical Care Survey.
Study Design
We mapped survey items to the National Committee on Quality Assurance's (NCQA's) medical home standards. After awarding points for each “passed” element, we calculated a practice's infrastructure score, dividing its cumulative total by the number of available points. We identified practices that would be recognized as a medical home (Level 1 [25–49 percent], Level 2 [50–74 percent], or Level 3 [infrastructure score ≥75 percent]) and examined characteristics associated with NCQA recognition.
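The scoring rule described above can be sketched as follows. This is an illustrative reconstruction of the level cut-points quoted in the study design (the function name and example point totals are hypothetical), not NCQA's actual scoring tool:

```python
# Sketch of the medical home scoring described above: a practice's
# infrastructure score is the percentage of available points earned,
# mapped to an NCQA recognition level by the quoted cut-points.
def medical_home_level(points_earned, points_available):
    """Return NCQA-style recognition level (0 = not recognized)."""
    score = 100.0 * points_earned / points_available
    if score >= 75:
        return 3  # Level 3: score >= 75 percent
    if score >= 50:
        return 2  # Level 2: 50-74 percent
    if score >= 25:
        return 1  # Level 1: 25-49 percent
    return 0      # insufficient medical home infrastructure

print(medical_home_level(30, 50))  # 60% of points → 2 (Level 2)
```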
Results
Forty-six percent (95 percent confidence interval [CI], 42.5–50.2) of all practices lack sufficient medical home infrastructure. While 72.3 percent (95 percent CI, 64.0–80.7 percent) of multi-specialty groups would achieve recognition, only 49.8 percent (95 percent CI, 45.2–54.5 percent) of solo/partnership practices meet NCQA standards. Although better prepared than specialists, 40 percent of primary care practices would not qualify as a medical home under present criteria.
Conclusion
Almost half of all practices fail to meet NCQA standards for medical home recognition.
doi:10.1111/j.1475-6773.2011.01332.x
PMCID: PMC3393004  PMID: 22091559
Models; organizational; patient-centered care; organization and administration; primary health care
17.  Fall-Associated Difficulty with Activities of Daily Living (ADL) in Functionally Independent Older Adults Aged 65 to 69 in the United States: A Cohort Study 
Journal of the American Geriatrics Society  2013;61(1):10.1111/jgs.12071.
Background/Objectives
Falling is a risk factor for functional dependence in adults 75 years and older, but has not been systematically evaluated for younger and healthier older adults. This younger group of older adults may benefit from earlier identification of their risk. We hypothesized that falling would be a marker for future difficulty with activities of daily living (ADL) that would vary by fall frequency and associated injury.
Design, Setting, and Patients
Nationally representative cohort of 2,020 community-living, functionally independent older adults 65-69 years of age at baseline, followed from 1998 to 2008.
Main Outcome Measurement
ADL difficulty
Results
Experiencing one fall with injury in the prior 2 years (odds ratio [OR] = 1.78, 95% CI 1.29-2.48), at least 2 falls without injury in the prior 2 years (OR = 2.36, 95% CI 1.80-3.09), or at least 2 falls with at least one injury in the prior 2 years (OR = 3.75, 95% CI 2.55-5.53) were each independently associated with higher rates of ADL difficulty after adjustment for socio-demographic, behavioral, and clinical covariates.
Limitations
Health and Retirement Study (HRS) data are self-reported
Conclusion
Falling is an important marker for future ADL difficulty in younger, functionally independent older adults. Individuals who fall frequently or report injury are at highest risk.
doi:10.1111/jgs.12071
PMCID: PMC3807864  PMID: 23311555
Activities of Daily Living; Falls; Disability; Older Adults
18.  Providing clinicians with a patient’s 10-year cardiovascular risk improves their statin prescribing: a true experiment using clinical vignettes 
Background
Statins are effective for primary prevention of cardiovascular (CV) disease, the leading cause of death in the world. Multinational guidelines emphasize CV risk as an important factor for optimal statin prescribing. However, it is not clear how primary care providers (PCPs) use this information. The objective of this study was to determine how PCPs use information about global CV risk for primary prevention of CV disease.
Methods
A double-blinded, randomized experiment using clinical vignettes mailed to office-based PCPs in the United States who were identified through the American Medical Association Physician Masterfile in June 2012. PCPs in the control group received clinical vignettes with all information on the risk factors needed to calculate CV risk. The experimental group received the same vignettes in addition to the subject’s 10-year calculated CV risk (Framingham risk score). The primary study outcome was the decision to prescribe a statin.
Results
Providing calculated CV risk to providers increased statin prescribing in the two high-risk cases (CV risk > 20%) by 32 percentage points (41% vs. 73%; 95% CI = 23-40, p < 0.001; relative risk [RR] = 1.78) and 16 percentage points (12% vs. 27%; 95% CI = 8.5-22.5%, p < 0.001; RR = 2.25), and decreased statin prescribing in the lowest-risk case (CV risk = 2%) by 9 percentage points (95% CI = 1.00-16.7%, p = 0.003; RR = 0.88). Fewer than 20% of participants in each group reported routinely calculating 10-year CV risk in their patients.
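The relative risks above follow directly from the reported prescribing proportions (RR = rate with risk information / rate without it). A quick arithmetic check against the two high-risk cases:

```python
def relative_risk(intervention_rate, control_rate):
    """Relative risk: prescribing rate with CV risk info vs. without."""
    return intervention_rate / control_rate

# High-risk case 1: 41% (control) vs. 73% (given calculated CV risk)
rr1 = relative_risk(0.73, 0.41)  # ~1.78
# High-risk case 2: 12% (control) vs. 27% (given calculated CV risk)
rr2 = relative_risk(0.27, 0.12)  # ~2.25
```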
Conclusions
Providers do not routinely calculate 10-year CV risk for their patients. In this vignette experiment, PCPs undertreated low LDL, high CV risk patients. Giving providers a patient’s calculated CV risk improved statin prescribing. Providing PCPs with accurate estimates of patient CV risk at the point of service has the potential to improve the efficiency of statin prescribing.
doi:10.1186/1471-2261-13-90
PMCID: PMC3924357  PMID: 24148829
Primary prevention; Cardiovascular disease; Statins; Cardiovascular risk
19.  Cardiac Risk is Not Associated with Hypertension Treatment Intensification 
Objective
Considering cardiovascular (CV) risk could make clinical care more efficient and individualized, but most practice guidelines focus on single risk factors. We sought to see if hypertension treatment intensification (TI) is more likely in patients with elevated CV risk.
Study design
Prospective cohort study of 856 US Veterans with diabetes and elevated blood pressure (BP).
Methods
We used multilevel logistic regression to compare TI across three CV risk groups – those with history of heart disease, a high-risk primary prevention group (10-year event risk > 20% but no history of heart disease), and those with low/medium CV risk (10-year event risk < 20%).
Results
There were no significant differences in TI rates across risk groups, with adjusted odds ratios (ORs) of 1.19 (95% confidence interval 0.77–1.84) and 1.18 (0.76–1.83) for high-risk patients and those with a history of CVD, respectively, compared with those of low/medium-risk. Several individual risk factors were associated with higher rates of TI: systolic BP, mean BP in the prior year, and higher hemoglobin A1C. Self-reported home BP < 140/90 was associated with lower rates of TI. Incorporating CV risk into TI decision algorithms could prevent an estimated 38% more cardiac events without increasing the number of treated patients.
Conclusions
While an individual’s blood pressure alters clinical decisions about TI, overall CV risk does not appear to play a role in clinical decision-making. Adoption of TI decision algorithms that incorporate CV risk could substantially enhance the efficiency and clinical utility of CV preventive care.
PMCID: PMC3682773  PMID: 22928756
Prevention; hypertension; decision making; veterans
20.  Duration of resuscitation efforts and subsequent survival after in-hospital cardiac arrest 
Lancet  2012;380(9852):1473-1481.
Background
During in-hospital cardiac arrests, it is uncertain how long resuscitation should continue prior to termination of efforts. We hypothesized that the duration of resuscitation varies across hospitals, and that patients at hospitals with longer attempts have higher survival rates.
Methods
Between 2000 and 2008, we identified 64,339 patients with cardiac arrests at 435 hospitals within a large national registry. For each hospital, we calculated the median duration of resuscitation before termination of efforts among its non-survivors as a measure of the hospital’s overall tendency for longer attempts. We then determined the association between a hospital’s tendency for longer attempts and risk-adjusted survival using multilevel regression models.
Findings
The overall proportion of patients achieving immediate survival with return of spontaneous circulation (ROSC) was 48·5% while 15·4% survived to discharge. For patients achieving ROSC, the median resuscitation time was 12 minutes (IQR: 6–21) while it was 20 minutes (IQR: 14–30) for those not achieving ROSC (i.e., non-survivors). Compared with patients at hospitals with the shortest attempts (median duration, 16 minutes), patients at hospitals with the longest attempts (median duration, 25 minutes) had a higher likelihood of ROSC (adjusted risk-ratio 1·12, [95% CI: 1·06–1·18]; p <0·001) and survival to discharge (adjusted risk-ratio 1·12, [95% CI: 1·02–1·23]; p=0·021). These findings were more prominent in cardiac arrests due to asystole and pulseless electrical activity (p for interaction<0.01 for both ROSC and survival to discharge).
Interpretation
The duration of resuscitation attempts varies across hospitals. Patients at hospitals with longer attempts have a higher likelihood of ROSC and survival to discharge, particularly when the arrest is due to asystole and pulseless electrical activity.
Funding
The American Heart Association, the Robert Wood Johnson Foundation Clinical Scholars Program, the National Institutes of Health.
doi:10.1016/S0140-6736(12)60862-9
PMCID: PMC3535188  PMID: 22958912
21.  Effect of Flexible Sigmoidoscopy-Based Screening on Incidence and Mortality of Colorectal Cancer: A Systematic Review and Meta-Analysis of Randomized Controlled Trials 
PLoS Medicine  2012;9(12):e1001352.
A systematic review and meta-analysis of randomized trials conducted by B. Joseph Elmunzer and colleagues reports that flexible sigmoidoscopy-based screening reduces the incidence of colorectal cancer in average-risk patients, as compared to usual care or no screening.
Background
Randomized controlled trials (RCTs) have yielded varying estimates of the benefit of flexible sigmoidoscopy (FS) screening for colorectal cancer (CRC). Our objective was to more precisely estimate the effect of FS-based screening on the incidence and mortality of CRC by performing a meta-analysis of published RCTs.
Methods and Findings
Medline and Embase databases were searched for eligible articles published between 1966 and 28 May 2012. After screening 3,319 citations and 29 potentially relevant articles, two reviewers identified five RCTs evaluating the effect of FS screening on the incidence and mortality of CRC. The reviewers independently extracted relevant data; discrepancies were resolved by consensus. The quality of included studies was assessed using criteria set out by the Evidence-Based Gastroenterology Steering Group. Random effects meta-analysis was performed.
The five RCTs meeting eligibility criteria were determined to be of high methodologic quality and enrolled 416,159 total subjects. Four European studies compared FS to no screening and one study from the United States compared FS to usual care. By intention-to-treat analysis, FS-based screening was associated with an 18% relative risk reduction in the incidence of CRC (relative risk [RR] 0.82, 95% CI 0.73–0.91, p<0.001, number needed to screen [NNS] to prevent one case of CRC = 361), a 33% reduction in the incidence of left-sided CRC (RR 0.67, 95% CI 0.59–0.76, p<0.001, NNS = 332), and a 28% reduction in the mortality of CRC (RR 0.72, 95% CI 0.65–0.80, p<0.001, NNS = 850). The efficacy estimate, the amount of benefit for those who actually adhered to the recommended treatment, suggested that FS screening reduced CRC incidence by 32% (p<0.001) and CRC-related mortality by 50% (p<0.001).
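A number needed to screen relates to the relative risk through the control arm's cumulative incidence, via NNS = 1 / (control incidence × (1 − RR)). As an illustrative consistency check (not a figure reported in the abstract), the control-arm incidence implied by the overall result can be back-computed:

```python
def implied_control_incidence(rr, nns):
    """Back out the control-arm cumulative incidence implied by a
    relative risk and a number needed to screen, using
    NNS = 1 / (control_incidence * (1 - rr))."""
    return 1.0 / (nns * (1.0 - rr))

# Overall CRC incidence result: RR 0.82, NNS 361
inc = implied_control_incidence(0.82, 361)
# ~0.0154, i.e. roughly 1.5% cumulative CRC incidence over follow-up
```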
Limitations of this meta-analysis include heterogeneity in the design of the included trials, absence of studies from Africa, Asia, or South America, and lack of studies comparing FS with colonoscopy or stool-based testing.
Conclusions
This meta-analysis of randomized controlled trials demonstrates that FS-based screening significantly reduces the incidence and mortality of colorectal cancer in average-risk patients.
Please see later in the article for the Editors' Summary
Editor's Summary
Background
Colorectal cancer (CRC) is the second leading cause of cancer-related death in the United States. Regular CRC screening has been shown to reduce the risk of dying from CRC by 16%, and CRC screening can identify early-stage cancers in otherwise healthy people, which allows for early treatment and management of the disease. Screening for colorectal cancer is frequently performed using flexible sigmoidoscopy (FS), a thin, flexible tube with a tiny camera and light on the end that allows a doctor to look at the inside wall of the bowel and remove any small growths or polyps. Although screening may detect early cancers, the life-saving and health benefits of screening are uncertain because a detected polyp may not necessarily progress; screening could therefore lead to anxiety and unnecessary interventions and treatments among those screened. Randomized controlled trials (RCTs) are needed to determine all of the risks involved in cancer screening; however, the guidelines that recommend FS-based screening do not rely upon RCT data. Recently, the results of four large-scale RCTs evaluating FS screening for CRC have been published. Their conflicting results with respect to the incidence and mortality of CRC have called into question the effectiveness of endoscopic screening.
Why Was This Study Done?
The results of RCTs measuring the risks and outcomes of CRC screening have shown varying estimates of the benefits of using FS screening. If better estimates of the risks and benefits of FS screening are developed, then the current CRC screening guidelines may be updated to reflect this new information. In this study, the authors show the results of a meta-analysis of published RCTs, which more precisely estimates the effects of FS-based screening on the incidence and mortality of colorectal cancer.
What Did the Researchers Do and Find?
The researchers used the Medline and Embase databases to find relevant studies from 1966 to May 28, 2012. After screening 3,319 citations and 29 potentially relevant articles, five RCTs of high methodologic quality, enrolling 416,159 total subjects, were identified that evaluated the effect of FS screening on the incidence and mortality of CRC. The data were extracted and random effects meta-analysis was performed. The meta-analysis revealed that FS-based screening was associated with an 18% relative risk reduction in the incidence of CRC (RR 0.82, 95% CI 0.73–0.91, p<0.001, number needed to screen [NNS] to prevent one case of CRC = 361), a 33% reduction in the incidence of left-sided CRC (RR 0.67, 95% CI 0.59–0.76, p<0.001, NNS = 332), and a 28% reduction in the mortality of CRC (RR 0.72, 95% CI 0.65–0.80, p<0.001, NNS = 850). The amount of benefit for those who adhered to the recommended treatment suggested that FS screening reduced CRC incidence by 32% (p<0.001) and CRC-related mortality by 50% (p<0.001).
What Do These Findings Mean?
This meta-analysis of RCTs evaluating the effect of FS on CRC incidence and mortality demonstrates that an FS-based screening strategy is effective in reducing the incidence and mortality of CRC. The current recommendations for endoscopic screening are based on observational studies, which may not accurately reflect the effect of FS-based screening on the incidence and mortality of CRC. Here, the authors performed a systematic review and meta-analysis of five recent RCTs to better estimate the true effect of FS-based screening. Thus, the results of this meta-analysis may affect health policy and directly impact patients and clinicians.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001352.
Cancer Research UK provides comprehensive information about screening for colorectal cancers, as does the UK National Screening Committee
PubMed Health has general information about colon cancer
The National Cancer Institute also has comprehensive resources on colorectal cancer and treatment
The Mayo Clinic provides an overview of all aspects of colon cancer
doi:10.1371/journal.pmed.1001352
PMCID: PMC3514315  PMID: 23226108
22.  A Randomized Trial of Rectal Indomethacin to Prevent Post-ERCP Pancreatitis 
The New England Journal of Medicine  2012;366(15):1414-1422.
Background
Preliminary research suggests that rectally administered nonsteroidal antiinflammatory drugs may reduce the incidence of pancreatitis after endoscopic retrograde cholangiopancreatography (ERCP).
Methods
In this multicenter, randomized, placebo-controlled, double-blind clinical trial, we assigned patients at elevated risk for post-ERCP pancreatitis to receive a single dose of rectal indomethacin or placebo immediately after ERCP. Patients were determined to be at high risk on the basis of validated patient- and procedure-related risk factors. The primary outcome was post-ERCP pancreatitis, which was defined as new upper abdominal pain, an elevation in pancreatic enzymes to at least three times the upper limit of the normal range 24 hours after the procedure, and hospitalization for at least 2 nights.
Results
A total of 602 patients were enrolled and completed follow-up. The majority of patients (82%) had a clinical suspicion of sphincter of Oddi dysfunction. Post-ERCP pancreatitis developed in 27 of 295 patients (9.2%) in the indomethacin group and in 52 of 307 patients (16.9%) in the placebo group (P = 0.005). Moderate-to-severe pancreatitis developed in 13 patients (4.4%) in the indomethacin group and in 27 patients (8.8%) in the placebo group (P = 0.03).
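The event counts above also yield an absolute risk reduction and a number needed to treat. The derivation below is a standard calculation from the reported figures, not a result stated in the abstract:

```python
def arr_and_nnt(events_tx, n_tx, events_ctrl, n_ctrl):
    """Absolute risk reduction (control risk - treatment risk)
    and number needed to treat (1 / ARR)."""
    arr = events_ctrl / n_ctrl - events_tx / n_tx
    return arr, 1.0 / arr

# Post-ERCP pancreatitis: 27/295 with indomethacin vs. 52/307 with placebo
arr, nnt = arr_and_nnt(27, 295, 52, 307)
# arr ~ 0.078, nnt ~ 13: treat roughly 13 high-risk patients
# with rectal indomethacin to prevent one case of pancreatitis
```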
Conclusions
Among patients at high risk for post-ERCP pancreatitis, rectal indomethacin significantly reduced the incidence of the condition. (Funded by the National Institutes of Health; ClinicalTrials.gov number, NCT00820612.)
doi:10.1056/NEJMoa1111103
PMCID: PMC3339271  PMID: 22494121
23.  Adherence to Colorectal Cancer Screening 
Archives of Internal Medicine  2012;172(7):575-582.
Background
Despite evidence that several colorectal cancer (CRC) screening strategies can reduce CRC mortality, screening rates remain low. This study aimed to determine whether the approach by which screening is recommended influences adherence.
Methods
We used a cluster randomization design with clinic time block as the unit of randomization. Persons at average risk for development of CRC in a racially/ethnically diverse urban setting were randomized to receive recommendation for screening by fecal occult blood testing (FOBT), colonoscopy, or their choice of FOBT or colonoscopy. The primary outcome was completion of CRC screening within 12 months after enrollment, defined as performance of colonoscopy, or 3 FOBT cards plus colonoscopy for any positive FOBT result. Secondary analyses evaluated sociodemographic factors associated with completion of screening.
Results
A total of 997 participants were enrolled; 58% completed the CRC screening strategy they were assigned or chose. However, participants who were recommended colonoscopy completed screening at a significantly lower rate (38%) than participants who were recommended FOBT (67%) (P < .001) or given a choice between FOBT and colonoscopy (69%) (P < .001). Latinos and Asians (primarily Chinese) completed screening more often than African Americans. Moreover, non-white participants adhered more often to FOBT, while white participants adhered more often to colonoscopy.
Conclusions
The common practice of universally recommending colonoscopy may reduce adherence to CRC screening, especially among racial/ethnic minorities. Significant variation in overall and strategy-specific adherence exists between racial/ethnic groups; however, this may be a proxy for health beliefs and/or language. These results suggest that patient preferences should be considered when making CRC screening recommendations.
Trial Registration
clinicaltrials.gov Identifier: NCT00705731
doi:10.1001/archinternmed.2012.332
PMCID: PMC3360917  PMID: 22493463
25.  Examining the Evidence: A Systematic Review of the Inclusion and Analysis of Older Adults in Randomized Controlled Trials 
ABSTRACT
BACKGROUND
Due to a shortage of studies focusing on older adults, clinicians and policy makers frequently rely on clinical trials of the general population to provide supportive evidence for treating complex, older patients.
OBJECTIVES
To examine the inclusion and analysis of complex, older adults in randomized controlled trials.
REVIEW METHODS
A PubMed search identified phase III or IV randomized controlled trials published in 2007 in JAMA, NEJM, Lancet, Circulation, and BMJ. Therapeutic interventions that assessed major morbidity or mortality in adults were included. For each study, age eligibility, average age of study population, primary and secondary outcomes, exclusion criteria, and the frequency, characteristics, and methodology of age-specific subgroup analyses were reviewed.
RESULTS
Of the 109 clinical trials reviewed in full, 22 (20.2%) excluded patients above a specified age. Almost half (45.6%) of the remaining trials excluded individuals using criteria that could disproportionately impact older adults. Only one in four trials (26.6%) examined outcomes that are considered highly relevant to older adults, such as health status or quality of life. Of the 42 (38.5%) trials that performed an age-specific subgroup analysis, fewer than half examined potential confounders of differential treatment effects by age, such as comorbidities or risk of primary outcome. Trials with age-specific subgroup analyses were more likely than those without to be multicenter trials (97.6% vs. 79.1%, p < 0.01) and funded by industry (83.3% vs. 62.7%, p < 0.05). Differential benefit by age was found in seven trials (16.7%).
CONCLUSION
Clinical trial evidence guiding treatment of complex, older adults could be improved by eliminating upper age limits for study inclusion, by reducing the use of eligibility criteria that disproportionately affect multimorbid older patients, by evaluating outcomes that are highly relevant to older individuals, and by encouraging adherence to recommended analytic methods for evaluating differential treatment effects by age.
Electronic supplementary material
The online version of this article (doi:10.1007/s11606-010-1629-x) contains supplementary material, which is available to authorized users.
doi:10.1007/s11606-010-1629-x
PMCID: PMC3138606  PMID: 21286840
clinical trial methodology; exclusion criteria; subgroup analysis; comorbidities
