Results 1-25 (41)
1.  Improved cardiovascular risk prediction using nonparametric regression and electronic health record data 
Medical care  2013;51(3):251-258.
Background
Use of the electronic health record (EHR) is expected to increase rapidly in the near future, yet little research exists on whether analyzing internal EHR data using flexible, adaptive statistical methods could improve clinical risk prediction. Extensive implementation of EHR in the Veterans Health Administration (VHA) provides an opportunity for exploration.
Objectives
To compare the performance of various approaches for predicting risk of cerebro- and cardiovascular (CCV) death, using traditional risk predictors versus more comprehensive EHR data.
Research Design
Retrospective cohort study. We identified all VHA patients without recent CCV events treated at twelve facilities from 2003 to 2007, and predicted risk using the Framingham risk score (FRS), logistic regression, generalized additive modeling, and gradient tree boosting.
Measures
The outcome was CCV-related death within five years. We assessed each method's predictive performance with the area under the ROC curve (AUC), the Hosmer-Lemeshow goodness-of-fit test, plots of estimated risk, and reclassification tables, using cross-validation to penalize over-fitting.
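The modeling comparison described here (conventional logistic regression versus gradient tree boosting, scored by cross-validated AUC) can be illustrated with a minimal sketch on synthetic data; this is not the study's VHA code, and all inputs are stand-ins.
```python
# Minimal sketch (not the study's code): cross-validated AUC for logistic
# regression vs. gradient tree boosting on synthetic stand-in predictors.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for EHR-derived predictors and a rare binary outcome.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           weights=[0.95, 0.05], random_state=0)

models = [("logistic regression", LogisticRegression(max_iter=1000)),
          ("gradient boosting", GradientBoostingClassifier(random_state=0))]
for name, model in models:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: cross-validated AUC = {auc.mean():.3f} (+/- {auc.std():.3f})")
```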
Results
Regression methods outperformed the FRS, even with the same predictors (AUC increased from 71% to 73% and calibration also improved). Even better performance was attained in models using additional EHR-derived predictor variables (AUC increased to 78% and net reclassification improvement was as large as 0.29). Nonparametric regression further improved calibration and discrimination compared to logistic regression.
Conclusions
Despite the EHR lacking some risk factors and its imperfect data quality, healthcare systems may be able to substantially improve risk prediction for their patients by using internally-developed EHR-derived models and flexible statistical methodology.
doi:10.1097/MLR.0b013e31827da594
PMCID: PMC4081533  PMID: 23269109
2.  Individual and Population Benefits of Daily Aspirin Therapy: A Proposal for Personalizing National Guidelines 
Background
Clinical practice guidelines that help clinicians and patients understand the magnitude of expected individual risks and benefits would support patient-centered decision-making and prioritization of care. We assessed the net benefit of daily aspirin in individuals to estimate the individual and public health implications of a more individualized decision-making approach.
Methods and Results
We used data from the National Health and Nutrition Examination Survey (NHANES), representing all U.S. persons aged 30 to 85 years with no history of myocardial infarction, and applied a Markov model based on randomized evidence and published literature to estimate the lifetime effects of aspirin treatment in quality-adjusted life years (QALYs). We show that treatment benefit varies greatly by an individual's cardiovascular disease (CVD) risk. Almost all adults have fewer major clinical events on aspirin, but for most, events prevented would be so rare that even a very small distaste for aspirin use would make treatment inappropriate. With minimal dislike of aspirin use (disutility = 0.005 QALY per year), only those with a 10-year cardiac event risk greater than 6.1% would have a net benefit. A disutility of 0.01 QALY moves this benefit cut-point to 10.6%. Multiple factors altered the absolute benefit of aspirin, but the strong relationship between CVD risk and magnitude of benefit was robust.
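The cut-point logic reported above can be sketched with simple arithmetic: treatment is worthwhile only when the expected QALYs gained from averted events exceed the accumulated harms and disutility of daily aspirin. The sketch below uses invented inputs, not the paper's Markov model.
```python
# Illustrative cut-point arithmetic only -- not the paper's Markov model.
# All inputs below are hypothetical assumptions.
def net_benefit(ten_year_risk, rrr=0.20, qalys_per_event_averted=2.0,
                bleeding_harm_qalys=0.02, disutility_per_year=0.005, years=10):
    benefit = ten_year_risk * rrr * qalys_per_event_averted    # QALYs from events averted
    harms = bleeding_harm_qalys + disutility_per_year * years  # bleeds + pill burden
    return benefit - harms

# Find the approximate 10-year CVD risk at which aspirin becomes net beneficial.
for risk_pct in range(1, 31):
    if net_benefit(risk_pct / 100) > 0:
        print(f"net benefit turns positive near a 10-year risk of {risk_pct}%")
        break
```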
Conclusions
The benefits of aspirin therapy depend substantially on an individual's risk of CVD and adverse treatment effects. Understanding who benefits from aspirin use and how much can help clinicians and patients develop a more patient-centered approach to preventive therapy.
doi:10.1161/CIRCOUTCOMES.110.959239
PMCID: PMC4039386  PMID: 21487091
aspirin; prevention; risk factors; shared decision-making
3.  Does Rectal Indomethacin Eliminate the Need for Prophylactic Pancreatic Stent Placement in Patients Undergoing High-Risk ERCP? Post hoc Efficacy and Cost-Benefit Analyses Using Prospective Clinical Trial Data 
OBJECTIVES
A recent large-scale randomized controlled trial (RCT) demonstrated that rectal indomethacin administration is effective in addition to pancreatic stent placement (PSP) for preventing post-endoscopic retrograde cholangiopancreatography (ERCP) pancreatitis (PEP) in high-risk cases. We performed a post hoc analysis of this RCT to explore whether rectal indomethacin can replace PSP in the prevention of PEP and to estimate the potential cost savings of such an approach.
METHODS
We retrospectively classified RCT subjects into four prevention groups: (1) no prophylaxis, (2) PSP alone, (3) rectal indomethacin alone, and (4) the combination of PSP and indomethacin. Multivariable logistic regression was used to adjust for imbalances in the prevalence of risk factors for PEP between the groups. Based on these adjusted PEP rates, we conducted an economic analysis comparing the costs associated with PEP prevention strategies employing rectal indomethacin alone, PSP alone, or the combination of both.
RESULTS
After adjusting for risk using two different logistic regression models, rectal indomethacin alone appeared to be more effective for preventing PEP than no prophylaxis, PSP alone, and the combination of indomethacin and PSP. Economic analysis revealed that indomethacin alone was a cost-saving strategy in 96% of Monte Carlo trials. A prevention strategy employing rectal indomethacin alone could save approximately $150 million annually in the United States compared with a strategy of PSP alone, and $85 million compared with a strategy of indomethacin and PSP.
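The cost comparison rests on a Monte Carlo simulation over uncertain event rates and costs; a hypothetical sketch of that approach follows, with all distributions invented rather than taken from the trial.
```python
# Hypothetical Monte Carlo cost sketch; all rates and costs are invented for
# illustration and are not the trial's values.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 10_000

pep_rate_indo = rng.beta(10, 90, n_trials)      # PEP rate, indomethacin alone
pep_rate_stent = rng.beta(12, 88, n_trials)     # PEP rate, stent alone
cost_stent = rng.normal(1500, 200, n_trials)    # cost of pancreatic stent placement ($)
cost_indo = 100                                 # cost of rectal indomethacin ($)
cost_pep = rng.normal(8000, 1500, n_trials)     # cost of a PEP episode ($)

total_indo = cost_indo + pep_rate_indo * cost_pep
total_stent = cost_stent + pep_rate_stent * cost_pep
saving = total_stent - total_indo

print(f"indomethacin alone is cost-saving in {np.mean(saving > 0):.0%} of simulated trials")
print(f"mean saving per patient: ${saving.mean():,.0f}")
```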
CONCLUSIONS
This hypothesis-generating study suggests that prophylactic rectal indomethacin could replace PSP in patients undergoing high-risk ERCP, potentially improving clinical outcomes and reducing healthcare costs. A RCT comparing rectal indomethacin alone vs. indomethacin plus PSP is needed.
doi:10.1038/ajg.2012.442
PMCID: PMC3947644  PMID: 23295278
4.  Improving the Reliability of Physician “Report Cards” 
Medical care  2013;51(3):266-274.
Background
Performance measures are widely used to profile primary care physicians (PCPs), but their reliability is often limited by small sample sizes. We evaluated the reliability of individual PCP profiles and whether reliability can be improved by combining measures into composites or by profiling practice groups.
Methods
We performed a cross-sectional analysis of electronic health record data for patients with diabetes (DM), congestive heart failure (CHF), ischemic vascular disease (IVD), or eligible for preventive care services seen by a PCP within a large, integrated healthcare system between April 2009 and May 2010. We evaluated performance on 14 measures of DM care, 9 of CHF, 7 of IVD, and 4 of preventive care.
Results
There were 51,771 patients seen by 163 physicians in 17 clinics. Few PCPs (0 to 60%) could be profiled with 80% reliability using single process or intermediate-outcome measures. Combining measures into single-disease composites improved reliability for DM and preventive care with 74.5% and 76.7% of PCPs having sufficient panel sizes, but composites remained unreliable for CHF and IVD. 85.3% of PCPs could be reliably profiled using a single overall composite. Aggregating PCPs into practice groups (3 to 21 PCPs per group) did not improve reliability in most cases due to little between-group practice variation.
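The notion of profiling a PCP "with 80% reliability" can be made concrete with the standard reliability formula for a physician's mean score: between-physician variance divided by the total variance of that mean. The sketch below uses this generic formula with hypothetical variance components; it is not necessarily the paper's exact model.
```python
# Generic "report card" reliability: between-physician variance divided by the
# total variance of a physician's mean score. Variance components are hypothetical.
def profile_reliability(var_between, var_within, panel_size):
    """Reliability of a physician's mean performance score over a patient panel."""
    return var_between / (var_between + var_within / panel_size)

# Small panels rarely reach the conventional 0.8 threshold.
print(profile_reliability(var_between=0.01, var_within=0.20, panel_size=25))   # ~0.56
print(profile_reliability(var_between=0.01, var_within=0.20, panel_size=100))  # ~0.83
```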
Conclusion
Single measures rarely differentiate between individual PCPs or groups of PCPs reliably. Combining measures into single- or multi-disease composites can improve reliability for some common conditions, but not all. Assessing PCP practice groups within a single healthcare system, rather than individual PCPs, did not substantially improve reliability.
doi:10.1097/MLR.0b013e31827da99c
PMCID: PMC3669898  PMID: 23295578
quality measurement; reliability; physician profiling
5.  Physician Practices and Readiness for Medical Home Reforms: Policy, Pitfalls, and Possibilities 
Health Services Research  2012;47(1 Pt 2):486-508.
Objective
To determine the proportion of physician practices in the United States that currently meets medical home criteria.
Data Source/Study Setting
2007 and 2008 National Ambulatory Medical Care Survey.
Study Design
We mapped survey items to the National Committee on Quality Assurance's (NCQA's) medical home standards. After awarding points for each “passed” element, we calculated a practice's infrastructure score, dividing its cumulative total by the number of available points. We identified practices that would be recognized as a medical home (Level 1 [25–49 percent], Level 2 [50–74 percent], or Level 3 [infrastructure score ≥75 percent]) and examined characteristics associated with NCQA recognition.
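A sketch of the scoring arithmetic just described (points earned divided by points available, then mapped to the recognition bands quoted above), with hypothetical example inputs:
```python
# Sketch of the scoring rule described above; example inputs are hypothetical.
def infrastructure_score(points_earned, points_available):
    return 100.0 * points_earned / points_available

def recognition_level(score):
    if score >= 75:
        return "Level 3"
    if score >= 50:
        return "Level 2"
    if score >= 25:
        return "Level 1"
    return "Not recognized"

score = infrastructure_score(points_earned=42, points_available=100)
print(score, recognition_level(score))  # 42.0 Level 1
```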
Results
Forty-six percent (95 percent confidence interval [CI], 42.5–50.2) of all practices lack sufficient medical home infrastructure. While 72.3 percent (95 percent CI, 64.0–80.7 percent) of multi-specialty groups would achieve recognition, only 49.8 percent (95 percent CI, 45.2–54.5 percent) of solo/partnership practices meet NCQA standards. Although better prepared than specialists, 40 percent of primary care practices would not qualify as a medical home under present criteria.
Conclusion
Almost half of all practices fail to meet NCQA standards for medical home recognition.
doi:10.1111/j.1475-6773.2011.01332.x
PMCID: PMC3393004  PMID: 22091559
Models; organizational; patient-centered care; organization and administration; primary health care
7.  Fall Associated Difficulty with Activities of Daily Living (ADL) in Functionally Independent Older Adults Aged 65 to 69 in the United States: A Cohort Study 
Journal of the American Geriatrics Society  2013;61(1):10.1111/jgs.12071.
Background/Objectives
Falling is a risk factor for functional dependence in adults 75 years and older, but has not been systematically evaluated in younger and healthier older adults. This younger group of older adults may benefit from earlier identification of their risk. We hypothesized that falling would be a marker for future difficulty with activities of daily living (ADL) that would vary by fall frequency and associated injury.
Design, Setting, and Patients
Nationally representative cohort of 2,020 community-living, functionally independent older adults 65-69 years of age at baseline followed between 1998-2008.
Main Outcome Measurement
ADL difficulty
Results
Experiencing one fall with injury in the prior 2 years (Odds = 1.78, 95% CI 1.29-2.48), at least 2 falls without injury in the prior 2 years (Odds = 2.36, 95% CI 1.80-3.09), or at least 2 falls with at least one injury in the prior 2 years (Odds = 3.75, 95% CI 2.55-5.53) were independently associated with higher rates of ADL difficulty after adjustment for socio-demographic, behavioral, and clinical covariates.
Limitations
HRS data are self-reported
Conclusion
Falling is an important marker for future ADL difficulty in younger, functionally independent older adults. Individuals who fall frequently or report injury are at highest risk.
doi:10.1111/jgs.12071
PMCID: PMC3807864  PMID: 23311555
Activities of Daily Living; Falls; Disability; Older Adults
8.  Providing clinicians with a patient’s 10-year cardiovascular risk improves their statin prescribing: a true experiment using clinical vignettes 
Background
Statins are effective for primary prevention of cardiovascular (CV) disease, the leading cause of death in the world. Multinational guidelines emphasize CV risk as an important factor for optimal statin prescribing, but it is not clear how primary care providers (PCPs) use this information. The objective of this study was to determine how PCPs use information about global CV risk for primary prevention of CV disease.
Methods
A double-blinded, randomized experiment using clinical vignettes mailed to office-based PCPs in the United States who were identified through the American Medical Association Physician Masterfile in June 2012. PCPs in the control group received clinical vignettes with all information on the risk factors needed to calculate CV risk. The experimental group received the same vignettes in addition to the subject’s 10-year calculated CV risk (Framingham risk score). The primary study outcome was the decision to prescribe a statin.
Results
Providing calculated CV risk to providers increased statin prescribing in the two high-risk cases (CV risk > 20%) by 32 percentage points (41% vs. 73%; 95% CI 23-40%, p < 0.001; relative risk [RR] = 1.78) and 16 percentage points (12% vs. 27%; 95% CI 8.5-22.5%, p < 0.001; RR = 2.25), and decreased statin prescribing in the lowest-risk case (CV risk = 2%) by 9 percentage points (95% CI 1.0-16.7%, p = 0.003; RR = 0.88). Fewer than 20% of participants in each group reported routinely calculating 10-year CV risk in their patients.
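The contrasts quoted above are straightforward arithmetic on the prescribing proportions; a short sketch reproducing the risk difference and relative risk for the first high-risk vignette:
```python
# Arithmetic behind the reported contrasts (proportions taken from the results above).
def risk_difference_pp(p_control, p_experimental):
    return 100 * (p_experimental - p_control)

def relative_risk(p_control, p_experimental):
    return p_experimental / p_control

# First high-risk vignette: 41% prescribing without vs. 73% with calculated risk.
print(round(risk_difference_pp(0.41, 0.73)))   # 32 percentage points
print(round(relative_risk(0.41, 0.73), 2))     # RR = 1.78
```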
Conclusions
Providers do not routinely calculate 10-year CV risk for their patients. In this vignette experiment, PCPs undertreated low LDL, high CV risk patients. Giving providers a patient’s calculated CV risk improved statin prescribing. Providing PCPs with accurate estimates of patient CV risk at the point of service has the potential to improve the efficiency of statin prescribing.
doi:10.1186/1471-2261-13-90
PMCID: PMC3924357  PMID: 24148829
Primary prevention; Cardiovascular disease; Statins; Cardiovascular risk
9.  Cardiac Risk is Not Associated with Hypertension Treatment Intensification 
Objective
Considering cardiovascular (CV) risk could make clinical care more efficient and individualized, but most practice guidelines focus on single risk factors. We sought to see if hypertension treatment intensification (TI) is more likely in patients with elevated CV risk.
Study design
Prospective cohort study of 856 US Veterans with diabetes and elevated blood pressure (BP).
Methods
We used multilevel logistic regression to compare TI across three CV risk groups – those with history of heart disease, a high-risk primary prevention group (10-year event risk > 20% but no history of heart disease), and those with low/medium CV risk (10-year event risk < 20%).
Results
There were no significant differences in TI rates across risk groups, with adjusted odds ratios (ORs) of 1.19 (95% confidence interval 0.77–1.84) and 1.18 (0.76–1.83) for high-risk patients and those with a history of CVD, respectively, compared with those of low/medium-risk. Several individual risk factors were associated with higher rates of TI: systolic BP, mean BP in the prior year, and higher hemoglobin A1C. Self-reported home BP < 140/90 was associated with lower rates of TI. Incorporating CV risk into TI decision algorithms could prevent an estimated 38% more cardiac events without increasing the number of treated patients.
Conclusions
While an individual’s blood pressure alters clinical decisions about TI, overall CV risk does not appear to play a role in clinical decision-making. Adoption of TI decision algorithms that incorporate CV risk could substantially enhance the efficiency and clinical utility of CV preventive care.
PMCID: PMC3682773  PMID: 22928756
Prevention; hypertension; decision making; veterans
10.  Duration of resuscitation efforts and subsequent survival after in-hospital cardiac arrest 
Lancet  2012;380(9852):1473-1481.
Background
During in-hospital cardiac arrests, it is uncertain how long resuscitation should continue prior to termination of efforts. We hypothesized that the duration of resuscitation varies across hospitals, and that patients at hospitals with longer attempts have higher survival rates.
Methods
Between 2000 and 2008, we identified 64,339 patients with cardiac arrests at 435 hospitals within a large national registry. For each hospital, we calculated the median duration of resuscitation before termination of efforts among its non-survivors as a measure of the hospital’s overall tendency for longer attempts. We then determined the association between a hospital’s tendency for longer attempts and risk-adjusted survival using multilevel regression models.
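The hospital-level exposure described here (each hospital's median resuscitation duration among its non-survivors) can be sketched on toy data; the column names below are illustrative, not the registry's.
```python
# Toy sketch of the exposure: each hospital's median resuscitation duration among
# its non-survivors. Column names are illustrative, not the registry's.
import pandas as pd

df = pd.DataFrame({
    "hospital": ["A", "A", "A", "B", "B", "C", "C", "C"],
    "duration_min": [10, 25, 18, 30, 22, 8, 12, 15],
    "survived": [0, 0, 1, 0, 0, 0, 1, 0],
})

non_survivors = df[df["survived"] == 0]
median_attempt = non_survivors.groupby("hospital")["duration_min"].median()
print(median_attempt.sort_values())  # hospitals ordered by tendency for longer attempts
```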
Findings
The overall proportion of patients achieving immediate survival with return of spontaneous circulation (ROSC) was 48·5% while 15·4% survived to discharge. For patients achieving ROSC, the median resuscitation time was 12 minutes (IQR: 6–21) while it was 20 minutes (IQR: 14–30) for those not achieving ROSC (i.e., non-survivors). Compared with patients at hospitals with the shortest attempts (median duration, 16 minutes), patients at hospitals with the longest attempts (median duration, 25 minutes) had a higher likelihood of ROSC (adjusted risk-ratio 1·12, [95% CI: 1·06–1·18]; p <0·001) and survival to discharge (adjusted risk-ratio 1·12, [95% CI: 1·02–1·23]; p=0·021). These findings were more prominent in cardiac arrests due to asystole and pulseless electrical activity (p for interaction<0.01 for both ROSC and survival to discharge).
Interpretation
The duration of resuscitation attempts varies across hospitals. Patients at hospitals with longer attempts have a higher likelihood of ROSC and survival to discharge, particularly when the arrest is due to asystole and pulseless electrical activity.
Funding
The American Heart Association, the Robert Wood Johnson Foundation Clinical Scholars Program, the National Institutes of Health.
doi:10.1016/S0140-6736(12)60862-9
PMCID: PMC3535188  PMID: 22958912
11.  Effect of Flexible Sigmoidoscopy-Based Screening on Incidence and Mortality of Colorectal Cancer: A Systematic Review and Meta-Analysis of Randomized Controlled Trials 
PLoS Medicine  2012;9(12):e1001352.
A systematic review and meta-analysis of randomized trials conducted by B. Joseph Elmunzer and colleagues reports that flexible sigmoidoscopy-based screening reduces the incidence of colorectal cancer in average-risk patients, as compared to usual care or no screening.
Background
Randomized controlled trials (RCTs) have yielded varying estimates of the benefit of flexible sigmoidoscopy (FS) screening for colorectal cancer (CRC). Our objective was to more precisely estimate the effect of FS-based screening on the incidence and mortality of CRC by performing a meta-analysis of published RCTs.
Methods and Findings
Medline and Embase databases were searched for eligible articles published between 1966 and 28 May 2012. After screening 3,319 citations and 29 potentially relevant articles, two reviewers identified five RCTs evaluating the effect of FS screening on the incidence and mortality of CRC. The reviewers independently extracted relevant data; discrepancies were resolved by consensus. The quality of included studies was assessed using criteria set out by the Evidence-Based Gastroenterology Steering Group. Random effects meta-analysis was performed.
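A minimal sketch of a DerSimonian-Laird random-effects pooling step, with a number needed to screen derived from an assumed control-group risk; the study-level inputs below are invented, not the five trials' data.
```python
# Minimal DerSimonian-Laird random-effects pooling; study inputs are invented.
import numpy as np

log_rr = np.log([0.80, 0.75, 0.90, 0.70, 0.85])      # per-study log relative risks
var = np.array([0.010, 0.020, 0.015, 0.030, 0.012])  # their variances

w = 1 / var
fixed_mean = np.sum(w * log_rr) / np.sum(w)
q = np.sum(w * (log_rr - fixed_mean) ** 2)
tau2 = max(0.0, (q - (len(log_rr) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_star = 1 / (var + tau2)
pooled_rr = np.exp(np.sum(w_star * log_rr) / np.sum(w_star))

control_risk = 0.015                   # assumed CRC incidence without screening
nns = 1 / (control_risk * (1 - pooled_rr))
print(f"pooled RR = {pooled_rr:.2f}, number needed to screen = {nns:.0f}")
```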
The five RCTs meeting eligibility criteria were determined to be of high methodologic quality and enrolled 416,159 total subjects. Four European studies compared FS to no screening and one study from the United States compared FS to usual care. By intention-to-treat analysis, FS-based screening was associated with an 18% relative risk reduction in the incidence of CRC (relative risk [RR] 0.82, 95% CI 0.73–0.91, p<0.001, number needed to screen [NNS] to prevent one case of CRC = 361), a 33% reduction in the incidence of left-sided CRC (RR 0.67, 95% CI 0.59–0.76, p<0.001, NNS = 332), and a 28% reduction in the mortality of CRC (RR 0.72, 95% CI 0.65–0.80, p<0.001, NNS = 850). The efficacy estimate, the amount of benefit for those who actually adhered to the recommended treatment, suggested that FS screening reduced CRC incidence by 32% (p<0.001) and CRC-related mortality by 50% (p<0.001).
Limitations of this meta-analysis include heterogeneity in the design of the included trials, absence of studies from Africa, Asia, or South America, and lack of studies comparing FS with colonoscopy or stool-based testing.
Conclusions
This meta-analysis of randomized controlled trials demonstrates that FS-based screening significantly reduces the incidence and mortality of colorectal cancer in average-risk patients.
Please see later in the article for the Editors' Summary
Editor's Summary
Background
Colorectal cancer (CRC) is the second leading cause of cancer-related death in the United States. Regular CRC screening has been shown to reduce the risk of dying from CRC by 16%, and CRC screening can identify early stage cancers in otherwise healthy people, which allows for early treatment and management of the disease. Screening for colorectal cancer is frequently performed using flexible sigmoidoscopy (FS), a thin, flexible tube with a tiny camera and light on the end that allows a doctor to look at the inside wall of the bowel and remove any small growths or polyps. Although screening may detect early cancers, the life-saving and health benefits of screening are uncertain because a detected polyp may not necessarily progress. This could lead to anxiety and unnecessary interventions and treatments amongst those screened. Randomized controlled trials (RCTs) are needed to determine all of the risks involved in cancer screening; however, the guidelines that recommend FS-based screening do not rely upon RCT data. Recently, the results of four large-scale RCTs evaluating FS screening for CRC have been published. The conflicting results with respect to the incidence and mortality of CRC in these studies have called into question the effectiveness of endoscopic screening.
Why Was This Study Done?
The results of RCTs measuring the risks and outcomes of CRC screening have shown varying estimates of the benefits of using FS screening. If better estimates of the risks and benefits of FS screening are developed, then the current CRC screening guidelines may be updated to reflect this new information. In this study, the authors show the results of a meta-analysis of published RCTs, which more precisely estimates the effects of FS-based screening on the incidence and mortality of colorectal cancer.
What Did the Researchers Do and Find?
The researchers used the Medline and Embase databases to find relevant studies from 1966 to May 28, 2012. After screening 3,319 citations and 29 potentially relevant articles, five RCTs of high methodologic quality and 416,159 total subjects evaluating the effect of FS screening on the incidence and mortality of CRC were identified. The data were extracted and random effects meta-analysis was performed. The meta-analysis revealed that FS-based screening was associated with an 18% relative risk reduction in the incidence of CRC (0.82, 95% CI 0.73–0.91, p<0.001, number needed to screen (NNS) to prevent one case of CRC = 361), a 33% reduction in the incidence of left-sided CRC (RR 0.67, 95% CI 0.59–0.76, p<0.001, NNS = 332), and a 28% reduction in the mortality of CRC (RR 0.72, 95% CI 0.65–0.80, p<0.001, NNS = 850). The amount of benefit for those who adhered to the recommended treatment suggested that FS screening reduced CRC incidence by 32% (p<0.001), and CRC-related mortality by 50% (p<0.001).
What Do These Findings Mean?
This meta-analysis of RCTs evaluating the effect of FS on CRC incidence and mortality demonstrates that a FS-based strategy for screening is very effective in reducing the incidence and mortality of CRC in patients. The current recommendations for endoscopic screening are based on observational studies, which may not accurately reflect the effect of FS-based screening on the incidence and mortality of CRC. Here, the authors performed a systematic review and meta-analysis of five recent RCTs to better estimate the true effect of FS-based screening on the incidence and mortality of CRC. Thus, the results of this meta-analysis may affect health policy, and directly impact patients and clinicians.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001352.
Cancer research UK provides comprehensive information about screening for colorectal cancers as does the UK National Screening Committee
PubMed Health has general information about colon cancer
The National Cancer Institute also has comprehensive resources on colorectal cancer and treatment
The Mayo Clinic provides an overview of all aspects of colon cancer
doi:10.1371/journal.pmed.1001352
PMCID: PMC3514315  PMID: 23226108
12.  A Randomized Trial of Rectal Indomethacin to Prevent Post-ERCP Pancreatitis 
The New England Journal of Medicine  2012;366(15):1414-1422.
Background
Preliminary research suggests that rectally administered nonsteroidal antiinflammatory drugs may reduce the incidence of pancreatitis after endoscopic retrograde cholangiopancreatography (ERCP).
Methods
In this multicenter, randomized, placebo-controlled, double-blind clinical trial, we assigned patients at elevated risk for post-ERCP pancreatitis to receive a single dose of rectal indomethacin or placebo immediately after ERCP. Patients were determined to be at high risk on the basis of validated patient- and procedure-related risk factors. The primary outcome was post-ERCP pancreatitis, which was defined as new upper abdominal pain, an elevation in pancreatic enzymes to at least three times the upper limit of the normal range 24 hours after the procedure, and hospitalization for at least 2 nights.
Results
A total of 602 patients were enrolled and completed follow-up. The majority of patients (82%) had a clinical suspicion of sphincter of Oddi dysfunction. Post-ERCP pancreatitis developed in 27 of 295 patients (9.2%) in the indomethacin group and in 52 of 307 patients (16.9%) in the placebo group (P = 0.005). Moderate-to-severe pancreatitis developed in 13 patients (4.4%) in the indomethacin group and in 27 patients (8.8%) in the placebo group (P = 0.03).
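The primary result implies a simple absolute risk reduction and number needed to treat; the sketch below uses the event counts quoted above (the chi-square check is for illustration and need not reproduce the published p-value exactly).
```python
# Absolute risk reduction and number needed to treat from the counts quoted above.
from scipy.stats import chi2_contingency

events_indo, n_indo = 27, 295
events_placebo, n_placebo = 52, 307

arr = events_placebo / n_placebo - events_indo / n_indo
print(f"ARR = {arr:.1%}, NNT = {1 / arr:.0f}")     # roughly 7.8% and 13

table = [[events_indo, n_indo - events_indo],
         [events_placebo, n_placebo - events_placebo]]
chi2, p_value, _, _ = chi2_contingency(table)
print(f"chi-square p = {p_value:.3f}")
```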
Conclusions
Among patients at high risk for post-ERCP pancreatitis, rectal indomethacin significantly reduced the incidence of the condition. (Funded by the National Institutes of Health; ClinicalTrials.gov number, NCT00820612.)
doi:10.1056/NEJMoa1111103
PMCID: PMC3339271  PMID: 22494121
13.  Adherence to Colorectal Cancer Screening 
Archives of Internal Medicine  2012;172(7):575-582.
Background
Despite evidence that several colorectal cancer (CRC) screening strategies can reduce CRC mortality, screening rates remain low. This study aimed to determine whether the approach by which screening is recommended influences adherence.
Methods
We used a cluster randomization design with clinic time block as the unit of randomization. Persons at average risk for development of CRC in a racially/ethnically diverse urban setting were randomized to receive recommendation for screening by fecal occult blood testing (FOBT), colonoscopy, or their choice of FOBT or colonoscopy. The primary outcome was completion of CRC screening within 12 months after enrollment, defined as performance of colonoscopy, or 3 FOBT cards plus colonoscopy for any positive FOBT result. Secondary analyses evaluated sociodemographic factors associated with completion of screening.
Results
A total of 997 participants were enrolled; 58% completed the CRC screening strategy they were assigned or chose. However, participants who were recommended colonoscopy completed screening at a significantly lower rate (38%) than participants who were recommended FOBT (67%) (P< .001) or given a choice between FOBT or colonoscopy (69%) (P< .001). Latinos and Asians (primarily Chinese) completed screening more often than African Americans. Moreover, non-white participants adhered more often to FOBT, while white participants adhered more often to colonoscopy.
Conclusions
The common practice of universally recommending colonoscopy may reduce adherence to CRC screening, especially among racial/ethnic minorities. Significant variation in overall and strategy-specific adherence exists between racial/ethnic groups; however, this may be a proxy for health beliefs and/or language. These results suggest that patient preferences should be considered when making CRC screening recommendations.
Trial Registration
ClinicalTrials.gov Identifier: NCT00705731
doi:10.1001/archinternmed.2012.332
PMCID: PMC3360917  PMID: 22493463
15.  Examining the Evidence: A Systematic Review of the Inclusion and Analysis of Older Adults in Randomized Controlled Trials 
ABSTRACT
BACKGROUND
Due to a shortage of studies focusing on older adults, clinicians and policy makers frequently rely on clinical trials of the general population to provide supportive evidence for treating complex, older patients.
OBJECTIVES
To examine the inclusion and analysis of complex, older adults in randomized controlled trials.
REVIEW METHODS
A PubMed search identified phase III or IV randomized controlled trials published in 2007 in JAMA, NEJM, Lancet, Circulation, and BMJ. Therapeutic interventions that assessed major morbidity or mortality in adults were included. For each study, age eligibility, average age of study population, primary and secondary outcomes, exclusion criteria, and the frequency, characteristics, and methodology of age-specific subgroup analyses were reviewed.
RESULTS
Of the 109 clinical trials reviewed in full, 22 (20.2%) excluded patients above a specified age. Almost half (45.6%) of the remaining trials excluded individuals using criteria that could disproportionately impact older adults. Only one in four trials (26.6%) examined outcomes that are considered highly relevant to older adults, such as health status or quality of life. Of the 42 (38.5%) trials that performed an age-specific subgroup analysis, fewer than half examined potential confounders of differential treatment effects by age, such as comorbidities or risk of primary outcome. Trials with age-specific subgroup analyses were more likely than those without to be multicenter trials (97.6% vs. 79.1%, p < 0.01) and funded by industry (83.3% vs. 62.7%, p < 0.05). Differential benefit by age was found in seven trials (16.7%).
CONCLUSION
Clinical trial evidence guiding treatment of complex, older adults could be improved by eliminating upper age limits for study inclusion, by reducing the use of eligibility criteria that disproportionately affect multimorbid older patients, by evaluating outcomes that are highly relevant to older individuals, and by encouraging adherence to recommended analytic methods for evaluating differential treatment effects by age.
Electronic supplementary material
The online version of this article (doi:10.1007/s11606-010-1629-x) contains supplementary material, which is available to authorized users.
doi:10.1007/s11606-010-1629-x
PMCID: PMC3138606  PMID: 21286840
clinical trial methodology; exclusion criteria; subgroup analysis; comorbidities
16.  Variation in Organ Quality between Liver Transplant Centers 
A wide spectrum of quality exists among deceased donor organs available for liver transplantation. It is unknown whether some transplant centers systematically use more low quality organs, and what factors might influence these decisions. We used hierarchical regression to measure variation in Donor Risk Index (DRI) in the U.S. by region, Organ Procurement Organization (OPO), and transplant center. The sample included all adults who underwent deceased donor liver transplantation between January 12, 2005 and February 1, 2009 (n=23,810). Despite adjusting for geographic region and OPO, transplant centers’ mean DRI ranged from 1.27–1.74, and could not be explained by differences in patient populations such as disease severity. Larger volume centers and those having competing centers within their OPO were more likely to use higher risk organs, particularly among recipients with lower Model for End-stage Liver Disease (MELD) scores. Centers using higher risk organs had equivalent waiting list mortality rates, but tended to have higher post-transplant mortality (hazard ratio 1.10 per 0.1 increase in mean DRI). In conclusion, the quality of the deceased donor organs that patients receive is variable and depends in part on characteristics of the transplant center they visit.
doi:10.1111/j.1600-6143.2011.03487.x
PMCID: PMC3175797  PMID: 21466651
17.  Diminishing Efficacy of Combination Therapy, Response-Heterogeneity, and Treatment Intolerance Limit the Attainability of Tight Risk Factor Control in Patients with Diabetes 
Health Services Research  2010;45(2):437-456.
Objective
To evaluate the attainability of tight risk factor control targets for three diabetes risk factors and to assess the degree of polypharmacy required.
Data Sources/Study Setting
National Health and Nutrition Examination Survey-III.
Study Design
We simulated a strategy of “treating to targets,” exposing subjects to a battery of treatments until low-density lipoprotein (LDL)-cholesterol (100 mg/dL), hemoglobin A1c (7 percent), and blood pressure (130/80 mm Hg) targets were achieved or until all treatments had been exhausted. Regimens included five statins of increasing potency, four A1c-lowering therapies, and eight steps of antihypertensive therapy.
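The "treating to targets" strategy amounts to a loop that applies successive treatment steps until the target is reached or the regimen is exhausted. The sketch below shows that loop for a single risk factor (LDL) with invented per-step reductions; it is not the paper's full simulation.
```python
# Stripped-down "treat to target" loop for a single risk factor (LDL) with invented
# per-step percentage reductions; not the paper's full simulation.
import numpy as np

rng = np.random.default_rng(1)
baseline_ldl = rng.normal(140, 30, 10_000).clip(70, 260)   # hypothetical mg/dL
step_reductions = [0.27, 0.06, 0.06, 0.06, 0.06]           # successive statin titrations

ldl = baseline_ldl.copy()
steps_used = np.zeros_like(ldl)
for reduction in step_reductions:
    above_target = ldl > 100            # target: LDL < 100 mg/dL
    ldl[above_target] *= (1 - reduction)
    steps_used[above_target] += 1

print(f"reached LDL target: {np.mean(ldl <= 100):.0%}")
print(f"mean titration steps among treated subjects: {steps_used[steps_used > 0].mean():.1f}")
```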
Data Collection/Extraction Methods
We selected parameter estimates from placebo-controlled trials and meta-analyses.
Principal Findings
Under ideal efficacy conditions, 77, 64, and 58 percent of subjects achieved the LDL, A1c, and blood pressure targets, respectively. Successful control depended highly on a subject's baseline number of treatments. Using the least favorable assumptions of treatment tolerance, success rates were 11–17 percentage points lower. Approximately 57 percent of subjects required five or more medication classes.
Conclusions
A significant proportion of people with diabetes will fail to achieve targets despite using high doses of multiple, conventional treatments. These findings raise concerns about the feasibility and polypharmacy burden needed for tight risk factor control, and the use of measures of tight control to assess the quality of care for diabetes.
doi:10.1111/j.1475-6773.2009.01075.x
PMCID: PMC2838154  PMID: 20070387
Quality measurement; Monte Carlo simulation; outcomes research; diabetes mellitus
18.  Prevalence, Diagnosis, and Treatment of Impaired Fasting Glucose and Impaired Glucose Tolerance in Nondiabetic U.S. Adults 
Diabetes Care  2010;33(11):2355-2359.
OBJECTIVE
To estimate the rates of prevalence, diagnosis, and treatment of impaired fasting glucose (IFG) and impaired glucose tolerance (IGT).
RESEARCH DESIGN AND METHODS
A representative sample of the U.S. population (the National Health and Nutrition Examination Survey [NHANES]) from 2005–2006 including 1,547 nondiabetic adults (>18 years of age) without a history of myocardial infarction was assessed to determine the proportion of adults who met the criteria for IFG/IGT, and the proportion of IFG/IGT subjects who: 1) reported receiving a diagnosis from their physicians; 2) were prescribed lifestyle modification or an antihyperglycemic agent; and 3) were currently on therapy. We used multivariable regression analysis to identify predictors of diagnosis and treatment.
RESULTS
Of the 1,547 subjects, 34.6% (CI 30.3–38.9%) had pre-diabetes: 19.4% had IFG only, 5.4% had IGT only, and 9.8% had both IFG and IGT. Only 4.8% of those with pre-diabetes reported having received a formal diagnosis from their physicians. No subjects with pre-diabetes received oral antihyperglycemics, and the rates of recommendation for exercise or diet were 31.7% and 33.5%, respectively. Among the 47.7% of pre-diabetic subjects who exercised, 49.4% reported exercising for at least 30 min daily.
CONCLUSIONS
Three years after a major clinical trial demonstrated that interventions could greatly reduce progression from IFG/IGT to type 2 diabetes, the majority of the U.S. population with IFG/IGT was undiagnosed and untreated with interventions. Whether this is due to physicians being unaware of the evidence, unconvinced by the evidence, or clinical inertia is unclear.
doi:10.2337/dc09-1957
PMCID: PMC2963494  PMID: 20724649
19.  Interhospital Transfers among Medicare Beneficiaries Admitted for Acute Myocardial Infarction at Non-Revascularization Hospitals 
Background
Patients with acute myocardial infarctions (AMI) who are admitted to hospitals without coronary revascularization are frequently transferred to hospitals with this capability, yet we know little about the basis for how such revascularization hospitals are selected.
Methods and Results
We examined interhospital transfer patterns in 71,336 AMI patients admitted to hospitals without revascularization capabilities in 2006 Medicare claims data, using network analysis and regression models. A total of 31,607 (44.3%) AMI patients were transferred from 1,684 non-revascularization hospitals to 1,104 revascularization hospitals. Median time to transfer was 2 days. Median transfer distance was 26.7 miles, with 96.1% within 100 miles. In 45.8% of cases, patients bypassed a closer hospital to go to a farther hospital with a better 30-day risk-standardized mortality rate. However, in 36.8% of cases, another revascularization hospital with lower 30-day risk-standardized mortality was actually closer to the original admitting non-revascularization hospital than the observed transfer destination. Adjusted regression models demonstrated that shorter transfer distances were more common than transfers to the hospitals with the lowest 30-day mortality rates. Simulations suggest that an optimized system that prioritized the transfer of AMI patients to a nearby hospital with the lowest 30-day mortality rate might produce a clinically meaningful reduction in mortality.
Conclusions
Over 40% of AMI patients admitted to non-revascularization hospitals are transferred to revascularization hospitals. Many patients are not directed to nearby hospitals with the lowest 30-day risk-standardized mortality, and this may represent an opportunity for improvement.
doi:10.1161/CIRCOUTCOMES.110.957993
PMCID: PMC3103265  PMID: 20682917
patient transfers; revascularization; networks; Medicare; mortality
20.  Medication cost problems among chronically ill adults in the US: did the financial crisis make a bad situation even worse? 
A national internet survey was conducted between March and April 2009 among 27,302 US participants in the Harris Interactive Chronic Illness Panel. Respondents reported behaviors related to cost-related medication non-adherence (CRN) and the impacts of medication costs on other aspects of their daily lives. Among respondents aged 40–64 and looking for work, 66% reported CRN in 2008, and 41% did not fill a prescription due to cost pressures. More than half of respondents aged 40–64 and nearly two-thirds of those in this group who were looking for work or disabled reported other impacts of medication costs, such as cutting back on basic needs or increasing credit card debt. More than one-third of respondents aged 65+ who were working or looking for work reported CRN. Regardless of age or employment status, roughly half of respondents reporting medication cost hardship said that these problems had become more frequent in 2008 than before the economic recession. These data show that many chronically ill patients, particularly those looking for work or disabled, reported greater medication cost problems since the economic crisis began. Given links between CRN and worse health, the financial downturn may have had significant health consequences for adults with chronic illness.
doi:10.2147/PPA.S17363
PMCID: PMC3090380  PMID: 21573050
medication adherence; cost-of-care; access to care; chronic disease
21.  Prematurity and Low Birth Weight as Potential Mediators of Higher Stillbirth Risk in Mixed Black/White Race Couples 
Journal of Women's Health  2010;19(4):767-773.
Abstract
Objective
Although births of multiracial and multiethnic infants are becoming more common in the United States, little is known about birth outcomes and risks for adverse events. We evaluated risk of fetal death for mixed race couples compared with same race couples and examined the role of prematurity and low birth weight as potential mediating risk factors.
Methods
We performed a retrospective cohort analysis using data from the 1998–2002 California Birth Cohort to evaluate the odds of fetal death, low birth weight, and prematurity for couples with a mother and father who were categorized as either being of same or different racial groups. Risk of prematurity (birth prior to 37 weeks gestation) and low birth weight (<2500 g) were also tested to see if the model could explain variations among groups.
Results
The analysis included approximately 1.6 million live births and 1749 stillbirths. In the unadjusted model, compared with two white parents, black/black and black/white couples had a significantly higher risk of fetal death. When all demographic, social, biological, genetic, congenital, and procedural risk factors except gestational age and birth weight were included, the odds ratios (OR) were all still significant. Black/black couples had the highest level of risk (OR 2.11, CI 1.77-2.51), followed by black mother/white father couples (OR 2.01, CI 1.16-3.48), and white mother/black father couples (OR 1.84, CI 1.33-2.54). Virtually all of the higher risk of fetal death was explainable by higher rates of low birth weight and prematurity.
Conclusions
Mixed race black and white couples face higher odds of prematurity and low birth weight, which appear to contribute to the substantially higher demonstrated risk for stillbirth. There are likely additional unmeasured factors that influence birth outcomes for mixed race couples.
doi:10.1089/jwh.2009.1561
PMCID: PMC2867623  PMID: 20235877
22.  Assessing and reporting heterogeneity in treatment effects in clinical trials: a proposal 
Trials  2010;11:85.
Mounting evidence suggests that there is frequently considerable variation in the risk of the outcome of interest in clinical trial populations. These differences in risk will often cause clinically important heterogeneity in treatment effects (HTE) across the trial population, such that the balance between treatment risks and benefits may differ substantially between large identifiable patient subgroups; the "average" benefit observed in the summary result may even be non-representative of the treatment effect for a typical patient in the trial. Conventional subgroup analyses, which examine whether specific patient characteristics modify the effects of treatment, are usually unable to detect even large variations in treatment benefit (and harm) across risk groups because they do not account for the fact that patients have multiple characteristics simultaneously that affect the likelihood of treatment benefit. Based upon recent evidence on optimal statistical approaches to assessing HTE, we propose a framework that prioritizes the analysis and reporting of multivariate risk-based HTE and suggests that other subgroup analyses should be explicitly labeled either as primary subgroup analyses (well-motivated by prior evidence and intended to produce clinically actionable results) or secondary (exploratory) subgroup analyses (performed to inform future research). A standardized and transparent approach to HTE assessment and reporting could substantially improve clinical trial utility and interpretability.
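A minimal sketch of the multivariate risk-based HTE analysis the proposal prioritizes: fit a baseline risk model (here on the control arm), stratify the trial by predicted-risk quartile, and compare event rates by arm within strata. The data and model below are synthetic; the paper proposes the framework, not this code.
```python
# Synthetic sketch of risk-based HTE: fit a baseline risk model on the control arm,
# stratify by predicted-risk quartile, and compare event rates by arm within strata.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 8000
X = rng.normal(size=(n, 5))                                # baseline covariates
treated = rng.integers(0, 2, n)                            # 1:1 randomization
base_logit = -2.5 + X @ np.array([0.8, 0.5, 0.3, 0.2, 0.1])
y = rng.binomial(1, 1 / (1 + np.exp(-(base_logit - 0.4 * treated))))

risk_model = LogisticRegression(max_iter=1000).fit(X[treated == 0], y[treated == 0])
predicted_risk = risk_model.predict_proba(X)[:, 1]

df = pd.DataFrame({"risk_quartile": pd.qcut(predicted_risk, 4, labels=False),
                   "treated": treated, "event": y})
rates = df.groupby(["risk_quartile", "treated"])["event"].mean().unstack()
rates["absolute_risk_reduction"] = rates[0] - rates[1]
print(rates)   # absolute benefit concentrates in the higher-risk quartiles
```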
doi:10.1186/1745-6215-11-85
PMCID: PMC2928211  PMID: 20704705
23.  Variation in the net benefit of aggressive cardiovascular risk factor control across the US diabetes population 
Archives of internal medicine  2010;170(12):1037-1044.
Background
Lowering LDL-cholesterol and blood pressure in patients with diabetes can significantly reduce the risk of cardiovascular disease. However, previous studies have not assessed variability in the benefit and harm from pursuing LDL and blood pressure targets.
Methods
Our sample comprised subjects aged 30-75 with diabetes participating in the National Health and Nutrition Examination Survey-III. We used Monte Carlo methods to simulate a treat-to-target strategy, in which patients underwent treatment intensification with the goal of achieving LDL cholesterol and blood pressure targets of 100 mg/dl and 130/80, respectively. Patients received up to 5 titrations of statin therapy and 8 titrations of antihypertensive therapy. Treatment side effects and polypharmacy risks and burdens were incorporated using disutilities. Health outcomes were simulated using a Markov model.
Results
Treating to targets resulted in gains of 1.50 (LDL) and 1.35 (BP) quality-adjusted life years (QALYs) of lifetime treatment-related benefit, which declined to 1.42 and 1.16 QALYs after accounting for treatment-related harms. The majority of the total benefit was limited to the first few steps of medication intensification or to tight control for a limited group of very high risk patients. However, because of treatment-related disutility, intensifying beyond the 1st step (LDL) or 3rd step (BP) resulted in either limited benefit or net harm for patients with below-average risk.
Conclusion
The benefits and harms from aggressive risk factor modification vary widely across the US diabetes population depending on a patient's underlying CVD risk, suggesting a personalized approach could maximize a patient's net benefit from treatment.
doi:10.1001/archinternmed.2010.150
PMCID: PMC2897053  PMID: 20585069
24.  Marriage and Cohabitation Outcomes After Pregnancy Loss 
Pediatrics  2010;125(5):e1202-e1207.
OBJECTIVE
The goal was to evaluate marriage and cohabitation outcomes for couples who experienced a live birth or fetal death at any gestational age.
METHODS
For married and cohabitating women who experienced live births, miscarriages, or stillbirths, we conducted a survival analysis (median follow-up period: 7.8 years), by using data from the National Survey of Family Growth, to examine the association between birth outcomes and subsequent relationship survival. The Cox proportional-hazards models controlled for multiple independent risk factors known to affect relationship outcomes. The main outcome measure was the proportion of intact marriages or cohabitations over time.
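A sketch of the kind of Cox proportional-hazards fit described, using the lifelines package on toy data; the column names, values, and covariates are illustrative only.
```python
# Toy Cox proportional-hazards fit with the lifelines package; columns and values
# are illustrative only, not the National Survey of Family Growth data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "years_followed":     [2.0, 7.8, 5.1, 3.3, 6.4, 1.2, 4.7, 7.0],
    "relationship_ended": [1,   0,   0,   1,   0,   1,   1,   0],
    "miscarriage":        [1,   0,   1,   0,   0,   1,   0,   0],
    "stillbirth":         [0,   0,   0,   0,   1,   0,   1,   0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followed", event_col="relationship_ended")
cph.print_summary()  # hazard ratios for pregnancy loss vs. the live-birth reference
```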
RESULTS
Of 7770 eligible pregnancies, 82% ended in live births, 16% in miscarriages, and 2% in stillbirths. With controlling for known risk factors, women who experienced miscarriages (hazard ratio: 1.22 [95% confidence interval: 1.08–1.38]; P = .001) or stillbirths (hazard ratio: 1.40 [95% confidence interval: 1.10–1.79]; P = .007) had a significantly greater hazard of their relationship ending, compared with women whose pregnancies ended in live births.
CONCLUSIONS
This is the first national study to establish that parental relationships have a higher risk of dissolving after miscarriage or stillbirth, compared with live birth. Given the frequency of pregnancy loss, these findings might have significant societal implications if causally related.
doi:10.1542/peds.2009-3081
PMCID: PMC2883880  PMID: 20368319
miscarriage; stillbirth; pregnancy loss; marriage; cohabitation
25.  Overestimating Outcome Rates: Statistical Estimation When Reliability Is Suboptimal 
Health Services Research  2007;42(4):1718-1738.
Objective
To demonstrate how failure to account for measurement error in an outcome (dependent) variable can lead to significant estimation errors and to illustrate ways to recognize and avoid these errors.
Data Sources
Medical literature and simulation models.
Study Design/Data Collection
Systematic review of the published and unpublished epidemiological literature on the rate of preventable hospital deaths and statistical simulation of potential estimation errors based on data from these studies.
Principal Findings
Most estimates of the rate of preventable deaths in U.S. hospitals rely upon classifying cases using one to three physician reviewers (implicit review). Because this method has low to moderate reliability, estimates based on statistical methods that do not account for error in the measurement of a “preventable death” can result in significant overestimation. For example, relying on a majority rule rating with three reviewers per case (reliability ∼0.45 for the average of three reviewers) can result in a 50–100 percent overestimation compared with an estimate based upon a reliably measured outcome (e.g., by using 50 reviewers per case). However, there are statistical methods that account for measurement error that can produce much more accurate estimates of outcome rates without requiring a large number of measurements per case.
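The inflation mechanism can be demonstrated with a small simulation: a rare outcome classified by a majority of three imperfect reviewers yields a naive observed rate well above the true rate. The sensitivity and specificity below are hypothetical.
```python
# Small simulation of the mechanism: a rare outcome classified by a majority of
# three imperfect reviewers overstates the true rate. Accuracy values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_cases = 200_000
true_rate = 0.05
sensitivity, specificity = 0.75, 0.85     # single-reviewer accuracy (assumed)

truly_preventable = rng.random(n_cases) < true_rate
p_positive_vote = np.where(truly_preventable, sensitivity, 1 - specificity)
votes = rng.binomial(3, p_positive_vote)  # three independent reviewers per case
observed_rate = np.mean(votes >= 2)       # majority rule

print(f"true rate: {true_rate:.1%}; naive majority-of-3 estimate: {observed_rate:.1%}")
```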
Conclusion
The statistical principles discussed in this case study are critically important whenever one seeks to estimate the proportion of cases belonging to specific categories (such as estimating how many patients have inadequate blood pressure control or identifying high-cost or low-quality physicians). When the true outcome rate is low ( < 20 percent), using an outcome measure that has low-to-moderate reliability will generally result in substantially overestimating the proportion of the population having the outcome unless statistical methods that adjust for measurement error are used.
doi:10.1111/j.1475-6773.2006.00661.x
PMCID: PMC1955272  PMID: 17610445
Reliability; statistical estimation; measurement error; medical errors; preventable deaths; adverse events
