Because statin therapy increases the risk of diabetes, the balance of benefit and risk of these agents in primary prevention has become controversial. We undertook an analysis of participants from the JUPITER trial to address the balance of vascular benefits and diabetes hazard of statin use.
In the randomized, double-blind JUPITER trial, 17,603 men and women without prior cardiovascular disease or diabetes were randomly allocated to rosuvastatin 20 mg or placebo and followed for up to 5 years for the trial primary endpoint (myocardial infarction, stroke, hospitalization for unstable angina, arterial revascularization, or cardiovascular death) and the protocol pre-specified secondary endpoints of venous thromboembolism (VTE), all-cause mortality, and incident diabetes. To address the balance of vascular benefits and diabetes hazard, participants were stratified on the basis of having none or at least one of the following major risk factors for developing diabetes: metabolic syndrome, impaired fasting glucose, body mass index >30 kg/m2, or HbA1c >6 percent.
Trial participants with one or more major diabetes risk factor (N=11,508) were at higher risk of developing diabetes; for such individuals, statin allocation was associated with a 39 percent reduction in the primary endpoint (P=0.0001), a 36 percent reduction in VTE (P=0.08), a 17 percent reduction in total mortality (P=0.15), and a 28 percent increase in diabetes (P=0.01). Thus, for those with diabetes risk factors, 93 vascular events or deaths were avoided for every 54 new cases of diabetes diagnosed. For trial participants with no major diabetes risk factor (N=6,095), statin allocation was associated with a 52 percent reduction in the primary endpoint (P=0.0001), a 53 percent reduction in VTE (P=0.05), a 22 percent reduction in total mortality (P=0.08), and no increase in diabetes (HR 0.99, P=0.99). For such individuals, a total of 86 vascular events or deaths were avoided with no new cases of diabetes diagnosed. In an analysis limited to the 486 participants who developed diabetes during follow-up (270 in the rosuvastatin group vs. 216 in the placebo group, P=0.01), the point estimate of cardiovascular risk reduction associated with statin therapy (hazard ratio 0.63) was consistent with that observed for the trial as a whole (hazard ratio 0.56). As compared with placebo, statin allocation accelerated the average time to diagnosis of diabetes by 5.4 weeks.
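The trade-off reported above can be reduced to a single benefit-harm ratio. A minimal sketch using only the event counts quoted in this abstract (illustrative arithmetic, not an analysis from the trial):

```python
# Benefit-harm ratio for statin allocation in JUPITER, using the
# event counts reported above (a back-of-the-envelope sketch only).
def benefit_harm_ratio(events_avoided, new_diabetes_cases):
    """Vascular events or deaths avoided per incident diabetes case."""
    if new_diabetes_cases == 0:
        return float("inf")  # no diabetes hazard observed
    return events_avoided / new_diabetes_cases

# Participants with >=1 major diabetes risk factor: 93 avoided vs. 54 new cases
with_risk = benefit_harm_ratio(93, 54)
# Participants with no major diabetes risk factor: 86 avoided, 0 new cases
without_risk = benefit_harm_ratio(86, 0)

print(round(with_risk, 2))  # 1.72 events avoided per diabetes case
print(without_risk)         # inf
```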
In the JUPITER primary prevention trial, the cardiovascular and mortality benefits of statin therapy exceed the diabetes hazard, including among those at higher risk for developing diabetes.
Acute myocardial infarction; Epidemiology; Health policy and outcome research; Primary prevention; Secondary prevention; Resource utilization
Part D coverage gap entry is associated with a two-fold increased rate of drug discontinuation among beneficiaries now fully responsible for drug costs. Reduced adherence to drugs has been associated with adverse outcomes. We evaluated whether coverage gap entry is associated with risk of death or hospitalization for cardiovascular outcomes.
Prospective cohort study. Beneficiaries entered the study upon reaching the coverage gap spending threshold and were observed until an event, reaching the threshold for catastrophic coverage, or year’s end. Exposed patients were responsible for drug costs in the gap; unexposed patients received financial assistance. We matched 9,436 exposed patients to 9,436 unexposed patients based on propensity score (PS) or high-dimensional propensity score (hdPS).
Medicare Part D drug insurance.
303,978 Medicare beneficiaries aged 65+ in 2006 and 2007 with linked prescription and medical claims who enrolled in stand-alone Part D or retiree drug plans and reached the gap spending threshold.
Rates of death and hospitalization for any of 5 cardiovascular outcomes, including acute coronary syndrome+revascularization (ACS), after reaching the coverage gap spending threshold were compared using Cox proportional hazards models.
In PS-matched analyses, exposed beneficiaries had elevated but non-significant hazards of death (HR=1.25; 95% CI 0.98–1.59) and ACS (HR=1.16; 0.83–1.62) compared with unexposed patients. hdPS-matched analyses minimized residual confounding and confirmed results: death (HR=0.99; 0.78–1.24); ACS (HR=1.07; 0.81–1.41). Exposed beneficiaries were neither more nor less likely than the unexposed to experience the other outcomes.
During the short-term coverage gap period, having no financial assistance to pay for drugs was not associated with an increased risk of death or hospitalization for cardiovascular causes. However, long-term health consequences remain unclear.
Medicare Part D; coverage gap; adverse health outcomes; cardiovascular disease; drug discontinuation
To test whether supplementation with alternate-day vitamin E or daily vitamin C affects the incidence of the diagnosis of age-related macular degeneration (AMD) in a large-scale randomized trial of male physicians.
Randomized, double-masked, placebo-controlled trial.
We included 14,236 apparently healthy United States male physicians aged ≥50 years who did not report a diagnosis of AMD at baseline.
Participants were randomly assigned to receive 400 international units (IU) of vitamin E or placebo on alternate days, and 500 mg of vitamin C or placebo daily. Participants reported new diagnoses of AMD on annual questionnaires and medical record data were collected to confirm the reports.
Main Outcome Measures
Incident diagnosis of AMD responsible for a reduction in best-corrected visual acuity to ≤20/30.
After 8 years of treatment and follow-up, a total of 193 incident cases of visually significant AMD were documented. There were 96 cases in the vitamin E group and 97 in the placebo group (hazard ratio [HR], 1.03; 95% confidence interval [CI], 0.78–1.37). For vitamin C, there were 97 cases in the active group and 96 in the placebo group (HR, 0.99; 95% CI, 0.75–1.31).
In a large-scale, randomized trial of United States male physicians, alternate-day use of 400 IU of vitamin E and/or daily use of 500 mg of vitamin C for 8 years had no appreciable beneficial or harmful effect on risk of incident diagnosis of AMD.
Nonexperimental studies of treatment effectiveness provide an important complement to randomized trials by including heterogeneous populations. Propensity scores (PS) are common in these studies, but may not adequately capture changes in channeling experienced by innovative treatments. We use calendar time-specific (CTS) PSs to examine the effect of oxaliplatin during dissemination from off-label to widespread use.
Stage III colon cancer patients aged 65+ initiating chemotherapy between 2003 and 2006 were examined using cancer registry data linked with Medicare claims. Two PS approaches for receipt of oxaliplatin vs. 5-fluorouracil were constructed using logistic models with key components of age, sex, substage, grade, census-level income, and comorbidities: 1) a conventional, year-adjusted PS and 2) a CTS PS constructed and matched separately within 1-year intervals, then combined. We compared PS-matched hazard ratios (HR) for mortality using Cox models.
Oxaliplatin use increased significantly; 8%(n=86) of patients received it in the first time period vs. 52%(n=386) in the last. Channeling by comorbidities, income, and age appeared to change over time. The CTS PS improved covariate balance within calendar time strata and yielded an attenuated estimated benefit of oxaliplatin (HR=0.75) compared with the conventional PS (HR=0.69).
In settings where prescribing patterns have changed and calendar time acts as a confounder, a CTS PS can characterize changes in treatment choices, and estimating separate PSs within specific calendar-time periods may enhance confounding control. To increase the validity of comparative effectiveness research (CER), researchers should carefully consider drug lifecycles and the effects of innovative treatment dissemination over time.
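The calendar-time-specific approach above amounts to matching within time strata. The sketch below assumes propensity scores have already been estimated separately within each 1-year stratum (e.g., by stratum-specific logistic models); the greedy caliper matcher and all inputs are illustrative, not the authors' implementation:

```python
import numpy as np

def match_within_strata(ps, treated, stratum, caliper=0.05):
    """Greedy 1:1 nearest-neighbor PS matching performed separately
    within each calendar-time stratum; returns (treated, control) pairs."""
    pairs = []
    for s in np.unique(stratum):
        idx = np.where(stratum == s)[0]
        t_idx = [i for i in idx if treated[i]]
        c_idx = [i for i in idx if not treated[i]]
        used = set()
        for t in sorted(t_idx, key=lambda i: ps[i]):
            best, best_d = None, caliper
            for c in c_idx:
                if c in used:
                    continue
                d = abs(ps[t] - ps[c])
                if d <= best_d:
                    best, best_d = c, d
            if best is not None:
                used.add(best)
                pairs.append((int(t), int(best)))
    return pairs

# Toy example: two calendar-time strata, scores estimated per stratum
ps = np.array([0.20, 0.21, 0.80, 0.50, 0.52, 0.90])
treated = np.array([True, False, False, True, False, False])
stratum = np.array([0, 0, 0, 1, 1, 1])
print(match_within_strata(ps, treated, stratum))  # [(0, 1), (3, 4)]
```

Matching within strata and then pooling the pairs, as in the CTS approach, keeps comparisons between patients treated under the same prescribing regime.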
A correctly-specified propensity score (PS) estimated in a cohort (“cohort PS”) should, in expectation, remain valid in a subgroup population. We sought to determine whether a cohort PS can be validly applied to subgroup analyses and thus add efficiency to studies with many subgroups or restricted data. In each of 3 cohort studies we estimated a cohort PS, defined 5 subgroups, and then estimated subgroup-specific PSs. We compared differences in treatment effect estimates for subgroup analyses adjusted by cohort PSs versus subgroup-specific PSs. We then simulated, 10 million times, a population with known characteristics of confounding, subgroup size, treatment interactions, and treatment effect, and again assessed differences in point estimates. We observed that point estimates in most subgroups were substantially similar with the two methods of adjustment. In simulations, the effect estimates differed by a median of 3.4% (interquartile [IQ] range 1.3% to 10.0%). The IQ range exceeded 10% only in cases where the subgroup had <1000 patients or few outcome events. Our empirical and simulation results indicate that using a cohort PS in subgroup analyses is feasible, particularly in larger subgroups.
Propensity Scores; Confounding Factors (Epidemiology); Multicenter Study [Publication Type]; Epidemiologic Methods; Effect Modifiers (Epidemiology); Comparative Effectiveness Research
Propensity score calibration (PSC) can be used to adjust for unmeasured confounders using a cross-sectional validation study that lacks information on the disease outcome (Y), under a strong surrogacy assumption. Using directed acyclic graphs and path analysis, the authors developed a formula to predict the presence and magnitude of the bias of PSC in the simplest setting of a binary exposure (T) and 1 confounder (X) that are observed in the main study and 1 confounder (C) that is observed in the validation study only. PSC bias is predicted on the basis of parameters that can be estimated from the data and a single unidentifiable parameter, the relative risk (RR) associated with C (RRCY). The authors simulated 1,000 cohort studies each with a Poisson-distributed outcome Y, varying parameter values over a wide range. When using the true parameter for RRCY, the formula predicts PSC bias almost perfectly in this simple setting (correlation with observed bias over 24 scenarios assessed: r = 0.998). The authors conclude that the bias from PSC observed in certain scenarios can be estimated from the imbalance in C between treated and untreated persons, after adjustment for X, in the validation study and assuming a range of plausible values for the unidentifiable RRCY.
bias (epidemiology); confounding factors (epidemiology); epidemiologic methods; path analysis; propensity score; propensity score calibration; research design
Prospective medical product monitoring is intended to alert stakeholders about whether and when safety problems are identifiable in a continuous stream of longitudinal electronic healthcare data. In comparing the performance of methods to generate these alerts, three factors must be considered: (1) accuracy in alerting; (2) timeliness of alerting; and (3) the trade-offs between the costs of false negative and false positive alerting. Using illustrative examples, we show that traditional scenario-based measures of accuracy, such as sensitivity and specificity, which classify only at the end of monitoring, fail to appreciate timeliness of alerting. We propose an event-based approach that classifies exposed outcomes according to whether or not a prior alert was generated. We provide event-based extensions to existing metrics and discuss why these metrics are limited in this setting because of inherent tradeoffs that they impose between the relative consequences of false positives versus false negatives. We provide an expression that summarizes event-based sensitivity (the proportion of exposed events that occur after alerting among all exposed events in scenarios with true safety issues) and event-based specificity (the proportion of exposed events that occur in the absence of alerting among all exposed events in scenarios with no true safety issues) by taking an average weighted by the relative costs of false positive and false negative alerting. This approach explicitly accounts for accuracy in alerting, timeliness in alerting, and the trade-offs between the costs of false negative and false positive alerting. Subsequent work will involve applying the metric to simulated data.
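The weighted summary described above might be parameterized as a linear cost-weighted average of the two event-based quantities; the function below is an illustrative sketch of that idea, not the paper's exact expression:

```python
# Sketch of a cost-weighted, event-based summary metric. Failing to
# alert on a true safety issue is the false negative, so its cost
# weights event-based sensitivity; a spurious alert is the false
# positive, so its cost weights event-based specificity.
def event_based_summary(sens, spec, fn_cost=1.0, fp_cost=1.0):
    """Average of event-based sensitivity and specificity, weighted
    by the relative costs of false-negative and false-positive alerting."""
    return (fn_cost * sens + fp_cost * spec) / (fn_cost + fp_cost)

# Equal costs reduce to a simple average:
print(round(event_based_summary(0.80, 0.90), 3))             # 0.85
# When false negatives cost 3x as much, sensitivity dominates:
print(round(event_based_summary(0.80, 0.90, fn_cost=3), 3))  # 0.825
```

Varying the cost ratio traces out how the preferred alerting algorithm changes as stakeholders weight missed safety signals against false alarms.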
medical product monitoring; active surveillance; prospective safety monitoring; performance metrics; time-to-alerting; operating characteristics
In vision research the eye is often the unit of measurement for outcomes, and frequently also for covariates, and measurements in the two eyes of the same person are often strongly but far from perfectly correlated. Advances have occurred in the development and accessibility of analytic approaches to evaluate determinants of eye-specific outcomes using information from both eyes of some subjects.
We illustrate available regression approaches to analyze correlated outcomes from both eyes in data sets with both eye- and subject-specific exposures and potential confounding variables. We consider cross-sectional and longitudinal study designs, and discrete, continuous, and time-to-event outcomes.
Across a range of study designs and measurement scales for the outcome variable, we show the under-estimation of P-values and widths of confidence intervals that occurs when the correlation between paired eyes in a person is ignored, and the reduced precision that occurs in separate analyses of right or left eyes, or in analyses of persons rather than eyes. By comparison, regression models with the eye as the unit of analysis and appropriate consideration of the correlation between paired outcomes generally offer maximal use of available data, enhanced interpretability of covariate-outcome associations, and efficient use of information from subjects who contribute only one eye to analyses.
For many studies in vision research, the now widely available regression models that appropriately treat the eye as the unit of analysis offer the best analytic approach.
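The precision consequences of ignoring the within-person correlation can be demonstrated with a cluster-robust ("sandwich") variance estimator. The numpy sketch below uses simulated data in the extreme case where the two eyes of each person are identical, so treating eyes as independent understates the variance by exactly a factor of two:

```python
import numpy as np

def ols_with_cluster_se(X, y, cluster):
    """OLS estimates with HC0 (independence) and cluster-robust sandwich
    standard errors; clusters index the person, rows index the eyes."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    e = y - X @ beta
    # HC0 "meat": treats every eye as an independent observation
    meat_hc0 = (X * (e**2)[:, None]).T @ X
    # Cluster meat: sum score contributions within each person first
    meat_cl = np.zeros_like(meat_hc0)
    for g in np.unique(cluster):
        s = X[cluster == g].T @ e[cluster == g]
        meat_cl += np.outer(s, s)
    var_hc0 = XtX_inv @ meat_hc0 @ XtX_inv
    var_cl = XtX_inv @ meat_cl @ XtX_inv
    return beta, np.sqrt(np.diag(var_hc0)), np.sqrt(np.diag(var_cl))

# Extreme case: the two eyes of each person are identical, so each
# person contributes one effective observation, not two.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y_person = 1.0 + 0.5 * x + rng.normal(size=n)
X = np.column_stack([np.ones(2 * n), np.repeat(x, 2)])
y = np.repeat(y_person, 2)
cluster = np.repeat(np.arange(n), 2)
beta, se_hc0, se_cl = ols_with_cluster_se(X, y, cluster)
# Cluster-robust variance is exactly twice the independence variance
# here: ignoring the pairing halves the variance and shrinks P-values.
print(np.allclose(se_cl, np.sqrt(2) * se_hc0))  # True
```

Real eyes are less than perfectly correlated, so the understatement is milder, but the direction matches the P-value and confidence-interval underestimation described above.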
Medicare Part D improved access to cardiovascular medications. Increased cardiovascular drug utilization and resulting health improvements could be derailed when beneficiaries enter the coverage gap and must pay 100% of drug costs. The coverage gap remains the subject of Congressional debate; evidence regarding its impact on cardiovascular drug use and health outcomes is needed.
Methods and Results
We followed 122,255 Medicare beneficiaries with cardiovascular conditions and linked prescription and medical claims who reached the coverage gap spending threshold in 2006 or 2007. Beneficiaries entered the study upon reaching the threshold and were followed until an event, the catastrophic coverage spending threshold, or year’s end. We matched 3,980 beneficiaries who reached the threshold and received no financial assistance (exposed) to 3,980 with financial assistance during the gap period (unexposed) using propensity score (PS) and high-dimensional PS (hdPS) approaches. We compared rates of cardiovascular drug discontinuation, drug switching, and death or hospitalization for acute coronary syndrome+revascularization (ACS), congestive heart failure, or atrial fibrillation. In PS-matched analyses, exposed beneficiaries were more likely to discontinue cardiovascular drugs (HR=1.57; 95% CI, 1.39–1.79; RD=13.76 drugs/100 person-years; 95% CI, 10.99–16.54) but neither more nor less likely to switch. There were no significant differences in rates of death (PS-matched HR=1.23; 0.89–1.71) or other outcomes.
Part D beneficiaries with cardiovascular conditions with no financial assistance during the coverage gap were at increased risk for cardiovascular drug discontinuation. However, the impact of this difference on health outcomes is not clear.
epidemiology; Part D coverage gap; cardiovascular drugs; cardiovascular morbidity and mortality
Under Medicare Part D, patient characteristics influence plan choice, which in turn influences Part D coverage gap entry. We compared pre-defined propensity score (PS) and high-dimensional propensity score (hdPS) approaches to address such ‘confounding by health system use’ in assessing whether coverage gap entry is associated with cardiovascular events or death.
We followed 243,079 Medicare patients aged 65+ with linked prescription, medical, and plan-specific data in 2005–2007. Patients reached the coverage gap and were followed until an event or year’s end. Exposed patients were responsible for drug costs in the gap; unexposed patients (patients with non-Part D drug insurance and Part D patients receiving a low-income subsidy (LIS)) received financial assistance. Exposed patients were 1:1 PS- or hdPS-matched to unexposed patients. The PS model included 52 predefined covariates; the hdPS model added 400 empirically identified covariates. Hazard ratios for death and any of five cardiovascular outcomes were compared. In sensitivity analyses, we explored residual confounding using only LIS patients in the unexposed group.
In unadjusted analyses, exposed patients had no greater hazard of death (HR=1.00; 95% CI, 0.84–1.20) or other outcomes. PS-matched (HR=1.29; 0.99–1.66) and hdPS-matched (HR=1.11; 0.86–1.42) analyses showed elevated but non-significant hazards of death. In sensitivity analyses, the PS analysis showed a protective effect (HR=0.78; 0.61–0.98), while the hdPS analysis (HR=1.06; 0.82–1.37) confirmed the main hdPS findings.
Although the PS-matched analysis suggested elevated though non-significant hazards of death among patients with no financial assistance during the gap, the hdPS analysis produced lower estimates that were stable across sensitivity analyses.
confounding; health services use; propensity score adjustment; high-dimensional propensity score; health policy
Usefulness of propensity scores and regression models to balance potential confounders at treatment initiation may be limited for newly introduced therapies with evolving use patterns.
To consider settings in which the disease risk score has theoretical advantages as a balancing score in comparative effectiveness research, because of stability of disease risk and the availability of ample historical data on outcomes in people treated before introduction of the new therapy.
We review the indications for and balancing properties of disease risk scores in the setting of evolving therapies, and discuss alternative approaches for estimation. We illustrate development of a disease risk score in the context of the introduction of atorvastatin and the use of high-dose statin therapy beginning in 1997, based on data from 5,668 older survivors of myocardial infarction who filled a statin prescription within 30 days after discharge from 1995 until 2004. Theoretical considerations suggested development of a disease risk score among non-users of atorvastatin and high-dose statins during the period 1995–1997.
Observed risk of events increased from 11% to 35% across quintiles of the disease risk score, which had a C-statistic of 0.71. The score allowed control of many potential confounders even during early follow-up, when study endpoints were few.
Balancing on a disease risk score offers an attractive alternative to a propensity score in some settings such as newly marketed drugs and provides an important axis for evaluation of potential effect modification. Joint consideration of propensity and disease risk scores may be valuable.
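The disease risk score workflow described above can be sketched as: fit an outcome model in a historical cohort untreated with the new therapy, score the full cohort, and verify that observed risk rises across score quintiles. The code below uses simulated covariates and a hand-rolled Newton-Raphson logistic fit purely for illustration; it is not the statin data or the authors' model:

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Minimal logistic regression by Newton-Raphson (no regularization)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        H = (X * (p * (1 - p))[:, None]).T @ X  # observed information
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
true_beta = np.array([-2.0, 0.8, 0.5, -0.3])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))

# "Historical non-users" = first half of the cohort; their model
# then scores everyone, including later initiators of the new therapy.
beta = fit_logistic(X[: n // 2], y[: n // 2])
score = X @ beta
edges = np.quantile(score, [0.2, 0.4, 0.6, 0.8])
quintile = np.searchsorted(edges, score)
rates = [y[quintile == q].mean() for q in range(5)]
print([round(r, 3) for r in rates])  # event risk rises across quintiles
```

A monotone gradient in observed risk across quintiles, as reported above for the statin cohort, is the basic face-validity check for such a score.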
To determine whether residual risk after high-dose statin therapy among primary prevention individuals with low LDL cholesterol (LDL-C) is related to on-treatment apolipoprotein B, non-HDL cholesterol (non-HDL-C), or lipid ratios, and how these measures compare with on-treatment LDL-C.
Guidelines focus on LDL-C as the primary target of therapy, yet residual risk for cardiovascular disease (CVD) among statin-treated individuals remains high and not fully explained.
Participants in the randomized placebo-controlled JUPITER trial were adults without diabetes or CVD, with baseline LDL-C<130 mg/dL, high-sensitivity C-reactive protein ≥2 mg/L, and triglycerides <500 mg/dL. Individuals allocated to rosuvastatin 20 mg daily with baseline and on-treatment lipids and lipoproteins were examined in relation to the primary endpoint of incident CVD (non-fatal myocardial infarction or stroke, hospitalization for unstable angina, arterial revascularization, or cardiovascular death).
Using separate multivariate Cox models, statistically significant associations of a similar magnitude with residual risk of CVD were found for on-treatment LDL-C, non-HDL-C, apolipoprotein B, total/HDL-C, LDL-C/HDL-C, and apolipoprotein B/A-1. The respective adjusted standardized hazard ratios (95% confidence intervals) for each of these measures were 1.31 (1.09–1.56), 1.25 (1.04–1.50), 1.27 (1.06–1.53), 1.22 (1.03–1.44), 1.29 (1.09–1.52), and 1.27 (1.09–1.49). The overall residual risk and the risk associated with these measures decreased among participants achieving on-treatment LDL-C ≤70 mg/dL, on-treatment non-HDL-C ≤100 mg/dL, or on-treatment apolipoprotein B ≤80 mg/dL. By contrast, on-treatment triglycerides showed no association with CVD.
In this primary prevention trial of non-diabetic individuals with low LDL-C, on-treatment LDL-C was as valuable as non-HDL-C, apolipoprotein B, or ratios in predicting residual risk.
apolipoproteins; lipids; lipoproteins; primary prevention; trials
Background and Purpose
Light-to-moderate alcohol consumption has been consistently associated with lower risk of heart disease, but data for stroke are less certain. A lower risk of stroke with light-to-moderate alcohol intake has been suggested, but the dose response among women remains uncertain and data in this subgroup have been sparse.
A total of 83,578 female participants in the Nurses’ Health Study who were free of diagnosed cardiovascular disease and cancer at baseline were followed from 1980 to 2006. Data on self-reported alcohol consumption were assessed at baseline and updated approximately every 4 years, while stroke and potential confounder data were updated at baseline and biennially. Strokes were classified according to the National Survey of Stroke criteria.
We observed 2,171 incident strokes over 1,695,324 person-years. In multivariable-adjusted analyses, compared with abstainers, the relative risks of stroke were 0.83 (95% CI, 0.75–0.92) for <5 g/day, 0.79 (95% CI, 0.70–0.90) for 5–14.9 g/day, 0.87 (95% CI, 0.72–1.05) for 15–29.9 g/day, and 1.06 (95% CI, 0.86–1.30) for 30–45 g/day. Results were similar for ischemic and hemorrhagic stroke.
Light-to-moderate alcohol consumption was associated with a lower risk of total stroke. In this population of women with modest alcohol consumption, an elevated risk of total stroke related to alcohol was not observed.
Risk factors for Stroke; alcohol; ischemic stroke; subarachnoid hemorrhage
The effects of clinical-trial funding on the interpretation of trial results are poorly understood. We examined how such support affects physicians’ reactions to trials with a high, medium, or low level of methodologic rigor.
We presented 503 board-certified internists with abstracts that we designed describing clinical trials of three hypothetical drugs. The trials had high, medium, or low methodologic rigor, and each report included one of three support disclosures: funding from a pharmaceutical company, NIH funding, or none. For both factors studied (rigor and funding), one of the three possible variations was randomly selected for inclusion in the abstracts. Follow-up questions assessed the physicians’ impressions of the trials’ rigor, their confidence in the results, and their willingness to prescribe the drugs.
The 269 respondents (53.5% response rate) perceived the level of study rigor accurately. Physicians reported that they would be less willing to prescribe drugs tested in low-rigor trials than those tested in medium-rigor trials (odds ratio, 0.64; 95% confidence interval [CI], 0.46 to 0.89; P = 0.008) and would be more willing to prescribe drugs tested in high-rigor trials than those tested in medium-rigor trials (odds ratio, 3.07; 95% CI, 2.18 to 4.32; P<0.001). Disclosure of industry funding, as compared with no disclosure of funding, led physicians to downgrade the rigor of a trial (odds ratio, 0.63; 95% CI, 0.46 to 0.87; P = 0.006), their confidence in the results (odds ratio, 0.71; 95% CI, 0.51 to 0.98; P = 0.04), and their willingness to prescribe the hypothetical drugs (odds ratio, 0.68; 95% CI, 0.49 to 0.94; P = 0.02). Physicians were half as willing to prescribe drugs studied in industry-funded trials as they were to prescribe drugs studied in NIH-funded trials (odds ratio, 0.52; 95% CI, 0.37 to 0.71; P<0.001). These effects were consistent across all levels of methodologic rigor.
Physicians discriminate among trials of varying degrees of rigor, but industry sponsorship negatively influences their perception of methodologic quality and reduces their willingness to believe and act on trial findings, independently of the trial’s quality. These effects may influence the translation of clinical research into practice.
Active medical-product-safety surveillance systems are being developed to monitor many products and outcomes simultaneously in routinely collected longitudinal electronic healthcare data. These systems will rely on algorithms to generate alerts about potential safety concerns.
We compared the performance of five classes of algorithms in simulated data using a sequential matched-cohort framework, and applied the results to two electronic healthcare databases to replicate monitoring of cerivastatin-induced rhabdomyolysis. We generated 600,000 simulated scenarios with varying expected event frequency in the unexposed, alerting threshold, and outcome risk in the exposed, and compared the alerting algorithms in each scenario type using an event-based performance metric.
We observed substantial variation in algorithm performance across the groups of scenarios. Relative performance varied by the event frequency and by user-defined preferences for sensitivity versus specificity. Type I error-based statistical testing procedures achieved higher event-based performance than other approaches in scenarios with few events, whereas statistical process control and disproportionality measures performed relatively better with frequent events. In the empirical data, we observed 6 cases of rhabdomyolysis among 4,294 person-years of follow-up, with all events occurring among cerivastatin-treated patients. All selected algorithms generated alerts before the drug was withdrawn from the market.
For active medical-product-safety monitoring in a sequential matched cohort framework, no single algorithm performed best in all scenarios. Alerting algorithm selection should be tailored to particular features of a product-outcome pair, including the expected event frequencies and trade-offs between false-positive and false-negative alerting.
Venous thromboembolism (VTE) and cardiovascular disease (CVD) share some risk factors, including obesity, yet it is unclear how dietary patterns associated with reduced risk of CVD relate to risk of VTE.
To compare relationships of adherence to a DASH-style diet with risks of CVD and VTE.
We confirmed by medical record review 1094 incident cases of CVD and 675 incident VTEs during mean follow-up of 14.6 years in 34,827 initially healthy participants in the Women’s Health Study who completed at baseline a 133-item food frequency questionnaire scored for adherence to a DASH diet. We compared estimated associations of dietary patterns with CVD and VTE from proportional hazards models in a competing risk framework.
Initial analyses adjusted for age, energy intake, and randomized treatments found 36–41% reduced hazards of CVD among women in the top two quintiles of DASH score relative to those in the bottom quintile (Ptrend<0.001). In multivariate analysis, women in the top two quintiles had 12–23% reduced hazards of CVD relative to women in the bottom quintile (Ptrend=0.04). Analyses restricted to coronary events found more variable 10–33% reduced hazards in the top two quintiles (Ptrend=0.09). In contrast, higher DASH scores were unrelated to risk of VTE with a 1% reduced hazard for the top vs. bottom quintile (Ptrend=0.95).
An apparently strong association of adherence to the DASH diet with incidence of CVD was attenuated upon control for confounding variables. Adherence to the DASH diet was not associated with risk of VTE in women.
Increasing evidence supports a role for inflammation in promoting atrial fibrillation (AF) and statins have anti-inflammatory effects that may be relevant for the prevention of AF. However, studies of statin therapy and incident AF have yielded mixed results and not focused on individuals with an underlying pro-inflammatory response. We studied whether high-sensitivity C-reactive protein is associated with incident AF and whether treatment with rosuvastatin is associated with a lower incidence of AF compared with placebo.
Methods and Results
We randomized men and women with LDL cholesterol <130 mg/dL and high-sensitivity C-reactive protein ≥2 mg/L to receive either rosuvastatin 20 mg daily or placebo. Atrial fibrillation was determined from treatment-blind adverse event reports. Among 17,120 participants without prior history of arrhythmia, each increasing tertile of baseline high-sensitivity C-reactive protein was associated with a 36% increase in the risk of developing AF (95% CI: 1.16–1.60; P-trend < 0.01). Allocation to rosuvastatin when compared with placebo was associated with a 27% reduction in the relative risk of developing AF during the trial period; specifically, AF was reported among 138 participants in the placebo group and 100 in the rosuvastatin group (incidence rate 0.78 vs. 0.56/100 person-years, HR: 0.73, 95% CI: 0.56–0.94, P = 0.01). The exclusion of participants who developed a major cardiovascular event prior to the report of AF yielded similar results.
Within the JUPITER trial cohort of individuals selected for underlying inflammation, increasing levels of high-sensitivity C-reactive protein were associated with an increased risk of incident AF and random allocation to rosuvastatin significantly reduced that risk.
C-reactive protein; Atrial fibrillation; Statins
Medicare Part D’s implementation improved access to and affordability of prescription drugs for the elderly without prior drug insurance. Effects for specific drugs and drug classes are less well understood. We assessed Part D’s impact on antipsychotic medication (APM) utilization and out-of-pocket costs among elderly without prior drug insurance. Retail pharmacy claims from 3 nationwide pharmacy chains were used to analyze two time-series designs: 1) a Policy Model, to obtain a policymaker’s perspective: what was the overall impact of Part D on APM use and costs among elderly without drug insurance in 2005 with the opportunity to enroll?, and 2) a Clinical Model, to obtain a clinician’s perspective: what would happen to elderly without drug insurance in 2005 who did enroll in Part D—would they be able to get APMs? At what cost? Subgroup analyses among Part D enrollees evaluated potentially different effects for patients who received a subsidy and patients who used anti-dementia drugs. In the Policy Model, Part D implementation was associated with a 5% increase in APM use and a 37% reduction in out-of-pocket costs, suggesting a modest need for APMs among all previously uninsured elderly. Patients who did enroll in Part D (Clinical Model) had a 97% increase in APM use and a 62% decrease in out-of-pocket costs, suggesting that patients who needed APMs were able to access them at low cost through the Part D program. Part D implementation was associated with increased use and affordability of APMs for elderly without prior drug insurance.
Recent theoretical studies have shown that conditioning on an instrumental variable (IV), a variable that is associated with exposure but associated with outcome only through exposure, can increase both the bias and the variance of exposure effect estimates. Although these findings have obvious implications when an IV is known, their meaning remains unclear in the more common scenario in which investigators are uncertain whether a measured covariate meets the criteria for an IV or is instead a confounder. The authors present results from two simulation studies designed to provide insight into the problem of conditioning on potential IVs in routine epidemiologic practice. The simulations explored the effects of conditioning on IVs, near-IVs (predictors of exposure that are weakly associated with outcome), and confounders on the bias and variance of a binary exposure effect estimate. The results indicate that effect estimates that are conditional on a perfect IV or near-IV may have larger bias and variance than the unconditional estimate. However, in most scenarios considered, the increases in error due to conditioning were small compared with the total estimation error. In these cases, minimizing unmeasured confounding should be the priority when selecting variables for adjustment, even at the risk of conditioning on IVs.
bias (epidemiology); confounding factors (epidemiology); epidemiologic methods; instrumental variable; precision; simulation; variable selection
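The bias-amplification mechanism described in the abstract above can be illustrated with a toy Monte Carlo sketch. This is not the authors' simulation code, and all data-generating parameters below are invented for illustration: a binary exposure is driven by an instrument Z and an unmeasured confounder U, the outcome depends on exposure and U but not on Z, and conditioning on Z inflates both the bias and the spread of the exposure-effect estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_once(n=2000, true_effect=1.0):
    z = rng.normal(size=n)                 # instrument: affects exposure only
    u = rng.normal(size=n)                 # unmeasured confounder
    # binary exposure driven by both the instrument and the confounder
    x = (0.8 * z + 0.8 * u + rng.normal(size=n) > 0).astype(float)
    # outcome depends on exposure and confounder, but not on the instrument
    y = true_effect * x + u + rng.normal(size=n)
    return z, x, y

def exposure_coef(y, covariates):
    """OLS coefficient on the first covariate (the exposure)."""
    X = np.column_stack([np.ones_like(y)] + covariates)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

unconditional, iv_conditioned = [], []
for _ in range(500):
    z, x, y = simulate_once()
    unconditional.append(exposure_coef(y, [x]))
    iv_conditioned.append(exposure_coef(y, [x, z]))   # conditioning on the IV

for name, est in [("unconditional", unconditional),
                  ("IV-conditioned", iv_conditioned)]:
    est = np.asarray(est)
    print(f"{name:>15}: bias={est.mean() - 1.0:+.3f}, sd={est.std():.3f}")
```

Because U is unmeasured, both estimates are confounded; the point is that residualizing the exposure on Z removes only exposure variance unrelated to U, so the confounded share of the remaining variance grows and both bias and sampling variability increase, matching the perfect-IV scenario the abstract describes.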
Adjusting for laboratory test results, when they can be added to administrative claims data, may improve confounding control in studies of treatment effects. However, missing values can arise through several mechanisms.
We studied the relationship between the availability of outpatient lab test results, the lab values themselves, and patient and system characteristics in a large healthcare database, using LDL, HDL, and HbA1c in a cohort of initiators of statins or Vytorin (ezetimibe and simvastatin) as examples.
Among 703,484 patients, 68% had at least one lab test performed in the 6 months before treatment. Performing an LDL test was negatively associated with several patient characteristics, including recent hospitalization (OR = 0.32, 95% CI: 0.29–0.34), MI (OR = 0.77, 95% CI: 0.69–0.85), and carotid revascularization (OR = 0.37, 95% CI: 0.25–0.53). Patient demographics, diagnoses, and procedures predicted well who would have a lab test performed (AUC = 0.89 to 0.93). Among those with test results available, however, claims data explained only 14% of the variation in the lab values themselves.
In a claims database linked with outpatient lab test results, we found that lab tests are performed selectively, in line with current treatment guidelines. The poor ability to predict lab values and the high proportion of missing results reduce the added value of lab tests for effectiveness research in this setting.
Insurance claims data; Laboratory test results; Serum lipid levels; Confounding; Imputation; Pharmacoepidemiology; Lipid lowering therapy; Statin; Ezetimibe
Prospective data regarding risk factors for peripheral artery disease (PAD) are sparse, especially among women, and the relative contributions of systolic versus diastolic blood pressure control to incident PAD have not been well studied. We evaluated the association of self-reported blood pressure control with incident symptomatic PAD in middle-aged and older women.
We examined the relationship between reported hypertension and incident confirmed symptomatic PAD (n=178) in 39,260 female health professionals aged ≥45 years without known vascular disease at baseline. Median follow-up was 13.3 years. Women were grouped according to the presence of reported isolated diastolic hypertension (IDH), isolated systolic hypertension (ISH), or combined systolic-diastolic hypertension (SDH), using cut-points of 90 and 140 mmHg for diastolic and systolic blood pressure, respectively. SBP and DBP were modeled as continuous and categorical exposures. Multivariable-adjusted hazard ratios (HRs), including adjustment for cardiovascular risk factors, were derived from Cox proportional hazards models.
Adjusted HRs compared to women without reported hypertension were 1.0 (0.4–2.8) for IDH, 2.0 (1.3–3.1) for ISH, and 2.8 (1.8–4.5) for SDH. There was a 43% increased adjusted risk per 10 mmHg of reported SBP (95% CI 27–62%) and a gradient in risk according to SBP category (<120, 120–139, 140–159, and ≥160 mmHg); HRs were 1.0, 2.3, 4.3, and 6.6 (p-trend<0.001), respectively. Reported DBP, while individually predictive in models excluding SBP, was not predictive after adjustment for SBP.
These prospective data suggest a strong prognostic role for uncontrolled blood pressure and, particularly, uncontrolled systolic blood pressure in the development of peripheral atherosclerosis in women.
hypertension; peripheral artery disease; women
While elevated blood pressure (BP) has been consistently associated with incident congestive heart failure (CHF), much less is known about the effect of BP change. We therefore assessed the association of BP change over time with subsequent risk of CHF.
4655 participants ≥65 years old from the prospective Established Populations for Epidemiologic Studies of the Elderly program who were alive and free of CHF after 6 years of follow-up were included. Categories of sustained high BP, sustained low BP, BP progression and BP regression were defined according to BP differences between study entry and 6 years of follow-up. The primary endpoint was incident CHF subsequent to the 6-year examination.
During 4.3 years of follow-up after the 6-year examination, 642 events occurred. The hazard ratio (HR) for systolic BP ≥160 mmHg compared to <120 mmHg at 6 years was 1.39 (95% confidence interval (CI) 1.04–1.86). Conversely, the lowest diastolic BP category at 6 years was associated with an increased risk of incident CHF (HR 1.42, 95% CI 1.18–1.71, for <70 versus 70–79 mmHg). Systolic and diastolic BP were better predictors than pulse pressure. The HRs for incident CHF associated with sustained high systolic BP ≥160 mmHg and with systolic BP progression were 1.35 (95% CI 0.97–1.89) and 1.45 (95% CI 1.14–1.85), respectively. Significant associations were also found in those with sustained low diastolic BP or diastolic BP regression (HRs 1.42 (95% CI 1.11–1.83) and 1.45 (95% CI 1.19–1.76), respectively).
While persistently elevated systolic BP and systolic BP progression were strong predictors of CHF in the elderly, inverse associations were found with regard to diastolic BP. Systolic and diastolic BP were better predictors of CHF than pulse pressure.
Hypertension; Blood pressure; Pulse pressure; Heart failure; Mortality
To develop and validate a single numeric comorbidity score for predicting short- and long-term mortality by combining conditions in the Charlson and Elixhauser measures.
In a cohort of 120,679 Pennsylvania Medicare enrollees with drug coverage through a pharmacy assistance program, we developed a single numeric comorbidity score for predicting 1-year mortality, by combining the conditions in the Charlson and Elixhauser measures. We externally validated the combined score in a cohort of New Jersey Medicare enrollees, by comparing its performance to that of both component scores in predicting 1-year mortality, as well as 180-, 90-, and 30-day mortality.
C-statistics from logistic regression models including the combined score were higher than corresponding c-statistics from models including either the Romano implementation of the Charlson Index or the single numeric version of the Elixhauser system; c-statistics were 0.860 (95% confidence interval [CI]: 0.854, 0.866), 0.839 (95% CI: 0.836, 0.849), and 0.836 (95% CI: 0.834, 0.847), respectively, for the 30-day mortality outcome. The combined comorbidity score also yielded positive values for two recently proposed measures of reclassification.
In similar populations and data settings, the combined score may offer improvements in comorbidity summarization over existing scores.
comorbidity; bias; claims data; Medicare; health services research; mortality
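The c-statistics compared in the abstract above are concordance probabilities: the chance that a randomly chosen patient who died received a higher predicted risk than a randomly chosen survivor, with ties counted as half-concordant. A minimal sketch of that computation follows; the scores and outcomes below are made up for illustration and are not the study's code or data.

```python
import numpy as np

def c_statistic(risk_score, died):
    """Probability that a random death outranks a random survivor
    on the risk score; tied scores count as half-concordant."""
    risk_score = np.asarray(risk_score, dtype=float)
    died = np.asarray(died, dtype=bool)
    cases = risk_score[died]                     # patients who died
    controls = risk_score[~died]                 # patients who survived
    diff = cases[:, None] - controls[None, :]    # all case-control pairs
    return float(np.mean(diff > 0) + 0.5 * np.mean(diff == 0))

# hypothetical comorbidity scores and 1-year death indicators
scores = [5, 2, 7, 1, 3, 6]
died   = [1, 0, 1, 0, 1, 0]
print(c_statistic(scores, died))   # 7 of 9 case-control pairs are concordant
```

A combined score that orders deaths above survivors more often than either component score alone would show exactly the kind of c-statistic gain the abstract reports (0.860 versus 0.839 and 0.836).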
To examine whether intake of ω-3 fatty acids and fish affect incidence of age-related macular degeneration (AMD) in women.
A detailed food-frequency questionnaire was administered at baseline among 39,876 female health professionals (mean [SD] age: 54.6 [7.0] years). A total of 38,022 women completed the questionnaire and were free of a diagnosis of AMD.
The main outcome measure was incident AMD responsible for a reduction in best-corrected visual acuity to 20/30 or worse, based on self-report confirmed by medical record review.
A total of 235 cases of AMD, most characterized by some combination of drusen and retinal pigment epithelial changes, were confirmed during an average of 10 years of follow-up. Women in the highest tertile of intake for docosahexaenoic acid (DHA), compared to those in the lowest, had a multivariate-adjusted relative risk (RR) of AMD of 0.62 (95% confidence interval [CI], 0.44–0.87). For eicosapentaenoic acid (EPA), women in the highest tertile of intake had a RR of 0.66 (CI, 0.48–0.92). Consistent with the findings for DHA and EPA, women who consumed 1 or more servings of fish per week, compared to those who consumed less than 1 serving per month, had a RR of AMD of 0.58 (CI, 0.38–0.87).
These prospective data from a large cohort of female health professionals without a diagnosis of AMD at baseline indicate that regular consumption of DHA, EPA, and fish was associated with a significantly decreased risk of incident AMD and may be of benefit in the primary prevention of AMD.