Quantify incidence of cardiovascular outcomes in patients with advanced breast cancer receiving cardiotoxic and non-cardiotoxic chemotherapy.
Identified all women at a Midwestern health system with an initial diagnosis of AJCC stage III/IV breast cancer (1995–2003) and a random sample of 50 women initially diagnosed with stage I/II who progressed to stage III/IV. Calculated rates of new cardiovascular outcomes (heart failure, dysrhythmia, and ischemia events) for cardiotoxic (anthracycline or trastuzumab) and non-cardiotoxic agents.
Of 315 patients, 90.5% (n=285) received systemic cancer therapy; 67.7% (n=193) received cardiotoxic drugs. Older patients were less likely to receive cardiotoxic agents (86.4% of those aged ≤59 years vs. 31.9% of those aged 70+). Adjusting for age, race, stage, surgery/radiation, ER/PR status, and diagnosis year, the rate of new cardiac events was higher in patients exposed to cardiotoxic drugs than in those exposed to non-cardiotoxic drugs (adjusted hazard ratio=2.5; 95% CI 0.9, 7.2). Patients with a history of cardiac events (relative risk=3.2; 95% CI 2.0–5.1) and those with a history of heart failure (relative risk=5.9; 95% CI 2.4–14.6) were more likely to receive non-cardiotoxic treatment. Heart failure events occurred steadily over time; after 3 years of follow-up, 16% of patients exposed to cardiotoxic drugs and 8% of those exposed to non-cardiotoxic drugs had experienced an event.
Patients with cardiac comorbidity are less likely to receive cardiotoxic agents. Use of cardiotoxic agents is common; treatment is related to patient and tumor characteristics and is associated with a substantial risk of cardiotoxicity that persists over patients’ remaining lifespans.
breast cancer; chemotherapy; cardiotoxic agents; cardiotoxicity risk
This paper introduces an improved tool for designing matched-pairs randomized trials. The tool allows the incorporation of clinical and other knowledge regarding the relative importance of variables used in matching and allows for multiple types of missing data. The method is illustrated in the context of a cluster-randomized trial. A web application and R package are introduced to implement the method and incorporate recent advances in the area.
Reweighted Mahalanobis Distance (RMD) matching incorporates user-specified weights and imputed values for missing data. Weight may be assigned to missingness indicators to match on missingness patterns. Three examples are presented, using real data from a cohort of 90 Veterans Health Administration sites that had at least 100 incident metformin users in 2007. Matching is utilized to balance seven factors aggregated at the site level. Covariate balance is assessed for 10,000 randomizations under each strategy: simple randomization, matched randomization using the Mahalanobis distance, and matched randomization using the RMD.
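The RMD can be sketched as a Mahalanobis distance whose inverse covariance is rescaled by user-specified covariate weights, followed by nonbipartite pairing of sites. A minimal Python sketch under those assumptions (the helper names are illustrative, and the greedy pairing is a stand-in for the optimal nonbipartite matching the R package implements):

```python
import numpy as np

def rmd_matrix(X, weights):
    """Pairwise squared reweighted Mahalanobis distances: the inverse
    covariance is rescaled by user-specified covariate weights."""
    S_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    W = np.diag(np.sqrt(weights))
    M = W @ S_inv @ W
    n = len(X)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = X[i] - X[j]
            D[i, j] = D[j, i] = d @ M @ d
    return D

def greedy_pairs(D):
    """Greedily pair the closest remaining sites (illustrative only;
    optimal nonbipartite matching minimizes total distance instead)."""
    D = D.copy()
    np.fill_diagonal(D, np.inf)
    unused = set(range(len(D)))
    pairs = []
    while len(unused) > 1:
        idx = sorted(unused)
        sub = D[np.ix_(idx, idx)]
        i, j = np.unravel_index(np.argmin(sub), sub.shape)
        pairs.append((idx[i], idx[j]))
        unused -= {idx[i], idx[j]}
    return pairs
```

Missingness indicators can simply be appended as extra columns of `X` with their own weights, which is how matching on missingness patterns enters the distance.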
The RMD matching achieved better balance than simple randomization or MD randomization. In the first example, simple and MD randomization resulted in a 10% chance of seeing an absolute mean difference of greater than 26% in the percent of nonwhite patients per site; the RMD dramatically reduced that to 6%. The RMD achieved significant improvement over simple randomization even with as much as 20% of the data missing.
RMD matching provides an easy-to-use tool that incorporates user knowledge and missing data.
To evaluate the impact of a prescriber focused individual educational and audit–feedback intervention undertaken by the Nova Scotia Prescription Monitoring Program (NSPMP) in March/April 2007 to reduce meperidine use.
The NSPMP records all prescriptions for controlled substances dispensed in community pharmacies in Nova Scotia, Canada. Oral meperidine use from 1 July 2005 to 31 December 2009 was examined using NSPMP data. Monthly totals for the following were obtained: number of individual patients who filled at least one meperidine prescription, number of prescriptions, and number of tablets dispensed. Data were analyzed graphically to observe overall trends. The intervention effect was estimated on the logarithmic scale with autocorrelations over time modeled by an integrated autoregressive moving average model for each outcome measure.
An overall trend toward decreasing use from July 2005 to December 2009 was apparent for all three outcome measures. The intervention was associated with a statistically significant reduction in meperidine use, after adjusting for the overall long-term trend. Compared with the pre-intervention period, the monthly number of patients declined by 12% (p < 0.001; 95% confidence interval [CI] 5%–18%), prescriptions by 10% (p < 0.001; 95% CI 3%–17%), and tablets by 13.5% (p < 0.001; 95% CI 6%–29%) in the post-intervention period.
Given the risks associated with meperidine, determining that this intervention successfully reduced meperidine use is encouraging. This study highlights the potential for using population data such as the NSPMP to evaluate the effectiveness of population-level interventions to improve medication use, including professional, organizational, financial, and regulatory initiatives.
PMID: 22081471 CAMSID: cams3253
educational intervention; meperidine; pethidine; time series analysis
Nonexperimental studies of treatment effectiveness provide an important complement to randomized trials by including heterogeneous populations. Propensity scores (PS) are common in these studies, but may not adequately capture changes in channeling experienced by innovative treatments. We use calendar time-specific (CTS) PSs to examine the effect of oxaliplatin during dissemination from off-label to widespread use.
Stage III colon cancer patients aged 65+ initiating chemotherapy between 2003 and 2006 were examined using cancer registry data linked with Medicare claims. Two PS approaches for receipt of oxaliplatin vs. 5-fluorouracil were constructed using logistic models with key components of age, sex, substage, grade, census-level income, and comorbidities: 1) a conventional, year-adjusted PS and 2) a CTS PS constructed and matched separately within 1-year intervals, then combined. We compared PS-matched hazard ratios (HR) for mortality using Cox models.
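The CTS approach, fitting and matching the PS separately within calendar-time strata, can be sketched as follows (a greedy caliper match on hypothetical column names; the study's exact matching algorithm is not specified here):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def cts_ps_match(df, year_col, treat_col, covars, caliper=0.05):
    """Fit a separate PS in each calendar-time stratum, then greedily
    1:1 match treated to untreated on the PS within that stratum."""
    pairs = []
    for _, g in df.groupby(year_col):
        if g[treat_col].nunique() < 2:
            continue  # need both treated and untreated in the stratum
        ps = LogisticRegression(max_iter=1000).fit(
            g[covars], g[treat_col]).predict_proba(g[covars])[:, 1]
        g = g.assign(ps=ps)
        controls = g[g[treat_col] == 0].copy()
        for i, row in g[g[treat_col] == 1].iterrows():
            if controls.empty:
                break
            j = (controls["ps"] - row["ps"]).abs().idxmin()
            if abs(controls.loc[j, "ps"] - row["ps"]) <= caliper:
                pairs.append((i, j))           # matched without replacement
                controls = controls.drop(j)
    return pairs
```

Pooling the within-stratum matched pairs and fitting one Cox model over them corresponds to the "constructed and matched separately within 1-year intervals, then combined" step.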
Oxaliplatin use increased significantly; 8% (n=86) of patients received it in the first time period vs. 52% (n=386) in the last. Channeling by comorbidities, income, and age appeared to change over time. The CTS PS improved covariate balance within calendar time strata and yielded an attenuated estimated benefit of oxaliplatin (HR=0.75) compared with the conventional PS (HR=0.69).
In settings where prescribing patterns have changed and calendar time acts as a confounder, a CTS PS can characterize changes in treatment choices, and estimating separate PSs within specific calendar-time periods may enhance confounding control. To increase the validity of CER, researchers should carefully consider drug lifecycles and the effects of innovative treatment dissemination over time.
Electronic healthcare databases are commonly used in comparative effectiveness and safety research of therapeutics. Many databases now include additional confounder information in a subset of the study population through data linkage or data collection. We described and compared existing methods for analyzing such datasets.
Using data from The Health Improvement Network and the relation between non-steroidal anti-inflammatory drugs (NSAIDs) and upper gastrointestinal bleeding (UGIB) as an example, we employed several methods to handle partially missing confounder information.
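Of the approaches compared, the missing-indicator method is the simplest to show concretely; a minimal pandas sketch (an illustrative helper, not the study's code):

```python
import numpy as np
import pandas as pd

def add_missing_indicators(df, cols, fill=0.0):
    """Missing-indicator approach: flag each partially observed
    confounder with a dummy, then fill the gap so the variable can
    enter the outcome or propensity model."""
    out = df.copy()
    for c in cols:
        out[c + "_missing"] = out[c].isna().astype(int)
        out[c] = out[c].fillna(fill)
    return out

# hypothetical confounders with partial missingness
df = pd.DataFrame({"bmi": [24.0, np.nan, 31.5],
                   "smoker": [0.0, 1.0, np.nan]})
aug = add_missing_indicators(df, ["bmi", "smoker"])
```

The missing-category approach is the categorical analogue (an extra "unknown" level); both rest on strong assumptions and, as the conclusion below notes, are generally best avoided.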
The crude odds ratio (OR) of upper gastrointestinal bleeding was 1.50 (95% confidence interval: 0.98, 2.28) among selective cyclo-oxygenase-2 inhibitor initiators (n = 43,569) compared with traditional non-steroidal anti-inflammatory drug initiators (n = 411,616). The OR dropped to 0.81 (0.52, 1.27) upon adjustment for confounders recorded for all patients. When further considering three additional variables missing in 22% of the study population (smoking, alcohol consumption, body mass index), the OR was between 0.80 and 0.83 for the missing-category approach, the missing-indicator approach, single imputation by the most common category, multiple imputation by chained equations, and propensity score calibration. The OR was 0.65 (0.39, 1.09) and 0.67 (0.38, 1.16) for the unweighted and the inverse probability weighted complete-case analysis, respectively.
Existing methods for handling partially missing confounder data require different assumptions and may produce different results. The unweighted complete-case analysis, the missing-category/indicator approach, and single imputation require often unrealistic assumptions and should be avoided. In this study, differences across methods were not substantial, likely due to relatively low proportion of missingness and weak confounding effect by the three additional variables upon adjustment for other variables.
Comparative effectiveness research; Missing data; Databases; Confounding; Pharmacoepidemiology; THIN
A correctly specified propensity score (PS) estimated in a cohort (“cohort PS”) should in expectation remain valid in a subgroup population. We sought to determine whether a cohort PS can be validly applied to subgroup analyses and thus add efficiency to studies with many subgroups or restricted data. In each of 3 cohort studies we estimated a cohort PS, defined 5 subgroups, and then estimated subgroup-specific PSs. We compared the difference in treatment effect estimates for subgroup analyses adjusted by cohort PSs versus subgroup-specific PSs. Then, 10 million times, we simulated a population with known characteristics of confounding, subgroup size, treatment interactions, and treatment effect, and again assessed the difference in point estimates. We observed that point estimates in most subgroups were substantially similar with the two methods of adjustment. In simulations, the effect estimates differed by a median of 3.4% (interquartile [IQ] range 1.3% to 10.0%). The IQ range exceeded 10% only in cases where the subgroup had <1000 patients or few outcome events. Our empirical and simulation results indicate that using a cohort PS in subgroup analyses is a feasible approach, particularly in larger subgroups.
Propensity Scores; Confounding Factors (Epidemiology); Multicenter Study [Publication Type]; Epidemiologic Methods; Effect Modifiers (Epidemiology); Comparative Effectiveness Research
Acid suppressants are commonly prescribed medications. Laboratory studies suggest a mechanism by which they could increase colorectal cancer (CRC) risk. A few epidemiologic studies have investigated acid suppressant use and CRC risk; none has documented an overall association. We sought to investigate whether acid suppressants are associated with CRC risk.
We conducted a case-control study among members of an integrated healthcare delivery system in Washington State. Cases (N=641) were diagnosed with CRC between 2000 and 2003; controls (N=641) were randomly selected from enrollees and matched to cases on age, sex, and length of enrollment. We used conditional logistic regression to estimate odds ratios (OR) and 95% confidence intervals (CI) for CRC associated with the use of any acid-suppressive medication, proton pump inhibitors (PPIs) only, histamine receptor antagonists (H2 blockers) only, or both PPIs and H2 blockers, relative to the use of neither PPIs nor H2 blockers.
Use of PPIs exclusively was modestly associated with an increased risk of CRC; however, this finding was consistent with chance and was based on a small number of exposed patients (OR=1.7; 95% CI=0.8, 4.0). H2 blocker use alone was not related to CRC risk (OR=0.8; 95% CI=0.6, 1.1).
PPI use may be modestly associated with CRC risk; further research should be conducted in populations with long-term PPI use.
colorectal cancer; acid suppressive medications; proton pump inhibitors; histamine receptor antagonists
To describe the Acute Myocardial Infarction (AMI) Validation project, a test case for health outcome validation within the FDA-funded Mini-Sentinel pilot program.
The project consisted of four parts: (1) case identification: developing an ICD9-based algorithm to identify hospitalized AMI patients within the Mini-Sentinel Distributed Database; (2) chart retrieval: establishing procedures that ensured patient privacy (collection and transfer of the minimum necessary amount of information, and redaction of direct identifiers) to validate potential cases of AMI; (3) abstraction and adjudication: trained nurse abstractors gathered key data using a standardized form, with cardiologist adjudication; and (4) calculation of the positive predictive value of the constructed algorithm.
Key decision points included: (1) breadth of the AMI algorithm; (2) centralized vs. distributed abstraction; and (3) approaches to maintaining patient privacy and to obtaining charts for public health purposes. We used an algorithm limited to ICD9 codes 410.x0-410.x1. Centralized data abstraction was performed due to the modest number of charts requested (<155). The project’s public health status accelerated chart retrieval in most instances.
We have established a process to validate AMI within Mini-Sentinel, which may be used for other health outcomes. Challenges include: (1) ensuring that only the minimum necessary data are transmitted by Data Partners for centralized chart review; (2) establishing procedures that maintain data privacy while still allowing timely access to medical charts; and (3) securing access to charts for public health uses that do not require IRB approval while maintaining patient privacy.
Myocardial infarction; coronary artery disease; validation; administrative data
To characterize the validity of algorithms to identify AF from electronic health data through a systematic review of the literature, and to identify gaps needing further research.
Two reviewers examined publications during 1997–2008 that identified patients with AF from electronic health data and provided validation information. We abstracted information including algorithm sensitivity, specificity, and positive predictive value (PPV).
We reviewed 544 abstracts and 281 full-text articles, of which 18 provided validation information from 16 unique studies. Most used data from before 2000, and 10 of 16 used only inpatient data. Three studies incorporated electronic ECG data for case identification or validation. A large proportion of prevalent AF cases identified by ICD-9 code 427.31 were valid (PPV 70–96%, median 89%). Seven studies reported algorithm sensitivity (range, 57–95%; median 79%). One study validated an algorithm for incident AF and reported a PPV of 77%.
The ICD-9 code 427.31 performed relatively well, but conclusions about algorithm validity are hindered by few recent data, use of nonrepresentative populations, and a disproportionate focus on inpatient data. An optimal contemporary algorithm would likely draw on inpatient and outpatient codes and electronic ECG data. Additional research is needed in representative, contemporary populations regarding algorithms that identify incident AF and incorporate electronic ECG data.
atrial fibrillation; cardiac arrhythmia; epidemiology; validation; positive predictive value; sensitivity
Previous studies suggest that disease-modifying anti-rheumatic drugs (DMARDs) increase tuberculosis (TB) risk. The accuracy of pharmacy and coded-diagnosis information to identify persons with TB is unclear.
Within a cohort of rheumatoid arthritis (RA) patients (2000–2005) enrolled in Tennessee Medicaid, we identified those with potential TB using ICD9-CM diagnosis codes and/or pharmacy claims. Using the Tennessee TB registry as the gold standard for identification of TB, we estimated the sensitivity, specificity, predictive values and the respective 95% confidence intervals for each TB case-ascertainment strategy.
Ten of 18,094 RA patients had confirmed TB during 61,461 person-years of follow-up (16.3 per 100,000 person-years). The sensitivity and positive predictive value (PPV) and respective 95% confidence intervals were low for confirmed TB based on ICD9-CM codes alone (60.0% (26.2–87.8) and 1.3% (0.5–2.9)), pharmacy data alone (20% (2.5–55.6) and 4.1% (0.5–14.3)), and both (20% (2.5–55.6) and 25.0% (3.2–65.1)).
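The intervals reported above are consistent with exact (Clopper-Pearson) binomial limits; for example, a sensitivity of 6 of the 10 registry-confirmed cases reproduces the 60.0% (26.2–87.8) figure. A small sketch:

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact binomial confidence limits for a proportion such as
    sensitivity or PPV (k successes out of n trials)."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# sensitivity of ICD9-CM codes alone: 6 of 10 confirmed TB cases detected
sens = 6 / 10
lo, hi = clopper_pearson(6, 10)
```

Exact limits are preferred here because the counts are tiny; normal-approximation intervals would be unreliable with only 10 confirmed cases.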
Algorithms that use administrative data alone to identify TB have a poor positive predictive value that results in a high false positive rate of TB detection.
rheumatoid arthritis; anti-rheumatic drugs; tuberculosis
Prospective medical product monitoring is intended to alert stakeholders about whether and when safety problems are identifiable in a continuous stream of longitudinal electronic healthcare data. In comparing the performance of methods to generate these alerts, three factors must be considered: (1) accuracy in alerting; (2) timeliness of alerting; and (3) the trade-offs between the costs of false negative and false positive alerting. Using illustrative examples, we show that traditional scenario-based measures of accuracy, such as sensitivity and specificity, which classify only at the end of monitoring, fail to appreciate timeliness of alerting. We propose an event-based approach that classifies exposed outcomes according to whether or not a prior alert was generated. We provide event-based extensions to existing metrics and discuss why these metrics are limited in this setting because of inherent tradeoffs that they impose between the relative consequences of false positives versus false negatives. We provide an expression that summarizes event-based sensitivity (the proportion of exposed events that occur after alerting among all exposed events in scenarios with true safety issues) and event-based specificity (the proportion of exposed events that occur in the absence of alerting among all exposed events in scenarios with no true safety issues) by taking an average weighted by the relative costs of false positive and false negative alerting. This approach explicitly accounts for accuracy in alerting, timeliness in alerting, and the trade-offs between the costs of false negative and false positive alerting. Subsequent work will involve applying the metric to simulated data.
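The cost-weighted summary described above can be written as a single expression; a sketch of one plausible form (the paper's exact formula is not reproduced here):

```python
def weighted_event_metric(event_sens, event_spec, cost_fn, cost_fp):
    """Average of event-based sensitivity and specificity, weighted by
    the relative costs of the errors each component guards against:
    sensitivity is weighted by the false-negative cost, specificity by
    the false-positive cost."""
    w = cost_fn / (cost_fn + cost_fp)
    return w * event_sens + (1 - w) * event_spec
```

With equal costs this reduces to the plain average of the two components; skewing the costs toward false negatives pushes the summary toward rewarding timely, sensitive alerting.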
medical product monitoring; active surveillance; prospective safety monitoring; performance metrics; time-to-alerting; operating characteristics
When using electronic medical record (EMR) data to study drug use, hospitalizations are markers of severe outcomes. To identify events within a specified time window, it is important to validate hospitalization diagnoses and dates. Our objective was to validate pneumonia hospitalizations and their dates identified using hospitalization codes in The Health Improvement Network (THIN), a UK primary care EMR.
This cross-sectional study used a cohort of THIN adult visits for acute nonspecific respiratory infections from 6/1985–8/2006. Pneumonia hospitalizations within 14 days after the visit were identified using THIN diagnosis and hospitalization codes; 60 were randomly selected for validation. Patients' general practitioners (GPs) returned deidentified hospital summaries and consultants' letters regarding overnight hospitalizations within a 180-day window around the THIN hospitalization. Positive predictive value (PPV) was the number of GP-validated hospitalizations divided by THIN-documented hospitalizations.
GPs returned 59/60 patient records; 52 had confirmed hospitalizations. PPV of THIN hospitalization documentation was 88% (95% CI 77–95). One admission was not for pneumonia; PPV of THIN-documented pneumonia admission was 86% (95% CI 75–94). Of 52 valid THIN hospitalizations, 50 were actually admitted within 14 days of the documented THIN date (range −2 to +18). The median difference between THIN and validated admission dates was +0.5 days; the mean difference was +3.1 days. In 16 of 52 admitted patients, the THIN admission date was the actual discharge date.
THIN hospitalization codes performed well in identifying acute pneumonia hospitalizations and their timing. Admission date validity might be better for conditions associated with shorter vs. longer hospitalizations.
pneumonia; health services research; validation studies; electronic medical records; drug safety; treatment outcomes
Although chronic use of diuretics has been implicated as a risk factor for falls, it is unknown whether changes in diuretic drugs are associated with an acutely elevated risk of falls. We evaluated the relationship between change in a diuretic prescription (new prescription or increased dose) and the occurrence of documented falls among nursing home residents.
Participants of the cohort were 1,785 long-term care residents of two large nursing homes (2005–2010; Boston, MA). A self-matched, case-crossover analysis was used to examine whether there was an acutely increased risk of falling in the day following a diuretic drug change compared with days without a diuretic drug change. Odds ratios with 95% confidence intervals were calculated using conditional logistic regression models.
During a mean follow-up of 8.4 months, 1,181 participants experienced an incident fall. Nine participants experienced a diuretic change on the day before the fall. The odds of falling one day after a diuretic change were elevated (OR: 2.08, 95% CI 0.89, 4.86). The association was stronger and reached nominal statistical significance when loop diuretics were examined separately (OR: 2.46, 95% CI 1.02, 5.92). We estimated that for every 271 loop diuretic drug changes, one excess fall occurred.
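The "one excess fall per 271 changes" figure is a number-needed-to-harm-style quantity. A back-of-envelope sketch with hypothetical inputs (this assumes the OR approximates the rate ratio; it is not the authors' exact calculation):

```python
def changes_per_excess_fall(n_changes, falls_after_change, odds_ratio):
    """Among falls on post-change days, roughly (OR - 1) / OR are
    attributable to the change; invert to get changes per excess fall."""
    attributable = falls_after_change * (odds_ratio - 1) / odds_ratio
    return n_changes / attributable

# hypothetical: 1,000 loop diuretic changes, 6 next-day falls, OR 2.46
nnh = changes_per_excess_fall(1000, 6, 2.46)
```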
Nursing home residents are at an increased risk of falls in the day following a new prescription or increased dose of a loop diuretic drug. Extra precautions should be taken immediately following a loop diuretic drug change in an effort to prevent falls.
diuretic; fall; nursing home
Infliximab, a chimeric monoclonal anti-TNFα antibody, has been found to increase the risk of serious infections compared with the TNF receptor fusion protein etanercept in some studies. It is unclear whether the risk varies by patient characteristics. We conducted a study to address this question.
We identified members of Kaiser Permanente Northern California who initiated infliximab (n=793) or etanercept (n=2,692) in 1997–2007. Using a Cox model, we estimated the propensity score-adjusted hazard ratio (HR) and 95% confidence interval (CI) of serious infections requiring hospitalization or opportunistic infections comparing infliximab with etanercept following treatment initiation. We tested whether the adjusted HR differed by age, sex, race/ethnicity, body mass index, and smoking status.
The crude incidence rate of serious infections per 100 person-years was 5.4 (95% CI: 3.8, 7.5) in patients <65 years and 16.0 (10.4, 23.4) in patients ≥65 years during the first three months following treatment initiation. Compared with etanercept, the adjusted HR during this period was elevated for infliximab in patients <65 years (HR 3.01; 95% CI: 1.49, 6.07), but not in those ≥65 years (HR 0.94; 0.41, 2.13). Findings did not suggest that the HR varied by other patient characteristics examined.
An increased risk of serious infections associated with infliximab relative to etanercept did not appear to be modified by patients’ sex, race/ethnicity, body mass index, or smoking status. There was an indication that the increased risk might be limited to patients <65 years. Additional studies are warranted to verify or refute this finding.
Anti-TNF agents; Database; Pharmacoepidemiology; Propensity score; Serious infections
National Cancer Institute (NCI)-funded cooperative oncology group trials have improved overall survival for children with cancer from 10% to 85% and have set standards of care for adults with malignancies. Despite these successes, cooperative oncology groups currently face substantial challenges. We are working to develop methods to improve the efficiency and effectiveness of these trials. Specifically, we merged data from the Children’s Oncology Group (COG) and the Pediatric Health Information Systems (PHIS) to improve toxicity monitoring, estimate treatment-associated resource utilization and costs, and to address important clinical epidemiology questions.
COG and PHIS data on patients enrolled on a Phase III COG trial for de novo acute myeloid leukemia (AML) at 43 PHIS hospitals were merged using a probabilistic algorithm. Resource utilization summary statistics were then tabulated for the first chemotherapy course based on PHIS data.
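Probabilistic record linkage of this kind is commonly scored with Fellegi-Sunter agreement weights; a minimal sketch (the fields and m/u probabilities are illustrative assumptions, not the actual COG-PHIS algorithm parameters):

```python
import math

def match_weight(agreements, m_probs, u_probs):
    """Sum of log2 likelihood ratios across linkage fields: m is the
    probability a field agrees for a true match, u the probability it
    agrees for a random non-match."""
    w = 0.0
    for agree, m, u in zip(agreements, m_probs, u_probs):
        w += math.log2(m / u) if agree else math.log2((1 - m) / (1 - u))
    return w

# e.g. agreement on birth date and sex, disagreement on admission day
w = match_weight([True, True, False], [0.95, 0.98, 0.9], [0.01, 0.5, 0.2])
```

Pairs whose total weight exceeds a chosen threshold are accepted as matches; a thresholding step of this kind underlies the 94% match rate reported above.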
Of 416 patients enrolled on the Phase III COG trial at PHIS centers, 392 (94%) were successfully matched. Of these, 378 (96%) had inpatient PHIS data available beginning at the date of study enrollment. For these, daily blood product usage and anti-infective exposures were tabulated and standardized costs were described.
These data demonstrate that patients enrolled in a cooperative group oncology trial can be successfully identified in an administrative data set, and that supportive care resource utilization can be described. Further work is required to optimize the merging algorithm, map resource utilization metrics to NCI Common Toxicity Criteria for monitoring toxicity, perform comparative effectiveness studies, and estimate the costs associated with protocol therapy.
administrative data; acute myeloid leukemia; cooperative oncology group; comparative effectiveness; clinical trials
Under Medicare Part D, patient characteristics influence plan choice, which in turn influences Part D coverage gap entry. We compared pre-defined propensity score (PS) and high-dimensional propensity score (hdPS) approaches to address such ‘confounding by health system use’ in assessing whether coverage gap entry is associated with cardiovascular events or death.
We followed 243,079 Medicare patients aged 65+ with linked prescription, medical, and plan-specific data in 2005–2007. Patients reached the coverage gap and were followed until an event or year’s end. Exposed patients were responsible for drug costs in the gap; unexposed patients (patients with non-Part D drug insurance and Part D patients receiving a low-income subsidy (LIS)) received financial assistance. Exposed patients were 1:1 PS- or hdPS-matched to unexposed patients. The PS model included 52 predefined covariates; the hdPS model added 400 empirically identified covariates. Hazard ratios for death and any of five cardiovascular outcomes were compared. In sensitivity analyses, we explored residual confounding using only LIS patients in the unexposed group.
In unadjusted analyses, exposed patients had no greater hazard of death (HR=1.00; 95% CI 0.84–1.20) or other outcomes. PS-matched (HR=1.29; 0.99–1.66) and hdPS-matched (HR=1.11; 0.86–1.42) analyses showed elevated but non-significant hazards of death. In sensitivity analyses, the PS analysis showed a protective effect (HR=0.78; 0.61–0.98), while the hdPS analysis (HR=1.06; 0.82–1.37) confirmed the main hdPS findings.
Although the PS-matched analysis suggested elevated though non-significant hazards of death among patients with no financial assistance during the gap, the hdPS analysis produced lower estimates that were stable across sensitivity analyses.
confounding; health services use; propensity score adjustment; high-dimensional propensity score; health policy
Usefulness of propensity scores and regression models to balance potential confounders at treatment initiation may be limited for newly introduced therapies with evolving use patterns.
To consider settings in which the disease risk score has theoretical advantages as a balancing score in comparative effectiveness research, because of stability of disease risk and the availability of ample historical data on outcomes in people treated before introduction of the new therapy.
We review the indications for and balancing properties of disease risk scores in the setting of evolving therapies, and discuss alternative approaches for estimation. We illustrate development of a disease risk score in the context of the introduction of atorvastatin and the use of high-dose statin therapy beginning in 1997, based on data from 5,668 older survivors of myocardial infarction who filled a statin prescription within 30 days after discharge from 1995 until 2004. Theoretical considerations suggested development of a disease risk score among non-users of atorvastatin and high-dose statins during the period 1995–1997.
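The fit-in-history, score-everyone workflow can be sketched as follows (column names and the logistic outcome model are illustrative assumptions):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def disease_risk_score(historical, current, covars, outcome):
    """Fit the outcome model among historical patients treated before
    the new therapy's introduction, then score the current cohort and
    stratify it into quintiles of predicted risk."""
    model = LogisticRegression(max_iter=1000).fit(
        historical[covars], historical[outcome])
    score = model.predict_proba(current[covars])[:, 1]
    quintile = pd.qcut(score, 5, labels=False)
    return score, quintile
```

Because the model is estimated entirely in pre-introduction data, the score is unaffected by the evolving channeling of the new drug, which is the stability property the abstract highlights.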
Observed risk of events increased from 11% to 35% across quintiles of the disease risk score, which had a C-statistic of 0.71. The score allowed control of many potential confounders even during early follow-up with few study endpoints.
Balancing on a disease risk score offers an attractive alternative to a propensity score in some settings such as newly marketed drugs and provides an important axis for evaluation of potential effect modification. Joint consideration of propensity and disease risk scores may be valuable.
Despite persistent racial/ethnic disparities in cardiovascular disease (CVD) among older adults, information on whether there are similar disparities in the use of prescription and over-the-counter medications to prevent such disease is limited. We examined racial and ethnic disparities in the use of statins and aspirin among older adults at low, moderate, and high risk for CVD.
Methods and Results
In-home interviews, including a medication inventory, were administered between June 2005 and March 2006 to 3005 community-residing individuals, ages 57–85 years, drawn from a cross-sectional, nationally representative probability sample of the United States. Based on a modified version of the Adult Treatment Panel III (ATP III) risk stratification guidelines, 1066 respondents were at high cardiovascular risk, 977 were at moderate risk, and 812 were at low risk. Rates of use were highest among respondents at high cardiovascular risk. Racial differences were largest among respondents at high risk, with blacks less likely than whites to use statins (38% vs. 50%, p = 0.007) and aspirin (29% vs. 44%, p = 0.008). After controlling for age, gender, comorbidity, socioeconomic status, and access-to-care factors, racial/ethnic disparities persisted. In particular, blacks at highest risk were less likely than their white counterparts to use statins (odds ratio (OR) 0.65, confidence interval (CI) 0.46–0.90) or aspirin (OR 0.61, CI 0.37–0.98).
These results, based on an in-home survey of actual medication use, suggest widespread underuse of indicated preventive therapies among older adults at high cardiovascular risk in the United States. Racial/ethnic disparities in such use may contribute to documented disparities in cardiovascular outcomes.
geriatrics; disparities; race/ethnicity; secondary prevention; statins
FDA advisory committees recently made recommendations to address acetaminophen (APAP)-related toxicity.
To study the proportion of APAP-users potentially consuming APAP above the currently recommended dose (4gm/day) and above a toxic dose (10gm/day). To explore the impact of limiting the APAP-strength in combination-prescriptions to 325mg on potential APAP-overuse patterns.
Using the 2001-2008 pharmacy claims from IMS LifeLink Health Plans, APAP potential maximum daily dose (PMDD), potential cumulative dose and potential average daily dose (PADD) were calculated annually for APAP-users. The proportion of users with potential APAP-use above 4gm/day and 10gm/day are reported. Analyses were repeated by substituting the maximum APAP-strength in combination-prescriptions to 325mg. Ordinary least squares regression was used to detect linear trends in APAP-use/overuse.
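One plausible reading of the PMDD calculation: spread each prescription's total APAP over its days supply, sum overlapping prescriptions day by day, and take each patient's maximum (hypothetical column names; the authors' exact algorithm is not shown):

```python
import pandas as pd

def potential_max_daily_dose(claims):
    """PMDD sketch: mg/day = strength x quantity / days supply, summed
    over prescriptions active on the same day; per-patient maximum."""
    daily = {}
    for _, rx in claims.iterrows():
        mg_per_day = rx["apap_mg_per_unit"] * rx["quantity"] / rx["days_supply"]
        for day in range(rx["start_day"], rx["start_day"] + rx["days_supply"]):
            key = (rx["patient"], day)
            daily[key] = daily.get(key, 0.0) + mg_per_day
    pmdd = {}
    for (patient, _), mg in daily.items():
        pmdd[patient] = max(pmdd.get(patient, 0.0), mg)
    return pmdd

# two overlapping APAP combination prescriptions for one patient
claims = pd.DataFrame([
    {"patient": "A", "apap_mg_per_unit": 325, "quantity": 120,
     "days_supply": 30, "start_day": 0},    # 1300 mg/day
    {"patient": "A", "apap_mg_per_unit": 500, "quantity": 60,
     "days_supply": 30, "start_day": 15},   # 1000 mg/day
])
```

The 325mg substitution scenario corresponds to capping `apap_mg_per_unit` at 325 and recomputing.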
790,188 of 2,656,161 study subjects were prescribed acetaminophen in one or more years from 2001-2008. The proportions of adult APAP-users with PMDD >4gm/day and PADD >4gm/day significantly decreased (p=0.0020 and p=0.0024 respectively). If the maximum APAP-strength in combination-prescriptions was 325mg, the proportion of APAP-users with PMDD >4gm would be 14.08% in 2001 and 13.67% in 2008 while the proportion of those with PMDD >10gm would be 0.21% and 2.30%.
About 1 in 4 APAP-users had a PMDD >4gm/day and 2-3% had a PMDD >10gm/day based exclusively on prescription data, which is concerning. These proportions could be reduced by more than half if the maximum APAP-strength in combination-prescriptions were 325mg. Additional monitoring of opioid prescription-patterns, greater physician and pharmacist cognizance in prescribing APAP-containing combination products, and dose-reduction strategies should be considered to reduce APAP-overuse.
High doses of gabapentin were associated with pancreatic acinar cell tumors in male Wistar rats, but there is little published epidemiological data regarding gabapentin and carcinogenicity. We explored the association between gabapentin and cancer in a United States (US) medical care program, and followed up nominally significant associations in a United Kingdom (UK) primary care database.
In the US Kaiser Permanente Northern California (KPNC) health system, we performed nested case-control analyses of gabapentin and 55 cancer sites and all-cancer combined using conditional logistic regression. Up to ten controls were matched to each case on year of birth, sex, and year of cohort entry. No other covariates were included in models. Only dispensings for gabapentin 2 years or more prior to index date were considered. Nominally significant associations with an odds ratio > 1.00 and p < 0.05 for three or more dispensings versus no dispensings were followed up by similar nested case-control analyses in the UK General Practice Research Database (GPRD), adjusting for potential indications for gabapentin and risk factors for the specific cancers.
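As a rough illustration of this design (not the KPNC implementation), sampling up to ten controls per case matched on birth year, sex, and cohort-entry year might look like the following, with hypothetical cohort records:

```python
import random

random.seed(0)

# Hypothetical cohort members: (id, birth_year, sex, entry_year, is_case)
cohort = [
    (1, 1950, "F", 2000, True),
    (2, 1950, "F", 2000, False),
    (3, 1950, "F", 2000, False),
    (4, 1962, "M", 2001, False),
]

def match_controls(case, cohort, k=10):
    """Sample up to k non-case controls matched exactly on birth year, sex,
    and year of cohort entry."""
    _, by, sex, ey, _ = case
    pool = [p for p in cohort
            if not p[4] and p[1] == by and p[2] == sex and p[3] == ey]
    return random.sample(pool, min(k, len(pool)))

case = cohort[0]
controls = match_controls(case, cohort)
print(sorted(c[0] for c in controls))  # both eligible controls are sampled
```

With only two eligible controls for the case, both are selected; conditional logistic regression would then be fit within these matched sets.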
The following analyses had OR>1.00 and p<0.05 for three or more dispensings of gabapentin versus no dispensing (2-year lag) in KPNC, and were also examined in the GPRD: all-cancer, breast, lung and bronchus, urinary bladder, kidney/renal pelvis, stomach, anus/anal canal/anorectum, penis, and other nervous system. These cancers were not statistically significantly associated with gabapentin in the GPRD case-control studies (2-year lag). The GPRD and KPNC studies did not identify a statistically significant increased risk of pancreatic cancer with >2 prescriptions of gabapentin in 2-year lagged analyses.
The epidemiological data in a US cohort with up to 12 years of follow-up and a UK cohort with up to 15 years of follow-up do not support a carcinogenic effect of gabapentin use. However, the confidence intervals for some analyses were wide, and an important effect cannot be confidently excluded.
Gabapentin; Cancer; Protopathic bias
Studies have associated thiazolidinedione (TZD) treatment with cardiovascular disease (CVD) and questioned whether the two available TZDs, rosiglitazone and pioglitazone, have different CVD risks. We compared CVD incidence, cardiovascular (CV) and all-cause mortality in type 2 diabetic patients treated with rosiglitazone or pioglitazone as their only TZD.
We analyzed survey, medical record, administrative, and National Death Index (NDI) data from 1999 through 2003 from Translating Research Into Action for Diabetes (TRIAD), a prospective observational study of diabetes care in managed care. Medications, CV procedures, and CVD were determined from health plan (HP) administrative data, and mortality was determined from the NDI. Adjusted hazard ratios (AHRs) were derived from Cox proportional hazards models adjusted for age, sex, race/ethnicity, income, history of diabetic nephropathy, history of CVD, insulin use, and HP.
Across TRIAD’s ten HPs, 1,815 patients (24%) filled prescriptions for a TZD, 773 (10%) for only rosiglitazone, 711 (10%) for only pioglitazone, and 331 (4%) for multiple TZDs. In the seven HPs using both TZDs, 1,159 patients (33%) filled a prescription for a TZD, 564 (16%) for only rosiglitazone, 334 (10%) for only pioglitazone, and 261 (7%) for multiple TZDs. For all CV events, CV and all-cause mortality, we found no significant difference between rosiglitazone and pioglitazone.
In this relatively small, prospective, observational study, we found no statistically significant differences in CV outcomes for rosiglitazone- compared to pioglitazone-treated patients. There does not appear to be a pattern of clinically meaningful differences in CV outcomes for rosiglitazone- versus pioglitazone-treated patients.
Thiazolidinediones; rosiglitazone; pioglitazone; diabetes
Few recent U.S. studies have examined population-based patterns in prescription drug use and even fewer have considered detailed patterns by race/ethnicity. In a representative community sample, our objectives were to determine the most commonly-used prescription drug classes, and to describe their use by age, gender, and race/ethnicity.
Cross-sectional epidemiologic study of 5503 (1767 black, 1877 Hispanic, 1859 white) community-dwelling participants aged 30–79 in the Boston Area Community Health Survey (2002–2005). Using medication information collected from an in-home interview and medication inventory, the prevalence of use of a therapeutic class (95% confidence interval [95% CI]) in the past month was estimated by gender, age group, and race/ethnicity. Estimates were weighted inversely to the probability of sampling for generalizability to Boston, MA.
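The inverse-probability weighting used here can be sketched as follows, with hypothetical respondents and sampling probabilities (not the actual survey weights): each respondent counts as 1/p(sampled) people, and prevalence is the weighted fraction of users.

```python
# Hypothetical respondents: (used_drug_class, sampling_probability)
respondents = [
    (True, 0.02), (False, 0.02), (True, 0.01), (False, 0.01), (False, 0.01),
]

# Weight each respondent inversely to their probability of being sampled
weights = [1 / p for _, p in respondents]

# Weighted prevalence = weighted users / total weight
num = sum(w for (used, _), w in zip(respondents, weights) if used)
prevalence = num / sum(weights)
print(round(prevalence, 3))  # 0.375
```

Note that the unweighted prevalence here would be 2/5 = 0.4; the weights shift the estimate toward the under-sampled stratum, which is the point of the adjustment.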
The therapeutic class containing selective serotonin reuptake inhibitor/serotonin norepinephrine reuptake inhibitor (SSRI/SNRI) antidepressants was most commonly used (14.6%), followed by statins (13.9%), beta-adrenergic blockers (10.6%), and angiotensin-converting enzyme inhibitors (10.5%). Within all age groups and both genders, black participants were substantially less likely than white participants to use SSRI/SNRI antidepressants (e.g., black men: 6.0% [95% CI: 3.9%–8.1%]; white men: 15.0% [95% CI: 10.2%–19.4%]). Other racial/ethnic differences were observed: for example, black women were significantly less likely than other groups to use benzodiazepines (e.g., black: 2.6% [95% CI: 1.2%–3.9%]; Hispanic: 9.4% [95% CI: 5.8%–13.0%]).
Race/ethnic differences in use of prescription therapeutic classes were observed in our community sample. Examining therapeutic classes rather than individual drugs resulted in a different distribution of common exposures compared to other surveys.
pharmacoepidemiology; minority health; prescription drugs
To perform a systematic review of the validity of algorithms for identifying cerebrovascular accidents (CVAs) or transient ischemic attacks (TIAs) using administrative and claims data.
PubMed and Iowa Drug Information Service (IDIS) searches of the English language literature were performed to identify studies published between 1990 and 2010 that evaluated the validity of algorithms for identifying CVAs (ischemic and hemorrhagic strokes, intracranial hemorrhage and subarachnoid hemorrhage) and/or TIAs in administrative data. Two study investigators independently reviewed the abstracts and articles to determine relevant studies according to pre-specified criteria.
A total of 35 articles met the criteria for evaluation. Of these, 26 provided data on the validity of algorithms for stroke, 7 for TIA, 5 for intracranial bleeds (intracerebral hemorrhage and subarachnoid hemorrhage), and 10 for the composite endpoints of stroke/TIA or cerebrovascular disease. Positive predictive values (PPVs) varied depending on the specific outcomes and algorithms evaluated. Specific algorithms to identify stroke and intracranial bleeds were found to have high PPVs (80% or greater). Algorithms to identify TIAs in adult populations generally had PPVs of 70% or greater.
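For reference, the PPV reported by these validation studies is simply the chart-confirmed fraction of algorithm-flagged patients; a minimal sketch with hypothetical counts:

```python
# Validating a claims-based stroke algorithm against chart review
# (hypothetical counts, for illustration only)
flagged_confirmed = 168   # algorithm-positive AND chart-confirmed stroke
flagged_total = 200       # all algorithm-positive patients reviewed

ppv = flagged_confirmed / flagged_total
print(ppv)  # 0.84 -> meets the 80%-or-greater threshold noted above
```

PPV depends on outcome prevalence in the source population, which is one reason the reviewed algorithms perform differently across stroke subtypes and datasets.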
The algorithms and definitions to identify CVAs and TIAs using administrative and claims data differ greatly in the published literature. The choice of the algorithm employed should be determined by the stroke subtype of interest.
cerebrovascular accident; transient ischemic attack; validation; administrative data
Drug Prescriptions; Narcotics