The clinical utility of genotype-guided (pharmacogenetically based) dosing of warfarin has been tested only in small clinical trials or observational studies, with equivocal results.
We randomly assigned 1015 patients to receive doses of warfarin during the first 5 days of therapy that were determined according to a dosing algorithm that included both clinical variables and genotype data or to one that included clinical variables only. All patients and clinicians were unaware of the dose of warfarin during the first 4 weeks of therapy. The primary outcome was the percentage of time that the international normalized ratio (INR) was in the therapeutic range from day 4 or 5 through day 28 of therapy.
At 4 weeks, the mean percentage of time in the therapeutic range was 45.2% in the genotype-guided group and 45.4% in the clinically guided group (adjusted mean difference [genotype-guided group minus clinically guided group], −0.2 percentage points; 95% confidence interval, −3.4 to 3.1; P=0.91). There also was no significant between-group difference among patients with a predicted dose difference between the two algorithms of 1 mg per day or more. There was, however, a significant interaction between dosing strategy and race (P=0.003): among black patients, the mean percentage of time in the therapeutic range was lower in the genotype-guided group than in the clinically guided group. The rates of the combined outcome of any INR of 4 or more, major bleeding, or thromboembolism did not differ significantly according to dosing strategy.
Genotype-guided dosing of warfarin did not improve anticoagulation control during the first 4 weeks of therapy. (Funded by the National Heart, Lung, and Blood Institute and others; COAG ClinicalTrials.gov number, NCT00839657.)
In vitro and animal model data suggest that intraoperative preservation solutions may influence endothelial function and vein graft failure (VGF) after coronary artery bypass graft (CABG) surgery. Clinical studies to validate these findings are lacking.
To evaluate the effect of vein graft preservation solutions on VGF and clinical outcomes in patients undergoing CABG surgery.
DESIGN, SETTING, AND PARTICIPANTS
Data from the Project of Ex-Vivo Vein Graft Engineering via Transfection IV (PREVENT IV) study, a phase 3, multicenter, randomized, double-blind, placebo-controlled trial that enrolled 3014 patients at 107 US sites from August 1, 2002, through October 22, 2003, were used. Eligibility criteria for the trial included CABG surgery for coronary artery disease with at least 2 planned vein grafts.
EXPOSURES
Preservation of vein grafts in saline, blood, or buffered saline solutions.
MAIN OUTCOMES AND MEASURES
One-year angiographic VGF and 5-year rates of death, myocardial infarction, and subsequent revascularization.
Most patients had grafts preserved in saline (1339 [44.4%]), followed by blood (971 [32.2%]) and buffered saline (507 [16.8%]). Baseline characteristics were similar among groups. One-year VGF rates were significantly lower in the buffered saline group than in the saline group (patient-level odds ratio [OR], 0.59 [95% CI, 0.45-0.78; P < .001]; graft-level OR, 0.63 [95% CI, 0.49-0.79; P < .001]) or the blood group (patient-level OR, 0.62 [95% CI, 0.46-0.83; P = .001]; graft-level OR, 0.63 [95% CI, 0.48-0.81; P < .001]). Use of buffered saline solution also tended to be associated with a lower 5-year risk for death, myocardial infarction, or subsequent revascularization compared with saline (hazard ratio, 0.81 [95% CI, 0.64-1.02; P = .08]) and blood (0.81 [0.63-1.03; P = .09]) solutions.
CONCLUSIONS AND RELEVANCE
Patients undergoing CABG whose vein grafts were preserved in a buffered saline solution had lower VGF rates and trends toward better long-term clinical outcomes compared with patients whose grafts were preserved in saline- or blood-based solutions.
While extracardiac vascular disease (ECVD), defined as a history of peripheral vascular disease (PVD) or cerebrovascular disease (CBVD), is common in patients undergoing coronary artery bypass graft (CABG) surgery, there are limited data available on the association between ECVD, vein graft failure (VGF), and clinical outcomes.
Using data from the Project of Ex-vivo Vein Graft Engineering via Transfection IV (PREVENT IV) trial (n = 3,014), 1-year angiographic follow-up and 5-year clinical outcomes (death, myocardial infarction, and revascularization) were determined in patients with and without ECVD. Logistic regression was used to assess risk of VGF. Generalized estimating equations methods were used to account for correlations in a graft-level analysis. Kaplan-Meier estimates and Cox proportional hazards regression were used to compare clinical outcomes. We similarly explored the association of the individual components, CBVD and PVD, with both VGF and clinical outcomes in an additive model.
Patients with ECVD (n=634, 21%) were older, more commonly female, and had more comorbidities, lower use of internal thoracic artery grafting, and overall worse graft quality than patients without ECVD. VGF rates tended to be higher (patient-level: odds ratio [OR]: 1.23, 95% confidence interval [CI] 0.96 to 1.58, p = 0.099; graft-level: OR: 1.23, 95% CI: 1.00 to 1.53, p = 0.053) in patients with ECVD. VGF rates were significantly higher among CBVD patients (OR: 1.42, 95% CI: 1.03 to 1.97, p = 0.035; graft-level: OR: 1.40, 95% CI: 1.06 to 1.85, p = 0.019). Patients with ECVD had a higher risk of death, myocardial infarction, or revascularization 5 years after CABG surgery (hazard ratio [HR]: 2.96, 95% CI: 2.02 to 4.35, p < 0.001). This relationship was driven by the subset of patients with PVD (HR = 3.32, 95% CI: 2.16 to 5.09, p < 0.001) and not by those with CBVD (HR = 1.10, 95% CI: 0.88 to 1.37, p = 0.40).
ECVD is common among patients undergoing CABG surgery and is associated with similar short-term but increasingly worse long-term clinical outcomes. This higher risk may be partly, but not exclusively, due to higher rates of VGF among these patients.
We investigated the prevalence of prior myocardial infarction (MI) and incidence of ischaemic cardiovascular (CV) events among atrial fibrillation (AF) patients.
Methods and results
In ROCKET AF, 14 264 patients with nonvalvular AF were randomized to rivaroxaban or warfarin. The key efficacy outcome for these analyses was the composite of CV death, MI, and unstable angina (UA). This pre-specified analysis was performed on patients while on treatment. Rates are per 100 patient-years. Overall, 2468 (17%) patients had prior MI at enrollment. Compared with patients without prior MI, these patients were more likely to be male (75 vs. 57%), to be on aspirin at baseline (47 vs. 34%), and to have prior congestive heart failure (78 vs. 59%), diabetes (47 vs. 39%), and hypertension (94 vs. 90%); they had a higher mean CHADS2 score (3.64 vs. 3.43) and fewer prior strokes or transient ischaemic attacks (46 vs. 54%). CV death, MI, or UA rates tended to be lower in patients assigned rivaroxaban compared with warfarin [2.70 vs. 3.15; hazard ratio (HR) 0.86, 95% confidence interval (CI) 0.73–1.00; P = 0.0509]. CV death, MI, or UA rates were higher in those with prior MI compared with no prior MI (6.68 vs. 2.19; HR 3.04, 95% CI 2.59–3.56), with consistent results for rivaroxaban compared with warfarin in those with and without prior MI (P interaction = 0.10).
Prior MI was common and associated with substantial risk for subsequent cardiac events. Patients with prior MI assigned rivaroxaban compared with warfarin had a non-significant 14% reduction of ischaemic cardiac events.
Atrial fibrillation; Myocardial infarction; Coronary artery disease; Outcomes; Factor Xa; Rivaroxaban; Warfarin
The association of weight loss achieved through various decongestive strategies with clinical outcomes in patients with acute decompensated heart failure (HF) is not well described. Our goal was to determine the relationship between weight change during hospitalization and subsequent clinical events in patients with decompensated HF. We evaluated data on 433 patients hospitalized with advanced HF enrolled in the Evaluation Study of Congestive Heart Failure and Pulmonary Artery Catheterization Effectiveness (ESCAPE) trial. The relationship of change in weight during hospitalization to clinical outcomes (days alive out of hospital in the first 6 months; death; death or rehospitalization; and death, rehospitalization, or cardiac transplantation) was evaluated. On average, patients lost approximately 3.6 kg during hospitalization. When patients were categorized into 3 weight loss tertiles, those in the highest tertile were more likely to be older, female, and smokers and had higher body weight, more prior percutaneous coronary interventions, higher baseline heart rate, and higher BNP and blood urea nitrogen values but lower ejection fraction and peak oxygen consumption. No significant associations were observed between weight change and any in-hospital or follow-up events (days well: HR 0.995 [95% CI 0.975–1.016]; 180-day death: HR 1.012 [95% CI 0.969–1.057]; 180-day death/rehospitalization: HR 1.014 [95% CI 0.990–1.038]). In conclusion, weight loss during hospitalization in patients with acute decompensated HF was not related to clinical endpoints. These data challenge the merit of using weight as a surrogate endpoint for more important clinical events (i.e., death and/or rehospitalization) in the design of treatment strategies for novel therapeutic agents in randomized controlled clinical trials of heart failure.
heart failure; weight; outcomes
This study compares the yield and characteristics of diabetes cohorts identified using heterogeneous phenotype definitions.
Materials and methods
Inclusion criteria from seven diabetes phenotype definitions were translated into query algorithms and applied to a population (n=173 503) of adult patients from Duke University Health System. The numbers of patients meeting criteria for each definition and component (diagnosis, diabetes-associated medications, and laboratory results) were compared.
Three phenotype definitions based heavily on ICD-9-CM codes identified 9–11% of the patient population. A broad definition for the Durham Diabetes Coalition included additional criteria and identified 13%. The Electronic Medical Records and Genomics (eMERGE), NYC A1c Registry, and diabetes-associated medications definitions, which have restricted or no ICD-9-CM criteria, identified the smallest proportions of patients (7%). The demographic characteristics for all seven phenotype definitions were similar (56–57% women, mean age range 56–57 years). The NYC A1c Registry definition had higher average patient encounters (54) than the other definitions (range 44–48) and the reference population (20) over the 5-year observation period. The concordance between populations returned by different phenotype definitions ranged from 50 to 86%. Overall, more patients met ICD-9-CM and laboratory criteria than medication criteria, but the number of patients who met abnormal laboratory criteria exclusively was greater than the numbers meeting diagnostic or medication criteria exclusively.
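The pairwise concordance comparison described above can be sketched as a simple set computation. The definition names, patient identifiers, and the overlap formula used here (overlap relative to the smaller cohort) are illustrative assumptions, not the study's actual method:

```python
def concordance(cohort_a, cohort_b):
    """Percentage overlap of two cohorts, relative to the smaller cohort.

    Note: this is one plausible concordance definition; the study does
    not specify its exact formula in the abstract.
    """
    overlap = len(cohort_a & cohort_b)
    return 100.0 * overlap / min(len(cohort_a), len(cohort_b))

# Hypothetical cohorts returned by three phenotype definitions.
cohorts = {
    "icd9_based": {"p1", "p2", "p3", "p4", "p5"},
    "lab_based":  {"p2", "p3", "p4", "p6"},
    "med_based":  {"p3", "p4", "p7"},
}

for a in cohorts:
    for b in cohorts:
        if a < b:  # each unordered pair once
            print(f"{a} vs {b}: {concordance(cohorts[a], cohorts[b]):.0f}%")
```

In practice each cohort would be the set of patient identifiers returned by a definition's query algorithm against the health system's records.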
Differences across phenotype definitions can potentially affect their application in healthcare organizations and the subsequent interpretation of data.
Further research focused on defining the clinical characteristics of standard diabetes cohorts is important to identify appropriate phenotype definitions for health, policy, and research.
Phenotypes; Electronic Health Records; Diabetes; Patient Registries; Secondary Data Use; Clinical Research
Widespread sharing of data from electronic health records and patient-reported outcomes can strengthen the national capacity for conducting cost-effective clinical trials and allow research to be embedded within routine care delivery. While pragmatic clinical trials (PCTs) have been performed for decades, they now can draw on rich sources of clinical and operational data that are continuously fed back to inform research and practice. The Health Care Systems Collaboratory program, initiated by the NIH Common Fund in 2012, engages healthcare systems as partners in discussing and promoting activities, tools, and strategies for supporting active participation in PCTs. The NIH Collaboratory consists of seven demonstration projects and seven problem-specific working group ‘Cores’, aimed at leveraging the data captured in heterogeneous ‘real-world’ environments for research, thereby improving the efficiency, relevance, and generalizability of trials. Here, we introduce the Collaboratory, focusing on its Phenotype, Data Standards, and Data Quality Core, and present early observations from researchers implementing PCTs within large healthcare systems. We also identify gaps in knowledge and present an informatics research agenda that includes identifying methods for the definition and appropriate application of phenotypes in diverse healthcare settings, and methods for validating both the definition and execution of electronic health record-based phenotypes.
Clinical Research; Secondary Data Use; Phenotyping; Data quality
During long-term anticoagulation in atrial fibrillation, temporary interruptions (TIs) of therapy are common, but the relationship between patient outcomes and TIs has not been well studied. We sought to determine reasons for TI, the characteristics of patients undergoing TI, and the relationship between anticoagulant and outcomes among patients with TI.
Methods and Results
In the Rivaroxaban Once Daily, Oral, Direct Factor Xa Inhibition Compared With Vitamin K Antagonism for Prevention of Stroke and Embolism Trial in Atrial Fibrillation (ROCKET AF), a randomized, double-blind, double-dummy study of rivaroxaban and warfarin in nonvalvular atrial fibrillation, baseline characteristics, management, and outcomes, including stroke, non–central nervous system systemic embolism, death, myocardial infarction, and bleeding, were reported in participants who experienced TI (3–30 days) for any reason. The at-risk period for outcomes associated with TI was from TI start to 30 days after resumption of study drug. In 14 236 participants who received at least 1 dose of study drug, 4692 (33%) experienced TI. Participants with TI were similar to the overall ROCKET AF population in regard to baseline clinical characteristics. Only 6% (n=483) of TI incidences involved bridging therapy. Stroke/systemic embolism rates during the at-risk period were similar in rivaroxaban-treated and warfarin-treated participants (0.30% versus 0.41% per 30 days; hazard ratio [confidence interval]=0.74 [0.36–1.50]; P=0.40). Risk of major bleeding during the at-risk period was also similar in rivaroxaban-treated and warfarin-treated participants (0.99% versus 0.79% per 30 days; hazard ratio [confidence interval]=1.26 [0.80–2.00]; P=0.32).
TI of oral anticoagulation is common and is associated with substantial stroke risks and bleeding risks that were similar among patients treated with rivaroxaban or warfarin. Further investigation is needed to determine the optimal management strategy in patients with atrial fibrillation requiring TI of anticoagulation.
Clinical Trial Registration
URL: http://www.clinicaltrials.gov. Unique identifier: NCT00403767.
anticoagulation; atrial fibrillation; stroke
This study sought to report additional safety results from the ROCKET AF (Rivaroxaban Once-daily oral Direct Factor Xa Inhibition Compared with Vitamin K Antagonism for Prevention of Stroke and Embolism Trial in Atrial Fibrillation).
The ROCKET AF trial demonstrated similar risks of stroke/systemic embolism and major/nonmajor clinically relevant bleeding (principal safety endpoint) with rivaroxaban and warfarin.
The risks of the principal safety endpoint and its component bleeding endpoints with rivaroxaban versus warfarin were compared, and factors associated with major bleeding were examined in a multivariable model.
The principal safety endpoint was similar in the rivaroxaban and warfarin groups (14.9 vs. 14.5 events/100 patient-years; hazard ratio: 1.03; 95% confidence interval: 0.96 to 1.11). Major bleeding risk increased with age, but there were no differences between treatments in each age category (<65, 65 to 74, ≥75 years; p for interaction = 0.59). Compared with those without (n = 13,455), patients with a major bleed (n = 781) were more likely to be older, current/prior smokers, have prior gastrointestinal (GI) bleeding, mild anemia, and a lower calculated creatinine clearance and less likely to be female or have a prior stroke/transient ischemic attack. Increasing age, baseline diastolic blood pressure (DBP) ≥90 mm Hg, history of chronic obstructive pulmonary disease or GI bleeding, prior acetylsalicylic acid use, and anemia were independently associated with major bleeding risk; female sex and DBP <90 mm Hg were associated with a decreased risk.
Rivaroxaban and warfarin had similar risk for major/nonmajor clinically relevant bleeding. Age, sex, DBP, prior GI bleeding, prior acetylsalicylic acid use, and anemia were associated with the risk of major bleeding. (An Efficacy and Safety Study of Rivaroxaban With Warfarin for the Prevention of Stroke and Non-Central Nervous System Systemic Embolism in Patients With Non-Valvular Atrial Fibrillation: NCT00403767)
anticoagulants; atrial fibrillation; hemorrhage
The cardiometabolic risk cluster metabolic syndrome (MS) includes ≥3 of the following: elevated fasting glucose, hypertension, elevated triglycerides, reduced high-density lipoprotein cholesterol (HDL-c), and increased waist circumference. Each can be affected by physical activity and diet. Our objective was to determine whether baseline physical activity and/or diet behavior impact MS in the course of a large pharmaceutical trial.
This was an observational study from NAVIGATOR, a double-blind, randomized (nateglinide, valsartan, both, or placebo), controlled trial between 2002 and 2004. We studied data from persons (n=9306) with impaired glucose tolerance and cardiovascular disease (CVD) or CVD risk factors; 7118 with pedometer data were included in this analysis.
Physical activity was assessed with 7-day pedometer records; diet behavior was self-reported on a 6-item survey. An MS score (MSSc) was calculated using the sum of each MS component, centered around the Adult Treatment Panel III threshold, and standardized according to sample standard deviation. Excepting HDL-c, assessed at baseline and year 3, MS components were assessed yearly. Follow-up averaged 6 years.
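The MS score construction described above (each component centered at its ATP III threshold and standardized by the sample standard deviation, then summed) can be sketched as follows. The threshold values below are standard ATP III cut points assumed for illustration, not taken from this report, and the sign flip for HDL-c (because lower values are worse) is likewise an assumption:

```python
import statistics

# Assumed ATP III thresholds for illustration (men's cut points shown;
# ATP III uses 50 mg/dL HDL-c and 88 cm waist for women).
THRESHOLDS = {
    "glucose": 100.0,        # mg/dL
    "sbp": 130.0,            # mm Hg
    "triglycerides": 150.0,  # mg/dL
    "hdl": 40.0,             # mg/dL; lower is worse, so sign-flipped below
    "waist": 102.0,          # cm
}

def ms_scores(sample):
    """sample: list of dicts of component values -> list of MSSc values.

    Each component is centered at its threshold, standardized by the
    sample standard deviation, and summed per participant.
    """
    scores = [0.0] * len(sample)
    for comp, thr in THRESHOLDS.items():
        values = [p[comp] for p in sample]
        sd = statistics.stdev(values)
        for i, v in enumerate(values):
            z = (v - thr) / sd
            scores[i] += -z if comp == "hdl" else z
    return scores
```

A participant sitting exactly at every threshold receives a score of zero; scores above zero indicate a worse overall cardiometabolic profile than the thresholds.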
For every 2000-step increase in average daily steps, there was an associated reduction in average MSSc of 0.29 (95% CI −0.33 to −0.25). For each diet behavior endorsed, there was an associated reduction in average MSSc of 0.05 (95% CI −0.08 to −0.01). Accounting for the effects of pedometer steps and diet behavior together had minimal impact on parameter estimates, with no significant interaction. Relations were independent of age, sex, race, region, smoking, family history of diabetes, and use of nateglinide, valsartan, aspirin, antihypertensive agents, and lipid-lowering agents.
Baseline physical activity and diet behavior were associated independently with reductions in MSSc, such that increased attention to these lifestyle elements provides cardiometabolic benefits. Thus, given the potential to impact outcomes, assessment of physical activity and diet should be performed in pharmacologic trials targeting cardiometabolic risk.
pedometer; clinical trials; diabetes risk; diet surveys; z scores
Independent data monitoring committees (IDMCs) were introduced to monitor patient safety and study conduct in randomized clinical trials (RCTs), but certain challenges regarding the utilization of IDMCs have developed. First, the roles and responsibilities of IDMCs are expanding, perhaps due to increasing trial complexity and heterogeneity regarding medical, ethical, legal, regulatory, and financial issues. Second, no standard for IDMC operating procedures exists, and there is uncertainty about who should determine standards and whether standards should vary with trial size and design. Third, considerable variability in communication pathways exist across IDMC interfaces with regulatory agencies, academic coordinating centers, and sponsors. Finally, there has been a substantial increase in the number of RCTs using IDMCs, yet there is no set of qualifications to help guide the training and development of the next generation of IDMC members. Recently, an expert panel of representatives from government, industry, and academia assembled at the Duke Clinical Research Institute to address these challenges and to develop recommendations for the future utilization of IDMCs in RCTs.
To investigate whether tirofiban would have been non-inferior to abciximab had the trial completed enrollment, and to place the termination of this trial in a broader research ethics context.
TENACITY was terminated by the sponsor for financial reasons. At the time, event rates for the 2 treatment arms were unknown.
TENACITY was designed to compare tirofiban with abciximab in approximately 8000 patients; however, enrollment was terminated after 383 (4.8%) patients. The primary endpoint was a composite of 30-day death, myocardial infarction, and urgent target vessel revascularization. Non-inferiority was defined as the likelihood that tirofiban would preserve at least 50% of the ability of abciximab to reduce the primary endpoint at 30 days, based on abciximab’s demonstrated ability to reduce such events by 43% (relative risk, 0.573; 95% confidence interval [CI], 0.507–0.648; P<0.001). To determine the probability of non-inferiority given the patients already enrolled, a Bayesian approach was used.
The primary composite endpoint occurred in 8.8% of patients randomized to abciximab vs. 6.9% receiving high-bolus-dose tirofiban (odds ratio, 0.77; 95% CI, 0.37–1.64). The estimated conditional power for the test that tirofiban would be non-inferior to abciximab had all patients been enrolled was 93.7%. Using the estimated predictive power method, the likelihood was 84.8%.
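The predictive-power idea referenced above can be illustrated with a small Monte Carlo sketch. This is not the trial's actual Bayesian machinery: it assumes uniform Beta(1,1) priors and a simplified success rule (point-estimate relative risk below a margin, rather than a confidence bound), and all inputs in the usage example are hypothetical:

```python
import random

def predictive_prob_ni(x_c, n_c, x_t, n_t, n_total_per_arm, margin,
                       n_sims=4000, seed=1):
    """Monte Carlo sketch of Bayesian predictive probability that a
    completed trial would show non-inferiority of the test arm.

    x_c, n_c: interim events and patients in the control arm;
    x_t, n_t: same for the test arm; margin: relative-risk margin.
    Simplification: success is a point-estimate relative risk below
    `margin`, not the trial's actual non-inferiority criterion.
    """
    random.seed(seed)
    wins = 0
    for _ in range(n_sims):
        # Draw event rates from the Beta posterior given interim data.
        p_c = random.betavariate(1 + x_c, 1 + n_c - x_c)
        p_t = random.betavariate(1 + x_t, 1 + n_t - x_t)
        # Simulate the remaining enrollment in each arm.
        xc = x_c + sum(random.random() < p_c
                       for _ in range(n_total_per_arm - n_c))
        xt = x_t + sum(random.random() < p_t
                       for _ in range(n_total_per_arm - n_t))
        rr = (xt / n_total_per_arm) / (xc / n_total_per_arm)
        wins += rr < margin
    return wins / n_sims
```

With interim data favoring the test arm, the simulated completions concentrate below the margin and the predictive probability approaches 1.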
TENACITY was well-powered to identify non-inferiority with tirofiban vs. abciximab, and the patients enrolled strengthened the probability that this would have been the outcome had the trial been completed. When a clinical trial is terminated solely for financial reasons, it is incumbent upon the sponsor to provide proper patient follow-up and publication of the findings.
glycoprotein IIb/IIIa inhibitors; clinical trial; tirofiban; abciximab
Despite the rapid growth of electronic health data, most data systems do not connect individual patient records to data sets from outside the health care delivery system. These isolated data systems cannot support efforts to recognize or address how the physical and environmental context of each patient influences health choices and health outcomes. In this article we describe how a geographic health information system in Durham, North Carolina, links health system and social and environmental data via shared geography to provide a multidimensional understanding of individual and community health status and vulnerabilities. Geographic health information systems can be useful in supporting the Institute for Healthcare Improvement’s Triple Aim Initiative to improve the experience of care, improve the health of populations, and reduce per capita costs of health care. A geographic health information system can also provide a comprehensive information base for community health assessment and intervention for accountable care that includes the entire population of a geographic area.
The Patient-Centered Outcomes Research Institute (PCORI) has launched PCORnet, a major initiative to support an effective, sustainable national research infrastructure that will advance the use of electronic health data in comparative effectiveness research (CER) and other types of research. In December 2013, PCORI's board of governors funded 11 clinical data research networks (CDRNs) and 18 patient-powered research networks (PPRNs) for a period of 18 months. CDRNs are based on the electronic health records and other electronic sources of very large populations receiving healthcare within integrated or networked delivery systems. PPRNs are built primarily by communities of motivated patients, forming partnerships with researchers. These patients intend to participate in clinical research, by generating questions, sharing data, volunteering for interventional trials, and interpreting and disseminating results. Rapidly building a new national resource to facilitate a large-scale, patient-centered CER is associated with a number of technical, regulatory, and organizational challenges, which are described here.
comparative effectiveness research; distributed databases; patient-centered outcomes research institute; clinical data research networks; patient-powered research networks
To examine the characteristics associated with early dyspnoea relief during acute heart failure (HF) hospitalization, and its association with 30-day outcomes.
Methods and results
ASCEND-HF was a randomized trial of nesiritide vs. placebo in 7141 patients hospitalized with acute HF in which dyspnoea relief at 6 h was measured on a 7-point Likert scale. Patients were classified as having early dyspnoea relief if they experienced moderate or marked dyspnoea improvement at 6 h. We analysed the clinical characteristics, geographical variation, and outcomes (mortality, mortality/HF hospitalization, and mortality/hospitalization at 30 days) associated with early dyspnoea relief. Early dyspnoea relief occurred in 2984 patients (43%). In multivariable analyses, predictors of dyspnoea relief included older age and oedema on chest radiograph; higher systolic blood pressure, respiratory rate, and natriuretic peptide level; and lower serum blood urea nitrogen (BUN), sodium, and haemoglobin (model mean C index = 0.590). Dyspnoea relief varied markedly across countries, with patients enrolled from Central Europe having the lowest risk-adjusted likelihood of improvement. Early dyspnoea relief was associated with lower risk-adjusted 30-day mortality/HF hospitalization [hazard ratio (HR) 0.81; 95% confidence interval (CI) 0.68–0.96] and mortality/hospitalization (HR 0.85; 95% CI 0.74–0.99), but similar mortality.
Clinical characteristics such as respiratory rate, pulmonary oedema, renal function, and natriuretic peptide levels are associated with early dyspnoea relief, and moderate or marked improvement in dyspnoea was associated with a lower risk for 30-day outcomes.
Acute heart failure; Dyspnoea relief; Prognosis; Outcomes
Time in therapeutic range (TTR) is a standard quality measure of the use of warfarin. We assessed the relative effects of rivaroxaban versus warfarin at the level of trial center TTR (cTTR) since such analysis preserves randomized comparisons.
Methods and Results
TTR was calculated using the Rosendaal method, without exclusion of international normalized ratio (INR) values performed during warfarin initiation. Measurements during warfarin interruptions >7 days were excluded. INRs were performed via standardized finger‐stick point‐of‐care devices at least every 4 weeks. The primary efficacy endpoint (stroke or non‐central nervous system embolism) was examined by quartiles of cTTR and by cTTR as a continuous function. Centers with the highest cTTRs by quartile had lower‐risk patients as reflected by lower CHADS2 scores (P<0.0001) and a lower prevalence of prior stroke or transient ischemic attack (P<0.0001). Sites with higher cTTR were predominantly from North America and Western Europe. The treatment effect of rivaroxaban versus warfarin on the primary endpoint was consistent across a wide range of cTTRs (P value for interaction=0.71). The hazard of major and non‐major clinically relevant bleeding increased with cTTR (P for interaction=0.001); however, the estimated reduction by rivaroxaban compared with warfarin in the hazard of intracranial hemorrhage was preserved across a wide range of threshold cTTR values.
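The Rosendaal method named above interpolates the INR linearly between consecutive measurements and credits each interval with the fraction of time the interpolated INR lies inside the therapeutic range. A minimal sketch, assuming a therapeutic range of 2.0–3.0 (the standard range for atrial fibrillation, not stated in this passage):

```python
def rosendaal_ttr(days, inrs, low=2.0, high=3.0):
    """Time in therapeutic range by Rosendaal linear interpolation.

    days: measurement days in ascending order; inrs: INR values on
    those days. Between consecutive measurements the INR is assumed
    to change linearly; the fraction of each interval spent inside
    [low, high] is accumulated. Returns TTR as a percentage.
    """
    in_range = total = 0.0
    for (d0, i0), (d1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        span = d1 - d0
        total += span
        if i0 == i1:
            in_range += span if low <= i0 <= high else 0.0
        else:
            lo_i, hi_i = min(i0, i1), max(i0, i1)
            # Fraction of the linear segment lying within [low, high].
            overlap = max(0.0, min(hi_i, high) - max(lo_i, low))
            in_range += span * overlap / (hi_i - lo_i)
    return 100.0 * in_range / total
```

For example, a patient moving linearly from INR 1.0 to INR 3.0 over an interval spends half of that interval in range, so the sketch returns 50% for that segment.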
The treatment effect of rivaroxaban compared with warfarin for the prevention of stroke and systemic embolism is consistent regardless of cTTR.
rivaroxaban; time in therapeutic range; warfarin
Conflicting relationships have been described between anemia correction using erythropoiesis-stimulating agents (ESAs) and progression of chronic kidney disease (CKD). This study was undertaken to examine the impact of target hemoglobin on progression of kidney disease in the CHOIR (Correction of Hemoglobin and Outcomes in Renal Insufficiency) trial.
Secondary analysis of a randomized controlled trial
Setting and participants
1432 participants with CKD and anemia
Participants were randomized to target hemoglobin of 13.5 vs 11.3 g/dL with the use of epoetin alfa.
Outcomes and measurements
Cox regression was used to estimate hazard ratios for progression of CKD (a composite of doubling of creatinine, initiation of renal replacement therapy (RRT), or death). Interactions between hemoglobin target and select baseline variables (estimated glomerular filtration rate (eGFR), proteinuria, diabetes, heart failure, and smoking history) were also examined.
Participants randomized to higher hemoglobin targets experienced a shorter time to progression of kidney disease in both univariate (HR, 1.25; 95% CI, 1.03–1.52; p=0.02) and multivariable models (HR, 1.22; 95% CI, 1.00–1.48; p=0.05). These differences were attributable to higher rates of RRT and death among participants in the high hemoglobin arm. Hemoglobin target did not interact with eGFR, proteinuria, diabetes, or heart failure (p>0.05 for all). In the multivariable model, hemoglobin target interacted with tobacco use (p=0.04) such that the higher target carried a greater risk of CKD progression among participants who currently smoked (HR, 2.50; 95% CI, 1.23–5.09; p=0.01), a risk that was not present among those who did not currently smoke (HR, 1.15; 95% CI 0.93–1.41; p=0.2).
This was a post hoc analysis; thus, cause and effect cannot be determined.
These results suggest that high hemoglobin target is associated with a greater risk of progression of CKD. This risk may be augmented by concurrent smoking. Further defining the mechanism of injury may provide insight into methods to optimize outcomes in anemia management.
Limited data exist concerning outcomes of patients with non-ST-segment elevation acute coronary syndromes (NSTE ACS) with no angiographically obstructive coronary artery disease (non-obstructive CAD). We assessed the frequency of clinical outcomes among patients with non-obstructive CAD compared with obstructive CAD.
Methods and results:
We pooled data from eight NSTE ACS randomized clinical trials from 1994 to 2008, including 37,101 patients who underwent coronary angiography. The primary outcome was 30-day death or myocardial infarction (MI). Adjusted odds ratios (ORs) and 95% confidence intervals (CIs) for 30-day death or MI for non-obstructive versus obstructive CAD were generated for each trial. Summary ORs (95% CIs) across trials were generated using random effects models. Overall, 3550 patients (9.6%) had non-obstructive CAD. They were younger, more were female, and fewer had diabetes mellitus, previous MI or prior percutaneous coronary intervention than patients with obstructive CAD. Thirty-day death or MI was less frequent among patients with non-obstructive CAD (2.2%) versus obstructive CAD (13.3%) (ORadj 0.15; 95% CI, 0.11–0.20); 30-day death or spontaneous MI and six-month mortality were also less frequent among patients with non-obstructive CAD (ORadj 0.19 (0.14–0.25) and 0.37 (0.28–0.49), respectively).
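Pooling per-trial odds ratios with a random-effects model, as described above, is commonly done with the DerSimonian-Laird estimator. A minimal sketch with hypothetical inputs (the study's exact model specification may differ):

```python
import math

def dersimonian_laird(log_ors, ses):
    """Pool per-trial log odds ratios with a DerSimonian-Laird
    random-effects model. Returns (pooled_or, ci_low, ci_high)."""
    w = [1.0 / se**2 for se in ses]                  # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))
    df = len(log_ors) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-trial variance
    w_re = [1.0 / (se**2 + tau2) for se in ses]      # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))
```

Each trial contributes its adjusted log odds ratio and standard error; when heterogeneity (q) is no larger than expected by chance, the between-trial variance is zero and the result collapses to the fixed-effect estimate.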
Among patients with NSTE ACS, one in 10 had non-obstructive CAD. Death or MI occurred in 2.2% of these patients by 30 days. Compared with patients with obstructive CAD, the rate of major cardiac events was lower in patients with non-obstructive CAD but was not negligible, prompting the need to better understand management strategies for this group.
Acute coronary syndromes; angiography; atherosclerosis; coronary disease; infarction
Recent studies suggest that the use of antidepressants may be associated with increased mortality in patients with cardiac disease. Because depression has also been shown to be associated with increased mortality in these patients, it remains unclear whether this association is attributable to the use of antidepressants or to depression.
To evaluate the association of long-term mortality with antidepressant use and depression, we studied 1006 patients aged 18 years or older with clinical heart failure and an ejection fraction of 35% or less (62% with ischemic disease) between March 1997 and June 2003. The patients were followed up for vital status annually thereafter. Depression status, which was assessed by the Beck Depression Inventory (BDI) scale and use of antidepressants, was prospectively collected. The main outcome of interest was long-term mortality.
Of the study patients, 30.0% were depressed (defined by a BDI score ≥10) and 24.2% were taking antidepressants (79.6% of these patients were taking selective serotonin reuptake inhibitors [SSRIs] only). Vital status was obtained for all participants at a mean (SD) follow-up of 972 (731) days. During this period, 42.7% of the participants died. Overall, the use of antidepressants (unadjusted hazard ratio [HR], 1.32; 95% confidence interval [CI], 1.03–1.69) or SSRIs only (unadjusted HR, 1.32; 95% CI, 0.99–1.74) was associated with increased mortality. However, the association between antidepressant use (HR, 1.24; 95% CI, 0.94–1.64) and increased mortality no longer existed after depression and other confounders were controlled for. Nonetheless, depression remained associated with increased mortality (HR, 1.33; 95% CI, 1.07–1.66). Similarly, depression (HR, 1.34; 95% CI, 1.08–1.68) rather than SSRI use (HR, 1.10; 95% CI, 0.81–1.50) was independently associated with increased mortality after adjustment.
Our findings suggest that depression (defined by a BDI score ≥10), but not antidepressant use, is associated with increased mortality in patients with heart failure.
Identifying high-risk heart failure (HF) patients at hospital discharge may allow more effective triage to management strategies.
HF severity at presentation predicts outcomes, but the prognostic importance of clinical status changes due to interventions is less well described.
Predictive models using variables obtained during hospitalization were created using data from ESCAPE and internally validated by the bootstrapping method. Model coefficients were converted to an additive risk score. Additionally, data from FIRST (Flolan International Randomized Survival Trial) were used to externally validate this model.
Patients discharged with complete data (n=423) had 6-month mortality and death or rehospitalization rates of 18.7% and 64%, respectively. Discharge risk factors for mortality included BNP, per doubling (hazard ratio [HR]: 1.42, 95% confidence interval [CI]: 1.15–1.75), cardiopulmonary resuscitation or mechanical ventilation during hospitalization (HR: 2.54, 95% CI: 1.12–5.78), blood urea nitrogen, per 20-U increase (HR: 1.22, 95% CI: 0.96–1.55), serum sodium, per unit increase (HR: 0.93, 95% CI: 0.87–0.99), age >70 years (HR: 1.05, 95% CI: 0.51–2.17), daily loop diuretic, furosemide equivalents >240 mg (HR: 1.49, 95% CI: 0.68–3.26), lack of beta-blocker (HR: 1.28, 95% CI: 0.68–2.41), and 6-minute walk, per 100-foot increase (HR: 0.955, 95% CI: 0.99–1.00); c index 0.76. A simplified discharge score discriminated mortality risk from 5% (score=0) to 94% (score=8). Bootstrap validation demonstrated good internal validation of the model (c index 0.78, 95% CI: 0.68–0.83).
The ESCAPE discharge risk model and score refine risk assessment after in-hospital therapy for advanced decompensated systolic HF, allowing clinicians to focus surveillance and triage for early life-saving interventions in this high-risk population.
heart failure; risk stratification; discharge risk model
Purpose: Poor adherence to prescribed medicines is associated with increased rates of poor outcomes, including hospitalization, serious adverse events, and death, and is also associated with increased healthcare costs. However, current approaches to evaluation of medication adherence using real-world electronic health records (EHRs) or claims data may miss critical opportunities for data capture and fall short in modeling and representing the full complexity of the healthcare environment. We sought to explore a framework for understanding and improving data capture for medication adherence in a population-based intervention in four U.S. counties.
Approach: We posited that application of a data model and a process matrix when designing data collection for medication adherence would improve identification of variables and data accessibility, and could support future research on medication-taking behaviors. We then constructed a use case in which data related to medication adherence would be leveraged to support improved healthcare quality, clinical outcomes, and efficiency of healthcare delivery in a population-based intervention for persons with diabetes. Because EHRs in use at participating sites were deemed incapable of supplying the needed data, we applied a taxonomic approach to identify and define variables of interest. We then applied a process matrix methodology, in which we identified key research goals and chose optimal data domains and their respective data elements, to instantiate the resulting data model.
Conclusions: Combining a taxonomic approach with a process matrix methodology may afford significant benefits when designing data collection for clinical and population-based research in the arena of medication adherence. Such an approach can effectively depict complex real-world concepts and domains by “mapping” the relationships between disparate contributors to medication adherence and describing their relative contributions to the shared goals of improved healthcare quality, outcomes, and cost.
medication adherence; data model; process matrix; taxonomy; health behavior; self-management; secondary use; cardiometabolic
Cardiovascular medicine is widely regarded as a vanguard for evidence‐based drug and technology development. Our goal was to describe the cardiovascular clinical research portfolio from ClinicalTrials.gov.
Methods and Results
We identified 40 970 clinical research studies registered between 2007 and 2010 in which patients received diagnostic, therapeutic, or other interventions per protocol. By annotating 18 491 descriptors from the National Library of Medicine's Medical Subject Heading thesaurus and 1220 free‐text terms to select those relevant to cardiovascular disease, we identified studies that related to the diagnosis, treatment, or prevention of diseases of the heart and peripheral arteries in adults (n=2325 [66%] included from review of 3503 potential studies). The study intervention involved a drug in 44.6%, a device or procedure in 39.3%, behavioral intervention in 8.1%, and biological or genetic interventions in 3.0% of the trials. More than half of the trials were postmarket approval (phase 4, 25.6%) or not part of drug development (no phase, 34.5%). Nearly half of all studies (46.3%) anticipated enrolling 100 patients or fewer. The majority of studies assessed biomarkers or surrogate outcomes, with just 31.8% reporting a clinical event as a primary outcome.
Cardiovascular studies registered on ClinicalTrials.gov span a range of study designs. Data have limited verification or standardization and require manual processes to describe and categorize studies. The preponderance of small and late‐phase studies raises questions regarding the strength of evidence likely to be generated by the current portfolio and the potential efficiency to be gained by more research consolidation.
cardiovascular diseases; cardiovascular medicine; clinical research; clinical trials
To describe the development of an academic-health services partnership undertaken to improve use of evidence in clinical practice.
Academic health science schools and health service settings share common elements of their missions (education, research, and excellence in healthcare delivery), but differences in business models, incentives, and approaches to problem-solving can lead to differences in priorities. Thus, academic and health service settings do not naturally align their leadership structures or work processes. We established a common commitment to accelerate the appropriate use of evidence in clinical practice and created an organizational structure to optimize opportunities for partnering that would leverage shared resources to achieve our goal.
A jointly governed and funded institute integrated existing activities from the academic and service sectors. Additional resources included clinical staff and student training and mentoring, a pilot research grant-funding program, and support to access existing data. Emergent developments include an appreciation for a wider range of investigative methodologies and cross-disciplinary teams with skills to integrate research in daily practice and improve patient outcomes.
By developing an integrated leadership structure and commitment to shared goals, we developed a framework for integrating academic and health service resources, leveraging additional resources, and forming a mutually beneficial partnership to improve clinical outcomes for patients.
academic-service partnership; academic medical center; evidence-based practice (EBP); nursing research; healthcare delivery