To examine the characteristics associated with early dyspnoea relief during acute heart failure (HF) hospitalization, and its association with 30-day outcomes.
Methods and results
ASCEND-HF was a randomized trial of nesiritide vs. placebo in 7141 patients hospitalized with acute HF in which dyspnoea relief at 6 h was measured on a 7-point Likert scale. Patients were classified as having early dyspnoea relief if they experienced moderate or marked dyspnoea improvement at 6 h. We analysed the clinical characteristics, geographical variation, and outcomes (mortality, mortality/HF hospitalization, and mortality/hospitalization at 30 days) associated with early dyspnoea relief. Early dyspnoea relief occurred in 2984 patients (43%). In multivariable analyses, predictors of dyspnoea relief included older age and oedema on chest radiograph; higher systolic blood pressure, respiratory rate, and natriuretic peptide level; and lower serum blood urea nitrogen (BUN), sodium, and haemoglobin (model mean C index = 0.590). Dyspnoea relief varied markedly across countries, with patients enrolled from Central Europe having the lowest risk-adjusted likelihood of improvement. Early dyspnoea relief was associated with lower risk-adjusted 30-day mortality/HF hospitalization [hazard ratio (HR) 0.81; 95% confidence interval (CI) 0.68–0.96] and mortality/hospitalization (HR 0.85; 95% CI 0.74–0.99), but similar mortality.
Clinical characteristics such as respiratory rate, pulmonary oedema, renal function, and natriuretic peptide levels are associated with early dyspnoea relief, and moderate or marked improvement in dyspnoea is associated with a lower risk of adverse 30-day outcomes.
Acute heart failure; Dyspnoea relief; Prognosis; Outcomes
Recent studies suggest that the use of antidepressants may be associated with increased mortality in patients with cardiac disease. Because depression has also been shown to be associated with increased mortality in these patients, it remains unclear if this association is attributable to the use of antidepressants or to depression.
To evaluate the association of long-term mortality with antidepressant use and depression, we studied 1006 patients aged 18 years or older with clinical heart failure and an ejection fraction of 35% or less (62% with ischemic disease) between March 1997 and June 2003. The patients were followed up for vital status annually thereafter. Depression status, which was assessed by the Beck Depression Inventory (BDI) scale and use of antidepressants, was prospectively collected. The main outcome of interest was long-term mortality.
Of the study patients, 30.0% were depressed (defined by a BDI score ≥10) and 24.2% were taking antidepressants (79.6% of these patients were taking selective serotonin reuptake inhibitors [SSRIs] only). Vital status was obtained for all participants at a mean (SD) follow-up of 972 (731) days. During this period, 42.7% of the participants died. Overall, the use of antidepressants (unadjusted hazard ratio [HR], 1.32; 95% confidence interval [CI], 1.03–1.69) or SSRIs only (unadjusted HR, 1.32; 95% CI, 0.99–1.74) was associated with increased mortality. However, the association between antidepressant use (HR, 1.24; 95% CI, 0.94–1.64) and increased mortality no longer existed after depression and other confounders were controlled for. Nonetheless, depression remained associated with increased mortality (HR, 1.33; 95% CI, 1.07–1.66). Similarly, depression (HR, 1.34; 95% CI, 1.08–1.68) rather than SSRI use (HR, 1.10; 95% CI, 0.81–1.50) was independently associated with increased mortality after adjustment.
Our findings suggest that depression (defined by a BDI score ≥10), but not antidepressant use, is associated with increased mortality in patients with heart failure.
Conflicting relationships have been described between anemia correction using erythropoiesis-stimulating agents (ESAs) and progression of chronic kidney disease (CKD). This study was undertaken to examine the impact of target hemoglobin on progression of kidney disease in the CHOIR (Correction of Hemoglobin and Outcomes in Renal Insufficiency) trial.
Secondary analysis of a randomized controlled trial
Setting and participants
1432 participants with CKD and anemia
Participants were randomized to a target hemoglobin of 13.5 vs 11.3 g/dL with the use of epoetin alfa.
Outcomes and measurements
Cox regression was used to estimate hazard ratios for progression of CKD (a composite of doubling of creatinine, initiation of renal replacement therapy (RRT), or death). Interactions between hemoglobin target and select baseline variables (estimated glomerular filtration rate (eGFR), proteinuria, diabetes, heart failure, and smoking history) were also examined.
Participants randomized to higher hemoglobin targets experienced a shorter time to progression of kidney disease in both univariate (HR, 1.25; 95% CI, 1.03–1.52; p=0.02) and multivariable models (HR, 1.22; 95% CI, 1.00–1.48; p=0.05). These differences were attributable to higher rates of RRT and death among participants in the high-hemoglobin arm. Hemoglobin target did not interact with eGFR, proteinuria, diabetes, or heart failure (p>0.05 for all). In the multivariable model, hemoglobin target interacted with tobacco use (p=0.04) such that the higher target conferred a greater risk of CKD progression among participants who currently smoked (HR, 2.50; 95% CI, 1.23–5.09; p=0.01), an association not present among those who did not currently smoke (HR, 1.15; 95% CI, 0.93–1.41; p=0.2).
This was a post-hoc analysis; thus, cause and effect cannot be determined.
These results suggest that high hemoglobin target is associated with a greater risk of progression of CKD. This risk may be augmented by concurrent smoking. Further defining the mechanism of injury may provide insight into methods to optimize outcomes in anemia management.
Identifying high-risk heart failure (HF) patients at hospital discharge may allow more effective triage to management strategies.
HF severity at presentation predicts outcomes, but the prognostic importance of clinical status changes due to interventions is less well described.
Predictive models using variables obtained during hospitalization were created using data from ESCAPE and internally validated by the bootstrapping method. Model coefficients were converted to an additive risk score. Additionally, data from FIRST (Flolan International Randomized Survival Trial) were used to externally validate this model.
Patients discharged with complete data (n=423) had 6-month mortality and death-or-rehospitalization rates of 18.7% and 64%, respectively. Discharge risk factors for mortality included BNP, per doubling (hazard ratio [HR]: 1.42, 95% confidence interval [CI]: 1.15–1.75); cardiopulmonary resuscitation or mechanical ventilation during hospitalization (HR: 2.54, 95% CI: 1.12–5.78); blood urea nitrogen, per 20-U increase (HR: 1.22, 95% CI: 0.96–1.55); serum sodium, per unit increase (HR: 0.93, 95% CI: 0.87–0.99); age >70 years (HR: 1.05, 95% CI: 0.51–2.17); daily loop diuretic dose, furosemide equivalents >240 mg (HR: 1.49, 95% CI: 0.68–3.26); lack of beta-blocker (HR: 1.28, 95% CI: 0.68–2.41); and 6-minute walk distance, per 100-foot increase (HR: 0.955, 95% CI: 0.99–1.00); model c-index 0.76. A simplified discharge score discriminated mortality risk from 5% (score=0) to 94% (score=8). Bootstrap validation demonstrated good internal validation of the model (c-index 0.78, 95% CI: 0.68–0.83).
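The conversion of model coefficients to an additive score described above can be sketched as follows; the coefficients, factor names, and rounding scheme are illustrative assumptions, not the published ESCAPE values:

```python
# Sketch: converting Cox model coefficients to an additive integer risk
# score, the general approach behind discharge scores like ESCAPE's.
# All betas and factor names below are hypothetical, NOT published values.

# Hypothetical log-hazard coefficients (beta) for risk factors when present
betas = {
    "bnp_doubling": 0.35,        # roughly ln(1.42)
    "cpr_or_ventilation": 0.93,  # roughly ln(2.54)
    "bun_per_20": 0.20,
    "no_beta_blocker": 0.25,
}

def risk_points(betas, reference=None):
    """Scale each coefficient by the smallest one and round to integers."""
    ref = reference or min(abs(b) for b in betas.values())
    return {name: round(b / ref) for name, b in betas.items()}

def total_score(points, factors_present):
    """Sum the points for the factors present in a given patient."""
    return sum(points[f] for f in factors_present)

points = risk_points(betas)
# Hypothetical patient with two risk factors present
score = total_score(points, ["cpr_or_ventilation", "no_beta_blocker"])
```

The scaling step is what makes the score additive and bedside-friendly: each factor contributes a small integer, and the total maps monotonically to predicted risk.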
The ESCAPE discharge risk model and score refine risk assessment after inhospital therapy for advanced decompensated systolic HF, allowing clinicians to focus surveillance and triage for early life-saving interventions in this high-risk population.
heart failure; risk stratification; discharge risk model
Purpose: Poor adherence to prescribed medicines is associated with increased rates of poor outcomes, including hospitalization, serious adverse events, and death, and is also associated with increased healthcare costs. However, current approaches to evaluation of medication adherence using real-world electronic health records (EHRs) or claims data may miss critical opportunities for data capture and fall short in modeling and representing the full complexity of the healthcare environment. We sought to explore a framework for understanding and improving data capture for medication adherence in a population-based intervention in four U.S. counties.
Approach: We posited that application of a data model and a process matrix when designing data collection for medication adherence would improve identification of variables and data accessibility, and could support future research on medication-taking behaviors. We then constructed a use case in which data related to medication adherence would be leveraged to support improved healthcare quality, clinical outcomes, and efficiency of healthcare delivery in a population-based intervention for persons with diabetes. Because EHRs in use at participating sites were deemed incapable of supplying the needed data, we applied a taxonomic approach to identify and define variables of interest. We then applied a process matrix methodology, in which we identified key research goals and chose optimal data domains and their respective data elements, to instantiate the resulting data model.
Conclusions: Combining a taxonomic approach with a process matrix methodology may afford significant benefits when designing data collection for clinical and population-based research in the arena of medication adherence. Such an approach can effectively depict complex real-world concepts and domains by “mapping” the relationships between disparate contributors to medication adherence and describing their relative contributions to the shared goals of improved healthcare quality, outcomes, and cost.
medication adherence; data model; process matrix; taxonomy; health behavior; self-management; secondary use; cardiometabolic
Cardiovascular medicine is widely regarded as a vanguard for evidence‐based drug and technology development. Our goal was to describe the cardiovascular clinical research portfolio from ClinicalTrials.gov.
Methods and Results
We identified 40 970 clinical research studies registered between 2007 and 2010 in which patients received diagnostic, therapeutic, or other interventions per protocol. By annotating 18 491 descriptors from the National Library of Medicine's Medical Subject Heading thesaurus and 1220 free‐text terms to select those relevant to cardiovascular disease, we identified studies that related to the diagnosis, treatment, or prevention of diseases of the heart and peripheral arteries in adults (n=2325 [66%] included from review of 3503 potential studies). The study intervention involved a drug in 44.6%, a device or procedure in 39.3%, behavioral intervention in 8.1%, and biological or genetic interventions in 3.0% of the trials. More than half of the trials were postmarket approval (phase 4, 25.6%) or not part of drug development (no phase, 34.5%). Nearly half of all studies (46.3%) anticipated enrolling 100 patients or fewer. The majority of studies assessed biomarkers or surrogate outcomes, with just 31.8% reporting a clinical event as a primary outcome.
Cardiovascular studies registered on ClinicalTrials.gov span a range of study designs. Data have limited verification or standardization and require manual processes to describe and categorize studies. The preponderance of small and late‐phase studies raises questions regarding the strength of evidence likely to be generated by the current portfolio and the potential efficiency to be gained by more research consolidation.
cardiovascular diseases; cardiovascular medicine; clinical research; clinical trials
To describe the development of an academic-health services partnership undertaken to improve use of evidence in clinical practice.
Academic health science schools and health service settings share common elements of their missions: to educate, to participate in research, and to excel in healthcare delivery. However, differences in business models, incentives, and approaches to problem-solving can lead to differences in priorities. Thus, academic and health service settings do not naturally align their leadership structures or work processes. We established a common commitment to accelerate the appropriate use of evidence in clinical practice and created an organizational structure to optimize opportunities for partnering that would leverage shared resources to achieve our goal.
A jointly governed and funded institute integrated existing activities from the academic and service sectors. Additional resources included clinical staff and student training and mentoring, a pilot research grant-funding program, and support to access existing data. Emergent developments include an appreciation for a wider range of investigative methodologies and cross-disciplinary teams with skills to integrate research in daily practice and improve patient outcomes.
By developing an integrated leadership structure and commitment to shared goals, we developed a framework for integrating academic and health service resources, leveraging additional resources, and forming a mutually beneficial partnership to improve clinical outcomes for patients.
academic-service partnership; academic medical center; evidence-based practice (EBP); nursing research; healthcare delivery
Targeting a higher hemoglobin in patients with chronic kidney disease leads to adverse cardiovascular outcomes, yet the reasons remain unclear. Herein, we sought to determine whether changes in erythropoiesis-stimulating agent (ESA) dose and in hemoglobin were predictive of changes in blood pressure (BP) and whether these changes were associated with cardiovascular outcomes.
In this secondary analysis of 1421 Correction of Hemoglobin and Outcomes in Renal Disease (CHOIR) participants, mixed model analyses were used to describe monthly changes in ESA dose and hemoglobin with changes in diastolic BP (DBP) and systolic BP (SBP). Poisson modeling was performed to determine whether changes in hemoglobin and BP were associated with the composite end point of death or cardiovascular outcomes.
Monthly average DBP, but not SBP, was higher in participants in the higher hemoglobin arm. Increases in ESA doses and in hemoglobin were significantly associated with linear increases in DBP, but not consistently with increases in SBP. In models adjusted for demographics and comorbid conditions, increases in ESA dose (>0 U) and larger increases in hemoglobin (>1.0 g/dL/month) were associated with poorer outcomes [event rate ratio per 1000-U increase in weekly dose per month 1.05 (1.02–1.08), P = 0.002, and event rate ratio 1.70 (1.02–2.85), P = 0.05, respectively]. However, increasing DBP was not associated with adverse outcomes [event rate ratio 1.01 (0.98–1.03), P = 0.7].
Among CHOIR participants, higher hemoglobin targets, increases in ESA dose and in hemoglobin were associated both with increases in DBP and with higher event rates; however, increasing DBP was not associated with adverse outcomes.
anemia; blood pressure; cardiovascular events; chronic kidney disease; erythropoietin dose
Edifoligide, an E2F transcription factor decoy, did not prevent vein graft failure or adverse clinical outcomes at 1 year in patients undergoing coronary artery bypass grafting (CABG). We compared the 5-year clinical outcomes of patients in PREVENT IV treated with edifoligide and placebo and sought to identify predictors of long-term clinical outcomes.
A total of 3014 patients undergoing CABG with at least 2 planned vein grafts were enrolled. Kaplan-Meier curves were generated to compare the long-term effects of edifoligide and placebo. A Cox proportional hazards model was constructed to identify factors associated with 5-year post-CABG outcomes. The main outcome measure was death, myocardial infarction (MI), repeat revascularization, and rehospitalization through 5 years.
Five-year follow-up was complete in 2865 (95.1%) patients. At 5 years, patients randomized to edifoligide and placebo had similar rates of death (11.7% and 10.7%), MI (2.3% and 3.2%), revascularization (14.1% and 13.9%), and rehospitalization (61.6% and 62.5%). The 5-year composite outcome of death, MI, or revascularization occurred at similar frequency in patients assigned to edifoligide and placebo (26.3% and 25.5%; hazard ratio 1.03 [95% confidence interval 0.89–1.18]; P=0.721). Factors associated with death, MI, or revascularization at 5 years included diabetes, sex, worst graft quality, peri-index CABG MI, and ejection fraction.
Up to a quarter of patients undergoing CABG will have a major cardiac event or repeat revascularization procedure within 5 years of surgery. Edifoligide does not affect outcomes following CABG; however, common identifiable baseline and procedural risk factors are associated with long-term outcomes following CABG.
vein graft failure; coronary artery bypass graft surgery; transcription factor decoy; outcomes
To examine the relationship of depression and survival of patients with chronic heart failure (HF) over a 12-year follow-up period.
An association between depression and reduced survival has been demonstrated in HF patients over follow-up periods of up to 7 years. The longer-term impact of depression on the survival of these patients remains unknown.
Prospectively conducted observational study examining adult HF patients who were admitted to a cardiology service at Duke University Medical Center between March 1997 and June 2003 and completed the Beck Depression Inventory (BDI) scale. The National Death Index was queried for vital status. Cox proportional hazards modeling was used to determine association of survival and depression.
During a mean follow-up of 1792.33±1372.82 days (median 1600; range 0–4683), 733 of 985 HF participants died of all causes, representing 80% of those with depression (BDI>10) and 73% of those without (p=0.01). Depression was significantly and persistently associated with decreased survival over follow-up (hazard ratio [HR] 1.35, 95% confidence interval [CI] 1.15–1.57) and was independent of conventional risk factors (HR 1.40, 95% CI 1.16–1.68). Furthermore, survival was inversely associated with depression severity (BDI as a continuous variable: HR 1.02, 95% CI 1.006–1.025, p=0.001).
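A per-point hazard ratio for a continuous predictor such as BDI compounds multiplicatively across points, because the Cox model is linear on the log-hazard scale. A minimal sketch (the 10-point span is an arbitrary illustration, not an analysis from the study):

```python
# Sketch: a hazard ratio reported "per unit" of a continuous predictor
# scales as HR**k over a span of k units, since Cox models are linear
# in log-hazard. The per-point HR of 1.02 is from the abstract; the
# 10-point comparison is a hypothetical illustration.

def hr_over_span(hr_per_unit, units):
    """Hazard ratio across `units` of a continuous predictor."""
    return hr_per_unit ** units

# Hazard for a patient scoring 10 BDI points higher than another
hr_10_points = hr_over_span(1.02, 10)
```

This is why even a small per-point HR can translate into a clinically meaningful difference between mildly and severely depressed patients.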
The association of co-morbid depression during index hospitalization with increased mortality in HF patients is strong and persists over 12 years. These findings suggest that more investigation is needed to understand the trajectory of depression and the mechanisms underlying its impact, as well as to identify effective management strategies for depression in patients with HF.
depression; cardiovascular disease; chronic disease; epidemiology
Some prior studies have suggested that the time to cardiac surgery after cardiac catheterization is inversely related to post-operative acute kidney injury (AKI). However, because of small patient numbers, these studies were unable to adequately account for patient case-mix, and they included patients undergoing either elective or urgent surgery.
Methods and Results
We examined data on 2441 consecutive patients undergoing elective coronary artery bypass surgery (CABG) after cardiac catheterization. The association between post-CABG AKI (defined as an increase in post-CABG serum creatinine ≥ 50% above baseline and/or the need for new dialysis) and the time between cardiac catheterization and CABG was evaluated using multivariable logistic regression modeling. AKI occurred in 17.1% of CABG patients. The risk of AKI was highest in patients in whom CABG was performed within 1 day of cardiac catheterization (adjusted mean rates [95% CI] 24.0% [18.0%, 30.9%], 18.4% [14.8%, 22.5%], 17.3% [13.3%, 21.9%], 16.4% [12.6%, 20.8%], and 15.8% [13.7%, 18.0%] for days ≤ 1, 2, 3, 4, and ≥ 5, respectively; p = 0.019 for test of trend). Post-CABG AKI was associated with increased risk of long-term death (HR 1.268, 95% CI 1.093–1.471).
The risk of post-CABG AKI was inversely and modestly related to the time between cardiac catheterization and CABG, with the highest incidence in those operated on within 1 day of cardiac catheterization despite their lower risk profile. Whether delaying elective CABG more than 24 hours after exposure to contrast agents (when feasible) has the potential to decrease post-CABG AKI remains to be evaluated in future studies.
Coronary artery bypass surgery; acute kidney injury; risk; outcomes
Vein graft failure (VGF) is common after coronary artery bypass graft surgery, but its relationship with long-term clinical outcomes is unknown. In this retrospective analysis, we examined the relationship between VGF, assessed by coronary angiography 12 to 18 months after coronary artery bypass graft surgery, and subsequent clinical outcomes.
Methods and Results
Using the Project of Ex Vivo Vein Graft Engineering via Transfection IV (PREVENT IV) trial database, we studied data from 1829 patients who underwent coronary artery bypass graft surgery and had an angiogram performed up to 18 months after surgery. The main outcome measure was death, myocardial infarction, and repeat revascularization through 4 years after angiography. VGF occurred in 787 of 1829 patients (43%). Clinical follow-up was completed in 97% of patients with angiographic follow-up. The composite of death, myocardial infarction, or revascularization occurred more frequently among patients who had any VGF compared with those who had none (adjusted hazard ratio, 1.58; 95% confidence interval, 1.21–2.06; P=0.008). This was due mainly to more frequent revascularization with no differences in death (adjusted hazard ratio, 1.04; 95% confidence interval, 0.71–1.52; P=0.85) or death or myocardial infarction (adjusted hazard ratio, 1.08; 95% confidence interval, 0.77–1.53; P=0.65).
VGF is common after coronary artery bypass graft surgery and is associated with repeat revascularization but not with death and/or myocardial infarction. Further investigations are needed to evaluate therapies and strategies for decreasing VGF to improve outcomes in patients undergoing coronary artery bypass graft surgery.
angiography; coronary artery bypass; graft survival; outcome assessment; veins
Chronic kidney disease is assuming epidemic proportions, and an increasing number of clinical trials are testing treatments developed to improve morbidity and mortality. Surprisingly, however, a large proportion of these trials have had negative or neutral results. When trials unexpectedly demonstrate either no benefit or a detrimental impact of a treatment, especially when that treatment is already used in practice, critics commonly argue that the results were dictated by flawed trial design rather than the intrinsic properties of the treatment. In kidney disease therapeutics, trials commonly rely on observational data and test the hypothesis that these associations may be extrapolated to cause-and-effect. Other key issues in trial design that may affect outcomes include the impact of enrolling relatively healthier subjects, the complexity of recruiting participants with specific characteristics while maintaining generalizability, and the subtleties of event adjudication and quality of life assessments. In this article, general principles of trial design will be discussed and the potential lessons learned from recent trials in nephrology will be critically reviewed.
nephrology; clinical trial; CHOIR; CREATE; HEMO; MDRD
Data within a continuing use context (also known as secondary use) can require translation into the variables necessary for project analysis. We have developed and applied a framework in which:
Project objectives inform the curation of data elements.
Data elements are rendered into system-readable metadata.
Metadata are applied to the source data and used to produce data sets.
This process distinguishes between data sets and source data. Data sets contain project-specific variables that are structured for analytic activities. This can differ from source data, which may be stored in a structure dictated by the original source system for data collection, or in a data structure contrary to what is desired for analysis. Data elements mediate this translation, and the process of curation refines their definitions and associated attributes. This framework improves analysis workflow through the application of best practices, consistent processes, and centralized decision-making.
Limited data exist concerning outcomes of patients with non-ST-segment elevation acute coronary syndromes (NSTE ACS) with no angiographically obstructive coronary artery disease (non-obstructive CAD). We assessed the frequency of clinical outcomes among patients with non-obstructive CAD compared with obstructive CAD.
Methods and results
We pooled data from eight NSTE ACS randomized clinical trials from 1994 to 2008, including 37,101 patients who underwent coronary angiography. The primary outcome was 30-day death or myocardial infarction (MI). Adjusted odds ratios (ORs) and 95% confidence intervals (CIs) for 30-day death or MI for non-obstructive versus obstructive CAD were generated for each trial. Summary ORs (95% CIs) across trials were generated using random effects models. Overall, 3550 patients (9.6%) had non-obstructive CAD. They were younger, more were female, and fewer had diabetes mellitus, previous MI or prior percutaneous coronary intervention than patients with obstructive CAD. Thirty-day death or MI was less frequent among patients with non-obstructive CAD (2.2%) versus obstructive CAD (13.3%) (ORadj 0.15; 95% CI, 0.11–0.20); 30-day death or spontaneous MI and six-month mortality were also less frequent among patients with non-obstructive CAD (ORadj 0.19 (0.14–0.25) and 0.37 (0.28–0.49), respectively).
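The trial-level pooling described above can be illustrated with a DerSimonian-Laird random-effects sketch. Whether the trials used this exact estimator is an assumption, and the ORs and CIs below are invented for illustration:

```python
import math

# Sketch of random-effects pooling of per-trial odds ratios
# (DerSimonian-Laird estimator). The per-trial ORs and 95% CIs
# below are hypothetical, not values from the pooled NSTE ACS trials.

def dl_pool(ors, cis):
    """Pool odds ratios given per-trial ORs and their 95% CIs."""
    y = [math.log(o) for o in ors]                    # log odds ratios
    se = [(math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from CI width
          for lo, hi in cis]
    w = [1.0 / s ** 2 for s in se]                    # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # heterogeneity Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)           # between-trial variance
    w_re = [1.0 / (s ** 2 + tau2) for s in se]        # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_mu = math.sqrt(1.0 / sum(w_re))
    return math.exp(mu), (math.exp(mu - 1.96 * se_mu),
                          math.exp(mu + 1.96 * se_mu))

# Three hypothetical trials, each favoring non-obstructive CAD
pooled_or, ci95 = dl_pool(
    [0.12, 0.18, 0.16],
    [(0.07, 0.20), (0.11, 0.30), (0.09, 0.28)],
)
```

The random-effects weights shrink toward equality as between-trial heterogeneity (tau2) grows, which is why this model is preferred when effect sizes plausibly differ across trials.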
Among patients with NSTE ACS, one in 10 had non-obstructive CAD. Death or MI occurred in 2.2% of these patients by 30 days. Compared with patients with obstructive CAD, the rate of major cardiac events was lower in patients with non-obstructive CAD but was not negligible, prompting the need to better understand management strategies for this group.
Acute coronary syndromes; angiography; atherosclerosis; coronary disease; infarction
Current understanding of chronic diseases is based on crude clinical characterization, imaging studies, and laboratory testing that has evolved over decades. The Measurement to Understand Reclassification of Disease of Cabarrus/Kannapolis (MURDOCK) Study is a multi-tiered, longitudinal study designed to enable classification of chronic diseases using clinically annotated biospecimen collections, -omic technologies, electronic health records, and standard epidemiological methods. We expect that detailed molecular classification will improve mechanistic understanding of chronic diseases, augmenting discovery and testing of new treatments, and allowing refined selection of prevention and treatment strategies. The MURDOCK Study Community Registry and Biorepository will serve as a bridge for validation of initial exploratory studies, a platform for future prospective studies in targeted populations, and a resource of both data (analytical and clinical) and samples for cross-registry meta-analyses and comparative population studies. Participation of local health care providers and the Cabarrus County/Kannapolis, NC, community will facilitate future medical research and provide the opportunity to educate and inform the public about genomic research, actively engaging them in shaping the future of medical discovery and treatment of chronic diseases. We present the rationale and study design for the MURDOCK Community Registry and Biorepository and baseline characteristics of the first 6000 participants.
Disease reclassification; community registry; biorepository
The ClinicalTrials.gov trial registry was expanded in 2008 to include a database for reporting summary results. We summarize the structure and contents of the results database, provide an update of relevant policies, and show how the data can be used to gain insight into the state of clinical research.
We analyzed ClinicalTrials.gov data that were publicly available between September 2009 and September 2010.
As of September 27, 2010, ClinicalTrials.gov received approximately 330 new and 2000 revised registrations each week, along with 30 new and 80 revised results submissions. We characterized the 79,413 registry records and 2178 results records available as of September 2010. From a sample cohort of results records, 78 of 150 (52%) had associated publications within 2 years after posting. Of results records available publicly, 20% reported more than two primary outcome measures and 5% reported more than five. Of a sample of 100 registry record outcome measures, 61% lacked specificity in describing the metric used in the planned analysis. In a sample of 700 results records, the mean number of different analysis populations per study group was 2.5 (median, 1; range, 1 to 25). Of these trials, 24% reported results for 90% or less of their participants.
ClinicalTrials.gov provides access to study results not otherwise available to the public. Although the database allows examination of various aspects of ongoing and completed clinical trials, its ultimate usefulness depends on the research community to submit accurate, informative data.
Facing critically low return per dollar invested on clinical research and clinical care, the American biomedical enterprise is in need of a significant transformation. A confluence of high-throughput “omic” technologies and increasing adoption of the electronic health record has fueled excitement for a new paradigm for biomedical research and practice. The ability to simultaneously measure thousands of molecular variables and assess their relationships with clinical data collected during the course of care could enable reclassification of disease not only by gross phenotypic observation but according to underlying molecular mechanism and influence of social determinants. In turn, this reclassification could enable development of targeted therapeutic interventions as well as disease prevention strategies at the individual and population levels.
The MURDOCK Study consists of distinct project “horizons” or stages. Horizon 1 entailed the generation and analysis of molecular data for existing large, clinically well-annotated cohorts in four disease areas. Horizon 1.5 involves creating and maintaining a 50,000-person community volunteer registry for biomarker signature validation and prospective studies, including integration of environmental and social data. Horizon 2 leverages and prospectively recruits Horizon 1.5 volunteers, and extends the study to additional disease areas of interest. Horizon 3 will expand the study through regional, national, and international partnerships.
The MURDOCK Study embodies a new model of team science investigation and represents a significant resource for translational research. The study team invites inquiries to form new collaborations to exploit the rich resources provided by these biospecimens and associated study data.
Stratified medicine; personalized medicine; biomarkers; disease reclassification; community registry; biorepository
Congress has authorized the U.S. Food and Drug Administration (FDA) to provide industry sponsors with a 6-month extension of drug marketing rights under the Pediatric Exclusivity Provision if FDA-requested pediatric drug trials are conducted. The cost and economic return of pediatric exclusivity to industry sponsors has been shown to be highly variable. We sought to determine the cost of performing pediatric exclusivity trials within a single therapeutic area and the subsequent economic return to industry sponsors.
We evaluated 9 orally administered anti-hypertensive drugs submitted to the FDA under the Pediatric Exclusivity Provision from 1997 to 2004 and obtained key elements of the clinical trial designs and operations. Estimates of the costs of performing the studies were generated and converted into after-tax cash outflow. Market sales were obtained and converted into after-tax inflows based on 6 months of additional patent protection. Net economic return and net return-to-cost ratios were determined for each drug.
For the 9 anti-hypertensive agents studied, an average of 2 studies per drug was performed, including at least 1 pharmacokinetic study and a safety and efficacy study. The median cost of completing a pharmacokinetic trial was $862,000 (range: $556,000–1.8 million). The median cost of performing safety and efficacy trials for these agents was $4.3 million (range: $2.1 million–12.9 million). The ratio of net economic return to cost was 17 (range: 4–64.7).
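The return-to-cost calculation described above can be sketched as follows; the cash-flow figures below are hypothetical placeholders chosen for illustration, not the study's actual data.

```python
# Sketch of the net return-to-cost calculation described above.
# All dollar figures are hypothetical illustrations, not study data.

def net_return_to_cost(trial_cost_after_tax: float,
                       incremental_sales_after_tax: float) -> float:
    """Net economic return (inflow minus cost) divided by after-tax trial cost."""
    net_return = incremental_sales_after_tax - trial_cost_after_tax
    return net_return / trial_cost_after_tax

# Example: a $5M after-tax trial cost against $90M in after-tax inflows
# from 6 months of additional exclusivity.
ratio = net_return_to_cost(5_000_000, 90_000_000)
print(round(ratio, 1))  # 17.0
```

The ratio is dimensionless, so the same function applies whether costs are tallied per trial or per drug, provided both arguments use the same basis.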
We found that, within a cohort of anti-hypertensive drugs, the Pediatric Exclusivity Provision has generated highly variable, yet lucrative returns to industry sponsors.
clinical trials; hypertension; pediatrics; drugs; cost-benefit analysis
Few data exist to guide antiarrhythmic drug therapy for sustained ventricular tachycardia (VT)/ventricular fibrillation (VF) after acute myocardial infarction (MI). The objective of this analysis was to describe survival of patients with sustained VT/VF post-MI according to antiarrhythmic drug treatment.
Design & Setting
We conducted a retrospective analysis of ST-segment elevation MI patients with sustained VT/VF in GUSTO IIB and III and compared all-cause death in patients receiving amiodarone, lidocaine, or no antiarrhythmic. We used Cox proportional hazards modeling and inverse weighted estimators to adjust for baseline characteristics, beta-blocker use, and propensity to receive antiarrhythmics. Due to non-proportional hazards for death in early follow-up (0–3 hours after sustained VT/VF) compared with later follow-up (>3 hours), we analyzed all-cause mortality using time-specific hazards.
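Estimating time-specific hazards as described above requires splitting each patient's follow-up at the 3-hour boundary so early (0–3 h) and late (>3 h) intervals can be modeled separately. A minimal sketch of that episode-splitting step, with an assumed record layout (field names are illustrative, not from the study's dataset):

```python
# Sketch of splitting follow-up at 3 hours after sustained VT/VF so that
# separate hazards can be estimated for early (0-3 h) and late (>3 h)
# intervals. Record layout and field names are illustrative assumptions.

SPLIT_HOURS = 3.0

def split_follow_up(time_to_event_h: float, died: int):
    """Return (early_record, late_record); late_record is None when
    follow-up ends within the first 3 hours."""
    if time_to_event_h <= SPLIT_HOURS:
        # Entire follow-up falls in the early window.
        early = {"start": 0.0, "stop": time_to_event_h, "event": died}
        return early, None
    # Patient survives the early window event-free; the event (if any)
    # is attributed to the late interval.
    early = {"start": 0.0, "stop": SPLIT_HOURS, "event": 0}
    late = {"start": SPLIT_HOURS, "stop": time_to_event_h, "event": died}
    return early, late

early, late = split_follow_up(2.0, 1)     # death within the early window
early2, late2 = split_follow_up(720.0, 1) # death at 30 days (720 h)
```

Each interval record can then be passed to a counting-process Cox model (e.g. start/stop formulations in standard survival software) to obtain the hazard ratios for each window.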
Patients & Interventions
Among 19,190 acute MI patients, 1126 (5.9%) developed sustained VT/VF and met the inclusion criteria. Patients received lidocaine (n=664, 59.0%), amiodarone (n=50, 4.4%), both (n=110, 9.8%), or no antiarrhythmic (n=302, 26.8%).
In the first 3 hours after VT/VF, amiodarone (adjusted HR 0.39, 95% CI 0.21–0.71) and lidocaine (adjusted HR 0.72, 95% CI 0.53–0.96) were associated with a lower hazard of death—likely evidence of survivor bias. Among patients who survived 3 hours, amiodarone was associated with increased mortality at 30 days (adjusted HR 1.71, 95% CI 1.02–2.86) and 6 months (adjusted HR 1.96, 95% CI 1.21–3.16), but lidocaine was not, at either 30 days (adjusted HR 1.19, 95% CI 0.77–1.82) or 6 months (adjusted HR 1.10, 95% CI 0.73–1.66).
Among patients with acute MI complicated by sustained VT/VF who survive 3 hours, amiodarone, but not lidocaine, is associated with an increased risk of death, reinforcing the need for randomized trials in this population.
ventricular arrhythmia; antiarrhythmic drug therapy; clinical trials; acute coronary syndrome; ventricular tachycardia; ventricular fibrillation
We examined the relation of maximal in-hospital diuretic dose to weight loss, changes in renal function, and mortality in hospitalised heart failure (HF) patients.
In ESCAPE, 395 patients received diuretics in-hospital. Weight was measured at baseline, every other day during hospitalisation, and at discharge. Weight loss was defined as the difference between baseline and last in-hospital weight. Mortality was assessed using a log-logistic model with non-zero background.
Median weight loss was 2.8 kg (0.7, 6.1) and mean weight loss 3.7 kg; 22% of patients gained weight. Weight loss and maximum in-hospital diuretic dose were correlated (p = 0.0007). Baseline weight, length of stay, and baseline brain natriuretic peptide were significant predictors of weight loss. After adjusting for these, dose was not a significant predictor of weight loss. A strong relation between dose and mortality was seen (p = 0.003), especially at >300 mg/day. Dose remained a significant predictor of mortality after adjusting for baseline variables that significantly predicted mortality. The correlation between maximal dose and change in creatinine level was not significant (r = 0.043; p = 0.412).
High diuretic doses during HF hospitalisation are associated with increased mortality and poor 6-month outcomes.
diuretics; heart failure; outcomes