1.  Effect of Depth and Duration of Cooling on Deaths in the NICU Among Neonates With Hypoxic Ischemic Encephalopathy 
JAMA  2014;312(24):2629-2639.
IMPORTANCE
Hypothermia at 33.5°C for 72 hours for neonatal hypoxic ischemic encephalopathy reduces death or disability to 44% to 55%; longer cooling and deeper cooling are neuroprotective in animal models.
OBJECTIVE
To determine if longer duration cooling (120 hours), deeper cooling (32.0°C), or both are superior to cooling at 33.5°C for 72 hours in neonates who are full-term with moderate or severe hypoxic ischemic encephalopathy.
DESIGN, SETTING, AND PARTICIPANTS
A randomized, 2 × 2 factorial design clinical trial performed in 18 US centers in the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) Neonatal Research Network between October 2010 and November 2013.
INTERVENTIONS
Neonates were assigned to 4 hypothermia groups: 33.5°C for 72 hours, 32.0°C for 72 hours, 33.5°C for 120 hours, and 32.0°C for 120 hours.
MAIN OUTCOMES AND MEASURES
The primary outcome of death or disability at 18 to 22 months is ongoing. The independent data and safety monitoring committee paused the trial to evaluate safety (cardiac arrhythmia, persistent acidosis, major vessel thrombosis and bleeding, and death in the neonatal intensive care unit [NICU]) after the first 50 neonates were enrolled, then after every subsequent 25 neonates. The trial was closed after the eighth review because of the emerging safety profile and futility analysis, with 364 of 726 planned neonates enrolled. This report focuses on safety and NICU deaths by marginal comparisons of 72 hours’ vs 120 hours’ duration and 33.5°C vs 32.0°C depth (predefined secondary outcomes).
RESULTS
The NICU death rates were 7 of 95 neonates (7%) for the 33.5°C for 72 hours group, 13 of 90 neonates (14%) for the 32.0°C for 72 hours group, 15 of 96 neonates (16%) for the 33.5°C for 120 hours group, and 14 of 83 neonates (17%) for the 32.0°C for 120 hours group. The adjusted risk ratio (RR) for NICU deaths for the 120 hours group vs 72 hours group was 1.37 (95% CI, 0.92–2.04) and for the 32.0°C group vs 33.5°C group was 1.24 (95% CI, 0.69–2.25). Safety outcomes were similar between the 120 hours group vs 72 hours group and the 32.0°C group vs 33.5°C group, except major bleeding occurred among 1% in the 120 hours group vs 3% in the 72 hours group (RR, 0.25 [95% CI, 0.07–0.91]). Futility analysis determined that the probability of detecting a statistically significant benefit for longer cooling, deeper cooling, or both for NICU death was less than 2%.
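The abstract reports model-adjusted risk ratios; as a rough check, the unadjusted marginal ratio and a textbook Wald confidence interval can be recomputed from the group counts given above. This is a sketch of the standard log-scale method only, not the trial's adjusted analysis, so the result differs from the reported adjusted RR of 1.37:

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b, z=1.96):
    """Unadjusted risk ratio of group A vs group B with a Wald 95% CI
    computed on the log scale (a standard textbook method)."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    log_rr = math.log(rr)
    lo = math.exp(log_rr - z * se)
    hi = math.exp(log_rr + z * se)
    return rr, lo, hi

# NICU deaths pooled by duration, from the counts in the abstract:
# 120 hours: 15/96 + 14/83 = 29/179; 72 hours: 7/95 + 13/90 = 20/185
rr, lo, hi = risk_ratio(29, 179, 20, 185)
print(f"RR {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```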
CONCLUSIONS AND RELEVANCE
Among neonates who were full-term with moderate or severe hypoxic ischemic encephalopathy, longer cooling, deeper cooling, or both compared with hypothermia at 33.5°C for 72 hours did not reduce NICU death. These results have implications for patient care and design of future trials.
doi:10.1001/jama.2014.16058
PMCID: PMC4335311  PMID: 25536254
2.  Sargramostim plus Ipilimumab vs Ipilimumab Alone for Treatment of Metastatic Melanoma: A Randomized Clinical Trial 
JAMA  2014;312(17):1744-1753.
Importance
Cytotoxic T-lymphocyte-associated antigen-4 (CTLA-4) blockade with ipilimumab prolongs survival in metastatic melanoma patients. CTLA-4 blockade and granulocyte-macrophage colony stimulating factor (GM-CSF) secreting tumor vaccine combinations demonstrate therapeutic synergy in pre-clinical models. A key issue is whether systemic GM-CSF synergizes with CTLA-4 blockade.
Objective
To compare the effect of sargramostim plus ipilimumab vs ipilimumab alone on overall survival in patients with metastatic melanoma.
Design, Setting, and Participants
A phase II randomized clinical trial was conducted in the United States by the Eastern Cooperative Oncology Group between December 28, 2010, and July 28, 2011. Patients with unresectable stage III or IV melanoma, at least 1 prior therapy, no central nervous system metastases, and ECOG performance status 0 or 1 were eligible.
Interventions
Patients were randomized to ipilimumab 10 mg/kg intravenously on day 1 plus sargramostim 250 μg subcutaneously on days 1-14 of 21-day cycles vs ipilimumab alone. Ipilimumab treatment included induction for 4 cycles followed by maintenance every fourth cycle.
Main Outcomes and Measures
The primary end point was overall survival. Secondary end points were progression-free survival, response rate, safety, and tolerability.
Results
A total of 245 patients were treated. Median follow-up was 13.3 months (range, 0.03-19.9). Median overall survival for sargramostim plus ipilimumab was 17.5 months (95% CI, 14.9 to not reached) compared with 12.7 months (95% CI, 10.0 to not reached) for ipilimumab alone. One-year survival rate for sargramostim plus ipilimumab was 68.9% (95% CI, 60.6%-85.5%) compared with 52.9% (95% CI, 43.6%-62.2%) for ipilimumab alone (stratified log-rank one-sided P=.01; mortality hazard ratio 0.64, one-sided 90% repeated CI, not applicable to 0.90). A planned interim analysis was conducted at 69.8% information time (104 observed of 149 planned deaths). The O'Brien-Fleming boundary was crossed for improvement in overall survival. There was no difference in progression-free survival: median progression-free survival was 3.1 months (95% CI, 2.9-4.6) for ipilimumab plus sargramostim and 3.1 months (95% CI, 2.9-4.0) for ipilimumab alone. Grade 3-5 adverse events occurred in 44.9% (95% CI, 35.8%-54.4%) of patients receiving sargramostim plus ipilimumab and 58.3% (95% CI, 49.0%-67.2%) of those receiving ipilimumab alone (two-sided P=.04).
Conclusion and Relevance
Among patients with unresectable stage III or IV melanoma, treatment with sargramostim plus ipilimumab, compared with ipilimumab alone, resulted in longer overall survival and lower toxicity, but no difference in progression-free survival. These findings require confirmation in larger samples and with longer follow-up.
doi:10.1001/jama.2014.13943
PMCID: PMC4336189  PMID: 25369488
3.  Safety and Immunogenicity of Tetanus Diphtheria and Acellular Pertussis (Tdap) Immunization During Pregnancy in Mothers and Infants: A Randomized Clinical Trial 
JAMA  2014;311(17):1760-1769.
Importance
Maternal immunization with tetanus toxoid and reduced diphtheria toxoid acellular pertussis (Tdap) vaccine could prevent infant pertussis. The effect of vaccine-induced maternal antibodies on infant responses to diphtheria and tetanus toxoids acellular pertussis (DTaP) immunization is unknown.
Objective
To evaluate the safety and immunogenicity of Tdap immunization during pregnancy and its effect on infant responses to DTaP.
Design, Setting and Participants
Phase I, randomized, double-masked, placebo-controlled clinical trial conducted in private (Houston) and academic (Durham, Seattle) obstetric practices from 2008 to 2012. Forty-eight healthy pregnant women aged 18 to 45 years received Tdap (n=33) or placebo (n=15) at 30–32 weeks’ gestation, with crossover Tdap immunization postpartum.
Interventions
Tdap vaccination at 30–32 weeks’ gestation or post-partum.
Outcome Measures
Primary: Maternal and infant adverse events, pertussis illness and infant growth and development (Bayley-III screening test) until 13 months of age. Secondary: Antibody concentrations in pregnant women before and 4 weeks after Tdap immunization or placebo, at delivery and 2 months postpartum, and in infants at birth, 2 months, and after the third (7 months) and fourth (13 months) doses of DTaP.
Results
All participants delivered healthy newborns. No Tdap-associated serious adverse events occurred in women or infants. Injection site reactions after Tdap immunization were reported in 78.8% (95% CI: 61.1%, 91.0%) and 80% (CI: 51.9%, 95.7%) pregnant and postpartum women, respectively. Injection site pain was the predominant symptom. Systemic symptoms were reported in 36.4% (CI: 20.4%, 54.9%) and 73.3% (CI: 44.9%, 92.2%) pregnant and postpartum women, respectively. Malaise and myalgia were most common. Growth and development were similar in both infant groups. No cases of pertussis occurred. Significantly higher concentrations of pertussis antibodies were measured at delivery in women who received Tdap during pregnancy and in their infants at birth and at age 2 months when compared to infants of women immunized postpartum. Antibody responses in infants of Tdap recipients during pregnancy were modestly lower after 3 DTaP doses, but not different following the fourth dose.
Conclusions and Relevance
This preliminary safety assessment did not find an increased risk of adverse events among women who received Tdap vaccine at 30–32 weeks’ gestation or their infants. Maternal immunization with Tdap resulted in high concentrations of pertussis antibodies in infants during the first 2 months of life and did not substantially alter infant responses to DTaP. Further research is needed to provide definitive evidence of the safety and efficacy of Tdap vaccination during pregnancy.
Trial Registration
ClinicalTrials.gov, study identifier: NCT00707148. URL: http://www.clinicaltrials.gov
doi:10.1001/jama.2014.3633
PMCID: PMC4333147  PMID: 24794369
Maternal immunization; Pertussis; infants; maternal antibodies; response to active immunization
4.  Detection of Undiagnosed HIV Among State Prison Entrants 
JAMA  2013;310(20):2198-2199.
doi:10.1001/jama.2013.280740
PMCID: PMC4329507  PMID: 24281464
5.  Thirty-Day Hospital Readmission following Discharge from Post-acute Rehabilitation in Fee-for-Service Medicare Patients 
Importance
The Centers for Medicare and Medicaid Services (CMS) recently identified 30-day readmission after discharge from inpatient rehabilitation facilities as a national quality indicator. Research is needed to determine the rates and factors related to readmission in this patient population.
Objective
Determine 30-day readmission rates and factors related to readmission for patients receiving post-acute inpatient rehabilitation.
Design
Retrospective cohort study.
Setting
1,365 post-acute inpatient rehabilitation facilities providing services to Medicare fee-for service beneficiaries.
Participants
Records for 736,536 post-acute patients discharged from inpatient rehabilitation facilities to the community in 2006 through 2011. Mean age 78.0 (SD = 7.3) years. Sixty-three percent of patients were female and 85.1% were non-Hispanic white.
Main Outcome and Measures
30-day readmission rates for the six largest diagnostic impairment categories receiving inpatient rehabilitation. These included stroke, lower extremity fracture, lower extremity joint replacement, debility, neurological disorders and brain dysfunction.
Results
Mean rehabilitation length of stay was 12.4 (SD = 5.3) days. The overall 30-day readmission rate was 11.8% (95% CI, 11.7%, 11.8%). Rates ranged from 5.8% (95% CI, 5.8%, 5.9%) for patients with lower extremity joint replacement to 18.8% (95% CI, 18.8%, 18.9%) for patients with debility. Rates were highest in men (13.0%; 95% CI, 12.8%, 13.1%), non-Hispanic blacks (13.8%; 95% CI, 13.5%, 14.1%), dual-eligible beneficiaries (15.1%; 95% CI, 14.9%, 15.4%), and patients with tier 1 comorbidities (25.6%; 95% CI, 24.9%, 26.3%). Higher motor and cognitive functional status were associated with lower hospital readmission rates across the six impairment categories. Adjusted readmission rates varied by state, ranging from 9.2% to 13.6%. Approximately 50% of patients rehospitalized within the 30-day period were readmitted within 11 days of discharge. MS-DRG codes for heart failure, urinary tract infection, pneumonia, septicemia, nutritional and metabolic disorders, esophagitis, gastroenteritis, and digestive disorders were common reasons for readmission.
Conclusion and Relevance
Among post-acute rehabilitation facilities providing services to Medicare fee-for-service beneficiaries, 30-day readmission rates ranged from 5.8% to 18.8% for selected impairment groups. Further research is needed to understand the reasons for readmission.
doi:10.1001/jama.2014.8
PMCID: PMC4085109  PMID: 24519300
6.  Molecular Findings Among Patients Referred for Clinical Whole-Exome Sequencing 
JAMA  2014;312(18):1870-1879.
IMPORTANCE
Clinical whole-exome sequencing is increasingly used for diagnostic evaluation of patients with suspected genetic disorders.
OBJECTIVE
To perform clinical whole-exome sequencing and report (1) the rate of molecular diagnosis among phenotypic groups, (2) the spectrum of genetic alterations contributing to disease, and (3) the prevalence of medically actionable incidental findings such as FBN1 mutations causing Marfan syndrome.
DESIGN, SETTING, AND PATIENTS
Observational study of 2000 consecutive patients with clinical whole-exome sequencing analyzed between June 2012 and August 2014. Whole-exome sequencing tests were performed at a clinical genetics laboratory in the United States. Results were reported by clinical molecular geneticists certified by the American Board of Medical Genetics and Genomics. Tests were ordered by the patient’s physician. The patients were primarily pediatric (1756 [88%]; mean age, 6 years; 888 females [44%], 1101 males [55%], and 11 fetuses [1% gender unknown]), demonstrating diverse clinical manifestations most often including nervous system dysfunction such as developmental delay.
MAIN OUTCOMES AND MEASURES
Whole-exome sequencing diagnosis rate overall and by phenotypic category, mode of inheritance, spectrum of genetic events, and reporting of incidental findings.
RESULTS
A molecular diagnosis was reported for 504 patients (25.2%) with 58% of the diagnostic mutations not previously reported. Molecular diagnosis rates for each phenotypic category were 143/526 (27.2%; 95% CI, 23.5%–31.2%) for the neurological group, 282/1147 (24.6%; 95% CI, 22.1%–27.2%) for the neurological plus other organ systems group, 30/83 (36.1%; 95% CI, 26.1%–47.5%) for the specific neurological group, and 49/244 (20.1%; 95% CI, 15.6%–25.8%) for the nonneurological group. The Mendelian disease patterns of the 527 molecular diagnoses included 280 (53.1%) autosomal dominant, 181 (34.3%) autosomal recessive (including 5 with uniparental disomy), 65 (12.3%) X-linked, and 1 (0.2%) mitochondrial. Of 504 patients with a molecular diagnosis, 23 (4.6%) had blended phenotypes resulting from 2 single gene defects. About 30% of the positive cases harbored mutations in disease genes reported since 2011. There were 95 medically actionable incidental findings in genes unrelated to the phenotype but with immediate implications for management in 92 patients (4.6%), including 59 patients (3%) with mutations in genes recommended for reporting by the American College of Medical Genetics and Genomics.
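The abstract does not state which interval method underlies its diagnostic-rate CIs; a Wilson score interval, a common default for binomial proportions, reproduces the reported ranges to within roughly a tenth of a percentage point. A minimal sketch:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a proportion (assumed method; the
    abstract does not say how its CIs were computed)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Neurological group: 143 diagnoses among 526 patients
lo, hi = wilson_ci(143, 526)
print(f"{143/526:.1%} (95% CI, {lo:.1%}-{hi:.1%})")
```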
CONCLUSIONS AND RELEVANCE
Whole-exome sequencing provided a potential molecular diagnosis for 25% of a large cohort of patients referred for evaluation of suspected genetic conditions, including detection of rare genetic events and new mutations contributing to disease. The yield of whole-exome sequencing may offer advantages over traditional molecular diagnostic approaches in certain patients.
doi:10.1001/jama.2014.14601
PMCID: PMC4326249  PMID: 25326635
7.  Television Viewing and Risk of Type 2 Diabetes, Cardiovascular Disease, and All-Cause Mortality A Meta-analysis 
JAMA  2011;305(23):2448-2455.
Context
Prolonged television (TV) viewing is the most prevalent and pervasive sedentary behavior in industrialized countries and has been associated with morbidity and mortality. However, a systematic and quantitative assessment of published studies is not available.
Objective
To perform a meta-analysis of all prospective cohort studies to determine the association between TV viewing and risk of type 2 diabetes, fatal or nonfatal cardiovascular disease, and all-cause mortality.
Data Sources and Study Selection
Relevant studies were identified by searches of the MEDLINE database from 1970 to March 2011 and the EMBASE database from 1974 to March 2011 without restrictions and by reviewing reference lists from retrieved articles. Cohort studies that reported relative risk estimates with 95% confidence intervals (CIs) for the associations of interest were included.
Data Extraction
Data were extracted independently by each author and summary estimates of association were obtained using a random-effects model.
Data Synthesis
Of the 8 studies included, 4 reported results on type 2 diabetes (175 938 individuals; 6428 incident cases during 1.1 million person-years of follow-up), 4 reported on fatal or nonfatal cardiovascular disease (34 253 individuals; 1052 incident cases), and 3 reported on all-cause mortality (26 509 individuals; 1879 deaths during 202 353 person-years of follow-up). The pooled relative risks per 2 hours of TV viewing per day were 1.20 (95% CI, 1.14-1.27) for type 2 diabetes, 1.15 (95% CI, 1.06-1.23) for fatal or nonfatal cardiovascular disease, and 1.13 (95% CI, 1.07-1.18) for all-cause mortality. While the associations between time spent viewing TV and risk of type 2 diabetes and cardiovascular disease were linear, the risk of all-cause mortality appeared to increase with TV viewing duration of greater than 3 hours per day. The estimated absolute risk differences per every 2 hours of TV viewing per day were 176 cases of type 2 diabetes per 100 000 individuals per year, 38 cases of fatal cardiovascular disease per 100 000 individuals per year, and 104 deaths for all-cause mortality per 100 000 individuals per year.
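The absolute and relative figures above are linked by the usual identity ARD = I0 × (RR − 1), where I0 is the baseline incidence. As a back-of-envelope check (illustrative arithmetic only; these are implied reference rates, not populations reported by the paper), the baseline incidence each pairing implies can be recovered:

```python
def implied_baseline(ard_per_100k, rr):
    """Baseline incidence (per 100,000 per year) implied by an absolute
    risk difference and a relative risk, via ARD = I0 * (RR - 1)."""
    return ard_per_100k / (rr - 1)

# Figures per 2 h/day of TV viewing, taken from the abstract
for label, ard, rr in [("type 2 diabetes", 176, 1.20),
                       ("fatal CVD", 38, 1.15),
                       ("all-cause mortality", 104, 1.13)]:
    rate = implied_baseline(ard, rr)
    print(f"{label}: implied baseline ~{rate:.0f} per 100,000/yr")
```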
Conclusion
Prolonged TV viewing was associated with increased risk of type 2 diabetes, cardiovascular disease, and all-cause mortality.
doi:10.1001/jama.2011.812
PMCID: PMC4324728  PMID: 21673296
8.  Cause-Specific Risk of Hospital Admission Related to Extreme Heat in Older Adults 
JAMA  2014;312(24):2659-2667.
IMPORTANCE
Heat exposure is known to have a complex set of physiological effects on multiple organ systems, but current understanding of the health effects is mostly based on studies investigating a small number of prespecified health outcomes such as cardiovascular and respiratory diseases.
OBJECTIVES
To identify possible causes of hospital admissions during extreme heat events and to estimate their risks using historical data.
DESIGN, SETTING, AND POPULATION
Matched analysis of time series data describing daily hospital admissions of Medicare enrollees (23.7 million fee-for-service beneficiaries [aged ≥65 years] per year; 85% of all Medicare enrollees) for the period 1999 to 2010 in 1943 counties in the United States with at least 5 summers of near-complete (>95%) daily temperature data.
EXPOSURES
Heat wave periods, defined as 2 or more consecutive days with temperatures exceeding the 99th percentile of county-specific daily temperatures, matched to non–heat wave periods by county and week.
MAIN OUTCOMES AND MEASURES
Daily cause-specific hospitalization rates by principal discharge diagnosis codes, grouped into 283 disease categories using a validated approach.
RESULTS
Risks of hospitalization for fluid and electrolyte disorders, renal failure, urinary tract infection, septicemia, and heat stroke were statistically significantly higher on heat wave days relative to matched non–heat wave days, but risk of hospitalization for congestive heart failure was lower (P < .05). Relative risks for these disease groups were 1.18 (95% CI, 1.12–1.25) for fluid and electrolyte disorders, 1.14 (95% CI, 1.06–1.23) for renal failure, 1.10 (95% CI, 1.04–1.16) for urinary tract infections, 1.06 (95% CI, 1.00–1.11) for septicemia, and 2.54 (95% CI, 2.14–3.01) for heat stroke. Absolute risk differences were 0.34 (95% CI, 0.22–0.46) excess admissions per 100 000 individuals at risk for fluid and electrolyte disorders, 0.25 (95% CI, 0.12–0.39) for renal failure, 0.24 (95% CI, 0.09–0.39) for urinary tract infections, 0.21 (95% CI, 0.01–0.41) for septicemia, and 0.16 (95% CI, 0.10–0.22) for heat stroke. For fluid and electrolyte disorders and heat stroke, the risk of hospitalization increased during more intense and longer-lasting heat wave periods (P < .05). Risks were generally highest on the heat wave day but remained elevated for up to 5 subsequent days.
CONCLUSIONS AND RELEVANCE
Among older adults, periods of extreme heat were associated with increased risk of hospitalization for fluid and electrolyte disorders, renal failure, urinary tract infection, septicemia, and heat stroke. However, the absolute risk increase was small and of uncertain clinical importance.
doi:10.1001/jama.2014.15715
PMCID: PMC4319792  PMID: 25536257
9.  Ancillary Care for Public Health Research in Developing Countries 
JAMA  2009;302(4):429-431.
doi:10.1001/jama.2009.1072
PMCID: PMC4314939  PMID: 19622822
10.  Accuracy of FDG-PET to diagnose lung cancer in areas with infectious lung disease: A meta-analysis 
JAMA  2014;312(12):1227-1236.
Importance
Positron emission tomography (PET) with 18-fluorodeoxyglucose (FDG) is recommended for the non-invasive diagnosis of pulmonary nodules suspicious for lung cancer. In populations with endemic infectious lung disease, FDG-PET may not accurately identify malignant lesions.
Objective
To estimate the diagnostic accuracy of FDG-PET for pulmonary nodules suspicious for lung cancer in regions where infectious lung disease is endemic and compare the test accuracy in regions where infectious lung disease is rare.
Data Sources and Study Selection
Databases of MEDLINE, EMBASE and the Web of Science were searched from October 1, 2000, through April 28, 2014. Articles reporting information sufficient to calculate sensitivity and specificity of FDG-PET to diagnose lung cancer were included. Only studies that enrolled more than 10 participants with benign and malignant lesions were included. Database searches yielded 1923 articles, of which 257 were assessed for eligibility. Seventy studies were included in the analysis. Studies reported on a total of 8511 nodules; 5105 (60%) were malignant.
Data Extraction and Synthesis
Abstracts meeting eligibility criteria were collected by a research librarian and reviewed by 2 independent reviewers. Hierarchical summary receiver operating characteristic curves were constructed. A random-effects logistic regression model was used to summarize and assess the effect of endemic infectious lung disease on test performance.
Main Outcome and Measures
The sensitivity and specificity for FDG-PET test performance.
Results
Heterogeneity for sensitivity (I2=87%) and specificity (I2=82%) was observed across studies. The pooled (unadjusted) sensitivity was 89% (95% CI, 86%-91%) and specificity was 75% (95% CI, 71%-79%). There was a 16% lower average adjusted specificity in regions with endemic infectious lung disease (61% [95% CI, 49%-72%]) compared with nonendemic regions (77% [95% CI, 73%-80%]). Lower specificity was observed when the analysis was limited to rigorously conducted and well-controlled studies. In general, sensitivity did not change appreciably by endemic infection status, even after adjusting for relevant factors.
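The per-study sensitivity and specificity values that feed the pooled model come from each study's 2 × 2 diagnostic table. A minimal sketch, using hypothetical counts chosen purely for illustration (not data from any included study):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 diagnostic table:
    sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical single-study counts: 90 of 100 malignant nodules
# FDG-PET positive, 60 of 80 benign nodules FDG-PET negative
sens, spec = sens_spec(tp=90, fn=10, tn=60, fp=20)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```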
Conclusions and Relevance
The accuracy of FDG-PET for diagnosing lung nodules was extremely heterogeneous. Use of FDG-PET combined with computed tomography was less specific in diagnosing malignancy in populations with endemic infectious lung disease compared with nonendemic regions. These data do not support use of FDG-PET to diagnose lung cancer in endemic areas unless an institution achieves test performance accuracy similar to that found in nonendemic regions.
doi:10.1001/jama.2014.11488
PMCID: PMC4315183  PMID: 25247519
Lung cancer; diagnosis; FDG-PET; meta-analysis
11.  Consideration of insurance reimbursement for physical activity and exercise programs for patients with diabetes 
JAMA  2011;305(17):1808-1809.
doi:10.1001/jama.2011.572
PMCID: PMC4313545  PMID: 21540427
12.  Antihypertensive Treatment and Secondary Prevention of Cardiovascular Disease Events Among Persons Without Hypertension 
JAMA  2011;305(9):913-922.
Context
Cardiovascular disease (CVD) risk increases beginning at systolic blood pressure levels of 115 mm Hg. Use of antihypertensive medications among patients with a history of CVD or diabetes and without hypertension has been debated.
Objective
To evaluate the effect of antihypertensive treatment on secondary prevention of CVD events and all-cause mortality among persons without clinically defined hypertension.
Data Sources
Meta-analysis with systematic search of MEDLINE (1950 to week 3 of January 2011), EMBASE, and the Cochrane Collaboration Central Register of Controlled Clinical Trials and manual examination of references in selected articles and studies.
Study Selection
From 874 potentially relevant publications, 25 trials that fulfilled the predetermined inclusion and exclusion criteria were included in the meta-analysis.
Data Extraction
Information on participant characteristics, trial design and duration, treatment drug, dose, control, and clinical events were extracted using a standardized protocol. Outcomes included stroke, myocardial infarction (MI), congestive heart failure (CHF), composite CVD outcomes, CVD mortality, and all-cause mortality.
Results
Compared with controls, participants receiving antihypertensive medications had a pooled relative risk of 0.77 (95% confidence interval [CI], 0.61 to 0.98) for stroke, 0.80 (95% CI, 0.69 to 0.93) for MI, 0.71 (95% CI, 0.65 to 0.77) for CHF, 0.85 (95% CI, 0.80 to 0.90) for composite CVD events, 0.83 (95% CI, 0.69 to 0.99) for CVD mortality, and 0.87 (95% CI, 0.80 to 0.95) for all-cause mortality from random-effects models. The corresponding absolute risk reductions per 1000 persons were −7.7 (95% CI, −15.2 to −0.3) for stroke, −13.3 (95% CI, −28.4 to 1.7) for MI, −43.6 (95% CI, −65.2 to −22.0) for CHF events, −27.1 (95% CI, −40.3 to −13.9) for composite CVD events, −15.4 (95% CI, −32.5 to 1.7) for CVD mortality, and −13.7 (95% CI, −24.6 to −2.8) for all-cause mortality. Results did not differ according to trial characteristics or subgroups defined by clinical history.
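The absolute risk reductions per 1000 persons translate directly into numbers needed to treat via NNT = 1/ARR; for example, the CHF figure of 43.6 per 1000 implies treating roughly 23 patients to prevent one event. This is standard arithmetic, not a figure reported in the abstract:

```python
import math

def nnt(arr_per_1000):
    """Number needed to treat implied by an absolute risk reduction
    expressed per 1000 persons (rounded up, the usual convention)."""
    return math.ceil(1000 / arr_per_1000)

print(nnt(43.6))  # CHF: ARR of 43.6 per 1000 -> NNT of 23
print(nnt(7.7))   # stroke: ARR of 7.7 per 1000 -> NNT of 130
```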
Conclusions
Among patients with clinical history of CVD but without hypertension, antihypertensive treatment was associated with decreased risk of stroke, CHF, composite CVD events, and all-cause mortality. Additional randomized trial data are necessary to assess these outcomes in patients without CVD clinical recommendations.
doi:10.1001/jama.2011.250
PMCID: PMC4313888  PMID: 21364140
13.  Effect of Home Blood Pressure Telemonitoring and Pharmacist Management On Blood Pressure Control: The HyperLink Cluster Randomized Trial 
JAMA  2013;310(1):46-56.
Context
Patients with high blood pressure (BP) visit a physician 4 times or more per year on average in the U.S., yet BP is controlled in only about half. Practical, robust and sustainable models are needed to improve BP control in patients with uncontrolled hypertension.
Objectives
To determine whether an intervention combining home BP telemonitoring with pharmacist case management improves BP control compared with usual care and to determine whether BP control is maintained after the intervention stops.
Design
A clinic-randomized trial with 12 months of intervention and 6 months of post-intervention follow-up.
Patients and Setting
450 adults with uncontrolled BP recruited from 14,692 patients with electronic medical records across sixteen primary care clinics in an integrated health system in Minneapolis-St. Paul, MN.
Interventions
Eight clinics were randomized to provide usual care to their patients (n = 222) and 8 were randomized to provide the telemonitoring intervention (n = 228). Intervention patients received home BP telemonitors and transmitted BP data to pharmacists who adjusted antihypertensive therapy accordingly.
Main Outcome Measures
BP control to <140/90 mm Hg (<130/80 mm Hg in patients with diabetes or kidney disease) at 6 and 12 months. Secondary outcomes were change in BP, patient satisfaction, and BP control at 18 months.
Results
At baseline, enrollees were 45% female and 82% non-Hispanic white; mean age was 61 (SD 12.0) years and mean BP was 148/85 mm Hg. BP was controlled at both 6 and 12 months in 57.2% (95% CI, 44.8% - 68.7%) of telemonitoring intervention patients and 30.0% (95% CI, 23.2% - 37.8%) of usual care patients (P=0.001). At 6 months, BP was controlled in 71.8% (95% CI, 65.6% - 77.3%) of telemonitoring intervention patients and 45.2% (95% CI, 39.2% - 51.3%) of usual care patients (P<0.0001); at 12 months, in 71.2% (95% CI, 62.0% - 78.9%) vs 52.8% (95% CI, 45.4% - 60.2%) (P=0.005); and at 18 months, in 71.8% (95% CI, 65.0% - 77.8%) vs 57.1% (95% CI, 51.5% - 62.6%) (P=0.003). Systolic BP decreased from baseline more among telemonitoring intervention than usual care patients by 10.7 mm Hg (95% CI, 7.3-14.3) at 6 months, 9.7 mm Hg (95% CI, 6.0-13.4) at 12 months, and 6.6 mm Hg (95% CI, 2.5-10.7) at 18 months (all P<0.001). Diastolic BP decreased from baseline more among telemonitoring intervention than usual care patients by 6.0 mm Hg (95% CI, 3.4-8.6) at 6 months (P<0.001), 5.1 mm Hg (95% CI, 2.8-7.4) at 12 months (P<0.001), and 3.0 mm Hg (95% CI, -0.3 to 6.3) at 18 months (not significant).
Conclusions
Home BP telemonitoring and pharmacist case management achieved better BP control compared to usual care during 12 months of intervention, and benefits persisted for 6 months post-intervention.
Trial Registration
ClinicalTrials.gov, NCT00781365. URL: http://clinicaltrials.gov/ct2/show/NCT00781365?term=hyperlink&rank=1
doi:10.1001/jama.2013.6549
PMCID: PMC4311883  PMID: 23821088
14.  Effect of Communication Skills Training for Residents and Nurse Practitioners on Quality of Communication With Patients With Serious Illness 
JAMA  2013;310(21):2271-2281.
IMPORTANCE
Communication about end-of-life care is a core clinical skill. Simulation-based training improves skill acquisition, but effects on patient-reported outcomes are unknown.
OBJECTIVE
To assess the effects of a communication skills intervention for internal medicine and nurse practitioner trainees on patient- and family-reported outcomes.
DESIGN, SETTING, AND PARTICIPANTS
Randomized trial conducted with 391 internal medicine and 81 nurse practitioner trainees between 2007 and 2013 at the University of Washington and Medical University of South Carolina.
INTERVENTION
Participants were randomized to an 8-session, simulation-based, communication skills intervention (N = 232) or usual education (N = 240).
MAIN OUTCOMES AND MEASURES
Primary outcome was patient-reported quality of communication (QOC; mean rating of 17 items rated from 0–10, with 0 = poor and 10 = perfect). Secondary outcomes were patient-reported quality of end-of-life care (QEOLC; mean rating of 26 items rated from 0–10) and depressive symptoms (assessed using the 8-item Personal Health Questionnaire [PHQ-8]; range, 0–24, higher scores worse) and family-reported QOC and QEOLC. Analyses were clustered by trainee.
RESULTS
There were 1866 patient ratings (44% response) and 936 family ratings (68% response). The intervention was not associated with significant changes in QOC or QEOLC. Mean values for postintervention patient QOC and QEOLC were 6.5 (95% CI, 6.2 to 6.8) and 8.3 (95% CI, 8.1 to 8.5) respectively, compared with 6.3 (95% CI, 6.2 to 6.5) and 8.3 (95% CI, 8.1 to 8.4) for control conditions. After adjustment, comparing intervention with control, there was no significant difference in the QOC score for patients (difference, 0.4 points [95% CI, −0.1 to 0.9]; P = .15) or families (difference, 0.1 [95% CI, −0.8 to 1.0]; P = .81). There was no significant difference in QEOLC score for patients (difference, 0.3 points [95% CI, −0.3 to 0.8]; P = .34) or families (difference, 0.1 [95% CI, −0.7 to 0.8]; P = .88). The intervention was associated with significantly increased depression scores among patients of postintervention trainees (mean score, 10.0 [95% CI, 9.1 to 10.8], compared with 8.8 [95% CI, 8.4 to 9.2]) for control conditions; adjusted model showed an intervention effect of 2.2 (95% CI, 0.6 to 3.8; P = .006).
CONCLUSIONS AND RELEVANCE
Among internal medicine and nurse practitioner trainees, simulation-based communication training compared with usual education did not improve quality of communication about end-of-life care or quality of end-of-life care but was associated with a small increase in patients’ depressive symptoms. These findings raise questions about skills transfer from simulation training to actual patient care and the adequacy of communication skills assessment.
TRIAL REGISTRATION
clinicaltrials.gov Identifier: NCT00687349
doi:10.1001/jama.2013.282081
PMCID: PMC4310457  PMID: 24302090
15.  Parathyroid Hormone in the Evaluation of Hypercalcemia 
JAMA  2014;312(24):2680-2681.
doi:10.1001/jama.2014.9195
PMCID: PMC4308950  PMID: 25536261
16.  Association between seven years of intensive treatment of type 1 diabetes and long term mortality 
JAMA  2015;313(1):45-53.
Importance
Whether intensive glycemic therapy affects long-term mortality in type 1 diabetes mellitus has not been established.
Objective
To determine whether mortality differed between the original intensive and conventional treatment groups in the long-term follow-up of the Diabetes Control and Complications Trial (DCCT) cohort.
Design, Setting, and Participants
After the DCCT (1983–1993) ended, participants were followed up in a multisite (27 US and Canadian academic clinical centers) observational study (Epidemiology of Diabetes Control and Complications [EDIC]) until December 31, 2012. Participants were 1441 healthy volunteers with diabetes mellitus who, at baseline, were 13 to 39 years of age with 1 to 15 years of diabetes duration and no or early microvascular complications, and without hypertension, preexisting cardiovascular disease, or other potentially life-threatening disease.
Intervention/Exposure
During the clinical trial, participants were randomly assigned to receive intensive therapy (n = 711) aimed at achieving glycemia as close to the nondiabetic range as safely possible, or conventional therapy (n = 730) with the goal of avoiding symptomatic hypoglycemia and hyperglycemia. At the end of the DCCT, after a mean of 6.5 years, intensive therapy was taught and recommended to all participants and diabetes care was returned to personal physicians.
Main Outcomes
Total and cause-specific mortality was assessed through annual contact with family and friends and through records over a mean of 27 years of follow-up.
Results
Vital status was ascertained for 1429 (99.2%) participants. There were 107 deaths: 64 in the conventional group and 43 in the intensive group. The absolute risk difference was −109 per 100 000 patient-years (95% CI, −218 to −1), with lower all-cause mortality risk in the intensive therapy group (hazard ratio [HR] = 0.67 [95% CI, 0.46–0.99]; P = .045). Primary causes of death were cardiovascular disease (24 deaths; 22.4%), cancer (21 deaths; 19.6%), acute diabetes complications (19 deaths; 17.8%), and accidents or suicide (18 deaths; 16.8%). Higher levels of glycated hemoglobin (HbA1c) were associated with all-cause mortality (HR = 1.56 [95% CI, 1.35–1.81 per 10% relative increase in HbA1c]; P < .001), as was the development of albuminuria (HR = 2.20 [95% CI, 1.46–3.31]; P < .001).
Conclusions and Relevance
After a mean of 27 years’ follow-up of patients with type 1 diabetes, 6.5 years of initial intensive diabetes therapy was associated with a modestly lower all-cause mortality compared with conventional therapy.
doi:10.1001/jama.2014.16107
PMCID: PMC4306335  PMID: 25562265
17.  Use of Corticosteroids After Hepatoportoenterostomy for Bile Drainage in Infants With Biliary Atresia 
JAMA  2014;311(17):1750-1759.
IMPORTANCE
Biliary atresia is the most common cause of end-stage liver disease in children. Controversy exists as to whether use of steroids after hepatoportoenterostomy improves clinical outcome.
OBJECTIVE
To determine whether the addition of high-dose corticosteroids after hepatoportoenterostomy is superior to surgery alone in improving biliary drainage and survival with the native liver.
DESIGN, SETTING, AND PATIENTS
The multicenter, double-blind Steroids in Biliary Atresia Randomized Trial (START) was conducted in 140 infants (mean age, 2.3 months) between September 2005 and February 2011 in the United States; follow-up ended in January 2013.
INTERVENTIONS
Participants were randomized to receive intravenous methylprednisolone (4 mg/kg/d for 2 weeks) and oral prednisolone (2 mg/kg/d for 2 weeks) followed by a tapering protocol for 9 weeks (n = 70) or placebo (n = 70) initiated within 72 hours of hepatoportoenterostomy.
MAIN OUTCOMES AND MEASURES
The primary end point (powered to detect a 25% absolute treatment difference) was the percentage of participants with a serum total bilirubin level of less than 1.5 mg/dL with the native liver at 6 months posthepatoportoenterostomy. Secondary outcomes included survival with the native liver at 24 months of age and serious adverse events.
RESULTS
The proportion of participants with improved bile drainage at 6 months posthepatoportoenterostomy did not differ statistically significantly between groups (58.6% [41/70] in the steroids group vs 48.6% [34/70] in the placebo group; adjusted relative risk, 1.14 [95% CI, 0.83 to 1.57]; P = .43). The adjusted absolute risk difference was 8.7% (95% CI, −10.4% to 27.7%). Transplant-free survival at 24 months of age was 58.7% in the steroids group vs 59.4% in the placebo group (adjusted hazard ratio, 1.0 [95% CI, 0.6 to 1.8]; P = .99). The percentage of participants with serious adverse events was 81.4% (57/70) in the steroids group and 80.0% (56/70) in the placebo group (P > .99); however, participants receiving steroids had an earlier onset of their first serious adverse event by 30 days posthepatoportoenterostomy (37.2% [95% CI, 26.9% to 50.0%] of the steroids group vs 19.0% [95% CI, 11.5% to 30.4%] of the placebo group; P = .008).
CONCLUSIONS AND RELEVANCE
Among infants with biliary atresia who have undergone hepatoportoenterostomy, high-dose steroid therapy following surgery did not result in statistically significant treatment differences in bile drainage at 6 months, although a small clinical benefit could not be excluded. Steroid treatment was associated with earlier onset of serious adverse events in children with biliary atresia.
doi:10.1001/jama.2014.2623
PMCID: PMC4303045  PMID: 24794368
18.  [No title available] 
PMCID: PMC4293125  PMID: 23299593
19.  [No title available] 
PMCID: PMC4289618  PMID: 25038348
20.  Health Care–Associated Infection After Red Blood Cell Transfusion 
JAMA  2014;311(13):1317-1326.
IMPORTANCE
The association between red blood cell (RBC) transfusion strategies and health care–associated infection is not fully understood.
OBJECTIVE
To evaluate whether RBC transfusion thresholds are associated with the risk of infection and whether risk is independent of leukocyte reduction.
DATA SOURCES
MEDLINE, EMBASE, Web of Science Core Collection, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, ClinicalTrials.gov, International Clinical Trials Registry, and the International Standard Randomized Controlled Trial Number register were searched through January 22, 2014.
STUDY SELECTION
Randomized clinical trials with restrictive vs liberal RBC transfusion strategies.
DATA EXTRACTION AND SYNTHESIS
Twenty randomized trials with 8598 patients met eligibility criteria, of which 17 trials (n = 7456 patients) contained sufficient information for meta-analyses. DerSimonian and Laird random-effects models were used to report pooled risk ratios. Absolute risks of infection were calculated using the profile likelihood random-effects method.
MAIN OUTCOMES AND MEASURES
Incidence of health care–associated infections, such as pneumonia, mediastinitis, wound infection, and sepsis.
RESULTS
The pooled risk of all serious infections was 10.6% (95% CI, 5.6%-15.9%) in the restrictive group and 12.7% (95% CI, 7.0%-18.7%) in the liberal group. The risk ratio (RR) for the association between transfusion strategies and infection (serious infections and selected infections, combined) was 0.92 (95% CI, 0.82-1.04) with little heterogeneity (I2 = 6.3%; τ2 = .0041). The RR for the association between transfusion strategies and serious infection was 0.84 (95% CI, 0.73-0.96; I2 = 0%, τ2 <.0001). The number needed to treat (NNT) with restrictive strategies to prevent serious infection was 48 (95% CI, 36-71). The risk of infection remained reduced with a restrictive strategy, even with leukocyte reduction (RR, 0.83 [95% CI, 0.69-0.99]). For trials with a restrictive hemoglobin threshold of <7.0 g/dL, the RR was 0.86 (95% CI, 0.72-1.02). With stratification by patient type, the RR for serious infection was 0.72 (95% CI, 0.53-0.97) in patients undergoing orthopedic surgery and 0.51 (95% CI, 0.28-0.95) in patients presenting with sepsis. There were no significant differences in the incidence of infection by RBC threshold for patients with cardiac disease, the critically ill, those with acute upper gastrointestinal bleeding, or for infants with low birth weight.
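As a quick arithmetic check (not part of the study itself), the reported NNT of 48 follows directly from the pooled absolute risks above:

```python
# Illustrative check using the pooled risks reported in the abstract:
# number needed to treat (NNT) with a restrictive transfusion strategy
# to prevent one serious infection.
liberal_risk = 0.127      # 12.7% pooled risk of serious infection, liberal group
restrictive_risk = 0.106  # 10.6% pooled risk, restrictive group

absolute_risk_reduction = liberal_risk - restrictive_risk  # ~0.021
nnt = 1 / absolute_risk_reduction

print(round(nnt))  # 48, matching the reported NNT
```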
CONCLUSIONS AND RELEVANCE
Among hospitalized patients, a restrictive RBC transfusion strategy compared with a liberal transfusion strategy was not associated with a reduced risk of health care–associated infection overall, although it was associated with a reduced risk of serious infection. Implementing restrictive strategies may have the potential to lower the incidence of serious health care–associated infection.
doi:10.1001/jama.2014.2726
PMCID: PMC4289152  PMID: 24691607
21.  Tobacco Control and the Reduction in Smoking-related Premature Deaths in the United States, 1964–2012 
Importance
The 50th anniversary of the first Surgeon General’s Report on smoking and health is celebrated in 2014. This seminal document inspired efforts by governments, nongovernmental organizations, and the private sector to reduce the toll of cigarette smoking through reduced initiation and increased cessation.
Objective
To quantify reductions in smoking-related mortality associated with implementation of tobacco control since 1964.
Design, Setting and Participants
Smoking histories for individual birth cohorts were estimated both as they actually occurred and under likely scenarios had tobacco control never emerged. National mortality rates and mortality rate ratio estimates from analytical studies of the effect of smoking on mortality yielded death rates by smoking status. Actual smoking-related mortality from 1964–2012 was compared with estimated mortality under no tobacco control, using a likely scenario (primary counterfactual) plus upper and lower bounds intended to capture plausible alternatives.
Exposure
National Health Interview Surveys yielded cigarette smoking histories for the US adult population from 1964–2012.
Main Outcomes and Measures
Number of premature deaths avoided and years of life saved were primary outcomes. Change in life expectancy at age 40 associated with change in cigarette smoking exposure constituted another measure of overall health outcomes.
Results
From 1964–2012, an estimated 17.6 million deaths were related to smoking. An estimated 8.0 million (7.4–8.3 million, for the lower and upper tobacco control counterfactuals, respectively) fewer premature smoking-induced deaths occurred than would have under the alternatives, and these are thus associated with tobacco control (5.3 [4.8–5.5] million males and 2.7 [2.5–2.7] million females). This resulted in an estimated 157 (139–165) million years of life saved, a mean of 19.6 years for each beneficiary (111 [97–117] million years for males, 46 [42–48] million for females). During this time, estimated life expectancy at age 40 increased 7.8 years for males and 5.4 years for females, of which tobacco control is associated with 2.3 (1.8–2.5) years (30% [23%–32%]) of the increase for males and 1.6 (1.4–1.7) years (29% [25%–32%]) for females.
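The headline figures above are internally consistent; as a quick check (not part of the original analysis), the mean years of life saved per beneficiary equals total years saved divided by deaths avoided:

```python
# Illustrative consistency check using the totals reported in the abstract.
years_of_life_saved = 157e6  # 157 million years of life saved
deaths_avoided = 8.0e6       # 8.0 million premature deaths avoided

mean_years_per_beneficiary = years_of_life_saved / deaths_avoided
print(round(mean_years_per_beneficiary, 1))  # 19.6 years, as reported
```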
Conclusions and Relevance
Tobacco control is associated with the avoidance of millions of premature deaths and a mean of 19 to 20 years of additional life for each death avoided. While tobacco control represents an important public health achievement, smoking continues to be the leading contributor to the nation’s death toll.
doi:10.1001/jama.2013.285112
PMCID: PMC4056770  PMID: 24399555
Smoking; Mortality; Tobacco control; Surgeon General’s Report
22.  Effect of Erythropoietin Administration and Transfusion Threshold on Neurological Recovery After Traumatic Brain Injury 
Importance
There is limited information about the effect of erythropoietin or a high transfusion threshold in traumatic brain injury (TBI).
Objective
To compare the effects of erythropoietin and two transfusion thresholds (7 and 10 g/dL) on neurological recovery after TBI.
Design
Randomized trial using a factorial design to test (1) whether erythropoietin would fail to improve favorable outcomes by 20% and (2) whether a transfusion threshold of greater than 10 g/dL would increase favorable outcomes without increasing complications.
Setting
Neurosurgical intensive care units of two Houston level 1 trauma centers
Participants
Between May 2006 and August 2012, 200 patients with closed head injury who were unable to follow commands were enrolled within 6 hours of injury; 102 patients received erythropoietin and 98 received placebo. Erythropoietin or placebo was initially dosed daily for 3 days and then weekly for 2 more weeks (n = 74); the 24-hour and 48-hour doses were then dropped for the remaining patients (n = 126). Ninety-nine patients were assigned to the 7-g/dL transfusion threshold and 101 to the 10-g/dL threshold.
Intervention
Intravenous erythropoietin (500 IU/kg per dose) or saline. Transfusion thresholds were maintained with packed red blood cell transfusion.
Main Outcome
Glasgow Outcome Scale dichotomized as favorable (good recovery and moderate disability) and unfavorable (severe disability, vegetative, or dead) at 6 months post-injury.
Results
There was no erythropoietin–transfusion threshold interaction. Compared with placebo (favorable outcome rate: 34/89 [38.2%]; 95% CI, 28.2%–49.1%), both erythropoietin dosing regimens met prespecified futility criteria (first regimen: 17/35 [48.6%]; 95% CI, 31.4%–66.0%; P = .13; second regimen: 17/57 [29.8%]; 95% CI, 18.4%–43.4%; P < .001). Favorable outcome rates were 37/87 (42.5%) and 31/94 (33.0%) in the 7-g/dL and 10-g/dL threshold groups, respectively (95% CI for the difference, −0.05 to 0.25; P = .28). There was a higher incidence of thromboembolic events in the 10-g/dL threshold group (22/101 [21.8%] vs 8/99 [8.1%]; P = .009).
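The confidence interval for the difference in favorable outcome rates can be approximated from the raw counts. This sketch assumes a simple normal (Wald) approximation, which may differ slightly from the trial's actual method:

```python
# Sketch (assumption: a plain normal approximation for the difference of
# two proportions; the trial's exact method may differ) of the CI for the
# difference in favorable outcome rates between transfusion thresholds.
import math

x1, n1 = 37, 87   # favorable outcomes, 7-g/dL threshold group
x2, n2 = 31, 94   # favorable outcomes, 10-g/dL threshold group

p1, p2 = x1 / n1, x2 / n2
diff = p1 - p2                                              # ~0.095
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)     # standard error
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(round(lo, 2), round(hi, 2))  # -0.05 0.24
```

The small discrepancy in the upper bound (0.24 here vs the reported 0.25) is consistent with a different variance estimator or rounding in the original analysis.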
Conclusions and Relevance
In patients with closed head injury, neither the administration of erythropoietin nor maintaining a hemoglobin concentration of greater than 10 g/dL resulted in improved neurological outcome at 6 months, and the 10-g/dL threshold was associated with a higher incidence of adverse events. These findings do not support either approach in this setting.
doi:10.1001/jama.2014.6490
PMCID: PMC4113910  PMID: 25058216
23.  Transendocardial Mesenchymal Stem Cells and Mononuclear Bone Marrow Cells for Ischemic Cardiomyopathy: The TAC-HFT Randomized Trial 
Importance
Whether culture-expanded mesenchymal stem cells or whole bone marrow mononuclear cells are safe and effective in chronic ischemic cardiomyopathy (ICM) remains controversial.
Objective
To demonstrate the safety of transendocardial stem cell injection with autologous mesenchymal stem cells (MSCs) and whole bone marrow mononuclear cells (BMCs) in patients with ischemic cardiomyopathy.
Design, Setting and Patients
A phase 1 and 2 randomized, blinded, placebo-controlled study involving 65 patients with ischemic cardiomyopathy and left ventricular (LV) ejection fraction less than 50% (September 1, 2009–July 12, 2013). The study compared injection of MSCs (n = 19) vs placebo (n = 11) and of BMCs (n = 19) vs placebo (n = 10), with 1 year of follow-up.
Interventions
Injections into 10 LV sites with an infusion catheter.
Main Outcomes and Measures
Treatment-emergent 30 day serious adverse event rate defined as composite of death, myocardial infarction, stroke, hospitalization for worsening heart failure, perforation, tamponade or sustained ventricular arrhythmias.
Results
No patient had a treatment-emergent serious adverse event at day 30. The 1-year incidence of serious adverse events was 31.6% (95% CI, 12.6%–56.6%) for MSCs, 31.6% (95% CI, 12.6%–56.6%) for BMCs, and 38.1% (95% CI, 18.1%–61.6%) for placebo. Over 1 year, the Minnesota Living with Heart Failure (MLHF) score improved with MSCs (repeated-measures ANOVA, P = .02) and BMCs (P = .005) but not placebo (P = .38), and 6-minute walk distance increased with MSCs only (repeated-measures model, P = .03). Infarct size as a percentage of LV mass was reduced by MSCs (−18.9%; 95% CI, −30.4% to −7.4%; within-group P = .004) but not by BMCs (−7.0%; 95% CI, −15.7% to 1.7%; within-group P = .11) or placebo (−5.2%; 95% CI, −16.8% to 6.5%; within-group P = .36). Regional myocardial function, measured as peak Eulerian circumferential strain at the site of injection, improved with MSCs (−4.9; 95% CI, −13.3 to 3.5; within-group repeated-measures P = .03) but not BMCs (−2.1; 95% CI, −5.5 to 1.3; P = .21) or placebo (−0.03; 95% CI, −1.9 to 1.9; P = .14). Left ventricular chamber volume and ejection fraction did not change.
Conclusions and Relevance
Transendocardial stem cell injection with MSCs or BMCs appeared to be safe for patients with chronic ischemic cardiomyopathy and LV dysfunction. Although the sample size and multiple comparisons preclude a definitive statement about safety and clinical effect, these results provide the basis for larger studies to provide definitive evidence about safety and to assess efficacy of this new therapeutic approach.
doi:10.1001/jama.2013.282909
PMCID: PMC4111133  PMID: 24247587
24.  Clinical Trial Evidence Supporting FDA Approval of Novel Therapeutics, 2005–2012 
Importance
Many patients and physicians assume that the safety and effectiveness of newly approved therapeutics are well understood; however, the strength of the clinical trial evidence supporting approval decisions by the Food and Drug Administration (FDA) has not been evaluated.
Objectives
To characterize pivotal efficacy trials (the clinical trials that served as the basis of FDA approval) for newly approved novel therapeutics.
Design and Setting
Cross-sectional analysis using publicly available FDA documents for all novel therapeutics approved between 2005 and 2012.
Main Outcome Measures
We classified pivotal efficacy trials according to the following design features: randomization, blinding, comparator, and trial endpoint. “Surrogate outcomes” were defined as any endpoint using a biomarker expected to predict clinical benefit. We also determined the number of patients, trial duration, and trial completion rates.
Results
Between 2005 and 2012, the FDA approved 188 novel therapeutics for 206 indications on the basis of 448 pivotal efficacy trials. The median number of pivotal trials per indication was 2 (interquartile range, 1–2.5), although 74 indications (36.8%) were approved on the basis of a single pivotal trial. Nearly all trials were randomized (89.3%; 95% confidence interval [CI], 86.4%–92.2%), double-blinded (79.5%; 95% CI, 75.7%–83.2%), and used either an active or placebo comparator (87.1%; 95% CI, 83.9%–90.2%). The median number of patients enrolled per indication among all pivotal trials was 760 (interquartile range, 270–1550). At least 1 pivotal trial with a duration of 6 months or greater supported the approval of 68 indications (33.8%; 95% CI, 27.2%–40.4%). Pivotal trials using surrogate endpoints as their primary outcome formed the exclusive basis of approval for 91 indications (45.3%; 95% CI, 38.3%–52.2%), clinical outcomes for 67 (33.3%; 95% CI, 26.8%–39.9%), and clinical scales for 36 (17.9%; 95% CI, 12.6%–23.3%). Trial features differed by therapeutic and indication characteristics, such as therapeutic area, expected length of treatment, orphan status, and accelerated approval.
Conclusions and Relevance
The quality of clinical trial evidence used by the FDA as the basis of recent novel therapeutic approvals varied widely across indications.
doi:10.1001/jama.2013.282034
PMCID: PMC4144867  PMID: 24449315
25.  Diagnostic Accuracy of Fractional Flow Reserve From Anatomic CT Angiography 
JAMA  2012;308(12):1237-1245.
Context
Coronary computed tomographic (CT) angiography is a noninvasive anatomic test for diagnosis of coronary stenosis that does not determine whether a stenosis causes ischemia. In contrast, fractional flow reserve (FFR) is a physiologic measure of coronary stenosis expressing the amount of coronary flow still attainable despite the presence of a stenosis, but it requires an invasive procedure. Noninvasive FFR computed from CT (FFRCT) is a novel method for determining the physiologic significance of coronary artery disease (CAD), but its ability to identify ischemia has not been adequately examined to date.
Objective
To assess the diagnostic performance of FFRCT plus CT for diagnosis of hemodynamically significant coronary stenosis.
Design, Setting, and Patients
Multicenter diagnostic performance study involving 252 stable patients with suspected or known CAD from 17 centers in 5 countries who underwent CT, invasive coronary angiography (ICA), FFR, and FFRCT between October 2010 and October 2011. Computed tomography, ICA, FFR, and FFRCT were interpreted in blinded fashion by independent core laboratories. Accuracy of FFRCT plus CT for diagnosis of ischemia was compared with an invasive FFR reference standard. Ischemia was defined by an FFR or FFRCT of 0.80 or less, while anatomically obstructive CAD was defined by a stenosis of 50% or larger on CT and ICA.
Main Outcome Measures
The primary study outcome assessed whether FFRCT plus CT could improve the per-patient diagnostic accuracy such that the lower boundary of the 1-sided 95% confidence interval of this estimate exceeded 70%.
Results
Among study participants, 137 (54.4%) had an abnormal FFR determined by ICA. On a per-patient basis, diagnostic accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of FFRCT plus CT were 73% (95% CI, 67%–78%), 90% (95% CI, 84%–95%), 54% (95% CI, 46%–83%), 67% (95% CI, 60%–74%), and 84% (95% CI, 74%–90%), respectively. Compared with obstructive CAD diagnosed by CT alone (area under the receiver operating characteristic curve [AUC], 0.68; 95% CI, 0.62–0.74), FFRCT was associated with improved discrimination (AUC, 0.81; 95% CI, 0.75–0.86; P<.001).
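As an illustration (using the rounded figures from the abstract, so agreement with the reported values is only approximate), the per-patient predictive values and accuracy follow from sensitivity, specificity, and prevalence via Bayes' rule:

```python
# Sketch (illustrative, with rounded inputs from the abstract): relating
# the reported predictive values to sensitivity, specificity, and the
# prevalence of abnormal invasive FFR.
sens, spec = 0.90, 0.54  # reported sensitivity and specificity of FFRCT plus CT
prev = 137 / 252         # prevalence of abnormal FFR (54.4%)

ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
accuracy = sens * prev + spec * (1 - prev)

# Because the inputs are rounded, these land near (not exactly on) the
# reported 67%, 84%, and 73%.
print(round(ppv, 2), round(npv, 2), round(accuracy, 2))
```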
Conclusion
Although the study did not achieve its prespecified primary outcome goal for the level of per-patient diagnostic accuracy, use of noninvasive FFRCT plus CT among stable patients with suspected or known CAD was associated with improved diagnostic accuracy and discrimination vs CT alone for the diagnosis of hemodynamically significant CAD when FFR determined at the time of ICA was the reference standard.
doi:10.1001/2012.jama.11274
PMCID: PMC4281479  PMID: 22922562
