Selective serotonin reuptake inhibitor (SSRI) medications have been linked to increased bleeding risk; however, the association between warfarin, SSRI exposure, and bleeding risk has not been well established. We studied the AnTicoagulation and Risk factors In Atrial fibrillation (ATRIA) cohort of 13,559 adults with atrial fibrillation (AF), restricted to the 9186 patients contributing follow-up time while taking warfarin. Exposure to SSRIs and tricyclic antidepressants (TCAs) was assessed from pharmacy database dispensing data. The main outcome was hospitalization for major hemorrhage. Results were adjusted for bleeding risk and time in an INR range > 3. We identified 461 major hemorrhages during 32,888 person-years of follow-up: 45 during SSRI use, 12 during TCA-only use, and 404 without either medication. Hemorrhage rates were higher during periods of SSRI exposure than during periods on no antidepressants (2.32 vs. 1.35 per 100 person-years, p ≤ 0.001) and did not differ between TCA exposure and no antidepressants (1.30 per 100 person-years on TCAs, p = 0.93). After adjusting for bleeding risk and time in INR range > 3, SSRI exposure was associated with an increased rate of hemorrhage compared with no antidepressants (adjusted relative risk 1.41, 95% CI: 1.04-1.92, p = 0.03), whereas TCA exposure was not (adjusted relative risk 0.82, 95% CI: 0.46-1.46, p = 0.50). In conclusion, SSRI exposure was associated with higher major hemorrhage risk in patients on warfarin, and this risk should be considered when selecting antidepressant treatment for these patients.
Anticoagulation; atrial fibrillation; warfarin; bleeding risk
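The crude hemorrhage rates in the abstract above are simple event counts divided by exposure time. A minimal sketch of that calculation, with person-year denominators back-calculated from the reported rates (the abstract does not state them directly, so they are illustrative):

```python
def rate_per_100py(events: int, person_years: float) -> float:
    """Crude incidence rate per 100 person-years of follow-up."""
    return 100.0 * events / person_years

# Denominators back-calculated as events / rate * 100 (illustrative only).
ssri_py = 100.0 * 45 / 2.32    # ~1940 person-years of SSRI exposure
none_py = 100.0 * 404 / 1.35   # ~29,926 person-years on no antidepressant

print(round(rate_per_100py(45, ssri_py), 2))   # recovers 2.32
print(round(rate_per_100py(404, none_py), 2))  # recovers 1.35
```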
Improving the ability to risk-stratify patients is critical for efficiently allocating resources within healthcare systems.
The purpose of this study was to evaluate a physician-defined complexity prediction model against outpatient Charlson score (OCS) and a commercial risk predictor (CRP).
Using a cohort in which primary care physicians reviewed 4302 of their adult patients, we developed a predictive model for estimated physician-defined complexity (ePDC) and categorized our population using ePDC, OCS and CRP.
143,372 primary care patients in a practice-based research network participated in the study.
For all patients categorized as complex in 2007 by one or more risk-stratification method, we calculated the percentage of total person time from 2008–2011 for which eligible cancer screening was incomplete, HbA1c was ≥ 9 %, and LDL was ≥ 130 mg/dl (in patients with cardiovascular disease). We also calculated the number of emergency department (ED) visits and hospital admissions per person year (ppy).
There was modest agreement among individuals classified as complex using ePDC compared with OCS (36.7 %) and CRP (39.6 %). Over 4 follow-up years, eligible ePDC-complex patients had higher proportions (p < 0.001) of time with: incomplete cervical (17.8 % vs. 13.3 % for OCS; 19.4 % vs. 11.2 % for CRP), breast (21.4 % vs. 14.9 % for OCS; 22.7 % vs. 15.0 % for CRP), and colon (25.9 % vs. 18.7 % for OCS; 27.0 % vs. 18.2 % for CRP) cancer screening; HbA1c ≥ 9 % (15.6 % vs. 8.1 % for OCS; 15.9 % vs. 6.9 % for CRP); and LDL ≥ 130 mg/dl (12.4 % vs. 7.9 % for OCS; 11.8 % vs 9.0 % for CRP). ePDC-complex patients had higher rates (p < 0.003) of: ED visits (0.21 vs. 0.11 ppy for OCS; 0.17 vs. 0.15 ppy for CRP), and admissions in patients 45–64 and ≥ 65 years old (0.11 vs. 0.10 ppy AND 0.24 vs. 0.21 ppy for OCS).
Our measure for estimated physician-defined complexity compared favorably to commonly used risk-prediction approaches in identifying future suboptimal quality and utilization outcomes.
Electronic supplementary material
The online version of this article (doi:10.1007/s11606-015-3357-8) contains supplementary material, which is available to authorized users.
Homeless people have a high burden of cancer risk factors and suboptimal rates of cancer screening, but the epidemiology of cancer has not been well described in this population. We assessed cancer incidence, stage, and mortality in homeless adults relative to general population standards.
We cross-linked a cohort of 28,033 adults seen at Boston Health Care for the Homeless Program in 2003–2008 to Massachusetts cancer registry and vital registry records. We calculated age-standardized cancer incidence and mortality ratios (SIRs and SMRs). We examined tobacco use among incident cases and estimated smoking-attributable fractions. Trend tests were used to compare cancer stage distributions with those in Massachusetts adults. Analyses were conducted in 2012–2015.
During 90,450 person-years of observation, there were 361 incident cancers (SIR=1.13, 95% CI=1.02, 1.25) and 168 cancer deaths (SMR=1.88, 95% CI=1.61, 2.19) among men, and 98 incident cancers (SIR=0.93, 95% CI=0.76, 1.14) and 38 cancer deaths (SMR=1.61, 95% CI=1.14, 2.20) among women. For both sexes, bronchus and lung cancer was the leading type of incident cancer and cancer death, exceeding Massachusetts estimates more than twofold. Oropharyngeal and liver cancer cases and deaths occurred in excess among men, whereas cervical cancer cases and deaths occurred in excess among women. About one third of incident cancers were smoking-attributable. Colorectal, female breast, and oropharyngeal cancers were diagnosed at more-advanced stages than in Massachusetts adults.
Efforts to reduce cancer disparities in homeless people should include addressing tobacco use and enhancing participation in evidence-based screening.
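The SIRs and SMRs in the abstract above are ratios of observed to expected events, where "expected" counts come from applying general-population (here, Massachusetts) age-specific rates to the cohort's person-time. A minimal sketch, with expected counts back-calculated from the reported ratios for illustration rather than taken from registry data:

```python
def standardized_ratio(observed: int, expected: float) -> float:
    """Standardized incidence (or mortality) ratio: observed events divided
    by the count expected under general-population age-specific rates."""
    return observed / expected

# Expected counts back-calculated from the reported ratios (illustrative).
expected_cases_men = 361 / 1.13   # ~319 expected incident cancers among men
expected_deaths_men = 168 / 1.88  # ~89 expected cancer deaths among men

print(round(standardized_ratio(361, expected_cases_men), 2))   # recovers 1.13
print(round(standardized_ratio(168, expected_deaths_men), 2))  # recovers 1.88
```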
Decisions about cardiopulmonary resuscitation (CPR) and intubation are a core part of advance care planning, particularly for seriously ill hospitalized patients. However, these discussions are often avoided.
We aimed to examine the impact of a video decision tool for CPR and intubation on patients’ choices, knowledge, medical orders, and discussions with providers.
This was a prospective randomized trial conducted between 9 March 2011 and 1 June 2013 on the internal medicine services at two hospitals in Boston.
One hundred and fifty seriously ill hospitalized patients over the age of 60 with an advanced illness and a prognosis of 1 year or less were included. Mean age was 76 years, and 51 % were women.
Three-minute video describing CPR and intubation plus verbal communication of participants’ preferences to their physicians (intervention) (N = 75) or control arm (usual care) (N = 75).
The primary outcome was participants’ preferences for CPR and intubation (immediately after viewing the video in the intervention arm). Secondary outcomes included: orders to withhold CPR/intubation, documented discussions with providers during hospitalization, and participants’ knowledge of CPR/intubation (five-item test, range 0–5; higher scores indicate greater knowledge).
Intervention participants (vs. controls) were more likely not to want CPR (64 % vs. 32 %, p < 0.0001) and intubation (72 % vs. 43 %, p < 0.0001). Intervention participants (vs. controls) were also more likely to have orders to withhold CPR (57 % vs. 19 %, p < 0.0001) and intubation (64 % vs. 19 %, p < 0.0001) by hospital discharge, documented discussions about their preferences (81 % vs. 43 %, p < 0.0001), and higher mean knowledge scores (4.11 vs. 2.45; p < 0.0001).
Seriously ill patients who viewed a video about CPR and intubation were more likely not to want these treatments, to be better informed about their options, to have orders to forgo CPR/intubation, and to discuss preferences with providers.
Trial registration: Clinicaltrials.gov NCT01325519
Registry Name: A prospective randomized trial using video images in advance care planning in seriously ill hospitalized patients.
Background: Fulminant Clostridium difficile colitis (fCDC) is a highly lethal disease, with mortality rates ranging between 12% and 80%. Although these patients often require a total abdominal colectomy (TAC) with ileostomy, there is no established management protocol for post-operative antibiotics. In this study we describe practice across different institutions and make recommendations for post-operative antibiotic usage.
Methods: Multi-institutional retrospective case series of fCDC patients who underwent a TAC between January 1, 2007, and June 30, 2012. We first analyzed the complete cohort and subsequently performed a survivor analysis comparing different antibiotic regimens. Additionally, we stratified by treatment duration (antibiotics for ≤7 d or ≥8 d). The primary outcome was in-hospital mortality. Secondary outcomes included hospital length of stay (HLOS), ICU LOS, number of ventilator-free days, and occurrence of intra-abdominal complications (proctitis, abscess, sepsis, etc.).
Results: A total of 100 fCDC patients who underwent a TAC were included across five institutions. Four antibiotic regimens were compared: A (metronidazole IV + vancomycin PO), B (metronidazole IV), C (metronidazole IV + vancomycin PO and PR), and D (metronidazole IV + vancomycin PR). IV metronidazole with or without PO vancomycin showed superior outcomes in terms of shorter ICU length of stay and more ventilator-free days. However, when comparing metronidazole alone vs. metronidazole plus any combination of vancomycin, no significant differences were found. Neither the addition of a vancomycin enema nor the treatment duration changed outcomes.
Conclusion: After a TAC for fCDC, patients may be placed on either IV metronidazole or PO vancomycin depending upon local antibiograms, and proctitis may be treated with the addition of a vancomycin enema (PR). There were no data to support routine treatment for more than 7 d.
Falls among older adults (aged ≥65 years) are the leading cause of both injury deaths and emergency department (ED) visits for trauma. We examine the characteristics and prevalence of older adult ED fallers as well as the recurrent ED visit and mortality rate.
This was a retrospective analysis of a cohort of elderly fall patients who presented between 2005 and 2011 to the EDs of two urban, level 1 trauma teaching hospitals with approximately 80,000-95,000 annual visits. We examined the frequency of ED revisits and death at 3 days, 7 days, 30 days, and 1 year, controlling for certain covariates.
Our cohort included 21,340 patients with an average age of 78.6 years. An increasing proportion of patients revisited the ED over the course of a year, from 2% at 3 days to 25% at 1 year. Death rates increased from 1.2% at 3 days to 15% at 1 year. A total of 10,728 (50.2%) patients returned to the ED at some point during our 7-year study period, and 36% of patients had an ED revisit or death within 1 year. In multivariate logistic regression, male sex and comorbidities were associated with ED revisits and death.
Over a third of older adult ED fall patients had an ED revisit or died within 1 year. Falls are one of the geriatric syndromes that contribute to frequent ED revisits and deaths. Future research should determine whether falls increase the risk of such outcomes and how to prevent future falls and deaths.
geriatrics; emergency care; trauma
Improving colorectal cancer (CRC) screening rates for patients from socioeconomically disadvantaged backgrounds is a recognized public health priority.
Our aim was to determine if implementation of a system-wide screening intervention could reduce disparities in the setting of improved overall screening rates.
This was an interrupted time series (ITS) analysis before and after a population management intervention.
Patients eligible for CRC screening (age 52–75 years without prior total colectomy) in an 18-practice research network from 15 June 2009 to 15 June 2012 participated in the study.
The Technology for Optimizing Population Care (TopCare) intervention electronically identified patients overdue for screening and facilitated contact by letter or telephone scheduler, with or without physician involvement. Patients identified by algorithm as high risk for non-completion were entered into intensive patient navigation.
Patients were dichotomized as ≤ high school diploma (≤ HS), an indicator of socioeconomic disadvantage, vs. > HS diploma (> HS). The monthly disparity between ≤ HS and > HS patients with regard to CRC screening completion was examined.
At baseline, 72 % of 47,447 eligible patients had completed screening, compared with 75 % of 51,442 eligible patients at the end of follow-up (p < 0.001). CRC screening completion was lower in ≤ HS vs. > HS patients in June 2009 (65.7 % vs. 74.5 %, p < 0.001) and remained lower in June 2012 (69.4 % vs. 76.7 %, p < 0.001). In the ITS analysis, which accounts for secular trends, TopCare was associated with a significant decrease in the CRC screening disparity (0.7 %, p < 0.001). The effect of TopCare represents approximately 99 additional ≤ HS patients screened above prevailing trends, or 26 life-years gained had these patients remained unscreened.
A multifaceted population management intervention sensitive to the needs of vulnerable patients modestly narrowed disparities in CRC screening, while also increasing overall screening rates. Embedding interventions for vulnerable patients within larger population management systems represents an effective approach to increasing overall quality of care while also decreasing disparities.
colorectal cancer; cancer screening; health status disparities; socioeconomic factors; quality improvement
Warfarin reduces ischemic stroke risk in atrial fibrillation (AF) but increases bleeding risk. Novel anticoagulants challenge warfarin as stroke-preventive therapy for AF; they are available at fixed doses but are more costly. Warfarin anticoagulation with a time in therapeutic range (TTR) ≥70% is as effective and safe as novel anticoagulants. It is unclear whether AF patients with TTR ≥70% will remain stably anticoagulated and avoid the need to switch to a novel anticoagulant. We assessed the stability of warfarin anticoagulation in AF patients with an initial TTR ≥70%.
Methods and Results
Within the community-based Anticoagulation and Risk Factors in AF (ATRIA) cohort followed from 1996 to 2003, we identified 2841 new warfarin users who continued warfarin over 9 months. We excluded months 1 to 3 to achieve a stable dose. For the 987 patients with TTR ≥70% in an initial 6-month period (TTR1; months 4–9), we described the distribution of TTR2 (months 10–15) and assessed multivariable correlates of persistent TTR ≥70%. Of patients with TTR1 ≥70%, 57% persisted with TTR2 ≥70% and 16% deteriorated to TTR2 <50%. Only initial TTR1 ≥90% (adjusted odds ratio 1.47, 95% CI 1.07–2.01) independently predicted TTR2 ≥70%. Heart failure was moderately associated with marked deterioration (TTR2 <50%; adjusted odds ratio 1.45, 95% CI 1.00–2.10).
Nearly 60% of AF patients with high‐quality TTR1 on warfarin maintained TTR ≥70% over the next 6 months. A minority deteriorated to very poor TTR. Patient features did not strongly predict TTR in the second 6‐month period. Our analyses support watchful waiting for AF patients with initial high‐quality warfarin anticoagulation before considering alternative anticoagulants.
anticoagulants; arrhythmia; embolism; prevention; risk factors; Atrial Fibrillation; Anticoagulants; Primary Prevention; Epidemiology
Every U.S. state has a free telephone quitline that tobacco users can access for cessation assistance, yet referral rates for parents in the pediatric setting remain low. This study evaluates, within pediatric offices, the impact of proactively enrolling parents in a quitline compared with a provider suggestion to use the quitline, and identifies other factors associated with parental quitline use.
As part of a cluster randomized controlled trial (Clinical Effort Against Secondhand Smoke Exposure), research assistants completed post-visit exit interviews with parents in 20 practices in 16 states. Parents’ quitline use was assessed at a 12-month follow-up interview. A multivariable analysis was conducted for quitline use at 12 months using a logistic regression model with generalized estimating equations to account for provider clustering. Self-reported cessation rates were also compared among quitline users based on the type of referral they received at their child’s doctor’s office.
Of the 1980 parents enrolled in the study, 1355 (68 %) completed a 12-month telephone interview, and of those, 139 (10 %) reported talking with a quitline (15 % intervention versus 6 % control; p < .0001). Parents who were Hispanic (aOR 2.12 (1.22, 3.70)), black (aOR 1.57 (1.14, 2.16)), planned to quit smoking in the next 30 days (aOR 2.32 (1.47, 3.64)), and had attended an intervention practice (aOR 2.37 (1.31, 4.29)) were more likely to have talked with a quitline. Parents who only received a suggestion from a healthcare provider to use the quitline (aOR 0.45 (0.23, 0.90)) and those who were not enrolled and did not receive a suggestion (aOR 0.33 (0.17, 0.64)) were less likely to talk with a quitline than those who were enrolled in the quitline during the baseline visit. Self-reported cessation rates among quitline users were similar regardless of being proactively enrolled (19 %), receiving only a suggestion (25 %), or receiving neither a suggestion nor an enrollment (17 %) during a visit (p = 0.47).
These results highlight the enhanced clinical effectiveness of not just recommending the quitline to parents but also offering them enrollment in the quitline at the time of their child's visit to the pediatric office.
ClinicalTrials.gov, Identifier: NCT00664261
Smoking; Tobacco; Pediatrics; Family practice; Parent; Smoking cessation; Secondhand smoke; Tobacco control; Quitline; Telephone counseling
Lack of timely medication intensification and inadequate medication safety monitoring are two prevalent and potentially modifiable barriers to effective and safe chronic care. Innovative applications of health information technology tools may help support chronic disease management.
To examine the clinical impact of a novel health IT tool designed to facilitate between-visit ordering and tracking of future laboratory testing.
Design and Participants
Clinical trial randomized at the provider level (n = 44 primary care physicians); patient-level outcomes among 3,655 primary care patients prescribed 5,454 oral medicines for hyperlipidemia, diabetes, and/or hypertension management over a 12-month period.
Time from prescription to corresponding follow-up laboratory testing; proportion of follow-up time that patients achieved corresponding risk factor control (A1c, LDL); adverse event laboratory monitoring 4 weeks after medicine prescription.
Patients whose physicians were allocated to the intervention (n = 1,143) had earlier LDL laboratory assessment compared to similar patients (n = 703) of control physicians [adjusted hazard ratio (aHR): 1.15 (1.01-1.32), p = 0.04]. Among patients with elevated LDL (486 intervention, 324 control), there was a trend toward decreased time to LDL goal in the intervention group [aHR 1.26 (0.99-1.62)]. However, overall there were no significant differences between study arms in time spent at LDL or HbA1c goal. Follow-up safety monitoring (e.g., creatinine, potassium, or transaminases) was relatively infrequent (ranging from 7 % to 29 % at 4 weeks) and did not differ statistically between arms. Intervention physicians indicated that lack of reimbursement for non-visit-based care was a barrier to use of the tool.
A health IT tool to support between-visit laboratory monitoring improved the LDL testing interval but not LDL or HbA1c control, and it did not alter safety monitoring. Adoption of innovative tools to support physicians in non-visit-based chronic disease management may be limited by current visit-based financial and productivity incentives.
Electronic supplementary material
The online version of this article (doi:10.1007/s11606-014-3152-y) contains supplementary material, which is available to authorized users.
Little is known about the pattern of electronic cigarette (e-cigarette) use over time or among smokers with medical comorbidity.
We assessed current cigarette smokers’ use of e-cigarettes during the 30 days before admission to 9 hospitals in 5 geographically dispersed US cities: Birmingham, AL; Boston, MA; Kansas City, KS; New York, NY; and Portland, OR. Each hospital was conducting a randomized controlled trial as part of the NIH-sponsored Consortium of Hospitals Advancing Research on Tobacco (CHART). We conducted a pooled analysis using multiple logistic regression to examine changes in e-cigarette use over time and to identify correlates of e-cigarette use.
Among 4,660 smokers hospitalized between July 2010 and December 2013 (mean age 57 years, 57% male, 71% white, 56% some college, average 14 cigarettes/day), 14% reported using an e-cigarette during the 30 days before admission. The prevalence of e-cigarette use increased from 1.1% in 2010 to 10.3% in 2011, 10.2% in 2012, and 18.4% in 2013; the increase was statistically significant (p < .0001) after adjustment for age, sex, education, and CHART study. Younger, better educated, and heavier smokers were more likely to use e-cigarettes. Smokers who were Hispanic, non-Hispanic black, and who had Medicaid or no insurance were less likely to use e-cigarettes. E-cigarette use also varied by CHART project and by geographic region.
E-cigarette use increased substantially from 2010 to 2013 among a large sample of hospitalized adult cigarette smokers. E-cigarette use was more common among heavier smokers and among those who were younger, white, and who had higher socioeconomic status.
Malnutrition and underfeeding are major challenges in caring for critically ill patients. Our goal was to characterize interruptions in enteral nutrition (EN) delivery and their impact on caloric debt in the surgical intensive care unit (ICU).
Materials and Methods
We performed a prospective, observational study of adults admitted to surgical ICUs at a Boston teaching hospital (March–December 2012). We categorized EN interruptions as “unavoidable” vs “avoidable” and compared caloric deficit between patients with ≥1 EN interruption (group 1) vs those without interruptions (group 2). Multivariable logistic regression was used to investigate the association of EN interruption with the risk of underfeeding. Poisson regression was used to investigate the association of EN interruption with length of stay (LOS) and mortality.
The analytic cohort comprised 94 patients. Twenty-six percent of interruptions were deemed “avoidable.” Group 1 (n = 64) had a significantly higher mean daily and cumulative caloric deficit vs group 2 (n = 30). Patients in group 1 were at a nearly 3-fold increased risk of being underfed (adjusted odds ratio, 2.89; 95% confidence interval [CI], 1.03–8.11), had a 30% higher risk of prolonged ICU LOS (adjusted incident risk ratio [IRR], 1.27; 95% CI, 1.14–1.42), and had a 50% higher risk of prolonged hospital LOS (adjusted IRR, 1.53; 95% CI, 1.41–1.67) vs group 2.
In our cohort of critically ill surgical patients, EN interruption was frequent, largely “unavoidable,” and associated with undesirable outcomes. Future efforts to optimize nutrition in the surgical ICU may benefit from considering strategies that maximize nutrient delivery before and after clinically appropriate EN interruptions.
malnutrition; caloric debt; ICU nutrition; critical illness
To determine whether an evidence-based pediatric outpatient intervention for parents who smoke persisted after initial implementation.
A cluster randomized controlled trial of 20 pediatric practices in 16 states that received either the Clinical and Community Effort Against Secondhand Smoke Exposure (CEASE) intervention or usual care. The intervention provided practices with training to deliver evidence-based assistance to parents who smoke. The primary outcome, assessed by a 12-month follow-up telephone survey with parents, was provision of meaningful tobacco control assistance, defined as discussing various strategies to quit smoking, discussing smoking cessation medication, or recommending use of the state quitline after the initial enrollment visit. We also assessed parental quit rates at 12 months, determined by self-report and biochemical verification.
Practices’ rates of providing any meaningful tobacco control assistance (55% vs 19%), discussing various strategies to quit smoking (25% vs 10%), discussing cessation medication (41% vs 11%), and recommending use of the quitline (37% vs 9%) were all significantly higher in the intervention group than in the control group (P < .0001 for each) during the 12 months after intervention implementation. Receiving any assistance was associated with a cotinine-confirmed quitting adjusted odds ratio of 1.89 (95% confidence interval: 1.13–3.19). After controlling for demographic and behavioral factors, the adjusted odds ratio for cotinine-confirmed quitting in intervention versus control practices was 1.07 (95% confidence interval: 0.64–1.78).
Intervention practices had higher rates of delivering tobacco control assistance than usual care practices over the 1-year follow-up period. Parents who received any assistance were more likely to quit smoking; however, parents’ likelihood of quitting smoking was not statistically different between the intervention and control groups. Maximizing parental quit rates will require more complete systems-level integration and adjunctive cessation strategies.
parental smoking; smoking cessation; secondhand smoke
Most parental smokers are deeply invested in their child’s health, but it is unknown what factors influence parent risk perceptions of the effects of smoking on their child’s health and benefits to the child of cessation.
To explore differences in former versus current smokers’ beliefs about harms of continuing to smoke, benefits of quitting, and how much smoking interferes with their parenting.
As part of a cluster RCT to increase tobacco control in the pediatric setting, we analyzed data collected at the ten control arm practices for 24 months starting in May 2010; a cross-sectional secondary data analysis was conducted in 2013. Parents were asked about smoking status and perceived harm, benefit, and well-being related to smoking behaviors.
Of the 981 enrolled smoking parents, 710 (72.4%) were contacted at 12 months. The odds of having successfully quit at 12 months were 4.12 times greater (95% CI=1.57, 10.8) for parents who believed that quitting would benefit their children, 1.68 times greater (95% CI=1.13, 2.51) for parents with more than a high school education, and 1.74 times greater (95% CI=1.13, 2.68) for parents with children under age 3 years. A prior quit attempt was also associated with having successfully quit.
Providers’ smoking-cessation advice and support should begin early and underscore how cessation will benefit the health and well-being of patients’ children. Additionally, parents who have recently attempted to quit may be particularly primed for another attempt.
Objectives. To study the relationship between nocturnal periodic breathing (PB) episodes and behavioral awakenings at high altitude. Methods. Observational study of a 6-day ascent by 4 healthy subjects from Besisahar (760 meters) to Manang (3540 meters) in Nepal in March 2012. Each subject wore a recording pulse oximeter to measure oxygen saturation and detect periodic breathing continuously through the night, and simultaneously wore an actigraph to identify nocturnal behavioral awakenings. There were no interventions. Results. A total of 187 hours of sleep at high altitude were analyzed; of these, 145 hours (78%) included at least one PB event. At high altitude, 10.5% (95% CI 6.5–14.6%) of total sleep time was spent in PB, and 15 of 50 awakenings (30%, 95% CI: 18–45%) occurring at high altitude were associated with PB (P < 0.001). Conclusions. Our data reveal a higher number of behavioral awakenings associated with PB than would be expected by chance, suggesting that PB plays a role in behavioral awakenings at high altitude.
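The confidence interval around the 15-of-50 PB-associated awakenings is a standard binomial proportion interval. The reported 18–45% is consistent with an exact (Clopper–Pearson) method; the sketch below instead assumes the simpler Wilson score interval, which lands close but not identical:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# 15 of 50 high-altitude awakenings were associated with periodic breathing.
lo, hi = wilson_ci(15, 50)
print(f"{15/50:.0%} (95% CI {lo:.0%}-{hi:.0%})")  # 30% (95% CI 19%-44%)
```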
Health care systems need effective models to manage chronic diseases like tobacco dependence across transitions in care. Hospitalizations provide opportunities for smokers to quit, but research suggests that hospital-delivered interventions are effective only if treatment continues after discharge.
To determine whether an intervention to sustain tobacco treatment after hospital discharge increases smoking cessation rates over standard care.
A randomized controlled trial conducted from August 2010-November 2012 compared Sustained Care, a post-discharge tobacco cessation intervention, vs. Standard Care among hospitalized adult smokers who received a tobacco dependence intervention in the hospital and wanted to quit smoking after discharge.
Massachusetts General Hospital, Boston, MA.
397 hospitalized daily smokers (mean age 53 years, 48% male, 81% non-Hispanic white); 92% of eligible patients and 44% of screened patients enrolled.
Sustained Care participants received automated interactive voice response telephone calls and their choice of free FDA-approved cessation medication for 90 days. The automated calls promoted cessation, provided medication management, and triaged smokers for additional counseling. Standard Care patients received recommendations for post-discharge pharmacotherapy and counseling.
Biochemically-validated past 7-day tobacco abstinence 6 months after discharge (primary outcome); self-reported tobacco abstinence and smoking cessation treatment use at 1, 3, and 6 months.
Smokers assigned to Sustained Care (n=198) used more counseling and more pharmacotherapy at each follow-up than those assigned to Standard Care (n=199). Biochemically-validated 7-day tobacco abstinence at 6 months was higher with Sustained Care than Standard Care (26% vs. 15%; RR 1.71, 95% CI 1.14–2.56, p=0.009; NNT=9.4, 95% CI 6.4–35.5). Using multiple imputation for missing outcomes, the RR was 1.55 (95%CI 1.03–2.21, p=0.038). Sustained Care also produced higher self-reported continuous abstinence rates for 6 months after discharge (27% vs. 16%; RR 1.70, 95% CI 1.15–2.51, p=0.007).
Among hospitalized adult smokers who planned to quit smoking, a post-discharge intervention providing automated telephone calls and free medication resulted in higher rates of smoking cessation at 6 months compared with a standard recommendation to use counseling and medication after discharge. These findings, if replicated, suggest an approach to help achieve sustained smoking cessation after a hospital stay.
Inpatients; hospitalization; smoking cessation; nicotine dependence; nicotine addiction; tobacco use; interactive voice response; randomized controlled trial
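The number needed to treat (NNT) reported in the abstract above is the reciprocal of the absolute difference in abstinence rates between arms. A quick check using the rounded rates from the abstract (the reported NNT of 9.4 reflects the unrounded study proportions, so the rounded inputs give a slightly different value):

```python
def nnt(p_intervention: float, p_control: float) -> float:
    """Number needed to treat: reciprocal of the absolute risk difference."""
    return 1.0 / (p_intervention - p_control)

# Rounded 6-month abstinence rates from the abstract: 26% vs. 15%.
print(round(nnt(0.26, 0.15), 1))  # ~9.1 with rounded inputs; paper reports 9.4
```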
To determine if the belief that thirdhand smoke is harmful to children is associated with smoking parents’ attitudes, home or car smoking policies, and quitting behaviors.
Data from a national randomized controlled trial, Clinical Effort Against Secondhand Smoke Exposure, assessed thirdhand smoke beliefs of 1947 smoking parents in an exit survey after a pediatric office visit in 10 intervention and 10 control practices. Twelve-month follow-up data were collected from 1355 parents. Multivariable logistic regression determined whether belief that thirdhand smoke harms the health of children is independently associated with parental behaviors and attitudes 12 months later. A χ2 test assessed whether parents who disagreed that thirdhand smoke is harmful were more likely to make a quit attempt if they later believed that thirdhand smoke is harmful.
Belief at the exit survey that thirdhand smoke is harmful was independently associated with having a strictly enforced smoke-free home policy (adjusted odds ratio: 2.05; 95% CI: 1.37–3.05) and car policy (adjusted odds ratio: 1.69; 95% CI: 1.04–2.74) at the 12-month follow-up. A significantly higher percentage (71% vs 50%) of parents who did not hold the thirdhand smoke harm belief at baseline made at least 1 quit attempt if they agreed that thirdhand smoke is harmful at the 12-month follow-up (P = .02).
Thirdhand smoke harm belief was associated with a strictly enforced smoke-free home and car and attempts to quit smoking. Sensitizing parents to thirdhand smoke risk could facilitate beneficial tobacco control outcomes.
smoking; tobacco; thirdhand smoke; pediatrics; family practice; parent; smoking cessation; secondhand smoke; environmental tobacco smoke; tobacco control
While the short-term impact of atrial fibrillation–related stroke has been well studied, surprisingly little is known about its long-term effect on survival.
We followed 13,559 patients with atrial fibrillation for a median of 6 years, identifying ischemic strokes through computerized databases and validating 1,025 events. Stroke severity was determined from hospital records. We compared survival of stroke patients with comparator nonstroke patients (matched for age, sex, race, comorbid conditions, and time of entry into the cohort) using proportional hazard models controlling for warfarin use and compared survival by degree of discharge deficit.
Median survival after stroke was 1.8 years compared with 5.7 years for matched nonstroke comparators (hazard ratio [HR] 2.8, 95% confidence interval [CI] 2.5–3.2). This increased risk of all-cause death persisted even after restricting the analysis to the 576 stroke patients who survived 6 months after the initial stroke event (HR 2.0, 95% CI 1.7–2.5, adjusting for warfarin use). Risk of death was strongly associated with stroke severity: HR 2.9 (95% CI 2.3–3.5) for strokes resulting in major deficits and HR 8.3 (95% CI 5.2–13.2) for strokes resulting in severe deficits compared with matched comparators without stroke.
Ischemic stroke approximately triples the mortality rate in patients with atrial fibrillation. This effect persists well beyond the immediate poststroke period and is strongly associated with poststroke disability. Stroke prevention by anticoagulation therefore confers greater survival benefits than is usually appreciated when the focus is solely on 30-day mortality rates.
Smoking cessation interventions for hospitalized smokers are effective in promoting smoking cessation, but only if the tobacco dependence treatment continues after the patient leaves the hospital. Sustaining tobacco dependence treatment after hospital discharge is a challenge for health care systems. Our previous single-site randomized controlled trial demonstrated the effectiveness of an intervention that facilitated the delivery of comprehensive tobacco cessation treatment, including both medication and counseling, after hospital discharge. We subsequently streamlined the intervention model to increase its potential for dissemination. This new model is being tested in a larger multi-site trial with broader eligibility criteria in order to enroll a more representative sample of hospitalized smokers. This paper describes the trial design and contrasts it with the earlier study.
A 2-arm, 3-site randomized controlled trial is testing the hypothesis that a multi-component Sustained Care intervention is more effective than Standard Care in helping hospitalized cigarette smokers stop smoking after hospital discharge. The trial enrolls adult daily cigarette smokers who are admitted to 1 of 3 participating hospitals in Massachusetts or Pennsylvania. Participants receive the same smoking cessation intervention in the hospital. They are randomly assigned to receive either Standard Care or Sustained Care after hospital discharge. Participants in the Sustained Care arm receive a free 3-month supply of FDA-approved smoking cessation medication and 5 interactive voice response calls that provide tailored motivational messages, medication refills, and access to a live tobacco treatment counselor. Participants in the Standard Care arm receive a smoking cessation medication recommendation and information about community resources. Outcomes are assessed at 1, 3, and 6 months after discharge. The primary outcome is biochemically-validated tobacco abstinence for the past 7 days at 6-month follow-up. Other outcome measures include self-reported tobacco abstinence measures, use of medication and counseling after discharge, hospital readmissions, and program cost-effectiveness.
We adapted a proven intervention for hospitalized smokers to enhance its potential for dissemination and are testing it in a multi-site trial. Study enrollment data suggest that the trial achieved its goal of recruiting a broader sample of hospitalized smokers.
Comparative Effectiveness of Post-Discharge Strategies for Hospitalized Smokers (Helping HAND2) NCT01714323. Registered October 22, 2012.
Smoking cessation; Hospitalization; Pharmacotherapy; Counseling; Interactive voice response; Randomized controlled trial
Many clinical trials focus on restricting hematoma expansion following acute intracerebral hemorrhage (ICH), but selecting those patients at highest risk of hematoma expansion is challenging.
To develop a prediction score for hematoma expansion in patients with primary ICH.
DESIGN, SETTING, AND PARTICIPANTS
Prospective cohort study at 2 urban academic medical centers among patients having primary ICH with available baseline and follow-up computed tomography for volumetric analysis (817 patients in the development cohort and 195 patients in the independent validation cohort).
MAIN OUTCOMES AND MEASURES
Hematoma expansion was assessed using semiautomated software and was defined as more than 6 mL or 33% growth. Covariates were tested for association with hematoma expansion using univariate and multivariable logistic regression. A 9-point prediction score was derived based on the regression estimates and was subsequently tested in the independent validation cohort.
Hematoma expansion occurred in 156 patients (19.1%). In multivariable analysis, predictors of expansion were warfarin sodium use, the computed tomography angiography spot sign, and shorter time to computed tomography (≤6 vs >6 hours) (P < .001 for all), as well as baseline ICH volume (<30 mL [reference], 30–60 mL [P = .03], and >60 mL [P = .005]). The incidence of hematoma expansion steadily increased with higher scores. In the independent validation cohort (n = 195), the prediction score performed well and showed a strong association with hematoma expansion (odds ratio 4.59; P < .001 for a high vs low score). The C statistics for the score were 0.72 for the development cohort and 0.77 for the independent validation cohort.
CONCLUSIONS AND RELEVANCE
A 9-point prediction score for hematoma expansion was developed and independently validated. The results open a path for individualized treatment and trial design in ICH aimed at patients at highest risk of hematoma expansion with maximum potential for therapeutic benefit.
Intracerebral hemorrhage (ICH) is the most feared complication of oral anticoagulant therapy (OAT). While anticoagulated patients have increased severity of bleeding following ICH, they may also be at increased risk for thromboembolic events (TEs) given that they had been prescribed OAT prior to their ICH. We hypothesized that TEs are relatively common following ICH, and that anticoagulated patients are at higher risk for these complications.
Consecutive patients with primary ICH presenting to a tertiary care hospital from 1994 to 2006 were prospectively characterized and followed. Hospital records were retrospectively reviewed for clinically relevant in-hospital TEs, and patients were prospectively followed for 90-day mortality.
Of 988 patients, 218 (22%) were on OAT at presentation; median hospital length of stay was 7 days (IQR 4–13) and 90-day mortality was 36%. TEs were diagnosed in 71 patients (7.2%), including pulmonary embolism (1.8%), deep venous thrombosis (1.1%), myocardial ischemia (1.6%), and cerebrovascular ischemia (3.0%). Mean time to event was 8.4 ± 7.0 days. Rates of TE were 5% among patients with OAT-related ICH and 8% among those with non-OAT ICH (P = 0.2). In multivariable Cox regression, the only independent risk factor for developing a TE was external ventricular drain placement (HR 2.1, 95% CI 1.1–4.1, P = 0.03). TEs had no effect on 90-day mortality (HR 0.7, 95% CI 0.5–1.1, P = 0.1).
The incidence of TEs in an unselected ICH population was 7.2%. Patients with OAT-related ICH were not at increased risk of TEs.
Cerebral hemorrhage; Warfarin; Venous thrombosis; Pulmonary embolism; Brain ischemia; Myocardial ischemia
Excessive crystalloid administration is common and associated with negative outcomes in critically ill trauma patients. Continuous furosemide infusion (CFI) to remove excessive fluid has not been previously described in this population. We hypothesized that a goal-directed CFI is more effective for fluid removal than intermittent bolus injection (IBI) diuresis without excess incidence of hypokalemia or renal failure.
Materials and Methods:
CFI cases were prospectively enrolled between November 2011 and August 2012, and matched to historic IBI controls by age, gender, Injury Severity Score (ISS), and net fluid balance (NFB) at diuresis initiation. Paired and unpaired analyses were performed to compare groups. The primary endpoints were net fluid balance, potassium and creatinine levels. Secondary endpoints included intensive care unit (ICU) and hospital length of stay (LOS), ventilator-free days (VFD), and mortality.
Fifty-five patients were included: 19 cases and 36 matched controls. Mean age was 54 years, mean ISS was 32.7, and mean initial NFB was +7.7 L. After one day of diuresis, net 24-hour fluid balance was negative only in the CFI group (−0.55 L vs. +0.43 L, P = 0.026), with no difference in potassium or creatinine levels between groups. Cumulative furosemide dose (59.4 mg vs. 25.4 mg, P < 0.001) and urine output (4.2 L vs. 2.8 L, P < 0.001) were also significantly greater with CFI than with IBI. There were no statistically significant differences in ICU LOS, hospital LOS, VFD, or mortality.
Compared with IBI, goal-directed diuresis by CFI was more successful in achieving a net negative fluid balance in fluid-overloaded patients, with no detrimental effects on renal function or patient outcomes.
Crystalloids; diuresis; furosemide; trauma
While emergency department (ED) crowding has myriad causes and negative downstream effects, applying systems engineering science and targeting throughput remains a potential solution to increase functional capacity. However, the most effective techniques for broad application in the ED remain unclear. We examined the hypothesis that Lean-based reorganization of Fast Track process flow would improve length of stay (LOS), percent of patients discharged within one hour, and room use, without added expense.
This study was a prospective, controlled, before-and-after analysis of Fast Track process improvements in a Level 1 tertiary care academic medical center with >95,000 annual patient visits. We included all adult patients seen during the study periods of 6/2010–10/2010 and 6/2011–10/2011, and data were collected from an electronic tracking system. Concurrent patients seen in another care area served as the control group. The intervention consisted of a simple reorganization of patient flow through existing rooms, based in systems engineering science and modeling, including queuing theory, demand-capacity matching, and Lean methodologies. No modifications to staffing or physical space were made. Primary outcomes included LOS of discharged patients, percent of patients discharged within one hour, and time in exam room. We compared LOS and exam room time using Wilcoxon rank sum tests, and chi-square tests for percent of patients discharged within one hour.
Following the intervention, median LOS among discharged patients was reduced by 15 minutes (158 to 143 min, 95% CI 12 to 19 min, p<0.0001). The proportion of patients discharged in <1 hr increased by 2.8 percentage points (from 6.9% to 9.7%, 95% CI 2.1% to 3.5%, p<0.0001), and median exam room time decreased by 34 minutes (90 to 56 min, 95% CI 31 to 38 min, p<0.0001). In comparison, the control group had no change in LOS (265 to 267 min) or proportion of patients discharged in <1 hr (2.9% to 2.9%), and an increase in exam room time (28 to 36 min, p<0.0001).
In this single center trial, a focused Lean-based reorganization of patient flow improved Fast Track ED performance measures and capacity, without added expense. Broad multi-centered application of systems engineering science might further improve ED throughput and capacity.
To examine strict smoke-free home policies among smoking parents assessed in pediatric offices.
We analyzed baseline parental survey data from 10 control practices in a national trial of pediatric office-based tobacco control interventions (Clinical Effort Against Second-hand Smoke Exposure, CEASE). We used logistic regression models with generalized estimating equations to examine factors associated with strict smoke-free home policies.
Subjects were 952 parents who were current smokers. Just over half (54.3%) reported strict smoke-free home policies. Few reported being asked (19.9%) or advised (17.1%) regarding policies by pediatricians. Factors associated with higher odds of policies were child aged 5 years or younger (adjusted odds ratio [aOR] 2.43, 95% confidence interval [CI] 1.53, 3.86), nonblack race/ethnicity (aORs 2.17–2.60, 95% CIs 1.25–5.00), non-Medicaid insurance (HMO/private: aOR 1.84, 95% CI 1.31, 2.58; self-pay/other: aOR 1.76, 95% CI 1.12, 2.78), well-child versus sick-child visit (aOR 1.61, 95% CI 1.11, 2.34), fewer than 10 cigarettes per day (aOR 1.80, 95% CI 1.31, 2.47), no other home smokers (aOR 1.68, 95% CI 1.26, 2.25), only father smoking (aOR 1.73, 95% CI 1.06, 2.83), and strict smoke-free car policy (aOR 3.51, 95% CI 2.19, 5.64).
Nearly half of smoking parents did not have strict smoke-free home policies. Parents were less likely to report policies if they were heavier smokers, black, living with other smokers, or attending a sick child visit; if they did not have a young child or smoke-free car policy; if they had a child on Medicaid; and if anyone other than only the father smoked. Few pediatricians addressed or recommended strict smoke-free home policies in an office visit. The pediatric office encounter represents a currently missed opportunity to intervene regarding smoke-free homes, particularly for high-risk groups.
environmental tobacco smoke; parental smoking; secondhand smoke; smoking