This update in hospital medicine summarizes the most recent evidence relevant to the management of hospitalized patients. Our aims were (1) to describe and interpret advances in hospital medicine that have practical implications for patient care, and (2) to describe and interpret recent evidence that has teaching implications for attending physicians on general medicine ward services.
We screened the tables of contents of 40 journals (Box) for articles published between January 2010 and March 2011 that were relevant to the care of hospitalized patients. We ranked candidate articles on a 3-point Likert-type scale based on whether a study’s evidence was likely to (1) change, modify, or confirm current inpatient practice, or (2) change, modify, or confirm current inpatient teaching. Using a consensus process, we selected 10 of the 25 highest-ranking articles for presentation. We then organized the selected articles into problem-based categories for ease of reading.
Dabbagh et al. Coagulopathy does not protect against venous thromboembolism in hospitalized patients with chronic liver disease. Chest 2010;137(5):1145–1149.
Many patients with chronic liver disease have coagulopathy with elevated international normalized ratios (INRs). Previous research has questioned whether this coagulopathy results in “auto-anticoagulation” and protects against venous thromboembolism (VTE).1
In this retrospective cohort study conducted at a single university tertiary care center, investigators analyzed the clinical records of 190 adult patients with chronic liver disease and cirrhosis admitted between 2000 and 2007. Patients were divided into quartiles based on admission INR: <1.4, 1.4–1.7, 1.8–2.2, and >2.2. Patients on chronic anticoagulation and those with known active VTE or on palliative care were excluded. The primary outcome was symptomatic VTE (deep vein thrombosis or pulmonary embolism).
In-hospital VTE was diagnosed in 12 patients (6%). Fifty percent of VTEs were diagnosed in patients with INR ≥1.7, and 17% in patients with INR ≥2.2. Investigators found no significant difference in VTE incidence across INR quartiles. One hundred forty-two patients (74.7%) did not receive VTE prophylaxis (mechanical or pharmacologic), and VTE incidence did not differ between those who did and did not receive prophylaxis. Of note, less than half of the patients underwent diagnostic testing, so VTE incidence may have been underestimated.
Bottom Line: This observational study is limited by the low incidence of VTE and is subject to bias from unmeasured confounders. Nevertheless, elevated INR does not appear to protect against VTE in hospitalized patients with chronic liver disease, and prophylaxis did not appear to affect VTE incidence. Until additional evidence or specific guidelines become available, patients with end-stage liver disease should receive VTE prophylaxis similar to standard-risk medical inpatients, even if their INR is elevated.
Webster et al. Clinically indicated replacement versus routine replacement of peripheral venous catheters. Cochrane Database Syst Rev 2010;(3):CD007798.
The Centers for Disease Control and Prevention guidelines recommend routinely replacing peripheral intravenous (IV) catheters every 72 to 96 h because of concern about IV catheter-associated bacteremia and phlebitis, yet evidence from individual studies suggests that IV catheter duration is not associated with worse outcomes.2,3
To evaluate the evidence for routinely replacing peripheral IV catheters, the investigators reviewed all randomized controlled trials that compared routine peripheral IV replacement with a strategy of removing IV catheters only when clinically indicated (e.g., IV malfunction, swelling, induration, or erythema). Primary outcomes included suspected device-related bacteremia, thrombophlebitis, and cost. Reviewers used a comprehensive search strategy with attention to risk of bias, blinding, intention-to-treat analysis, completeness of follow-up, and measures of treatment effect.
Six trials involving 3,455 patients met the inclusion criteria—four published RCTs and two unpublished interim analyses. The authors rated the strength of evidence as high according to the GRADE scale. Overall, there was no significant difference in rates of bacteremia between the routine replacement and clinically indicated replacement groups (0.4% vs 0.2%, P=0.37). Six trials addressed phlebitis and found a small but nonsignificant decrease in the routine replacement group (7.2% vs 9.0%, P=0.09). In the two trials that measured them, hospital costs were significantly lower with clinically indicated replacement (a reduction of 6.21 Australian dollars per patient, 95% CI −9.32 to −3.11, P<0.001).
Bottom Line: The investigators found no conclusive benefit to routinely replacing peripheral IV catheters every 72 to 96 h. Health care facilities should consider a policy of changing catheters only when clinically indicated. Not only could this practice spare hospitals modest incremental costs, it could spare patients the discomfort of obtaining new IV access.
Boyd EA, Lo B, Evans LR, et al. Factors that influence surrogate decision-makers’ perceptions of prognosis. Crit Care Med 2010;38:1270–1275.
Critically ill patients in the intensive care unit (ICU) often must rely on surrogate decision-makers to make end-of-life decisions.4 While physician prognostication has been well described, little is understood about how surrogate decision-makers arrive at prognostic estimates.5
This prospective study at a single university medical center used surveys and structured interviews to determine the factors that surrogate decision-makers use to determine prognosis for critically ill patients. Participants were adult English-speaking surrogate decision-makers for patients who were incapacitated and at high risk for death in the ICU. Participants were asked to provide a percentage estimate of their loved one’s chances of surviving the hospitalization. Then, they were interviewed and asked “Can you tell me a little bit about what has made you think this is his/her prognosis?” Responses were categorized according to common themes.
Investigators enrolled 179 surrogate decision-makers for 142 patients. Overall, 45% of the patients died during hospitalization. Only 3 of 179 surrogates (2%) stated they based their prognostic estimate solely on the physician’s estimate. Less than half (47%) stated they based at least part of their estimate on information provided by the physician. The majority highlighted other factors that contributed to their estimation of prognosis, including: (1) perception of the patient’s physical strength and will to live, (2) the patient’s unique history of survival, (3) the patient’s physical appearance, (4) the presence of the surrogate at the bedside, and (5) the surrogate’s faith (religious or otherwise) or optimism. Most surrogates appreciated the need to balance these factors with information provided by the physician.
Bottom Line: Surrogate decision-makers use diverse sources to estimate the prognosis for critically ill loved ones. Physicians should be aware that the clinical information they provide is only one (sometimes minor) piece of information surrogates use in this complex process. Prognostication may best be viewed as a bidirectional discussion that accounts for physiological and surrogate/patient factors.
Lees et al. Alteplase is effective and safe up to 4.5 hours from symptom onset in acute ischemic stroke. Lancet. 2010;375(9727):1695–1703.
In 1995, the second National Institute of Neurological Disorders and Stroke (NINDS-2) trial found that administration of recombinant tissue plasminogen activator within 3 h of symptom onset in acute ischemic stroke improved outcomes without increasing harm.6 It has been unclear whether this benefit extends to 4.5 h from symptom onset.
The authors conducted an updated meta-analysis of eight randomized, double-blind, placebo-controlled trials (n=3,670) that examined the effect of time to treatment with alteplase in acute stroke. Outcomes, measured at 90 days, were mortality, intracranial hemorrhage, and favorable functional status [defined as a modified Rankin score of 0 (no disability) or 1 (able to carry out all usual activities)]. Outcomes were analyzed by time-to-treatment categories: 0–90 min, 91–180 min, 181–270 min, and 271–360 min.
Receipt of alteplase after 270 min (4.5 h) was associated with increased mortality (adjusted OR=1.49, 95% CI 1.00–2.21, P=0.05). Alteplase improved functional status when given within 4.5 h of symptom onset (for 0–90 min, adjusted OR=2.55, 95% CI 1.44 to 4.52; for 91–180 min, adjusted OR=1.64, 95% CI 1.12 to 2.40; and for 181–270 min, adjusted OR=1.34, 95% CI 1.06 to 1.68). Beyond 4.5 h, the benefit for functional status was no longer apparent (adjusted OR=1.22, 95% CI 0.92 to 1.61). Among all patients in the pooled studies, parenchymal hemorrhage occurred in 5.2% of the alteplase group and in 1.0% of the control group (adjusted OR=5.37, 95% CI 3.22–8.95, P<0.001). However, in an analysis restricted to patients treated within 4.5 h, there was no clear increase in intracranial hemorrhage in the alteplase group compared with controls (P=0.41).
Bottom Line: Alteplase improves functional status at 90 days when given up to 4.5 h after symptom onset in acute ischemic stroke, without a clear increase in bleeding risk within that window. However, earliest administration is best and must always remain the goal—these data are not a license to wait. The American Heart Association now endorses this approach.7
Smith et al. Early anticoagulation is associated with reduced mortality for acute pulmonary embolism. Chest. 2010;137:1382–1390.
Anticoagulation is the cornerstone of treatment for acute pulmonary embolism (PE) and is known to decrease mortality, most likely by preventing recurrent events.8–10 Earlier anticoagulation may impact mortality; however, less evidence exists regarding the association between the timing of therapeutic anticoagulation and mortality in acute PE.
Investigators conducted a retrospective cohort study of 400 consecutive adult patients diagnosed with acute PE in a single tertiary care emergency department (ED) from 2002 to 2005. By hospital protocol at the time, all patients were treated with IV unfractionated heparin either in the ED or after transfer to the hospital floor. The location of anticoagulation (i.e., ED vs. floor) as well as time from ED arrival to therapeutic activated partial thromboplastin time (aPTT) was measured.
Of the 400 patients enrolled, 280 (70%) received IV heparin in the ED, but only 20 (5%) received empiric treatment prior to confirmatory diagnosis. Three hundred forty-four (86%) patients had a therapeutic aPTT within 24 h of ED arrival. Overall in-hospital and 30-day mortality rates for all participants were 3.0% and 7.7%, respectively. Patients who received heparin in the ED had lower in-hospital (1.4% vs 6.7%; OR, 0.2; P=0.009) and 30-day mortality (4.4% vs 15.3%; OR, 0.25; P<0.001) compared with those started on heparin after transfer to the floor. Those with therapeutic aPTT within 24 h had lower 30-day mortality (5.6% vs. 14.8%, OR, 0.34; P=0.037) and trended towards lower in-hospital mortality.
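As a quick sanity check on figures like these, an unadjusted odds ratio can be back-calculated from the two reported event rates. The short sketch below is illustrative only (published ORs may reflect adjustment); it reproduces the reported in-hospital OR of 0.2 from the 1.4% and 6.7% mortality rates:

```python
# Illustrative sketch: deriving an unadjusted odds ratio from the two
# reported in-hospital mortality rates (1.4% with ED heparin vs 6.7%
# with floor initiation). The rates are from the study; the
# calculation itself is generic.

def odds(p: float) -> float:
    """Convert a probability to odds."""
    return p / (1.0 - p)

def odds_ratio(p_exposed: float, p_control: float) -> float:
    """Unadjusted odds ratio comparing two event probabilities."""
    return odds(p_exposed) / odds(p_control)

or_in_hospital = odds_ratio(0.014, 0.067)
print(round(or_in_hospital, 2))  # 0.2, matching the reported OR
```

The same arithmetic applied to the 30-day rates (4.4% vs 15.3%) lands close to the reported OR of 0.25.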
Bottom Line: While it is difficult to draw firm conclusions from a retrospective review, initiation of anticoagulation in the ED and achievement of a therapeutic aPTT within 24 h were both associated with decreased mortality in acute PE.
De Backer D, et al. Comparison of dopamine and norepinephrine in the treatment of shock. N Engl J Med. 2010;362:779–789.
The incidence of severe sepsis and septic shock continues to rise in the United States, and both conditions are associated with significant morbidity and mortality.11,12 Very few randomized controlled trials have examined the optimal vasopressor in shock; the 2008 Surviving Sepsis Guidelines recommended either dopamine or norepinephrine as the first-line agent.13
In this large, multicenter, international randomized controlled trial, researchers sought to compare dopamine to norepinephrine as first-line vasopressor therapy in the treatment of shock. Between 2003 and 2007, investigators enrolled patients with shock who required vasopressor therapy. Shock was defined as a mean arterial pressure <70 mmHg or a systolic blood pressure <100 mmHg despite receiving IV fluids (1,000 ml of crystalloid) in addition to clinical evidence of hypoperfusion (e.g., confusion, low urine output, elevated lactate). Enrolled patients were randomized to receive dopamine or norepinephrine infusions at standard dosing, adjusted as needed.
A total of 1,679 patients were enrolled; 62% had septic shock, 17% cardiogenic shock, and 16% hypovolemic shock. There was no difference in 28-day mortality between the dopamine and norepinephrine groups (52.5% vs 48.5%; P=0.10). There was also no difference in time to shock resolution, rates of ICU death, in-hospital death, or death at 6 and 12 months between the two groups. Arrhythmias were much more common in the dopamine group (24.1% vs 12.4%; P<0.001), especially atrial fibrillation. In subgroup analyses, patients with cardiogenic shock experienced significantly higher 28-day mortality in the dopamine group compared to the norepinephrine group (P=0.03, displayed on Kaplan-Meier plot).
Bottom Line: In patients with shock (septic, cardiogenic, or hypovolemic), norepinephrine and dopamine had similar outcomes, but dopamine was associated with significantly more arrhythmias and a higher mortality in cardiogenic shock. Norepinephrine should be our first-line vasopressor in patients with shock that is not fluid responsive.
Felker et al. Diuretic strategies in patients with acute decompensated heart failure. N Engl J Med. 2011;364(9):797–805.
Many patients hospitalized with acute decompensated heart failure present with signs and symptoms of fluid overload, as well as concomitant renal dysfunction. Treatment with diuretics is a mainstay of therapy, but the optimal strategy for relieving symptoms while preserving or improving renal function is unknown.14
In this multicenter, double-blind, randomized controlled trial with a 2 × 2 factorial design, investigators enrolled 308 patients hospitalized with at least one sign and one symptom of decompensated heart failure. The study was designed to compare (1) continuous IV furosemide infusion with a strategy of IV boluses every 12 h and (2) administration of low-dose furosemide (equivalent to a patient’s pre-admission oral dose) with a high-dose strategy (equivalent to 2.5 times a patient’s pre-admission oral dose).
The authors found no significant difference in either patients’ global assessment of symptoms (P=0.47) or mean change in creatinine (+0.07 vs +0.05 mg/dl at 72 h, P=0.45) between the continuous infusion and intermittent bolus strategies. They also reported no significant difference in these outcomes between the low-dose and high-dose strategies, although they noted a numeric trend towards greater symptom improvement with the high-dose strategy (for global assessment of symptoms, P=0.06; for change in creatinine, +0.04 vs +0.08 mg/dl at 72 h, P=0.21). In sub-analyses, the investigators found significantly greater net fluid loss (4,899 ml vs 3,575 ml, P=0.001) and improvement in dyspnea (P=0.04) in the high-dose group. Median hospital length of stay (LOS) did not differ among the treatment strategies.
Bottom Line: There were no substantive differences between low-dose and high-dose furosemide strategies, or between continuous infusion and intermittent bolus strategies, in patients’ global assessment of symptoms or renal function. This study should reassure clinicians who treat decompensated heart failure with intermittent, high-dose diuretics, as this strategy appears to relieve dyspnea more quickly without adverse effects on kidney function.
Rutten et al. β-blockers may reduce mortality and risk of exacerbations in patients with chronic obstructive pulmonary disease. Arch Intern Med. 2010;170(10):880–887.
Physicians often avoid β-blockers in patients with chronic obstructive pulmonary disease (COPD) because of concern about adverse respiratory effects.15 The long-term effects of β-blockade in COPD patients with and without cardiovascular disease are unknown.
In this observational study, investigators enrolled 2,230 outpatients aged 45 years and older with COPD from general practice clinics in the Netherlands and compared those who received either cardioselective or non-selective β-blockers with those who did not. Patients were followed for a mean of 7.2 years. Primary outcomes were all-cause mortality and COPD exacerbations (defined as exacerbations requiring pulse-dose steroids and/or hospital admission). Propensity scoring was used to balance covariates.
Overall, 686 patients (30.8%) died during follow-up. Fewer COPD patients who received β-blockers died than those who did not (27.2% vs 32.3%, P=0.02). The adjusted hazard ratio (HR) using propensity scoring was 0.64 (95% CI 0.52–0.77). In subanalyses, this association was present for cardioselective β-blockers (HR, 0.63, 95% CI: 0.51–0.77) but not for non-selective β-blockers (HR, 0.80, 95% CI: 0.60–1.05). Forty-five percent of patients had known cardiovascular disease. Among those without recognized cardiovascular disease (n=1,229), adjusted HR using propensity scoring was 0.68 (95% CI 0.46–1.02).
During the follow-up period, 1,055 patients (47.3%) had at least one COPD exacerbation. Fewer patients receiving β-blockers experienced a COPD exacerbation (42.7% vs 49.3%, P=0.005). The adjusted HR of COPD exacerbation in the β-blocker group was 0.64 (95% CI: 0.55–0.75), similar in both the cardioselective β-blocker subgroup (HR, 0.68, 95% CI: 0.58–0.80) and the non-selective β-blocker subgroup (HR, 0.70, 95% CI: 0.56–0.89). In the subgroup of patients without known cardiovascular disease, exacerbation risk also trended numerically in favor of β-blocker use (HR 0.68, 95% CI 0.46–1.02).
Bottom Line: Long-term use of β-blockers in patients with COPD does not appear to be harmful and may improve survival and reduce exacerbations in patients with or without known cardiovascular disease.
Lindenauer et al. Association of corticosteroid dose and route of administration with risk of treatment failure in acute exacerbation of chronic obstructive pulmonary disease. JAMA. 2010;303(23):2359–2367.
Administration of corticosteroids has been shown to improve physiologic measures of lung function, reduce risk of treatment failure, and decrease hospital LOS among patients with acute exacerbation of COPD.16 However, the optimal dose for this therapy is uncertain.
In this retrospective cohort study, investigators analyzed data from 79,985 patients admitted to the general medical services of 414 hospitals to compare a low-dose oral steroid strategy (prednisone 20–80 mg daily) with a high-dose IV steroid strategy (120–800 mg daily of a prednisone-equivalent medication) within the first 48 h of hospitalization. The primary outcome was treatment failure (defined as mechanical ventilation after the second hospital day, death during hospitalization, or rehospitalization for COPD within 30 days of discharge).
To minimize potential confounding, the investigators generated a propensity score for initial treatment with low-dose steroids based on multiple possible confounders. Overall, 92% of patients received the high-dose steroid strategy. The median daily dose was equivalent to 60 mg of prednisone in the low-dose strategy and 600 mg in the high-dose strategy. After matching on propensity score and adjusting for unbalanced covariates, the investigators found lower odds of treatment failure in patients receiving the low-dose strategy (OR 0.84, 95% CI: 0.75–0.95), as well as shorter hospital LOS (0.90, 95% CI: 0.88–0.91) and lower total costs (0.91, 95% CI: 0.89–0.93).
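The matching step described above can be caricatured in a few lines. The sketch below is a simplified toy (greedy 1:1 nearest-neighbor matching on hypothetical scores), not the study’s actual algorithm; in the study, each patient’s score was estimated from many measured confounders before matched groups were compared:

```python
# Toy sketch of 1:1 nearest-neighbor propensity-score matching.
# The scores below are hypothetical; a real analysis would estimate
# each patient's probability of receiving low-dose steroids from
# measured confounders, then compare outcomes within matched pairs.

def greedy_match(treated, controls):
    """Pair each treated score with the closest not-yet-used control."""
    matches = []
    available = list(controls)
    for t in treated:
        best = min(available, key=lambda c: abs(c - t))
        matches.append((t, best))
        available.remove(best)
    return matches

treated_scores = [0.62, 0.45, 0.81]        # hypothetical scores
control_scores = [0.40, 0.60, 0.78, 0.55]  # hypothetical scores
print(greedy_match(treated_scores, control_scores))
```

Comparing outcomes only within such pairs is what allows an observational cohort to approximate the covariate balance of a randomized trial.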
Bottom Line: Although many clinicians appear to favor high-dose IV steroids when treating patients hospitalized for acute COPD exacerbation, a low-dose oral steroid strategy does not appear to be harmful and may be associated with lower risk of treatment failure as well as lower hospital costs.
Al-Omran et al. Enteral nutrition is superior to total parenteral nutrition in patients with acute pancreatitis. Cochrane Database Syst Rev. 2010;(1):CD002837.
The best way to feed patients with acute pancreatitis who cannot take oral nutrition is a common conundrum, and providers must often decide between the risks associated with total parenteral nutrition (TPN)17 and the challenges of enteral nutrition (EN).18
In this meta-analysis of eight randomized controlled trials involving 348 patients, the investigators compared the effect of TPN versus EN on clinical outcomes in patients with acute pancreatitis. Pancreatitis was defined as clinical signs and symptoms in addition to an elevated serum marker (e.g., lipase). Five trials included only patients with severe acute pancreatitis. In six trials, a nasojejunal tube was inserted endoscopically or fluoroscopically beyond the ligament of Treitz. The authors rated the strength of evidence for most outcomes as low to moderate according to the GRADE scale.
Use of EN instead of TPN reduced death (RR=0.50, 95% CI 0.28 to 0.91), multiple organ failure (RR=0.55, 95% CI 0.37 to 0.81), systemic infection (RR=0.39, 95% CI 0.23 to 0.65) and the need for operative intervention (RR=0.44, 95% CI 0.29 to 0.67). A non-statistically significant reduction in LOS was seen in the EN group (−2.37 days, 95% CI −7.18 to 2.44). In a subgroup analysis of patients with severe pancreatitis, EN was even more effective in reducing death (RR=0.18, 95% CI 0.06 to 0.58).
Bottom Line: The best available evidence supports the use of EN instead of TPN in patients with acute pancreatitis who cannot be fed orally. EN should be delivered via a nasojejunal tube placed distal to the ligament of Treitz.