1.  The efficacy and safety of plasma exchange in patients with sepsis and septic shock: a systematic review and meta-analysis 
Critical Care  2014;18(6):699.
Introduction
Sepsis and septic shock are leading causes of intensive care unit (ICU) mortality. They are characterized by excessive inflammation, upregulation of procoagulant proteins and depletion of natural anticoagulants. Plasma exchange has the potential to improve survival in sepsis by removing inflammatory cytokines and restoring deficient plasma proteins. The objective of this study is to evaluate the efficacy and safety of plasma exchange in patients with sepsis.
Methods
We searched MEDLINE, EMBASE, CENTRAL, Scopus, reference lists of relevant articles, and grey literature for relevant citations. We included randomized controlled trials comparing plasma exchange or plasma filtration with usual care in critically ill patients with sepsis or septic shock. Two reviewers independently identified trials, extracted trial-level data and performed risk of bias assessments using the Cochrane Risk of Bias tool. The primary outcome was all-cause mortality reported at longest follow-up. Meta-analysis was performed using a random-effects model.
Results
Of 1,957 records identified, we included four unique trials enrolling a total of 194 patients (one enrolling adults only, two enrolling children only, one enrolling both adults and children). The mean age of adult patients ranged from 38 to 53 years (n = 128) and the mean age of children ranged from 0.9 to 18 years (n = 66). All trials were at unclear to high risk of bias. The use of plasma exchange was not associated with a significant reduction in all-cause mortality (risk ratio (RR) 0.83, 95% confidence interval (CI) 0.45 to 1.52, I² = 60%). In adults, plasma exchange was associated with reduced mortality (RR 0.63, 95% CI 0.42 to 0.96; I² = 0%), but not in children (RR 0.96, 95% CI 0.28 to 3.38; I² = 60%). None of the trials reported ICU or hospital lengths of stay. Only one trial reported adverse events associated with plasma exchange, including six episodes of hypotension and one allergic reaction to fresh frozen plasma.
Conclusions
Insufficient evidence exists to recommend plasma exchange as an adjunctive therapy for patients with sepsis or septic shock. Rigorous randomized controlled trials evaluating clinically relevant patient-centered outcomes are required to evaluate the impact of plasma exchange in this condition.
Electronic supplementary material
The online version of this article (doi:10.1186/s13054-014-0699-2) contains supplementary material, which is available to authorized users.
doi:10.1186/s13054-014-0699-2
PMCID: PMC4318234  PMID: 25527094
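The pooled risk ratios above come from a random-effects meta-analysis. As an illustration of the underlying computation, here is a minimal sketch of DerSimonian-Laird random-effects pooling of risk ratios; the event counts in the example are hypothetical, not the counts from the included trials:

```python
import math

def pooled_rr_random_effects(studies):
    """DerSimonian-Laird random-effects pooled risk ratio.

    Each study is (events_treatment, n_treatment, events_control, n_control);
    assumes no zero cells. Returns (pooled RR, 95% CI lower, 95% CI upper).
    """
    ys, vs = [], []
    for a, n1, c, n2 in studies:
        ys.append(math.log((a / n1) / (c / n2)))   # log risk ratio
        vs.append(1/a - 1/n1 + 1/c - 1/n2)         # variance of log RR
    w = [1 / v for v in vs]                        # fixed-effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, ys))  # Cochran's Q
    c_scale = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(studies) - 1)) / c_scale)  # between-study variance
    w_re = [1 / (v + tau2) for v in vs]            # random-effects weights
    y_re = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return math.exp(y_re), math.exp(y_re - 1.96 * se), math.exp(y_re + 1.96 * se)
```

When the estimated between-study variance tau² is zero, this reduces to the fixed-effect (inverse-variance) estimate; heterogeneity inflates tau² and widens the confidence interval.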
2.  A ventilator strategy combining low tidal volume ventilation, recruitment maneuvers, and high positive end-expiratory pressure does not increase sedative, opioid, or neuromuscular blocker use in adults with acute respiratory distress syndrome and may improve patient comfort 
Background
The Lung Open Ventilation Study (LOV Study) compared a low tidal volume strategy with an experimental strategy combining low tidal volume, lung recruitment maneuvers, and higher plateau and positive end-expiratory pressures (PEEP) in adults with acute respiratory distress syndrome (ARDS). Herein, we compared sedative, opioid, and neuromuscular blocker (NMB) use among patients managed with the intervention and control strategies and clinicians' assessment of comfort in both groups.
Methods
This was an observational substudy of the LOV Study, a randomized trial conducted in 30 intensive care units in Canada, Australia, and Saudi Arabia. In 16 centers, we recorded daily doses of sedatives, opioids, and NMBs and surveyed bedside clinicians about their own comfort with the assigned ventilator strategy and their perceptions of patient comfort. We compared characteristics and outcomes of patients who did and did not receive NMBs.
Results
Study groups received similar sedative, opioid, and NMB dosing on days 1, 3, and 7. Patient comfort as assessed by clinicians was not different in the two groups: 93% perceived patients had no/minimal discomfort. In addition, 92% of clinicians were comfortable with the assigned ventilation strategy without significant differences between the two groups. When clinicians expressed discomfort, more expressed discomfort about PEEP levels in the intervention vs control group (2.9% vs 0.7%, P <0.0001), and more perceived patient discomfort among controls (6.0% vs 4.3%, P = 0.049). On multivariable analysis, the strongest associations with NMB use were higher plateau pressure (hazard ratio (HR) 1.15; 95% confidence interval (CI) 1.07 to 1.23; P = 0.0002) and higher daily sedative dose (HR 1.03; 95% CI 1.02 to 1.05; P <0.0001). Patients receiving NMBs had more barotrauma, longer durations of mechanical ventilation and hospital stay, and higher mortality.
Conclusions
In the LOV Study, high PEEP, low tidal volume ventilation did not increase sedative, opioid, or NMB doses in adults with ARDS, compared with a lower PEEP strategy, and appeared at least as comfortable for patients. NMB use may reflect worse lung injury, as these patients had more barotrauma, longer durations of ventilation, and higher mortality.
Trial registration
ClinicalTrials.gov Identifier NCT00182195
doi:10.1186/s13613-014-0033-9
PMCID: PMC4273695  PMID: 25593749
ARDS; Neuromuscular blocker; Sedation; Opioid; Mechanical ventilation; Clinician comfort
3.  Enhancing the quality of end-of-life care in Canada  
doi: 10.1503/cmaj.130716
PMCID: PMC3826364  PMID: 24062171
4.  Implementing a multifaceted tailored intervention to improve nutrition adequacy in critically ill patients: results of a multicenter feasibility study 
Critical Care  2014;18(3):R96.
Introduction
Tailoring interventions to address identified barriers to change may be an effective strategy to implement guidelines and improve practice. However, data to inform the optimal method or level of tailoring are inadequate. Consequently, we conducted the PERFormance Enhancement of the Canadian nutrition guidelines by a Tailored Implementation Strategy (PERFECTIS) study to determine the feasibility of a multifaceted, interdisciplinary, tailored intervention aimed at improving adherence to critical care nutrition guidelines for the provision of enteral nutrition.
Methods
A before-after study was conducted in seven ICUs from five hospitals in North America. During a 3-month pre-implementation phase, each ICU completed a nutrition practice audit to identify guideline-practice gaps and a barriers assessment to identify obstacles to practice change. During a one-day meeting, the results of the audit and barriers assessment were reviewed and used to develop a site-specific tailored action plan. The tailored action plan was then implemented over a 12-month period that included bi-monthly progress meetings. Compliance with the tailored action plan was determined by the proportion of items in the action plan that were completely implemented. We examined acceptability of the intervention through staff responses to an evaluation questionnaire. In addition, the nutrition practice audit and barriers survey were repeated at the end of the implementation phase to determine changes in barriers and nutrition practices.
Results
All five sites successfully completed all aspects of the study. However, their ability to fully implement all of their developed action plans varied from 14% to 75% compliance. Nurses, on average, rated the study-related activities and resources as 'somewhat useful', and a third of respondents 'agreed' or 'strongly agreed' that their nutrition practice had changed as a result of the intervention. We observed a statistically significant 10% decrease in overall barriers score (site range -4.3% to -26.0%), and non-significant increases of 6% (site range -1.5% to 17.9%) and 4% (site range -8.3% to 18.2%) in the adequacy of total nutrition from calories and protein, respectively.
Conclusions
The multifaceted tailored intervention appears to be feasible but further refinement is warranted prior to testing the effectiveness of the approach on a larger scale.
Trial registration
ClinicalTrials.gov NCT01168128. Registered 21 July 2010.
doi:10.1186/cc13867
PMCID: PMC4229943  PMID: 24887445
5.  The validation of a questionnaire to assess barriers to enteral feeding in critically ill patients: a multicenter international survey 
Background
A growing body of literature supports the need to identify and address barriers to knowledge use as a strategy to improve care delivery. To this end, we developed a questionnaire to assess barriers to enterally feeding critically ill adult patients, and sought to gain evidence to support the construct validity of this instrument by testing the hypothesis that barriers identified by the questionnaire are inversely associated with nutrition performance.
Methods
We conducted a multilevel multivariable regression analysis of data from an observational study in 55 Intensive Care Units (ICUs) from 5 geographic regions. Data on nutrition practices were abstracted from 1153 patient charts, and 1439 critical care nurses completed the ‘Barriers to Enterally Feeding critically Ill Patients’ questionnaire. Our primary outcome was adequacy of calories from enteral nutrition (proportion of prescribed calories received enterally) and our primary predictor of interest was a barrier score derived from ratings of importance of items in the questionnaire.
Results
The mean adequacy of calories from enteral nutrition was 48% (standard deviation (SD) 17%). Evaluation for confounding identified patient type, proportion of nurse respondents working in the ICU for more than 5 years, and geographic region as important covariates. In a regression model adjusting for these covariates plus evaluable nutrition days and APACHE II score, we observed that a 10-point increase in overall barrier score was associated with a 3.5% (standard error (SE) 1.3%) decrease in enteral nutrition adequacy (p < 0.01).
Conclusion
Our results provide evidence to support our a priori hypothesis that barriers negatively impact the provision of nutrition in ICUs, suggesting that our recently developed questionnaire may be a promising tool to identify these important factors, and guide the selection of interventions to optimize nutrition practice. Further research is required to illuminate if and how the type of barrier, profession of the provider, and geographic location of the hospital may influence this association.
doi:10.1186/1472-6963-14-197
PMCID: PMC4012747  PMID: 24885039
Barriers; Critical care; Enteral nutrition; Instrument development; Nutrition therapy; Quality improvement; Multi-level regression analysis; Validity
6.  Thromboprophylaxis patterns and determinants in critically ill patients: a multicenter audit 
Critical Care  2014;18(2):R82.
Introduction
Heparin is safe and prevents venous thromboembolism in critical illness. We aimed to determine the guideline concordance for thromboprophylaxis in critically ill patients and its predictors, and to analyze factors associated with the use of low molecular weight heparin (LMWH), as it may be associated with a lower risk of pulmonary embolism and heparin-induced thrombocytopenia without increasing the bleeding risk.
Methods
We performed a retrospective audit in 28 North American intensive care units (ICUs), including all consecutive medical-surgical patients admitted in November 2011. We documented ICU thromboprophylaxis and reasons for omission. Guideline concordance was determined by adding days in which patients without contraindications received thromboprophylaxis to days in which patients with contraindications did not receive it, divided by the total number of patient-days. We used multilevel logistic regression including time-varying, center and patient-level covariates to determine the predictors of guideline concordance and use of LMWH.
Results
We enrolled 1,935 patients (62.3 ± 16.7 years, Acute Physiology and Chronic Health Evaluation [APACHE] II score 19.1 ± 8.3). Patients received thromboprophylaxis with unfractionated heparin (UFH) (54.0%) or LMWH (27.6%). Guideline concordance occurred for 95.5% of patient-days and was more likely in patients who were sicker (odds ratio (OR) 1.49, 95% confidence interval (CI) 1.17 to 1.75 per 10-point increase in APACHE II), heavier (OR 1.32, 95% CI 1.05 to 1.65 per 10-kg/m² increase in body mass index), had cancer (OR 3.22, 95% CI 1.81 to 5.72), had previous venous thromboembolism (OR 3.94, 95% CI 1.46 to 10.66), or received mechanical ventilation (OR 1.83, 95% CI 1.32 to 2.52). Reasons for not receiving thromboprophylaxis were high risk of bleeding (44.5%), current bleeding (16.3%), no reason given (12.9%), recent or upcoming invasive procedure (10.2%), nighttime admission or discharge (9.7%), and life-support limitation (6.9%). LMWH was less often administered to sicker patients (OR 0.65, 95% CI 0.48 to 0.89 per 10-point increase in APACHE II), surgical patients (OR 0.41, 95% CI 0.24 to 0.72), and those receiving vasoactive drugs (OR 0.47, 95% CI 0.35 to 0.64) or renal replacement therapy (OR 0.10, 95% CI 0.05 to 0.23).
Conclusions
Guideline concordance for thromboprophylaxis was high, but LMWH was less commonly used, especially in patients who were sicker, had surgery, or received vasopressors or renal replacement therapy, representing a potential quality improvement target.
doi:10.1186/cc13844
PMCID: PMC4057024  PMID: 24766968
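The concordance metric defined in the Methods above amounts to simple patient-day arithmetic; a minimal sketch with hypothetical day-level records (the field names are illustrative, not from the study database):

```python
def guideline_concordance(patient_days):
    """Fraction of patient-days managed concordantly with the guideline.

    Each day is a (contraindicated, received_prophylaxis) pair of booleans.
    A day counts as concordant when a patient without contraindications
    received thromboprophylaxis, or a patient with contraindications did
    not receive it, i.e. exactly when the two flags differ.
    """
    concordant = sum(1 for contra, received in patient_days
                     if received != contra)
    return concordant / len(patient_days)
```

For example, `[(False, True), (True, False)]` (one eligible day with prophylaxis, one contraindicated day without) scores 1.0, while an eligible day without prophylaxis counts against concordance.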
7.  Predictors of physical restraint use in Canadian intensive care units 
Critical Care  2014;18(2):R46.
Introduction
Physical restraint (PR) use in the intensive care unit (ICU) has been associated with higher rates of self-extubation and prolonged ICU length of stay. Our objectives were to describe patterns and predictors of PR use.
Methods
We conducted a secondary analysis of a prospective observational study of analgosedation, antipsychotic, neuromuscular blocker, and PR practices in 51 Canadian ICUs. Data were collected prospectively for all mechanically ventilated adults admitted during a two-week period. We tested for patient, treatment, and hospital characteristics that were associated with PR use and number of days of use, using logistic and Poisson regression respectively.
Results
PR was used in 374 of 711 (53%) patients, for a mean of 4.1 (standard deviation (SD) 4.0) days. Treatment characteristics associated with PR were higher daily benzodiazepine dose (odds ratio (OR) 1.05, 95% confidence interval (CI) 1.00 to 1.11), higher daily opioid dose (OR 1.04, 95% CI 1.01 to 1.06), antipsychotic drugs (OR 3.09, 95% CI 1.74 to 5.48), agitation (Sedation-Agitation Scale (SAS) >4) (OR 3.73, 95% CI 1.50 to 9.29), and sedation administration method (continuous and bolus versus bolus only) (OR 3.09, 95% CI 1.74 to 5.48). Among hospital characteristics, patients were less likely to be restrained in ICUs of university-affiliated hospitals (OR 0.32, 95% CI 0.17 to 0.61). Mainly treatment characteristics were associated with more days of PR, including higher daily benzodiazepine dose (incidence rate ratio (IRR) 1.07, 95% CI 1.01 to 1.13), daily sedation interruption (IRR 3.44, 95% CI 1.48 to 8.10), antipsychotic drugs (IRR 15.67, 95% CI 6.62 to 37.12), SAS <3 (IRR 2.62, 95% CI 1.08 to 6.35), and any adverse event including accidental device removal (IRR 8.27, 95% CI 2.07 to 33.08). Patient characteristics (age, gender, Acute Physiology and Chronic Health Evaluation II score, admission category, prior substance abuse, prior psychotropic medication, pre-existing psychiatric condition or dementia) were not associated with PR use or number of days used.
Conclusions
PR was used in half of the patients in these 51 ICUs. Treatment characteristics predominantly predicted PR use, as opposed to patient or hospital/ICU characteristics. Use of sedative, analgesic, and antipsychotic drugs, agitation, heavy sedation, and occurrence of an adverse event predicted PR use or number of days used.
doi:10.1186/cc13789
PMCID: PMC4075126  PMID: 24661688
8.  Development and psychometric properties of a questionnaire to assess barriers to feeding critically ill patients 
Background
To successfully implement the recommendations of critical care nutrition guidelines, one potential approach is to identify barriers to providing optimal enteral nutrition (EN) in the intensive care unit (ICU), and then address these barriers systematically. Therefore, the purpose of this study was to develop a questionnaire to assess barriers to enterally feeding critically ill patients and to conduct preliminary validity testing of the new instrument.
Methods
The content of the questionnaire was guided by a published conceptual framework, a literature review, and consultation with experts. The questionnaire was pre-tested on a convenience sample of 32 critical care practitioners, then field tested with 186 critical care providers working at 5 hospitals in North America. The revised questionnaire was pilot tested in another ICU (n = 43). Finally, the questionnaire was distributed twice, two weeks apart, to a random sample of ICU nurses to determine test-retest reliability (n = 17). Descriptive statistics, exploratory factor analysis, Cronbach alpha, intraclass correlations (ICC), and kappa coefficients were used to assess validity and reliability.
Results
We developed a questionnaire with 26 potential barriers to delivery of EN, asking respondents to rate the importance of each as a barrier in their ICU. Face and content validity of the questionnaire were established through literature review and expert input. The factor analysis indicated a five-factor solution accounting for 72% of the variance in barriers: guideline recommendations and implementation strategies, delivery of EN to the patient, critical care provider attitudes and behavior, dietitian support, and ICU resources. Overall, the indices of internal reliability for the derived factor subscales and the overall instrument were acceptable (subscale Cronbach alphas 0.84 to 0.89). However, the test-retest reliability was variable and below acceptable thresholds for the majority of items (ICCs -0.13 to 0.70). The within-group agreement, an index reflecting the reliability of aggregating individual responses to the ICU level, was also variable (ICCs 0.0 to 0.82).
Conclusions
We developed a questionnaire to identify barriers to enteral feeding in critically ill patients. Additional studies are planned to further revise and evaluate the reliability and validity of the instrument.
doi:10.1186/1748-5908-8-140
PMCID: PMC4235036  PMID: 24305039
Barriers; Critical care; Factor analysis; Guideline implementation; Instrument development; Nutrition; Reliability; Validity
9.  Randomized controlled trials in pediatric critical care: a scoping review 
Critical Care  2013;17(5):R256.
Introduction
Evidence from randomized controlled trials (RCTs) is required to guide treatment of critically ill children, but the number of RCTs available is limited and the publications are often difficult to find. The objectives of this review were to systematically identify RCTs in pediatric critical care and describe their methods and reporting.
Methods
We searched MEDLINE, EMBASE, LILACS and CENTRAL (from inception to April 16, 2013) and reference lists of included RCTs and relevant systematic reviews. We included published RCTs administering any intervention to children in a pediatric ICU. We excluded trials conducted in neonatal ICUs, those enrolling exclusively preterm infants, and individual patient crossover trials. Pairs of reviewers independently screened studies for eligibility, assessed risk of bias, and abstracted data. Discrepancies were resolved by consensus.
Results
We included 248 RCTs: 45 (18%) were multicentered and 14 (6%) were multinational. Trials most frequently enrolled both medical and surgical patients (43%), but postoperative cardiac surgery was the single largest population studied (19%). The most frequently evaluated types of intervention were medications (63%), devices (11%) and nutrition (8%). Laboratory or physiological measurements were the most frequent type of primary outcome (18%). Half of the trials (50%) reported blinding. Of the 107 (43%) trials that reported an a priori sample size, 34 (32%) were stopped early. The median number of children randomized per trial was 49 and ranged from 6 to 4,947. The frequency of RCT publications increased over time at a mean rate of 0.7 additional RCTs per year (P < 0.001), from 1 to 20 trials per year.
Conclusions
This scoping review identified the available RCTs in pediatric critical care and made them accessible to clinicians and researchers (http://epicc.mcmaster.ca). Most focused on medications and intermediate or surrogate outcomes, were single-centered and were conducted in North America and Western Europe. The results of this review underscore the need for trials with rigorous methodology, appropriate outcome measures, and improved quality of reporting to ensure that high quality evidence exists to support clinical decision-making in this vulnerable population.
doi:10.1186/cc13083
PMCID: PMC4057256  PMID: 24168782
10.  Cardiac ischemia in patients with septic shock randomized to vasopressin or norepinephrine 
Critical Care  2013;17(3):R117.
Introduction
Cardiac troponins are sensitive and specific biomarkers of myocardial necrosis. We evaluated troponin, CK, and ECG abnormalities in patients with septic shock and compared the effect of vasopressin (VP) versus norepinephrine (NE) on troponin, CK, and ECGs.
Methods
This was a prospective substudy of a randomized trial. Adults with septic shock were randomly assigned, in a blinded manner, to a low-dose infusion of VP (0.01 to 0.03 U/min) or NE (5 to 15 μg/min) in addition to open-label vasopressors, titrated to maintain a mean blood pressure of 65 to 75 mm Hg. Troponin I/T, CK, and CK-MB were measured, and 12-lead ECGs were recorded before study drug, and 6 hours, 2 days, and 4 days after study-drug initiation. Two physician readers, blinded to patient data and drug, independently interpreted ECGs.
Results
We enrolled 121 patients (median age, 63.9 years (interquartile range (IQR), 51.1 to 75.3), mean APACHE II 28.6 (SD 7.7)): 65 in the VP group and 56 in the NE group. At the four time points, 26%, 36%, 32%, and 21% of patients had troponin elevations, respectively. Baseline characteristics and outcomes were similar between patients with positive versus negative troponin levels. Troponin and CK levels and rates of ischemic ECG changes were similar in the VP and the NE groups. In multivariable analysis, only APACHE II was associated with 28-day mortality (OR, 1.07; 95% CI, 1.01 to 1.14; P = 0.033).
Conclusions
Troponin elevation is common in adults with septic shock. We observed no significant differences in troponin, CK, and ECGs in patients treated with vasopressin and norepinephrine. Troponin elevation was not an independent predictor of mortality.
Trial registration
Controlled-trials.com ISRCTN94845869
doi:10.1186/cc12789
PMCID: PMC4057204  PMID: 23786655
Septic shock; Myocardial ischemia; Vasopressin; Norepinephrine; Troponin; Electrocardiogram
11.  Addressing Dichotomous Data for Participants Excluded from Trial Analysis: A Guide for Systematic Reviewers 
PLoS ONE  2013;8(2):e57132.
Introduction
Systematic review authors intending to include all randomized participants in their meta-analyses need to make assumptions about the outcomes of participants with missing data.
Objective
The objective of this paper is to provide systematic review authors with relatively simple guidance for addressing dichotomous data for participants excluded from analyses of randomized trials.
Methods
This guide is based on a review of the Cochrane handbook and published methodological research. The guide deals with participants excluded from the analysis who were considered ‘non-adherent to the protocol’ but for whom data are available, and participants with missing data.
Results
Systematic review authors should include data from 'non-adherent' participants excluded from the primary study authors' analysis but for whom data are available. For missing, unavailable participant data, authors may conduct a complete case analysis (excluding those with missing data) as the primary analysis. Alternatively, they may conduct a primary analysis that makes plausible assumptions about the outcomes of participants with missing data. When the primary analysis suggests important benefit, sensitivity meta-analyses using relatively extreme assumptions that may vary in plausibility can inform the extent to which risk of bias impacts the confidence in the results of the primary analysis. The more plausible assumptions draw on the outcome event rates within the trial or in all trials included in the meta-analysis. The proposed guide does not take into account the uncertainty associated with assumed events.
Conclusions
This guide proposes methods for handling participants excluded from analyses of randomized trials. These methods can help in establishing the extent to which risk of bias impacts meta-analysis results.
doi:10.1371/journal.pone.0057132
PMCID: PMC3581575  PMID: 23451162
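The complete-case and extreme-assumption analyses this guide describes can be sketched at the level of a single trial as follows; the arm-level counts, function names, and choice of "worst/best case for treatment" labels are illustrative assumptions, not taken from the paper:

```python
def risk_ratio(e_t, n_t, e_c, n_c):
    """Risk ratio for events/totals in treatment vs control arms."""
    return (e_t / n_t) / (e_c / n_c)

def missing_data_sensitivity(e_t, f_t, m_t, e_c, f_c, m_c):
    """Risk ratios under complete-case and two extreme assumptions.

    e_* = observed events, f_* = observed non-events (followed up),
    m_* = participants with missing outcomes, for treatment/control arms.
    """
    return {
        # exclude participants with missing outcome data
        "complete_case": risk_ratio(e_t, e_t + f_t, e_c, e_c + f_c),
        # worst case for treatment: all missing in the treatment arm had
        # the event, none of the missing in the control arm did
        "worst_case": risk_ratio(e_t + m_t, e_t + f_t + m_t,
                                 e_c, e_c + f_c + m_c),
        # best case for treatment: the reverse assumption
        "best_case": risk_ratio(e_t, e_t + f_t + m_t,
                                e_c + m_c, e_c + f_c + m_c),
    }
```

If an apparent benefit in the complete-case analysis survives even the worst-case assumption, missing data are unlikely to explain the effect; if it does not, confidence in the primary result should be tempered.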
12.  Co-enrollment of critically ill patients into multiple studies: patterns, predictors and consequences 
Critical Care  2013;17(1):R1.
Introduction
Research on co-enrollment practices and their impact are limited in the ICU setting. The objectives of this study were: 1) to describe patterns and predictors of co-enrollment of patients in a thromboprophylaxis trial, and 2) to examine the consequences of co-enrollment on clinical and trial outcomes.
Methods
In an observational analysis of an international thromboprophylaxis trial in 67 ICUs, we examined the co-enrollment of critically ill medical-surgical patients into more than one study, and examined the clinical and trial outcomes among co-enrolled and non-co-enrolled patients.
Results
Among 3,746 patients enrolled in PROTECT (Prophylaxis for ThromboEmbolism in Critical Care Trial), 713 (19.0%) were co-enrolled in at least one other study (53.6% in a randomized trial, 37.0% in an observational study and 9.4% in both). Six factors independently associated with co-enrollment (all P < 0.001) were illness severity (odds ratio (OR) 1.35, 95% confidence interval (CI) 1.19 to 1.53 for each 10-point Acute Physiology and Chronic Health Evaluation (APACHE) II score increase), substitute decision-makers providing consent, rather than patients (OR 3.31, 2.03 to 5.41), experience of persons inviting consent (OR 2.67, 1.74 to 4.11 for persons with > 10 years' experience compared to persons with none), center size (all ORs > 10 for ICUs with > 15 beds), affiliation with trials groups (OR 5.59, 3.49 to 8.95), and main trial rather than pilot phase (all ORs > 8 for recruitment year beyond the pilot). Co-enrollment did not influence clinical or trial outcomes or risk of adverse events.
Conclusions
Co-enrollment was strongly associated with features of the patients, research personnel, setting and study. Co-enrollment had no impact on trial results, and appeared safe, acceptable and feasible. Transparent reporting, scholarly discourse, ethical analysis and further research are needed on the complex topic of co-enrollment during critical illness.
doi:10.1186/cc11917
PMCID: PMC4056073  PMID: 23298553
13.  Strategies to enhance venous thromboprophylaxis in hospitalized medical patients (SENTRY): a pilot cluster randomized trial 
Background
Venous thromboembolism (VTE) is a common preventable cause of mortality in hospitalized medical patients. Despite rigorous randomized trials generating strong recommendations for anticoagulant use to prevent VTE, nearly 40% of medical patients receive inappropriate thromboprophylaxis. Knowledge-translation strategies are needed to bridge this gap.
Methods
We conducted a 16-week pilot cluster randomized controlled trial (RCT) to determine the proportion of medical patients that were appropriately managed for thromboprophylaxis (according to the American College of Chest Physician guidelines) within 24 hours of admission, through the use of a multicomponent knowledge-translation intervention. Our primary goal was to determine the feasibility of conducting this study on a larger scale. The intervention comprised clinician education, a paper-based VTE risk assessment algorithm, printed physicians’ orders, and audit and feedback sessions. Medical wards at six hospitals (representing clusters) in Ontario, Canada were included; three were randomized to the multicomponent intervention and three to usual care (i.e., no active strategies for thromboprophylaxis in place). Blinding was not used.
Results
A total of 2,611 patients (1,154 in the intervention and 1,457 in the control group) were eligible and included in the analysis. This multicomponent intervention did not lead to a significant difference in appropriate VTE prophylaxis rates between intervention and control hospitals (appropriate management rate odds ratio = 0.80; 95% confidence interval: 0.50, 1.28; p = 0.36; intra-class correlation coefficient: 0.022), and thus was not considered feasible. Major barriers to effective knowledge translation were poor attendance by clinical staff at education and feedback sessions, difficulty locating preprinted orders, and lack of involvement by clinical and administrative leaders. We identified several factors that may increase uptake of a VTE prophylaxis strategy, including local champions, support from clinical and administrative leaders, mandatory use, and a simple, clinically relevant risk assessment tool.
Conclusions
Hospitals allocated to our multicomponent intervention did not have a higher rate of medical inpatients appropriately managed for thromboprophylaxis than did hospitals that were not allocated to this strategy.
doi:10.1186/1748-5908-8-1
PMCID: PMC3547806  PMID: 23279972
Thromboprophylaxis; Medical patients; Anticoagulants; Venous thromboembolism; Cluster randomization; Standard orders
14.  Prognostic utility and characterization of cell-free DNA in patients with severe sepsis 
Critical Care  2012;16(4):R151.
Introduction
Although sepsis is the leading cause of death in noncoronary critically ill patients, identification of patients at high risk of death remains a challenge. In this study, we examined the incremental usefulness of adding multiple biomarkers to clinical scoring systems for predicting intensive care unit (ICU) mortality in patients with severe sepsis.
Methods
This retrospective observational study used stored plasma samples obtained from 80 severe sepsis patients recruited at three tertiary hospital ICUs in Hamilton, Ontario, Canada. Clinical data and plasma samples were obtained at study inclusion for all 80 patients, and then daily for 1 week, and weekly thereafter for a subset of 50 patients. Plasma levels of cell-free DNA (cfDNA), interleukin 6 (IL-6), thrombin, and protein C were measured and compared with clinical characteristics, including the primary outcome of ICU mortality and morbidity measured with the Multiple Organ Dysfunction (MODS) score and Acute Physiology and Chronic Health Evaluation (APACHE) II scores.
Results
The level of cfDNA in plasma at study inclusion had better prognostic utility than did MODS or APACHE II scores, or the other biomarkers measured. The area under the receiver operating characteristic (ROC) curve for cfDNA to predict ICU mortality was 0.97 (95% CI, 0.93 to 1.00) and to predict hospital mortality was 0.84 (95% CI, 0.75 to 0.94). We found that a cfDNA cutoff value of 2.35 ng/μl had a sensitivity of 87.9% and specificity of 93.5% for predicting ICU mortality. Sequential measurements of cfDNA suggested that ICU mortality may be predicted within 24 hours of study inclusion, and that the predictive power of cfDNA may be enhanced by combining it with protein C levels or MODS scores. DNA-sequence analyses and studies with Toll-like receptor 9 (TLR9) reporter cells suggest that the cfDNA from sepsis patients is host derived.
Conclusions
These studies suggest that cfDNA provides high prognostic accuracy in patients with severe sepsis. The serial data suggest that the combination of cfDNA with protein C and MODS scores may yield even stronger predictive power. Incorporation of cfDNA in sepsis risk-stratification systems may be valuable for clinical decision making or for inclusion into sepsis trials.
doi:10.1186/cc11466
PMCID: PMC3580740  PMID: 22889177
15.  Survey of pharmacologic thromboprophylaxis in critically ill children 
Critical care medicine  2011;39(7):1773-1778.
Objective
There is lack of evidence to guide thromboprophylaxis in the pediatric intensive care unit (PICU). We aimed to assess current prescribing practice for pharmacologic thromboprophylaxis in critically ill children.
Setting
PICUs in the United States and Canada with at least 10 beds.
Design
Cross-sectional self-administered survey of pediatric intensivists using adolescent, child and infant scenarios.
Participants
PICU clinical directors or section heads.
Intervention
None.
Measurements and Main Results
Physician leaders from 97 of 151 (64.2%) PICUs or their designees responded to the survey. In mechanically ventilated children, 42.3% of the respondents would usually or always prescribe thromboprophylaxis for the adolescent but only 1.0% would prescribe it for the child and 1.1% for the infant. Considering all PICU patients, 3.1%, 32.0% and 44.2% of respondents would never prescribe thromboprophylaxis for the adolescent, child and infant scenarios, respectively. These findings were significant (P<.001 for the adolescent versus child and infant; P=.002 for child versus infant). Other patient factors that increased the likelihood of prescribing prophylaxis to a critically ill child for all 3 scenarios were the presence of hypercoagulability, prior deep venous thrombosis or a cavopulmonary anastomosis. Prophylaxis was less likely to be prescribed to patients with major bleeding or an anticipated invasive intervention. Low molecular weight heparin was the most commonly prescribed drug.
Conclusions
In these scenarios, physician leaders in PICUs were more likely to prescribe thromboprophylaxis to adolescents compared to children or infants, but they prescribed it less often in adolescents than is recommended by evidence-based guidelines for adults. The heterogeneity in practice we documented underscores the need for rigorous randomized trials to determine the need for thromboprophylaxis in critically ill adolescents and children.
doi:10.1097/CCM.0b013e3182186ec0
PMCID: PMC3118917  PMID: 21423003
venous thromboembolism; anticoagulants; prevention; risk factor; intensive care
16.  Genome-Wide Association Study among Four Horse Breeds Identifies a Common Haplotype Associated with In Vitro CD3+ T Cell Susceptibility/Resistance to Equine Arteritis Virus Infection 
Journal of Virology  2011;85(24):13174-13184.
Previously, we have shown that horses could be divided into susceptible and resistant groups based on an in vitro assay using dual-color flow cytometric analysis of CD3+ T cells infected with equine arteritis virus (EAV). Here, we demonstrate that the differences in in vitro susceptibility of equine CD3+ T lymphocytes to EAV infection have a genetic basis. To investigate the possible hereditary basis for this trait, we conducted a genome-wide association study (GWAS) to compare susceptible and resistant phenotypes. Testing of 267 DNA samples from four horse breeds that had a susceptible or a resistant CD3+ T lymphocyte phenotype using both Illumina Equine SNP50 BeadChip and Sequenom's MassARRAY system identified a common, genetically dominant haplotype associated with the susceptible phenotype in a region of equine chromosome 11 (ECA11), positions 49572804 to 49643932. The presence of a common haplotype indicates that the trait occurred in a common ancestor of all four breeds, suggesting that it may be segregating among other modern horse breeds. Biological pathway analysis revealed several cellular genes within this region of ECA11 encoding proteins associated with virus attachment and entry, cytoskeletal organization, and NF-κB pathways that may be associated with the trait responsible for the in vitro susceptibility/resistance of CD3+ T lymphocytes to EAV infection. The data presented in this study demonstrated a strong association of genetic markers with the trait, representing de facto proof that the trait is under genetic control. To our knowledge, this is the first GWAS of an equine infectious disease and the first GWAS of equine viral arteritis.
doi:10.1128/JVI.06068-11
PMCID: PMC3233183  PMID: 21994447
17.  Economic analyses of venous thromboembolism prevention strategies in hospitalized patients: a systematic review 
Critical Care  2012;16(2):R43.
Introduction
Despite evidence-based guidelines for venous thromboembolism prevention, substantial variability is found in practice. Many economic evaluations of new drugs for thromboembolism prevention do not occur prospectively with efficacy studies and are sponsored by the manufacturers, raising the possibility of bias. We performed a systematic review of economic analyses of venous thromboembolism prevention in hospitalized patients to inform clinicians and policy makers about cost-effectiveness and the potential influence of sponsorship.
Methods
We searched MEDLINE, EMBASE, Cochrane Databases, ACP Journal Club, and Database of Abstracts of Reviews of Effects, from 1946 to September 2011. We extracted data on study characteristics, quality, costs, and efficacy.
Results
From 5,180 identified studies, 39 met eligibility and quality criteria. Each addressed pharmacologic prevention: low-molecular-weight heparins versus placebo (five), unfractionated heparin (12), warfarin (eight), or other agents (five); fondaparinux versus enoxaparin (11); and rivaroxaban and dabigatran versus enoxaparin (two). Low-molecular-weight heparins were most economically attractive among most medical and surgical patients, whereas fondaparinux was favored for orthopedic patients. Fondaparinux was associated with increased bleeding events. Newer agents rivaroxaban and dabigatran may offer additional value. Of all economic evaluations, 64% were supported by manufacturers of a "new" agent. The new agent had a favorable outcome in 38 (97.4%) of 39 evaluations (95% confidence interval (CI), 86.5 to 99.9). Among studies supported by a pharmaceutical company, the sponsored medication was economically attractive in 24 (96.0%) of 25 (95% CI, 80.0 to 99.9). We could not detect a consistent bias in outcome based on sponsorship; however, only a minority of studies were unsponsored.
Conclusion
Low-molecular-weight heparins and fondaparinux are the most economically attractive drugs for venous thromboembolism prevention in hospitalized patients. Approximately two thirds of evaluations were supported by the manufacturer of the new agent; such drugs were likely to be reported as economically favorable.
doi:10.1186/cc11241
PMCID: PMC3964799
19.  Red blood cell transfusion and increased length of storage are not associated with deep vein thrombosis in medical and surgical critically ill patients: a prospective observational cohort study 
Critical Care  2011;15(6):R263.
Introduction
With prolonged storage times, cell membranes of red blood cells (RBCs) undergo morphologic and biochemical changes, termed 'RBC storage lesions'. Storage lesions may promote inflammation and thrombophilia when transfused. In trauma patients, RBC transfusion was an independent risk factor for deep vein thrombosis (DVT), specifically when RBC units were stored > 21 days or when 5 or more units were transfused. The objective of this study was to determine if RBC transfusions or RBC storage age predicts incident DVT in medical or surgical intensive care unit (ICU) patients.
Methods
Using a database which prospectively enrolled 261 patients over the course of 1 year with an ICU stay of at least 3 days, we analyzed DVT and RBC transfusions using Cox proportional hazards regression. Transfusions were analyzed with 4 thresholds, and storage age using 3 thresholds. DVTs were identified by twice-weekly proximal leg ultrasounds. Multivariable analyses were adjusted for 4 significant DVT predictors in this population (venous thrombosis history, chronic dialysis, platelet transfusion and inotropes).
Results
Of 261 patients, 126 (48.3%) had at least 1 RBC transfusion; 46.8% of those transfused received ≥ 5 units in the ICU. Patients receiving RBCs were older (68.8 vs 64.1 years), more likely to be female (47.0% vs 30.7%), sicker (APACHE II 26.8 vs 24.4), and more likely to be surgical (21.4% vs 8.9%) (P < 0.05). The number of RBC units per patient ranged from 1 to 64 (mean 6.3 (SD 7.5); median 4 (IQR 2 to 8)). In univariate analyses, there was no association between DVT and RBC exposure (1 day earlier, 3 days earlier, 7 days earlier, or ever) or RBC storage age (≤ 7 or > 7 days, ≤ 14 or > 14 days, ≤ 21 or > 21 days). Among patients transfused, no multivariable analysis showed that RBC transfusion or storage age predicted DVT. Trends ran counter to the hypothesis (e.g., RBC storage for ≤ 7 days suggested a higher DVT risk than storage for > 7 days (HR 5.3; 95% CI 1.3-22.1)).
Conclusions
We were unable to detect any association between RBC transfusion or prolonged red cell storage and an increased risk of DVT in medical or surgical ICU patients. Alternate explanations include a lack of sufficient events or patients, interactions between patient groups, a mixing of red cell storage times creating differential effects on DVT risk, and unmeasured confounders.
doi:10.1186/cc10526
PMCID: PMC3388665  PMID: 22044745
20.  A Canadian Critical Care Trials Group project in collaboration with the international forum for acute care trialists - Collaborative H1N1 Adjuvant Treatment pilot trial (CHAT): study protocol and design of a randomized controlled trial 
Trials  2011;12:70.
Background
Swine-origin influenza A/H1N1 infection (H1N1) emerged in early 2009 and rapidly spread to humans. For most infected individuals, symptoms were mild and self-limited; however, a small number developed a more severe clinical syndrome characterized by profound respiratory failure, with hospital mortality ranging from 10 to 30%. While supportive care and neuraminidase inhibitors are the main treatment for influenza, data from observational and interventional studies suggest that the course of influenza can be favorably influenced by agents not classically considered as influenza treatments. Multiple observational studies have suggested that HMG-CoA reductase inhibitors (statins) can exert a class effect in attenuating inflammation. The Collaborative H1N1 Adjuvant Treatment (CHAT) Pilot Trial sought to investigate the feasibility of conducting a trial during a global pandemic in critically ill patients with H1N1, with the goal of informing the design of a larger trial powered to determine the impact of statins on important outcomes.
Methods/Design
A multi-national, pilot randomized controlled trial (RCT) of once daily enteral rosuvastatin versus matched placebo administered for 14 days for the treatment of critically ill patients with suspected, probable or confirmed H1N1 infection. We propose to randomize 80 critically ill adults with a moderate to high index of suspicion for H1N1 infection who require mechanical ventilation and have received antiviral therapy for ≤ 72 hours. Site investigators, research coordinators and clinical pharmacists will be blinded to treatment assignment. Only research pharmacy staff will be aware of treatment assignment. We propose several approaches to informed consent including a priori consent from the substitute decision maker (SDM), waived and deferred consent. The primary outcome of the CHAT trial is the proportion of eligible patients enrolled in the study. Secondary outcomes will evaluate adherence to medication administration regimens, the proportion of primary and secondary endpoints collected, the number of patients receiving open-label statins, consent withdrawals and the effect of approved consent models on recruitment rates.
Discussion
Several aspects of study design, including the need for central randomization, preservation of allocation concealment, study blinding through comparison with a matched placebo, and the use of novel consent models, pose challenges to investigators conducting pandemic research. Moreover, study implementation requires that the trial design be pragmatic and initiated within a short time period amidst uncertainty regarding the scope and duration of the pandemic.
Trial Registration Number
ISRCTN45190901
doi:10.1186/1745-6215-12-70
PMCID: PMC3068961  PMID: 21388549
21.  Pressure and Volume Limited Ventilation for the Ventilatory Management of Patients with Acute Lung Injury: A Systematic Review and Meta-Analysis 
PLoS ONE  2011;6(1):e14623.
Background
Acute lung injury (ALI) and acute respiratory distress syndrome (ARDS) are life threatening clinical conditions seen in critically ill patients with diverse underlying illnesses. Lung injury may be perpetuated by ventilation strategies that do not limit lung volumes and airway pressures. We conducted a systematic review and meta-analysis of randomized controlled trials (RCTs) comparing pressure and volume-limited (PVL) ventilation strategies with more traditional mechanical ventilation in adults with ALI and ARDS.
Methods and Findings
We searched Medline, EMBASE, HEALTHSTAR and CENTRAL, related articles on PubMed™, conference proceedings and bibliographies of identified articles for randomized trials comparing PVL ventilation with traditional approaches to ventilation in critically ill adults with ALI and ARDS. Two reviewers independently selected trials, assessed trial quality, and abstracted data. We identified ten trials (n = 1,749) meeting study inclusion criteria. Tidal volumes achieved in control groups were at the lower end of the traditional range of 10–15 mL/kg. We found a clinically important but borderline statistically significant reduction in hospital mortality with PVL [relative risk (RR) 0.84; 95% CI 0.70, 1.00; p = 0.05]. This reduction in risk was attenuated (RR 0.90; 95% CI 0.74, 1.09, p = 0.27) in a sensitivity analysis which excluded 2 trials that combined PVL with open-lung strategies and stopped early for benefit. We found no effect of PVL on barotrauma; however, use of paralytic agents increased significantly with PVL (RR 1.37; 95% CI, 1.04, 1.82; p = 0.03).
Conclusions
This systematic review suggests that PVL strategies for mechanical ventilation in ALI and ARDS reduce mortality and are associated with increased use of paralytic agents.
doi:10.1371/journal.pone.0014623
PMCID: PMC3030554  PMID: 21298026
22.  Defining priorities for improving end-of-life care in Canada 
Background
High-quality end-of-life care should be the right of every Canadian. The objective of this study was to identify aspects of end-of-life care that are high in priority as targets for improvement using feedback elicited from patients and their families.
Methods
We conducted a multicentre, cross-sectional survey involving patients with advanced, life-limiting illnesses and their family caregivers. We administered the Canadian Health Care Evaluation Project (CANHELP) questionnaire along with a global rating question to measure satisfaction with end-of-life care. We derived the relative importance of individual questions on the CANHELP questionnaire from their association with a global rating of satisfaction, as determined using Pearson correlation coefficients. To determine high-priority issues, we identified questions that had scores indicating high importance and low satisfaction.
Results
We approached 471 patients and 255 family members, of whom 363 patients and 193 family members participated, with response rates of 77% for patients and 76% for families. From the perspective of patients, high-priority areas needing improvement were related to feelings of peace, to assessment and treatment of emotional problems, to physician availability and to satisfaction that the physician took a personal interest in them, communicated clearly and consistently, and listened. From the perspective of family members, similar areas were identified as high in priority, along with the additional areas of timely information about the patient’s condition and discussions with the doctor about final location of care and use of end-of-life technology.
Interpretation
End-of-life care in Canada may be improved for patients and their families by providing better psychological and spiritual support, better planning of care and enhanced relationships with physicians, especially in aspects related to communication and decision-making.
doi:10.1503/cmaj.100131
PMCID: PMC2972350  PMID: 20921249
23.  Resuscitation fluid use in critically ill adults: an international cross-sectional study in 391 intensive care units 
Critical Care  2010;14(5):R185.
Introduction
Recent evidence suggests that choice of fluid used for resuscitation may influence mortality in critically ill patients.
Methods
We conducted a cross-sectional study in 391 intensive care units across 25 countries to describe the types of fluids administered during resuscitation episodes. We used generalized estimating equations to examine the association between patient, prescriber and geographic factors and the type of fluid administered (classified as crystalloid, colloid or blood products).
Results
During the 24-hour study period, 1,955 of 5,274 (37.1%) patients received resuscitation fluid during 4,488 resuscitation episodes. The main indications for administering crystalloid or colloid were impaired perfusion (1,526/3,419 (44.6%) of episodes), or to correct abnormal vital signs (1,189/3,419 (34.8%)). Overall, colloid was administered to more patients (1,234 (23.4%) versus 782 (14.8%)) and during more episodes (2,173 (48.4%) versus 1,468 (32.7%)) than crystalloid. After adjusting for patient and prescriber characteristics, practice varied significantly between countries with country being a strong independent determinant of the type of fluid prescribed. Compared to Canada where crystalloid, colloid and blood products were administered in 35.5%, 40.6% and 28.3% of resuscitation episodes respectively, odds ratios for the prescription of crystalloid in China, Great Britain and New Zealand were 0.46 (95% confidence interval (CI) 0.30 to 0.69), 0.18 (0.10 to 0.32) and 3.43 (1.71 to 6.84) respectively; odds ratios for the prescription of colloid in China, Great Britain and New Zealand were 1.72 (1.20 to 2.47), 4.72 (2.99 to 7.44) and 0.39 (0.21 to 0.74) respectively. In contrast, choice of fluid was not influenced by measures of illness severity (for example, Acute Physiology and Chronic Health Evaluation (APACHE) II score).
Conclusions
Administration of resuscitation fluid is a common intervention in intensive care units and choice of fluid varies markedly between countries. Although colloid solutions are more expensive and may possibly be harmful in some patients, they were administered to more patients and during more resuscitation episodes than crystalloids were.
doi:10.1186/cc9293
PMCID: PMC3219291  PMID: 20950434
24.  Whole-Genome SNP Association in the Horse: Identification of a Deletion in Myosin Va Responsible for Lavender Foal Syndrome 
PLoS Genetics  2010;6(4):e1000909.
Lavender Foal Syndrome (LFS) is a lethal inherited disease of horses with a suspected autosomal recessive mode of inheritance. LFS has been primarily diagnosed in a subgroup of the Arabian breed, the Egyptian Arabian horse. The condition is characterized by multiple neurological abnormalities and a dilute coat color. Candidate genes based on comparative phenotypes in mice and humans include the ras-associated protein RAB27a (RAB27A) and myosin Va (MYO5A). Here we report mapping of the locus responsible for LFS using a small set of 36 horses segregating for LFS. These horses were genotyped using a newly available single nucleotide polymorphism (SNP) chip containing 56,402 discriminatory elements. The whole genome scan identified an associated region containing these two functional candidate genes. Exon sequencing of the MYO5A gene from an affected foal revealed a single base deletion in exon 30 that changes the reading frame and introduces a premature stop codon. A PCR–based Restriction Fragment Length Polymorphism (PCR–RFLP) assay was designed and used to investigate the frequency of the mutant gene. All affected horses tested were homozygous for this mutation. Heterozygous carriers were detected in high frequency in families segregating for this trait, and the frequency of carriers in unrelated Egyptian Arabians was 10.3%. The mapping and discovery of the LFS mutation represents the first successful use of whole-genome SNP scanning in the horse for any trait. The RFLP assay can be used to assist breeders in avoiding carrier-to-carrier matings and thus in preventing the birth of affected foals.
Author Summary
Genetic disorders affect many domesticated species, including the horse. In this study we have focused on Lavender Foal Syndrome, a seizure disorder that leads to suffering and death in foals soon after birth. A recessively inherited disorder, its occurrence is often unpredictable and difficult for horse breeders to avoid without a diagnostic test for carrier status. The recent completion of the horse genome sequence has provided new tools for mapping traits with unprecedented resolution and power. We have applied one such tool, the Equine SNP50 genotyping chip, to a small sample set from horses affected with Lavender Foal Syndrome. A single genetic location associated with the disorder was rapidly identified using this approach. Subsequent sequencing of functional candidate genes in this location revealed a single base deletion that likely causes Lavender Foal Syndrome. From a practical standpoint, this discovery and the development of a diagnostic test for the LFS allele provides a valuable new tool for breeders seeking to avoid the disease in their foal crop. However, this work also illustrates the utility of whole-genome association studies in the horse.
doi:10.1371/journal.pgen.1000909
PMCID: PMC2855325  PMID: 20419149
25.  A retrospective cohort pilot study to evaluate a triage tool for use in a pandemic 
Critical Care  2009;13(5):R170.
Introduction
The objective of this pilot study was to assess the usability of the draft Ontario triage protocol, to estimate its potential impact on patient outcomes, and to evaluate its ability to increase resource availability, based on a retrospective cohort of critically ill patients cared for during a non-pandemic period.
Methods
Triage officers prospectively applied the protocol to 2 retrospective cohorts of patients admitted to 2 academic medical/surgical ICUs during an 8-week period of peak occupancy. Each patient was assigned a treatment priority (red -- 'highest', yellow -- 'intermediate', green -- 'discharge to ward', or blue/black -- 'expectant') by the triage officers at 3 separate time points (at ICU admission, and at 48 and 120 hours after admission).
Results
Overall, triage officers were either confident or very confident in 68.4% of their scores; arbitration was required in 54.9% of cases. Application of the triage protocol would potentially decrease the number of required ventilator days by 49.3% (568 days) and decrease total ICU days by 52.6% (895 days). Based on triage categories assigned at ICU admission, survival rates in the red (93.7%) and yellow (62.5%) categories were significantly higher than that in the blue category (24.6%), with associated P values of < 0.0001 and 0.0003, respectively. Further, the survival rate of the red group was significantly higher than the overall survival rate of 70.9% observed in the cohort (P < 0.0001). At 48 and 120 hours, survival rates in the blue group increased but remained lower than those in the red or yellow groups.
Conclusions
Refinement of the triage protocol and of its implementation is required before further study, including improved training of triage officers and modification of the protocol to minimize the exclusion from critical care of patients who might in fact benefit. However, our results suggest that the triage protocol can help to direct resources to patients who are most likely to benefit and to decrease the demands on critical care resources, thereby making more resources available to treat other critically ill patients.
doi:10.1186/cc8146
PMCID: PMC2784402  PMID: 19874595
