1.  The influence of corticosteroid treatment on the outcome of influenza A(H1N1pdm09)-related critical illness 
Critical Care  2016;20:75.
Background
Patients with 2009 pandemic influenza A(H1N1pdm09)-related critical illness were frequently treated with systemic corticosteroids. Although observational studies have reported significant corticosteroid-associated mortality after adjusting for baseline differences between patients who did and did not receive corticosteroids, corticosteroids have remained a common treatment in subsequent influenza outbreaks, including avian influenza A(H7N9). Our objective was to describe the use of corticosteroids in these patients and to investigate predictors of steroid prescription and clinical outcomes, adjusting for both baseline and time-dependent factors.
Methods
In an observational cohort study of adults with H1N1pdm09-related critical illness from 51 Canadian ICUs, we investigated predictors of steroid administration and outcomes of patients who received and those who did not receive corticosteroids. We adjusted for potential baseline confounding using multivariate logistic regression and propensity score analysis and adjusted for potential time-dependent confounding using marginal structural models.
Results
Among 607 patients, corticosteroids were administered to 280 patients (46.1 %) at a median daily dose of 227 (interquartile range, 154–443) mg of hydrocortisone equivalents for a median of 7.0 (4.0–13.0) days. Compared with patients who did not receive corticosteroids, patients who received corticosteroids had higher crude hospital mortality (25.5 % vs 16.4 %, p = 0.007) and fewer ventilator-free days at 28 days (12.5 ± 10.7 vs 15.7 ± 10.1, p < 0.001). The odds ratio for the association between corticosteroid use and hospital mortality decreased from 1.85 (95 % confidence interval 1.12–3.04, p = 0.02) with multivariate logistic regression, to 1.71 (1.05–2.78, p = 0.03) after adjustment for the propensity score to receive corticosteroids, to 1.52 (0.90–2.58, p = 0.12) after case-matching on propensity score, and to 0.96 (0.28–3.28, p = 0.95) using marginal structural modeling to adjust for time-dependent between-group differences.
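The attenuation of the odds ratio across adjustment methods reported above can be illustrated with a minimal sketch of propensity-score adjustment via inverse-probability-of-treatment weighting. All data below are synthetic, the single "severity" confounder is hypothetical, and the logistic fit is hand-rolled for self-containment; this shows the general technique only, not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic cohort: baseline severity drives both treatment and death
# (hypothetical confounder; there is no true treatment effect here).
severity = rng.normal(0, 1, n)
treated = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 1.0 * severity))))
died = rng.binomial(1, 1 / (1 + np.exp(-(-2.0 + 1.2 * severity))))

def fit_logistic(X, y, iters=50):
    """Newton-Raphson fit of a logistic regression; returns coefficients
    (intercept first)."""
    X = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (y - p)
        hess = (X * (p * (1 - p))[:, None]).T @ X
        beta += np.linalg.solve(hess, grad)
    return beta

# Crude odds ratio: biased away from 1 by confounding.
a = np.sum((treated == 1) & (died == 1)); b = np.sum((treated == 1) & (died == 0))
c = np.sum((treated == 0) & (died == 1)); d = np.sum((treated == 0) & (died == 0))
crude_or = (a * d) / (b * c)

# Propensity score from baseline severity, then inverse-probability weights.
beta_ps = fit_logistic(severity[:, None], treated)
ps = 1 / (1 + np.exp(-(beta_ps[0] + beta_ps[1] * severity)))
w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))

# Weighted 2x2 table gives the adjusted (marginal) odds ratio.
aw = np.sum(w[(treated == 1) & (died == 1)]); bw = np.sum(w[(treated == 1) & (died == 0)])
cw = np.sum(w[(treated == 0) & (died == 1)]); dw = np.sum(w[(treated == 0) & (died == 0)])
adj_or = (aw * dw) / (bw * cw)

print(round(crude_or, 2), round(adj_or, 2))
```

With no true effect in the simulation, weighting pulls the confounded crude odds ratio back toward 1, mirroring the shrinkage from 1.85 toward the null seen with successively stronger adjustment in the study.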
Conclusions
Corticosteroids were commonly prescribed for H1N1pdm09-related critical illness. Adjusting only for baseline between-group differences suggested a significantly increased risk of death associated with corticosteroids. However, after adjusting for time-dependent differences, we found no significant association between corticosteroids and mortality. These findings highlight the challenges and the importance of adjusting for baseline and time-dependent confounders when estimating clinical effects of treatments using observational studies.
Electronic supplementary material
The online version of this article (doi:10.1186/s13054-016-1230-8) contains supplementary material, which is available to authorized users.
doi:10.1186/s13054-016-1230-8
PMCID: PMC4818504  PMID: 27036638
2.  Corticosteroid therapy in critical illness due to seasonal and pandemic influenza 
Corticosteroids are a powerful class of drugs used to treat several chronic and acute conditions; however, they may be harmful to a subset of critically ill intensive care unit patients with specific conditions. Therefore, understanding the clinical triggers for corticosteroid use in this environment is a quality of care issue. Accordingly, this study used a seasonal influenza outbreak in two Canadian cities to explore the variables associated with corticosteroid use in the intensive care unit.
BACKGROUND:
Survey data suggest that Canadian intensivists administer corticosteroids to critically ill patients primarily in response to airway obstruction, perceived risk for adrenal insufficiency and hemodynamic instability.
OBJECTIVE:
To describe variables independently associated with systemic corticosteroid therapy during an influenza outbreak.
METHODS:
The present analysis was a retrospective cohort study involving critically ill patients with influenza in two Canadian cities. Hospital records were reviewed for critically ill patients treated in the intensive care units (ICUs) of eight hospitals in Canada during the 2008 to 2009 and 2009 to 2010 influenza outbreaks. Abstracted data included demographic information, symptoms at disease onset, chronic comorbidities and baseline illness severity scores. Corticosteroid use data were extracted for every ICU day and expressed as hydrocortisone dose equivalents in mg. Multivariable regression models were constructed to identify variables independently associated with corticosteroid therapy in the ICU.
RESULTS:
The study cohort included 90 patients with a mean (± SD) age of 55.0±17.3 years and Acute Physiology and Chronic Health Evaluation II score of 19.8±8.3. Patients in 2009 to 2010 were younger with more severe lung injury but similar exposure to corticosteroids. Overall, 54% of patients received corticosteroids at a mean daily dose of 343±330 mg of hydrocortisone for 8.5±4.8 days. Variables independently associated with corticosteroid therapy in the ICU were history of airway obstruction (OR 4.8 [95% CI 1.6 to 14.9]) and hemodynamic instability (OR 4.6 [95% CI 1.2 to 17.8]).
CONCLUSION:
Observational data revealed that hemodynamic instability and airway obstruction were associated with corticosteroid therapy in the critical care setting, similar to a recent survey of stated practice. Efforts to determine the effects of corticosteroids in the ICU for these specific clinical situations are warranted.
PMCID: PMC4596649  PMID: 26436911
Cohort study; Corticosteroids; Critical illness; Influenza; Intensive care unit; Pandemic H1N1; Seasonal influenza
3.  Risk factors for and prediction of mortality in critically ill medical–surgical patients receiving heparin thromboprophylaxis 
Background
Previous studies have suggested that prediction models for mortality should be adjusted for additional risk factors beyond the Acute Physiology and Chronic Health Evaluation (APACHE) score. Our objective was to identify risk factors independent of APACHE II score and construct a prediction model to improve the predictive accuracy for hospital and intensive care unit (ICU) mortality.
Methods
We used data from a multicenter randomized controlled trial (PROTECT, Prophylaxis for Thromboembolism in Critical Care Trial) to build a new prediction model for hospital and ICU mortality. Our primary outcome was all-cause 60-day hospital mortality, and the secondary outcome was all-cause 60-day ICU mortality.
Results
We included 3746 critically ill non-trauma medical–surgical patients receiving heparin thromboprophylaxis (43.3 % female) in this study. The new model predicting 60-day hospital mortality incorporated APACHE II score (main effect: hazard ratio (HR) = 0.97 per point increase), body mass index (BMI) (main effect: HR = 0.92 per point increase), medical versus surgical admission (HR = 1.67), use of inotropes or vasopressors (HR = 1.34), acetylsalicylic acid or clopidogrel (HR = 1.27), and the interaction term between APACHE II score and BMI (HR = 1.002 per point increase). This model fit the data well and was well calibrated and internally validated. However, the discriminative ability of the prediction model was unsatisfactory (C index < 0.65). Sensitivity analyses supported the robustness of these findings. Similar results were observed in the new prediction model for 60-day ICU mortality, which included APACHE II score, BMI, medical admission and invasive mechanical ventilation.
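The discriminative ability quoted above is the concordance (C) index, which for a binary outcome is equivalent to the area under the ROC curve. A minimal sketch of how it can be computed (the risk scores and outcomes below are illustrative, not study data):

```python
import itertools

def c_index(scores, events):
    """Concordance: fraction of comparable pairs (one event, one non-event)
    in which the subject with the event received the higher risk score;
    tied scores count as half-concordant."""
    concordant = ties = comparable = 0
    for (s_i, e_i), (s_j, e_j) in itertools.combinations(zip(scores, events), 2):
        if e_i == e_j:
            continue  # same outcome: pair is not comparable
        comparable += 1
        event_score = s_i if e_i else s_j
        other_score = s_j if e_i else s_i
        if event_score > other_score:
            concordant += 1
        elif event_score == other_score:
            ties += 1
    return (concordant + 0.5 * ties) / comparable

scores = [0.9, 0.8, 0.3, 0.2]  # predicted risks
events = [1, 0, 1, 0]          # observed deaths
print(c_index(scores, events))  # → 0.75
```

A C index of 0.5 is chance-level discrimination and 1.0 is perfect; values below 0.65, as reported above, are generally considered poor for clinical prediction.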
Conclusion
Compared with the APACHE II score alone, the new prediction model requires more data collection and is more complex, but does not substantially improve discriminative ability.
Trial registration: ClinicalTrials.gov Identifier: NCT00182143
doi:10.1186/s13613-016-0116-x
PMCID: PMC4769241  PMID: 26921148
Prediction model; Critical care; APACHE; Intensive care unit; Mortality
4.  The determinants of home and nursing home death: a systematic review and meta-analysis 
BMC Palliative Care  2016;15:8.
Background
Most Canadians die in hospital, and yet, many express a preference to die at home. Place of death is the result of the interaction among sociodemographic, illness- and healthcare-related factors. Although home death is sometimes considered a potential indicator of end-of-life/palliative care quality, some determinants of place of death are more modifiable than others. The objective of this systematic review was to evaluate the determinants of home and nursing home death in adult patients diagnosed with an advanced, life-limiting illness.
Methods
A systematic literature search was performed for studies in English published from January 1, 2004 to September 24, 2013 that evaluated the determinants of home or nursing home death compared to hospital death in adult patients with an advanced, life-limiting condition. The adjusted odds ratios, relative risks, and 95 % confidence intervals of each determinant were extracted from the studies. Meta-analyses were performed if appropriate. The quality of individual studies was assessed using the Newcastle-Ottawa scale and the body of evidence was assessed according to the GRADE Working Group criteria.
Results
Of the 5,900 citations identified, 26 retrospective cohort studies were eligible. The risk of bias in the studies identified was considered low. Factors associated with an increased likelihood of home versus hospital death included multidisciplinary home palliative care, preference for home death, cancer as opposed to other diagnoses, early referral to palliative care, not living alone, having a caregiver, and the caregiver’s coping skills.
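The meta-analyses mentioned in the methods pool the adjusted odds ratios extracted from individual studies. A minimal sketch of fixed-effect inverse-variance pooling on the log odds ratio scale, with standard errors recovered from the reported 95 % confidence intervals (the three studies' numbers below are hypothetical, not values from this review):

```python
import math

def pool_fixed_effect(odds_ratios, ci_lowers, ci_uppers):
    """Inverse-variance fixed-effect pooling of log odds ratios.
    Standard errors are recovered from the 95% CIs (log scale, z = 1.96)."""
    logs = [math.log(o) for o in odds_ratios]
    ses = [(math.log(u) - math.log(l)) / (2 * 1.96)
           for l, u in zip(ci_lowers, ci_uppers)]
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * x for w, x in zip(weights, logs)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))

# Hypothetical adjusted ORs (with 95% CIs) from three studies of one determinant.
or_, lo, hi = pool_fixed_effect([2.1, 1.6, 2.8], [1.3, 1.0, 1.5], [3.4, 2.6, 5.2])
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

Narrower confidence intervals translate into larger weights, so precise studies dominate the pooled estimate; a random-effects variant would add a between-study variance term.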
Conclusions
Knowledge about the determinants of place of death can be used to inform care planning between healthcare providers, patients and family members regarding the feasibility of dying in the preferred location and may help explain the incongruence between preferred and actual place of death.
Modifiable factors, such as early referral to palliative care and the presence of a multidisciplinary home palliative care team, were identified and may be amenable to interventions that improve the likelihood of a patient dying in the preferred location. Place of death may not be a very good indicator of the quality of end-of-life/palliative care, since it is determined by multiple factors and is therefore dependent on individual circumstances.
Electronic supplementary material
The online version of this article (doi:10.1186/s12904-016-0077-8) contains supplementary material, which is available to authorized users.
doi:10.1186/s12904-016-0077-8
PMCID: PMC4721064  PMID: 26791258
Determinants of place of death; Palliative care; Preference for place of death; Determinants of home death; Determinants of nursing home death
5.  Use of Viremia to Evaluate the Baseline Case Fatality Ratio of Ebola Virus Disease and Inform Treatment Studies: A Retrospective Cohort Study 
PLoS Medicine  2015;12(12):e1001908.
Background
The case fatality ratio (CFR) of Ebola virus disease (EVD) can vary over time and space for reasons that are not fully understood. This makes it difficult to define the baseline CFRs needed to evaluate treatments in the absence of randomized controls. Here, we investigate whether viremia in EVD patients may be used to evaluate baseline EVD CFRs.
Methods and Findings
We analyzed the laboratory and epidemiological records of patients with EVD confirmed by reverse transcription PCR hospitalized in the Conakry area, Guinea, between 1 March 2014 and 28 February 2015. We used viremia and other variables to model the CFR. Data for 699 EVD patients were analyzed. In the week following symptom onset, mean viremia remained stable, and the CFR increased with viremia, V, from 21% (95% CI 16%–27%) for low viremia (V < 10^4.4 copies/ml) to 53% (95% CI 44%–61%) for intermediate viremia (10^4.4 ≤ V < 10^5.2 copies/ml) and 81% (95% CI 75%–87%) for high viremia (V ≥ 10^5.2 copies/ml). Compared to adults (15–44 y old [y.o.]), the CFR was larger in young children (0–4 y.o.) (odds ratio [OR]: 2.44; 95% CI 1.02–5.86) and older adults (≥45 y.o.) (OR: 2.84; 95% CI 1.81–4.46) but lower in children (5–14 y.o.) (OR: 0.46; 95% CI 0.24–0.86). An order of magnitude increase in mean viremia in cases after July 2014 compared to those before coincided with a 14% increase in the CFR. Our findings come from a large hospital-based study in Conakry and may not be generalizable to settings with different case profiles, such as individuals who never sought care.
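The viremia stratification described above can be sketched as follows; the stratum thresholds (10^4.4 and 10^5.2 copies/ml) come from the abstract, but the simulated cohort and the dose-response curve relating viremia to death are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 699  # cohort size from the abstract; outcomes below are synthetic

# Simulated log10 viral loads and an illustrative rising risk of death.
log10_viremia = rng.uniform(3.0, 7.0, n)
p_death = 1 / (1 + np.exp(-1.5 * (log10_viremia - 4.9)))
died = rng.binomial(1, p_death)

def stratum(log10_v):
    """Assign the abstract's strata: low < 10^4.4, high >= 10^5.2 copies/ml."""
    if log10_v < 4.4:
        return "low"
    if log10_v < 5.2:
        return "intermediate"
    return "high"

strata = np.array([stratum(v) for v in log10_viremia])
cfr = {s: died[strata == s].mean() for s in ("low", "intermediate", "high")}
for s, r in cfr.items():
    print(f"{s}: CFR = {r:.0%}")
```

Baseline CFRs computed per stratum in this way are what allow a nonrandomized treatment study to compare its observed mortality against a viremia-matched expectation rather than a single overall historical rate.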
Conclusions
Viremia in EVD patients was a strong predictor of death that partly explained variations in CFR in the study population. This study provides baseline CFRs by viremia group, which allow appropriate adjustment when estimating efficacy in treatment studies. In randomized controlled trials, stratifying analysis on viremia groups could reduce sample size requirements by 25%. We hypothesize that monitoring the viremia of hospitalized patients may inform the ability of surveillance systems to detect EVD patients from the different severity strata.
In a retrospective cohort study, Simon Cauchemez and colleagues find that viral load can predict case fatality ratios among patients with Ebola virus disease.
Editors' Summary
Background
During the current outbreak of Ebola virus disease (EVD) in West Africa, which started in December 2013, there have been more than 28,000 confirmed, probable, and suspected cases of EVD and more than 11,000 deaths from the disease. Ebola virus is transmitted to people from wild animals and spreads in human populations through direct contact with the bodily fluids (including blood, saliva, and urine) or organs of infected people or through contact with bedding and other materials contaminated with bodily fluids. The symptoms of EVD, which start 2–21 days after infection, include fever, headache, vomiting, diarrhea, and internal and external bleeding. Infected individuals are not infectious until they develop symptoms but remain infectious as long as their bodily fluids contain virus. There is no proven treatment or vaccine for EVD, although several treatments are now being assessed in people following promising laboratory studies. Supportive care—given under strict isolation conditions to prevent the spread of the disease to other patients or to healthcare workers—improves survival.
Why Was This Study Done?
Ideally, the efficacy of a potential treatment for any disease is assessed in a randomized controlled trial, a study that compares outcomes among people chosen at random to receive the treatment with outcomes among people given a placebo (dummy treatment). However, because EVD is frequently fatal, randomized controlled trials of potential treatments are considered unethical. Instead, studies evaluating treatments for EVD usually compare the case fatality ratio (CFR; the number of deaths caused by a disease divided by the number of cases of that disease; a CFR of 100% indicates that everyone who develops the disease dies) among treated patients with a baseline CFR estimated from historical data. But the CFR of EVD varies markedly over time and space for poorly understood reasons (for example, changes in patient care or variations in the detection of people with disease of different severity might change the CFR). Thus, the CFR in the treatment group could differ from the baseline CFR for reasons that are independent of the treatment. To find a way around this problem, in this retrospective cohort study, the researchers investigate whether there is a relationship between viremia (the amount of virus in the blood) and the CFR among patients with EVD.
What Did the Researchers Do and Find?
The researchers used laboratory and epidemiological data (for example, patient age and date of symptom onset and death; epidemiology is the study of disease patterns in populations) to investigate the relationship between viremia and CFR among 699 patients with confirmed EVD hospitalized in the Conakry area of Guinea between March 2014 and February 2015. In the week following symptom onset, mean (average) viremia remained stable, and the CFR among the patients increased with the level of viremia. Thus, the CFRs for patients with low, intermediate, and high viremia (defined by the number of virus copies per milliliter of blood) were 21%, 53%, and 81%, respectively. Compared to adults aged 15–44 years, young children (aged less than five years) and older adults had a higher CFR, but children aged 5–14 years had a lower CFR. Notably, the CFR in the study population was 14% higher after July 2014 than in the months of March–July 2014, an increase that coincided with a ten-fold increase in the average level of viremia in the population.
What Do These Findings Mean?
These findings suggest that viremia is a strong predictor of death that can partly explain variations in the CFR of EVD. Because these findings are based on data collected from hospitalized patients, they may not be generalizable to other settings. Importantly, however, these findings provide estimates of CFR by viremia group that can now be used to adjust risk when undertaking clinical evaluations of EVD-specific treatments. That is, by allowing for differing levels of viremia, it will be possible to assess the efficacy of treatments for EVD more accurately in nonrandomized clinical trials. Moreover, the researchers calculate that stratification of patients by viremia group could reduce the sample size needed in any randomized trials that are undertaken (for example, comparisons of two potential treatments) by 25%. Finally, the researchers suggest that monitoring viremia among patients hospitalized for EVD might provide information about the ability of different surveillance systems to detect patients with different levels of disease severity (probability of death).
Additional Information
This list of resources contains links that can be accessed when viewing the PDF on a device or via the online version of the article at http://dx.doi.org/10.1371/journal.pmed.1001908.
The World Health Organization (WHO) provides information about EVD, information about potential EVD therapies, and regular updates on the current EVD epidemic; a summary of the discussion of a WHO Ethics Working Group Meeting on the ethical issues related to trials of EVD treatments is available; the WHO website also provides information about efforts to control Ebola in the field and personal stories from people who have survived EVD
The UK National Health Service Choices website provides detailed information on EVD
The US Centers for Disease Control and Prevention also provides information about EVD (http://www.cdc.gov/vhf/ebola/EVD)
doi:10.1371/journal.pmed.1001908
PMCID: PMC4666644  PMID: 26625118
6.  Feasibility, safety, clinical, and laboratory effects of convalescent plasma therapy for patients with Middle East respiratory syndrome coronavirus infection: a study protocol 
SpringerPlus  2015;4:709.
As of September 30, 2015, a total of 1589 laboratory-confirmed cases of infection with the Middle East respiratory syndrome coronavirus (MERS-CoV) have been reported to the World Health Organization (WHO). At present there is no effective specific therapy against MERS-CoV. The use of convalescent plasma (CP) has been suggested as a potential therapy based on existing evidence from other viral infections. We aim to study the feasibility of CP therapy, as well as its safety and its clinical and laboratory effects, in critically ill patients with MERS-CoV infection. We will also examine the pharmacokinetics of the MERS-CoV antibody response and viral load over the course of MERS-CoV infection. This study will inform a future randomized controlled trial that will examine the efficacy of CP therapy for MERS-CoV infection.
In the CP collection phase, potential donors will be tested by the enzyme-linked immunosorbent assay (ELISA) and the indirect fluorescent antibody (IFA) techniques for the presence of anti-MERS-CoV antibodies. Subjects with an anti-MERS-CoV IFA titer of ≥1:160 and no clinical or laboratory evidence of MERS-CoV infection will be screened for eligibility for plasma donation according to standard donation criteria. In the CP therapy phase, 20 consecutive critically ill patients admitted to an intensive care unit with laboratory-confirmed MERS-CoV infection will be enrolled, and each will receive 2 units of CP. After enrollment, patients will be followed for clinical and laboratory outcomes, including anti-MERS-CoV antibodies and viral load.
This protocol was developed collaboratively by King Abdullah International Medical Research Center (KAIMRC), the Gulf Cooperation Council (GCC) Infection Control Center Group, and the World Health Organization—International Severe Acute Respiratory and Emerging Infection Consortium (ISARIC-WHO) MERS-CoV Working Group. It was approved in June 2014 by the Ministry of National Guard Health Affairs Institutional Review Board (IRB). A data safety monitoring board (DSMB) was established. The study is registered at http://www.clinicaltrials.gov (NCT02190799).
doi:10.1186/s40064-015-1490-9
PMCID: PMC4653124  PMID: 26618098
Middle east respiratory syndrome coronavirus; MERS-CoV; Viral pneumonia; Intensive care; Convalescent plasma; Serology; Genome; Neutralizing antibodies
7.  Prospective cohort study protocol to describe the transfer of patients from intensive care units to hospital wards 
BMJ Open  2015;5(7):e007913.
Introduction
The transfer of patient care between the intensive care unit (ICU) and the hospital ward is associated with increased risk of medical error and adverse events. This study will describe patient transfer from ICU to hospital ward by documenting (1) patient, family and provider experiences related to ICU transfer, (2) communication between stakeholders involved in ICU transfer, (3) adverse events that follow ICU transfer and (4) opportunities to improve ICU to hospital ward transfer.
Methods
This is a mixed methods prospective observational study of ICU to hospital ward transfer practices in 10 ICUs across Canada. We will recruit 50 patients at each site (n=500) who are transferred from ICU to hospital ward, and distribute surveys to enrolled patients, family members, and healthcare providers (ICU and ward physicians and nurses) after patient transfer. A random sample of 6 consenting study participants (patients, family members, healthcare providers) from each study site (n=60) will be offered an opportunity to participate in interviews to further describe stakeholders’ experience with ICU to hospital ward transfer. We will abstract information from patient health records to identify clinical data and use of transfer tools, and identify adverse events that are related to the transfer.
Ethics and Dissemination
Research ethics board approval has been obtained at the coordinating study centre (UofC REB13-0021) and 5 study sites (UofA Pro00050646; UBC-PHC H14-01667; Sunnybrook 336-2014; QCH 14-07; Sherbrooke 14-172). Dissemination of the findings will provide a comprehensive description of transfer from ICU to hospital ward in Canada including the uptake of validated or local transfer tools, a conceptual framework of the experiences and needs of stakeholders in the ICU transfer process, a summary of adverse events experienced by patients after transfer from ICU to hospital ward, and opportunities to guide quality improvement efforts.
doi:10.1136/bmjopen-2015-007913
PMCID: PMC4499701  PMID: 26155820
8.  Productivity loss and indirect costs associated with cardiovascular events and related clinical procedures 
Background
The high acute costs of cardiovascular disease and acute cardiovascular events are well established, particularly in terms of direct medical costs. The costs associated with lost work productivity have been described in a broad sense, but little is known about workplace absenteeism or short-term disability costs among patients at high cardiovascular risk. The objective of this study was to quantify workplace absenteeism (WA) and short-term disability (STD) hours and costs associated with cardiovascular events and related clinical procedures (CVERP) in United States employees with high cardiovascular risk.
Methods
Medical, WA and/or STD data from the Truven Health MarketScan® Research Databases were used to select full-time employees aged 18–64 with hyperlipidemia during 2002–2011. Two cohorts (with and without CVERP) were created and screened for medical, drug, WA, and STD eligibility. The CVERP cohort was matched with a non-CVERP cohort using propensity score matching. Work loss hours and indirect costs were calculated for patients with and without CVERP and by CVERP type. Wages were based on the 2013 age-, gender-, and geographic region-adjusted wage rate from the United States Bureau of Labor Statistics.
Results
A total of 5,808 WA-eligible, 21,006 STD-eligible, and 3,362 combined WA and STD eligible patients with CVERP were well matched to patients without CVERP, creating three cohorts of patients with CVERP and three cohorts of patients without CVERP. Demographics were similar across cohorts (mean age 52.2-53.1 years, male 81.3-86.8 %). During the first month of follow-up, patients with CVERP had more WA/STD-related hours lost compared with patients without CVERP (WA-eligible: 23.4 more hours, STD-eligible: 51.7 more hours, WA and STD-eligible: 56.3 more hours) (p < 0.001). Corresponding costs were $683, $895, and $1,119 higher, respectively (p < 0.001). Differences narrowed with longer follow-up. In the first month and year of follow-up, patients with coronary artery bypass graft experienced the highest WA/STD-related hours lost and costs compared with patients with other CVERP.
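The indirect-cost figures above are, in essence, hours lost multiplied by an adjusted hourly wage. A minimal sketch of that arithmetic (the wage below is a hypothetical rate chosen to match the reported WA figure, not a value taken from the study):

```python
def indirect_cost(hours_lost, hourly_wage):
    """Indirect cost = work hours lost x adjusted hourly wage."""
    return hours_lost * hourly_wage

# Hypothetical wage close to the rate implied by the abstract (~$29/h):
# 23.4 excess WA hours in the first month at $29.20/h.
print(round(indirect_cost(23.4, 29.2)))  # → 683
```

In the study itself the wage rate was adjusted for age, gender, and geographic region using Bureau of Labor Statistics data, so each patient's hours would be costed at an individually matched rate rather than a single figure.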
Conclusions
CVERP were associated with substantial work loss and indirect costs. Prevention or reduction of CVERP could result in WA and STD-related cost savings for employers.
Electronic supplementary material
The online version of this article (doi:10.1186/s12913-015-0925-x) contains supplementary material, which is available to authorized users.
doi:10.1186/s12913-015-0925-x
PMCID: PMC4478719  PMID: 26104784
Indirect cost; Cardiovascular diseases; Absenteeism; Short-term disability
9.  Medical mentorship in Afghanistan: How are military mentors perceived by Afghan health care providers? 
Canadian Journal of Surgery  2015;58(3 Suppl 3):S98-S103.
Background
Previous work has been published on the experiences of high-resource setting physicians mentoring in low-resource environments. However, not much is known about what mentees think about their First World mentors. We had the opportunity to explore this question in an Afghan Army Hospital, and we believe this is the first time this has been studied.
Methods
We conducted a pilot cross-sectional survey of Afghan health care providers evaluating their Canadian mentors. We created a culturally appropriate 19-question survey with 5-point Likert scores that was then translated into the local Afghan language. The survey questions were based on domains of Royal College of Physicians and Surgeons of Canada’s CanMEDS criteria.
Results
The survey response rate was 90% (36 of 40). The respondents included 13 physicians, 21 nurses and 2 other health care professionals. Overall, most of the Afghan health care workers felt that working with mentors from high-resource settings was a positive experience (median 4.0, interquartile range [IQR] 4–4), according to CanMEDS domains. However, respondents indicated that the mentors were reliant on medical technology for diagnosis (median 5.0, IQR 4–5) and failed to consider the limited resources available in Afghanistan.
Conclusion
The overall impression of Afghan health care providers was that mentors are appropriate and helpful. CanMEDS can be used as a framework to evaluate mentors in low-resource conflict environments.
doi:10.1503/cjs.012214
PMCID: PMC4467500  PMID: 26100785
10.  A ΔdinB mutation that sensitizes Escherichia coli to the lethal effects of UV and X-radiation 
Mutation research  2014;0:19-27.
The DinB (PolIV) protein of Escherichia coli participates in several cellular functions. We investigated a dinB mutation, Δ(dinB-yafN)883(::kan) [referred to as ΔdinB883], which strongly sensitized E. coli cells to both UV- and X-radiation killing. Earlier reports indicated that dinB mutations had no obvious effect on UV radiation sensitivity, which we confirmed by showing that normal UV radiation sensitivity is conferred by the ΔdinB749 allele. Compared to a wild-type strain, the ΔdinB883 mutant was most sensitive (160-fold) in early to mid-logarithmic growth phase and much less sensitive (twofold) in late log or stationary phases, thus showing a growth-phase dependence for UV radiation sensitivity. This sensitizing effect of ΔdinB883 appears to be completely dependent upon the presence of UmuDC protein, since the ΔdinB883 mutation did not sensitize the ΔumuDC strain to UV radiation killing throughout log phase and early stationary phase growth. The DNA damage checkpoint activity of UmuDC was clearly affected by ΔdinB883, as shown by testing a umuC104 ΔdinB883 double mutant. The sensitivities of the ΔumuDC strain and the ΔdinB883 ΔumuDC double-mutant strain were significantly greater than that of the ΔdinB883 strain, suggesting that the ΔdinB883 allele only partially suppresses UmuDC activity. The ΔdinB883 mutation partially sensitized (fivefold) uvrA and uvrB strains to UV radiation but did not sensitize a ΔrecA strain. A comparison of the DNA sequence of the ΔdinB883 allele with the sequences of the Δ(dinB-yafN)882(::kan) and ΔdinB749 alleles, which do not sensitize cells to UV radiation, revealed that ΔdinB883 is likely a “gain-of-function” mutation. The ΔdinB883 allele encodes the first 54 amino acids of wild-type DinB followed by 29 predicted residues resulting from the continuation of the dinB reading frame into an adjacent insertion fragment. The resulting polypeptide is proposed to interfere, directly or indirectly, with UmuDC function(s) involved in protecting cells against the lethal effects of radiation.
doi:10.1016/j.mrfmmm.2014.03.003
PMCID: PMC4172556  PMID: 24657250
Escherichia coli; DinB; UmuDC; UV radiation sensitivity; TLS; checkpoint
11.  Bacteremia Antibiotic Length Actually Needed for Clinical Effectiveness (BALANCE): study protocol for a pilot randomized controlled trial 
Trials  2015;16:173.
Background
Bacteremia is a leading cause of mortality and morbidity in critically ill adults. No previous randomized controlled trials have directly compared shorter versus longer durations of antimicrobial treatment in these patients.
Methods/Design
This is a multicenter pilot randomized controlled trial in critically ill patients with bacteremia. Eligible patients will be adults with a positive blood culture with pathogenic bacteria identified while in the intensive care unit. Eligible, consented patients will be randomized to either 7 days or 14 days of adequate antimicrobial treatment for the causative pathogen(s) detected on blood cultures. The diversity of pathogens and treatment regimens precludes blinding of patients and clinicians, but allocation concealment will be extended to day 7 and outcome adjudicators will be blinded. The primary outcome for the main trial will be 90-day mortality. The primary outcome for the pilot trial is feasibility, defined by (i) a rate of recruitment exceeding 1 patient per site per month and (ii) adherence to the treatment duration protocol of ≥ 90%. Secondary outcomes include intensive care unit, hospital and 90-day mortality rates; relapse rates of bacteremia; antibiotic-related side effects and adverse events; rates of Clostridium difficile infection; rates of secondary infection or colonization with antimicrobial-resistant organisms; ICU and hospital lengths of stay; duration of mechanical ventilation and vasopressor use in the intensive care unit; and procalcitonin levels on the day of randomization and on days 7, 10 and 14 after the index blood culture.
Discussion
The BALANCE pilot trial will inform the design and execution of the subsequent BALANCE main trial, which will evaluate shorter versus longer duration treatment for bacteremia in critically ill patients, and thereby provide an evidence base for treatment duration decisions for these infections.
Trial registration
The Pilot Trial was registered on 26 September 2014. Trial registration number: NCT02261506.
Electronic supplementary material
The online version of this article (doi:10.1186/s13063-015-0688-z) contains supplementary material, which is available to authorized users.
doi:10.1186/s13063-015-0688-z
PMCID: PMC4407544  PMID: 25903783
intensive care; critically ill; bacteremia; bloodstream infection; antimicrobial; treatment duration; mortality; antimicrobial stewardship
13.  Critical care capacity in Canada: results of a national cross-sectional study 
Critical Care  2015;19(1):133.
Introduction
Intensive Care Units (ICUs) provide life-supporting treatment; however, resources are limited, so demand may exceed supply in the event of pandemics, environmental disasters, or in the context of an aging population. We hypothesized that comprehensive national data on ICU resources would permit a better understanding of regional differences in system capacity.
Methods
After the 2009–2010 Influenza A (H1N1) pandemic, the Canadian Critical Care Trials Group surveyed all acute care hospitals in Canada to assess ICU capacity. Using a structured survey tool administered to physicians, respiratory therapists and nurses, we determined the number of ICU beds, ventilators, and the ability to provide specialized support for respiratory failure.
Results
We identified 286 hospitals with 3170 ICU beds and 4982 mechanical ventilators for critically ill patients. Twenty-two hospitals had an ICU that routinely cared for children; 15 had dedicated pediatric ICUs. Per 100,000 population, there was substantial variability in provincial capacity, with a mean of 0.9 hospitals with ICUs (provincial range 0.4–2.8), 10 ICU beds capable of providing mechanical ventilation (provincial range 6–19), and 15 invasive mechanical ventilators (provincial range 10–24). There was only moderate correlation between ventilation capacity and population size (coefficient of determination (R²) = 0.771).
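The per-100,000 rates and the coefficient of determination reported above can be computed as follows. This is an illustrative sketch; the figures in the test are hypothetical, not the study's provincial data:

```python
from statistics import mean

def per_100k(count, population):
    """Rate per 100,000 population (e.g., ventilators per 100,000)."""
    return 1e5 * count / population

def r_squared(x, y):
    """Coefficient of determination (R²) of the least-squares line y ~ x,
    e.g., provincial ventilation capacity (y) against population (x)."""
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    return sxy ** 2 / (sxx * syy)
```

An R² near 1 would indicate capacity scaling closely with population; the study's 0.771 leaves roughly a quarter of the variation unexplained by population size alone.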
Conclusion
ICU resources vary widely across Canadian provinces, and during times of increased demand, may result in geographic differences in the ability to care for critically ill patients. These results highlight the need to evolve inter-jurisdictional resource sharing during periods of substantial increase in demand, and provide background data for the development of appropriate critical care capacity benchmarks.
Electronic supplementary material
The online version of this article (doi:10.1186/s13054-015-0852-6) contains supplementary material, which is available to authorized users.
doi:10.1186/s13054-015-0852-6
PMCID: PMC4426537  PMID: 25888116
14.  Patient transitions relevant to individuals requiring ongoing ventilatory assistance: A Delphi study 
Distinguishing patient cohorts that require ongoing mechanical ventilation has been made problematic by the use of various terms, which, in turn, leads to difficulty in comparative studies and uncertainty in the timing of clinical decision making. Consensus conferences have achieved little in terms of standardizing transition terms and information that can be broadly applied to include the many clinical specialities reflecting the continuum of care. Accordingly, this study aimed to identify the defining features of key transition points for individuals requiring ongoing ventilator assistance based on expert-derived consensus.
BACKGROUND:
Various terms, including ‘prolonged mechanical ventilation’ (PMV) and ‘long-term mechanical ventilation’ (LTMV), are used interchangeably to distinguish patient cohorts requiring ventilation, making comparisons and timing of clinical decision making problematic.
OBJECTIVE:
To develop expert, consensus-based criteria associated with care transitions to distinguish cohorts of ventilated patients.
METHODS:
A four-round (R), web-based Delphi study was performed, with consensus defined as >70%. In R1, participants listed, using free text, criteria they perceived should and should not define seven transitions. Transitions comprised: T1 – acute ventilation to PMV; T2 – PMV to LTMV; T3 – PMV or LTMV to acute ventilation (reverse transition); T4 – institutional to community care; T5 – no ventilation to requiring LTMV; T6 – pediatric to adult LTMV; and T7 – active treatment to end-of-life care. Subsequent Rs sought consensus.
RESULTS:
Experts from intensive care (n=14), long-term care (n=14) and home ventilation (n=10), representing a variety of professional groups and geographical areas, completed all Rs. Consensus was reached on 14 of 20 statements defining T1 and 21 of 25 for T2. ‘Physiological stability’ had the highest consensus (97% and 100%, respectively). ‘Duration of ventilation’ did not achieve consensus. Consensus was achieved on 13 of 18 statements for T3 and 23 of 25 statements for T4. T4 statements reaching 100% consensus included: ‘informed choice’, ‘patient stability’, ‘informal caregiver support’, ‘caregiver knowledge’, ‘environment modification’, ‘supportive network’ and ‘access to interprofessional care’. Consensus was achieved for 15 of 17 T5, 16 of 20 T6 and 21 of 24 T7 items.
CONCLUSION:
Criteria to consider during key care transitions for ventilator-assisted individuals were identified. Such information will assist in furthering the consistency of clinical care plans, research trials and health care resource allocation.
PMCID: PMC4198230  PMID: 24791254
Home ventilation; Long-term mechanical ventilation; Mechanical ventilation; Pediatric ventilation; Prolonged mechanical ventilation; Transition
15.  Being Ready to Treat Ebola Virus Disease Patients 
As the outbreak of Ebola virus disease (EVD) in West Africa continues, clinical preparedness is needed in countries at risk for EVD (e.g., the United States), as are more fully equipped and supported clinical teams in the African countries with epidemic spread of EVD. Clinical staff must approach the patient with a deliberate focus on providing effective care while assuring personal safety. To do this, both individual health care providers and health systems must improve EVD care. Although formal guidance toward these goals exists from the World Health Organization, Médecins Sans Frontières, the Centers for Disease Control and Prevention, and other groups, some of the most critical lessons come from personal experience. In this narrative, clinicians deployed by the World Health Organization into a wide range of clinical settings in West Africa distill key, practical considerations for working safely and effectively with patients with EVD.
doi:10.4269/ajtmh.14-0746
PMCID: PMC4347319  PMID: 25510724
16.  Economic evaluation of the prophylaxis for thromboembolism in critical care trial (E-PROTECT): study protocol for a randomized controlled trial 
Trials  2014;15:502.
Background
Venous thromboembolism (VTE) is a common complication of critical illness with important clinical consequences. The Prophylaxis for ThromboEmbolism in Critical Care Trial (PROTECT) is a multicenter, blinded, randomized controlled trial comparing the effectiveness of the two most common pharmacoprevention strategies, unfractionated heparin (UFH) and low molecular weight heparin (LMWH) dalteparin, in medical-surgical patients in the intensive care unit (ICU). E-PROTECT is a prospective and concurrent economic evaluation of the PROTECT trial.
Methods/Design
The primary objective of E-PROTECT is to identify and quantify the total (direct and indirect, variable and fixed) costs associated with the management of critically ill patients participating in the PROTECT trial, and to combine costs and outcome results to determine the incremental cost-effectiveness of LMWH versus UFH, from the acute healthcare system perspective, over a data-rich time horizon of ICU admission and hospital admission. We derive baseline characteristics and probabilities of in-ICU and in-hospital events from all enrolled patients. Total costs are derived from centers, proportional to the numbers of patients enrolled in each country. Direct costs include medication, physician and other personnel costs, diagnostic radiology and laboratory testing, operative and non-operative procedures, and costs associated with bleeding, transfusions and treatment-related complications. Indirect costs include ICU and hospital ward overhead costs. Outcomes are the ratio of incremental costs per incremental effects of LMWH versus UFH during hospitalization; incremental cost to prevent a thrombosis at any site (primary outcome); incremental cost to prevent a pulmonary embolism, deep vein thrombosis, major bleeding event or episode of heparin-induced thrombocytopenia (secondary outcomes); and incremental cost per life-year gained (tertiary outcome). Pre-specified subgroups and sensitivity analyses will be performed and confidence intervals for the estimates of incremental cost-effectiveness will be obtained using bootstrapping.
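The bootstrapped confidence intervals for incremental cost-effectiveness mentioned above can be sketched as a percentile bootstrap over patient-level (cost, effect) pairs. This is an illustrative sketch, not the trial's analysis code; the data in the usage note are hypothetical:

```python
import random
from statistics import mean

def bootstrap_icer_ci(arm_lmwh, arm_ufh, n_boot=2000, alpha=0.05, seed=1):
    """Percentile-bootstrap confidence interval for the incremental
    cost-effectiveness ratio (ICER) of LMWH vs UFH.  Each arm is a list of
    (cost, effect) tuples, one per patient; patients are resampled with
    replacement within their own arm so cost and effect stay paired."""
    rng = random.Random(seed)
    icers = []
    for _ in range(n_boot):
        lm = [rng.choice(arm_lmwh) for _ in arm_lmwh]
        uf = [rng.choice(arm_ufh) for _ in arm_ufh]
        d_cost = mean(c for c, _ in lm) - mean(c for c, _ in uf)
        d_eff = mean(e for _, e in lm) - mean(e for _, e in uf)
        if d_eff != 0:  # ICER is undefined when the effect difference is zero
            icers.append(d_cost / d_eff)
    icers.sort()
    lo_i = int((alpha / 2) * len(icers))
    hi_i = max(lo_i, int((1 - alpha / 2) * len(icers)) - 1)
    return icers[lo_i], icers[hi_i]
```

Resampling whole patients (rather than costs and effects separately) preserves the within-patient correlation between cost and outcome, which is why cost-effectiveness analyses bootstrap at the patient level.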
Discussion
This economic evaluation employs a prospective costing methodology concurrent with a randomized controlled blinded clinical trial, with a pre-specified analytic plan, outcome measures, subgroup and sensitivity analyses. This economic evaluation has received only peer-reviewed funding and funders will not play a role in the generation, analysis or decision to submit the manuscripts for publication.
Trial registration
Clinicaltrials.gov Identifier: NCT00182143. Date of registration: 10 September 2005.
Electronic supplementary material
The online version of this article (doi:10.1186/1745-6215-15-502) contains supplementary material, which is available to authorized users.
doi:10.1186/1745-6215-15-502
PMCID: PMC4413997  PMID: 25528663
Economic; Cost-effectiveness; Venous; Thromboembolism; PROTECT; Unfractionated; Heparin; Low molecular weight; Intensive; Critical
17.  Factors Affecting Family Satisfaction with Inpatient End-of-Life Care 
PLoS ONE  2014;9(11):e110860.
Background
Few data exist addressing satisfaction with end-of-life care among hospitalized patients, as these patients and their family members are systematically excluded from routine satisfaction surveys. It is imperative that we closely examine patient and institutional factors associated with quality end-of-life care and determine high-priority target areas for quality improvement.
Methods
Between September 1, 2010 and January 1, 2012, the Canadian Health Care Evaluation Project (CANHELP) Bereavement Questionnaire was mailed to the next-of-kin of recently deceased inpatients to identify factors associated with satisfaction with end-of-life care. The primary outcome was the global rating of satisfaction. Secondary outcomes included rates of actual versus preferred location of death, associations between demographic factors and global satisfaction, and identification of targets for quality improvement.
Results
Response rate was 33% among 275 valid addresses. Overall, 67.4% of respondents were very or completely satisfied with the overall quality of care their relative received. However, 71.4% of respondents who thought their relative did not die in their preferred location favoured an out-of-hospital location of death. A common location of death was the intensive care unit (45.7%); however, this was not the preferred location of death for 47.6% of such patients. Multivariate Poisson regression analysis showed respondents who believed their relative died in their preferred location were 1.7 times more likely to be satisfied with the end-of-life care that was provided (p = 0.001). Items identified as high-priority targets for improvement included: relationships with, and characteristics of health care professionals; illness management; communication; and end-of-life decision-making.
Interpretation
Nearly three-quarters of recently deceased inpatients would have preferred an out-of-hospital death. Intensive care units were a common, but not preferred, location of in-hospital deaths. Family satisfaction with end-of-life care was strongly associated with their relative dying in their preferred location. Improved communication regarding end-of-life care preferences should be a high-priority quality improvement target.
doi:10.1371/journal.pone.0110860
PMCID: PMC4234251  PMID: 25401710
18.  Duration of antibiotic therapy for critically ill patients with bloodstream infections: A retrospective cohort study 
BACKGROUND:
The optimal duration of antibiotic treatment for bloodstream infections is unknown and understudied.
METHODS:
A retrospective cohort study of critically ill patients with bloodstream infections diagnosed in a tertiary care hospital between March 1, 2010 and March 31, 2011 was undertaken. The impact of patient, pathogen and infectious syndrome characteristics on selection of shorter (≤10 days) or longer (>10 days) treatment duration, and on the number of antibiotic-free days, was examined. The time profile of clinical response was evaluated over the first 14 days of treatment. Relapse, secondary infection and mortality rates were compared between those receiving shorter or longer treatment.
RESULTS:
Among 100 critically ill patients with bloodstream infection, the median duration of antibiotic treatment was 11 days, but was highly variable (interquartile range 4.5 to 17 days). Predictors of longer treatment (fewer antibiotic-free days) included foci with established requirements for prolonged treatment, underlying respiratory tract focus, and infection with Staphylococcus aureus or Pseudomonas species. Predictors of shorter treatment (more antibiotic-free days) included vascular catheter source and bacteremia with coagulase-negative staphylococci. Temperature improvements plateaued after the first week; white blood cell counts, multiple organ dysfunction scores and vasopressor dependence continued to decline into the second week. Among 72 patients who survived to 10 days, clinical outcomes were similar between those receiving shorter and longer treatment.
CONCLUSION:
Antibiotic treatment durations for patients with bloodstream infection are highly variable and often prolonged. A randomized trial is needed to determine the duration of treatment that will maximize cure while minimizing adverse consequences of antibiotics.
PMCID: PMC3852449  PMID: 24421823
Antibacterials; Antibiotic stewardship; Bacteremia; Bacterial infections; Critical care
19.  SodiUm SeleniTe Administration IN Cardiac Surgery (SUSTAIN CSX-trial): study design of an international multicenter randomized double-blinded controlled trial of high-dose sodium selenite administration in high-risk cardiac surgical patients 
Trials  2014;15:339.
Background
Cardiac surgery has been shown to result in a significant decrease of the antioxidant selenium, which is associated with the development of multiorgan dysfunction and increased mortality. Thus, a large-scale study is needed to investigate the effect of perioperative selenium supplementation on the occurrence of postoperative organ dysfunction.
Methods/Design
We plan a prospective, randomized double-blind, multicenter controlled trial, which will be conducted in North and South America and in Europe. In this trial we will include 1,400 high-risk patients, who are most likely to benefit from selenium supplementation. This includes patients scheduled for non-emergent combined and/or complex procedures, or with a predicted operative mortality of ≥5% according to the EuroSCORE II. Eligible patients will be randomly assigned to either the treatment group (bolus infusion of 2,000 μg sodium selenite immediately prior to surgery, followed by an additional dosage of 2,000 μg at ICU admission, and a further daily supplementation of 1,000 μg up to 10 days or ICU discharge) or to the control group (placebo administration at the same time points).
The primary endpoint of this study is a composite of ‘persistent organ dysfunction’ (POD) and/or death within 30 days from surgery (POD + death). POD is defined as any need for life-sustaining therapies (mechanical ventilation, vasopressor therapy, mechanical circulatory support, continuous renal replacement therapy, or new intermittent hemodialysis) at any time within 30 days from surgery.
Discussion
The SUSTAIN-CSX™ study is a multicenter trial to investigate the effect of a perioperative high dosage sodium selenite supplementation in high-risk cardiac surgical patients.
Trial registration
This trial was registered at Clinicaltrials.gov (identifier: NCT02002247) on 28 November 2013.
Electronic supplementary material
The online version of this article (doi:10.1186/1745-6215-15-339) contains supplementary material, which is available to authorized users.
doi:10.1186/1745-6215-15-339
PMCID: PMC4247649  PMID: 25169040
Selenium; Inflammatory response; Oxidative stress; Antioxidant capacity; Myocardial ischemia/reperfusion; Postoperative organ failure
21.  Intracranial Pressure Monitoring in Severe Traumatic Brain Injury: Results from the American College of Surgeons Trauma Quality Improvement Program 
Journal of Neurotrauma  2013;30(20):1737-1746.
Abstract
Although existing guidelines support the utilization of intracranial pressure (ICP) monitoring in patients with traumatic brain injury (TBI), the evidence suggesting benefit is limited. To evaluate the impact on outcome, we determined the relationship between ICP monitoring and mortality in centers participating in the American College of Surgeons Trauma Quality Improvement Program (TQIP). Data on 10,628 adults with severe TBI were derived from 155 TQIP centers over 2009–2011. Random-intercept multilevel modeling was used to evaluate the association between ICP monitoring and mortality after adjusting for important confounders. We evaluated this relationship at the patient level and at the institutional level. Overall mortality (n=3769) was 35%. Only 1874 (17.6%) patients underwent ICP monitoring, with a mortality of 32%. The adjusted odds ratio (OR) for mortality was 0.44 [95% confidence interval (CI), 0.31–0.63], when comparing patients with ICP monitoring to those without. It is plausible that patients receiving ICP monitoring were selected because of an anticipated favorable outcome. To overcome this limitation, we stratified hospitals into quartiles based on ICP monitoring utilization. Hospitals with higher rates of ICP monitoring use were associated with lower mortality: The adjusted OR of death was 0.52 (95% CI, 0.35–0.78) in the quartile of hospitals with highest use, compared to the lowest. ICP monitoring utilization rates explained only 9.9% of variation in mortality across centers. Results were comparable irrespective of the method of case-mix adjustment. In this observational study, ICP monitoring utilization was associated with lower mortality. However, variability in ICP monitoring rates contributed only modestly to variability in institutional mortality rates. Identifying other institutional practices that impact on mortality is an important area for future research.
doi:10.1089/neu.2012.2802
PMCID: PMC3796332  PMID: 23731257
head injury; intracranial pressure; multilevel analysis; traumatic brain injury
22.  Early observational research and registries during the 2009–2010 influenza A pandemic 
Critical care medicine  2010;38(4 0):e120-e132.
As a critical care community, we have an obligation to provide not only clinical care but also the research that guides initial and subsequent clinical responses during a pandemic. There are many challenges to conducting such research. The first is speed of response. However, given the near inevitability of certain events, for example, viral respiratory illness such as the 2009 pandemic, geographically circumscribed natural disasters, or acts of terror, many study and trial designs should be preplanned and modified quickly when specific events occur. Template case report forms should be available for modification and web entry; centralized research ethics boards and funders should have the opportunity to preview and advise on such research beforehand; and national and international research groups should be prepared to work together on common studies and trials for common challenges. We describe the early international critical care research response to the influenza A 2009 (H1N1) pandemic, including specifics of observational study case report form, registry, and clinical trial design, cooperation of international critical care research organizations, and the early results of these collaborations.
doi:10.1097/CCM.0b013e3181d20c77
PMCID: PMC3705715  PMID: 20101176
critical care; intensive care; registry; H1N1; influenza; pandemic
23.  Medication utilization patterns among type 2 diabetes patients initiating Exenatide BID or insulin glargine: a retrospective database study 
Background
Type 2 diabetes is a common and costly illness, associated with significant morbidity and mortality. Despite this, there is relatively little information on the ‘real-world’ medication utilization patterns for patients with type 2 diabetes initiating exenatide BID or glargine. The objective of this study was to evaluate the ‘real-world’ medication utilization patterns in patients with type 2 diabetes treated with exenatide BID (exenatide) versus insulin glargine (glargine).
Methods
Adult patients (≥18 years of age) with type 2 diabetes who were new initiators of exenatide or glargine from October 1, 2006 through March 31, 2008 with continuous enrollment for the 12 months pre- and 18 months post-index period were selected from the MarketScan® Commercial and Medicare Databases. To control for selection bias, propensity score matching was used to complete a 1:1 match of glargine to exenatide patients. Key study outcomes (including the likelihood of overall treatment modification, discontinuation, switching, or intensification) were analyzed using survival analysis.
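The 1:1 propensity-score match described above can be sketched as a greedy nearest-neighbour pairing on precomputed scores. The caliper value and function are illustrative assumptions; the study's actual matching algorithm may differ:

```python
def greedy_match(treated, controls, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on the propensity score.
    `treated` and `controls` are dicts {patient_id: propensity_score}.
    Each treated patient is paired with the closest unused control whose
    score lies within `caliper`; returns (treated_id, control_id) pairs."""
    pairs = []
    available = dict(controls)
    # Match the hardest-to-match (highest-score) treated patients first.
    for t_id, t_ps in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_ps))
        if abs(available[c_id] - t_ps) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]  # each control is used at most once
    return pairs
```

Treated patients with no control inside the caliper are left unmatched, which is why matched cohorts (here, 3,774 pairs) are smaller than the original treatment groups.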
Results
A total of 9,197 exenatide-treated and 4,499 glargine-treated patients were selected. Propensity score matching resulted in 3,774 matched pairs with a mean age of 57 years and a mean Deyo Charlson Comorbidity Index score of 1.6; 54% of patients were male. The 18-month treatment intensification rates were 15.9% and 26.0% (p < 0.0001) and the discontinuation rates were 38.3% and 40.0% (p = 0.14) for exenatide and glargine, respectively. Conversely, 14.9% of exenatide-treated patients switched therapies, compared to 10.0% of glargine-treated patients (p < 0.0001). Overall, glargine-treated patients were more likely to modify their treatment [hazard ratio (HR) = 1.33, p < 0.0001], with a shorter mean time on treatment until modification (123 vs. 159 days, p < 0.0001). Compared to exenatide-treated patients, glargine-treated patients were more likely to discontinue (HR = 1.25, p < 0.0001) or intensify (HR = 1.72, p < 0.0001) but less likely to switch (HR = 0.71, p < 0.0001) the index therapy.
Conclusions
Patients treated for type 2 diabetes with exenatide BID or insulin glargine differ in their adherence to therapy. Exenatide-treated patients were less likely to discontinue or modify treatment but more likely to switch therapy compared to glargine-treated patients.
doi:10.1186/1472-6823-13-20
PMCID: PMC3750447  PMID: 23799930
24.  Acute kidney injury among critically ill patients with pandemic H1N1 influenza A in Canada: cohort study 
BMC Nephrology  2013;14:123.
Background
Canada’s pandemic H1N1 influenza A (pH1N1) outbreak led to a high burden of critical illness. Our objective was to describe the incidence of acute kidney injury (AKI) in these patients and risk factors for AKI, renal replacement therapy (RRT), and mortality.
Methods
From a prospective cohort of critically ill adults with confirmed or probable pH1N1 (16 April 2009–12 April 2010), we abstracted data on demographics, co-morbidities, acute physiology, AKI (defined by RIFLE criteria for Injury or Failure), treatments in the intensive care unit, and clinical outcomes. Univariable and multivariable logistic regression analyses were used to evaluate the associations between clinical characteristics and the outcomes of AKI, RRT, and hospital mortality.
Results
We included 562 patients with pH1N1-related critical illness (479 [85.2%] confirmed, 83 [14.8%] probable): mean age 48.0 years, 53.4% female, and 13.3% aboriginal. Common co-morbidities included obesity, diabetes, and chronic obstructive pulmonary disease. AKI occurred in 60.9%, with RIFLE categories of Injury (23.0%) and Failure (37.9%). Independent predictors of AKI included obesity (OR 2.94; 95% CI, 1.75-4.91), chronic kidney disease (OR 4.50; 95% CI, 1.46-13.82), APACHE II score (OR per 1-unit increase 1.06; 95% CI, 1.03-1.09), and PaO2/FiO2 ratio (OR per 10-unit increase 0.98; 95% CI, 0.95-1.00). Of patients with AKI, 24.9% (85/342) received RRT and 25.8% (85/329) died. Independent predictors of RRT were obesity (OR 2.25; 95% CI, 1.14-4.44), day 1 mechanical ventilation (OR 4.09; 95% CI, 1.21-13.84), APACHE II score (OR per 1-unit increase 1.07; 95% CI, 1.03-1.12), and day 1 creatinine (OR per 10 μmol/L increase, 1.06; 95% CI, 1.03-1.10). Development of AKI was not independently associated with hospital mortality.
Conclusion
The incidence of AKI and RRT utilization were high among Canadian patients with critical illness due to pH1N1.
doi:10.1186/1471-2369-14-123
PMCID: PMC3694036  PMID: 23763900
Acute kidney injury; Renal replacement therapy; Influenza; Critical illness; Mortality; Resource utilization
25.  Clinical and temporal patterns of severe pneumonia causing critical illness during Hajj 
BMC Infectious Diseases  2012;12:117.
Background
Pneumonia is a leading cause of hospitalization during Hajj, and susceptibility and transmission may be exacerbated by extreme spatial and temporal crowding. We describe the number and temporal onset, co-morbidities, and outcomes of severe pneumonia causing critical illness among pilgrims.
Method
A cohort study of all critically ill Hajj patients, of over 40 nationalities, admitted to 15 hospitals in 2 cities in 2009 and 2010. Demographic, clinical, and laboratory data, and variables necessary for calculation of the Acute Physiology and Chronic Health Evaluation IV scores were collected.
Results
There were 452 patients (64.6% male) who developed critical illness. Pneumonia was the primary cause of critical illness in 123 (27.2%) of all intensive care unit (ICU) admissions during Hajj. Pneumonia was community (Hajj)-acquired in 66.7%, aspiration-related in 25.2%, nosocomial in 3.3%, and tuberculous in 4.9%. Pneumonia occurred most commonly in the second week of Hajj: 95 cases (77.2%) occurred between days 5 and 15 of Hajj, corresponding to the period of most extreme pilgrim density. Mechanical ventilation was performed in 69.1%. Median duration of ICU stay was 4 (interquartile range [IQR] 1–8) days and duration of ventilation 4 (IQR 3–6) days. The commonest preexisting co-morbidities included smoking (22.8%), diabetes (32.5%), and COPD (17.1%). Short-term mortality (during the 3-week period of Hajj) was 19.5%.
Conclusion
Pneumonia is a major cause of critical illness during Hajj and occurs amidst substantial crowding and pilgrim density. Increased prevention efforts for at-risk pilgrims prior to Hajj and further attention to spatial and physical crowding during Hajj may attenuate this risk.
doi:10.1186/1471-2334-12-117
PMCID: PMC3458962  PMID: 22591189
Respiratory tract infection; Pneumonia; Hajj; Co-morbidities; APACHE IV
