As a critical care community, we have an obligation to provide not only clinical care but also the research that guides initial and subsequent clinical responses during a pandemic. Conducting such research poses many challenges, the first being speed of response. However, given the near inevitability of certain events, for example a viral respiratory illness such as the 2009 H1N1 pandemic, geographically circumscribed natural disasters, or acts of terror, many study and trial designs should be preplanned and then quickly modified when a specific event occurs. Template case report forms should be available for modification and web entry; centralized research ethics boards and funders should have the opportunity to preview and advise on such research beforehand; and national and international research groups should be prepared to work together on common studies and trials addressing common challenges. We describe the early international critical care research response to the 2009 influenza A (H1N1) pandemic, including the design of the observational-study case report form, registry, and clinical trials; the cooperation of international critical care research organizations; and the early results of these collaborations.
critical care; intensive care; registry; H1N1; influenza; pandemic
Increasing antimicrobial costs, reduced development of novel antimicrobials, and growing antimicrobial resistance necessitate judicious use of available agents. Antimicrobial stewardship programs (ASPs) may improve antimicrobial use in intensive care units (ICUs). Our objective was to determine whether the introduction of an ASP in an ICU altered the decision to treat cultures from sterile sites compared with nonsterile sites (which may represent colonization or contamination). We also sought to determine whether ASP education improved documentation of antimicrobial use, including an explicit statement of antimicrobial regimen, indication, duration, and de-escalation.
We retrospectively analyzed consecutive patients with positive bacterial cultures admitted to a 16-bed medical-surgical ICU over 2-month periods before and after ASP introduction (April through May 2008 and 2009, respectively). We evaluated the antimicrobial treatment of positive sterile- versus nonsterile-site cultures, specified a priori. We reviewed patient charts for clinician documentation of three specific details regarding antimicrobials: an explicit statement of antimicrobial regimen/indication, duration, and de-escalation. We also analyzed cost and defined daily doses (DDDs) (a World Health Organization (WHO) standardized metric of use) before and after ASP.
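The defined daily dose (DDD) metric mentioned above reduces consumption of different agents to a common unit: total grams used divided by the WHO-assigned daily dose for that drug, conventionally normalized per 100 patient-days. A minimal sketch of that arithmetic, assuming hypothetical consumption figures (the drug names carry real WHO DDD values, but the usage numbers are illustrative, not the study's data):

```python
# Defined daily dose (DDD): total grams consumed divided by the WHO-assigned
# daily dose for the drug, normalized here per 100 patient-days.

# WHO-assigned DDDs in grams per day (reference values for these agents)
WHO_DDD_G = {
    "meropenem": 3.0,        # 3 g/day, parenteral
    "ciprofloxacin_iv": 0.8  # 0.8 g/day, parenteral
}

def ddds_per_100_patient_days(total_grams, drug, patient_days):
    """Convert total grams consumed into DDDs per 100 patient-days."""
    ddds = total_grams / WHO_DDD_G[drug]
    return 100.0 * ddds / patient_days

# Hypothetical two-month ICU consumption:
# 450 g of meropenem over 900 patient-days -> 150 DDDs -> 16.7 per 100 pt-days
print(round(ddds_per_100_patient_days(450.0, "meropenem", 900), 1))  # → 16.7
```

A pre/post comparison of this normalized figure is what allows use to be compared across periods with different census, as in the pre-ASP versus post-ASP analysis above.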
Patient demographic data between the pre-ASP (n = 139) and post-ASP (n = 130) periods were similar. No difference was found in the percentage of positive cultures from sterile sites between the pre-ASP period and post-ASP period (44.9% versus 40.2%; P = 0.401). A significant increase was noted in the treatment of sterile-site cultures after ASP (64% versus 83%; P = 0.01) and a reduction in the treatment of nonsterile-site cultures (71% versus 46%; P = 0.0002). These differences were statistically significant when treatment decisions were analyzed both at an individual patient level and at an individual culture level. Increased explicit antimicrobial regimen documentation was observed after ASP (26% versus 71%; P < 0.0001). Also observed were increases in formally documented stop dates (53% versus 71%; P < 0.0001), regimen de-escalation (15% versus 23%; P = 0.026), and an overall reduction in cost and mean DDDs after ASP implementation.
Introduction of an ASP in the ICU was associated with improved microbiologically targeted therapy based on sterile or nonsterile cultures and improved documentation of antimicrobial use in the medical record.
Among critically ill patients with acute kidney injury (AKI) needing continuous renal replacement therapy (CRRT), the effect of convective (via continuous venovenous hemofiltration [CVVH]) versus diffusive (via continuous venovenous hemodialysis [CVVHD]) solute clearance on clinical outcomes is unclear. Our objective was to evaluate the feasibility of comparing these two modes in a randomized trial.
This was a multicenter open-label parallel-group pilot randomized trial of CVVH versus CVVHD. Using concealed allocation, we randomized critically ill adults with AKI and hemodynamic instability to CVVH or CVVHD, with a prescribed small solute clearance of 35 mL/kg/hour in both arms. The primary outcome was trial feasibility, defined by randomization of >25% of eligible patients, delivery of >75% of the prescribed CRRT dose, and follow-up of >95% of patients to 60 days. A secondary analysis using a mixed-effects model examined the impact of therapy on illness severity, defined by sequential organ failure assessment (SOFA) score, over the first week.
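The trial's primary outcome is itself a composite of three numeric thresholds, so it can be expressed as a simple predicate. A sketch under stated assumptions (the function and argument names are illustrative, not taken from the trial's analysis code):

```python
def trial_feasible(randomized, eligible, delivered_dose_pct, followed_up, enrolled):
    """Apply the pilot's a priori feasibility criteria:
    >25% of eligible patients randomized, >75% of the prescribed
    CRRT dose delivered, and >95% of patients followed to 60 days."""
    return (randomized / eligible > 0.25
            and delivered_dose_pct > 75.0
            and followed_up / enrolled > 0.95)

# Approximate figures from the results: 78 patients randomized out of
# roughly 142 eligible (55% recruitment), >80% dose delivery in each arm,
# and 100% follow-up of the 78 randomized patients.
print(trial_feasible(randomized=78, eligible=142,
                     delivered_dose_pct=80.0,
                     followed_up=78, enrolled=78))  # → True
```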
We randomized 78 patients (mean age 61.5 years; 39% women; 23% with chronic kidney disease; 82% with sepsis). Baseline SOFA scores (mean 15.9, SD 3.2) were similar between groups. We recruited 55% of eligible patients, delivered >80% of the prescribed dose in each arm, and achieved 100% follow-up. SOFA tended to decline more over the first week in CVVH recipients (−0.8, 95% CI −2.1 to +0.5), driven by a reduction in vasopressor requirements. Mortality (54% CVVH; 55% CVVHD) and dialysis dependence in survivors (24% CVVH; 19% CVVHD) at 60 days were similar.
Our results suggest that a large trial comparing CVVH with CVVHD would be feasible. There was a trend toward reduced vasopressor requirements among CVVH-treated patients over the first week of treatment.
All medication errors are serious, but those associated with the IV route of administration often result in the most severe outcomes. According to the literature, IV medications are associated with 54% of potential adverse events and 56% of medication errors.
To determine the type and frequency of errors associated with prescribing, documenting, and administering IV infusions, and to determine whether the incidence of errors correlates with either the time of day (day versus night) or the day of the week (weekday versus weekend), in an academic medical-surgical intensive care unit without computerized order entry or documentation.
As part of a quality improvement initiative, a prospective, observational audit was conducted for all IV infusions administered to critically ill patients during 40 randomly selected shifts over a 7-month period in 2007. For each IV infusion, data were collected from 3 sources: direct observation of administration of the medication to the patient, the medication administration record, and the patient’s medical chart. The primary outcome was the occurrence of any infusion-related errors, defined as any errors of omission or commission in the context of IV medication therapy that harmed or could have harmed the patient.
It was determined that up to 21 separate errors might occur in association with a single dose of an IV medication. In total, 1882 IV infusions were evaluated, and 5641 errors were identified. Omissions or discrepancies related to documentation accounted for 92.7% of all errors. The most common errors identified via each of the 3 data sources were incomplete labelling of IV tubing (1779 or 31.5% of all errors), omission of infusion diluent from the medication administration record (474 or 8.4% of all errors), and discrepancy between the medication order as recorded in the patient’s chart and the IV medication that was being infused (105 or 1.9% of all errors).
Strict definitions of errors and direct observation methods allowed identification of errors at every step of the medication administration process that was evaluated. Documentation discrepancies were the most prevalent type of errors in this paper-based system.
IV infusion; continuous infusion; errors; intensive care unit; critical care
There is a paucity of data about the clinical characteristics that help identify patients at high risk of influenza infection upon ICU admission. We aimed to identify predictors of influenza infection in patients admitted to ICUs during the 2007/2008 and 2008/2009 influenza seasons and the second wave of the 2009 H1N1 influenza pandemic as well as to identify populations with increased likelihood of seasonal and pandemic 2009 influenza (pH1N1) infection.
Six Toronto acute care hospitals participated in active surveillance for laboratory-confirmed influenza requiring ICU admission during periods of influenza activity from 2007 to 2009. Nasopharyngeal swabs were obtained from patients who presented to our hospitals with acute respiratory or cardiac illness or febrile illness without a clear nonrespiratory aetiology. Predictors of influenza were assessed by multivariable logistic regression analysis and the likelihood of influenza in different populations was calculated.
Of 5,482 patients, 126 (2.3%) were found to have influenza. Admission temperature ≥38°C (odds ratio (OR) 4.7 for pH1N1, 2.3 for seasonal influenza) and admission diagnosis of pneumonia or respiratory infection (OR 7.3 for pH1N1, 4.2 for seasonal influenza) were independent predictors of influenza. During the peak weeks of influenza seasons, 17% of afebrile patients and 27% of febrile patients with pneumonia or respiratory infection had influenza. During the second wave of the 2009 pandemic, 26% of afebrile patients and 70% of febrile patients with pneumonia or respiratory infection had influenza.
The findings of our study may assist clinicians in decision making regarding optimal management of adult patients admitted to ICUs during future influenza seasons. Influenza testing, empiric antiviral therapy and empiric infection control precautions should be considered in those patients who are admitted during influenza season with a diagnosis of pneumonia or respiratory infection and are either febrile or admitted during weeks of peak influenza activity.
Minimization of hemodynamic instability during renal replacement therapy (RRT) in patients with acute kidney injury (AKI) is often challenging. We examined the relative hemodynamic tolerability of sustained low efficiency dialysis (SLED) and continuous renal replacement therapy (CRRT) in critically ill patients with AKI. We also compared the feasibility of SLED administration with that of CRRT and intermittent hemodialysis (IHD).
This cohort study encompassed four critical care units within a single university-affiliated medical centre. Seventy-seven consecutive critically ill patients with AKI who were treated with CRRT (n = 30), SLED (n = 13) or IHD (n = 34) and completed at least two RRT sessions were included in the study. Overall, 223 RRT sessions were analyzed. Hemodynamic instability during a given session was defined as the composite of a >20% reduction in mean arterial pressure or any escalation in pressor requirements. Treatment feasibility was evaluated based on the fraction of the prescribed therapy time that was delivered; a session was designated as interrupted if <90% of the prescribed time was administered. Generalized estimating equations were used to compare the hemodynamic tolerability of SLED versus CRRT while accounting for within-patient clustering of repeated sessions and key confounders.
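The composite endpoint defined above (a >20% fall in mean arterial pressure or any escalation in pressor requirements) can be written as a small per-session predicate. A minimal sketch, assuming MAP readings and a vasopressor dose are available per session (the input layout and dose units are hypothetical, not the study's data structures):

```python
def hemodynamically_unstable(map_start, map_nadir, pressor_start, pressor_max):
    """Composite endpoint from the study definition: a >20% fall in mean
    arterial pressure (MAP, mmHg) during the session OR any escalation in
    vasopressor requirements (represented here as a dose, e.g. mcg/min)."""
    map_drop = (map_start - map_nadir) / map_start > 0.20
    pressor_escalation = pressor_max > pressor_start
    return map_drop or pressor_escalation

# MAP falls from 80 to 60 mmHg (25% drop), pressors unchanged -> unstable
print(hemodynamically_unstable(80.0, 60.0, 0.0, 0.0))  # → True
# MAP 80 -> 70 mmHg (12.5% drop), pressors unchanged -> stable
print(hemodynamically_unstable(80.0, 70.0, 5.0, 5.0))  # → False
```

Classifying every session this way yields the binary outcome that the GEE model then regresses on modality while accounting for repeated sessions within a patient.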
Hemodynamic instability occurred during 22 (56.4%) SLED and 43 (50.0%) CRRT sessions (p = 0.51). In a multivariable analysis that accounted for clustering of multiple sessions within the same patient, the odds ratio for hemodynamic instability with SLED was 1.20 (95% CI 0.58-2.47), as compared to CRRT. Session interruption occurred in 16 (16.3%), 30 (34.9%) and 11 (28.2%) of IHD, CRRT and SLED therapies, respectively.
In critically ill patients with AKI, the administration of SLED is feasible and provides comparable hemodynamic control to CRRT.
In the 2003 Toronto SARS outbreak, SARS-CoV was transmitted in hospitals despite adherence to infection control procedures. Considerable controversy resulted regarding which procedures and behaviours were associated with the greatest risk of SARS-CoV transmission.
A retrospective cohort study was conducted to identify risk factors for transmission of SARS-CoV during intubation from laboratory-confirmed SARS patients to HCWs involved in their care. All SARS patients requiring intubation during the Toronto outbreak were identified. All HCWs who provided care to intubated SARS patients during treatment or transportation and who entered a patient room or had direct patient contact from 24 hours before to 4 hours after intubation were eligible for this study. Data were collected on patients by chart review and on HCWs by interviewer-administered questionnaire. Generalized estimating equation (GEE) logistic regression models and classification and regression trees (CART) were used to identify risk factors for SARS transmission.
Forty-five laboratory-confirmed intubated SARS patients were identified. Of the 697 HCWs involved in their care, 624 (90%) participated in the study. SARS-CoV was transmitted to 26 HCWs from 7 patients; 21 HCWs were infected by 3 patients. In multivariable GEE logistic regression models, presence in the room during fiberoptic intubation (OR = 2.79, p = .004) or ECG (OR = 3.52, p = .002), unprotected eye contact with secretions (OR = 7.34, p = .001), patient APACHE II score ≥20 (OR = 17.05, p = .009) and patient PaO2/FiO2 ratio ≤59 (OR = 8.65, p = .001) were associated with increased risk of transmission of SARS-CoV. In CART analyses, the four covariates that explained the greatest amount of variation in SARS-CoV transmission were covariates representing individual patients.
Close contact with the airway of severely ill patients and failure of infection control practices to prevent exposure to respiratory secretions were associated with transmission of SARS-CoV. Rates of transmission of SARS-CoV varied widely among patients.
The ventilatory management of patients with acute respiratory failure is supported by good evidence, aiming to reduce lung injury by pressure limitation and reducing the duration of ventilatory support by regular assessment for discontinuation. Certain patient groups, however, due to their altered physiology or disease-specific complications, may require some variation in usual ventilatory management. The present manuscript reviews the ventilatory management in three special populations, namely the patient with brain injury, the pregnant patient and the morbidly obese patient.
You have recently heard reports that synthetic colloids may be associated with renal failure and other morbidities in certain populations of critically ill patients. You have been asked by the hospital chief of staff whether there should be a suspension of the use of synthetic colloids until further information is available. You need to make a decision.
Information and communication technology has the potential to address many problems encountered in intensive care unit (ICU) care, namely managing large amounts of patient and research data and reducing medical errors. The paper by Morrison and colleagues in the previous issue of Critical Care describes the adverse impact of introducing an electronic patient record in the ICU on multi-disciplinary communication during ward rounds. The importance of evaluation and technology assessment in the implementation and use of new computing technology is highlighted.
The Intensive Care Unit (ICU) is a data-rich environment where information technology (IT) may enhance patient care. We surveyed ICUs in the province of Ontario, Canada, to determine the availability, implementation and variability of information systems.
A self-administered internet-based survey was completed by ICU directors between May and October 2006. We measured the spectrum of ICU clinical data accessible electronically, the availability of decision support tools, the availability of electronic imaging systems for radiology, the use of electronic order entry and medication administration systems, and the availability of hardware and wireless or mobile systems. We used Fisher's Exact tests to compare IT availability and Classification and Regression Trees (CART) to estimate the optimal cut-point for the number of computers per ICU bed.
We obtained responses from 50 hospitals (68.5% of institutions with level 3 ICUs), of which 21 (42%) were university-affiliated. The majority electronically accessed laboratory data and imaging reports (92%) and used picture archiving and communication systems (PACS) (76%). Other computing functions were less prevalent (medication administration records 46%, physician or nursing notes 26%; medication order entry 22%). No association was noted between IT availability and ICU size or university affiliation. Sites used clinical information systems from 15 different vendors, and 8 different PACS were in use. Half of the respondents described the number of computers available as insufficient. Wireless networks and mobile computing systems were used in 23 ICUs (46%).
Ontario ICUs demonstrate a high prevalence of basic information technology systems. However, implementation of the more complex and potentially more beneficial applications is low. The wide variation in vendors utilized may impair information exchange, interoperability and uniform data collection.
Wireless communication and data transmission are playing an increasing role in the critical care environment. Early anecdotal reports of electromagnetic interference (EMI) with intensive care unit (ICU) equipment resulted in many institutions banning these devices. An increasing literature database has more clearly defined the risks of EMI. Restrictions to the use of mobile devices are being lifted, and it has been suggested that the benefits of improved communication may outweigh the small risks. However, increased use of cellular phones and ever changing communication technologies require ongoing vigilance by healthcare device manufacturers, hospitals and device users, to prevent potentially hazardous events due to EMI.
Personal digital assistants (PDAs) find many uses in health care. Knowing rates of collective PDA use among health care providers is an important guiding step to further understanding those health care contexts that are most suited to PDA use and whether PDAs provide improved health outcomes.
The objectives of this study were to estimate current and future PDA use among health care providers and to discuss possible implications of that use on choice of technology in clinical practice and research.
This study was a systematic review of PDA usage surveys. Surveys were identified as part of an ongoing systematic review on the use of handheld devices. Reports from eight databases covering both biomedical sciences and engineering (1993-2006) were screened against distinct eligibility criteria. Data from included surveys were extracted and verified in a standardized way and were assessed descriptively.
We identified 23 relevant surveys, 15 of which were derived from peer-reviewed journals. This cohort of surveys was published between 2000 and 2005. Overall, since 1999, there is clear evidence of an increasing trend in PDA use. The current overall adoption rate for individual professional use ranges between 45% and 85%, indicating high but somewhat variable adoption, primarily among physicians.
Younger physicians and residents and those working in large and hospital-based practices are more likely to use a PDA. The adoption rate is now at its highest rate of increase according to a commonly accepted diffusion of innovations model. A common problem with the evaluation of information technology is that use frequently precedes research. This is the case here, in which PDA adoption rates are already high and projections are for rapid growth in the short term. In general, it appears that professional PDA use in health care settings involves more administrative and organizational tasks than those related to patient care, perhaps signaling where the growth in adoption is most likely to occur. We conclude that physicians are likely accustomed to using a PDA, and, therefore, technology expertise will probably not be a barrier to implementing PDA applications. However, there is an urgent need to evaluate the effectiveness and efficiency of specific tasks using handheld technology to inform those developing and those using PDA applications.
Personal digital assistant; systematic review; survey; health care; health technology adoption
Disaster management plans have traditionally been required to manage major traumatic events that create a large number of victims. Infectious diseases, whether they be natural (e.g. SARS [severe acute respiratory syndrome] and influenza) or the result of bioterrorism, have the potential to create a large influx of critically ill into our already strained hospital systems. With proper planning, hospitals, health care workers and our health care systems can be better prepared to deal with such an eventuality. This review explores the Toronto critical care experience of coping in the SARS outbreak disaster. Our health care system and, in particular, our critical care system were unprepared for this event, and as a result the impact that SARS had was worse than it could have been. Nonetheless, we were able to organize a response rapidly during the outbreak. By describing our successes and failures, we hope to help others to learn and avoid the problems we encountered as they develop their own disaster management plans in anticipation of similar future situations.
Critical care physicians may benefit from immediate access to medical reference material. We evaluated the feasibility and potential benefits of a handheld computer based knowledge access system linking a central academic intensive care unit (ICU) to multiple community-based ICUs.
Four community hospital ICUs with 17 physicians participated in this prospective interventional study. Following training in the use of an internet-linked, updateable handheld computer knowledge access system, the physicians used the handheld devices in their clinical environment for a 12-month intervention period. Feasibility of the system was evaluated by tracking use of the handheld computer and by conducting surveys and focus group discussions. Before and after the intervention period, participants underwent simulated patient care scenarios designed to evaluate the information sources they accessed, as well as the speed and quality of their decision making. Participants generated admission orders during each scenario, which were scored by blinded evaluators.
Ten physicians (59%) used the system regularly, predominantly for nonmedical applications (median 32.8/month, interquartile range [IQR] 28.3–126.8), with medical software accessed less often (median 9/month, IQR 3.7–13.7). Eight out of 13 physicians (62%) who completed the final scenarios chose to use the handheld computer for information access. The median time to access information on the handheld computer was 19 s (IQR 15–40 s). This group exhibited a significant improvement in admission order score as compared with those who used other resources (P = 0.018). Benefits of and barriers to use of this technology were identified.
An updateable handheld computer system is feasible as a means of point-of-care access to medical reference material and may improve clinical decision making. However, during the study, acceptance of the system was variable. Improved training and new technology may overcome some of the barriers we identified.
clinical; computer; critical care; decision support systems; handheld; internet; point-of-care systems; practice guidelines; simulation
We conducted this study to evaluate the feasibility of implementing an internet-linked handheld computer procedure logging system in a critical care training program.
Subspecialty trainees in the Interdepartmental Division of Critical Care at the University of Toronto received and were trained in the use of Palm handheld computers loaded with a customized program for logging critical care procedures. The procedures were entered into the handheld device using checkboxes and drop-down lists, and data were uploaded to a central database via the internet. To evaluate the feasibility of this system, we tracked the utilization of this data collection system. Benefits and disadvantages were assessed through surveys.
All 11 trainees successfully uploaded data to the central database, but only six (55%) continued to upload data on a regular basis. The most common reason cited for not using the system pertained to initial technical problems with data uploading. From 1 July 2002 to 30 June 2003, a total of 914 procedures were logged. Significant variability was noted in the number of procedures logged by individual trainees (range 13–242). The database generated by regular users provided potentially useful information to the training program director regarding the scope and location of procedural training among the different rotations and hospitals.
A handheld computer procedure logging system can be effectively used in a critical care training program. However, user acceptance was not uniform, and continued training and support are required to increase user acceptance. Such a procedure database may provide valuable information that may be used to optimize trainees' educational experience and to document clinical training experience for licensing and accreditation.
critical care; handheld computers; internet; procedure logging; training program
In patients with acute respiratory distress syndrome (ARDS), the lung comprises areas of aeration and areas of alveolar collapse, the latter producing intrapulmonary shunt and hypoxemia. The currently suggested strategy of ventilation with low lung volumes can aggravate lung collapse and potentially produce lung injury through shear stress at the interface between aerated and collapsed lung, and as a result of repetitive opening and closing of alveoli. An 'open lung strategy' focused on alveolar patency has therefore been recommended. While positive end-expiratory pressure prevents alveolar collapse, recruitment maneuvers can be used to achieve alveolar recruitment. Various recruitment maneuvers exist, including sustained inflation to high pressures, intermittent sighs, and stepwise increases in positive end-expiratory pressure or peak inspiratory pressure. In animal studies, recruitment maneuvers clearly reverse the derecruitment associated with low tidal volume ventilation, improve gas exchange, and reduce lung injury. Data regarding the use of recruitment maneuvers in patients with ARDS show mixed results, with increased efficacy in those with short duration of ARDS, good compliance of the chest wall, and in extrapulmonary ARDS. In this review we discuss the pathophysiologic basis for the use of recruitment maneuvers and recent evidence, as well as the practical application of the technique.
acute respiratory distress syndrome; artificial respiration; atelectasis; mechanical ventilation; positive end-expiratory pressure
Handheld computers have become a valuable and popular tool in various fields of medicine. A systematic review of articles was undertaken to summarize the current literature regarding the use of handheld devices in medicine. A variety of articles were identified, and relevant information for various medical fields was summarized. The literature search covered general information about handheld devices, the use of these devices to access medical literature, electronic pharmacopoeias, patient tracking, medical education, research, business management, e-prescribing, patient confidentiality, and costs as well as specialty-specific uses for personal digital assistants (PDAs).
The authors concluded that only a small number of articles provide evidence-based information about the use of PDAs in medicine. The majority of articles provide descriptive information, which is nevertheless of value. This article aims to increase the awareness among physicians about the potential roles for handheld computers in medicine and to encourage the further evaluation of their use.
SARS (severe acute respiratory syndrome) proved an enormous physical and emotional challenge to frontline health care workers throughout the world in late 2002 through to mid 2003. A large percentage of patients (many being health care workers themselves) became critically ill. Unfortunately, clinicians caring for these individuals did not have the advantage of previous experience or research data on which to base treatment decisions. As a result, at least early in the outbreak, a 'best guess approach' and/or anecdotes drove therapy. In many centres systemic steroids, which carry many potential downsides, became a mainstay of therapy. In this issue of Critical Care, two groups that have frontline experience of SARS debate the role of steroids. Let us hope and pray together that we never have the patient population needed to resolve the questions the two sides raise.
critical care; respiratory failure; SARS; steroids; viral pneumonia
One of the highlights of the intensive care unit when I was a resident was the opportunity to place a pulmonary artery catheter and then spend the rest of the day calculating parameters such as oxygen delivery, oxygen consumption, intrapulmonary shunt fraction, and so on. I have noticed in the past few years that the use of these devices in our unit is much less frequent. In our case I am not absolutely certain of the reason for this. Perhaps with time our clinical sense has improved to the point that we do not need the data available, perhaps other tests have replaced the pulmonary artery catheter's role or perhaps we are worried about the possible morbidity/mortality associated with its use. In the present article, we revisit this important debate.
hemodynamics; pulmonary artery catheterization; pulmonary wedge pressure; Swan–Ganz catheterization
Both a reduction in tidal volume and alveolar recruitment may be necessary to prevent ventilator-induced lung injury in the management of patients with acute respiratory distress syndrome. The lung collapse associated with endotracheal suctioning produces hypoxaemia, but it also causes de-recruitment, potentially aggravating lung injury. A study conducted by Dyhr and colleagues demonstrates the benefit of lung recruitment manoeuvres after suctioning, which help to improve oxygenation and restore lung volume more rapidly. Although this intervention appears safe and beneficial, the precise role of lung volume recruitment manoeuvres remains to be elucidated.
acute respiratory distress syndrome; atelectasis; mechanical ventilation; suctioning
The Internet is an invaluable resource for critical care clinicians. However, the search for useful Internet resources can be frustrating and time-consuming. In this issue, Critical Care launches a new section entitled 'Web Reports', which will regularly provide critical appraisal of Internet resources that may be of interest to critical care health care workers.
critical care; Internet; online resources
To evaluate the feasibility of incorporating hand-held computing technology into a surgical residency program by means of hand-held devices for surgical procedure logging linked through the Internet to a central database.
Division of General Surgery, University of Toronto.
A survey of general surgery residents.
The 69 residents in the general surgery training program received hand-held computers with preinstalled medical programs and a program designed for surgical procedure logging. Procedural data were uploaded via the Internet to a central database. Survey data were collected regarding previous computer use as well as previous procedure logging methods.
Main outcome measure
Utilization of the procedure logging system.
After a 5-month pilot period, 38% of surgical residents were using the procedure-logging program successfully and on a regular basis. Program use was higher among more junior trainees. Analysis of the database provided valuable information on individual trainees, hospital programs and supervising surgeons, data that would assist in program development.
Hand-held devices can be implemented in a large division of general surgery to provide a reference database and a procedure-logging platform. However, user acceptance is not uniform and continued training and support are necessary to increase acceptance. The procedure database provides important information for optimizing trainees’ educational experience.