Irrespective of initiating factors, the peripheral circulation shows two general phases during the development and treatment of shock. Most published reports support earlier knowledge that the peripheral circulation is among the first to deteriorate and the last to be restored. With new techniques, and renewed use of older ones, that allow us to continuously monitor peripheral perfusion, we may further shift our focus from pressure-based to flow-based resuscitation. The persisting challenge is the validation (i.e., the effect on outcome parameters) of peripheral perfusion monitoring tools that are simple and readily available worldwide.
Recent studies challenge the utility of central venous pressure monitoring as a surrogate for cardiac preload. Starting with Starling’s original studies on the regulation of cardiac output, this review traces the history of the experiments that elucidated the role of central venous pressure in circulatory physiology. Central venous pressure is an important physiologic parameter, but it is not an independent variable that determines cardiac output.
Circulatory shock is a life-threatening syndrome resulting in multiorgan failure and a high mortality rate. The aim of this consensus is to provide support to the bedside clinician regarding the diagnosis, management and monitoring of shock.
The European Society of Intensive Care Medicine invited 12 experts to form a Task Force to update a previous consensus (Antonelli et al.: Intensive Care Med 33:575–590, 2007). The same five questions addressed in the earlier consensus were used as the outline for the literature search and review, and the Task Force aimed to produce statements based on the available literature and evidence. These questions were: (1) What are the epidemiologic and pathophysiologic features of shock in the intensive care unit? (2) Should we monitor preload and fluid responsiveness in shock? (3) How and when should we monitor stroke volume or cardiac output in shock? (4) What markers of the regional and microcirculation can be monitored, and how can cellular function be assessed in shock? (5) What is the evidence for using hemodynamic monitoring to direct therapy in shock? Four types of statements were used: definition, recommendation, best practice and statement of fact.
Forty-four statements were made. The main new statements include: (1) statements on individualizing blood pressure targets; (2) statements on the assessment and prediction of fluid responsiveness; (3) statements on the use of echocardiography and hemodynamic monitoring.
This consensus provides 44 statements that can be used at the bedside to diagnose, treat and monitor patients with shock.
Circulatory shock; Intensive care unit; Hemodynamic monitoring; Echocardiography; Consensus statement/guidelines
The Surviving Sepsis Campaign guidelines recommend goal-directed therapy (GDT) for the early resuscitation of patients with sepsis. However, the findings of the ProCESS (Protocolized Care for Early Septic Shock) trial showed no benefit from GDT for reducing mortality rates in early septic shock. We performed a meta-analysis to integrate these findings with existing literature on this topic and evaluate the effect of GDT on mortality due to sepsis.
We searched the PubMed, Embase and CENTRAL (Cochrane Central Register of Controlled Trials) databases and reference lists of extracted articles. Randomized controlled trials comparing GDT with standard therapy or usual care in patients with sepsis were included. The prespecified primary outcome was overall mortality.
In total, 13 trials involving 2,525 adult patients were included. GDT significantly reduced overall mortality in the random-effects model (relative risk (RR), 0.83; 95% confidence interval (CI), 0.71 to 0.96; P = 0.01; I² = 56%). Predefined subgroup analysis according to the timing of GDT for resuscitation suggested that a mortality benefit was seen only in the subgroup of early GDT within the first 6 hours (seven trials; RR, 0.77; 95% CI, 0.67 to 0.89; P = 0.0004; I² = 40%), but not in the subgroup with late or unclear timing of GDT (six trials; RR, 0.92; 95% CI, 0.69 to 1.24; P = 0.59; I² = 56%). GDT was significantly associated with the use of dobutamine (five trials; RR, 2.71; 95% CI, 1.20 to 6.10; P = 0.02).
The results of the present meta-analysis suggest that GDT significantly reduces overall mortality in patients with sepsis, especially when initiated early. However, owing to the variable quality of the studies, strong and definitive recommendations cannot be made.
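The random-effects pooling and heterogeneity statistics cited above (RR, 95% CI, I²) follow the standard DerSimonian-Laird procedure, which can be sketched in a few lines. The per-trial counts below are invented for illustration and are not the data of the included trials:

```python
import math

# Hypothetical per-trial mortality counts, for illustration only;
# these are NOT the data of the trials pooled in the meta-analysis.
trials = [
    # (events_gdt, n_gdt, events_control, n_control)
    (38, 130, 59, 133),
    (20, 100, 28, 102),
    (55, 210, 60, 205),
]

# Per-trial log relative risk and its variance
yi, vi = [], []
for e1, n1, e0, n0 in trials:
    yi.append(math.log((e1 / n1) / (e0 / n0)))
    vi.append(1 / e1 - 1 / n1 + 1 / e0 - 1 / n0)

# Fixed-effect weights and Cochran's Q
w = [1 / v for v in vi]
ybar = sum(wi * y for wi, y in zip(w, yi)) / sum(w)
q = sum(wi * (y - ybar) ** 2 for wi, y in zip(w, yi))

# DerSimonian-Laird between-trial variance (tau^2) and I^2
df = len(trials) - 1
c = sum(w) - sum(wi * wi for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)
i2 = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0

# Random-effects pooled relative risk with 95% CI
w_re = [1 / (v + tau2) for v in vi]
mu = sum(wi * y for wi, y in zip(w_re, yi)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
rr = math.exp(mu)
ci = (math.exp(mu - 1.96 * se), math.exp(mu + 1.96 * se))
print(f"pooled RR {rr:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, I2 {i2:.0f}%")
```

When Q falls below its degrees of freedom, tau² and I² are truncated at zero, which is why small meta-analyses can report I² = 0% despite visible spread in the trial estimates.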
The decision of when to stop septic shock resuscitation is a critical yet relatively unexplored aspect of care. This is especially relevant since the risks of over-resuscitation with fluid overload or inotropes have been highlighted in recent years. A recent guideline has proposed normalization of central venous oxygen saturation and/or lactate as therapeutic end-points, assuming that these variables are equivalent or interchangeable. However, since the physiological determinants of the two are totally different, it is legitimate to challenge the rationale of this proposal. We designed this study to gain more insight into the most appropriate resuscitation goal from a dynamic point of view. Our objective was to compare the normalization rates of these and other potential perfusion-related targets in a cohort of septic shock survivors.
We designed a prospective, observational clinical study. One hundred and four septic shock patients with hyperlactatemia were included and followed until hospital discharge. The 84 hospital-survivors were kept for final analysis. A multimodal perfusion assessment was performed at baseline, 2, 6, and 24 h of ICU treatment.
Some variables, such as central venous oxygen saturation, central venous–arterial pCO2 gradient, and capillary refill time, were already normal in more than 70% of survivors at 6 h. Lactate presented a much slower normalization rate, decreasing significantly at 6 h compared with baseline (4.0 [3.0 to 4.9] vs. 2.7 [2.2 to 3.9] mmol/L; p < 0.01), but with only 52% of patients achieving normal values at 24 h. Sublingual microcirculatory variables exhibited the slowest recovery rate, with persistent derangements still present in almost 80% of patients at 24 h.
Perfusion-related variables exhibit very different normalization rates in septic shock survivors, most of them exhibiting a biphasic response with an initial rapid improvement, followed by a much slower trend thereafter. This fact should be taken into account to determine the most appropriate criteria to stop resuscitation opportunely and avoid the risk of over-resuscitation.
Septic shock; Perfusion; Resuscitation; Lactate; Microcirculation
Definitions of shock and resuscitation endpoints traditionally focus on blood pressures and cardiac output. This carries a high risk of overemphasizing systemic hemodynamics at the cost of tissue perfusion. In line with novel shock definitions and evidence of the lack of a correlation between macro- and microcirculation in shock, we recommend that macrocirculatory resuscitation endpoints, particularly arterial and central venous pressure as well as cardiac output, be reconsidered. In this viewpoint article, we propose a three-step approach to resuscitation endpoints in shock of all origins. This approach targets only a minimum individual and context-sensitive mean arterial blood pressure (for example, 45 to 50 mm Hg) to preserve heart and brain perfusion. Further resuscitation is guided exclusively by endpoints of tissue perfusion, irrespective of the presence of arterial hypotension ('permissive hypotension'). Finally, optimization of individual tissue (for example, renal) perfusion is targeted. Prospective clinical studies are necessary to confirm the postulated benefits of targeting these resuscitation endpoints.
Delirium in critically ill patients has a strong adverse impact on prognosis. Despite its recognized importance, however, delirium screening and treatment procedures are often not in accordance with current guidelines. This implementation study is designed to assess barriers and facilitators for guideline adherence and then to develop a multifaceted, tailored implementation strategy. The effects of this strategy on guideline adherence as well as on important clinical outcomes will be described.
Current practices and guideline deviations will be assessed in a prospective baseline measurement. Barriers and facilitators will be identified from a survey among intensive care health care professionals (intensivists and nurses) and focus group interviews with selected health care professionals (n = 60). Findings will serve as a foundation for a tailored guideline implementation strategy. Adherence to the guideline and effects of the implementation strategies on relevant clinical outcomes will be piloted in a before-after study in six intensive care units (ICUs) in the southwest Netherlands. The primary outcomes are adherence to screening and treatment in line with the Dutch ICU delirium guideline. Secondary outcomes are process measures (e.g. attendance at training and knowledge) and clinical outcomes (e.g. incidence of delirium, hospital mortality, and length of stay). Primary and secondary outcome data will be collected at four time points, including at least 924 patients. Furthermore, a process evaluation will be done, including an economic evaluation.
Little is known about effective implementation of delirium management in the critically ill. The proposed multifaceted implementation strategy is expected to improve process measures such as screening adherence in line with the guideline and may improve clinical outcomes, such as mortality and length of stay. This ICU Delirium in Clinical Practice Implementation Evaluation study (iDECePTIvE-study) will generate important knowledge for ICU health care providers on how to improve their clinical practice to establish optimum care for delirious patients.
ClinicalTrials.gov: NCT01952899
Intensive care; Critical care; Delirium; Screening; Delirium management; Implementation; Guideline
Invasive species threaten biodiversity and incur costs exceeding billions of US$. Eradication efforts, however, are nearly always unsuccessful. Throughout much of North America, land managers have used expensive, and ultimately ineffective, techniques to combat invasive Phragmites australis in marshes. Here, we reveal that Phragmites may potentially be controlled by employing an affordable measure from its native European range: livestock grazing. Experimental field tests demonstrate that rotational goat grazing (where goats have no choice but to graze Phragmites) can reduce Phragmites cover from 100 to 20% and that cows and horses also readily consume this plant. These results, combined with the fact that Europeans have suppressed Phragmites through seasonal livestock grazing for 6,000 years, suggest Phragmites management can shift to include more economical and effective top-down control strategies. More generally, these findings support an emerging paradigm shift in conservation from high-cost eradication to economically sustainable control of dominant invasive species.
Top-down control; Salt marshes; Invasive species; Biocontrol
In plant leaves, resource use follows a trade-off between rapid resource capture and conservative storage. This “worldwide leaf economics spectrum” consists of a suite of intercorrelated leaf traits, among which leaf mass per area, LMA, is one of the most fundamental as it indicates the cost of leaf construction and light-interception borne by plants. We conducted a broad-scale analysis of the evolutionary history of LMA across a large dataset of 5401 vascular plant species. The phylogenetic signal in LMA displayed low but significant conservatism, that is, leaf economics tended to be more similar among close relatives than expected by chance alone. Models of trait evolution indicated that LMA evolved under weak stabilizing selection. Moreover, results suggest that different optimal phenotypes evolved among large clades within which extremes tended to be selected against. Conservatism in LMA was strongly related to growth form, as were selection intensity and phenotypic evolutionary rates: woody plants showed higher conservatism in relation to stronger stabilizing selection and lower evolutionary rates compared to herbaceous taxa. The evolutionary history of LMA thus paints different evolutionary trajectories of vascular plant species across clades, revealing the coordination of leaf trait evolution with growth forms in response to varying selection regimes.
Brownian model; functional trait; Ornstein–Uhlenbeck model; phenotypic evolution
Recent clinical studies have shown a relationship between abnormalities in peripheral perfusion and unfavorable outcome in patients with circulatory shock. Nitroglycerin is effective in restoring alterations in microcirculatory blood flow. The aim of this study was to investigate whether nitroglycerin could correct the parameters of abnormal peripheral circulation in resuscitated circulatory shock patients.
This interventional study recruited patients who had circulatory shock and persistent abnormal peripheral perfusion despite normalization of global hemodynamic parameters. Nitroglycerin was started at 2 mg/hour and doubled stepwise (4, 8, and 16 mg/hour) every 15 minutes until an improvement in peripheral perfusion was observed. Peripheral circulation parameters included capillary refill time (CRT), skin-temperature gradient (Tskin-diff), perfusion index (PI), and tissue oxygen saturation (StO2) during a reactive hyperemia test (RincStO2). Measurements were performed before, at the maximum dose, and after cessation of nitroglycerin infusion. Data were analyzed using a linear model for repeated measurements and are presented as mean (standard error).
Of the 15 patients included, four patients (27%) responded to the initial nitroglycerin dose of 2 mg/hour. In all patients, nitroglycerin infusion resulted in significant changes in CRT, Tskin-diff, and PI toward normal at the maximum dose of nitroglycerin: from 9.4 (0.6) seconds to 4.8 (0.3) seconds (P < 0.05), from 3.3°C (0.7°C) to 0.7°C (0.6°C) (P < 0.05), and from [log] -0.5% (0.2%) to 0.7% (0.1%) (P < 0.05), respectively. Similar changes in StO2 and RincStO2 were observed: from 75% (3.4%) to 84% (2.7%) (P < 0.05) and from 1.9%/second (0.08%/second) to 2.8%/second (0.05%/second) (P < 0.05), respectively. The magnitude of the change in StO2 was more pronounced for StO2 of less than 75%: 11% versus 4%, respectively (P < 0.05).
Dose-dependent infusion of nitroglycerin reversed abnormal peripheral perfusion and poor tissue oxygenation in patients following circulatory shock resuscitation. The nitroglycerin dose required to improve peripheral circulation varies between patients. A simple and fast physical examination of the peripheral circulation at the bedside can be used to titrate nitroglycerin infusion.
Altered peripheral perfusion is strongly associated with poor outcome in critically ill patients. We wanted to determine whether repeated assessments of peripheral perfusion during the days following surgery could help identify, at an early stage, patients who are more likely to develop postoperative complications.
Haemodynamic measurements and peripheral perfusion parameters were collected one day prior to surgery, directly after surgery (D0) and on the first (D1), second (D2) and third (D3) postoperative days. Peripheral perfusion assessment consisted of capillary refill time (CRT), peripheral perfusion index (PPI) and forearm-to-fingertip skin temperature gradient (Tskin-diff). Generalized linear mixed models were used to predict severe complications within ten days after surgery based on Clavien-Dindo classification.
We prospectively followed 137 consecutive patients, of whom 111 were included in the analysis. Severe complications were observed in 19 patients (17.0%). Postoperatively, peripheral perfusion parameters were significantly altered in patients who subsequently developed severe complications compared to those who did not, and these alterations persisted over time. CRT was altered at D0, and PPI and Tskin-diff were altered on D1 and D2, respectively. Among the different peripheral perfusion parameters, the diagnostic accuracy in predicting severe postoperative complications was highest for CRT on D2 (area under the receiver operating characteristic curve = 0.91 (95% confidence interval (CI) = 0.83 to 0.92)), with a sensitivity of 0.79 (95% CI = 0.54 to 0.94) and a specificity of 0.93 (95% CI = 0.86 to 0.97). Generalized mixed-model analysis demonstrated that abnormal peripheral perfusion on D2 and D3 was an independent predictor of severe postoperative complications (D2 odds ratio (OR) = 8.4, 95% CI = 2.7 to 25.9; D3 OR = 6.4, 95% CI = 2.1 to 19.6).
In a group of patients assessed following major abdominal surgery, peripheral perfusion alterations were associated with the development of severe complications independently of systemic haemodynamics. Further research is needed to confirm these findings and to explore in more detail the effects of peripheral perfusion–targeted resuscitation following major abdominal surgery.
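The discriminative statistics reported for capillary refill time (AUC, sensitivity, specificity) can be illustrated with a small sketch. The CRT values and the 4.5-second cutoff below are hypothetical, not data from the study:

```python
# Hypothetical day-2 capillary refill times (seconds); the values are
# invented for illustration and are not data from the study.
crt_cases = [6.1, 5.4, 3.2, 4.8, 6.5]          # developed severe complications
crt_controls = [2.1, 2.8, 3.0, 2.4, 3.5, 2.6]  # uneventful recovery

# AUC equals the probability that a randomly chosen case scores higher
# than a randomly chosen control (Mann-Whitney interpretation).
pairs = [(c, n) for c in crt_cases for n in crt_controls]
auc = sum(1.0 if c > n else 0.5 if c == n else 0.0 for c, n in pairs) / len(pairs)

# Sensitivity and specificity at an assumed cutoff of 4.5 s
cutoff = 4.5
sens = sum(c >= cutoff for c in crt_cases) / len(crt_cases)
spec = sum(n < cutoff for n in crt_controls) / len(crt_controls)
print(f"AUC {auc:.2f}, sensitivity {sens:.2f}, specificity {spec:.2f}")
```

The pairwise-comparison formulation makes explicit why AUC is threshold-free, while sensitivity and specificity depend on the single cutoff chosen.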
Circulatory shock is common and associated with high morbidity and mortality. Appropriate shock treatment relies on a good understanding of the pathophysiological mechanisms underlying shock. In this article, we provide an update on the description, classification, and management of shock states built on foundations laid by Dr Max Harry Weil, a key early contributor to this field.
A growing body of evidence associates depressed microcirculatory function with morbidity and mortality in a wide array of clinical scenarios. It has been suggested that volume replacement therapy using fluids and/or blood in combination with vasoactive agents to modulate macro- and microvascular perfusion might be essential for resuscitation of severely septic patients. Even after interventions effectively optimizing macrocirculatory hemodynamics, however, high mortality rates persist in critically ill and especially in septic patients. Therefore, rather than limiting therapy to macrocirculatory targets alone, microcirculatory targets could be incorporated to potentially reduce mortality rates in these critically ill patients. In the present review we first provide a brief history of clinical imaging of the microcirculation and describe how microcirculatory imaging has been of prognostic value in intensive care patients. We then give an overview of therapies potentially improving the microcirculation in critically ill patients and propose a clinical trial aimed at demonstrating that therapy targeting improvement of the microcirculation results in improved organ function in patients with severe sepsis and septic shock. We end with some recent technological advances in clinical microcirculatory image acquisition and analysis.
Increased blood lactate levels (hyperlactataemia) are common in critically ill patients. Although frequently used to diagnose inadequate tissue oxygenation, processes unrelated to tissue oxygenation may also increase lactate levels. Especially in critically ill patients, increased glycolysis may be an important cause of hyperlactataemia. Nevertheless, the presence of increased lactate levels has important implications for the morbidity and mortality of hyperlactataemic patients. Although the term lactic acidosis is frequently used, a significant relationship between lactate and pH exists only at higher lactate levels; the term lactate-associated acidosis is therefore more appropriate. Two recent studies have underscored the importance of monitoring lactate levels and adjusting treatment to changes in lactate levels during early resuscitation. As lactate levels can be measured rapidly at the bedside from various sources, structured lactate measurements should be incorporated into resuscitation protocols.
Objective. Sublingual microcirculatory alterations are associated with an adverse prognosis in several critical illness subgroups. To date, single-center studies have reported on sublingual microcirculatory alterations in ICU patient subgroups, but an extensive evaluation of the prevalence of these alterations is lacking. We present the study design of an international multicenter observational study to investigate the prevalence of microcirculatory alterations in critically ill patients: the Microcirculatory Shock Occurrence in Acutely ill Patients (microSOAP) study. Methods. 36 ICUs worldwide have participated in this study, which aims to include over 500 evaluable patients. To enable communication and data collection, a website, an OpenClinica 3.0 database, and image-uploading software have been designed. A one-session assessment of the sublingual microcirculation using Sidestream Dark Field imaging, together with data collection on patient characteristics, has been performed in every ICU patient >18 years, regardless of underlying disease. Statistical analysis will provide insight into the prevalence and severity of sublingual alterations and their relation to systemic hemodynamic variables, disease, therapy, and outcome. Conclusion. This study will be the largest microcirculation study ever performed. It is expected that this study will also establish a basis for future studies of the microcirculation in the critically ill.
Acute kidney injury (AKI) is strongly associated with increased morbidity and mortality in critically ill patients. Efforts to change its clinical course have failed because clinically available therapeutic measures are currently lacking, and early detection is impossible with serum creatinine (SCr). The demand for earlier markers has prompted the discovery of several candidates to serve this purpose. In this paper, we review available biomarker studies on the early predictive performance in developing AKI in adult critically ill patients. We make an effort to present the results from the perspective of possible clinical utility.
AKI; biomarkers; ICU
Near-infrared spectroscopy has been used as a noninvasive monitoring tool for tissue oxygen saturation (StO2) in acutely ill patients. This study aimed to investigate whether local vasoconstriction induced by body surface cooling significantly influences thenar StO2 as measured by InSpectra model 650.
Eight healthy individuals (age 26 ± 6 years) participated in the study. Using a cooling blanket, we aimed to cool the entire body surface to induce vasoconstriction in the skin without any changes in central temperature. Thenar StO2 was noninvasively measured during a 3-min vascular occlusion test using InSpectra model 650 with a 15-mm probe. Measurements were analyzed for resting StO2 values, rate of StO2 desaturation (RdecStO2, %/min), and rate of StO2 recovery (RincStO2, %/s) before, during, and after skin cooling. Measurements also included heart rate (HR), mean arterial pressure (MAP), cardiac output (CO), stroke volume (SV), capillary refill time (CRT), forearm-to-fingertip skin-temperature gradient (Tskin-diff), perfusion index (PI), and tissue hemoglobin index (THI).
In all subjects MAP, CO, SV, and core temperature did not change during the procedure. Skin cooling resulted in a significant decrease in StO2 from 82% (80–87) to 72% (70–77) (P < 0.05) and in RincStO2 from 3.0%/s (2.8–3.3) to 1.7%/s (1.1–2.0) (P < 0.05). Similar changes in CRT, Tskin-diff, and PI were also observed: from 2.5 s (2.0–3.0) to 8.5 s (7.2–11.0) (P < 0.05), from 1.0°C (−1.6–1.8) to 3.1°C (1.8–4.3) (P < 0.05), and from 10.0% (9.1–11.7) to 2.5% (2.0–3.8), respectively. The THI values did not change significantly.
Peripheral vasoconstriction due to body surface cooling could significantly influence noninvasive measurements of thenar StO2 using InSpectra model 650 with 15-mm probe spacing.
Near-infrared spectroscopy; Skin temperature; Body surface cooling; Capillary refill time
We studied whether the choice of timing of discussing organ donation for the first time with the relatives of a patient with catastrophic brain injury in The Netherlands has changed over time, and we explored its possible consequences. Second, we investigated how thorough the process of brain death determination was over time by studying the number of medical specialists involved. Finally, we studied the possible influence of the Donor Register on the consent rate.
We performed a retrospective chart review of all effectuated brain dead organ donors between 1987 and 2009 in one Dutch university hospital with a large neurosurgical serving area.
A total of 271 medical charts were collected, of which 228 brain dead patients were included. In the first period, organ donation was discussed for the first time after brain death determination (87%). In 13% of the cases, the issue of organ donation was raised before the first EEG. After 1998, we observed a shift in this practice. Discussing organ donation for the first time after brain death determination occurred in only 18% of the cases. In 58% of the cases, the issue of organ donation was discussed before the first EEG but after confirming the absence of all brain stem reflexes, and in 24% of the cases, the issue of organ donation was discussed after the prognosis was deemed catastrophic but before a neurologist or neurosurgeon assessed and determined the absence of all brain stem reflexes as required by the Dutch brain death determination protocol.
The phases in the process of brain death determination and the time at which organ donation is first discussed with relatives have changed over time. Possible causes of this change are the introduction of the Donor Register, the reintroduction of donation after circulatory death and other logistical factors. It is unclear whether the observed shift contributed to the high refusal rate in The Netherlands and the increase in family refusal in our hospital in the second studied period. Taking the published literature on this subject into account, this shift may have had a counterproductive effect.
Computed tomography of the lung has shown that ventilation shifts from dependent to nondependent lung regions. In this study, we investigated whether, at the bedside, electrical impedance tomography (EIT) at the cranial and caudal thoracic levels can be used to visualize changes in ventilation distribution during a decremental positive end-expiratory pressure (PEEP) trial and the relation of these changes to global compliance in mechanically ventilated patients.
Ventilation distribution was calculated on the basis of EIT results from 12 mechanically ventilated patients after cardiac surgery at a cardiothoracic ICU. Measurements were taken at four PEEP levels (15, 10, 5 and 0 cm H2O) at both the cranial and caudal lung levels, which were divided into four ventral-to-dorsal regions. Regional compliance was calculated using impedance and driving pressure data.
We found that tidal impedance variation divided by tidal volume significantly decreased on caudal EIT slices, whereas this measurement increased on the cranial EIT slices. The dorsal-to-ventral impedance distribution, expressed according to the center of gravity index, decreased during the decremental PEEP trial at both EIT levels. Optimal regional compliance differed at different PEEP levels: 10 and 5 cm H2O at the cranial level and 15 and 10 cm H2O at the caudal level for the dependent and nondependent lung regions, respectively.
At the bedside, EIT measured at two thoracic levels showed different behavior between the caudal and cranial lung levels during a decremental PEEP trial. These results indicate that there is probably no single optimal PEEP level for all lung regions.
electric impedance; mechanical ventilation; positive-pressure respiration; atelectasis; critical care; humans
First, we aimed to evaluate the ability of neutrophil gelatinase-associated lipocalin (NGAL) and cystatin C (CyC) in plasma and urine to discriminate between sustained, transient and absent acute kidney injury (AKI), and second, to evaluate their predictive performance for sustained AKI in adult intensive care unit (ICU) patients.
We conducted a prospective cohort study of 700 patients. Samples were collected at eight time points, starting on admission.
After exclusions, 510 patients remained for the analysis. All biomarkers showed significant differentiation between sustained and no AKI at all time points (p ≤ 0.0002), except for urine CyC (uCyC) on admission (p = 0.06). Urine NGAL (uNGAL) was the only biomarker that significantly differentiated sustained from transient AKI on ICU admission (p = 0.02). Individually, uNGAL performed better than the other biomarkers (area under the curve (AUC) = 0.80, 95% confidence interval (CI) = 0.72–0.88) for the prediction of sustained AKI. The combination with plasma NGAL (pNGAL) showed a nonsignificant improvement (AUC = 0.83, 95% CI = 0.75–0.91). Combining individual markers with a model of clinical characteristics (MDRD eGFR, HCO3− and sepsis) did not improve its performance significantly. However, the integrated discrimination improvement showed significant improvement when uNGAL was added (p = 0.04).
uNGAL measured on ICU admission differentiates patients with sustained AKI from transient or no-AKI patients. Combining biomarkers such as pNGAL, uNGAL and plasma CyC with clinical characteristics adds some value to the predictive model.
Acute kidney injury; Cystatin C; Intensive care unit; Neutrophil gelatinase-associated lipocalin; Renal replacement therapy; Sustained acute kidney injury
Intensive care is generally regarded as expensive, and as a result beds are limited. This has raised serious questions about rationing when there are insufficient beds for all those referred. However, the evidence for the cost-effectiveness of intensive care is weak, and the work that does exist usually assumes that those who are not admitted do not survive, which is not always the case. Randomised studies of the effectiveness of intensive care are difficult to justify on ethical grounds; therefore, this observational study examined the cost-effectiveness of ICU admission by comparing patients who were accepted into the ICU after triage with those who were not accepted, while attempting to adjust the comparison for confounding factors.
This multi-centre observational cohort study involved 11 hospitals in 7 EU countries and was designed to assess the cost effectiveness of admission to intensive care after ICU triage. A total of 7,659 consecutive patients referred to the intensive care unit (ICU) were divided into those accepted for admission and those not accepted. The two groups were compared in terms of cost and mortality using multilevel regression models to account for differences across centres, and after adjusting for age, Karnofsky score and indication for ICU admission. The analyses were also stratified by categories of Simplified Acute Physiology Score (SAPS) II predicted mortality (< 5%, 5% to 40% and >40%). Cost effectiveness was evaluated as cost per life saved and cost per life-year saved.
Admission to ICU produced a relative reduction in mortality risk, expressed as odds ratio, of 0.70 (0.52 to 0.94) at 28 days. When stratified by predicted mortality, the odds ratio was 1.49 (0.79 to 2.81), 0.7 (0.51 to 0.97) and 0.55 (0.37 to 0.83) for <5%, 5% to 40% and >40% predicted mortality, respectively. Average cost per life saved for all patients was $103,771 (€82,358) and cost per life-year saved was $7,065 (€5,607). These figures decreased substantially for patients with predicted mortality higher than 40%, $60,046 (€47,656) and $4,088 (€3,244), respectively. Results were very similar when considering three-month mortality. Sensitivity analyses performed to assess the robustness of the results provided findings similar to the main analyses.
Not only does ICU admission appear to improve survival, but the cost per life saved also falls for patients with greater severity of illness. This suggests that intensive care is similarly cost-effective to other therapies that are generally regarded as essential.
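The headline ratios above reduce to simple arithmetic: incremental cost divided by lives (or life-years) saved. A minimal sketch follows, with assumed inputs chosen only to be of a similar order to the reported figures; they are not the study's actual data:

```python
# Illustration of cost-effectiveness arithmetic with assumed inputs;
# the numbers below are NOT the study's actual data.
n = 1000                         # patients per comparison group (assumed)
mortality_admitted = 0.30        # mortality if admitted to ICU (assumed)
mortality_refused = 0.38         # mortality if not admitted (assumed)
extra_cost_per_patient = 8300.0  # incremental cost of ICU admission, $ (assumed)
life_years_per_life = 14.7       # discounted life expectancy gained (assumed)

lives_saved = n * (mortality_refused - mortality_admitted)
total_extra_cost = n * extra_cost_per_patient

cost_per_life = total_extra_cost / lives_saved
cost_per_life_year = cost_per_life / life_years_per_life
print(f"cost per life saved ${cost_per_life:,.0f}, "
      f"per life-year saved ${cost_per_life_year:,.0f}")
```

The same structure explains why cost per life saved falls with severity: sicker strata have a larger absolute mortality reduction (the denominator grows) for a broadly similar incremental cost.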
It is desirable to identify a potential organ donor (POD) as early as possible in order to achieve the highest possible donor conversion rate (DCR), defined as the actual number of organ donors divided by the number of patients regarded as potential organ donors. The DCR is calculated with different assessment tools to identify a POD. Obviously, different assessment tools may yield different DCRs, which makes comparison difficult. Our aim was to determine which assessment tool can be used for a realistic estimation of the POD pool and how the tools compare with each other with regard to DCR.
Retrospective chart review of patients diagnosed with a subarachnoid haemorrhage, traumatic brain injury or intracerebral haemorrhage. We applied three different assessment tools on this cohort of patients.
We identified a cohort of 564 patients diagnosed with a subarachnoid haemorrhage, traumatic brain injury or intracerebral haemorrhage, of whom 179/564 (31.7%) died. After applying the three assessment tools, the number of patients, before exclusion for medical reasons or age, was 76 under the IBD-FOUR definition, 104 under the IBD-GCS definition and 107 under the OPTN definition of imminent neurological death. We noted the highest DCR (36.5%) with the IBD-FOUR definition.
The definition of imminent brain death based on the FOUR-score is the most practical tool to identify patients with a realistic chance to become brain dead and therefore to identify the patients most likely to become POD.
Brain death; Critical care organisation; Ethics; Neurotrauma; Stroke; Transplantation
Hospitals are increasingly forced to consider the economics of technology use. We estimated the incremental cost-consequences of remifentanil-based analgo-sedation (RS) vs. conventional analgesia and sedation (CS) in patients requiring mechanical ventilation (MV) in the intensive care unit (ICU), using a modelling approach.
A Markov model was developed to describe patient flow in the ICU. The hourly probabilities of moving from one state to another were derived from UltiSAFE, a Dutch clinical study involving ICU patients with an expected MV time of two to three days requiring analgesia and sedation. Study medication was either CS (morphine or fentanyl combined with propofol, midazolam or lorazepam) or RS (remifentanil, combined with propofol when required). Study drug costs were derived from the trial, whereas all other ICU costs were estimated separately in a Dutch micro-costing study. All costs were measured from the hospital perspective (price level of 2006). Patients were followed in the model for 28 days. We also studied the sub-population in which weaning had started within 72 hours.
The average total 28-day costs were €15,626 with RS versus €17,100 with CS, meaning a difference in costs of €1474 (95% CI -2163, 5110). The average length-of-stay (LOS) in the ICU was 7.6 days in the RS group versus 8.5 days in the CS group (difference 1.0, 95% CI -0.7, 2.6), while the average MV time was 5.0 days for RS versus 6.0 days for CS. Similar differences were found in the subgroup analysis.
Compared to CS, RS decreased the overall costs in the ICU, although the 95% confidence interval around the cost difference included zero.
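A Markov cohort model of the kind described above can be sketched by propagating a state distribution through an hourly transition matrix and accumulating expected bed-days. All states and probabilities below are invented for illustration; they are not the UltiSAFE-derived estimates:

```python
# Minimal discrete-time Markov sketch of ICU patient flow.
# States and hourly transition probabilities are invented for
# illustration; they are NOT the UltiSAFE-derived estimates.
states = ["ventilated", "extubated_in_icu", "discharged", "dead"]
P = [
    [0.985, 0.012, 0.000, 0.003],  # ventilated
    [0.000, 0.985, 0.013, 0.002],  # extubated, still in ICU
    [0.000, 0.000, 1.000, 0.000],  # discharged (absorbing)
    [0.000, 0.000, 0.000, 1.000],  # dead (absorbing)
]

dist = [1.0, 0.0, 0.0, 0.0]  # the whole cohort starts on the ventilator
mv_hours = icu_hours = 0.0
for _ in range(28 * 24):  # follow the cohort for 28 days, hour by hour
    mv_hours += dist[0]             # expected hours on mechanical ventilation
    icu_hours += dist[0] + dist[1]  # expected hours occupying an ICU bed
    dist = [sum(dist[i] * P[i][j] for i in range(4)) for j in range(4)]

print(f"expected MV days {mv_hours / 24:.1f}, ICU days {icu_hours / 24:.1f}")
```

Attaching an hourly cost to each state-hour (drug costs differing between the CS and RS arms, bed costs shared) turns the same loop into the cost-consequence comparison reported above.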