This study provides a modern description of AKI among adult patients with severe burns. Unlike reports from the last century (1), which included patients with varying severity, onset, and etiology of AKI, we applied a structured definition of AKI based on the RIFLE criteria. Using a large, prospectively collected, multi-institutional database, we were able to distinguish early AKI (occurring during the resuscitative phase) from late AKI (occurring after the resuscitative phase), and we found significant differences in outcomes between these two groups. Survival with early AKI was better (79.6%) than with late AKI (64.1%), and late AKI was also associated with both early and late organ failure ( and ).
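For readers unfamiliar with the staging, the serum-creatinine arm of the RIFLE criteria can be expressed as a simple classifier. The sketch below is illustrative only: it omits the GFR and urine-output criteria, and the 48-hour boundary used to separate early from late AKI is an assumed placeholder rather than the database's actual coding of the resuscitative phase.

```python
def rifle_class(creatinine: float, baseline: float) -> str | None:
    """Classify AKI by the serum-creatinine arm of RIFLE.

    Risk: >= 1.5x baseline; Injury: >= 2x; Failure: >= 3x.
    (The alternative Failure criterion of creatinine >= 4.0 mg/dL
    with an acute rise >= 0.5 mg/dL is omitted for brevity.)
    Returns None when the creatinine rise does not reach Risk.
    """
    ratio = creatinine / baseline
    if ratio >= 3.0:
        return "Failure"
    if ratio >= 2.0:
        return "Injury"
    if ratio >= 1.5:
        return "Risk"
    return None

def aki_timing(onset_hour: float, resus_end_hour: float = 48.0) -> str:
    """Label AKI as early (during resuscitation) or late (after);
    the 48-hour boundary here is a hypothetical placeholder."""
    return "early" if onset_hour <= resus_end_hour else "late"
```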
We also noted that patients with progressive AKI had more comorbidities, the worst organ failure scores (NROF), and the lowest survival (18.8%) (). While none of the patients with early AKI were treated with dialysis, one third of those with late AKI received dialysis; the highest dialysis use (75%) occurred in the progressive AKI group, and 62.5% of these patients remained on dialysis at the time of discharge (). In comparison, only 23.1% of those with late AKI were still being dialyzed at discharge, indicating that most patients with late AKI regained enough renal function to be liberated from renal replacement therapy and that progressive AKI is a distinct entity from early or late AKI.
It is important to contrast our 17.7% rate of late AKI with recent reports of AKI using the RIFLE criteria. The first use of the RIFLE criteria in the burned population was by Coca et al., who reported an overall 27% rate of AKI; however, they included patients with burns of 10% TBSA or greater, used the lowest creatinine value in the first five days as the baseline, and used the highest creatinine value of the hospitalization to assign the RIFLE score (15). Their reported mortality of 60% in the Failure class is similar to our mortality for late AKI of 64.1% (), but they did not distinguish between early and late AKI and analyzed smaller burns (15).
Mosier and colleagues found that patients with early AKI had higher mortality and a greater incidence of early MODS than patients without early AKI, and they also identified differences between early, progressive, and late AKI (17).
Clearly, the timing of AKI onset significantly affects outcomes, as those with late AKI suffer more MODS and worse mortality ( and ); however, the prognosis is not as dismal as the 80 to 100% mortality reported in the last century (1).
Defining the subset of patients with progressive AKI distinguishes the group with the worst survival (18.8%), and we believe these classifications will be important for stratifying patients in future evaluations of novel AKI therapies.
With this in mind, we created a decision tree using CART analysis to identify patients at risk for late AKI early in their hospital course. Our goal was to create a simple decision tool that could be used at the bedside. The resulting decision tree had four levels. Not surprisingly, the NROF score was the best variable for classifying patients at risk for late AKI (). Late AKI has long been associated with sepsis and MODS (1).
Additionally, MODS and late kidney injury have been associated with inhalation injury (1), but our decision tree does not include inhalation injury. Although univariate analysis found significant differences in the rate of inhalation injury among those with no, early, or late AKI, inhalation injury was nonetheless common among patients who suffered no AKI or experienced only early AKI (25% and 44.9%, respectively; ). The mathematical driving force behind CART analysis is the development of a “pure” node in which 100% of subjects can be classified as having a particular outcome. Inhalation injury therefore proved less powerful in its ability to split the group into a “pure” node. A NROF score of zero, on the other hand, provided a terminal node in which 88% of occupants did not suffer late AKI ().
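The “purity” that drives these splits is typically quantified with the Gini impurity. The following sketch, with illustrative variable names and counts, shows why a split such as NROF = 0, which yields a terminal node dominated by one outcome, scores better than a split that leaves both children mixed:

```python
from collections import Counter

def gini(labels: list[str]) -> float:
    """Gini impurity: 0.0 for a perfectly 'pure' node (all
    subjects share one outcome), approaching 0.5 for a 50/50
    binary mix."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def split_impurity(left: list[str], right: list[str]) -> float:
    """Size-weighted impurity of a candidate binary split;
    CART greedily selects the split that minimizes this."""
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

# A terminal node where 88% of occupants avoided late AKI is not
# perfectly pure, but it is far better than a coin flip.
node = ["no late AKI"] * 88 + ["late AKI"] * 12
print(round(gini(node), 3))  # 0.211
```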
For patients with a NROF score of one or greater, the worst base deficit within the first 24 hours proved best at splitting the group according to the outcome of interest, late AKI (). Numerous studies in both humans and animal models have demonstrated a relationship between early base deficit, MODS, and death (34). Most of these reports find that early base deficits correlate with a more severe inflammatory response, tissue malperfusion, and higher fluid requirements. This has led some to suggest real-time monitoring of acid-base status, with therapy directed specifically at this end-point rather than the traditional end-points of urine output or mean arterial pressure (37).
Volumes of resuscitation did not differ among those with no AKI, early AKI, or late AKI (), and although the Parkland score was an input variable for the CART software (Supplementary Table ), it was not selected for the decision tree. Alterations in acid-base status may reflect kidney malperfusion or an early subclinical renal injury while the traditional markers of urine output and hemodynamic parameters remain preserved; indeed, other investigators have noted a higher incidence of ARDS and MODS with a 24-hour base deficit < -6 (40).
Among those with a less severe base deficit, the lowest glucose within the first 24 hours split the group into those with late AKI (lowest glucose ≤ 83 mg/dL) versus those without late AKI (lowest glucose > 83 mg/dL). Hyperglycemia has long been associated with poor outcomes in both burn and trauma patients (41). Pidcoke and colleagues demonstrated an increased mortality associated with glucose variability, as opposed to hyperglycemia alone (46). Here, the lowest glucose reading within the first 24 hours may be a marker for glucose variability, or it may reflect the increased mortality observed with hypoglycemic events during intensive insulin therapy (47). Other proposed mechanisms include increased muscle catabolism and modulation of the innate immune response (42).
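To make the distinction concrete, the sketch below (with hypothetical readings; the specific variability metric used by Pidcoke and colleagues may differ) contrasts the minimum value used by our split with a simple within-patient variability measure:

```python
from statistics import mean, stdev

def glucose_features(readings_24h: list[float]) -> dict[str, float]:
    """Summarize first-24-hour glucose readings (mg/dL): the CART
    split uses the minimum; the standard deviation serves here as
    one rough proxy for glucose variability."""
    return {
        "lowest": min(readings_24h),
        "mean": mean(readings_24h),
        "variability": stdev(readings_24h) if len(readings_24h) > 1 else 0.0,
    }

# A patient dipping to 78 mg/dL is captured by the <= 83 mg/dL
# split even when the mean value looks unremarkable.
print(glucose_features([78.0, 145.0, 190.0, 110.0]))
```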
This glucose finding underscores CART's ability to uncover layered relationships that are not immediately obvious from simple observation or univariate analysis. For example, the decision tree depicts a connection among a NROF score of one or greater, worst base deficit, lowest glucose, and blood transfusion within the first 24 hours of admission (). Such a relationship was not apparent by intuition or by traditional statistical methods (), illustrating this method's power in discerning and displaying these relationships in an intuitive, visual output that can easily be converted into clinically useful algorithms.
The final layer of our decision tree indicated that, among those with a lowest 24-hour glucose greater than 83 mg/dL, a blood transfusion during this same early period best divided the group into those with and without late AKI, although this last layer is less “pure” than some of the terminal nodes higher in the tree: only 56.3% of those who received a blood transfusion within the first 24 hours subsequently developed late AKI. Nonetheless, blood product transfusion has been associated with increased complications, including MODS and death, in trauma, burn, and critically ill patients (49). The only direct mechanistic link between blood product transfusion and kidney injury occurs with antigen mismatch or other immunologic reactions. Given this variable's limited discriminating power for late AKI, it may simply be a marker for late MODS or severity of injury.
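Read as bedside logic, the tree reduces to four nested checks. In the sketch below, the NROF, glucose (83 mg/dL), and transfusion splits follow the tree as described in the text, while the base-deficit cut-point of -6 is a stand-in, since the threshold CART actually selected appears in the figure rather than the text:

```python
def late_aki_risk(nrof_score: int,
                  worst_base_deficit_24h: float,
                  lowest_glucose_24h: float,
                  transfused_24h: bool) -> str:
    """Four-level CART-style screen for late AKI.

    The NROF, glucose, and transfusion splits follow the tree as
    described; the base-deficit threshold of -6 is a placeholder
    for the cut-point shown in the figure.
    """
    if nrof_score == 0:
        return "low risk"            # terminal node: 88% without late AKI
    if worst_base_deficit_24h < -6:  # assumed threshold
        return "high risk"
    if lowest_glucose_24h <= 83.0:
        return "high risk"
    if transfused_24h:
        return "elevated risk"       # least 'pure' node: 56.3% late AKI
    return "low risk"
```

Expressed this way, the screen requires only four data points, all available within the first 24 hours of admission, consistent with our goal of a simple bedside tool.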
This last level of the decision tree illustrates how CART lacks explanatory capability: the variables that best subdivide patients according to the outcome of interest are not necessarily causal factors. Rather, they may be markers for etiologic variables that were either not collected or coded in a way that limits their predictive value. The latter may well be the case in this large, multi-institutional database.
Another limitation of this study is that the Glue Grant TRDB was prospectively gathered, whereas the data from LUMC were retrospectively collected. This might account for the decreased classification accuracy within the LUMC dataset. Accuracies of 73% in the testing set and 80% in the learning set are nonetheless in line with many published results using this method, and trees are often developed and published without the evaluation on a separate dataset that we have performed here (27).
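For clarity on what these figures represent, the sketch below (with illustrative data) shows how accuracy is computed separately on the learning set and the held-out testing set:

```python
def accuracy(predicted: list[str], actual: list[str]) -> float:
    """Fraction of patients whose predicted late-AKI status
    matches the observed outcome."""
    assert len(predicted) == len(actual)
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

# The tree is grown on the learning set alone; the testing set is
# held out entirely, so its accuracy estimates performance on
# patients the tree has never seen.
print(accuracy(["late", "none", "late"], ["late", "none", "none"]))  # ~0.67
```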