Between March 3, 1981, and June 1, 1984, 216 children were evaluated for orthotopic liver transplantation. Of the 216 patients, 117 (55%) had received at least one liver transplant by June 1, 1985. Fifty-five (25%) died before transplantation. The 117 patients who received transplants were grouped according to severity of disease and degree of general decompensation at the time of transplantation. The severity of a patient’s medical condition, with the possible exception of deep hepatic coma, did not predict outcome following orthotopic liver transplantation. Severity variables were assessed at the time of the evaluation. Twenty-three of the 70 variables were found to have prognostic significance with regard to death from progressive liver disease before transplantation. These 23 variables were incorporated into a multivariate model to provide a means of determining the relative risk of death among pediatric patients with end-stage liver disease. This information may allow more informed selection of candidates awaiting liver transplantation.
The advent of cyclosporine and advances in surgical technique changed the status of orthotopic liver transplantation (OLTx) from an experimental procedure to accepted care for lethal liver disease.1 As 1-year survival probabilities for OLTx recipients approached 66% in late 1981 and early 1982,2 the question of who should receive the next available organ took on critical importance. The improvement in the success of OLTx increased the number of children referred for the procedure. More referrals not only sparked an increase in transplantation but drew attention to the shortage of available organs.3 Twenty-five percent of referred children died of progressive liver disease before transplantation because an appropriate organ could not be located. Donated organs became a precious resource, and their utilization demanded a logical approach to recipient selection.
The process of selecting a recipient when a donor liver becomes available is determined primarily by matching donor and recipient according to size and ABO blood group, conforming to the rules of blood banking for transfusion.4 In situations of great urgency, even ABO incompatibility may be disregarded as a condition for donor-recipient matching.5 No attempt is made to match minor red cell antigens or Rh factor. Extensive tissue typing is not performed because of time constraints,4 and retrospective analysis suggests that it may be unnecessary.4,5 A number of appropriately matched potential recipients may exist for each donated organ. We attempted, first, to determine whether providing a transplant for the sickest available potential recipient has an adverse effect on transplantation outcome and, second, to develop criteria to distinguish the sickest patient among those with heterogeneous but lethal liver diseases.
Between March 3, 1981, and June 1, 1984, 216 pediatric patients were evaluated for OLTx (Table I). Follow-up for the purpose of this study extended to June 1, 1985. Evaluation generally consisted of a 3- to 4-day hospitalization during which six areas were assessed: (1) confirmation of the diagnosis of lethal liver disease, (2) assessment of severity and rate of progression, (3) assessment of complications, (4) confirmation of anatomic suitability for transplantation, (5) psychosocial evaluation of the family and child, and (6) education of the family and child about liver transplantation. Details of this evaluation have recently been published.3
Of the 216 patients, 117 had received at least one OLTx as of June 1, 1985, the majority (107) at our institution and the remainder (10) at one of four other institutions. Fifty-five (25%) patients died before transplantation. Forty-four (20%) patients either await transplantation (22) or were deemed noncandidates (22) at the time of the evaluation and had survived to June 1, 1985.
The study was conducted in two phases. During phase 1 the 117 patients who received transplants were retrospectively grouped according to clinical status at the time of transplantation (Table II). Group 1 comprised patients who were known to have either severe liver disease or a disease with a downhill course but who were clinically stable; these patients had received no medical therapy for liver disease or its complications. α1-Antitrypsin deficiency with compensated cirrhosis and biliary atresia with a failed Kasai procedure (but early in the post-Kasai course) are typical of this group. A third subgroup in group 1 included patients with hepatic malignancy or metabolic liver disease who did not have end-stage liver disease; some patients with tyrosinemia would fit in this third subgroup.
Group 2 consisted of children residing at home and receiving outpatient medical management. They had had no life-threatening complications of liver disease. Typically they were receiving choleretic, antipruritic, and diuretic therapy.
Group 3 patients had accelerating decompensation of liver disease. These patients were either hospitalized because of complications of liver disease at the time of transplantation or had been repeatedly hospitalized in the recent past because of complications and were home at the time of transplantation.
Group 4 patients were confined to the intensive care unit because of complications of liver disease. This group was the most heterogeneous, and included patients with active variceal hemorrhage receiving vasopressin (Pitressin) and blood transfusions, patients with renal failure secondary to hepatorenal syndrome, and patients with grade 1 to 3 encephalopathy.
Finally, group 5 included patients with stage 4 hepatic coma. These patients were responsive only to deep pain, with nonpurposeful reaction (decerebrate posturing) and oculocephalic and oculovestibular responses.
Survival curves for the four groups were created using the actuarial method of Cutler and Ederer6,7 (Fig. 1). The life tables from which these curves were derived were compared by log-rank statistics.7,8
Phase 2 of the study utilized the data collected during evaluation for OLTx to clarify which factors predict imminent death from progressive liver disease, so that the candidates in most urgent need of OLTx could be identified. The evaluation data consisted of 70 variables* obtained in each of the 216 patients from history, physical examination, and laboratory data. Variables were screened for their prognostic significance relative to death from progressive liver disease by methods suggested by Byar.9
Historical data were recorded as yes or no responses to questions (e.g., Does the patient have a history of encephalopathy? Does the patient have a history of ascites?). Physical examination findings were categorized either as ordinal data in the normal or increasingly abnormal range (e.g., spleen size normal, enlarged, or massively enlarged, on the basis of centimeters below the left costal margin) or as nominal data recording the abnormality directly (e.g., cardiac examination revealing a diastolic murmur). Laboratory data were grouped by choosing cut points within the range of values of a given variable. Cut points were chosen either because they had pathophysiologic significance or because they resulted in an even distribution of patients among the response groups. For example, patients with serum sodium concentration ≤136 mEq/L were grouped together because this value lies below the physiologic range, whereas ALT <80 IU, 80 to 150 IU, and >150 IU defined three response levels with approximately 72 patients in each group. Whenever a continuous variable was treated as categorical, the variable was also evaluated continuously to ensure that the grouping did not alter its prognostic importance.
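The cut-point scheme just described can be sketched in code. The sodium and ALT thresholds are taken from the text; the function names are illustrative, not part of the original analysis:

```python
def sodium_level(na_meq_per_l):
    """Group serum sodium: <=136 mEq/L lies below the physiologic range."""
    return "low" if na_meq_per_l <= 136 else "normal"

def alt_level(alt_iu):
    """Three ALT response levels, chosen to give roughly even group sizes
    (about 72 of the 216 patients per level)."""
    if alt_iu < 80:
        return 0        # ALT <80 IU
    elif alt_iu <= 150:
        return 1        # ALT 80 to 150 IU
    else:
        return 2        # ALT >150 IU
```

Each categorized variable was also checked in its continuous form, so a grouping like this is a convenience for tabulating death rates, not a replacement for the underlying measurement.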
Average death rates (deaths per 1000 patient-months) were estimated at each response level for each variable. Estimates of the relative risk of death for patients at one level versus another were obtained using the Cox proportional hazards model.10 The death rate, for example, in patients with a history of ascites was 45.1, compared with 6.9 in those without such a history. Using the Cox model, the estimated risk of dying was 5.2 times greater among patients with ascites than among those without. The P value for the comparison of the curves at the response levels of a given variable was then obtained using the log-rank statistic.8 Comparison of the two survival curves (Fig. 2) for patients with and without a history of ascites revealed the curves to be statistically different (P <0.0001).
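The death-rate calculation is simple arithmetic. In the sketch below the death counts and patient-month totals are hypothetical, chosen only to reproduce the rates quoted in the text; note that the crude rate ratio differs from the Cox estimate of 5.2, which is fitted to the full censored follow-up data:

```python
def death_rate_per_1000_patient_months(deaths, patient_months):
    """Average death rate expressed per 1000 patient-months of follow-up."""
    return 1000.0 * deaths / patient_months

# Hypothetical follow-up totals for illustration only.
rate_ascites = death_rate_per_1000_patient_months(31, 687)      # ~45.1
rate_no_ascites = death_rate_per_1000_patient_months(9, 1304)   # ~6.9

# Crude ratio of rates; the Cox-model relative risk quoted in the
# text (5.2) is estimated differently and need not match this value.
crude_ratio = rate_ascites / rate_no_ascites
```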
Each of the 70 variables was examined in this fashion. Variables found to have prognostic significance were tabulated with their response levels, number of patients, number of patient deaths, follow-up months, death rate (as previously defined), risk, and P value. A potential source of error in evaluating the 70 variables lies in our bias toward providing transplants for patients who, in our perception, had the most advanced disease.11 Consequently, to evaluate whether this bias had influenced the variables indicating prognostic importance for pretransplantation survival, we assessed the same variables in a subset of patients excluding those undergoing transplantation. Finally, the variables of prognostic significance before transplantation were used to develop a multivariate model of overall patient risk of death related to progressive liver disease.
The survival curves for the four clinical status groups sloped downward during the first 18 postoperative months and then flattened (Fig. 1). Final survival over the study period was 85% for group 1, 74% for group 2, 81% for group 3, and 77% for group 4. Clinical status group 5 contained only one patient and was not included in this evaluation. The survival curves of groups 1 through 4 were not statistically different (P = 0.9).
Twenty-three of the 70 variables had prognostic significance relative to death caused by progressive liver disease (Table III). These variables and their response levels are defined in more detail in Clinical Appendix I. Most variables bore the expected predictive relationship to early death from progressive liver failure; a few, however, had unexpected relationships. Age had no statistically significant prognostic value. A number of laboratory values (ALT, gamma glutamyl transpeptidase, alkaline phosphatase) had a direct relationship with survival: higher, more abnormal values were seen in patients with the longest survival. Other variables (serum cholesterol and indirect bilirubin concentrations) had an unexpectedly strong correlation with survival. Serum cholesterol <100 mg/dL was the strongest predictor of early death, and elevation of indirect bilirubin was a stronger predictor of death than was elevation of direct bilirubin.
The results for the 23 variables when the patients with transplants were eliminated from the analysis generally agreed closely with the results when all patients were included (Table III). All variables remained statistically significant (P <0.05), and only one variable showed a decrease in risk of more than 25% (the risk coefficient for serum chloride concentration decreased from 9.1 when all patients were considered to 5.7 when only patients without transplants were considered).
Once univariate analysis had established the 23 variables of pretransplantation prognostic significance, they were evaluated in multivariate fashion to assess their joint prognostic importance. First, variables that could be easily altered by the medical caretaker (Na, Cl, Ca, pH) were eliminated (Clinical Appendix II). Next, variables associated with the highest risk were reviewed. Seven variables had risks in excess of 5 at their highest risk level. These seven were reduced to four (cholesterol, PTT, history of ascites, indirect bilirubin) on clinical grounds by eliminating those that gave redundant information. PTT was chosen over PT and PTPK because it had the strongest predictive value of the variables measuring coagulation; cholesterol was chosen over serum albumin because it had the strongest predictive value of the variables measuring synthetic function and nutritional state.
The four variables were placed in the multiplicative exponential model (Statistical Appendix II). Each of the remaining variables and the three “redundant” variables were then added to the four in an attempt to improve the prognostic capability of the multivariate model. No further improvement in the model was achieved with any of the added variables. Thus these four variables, with five response levels (indirect bilirubin contributing two: 3 to 6 mg/dL and >6 mg/dL), provided much of the prognostic information available from the original 23 variables. An algorithm for selection of patients was determined by fitting the model to the 153 patients for whom all four variables were known.
The multiplicative exponential model was used to develop survival curves predictive of a patient’s probability of survival over a given time. Fig. 3 shows survival curves of patients at high, moderate, and low risk. Patients at high risk were defined as those whose risk of dying within 6 months of evaluation was greater than 75%; moderate risk, 25% to 75%; and low risk, less than 25%. Fig. 3 also illustrates the observed survival curves for patients in the three risk groups. The model’s predicted moderate-risk curve was consistently above the observed (actuarial) moderate-risk curve, but there was otherwise close approximation of the model’s predictive curves to the actuarial curves. The characteristics of patients in the three risk groups are given in Table IV.
Based on the fitted model, it was possible to determine scores reflecting the relative importance of each of the prognostic variables (Table V). Total scores ranged from 0 to 27 in the low-risk group, 28 to 39 in the moderate-risk group, and 40 to 55 in the high-risk group.
Prior to this study, the approach to selection of recipients for liver transplantation was unclear. One approach would utilize transplantable organs in patients with the most advanced liver disease because these patients would be least likely to survive until another organ becomes available. Alternatively, transplantable livers could be used in patients with more stable conditions, with the rationale that this group of patients might be more likely to survive the rigors of surgery and thus benefit from the organ. It was necessary to know which of these two approaches would optimize the number of survivors from the group of patients waiting for transplants.
Phase 1 of our study revealed no significant difference in mortality among the various clinical status groups, suggesting that the rationale for giving transplants to the most stable patients first is incorrect. The immediate implication is the necessity to give a transplant to the “sickest” patients in clinical status groups 1 through 4. This implication is not, however, without uncertainty. Our series is the largest to date, but the number of patients in the clinical status groups mandates some caution when interpreting the lack of significant difference.12 Groups 1 and 4 were small, and even in groups 2 and 3, which were much larger, the chance of detecting a twofold difference in death rate was only approximately 50%. Consequently, continued investigation of the impact of the recipient’s medical condition on transplantation outcome is imperative.*
There was only one patient in clinical status group 5 (stage 4 hepatic coma), and she was not included in this analysis. In that one patient, OLTx was technically successful, but she never regained neurologic function and was subsequently declared brain dead. The experience with adult patients in similar clinical condition reveals a high immediate posttransplantation mortality.13,14 This high-risk group deserves special consideration. The decision to proceed with transplantation must be individualized. Physicians must consider both the relative urgency for surgery among other candidates who also match the available organ, and the likelihood that another organ of similar size and blood type will become available in the near future (some sizes and blood types are more difficult to obtain than others).
Although our data suggest that survival outcome is not dependent on the patient’s medical condition at the time of transplantation, the medical effort to get the sickest patients successfully through transplantation may be greater. There are other compelling reasons for transplantation before severe decompensation of liver disease. First, the children with the most advanced disease have a greater degree of growth failure than do their less sick counterparts. Second, these sickest children have more developmental delay as a direct result of both the disease (e.g., chronic hyperammonemia, nutritional deprivation) and the medical response to that disease15 (frequent and prolonged hospitalizations). Third, emotional disturbances in these chronically ill children, related to altered parent-child relationships, are more frequent.16 Finally, even in patients not apparently in extremis, sudden decompensation leading to death before transplantation is always a risk.17 Thus, although this study delineates the necessity of providing a transplant for the sickest available patients (with the possible exception of those with stage 4 hepatic coma), no patient should have to wait until advanced disease has evolved if the opportunity for transplantation presents itself earlier.
Phase 2 was undertaken to develop a system to identify the sickest candidates through review of their evaluation data. The 23 variables with prognostic significance are, for the most part, those that an experienced clinician would expect. ALT elevation as an indicator of liver injury, and gamma glutamyl transpeptidase and alkaline phosphatase as indicators of cholangiopathy, were of little predictive value compared with factors directly measuring residual hepatic function (e.g., PT, PTT, cholesterol) or altered physiology secondary to advanced liver disease (e.g., ascites, hypochloremia). An unexpected prognostic variable was elevated leukocyte count. This elevation did not appear to be related to acute or chronic infection. Possibly it is the result of hepatocellular necrosis, analogous to the leukocytosis seen in alcoholic hepatitis,18 thus explaining its relationship with death from progressive liver failure.
Houssin and Franco19 attempted to clarify criteria for liver transplantation from clinical data collected from June 1973 to December 1977 in adult patients with cirrhosis. At that time, mortality following liver transplantation was excessive, and the decision to go forward with liver transplantation seemed reasonable only in those patients with short expected survival. Thus criteria that predicted a high likelihood of dying in the subsequent month were determined. Since the development of cyclosporine,20 survival after liver transplantation has dramatically improved2,4 and transplantation in less critically ill patients has become accepted. In our series, survival after transplantation (76%) approximately equaled survival from evaluation until transplantation (75%), making death from progressive liver failure the greatest risk for the pediatric patient awaiting transplantation. Phase 1 of our study suggested that the patient’s medical condition at the time of transplantation (except perhaps stage 4 hepatic coma) lacked a significant detrimental impact on transplantation outcome and crystallized the importance of identifying the sickest patient. Phase 2 provided an approach to the identification of the sickest patient awaiting transplantation. These sickest patients are in urgent need of transplantation, not because, as in Houssin’s patients, there was nothing to lose, but because there is everything to gain.
The multivariate model developed in our study provides one means of identifying the sickest patient with advanced liver disease. This model has been demonstrated to be internally consistent in that actuarial data from which the model was derived and the predictions of the model have close correlation. This or a similar model now needs to be applied to patient data from multiple other institutions to support its predictive value.
We recommend that pediatric patients accepted as candidates for transplantation be allocated to a high-, moderate-, or low-risk group on the basis of the four factors in our multivariate model. When a transplantable organ becomes available, those patients who are size and blood type matched to the donor and are in the highest risk category should be considered for transplantation first, then those patients at moderate risk, and finally patients considered to be at low risk.
We recommend that data for at least the four variables used in the multivariate model be collected every 3 to 4 months in candidate patients so that risk can be reassessed. By 3 to 4 months after evaluation, most patients in the high-risk group either will have died or will have received transplants, and newly evaluated patients, together with patients previously in the low- and moderate-risk groups, will make up a new high-risk group in urgent need of transplantation. At centers with large numbers of pediatric patients on the candidate list, where the high-risk group contains multiple patients of similar size and blood type, data for some or all of the 23 variables of prognostic significance should be collected at 3- to 4-month intervals to aid in assessment of patient risk. The physician’s ability to identify the sickest patient among the candidates allows the most appropriate child to be selected as a transplant recipient. This approach, which apparently will not increase transplantation deaths, should simultaneously decrease patient deaths from progressive liver failure.
In addition to providing a means of choosing a candidate patient for liver transplantation within a center, extension of a model of this type may guide organ sharing among regional centers whose geographic borders would be defined by the time limits dictated by present methods of liver preservation. Ultimately, with improved preservation methods, a model of this type may provide a means of national organ allocation.
The possible responses to historical variables and to various laboratory variables are defined below, in the order of their appearance in Table III.
A positive history of gastrointestinal bleeding was subdivided into patients with a history of acute variceal hemorrhage and those with other bleeding, generally stomal bleeding in patients with an external conduit from a prior portoenterostomy, or slow gastrointestinal blood loss.
History of ascites was recorded if the patient’s medical history revealed ascites or ascites was present on physical examination at the time of the evaluation.
A history of coagulopathy was indicated when the patient’s medical record reported prolonged prothrombin time.
History of encephalopathy was recorded if frank clinical encephalopathy had been diagnosed; an elevated serum NH3 concentration or abnormal results of psychometric testing did not qualify as a positive history of encephalopathy.
PT and PTPK were either normal, 3 to 5 seconds prolonged beyond control, or more than 5 seconds prolonged beyond control. Patients with normal PT did not receive vitamin K; however, the normal values listed for PTPK included those with a normal PT plus those with normalized PT after vitamin K injection.
Low serum chloride concentration, for example, was a strong indicator of early patient death. This variable revealed prognostic significance when the entire group of 216 patients was evaluated, and presumably reflects both the disturbed fluid and electrolyte state of end-stage liver disease and therapeutic attempts at its management (in this instance, diuretic therapy). However, the referring physician might correct hypochloremia by electrolyte replacement; such a patient would falsely appear to lack the hypochloremia risk factor, and the model might then yield an erroneous risk estimate. Chloride was therefore dropped, as were other easily alterable factors.
The Cox proportional hazards model,10 a semiparametric regression model, assumes that the hazard function (i.e., a type of death rate) for the i-th patient can be written as

λi(t) = h0(t) exp(β’xi)

where h0(t) is an arbitrary baseline hazard function, β is a vector of regression coefficients, and xi is the vector of covariates for the i-th patient. It is assumed that the ratio of hazard functions for any two patients with different covariates is constant over time and that the covariates affect the hazard rate in a multiplicative manner. This approach to regression analysis furnishes estimates of relative risk between patients with different sets of covariates.
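The proportional hazards assumption can be illustrated numerically: the ratio of hazards for two patients depends only on their covariates, never on time. The baseline hazard and coefficients below are arbitrary illustrations, not values from this study:

```python
import math

def hazard(t, x, beta, h0=lambda t: 0.01 * (1 + t)):
    """Cox-form hazard: h_i(t) = h0(t) * exp(beta . x).
    h0 is an arbitrary (here time-varying) baseline hazard."""
    return h0(t) * math.exp(sum(b * xi for b, xi in zip(beta, x)))

beta = [1.9, 1.3]          # illustrative coefficients
x1, x2 = [1, 0], [0, 1]    # two patients with different covariates

# The baseline h0(t) cancels in the ratio, so the relative risk
# exp(1.9 - 1.3) is the same early and late in follow-up.
ratio_early = hazard(1.0, x1, beta) / hazard(1.0, x2, beta)
ratio_late = hazard(24.0, x1, beta) / hazard(24.0, x2, beta)
```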
To perform the multivariate regression analysis, a multiplicative exponential survival model was chosen.
Plots of log survival versus time were approximately linear, indicating that the hazard rate is constant over time, which is characteristic of survival times following an exponential probability distribution. The exponential model is a special case of the more general Weibull model, which allows the hazard rate to vary over time. The Weibull model was also fit to the pretransplantation data, but there was no significant difference between the exponential and Weibull fits, so the simpler exponential model was chosen.
We assumed that the probability of death at time t for patient i (i = 1, 2, …, N) followed an exponential distribution:

f(t) = λi exp(−λi t)
where λi is the hazard rate. Furthermore, we assumed, as in the Cox proportional hazards model, that the covariates have multiplicative effects on the hazard rate of the form λi = exp(β’xi), where β and xi are the regression parameters and covariates for the i-th patient, respectively. The first element of xi is 1 to account for an intercept, allowing the average hazard rate among patients without any of the covariates to be estimated as well. The cumulative survival curve at time t for patient i is given by

Si(t) = exp(−λi t) = exp[−t exp(β’xi)]
Thus, knowing the regression parameter β, one may predict survival, based on this model, for a given patient when the covariates are known.
The covariates were coded as binary indicators of the levels of the four prognostic variables: (1) history of ascites (1 if yes, 0 if no); (2) indirect bilirubin, with two covariates (level 1: 1 if bilirubin 3 to 6 mg/dL, 0 if not; level 2: 1 if bilirubin >6 mg/dL, 0 if not); (3) cholesterol (1 if <100 mg/dL, 0 if not); and (4) PTT (1 if >20 seconds, 0 if not). Thus the estimated regression coefficients could be used to compute multivariate risk scores, because each coefficient represents the relative importance of its corresponding covariate in terms of risk of dying. The estimated coefficients were −10.01 (intercept), 1.93 (history of ascites), 1.49 (indirect bilirubin, level 1), 1.66 (indirect bilirubin, level 2), 1.30 (PTT), and 1.94 (cholesterol).
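The coding and the survival prediction Si(t) = exp[−t exp(β’xi)] can be sketched together. The coefficients are those reported above; the time unit of the hazard is not stated explicitly in this excerpt, so the assumption below that t is in months is ours, and the example patient is hypothetical:

```python
import math

# Coefficients reported in the text, in order: intercept, history of
# ascites, indirect bilirubin level 1 (3-6 mg/dL), indirect bilirubin
# level 2 (>6 mg/dL), PTT >20 sec, cholesterol <100 mg/dL.
BETA = [-10.01, 1.93, 1.49, 1.66, 1.30, 1.94]

def covariates(ascites, bili_indirect, ptt_sec, cholesterol):
    """Code one patient as the binary indicator vector described above."""
    return [
        1,                                    # intercept term
        1 if ascites else 0,
        1 if 3 <= bili_indirect <= 6 else 0,  # level 1
        1 if bili_indirect > 6 else 0,        # level 2
        1 if ptt_sec > 20 else 0,
        1 if cholesterol < 100 else 0,
    ]

def predicted_survival(x, t_months):
    """S_i(t) = exp(-lambda_i * t) with lambda_i = exp(beta . x).
    Time unit assumed to be months (our assumption)."""
    lam = math.exp(sum(b * xi for b, xi in zip(BETA, x)))
    return math.exp(-lam * t_months)

# Hypothetical patient: ascites, indirect bilirubin 8 mg/dL,
# PTT 25 sec, cholesterol 80 mg/dL.
x = covariates(True, 8.0, 25, 80)
s6 = predicted_survival(x, 6)
```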
To allow easier calculation of the multivariate risk scores, the intercept coefficient was ignored (it was the same for all patients) and the other coefficients were divided by the smallest, multiplied by 10, and rounded to the nearest integer. By this method the relative importance of each of the prognostic variables was retained in the scores, but the use of the scores was simplified. This method of forming risk groups and creating multivariate risk scores is given in Byar.21
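The stated arithmetic can be reproduced directly; this sketch applies the described procedure to the reported coefficients, and the resulting integers are simply what that procedure yields (the published Table V should be consulted for the authoritative scores):

```python
# Reported coefficients, excluding the intercept (which is the same
# for all patients and is ignored in forming scores).
coefs = {
    "history of ascites": 1.93,
    "indirect bilirubin 3-6 mg/dL": 1.49,
    "indirect bilirubin >6 mg/dL": 1.66,
    "PTT >20 sec": 1.30,
    "cholesterol <100 mg/dL": 1.94,
}

# Divide by the smallest coefficient, multiply by 10, round to the
# nearest integer, as described in the text.
smallest = min(coefs.values())
scores = {name: round(10 * c / smallest) for name, c in coefs.items()}
```

A patient's multivariate risk score is then the sum of the scores for the risk factors present, preserving the relative weighting of the fitted coefficients while avoiding exponentials at the bedside.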
*The 70 variables assessed have been tabulated, and can be obtained from the National Auxiliary Publication Service, c/o Microfiche Publications, P.O. Box 3513, Grand Central Station, New York, NY 10013.
*The National Institutes of Health is presently developing a Liver Transplantation Data Base to examine, among many issues, candidate selection.