Am J Transplant. Author manuscript; available in PMC 2010 July 2.
PMCID: PMC2895923
NIHMSID: NIHMS215686

Survival Benefit-Based Deceased-Donor Liver Allocation

Abstract

Currently, patients awaiting deceased-donor liver transplantation are prioritized by medical urgency. Specifically, wait-listed chronic liver failure patients are sequenced in decreasing order of Model for End-stage Liver Disease (MELD) score. To maximize the lifetime gained through liver transplantation, posttransplant survival should also be considered in prioritizing liver waiting list candidates. We evaluate a survival benefit-based system for allocating deceased-donor livers to chronic liver failure patients. Under the proposed system, a transplant survival benefit score would be computed for each patient active on the waiting list at the time of each organ offer. The proposed score is based on the difference in 5-year mean lifetime (with vs. without a liver transplant) and accounts for both patient and donor characteristics. The rank correlation between the benefit score and the MELD score is 0.67. There is considerable overlap in the distribution of benefit scores across MELD categories, since waiting list mortality is significantly affected by several factors other than the MELD score. Simulation results indicate that over 2000 life-years would be saved per year if benefit-based allocation were implemented. The shortage of donor livers increases the need to maximize the life-saving capacity of procured livers. Allocation of deceased-donor livers to chronic liver failure patients would be improved by prioritizing patients by transplant survival benefit.

Keywords: Albumin, bilirubin, creatinine, Model for End-stage Liver Disease (MELD), organ allocation, Organ Procurement and Transplantation Network (OPTN), Scientific Registry of Transplant Recipients (SRTR), waiting list

Introduction

In all areas of medicine, it is essential to determine whether or not a patient will benefit from a given treatment. In the case of organ failure, such questions are perhaps even more important, since the preferred treatment (organ transplantation) is not available to all patients. Moreover, even if there were a sufficient number of donor organs, certain types of patients would be better off not receiving a transplant (1,2), since their waiting list mortality is not sufficiently high to offset the high post- and perioperative mortality, which is now well established in the literature (3).

There are at least three possible bases for organ allocation: medical urgency, utility and transplant benefit. Typically, the prioritization of patients active on the waiting list on a particular date depends strongly on the allocation scheme. Under a medical urgency-based allocation system, patients with worse waiting list outcomes are given higher priority for transplantation. Conversely, a utility-based system would assign priority in accordance with expected posttransplant outcomes. An allocation scheme based on transplant benefit considers both waiting list and posttransplant outcomes. For example, a patient's priority for transplantation could be based on the contrast between two settings: (1) the patient receives the allocated organ and (2) the patient receives no organ.

Each of the allocation schemes mentioned in the preceding paragraph (urgency, utility, benefit) has advantages and disadvantages. For concreteness, suppose that the only outcome considered is mortality. An urgency-based system succeeds in assigning donor organs to patients who are most likely to die on the waiting list. However, this approach may be at the expense of utility since patients at the greatest risk of waiting list death may also be the patients with the highest posttransplant mortality. One can envision an extreme case where medical urgency-based allocation does not result in any fewer deaths, but merely shifts mortality from the pre- to posttransplant side. Conversely, a utility-based allocation system would ensure that transplanted organs are received by patients with lowest posttransplant mortality. However, patients with the best posttransplant outcomes may also have the best waiting list outcomes. In an extreme case, an ordering based on utility could also result in transplantation having no effect on the mortality experience of the patient population, since the low death rate faced by the low-risk patients is merely traded for a low posttransplant death rate. This is different from the extreme scenario we described for an urgency-based system, where a high death rate on the waiting list is traded for a high posttransplant death rate. In both cases, however, the lifetime experienced by the patient population is equal to that in the absence of access to transplantation.

A survival benefit-based allocation system seeks to minimize mortality in the patient population as a whole by prioritizing patients based on the lifetime they gain from transplantation. To see that maximizing survival benefit minimizes patient population mortality, consider the fact that every patient is guaranteed at least their waiting list lifetime. In the absence of transplantation, all patients would experience their waiting list lifetime and that alone. Now suppose that one donor organ is available to be allocated. If that organ is transplanted, the recipient receives his/her waiting list lifetime, plus any gain in lifetime attributable to the transplant (the transplant survival benefit). Therefore, allocating the one available organ to the patient with the largest difference between posttransplant and waiting list lifetime (i.e. the greatest transplant survival benefit) minimizes mortality for the patient population as a whole.

In Table 1, we illustrate the point made at the end of the preceding paragraph: allocating an available organ to the patient with the greatest transplant benefit maximizes the total life-years lived by the patient population. In this simplified setting, there are three patients on the waiting list at the time a donor organ is to be allocated. The columns in Table 1 are (left to right): ID, patient identification number; WL, predicted waiting list lifetime (i.e. lifetime if no transplant is received); LT, predicted posttransplant lifetime (with the organ to be allocated); B, transplant benefit, computed as LT − WL; and total years lived, summed across all three patients, with row x representing the setting in which the organ is allocated to the patient with ID = x. If an urgency-based allocation system were in place, the organ would be allocated to patient 2, who has the lowest expected waiting list lifetime. Under a utility-based system, patient 1 would receive the transplant, since that patient is predicted to live the longest with the organ. Under a benefit-based allocation system, the organ would go to patient 3, who is predicted to have neither the greatest posttransplant lifetime nor the lowest waiting list lifetime, but the greatest difference between the two.

Table 1
Organ allocation example computing benefit to patient population

Continuing with our examination of Table 1, if the organ were allocated to patient 3, as it would be under a survival benefit-based allocation scheme, the lifetime lived by the patient population as a whole would equal 18 years; that is, 7 (patient 1, who remains on the waiting list) + 2 (patient 2, also left on the waiting list) + 9 (posttransplant lifetime of patient 3) years. The population lifetime calculation can be done in a more transparent way as follows. In terms of total lifetime, the worst that could happen is that the organ is not allocated, in which case all three patients remain on the waiting list. If the organ is allocated, the additional lifetime (i.e. the transplant benefit) is experienced only by the patient who receives the organ. Consider the left-most calculation in row 3 of Table 1: if the organ is allocated to patient 3, each patient is predicted to receive their waiting list lifetime (7 + 2 + 5 = 14 years), and patient 3 additionally receives the gain in lifetime due to the transplant (4 years), for a total of 18 years. Since each patient is predicted to receive at least their waiting list lifetime irrespective of to whom the organ is allocated, the maximum gain to the patient population as a whole occurs when the patient with the greatest benefit score receives the organ. Although this example considers a very simple scenario, the main ideas extend to more general settings.
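To make the bookkeeping explicit, the short sketch below recomputes the example in code. The waiting list lifetimes (7, 2, 5) and patient 3's posttransplant lifetime come from the text; the posttransplant lifetimes assumed for patients 1 and 2 are hypothetical placeholders chosen to be consistent with the narrative.

```python
# Toy version of Table 1. The WL values (7, 2, 5) and patient 3's row
# (LT = 9, B = 4) come from the text; the LT values for patients 1 and 2
# are hypothetical placeholders consistent with the narrative.
patients = {
    1: {"WL": 7, "LT": 10},  # longest posttransplant lifetime (utility pick)
    2: {"WL": 2, "LT": 5},   # shortest waiting list lifetime (urgency pick)
    3: {"WL": 5, "LT": 9},   # greatest benefit, B = 9 - 5 = 4 (benefit pick)
}

# With no transplant at all, the population lives the sum of WL lifetimes
total_wl = sum(p["WL"] for p in patients.values())  # 14 years

for pid, p in patients.items():
    benefit = p["LT"] - p["WL"]
    # Population life-years = everyone's WL lifetime + the recipient's benefit
    print(f"Allocate to patient {pid}: B = {benefit}, "
          f"population life-years = {total_wl + benefit}")
# Allocating to patient 3 yields 14 + 4 = 18 life-years, the maximum.
```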

Currently in the United States, deceased-donor livers are allocated based primarily on medical urgency. Specifically, acute liver failure patients (Status 1) are given top priority, while chronic end-stage liver disease patients are sequenced on the liver waiting list in decreasing order of Model for End-stage Liver Disease (MELD) score (4–6). In February 2002, the MELD system replaced the Status system, which was based largely on the Child–Turcotte–Pugh (CTP) score (7,8). Both the CTP and MELD systems are based on medical urgency, since both utilize scores that are intended primarily to reflect waiting list mortality. Although not a transplant benefit-based system (since posttransplant outcomes are not considered), the CTP system represented a great improvement over a system that did not prioritize waiting list patients based on their characteristics, for example, a system that ranks patients based on waiting time. Several articles have compared the MELD and CTP scores (9). Almost always, in instances where the analysis found a significant difference between the two scores, MELD was found to predict waiting list mortality more accurately (6). In many cases, however, MELD and CTP scores were not significantly different as predictors of waiting list mortality, due perhaps to inadequate sample size. However, even if one believed the two scores to be equally predictive of waiting list mortality, a system based on MELD would better achieve the objective of urgency-based allocation, since MELD has a finer gradation of risk. Ties are essentially broken by waiting time under either system, meaning that ranks based on the CTP score, which produces more ties, would be more influenced by waiting time. Despite its initial appeal as being equitable, allocation by waiting time identifies patients who have already survived the longest on the waiting list and, in some cases, selects for transplantation the patients who need the organ the least. In sum, one would expect that CTP-based allocation is much closer to optimal than allocation by waiting time, and that allocation by MELD constitutes a further considerable improvement.

Although the MELD system has proven effective, it was not designed to reflect posttransplant survival. The persistent shortage of donor livers increases the pressure to make the best possible use of those available, which implies that utility, in addition to urgency, should be considered. As such, the Organ Procurement and Transplantation Network (OPTN) Liver and Intestine Committee is currently evaluating a transplant survival benefit-based system for allocating deceased-donor livers to chronic end-stage liver disease patients. We must emphasize that the development of a benefit-based allocation system is a work in progress; this article represents the state of the proposal at the time of its writing. In terms of evaluating and testing the proposed system, much work remains before implementation can occur. Several important decisions have yet to be made, and some that have been made are subject to modification.

The remainder of this article is organized as follows. In Measuring Transplant Survival Benefit, we discuss the quantification of transplant survival benefit and describe the currently proposed benefit score. The posttransplant and waiting list survival models are discussed in Posttransplant Survival Model and Waiting List Survival Model, respectively. We evaluate the proposed transplant benefit score in Analysis of Proposed Liver Transplant Survival Benefit Score, including comparisons to the MELD and various other scores. We evaluate the implications of benefit-based allocation through microsimulation in Evaluation of Benefit-Based Allocation Via Simulation. A Discussion concludes the article.

Measuring Transplant Survival Benefit

We quantify the liver transplant survival benefit for a given candidate as that candidate's 5-year mean lifetime with a transplant (specifically, with the organ to be allocated) minus his/her 5-year mean lifetime without a transplant. Thus, each time an organ is to be allocated, the transplant benefit score would be computed for each chronic liver failure patient active on the waiting list; in this sense, the scores are patient- and organ-specific. All active patients would then be sequenced in decreasing order of benefit score. In principle, the calculation of the benefit score is straightforward. For any donor–patient combination, a predicted posttransplant survival curve is available (described in Posttransplant Survival Model), as is a predicted waiting list survival curve (Waiting List Survival Model). In each case, the predicted future lifetime is the area under the survival curve out to 5 years, and the benefit score is the difference between the two predictions. For example, if the area under the first 5 years of a patient's posttransplant survival curve equals 3.5, then that patient is expected to live 3.5 of the next 5 years with the transplant. If the patient's benefit score equals 1.5, then it is predicted that, out of the next 5 years, the patient will live an extra 1.5 years with the transplant compared with the scenario in which the patient receives no transplant. That is, the area between the posttransplant and waiting list survival curves (both followed out to the 5-year point) equals 1.5 years. The calculation is truncated at 5 years for two reasons. First, the available data provided 5 years' worth of pertinent follow-up. Second, lifetime distributions are often skewed far to the right (i.e. the histogram has a long right tail), and the unrestricted mean would be too heavily influenced by this tail.
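As a rough numerical illustration of this definition (not of the actual fitted models), the sketch below integrates two hypothetical survival curves out to 5 years and takes the difference of the areas:

```python
import numpy as np

t = np.linspace(0, 5, 501)            # follow-up grid, in years
surv_post = np.exp(-0.12 * t)         # hypothetical posttransplant survival curve
surv_wl = np.exp(-0.45 * t)           # hypothetical waiting list survival curve

rmst_post = np.trapz(surv_post, t)    # mean years lived (of the next 5) with transplant
rmst_wl = np.trapz(surv_wl, t)        # mean years lived (of the next 5) without
benefit = rmst_post - rmst_wl         # area between the two curves out to 5 years

print(f"5-year mean lifetime with transplant:    {rmst_post:.2f} years")
print(f"5-year mean lifetime without transplant: {rmst_wl:.2f} years")
print(f"Transplant survival benefit:             {benefit:.2f} years")
```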

In Posttransplant Survival Model and Waiting List Survival Model we describe the posttransplant and waiting list survival models, respectively. Before doing so, it is useful to briefly compare our proposed approach for estimating transplant benefit with others in the literature. Several authors have quantified the survival benefit of liver transplantation (1,10) and kidney transplantation (1,2,11,12). Each of these works used a single Cox regression model with 'transplant' coded as a binary indicator. That is, one model applies to both the waiting list and the posttransplant deaths, with the regression parameter corresponding to the transplant 0/1 indicator used to quantify the covariate-adjusted survival benefit of transplantation. As mentioned previously, we use separate models for waiting list and posttransplant survival, similar in spirit to the Lung Allocation Score (13) and the proposed Life Years From Transplant (LYFT) score to be used in deceased-donor kidney allocation (14).

Having discussed our proposed transplant survival benefit metric, we describe in the next two sections the posttransplant and waiting list models used in its calculation. Each step in the process of building and evaluating the survival models was carried out in consultation with the OPTN Liver and Intestine Committee, as well as SRTR clinicians, surgeons and biostatisticians.

Posttransplant Survival Model

The transplant study population included patients who received a deceased-donor liver transplant between September 1, 2001 and December 31, 2007. Follow-up began (time 0) at the date of transplant and ended at the earliest of death, retransplant or loss to follow-up. The death event was defined as the earliest of death and liver retransplant. While there is no question that any lifetime (posttransplant or otherwise) ends upon death, whether posttransplant lifetime ends at retransplant requires more careful thought. Basically, one organ is allocated at a time, and of interest is a potential recipient's predicted lifetime with that particular organ. Therefore, in the posttransplant modeling, if a patient was retransplanted, their lifetime with the original organ was considered to have ended.

Posttransplant survival was modeled using Cox regression (15). Although several regression models are available for survival analysis, the Cox proportional hazards model dominates the biomedical literature; the original article in which the Cox model was proposed (15) is among the most cited articles in science. Covariate selection was carried out through a nonautomated form of backward elimination. Specifically, we started the model-building process by fitting a model that contained every covariate suspected of affecting patient or graft survival. This set consisted of all covariates included in the Program-Specific Report (PSR) models (used to compare center-specific mortality with the covariate-adjusted national average) and/or included in previous SRTR analyses of liver transplant mortality or survival benefit, as well as various additional covariates suggested by members of the SRTR or the OPTN Liver Committee.

The final model consisted of recipient factors (creatinine, albumin, sodium, age, diagnosis, diabetes, dialysis, hospitalization status, previous liver transplant, mechanical support, portal vein thrombosis, previous abdominal surgery, hepatitis C); donor factors (age, cause of death, donation after cardiac death); and transplant factors, i.e. donor–recipient factors (cold ischemia time, and whether the transplant represented a regional or national share). In Table 2, we list the recipient factors included in the posttransplant mortality model. There is a baseline death rate that applies to all patients, and the hazard ratios (HR) listed in Table 2 are multipliers of this baseline death rate. For example, a patient with a previous liver transplant (HR = 1.60) has a death rate 60% greater than that of a patient receiving a first liver transplant. Donor factors included in the benefit score calculation are listed in Table 3. Note that the 'Donor age ≥60 years and recipient HCV' entry in Table 3 (fifth row) represents an interaction term. Specifically, from the fourth row, a donor age ≥60 years confers a 44% increase in risk, since the hazard ratio is estimated at HR = 1.44. If, in addition, the recipient is HCV+, then donor age ≥60 years results in a greater than twofold increase in posttransplant mortality risk, since the combined hazard ratio is HR = 1.44 × 1.41 = 2.03.

Table 2
Recipient characteristics used in posttransplant mean lifetime prediction
Table 3
Donor characteristics used in posttransplant mean lifetime prediction

Outputs from the final posttransplant Cox model include the hazard ratios corresponding to each covariate and the baseline survival. The latter can be interpreted as the survival curve for a recipient–donor combination whose characteristics are at the reference levels of each covariate. Combining the baseline survival (which applies to all patients and donors) and the HRs, a predicted survival curve can be constructed for any patient–donor combination. Five-year posttransplant life expectancy can then be computed as the area under the survival curve, out to 5 years.
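The construction can be sketched as follows under the proportional hazards assumption, S(t | x) = S0(t)^HR, where HR is the product of the hazard ratios applicable to a given patient–donor combination. The baseline curve below is hypothetical; the hazard ratios echo the examples quoted above from Tables 2 and 3.

```python
import numpy as np

t = np.linspace(0, 5, 501)
s0 = np.exp(-0.10 * t)           # hypothetical baseline posttransplant survival

# Hazard ratios quoted in the text (Tables 2 and 3)
hr_prev_transplant = 1.60        # previous liver transplant
hr_donor_age_60 = 1.44           # donor age >= 60 years
hr_age60_x_hcv = 1.41            # interaction: donor >= 60 and recipient HCV+

# Example combination: HCV+ repeat-transplant recipient, donor aged >= 60
hr_total = hr_prev_transplant * hr_donor_age_60 * hr_age60_x_hcv

# Proportional hazards: S(t | x) = S0(t) ** HR
surv_patient = s0 ** hr_total

rmst5 = np.trapz(surv_patient, t)    # 5-year posttransplant life expectancy
print(f"Combined HR = {hr_total:.2f}; 5-year mean lifetime = {rmst5:.2f} years")
```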

To evaluate the fit and predictive ability of the posttransplant survival model, we computed the index of concordance (or C statistic) based on all patients transplanted in the calendar year 2005. The C statistic is an estimator for the percentage of times that the model correctly predicts which of two patients will die first. To compute the C statistic, the denominator equals the number of pairs of patients where the ordering of the death times is observed. The numerator is the subset of the denominator where the ordering of the death times observed is concordant with that predicted by the model. If C = 1, then the model is estimated to perfectly predict the first of any two patients to die. If C = 0.5, the model predicts the first of two patients to die as well as would the toss of a coin. For the posttransplant model, C = 0.63, indicating satisfactory albeit not exceptional predictive ability. To cross-validate the posttransplant survival model, we randomly split the data set, fitted a Cox model to one half of the patients and computed the C statistic on the other half. We repeated this exercise 10 times, and the average C statistic equaled 0.63, indicating that our internal validation did not appear to overstate the ability of the model to predict posttransplant survival for future patients.
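A minimal implementation of the pairwise definition just given is sketched below, on toy data; pairs in which the earlier follow-up time is censored are excluded, since the ordering of the death times is then unobserved.

```python
import itertools

def c_statistic(times, events, risk_scores):
    """Pairwise concordance. A pair is usable when the earlier follow-up
    time is an observed death, so that we know who died first."""
    concordant, usable = 0.0, 0
    for (t1, e1, r1), (t2, e2, r2) in itertools.combinations(
            zip(times, events, risk_scores), 2):
        if t2 < t1:  # order the pair so patient 1 has the earlier time
            (t1, e1, r1), (t2, e2, r2) = (t2, e2, r2), (t1, e1, r1)
        if t1 == t2 or e1 == 0:
            continue           # ordering of the death times not observed
        usable += 1
        if r1 > r2:
            concordant += 1    # higher predicted risk died first
        elif r1 == r2:
            concordant += 0.5  # tied predictions count as a coin flip
    return concordant / usable

# Hypothetical follow-up times (years), death indicators and model risk scores
times = [0.5, 1.2, 3.0, 4.8, 5.0]
events = [1, 1, 0, 1, 0]
risks = [2.1, 1.4, 0.8, 0.2, 0.3]
print(f"C = {c_statistic(times, events, risks):.2f}")  # 1 = perfect, 0.5 = coin toss
```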

Waiting List Survival Model

To model waiting list survival, the study population consisted of 10 cross-sections of patients, the cross-section dates being May 1 and November 1 of each of calendar years 2002–2006. To be included in a particular cross-section (e.g. 11/01/2003), a patient would have to be alive and active on the waiting list as of 11/01/2003. Status 1 patients were excluded. We formed the study population using cross-sections since, when used for allocation, the model will be applied to cross-sections of patients (those active on the waiting list on a given date), as opposed to cohorts of patients. The survival time was defined as time since cross-section, with time previously survived on the waiting list included as a covariate in the model. After being included in a cross-section, patients were censored at the earliest of loss to follow-up or receipt of a liver transplant. To clarify, being active on the waiting list was a requirement to be included in a cross-section. However, subsequent deactivation would not be treated as a censoring event.

As mentioned in the preceding paragraph, patients were censored upon receipt of a liver transplant. Under the MELD system, patients at the highest risk of waiting list death generally also have the highest transplant rate. The resulting potential for bias was corrected through inverse probability of censoring weighting (16,17), a well-established method for overcoming dependent censoring. Specifically, a time-dependent weight is applied to each cross-section patient, the weight being equal to the inverse of the probability that the patient remains untransplanted. Basically, patients with a higher (lower) probability of receiving a liver transplant are assigned a higher (lower) weight, to balance out the fact that relatively less (more) follow-up on such patients is actually observed. Computing the weights requires a time-to-transplant model, which contains all covariates in the waiting list survival model, plus time-dependent MELD and organ procurement organization.
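The weight calculation can be sketched schematically as follows. The monthly transplant hazards below are hypothetical; in the actual proposal, the probability of remaining untransplanted derives from the time-to-transplant model described above.

```python
import numpy as np

months = np.arange(1, 13)                     # monthly grid over one year

# Hypothetical monthly transplant hazards for two waiting list patients
hazard = {"high MELD": np.full(12, 0.15),     # likely to be transplanted soon
          "low MELD": np.full(12, 0.02)}      # likely to remain on the list

for label, hz in hazard.items():
    p_untransplanted = np.cumprod(1.0 - hz)   # P(still untransplanted) by month
    weight = 1.0 / p_untransplanted           # IPCW weight at each month
    print(f"{label}: weight at 6 months = {weight[5]:.2f}, "
          f"at 12 months = {weight[11]:.2f}")
# The high-MELD patient's scarce untransplanted follow-up is up-weighted,
# offsetting the dependent censoring induced by MELD-based allocation.
```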

The statistical methodology and other technical issues surrounding our approach to modeling waiting list mortality will be described in a separate report. The underlying ideas are given in some detail in the literature (12,18). Several SRTR (2,10) and other analyses (19) have been carried out using closely related methods. Related methodologic material can be found in Liang and Zeger (20), Wei et al. (21) and Zheng and Heagerty (22).

Cox regression was used to model waiting list survival. Covariate selection proceeded in a manner similar to that employed for the posttransplant survival model (Posttransplant Survival Model). For lab measures such as the MELD components, albumin and serum sodium, patients were classified based on their most recent measurement as of the cross-section date. This is appropriate since, when the score is computed in practice, only current and previous lab measurements will be known. Covariates were also defined for the slopes of each patient's prior bilirubin, creatinine, international normalized ratio (INR) and albumin values. Patient characteristics in the final waiting list model included creatinine, bilirubin, INR, albumin, sodium, age, body mass index, previous time on the waiting list, diagnosis, diabetes, dialysis, medical condition at listing and prior history of malignancy. Table 4 lists the patient characteristics included in the waiting list lifetime prediction.

Table 4
Characteristics used in waiting list mean future lifetime prediction

To evaluate its discriminatory ability, we computed the C statistic for the waiting list survival model. Based on the cross-section of patients active on the waiting list on May 1, 2004, C = 0.74, indicating fairly good predictive ability. The cross-validation of the waiting list survival model proceeded analogously to that described earlier for the posttransplant model; that is, randomly splitting the data in half, then fitting the model to one half and computing the C statistic on the other half. Averaging over 10 random splits, the C statistic equaled 0.74.

Analysis of Proposed Liver Transplant Survival Benefit Score

Our objective in this section is to examine the proposed liver transplant benefit score and to compare it with the MELD score. We used the 10 cross-sections of patients already selected as the study population for the waiting list survival model. Within each cross-section, for each selected patient, we used the most recent lab MELD score and computed the proposed transplant survival benefit based on a typical liver donor; specifically, a donor with characteristics at the reference level for all categorical donor factors and approximately equal to the median of all continuous donor factors.

In Figure 1, we plot mean 5-year predicted future waiting list lifetime by MELD, with patients grouped by individual MELD score. It is clear that predicted waiting list lifetime strongly decreases as MELD increases, which would be expected since MELD is a very strong predictor of waiting list mortality. Mean predicted 5-year future waiting list lifetime equals approximately 4.5 years for patients with a MELD score of 6, meaning that on average, a patient with a MELD of 6 would be expected to live 4.5 of the next 5 years. In contrast, mean 5-year waiting list lifetime is only 0.5 years for patients on the waiting list with a MELD score of 40.

Figure 1
Mean 5-year future lifetime by MELD.

Also plotted in Figure 1 is mean 5-year posttransplant lifetime. Posttransplant life expectancy decreases as MELD increases, although the strength of the decrease is much less than for waiting list lifetime. Patients with a MELD score of 6 are expected to live an average of 4.1 years out of the next 5 years if they receive a liver transplant; this is an average of 0.4 years less than if they remain on the waiting list. Patients with such a low MELD score do not, on average, benefit from liver transplantation because their waiting list mortality is too low to offset the high post- and perioperative mortality risk associated with liver transplantation.

Average 5-year transplant benefit is the distance between the two lines in Figure 1. We plot average survival benefit by integer MELD score in Figure 2. Average survival benefit increases steadily as MELD increases, consistent with the fact that, as MELD increases, mean waiting list lifetime decreases at a much greater rate than mean posttransplant lifetime. On average, patients with MELD less than 10 have negative benefit scores, indicating reduced lifetime posttransplant, based on a 5-year time horizon. It is estimated that patients with a MELD score of 40 gain an average of three (out of a possible five) future years if they receive a liver transplant.

Figure 2
Mean 5-year transplant benefit by MELD.

Of note, Figures 1 and 2 are based on averages. Overlooking this fact could lead one to believe that allocating livers by survival benefit would essentially amount to allocating by MELD, since the trends appear to be in the same direction. However, it is possible for patient X to have a much greater MELD score than patient Y, yet a lower benefit score. True, the MELD score consists of three very strong predictors of waiting list mortality. However, recent evidence suggests that the MELD components are not weighted optimally in the MELD formula (23). Moreover, one of the MELD components, creatinine, is also a very strong predictor of posttransplant mortality (24). Since creatinine predicts both waiting list and posttransplant mortality, its effect on the benefit score is of smaller magnitude than one might expect. Moreover, as described in Measuring Transplant Survival Benefit and Posttransplant Survival Model, several factors in addition to the MELD components predict waiting list and/or posttransplant mortality. As a result, there is considerable variability in the distribution of the benefit scores at any given MELD score, as evidenced by the box and whisker plots in Figure 3. Note that the boxes in Figure 3 contain the middle 50% of the data (spanning the 25th and 75th percentiles), while the whiskers contain the middle 90%. The pattern in the averages observed in Figure 2 is consistent with the trend in the boxes in Figure 3. However, what is much more prominent is the degree of overlap, not just among the distributions in adjacent MELD categories but among MELD scores three and four categories apart. Although MELD 6–8 patients do not, on average, benefit from liver transplantation (Figure 2), approximately 20% of such patients do have positive benefit scores (Figure 3). Similarly, there are patients with MELD ≥21 with negative benefit scores (Figure 3), although on average (Figure 2) the benefit is quite strong in this high MELD subgroup.

Figure 3
Transplant benefit by MELD box plots.

We computed the rank correlation between the proposed transplant benefit score and various other scores (Table 5). The rank correlation (also known as the Spearman correlation) equals the more commonly used correlation coefficient (also known as the Pearson correlation) computed on the ranks, rather than the actual scores. The rank correlation is bounded by −1 and 1. A value of 1 (−1) indicates that as one score increases, the other increases (decreases); values close to 0 indicate no correspondence between the two scores. The rank correlation between the proposed benefit score and MELD score is 0.67, which appears to be consistent with Figure 3. Although the benefit and MELD scores are related, it is clear that one score is not duplicating the other, judging by the overlap in benefit scores among MELD categories. Based on the rank correlation being only 0.67 and the overlap in Figure 3, it appears that the ranks of waiting list patients would be altered considerably under a benefit-based allocation system. A rank correlation of 0.67 is perhaps closer to 1 than expected since MELD considers waiting list mortality, while the benefit score captures both pre- and posttransplant mortality. However, the rank correlation between predicted 5-year waiting list lifetime (the waiting list component of the benefit calculation) and MELD was only −0.72. This is further away from −1 than one might anticipate given that, in this case, the rank correlation is being computed on two quantities intended to measure urgency. The fact that the rank correlation between our 5-year predicted waiting list lifetime and MELD is not closer to −1 reflects the fact that several factors in addition to MELD affect waiting list mortality.

Table 5
Rank correlation among scores
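As a toy illustration of the rank correlation itself (the reported values above come from the cross-section data, not from anything resembling this example), the snippet below computes the Spearman correlation for six hypothetical patients:

```python
from scipy.stats import spearmanr

# Hypothetical MELD and benefit scores for six waiting list patients
meld = [8, 12, 17, 22, 30, 38]
benefit = [-0.4, 0.9, 0.3, 1.8, 1.2, 2.9]

# Spearman correlation = Pearson correlation computed on the ranks
rho, _ = spearmanr(meld, benefit)
print(f"Rank correlation = {rho:.2f}")  # strong but imperfect agreement
```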

In Table 6, we further explore the ideas from Table 5. The rank correlation between predicted 5-year waiting list lifetime and the benefit score was calculated to be −0.89. The closeness of this particular rank correlation to −1 (which would indicate that the benefit score could be based on predicted waiting list lifetime alone) is consistent with the fact that covariate-adjusted posttransplant death rates are substantially lower than (approximately one-fifth of) waiting list death rates (1). Because 5-year posttransplant survival is much higher than 5-year waiting list survival, predicted posttransplant lifetimes vary relatively little, and the benefit score is driven primarily by its waiting list component. If we restrict attention to patients with positive transplant benefit scores (those patients much more likely to receive offers under a benefit-based allocation system), the rank correlation between the score and predicted waiting list lifetime equals −0.90 and hence is approximately equal to that based on all patients.

Table 6
Rank correlation between transplant benefit and various other scores

To obtain a summary measure of the importance of factors other than the MELD components, we computed the rank correlation between the proposed benefit score and a benefit score based on the MELD components alone: 0.68 (Table 6). When restricted to patients with positive (proposed) benefit scores, this correlation decreased to 0.61 (Table 6). In either case, the message is that the MELD components strongly influence but certainly do not dominate the proposed benefit score.

As a follow-up to Figure 3, in Figure 4 we display box and whisker plots for benefit scores by age group. Based on Figure 4, it appears that the proposed benefit score is not strongly influenced by age, judging by the apparent similarity of the distributions across age groups. This may seem strange, since age covariates were selected for inclusion in both the waiting list and posttransplant models. Two things are important in this regard. First, since age predicts both pre- and posttransplant survival, it would have a stronger influence on the benefit score if it predicted one and not the other, or if it predicted one much more strongly than the other. Second, although age is predictive of mortality, it is much less predictive than several factors, such as the MELD components and albumin.

Figure 4
Benefit scores by age group (patients with benefit > 0).

We sought to rank each patient covariate in the benefit score in terms of relative importance. To do so, we took the cross-section of patients active on the waiting list on May 1, 2004 and computed the benefit score for each patient using a randomly selected donor from calendar year 2004. We then fitted a linear regression model, with the benefit score serving as the response variate and the patient covariates serving as the predictor variates. We judged the importance of each covariate by the percentage of variation in the benefit scores it explained. The results of this exercise are listed in Table 7; each row can be interpreted as the contribution of that covariate after factoring out the contributions of all covariates listed above it. Note that the ordering was based on a sequence of linear regression models: first determining the most important covariate (which happened to be albumin), then the second most important (after factoring out the contribution of albumin), and so on. Albumin accounted for just over half (53%) of the variation in benefit scores. After factoring out the contribution of albumin, bilirubin accounted for an additional 15% of the variation. The next most important factor was donor age (8%). The only other covariate accounting for more than 5% of the variation in the benefit scores was creatinine (5.3%); recipient age accounted for less than 5% of the variation. Together, the seven factors listed in Table 7 explained almost 97% of the variation in the benefit scores.

Table 7
Relative importance of covariates to benefit score
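The sequential attribution of explained variation described above can be sketched as follows. The simulated covariates and benefit scores are purely illustrative, and the entry order is fixed in advance here, whereas the actual exercise re-determined the most important remaining covariate at each step.

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an ordinary least squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Purely illustrative simulated data standing in for the real scores
rng = np.random.default_rng(0)
n = 2000
albumin = rng.normal(3.0, 0.6, n)
bilirubin = rng.lognormal(0.5, 0.8, n)
donor_age = rng.uniform(18, 75, n)
benefit = -2.0 * albumin + 0.3 * bilirubin - 0.01 * donor_age + rng.normal(0, 1, n)

# Add covariates one at a time; attribute to each the increase in R^2
selected, prev_r2 = [], 0.0
for name, col in [("albumin", albumin), ("bilirubin", bilirubin),
                  ("donor age", donor_age)]:
    selected.append(col)
    r2 = r_squared(np.column_stack(selected), benefit)
    print(f"{name:>9}: +{100 * (r2 - prev_r2):.1f}% of variation explained")
    prev_r2 = r2
```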

The proposed transplant benefit score is based on a 5-year time horizon, with the truncation point largely chosen based on data availability. To assess the sensitivity of the patient rankings to the 5-year truncation time, we computed rank correlations between the proposed score and those based on 1-, 3- and 10-year lifetimes (Table 8). Survival models for the 10-year life expectancies were based on extrapolations, which assumed that the average hazard during years 5–10 was equal to that during years 4–5. As shown in the top row of the matrix in Table 8, the rank correlation between the 1-year score and the others decreases as the time horizon is extended. The 3-, 5- and 10-year scores have pairwise rank correlations very close to 1, indicating that the ordering of patients appears to be quite insensitive to the time horizon, provided the truncation point is 3 or more years.

Table 8
Rank correlations between benefit scores using different truncation points
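The extrapolation assumption can be written out directly: the hazard over years 5–10 is set to the constant value implied by the model-based survival probabilities at years 4 and 5, both of which are hypothetical in the sketch below.

```python
import numpy as np

s4, s5 = 0.62, 0.55                    # hypothetical model-based S(4) and S(5)
h = -np.log(s5 / s4)                   # average annual hazard during years 4-5

t_ext = np.linspace(5, 10, 501)
s_ext = s5 * np.exp(-h * (t_ext - 5))  # extrapolated survival over years 5-10

extra_years = np.trapz(s_ext, t_ext)   # mean lifetime contributed by years 5-10
print(f"Assumed hazard = {h:.3f}/year; "
      f"mean lifetime added in years 5-10 = {extra_years:.2f} years")
```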

Evaluation of Benefit-Based Allocation Via Simulation

We evaluated the performance of the proposed liver transplant survival benefit score using the Liver Simulated Allocation Model (LSAM) (25). LSAM is part of a family of simulated allocation models (SAMs) developed by the SRTR; similar models exist for kidney/pancreas allocation (KPSAM) and the thoracic organs (TSAM). The SAMs have been used many times to assess the future effect of proposed changes in allocation policy. A schematic diagram of the general flow of data and event processing by LSAM is displayed in Figure 5. The model starts with the actual candidates on the waiting list; in our case, on January 1, 2006. The model runs for one full calendar year, processing new wait-listings (i.e. initial listings, relistings), transplants, deaths and removals from the waiting list for reasons other than death or transplant. The primary role of LSAM is to evaluate the changes in experience over the calendar year due to changes in the allocation system. Through LSAM, one can assess the impact (e.g. on the number of deaths) of changing the allocation rules, which are programmed directly into LSAM for each run, on 1 year's worth of experience. Essentially, a waiting list lifetime (i.e. lifetime in the absence of transplant) is simulated for each patient. This lifetime without transplant contains status history updates, inactive time and, possibly, removal. Deceased-donor livers arrive into the system and are allocated based on whatever rules are programmed into a particular LSAM run. Each organ is offered to patients active on the waiting list, and each offer is accepted or rejected with a probability that depends on the patient and donor characteristics. Posttransplant experience is generated for patients who are transplanted, including death or graft failure and subsequent relisting. Further details regarding LSAM are described in Thompson et al. (25).

Figure 5
Liver Simulated Allocation Model (LSAM) event processing.
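To convey the flavor of this type of experiment, the drastically simplified sketch below replays a single stream of organs against a single waiting list under three allocation rules and compares total life-years. It omits nearly everything LSAM actually models (status updates, offer acceptance, geography, relisting) and uses invented lifetimes.

```python
import random

random.seed(1)

# Each patient: predicted waiting list and posttransplant lifetimes (invented)
patients = [{"wl": random.uniform(0.2, 5.0), "lt": random.uniform(1.0, 5.0)}
            for _ in range(200)]
n_organs = 40

def simulate(rule):
    pool = list(patients)
    total = sum(p["wl"] for p in pool)    # baseline: everyone's WL lifetime
    for _ in range(n_organs):             # one organ arrives at a time
        pick = rule(pool)                 # recipient chosen by the rule
        total += pick["lt"] - pick["wl"]  # add that recipient's benefit
        pool.remove(pick)
    return total

rules = {
    "urgency": lambda pool: min(pool, key=lambda p: p["wl"]),            # sickest first
    "utility": lambda pool: max(pool, key=lambda p: p["lt"]),            # best outcome
    "benefit": lambda pool: max(pool, key=lambda p: p["lt"] - p["wl"]),  # largest gain
}
for name, rule in rules.items():
    print(f"{name}: {simulate(rule):.0f} total life-years")
```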

We evaluated three allocation schemes. The first is the set of rules in place at the time this article is being written; note that the SHARE15 rule is incorporated into the current allocation scheme. The second allocation system also features MELD/PELD-based allocation, but with regional sharing at all MELD scores; that is, local allocation is essentially eliminated, with the region becoming the first geographic level of offer. There are two reasons that 'total' regional sharing is of interest. First, it is currently being considered by the OPTN Liver and Intestine Committee. Second, the SHARE15 rule has no obvious analog under survival benefit-based allocation, meaning that comparisons between regional sharing-based allocation and the proposed benefit-based system are easier to interpret than comparisons between the current and benefit-based systems. The third allocation system evaluated is transplant benefit, again with regional sharing at all MELD scores.

Results of the LSAM modeling are averaged over 10 iterations. In Table 9, we list the number of deaths by allocation system, with the numbers in parentheses representing the difference between that system and the system in the column to its immediate left. Based on one calendar year of experience, benefit-based allocation would result in 83 fewer waiting list deaths, six fewer posttransplant deaths and 13 fewer postremoval deaths, a net saving of 102 lives. Naturally, the regional sharing and transplant benefit allocation systems would result in transplants to different sets of patients. As such, in Table 10 we estimate the number of liver transplant-attributable life-years saved under each allocation system. Under a MELD/PELD-based regional sharing system, we estimate that there would be 6273 liver transplants and that the mean benefit score would be 1.63 years; taking the product of these two numbers, 6273 × 1.63 = 10,225 life-years would be saved. Under survival benefit-based allocation, there are projected to be 80 fewer transplants, but the mean benefit score is 0.38 years greater than under regional sharing. The result is an additional 2223 life-years. To better appreciate this calculation, recall that a patient's benefit score represents their predicted gain in lifetime (over the next 5 years) due to receiving a transplant. The total lifetime gained calculation applies the patient/organ-specific predicted gain in 5-year lifetime to the patients who actually receive a transplant; patients who are not transplanted are predicted to receive no transplant survival benefit.

Table 9
Liver Simulated Allocation Model (LSAM): number of deaths by allocation system
Table 10
Liver Simulated Allocation Model (LSAM): life-years saved by allocation system
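The life-years arithmetic in Table 10 can be verified directly from the figures quoted above:

```python
# Verifying Table 10's arithmetic from the figures quoted above
tx_regional, mean_b_regional = 6273, 1.63
tx_benefit = tx_regional - 80                # 80 fewer transplants
mean_b_benefit = mean_b_regional + 0.38      # mean benefit 0.38 years greater

ly_regional = tx_regional * mean_b_regional  # life-years saved, regional sharing
ly_benefit = tx_benefit * mean_b_benefit     # life-years saved, benefit-based

print(f"Regional sharing: {ly_regional:,.0f} life-years")   # ~10,225
print(f"Benefit-based:    {ly_benefit:,.0f} life-years")    # ~12,448
print(f"Difference:       {ly_benefit - ly_regional:,.0f}") # ~2,223
```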

Discussion

In this article we describe and evaluate a proposed score currently being considered to serve as the basis for a transplant survival benefit-based deceased-donor liver allocation system for chronic liver failure patients. The score is under review by the OPTN Liver and Intestine Committee and may undergo modification prior to its field implementation. The proposed score is based on models of a single end-point, mortality; hence the phrase 'survival benefit'. Mortality has always been considered the most important outcome among organ failure patients. It may be possible to develop a benefit score that incorporates morbidity or quality-of-life measures. However, it is unclear how such a score should be constructed and, perhaps most importantly, reliable pertinent data appear to be currently unavailable for outcomes other than death.

This article represents a natural extension of previous SRTR work on liver transplant survival benefit (1,10). Merion et al. (1) demonstrated that the mortality reduction due to liver transplantation increases with increasing MELD. Subsequently, Schaubel et al. (10) demonstrated that liver transplant benefit depends not only on MELD but also on donor quality, as quantified by the Donor Risk Index (DRI) (26). Rather than using only MELD and DRI, the current article proposes a method for computing liver transplant benefit using all pertinent patient and donor characteristics. While the work of Merion et al. (1) and Schaubel et al. (10) compared average patients, our current proposal focuses on calculations at the patient level. For example, Merion et al. (1) indicated that, on average, patients with low MELD do not obtain survival benefit from liver transplantation. This is consistent with Figure 2, which indicates that the average benefit score is negative at low MELD scores. However, Figure 3 indicates that there are patients at low MELD scores who do benefit from a liver transplant; judging by our previous work, these are likely not the patients who receive high-DRI livers.

We estimate that a substantial number of life-years would be saved if transplant survival benefit-based allocation were implemented. Comparing MELD/PELD-based and benefit-based allocation systems, both with regional sharing, we estimate that over 2000 life-years would be saved based on one calendar year's worth of experience, considering only the first 5 years posttransplant. It should be noted that our calculation substantially underestimates the gain in life-years to the patient population in at least two important ways. First, only one calendar year of liver transplants was simulated by LSAM. Second, the benefit score predicts the gain in lifetime over the next 5 years, as opposed to the total gain in lifetime.

With respect to model evaluation, the C statistic reflects the ability of the model to correctly rank patients in terms of death rate. For the waiting list model C = 0.74, indicating that, among pairs of patients, the model correctly ranked the death times approximately three times as often as it did so incorrectly. The corresponding result was less encouraging for the posttransplant model, with C = 0.63. It is possible that the most important determinants of posttransplant survival are not known at the time of transplant; for example, events that occur in the peri- and postoperative period. Additionally, since both patient and donor factors are important to posttransplant outcomes (unlike waiting list survival), there are two dimensions upon which to misclassify liver transplant recipients. For example, although variables like diabetes, mechanical support and previous abdominal surgery are predefined by the OPTN, the classifications for such variables are broad and somewhat subjective, so heterogeneity may exist across centers in the definitions actually applied. The posttransplant model is under continued investigation, with the objective of obtaining better discriminatory power from the observed data.

An important concern is how the switch from an allocation system based on MELD/PELD to a system based on a benefit score would affect pediatric patients. Under the MELD/PELD system, patients aged less than 12 years are given a considerable advantage over adults in terms of priority. In particular, a patient with PELD score x faces considerably lower waiting list mortality than a patient with MELD score x. The advantage given to pediatric patients is implicit, in the sense that no modification is made to convert a MELD score into a PELD score for patients aged <12 years. Instead, MELD and PELD scores are computed using different formulas, with the knowledge that a PELD score of x corresponds to lower waiting list mortality risk than a MELD score of x. Given that the advantage given to pediatric patients appears to be well accepted by the liver transplant community, it would presumably be desirable for the advantage to patients aged <12 years to be preserved under benefit-based allocation. Currently, the same benefit score is computed for pediatric and adult patients, with no explicit modification built in to advantage patients aged <12 years. One possibility is to add a constant (e.g. 0.5 years) to the calculated benefit score for pediatric patients. A second possibility is to downweight the role of waiting list life expectancy, which is known to be quite high among pediatric patients. For example, if the benefit score is calculated as B = LT − WL for adult patients (where LT and WL represent posttransplant and waiting list life expectancy, respectively), then we could set B* = LT − 0.75 × WL for pediatric patients. Preliminary LSAM results indicate that the percentage of deceased-donor livers transplanted to pediatric patients would not change under benefit-based allocation (data not shown). The most likely reason for this stability across allocation systems is the strong impact of donor-to-recipient size matching and of the prioritization of organs from pediatric donors to pediatric patients.
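The two candidate pediatric adjustments amount to simple formulas, sketched below; the constants (0.5 years and 0.75) are the illustrative values from the text, and neither option has been adopted.

```python
def benefit_adult(lt, wl):
    return lt - wl                  # B = LT - WL

def benefit_pediatric_additive(lt, wl, bonus=0.5):
    return lt - wl + bonus          # option 1: add a constant to B

def benefit_pediatric_downweight(lt, wl, w=0.75):
    return lt - w * wl              # option 2: B* = LT - 0.75 x WL

lt, wl = 4.5, 4.0                   # hypothetical pediatric predictions (years)
print(benefit_adult(lt, wl),                  # 0.5
      benefit_pediatric_additive(lt, wl),     # 1.0
      benefit_pediatric_downweight(lt, wl))   # 1.5
```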

Another important consideration in the transition to benefit-based allocation is the issue of exception scores. Currently, patients may apply to their regional review board to have their allocation MELD score increased. If granted, such patients are prioritized based on their 'exception' (as opposed to their calculated) MELD score. The intention is that exception scores be granted in cases where the patient's calculated MELD score is known to understate their true waiting list mortality risk. The benefit score, by contrast, is intended to explicitly quantify the impact of each patient characteristic that affects the amount of additional lifetime a liver transplant would provide. Therefore, exception scores should play a greatly reduced role under a benefit-based allocation system, since most factors that may have prompted an exception score application under a MELD/PELD-based system would already be accounted for in the benefit score calculation. That is, patients with a given condition would still receive a boost in prioritization, provided that such a boost was consistent with the evidence-based benefit score. It is possible that exception scores would still be granted under a benefit-based system, for example for conditions known to affect waiting list survival but too rare to be reliably incorporated into the survival models used to build the allocation score.

Currently, the most frequently occurring exception is that for hepatocellular carcinoma (HCC), although exceptions are granted for many other conditions. As mentioned previously, HCC is one of the waiting list model covariates. Based on our waiting list model, HCC patients are estimated to face a significant 51% increase in covariate-adjusted waiting list mortality. Since HCC does not significantly affect posttransplant survival, and is therefore not included in the posttransplant life expectancy computation, HCC patients would be given an advantage under benefit-based allocation relative to patients without HCC. This is not to imply that the increased priority offered to HCC patients would be greater under a benefit-based (vs. MELD-based) allocation system. In fact, the opposite may be true, particularly since the boost currently applied through a MELD exception score of 22 for HCC patients is likely empirically indefensible. This issue is currently under investigation, again using LSAM modeling.

Acknowledgments

These results were presented, in part, at the 2007 American Transplant Congress in San Francisco, CA. The Scientific Registry of Transplant Recipients is funded by contract number 234-2005-37009C from the Health Resources and Services Administration (HRSA), US Department of Health and Human Services. The views expressed herein are those of the authors and not necessarily those of the US Government. This is a US Government-sponsored work; there are no restrictions on its use. The statistical methodology development and analysis for this investigation were supported in part by National Institutes of Health grant R01 DK-70869 to the first author. This project was also supported, in part, by Grant No. KL2 RR024130 to S. W. Biggins from the National Center for Research Resources (NCRR), and by DK076565 from the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) and the Agency for Healthcare Research and Quality (AHRQ).

References

1. Merion RM, Schaubel DE, Dykstra DM, Freeman RB, Port FK, Wolfe RA. The survival benefit of liver transplantation. Am J Transplant. 2005;5:307–313.
2. Miles CD, Schaubel DE, Jia X, Ojo AO, Port FK, Rao PS. Mortality experience in recipients undergoing repeat transplantation with expanded criteria donor and non-ECD deceased-donor kidneys. Am J Transplant. 2007;7:1140–1147.
3. Wolfe RA, Ashby VB, Milford EL, et al. Comparison of mortality in all patients on dialysis, patients on dialysis awaiting transplantation, and recipients of a first cadaveric transplant. N Engl J Med. 1999;341:1725–1730.
4. Malinchoc M, Kamath PS, Gordon FD, Peine CJ, Rank J, ter Borg PC. A model to predict poor survival in patients undergoing transjugular intrahepatic portosystemic shunts. Hepatology. 2000;31:864–871.
5. Kamath PS, Wiesner RH, Malinchoc M, et al. A model to predict survival in patients with end-stage liver disease. Hepatology. 2001;33:464–470.
6. Wiesner RH, McDiarmid SV, Kamath PS, et al. MELD and PELD: Application of survival models to liver allocation. Liver Transpl. 2001;7:567–580.
7. Child CG, Turcotte JG. Surgery and portal hypertension. In: Child CG, editor. The Liver and Portal Hypertension. Philadelphia: Saunders; 1964. pp. 50–64.
8. Pugh RNH, Murray-Lyon IM, Dawson JL, Pietroni MC, Williams R. Transection of the oesophagus for bleeding oesophageal varices. Br J Surg. 1973;60:648–652.
9. Cholongitas E, Marelli L, Shusang V, et al. A systematic review of the performance of the model for end-stage liver disease (MELD) in the setting of liver transplantation. Liver Transpl. 2006;12:1049–1061.
10. Schaubel DE, Sima CS, Goodrich NP, Feng S, Merion RM. The survival benefit of deceased donor liver transplantation as a function of candidate disease severity and donor quality. Am J Transplant. 2008;8:419–425.
11. Ojo AO, Hanson JA, Meier-Kriesche H, et al. Survival in recipients of marginal cadaveric donor kidneys compared with other recipients and wait-listed transplant candidates. J Am Soc Nephrol. 2001;12:589–597.
12. Schaubel DE, Wolfe RA, Port FK. A sequential stratification method for estimating the effect of a time-dependent experimental treatment in observational studies. Biometrics. 2006;62:910–917.
13. Egan TM, Murray S, Bustami RT, et al. Development of the new lung allocation system in the United States. Am J Transplant. 2006;6(5 Pt 2):1212–1227.
14. Wolfe RA, McCullough KP, Schaubel DE, et al. Calculating life years from transplant (LYFT): Methods for kidney and kidney-pancreas candidates. Am J Transplant. 2008;8(4 Pt 2):997–1011.
15. Cox DR. Regression models and life-tables (with discussion). J R Stat Soc Ser B. 1972;34:187–220.
16. Robins JM, Rotnitzky A. Recovery of information and adjustment for dependent censoring using surrogate markers. In: Jewell N, Dietz K, Farewell V, editors. AIDS Epidemiology: Methodological Issues. American Statistical Association; 1992. pp. 24–33.
17. Robins JM, Finkelstein DM. Correcting for noncompliance and dependent censoring in an AIDS clinical trial with inverse probability of censoring weighted (IPCW) log-rank tests. Biometrics. 2000;56:779–788.
18. Schaubel DE, Wolfe RA, Sima CS, Merion RM. Estimating the effect of a time-dependent treatment by levels of an internal time-dependent covariate. J Am Stat Assoc. 2009; in press.
19. Berg CL, Gillespie BW, Merion RM, et al. Improvement in survival associated with adult-to-adult living donor liver transplantation. Gastroenterology. 2007;133:1806–1813.
20. Liang KY, Zeger SL. Longitudinal data analysis using generalized linear models. Biometrika. 1986;73:13–22.
21. Wei LJ, Lin DY, Weissfeld L. Regression analysis of multivariate incomplete failure time data by modeling marginal distributions. J Am Stat Assoc. 1989;84:1065–1073.
22. Zheng Y, Heagerty PJ. Partly conditional survival models for longitudinal data. Biometrics. 2005;61:379–391.
23. Sharma P, Schaubel DE, Sima CS, Merion RM, Lok ASF. Re-weighting the Model for End-Stage Liver Disease score components. Gastroenterology. 2008;135:1575–1581.
24. Nair S, Verma S, Thuluvath PJ. Pretransplant renal function predicts survival in patients undergoing orthotopic liver transplantation. Hepatology. 2002;35:1179–1185.
25. Thompson D, Waisanen L, Wolfe R, Merion RM, McCullough K, Rodgers A. Simulating the allocation of organs for transplantation. Health Care Manag Sci. 2004;7:331–338.
26. Feng S, Goodrich NP, Bragg-Gresham JL, Dykstra DM, Punch JD, DebRoy MA. Characteristics associated with liver graft failure: The concept of a donor risk index. Am J Transplant. 2006;6:783–790.