Due to the shortage of deceased donor organs, transplant centers accept organs from marginal deceased donors, including older donors. Organ-specific donor risk indices have been developed to predict graft survival using various combinations of donor and recipient characteristics. We will review the kidney donor risk index (KDRI) and liver donor risk index (LDRI) and compare and contrast their strengths, limitations, and potential uses. The Kidney Donor Risk Index has a potential role in developing new kidney allocation algorithms. The Liver Donor Risk Index allows for greater appreciation of the importance of donor factors, particularly for hepatitis C-positive recipients; as the donor risk index increases, rates of allograft and patient survival among these recipients decrease disproportionately. Use of livers with high donor risk index is associated with increased hospital costs independent of recipient risk factors, and transplanting livers with high donor risk index into patients with Model for End-Stage Liver Disease scores < 15 is associated with lower allograft survival; use of the Liver Donor Risk Index has limited this practice. Significant regional variation in donor quality, as measured by the Liver Donor Risk Index, remains in the United States. We also review other potential indices for liver transplant, including donor-recipient matching and the retransplant donor risk index. While substantial progress has been made in developing donor risk indices to objectively assess donor variables that affect transplant outcomes, continued efforts are warranted to improve these indices to enhance organ allocation policies and optimize allograft survival.
Demand for organs for kidney and liver transplant far exceeds the supply of deceased donor organs. Transplant centers are therefore forced to consider using allografts from higher-risk donors, a need particularly evident in geographic areas with the longest waiting times. Allografts may be at risk for graft failure and subsequent recipient death due to factors such as donor age. To quantify this increased risk, donor risk indices were created. Currently, there are no widely accepted donor risk indices for heart and lung donors. Due to the difficulty in defining pancreas allograft survival, assessing a pancreas donor risk index is difficult. We review the history of the development of the Kidney Donor Risk Index (KDRI), which led to increasing interest in the Liver Donor Risk Index (LDRI), and we speculate on the potential future uses of these indices.
To increase the deceased donor organ pool, transplant centers use kidneys from marginal donors, including older donors. To help clinicians choose the best kidney allograft for their patients, scoring systems have been developed using various combinations of donor and recipient characteristics to predict expected all-cause allograft failure.
Historically, increasing the donor pool primarily meant using organs from older kidney donors; recent increases involve donation after cardiac death (DCD). In Spain, efforts to improve kidney donation in the 1990s increased the average age of kidney donors by 11 years, and more than 25% of all donors were aged older than 60 years (1). While these methods increased organ donation, concern arose that kidneys from older donors produced poorer recipient allograft function and survival (2,3). This led to a dilemma for clinicians: should older kidneys with poor allograft survival be used, or should patients remain on dialysis with its attendant mortality risk?
Expanded criteria donation was introduced in November 2001 in the Organ Procurement and Transplantation Network/United Network for Organ Sharing (OPTN/UNOS) policy for deceased donor kidney allocation; the criteria assist clinicians and patients in making decisions about accepting marginal kidneys (4). The goal was to establish donor factors that lead to increased risk of all-cause allograft failure. Port et al (5) used data from the Scientific Registry of Transplant Recipients (SRTR) to examine first deceased donor kidney-only transplants in the US from March 1995-November 2000, considering effects of donor age, sex, race, year of donation, diabetes, hypertension, impaired kidney function, and cause of death, and multiple recipient factors. Expanded criteria donors (ECD) were defined as those whose relative risk of allograft failure was greater than 1.7 compared with donors aged 10–39 years with terminal serum creatinine ≤ 1.5 mg/dL and no history of hypertension or cerebrovascular accident as cause of death. Of these donor characteristics, only age, impaired kidney function (serum creatinine > 1.5 mg/dL), hypertension, and cerebrovascular accident as cause of death were independently associated with increased risk of allograft failure. ECD criteria were defined as age older than 60 years, or 50–59 years with 2 or more additional donor risk factors.
Since reporting of ECD and revision of OPTN/UNOS allocation policy, all candidates must be asked if they wish to be considered for ECD kidney transplant (6). This policy requires a separate consent form to inform candidates of allocation procedures and potential differences in allograft survival. By consenting to ECD kidneys, candidates decrease the time they spend on the waiting list in exchange for higher risk of allograft failure. Despite the increased risk, 43% of candidates on the waiting list consent to ECD kidneys (7). Interestingly, overall mortality is lower for ECD kidney recipients who are aged older than 40 years, African American, or Asian than for wait-listed candidates (8). This difference is most notable in regions where waiting time exceeds 1350 days; more than 50% of wait-listed candidates in 23 of the 58 donor service areas in the US are willing to accept an ECD kidney, as are 80%–100% in 9 donor service areas (7).
The designation ECD does not in itself effectively portray organ quality. Variations within ECD substantially influence allograft survival in ways that cannot be accurately predicted by a dichotomous variable. Several studies have expanded on donor criteria to provide a more graded approach to allograft quality.
Before ECD was introduced, Nyberg et al (9) proposed a donor risk scoring system to identify deceased donor kidneys at highest risk of early allograft dysfunction. They used 18 risk factors for delayed graft function, including 12 donor and 6 recipient factors. Using univariate and multivariate analyses, they developed a scoring system that accounted for donor age, cause of death, history of hypertension and diabetes, creatinine clearance, preservation time, and degree of renal artery plaque. Scores ranged from 0–32; grade A was defined as risk score 0–5, grade B 6–10, grade C 11–15, and grade D ≥ 16. In a validation group, they found that creatinine clearance at 30 days posttransplant was > 40 mL/min for a significantly higher proportion of grade A than grade D recipients (91% versus 23%). Similarly, delayed graft function was less likely for grade A than for grade D recipients (17% versus 62%) (9).
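The grade cutoffs above can be expressed as a simple lookup. This is a sketch of the mapping only; the 18 underlying risk-factor weights that produce the score are in the original paper and are not reproduced here.

```python
def nyberg_grade(score: int) -> str:
    """Map a Nyberg donor risk score (0-32) to a quality grade.

    Cutoffs from the original scoring system:
    A = 0-5, B = 6-10, C = 11-15, D = >= 16.
    """
    if not 0 <= score <= 32:
        raise ValueError("Nyberg score must be between 0 and 32")
    if score <= 5:
        return "A"
    if score <= 10:
        return "B"
    if score <= 15:
        return "C"
    return "D"
```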
Nyberg et al modified their donor risk scoring system in a larger cohort with donor information available at the time of procurement (10). Using SRTR data, they examined deceased donor kidney transplants using 9 donor and 4 recipient variables. This revised scoring system incorporated donor age, history of hypertension, creatinine clearance, number of HLA mismatches, and cause of death. Scores ranged from 0–39; grade A was defined as 0–9, grade B 10–19, grade C 20–29, and grade D 30–39. Again, a higher percentage of grade A than grade D recipients experienced good or excellent kidney function at 1 year (creatinine clearance ≥40 mL/min; 81% versus 37%). With this system, ECD kidneys can be subdivided into grade C or D, and 56% of grade C recipients experience good or excellent kidney function compared with 37% for grade D.
To further develop this scoring system, Schold et al (11) constructed a model focusing on all-cause allograft failure as the endpoint. The model includes donor age, race, cause of death, history of hypertension and diabetes, donor/recipient cytomegalovirus match, HLA mismatches, and cold ischemia time. A donor grade from I–V is assigned to the kidney, and 1- and 5-year allograft survival can be determined.
ECD criteria and the Nyberg and Schold risk scoring systems arbitrarily categorized risk, possibly reducing the accuracy of a risk score. To improve on previous models, the KDRI developed by Rao et al (12) provides a continuous risk score by avoiding categorized variables in the calculation; the model was developed using SRTR data from January 1, 1995, to December 31, 2005. The study assessed donor and transplant factors not included in previous donor risk scores: donor height and weight, DCD, cigarette use, hepatitis C, pulsatile perfusion, organ sharing, year of transplant, en bloc/double transplant, and ABO compatibility. It also assessed recipient factors including height, weight, angina pectoris, drug-treated chronic obstructive pulmonary disease, and hepatitis C. The final KDRI includes 14 donor and transplant factors, each independently associated with all-cause allograft failure: age, African American race, serum creatinine, hypertension, diabetes, cause of death, height, weight, DCD, hepatitis C, HLA mismatch, cold ischemia time, and en bloc and double kidney transplant. The final score is compared with a reference donor with a KDRI score of 1.00, defined as a healthy 40-year-old, non-African American, non-hypertensive, non-diabetic, hepatitis C-negative, brain-dead (as opposed to DCD) donor with serum creatinine 1.0 mg/dL, height 170 cm, weight 80 kg, 2 HLA-B mismatches, 1 DR mismatch, and cold ischemia time of 20 hours (12). The KDRI can give a sense of the increased risk of allograft failure or death associated with use of a particular organ. For example, a KDRI of 1.22 means that the donor organ confers a 22% higher risk of allograft failure than the ideal reference donor (Figure 1).
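Conceptually, the KDRI is a proportional-hazards relative risk: an exponentiated sum of coefficient-weighted donor factors, scaled so the reference donor scores exactly 1.00. The sketch below illustrates only that arithmetic; the coefficients and the three-factor subset are placeholders for illustration, not the published Rao et al model.

```python
import math

def kdri_sketch(age: float, creatinine: float, hypertensive: bool) -> float:
    """Toy KDRI-style relative risk versus a reference donor
    (age 40, serum creatinine 1.0 mg/dL, no hypertension).

    Coefficients are hypothetical placeholders, NOT the published
    values; the real KDRI sums 14 such terms.
    """
    log_risk = 0.0
    log_risk += 0.01 * (age - 40)          # hypothetical age coefficient
    log_risk += 0.20 * (creatinine - 1.0)  # hypothetical creatinine coefficient
    log_risk += 0.13 if hypertensive else 0.0
    # The reference donor contributes zero to every term, so exp(0) = 1.00.
    return math.exp(log_risk)
```

With this structure, a donor scoring 1.22 carries a 22% higher hazard of allograft failure than the reference donor, matching the interpretation in the text.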
The KDRI provides a continuous score that estimates allograft outcome. There is some doubt about its predictive power in certain donor subgroups. To test the discriminatory power of the KDRI model, the dataset was split in half 5 separate times, and a c-statistic was calculated for each split. A c-statistic of 0.5 implies a prediction by chance, and a c-statistic of 1.0 is a perfect prediction model. In the entire cohort, average c-statistic was 0.62, indicating reasonable discriminatory power. In the extreme quartiles, the c-statistic increased to 0.78, and in the middle 2 quartiles decreased to 0.58 (12). This suggests that the KDRI successfully predicts the extreme categories of allograft failure risk, but does not easily distinguish donors from the middle ranges. Nonetheless, the KDRI can provide transplant candidates and their physicians with important information about accepting higher-risk organs, and the risk can be balanced against the risk of remaining on the waiting list. This can lead to acceptance of high-risk organs with adequate understanding and acceptance of the risk.
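The c-statistic reported for the KDRI is a concordance probability: among informative pairs of transplants, the fraction in which the donor with the higher risk score actually had the worse outcome. A minimal illustration on toy binary-outcome data follows; a real survival analysis must also handle censoring and tied event times, which this sketch ignores.

```python
from itertools import combinations

def c_statistic(scores, failed):
    """Fraction of usable pairs in which the higher-scoring donor's
    graft failed while the lower-scoring donor's graft survived.

    `scores` are continuous risk scores; `failed` are 1 (graft
    failed) or 0 (graft survived). Censoring is ignored here.
    """
    concordant = usable = 0
    for (s1, f1), (s2, f2) in combinations(zip(scores, failed), 2):
        if f1 == f2 or s1 == s2:
            continue  # pair carries no ordering information
        usable += 1
        # Concordant when the higher score belongs to the failed graft.
        if (s1 > s2) == (f1 > f2):
            concordant += 1
    return concordant / usable
```

A value of 1.0 means the score perfectly orders outcomes, 0.5 means it does no better than chance, consistent with the interpretation of the KDRI's c-statistics of 0.62 overall and 0.78 in the extreme quartiles.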
OPTN/UNOS is considering a change to the kidney allocation system based on characteristics of the kidney using a model similar to the KDRI (13). The current system assigns ECD kidneys first to candidates willing to accept them. Kidneys from non-ECD donors are assigned to the waiting list as standard criteria donor kidneys. The proposed system will generate a kidney donor profile index (KDPI) score based on donor characteristics only (Table 1). Donor kidneys with the lowest KDPI score, representing the longest predicted survival time, are assigned to candidates with the longest estimated posttransplant survival. Kidneys with a KDPI score ≤ 20% would be offered first to candidates with the longest 20% estimated posttransplant survival, then to the remaining candidate pool.
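The KDPI described above is a percentile: a donor's KDRI ranked against a reference population of recent donors. A sketch of that mapping is below; the reference scores here are assumed for illustration, whereas OPTN derives the actual mapping from the prior year's national donor pool.

```python
def kdpi_percentile(kdri: float, reference_kdris: list[float]) -> float:
    """Percent of reference-population donors whose KDRI is at or
    below this donor's KDRI. A lower KDPI indicates longer
    predicted allograft survival."""
    at_or_below = sum(1 for r in reference_kdris if r <= kdri)
    return 100.0 * at_or_below / len(reference_kdris)
```

Under this mapping, a kidney with KDPI ≤ 20% is of higher predicted quality than at least 80% of reference donors, which is the group offered first to candidates with the longest estimated posttransplant survival.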
The KDPI is based on an average- or median-quality donor, not on an ideal reference donor like the original KDRI (12). Using an average donor in the KDPI score allows clinicians to estimate allograft survival compared with general kidney donors rather than a reference donor. This score provides a more applicable risk estimate based on current donor characteristics, not on characteristics of a reference donor using data from 1995–2005. However, if the reference donor population is updated annually, the KDPI value from year to year may not be the same for donors with similar risk of allograft failure.
The proposed change to the OPTN/UNOS allocation system, transitioning from ECD to KDPI, would address the continued shortage of donor kidneys. The goal is to decrease discard rates of marginal kidneys, which would increase kidney availability. While the KDPI has marginal predictive power in the middle quartiles, the highest and lowest quartiles have been shown to be highly predictive of kidney transplant outcomes, especially compared with ECD. Use of donor risk indices in kidney transplantation led to increasing interest in a similar method of evaluating donor factors to predict allograft survival in liver transplantation.
Over the last twenty years, survival after liver transplant has steadily improved (14). However, given the wide gap between donor organ availability and patients in need of transplant, use of marginal, high-risk or ECD organs has increased (15). Though priority in liver allocation is based on the model for end-stage liver disease (MELD) score, donor/recipient matching occurs at the time of organ procurement and transplant, and substantial selection is involved in accepting an organ (16). Identifying donor-related factors that portend poor posttransplant outcome and analyses to guide use of organs according to donor characteristics have become increasingly important (17), especially given that donor characteristics and medical management vary by region and organ procurement organization, and may affect posttransplant outcomes (18).
The most important donor factor is age, which has repeatedly been shown to be a significant predictor of allograft failure and posttransplant deaths (17,19–22). This is especially true for patients undergoing transplant due to hepatitis C virus (HCV); outcomes are significantly worse for patients with HCV who receive livers from older donors. Less is understood about the effect of older donor allografts, especially regarding long-term outcomes, in non-HCV recipients. Type of donor is also important; use of DCD livers is associated with increased risk of posttransplant allograft failure (23,24).
Several mathematical models have been proposed to identify predictors of allograft and patient survival after liver transplant. The MELD score is an excellent predictor of wait-list mortality, but a suboptimal predictor of posttransplant allograft and patient survival because of donor, recipient, and transplant characteristics and unpredictable posttransplant events (e.g., patient compliance, allograft primary non-function, hepatic artery thrombosis). Objective parameters that quantify the risk associated with donor organs are actively sought.
In their seminal paper, Feng et al (19) discussed the concept of the LDRI. They used data from adult deceased donor liver transplants in the US from 1998 to 2002 to identify factors associated with allograft failure, and examined thirty donor-associated characteristics. After adjustment for recipient and transplant factors that may affect allograft failure, a set of donor characteristics significant in multivariable modeling was derived. The original report identified seven donor characteristics that were significantly associated with liver allograft failure (Table 2). The final LDRI model also included regional/national share and cold ischemia time. Donor age > 60 years, DCD livers, and partial/split livers were associated with the highest risk of allograft failure. Livers from African American donors were associated with a 19% higher risk of allograft failure than livers from white donors. Using a reference donor (age < 40 years, death due to trauma, white race, cold ischemia time ≤ 8 hours, height 170 cm, local organ procurement, whole non-DCD organ), several combinations of donor characteristics were examined. Allograft survival rates correlated with increasing LDRI. Allograft survival was highest with the reference donor (LDRI ≤ 1, 20% of transplants); 1-year survival was 87.6% (86.6%–88.7%) and 3-year survival 81.2% (79.9%–82.6%). One-year survival of an organ with LDRI ≥ 2 (6% of transplants) was 71.4% (68.8%–74.1%) and 3-year survival 60.0% (56.9%–63.2%). The authors also reported that allografts with higher LDRI were likely used for low disease severity recipients (MELD score 10–14).
The immediate impact of the LDRI was an appreciation of the importance of donor factors and their influence on survival (Figure 2). In an analysis of transplant recipients 2005–2006, 3-year survival ranged from 83% to 66% with increasing LDRI. The LDRI provided the transplant community a common language, akin to the MELD score, for describing donor organ characteristics. It allowed transplant teams to formally consider variables in donor-recipient matching that previously were considered intuitively at the time of organ procurement and transplant. It allowed formal assessment of risks posed by a particular allograft and potential risk of death if an organ were declined (19). Further, it allowed for standardized assessment of transplant practices. Subsequent analyses confirmed the importance of the LDRI (25,26). Maluf et al (25) examined use of ECD livers (LDRI ≥ 1.7) transplanted 2002–2005. These high-risk donor livers were associated with a significant increase in relative risk of allograft failure in each MELD category.
While the LDRI serves an important role in assessing donor quality, much work remains to validate and optimize it. First, the LDRI was derived from data available in the pre-MELD era. Because the MELD-based allocation system represented a fundamental change in the practice of liver transplant, the LDRI should be examined with a modern, independent data set. Given that characteristics of currently wait-listed candidates differ from characteristics of patients who underwent transplant in the prior decade, changes in the significance and relative weighting of included variables are likely. Even for the highly vetted MELD score, refitting the score coefficients using an updated dataset produced several changes in the relative importance of the variables (27). Second, most of the predictive ability of the LDRI is derived from donor age, which single-handedly explains a significant amount of variability in posttransplant outcomes (25,28).
Third, certain variables included in the model primarily due to statistical significance during multivariable modeling lack biological plausibility. Donor race should not be construed as an indicator of donor quality (17). Several factors confound the association between donor race and allograft failure, including transplant center, transplanting donor organs too small for the recipient body size, and transplanting hepatitis B core-positive organs into hepatitis B-naive recipients. Using an updated dataset (January 2003–December 2005), the risk of allograft failure associated with African American donor race was lower and no longer significant once transplant center was considered. Compared with the 19% elevated risk associated with African American donor organs in the original LDRI, after appropriate adjustment the elevated risk was only 5% and no longer significant. Further, an interaction between donor and recipient race was observed, with variable rates of allograft failure in separate donor/recipient pairs. Hence, assignment of a singular risk for all donor/recipient pairs by race has been shown to be misleading (17,29).
Fourth, other variables, such as cause of death and regional or national sharing, do not have a consistent negative impact on allograft survival (17,30). Other donor variables not included in the LDRI have been identified as important predictors of allograft failure (31). The LDRI was derived from a retrospective use of SRTR data primarily designed to study recipient characteristics, and is limited in number of variables and extent of reporting of the collected data (32). Hence, unknown effects of missing variables (e.g., macrosteatosis on donor biopsy) have been addressed by several authors (17,28).
Finally, derivation of the LDRI involved consideration of approximately 60 variables; inclusion or exclusion of variables was driven by statistical modeling. This approach makes analyzing important interactions more difficult, and may inadvertently ignore collinearity among variables that explain the same effect. Clearly, much work is needed to refine and validate measures to objectively gauge the quality of donated organs. Using donor factors in isolation may give the LDRI poor predictive value (33). The effect of a high-risk donor is likely modified by important recipient characteristics such as HCV status (25,34).
An indirect benefit of better assessment of donor quality with the LDRI (and assessment of recipient mortality risk with the MELD score) is the ability to fine-tune distribution of a scarce resource. Further, practices that appear sound but may instead be detrimental to overall posttransplant outcomes can be examined (35,36). The LDRI has allowed the transplant community to further assess the organ allocation process and refine it to serve the needs of liver transplant patients.
Donor age is one of the most significant drivers of the LDRI, and it is evident that HCV recurrence and subsequent allograft failure is more likely for older-donor liver allografts (28). Maluf and colleagues showed that as LDRI increases, rates of allograft failure and death increase more in recipients with HCV than in non-HCV recipients (25). This difference persists even after adjustment for several recipient factors, including MELD score. In this study, much of the effect of LDRI (70%) was explained by donor age. Several reports have examined the interaction between donor age and HCV status. Schaubel et al (34) showed that in non-HCV recipients, the hazard ratio for allograft failure with donor age ≥ 60 years is 1.44, increasing to 2.03 if the recipient is HCV positive, a more than twofold increase in posttransplant mortality risk. Again, the LDRI led to formal analysis of the effect of donor characteristics, namely donor age and its negative implications, and supported a global practice change toward transplanting organs from younger donors into recipients with HCV.
Use of organs with high LDRI is associated with increased hospital costs independent of recipient risk factors (37). Across each MELD score category, resource utilization and hospital lengths of stay increase with increasing LDRI. In addition, the combination of high LDRI and high MELD is associated with the highest cost, albeit with acceptable posttransplant survival.
Volk et al examined donor/recipient matching in the MELD era (38). In the post-MELD era, the overall quality of organs (as quantified by the LDRI) has decreased, and higher-risk organs are increasingly transplanted into less urgent candidates, leading to worse outcomes and reduced posttransplant survival in recent years among patients with low MELD scores. Similarly, Schaubel et al showed that high-LDRI organs were more often transplanted into lower-MELD recipients and vice versa. The lowest MELD category recipients (scores 6–8) who received high-LDRI organs experienced significantly higher mortality (hazard ratio 3.70; P < 0.01) than if they had waited for a lower-LDRI organ (39). This led to a paradigm shift; high-risk organs are less frequently transplanted into low MELD-score recipients. Others have confirmed the detrimental effect of transplanting ECD organs, defined by elevated LDRI, into low MELD-score (< 15) recipients (40,41). An alternative conclusion is that high-risk organs should be transplanted into candidates who face high mortality risk without transplant, and therefore can benefit substantially from transplant (28).
Objective characterization of donor risk allows for examination of geographic disparity in donor quality. Regions with the longest wait times tend to transplant organs with higher LDRI. Differences in donor quality among the 11 OPTN/UNOS regions lead to disparate rates of allograft survival (42). Recently, even center-based differences in posttransplant outcomes have been examined as a function of donor quality. Despite adjustment for geography and patient characteristics, including disease severity, quality of donor organs differs between centers (LDRI range 1.74–2.37). Posttransplant mortality tends to be higher at centers using higher-risk organs (hazard ratio 1.10 per 0.1 increase in mean LDRI), implying that outcomes for liver transplant candidates may be variable between centers (43). Conversely, a separate analysis concluded that patient and allograft survival were better at high-volume centers, despite use of high-risk donors (higher LDRI) (44,45). Regardless, a center effect in allograft failure is apparent, even after adjusting for LDRI (46). Hence, factors other than those included in the LDRI may play a role.
Isolating donor characteristics from the milieu of factors that may influence posttransplant outcomes is difficult. Several authors have attempted to identify predictors of allograft failure and objectively characterize donor/recipient matching. One example is a model derived from 4 donor characteristics (age, cold ischemia time, sex, and race/ethnicity) and 9 recipient characteristics (age, body mass index, MELD score, OPTN/UNOS priority status, sex, race/ethnicity, diabetes mellitus, cause of liver disease, and serum albumin) (21). Separate models were developed to predict posttransplant survival in patients with and without HCV. Older donors (age > 75 years) and split-liver recipients were excluded, in contrast to the LDRI. More than 60 variables were considered. Risk of death was substantially different for high-risk and low-risk recipients; 1-year survival varied from 53%–96% based on the combination of donor and recipient factors. Within the dataset, the importance of donor characteristics (age, race, sex, and cold ischemia time), designated as the Score of Liver Donor (SOLD), was directly related to posttransplant survival; the higher the score, the lower the survival (21). However, the score was derived primarily from the pre-MELD era and considered variables that may lack biological plausibility (e.g., donor race). Further, the performance characteristics of the model (e.g., c-statistic) were not provided.
Similarly, Rana et al identified 13 recipient factors, 4 donor factors, and 2 operative factors (warm and cold ischemia time) as significant predictors of recipient mortality at 3 months posttransplant (33), using MELD-era data and including retransplants. The Survival Outcomes Following Liver Transplant (SOFT) score, using 18 risk factors (excluding warm ischemia time), successfully predicted 3-month recipient survival. The SOFT score included the MELD score at time of transplant categorized by MELD score > 30 or < 30. In their analysis, the concordance statistic was 0.63 for the MELD score and 0.70 for the SOFT score in predicting 3-month mortality after liver transplant. In comparison, the MELD score c-statistic is above 0.85 in predicting wait-list mortality (27). Donor race was not a significant predictor in this study. Concerns similar to those outlined above and complex statistical modeling limit its wide application. Further, longer time points would be needed to judge successful transplants; 3-month mortality estimates may be highly influenced by peri-operative factors, which may be indirectly related to transplant center characteristics.
The original LDRI does not include patients undergoing retransplant. Northup et al examined all retransplants performed in the US since 2002 (26). The LDRI was a significant predictor of overall mortality (hazard ratio 2.2, 1.63–2.94). Adding cause of allograft failure to the LDRI increased the risk of mortality (2.49, 1.89–3.27). Surprisingly, in patients with HCV as a component of allograft failure, use of a high-risk organ did not independently influence overall survival (26). This finding, if confirmed, would represent a significant shift in our understanding of mortality risk after retransplant.
The c-statistic for concordance between the LDRI and KDRI is 0.80. This is not surprising as both indices use similar factors (Tables 1 and 2). Both indices include donor demographics (age and African American race), donation after cardiac death, donor size (height and weight in the KDRI, height and partial/split liver in the LDRI), and stroke as cause of donor death. These factors all work in the same direction in both indices. Therefore, in a recent analysis, the c-statistic for the KDRI for predicting outcomes after liver transplant was similar to the c-statistic for the LDRI (c-statistic 0.57) (47).
The KDRI differs from the LDRI in that it incorporates more kidney-specific comorbid conditions that can affect kidney function (donor diabetes and hypertension) and the intended recipient (donor hepatitis C serostatus). The KDRI also incorporates donor kidney function through measurement of donor serum creatinine. Therefore, OPTN/UNOS is considering using the KDRI in a future kidney allocation system (13). Whether the liver transplant community will use the LDRI in a future allocation system remains to be seen.
Development of mathematical models that predict risk of allograft failure took a giant leap after introduction of the LDRI and KDRI. Despite their limitations, these models allow us to quantify and qualify the risks associated with use of higher-risk donor organs, and allow for standardized assessment of practices across the transplant community. However, the models can be improved. A rigorously vetted donor information database should be created and data collected prospectively to quantify the risk associated with high-risk donors (32). This would provide an objective element to donor/recipient matching that occurs in the middle of the night, and help improve posttransplant outcomes (14). However, variables based on clinical judgment, not simply statistical significance, should be used; this would be possible in any large dataset. Careful evaluation is needed before defining a characteristic as high risk, to forestall a slippery slope in which organs with certain characteristics (e.g., African American donor) are considered inferior, transplanted into high-risk recipients, and eventually associated with poor outcomes, culminating in a vicious cycle that would be hard to disprove in future analyses. Neither the KDRI nor the LDRI accounts for donor risks of transmitting viral infections, such as human immunodeficiency virus, hepatitis B, or HCV, as determined by the Centers for Disease Control and Prevention criteria for high-risk donors (48,49). Allograft survival of organs from donors with higher risk for transmitting infections has been found to be better than survival of high-risk organs as determined by KDRI (50). However, the largest benefit derived from indices of donor risk is the opportunity for better discussion with patients and informed deliberation between physicians and transplant candidates regarding the importance of factors that affect posttransplant results (51).
Providing patients with donor risk data should be an important part of informed consent. Minimally, this means providing the most accurate information regarding relative risks of accepting a higher-risk organ compared with risks of remaining on a waiting list. Such information can help guide decisions by physicians and transplant candidates regarding donor acceptance criteria. In turn, this will facilitate expeditious placement of high-risk organs and maximize organ utilization.
This work was supported by the Scientific Registry of Transplant Recipients. The Scientific Registry of Transplant Recipients is funded by a contract from the Health Resources and Services Administration (HRSA), US Department of Health and Human Services. The views expressed herein are those of the authors and not necessarily those of the US Government. This is a US Government-sponsored work. There are no restrictions on its use. This study was approved by HRSA’s SRTR project officer. Sanjeev Akkina was supported by Grant Number K23DK084121 from the National Institute of Diabetes and Digestive and Kidney Diseases.
The authors thank SRTR colleagues Shane Nygaard, BA, for manuscript preparation, and Nan Booth, MSW, MPH, ELS, for manuscript editing.
The authors have no conflicts of interest related to the subject matter of this article.