National transplant data collected by the Association of Organ Procurement Organizations (www.aopo.org, accessed June 21, 2012) reveal a cardiac allograft utilization rate (number of hearts accepted for transplantation divided by the total number of donors) of 28.2%–30.1% from 2009 to 2011. Many reasons exist for discarding donor hearts, including older donor age, small size, co-morbidities, logistical issues, left ventricular hypertrophy, and donor left ventricular dysfunction. Unfortunately, the current criteria for acceptance of donor hearts are poorly standardized and are often based on retrospective single-center studies and anecdotal experience. Indeed, large registry analyses have shown that very few donor characteristics have a significant impact on recipient outcomes,32, 33 suggesting that recipient factors figure more prominently in the risk of death after heart transplantation.
Our aim in conducting this study was to closely examine current practices with respect to cardiac allograft acceptance in a contemporary cohort of potential organ donors. During the eight-year period examined, we found that utilization rates decreased by an average of 4.2% per year, from a high of 56% in 2002 to a low of 37% in 2007. This decline in allograft acceptance occurred despite a decreased incidence of donor risk factors for non-utilization, such as death due to cerebrovascular accident and older age. While utilization rates in our donation service area are higher than the national average, the significant decline over time suggests increasing avoidance of risk, a practice that could lead to longer waiting list times and increased waiting list mortality. One explanation for the trend toward avoidance of “high risk” donors could be advances in mechanical circulatory support (particularly ventricular assist devices, VADs) as a bridge to transplantation. Improvements in VAD design and reductions in device size have allowed for predictable clinical stabilization of a growing population of end-stage heart failure patients,1, 34
even those who are critically ill. The concern with adopting this approach indiscriminately is that bridging devices are expensive and can double the cost of an already expensive procedure. VAD implantation also exposes patients to the risk of additional surgical procedures, infections, and development of HLA antibodies,35 which may contribute to poorer outcomes after heart transplantation.36, 37
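The average annual decline in utilization reported above can be estimated as the slope of a least-squares fit of yearly utilization rates on calendar year. A minimal sketch follows; the yearly rates here are illustrative placeholders, not the study's actual data:

```python
import numpy as np

# Hypothetical yearly utilization rates (%) over an eight-year window;
# illustrative values only, chosen to show a declining trend
years = np.arange(2000, 2008)
rates = np.array([52.0, 54.0, 56.0, 50.0, 46.0, 44.0, 40.0, 37.0])

# Average annual change estimated as the slope of a degree-1 fit
slope, intercept = np.polyfit(years, rates, 1)
```

A negative slope corresponds to the declining utilization trend described in the text; regressing on year rather than simply differencing endpoints uses all years of data and is less sensitive to a single anomalous year.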
We first examined the associations between the eleven donor risk factors identified a priori, based on our review of the literature, and allograft utilization. All of these risk factors except high dopamine requirement and cocaine/methamphetamine use significantly predicted non-utilization. The total number of donor risk factors was also strongly associated with non-utilization.
Recognizing that other donor characteristics may influence allograft acceptance decisions, we used all available donor covariate data to construct a Random Forest algorithm. Random Forest models confer several advantages: they can handle a large number of input variables, they provide estimates of which variables are important in the classification, and they can produce a highly accurate classifier. In addition, they validate internally through out-of-bag error estimation, obviating the need for separate cross-validation. This model had excellent discrimination, with a c-statistic of 0.86.
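The modeling approach described above can be sketched as follows. This is a minimal illustration using scikit-learn on synthetic data; the covariates and their relationships to utilization are hypothetical stand-ins, not the study's actual variables or pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000

# Synthetic donor covariates (hypothetical examples, not the study data)
X = np.column_stack([
    rng.normal(45, 15, n),   # donor age (years)
    rng.normal(55, 10, n),   # LV ejection fraction (%)
    rng.integers(0, 2, n),   # death due to cerebrovascular accident (0/1)
    rng.integers(0, 2, n),   # diabetes (0/1)
])

# Synthetic utilization label loosely tied to the covariates
logit = -0.05 * (X[:, 0] - 45) + 0.08 * (X[:, 1] - 55) - 1.0 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Random Forest with out-of-bag scoring: each tree is evaluated on the
# bootstrap samples it never saw, giving internal validation without a
# separate cross-validation split
model = RandomForestClassifier(n_estimators=500, oob_score=True,
                               random_state=0)
model.fit(X, y)

# Variable importance ranking and discrimination (c-statistic = AUC)
importances = model.feature_importances_
c_statistic = roc_auc_score(y, model.predict_proba(X)[:, 1])
```

The `oob_score_` attribute plays the role of the internal validation mentioned in the text, while `feature_importances_` supplies the variable-importance ranking; the c-statistic is simply the area under the ROC curve for the predicted utilization probabilities.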
After identifying donor predictors of allograft non-utilization, we examined the associations between these predictors and recipient outcomes. Only cerebrovascular accident as the donor cause of death marginally predicted prolonged recipient post-transplant hospitalization, and diabetes was the only donor predictor of increased recipient mortality. These findings concur with prior studies demonstrating the relatively small contribution of donor characteristics to post-transplant adverse events.4, 38, 39
One criticism of studies examining the influence of donor characteristics on recipient outcomes is that of selection bias. Allografts with many undesirable features are rarely accepted for transplantation, making it difficult to determine whether the recipient outcomes would have been acceptable. On the other hand, grafts with one or two unfavorable characteristics (such as reduced left ventricular systolic function) may be accepted if the graft is otherwise favorable (e.g. from a young and healthy male donor). Thus, it is difficult, if not impossible, to determine the relative contributions of unfavorable characteristics to recipient events. In other words, the grafts with undesirable characteristics are very carefully chosen to minimize the risk of adverse events. This phenomenon, combined with the relatively low mortality within 30 days to 1 year post-transplant, may paradoxically make it seem as though unfavorable characteristics are associated with improved recipient outcomes.
We used the excellent discriminative ability of our Random Forest model to predict which allografts in our cohort “would” and “would not” have been accepted for transplantation, using a threshold of >50% predicted probability of utilization. By this criterion, 36% of the allografts actually accepted for transplantation had a <50% predicted probability of utilization given their donor characteristics. We once again examined recipient post-transplant outcomes, this time stratified by the predicted probability of allograft acceptance, and were unable to identify any significant differences in recipient survival. Thus, receipt of an allograft that was unlikely to be used for transplantation, based on combinations of unfavorable donor characteristics, did not result in adverse recipient events. Finally, donor characteristics in aggregate did not prove to be a reliable predictor of recipient outcomes.
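The classification step described above amounts to thresholding the model's predicted probability of utilization at 50%. A brief sketch with synthetic data (the model, covariates, and labels are hypothetical placeholders, not the study's fitted model):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Hypothetical donor covariates and synthetic utilization labels
X = rng.normal(size=(500, 4))
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)

# Predicted probability of utilization for each donor
p_util = model.predict_proba(X)[:, 1]

# Classify allografts as "would" vs "would not" have been accepted,
# using the >50% predicted-probability threshold described in the text
would_accept = p_util > 0.5

# Among allografts actually accepted (y == 1), the fraction the model
# deemed unlikely (<=50% probability) to be utilized
frac_unlikely = np.mean(p_util[y == 1] <= 0.5)
```

Stratifying accepted allografts by `would_accept` then allows recipient outcomes to be compared between grafts the model favored and grafts it did not, which is the comparison underlying the survival analysis in the text.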
This study has significant strengths and limitations. First and foremost, this represents the largest existing research database of detailed, rigorously adjudicated clinical data on over 1,800 potential organ donors managed in a standardized fashion in the current era. The recipient outcomes selected for analysis are robust, as all heart transplant centers in the United States are required to report metrics such as length of post-transplant hospitalization and recipient survival to UNOS.

The primary limitation is the observational nature of the data. We are reluctant to make any causal statements about the relationship between donor characteristics and recipient outcomes. There are likely selection effects that may explain the relatively positive outcomes among recipients of the less desirable allografts. Also, reasons for allograft discard were not documented in the donor medical record. Thus, it is possible that reasons other than donor characteristics (lack of a suitable recipient, time considerations, donor family preference) could have accounted for some cases of non-utilization. Nonetheless, we were still able to identify a number of strong donor predictors of allograft non-utilization. Several important predictors were based on echocardiography: reduced left ventricular systolic function, left ventricular regional wall motion abnormalities, and left ventricular hypertrophy. However, donor echocardiograms were interpreted at local hospitals and were not centrally reviewed; we therefore cannot verify the accuracy of echocardiogram interpretation and measurements. Another limitation lies in the fact that relatively few recipients died within the first year post-transplant; this study may therefore have been underpowered to detect subtle influences of donor characteristics on post-transplant outcomes.
Finally, this study was limited to donors managed by CTDN and may not be generalizable to heart transplant procedures performed throughout the United States or worldwide.
In conclusion, we have demonstrated the persistent low utilization of available cardiac allografts for transplantation in the face of a national organ donor shortage and a growing number of patients with end-stage heart disease. We identified predictors of allograft non-utilization and demonstrated that the anticipated relationship between these donor predictors and adverse recipient outcomes was not seen in our heart transplant cohort. These findings support the following statement recently put forth by the National Heart, Lung, and Blood Institute regarding the next decade of heart transplantation research: “Without clear evidence about the outcomes associated with different donor characteristics informing the donor selection process, it is probable that many potentially useful organs are currently being discarded. Because an important rate limiting factor in [heart transplantation] is the number of available donor organs, studies that define how to optimize donor use and develop biomarkers to define organ utility might increase the donor pool by providing evidence that would support the use of those organs deemed to be less than perfect.”40
The field of heart transplantation would therefore benefit greatly from prospective, multi-center trials studying the effects of liberalizing allograft acceptance criteria.