This study yields several conclusions that may inform lung transplant allocation decisions. First, the decision to offer an SLT or BLT procedure to a patient with COPD can have important effects on the availability of donor organs for other potential recipients. Second, no single allocation strategy is optimal under all circumstances; rather, the optimal strategy depends on factors that vary across regions and time periods. Third, the selection of an allocation strategy for patients with COPD depends in part on which of two measures of effectiveness is prioritized: the total number of potential recipients who survive to transplantation or the total post-transplant survival of those who are transplanted.
The decision analysis demonstrates that an allocation strategy of SLT results in greater equity of access because it maximizes the number of people who receive lung transplantation under all plausible circumstances. If organs are scarce, a policy that allocates one lung to each of two recipients must increase the total number of patients transplanted compared with a policy that allocates both lungs to a single recipient. However, estimating the magnitude of this difference is both complex and essential to the development of more effective allocation policies. Because patients competing with patients with COPD for donor organs have, on average, LAS scores that predict a low risk of waitlist mortality, many policymakers may have assumed that allocation decisions for patients with COPD would not appreciably affect overall waitlist mortality. By contrast, this study demonstrates that by prolonging waiting times, often by several donor cycles, for every patient listed below a transplanted patient with COPD, a policy of BLT for COPD increases the risk of waitlist mortality for potential single and bilateral recipients with many different diseases. Indeed, a policy of SLT in the base model resulted in an absolute reduction in the risk of waitlist mortality of 4.2% among all listed patients.
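The queueing intuition behind this result can be sketched in a toy simulation. All parameters here (donor count, waitlist size, per-cycle mortality risk) are illustrative assumptions, not the study's actual model inputs; the sketch shows only the direction of the effect, namely that serving two recipients per donor clears the list faster and leaves fewer patients exposed to waitlist mortality.

```python
import random

def simulate(policy_blt: bool, donors: int = 100, waitlist: int = 120,
             p_death_per_cycle: float = 0.01, seed: int = 0):
    """Toy queue model: each donor supplies two lungs. Under BLT the patient
    at the head of the list receives both; under SLT the top two patients
    each receive one. Every patient still waiting faces a per-cycle risk of
    death. Returns (transplanted, died_waiting). Hypothetical parameters."""
    rng = random.Random(seed)
    waiting = waitlist
    transplanted = died = 0
    for _ in range(donors):
        recipients = 1 if policy_blt else 2   # recipients served per donor
        served = min(recipients, waiting)
        transplanted += served
        waiting -= served
        # everyone left on the list risks death during this donor cycle
        deaths = sum(rng.random() < p_death_per_cycle for _ in range(waiting))
        died += deaths
        waiting -= deaths
    return transplanted, died

t_slt, d_slt = simulate(policy_blt=False)
t_blt, d_blt = simulate(policy_blt=True)
# With identical donors and mortality risk, SLT transplants more patients
# and fewer die waiting.
assert t_slt > t_blt and d_slt < d_blt
```

The magnitude of the difference depends on how scarcity and mortality risk interact over many cycles, which is why estimating it requires a full decision model rather than this back-of-the-envelope queue.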
However, in many circumstances, a strategy of BLT promotes at least one important measure of effectiveness in that it maximizes post-transplant survival among recipients. Both the number of lives saved and the total number of life-years gained are desirable and morally relevant outcomes (22). Indeed, the LAS was designed explicitly to balance these considerations at the level of the individual recipient (2). Although this study cannot define the "correct" balance between waitlist mortality and long-term survival, the results illustrate how allocation decisions made for individual recipients shift this balance across a population. Under such circumstances, the traditional focus on the individual recipient may need to be tempered by consideration of the effects of each allocation decision on outcomes for other listed patients. One consequence of this shift in focus is that gains in access to transplantation may be accompanied by reduced post-transplant survival for younger patients with COPD, because equivalent total post-transplant survival is divided among a larger pool of recipients.
The finding that the magnitude of the tradeoff between post-transplant survival and waitlist mortality is strongly influenced by factors known to vary across regions and time periods, such as donor availability and waitlist length, leaves room for reasoned debate about which circumstances warrant which allocation strategy. However, the results of the base model suggest that a default policy of allocating a single lung to patients with COPD results in greater access to lung transplantation for waitlisted patients without any decrement in total post-transplant survival. Exceptions to this default policy may be appropriate for centers with a history of high donor frequency and short waitlists. However, as more patients become eligible for transplantation, these exceptions will become more difficult to justify unless the availability of donor organs increases correspondingly.
The sensitivity of the optimal strategy to the relative survival benefit of BLT compared with SLT also has implications for allocation strategies as age restrictions are relaxed and older patients more commonly receive lung transplantation (12). Because BLT (vs. SLT) does not seem to benefit individual patients with COPD older than 60 years (18), this trend of transplanting progressively older patients has the potential to make SLT a uniformly dominant allocation strategy from the societal perspective.
There are several potential limitations to this study. First, only the uniform policies of SLT or BLT for patients with COPD are considered, rather than the current practice of clinicians making case-by-case judgments. Similarly, based on existing data demonstrating equivalent survival after SLT or BLT for patients with IPF (14), it is assumed that all patients with IPF are eligible to receive a single lung. Case-by-case judgments could prove optimal for society, and there will invariably be individual exceptions to any allocation policy. Nonetheless, by demonstrating the potential effects of individual allocation decisions on societal goals across a broad range of plausible circumstances, this study shows that SLT for COPD can improve access to transplantation and may be an appropriate default policy for most patients listed with COPD.
Second, we have modeled survival without considering quality of life. Existing studies have demonstrated that the development of chronic rejection is associated with a decrement in post-transplant quality of life (26) and that single lung recipients develop chronic rejection earlier than bilateral lung recipients (16). However, existing data have not demonstrated a statistically significant difference in quality of life after the two procedures (29). If future studies demonstrate that long-term quality of life is significantly better after BLT than after SLT, then these results may underestimate the post-transplant benefits of a BLT strategy. Conclusions based on the numbers of lives saved by the two strategies, however, would not change.
Third, the model did not allow listed patients to descend in priority as sicker patients entered the list. Because such decrements in rank would prolong waitlist times and thereby increase the risk of dying before transplantation, this approach underestimates the benefits of SLT. Similarly, the model did not allow patients to skip to the top of the waitlist directly as might occur with rapid disease progression. Although allowing such movement would seem to reduce waitlist mortality, the model accomplished the same goal by basing LAS-specific mortality risks on observed rather than expected outcomes. Further, reordering patients on the waitlist would not alter the population's accrued waitlist time, and therefore would not change the model's estimates of overall waitlist mortality.
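The conservation argument in this paragraph, that reordering patients changes who waits but not the total person-time the population accrues, can be checked with a toy example (one transplant per donor cycle; the names and list size are hypothetical):

```python
def accrued_person_cycles(queue: list) -> int:
    """Total person-cycles spent waiting when the head of the queue is
    transplanted once per donor cycle. Toy model, not the study's model."""
    waiting = list(queue)
    total = 0
    while waiting:
        total += len(waiting)   # everyone still listed waits through this cycle
        waiting.pop(0)          # head of the queue is transplanted
    return total

# Reordering changes which individuals wait longest, but the population's
# accrued waitlist time is identical: 4 + 3 + 2 + 1 = 10 person-cycles.
assert accrued_person_cycles(["p1", "p2", "p3", "p4"]) == 10
assert accrued_person_cycles(["p3", "p1", "p4", "p2"]) == 10
```

Because overall waitlist mortality in the model is driven by accrued person-time at LAS-specific risks, reordering alone leaves the population-level estimate unchanged, as the text argues.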
Fourth, the methods used UNOS regions to define input variables, such as donor interval and waitlist length, and to illustrate regional variations in expected outcomes. UNOS regions do not match the realities of current policy, whereby donated organs are allocated locally first and then in 500-mile concentric circles around a specific center. However, it was not feasible to calculate donor intervals, waitlist lengths, or distributions of recipient characteristics for each possible donor site and transplant center using existing data. Furthermore, the sensitivity analyses capture the range of plausible input variables, and so the regional variations illustrated here are likely to represent the geographic variation that exists under current allocation policies.
Fifth, for simplicity of illustration, we modeled only differences in donor availability to show regional variations in waitlist mortality and post-transplant survival. Other prognostically important variables, such as waitlist length or the age composition of listed patients, are also dynamic over time. Considering these variations simultaneously would strengthen the conclusion that current and local characteristics should be considered in allocation decisions.
Finally, the model did not consider pretransplant survival time in calculating effectiveness. Conceptually, actuarial survival would be maximized by prolonging pretransplant survival as long as possible, even if this resulted in a moderate reduction in post-transplant survival that was offset by the increased waiting time. However, the value of increasing pretransplant survival is unclear, and at some point prolonging pretransplant survival worsens post-transplant survival, an outcome that few would find appropriate even if total survival were increased (30).
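This conceptual point can be illustrated with hypothetical numbers: because total actuarial survival is simply the sum of pretransplant waiting time and post-transplant survival, a policy could "buy" total survival by prolonging waiting even while post-transplant survival deteriorates.

```python
def total_survival(wait_years: float, post_tx_years: float) -> float:
    """Total actuarial survival = pretransplant waiting + post-transplant
    survival. Illustrative arithmetic only; the figures are hypothetical."""
    return wait_years + post_tx_years

# Waiting one extra year that costs only 0.4 post-transplant years still
# raises total survival (6.6 vs. 6.0), even though few would consider
# deliberately delaying transplantation appropriate care.
assert total_survival(2.0, 4.6) > total_survival(1.0, 5.0)
```

This is why the model excludes pretransplant survival time from its effectiveness measure: including it would reward delays that most clinicians would find inappropriate.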
BLT may offer survival benefits for individual patients with COPD; however, this benefit comes at the cost of increased mortality among other potential recipients awaiting organs from similar donors. Although SLT for patients with COPD always maximizes access, a BLT strategy maximizes the total number of life-years gained post-transplant when waitlists are short, donors are common, or the local survival benefits of BLT compared with SLT are large. In light of these results, the optimal allocation decision may vary from region to region and even from center to center. However, in most circumstances, a policy of SLT for COPD improves access to lung transplantation without a significant decrement in total post-transplant survival. Further research is needed to determine how society should prioritize these competing measures of effectiveness and whether models such as this one can improve complex and time-sensitive allocation decisions.