Rationale: Bilateral lung transplantation (BLT) improves survival compared with single lung transplantation (SLT) for some individuals with chronic obstructive pulmonary disease (COPD). However, it is unclear which strategy optimally uses this scarce societal resource.
Objectives: To compare the effect of SLT versus BLT strategies for COPD on waitlist outcomes among the broader population of patients listed for lung transplantation.
Methods: We developed a Markov model to simulate the transplant waitlist using transplant registry data to define waitlist size, donor frequency, the risk of death awaiting transplant, and disease- and procedure-specific post-transplant survival. We then applied this model to 1,000 simulated patients and compared the number of patients under each strategy who received a transplant, the number who died before transplantation, and total post-transplant survival.
Measurements and Main Results: Under baseline assumptions, the SLT strategy resulted in more patients transplanted (809 vs. 758) and fewer waitlist deaths (157 vs. 199). The strategies produced similar total post-transplant survival (SLT = 4,586 yr vs. BLT = 4,577 yr). In sensitivity analyses, SLT always maximized the number of patients transplanted. The strategy that maximized post-transplant survival depended on the relative survival benefit of BLT versus SLT among patients with COPD, donor interval, and waitlist size.
Conclusions: In most circumstances, a policy of SLT for COPD improves access to organs for other potential recipients without significant reductions in total post-transplant survival. However, there may be substantial geographic variations in the effect of such a policy on the balance between these outcomes.
Bilateral lung transplantation has been shown to improve survival in some individuals with chronic obstructive pulmonary disease (COPD) compared with single lung transplantation. However, the existing pool of organ donors is currently insufficient to provide lung transplantation to all eligible patients. It is not known whether bilateral lung transplantation for COPD may adversely impact access to lung transplantation for other potential recipients.
An allocation strategy of bilateral lung transplantation for COPD was shown in simulation models to increase waitlist mortality for other potential recipients. In some circumstances, however, a bilateral lung transplantation strategy may also increase total post-transplant survival of those undergoing transplantation. The optimal allocation strategy therefore may vary across regions and depends on whether society prioritizes lives saved through transplantation or total survival after transplantation. In most circumstances, however, a policy of single lung transplantation for COPD will improve access to transplantation without a significant decrement in total post-transplant survival.
Lung transplantation is the best treatment available for many patients with end-stage lung diseases (1). However, there are insufficient numbers of standard-criteria donors to provide lung allografts to all patients who would benefit from transplantation. Although the introduction of the Lung Allocation Score (LAS) (2) in May 2005 has shortened the average time spent awaiting lung transplantation (3), potential recipients continue to die on the waitlist (4). To address the lung shortage, institutions have turned to extended-criteria donors (5, 6) and donors after circulatory determination of death (7–11). Despite these strategies, a substantial gap remains between the supply of and demand for transplantable lungs.
If the supply of transplantable lungs cannot be increased sufficiently, an alternative is to compare methods for allocating the existing supply to expand access. In this context, the decision to offer bilateral lung transplantation (BLT), rather than single lung transplantation (SLT), to patients with chronic obstructive pulmonary disease (COPD) merits evaluation. Unlike other disease categories in which survival is similar after SLT and BLT (12–14), recent studies suggest that for individuals with COPD, BLT offers a survival advantage over SLT in younger recipients (12, 15–18). This individual survival benefit coincides with the increasing use of BLT for COPD observed in international registry data (12).
However, by allocating two lungs to a single recipient, a strategy of BLT for patients with COPD could extend the waiting times and increase the risk of death without transplantation for other patients in need of lung transplantation. This effect is most obvious for patients awaiting SLT who could otherwise receive the second donor lung. Because BLT alters the frequency with which patients are transplanted and thus removed from the waitlist, a policy of BLT for COPD also may prolong waiting times and thus affect the waitlist mortality of patients with other diseases awaiting bilateral transplantation.
A tension may therefore exist between allocation strategies that maximize survival for individual patients with COPD (BLT) and those that maximize access for other potential recipients (SLT). We designed this study to quantify this tension by estimating the societal benefits of single versus BLT strategies for patients with COPD, and to identify factors that may vary among geographic regions or over time that influence these estimates of benefit. Some of the results of this study have been presented previously in abstract form (19–21).
We constructed a decision analysis model using TreeAge Pro 2009 Release 1.0.2 (Williamstown, MA). Markov chains were used to simulate the effect of SLT and BLT allocation strategies for COPD on a waitlist of potential recipients (see online supplement). To determine model inputs, we examined a study cohort comprising all patients listed for lung transplantation in the United Network for Organ Sharing (UNOS) Standard Transplant Analysis and Research file between May 4, 2005 (the date of LAS implementation), and February 1, 2008.
After creating the Markov model, the modeled waitlist was populated with simulated patients whose listing diagnoses were assigned based on the distribution of listing diagnoses in the study cohort. We assumed that each available donor would donate two lungs, based on data from our region showing that 864 of 1,011 lung donors (85.7%) successfully donated two lungs during the 2-year period after LAS implementation (unpublished data).
Once on the waitlist, patients remained in their waitlist position for one Markov cycle, defined as the number of days between eligible donors. At the end of each cycle, there were three possible transitions: advancing on the waitlist, death on the waitlist, and transplantation. The probabilities of these three transitions were determined by the patient's starting position on the waitlist and listing diagnosis, the transplant requirements of patients listed in higher positions (BLT or SLT), and the allocation strategy for patients with COPD (see online supplement).
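The cycle logic described above can be sketched in a few lines. The sketch below is a simplified illustration, not the published model: the per-cycle death probabilities, the two-lung donor, and the top-down allocation rule are placeholder assumptions, and the patient records with hypothetical `diagnosis` and `needs` fields stand in for the model's diagnosis- and position-specific inputs.

```python
import random

def run_cycle(waitlist, p_death, lungs_per_donor=2):
    """Advance a simulated waitlist by one donor cycle.

    waitlist: list of dicts with 'diagnosis' and 'needs' (1 or 2 lungs),
              ordered by priority (index 0 = top of the list).
    p_death:  dict mapping diagnosis -> illustrative probability of dying
              during this cycle (placeholder values, not calibrated inputs).
    Returns (remaining, transplanted, deaths).
    """
    # 1. Waitlist deaths during the interval between donors.
    survivors, deaths = [], []
    for patient in waitlist:
        if random.random() < p_death[patient["diagnosis"]]:
            deaths.append(patient)
        else:
            survivors.append(patient)

    # 2. Allocate the new donor's lungs from the top of the list down.
    lungs = lungs_per_donor
    transplanted, remaining = [], []
    for patient in survivors:
        if patient["needs"] <= lungs:
            lungs -= patient["needs"]
            transplanted.append(patient)
        else:
            remaining.append(patient)  # advances one position next cycle

    return remaining, transplanted, deaths
```

Iterating this function over successive donor intervals reproduces the general shape of the simulation: a BLT-for-COPD strategy corresponds to listing simulated COPD patients with `needs = 2`, and an SLT strategy to listing them with `needs = 1`.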
We limited the population of potential recipients in our model to patients with the five most common listing diagnoses during the study period: COPD (including that attributable to α1-antitrypsin deficiency); idiopathic pulmonary fibrosis (IPF); pulmonary hypertension; cystic fibrosis; and sarcoidosis. Other diagnoses were excluded because they were nonspecific, comprised fewer than 2% of listed patients, or both. Patients undergoing retransplantation also were excluded. The size of the regional waitlist to model as the base case was determined by sampling the actual waitlists for each of the 11 UNOS regions on one randomly selected day in each of the 63 2-week periods from September 1, 2005, through February 1, 2008 (see online supplement).
To determine the interval between donors used to define the duration of each Markov cycle, we recorded the number of donors that became available while compatible recipients were waiting, stratified across the 220 combinations of UNOS region (11), blood type (4), and height category (5). We set the baseline donor interval as the median of these observed intervals for blood types A and O because these comprised greater than 85% of the sample (see online supplement).
To calculate the probability of dying while on the waitlist, we used the UNOS sample to generate survival curves for each of the five most common listing diagnoses. For these purposes, we restricted the sample to patients with a listing LAS lower than the median LAS at transplantation among patients with COPD. This ensured inclusion of only those patients who might be affected by decisions to allocate one or two lungs to a COPD patient. We then calculated the daily probability of dying on the waitlist for each listing diagnosis and the probability of death during each of the possible donor intervals (see online supplement).
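If the daily risk is treated as constant across a donor interval (a simplifying assumption for illustration; the authors derived these probabilities from fitted survival curves as described in their online supplement), the interval probability follows from the complement rule:

```python
def interval_death_probability(p_daily, interval_days):
    """Probability of dying at least once during `interval_days`,
    assuming a constant daily death probability `p_daily`.

    Surviving the whole interval requires surviving every day,
    so P(death) = 1 - (1 - p_daily) ** interval_days.
    """
    return 1.0 - (1.0 - p_daily) ** interval_days
```

Longer donor intervals therefore translate directly into higher per-cycle waitlist mortality, which is why the comparison between strategies is sensitive to regional donor frequency.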
To define individual survival after transplantation, data from the International Society for Heart and Lung Transplantation registry were used to define the median survival for IPF after SLT, and for pulmonary hypertension, cystic fibrosis, sarcoidosis, and COPD after BLT. We then used the age-specific adjusted hazard ratios for BLT compared with SLT for patients with COPD from Thabut and colleagues (18) to calculate the expected survival for patients with COPD after SLT (see online supplement).
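As one illustration of how a hazard ratio translates into expected survival, under an exponential survival assumption (a simplification for illustration; the authors' actual derivation appears in their online supplement) a constant hazard ratio simply divides the median survival:

```python
import math

def median_survival_under_hr(median_ref_years, hazard_ratio):
    """Median survival for a comparator arm carrying `hazard_ratio`
    relative to a reference arm with median `median_ref_years`,
    assuming exponential survival.

    Exponential survival: S(t) = exp(-lam * t), median = ln(2) / lam,
    so scaling the hazard by HR divides the median by HR.
    """
    lam_ref = math.log(2) / median_ref_years
    return math.log(2) / (lam_ref * hazard_ratio)

# Illustrative placeholder numbers only: if BLT median survival were
# 6.0 years and the SLT-vs-BLT hazard ratio were 1.3, the implied SLT
# median would be 6.0 / 1.3 years.
```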
Outcomes were calculated in 1,000 listed patients after 1,000 donor cycles to roughly approximate the number of donors each year in the study sample. Outcome measures were (1) the number of patients who received a transplant; (2) the number who died on the waitlist; and (3) the total post-transplant survival among patients transplanted under each strategy. Total transplant survival was calculated as the sum of the expected post-transplant survival for each waitlisted individual surviving to transplantation. This figure therefore reflects both the number of patients transplanted and the expected survival of each individual after transplantation based on their listing diagnosis and, in the case of COPD, the procedure performed. In sensitivity analyses we varied the frequency of donor availability, the survival benefit of BLT compared with SLT for COPD, and the size of the waitlist (see online supplement).
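Total post-transplant survival as defined above is a simple sum of expected survival over the patients who reach transplantation. A minimal sketch, with placeholder survival values that are not the registry-derived figures:

```python
# Hypothetical expected post-transplant survival (years) keyed by
# (diagnosis, procedure); the values are illustrative placeholders,
# not the International Society for Heart and Lung Transplantation data.
EXPECTED_SURVIVAL = {
    ("COPD", "SLT"): 4.5,
    ("COPD", "BLT"): 6.0,
    ("IPF", "SLT"): 4.0,
}

def total_post_transplant_survival(transplanted):
    """Sum expected post-transplant survival over all transplanted
    patients, each given as a (diagnosis, procedure) tuple."""
    return sum(EXPECTED_SURVIVAL[patient] for patient in transplanted)
```

This makes explicit why the outcome reflects two forces at once: the number of patients transplanted and the expected survival each contributes, which for COPD depends on the procedure performed.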
This study was deemed exempt from review by the University of Pennsylvania Institutional Review Board.
In the base-case analysis, an allocation strategy of SLT for COPD resulted in more transplant recipients (809 vs. 758 recipients; difference = 51 or 5.1%; 95% confidence interval [CI], 4.6–5.6%) and fewer deaths on the waitlist (157 vs. 199 deaths; difference = 42 or 4.2%; CI, 3.8–4.6%). The two strategies yielded similar total post-transplant survival (4,586 yr for SLT vs. 4,577 yr for BLT; difference = 9 yr; CI about the difference = −34 to +54 yr).
The strategy of SLT for COPD consistently maximized the number of transplant recipients across sensitivity analyses, but the magnitude of this advantage depended on the duration between available donors (Figure 1). At the shortest tested donor interval, the absolute difference between strategies was 23 recipients (920 vs. 897) or 2.3% (CI, 2.1–2.6%). The magnitude of the difference also declined with smaller waitlist size: from 51 recipients or 5.1% (CI, 4.6–5.6%) at the baseline size of 20 patients, to 31 recipients or 3.1% (CI, 2.7–3.4%) at a waitlist size of 10 patients, and 17 recipients or 1.7% (CI, 1.5–1.9%) at a waitlist size of five patients (Figure 2).
The strategy that maximized total post-transplant survival depended on the duration between available donors and the relative survival benefit of BLT compared with SLT; these factors' influences also were modified by waitlist size (Figures 3 and 4). At the baseline waitlist size of 20 patients, the median donor interval for blood type O yielded greater post-transplant survival with BLT, whereas the median donor interval for blood type A yielded greater post-transplant survival with SLT. Similarly, variations in the benefit to the individual patient of BLT relative to SLT altered the preferred strategy. Two-way sensitivity analyses revealed that as the number of patients on the waitlist was reduced, the BLT strategy maximized post-transplant survival across most tested assumptions.
To illustrate the effects of regional variations in donor availability, the median donor interval actually observed in each region during the study period for blood types A and O was applied to the same simulation of 1,000 potential recipients of average height using regional waitlists of 20 patients (Figure 5). For both blood types, BLT increased waitlist deaths among all 11 regions, and the magnitude of the increase was broad (14–75 excess deaths per 1,000 donors). The impact on total post-transplant survival was more complex. For blood type O, a BLT strategy resulted in decreased overall survival in three regions and increased survival in eight regions (total range, −368 to +215 yr). For blood type A, a BLT strategy resulted in decreased total post-transplant survival in five regions and increased total post-transplant survival in six regions (total range, −368 to +148 yr).
In the baseline model, 80.9% of patients listed in the SLT strategy arm and 75.8% of patients listed in the BLT strategy arm received a transplant, whereas 15.7% (SLT) and 19.9% (BLT) died before transplantation. Among the cohort of patients listed in the UNOS database, 79.5% of patients listed received a transplant and 16.2% of patients died before receiving a donor organ. These results suggest that the baseline model closely approximates the outcomes of listed patients under the existing allocation system.
This study yields several conclusions that may inform lung transplant allocation decisions. First, the decision to offer a single or BLT procedure to a patient with COPD can have important effects on the availability of donor organs for other potential recipients. Second, no single allocation strategy is optimal under all circumstances; rather, the optimal strategy depends on factors that vary across regions and time periods. Third, the selection of an allocation strategy for patients with COPD depends in part on which of two different measures of effectiveness is prioritized: the total number of potential recipients who survive to transplantation or the total post-transplant survival of those who are transplanted.
The decision analysis demonstrates that an allocation strategy of SLT results in greater equity of access because it maximizes the number of people who receive lung transplantation under all plausible circumstances. If organs are scarce, a policy that allocates one lung to each of two recipients must increase the total number of patients transplanted compared with a policy that allocates two lungs to a single recipient. However, estimating the magnitude of this difference is both complex and essential to the development of more effective allocation policies. Because patients competing with patients with COPD for donor organs have, on average, LAS scores that predict a low risk of waitlist mortality, many policymakers may have assumed that allocation decisions for patients with COPD would not appreciably affect overall waitlist mortality. By contrast, this study demonstrates that by prolonging waiting times for every patient listed below a single lung transplant recipient and a patient with COPD, often by several donor cycles, a policy of BLT for COPD increases the risk of waitlist mortality for potential single and bilateral recipients with many different diseases. Indeed, in the base model a policy of SLT resulted in an absolute reduction in the risk of waitlist mortality of 4.2% among all listed patients.
However, in many circumstances, a strategy of BLT promotes at least one important measure of effectiveness in that it maximizes post-transplant survival among recipients. Both the number of lives saved and total number of life-years gained are desirable and morally relevant outcomes (22–24). Indeed, the LAS was designed explicitly to balance these considerations at the level of the individual recipient (2). Although this study cannot define the “correct” balance between waitlist mortality and long-term survival, the results illustrate how allocation decisions made for individual recipients shift this balance across a population. Under such circumstances, the traditional focus on the individual recipient may need to be tempered by considerations of the effects of each allocation decision on outcomes for other listed patients. One consequence of this shift in focus is that gains in access to transplantation may be accompanied by reduced post-transplant survival for younger patients with COPD because equivalent total post-transplant survival is divided among a larger pool of recipients.
The finding that the magnitude of the tradeoff between post-transplant survival and waitlist mortality is strongly influenced by such factors as donor availability and waitlist length that are known to vary across regions and time periods allows room for reasoned debate regarding which circumstances warrant which allocation strategy. However, the results of the base model suggest that a default policy of allocating a single lung to patients with COPD results in greater access to lung transplantation for waitlisted patients without any decrement in total post-transplant survival. Exceptions to this default policy position may be appropriate for centers with a history of high donor frequency and short waitlists. However, as more patients become eligible for transplantation, these exceptions will become more difficult to justify unless there is a corresponding increase in the availability of donor organs.
The sensitivity of the optimal strategy to the relative survival benefit of BLT compared with SLT also has implications for allocation strategies as age restrictions are relaxed and older patients more commonly receive lung transplantation (12, 25). Because BLT (vs. SLT) does not seem to benefit individual patients with COPD older than 60 years (18), this trend of transplanting progressively older patients has the potential to make SLT a uniformly dominant allocation strategy from the societal perspective.
There are several potential limitations to this study. First, only the uniform policies of SLT or BLT for patients with COPD are considered, rather than the current practice of clinicians making case-by-case judgments. Similarly, based on existing data that demonstrate equivalent survival after SLT or BLT for patients with IPF (14), it is assumed that all patients with IPF are eligible to receive a single lung. Case-by-case judgments could prove optimal for society, and there will invariably be individual exceptions to any allocation policy. Nonetheless, by demonstrating the potential effects of individual allocation decisions on societal goals across a broad range of plausible circumstances, this study shows that SLT for COPD can improve access to transplantation and may be an appropriate default policy for most patients listed with COPD.
Second, we have modeled survival without considering quality of life. Existing studies have demonstrated that the development of chronic rejection is associated with a decrement in quality of life post-transplant (26–28), and that single lung recipients develop chronic rejection earlier than bilateral lung recipients (16). However, to date existing data have not demonstrated a statistically significant difference in quality of life after the two procedures (29). If future studies demonstrate that long-term quality of life is significantly better after BLT than after SLT, then the results may underestimate the post-transplant benefits of a BLT strategy. Conclusions based on the numbers of lives saved by the two strategies would not change.
Third, the model did not allow listed patients to descend in priority as sicker patients entered the list. Because such decrements in rank would prolong waitlist times and thereby increase the risk of dying before transplantation, this approach underestimates the benefits of SLT. Similarly, the model did not allow patients to skip to the top of the waitlist directly as might occur with rapid disease progression. Although allowing such movement would seem to reduce waitlist mortality, the model accomplished the same goal by basing LAS-specific mortality risks on observed rather than expected outcomes. Further, reordering patients on the waitlist would not alter the population's accrued waitlist time, and therefore would not change the model's estimates of overall waitlist mortality.
Fourth, the methods used UNOS regions to define input variables, such as donor interval and waitlist length, and to illustrate regional variations in expected outcomes. UNOS regions do not match the realities of current policy whereby donated organs are allocated locally first, and then in 500-mile concentric circles around a specific center. However, it was not feasible to calculate donor intervals, waitlist lengths, or distributions of recipient characteristics for each possible donor site and transplant center using existing data. Furthermore, the sensitivity analyses capture the range of plausible input variables, and so the regional variations illustrated in Figure 5 are likely to represent the geographic variation that exists under current allocation policies.
Fifth, for simplicity of illustration, we only modeled differences in donor availability to show regional variations in waitlist mortality and post-transplant survival. Other prognostically important variables, such as waitlist length or age composition of listed patients, also are dynamic over time. Considering these variations simultaneously would strengthen the conclusion that present and local characteristics should be considered in allocation decisions.
Finally, the model did not consider pretransplant survival time in calculating effectiveness. Conceptually, actuarial survival would be maximized by prolonging pretransplant survival as long as possible, even if the longer waiting times moderately reduced post-transplant survival, so long as the pretransplant gains offset that reduction. However, the value of increasing pretransplant survival is unclear, and at some point prolonging pretransplant survival worsens post-transplant survival, an outcome that few would find appropriate even if total survival were increased (30).
BLT may offer survival benefits for individual patients with COPD; however, this benefit comes at the cost of increased mortality among other potential recipients awaiting organs from similar donors. Although SLT for patients with COPD always maximizes access, a BLT strategy maximizes the total number of life-years gained post-transplant when waitlists are short, donors are common, or the local survival benefits of BLT compared with SLT are large. In light of these results, an optimal allocation decision may vary from region to region and even center to center. However, in most circumstances, a policy of SLT for COPD improves access to lung transplantation without a significant decrement in total post-transplant survival. Further research is needed to determine how society prioritizes these competing measures of effectiveness and whether models such as this can be used to improve complex and time-sensitive allocation decisions.
Supported by grants F32HL092741 (J.C.M.), R01HL087115 and R01HL081619 (J.D.C.), K08HS018406, and a Transplant Registry Junior Faculty Award from the International Society for Heart and Lung Transplantation (S.D.H.).
This article has an online supplement, which is accessible from this issue's table of contents at www.atsjournals.org.
Author Contributions: Each author was directly involved in the design, data acquisition, analysis and interpretation of the data, and drafting or revising the manuscript for intellectual content. All authors approved the final version for submission.
Originally Published in Press as DOI: 10.1164/rccm.201104-0695OC on August 25, 2011