The task of describing the birth of organ transplantation was simplified by an historical consensus development conference convened on March 25–27, 1999 at the University of California, Los Angeles (UCLA). The purpose was to identify the principal milestones leading to the clinical use of the various transplantation procedures. The 11-member consensus panel included eight recipients of the Medawar Prize (the highest distinction of the Transplantation Society) and three Nobel Laureates (Figure 1).
Carl Groth of Stockholm, president-elect of the Transplantation Society, was invited to chair the deliberations and to prepare an executive summary of the group’s conclusions. The summary, and personal reminiscences of the 11 individual participants, were published in the July 2000 issue of the World Journal of Surgery.1 The collection of articles was reprinted in monograph form for distribution to members of the Transplantation Society.
Milestone status was granted only to contributions made at least a quarter of a century ago. By 1975, 32 years had passed since convincing evidence was produced, first by Gibson and Medawar2 in a skin-grafted burn victim in Glasgow and then by Medawar3 in controlled animal experiments, that rejection is a host versus graft (HVG) immune response. Within the next few years, it was demonstrated that the same principle applied to renal allografts in unmodified dogs (Figure 2B).4,5
In addition, confirmatory observations were made in more than a dozen cases in which kidney allografts were transplanted to the anterior thigh6 or to the pelvis7–10 of unmodified human recipients. These cases included the first reported live organ donation;7 the mother’s kidney functioned for 3 weeks in her nonimmunosuppressed son before undergoing irreversible rejection.
It is noteworthy that, as early as 1951, the French surgeons Kuss and coworkers,8 Dubost and associates,9 and Servelle and colleagues,10 had developed for human use the extraperitoneal pelvic kidney transplant procedure that soon would be used by Murray and colleagues11 for the historic identical twin and fraternal twin transplantations12 of 1954 and 1959, respectively. Procedures for transplantation of the other vital organs had been developed in surgical laboratories by the late 1960s. By the end of the decade, therapeutic regimens were available with which successful clinical transplantation of five kinds of organ allografts, as well as allogeneic bone marrow cells, were accomplished (Table 1).12–17
In my individual contribution to the UCLA conference,18 I pointed out that almost all of the clinical transplantation milestones could be traced to two seminal discoveries that were turning points from which divergent management strategies evolved; one leading to bone marrow transplantation and the other to organ transplantation. The first discovery, made in the experimental laboratory and eventually applied in the clinic, was that acquired tolerance to allografts could be induced. The second discovery, that organs could induce tolerance to themselves (ie, were inherently tolerogenic), was made by clinical surgeons and taken to the laboratory in search of an explanation.
When no explanation for the organ tolerogenicity could be found, the engraftment of organs was incorrectly attributed by consensus to different mechanisms than those of acquired tolerance. The error did not prevent the development of organ transplantation to a high level of efficacy. But it deprived transplantation immunology of an intelligible context for nearly three decades.
The demonstration that tolerance to allogeneic tissue could be acquired is dated precisely to a four-page experimental article by the English team of Billingham, Brent, and Medawar in the October 3, 1953, issue of the journal Nature.19 The tolerance was strongly associated with donor leukocyte chimerism.19,20
In their initial experiments, Billingham, Brent, and Medawar induced tolerance to subsequently transplanted skin allografts by first engrafting self-renewing donor strain spleen cells into gestational and neonatal mice before the recipient animal’s immune system was developed enough to reject the foreign leukocytes.19 Sixteen months later, Main and Prehn21 at the National Cancer Institute (Bethesda, MD) extended these findings to adult mice that were rendered comparably immune-defenseless by supralethal total body irradiation (TBI) and then rescued by an infusion of donor bone marrow cells. In both the neonatal and irradiation mouse models, the acquired tolerance was limited to skin allografts from the leukocyte donor strain (donor-specific nonreactivity).
The mouse neonatal tolerance19,20 and irradiation tolerance models21 were the direct forerunners of bone marrow transplantation to patients with immune deficiency diseases15,22 and to cytoablated recipients made immune deficient by TBI,23,24 respectively. As in the mice, the limiting factor in humans under both circumstances was the need for a close tissue match. Otherwise, lethal graft versus host disease (GVHD), viewed simplistically as a one-way immune reaction opposite to the HVG reaction of tissue and organ rejection (Figure 2A), was a nearly inevitable consequence of engraftment.15,22,24 Because human leukocyte antigen (HLA) typing was not available until 1968, unequivocally successful clinical bone marrow transplantation was not accomplished until then.
Six of the 11 pioneers at the UCLA conference had been prime contributors to the ascension from mouse to man of bone marrow transplantation: Leslie Brent, Robert Good (who performed the first unquestionably successful human bone marrow transplantation), E Donnall Thomas (bone marrow transplant pioneer and 1990 Nobel Laureate), and three of the fathers of tissue matching (Paul Terasaki, Jon van Rood, and the 1980 Nobel Laureate, Jean Dausset).
The principal contributions of the five other panel members at the UCLA conference had been to organ transplantation (Figure 1). Four of the five were surgeons: Roy Calne, Norm Shumway (pioneer of thoracic organ transplantation), Joe Murray, and myself. The fifth was the hematologist, Bob Schwartz, whose investigations in 1958–1960 of 6-mercaptopurine (6MP)25,26 made him the father of chemical immunosuppression (see later).
In contrast to the prompt acknowledgment that donor leukocyte chimerism-associated acquired tolerance was the touchstone of bone marrow transplantation, the concept of allograft tolerogenicity upon which the development of organ transplantation depended27 was discounted by leading authorities.28,29 In retrospect, however, the tolerogenicity of kidneys was self-evident from the time of the very first successful renal transplantation.30 Moreover, this property was exploited unknowingly or deliberately in the evolution of renal transplantation and for the subsequent transplantation of all the other organs, using a variety of antirejection modalities.
Resistance to the idea that organs are tolerogenic can be traced to the demonstration between 1959 and 1963 that kidney allografts could be successfully transplanted in the ostensible absence of donor leukocyte chimerism. This was contrary to the expectation that the immune barrier would have to be surmounted by simulating the mouse irradiation tolerance model. The anticipated strategy consisted of recipient TBI and infusion of donor bone marrow cells, with establishment of a donor leukocyte chimerism-associated tolerant state before or at the time of organ implantation.
When such protocols were tested over a period of several years in the canine kidney transplant model, however, they yielded only a single animal with survival of 73 days.31 Death of this dog was caused by pneumonia rather than kidney rejection. Because the donor and recipient were from the same purebred beagle colony, the most logical explanation for the absence of allograft rejection was that the animals had a fortuitously good tissue match.
Exhaustive efforts to use the bone marrow-kidney transplant strategy in irradiated mongrel dogs by David Hume at the Medical College of Virginia resulted in no survival prolongation.32 Hume also extensively evaluated the possibility of treating the recipient with TBI only (without bone marrow infusion). Although it was known from military-related research that TBI was immunosuppressive per se, irradiation by itself was uniformly unsuccessful in the canine kidney transplant model.
The bleak experimental results were all the more frustrating because the potential value of organ transplantation had been obvious in a series of human identical twin (monozygotic) kidney transplantations in Boston by the future Nobel laureate, Joseph Murray.11,33,34 Undoubtedly inspired by these results in the absence of an immune barrier, Murray and colleagues proceeded with a non-identical (fraternal or dizygotic) twin kidney transplantation on January 24, 1959, using 450R TBI without donor bone marrow infusion.12,30
The renal allograft functioned until the fraternal twin recipient’s death from cardiovascular disease years later. It was the first successful transplantation of an organ allograft in any species, and in my opinion, the single most important clinical case in the history of transplantation. But it was the only example of recipient survival exceeding 1 month in 12 attempts to use TBI at the Peter Bent Brigham Hospital.35
The dream of organ transplantation was kept alive during the next four years by the independent French teams of Jean Hamburger36,37 and Rene Kuss38 who produced five further successes. By April 1963, there had been six 1-year survivors post-kidney transplantation in the world after sublethal recipient irradiation, all without adjunct donor bone marrow infusion (Table 2, above dotted line).12,30,36–38 Two of the six donors were fraternal twins, two others were a mother and a cousin, and most importantly, two had no genetic relationship to the recipients (Table 2).
Meanwhile, drug therapy was on the horizon. The two irradiated recipients of non-related kidneys in Table 2 (patients 3 and 5) had been treated by Kuss with delayed doses of 6MP and cortisone. In addition, two of Hamburger’s patients (Table 2, patients 4 and 6) probably were given short courses of steroid therapy. Finally, patient 7 (Table 2, below the dotted line) had reached the 1-year post-transplantation milestone under drug treatment only (see below).39
It had been established in the early 1950s by Billingham, Krohn, and Medawar,40 Morgan,41 and others42–45 that cortisone modestly but significantly delayed the rejection of skin allografts in mice, guinea pigs, and rabbits. The prolongation was so limited that the drug was viewed as a minor immunosuppressant. Nevertheless, a neglected report by Cannon and Longmire46 contained a hint of the true value of steroid therapy.
Cannon and Longmire demonstrated that a 6% incidence of spontaneous permanent tolerance to skin allografts in newly hatched chicks (an important finding per se) rose to more than 20% with administration of a short course of cortisone. In their prescient discussion, the authors noted that the steroid effect, “… appeared to maintain itself after the drug was discontinued. This phenomenon is one which up to the present time has not been found in homograft experiments on mammals and humans”.46
Like natural cortisone, the far more potent synthetic steroid, prednisone, had very little efficacy when used alone in adult animal recipients of allografts. In an isolated exception, however, Zukoski and associates47 reported the 649-day survival of a mongrel kidney recipient (1 out of 15 experiments) that had been treated with prednisone for 420 days. Renal function had been stable throughout the subsequent drug-free interval of 229 days. As in the Cannon-Longmire chick experiments, the effect of the steroid therapy appeared to have far outlasted the duration of drug treatment.
Finally, an important clinical observation was made in Los Angeles, where a kidney transplantation program had been launched by Willard Goodwin in 1960, but closed in 1961 after the deaths of six consecutive recipients. In the third case of this UCLA series, Goodwin transiently reversed several rejections with prednisone during the 144-day survival of a live donor kidney recipient who had been conditioned with myelotoxic doses of Cytoxan (cyclophosphamide) and methotrexate (amethopterin) rather than with TBI. Unfortunately, the case was not reported until 1963.48
The era of chemical immunosuppression began with the demonstration in 1959 by Schwartz and Dameshek26 that 6MP prolonged skin allograft survival in rabbits, an observation that was promptly confirmed by Meeker and colleagues.49 Preclinical studies in the canine kidney transplant model were carried out by Zukoski, Lee, and Hume in Richmond, VA,50 and independently by Roy Calne in London.51
Before moving from London to Boston in July 1960 to continue his canine studies with Murray, Calne (see Figure 1) used 6MP as the baseline immunosuppressant for three unsuccessful human kidney transplantations on the service of John Hopewell at the Royal Free Hospital (London).52 Shortly after arriving at the Peter Bent Brigham Hospital, where Murray already had unsuccessfully treated a kidney recipient with 6MP in April 1960,35,39 Calne and Murray began studies of the 6-MP analogue, azathioprine.53,54
Twenty-one months later, on April 5, 1962, Murray transplanted a kidney from a nonrelated donor to the recipient shown below the dotted line in Table 2, who was treated from the time of operation with azathioprine, without conditioning by TBI.39 Except for Goodwin’s patient who had lived for 144 days under treatment with cyclophosphamide, methotrexate, and prednisone (see earlier),48 this was the first example in the world of extended survival of a human kidney recipient using chemical immunosuppression only. The renal allograft in Murray’s case was failing in April 1963 (blood urea nitrogen 100 mg/dL),39 but the organ was destined to keep this seventh one-year kidney transplant survivor dialysis-free for another five months (total 17 months).
The magnitude of the sea change that would be caused by 6MP and azathioprine was not immediately apparent. Less than 5% of the canine kidney recipients treated with the new drugs survived for as long as 100 days.28,35,53,54 Moreover, the encouraging result in Murray’s April 5, 1962, case (Table 2) was the only example of survival exceeding 6 months among the first 13 kidney recipients treated by Murray or Calne with either 6MP or azathioprine, including Calne’s 3 English patients52 and the first 10 treated in Boston.35,39
Similar tantalizing but disappointing results were obtained in 10 cases compiled between July, 1962, and the first week of February, 1963, by Hume and coworkers55 in Richmond (n = 4) and by Woodruff and colleagues56 in Edinburgh (n = 6). One patient from each series survived for one year. Azathioprine was combined with TBI in Hume’s four recipients. Except for the reversal of rejection with prednisone in the last of Woodruff’s recipients, the role, if any, of steroid therapy in these non-standardized multimodality protocols could not be distinguished from that of the other treatments (TBI, actinomycin C, guanethidine, graft irradiation).
In March, 1963, Hume added patients 5 and 6 to his Virginia series, after abandoning TBI conditioning. Using the double drug Colorado protocol,57 the first of Hume’s additional patients survived for more than 25 years after receipt of a sibling kidney on March 18, 1963.58
The results of experimental studies in Colorado of azathioprine in the canine kidney transplant model were no different than those in the Boston, Richmond, and Minneapolis surgical laboratories, with one important exception. Rather than being inexorable, kidney rejection in the azathioprine-treated dogs was readily reversed with a short course of prednisone.59 This finding, which had not been evident in the other experimental studies,53,54 was a key factor in the decision to begin the Denver clinical kidney transplant program in 1962. Now, there were three such centers in the United States: one each at Harvard, the Medical College of Virginia (also opened in 1962), and the University of Colorado. Goodwin’s UCLA program, which had closed in 1961, would not resume activities until well into 1963.60
Eight of the first 10 recipients in the Colorado series had prolonged survival.27 Four of the transplanted kidneys functioned for more than 25 years, and 2 still function today after 38 years in the longest-surviving renal allograft recipients in the world. “The Reversal of Rejection in Human Renal Allografts …” described in the first part of the title of the 1963 report27 had been presaged by the dog studies.
In contrast, the crucial observation of “ … the Subsequent Development of Homograft Tolerance” capsulized in the second half of the title27 far exceeded expectations. But the apparent induction of tolerance was inexplicable. Like the seven kidney recipients in Table 2 who had survived at least 1 year after transplantation, none of our patients had been given an adjunct infusion of donor bone marrow cells. By inference, they did not have the donor leukocyte chimerism that was (and is) strongly associated with Medawarian (ie, neonatal) tolerance.
Although the reason for engraftment in these cases was enigmatic, the experience allowed the formulation of a highly practical treatment algorithm. The algorithm (Table 3) consisted of recognition of kidney rejection patterns, a measured therapeutic response (usually with steroids), and subsequent dose weaning to the lowest maintenance levels consistent with stable graft function. Serial testing of the recipient’s serum creatinine, creatinine clearance, blood urea nitrogen (BUN), or other functions was the most practical way to monitor HVG activity. Even when renal dysfunction was not detected, the development and regression of subclinical rejection in controlled animal experiments invariably could be demonstrated in serial biopsies of the allograft.
An umbrella of prophylactic immunosuppression is required for the tolerogenicity of kidney allografts to be manifest in humans and in most other large animal species, including dogs. In a small percentage of kidney transplant experiments in mongrel dogs, however, it was learned in 1962–1963 that a short course of azathioprine could be succeeded by drug-free tolerance to the allograft for long periods, or for the life of the animal.28,61–63 Similar independence from immunosuppression has been observed after human renal transplantation, usually in recipients of live donor related kidneys. Of our first 46 recipients of consanguineous kidneys treated at the University of Colorado before March, 1964, 10 still have function of their original kidney allografts more than three post-transplant decades later; 5 of the 10 have been drug free for 7, 7, 19, 33, and 34 years.64
As was anticipated,27,65 the treatment algorithm developed with kidney transplantation was generalizable, no matter what the cellular or molecular target of various immunosuppressants (Table 3) or what the organ. In the development of the organ-defined branches of clinical transplantation, guidelines and concepts developed with one kind of organ were applicable to all. But there were quantitative differences with the various allografts.66 For example, it was shown between 1963 and 1966 that immunosuppression-independence after organ transplantation in mongrel dog experiments evolved under a 4-month course of azathioprine67,68 or after a few injections of antilymphocyte serum (ALS) or antilymphocyte globulin (ALG)68 far more commonly in liver recipients than in kidney recipients.
It was subsequently shown that lifetime tolerance to orthotopic liver allografts evolves without any treatment in 15% to 20% of randomly matched outbred pigs,69–72 invariably in selected strain combinations of rats,73,74 and in the vast majority of mouse strain combinations.75 Heart75,76 and kidney allografts77 can also induce spontaneous tolerance in experimental rodent models, but with far fewer strain combinations. Accordingly, it has not been surprising that drug freedom 5 to 10 years after transplantation in humans has been achieved more frequently by liver64,78–80 than by kidney64,81 or heart recipients.
Despite the clear demonstration in experimental models of the relative immunologic privilege of the liver, our first five attempts at hepatic replacement between March 1 and October 4, 1963, resulted in recipient death after 0–23 days.82,83 Single attempts also were unsuccessful in Boston (September, 1963)84 and Paris (January, 1964).85 Our failed trial in Colorado was reported in Surgery, Gynecology and Obstetrics,82 2 months after publication in the same journal of our successful kidney experience,27 and it was presented at the April, 1964, annual meeting of the American Surgical Association.83
The kidney transplant programs burgeoned in Colorado and elsewhere, while liver transplantation was sent back to the surgical research laboratories. Even before the availability of any kind of immunosuppression, challenges had been generated by the surgical difficulty, anatomic uniqueness, and physiologic complexity of removal and replacement of the liver in the dog model. These challenges generated opportunities for broadly applicable advances in surgical technique, organ preservation, and metabolism.83,86–93 During the ensuing self-imposed moratorium on further clinical trials, experimental liver transplantation became an even more fertile area for generic advances.
For example, the decision in 1963 not to make further attempts at liver transplantation without better immunosuppression resulted in purification, testing in dogs, and eventual clinical introduction of antilymphocyte globulin (ALG)94 for use in combination with azathioprine and prednisone.95 After it was shown in human kidney recipients that the addition of ALG to azathioprine and prednisone improved treatment efficacy, liver transplantation was accomplished with the same triple drug regimen on July 23, 1967.13
Trials with other extrarenal organs using the same treatment strategies were undertaken in the succeeding 23 months. These resulted in the first successful clinical transplantation of the heart,14 lung,16 and pancreas17 (Table 1). The world’s longest-surviving recipient of an extrarenal organ is a 34-year-old woman whose liver replacement for biliary atresia was carried out more than 31 years ago (Figure 3).
Over the years, graft and patient survival of all transplanted organs, but most dramatically the liver (Figure 4), improved in three distinct steps under azathioprine-based, cyclosporine-based, and tacrolimus-based treatment. When tacrolimus became available, intestinal and multivisceral transplantation procedures that had been first described at the American Surgical Association in 195996 and at the Surgical Forum of the American College of Surgeons in 196097 graduated from “experimental” status98–100 to a bona fide place in the treatment armamentarium of the 21st century.101–104
What actually was being achieved? Because organ engraftment had been accomplished without cotransplantation of bone marrow cells, it was concluded by 1963 that donor leukocyte chimerism, the sine qua non of the experimental tolerance models and of successful bone marrow transplantation, was not associated with organ engraftment. The equally erroneous corollary was that organ engraftment occurred by immunologic mechanisms other than those of the acquired tolerance models and of clinical bone marrow transplantation. This view was not challenged for 30 years, in part because of the striking clinical differences between organ and bone marrow transplantation (Figure 5).
Derivative errors stockpiled. For example, the “passenger leukocytes” of bone marrow origin, depicted in Figure 6 as a bone silhouette within a kidney, were known to be the most immunogenic component of parenchymal organs.105–108 It was recognized in 1969 that most of these donor leukocytes were replaced by recipient leukocytes of the same lineages in successfully engrafted human livers,109,110 a change subsequently demonstrated in intestinal,111,112 kidney113,114 (Figure 6), and all other organ allografts.78 The chimerism-exclusionary explanation of organ engraftment required the assumption that these donor leukocytes had been selectively destroyed by the host immune system, either in situ115,116 or after their migration into the recipient,116–118 with the selective preservation of the organ’s specialized parenchymal cells.
In 1992, we asked a question that had not been remotely broached in the literature of the previous three decades. Could these donor passenger leukocytes, including self-renewing hematolymphopoietic precursor and stem cells, have migrated into the recipient as depicted in the body silhouette of Figure 6? If so, organ transplantation involved the systematic but unwitting infusion of donor hematolymphopoietic cells, in essence comparable to a donor bone marrow cell infusion. The possibility existed that the unrecognized production of donor leukocyte chimerism was, in fact, the reason why organ transplantation had been feasible.
As the first step in testing the hypothesis, blood samples and biopsies were obtained from multiple host sites as well as the allografts in 30 patients who had borne kidney and liver allografts (Figure 7) for up to three decades. The specimens were analyzed for the presence of donor leukocytes and donor DNA, using sensitive immunocytochemical and polymerase chain-reaction techniques. All 30 patients were shown to have low-level multilineage donor leukocyte chimerism (Figure 6).78,119–122
The mystery of organ engraftment was solved. A satisfactory clinical result after organ transplantation represented a stand-off between multilineage immunocyte populations of David (ie, donor) and Goliath (recipient) size proportions (Figure 8B), each of which induced tolerance of the other. Moreover, the historical view of an organ allograft as a defenseless island in a hostile sea (see Figure 2, bottom) had been a misleading oversimplification. The clinically obvious response after organ transplantation usually was HVG, while the GVH reaction usually was “invisible” (ie, subclinical). Stability of the equilibrium between the coexisting donor and recipient immune competent cells usually, but not always, required continued immunosuppression in human organ recipients.
Conversely, the previous definition of an idealized result following bone marrow transplantation as total replacement of the host immune apparatus (Figure 2, top) was equally incorrect. Instead, successful bone marrow transplantation was a mirror image version of organ transplantation with reversal of the size proportions of the donor and recipient immunocyte populations (Figure 8A).122,123 This view was supported by the discovery that residual host leukocytes invariably could be found in bone marrow recipients who previously had been thought to have complete donor leukocyte chimerism.124,125
In the new paradigm, which has been confirmed and extended by a series of controlled experimental studies,75,126–137 treatment failure after either bone marrow or organ transplantation was defined as the acute or chronic ascendancy of one or the other response arm (Figure 9), or as a simultaneous perpetuation of both destructive immune reactions. To be successful, organ and bone marrow transplantation required the establishment and maintenance of variable levels of mutual tolerance, the completeness of which determined the need or lack thereof for maintenance immunosuppression. Tolerance induction began with and depended on migration of donor leukocytes to organized lymphoid collections and a simultaneous reverse traffic of host cells.
The seminal mechanism of the mutual tolerance was proposed to be induction of “… [widespread] responses of co-existing donor and recipient immune cells, each to the other, causing reciprocal clonal expansion, followed by peripheral clonal deletion”.78,119 Since its publication in 1992, this hypothesis has been tested and verified in several experimental models.130,132,138,139
In addition to accounting for the characteristic cycle of immunologic crisis and resolution that occurs in the first 1–4 weeks after both organ and bone marrow transplantation (Figure 9), the mutual modulation of the two immune responses explained the poor predictive value of HLA matching for organ transplantation. With each greater degree of incompatibility, both HVG and GVH responses were progressively ratcheted up, resulting in a sliding scale nullification (Figure 10). With the severe weakening of the host response arm in the immunodeficient or cytoablated bone marrow recipient, the hazard of GVHD was directly proportional to the tissue mismatch (Figure 11).
It also was apparent why the liver with its huge load of leukocytes was the most tolerogenic organ, but was rarely responsible for GVHD, provided the recipient was not immunocompromised in advance.140 Moreover, the fear of causing GVHD that had forestalled clinical efforts to transplant the leukocyte-rich bowel and had discouraged strategies of organ and bone marrow cotransplantation since the late 1950s (see earlier) was put to rest. Both intestinal transplantation101–104 and adjunct donor bone marrow infusion in immunologically intact recipients141,142 were safe. In fact, intestinal transplantation alone or as part of a multivisceral graft has frequently been carried out in combination with bone marrow cell infusion.102,103
The implications of this paradigm are not limited to transplantation. Throughout the quarter century preceding 1975, the field of immunology was preoccupied with studies of tumor and allograft rejection, reflected by the words major histocompatibility complex (MHC) to designate the gene complex coding antigens associated with rejection and other kinds of cell-mediated immunity. But the evolutionary significance and biologic function of the MHC antigens were unknown.
This changed in 1973–1975 with the discovery by Rolf Zinkernagel and Peter Doherty (Nobel laureates, 1996) of the MHC-restricted mechanisms of T-cell recognition of, and response to, non-cytopathic viruses and other intracellular microorganisms.143–146 It seemed obvious that the host versus pathogen adaptive immune response was the equivalent of primary allograft rejection. But the seemingly greater complexity of allograft rejection, and the inexplicable engraftment of organs in the ostensible absence of donor leukocyte chimerism, spawned numerous hypotheses to explain the differences between the immunology of infection and transplantation immunology.
The finding of chimerism in organ recipients has eliminated the need for such theories. We have pointed out elsewhere that the migration and localization of mobile antigen (ie, microorganisms on one hand, leukocytes on the other) regulate immunologic responsiveness or unresponsiveness after both infection and transplantation.113 Although transplantation is complicated by the presence of contemporaneous host-versus-graft and graft-versus-host immune reactions and the additional factor of therapeutic immunosuppression, the mechanisms and rules of the immune response to pathogens and to allografts are basically the same.123,147
Bone marrow and organ transplantation are mirror image procedures that induce reciprocally modulating immune reactions: HVG and GVH. The evidence supporting this conclusion has closed the 30-year intellectual gap between the two transplant disciplines. In so doing, it has been possible to spell out in concrete and easily understood terms the meaning and mechanisms of allogeneic tolerance and to clarify the commonality of adaptive immunity to microorganisms and to allografts.
Establishment of this conceptual coherence has depended on knowledge of the historical roots of transplantation, just as linkage of the present with the past is essential in all areas of epistemology. The resulting insight into the basis of graft acceptance is the most important legacy that those of us who are passing from the scene have left for those who remain behind. It may allow the formulation of more rational approaches to allotolerance induction or even to strategies for xenotransplantation.148
Supported in part by research grants from the Veterans Administration and Project Grant No. DK-29961 from the National Institutes of Health, Bethesda, MD.
No competing interests declared.
Presented at the American College of Surgeons 86th Annual Clinical Congress, Chicago, IL, October 24, 2000.