The purpose of this communication is to describe a recently delineated principle of immunology1 that can be summarized in one line: immunity, or alternatively tolerance, to any antigen is governed by the migration and localization of that antigen. This concept, which has been developed with transplantation models2–4 and from observations after clinical and experimental infections,5–7 has been described elsewhere in more detail.1 Here, we will be concerned mainly with the relation between donor leukocyte chimerism and transplantation tolerance, focusing at first on clinicopathologic observations in transplant recipients made in 1991–1992 that prompted reconsideration of a number of long-held convictions.
The most important new finding reported in 1992 was the presence of multilineage donor leukocyte microchimerism in the blood, skin, lymph nodes, and other tissues of organ allograft recipients up to 30 years after successful transplantation.2–4 The persistence of the disseminated donor leukocytes for this long implied (as was subsequently proved8,9) that precursor or stem cells are included in the burst of donor leukocytes that briefly constitute 1% to 20% of the recipient circulating mononuclear cells after organ transplantation (Fig 1, upper panel). Although the number of these donor cells is greatest with transplantation of the intestine or liver,10,11 the same events occur on a smaller scale after transplantation of organs like the kidney and heart.12
Another change during the early postoperative period is disappearance of the resident donor mononuclear leukocytes from the graft and their replacement by recipient cells of the same lineages (Fig 1, lower panel). Although this was observed in hepatic allografts nearly 30 years ago,13 it was assumed at first to be a unique feature of liver transplants. After the same changes were found in intestinal allografts,11,14 evidence quickly accrued that this was a generic phenomenon; ie, it occurred in all successfully transplanted organs.3,15
These discoveries, showing that both the “accepted” allograft and the recipient became genetic composites, suggested an explanation for two enigmatic observations reported for the first time in 196316 that had allowed the development of organ transplantation as a practical and reproducible clinical service. First, kidney rejection developing in patients immunosuppressed with azathioprine could be consistently reversed with large doses of prednisone. The second, and more important, observation was that renal allografts self-induced variable degrees of donor-specific tolerance, manifested by the subsequent ability to reduce immunosuppression to below doses that had been unable to prevent or control rejection at the outset.16 It was obvious that a host-versus-graft (HVG) immune reaction gathered strength at first despite anti-rejection therapy, but then collapsed with temporarily intensified immunosuppression. The same events were soon documented after liver transplantation17,18 and eventually with all other kinds of organ allografts. They can be observed in many animal models without the need for immunosuppression (summarized in Starzl et al4), most commonly when the liver is the transplanted organ.19
Early workers in transplantation recognized the resemblance of allograft rejection to the response against infections associated with delayed hypersensitivity, as exemplified by tuberculosis.20,21 With the demonstration of the MHC-restricted mechanisms of adaptive infectious immunity in 1973,22–25 it seemed obvious that allograft rejection must be the physiologic equivalent of the response to this kind of infection. Microorganisms that generate such an adaptive immune response are generally intracellular and have low or no cytopathic activity. Although MHC-restricted host cytolytic T lymphocytes recognize only infected cells, elimination of all the infected cells could disable or even kill the host. Consequently, mechanisms have evolved that can temper or terminate the immune response, allowing both host and pathogen to survive.5–7
The highly variable clinical manifestations of these mechanisms can be illustrated by the different outcomes that may follow an infection with disseminated non-cytopathic microorganisms (eg, the common hepatitis viruses). In one scenario, the pathogen (antigen) load may rapidly increase during the so-called latent period, but then be dramatically and efficiently reduced by antigen-specific effector T cells. Following control of the infection, the cytotoxic T lymphocytes (CTL) subside (Fig 2, left panel). The events are similar to those of irreversible organ rejection in the unmodified or ineffectively treated recipient (Fig 2, subscript to left panel). Alternatively, however, such infections may lead to a continuously high antigen load and an antigen specific immunologic collapse (Fig 2, second panel). The consequent asymptomatic carrier state is equivalent to unqualified acceptance of an allograft. This kind of result, including independence from immunosuppression, is rarely achieved by organ recipients and is usually associated with good HLA compatibility.
Between these two extremes of protective immunity and a carrier state, a persistent infectious agent may induce an unrelenting immune response that results in serious immunopathology (eg, chronic active hepatitis with hepatitis B or C virus infection). This is equivalent to chronic rejection after liver transplantation (Fig 2, right), or, uncommonly, graft-versus-host disease (GVHD; see later). As a practical matter, the result in the overwhelming majority of organ recipients is chronic rejection, which may range from aggressive to indolent, despite the best immunosuppression available today.
Under both infectious and transplant circumstances, only two mechanisms are needed to explain the compromise outcomes shown in the second and third panels of Fig 2, which allow co-existence of live antigen and the host. One is clonal exhaustion leading to clonal deletion. The other is immune indifference. Although these two mechanisms will be considered separately, they tend to be mutually reinforcing. Both are regulated by antigen migration and localization.
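The antigen-load dynamics sketched in Fig 2 can be caricatured with a simple predator-prey model in which effector T cells expand in response to a non-cytopathic antigen, but a high, sustained antigen load drives the responding clone to exhaustion. This is only an illustrative sketch: the equations, parameter values, and extinction threshold below are assumptions chosen to reproduce the qualitative outcomes, not measured quantities.

```python
# Toy model of the Fig 2 outcomes: antigen load A grows logistically and is
# killed by effector T cells E; E proliferate in response to A but are lost
# to an exhaustion term that scales with antigen load. All parameters are
# illustrative assumptions.

def simulate(A0, E0, T=20.0, dt=0.0005):
    r, K = 1.0, 1e6        # antigen growth rate and carrying capacity
    k = 1e-5               # per-effector killing rate
    p, h = 3.0, 1e3        # effector proliferation rate and half-saturation
    d = 0.1                # baseline effector decay rate
    q = 4e-6               # exhaustion rate, proportional to antigen load
    A, E = float(A0), float(E0)
    for _ in range(int(T / dt)):    # simple forward-Euler integration
        dA = r * A * (1 - A / K) - k * A * E
        dE = p * A / (A + h) * E - d * E - q * A * E
        A = max(A + dA * dt, 0.0)
        E = max(E + dE * dt, 0.0)
        if A < 1.0:                 # treat a sub-unit load as extinction
            A = 0.0
    return A, E

# Low initial load: effectors outpace the antigen and clear it
# (Fig 2, left panel: protective immunity).
A1, E1 = simulate(A0=100, E0=10)

# Overwhelming initial load: the exhaustion term dominates, the clone
# collapses, and the antigen persists (Fig 2, second panel: carrier state).
A2, E2 = simulate(A0=1e6, E0=10)
```

The same parameter set yields opposite fates depending only on the initial antigen load, mirroring the text's point that outcome is governed by antigen quantity and distribution rather than by different mechanisms.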
The exhaustion and deletion of an antigen-specific clone was postulated as early as 1959–1960 to explain the acquisition of tolerance in animal models to heterologous protein (with the aid of 6-mercaptopurine26) and to allogeneic splenocytes (without the need for immunosuppression27). Clonal exhaustion also was invoked at an early time to explain organ allograft acceptance,28,29 as shown in the illustration and caption reproduced in Fig 3. In this figure, which was first published in 1969, induction of the activated clone by alloantigen was depicted via host macrophages rather than by dendritic cells, which would not be described until 4 years later.30
Despite circumstantial evidence of its existence as a tolerogenic mechanism (summarized in Steinman and Cohn31 and Sterzl and Silverstein32), clonal exhaustion disappeared from the literature between 1970 and 1990, ostensibly because it was only a theory. Since 1990, however, clonal exhaustion/deletion has been formally demonstrated in many infectious, transplantation, and other models.33–38 A subpopulation of T cells is induced by the antigen within a few days, end-differentiates to effector cells, and disappears.
Death of the activated cells by apoptosis has been demonstrated in a mouse transplantation model,37 possibly due to interleukin deprivation, and associated with telomere shortening.39 Although the thymus is the most efficient site for eliminating maturing self-reactive T cells in ontogeny and throughout the life of many higher vertebrates, purging of T cells (and apparently also of B cells) also occurs in the peripheral lymphoid organs. These peripheral sites may be the principal location of clonal deletion after successful transplantation in humans.4
Although lack of proof contributed to the dismissive reception of clonal exhaustion, a more pervasive factor in the context of transplantation immunology was a lack of understanding about the role of the organ’s “passenger leukocytes.” These donor cells of bone marrow origin have been known for more than three decades to be the principal immunogenic component of allografts.40,41 Because they disappear from successfully transplanted organs,11,14,15,40–43 it was assumed until recently that they had been selectively destroyed by the recipient immune system, with selective sparing of the specialized parenchymal cells.
As a corollary, organ allograft acceptance was assumed to take place by fundamentally different mechanisms than the chimerism-dependent acquired neonatal tolerance of Billingham, Brent, and Medawar.44 Proposed chimerism-exclusionary mechanisms have included suppressor, veto, and other immune regulatory cells; cytokine profile changes; various antibodies; and failure of delivery of a second (costimulatory) signal following primary antigen presentation. Contrary to these hypotheses, the discovery of microchimerism in organ recipients made it possible, in 1992, to explain allograft acceptance by “ … [widespread] responses of co-existing donor and recipient immune cells, each to the other, causing reciprocal clonal expansion, followed by peripheral clonal deletion.”2
This feature distinguishes the allograft response from the single immune reaction induced by an infection.1 If some degree of reciprocal clonal exhaustion is not induced and maintained (requiring an umbrella of immune suppression in humans), one cell population usually will destroy the other, or both may be destroyed together. Following organ transplantation, the dominant host system usually rejects the graft (Fig 4). However, serious or lethal GVHD is not rare after transplantation of leukocyte-rich organs such as the liver.3
In contrast, the recipient cyto-ablation used in preparation for bone marrow transplantation transfers dominance to the donor system (Fig 5). Consequently, GVHD is the most common complication in bone marrow recipients, but the graft may be rejected instead, or simultaneously.
Migration of spreading non-cytopathic microorganisms to host lymphoid organs, and localization there, are well known to be critical in initiating and sustaining protective immune activation.5–7,45 The similar lymphoid-oriented traffic of passenger leukocytes46–48 is equally acknowledged to be the essential basis of host allosensitization. Carried one step further to clonal exhaustion, it also is the means by which specific immunologic tolerance is induced.
Although clonal exhaustion/deletion is the seminal mechanism of tolerance, survival of either allo- or infectious antigen may be promoted by a second nontolerogenic mechanism, called “immune indifference.”5,7 Like clonal exhaustion, immune indifference is controlled by antigen migration and localization.
Pure examples of de novo immune indifference are provided by the rabies and wart viruses, which elicit little or no immune response (Fig 6), simply by avoiding migration through, or to, host lymphoid organs. This has been mimicked in numerous transplant models by depletion of donor leukocytes from allografts. Graft survival is thereby prolonged (subscript of Fig 6). However, tolerance is not induced, as shown by the fact that rejection can be readily precipitated with an injection of donor leukocytes.41,49,50
Pristine examples of immune indifference are not seen in the usual setting of clinical organ transplantation. Nevertheless, immune indifference can evolve secondarily. As also occurs with microorganisms following a widespread non-cytopathic infection,7,51 migratory donor leukocytes that have not been eliminated by passage through lymphoid organs may leave the lymphoid compartment, having induced various stages of incomplete and/or reversible antigen specific exhaustion. In experimental organ transplant models, this begins after about 2 weeks, and by 100 days, the most prominent donor leukocyte population has shifted from lymphoid sites to non-lymphoid sites (eg, skin and heart).52 Maintenance clonal exhaustion apparently occurs subsequently by leakage of donor leukocytes from the non-lymphoid to the lymphoid compartment (Fig 7).
The balance that develops between destructive and nondestructive immunity as the result of lymphoid/non-lymphoid leakage has been difficult to quantitate in transplantation models. However, a stable equilibrium has been demonstrated by Ohashi, Zinkernagel, et al in transgenic mouse preparations.53,54 In these models, pancreatic islets expressing viral antigens are not destroyed by low-level CTL activity, but are rejected, with resulting diabetes, when high virus-specific reactivity is induced.
In a further exploitation of similar transgenic models, Ehl, Zinkernagel, et al have shown the essential role of antigen persistence in the maintenance of tolerance.55 Tolerance to transgenic skin grafts expressing the viral antigen (gp33) of the lymphocytic choriomeningitis virus could be induced by pretreatment of recipients with gp33, but it could not be maintained by the gp33 expressed by the outlying skin graft. In contrast, tolerization with gp33-expressing, mobile, and self-renewing spleen cells (ie, donor splenocyte chimerism) permanently protected the transgenic skin. By analogy, the disappearance of microchimerism in an organ recipient presages loss of the outlying graft to chronic or acute rejection.56,57 In the model of Ehl et al,55 this was associated with thymus-dependent recovery of precursor CTL.
With clonal exhaustion and immune indifference in combination, both regulated by the migration and localization of antigen, the four inter-related events shown schematically in Fig 8 must occur close together if organ transplantation is to succeed: double acute clonal exhaustion (of both the host-versus-graft and graft-versus-host responses), maintenance clonal exhaustion, which waxes and wanes, and loss of graft immunogenicity as the organ is depleted of its passenger leukocytes.
The significance of the microchimerism observed at the end of this process has been questioned (summarized in Wood and Sachs58) because, as we also have emphasized,2–4,59 donor leukocytes may be detectable during rejection, and are often not detectable in individual blood or tissue samples from patients bearing stable allografts. These observations are readily fitted into the concept that “ … Donor leukocyte chimerism is a prerequisite for, but not synonymous with, and not a consequence of, organ allograft acceptance.”1 This conclusion applies equally to macro- and microchimerism.
Conventional bone marrow transplantation (Fig 5) is simply a mirror image of the events after organ transplantation (Fig 4), with the same governance of the immune events by antigen migration and localization. Although pretransplant cyto-ablation renders the recipient subject to GVHD, the host leukocytes are not all eliminated.60 The weak host-versus-graft reaction mounted by the remaining recipient cells, and the parallel GVH reaction of the donor cells, can eventually result in reciprocal tolerance.
Because the fetus possesses very early T cell immune function,31,61,62 the ontogeny of self/non-self discrimination during fetal development can be explained by the same mechanisms as acquired tolerance in later life. Autoimmune diseases then reflect unacceptable post-natal perturbations of the prenatally established localization of self antigens in non-lymphoid versus lymphoid compartments.
There is no MHC-restricted safety valve for cytopathic microorganisms, which are typically extracellular and mobilize the full resources of the innate as well as the adaptive immune system.1,5 An uncontrollable innate immune response involving the effectors shown in Table 1 is provoked by discordant xenografts expressing the Galα(1,3)Gal epitope, an epitope which also is found on numerous cytopathic bacteria, protozoa, and viruses. The clinical use of such discordant animal donors will require changing the xenogeneic epitope to one that mimics a non-cytopathic profile, or else elimination of the xenogeneic epitope.1
Aided by Project Grant No. DK 29961 from the National Institutes of Health, Bethesda, Maryland.