In current clinical practice, immune reactivity of kidney transplant recipients is estimated by monitoring the levels of immunosuppressive drugs, and by functional and/or histological evaluation of the allograft. The availability of assays that could directly quantify the extent of the recipient’s immune response towards the allograft would help clinicians to customize the prescription of immunosuppressive drugs to individual patients. Importantly, these assays might provide a more in-depth understanding of the complex mechanisms of acute rejection, chronic injury, and tolerance in organ transplantation, allowing the design of new and potentially more effective strategies for the minimization of immunosuppression, or even for the induction of immunological tolerance. The purpose of this review is to summarize results from recent studies in this field.
Transplant recipients generally require immunosuppression for the lifespan of their graft to prevent rejection; long-term maintenance therapy thus becomes a balance between suppressing rejection of the allograft and the toxicities associated with therapy, such as infections and malignancy. Acute clinical rejection still represents one of the major risk factors for poor long-term graft outcome and, in kidney transplantation, the diagnosis of acute rejection is confirmed by renal allograft biopsy, which carries a finite risk of complications. In clinical practice, a biopsy is commonly undertaken if there is a rise in serum creatinine of at least 15% above baseline, although this approach may fail to detect injury at its inception. Indeed, serum creatinine is a relatively insensitive marker of renal dysfunction, which increases only in the late phases of injury, when substantial functional loss has already occurred. Moreover, in the absence of any change in serum creatinine, up to one-third of renal allografts will demonstrate histological signs of acute rejection in the first 3 months after transplant. These so-called ‘subclinical rejections’, if not detected and properly treated, may negatively affect long-term graft outcomes. Protocol allograft biopsies may allow the early detection of graft abnormalities, but they are inconvenient and impractical to perform serially on patients.
By contrast, over-immunosuppression may expose patients to an increased risk of opportunistic infections. Some of these, such as polyomavirus infection, might not become immediately clinically obvious, but if undetected will eventually result in graft injury and impaired outcomes. In this regard, monitoring of JC and BK viremia might not only allow early recognition and therapy of polyomavirus nephropathy, but may also provide an indirect assessment of the patient’s state of immunosuppression. However, patient reactivity toward polyomavirus might not accurately reflect the alloreactive response.
A key clinical and translational research focus in the field of transplantation is the identification of a biomarker of immune reactivity, that is, “a characteristic that is objectively measured and evaluated as an indicator of normal biologic processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention”. Immune biomarkers could provide relatively noninvasive indicators of rejection or over-immunosuppression; they would be of the utmost importance in identifying patients at risk for future rejection episodes, could provide an opportunity to monitor patients while obviating the need for repeated invasive biopsies, and would enhance the development of research studies for new clinical agents.
With the advent of and interest in immunosuppressive minimization, and the goal of developing a protolerant state after transplantation, more sensitive markers of developing immune injury, and of those at risk of immune responses, are needed [8,9]. Currently, therapies are driven by transplant center protocols and are based on drug levels, with changes made in response to toxicities, including lymphopenia, rejection and allograft dysfunction. Long-term success will entail an understanding of the complexity of lymphocyte repopulation after depletion, including the immune reactivity of repopulating memory cells and the impact of maintenance therapy on steering immune repopulation [11,12] to support donor hyporesponsiveness. Our current monitoring tools do not provide this kind of information. The identification of in vitro fingerprints of immunological tolerance, that is, the absence of a destructive immune response towards the graft in the presence of generalized immune competence, would allow the partial or complete cessation of immunosuppressants in selected patients with minimal risk of acute rejection. Thus, immunological monitoring of transplant recipients may allow early and noninvasive detection of acute allograft rejection before effector mechanisms and organ destruction have been initiated, and may enable physicians to tailor the level of immunosuppression required for a given patient; such adjustments are currently determined only on an empiric basis or from the blood levels of immunosuppressive drugs.
Immune monitoring assays that are currently in development are focused on adaptive recipient T-cell activity; assays of the innate immune response have not yet been considered in clinical practice (Table 1). These assays can be divided broadly into two major categories: donor antigen-specific and antigen-nonspecific assays. Donor antigen-specific assays measure the response of recipient lymphocytes to donor antigens, whereas antigen-nonspecific assays evaluate biomarkers and the phenotype or functional state of cells to identify a pattern that is associated with a particular clinical status [8,9]. Most likely, no single assay is able to provide a comprehensive view of the whole immune reactivity status of the recipient towards the graft; rather, each analyzes the immune response in a subtly different fashion. By combining the results of several assays, it should be possible to determine the fingerprint of the immune response at any given time in an individual. While several of these assays are promising, validation in a prospective fashion is a critical requirement for the field.
Evaluation of in vitro alloreactivity has focused on the measurement of the proliferation of recipient lymphocytes after contact with those of the donor. Assays of T-cell reactivity include the mixed lymphocyte reaction (MLR), limiting dilution analysis (LDA), enzyme-linked immunospot (ELISPOT) assay, trans-vivo delayed-type hypersensitivity (DTH) assay, direct toxicity assays and Cylex immune cell function assay .
Mixed lymphocyte reaction represents one of the first assays developed to measure the proliferative response of lymphocytes towards HLA-mismatched cells. In the classical form of MLR, peripheral blood lymphocytes from two individuals are mixed together in tissue culture for several days; in the one-way MLR test, donor lymphocytes are inactivated, thereby allowing only the recipient lymphocytes to proliferate in response to foreign histocompatibility antigens . Lymphocyte proliferation (measured by tritiated thymidine uptake) provides information on the alloreactivity level of the patient. In 19 recipients of cadaveric renal allografts, donor-specific hyporesponsiveness assessed by MLR at 3 and 6 months after transplantation was associated with a better graft outcome at 1 year . A recent study in pediatric kidney transplant patients showed that donor-specific hyporesponsiveness was also associated with improved graft survival at 3 years and with a lower incidence of chronic allograft nephropathy . Moreover, these data suggest that although downregulation of donor-specific reactivity might not be a prerequisite for stable graft function, it could help to identify recipients who require less immunosuppression . However, despite the fact that the assay is relatively easy and inexpensive to perform, it requires 1 week and its reproducibility is problematic. Therefore, it can hardly be considered a useful tool to monitor the risk of acute rejection in routine clinical practice.
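As an illustration of how MLR proliferation data are commonly summarized, the sketch below computes a stimulation index from triplicate thymidine-uptake counts. The formula is the conventional ratio of alloantigen-stimulated to background proliferation; the counts themselves, and the exact normalization used in the cited studies, are assumptions for illustration only.

```python
# Stimulation index (SI) for a one-way MLR: the ratio of recipient
# proliferation against irradiated donor cells to proliferation against
# autologous (background) cells. All counts are hypothetical.

def stimulation_index(cpm_allo, cpm_auto):
    """SI = mean CPM (anti-donor wells) / mean CPM (autologous wells).
    SI well above 1 indicates a measurable alloproliferative response."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(cpm_allo) / mean(cpm_auto)

# Triplicate counts per minute (CPM) of tritiated-thymidine uptake:
anti_donor = [18500, 21200, 19800]   # recipient PBMC + irradiated donor cells
background = [900, 1100, 1000]       # recipient PBMC + autologous cells

si = stimulation_index(anti_donor, background)
print(f"stimulation index = {si:.1f}")
```

A hyporesponsive (potentially tolerant) recipient would be expected to show an index close to 1 against donor cells while retaining a high index against third-party cells.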
Limiting dilution analysis estimates the frequency of alloreactive T-cell precursors through mixing serial dilutions of recipient cells with donor cells and measuring cytokine secretion, proliferation or cytotoxicity several days later . In several studies, a direct correlation between the frequencies of precursors among peripheral blood cells and histologically defined rejection in heart or kidney transplant recipients was detected [18,19]. Although results obtained with LDA may predict short-term outcomes and help in customizing immunosuppressive therapy, the complexity and required labor intensity for reproducible assay performance may limit the broad applicability of LDA as a clinical biomarker of alloreactivity.
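The precursor-frequency estimate behind LDA rests on single-hit Poisson statistics: the fraction of non-responding wells at a given cell dose falls exponentially with the number of responder cells plated. A minimal sketch of that calculation, using a hypothetical dilution series (the doses and well counts are illustrative, not from the cited studies), might look like this:

```python
import math

# Single-hit Poisson model for limiting dilution analysis: the fraction of
# negative (non-responding) wells F0 at responder dose n satisfies
# F0 = exp(-f * n), where f is the precursor frequency. Rearranged,
# -ln(F0) = f * n, so f can be fitted by regression through the origin.

doses = [10000, 20000, 40000, 80000]          # responder cells per well
negative_fraction = [0.78, 0.61, 0.37, 0.14]  # fraction of wells scored negative

# Least-squares fit of -ln(F0) = f * n through the origin:
num = sum(n * -math.log(f0) for n, f0 in zip(doses, negative_fraction))
den = sum(n * n for n in doses)
freq = num / den

print(f"estimated precursor frequency ≈ 1 in {1/freq:,.0f} cells")
```

In practice, higher estimated frequencies of donor-reactive precursors would correspond to the greater rejection risk described in the studies above.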
The enzyme-linked immunospot assay combines the features of the MLR with the concept of an ELISA. Recipient T cells are cultured in the presence of inactivated donor or third-party cells in tissue culture wells coated with a capture antibody that is specific for the cytokine of interest. After a short culture period, the cells are removed and the secreted cytokine remains bound and is detected using a labeled secondary antibody. Each spot that develops represents a cell that had been primed to the stimulating antigen(s) in vivo. Because the task of visually counting spots becomes difficult and time-consuming with large numbers of spots, computerized plate readers using digital cameras have been developed. Thus, ELISPOT provides a measure of the frequency of previously activated or memory T cells that respond to specific antigens by producing a selected cytokine. This is a distinct advantage over other cytokine measurements, such as measuring the cell culture supernatant, which in the absence of rapid capture may be subject to breakdown or dilution, or may be used up by other cells.
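The routine arithmetic behind an ELISPOT readout, background subtraction followed by normalization to spots per million PBMC, can be sketched as follows; the well counts and cell input here are illustrative assumptions, not values from the cited studies:

```python
# Convert raw ELISPOT spot counts to a frequency of cytokine-producing
# cells per million PBMC, after subtracting the unstimulated background.

def spots_per_million(spots_stimulated, spots_background, cells_per_well):
    specific = max(spots_stimulated - spots_background, 0)  # clamp at zero
    return specific / cells_per_well * 1_000_000

# Hypothetical triplicate means from one recipient:
donor_wells = 42        # mean spots, recipient PBMC + irradiated donor cells
medium_wells = 3        # mean spots, recipient PBMC in medium alone
well_input = 300_000    # PBMC plated per well

freq = spots_per_million(donor_wells, medium_wells, well_input)
print(f"{freq:.0f} IFN-γ spots per 10^6 PBMC")  # (42-3)/300,000 * 10^6 = 130
```

Frequencies computed this way, typically for IFN-γ, are the values that the studies below correlate with graft function and rejection.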
In 55 kidney transplant recipients, ELISPOT-measured alloreactivity during the first 6 months after transplantation correlated significantly with graft function at both 6 and 12 months following transplantation. Notably, the mean frequencies of spots were higher in patients with acute rejection than in those without acute rejection. Multiple-regression analyses indicated that the correlations between the early ELISPOT measurements of IFN-γ and serum creatinine were independent of acute rejection, delayed graft function or the presence of panel-reactive antibodies before transplantation. In another study, the mean frequency of T cells primed to donor antigens at 6 months was shown to correlate with serum creatinine at 6 and 12 months independently of acute cellular rejection, delayed graft function or the recipient’s panel-reactive antibodies. In a further application of this technique, Poggio et al. identified a T-cell reactivity index (PRT) based on the frequency of positive ELISPOT responses prior to transplantation, using a pool of donor antigens that may be reflective of potential organ donors. Akin to the panel-reactive antibody test for identifying individuals with circulating anti-HLA antibodies, the PRT may identify patients at risk for post-transplant cellular immune-mediated graft injury. Along this line, a pretransplant ELISPOT might allow the identification of those patients who are at highest immunological risk and would benefit the most from induction therapy, thus helping tailor immunosuppression. However, further confirmatory data on the predictive value and clinical utility of this technique are necessary.
Finally, ELISPOT may be useful for detecting ‘occult’ responses in the recipient that may not be immediately or sensitively detected by serum creatinine levels. In a cohort of 34 longstanding living donor renal transplant recipients (mean 56.2 ± 23.6 months after transplantation), the frequency of donor-reactive cells was the only variable that significantly correlated with graft function, suggesting that the long-term function of a kidney allograft might be linked to an immune response that has previously been difficult to survey.
In the trans-vivo DTH assay, recipient peripheral lymphocytes are injected with donor antigens into either the footpad or the pinna of an immune-deficient mouse, and the magnitude of the resultant swelling at 24 h, measured using calipers, is used as an index of the patient’s reactivity towards the donor antigens. VanBuskirk et al. described four transplant recipients in whom all immunosuppression had been discontinued. In three of these patients with prolonged drug-free graft survival, alloantigen-specific hyporesponsiveness was demonstrated in the trans-vivo DTH assay. By contrast, the fourth patient, who had previously displayed but lost operational tolerance, had a strong alloantigen-specific trans-vivo DTH response. Theoretically, the trans-vivo DTH assay may be useful in stable patients to identify those transplant recipients in whom immunosuppression can be reduced. However, owing to the cumbersome nature of an assay that is dependent on mice, as well as the need for technical consistency of measurements, the utility of trans-vivo DTH for routine clinical immune monitoring is uncertain.
In another version of this assay, a direct intradermal infusion of recall antigens is made on the volar surface of the forearm of patients. In stable kidney transplant patients, the extent of reactivity is inversely correlated with the amount of ongoing immunosuppression . Again, validation of this assay in larger, more diverse transplant populations is needed prior to its clinical application.
A direct cytotoxicity assay measures the ability of recipient CD8+ cytotoxic lymphocytes to lyse donor cells that have been loaded with 51Cr by quantifying the release of the chromium after target cell lysis. The percentage of lysis of these targets after co-incubation is calculated by comparison with the maximum achievable lysis of the target cells. Low cell-mediated lympholysis (CML) reactivity has been described in recipients with long-term functioning kidney allografts, whereas high lymphocyte lysis has been associated with an increased risk of acute rejection. Burlingham et al. reported hyporesponsiveness to donor alloantigens in a CML assay. More recently, Weimar et al. used hyporesponsiveness in CML assays to guide drug withdrawal after renal transplantation. These data suggest that measures of the donor-specific cytotoxicity of recipient T cells after transplantation may be useful for guiding decisions about immunosuppressive drug management and for the identification of tolerant transplant recipients. Nevertheless, the assay is complex, requires approximately 1 week and exposes investigators to radioactive 51Cr, thus limiting its potential use in routine clinical practice.
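The percent-specific-lysis calculation underlying the 51Cr-release readout is the standard one; the sketch below applies it to hypothetical counts (the control conditions shown are the conventional ones, but the numbers are illustrative):

```python
# Percent specific lysis in a 51Cr-release (CML) assay. Each value is the
# chromium release (CPM) measured in the supernatant of labeled donor
# target cells under a different condition. Counts are hypothetical.

def percent_specific_lysis(experimental, spontaneous, maximum):
    """(experimental - spontaneous) / (maximum - spontaneous) * 100."""
    return (experimental - spontaneous) / (maximum - spontaneous) * 100

lysis = percent_specific_lysis(
    experimental=4200,  # targets co-incubated with recipient effector cells
    spontaneous=800,    # targets in medium alone (background release)
    maximum=9300,       # targets fully lysed with detergent
)
print(f"specific lysis = {lysis:.1f}%")  # (4200-800)/(9300-800) = 40.0%
```

Low values against donor targets, with preserved lysis of third-party targets, would correspond to the donor-specific hyporesponsiveness described above.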
The Cylex™ ImmuKnow® Assay was recently approved by the US FDA for the detection of cell-mediated immunity in immunosuppressed patients. The assay measures the increase in CD4+ T-cell metabolism following stimulation by the mitogen phytohemagglutinin-L (PHA) in vitro, and quantifies this activity by measuring the ATP produced by the activated cells. As the assay is performed in an aliquot of the recipient’s whole blood, the impact of immunosuppression should be further assessed. In a meta-analysis of retrospective observational studies with 504 transplant recipients at ten transplantation centers, Kowalski et al. reported an association between immune response values and clinical outcome. Patients who experienced rejection had a strong immune response (>700 ng/ml), while patients who contracted infections had a low immune response (<25 ng/ml). By contrast, clinically stable patients had moderate immune responses, with a mean level of 280 ng/ml. Intriguingly, in a small cohort of 13 kidney transplant recipients, Cylex was found to correlate better with clinical outcomes than blood levels of calcineurin inhibitor (CNI). Moreover, in observational studies of small bowel transplant recipients undergoing immunosuppressive withdrawal, serially followed Cylex values increased in the setting of rejection, while patients who were clinically stable after withdrawal had temporally stable ATP production. While the assay appears to be highly reproducible, with a rapid turnaround time and a relatively low cost, its clinical utility is limited. A single value may be difficult to interpret, particularly after lymphocyte-depleting therapy, when the number of CD4+ T cells is severely reduced postinduction. Moreover, the impact of homeostatic proliferation on the level of ATP produced has not been documented.
Finally, to validate this assay effectively, a prospective analysis of its ability to predict rejection in the setting of acute allograft dysfunction is required, with biopsy as the gold standard for diagnosis.
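As a rough sketch of how a single ImmuKnow ATP value might be read against the response levels reported in the meta-analysis above: note that >700 and <25 ng/ml are observed group values from that cohort, not validated clinical cut-offs, so the boundaries below are purely illustrative.

```python
# Illustrative interpretation of an ImmuKnow ATP result (ng/ml), using the
# group values reported by Kowalski et al. as hypothetical boundaries.
# These are NOT validated decision thresholds.

def interpret_atp(atp_ng_ml):
    if atp_ng_ml > 700:
        return "strong response: seen in patients who experienced rejection"
    if atp_ng_ml < 25:
        return "low response: seen in patients who contracted infections"
    return "moderate response: consistent with clinical stability"

print(interpret_atp(280))  # stable patients averaged ~280 ng/ml
```

A serial trend, rather than a single value, is what the small bowel withdrawal studies above suggest is informative.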
Peripheral lymphocytes represent the ideal tool to monitor the immune reactivity of transplant recipients. While it could be argued that the responses in peripheral blood may not necessarily mirror intragraft events, experimental evidence suggests that circulating lymphocytes actually display phenotypic and functional characteristics similar to those of cells infiltrating the graft, confirming that circulating lymphocytes can be very informative regarding the immune status of transplant patients [9,35].
Changes in the phenotype of circulating lymphocytes have been proposed as a relatively easy way to monitor transplant patients’ alloreactivity. In particular, the fate of a clinical transplant can be imagined as the result of a balance between regulatory and effector T cells. Regulatory T cells (Tregs) represent a unique population of T lymphocytes that are able to control immune responses in a number of pathogenic disease processes, as well as after transplantation. Therefore, the ratio between regulatory and effector T cells may provide information about the risk of acute rejection or the presence of immunological tolerance .
For example, the frequency and functional profiles of circulating CD4+CD25+ Tregs – the most widely studied Treg population  – were evaluated in ten lung transplant recipients with a stable clinical condition, and in 11 with chronic rejection . In the stable transplant recipients, the frequency of CD4+CD25+ T cells was significantly higher than that found in patients with chronic allograft rejection. In addition, functional evaluation of these cells demonstrated that they were hyporesponsive to conventional T-cell stimuli and suppressed the proliferation of CD4+CD25− T cells . Although quantification and characterization of Tregs has the potential to identify patients with low alloreactivity (thus with lower risk of acute rejection and need for immunosuppression), no systematic studies have been performed to date in larger recipient populations. The interpretation of results on Treg numbers may be complicated by the ambivalent role of these cells in immune system activity. Indeed, Tregs may not only maintain hyporesponsiveness to the allograft and prevent rejection in stable/tolerant patients but may also expand during the rejection process itself, possibly as a regulatory mechanism of the immune response .
Intriguing data are also emerging about different promoting or inhibiting effects of various immunosuppressive agents on Treg numbers and function. Indeed, chronic immunosuppression with CNIs in renal transplant patients has been associated with lower levels of circulating Tregs compared with immunosuppression with mammalian target of rapamycin (mTOR) inhibitors, possibly due to the inhibitory effect of CNIs on IL-2 pathways, which are required for Treg proliferation [39,40]. In renal transplant patients receiving induction therapy with the lymphocyte-depleting antibody alemtuzumab, maintenance therapy with the mTOR inhibitor sirolimus significantly increased the number of Tregs over the CNI cyclosporin A (CsA). However, this was not paralleled by improved graft outcomes, which actually tended to be better in patients receiving CsA. This has tempered enthusiasm regarding the role of Tregs in graft outcome, at least in kidney transplant recipients.
Another population of T cells with regulatory properties expressing the CD8+CD28− phenotype has been associated with lower rates of rejection, and an increased likelihood of being weaned effectively from immunosuppression in kidney and liver transplant recipients . FOXP3+ CD8+CD28− T-suppressor (TS) cells are antigen specific, MHC class I-restricted and interact directly with antigen-presenting cells (APCs). TS cells render APCs tolerogenic, inducing the downregulation of costimulatory molecules and the upregulation of the inhibitory receptors, immunoglobulin-like transcripts (ILT)3 and ILT4. ILT3 and ILT4 display long cytoplasmic tails containing immunoreceptor tyrosine-based inhibitory motifs (ITIMs), which mediate inhibition of cell activation by recruiting the tyrosine phosphatase SHP-1. The interaction between allospecific CD8+CD28− cells and epithelial cells is two-sided, since tolerogenic ILT3+ILT4+ epithelial cells induce the in vitro differentiation of CD8+ T cells into CD8+CD28− T cells [42,43]. Thus far, however, only a few groups have focused on their role.
Recently, induction therapies with lymphocyte-depleting agents have been proposed as a means of removing effector T cells from the circulation at the time of ischemia–reperfusion injury, a strategy that should promote a tolerogenic milieu, thus allowing minimization of maintenance immunosuppression . On the other hand, experimental models of organ transplantation have shown that T cells undergoing homeostatic proliferation after lymphocyte depletion display a memory-like phenotype and might induce accelerated allograft rejection . Hence, careful monitoring of the number of these cells may well be instrumental in identifying those patients in whom reduction of maintenance immunosuppressive therapy would be feasible without increasing the risk of acute rejection .
Thus, evaluating the phenotype of circulating T cells represents a very attractive way to assess the immune function of transplant patients. While recent studies suggest critical roles of certain T-cell subpopulations, no lymphocyte phenotype has yet been identified that clearly defines the risk of acute rejection or over-immunosuppression in larger-scale population studies.
Recently, it has been hypothesized that patterns of T-cell receptor (TCR) repertoires in circulating T cells might allow the identification of patients who have developed tolerance toward the graft and who could undergo safe immunosuppressive tapering. To this purpose, Alvarez et al. analyzed the TCR repertoires in circulating T cells of patients with renal allograft survival of longer than 9 years, with and without immunosuppression. Intriguingly, all three patients without immunosuppression, and 12 out of 16 patients still on immunosuppressive therapy, had oligoclonality in three or more TCR Vβ families, which was significantly higher than in patients with shorter graft survival. Such TCR oligoclonality might be explained by clonal deletion, exhaustion of alloreactive T cells or predominant expansion of particular T-cell subpopulations, such as Tregs. Other groups reported on the changes in TCR Vβ chains of operationally tolerant patients, which were essentially confined to the CD8+ subset. Notably, CD8+CD28− cells isolated from these patients did not display any donor-specific lytic activity.
Genetic analyses of peripheral lymphocytes have been correlated with kidney transplantation outcomes. Indeed, the earliest recognition of the potential of this technique came from the observation that gene expression levels of perforin and granzyme B in peripheral blood cells correlated with acute rejection episodes in 25 kidney transplant recipients . Recently, Sawitzki et al. described concordant intragraft and peripheral blood downregulation of gene expression for TOAG-1 (a protein possibly involved in T-cell apoptosis) and α-1,2-mannosidase (important for the N-glycosylation of membrane-bound and secreted proteins) preceding rejection episodes in two rodent models of kidney and heart transplantation . Conversely, upregulation of these two markers occurred during periods of tolerance . This gene expression pattern might also be informative in human transplant recipients, but there are currently no supportive data for its use as a clinical biomarker.
Data are accumulating in support of a genomic approach that derives a signature from peripheral gene transcripts. In a prospective study in heart transplant patients, Deng et al. evaluated the relationship between peripheral blood molecular markers and acute rejection using an 11-gene PCR test panel. T-cell and natural killer (NK) cell activation markers (perforin/granulysin), as well as erythropoiesis-related genes (ALAS2, WDR40A and MIR), were identified as differentiating quiescence from rejection, raising the possibility that molecular profiling could be used to avoid the need for protocol biopsies. More recently, in kidney transplant recipients, Brouard et al. defined a ‘tolerant fingerprint’ comprising 49 genes, using blood samples from a cross-sectional cohort of 75 patients with stable graft function. The gene expression pattern included, among others, the downregulation of costimulatory signaling genes, and correctly classified patients with operational tolerance in the test set, as well as in a validation set. A total of 33 out of 49 genes separated tolerance from chronic rejection, with 99% sensitivity and 86% specificity, providing a noninvasive test that might identify patients in whom immunosuppression minimization or withdrawal may be possible. Martinez-Llordella et al. recently reported analogous findings that specific peripheral blood gene expression patterns also correctly identify operationally tolerant liver transplant recipients. These data suggest that transcriptional profiling of peripheral blood could possibly be employed to identify organ transplant recipients who can reduce their immunosuppressive therapy.
For many years, attention has been focused on the cellular mechanisms of allograft rejection, with humoral mechanisms being considered mainly as inducers of hyperacute rejection in the presence of preformed antidonor HLA antibodies before grafting. Increasing evidence now suggests that humoral responses to alloantigens could play an important role in both acute and chronic alloimmunity, particularly following activation of the indirect pathway. Hence, the detection of alloantibodies should be mentioned in the list of candidate assays for the immunological monitoring of transplant recipients. A correlation between anti-HLA antibodies and poor graft outcome has already been established in kidney, heart and lung transplant recipients, whether those antibodies were present before grafting or appeared after transplantation [55–57]. A prospective, multicenter study in 1134 deceased donor kidney transplant recipients showed that the presence of HLA Class I antibodies before transplantation was associated with a higher rate of delayed graft function and acute rejection episodes during the first 3 months after transplantation, which were associated with an increased risk of graft loss up to year 3. Similarly, post-transplant appearance of non-HLA antiendothelial antibodies was correlated with coronary artery disease and chronic rejection in heart and kidney allograft recipients, respectively [59,60]. As yet, the role of alloantibody detection in allograft monitoring has not been fully defined. In particular, it will be important to monitor the increase in and specificity of donor-reactive antibodies, as well as to correlate their presence and characteristics with the types of tissue damage they cause. Assays to detect the presence of alloantibodies have become increasingly sophisticated. This has been paralleled by an increase in their sensitivity, but also by a significant increase in costs.
A monitoring protocol following transplantation that incorporated such assays would potentially identify the development of donor-specific antibodies, but in naive, nonsensitized recipients, the frequency may be so low that the assay may not be cost effective. Moreover, there is a lack of knowledge of intervention following a positive study. Studies in hyperimmune patients suggest that B-lymphocyte depletion  or plasma cell inhibition  would also reduce the production of anti-HLA antibodies in patients with chronic rejection, which should translate into improved graft outcomes. However, more data are needed in a broad clinical population to identify their cost–effectiveness and clinical utility.
Originally described on the Reed–Sternberg cells of Hodgkin’s disease, CD30 is a membrane glycoprotein that belongs to the tumor necrosis factor superfamily. It is expressed on activated T cells, preferentially those secreting Th2-type cytokines, and is thought to be a costimulatory molecule, regulating the balance between Th1 and Th2 responses. After activation of CD30+ cells, a soluble form of CD30 (sCD30) is released and can be measured in the serum. Recently, there has been some evidence that high pre-transplant serum levels of sCD30 indicate the risk of impaired kidney graft outcomes [63,64]. Thus, upregulated sCD30 levels were shown to be indicative of an increased risk of transplant loss, emphasizing their clinical relevance and the possibility of implementing sCD30 as a predictive biomarker for allograft rejection upon transplantation of different solid organs. Importantly, in kidney transplant recipients, the pretransplant detection of a high sCD30 level was shown to constitute a more accurate predictor of acute rejection when compared with panel-reactive antibodies . In a cohort of 206 renal transplant recipients, high levels of sCD30 and anti-HLA antibodies independently predicted poorer graft outcomes . Therefore, sCD30 levels might help in identifying patients at higher risk of acute rejection, in whom immunosuppressive therapy should probably be intensified.
Data are accumulating regarding the importance of chemokine/chemokine receptor pathways in ischemia/reperfusion, chronic rejection and tolerance induction following costimulation blockade, providing new targets for immune monitoring and therapeutic intervention. In particular, research has now focused on CXCR3, CCR5 and their respective ligands as key mediators of host alloresponses, especially in acute rejection . Similarly, neopterin has been shown to be a sensitive marker of the cellular immune response . It reflects the activation of macrophages and can be easily measured in serum, plasma or urine, and might provide information on the risk of acute rejection or over-immunosuppression . However, more data are needed before these molecules can be considered clinically useful tools.
The urine seems an obvious choice for evaluating immune activity in the organ of its synthesis. Notwithstanding, only a few reports have focused on urine as a marker of graft function or immune reactivity of patients. In 2001, Li et al. found that the levels of mRNA of the perforin and granzyme B genes in the urine of kidney transplant patients are sensitive markers of acute rejection . These genes code for proteins cooperating to induce the death of target cells and that are present in the cytoplasmic granules of cytotoxic T cells and natural killer cells. Levels of perforin mRNA in urine that were above a certain predetermined threshold identified a biopsy-confirmed episode of rejection, with a sensitivity of 83% and a specificity of 83%, whereas the specificity of the granzyme B mRNA level was only 64% . Other markers of the cytotoxic T-cell pathway that are overexpressed in urine pellets during acute rejection episodes include the serine protease inhibitor (PI)-9, a natural inhibitor of granzyme B, and CD103, expressed on alloreactive cytotoxic CD8+ T cells . Along the same lines, protein and transcript expression of IFN-γ-inducible IP-10 and the chemokine receptor CXCR3 are elevated in the urine sediments of recipients with acute rejection, with variable predictive characteristics depending on the cut-off values used [70,71].
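The sensitivity and specificity figures quoted above come from a simple confusion-matrix calculation against biopsy-confirmed rejection as the reference standard. The sketch below shows that arithmetic; the counts are hypothetical, chosen only to reproduce the reported 83% values, not the actual cohort numbers:

```python
# Sensitivity and specificity of a threshold-based urinary marker, judged
# against biopsy-confirmed rejection. The counts below are illustrative.

def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # rejectors correctly flagged by the marker
    specificity = tn / (tn + fp)  # non-rejectors correctly cleared
    return sensitivity, specificity

# Hypothetical 2x2 table: marker above threshold vs biopsy result.
sens, spec = sens_spec(tp=25, fn=5, tn=25, fp=5)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")  # 83%, 83%
```

The trade-off between these two quantities as the mRNA threshold moves is what underlies the variable predictive characteristics of IP-10 and CXCR3 at different cut-off values [70,71].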
Recently, Muthukumar et al. found that expression levels of CD3, perforin and CD25 (a T-cell activation transcript) are markedly higher in urine-sediment cells from renal transplant recipients with acute rejection than in cells from patients with either chronic rejection or no biopsy changes. Unexpectedly, they also found upregulation of the FOXP3 gene, the hallmark of Tregs, in urine from patients with acute rejection. Intriguingly, low expression of FOXP3 at the time of rejection was associated with a poor response to therapy and an increased risk of graft failure within 6 months. Conversely, higher FOXP3 transcript levels at the time of rejection, despite the molecular signature of rejection, heralded a more favorable clinical outcome. Notably, the histologic grade of rejection cannot predict with the same accuracy the severity or clinical outcome of treated rejection episodes. The relative ease of this technique and its rapid turnaround time may open the avenue to novel, noninvasive ways of monitoring the immune status of renal transplant recipients. Further prospective studies on larger patient populations are pending, as is the identification of a panel of markers that can appropriately distinguish the various causes of inflammatory allograft dysfunction from each other, including rejection, BK virus nephropathy and acute tubular necrosis.
An alternative strategy using urine is a proteomic approach. Through rigorous standardization of urine collection and storage, Schaub et al. identified a urine protein profile associated with acute rejection. Moreover, Quintana et al. recently showed that assessment of the urine proteome can noninvasively distinguish two types of chronic allograft pathology: chronic active antibody-associated rejection, and significant interstitial fibrosis/tubular atrophy without evidence of antibody-mediated injury. Intriguingly, the urine proteomes obtained from patients with the two pathologies were distinctly different from one another, and both differed from those found in the urine of healthy control subjects and of patients with excellent allograft function 1 month after transplantation. Thus, proteomics may define peptide signatures that detect subclinical rejection and differentiate subtypes of chronic allograft injury in a noninvasive fashion. However, the individual proteins that comprise the specific proteomic patterns associated with acute or chronic rejection have yet to be identified. Defining these proteins would be of utmost importance in understanding the pathogenesis of these disease processes and might identify a specific target protein that could serve as a reliable surrogate marker – an extremely important goal in the monitoring of transplant patients.
Graft biopsy still represents the gold standard for most diagnoses of graft dysfunction. However, there are circumstances in which it provides limited information regarding the clinical impact and prognosis of a process. Indeed, different pathogenic mechanisms may underlie the same histological changes. For example, no histological change has yet been identified that differentiates a clinical rejection resulting in graft dysfunction from a subclinical one. Hence, tools to define the alloreactivity of transplant patients might be extremely helpful in improving the accuracy of histological evaluation of the graft.
Attempts have been made to differentiate the severity of acute rejection according to the phenotype of infiltrating cells. Higher percentages of infiltrating monocytes have been found in clinical compared with subclinical acute rejection, but no clear cutoff value has been identified. Moreover, recent wider use of lymphocyte-depleting strategies may alter the significance of the leukocyte phenotype, both in the periphery and in the graft.
More promising results have been obtained by genetic analyses of graft biopsies. Using high-density microarrays, several groups have identified expression patterns associated with particular histologic types of rejection and different outcomes. This method allows the study of several thousand genes simultaneously, and is an increasingly robust and reproducible technique. Recent investigations with this technique have demonstrated a molecular heterogeneity of allograft rejection, with differences detected at the transcriptional level that are not evident by light microscopy alone. Differences in steroid responsiveness and in the return of serum creatinine to baseline after therapy also have a transcriptional counterpart. Intriguingly, upregulation of CD20 B-cell transcripts has been associated with poorer outcomes of acute rejection. Moreover, rejection profiles, despite their heterogeneity, were still distinguishable from chronic injury or toxicity. Therefore, the variable clinical behavior of histologically similar rejections is mirrored by quantitative and qualitative transcriptional differences.
Several groups have now combined the reduced cost, speed and quantitative capabilities of real-time PCR with the rapidly increasing availability of candidate transcripts to produce rapid-turnaround platforms for the analysis of fixed sets of 20–100 genes [78–80]. These low-density arrays deliberately limit the assessment of expression to a set of gene transcripts potentially involved in different pathologic conditions, as opposed to a more global ‘interrogation’. One could envision using different arrays according to different histological patterns, to improve our understanding of the underlying pathogenic mechanisms and prognosis on a routine basis. Of note, this might be crucial in testing new therapies for clearly defined subtypes of acute rejection.
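Real-time PCR platforms of this kind typically report relative transcript abundance using the standard 2^-ΔΔCt method, in which the target gene's cycle threshold (Ct) is normalized to a housekeeping gene and then compared between sample and control. The sketch below illustrates that calculation; the Ct values and the gene choices in the comments are hypothetical, not those of any specific study cited here.

```python
# Minimal sketch of relative transcript quantification by the standard
# 2^-ΔΔCt method, as used on low-density real-time PCR arrays.
# All Ct values below are hypothetical.

def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression of a target transcript (e.g., granzyme B) in a
    sample vs. a control, normalized to a housekeeping gene (e.g., 18S rRNA)."""
    delta_ct_sample = ct_target_sample - ct_ref_sample      # normalize sample
    delta_ct_control = ct_target_control - ct_ref_control   # normalize control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    # Each PCR cycle doubles the product, so 1 cycle earlier = 2-fold more transcript
    return 2 ** (-delta_delta_ct)

# Hypothetical: target reaches threshold 3 cycles earlier (after normalization)
# in the rejecting graft than in the control -> ~8-fold upregulation.
print(fold_change(24.0, 15.0, 27.0, 15.0))  # → 8.0
```

Because each earlier cycle corresponds to a doubling, small Ct shifts translate into large fold-changes, which is what makes these fixed-panel arrays both fast and quantitative.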
Currently, the measurement of immunosuppressive drug levels in the patient’s blood is the most frequently used tool for monitoring transplant immunosuppression. However, even strict adherence to the recommended drug levels (frequently chosen on an empiric basis) does not prevent either over-immunosuppression, resulting in infections and malignancies, or under-immunosuppression, which is associated with an increased risk of acute rejection or chronic immune injury of the graft. Indeed, despite similar blood levels of immunosuppressive drugs, different genetic backgrounds may result in different immune reactivities between patients [81,82]. Acute rejection is suspected when serum creatinine increases in the absence of systemic illness or a technical complication. Knowing the blood levels of immunosuppressive drugs is helpful, but is usually insufficient per se to differentiate rejection from toxicity, thus necessitating biopsy. Hence, the development of strategies for immune monitoring will be crucial for distinguishing transplant recipients who would benefit from a reduction in, or even the withdrawal of, immunosuppression from those who require more intense, lifelong immunosuppression. Finally, some of these assays may be useful biomarkers not only of acute rejection but also of long-term outcomes, which are increasingly dissociated from the excellent results enjoyed in the first 3 years after transplantation.
The availability of noninvasive, easy and economical tools to monitor the alloreactivity of transplant patients would open the way to a completely new approach to managing transplant patients, in which immunosuppression is titrated to the individual patient’s need. Over the last decade, many assays have been developed to noninvasively assess the immune status of transplant patients. Encouraging results have been obtained with ELISPOT, Cylex ImmuKnow and analysis of mRNA expression in the urine of transplant patients, which can help differentiate patients at higher risk of acute rejection from those in whom immunosuppression can probably be reduced [8,9]. However, clinical validation is lacking for most of these assays (Table 1). Indeed, the majority of the trials that tested their efficacy were relatively small, single-center studies with limited statistical power. Moreover, no study has formally tested whether any of these assays can be used as an alternative to, or can add information to, graft biopsy. To this end, ad hoc studies should assess whether changing immunosuppressive therapy according to the information obtained from these assays provides results comparable to the conventional approach based on serum creatinine levels, graft biopsy and immunosuppressive drug levels. Future studies should focus on this issue. Indeed, it is reasonable to say that only a fraction of organ transplant patients receive immunosuppression that is well matched to their needs. For some of the remainder, immunosuppression is insufficient, which eventually results in clinical or subclinical rejection and impaired graft function. On the other hand, fear of acute rejection may lead to the use of unnecessarily high doses of maintenance immunosuppression, exposing another fraction of transplant patients to an increased risk of opportunistic infections and malignancies.
Notably, there are also cases of operational tolerance that, if correctly identified, might benefit from complete withdrawal of immunosuppression.
As the induction of immunological tolerance represents the primary goal of transplant research, the implementation of assays for immune monitoring may be crucial both to safely test strategies for immunosuppression minimization and to tailor/minimize antirejection therapy in patients receiving standard immunosuppressive protocols. Indeed, empiric minimization may put all patients at risk of rejection; immune monitoring prior to the minimization procedure would increase its safety. Moreover, assessing the alloreactivity of patients would be essential for identifying those at increased risk of acute rejection once minimization has been initiated, or for detecting acute rejection before clinical symptoms appear, so that pre-emptive intervention can limit graft damage. Indeed, immune monitoring tools would be particularly useful if they could detect acute rejection before graft injury becomes clinically relevant and, even more importantly, if they could predict changes in immune reactivity following changes in immunosuppressive therapy. Efforts in this direction will hopefully translate into safe and effective tailoring of immunosuppression. Notably, immune monitoring should be performed regularly, as the immune status of each patient varies considerably across clinical settings and over time.
To improve the specificity and sensitivity of available immune monitoring assays, a reasonable approach would be to combine different tools that measure the immune response from different perspectives, such as T-cell phenotype, reactivity and gene-expression pattern. This approach is currently being evaluated in a large NIH consortium prospective trial, which pairs allograft surveillance biopsies with several of the methods described in this review. Moreover, important insights into the mechanisms that sustain tolerance might arise from studying those rare patients who do not reject their graft even after unintentional immunosuppressive drug weaning (operational tolerance). Understanding how this state arises and how it can be detected might allow it to be induced on a larger scale through the design of more effective tolerogenic protocols. Moreover, these studies might lead to the identification of biomarkers of low immunological risk that could be used to select patients for potential weaning. Collaborative efforts should be made to establish international networks for the identification and study of operationally tolerant patients [83,84].
The task of translating immune monitoring assays into routine clinical practice is not simple but much progress has already been made in this direction. In the future, immune monitoring will probably be part of day-to-day management of transplant patients.
Financial & competing interests disclosure
Roslyn B Mannon is funded in part by grants from the NIH (DK75532 and AI058013). The authors have no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed.
No writing assistance was utilized in the production of this manuscript.
Paolo Cravedi, Mario Negri Institute for Pharmacological Research, Bergamo, Italy, Tel.: +39 035 453 5405, Fax: +39 035 453 5370, Email: cravedi@marionegri.it.
Roslyn B Mannon, Division of Nephrology, Department of Medicine, 1900 University Boulevard, THT 611G, Birmingham, AL 35294, USA, Tel.: +1 205 996 6383, Fax: +1 205 996 6659, Email: rmannon@uab.edu.