Background. Transmitted human immunodeficiency virus type 1 (HIV-1) drug resistance (TDR) mutations can become replaced over time by emerging wild-type viral variants with improved fitness. The impact of class-specific mutations on this rate of mutation replacement is uncertain.
Methods. We studied participants with acute and/or early HIV infection and TDR in 2 cohorts (San Francisco, California, and São Paulo, Brazil). We followed baseline mutations longitudinally and compared replacement rates between mutation classes with use of a parametric proportional hazards model.
Results. Among 75 individuals with 195 TDR mutations, M184V/I became undetectable markedly faster than did nonnucleoside reverse-transcriptase inhibitor (NNRTI) mutations (hazard ratio, 77.5; 95% confidence interval [CI], 14.7–408.2; P < .0001), while protease inhibitor and NNRTI replacement rates were similar. Higher plasma HIV-1 RNA level predicted faster mutation replacement, but this was not statistically significant (hazard ratio per log10 copies/mL increase, 1.71; 95% CI, .90–3.25; P = .11). We found substantial person-to-person variability in mutation replacement rates not accounted for by viral load or mutation class (P < .0001).
Conclusions. The rapid replacement of M184V/I mutations is consistent with known fitness costs. The long-term persistence of NNRTI and protease inhibitor mutations suggests a risk for person-to-person propagation. Host and/or viral factors not accounted for by viral load or mutation class are likely influencing mutation replacement and warrant further study.
Although the epidemiology of transmitted human immunodeficiency virus type 1 (HIV-1) drug resistance (TDR) mutations has been studied in a variety of settings [1–5], less is known about whether transmitted mutations persist without selection pressure from antiretroviral therapy (ART). Because many drug resistance mutations impair HIV fitness [6–10], replacement of TDR mutations with wild-type variants may confer a potential survival advantage. This is particularly true when resistant and wild-type viral populations co-exist, as typically occurs when resistance develops during treatment. However, when HIV is transmitted by a single drug-resistant virion, the emergence of wild-type variants requires evolution and back-mutation rather than emergence of an existing wild-type variant.
Several studies have reported long-term persistence of most TDR mutations but have also documented replacement of certain mutations with wild-type variants [11–16]. Although valuable, these investigations did not have sufficient person-years of follow-up to quantify and compare replacement rates for each therapeutic drug class.
Improved understanding of how quickly different mutation classes are replaced by wild-type variants is important for several reasons.
First, most HIV diagnoses occur during chronic infection rather than during acute and/or early infection. By the time genotyping is performed, TDR mutations may have waned below the detection threshold of population genotyping yet persist as minority variants. Therefore, knowledge of mutation replacement rates for each drug class could help clinicians interpret resistance tests performed during chronic infection.
Second, the degree to which drug resistance will propagate among recently infected individuals depends on several factors including (1) the prevalence of drug resistance among chronically infected persons who are receiving partially suppressive therapy and (2) how long TDR persists in untreated persons and, thus, can propagate even without drug selection. The importance of propagation of transmitted mutations becomes amplified in resource-limited settings. Without genotyping and viral load monitoring, patients may initiate incompletely suppressive treatment regimens and sustain prolonged exposure to them. This leads to additional selection of drug resistance mutations [17, 18], further increasing the risk of resistance propagation and limiting patients’ second-line treatment options.
Third, the interpretation of population-based surveys of TDR prevalence depends on understanding how long mutations persist after they are transmitted, because surveys will underestimate mutations that are quickly replaced by wild-type variants.
To provide more detailed information about persistence of specific mutations, we analyzed participants in 2 large early infection cohorts with sustained periods of follow-up without treatment.
We included individuals with acute and/or early HIV infection who were enrolled in 2 prospective cohort studies: the Options Project (San Francisco General Hospital, University of California, San Francisco [UCSF]) and an acute and/or early HIV infection cohort in São Paulo, Brazil. The studies were approved by the UCSF Committee on Human Subjects Research and the Federal University of São Paulo Ethics Committee and Institutional Review Board. All participants gave informed, written consent for participation.
The Options Project is a prospective cohort study of individuals enrolled within 12 months after HIV antibody seroconversion (restricted to within 6 months after seroconversion since 2003). Participants are enrolled after screening for acute and/or early HIV infection with use of a combination of clinical history, serologic testing, and plasma HIV RNA determination, as described elsewhere. Individuals enrolled during 1996–2009 were included. The São Paulo cohort enrolled patients, starting in 2002, who had experienced seroconversion within the previous 6 months and had evidence of acute and/or early HIV infection by the Serologic Testing Algorithm for Recent HIV Seroconversion. Individuals enrolled during 2002–2009 were included.
We included all ART-naïve individuals in each cohort meeting 3 criteria: (1) TDR on initial genotyping, (2) ≥6 months of observed follow-up time without ART (≥3 months for São Paulo participants), and (3) ≥1 follow-up genotype. TDR was defined according to guidelines for epidemiologic studies that include mutations known to be selected by therapy and exclude common polymorphic mutations that may or may not be selected by therapy.
For ART-naïve patients, we analyzed all available genotypes. We also included some individuals who had initiated ART. Here, we analyzed genotypes in 2 instances: (1) patient initiated ART during early HIV disease, with genotyping performed ≤10 days after initiation of ART (to minimize the possibility of an ART-induced mutation being mistaken for a TDR mutation), or (2) patient initiated but then later stopped ART, with genotyping performed >6 months after ART cessation. In the latter situation, we assumed that the virus population would be evolutionarily fixed during fully suppressive ART. If individuals started ART a second time, they were censored, and subsequent genotypes were not assessed.
Demographic and behavioral data were collected from individuals in both cohorts with use of standardized interviews. For Options Project participants, CD4+ T cell counts and plasma HIV-1 RNA levels were measured at baseline and every 3–4 months. For São Paulo cohort participants, these measurements were repeated 3 months after initial determination.
In Options Project patients, baseline HIV-1 population sequence genotypes (TRUGENE genotyping system; Siemens Healthcare Diagnostics) were determined as described elsewhere [22, 23]. In São Paulo cohort patients, genotypes were performed as described elsewhere.
Follow-up genotypes were obtained using a strategy to accurately estimate the time at which baseline TDR mutations became undetectable by population sequencing while limiting the number of assays. For each individual, we genotyped the last available specimen before ART initiation or, for ART-naïve individuals, the last available specimen. If all baseline TDR mutations were present in the final sample, we did not genotype intervening specimens. If any baseline TDR mutations were not detected (indicating a loss), we bracketed backward, first genotyping the specimen closest to the midpoint of the prior 2 specimens. We continued genotyping intervening specimens to pinpoint the 2 time points bracketing the loss of the mutation.
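The bracketing strategy described above is essentially a bisection search over the stored specimens. A minimal sketch, in which the hypothetical `has_mutation` callback stands in for performing a population-sequence genotype on a specimen (it is an illustration, not part of the study's actual laboratory pipeline):

```python
def bracket_mutation_loss(specimens, has_mutation):
    """Locate the adjacent pair of specimens bracketing the loss of a
    baseline mutation while genotyping as few intervening specimens as
    possible.

    `specimens` is a chronologically sorted list of specimen identifiers;
    `has_mutation(s)` stands in for genotyping specimen `s` and reporting
    whether the mutation is still detectable.  Assumes the mutation is
    present in specimens[0] and absent in specimens[-1].
    """
    lo, hi = 0, len(specimens) - 1  # lo: mutation detected, hi: not detected
    while hi - lo > 1:
        mid = (lo + hi) // 2        # specimen closest to the midpoint
        if has_mutation(specimens[mid]):
            lo = mid
        else:
            hi = mid
    return specimens[lo], specimens[hi]
```

Each genotyping step halves the candidate interval, so the number of additional assays grows only logarithmically with the number of stored specimens.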
When a baseline TDR mutation was not detected at a subsequent time point, it was considered to have been replaced. This contrasts with other approaches, which focus on the first appearance of wild-type variants.
For ART-exposed patients, because we assumed that mutation loss or gain would not occur during effective ART, the duration of observation was defined as the total number of days elapsed since HIV infection minus the number of days of ART.
We estimated a date for each mutation replacement event with use of midpoint imputation between the date of the last specimen on which the mutation was detected and the first specimen in which it was absent. Our primary statistical analyses, however, did not rely on these imputed dates.
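Midpoint imputation as used here can be sketched in a few lines (used only for the Kaplan-Meier plots; as noted, the primary interval-censored analyses do not rely on these imputed dates):

```python
from datetime import date


def impute_replacement_date(last_detected, first_absent):
    """Midpoint imputation: estimate the replacement date as halfway
    between the last genotype on which the mutation was detected and
    the first genotype on which it was absent."""
    return last_detected + (first_absent - last_detected) / 2
```

For example, a mutation last seen on 2005-01-01 and first absent on 2005-01-11 would be imputed as replaced on 2005-01-06.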
Drug resistance mutations were grouped into 6 categories: (1) lamivudine/emtricitabine–associated mutations M184V/I; (2) thymidine analog–associated (TAM) mutations M41L, D67N, K70R, L210W, T215Y/F, and K219Q/E; (3) T215 partial revertant mutations T215C, T215D, T215E, T215I, T215S, and T215V; (4) other nucleoside reverse-transcriptase inhibitor (NRTI) mutations; (5) nonnucleoside reverse-transcriptase inhibitor (NNRTI) mutations; and (6) protease inhibitor (PI) mutations. We analyzed the T215 partial revertant mutations as a separate group, because T215Y/F mutations first progress to T215 partial revertants in one step, then progress to wild-type in a second step. In each mutation group, a Kaplan-Meier plot was made of the cumulative probability of mutation replacement versus the number of ART-free days since the estimated date of infection, using imputed dates of replacement.
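For readers unfamiliar with the estimator behind these plots, a minimal product-limit sketch follows. It is the standard Kaplan-Meier calculation, not the study's own code, and the toy inputs are invented:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of S(t), the probability that a mutation
    remains detectable beyond time t.

    `times`  : ART-free days to replacement or censoring, per mutation.
    `events` : 1 for an observed replacement, 0 for censoring.
    Returns a list of (time, survival) steps.
    """
    points = []
    surv = 1.0
    at_risk = len(times)
    # At tied times, process events before censorings (the usual convention).
    for t, e in sorted(zip(times, events), key=lambda te: (te[0], -te[1])):
        if e:
            surv *= 1 - 1 / at_risk  # product-limit update at each event
        at_risk -= 1
        points.append((t, surv))
    return points
```

The cumulative probability of replacement plotted in Figure 1 is simply 1 − S(t).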
We sought to address several features of these data that make optimal statistical analysis challenging. First, transmitted mutations that become undetectable before a person's initial genotype cannot be identified or included in this study. The time of individuals’ baseline genotypes, therefore, are late-entry times (also known as left-truncation times). Second, as mentioned, the exact time of mutation replacement lies between the last genotype with the mutation and the first genotype without it. Such interval-censored data require specialized calculations. Third, an important a priori predictor that we sought to analyze—HIV plasma RNA level—changed throughout follow-up (ie, it is a time-varying covariate), and those changes could impact the chance of mutation replacement. Fourth, many persons had multiple baseline mutations, requiring assessment of possible within-person clustering or dependence.
Methods have been published for fitting parametric survival models to interval-censored data with clustering but without time-varying covariates and for interval-censored data with time-varying covariates but no clustering. We combined features of these models and added calculations to account for late-entry times. Details of the calculations and example SAS code (SAS Institute) are provided in online supplements A and B.
This produced a parametric proportional hazards model that we used to analyze the association between mutation group and the hazard of mutation replacement. To control for plasma RNA levels as a time-varying covariate, for each interval of time modeled, we used the last RNA level measured at or before the start of that interval to ensure that RNA levels were predicting mutation loss rather than reflecting it. Using the NNRTI mutation group as the referent, we computed hazard ratios for mutation loss over time for the other 5 mutation groups. Our method allows individual mutations to be assessed singly but accounts for within-subject correlation in participants with multiple baseline mutations.
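Concretely, for an exponential (constant-hazard) model, late entry, interval censoring, and right censoring all enter the likelihood as ratios of survival probabilities. The toy sketch below illustrates only that likelihood structure; the covariates, time-varying viral load, and within-person correlation of the full model (detailed in the online supplements) are omitted, and the parameter value in the usage note is invented:

```python
import math


def log_likelihood(lam, records):
    """Log-likelihood for an exponential replacement-hazard model with
    late entry and interval censoring, S(t) = exp(-lam * t).

    Each record is (entry, last_with, first_without):
      entry         : time of first genotype (late-entry/left-truncation time)
      last_with     : last time the mutation was detected
      first_without : first time it was absent, or None if right-censored
    """
    ll = 0.0
    for entry, last_with, first_without in records:
        s = lambda t: math.exp(-lam * t)
        if first_without is None:
            # Still detectable at last genotype, given presence at entry.
            ll += math.log(s(last_with) / s(entry))
        else:
            # Replaced somewhere in (last_with, first_without].
            ll += math.log((s(last_with) - s(first_without)) / s(entry))
    return ll
```

Maximizing this quantity over `lam` (and, in the full model, over covariate effects such as mutation group) yields the fitted hazards; note that for the exponential case the entry time shifts but does not distort the right-censored contribution.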
Using this model, we computed the percentage of transmitted mutations that would be expected to become undetectable by 6 months, 1 year, 2 years, and 3 years after HIV infection, assuming a constant HIV plasma RNA level of 40,000 copies/mL and assuming that mutations were present at 3 months after infection (because the late-entry phenomenon described above meant there were no observations at very early time points).
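Under a constant hazard, the predicted percentage replaced by a given time, conditional on presence at 3 months, is 1 − exp(−λ·t) over the time at risk. A sketch with an illustrative hazard value (the fitted per-group hazards are in the supplement, not reproduced here):

```python
import math


def pct_replaced(hazard_per_month, months_since_infection, present_at=3.0):
    """Percentage of transmitted mutations expected to wane below the
    detection limit by `months_since_infection`, under a constant
    hazard and conditional on presence at 3 months after infection."""
    time_at_risk = max(0.0, months_since_infection - present_at)
    return 100.0 * (1.0 - math.exp(-hazard_per_month * time_at_risk))
```

For instance, a hazard corresponding to a 9-month half-life predicts 50% replacement by month 12 (9 months at risk after the 3-month conditioning point).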
Of 697 participants who entered either the Options Project or the São Paulo cohort during 1996–2009, 676 had baseline genotyping performed (21 individuals had plasma RNA levels that were too low for genotyping or were lost to follow-up). Of the 676 participants with baseline genotypes, 20 additional persons were excluded because they began ART >10 days before genotyping (n = 7), had evidence of superinfection with a second HIV variant (n = 6), or had missing data (n = 7). Of the 656 eligible individuals, 136 (21%) had TDR. Of these, a total of 75 persons had a follow-up genotype that could be assessed (the other 61 individuals either began ART and achieved virologic suppression [n = 57] or were lost to follow-up or had missing data [n = 4]). The 75 patients with TDR and follow-up genotypes formed the basis for this analysis (n = 59 from Options cohort and n = 16 from the São Paulo cohort).
Overall, individuals in the 2 cohorts were similar; our sample was predominantly men whose main risk for HIV acquisition was sex with other men and who enrolled in the cohorts at a median of 2.7–3.0 months after HIV infection (Table 1). HIV-1 plasma RNA levels were slightly higher among São Paulo patients, but this difference was not statistically significant (Wilcoxon rank sum test, P = .37).
The 75 participants had a total of 195 baseline TDR mutations (Table 2). Participants’ genotypes are listed in online supplement C and have been uploaded to GenBank (accession HQ528536-HQ529037; HQ585024-HQ585055). Over a median follow-up period of 25 months (interquartile range, 8–41 months), 75% of individuals with transmitted M184V/I mutations exhibited loss of this mutation; the proportion with mutation replacement was 40% for those with T215 revertants, 28% for those with TAMs, 11% for those with other NRTI mutations (including K65R), 25% for those with NNRTI mutations, and 20% for those with PI mutations (Table 2).
In a Kaplan-Meier analysis of time to mutation replacement (Figure 1), the M184V/I group displayed more rapid replacement compared with all other mutation groups, and the other mutation groups were similar to one another (Figure 1).
Table 2 shows results of our parametric proportional hazards model (which assumed an exponential parametric form). Assuming a steady-state plasma RNA level of 40,000 copies/mL, the model predicts that, by 6 months after HIV infection, 68% of transmitted M184V/I mutations that were present at 3 months would wane below the detection limit, whereas ≤6% of mutations in the other groups would do so (Table 2). By 3 years after HIV infection, the model predicts >15% replacement for all mutation groups except other NRTIs and 100% replacement of M184V/I mutations.
Compared with the NNRTI mutation group, the M184V/I mutation group had a markedly higher hazard for mutation replacement (hazard ratio, 77.5; 95% CI, 14.7–408.2; P < .001) (Table 2). TAMs and T215 partial revertants exhibited a trend toward elevated hazard of replacement, although neither was statistically significant. Other NRTI mutations had a lower hazard, but the CIs around this estimate were very wide. The hazard for PI mutation replacement was not substantially different from that of NNRTI mutations.
Higher plasma HIV RNA levels were associated with a trend toward increased mutation loss, but this was not statistically significant (hazard ratio for replacement per log10 increase in plasma HIV RNA, 1.77; 95% CI, .90–3.25; P = .11).
In addition, we found strong evidence for substantial within-person correlation in the risk of mutation loss that was not accounted for by plasma HIV RNA levels (P < .001) or by drug class.
We examined possible departures from the proportional hazards assumption, including possible change in the effect of HIV plasma RNA level in the first 6 months after infection versus after 6 months, and did not find strong evidence for nonproportionality; therefore, we used the simplest model, assuming proportional hazards for all covariates. Generalizing from an exponential model to a Weibull model did not improve fit substantially.
By analyzing a large number of patients with TDR, we were able to show that the M184V/I mutations become undetectable in population sequencing substantially faster than do other mutation groups, which are generally similar to one another. Although rates of replacement for mutations other than M184V/I were relatively low, there was still appreciable replacement. Although NNRTI mutations are assumed to have minimal effects on fitness, replacement of these mutations occurred over time. In addition, our results suggest that higher viral load may promote mutation loss, but this does not account for the very substantial person-to-person variation that we observed in the rate of mutation loss.
Prior studies of TDR mutations have reported that replacement with wild-type variants occurs but that most mutations are maintained for at least 1–2 years after transmission [11–16]. The largest and most definitive study grouped all mutations together and estimated a median time-to-loss of detectable drug resistance (using population-based assays) ranging from 4.1 years, using a conservative estimate, to longer than the lifetime of the individual, using a less conservative estimate. Although our results might appear to differ from this estimate, the difference is primarily attributable to our ability, with larger numbers, to assess groups of mutations separately. The high rate of replacement that we observed among M184V/I mutations is consistent with prior reports [11–13, 29] and almost certainly reflects this mutation's association with reduced fitness in the absence of therapy [6, 9, 30]. The relative stability of thymidine analog–associated, NNRTI, and PI mutations over time is also consistent with other reports [12–14].
We had hypothesized that PI mutations would be replaced more rapidly than NNRTI mutations. However, we found that the PI group had a rate of replacement similar to that of the NNRTI group, although CIs were wide (hazard ratio for replacement of PI vs NNRTI mutations, 1.12; 95% CI, 0.3–4.0; P = .9). The slow replacement of PI mutations is notable, because these mutations, like M184V/I, are known to affect viral fitness [31, 32]. Our group previously performed a partial treatment interruption study in which patients receiving a stable, partially suppressive regimen discontinued one drug class while maintaining the others. Continuation of at least some ART prevented rapid rebound of archived variants, forcing HIV to back-mutate in the same way that occurs after acquisition of a TDR variant. In that study, selective removal of NRTIs was associated with loss of the M184V/I mutation, whereas removal of PIs was associated with limited mutation replacement, even after several years. The mechanism for the persistence of PI-resistant variants despite fitness costs was investigated by van Maarseveen et al, who argued that viral variants carrying both PI mutations and compensatory mutations must traverse a "fitness valley" to lose both types of mutations, first reducing fitness in order to ultimately maximize it. Our findings are consistent with the theoretical framework advanced by these 2 studies and with prior reports of infrequent replacement of PI mutations [11, 12, 16].
The clinical implication of our finding that PI mutations are replaced at a similar rate to NNRTI mutations is that one potential advantage of including PIs in first-line ART regimens in resource-limited settings—namely, faster replacement of transmitted mutations by wild-type and lower risk of spread of resistance—does not appear to be present. The risk of drug resistance propagation is further amplified by the substantial fraction of all HIV transmissions that occur from source patients who have acute and/or early HIV infection [35, 36], implying that, in many patients with TDR, the available time for mutation replacement before retransmission may be quite limited.
A second clinical implication of our findings relates to the rapid replacement of M184V that we observed. Given the frequent use of drugs that select for this mutation, should a newly diagnosed patient who has other TDR mutations be assumed to harbor lamivudine/emtricitabine-resistant virus even if M184V is not detected on genotyping? The rapid mutation loss that we observed suggests that the absence of M184V in this context may be attributable to replacement, which is supported by a prior study showing that very early replacement of M184V is common: 11% of persons with early HIV infection had M184V variants detected using a minor variant assay but not by population sequencing. Other studies, however, have shown that genotyping with population sequencing is usually a reliable guide to choosing therapy for patients with TDR [38, 39]. One way to reconcile these studies is that M184V variants may rapidly decrease to levels that are not clinically significant, leaving a relatively short window during which clinically important frequencies are missed. Precise quantitation of the prevalence of M184V variants after loss of detection by population sequencing, correlated with treatment outcomes, could help define the threshold for clinically significant levels of M184V.
In analyzing predictors of mutation replacement, we made 2 observations that together raise interesting questions about host-virus interactions. First, we hypothesized that higher viral load would predict faster mutation loss but found only a modest trend. Second, our study included many persons with multiple baseline mutations, allowing us to assess whether there are person-level factors influencing mutation replacement. We found that, after accounting for viral load and mutation drug class, there was marked person-to-person variability in the likelihood of mutation replacement. This suggests that there are additional patient-level factors driving mutation loss. Our observations about mutation replacement are consistent with the concept that viral evolution, rather than being driven exclusively by selective (deterministic) events based only on the fitness costs of mutations, may also be driven by stochastic forces. This idea that evolution is influenced by a combination of factors and that the effective viral population size is not simply reflected by the plasma HIV RNA level has been supported by seminal studies [40–42] and helps contextualize our results.
Our study has several important limitations. First, we used population sequence genotypes rather than more sensitive methods capable of detecting minor variants. This limits our ability to assess mutations below the detection threshold of conventional sequencing, which may persist in the viral quasi-species in sufficient quantities to remain clinically significant. This phenomenon has been illustrated by studies of the K103N mutation among women who received single-dose nevirapine to prevent vertical HIV transmission. Our inability to detect M184V minor variants in particular suggests that our data may underestimate the rate at which replacement with wild-type virus occurs. Second, we included 11 individuals who received and subsequently interrupted ART. Because viral evolution is minimal or nonexistent during effective therapy [44–46], we do not believe that this affected our central observations. Third, because of the challenges of identifying patients with very early HIV infection, we could not determine whether any of our participants acquired mutations and lost them in the earliest weeks of infection. Fourth, we did not assess all factors that might influence replacement rates, such as viral tropism. Although the effective viral population size discussed earlier is highly pertinent to analyzing rates of mutation replacement, estimating this factor is complex and requires measurement of multiple clonal populations that we did not perform. Finally, caution should be exercised in generalizing our results from sexual transmission cases to injection drug use transmission. Because intravenous infections have been shown to be caused by a multiplicity of viral variants in most cases, whereas the majority of sexual transmissions are caused by single virions, there may be more rapid replacement of TDR in transmissions related to injection drug use.
Overall, our data indicate that transmitted M184V/I mutations are unique in their high propensity to wane below the detection threshold of population genotyping over time. Clinicians assessing new patients should consider the possibility that mutations not present in baseline genotyping may have been present previously and may have been replaced with wild-type viral variants, particularly in the case of M184V/I. Further investigation is warranted on both host and viral factors that are influencing mutation replacement.
We thank Timothy Schmidt and Jacqueline Javier in the laboratory for core virology; AIDS Research Institute, UCSF, for genotyping expertise; Gerald Spotts, for database management expertise; and the participants in the Options Project and the São Paulo cohort, for their generous participation in ongoing studies.
National Institutes of Health/National Institute of Allergy and Infectious Diseases (T32AI060530, PO1AI071713 to FMH, K24AI069994 to SGD, and P30 AI027763 and UL1 RR024131 to UCSF), Brazilian Program for STD and AIDS, Ministry of Health (914/BRA/3014- UNESCO to EGK), the São Paulo City Health Department (2004-0.168.922-7 to EGK), Programa Nacional de Pós Doutorado / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (PNPD/CAPES-2496/08 to MCS), and Fundação de Amparo a Pesquisa do Estado de São Paulo (04/15856-9 to RSD and EGK, and 04/12316-3 to MCS). RSD and EGK are scholarship recipients from the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Brazilian Ministry of Science and Technology.