Successful conduct of HIV vaccine efficacy trials entails identification and enrollment of at-risk populations; assessment of appropriate endpoints as measures of vaccine efficacy, both for prevention of HIV acquisition and for amelioration of disease course among infected vaccinees; and identification of potential confounders or effect modifiers. While not invariably useful, and while bringing their own costs in terms of measurement and validation, a variety of biomarkers may aid at each stage of trial conduct.
A review of selected articles, chosen based on quality, relevance of the biomarker to HIV vaccine trials, and availability of the publication, was conducted. The authors also drew experience from current trials and other planned or ongoing trials.
Biomarkers are available to assess HIV incidence in potential study populations, but care is needed in interpreting the results of these assays. During trial conduct, sexually transmitted infections (STIs) such as herpes simplex virus type 2 (HSV-2) may act as effect modifiers of primary and secondary endpoints, including HIV incidence and set-point viral load. The utility of STI biomarkers will likely depend heavily on local epidemiology at clinical trial sites. Analyses from recent large HIV vaccine efficacy trials point to the complexities in interpreting trial results and underscore the potential utility of biomarkers in evaluating confounding and effect modification.
The HIV epidemic continues to spread worldwide. Even though strong HIV prevention programs, access to highly active antiretroviral therapy and the maturity of the epidemic are resulting in stabilization or even decreases in HIV incidence (Gregson, 2006; Diop, 2000; Msuya, 2007), in 2007 there were approximately 2.5 million new HIV infections around the world (UNAIDS, 2007). Most agree that, given the number of infections, the development of a safe and effective HIV vaccine is a global public health priority (Klausner et al, 2003).
Biomarkers could have multiple roles within HIV vaccine efficacy trials and could be used to answer key questions before, during and after enrollment in trials (Gotch, 2005). First, because the aim of most advanced HIV vaccine clinical trials is to enroll subjects at high HIV risk (Vardas, 2005; Salazar, 2005; Koblin, 2000; Schechter, 2000), biomarkers could serve the function of assessing and validating risk of exposure to HIV in the intended recruitment area.
Second, biomarkers are routinely used in assessment of vaccine efficacy. For one, HIV vaccine trials focus on the ascertainment of HIV incidence in order to answer the primary question of vaccine efficacy (Shepard, 2006; Gotch, 2005; Beattie, 2004; Gilbert, 2003). Also, the viral load set point, an early ‘stable’ measure of plasma viremia, has been used as a surrogate end point biomarker (Duerr, 2006) to evaluate whether vaccine-induced immune response can ameliorate disease course in individuals who become HIV infected after vaccination – by enabling those who become infected to better control viral replication compared to those who received a placebo.
Finally, biomarkers of disease, especially STIs, have been found to be intimately associated with HIV infectiousness, transmissibility and disease progression (Plummer, 1991; Greenblatt, 1988; Wasserheit, 1992; Mayer, 1995; Pao, 2005). The measurement of STI biomarkers in HIV vaccine efficacy trials could provide valuable information about the main study outcomes of prevention of HIV infection or amelioration of disease progression or decreased HIV infectiousness. This is especially important if the STI is likely to serve as an effect modifier or a confounder in the analysis of vaccine efficacy or changes in disease progression.
As thousands of individuals around the world continue to enter HIV vaccine trials, a critical review of some of the biomarkers to include in these trials is needed. In this paper we have reviewed the use of biomarkers already collected in HIV vaccine trials and evaluated others that, due to their relation to HIV infection, could be considered for inclusion in future trials. Also, the availability of a broader range of technologies to measure HIV infection (e.g. use of antibody tests that provide some information on recency of infection) and other STIs may change the feasibility of using biomarkers in HIV vaccine trials. These technologies are reviewed here, with an emphasis on what potential additional information would be gained in a trial where these new technologies are employed.
The intent of this review is to help clinical trial investigators in their considerations regarding HIV vaccine trial design in given target populations and sites. In this paper we use the official NIH definition of a biomarker: “a characteristic that is objectively measured and evaluated as an indicator of normal biologic processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention.”
The global HIV epidemic ranges from generalized epidemics to concentrated HIV epidemics in sub-populations such as men who have sex with men, commercial sex workers or injecting drug users (UNAIDS, 2007). Prior to the initiation of an HIV vaccine efficacy trial, the most critical information investigators need is a reliable estimate of HIV incidence in specific populations. This is needed to develop a sample size estimate of the number of enrollees needed for the trial (Vardas, 2005) and to identify which populations are appropriate for enrollment into early phase or later phase trials. Ideally, if HIV incidence and behavioral data can be linked, risk factors can even be identified early on and used as entry criteria that will facilitate enrollment of persons at high risk of HIV acquisition.
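The dependence of trial size on the incidence estimate can be made concrete with a back-of-envelope calculation. The sketch below is purely illustrative and is not a substitute for a formal power analysis; the function name and all parameter values are hypothetical:

```python
import math

# Back-of-envelope sketch (not a formal power analysis): given an assumed
# annual HIV incidence and planned follow-up, estimate how many at-risk
# participants must be enrolled to accrue a target number of endpoint
# infections. All parameter values below are hypothetical.

def participants_needed(annual_incidence, years_followup, target_events, retention=1.0):
    """Approximate enrollment needed to observe `target_events` infections."""
    expected_events_per_participant = annual_incidence * years_followup * retention
    return math.ceil(target_events / expected_events_per_participant)

# e.g. 3% annual incidence, 3 years of follow-up, 100 endpoint infections
n = participants_needed(0.03, 3.0, 100)   # -> 1112 participants
```

Even this crude arithmetic shows why an overestimated incidence is costly: halving the true incidence doubles the required enrollment, which is why reliable pre-trial incidence data are so critical.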
Unfortunately, HIV incidence has historically been difficult to estimate. HIV prevalence estimates can be obtained easily using rapid HIV antibody tests, with confirmation by Western Blot or other specific techniques (Franco-Peredes, 2006). The ELISA assays in particular are widely available, very inexpensive, quick and easy to use in regions where the highest prevalence sub-populations are found (Summers, 2000; Liechty, 2005). However, these tests for current HIV infection do not provide information about the duration of infection in seropositive individuals. HIV incidence, the number of new HIV infections in a population in a given time period, is exactly the information needed to plan HIV vaccine efficacy trials, yet it is much harder to obtain.
Because HIV incidence information is critical prior to initiation of HIV vaccine efficacy trials, better ways to inexpensively and quickly assess incidence in target populations are needed (Parekh, 2005). The best and most precise incidence assessment is done by recruiting at-risk, HIV seronegative populations and prospectively following them over time. Another advantage of this approach is that the process of participant recruitment, risk assessment and risk reduction counseling—and efforts to retain participants—are similar to the procedures involved in an actual vaccine trial. Thus, a prospective cohort study provides an opportunity to hone recruitment strategies, understand the target population and implement operational processes. Such “vaccine preparedness” studies have in fact been carried out in several countries around the world, including the US, and have been very helpful in understanding local HIV epidemiology in potential trial sites (Brown-Peterside, 2000; Koblin, 2000; Seage, 2001; Baeten, 2000). As more HIV vaccine efficacy trials approach, this method for estimating incidence and gathering other epidemiological data could be advantageous to geographic areas or subpopulations with little to no epidemiological data.
The major disadvantage of this approach is the time and expense involved in setting up the study, in the recruitment and retention of the cohort and in following the cohort longitudinally. For example, in Africa only a handful of such community-based prospective surveys have been undertaken so far (Sakarovitch, 2007). Furthermore, local HIV epidemiology may change by the time the longitudinal study is completed and an efficacy trial begins.
Fortunately, some new methods for cheaply and easily obtaining HIV incidence have been developed in the last ten years. One of the most recent approaches uses comparative antibody tests that take advantage of the evolution of anti-HIV antibodies that occurs in the months after HIV infection. These assays distinguish recent seroconverters from people with more established infection. One such assay is known as the “detuned” or less sensitive enzyme immunoassay (LS-EIA), another is the IgG-capture BED-EIA (BED) assay (Calypte Biomedical Corporation, Rockville, MD) (Janssen, 1998; McFarland, 1999; Weinstock, 2002; Parekh, 2002; Hu, 2003) and there are several others.
These EIA assays have the advantage of being relatively inexpensive and rapid. They can theoretically be used to identify recent seroconverters and allow calculation of incidence using specimens derived from cross-sectional studies. Unfortunately, however, the detuned LS-EIA assay has never been optimized to precisely determine a narrow window of seroconversion (the window is usually within the past 6 months) and thus may lead to unstable incidence estimates, especially if the sample size is small. This assay has also been found to miss acute HIV infections, and there is some evidence of confounding of the estimates by use of antiretroviral therapy (ART) (Priddy, 2007; McDougal, 2005; Reed, 2004; Laeyendecker, 2005). In addition, results from the LS-EIA appear to vary by subtype (for example, significantly different window periods were seen for HIV subtype B- and E-infected populations from Thailand) and so it has not been widely implemented due to the difficulty of interpreting data from populations with multiple subtypes (Kana, 2007; Dobbs, 2004).
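The incidence calculation these assays enable can be sketched directly. The standard cross-sectional estimator assumes a single mean window period shared by all individuals, which is precisely the simplification that makes the estimates unstable; all counts below are hypothetical:

```python
# Cross-sectional incidence estimator for tests of recent infection (TRIs).
# Assumes one fixed mean window period for all individuals, which is the
# key simplification noted in the text. Example counts are invented.

def tri_incidence(n_recent, n_hiv_negative, window_days):
    """Annualized incidence: recent cases / (susceptibles x window in years)."""
    window_years = window_days / 365.0
    return n_recent / (n_hiv_negative * window_years)

# e.g. 12 'recent' results among 4,000 HIV-negative persons, 180-day window
rate = tri_incidence(12, 4000, 180)
print(f"{100 * rate:.2f} infections per 100 person-years")   # -> 0.61
```

Because the count of "recent" results is typically small, a handful of misclassified specimens can shift the estimate substantially, which is why these estimates are best interpreted at the population level.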
The BED-EIA test may be more useful (Parekh, 2002). This commercially available test indirectly measures the proportion of total IgG in a given specimen that is HIV-1 specific. Recent seroconverters have a lower proportion of HIV-specific IgG in the serum/plasma than those with long-standing infection. This assay was developed to address some of the shortcomings of the LS-EIA (e.g., the difficulty of accurately making a 1/20,000 dilution, assay variability, subtype-dependent performance) but has a similar mean window period to the LS-EIA, approximately 160 days (Parekh, 2001; Dobbs, 2004; Parekh, 2005; Hu, 2003; Kana, 2007). In a recent publication using HIV positive samples from Côte d’Ivoire the BED-EIA had a sensitivity of 85.7% and a specificity of 77.1% in detecting recent infections (Sakarovitch, 2007). In a study among 26,548 volunteers screened for enrollment in a phase III HIV-1 vaccine trial in Thailand, 38 (9%) of the 415 HIV prevalent infections tested with BED-EIA were identified as recent seroconversions (Kana, 2007). It is impossible to measure the accuracy of the incidence rate reported, though the BED assay did appear to be a convenient and economic tool for identifying recent infections. The author notes that a threshold cut off based on a calibrator specimen determines the classification of recent seroconversion and so the assay protocol must be strictly adhered to for accurate, precise and reproducible results (Kana, 2007).
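The sensitivity and specificity figures above matter because false-"recent" results among long-standing infections can swamp true recent cases. A hypothetical sketch of a Rogan-Gladen-style correction, plugging in the Côte d'Ivoire performance estimates cited above with an invented observed fraction, illustrates the adjustment:

```python
# Rogan-Gladen-style correction for TRI misclassification. Sensitivity and
# specificity are the BED-EIA estimates from the Cote d'Ivoire study cited
# in the text (85.7% / 77.1%); the apparent 'recent' fraction is invented.

def corrected_recent_fraction(apparent_fraction, sensitivity, specificity):
    """Back out the true 'recent' fraction from the observed fraction."""
    return (apparent_fraction + specificity - 1) / (sensitivity + specificity - 1)

# e.g. if 30% of prevalent infections test 'recent', only ~11% truly are
true_frac = corrected_recent_fraction(0.30, 0.857, 0.771)   # ~0.113
```

Note that when the apparent fraction falls below the false-recent rate (1 minus specificity), the corrected estimate goes negative, one illustration of why strict adherence to the assay's calibration protocol is emphasized.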
Other investigators have modified commercially available EIA tests such as the Vironostika HIV-1 EIA from BioMerieux and the Abbott HIV-1/2gO assay to detect recent infection (Rawal, 2003; Kothe, 2003; Suligoi, 2003). Another in-house test for recent infection with HIV-1 was developed by Barin et al, called the IDE-V3 EIA (Barin, 2005). These three assays were compared to the BED EIA in an incidence study and all four showed mediocre performance at measuring recent infection in samples from individuals with known HIV infection dates (collected <180 days after infection) (Sakarovitch, 2007). Using a longer window period and/or a different cutoff favoring better specificity may improve accuracy of the modified kits.
In conclusion, these EIA methods (otherwise known as tests for recent infections or TRIs) for estimating HIV incidence are most appropriately used for population-level data. The window period for positivity in these assays is fairly broad (about 150–180 days) and varies considerably from person to person, so that incidence calculations, which use a single value for all individuals, should be interpreted carefully. Moreover, the assays have difficulties in identifying recent infections in persons infected with HIV clades other than B, E or D, and misclassify individuals with late stage HIV disease, who lose antibodies as a result of immunodeficiency. In general, however, these tests will become more useful if future generations of tests can more accurately measure recent infection and adjust for individual-level variation in antibody kinetics. Comparisons to estimates from prospectively collected data and nucleic acid testing will aid assessments of the accuracy and advantages of different methods.
Another method that has been used for pinpointing time of HIV infection has been the use of HIV RNA testing, or nucleic acid amplification testing (NAAT), to identify individuals who have been so recently infected that they have not had sufficient time to develop HIV antibodies. In the literature these are referred to as acute HIV infections as opposed to the recent infections that incidence EIAs are designed to capture. Newer clinical testing algorithms suggest that pooling plasma specimens from high risk persons who test antibody negative may result in cost-effective use of HIV NAAT screening; pooling algorithms using NAAT testing have been used to identify viremic individuals and to allow estimation of HIV incidence in high risk groups (Quinn et al, 2000b; Priddy, 2007). A hypothetical advantage of routinely screening for HIV RNA before enrollment in HIV vaccine trials is that acutely infected persons will not be enrolled. In addition, pooled samples showing incident infections might provide important information about where to target enrollment of at-risk populations.
However, the narrow window period identified by NAAT testing confers a significant disadvantage: few acute infections are detected. Specifically, the small number of infections identified—though important—limits the ability to identify risk factors associated with acute infection and may prevent inter-group incidence comparisons (Priddy, 2007). Another major disadvantage of NAAT is that in regions of lower HIV prevalence, the number of false positives in RNA+/antibody- populations may exceed the number of acutely viremic persons. In addition, the cost implications of NAAT testing would need to be carefully evaluated. Many centers in the developing world may not be equipped to perform plasma pooling and HIV NAAT screening proficiently and in a cost effective manner, though there are cases where NAAT is succeeding even in resource constrained settings. For example, this has been demonstrated by CHAVI (Center for HIV-AIDS Vaccine Immunology) sites in Africa that perform NAAT pooling routinely and have already screened >10,000 high risk individuals.
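The economics of pooled screening can be sketched with the classic two-stage (Dorfman) scheme, in which each pool is tested once and individual specimens are retested only when the pool is positive. The prevalence and pool size below are hypothetical, not drawn from the cited programs:

```python
# Expected NAAT tests per specimen under two-stage (Dorfman) pooling:
# one test per pool, plus individual retests whenever the pool is positive.
# Prevalence and pool size are illustrative, not from the cited programs.

def expected_tests_per_specimen(prevalence, pool_size):
    """Expected number of NAAT tests per specimen screened."""
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    return 1 / pool_size + p_pool_positive

# e.g. 0.5% of antibody-negative specimens viremic, pools of 10 specimens
cost = expected_tests_per_specimen(0.005, 10)   # ~0.15 tests per specimen
```

At 0.5% prevalence, pooling cuts the number of assays by roughly 85% relative to individual NAAT; the savings shrink quickly as prevalence rises, which is part of why pooled NAAT is best suited to screening antibody-negative, lower-prevalence populations.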
In conclusion, the lack of more precise and cost-effective HIV incidence biomarkers can limit the information available to researchers. However, with the right laboratory evaluation method, HIV biomarkers may be used to refine the estimates of the prevalence and incidence of HIV in at-risk populations that are being considered for inclusion in vaccine trials (Parekh, 2005). Two of these methods are the detuned EIA or BED capture assay and the NAAT pooled screening methodology, each with certain limitations. These same biomarkers may refine the determination of HIV incidence during the vaccine clinical trial, as described in the next section, by providing enhanced precision regarding the time of HIV acquisition by study participants. Finally, serologic testing for HIV incidence is faster than following cohorts prospectively but cannot replace the rich information provided by longitudinal cohorts. As stated earlier such a cohort study may be critical in regions or sub-populations where little to no epidemiological information is available and can generate prospective biological and behavioral data in a way that can illuminate epidemic trends better than cross-sectional prevalence studies.
The most important biomarkers used during an HIV vaccine trial are HIV infection and HIV viral load, a surrogate marker of vaccine effect on disease progression. Commonly used biomarkers for this purpose are HIV seropositivity (by ELISA/Western blot) and set-point viral load. Set-point viral load is used as a surrogate for vaccine effects due to observations that lower set-point viral load is correlated with a slower immunological decline in unvaccinated HIV-infected individuals (Mellors, 2007). It has been proposed as a surrogate endpoint for trials of “T-cell vaccines” based on these observations and biologic plausibility, assuming that vaccine-induced immune responses will lead to control of viremia, which will be reflected in reduced viral load which will in turn lead to improved clinical course. It is also hypothesized that vaccine-induced decreases in plasma RNA viral load will result in reduced transmission of HIV to sexual partners. This is based on data from untreated, unvaccinated individuals in a natural history study of discordant couples in the Rakai district of Uganda (Quinn et al, 2000a).
For measurement of HIV acquisition, incident EIA testing would have limited utility during an HIV vaccine trial due to individual variation in the evolution of the antibody response. This makes these tests more suitable for determining recency of infection on a population, rather than an individual, level. However, RNA tests are routinely used to test stored specimens to determine if an antibody-positive participant was viremic at a previous time point. BED-EIA, or other EIAs designed to capture recent HIV infection, might be used to narrow the ‘window’ for a group of incident infections detected at a given visit. This would be most appropriate when study visits are far apart (e.g., one year) since more frequent HIV testing (e.g., every six months or every three months) would provide better discrimination of HIV incidence than would be provided by serologic testing for recency of infection.
If these assays can help to better pinpoint the time of infection (for example through routine NAAT testing to detect acute infections or if future generations of EIA tests can be used to identify very recent infections), they may aid in linking infection data with behavioral data obtained around the time of infection. This is a vast improvement over trying to link behavioral data with an infection date that is known very imprecisely. When a participant first tests positive for HIV serologically, stored specimens from the previous visit are routinely tested for HIV RNA as evidence of acute infection. NAAT testing is not routinely used for screening due to cost considerations, although specimens could be stored and then pooled and tested at a later date. However, if a substantial delay occurs in identifying acute infections, the advantages of the NAAT methodology over sequential serologic testing would likely be lost as participants would be well past the acute infection period when identified. Another major disadvantage of using these kinds of assays during an HIV vaccine efficacy trial is that they are expensive and not widely available outside of the US. There is also concern that they are not optimized for detection of infection with non-Clade B HIV.
Other important biomarkers to consider, as part of screening or after enrollment, are STI and other infectious disease biomarkers that may be co-factors for HIV acquisition or markers of risk. Biomarkers such as other sexually transmitted infections may be highly correlated with risk of HIV infection and, if they may act as effect modifiers of vaccine efficacy, should be included in the final statistical analyses (Shepard, 2006). The unexpected results seen in a recent test-of-concept trial of the Merck Ad5 HIV vaccine candidate and the observation of vaccine effects on HIV acquisition only in certain subgroups underscore the potential complexity of analysis of data from vaccine efficacy trials (NIAID, 2007). These analyses may be aided by examination of additional biomarkers that can help elucidate, for example, HIV transmission dynamics for a given transmission modality or study site.
Deciding which STI biomarkers to include is still very difficult, however, because the possible effect of STIs on the measurement of vaccine efficacy has not been documented and is mostly theoretical. The potential role of STIs in different trial scenarios is depicted in Table 1. In scenario B, where STIs are much less prominent in the placebo arm of the trial, the vaccine efficacy findings are biased toward the null. In scenario C, differential vaccine efficacy between the STI strata of the trial also biases vaccine efficacy findings toward the null. As vaccine effects become more modest, and fewer endpoints are available for analysis in STI subgroups, these biases may become more pronounced (Buchbinder et al, 2008). Despite these possible effects, however, deciding whether to test for STIs, or whether to alter trial design or sample size calculations based on these theoretical effects, is not straightforward. When considering which STI to include in a trial, the overall epidemiological evidence for the STI as an HIV co-factor should be considered carefully, in addition to the epidemiological evidence for that STI’s importance in a trial target population, such as MSM, or for a local community involved in the trial. Even so, the inclusion of STI biomarkers in HIV vaccine trials is controversial because of the lack of evidence that these biomarkers will in fact affect the results through confounding or effect modification. This in turn makes it very difficult to calculate parameters such as the sample size needed to conduct STI subgroup analyses, since both the potential size and the direction of the effect are unknown. Certainly trial sample sizes would need to increase in order to adequately perform STI subgroup analyses.
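How effect modification could dilute a measured vaccine effect can be illustrated numerically. In the hypothetical sketch below, the vaccine is assumed to protect only HSV-2-negative participants, and HSV-2 is assumed to triple HIV risk; every rate is invented for illustration:

```python
# Hypothetical illustration of effect modification biasing pooled vaccine
# efficacy (VE = 1 - relative risk) toward the null. The vaccine is assumed
# to protect only STI-negative participants; all rates are invented.

def pooled_ve(ve_sti_neg, ve_sti_pos, sti_prev, base_risk, sti_risk_ratio):
    """Overall VE when efficacy differs between STI-negative/-positive strata."""
    risk_placebo = ((1 - sti_prev) * base_risk
                    + sti_prev * base_risk * sti_risk_ratio)
    risk_vaccine = ((1 - sti_prev) * base_risk * (1 - ve_sti_neg)
                    + sti_prev * base_risk * sti_risk_ratio * (1 - ve_sti_pos))
    return 1 - risk_vaccine / risk_placebo

# 60% VE in HSV-2-negatives, 0% in HSV-2-positives, 40% HSV-2 prevalence,
# and a 3-fold HIV risk among the HSV-2 infected:
ve = pooled_ve(0.60, 0.0, 0.40, 0.02, 3.0)   # -> 0.20 (20% overall VE)
```

In this toy example, site-level HSV-2 prevalence alone drags the observed overall efficacy from 60% to 20%, the kind of dilution that stratified STI analyses are intended to uncover.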
Proponents argue that STIs may affect estimates of vaccine efficacy because they influence HIV acquisition and HIV viral load (O’Brien et al., 1996; Quinn et al., 2000a; Corey, Wald et al., 2004), primary endpoints of HIV vaccine trials. Multiple, well-designed cohort and nested case-control studies from 4 continents suggest that there is a 2- to 5-fold increased risk of HIV acquisition associated with intercurrent STDs, and that genital ulcer diseases are generally associated with slightly higher risk estimates than discharge syndromes (Fleming et al. 1999; Sexton et al. 2005).
For herpes simplex virus type 2 (HSV-2), a recent meta-analysis of longitudinal studies estimated that the risk of HIV-1 infection among HSV-2 infected individuals was 3 times that among HSV-2–uninfected individuals (Freeman et al. 2006). This 3-fold increase in risk of HIV-1 acquisition suggests that in much of the developing world, where HSV-2 prevalence is high, a substantial proportion of HIV infection may be attributable to genital herpes. Thus, HSV-2 is the STI most often proposed as a possible HIV vaccine effect modifier, although it is possible that other STIs reviewed here could play a similar role.
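The claim that a substantial proportion of HIV infections may be attributable to genital herpes follows from Levin's population attributable fraction. A quick sketch, combining a hypothetical high HSV-2 prevalence with the 3-fold relative risk cited above:

```python
# Levin's population attributable fraction (PAF): the share of incident HIV
# attributable to an exposure, given its prevalence and relative risk.
# RR = 3 is the meta-analysis estimate cited in the text; the 50% HSV-2
# prevalence represents a hypothetical high-prevalence setting.

def attributable_fraction(exposure_prevalence, relative_risk):
    """PAF = p(RR - 1) / (1 + p(RR - 1))."""
    excess = exposure_prevalence * (relative_risk - 1)
    return excess / (1 + excess)

paf = attributable_fraction(0.50, 3.0)   # -> 0.5: half of new infections
```

The same arithmetic shows why the modifier's local prevalence matters so much: at 10% HSV-2 prevalence the attributable fraction drops to about 17%.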
It is also possible, because of the 3-fold increased risk for HIV acquisition, that vaccine trials conducted in high prevalence HSV-2 settings will reach trial endpoints much more quickly. Logically, reducing HSV-2 using suppressive therapy would also reduce risk of HIV. However, a recent trial did not demonstrate efficacy of HSV-2 suppression treatment in preventing new HIV infections (Celum et al, 2008). Thus, exactly how these biological cofactors may interact with candidate vaccines remains to be elucidated in clinical vaccine trials.
In addition to increasing susceptibility to HIV acquisition, HSV-2 infection can affect HIV-1 viral load, particularly during acute HIV-1 infection, and may affect the HIV set-point viral load in HSV-2 infected individuals. Studies suggest that, for HIV-1 infected individuals, the plasma HIV viral load is 0.33–1 log higher among HSV-positive persons compared to HSV-negative individuals (Schacker et al. 2002; Reynolds et al. 2003; Gray et al. 2004; Duffus et al. 2005; Nagot et al. 2007). Several studies demonstrate that HSV-2 may directly increase the replication of HIV-1, explaining the higher HIV viral load seen in HSV-2 infected people (Corey et al. 2004). Again, such a role for HSV-2 could theoretically lead to vaccine effect modification in vaccine trials seeking to demonstrate reductions in viral load.
Another example of potential effect modification is syphilis. Genital ulcers due to syphilis have been shown to be associated with increased HIV transmission and acquisition (Fleming, 1999; Bentwich, 2000; Plummer, 1990) and with increases in HIV viral load (Buchacz, 2004; Dyer, 1998). These associations could potentially have implications for HIV vaccine trials, especially in certain geographic areas and in key subpopulations, such as MSM. MSM populations in cities in the United States and internationally have documented co-epidemics of syphilis and HIV, with HIV infection rates as high as 50–60% among gay/bisexual men with early syphilis (Buchacz, 2005).
We have discussed the use of biomarkers in assessing the primary endpoints in a vaccine trial and their potential role in explaining effect modification and confounding, using the examples of HSV-2 and syphilis. Although not directly related to the study question of vaccine efficacy, another approach to the use of STI biomarkers in an HIV vaccine trial may be their use as a screening tool. STI screening may be especially relevant to Phase II/III trials that seek to enroll participants at highest risk of HIV, by helping to assess patterns of risk-taking behavior in specific populations (e.g., commercial sex workers, IDU, MSM, adolescents, etc). In general there is agreement that using STIs for this purpose is most beneficial at the population level (e.g., to define STI epidemiological trends in key groups and thus identify groups or communities who may be at high risk of HIV) as opposed to the individual level, where inferring HIV risk from STI infection can be very difficult. Non-disease biomarkers, such as semen biomarkers, do exist but are not included in this review.
Although most researchers agree there are several limitations to definitively linking STI infection to HIV risk behavior at the individual level (Fishbein, 2000; Pequegnat et al., 2000), high STI incidence and prevalence in a population or in target groups does, in general, help identify populations at risk for HIV due to unprotected sex (MMWR, 2003). Thus, measuring incident STIs would be most useful in planning prior to opening an HIV vaccine trial or as a screening tool. At the level of an individual clinic, incident STIs would likely be rare, making it difficult to validate sexual behavior or improve counseling based on STI outcomes; nevertheless, STI outcomes could be used in this way. Syphilis and HSV-2 can again serve as examples.
Because the risk behaviors are so similar to those for HIV, performing serologies for syphilis and HSV-2 could help clinic staff identify high-risk volunteers, validate reported behaviors and provide appropriate counseling throughout a trial (Lopez-Zetina, 2000), especially in populations known to be at risk such as MSM (Buchacz, 2005; Nicoll, 2002; Stolte, 2001; Tabet, 2002; Lama et al, 2006). Fortunately, assays used to test for syphilis reactivity (RPR or VDRL), confirmed by FTA or MHA-TP, are widely available and fairly inexpensive (Goh, 2005). Also, for detection of HSV-2, newer antibody assays are becoming more widely available, cheaper and quicker, and have been shown to be more cost-effective than traditional culture methods (Ramaswamy, 2004). Finally, HSV-2 rapid test kits and commercial ELISA HSV-2 tests are available (Morrow, 2005). It should be noted, however, that gG2-based HSV-2 EIAs have performed poorly in many populations, which is a potential drawback to HSV-2 testing (Oladepo, 2000).
Bacterial STIs are another group of biomarkers that can be useful in patient management. Testing for gonorrhea and chlamydia has been made easier by use of PCR testing of urine and self-administered vaginal swabs for women, even though the cost of PCR testing can sometimes be prohibitive and specimen collection can be logistically challenging in some settings (Johnson, 2002). In addition, the association with HIV transmission and acquisition is not as strong as with syphilis and HSV-2, and the prevalence of bacterial STIs is generally lower. It would thus be much harder to make a case that bacterial STIs may act as effect modifiers in an HIV vaccine trial, although the potential indirect benefits to the trial and to trial participants described earlier would still apply. To decrease the cost of testing, pooled PCR testing or in-house PCR assays can be used (Tan, 2005; Johnson, 2002; Mania-Pramanik, 2006; Cook, 2005).
It is worth noting the unique case of gonorrhea and chlamydia in MSM, since MSM are often important vaccine trial participants. Pharyngeal and rectal testing for gonorrhea and chlamydia by NAAT is not FDA approved. Pharyngeal testing may have limited utility as a marker of sexual risk behavior, whereas rectal gonorrhea and chlamydia in men have been shown to enhance susceptibility to HIV infection. Rectal infections would not be detected if only urethral screening were performed; in one study among MSM, 53% of chlamydial infections and 64% of gonococcal infections were at nonurethral sites, and up to 85% of rectal infections were asymptomatic (Kent, 2005). Increases in chlamydial and gonococcal infections in MSM in the US and elsewhere may indicate a need to consider testing for these infections in MSM in HIV vaccine trials, both as an index of sexual risk behavior and as a possible confounder of HIV acquisition or viral load endpoints in a trial (Geisler, 2002; Fox, 2001; Ciemens, 2000; King, 2003).
Certain hepatitis infections may also be useful markers of HIV risk and can corroborate self-reported sexual risk behavior. For example, it may be helpful to assess Hepatitis C serologies (HCV) in vaccine trials involving injection drug users (IDU) (Paris, 2003). Due to shared risk factors for transmission, co-infection with HIV and HCV is common, especially among IDUs and recipients of contaminated blood or products (Verucchi, 2004; Sulkowski, 2000; Sherman, 2002). In addition, studies of MSM in the US and elsewhere have shown a higher prevalence of HCV than in the general population and often a high correlation with HIV infection (Cohen, 2006; Schmidt, 2007; Thein, 2007). For example, a survey of 183 MSM at a community health center in Boston found a prevalence of 11.5% for HCV; men with HCV were more likely than those not infected with HCV to have HIV infection (70% versus 29%; p<.001) or Hepatitis B infection (47% versus 12%, p<.001) (Cohen, 2006). These results may indicate that the risk behaviors for HCV are aligned with HIV risk behavior (HCV seropositivity was significantly associated with an aggregate score representing high-risk sexual behavior in the past six months), though it remains unclear what role sexual behavior plays in HCV transmission.
Finally, the potential role of Hepatitis B screening should be considered, mostly as a benefit to trial participants. In the US many people who may volunteer for HIV vaccine trials are not vaccinated against Hepatitis B. This is especially true of the high risk groups that are sought for HIV vaccine trials, such as high risk MSM and heterosexual populations in sub-Saharan African countries with high prevalence of Hepatitis B. Studies among high-risk women and MSM in the US indicate low rates of HBV vaccination in these groups (Diamond, 2003; CDC, 1996; MacKellar, 2001; Koblin, 2007). Clinical trial sites in areas of high Hepatitis B prevalence could potentially test for Hepatitis B and provide or refer study participants for vaccination. Authors have noted that Hepatitis B vaccination could be used as an incentive to HIV vaccine trial participation, as could other STI testing and/or vaccinations noted in this review (Beyrer, 1996).
The review of HIV biomarkers reveals a wide spectrum of utility and ease of implementation. EIA tests for recent HIV infection and NAAT tests for acute infection are best used for incidence studies conducted in preparation for HIV vaccine studies. Even though the richest incidence data from a population come from prospective cohort studies, incidence EIA and/or NAAT testing can provide HIV incidence information more efficiently and cost-effectively than a cohort study. However, the ease of obtaining incidence estimates from cross-sectional specimen collections comes at a price. Estimates obtained in this manner may be less accurate than prospectively collected data, and cross-sectional studies using these methodologies (especially those using ‘waste’ specimens collected for another purpose) often provide only limited information on sexual risk behavior or other co-factors for HIV infection. This is a distinct disadvantage when a goal is to identify groups at high risk of HIV infection (e.g., by ethnic group, gender, geographic location, or risk group).
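To illustrate how a cross-sectional recency assay yields an incidence estimate, a simple "snapshot" estimator divides the number of assay-recent infections by the person-time at risk implied by the assay's mean window period. All figures below are hypothetical, and the sketch omits adjustments (e.g., for false-recent misclassification) that a real analysis would require.

```python
# Illustrative "snapshot" incidence estimator from a cross-sectional
# survey using a recency assay (e.g., a detuned/BED-type EIA).
# All numbers are hypothetical; real analyses must also adjust for
# assay misclassification (false-recent rate), which is omitted here.

def snapshot_incidence(n_recent, n_negative, window_days):
    """Annualized incidence: recent infections divided by the
    person-years at risk implied by the assay's mean window period."""
    person_years_at_risk = n_negative * (window_days / 365.0)
    return n_recent / person_years_at_risk

# Hypothetical survey: 3,000 HIV-negative participants, 12 of whom are
# classified as "recent" by an assay with a 180-day mean window period.
incidence = snapshot_incidence(n_recent=12, n_negative=3000, window_days=180)
print(f"Estimated incidence: {incidence:.3%} per year")
# -> Estimated incidence: 0.811% per year
```

The same arithmetic underlies why these estimates are fragile: small absolute counts of "recent" results and uncertainty in the window period both propagate directly into the incidence estimate.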
Using EIA tests for recent HIV infection during a trial may be useful when testing intervals are long (e.g., annual testing) and an investigator wishes to pinpoint the actual infection date more precisely. For the most part, however, when a participant becomes HIV infected during a vaccine trial, earlier specimens are RNA tested to determine whether the person was acutely infected at the previous study visit. Using NAAT testing at trial screening is generally cost-prohibitive given the small number of acute infections that would be detected. EIA tests for recent infection and NAAT/RNA tests can aid understanding of behavioral data by allowing behavioral data to be linked to a more accurate estimate of the HIV infection date.
Investigators in a vaccine trial who are contemplating the use of STI biomarkers should first consider whether the biomarker in question is of direct or indirect benefit to the overall trial. For example, including potential STI biomarkers in a clinical trial because of the possibility of confounding and effect modification is of direct benefit to the primary scientific question and should be examined closely by HIV vaccine trial teams. Other uses of biomarkers, such as corroborating risk-taking behavior or identifying groups at risk, are of indirect benefit. Also, in making decisions about including STI assessment in trials, local STI epidemiology should be carefully considered. STI prevalence will vary from site to site, and the unique contribution of specific STIs to each trial outcome may vary. Once an STI of importance has been identified, testing for it could be of use prior to the trial, as a screening tool, as a baseline measurement, and perhaps also throughout an HIV vaccine efficacy trial.
For the STI biomarkers reviewed, there are two important mechanisms through which STIs may theoretically modify candidate HIV vaccine efficacy. First, trial participants with intercurrent STIs may face a 2–5 fold increased risk of acquiring HIV, as has been demonstrated by multiple studies (Fleming et al. 1999; Sexton et al. 2005). Estimates of vaccine efficacy for HIV prevention may be affected if STIs occur more commonly in one arm of the trial or if STIs act as effect modifiers. Second, intercurrent STIs may affect HIV viral load, an important outcome in HIV vaccine trials. Neither of these concerns (STIs increasing HIV acquisition and viral load) would apply if STIs were equally common in both arms and the vaccine's action were the same among participants with and without STIs; that is, if the vaccine effect were not modified by intercurrent STIs. It is important to note again that such interactions are currently theoretical.
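A small numerical sketch shows why effect modification matters for interpretation. Using the standard definition of vaccine efficacy, VE = 1 − (attack rate in the vaccine arm / attack rate in the placebo arm), a crude (pooled) estimate can mask very different efficacy in participants with and without an intercurrent STI. All counts below are invented purely for illustration.

```python
# Hypothetical illustration of effect modification by an intercurrent STI.
# Vaccine efficacy (VE) is computed as 1 minus the relative risk of
# infection (vaccine arm vs placebo arm). All counts are invented.

def vaccine_efficacy(inf_vax, n_vax, inf_plc, n_plc):
    """VE = 1 - (attack rate in vaccine arm / attack rate in placebo arm)."""
    rr = (inf_vax / n_vax) / (inf_plc / n_plc)
    return 1 - rr

# Stratum without the STI: the vaccine appears protective.
ve_no_sti = vaccine_efficacy(inf_vax=10, n_vax=1000, inf_plc=25, n_plc=1000)

# Stratum with the STI: protection appears attenuated.
ve_sti = vaccine_efficacy(inf_vax=20, n_vax=500, inf_plc=25, n_plc=500)

# Pooled (crude) estimate averages over, and so hides, the modification.
ve_pooled = vaccine_efficacy(inf_vax=30, n_vax=1500, inf_plc=50, n_plc=1500)

print(f"VE without STI: {ve_no_sti:.0%}")  # -> 60%
print(f"VE with STI:    {ve_sti:.0%}")     # -> 20%
print(f"Pooled VE:      {ve_pooled:.0%}")  # -> 40%
```

In this invented scenario the pooled estimate of 40% describes neither stratum well, which is precisely why collecting STI biomarker data during the trial, and stratifying or adjusting on it in analysis, can be of direct benefit to the primary scientific question.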
What does STI evaluation mean practically in vaccine trials? In addition to the planning and analytical issues relating to STI testing in clinical protocols, trial sites will face operational issues associated with counseling, testing and treatment for STIs. Certainly, additional resources would be needed. Another key question is whether treatment would be offered to symptomatic participants only or to all infected participants. For example, providing treatment for symptomatic HSV-2 infection may reduce viral load preferentially in placebo recipients, potentially confounding the true vaccine effect. This effect may be even greater if asymptomatic participants are also provided with therapy. It may be possible to control for this effect, as would be done for the effect of antiretroviral therapy, if data on HSV infection and treatment are collected. Nevertheless, issues around counseling, testing and treatment for any STI biomarker are complex and need to be considered carefully.
The recent discontinuation of vaccinations in two large HIV vaccine trials in the Americas and South Africa, and the complexity of the data from these trials, underscore the potential utility of collecting more extensive biological and behavioral information. An HIV vaccine based on an Adenovirus serotype 5 (Ad5) vector with a clade B insert failed to protect against HIV acquisition and failed to reduce HIV set point viral load among individuals who became infected with HIV after vaccination (NIAID, 2007). In two sub-groups - men with serologic evidence of prior exposure to Ad5 and uncircumcised men - those who received vaccine appeared more likely to acquire HIV infection. Whether this was due to a direct or indirect vaccine effect, chance, or behavioral differences between the vaccinees and placebo recipients is currently being investigated. A limited number of biomarkers (such as HSV-2 serology) are available and will be included in the multivariate analysis. Another feature of this study was the low HIV incidence observed among women, in the placebo arm as well as the vaccine arm. While these women reported a large number of partners and many pregnancies were observed during the trial, it appears that their sexual networks were at low risk for HIV. In future trials in women, additional biomarkers could perhaps be used to more completely assess women's risk of HIV and other STIs.
In conclusion, this review has presented the rationale for the use of biomarkers before and after HIV vaccine trial cohort enrollment. It is clear that the final determination of the use of biomarkers in HIV vaccine trials needs to be based on a careful balancing of trial location, scientific rationale, costs, burden to clinical trials units, and ethical obligations and/or benefits to participants. We have summarized issues relating to the use of different biomarkers in Table 1. In this table the biomarkers discussed in this review are displayed, along with a description of 1) what they could be used for in a trial, 2) the advantages of the biomarker, 3) the disadvantages of the biomarker, and 4) some examples of products commonly used to test for the biomarker. In the history of vaccine development it has often been the case that a given vaccine shows a dramatic effect even though the underlying mechanisms are not well understood (Lambert, 2005). This should tell us that, in the end, if we do discover a partially effective HIV vaccine, we may rely heavily on the biological and behavioral information collected during the trial to try to determine why it worked.
The views presented in this article are those of the authors and do not necessarily reflect those of the U.S. Department of Health and Human Services, the National Institutes of Health, or any of the authors' employers, such as the HIV Vaccine Trials Network.