The HIV epidemic has carved contrasting trajectories around the world, with sub-Saharan Africa (SSA) the most affected region. We hypothesized that mean HIV-1 plasma RNA viral loads (VL) are higher in SSA than in other areas, and that these elevated levels may contribute to the scale of the epidemics in this region.
Design and Methods
To evaluate this hypothesis, we constructed a database of means of 71,668 VL measurements from 44 cohorts in seven regions of the world. We used linear regression statistical models to estimate differences in VL between regions. We also constructed and analyzed a mathematical model to describe the impact of the regional VL differences on HIV epidemic trajectory.
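As a sketch of the regression step, assuming small, invented cohort-level data (the actual database of 71,668 measurements is not reproduced here), the regional difference in mean log10 VL can be estimated by ordinary least squares with a region indicator:

```python
import numpy as np

# Hypothetical cohort-level means of log10 VL; is_ssa = 1 for sub-Saharan
# Africa, 0 for North America. Values are illustrative only.
mean_log10_vl = np.array([4.6, 4.7, 4.5, 4.8, 4.1, 4.0, 4.2, 4.1])
is_ssa = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])

# Design matrix with an intercept; the coefficient on is_ssa estimates the
# regional difference in mean log10 VL.
X = np.column_stack([np.ones_like(is_ssa), is_ssa])
(intercept, ssa_effect), *_ = np.linalg.lstsq(X, mean_log10_vl, rcond=None)
print(f"SSA minus North America: {ssa_effect:.2f} log10 copies/mL")
```

With a single binary covariate, the fitted coefficient is simply the difference in group means, which is why a dummy-variable regression reproduces the between-region comparison.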
We found substantial regional VL heterogeneity. The mean VL in SSA was 0.58 log10 copies/mL higher than in North America (95% CI: 0.45 to 0.71); this represents about a 4-fold difference. The highest mean VLs were found in Southern and East Africa, while in Asia, Europe, North America, and South America, mean VLs were comparable. Mathematical modeling indicated that, conservatively, 14% of HIV infections in a representative population in Kenya could be attributed to the enhanced infectiousness of subjects with heightened VL.
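The fold-change conversion follows directly from the log10 scale: a between-region difference of d log10 copies/mL corresponds to a 10**d-fold difference in viral load.

```python
# Converting a log10 difference to a fold difference: 0.58 log10 copies/mL
# corresponds to a 10**0.58-fold (about 3.8-fold, i.e. roughly 4-fold) gap.
log10_difference = 0.58
fold_change = 10 ** log10_difference
print(round(fold_change, 2))  # -> 3.8
```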
We conclude that community VL appears to be higher in SSA than in other regions and this may be a central driver of the massive HIV epidemics in this region. The elevated VLs in SSA may reflect, among other factors, the high burden of co-infections or the preponderance of HIV-1 subtype C infection.
HIV; viral load; co-infection; epidemic; sub-Saharan Africa; mathematical model
In the RV144 trial, the estimated efficacy of a vaccine regimen against human immunodeficiency virus type 1 (HIV-1) was 31.2%. We performed a case–control analysis to identify antibody and cellular immune correlates of infection risk.
In pilot studies conducted with RV144 blood samples, 17 antibody or cellular assays met prespecified criteria, of which 6 were chosen for primary analysis to determine the roles of T-cell, IgG antibody, and IgA antibody responses in the modulation of infection risk. Assays were performed on samples from 41 vaccinees who became infected and 205 uninfected vaccinees, obtained 2 weeks after final immunization, to evaluate whether immune-response variables predicted HIV-1 infection through 42 months of follow-up.
Of six primary variables, two correlated significantly with infection risk: the binding of IgG antibodies to variable regions 1 and 2 (V1V2) of HIV-1 envelope proteins (Env) correlated inversely with the rate of HIV-1 infection (estimated odds ratio, 0.57 per 1-SD increase; P = 0.02; q = 0.08), and the binding of plasma IgA antibodies to Env correlated directly with the rate of infection (estimated odds ratio, 1.54 per 1-SD increase; P = 0.03; q = 0.08). Neither low levels of V1V2 antibodies nor high levels of Env-specific IgA antibodies were associated with higher rates of infection than were found in the placebo group. Secondary analyses suggested that Env-specific IgA antibodies may mitigate the effects of potentially protective antibodies.
This immune-correlates study generated the hypotheses that V1V2 antibodies may have contributed to protection against HIV-1 infection, whereas high levels of Env-specific IgA antibodies may have mitigated the effects of protective antibodies. Vaccines that are designed to induce higher levels of V1V2 antibodies and lower levels of Env-specific IgA antibodies than are induced by the RV144 vaccine may have improved efficacy against HIV-1 infection.
Treatment-selection markers are biological molecules or patient characteristics associated with one’s response to treatment. They can be used to predict treatment effects for individual subjects and subsequently help deliver treatment to those most likely to benefit from it. Statistical tools are needed to evaluate a marker’s capacity to help with treatment selection. The commonly adopted criterion for a good treatment-selection marker has been the interaction between marker and treatment. While a strong interaction is important, it is not sufficient for good marker performance. In this paper, we develop novel measures for assessing a continuous treatment-selection marker, based on a potential outcomes framework. Under a set of assumptions, we derive the optimal decision rule based on the marker to classify individuals according to treatment benefit, and characterize the marker’s performance using the corresponding classification accuracy as well as the overall distribution of the classifier. We develop a constrained maximum-likelihood method for estimation and testing in a randomized trial setting. Simulation studies are conducted to demonstrate the performance of our methods. Finally, we illustrate the methods using an HIV vaccine trial where we explore the value of the level of pre-existing immunity to Adenovirus serotype 5 for predicting a vaccine-induced increase in the risk of HIV acquisition.
Classification accuracy; Constrained maximum likelihood; Monotone treatment effect; Potential outcomes; Sensitivity analysis; Treatment-selection marker
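The decision-theoretic core of marker-based treatment selection can be sketched with invented logistic risk models (this is not the paper's constrained maximum-likelihood estimator): the optimal rule based on the marker recommends treatment exactly when the predicted risk under treatment is lower than under control.

```python
import math

def risk_control(marker):
    # Hypothetical logistic risk of the adverse outcome without treatment.
    return 1.0 / (1.0 + math.exp(-(-1.0 + 0.5 * marker)))

def risk_treated(marker):
    # Hypothetical logistic risk under treatment; the marker coefficient
    # flips sign, i.e. a qualitative marker-by-treatment interaction.
    return 1.0 / (1.0 + math.exp(-(-0.5 - 0.5 * marker)))

def recommend_treatment(marker):
    """Optimal rule under these models: treat iff predicted benefit > 0."""
    return risk_treated(marker) < risk_control(marker)
```

For example, `recommend_treatment(2.0)` returns `True` (predicted risk falls from 0.50 to about 0.18), while `recommend_treatment(-2.0)` returns `False`; classification accuracy can then be assessed against actual treatment benefit.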
Extensive observational data suggest that HSV-2 infection may facilitate HIV acquisition, increase HIV viral load, and accelerate HIV progression and onward transmission. To explore these relationships, we examined the impact of pre-existing HSV-2 infection in an international HIV vaccine trial.
We analyzed the associations between prevalent HSV-2 infection and HIV-1 acquisition and progression among 1836 men who have sex with men (MSM). We used Cox proportional hazards regression models to estimate the association between HSV-2 infection and both HIV acquisition and ART initiation, and linear regression to explore the effect of HSV-2 on pre-ART viral load.
HSV-2 infection increased the risk of HIV-1 acquisition among all volunteers (adjusted hazard ratio, 2.2; 95% CI, 1.4 to 3.5). After adjustment for demographic variables, circumcision, Ad5 titer, and significant risk behaviors, the risk of HIV acquisition among HSV-2-infected placebo recipients was threefold higher than among HSV-2-seronegative recipients (hazard ratio, 3.3; 95% CI, 1.6 to 6.9). Past HSV-2 infection was associated with a 0.2 log10 copies/mL higher adjusted mean set-point viral load (95% CI, 0.3 lower to 0.6 higher). HSV-2 infection was not associated with time to ART initiation.
Among MSM in an HIV-1 vaccine trial, pre-existing HSV-2 infection was a major risk factor for HIV acquisition. Past HSV-2 infection did not significantly increase HIV viral load or early disease progression. HSV-2-seropositive persons will likely prove more difficult than HSV-2-seronegative persons to protect against HIV infection using vaccines or other prevention strategies.
Herpes Simplex Virus Type 2; HIV incidence
The sieve analysis for the Step trial found evidence that breakthrough HIV-1 sequences for MRKAd5/HIV-1 Gag/Pol/Nef vaccine recipients were more divergent from the vaccine insert than placebo sequences in regions with predicted epitopes. We linked the viral sequence data with immune response and acute viral load data to explore mechanisms for and consequences of the observed sieve effect.
Ninety-one male participants (37 placebo and 54 vaccine recipients) were included; viral sequences were obtained at the time of HIV-1 diagnosis. T-cell responses were measured 4 weeks post-second vaccination and at the first or second week post-diagnosis. Acute viral load was obtained at RNA-positive and antibody-negative visits.
Vaccine recipients had a greater magnitude of post-infection CD8+ T cell response than placebo recipients (median 1.68% vs 1.18%; p = 0.04) and greater breadth of post-infection response (median 4.5 vs 2; p = 0.06). Viral sequences for vaccine recipients were marginally more divergent from the insert than placebo sequences in regions of Nef targeted by pre-infection immune responses (p = 0.04; Pol p = 0.13; Gag p = 0.89). Magnitude and breadth of pre-infection responses did not correlate with distance of the viral sequence to the insert (p > 0.50). Acute log viral load trended lower in vaccine versus placebo recipients (estimated mean 4.7 vs 5.1) but the difference was not significant (p = 0.27). Nor was acute viral load associated with distance of the viral sequence to the insert (p > 0.30).
Despite evidence of anamnestic responses, the sieve effect was not well explained by available measures of T-cell immunogenicity. Sequence divergence from the vaccine was not significantly associated with acute viral load. While point estimates suggested weak vaccine suppression of viral load, the result was not significant and more viral load data would be needed to detect suppression.
Markers for treatment selection are being developed in many areas of medicine. Technological advances are rapidly producing an abundance of candidates for study. Clinicians hope to use these markers to identify which individuals will benefit from a given treatment, with the goal of maximizing good outcomes and minimizing side effects, treatment burden, and medical costs.
It is essential that we have appropriate methods for evaluating treatment selection markers, in order to make informed decisions regarding marker advancement and, ultimately, clinical application. However, existing statistical methods for evaluating treatment selection markers are largely inadequate. This paper proposes several novel statistical measures of marker performance aimed at addressing key questions in marker evaluation: 1) Does the marker help patients choose amongst treatment options?; 2) How should treatment decisions be made based on a continuous marker measurement?; 3) What is the impact on the population of using the marker to select treatment?; and 4) What proportion of patients will have different treatment recommendations following marker measurement? The proposed approach is contrasted with existing methods for marker evaluation, including assessing a marker’s prognostic value, evaluating treatment effects in a subset of the population who are marker-positive, and testing for a statistical interaction between marker value and treatment. The approach is illustrated in the context of choosing adjuvant chemotherapy treatment for women with estrogen-receptor positive and node-positive breast cancer. The results have important implications for the design of marker evaluation studies, and can serve as the basis for further development of standards for assessing treatment selection markers.
When estimating the association between an exposure and outcome, a simple approach to quantifying the amount of confounding by a factor, Z, is to compare estimates of the exposure–outcome association with and without adjustment for Z. This approach is widely believed to be problematic due to the nonlinearity of some exposure-effect measures. When the expected value of the outcome is modeled as a nonlinear function of the exposure, the adjusted and unadjusted exposure effects can differ even in the absence of confounding (Greenland, Robins, and Pearl, 1999); we call this the nonlinearity effect. In this paper, we propose a corrected measure of confounding that does not include the nonlinearity effect. The performances of the simple and corrected estimates of confounding are assessed in simulations and illustrated using a study of risk factors for low-birth-weight infants. We conclude that the simple estimate of confounding is adequate or even preferred in settings where the nonlinearity effect is very small. In settings with a sizable nonlinearity effect, the corrected estimate of confounding has improved performance.
Collapsibility; Confounding; Odds ratio
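The nonlinearity effect described above can be shown in miniature with invented numbers: the odds ratio is noncollapsible, so even when Z is independent of the exposure X (no confounding), the marginal odds ratio differs from the common conditional odds ratio.

```python
# Invented outcome probabilities P(Y=1 | X, Z), chosen so that the
# conditional odds ratio for X is exactly 3 within each stratum of Z.
def odds(p):
    return p / (1.0 - p)

p = {(0, 0): 0.20, (1, 0): 0.75 / 1.75,
     (0, 1): 0.60, (1, 1): 4.50 / 5.50}

or_z0 = odds(p[1, 0]) / odds(p[0, 0])
or_z1 = odds(p[1, 1]) / odds(p[0, 1])

# Marginalize over Z with P(Z=1) = 0.5 in both exposure groups, so Z is
# independent of X and cannot be a confounder.
marg = {x: 0.5 * p[x, 0] + 0.5 * p[x, 1] for x in (0, 1)}
or_marginal = odds(marg[1]) / odds(marg[0])

print(or_z0, or_z1, or_marginal)  # 3.0, 3.0, ~2.48
```

The unadjusted (marginal) odds ratio of about 2.48 is smaller than the adjusted value of 3 despite the absence of confounding; this gap is exactly the nonlinearity effect a corrected measure of confounding must exclude.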
In many clinical settings, statistical models are being developed for predicting risk of disease or other adverse event. These models are intended to help patients and physicians make informed decisions. A new approach to assessing the value of adding a new marker to a risk prediction model, called the risk stratification approach, was recently proposed by Cook and colleagues (1,2). This involves cross-tabulating risk predictions on the basis of models with and without the new marker, and has been widely adopted in the literature. We argue that important information with regard to three important model validation criteria can be extracted from risk stratification tables: 1) model fit or calibration; 2) capacity for risk stratification; and 3) accuracy of classifications based on risk. However, we describe how the information contained in the tables must be interpreted carefully, and caution against common misuses of the method. The concepts are illustrated using data from a recently published study of a breast cancer risk prediction model by Tice et al. (3).
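The cross-tabulation at the heart of the risk stratification approach can be sketched as follows, with invented predicted risks and arbitrary risk strata:

```python
from collections import Counter

# Hypothetical predicted risks for eight subjects from models without and
# with a new marker; the strata cut points are illustrative only.
risk_without = [0.03, 0.08, 0.12, 0.25, 0.06, 0.18, 0.02, 0.30]
risk_with    = [0.02, 0.22, 0.04, 0.31, 0.11, 0.14, 0.01, 0.28]

def stratum(r):
    if r < 0.05:
        return "<5%"
    if r < 0.20:
        return "5-20%"
    return ">=20%"

# The risk stratification table is this cross-tabulation; subjects in
# off-diagonal cells receive a different stratum once the marker is added.
table = Counter((stratum(a), stratum(b)) for a, b in zip(risk_without, risk_with))
n_reclassified = sum(n for (old, new), n in table.items() if old != new)
print(n_reclassified)  # -> 2
```

The table alone does not establish that reclassification is correct; as argued above, its cells must still be examined for calibration, risk stratification capacity, and classification accuracy.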
The rapid and continuing progress in gene discovery for complex diseases is fuelling interest in the potential application of genetic risk models for clinical and public health practice. The number of studies assessing the predictive ability is steadily increasing, but they vary widely in completeness of reporting and apparent quality. Transparent reporting of the strengths and weaknesses of these studies is important to facilitate the accumulation of evidence on genetic risk prediction. A multidisciplinary workshop sponsored by the Human Genome Epidemiology Network developed a checklist of 25 items recommended for strengthening the reporting of Genetic RIsk Prediction Studies (GRIPS), building on the principles established by prior reporting guidelines. These recommendations aim to enhance the transparency, quality and completeness of study reporting, and thereby to improve the synthesis and application of information from multiple studies that might differ in design, conduct or analysis.
Genetic; Risk prediction; Methodology; Guidelines; Reporting
The restricted neutralization breadth of vaccine-elicited antibodies is a major limitation of current human immunodeficiency virus-1 (HIV-1) candidate vaccines. In order to permit the efficient identification of vaccines with enhanced capacity for eliciting cross-reactive neutralizing antibodies (NAbs) and to assess the overall breadth and potency of vaccine-elicited NAb reactivity, we assembled a panel of 109 molecularly cloned HIV-1 Env pseudoviruses representing a broad range of genetic and geographic diversity. Viral isolates from all major circulating genetic subtypes were included, as were viruses derived shortly after transmission and during the early and chronic stages of infection. We assembled a panel of genetically diverse HIV-1-positive (HIV-1+) plasma pools to assess the neutralization sensitivities of the entire virus panel. When the viruses were rank ordered according to the average sensitivity to neutralization by the HIV-1+ plasmas, a continuum of average sensitivity was observed. Clustering analysis of the patterns of sensitivity defined four subgroups of viruses: those having very high (tier 1A), above-average (tier 1B), moderate (tier 2), or low (tier 3) sensitivity to antibody-mediated neutralization. We also investigated potential associations between characteristics of the viral isolates (clade, stage of infection, and source of virus) and sensitivity to NAb. In particular, higher levels of NAb activity were observed when the virus and plasma pool were matched in clade. These data provide the first systematic assessment of the overall neutralization sensitivities of a genetically and geographically diverse panel of circulating HIV-1 strains. These reference viruses can facilitate the systematic characterization of NAb responses elicited by candidate vaccine immunogens.
Recent scientific and technological innovations have produced an abundance of potential markers that are being investigated for their use in disease screening and diagnosis. In evaluating these markers, it is often necessary to account for covariates associated with the marker of interest. Covariates may include subject characteristics, expertise of the test operator, test procedures or aspects of specimen handling. In this paper, we propose the covariate-adjusted receiver operating characteristic curve, a measure of covariate-adjusted classification accuracy. Nonparametric and semiparametric estimators are proposed, asymptotic distribution theory is provided and finite sample performance is investigated. For illustration we characterize the age-adjusted discriminatory accuracy of prostate-specific antigen as a biomarker for prostate cancer.
Classification accuracy; Covariate effect; Receiver operating characteristic curve; Sensitivity; Specificity
The receiver operating characteristic (ROC) curve displays the capacity of a marker or diagnostic test to discriminate between two groups of subjects, cases versus controls. We present a comprehensive suite of Stata commands for performing ROC analysis. Nonparametric, semiparametric and parametric estimators are calculated. Comparisons between curves are based on the area or partial area under the ROC curve. Alternatively, pointwise comparisons between ROC curves or inverse ROC curves can be made. Options to adjust these analyses for covariates, and to perform ROC regression, are described in a companion article. We use a unified framework by representing the ROC curve as the distribution of the marker in cases after standardizing it to the control reference distribution.
Classification accuracy is the ability of a marker or diagnostic test to discriminate between two groups of individuals, cases and controls, and is commonly summarized using the receiver operating characteristic (ROC) curve. In studies of classification accuracy, there are often covariates that should be incorporated into the ROC analysis. We describe three different ways of using covariate information. For factors that affect marker observations among controls, we present a method for covariate adjustment. For factors that affect discrimination (i.e. the ROC curve), we describe methods for modelling the ROC curve as a function of covariates. Finally, for factors that contribute to discrimination, we propose combining the marker and covariate information, and ask how much discriminatory accuracy improves with the addition of the marker to the covariates (incremental value). These methods follow naturally when representing the ROC curve as a summary of the distribution of case marker observations, standardized with respect to the control distribution.
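The standardization view underlying these methods can be made concrete on invented data: each case observation is replaced by its "placement value" in the control distribution, and the empirical ROC curve and AUC follow directly.

```python
# Invented marker observations for controls and cases.
controls = [0.9, 1.1, 1.3, 1.6, 2.0, 2.2, 2.8, 3.0]
cases = [1.8, 2.4, 2.6, 3.1, 3.4]

def placement_value(y):
    # Fraction of controls at or above y: the false-positive rate of the
    # rule "positive if marker >= y".
    return sum(c >= y for c in controls) / len(controls)

pv = [placement_value(y) for y in cases]

def roc(t):
    # True-positive rate achieved at false-positive rate t: the CDF of the
    # case placement values, i.e. the standardized case distribution.
    return sum(v <= t for v in pv) / len(pv)

# With no case-control ties, the AUC is one minus the mean placement value.
auc = 1.0 - sum(pv) / len(pv)
print(pv, roc(0.25), auc)
```

Covariate adjustment then amounts to computing placement values within covariate-specific control reference distributions rather than the pooled one.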
In the Step Study, the MRKAd5 HIV-1 gag/pol/nef vaccine did not lower post-infection plasma viremia, and HIV-1 incidence was higher in vaccine-treated than placebo-treated males with pre-existing adenovirus serotype 5 (Ad5) immunity. We evaluated vaccine-induced immunity and its potential contributions to infection risk.
To assess immunogenicity, HIV-specific T-cells were characterized ex vivo using validated IFN-γ ELISpot and intracellular cytokine staining (ICS) assays, employing a case-cohort design. To determine effects of vaccine and pre-existing Ad5 immunity on infection risk, flow cytometric studies measured Ad5-specific T-cells and circulating activated (Ki67+/Bcl-2lo) CD4+ T-cells expressing CCR5.
IFN-γ-secreting HIV-specific T-cells (range, 163–686/10^6 PBMC) were detected ex vivo by ELISpot in 77% (258/354) of vaccinees; the majority recognized 2–3 HIV proteins. HIV-specific CD4+ T-cells were identified by ICS in 41%; ~85% expressed IL-2, and two-thirds of these co-expressed IFN-γ and/or TNF-α. HIV-specific CD8+ T-cells (range, 0.4–1.0%) were observed in 73%, expressing predominantly either IFN-γ alone or with TNF-α. No major differences were found in vaccine-induced HIV-specific immunity, including response rate, magnitude, and cytokine profile, comparing vaccinated male cases (pre-infection) with non-cases. Interestingly, Ad5-specific T-cells were lower in cases than non-cases in several subgroup analyses. The percentage of circulating Ki67+Bcl-2lo/CCR5+ CD4+ T-cells did not differ between cases and non-cases.
Consistent with previous trials, the MRKAd5/HIV-1 gag/pol/nef vaccine was highly immunogenic for inducing HIV-specific CD8+ T-cells. Comparative analyses did not reveal differences in HIV-specific immunologic responses between cases and non-cases that explain the lack of vaccine efficacy and potential infection enhancement. If T-cell immunity is critical in vaccine-induced HIV protection, our findings suggest that future candidate vaccines must elicit responses that either exceed in magnitude or differ in breadth and/or function from those observed in this trial.
National Institute of Allergy and Infectious Diseases, U.S. National Institute of Health; Merck Research Laboratories
Research methods for biomarker evaluation lag behind those for evaluating therapeutic treatments. Although a phased approach to development of biomarkers exists and guidelines are available for reporting study results, a coherent and comprehensive set of guidelines for study design has not been delineated. We describe a nested case–control study design that involves prospective collection of specimens before outcome ascertainment from a study cohort that is relevant to the clinical application. The biomarker is assayed in a blinded fashion on specimens from randomly selected case patients and control subjects in the study cohort. We separately describe aspects of the design that relate to the clinical context, biomarker performance criteria, the biomarker test, and study size. The design can be applied to studies of biomarkers intended for use in disease diagnosis, screening, or prognosis. Common biases that pervade the biomarker research literature would be eliminated if these rigorous standards were followed.