Current antiretroviral (ARV) regimens are extremely effective therapies for increasing life expectancy in HIV-infected individuals, and have been shown to be cost-effective as treatment in both resource-rich and resource-limited settings. However, their effectiveness in preventing infection, except in the case of mother-to-child transmission, is unknown. When AZT was introduced (over 20 years ago), infectious disease experts hypothesized that ARVs, by reducing viral load, would make individuals less infectious and consequently decrease transmission. Modelers only began to investigate this secondary benefit of ARVs after the introduction of more powerful treatment regimens (in the late 1990s) that were highly effective in suppressing viral load. Blower and colleagues predicted, in 2000, that a high usage of ARVs could be expected to decrease transmission in San Francisco by over 40% within a decade. They assumed that ARVs, by reducing viral load by several logs, reduce the infectiousness of treated individuals by 50–99%. Velasco-Hernandez et al. took this modeling further and showed that an HIV epidemic could (theoretically) be eliminated if treatment rates were very high; they calculated that elimination would take 50 to 100 years. Recently, it has been suggested that instead of considering prevention as a secondary benefit of ARVs, it should be considered the primary purpose. On this basis, Granich et al. have proposed a “Test and Treat” strategy, whereby all individuals would be tested annually for HIV infection and infected individuals would receive anti-HIV treatment regardless of clinical need. Using modeling, Granich et al. have shown this strategy could (theoretically) eliminate HIV in a decade if the following conditions are met: almost all infected individuals accept treatment, ARVs reduce infectiousness by 99%, drop-out rates remain below 5%, drug resistance does not evolve, and risk behavior is substantially reduced. In an interesting paper in this issue, Dodd et al.
further examine the use of ARVs as a prevention tool. They expand on the Granich et al. model by including two behavioral risk groups: a large group of individuals with few sex partners and a smaller “core” group of individuals with high numbers of sex partners. They use their model to examine the impact of the frequency of HIV testing on the effectiveness of a “Test and Treat” strategy. They determine that the optimal testing frequency (defined in terms of reduction in incidence per unit cost) depends on the degree of heterogeneity in risk behavior, and is likely to lie between 1 and 5 years. Most importantly, their modeling shows that a “Test and Treat” strategy could reduce transmission, but would be unlikely to eliminate HIV in hyper-endemic settings.
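The essential structure of such a two-risk-group model can be sketched in a few lines. The sketch below is illustrative only, not the authors' actual model: the group sizes, partner-change rates, and per-partnership transmission probability are assumptions chosen for demonstration, and the model tracks only susceptible and infected individuals with proportionate mixing.

```python
# Illustrative two-risk-group SI transmission sketch (assumed parameters,
# not those of Dodd et al.): a large low-activity group and a small
# high-activity "core" group, mixing in proportion to partner-change rates.
import numpy as np

def simulate(years=20.0, treat_eff=0.0, dt=0.01):
    beta = 0.05                   # per-partnership transmission probability (assumed)
    c = np.array([1.0, 20.0])     # partners/year: majority group vs. "core" group
    N = np.array([9800.0, 200.0]) # group sizes (assumed)
    I = np.array([1.0, 1.0])      # seed one infection in each group
    S = N - I
    cum_incidence = 0.0
    for _ in range(int(years / dt)):
        # proportionate mixing: chance a randomly chosen partner is infected
        pi = (c * I).sum() / (c * N).sum()
        # force of infection per group; treat_eff is the fractional reduction
        # in infectivity attributed to ARV coverage
        lam = beta * c * pi * (1.0 - treat_eff)
        new = lam * S * dt
        S -= new
        I += new
        cum_incidence += new.sum()
    return cum_incidence

base = simulate()                    # no treatment
treated = simulate(treat_eff=0.9)    # ARVs assumed to cut infectivity ~90%
```

Even in this toy version, most early transmission is driven by the small core group, which is why the optimal allocation of testing and treatment depends so strongly on the degree of behavioral heterogeneity.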
Large-scale clinical trials are being initiated to quantify the impact of ARVs on reducing transmission; however, results will not be available for several years. Until then, models such as the one constructed by Dodd et al. will be used to predict the effectiveness, as well as to estimate the cost-effectiveness, of the “Test and Treat” strategy. The model presented by Dodd and colleagues is built on a number of assumptions regarding the effects of ARVs on increasing survival and reducing infectivity. These assumptions can be “tested” against existing data from clinical trials and observational cohorts to assess their validity. Dodd and colleagues assume ARVs increase survival by a maximum of 25 years. They further assume an individual only receives this maximal survival benefit if they initiate treatment immediately after infection; if an individual waits to initiate treatment, they are assumed to lose ~2.5 years from the maximum survival time for each year they wait. Consequently, in the modeling of the “Test and Treat” strategy, individuals who wait to initiate treatment until their CD4 cell count falls to 350 cells/μL survive only ~12 years. However, clinical data suggest that individuals who initiate treatment at a CD4 cell count of 350 cells/μL survive for 15 to 30 years, and recent studies indicate that the earlier an individual begins treatment, the greater their chances of survival [6–8]. These data imply that infected individuals are likely to survive significantly longer than Dodd et al. have assumed. The assumptions Dodd et al. make regarding infectivity can also be “tested” against data. They assume ARVs reduce infectivity by ~90% (in all individuals) and that viral suppression rates can remain constant for several decades. Clinical data show ARVs effectively suppress virus in the majority of individuals (70–95%), but in a minority only partial suppression is achieved, and hence these individuals remain infectious.
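The survival assumption described above is a simple linear rule, and the ~12-year figure can be reproduced directly. The sketch below is our reconstruction of that arithmetic, not the authors' code; the ~5-year delay from infection to a CD4 count of 350 cells/μL is an assumption used here only to recover the quoted number.

```python
# Reconstruction of the linear survival assumption described in the text:
# maximum benefit of 25 years if treatment starts at infection, with ~2.5
# years of benefit lost for each year initiation is delayed.
MAX_SURVIVAL_YEARS = 25.0    # assumed maximum post-infection survival on ARVs
LOSS_PER_YEAR_WAITED = 2.5   # assumed survival lost per year of delay

def assumed_survival(years_until_initiation: float) -> float:
    """Post-infection survival (years) under the model's linear assumption."""
    return max(MAX_SURVIVAL_YEARS
               - LOSS_PER_YEAR_WAITED * years_until_initiation, 0.0)

# A delay of ~5 years (roughly the time for CD4 counts to fall to
# 350 cells/μL in an untreated individual -- an assumption here)
# reproduces the ~12-year survival figure quoted above:
print(assumed_survival(5.2))  # ≈ 12 years
```

The contrast with the clinical data is then easy to see: cohort studies suggest 15 to 30 years of survival for the same initiation threshold, well above what this linear rule yields.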
The degree of viral suppression is highly dependent on adherence. In resource-limited settings, adherence rates may be only moderate due to interruptions in the drug supply (stock-outs are already occurring). Clinical data also show viral suppression rates are highest in the first year of treatment and then decrease in subsequent years. Taken together, the current clinical data indicate that the reductions in incidence predicted by Dodd et al. may be overly optimistic, because infected individuals will live longer, and be more infectious, than they have assumed.
The paper by Dodd et al. illustrates how models can be used as thought experiments. Modeling can be useful for designing health policy, but it is essential that models are based on realistic assumptions and parameterized with the most recent biomedical data. It is crucial to include resistance in any model that is used to evaluate the “Test and Treat” strategy, as previous modeling has shown significant levels of resistance emerge when treatment rates are high. Dodd et al. assume resistance will not evolve, but unfortunately it appears certain that resistance will develop when millions receive treatment. Clinical data show that treatment regimens based on two NRTIs and an NNRTI (the current first-line regimen in many resource-limited countries) could lead to 5–15% of patients developing resistance after one year. Resistance rates could double after three years if patients are not tested for resistance and switched to new regimens. Deciding whether to use ARVs primarily for therapy or primarily for reducing transmission has significant implications for determining who gets treated. Assuming that the primary purpose of ARVs is prevention, Dodd et al. argue that behavioral “core” groups should be prioritized to receive treatment. However, if the primary purpose of ARVs is therapeutic, then the sickest should receive treatment. Recently, it has been reported that only ~40% of people in need of ARVs are receiving treatment. Many individuals in Sub-Saharan Africa are initiating treatment with a CD4 cell count of less than 100 cells/μL. These numbers strongly indicate that, before a “Test and Treat” strategy is implemented, there is an urgent need to focus on a “Find and Treat” strategy. The goal of such a strategy would be to attain universal access to necessary medications for those most in need and to ensure that treatment is sustainable.