Many ART programmes in resource-limited settings lack access to viral load testing to monitor treatment response, identify treatment failure and inform decisions on when to switch to a second-line regimen; instead, they rely on clinical examinations and, where available, on CD4 cell counts. In this study of ART programmes from sub-Saharan Africa, Asia and Latin America, all sites had access to CD4 cell counts and some also had access to viral load monitoring. We could therefore compare patterns of switching to second-line regimens between sites with and without viral load monitoring. Patients tended to switch earlier, and at higher CD4 cell counts, in programmes with access to viral load monitoring than in programmes without it. Low CD4 cell counts at the start of ART predicted switching in programmes both with and without viral load monitoring, and fewer patients switched in more recent calendar periods.
Our study included over 20,000 patients who started ART and almost 600 patients who switched to second-line ART. The decision to measure, or not to measure, viral load in an individual patient will often be related to prognosis and the probability of switching to a second-line regimen. By comparing sites with and without a policy of viral load monitoring, rather than comparing patients with and without available viral load measurements, we avoided such confounding by the indication to test. The definition we used for second-line ART is in accordance with WHO recommendations [11] and was used in a previous analysis of the Médecins Sans Frontières (MSF) programmes in Africa, Asia, Latin America and Eastern Europe [8]. Our study includes patients who were treated in 17 programmes from 14 countries, and results should therefore be applicable to many other patients on ART in resource-limited countries. Of note, results were robust when analyses were restricted to programmes from sub-Saharan Africa. However, we stress that the sites participating in the ART-LINC collaboration are not necessarily representative of all sites providing ART in these countries: they represent a sample of programmes with electronic medical record systems [13] and access to CD4 counts and second-line regimens.
A substantial number of patients did not have a CD4 cell count recorded at the time of switching, which could have biased the comparison of CD4 cell counts at switching between sites with and without viral load monitoring. This is unlikely, however: baseline CD4 counts of patients with missing counts at switching were similar in sites with and without viral load monitoring (data not shown). We did not examine clinical failure: not all sites systematically collect data on opportunistic infections, and diagnostic capabilities and criteria vary between sites. Also, we had no information on adherence or drug resistance, and data on reasons for switching were available only for some patients. In a programme run by a faith-based organization in three countries in sub-Saharan Africa, immunological failure was the most common reason for switching, followed by virological failure; clinical failures were rare [14].
We only considered switching of regimens, as recommended by WHO in case of treatment failure, and not substitutions of single drugs or other changes. A recent study comparing Switzerland with the Khayelitsha and Gugulethu township programmes in South Africa showed that changes to first-line regimens of any type occurred twice as often in Switzerland as in South Africa [7]. The difference was, however, explained by a higher rate of changes due to toxicity or patient wishes in Switzerland, while changes due to treatment failure were infrequent in both settings [7]. In the present study, most of the patients with information on the reason for switching changed regimens because of treatment failure, and results were similar when analyses were adjusted for differences in the first-line regimens used in programmes with and without viral load monitoring.
In programmes both with and without access to viral load monitoring, rates of switching were substantially higher than in the MSF programmes [8], which do not have access to viral load monitoring and where the rate was 0.5 per 100 person-years. Rates were, however, lower than in the programme in three African countries that includes routine viral load monitoring in all sites, where the rate of switching was 4.9 per 100 person-years [14]. A multi-country survey by WHO found highly variable rates of switching to second-line regimens [15]. It seems unlikely that this variability is explained by differences in primary resistance to NRTIs or NNRTIs: at present, viral resistance is rare in most resource-limited settings, although important levels of resistance have been reported from Nigeria and North India [16]. The availability of viral load monitoring and, more generally, differences in clinical practice are more likely explanations: practice varies across sites participating in the collaboration, even within the same country. For example, in the township programmes in Cape Town, therapy is switched after two consecutive viral loads above 5,000 copies/ml in Khayelitsha, whereas in Gugulethu the threshold is 1,000 copies/ml.
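The Cape Town example can be expressed as a simple confirmation rule: a switch is triggered only when two consecutive viral loads exceed the programme-specific threshold. The sketch below is a deliberately simplified illustration of such a criterion, not the programmes' actual clinical algorithm; the function name and the viral load histories are hypothetical.

```python
def meets_virological_failure_criterion(viral_loads, threshold_copies_per_ml):
    """Simplified rule: two consecutive viral loads (copies/ml) above the
    programme's threshold indicate virological failure (illustrative only)."""
    for earlier, later in zip(viral_loads, viral_loads[1:]):
        if earlier > threshold_copies_per_ml and later > threshold_copies_per_ml:
            return True
    return False

# Hypothetical viral load history at successive monitoring visits:
history = [400, 3000, 2500]
print(meets_virological_failure_criterion(history, 5000))  # False: Khayelitsha-style threshold
print(meets_virological_failure_criterion(history, 1000))  # True: Gugulethu-style threshold
```

Under this kind of rule, the same patient history can qualify for switching in one programme but not in another, which illustrates how threshold choice alone can contribute to the variability in switching rates discussed above.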
A low CD4 cell count when starting ART was the most important predictor of switching to a second-line regimen, in line with a previous study [14]. Starting ART earlier might thus not only reduce the high mortality during the initial months of ART [7] but also help preserve first-line regimens. The rate of switching was lower in more recent calendar years than in the early years of the ART scale-up, again confirming previous findings [14]. Of note, this was not explained by the increase in CD4 counts at the start of ART observed in more recent years [10]: the effect was evident in analyses adjusted for baseline CD4 count, and might reflect a change in practice associated with the substantial increase in the number of patients starting ART since the early 2000s.
There is debate about the feasibility and cost-effectiveness of viral load monitoring in the context of the scale-up of ART in resource-limited settings [18]. WHO stipulates that viral load monitoring is desirable, but not essential, for a public health approach to ART [23]. According to a recent modelling study, routine viral load monitoring has only a limited benefit on survival and its cost-effectiveness is poor [20]. An analysis of mortality in the first year after starting ART showed similar survival in sites with and without viral load monitoring [17], and preliminary results of a randomized trial led to similar conclusions [24]. These empirical results are reassuring for sites without access to routine viral load monitoring, but they are based on short-term follow-up, and the effect of viral load monitoring on long-term outcomes is at present unclear. The higher CD4 cell counts at the time of switching in sites with, compared with sites without, access to viral load monitoring indicate that treatment failure is detected earlier in programmes using viral load monitoring, and this might translate into better clinical outcomes in the long term.
In conclusion, we found that patients in programmes with access to viral load monitoring tended to switch earlier and at higher CD4 cell counts than patients in sites without viral load monitoring. Future studies should examine the consequences of earlier or later switching for clinical outcomes. For example, a randomised trial could compare switching based on virological failure in virologically monitored patients with switching based on immunological and clinical criteria. Alternatively, this comparison could be mimicked in longitudinal observational studies using causal modelling [25]. Finally, further research is required to determine the optimal frequency of CD4 cell counts and viral load measurements to maximize cost-effectiveness and optimize patient outcomes in different settings in lower-income countries.