This is the largest cohort study to date demonstrating that WHO immunologic failure criteria lack sensitivity for predicting virologic failure. Almost 50% of individuals identified as experiencing failure by CD4 cell count criteria were virologically suppressed (VL <400 copies/mL), whereas the criteria failed to identify nearly half of the individuals who were actually experiencing virologic failure.
The conclusion that CD4 cell count criteria perform poorly remains true even when the slightly less stringent WHO definition of virologic failure is used, but most patients in resource-limited settings (RLS) are monitored using CD4 cell count or clinical criteria. Previous studies have shown similarly poor performance of immunologic and clinical criteria [12]. However, this analysis, which draws on programmatic treatment data, a large sample size, a longer duration of follow-up, and rates of virologic failure commensurate with those of other real-life treatment cohorts in RLS [18], provides the strongest evidence to date of the poor performance of immunologic criteria in identifying treatment failure.
As a retrospective analysis of a large treatment cohort, this study is limited by the data available, the frequency of programmatic monitoring, and the program protocol for ART switch. Because virologic failure typically precedes immunologic failure in the natural progression of disease, this may have resulted in lower than expected sensitivity and positive predictive value (PPV). However, given the risk of accumulating drug-resistance mutations in the context of nonsuppressive regimens, timely identification of failure remains critical for appropriate ART management. In addition, to compare monitoring strategies, a minimum number of laboratory values was required, as delineated in the inclusion criteria. Exclusion of patients with the poorest protocol compliance may have resulted in some underestimation of overall failure rates.
Missed Opportunities for Failure Detection
ART regimens that do not achieve virologic suppression are associated with an increased risk of disease progression and death [21], and identification of antiretroviral failure with prompt switching to second-line therapy may reduce the development of resistance [22]. In this study, immunologic criteria did not detect 42% of failures identified by VL testing. Although rates of adverse events vary, one study found that 20% of patients experienced serious adverse outcomes by 30 months while receiving a regimen that did not achieve virologic suppression [20]. Thus, a substantial number of avoidable adverse outcomes would be expected if CD4 cell count monitoring alone were relied on to assess ART success.
The PPV increased modestly to 58.1% when confirmation of CD4 cell count criteria in 2 consecutive measurements was required. As expected, requiring confirmatory CD4 cell count assessment resulted in a significant reduction in sensitivity: less than one-quarter of individuals with virologic failure would have been identified if confirmation of CD4 cell count failure criteria had been required. Of note, although confirmation of CD4 cell count criteria is not necessary to meet WHO-defined immunologic failure criteria, caregivers tend to use this approach in practice (ie, checking a second CD4 cell count before making an ART switch decision). Clinicians should therefore be aware that, although specificity is considerably improved, a significant majority of treatment failures would be missed by this method.
In addition to potentially resulting in unnecessary adverse events, nonsuppressive regimens also contribute to the development of drug resistance. The number of resistance mutations has been found to correlate with duration of ART exposure [24]. In a study that evaluated patients who received virologically nonsuppressive combination ART over a median of 6 months, patients developed a mean of 1.96 drug resistance mutations (International AIDS Society), with a loss of 1.25 active drugs [27]. There is also evidence that prolonged exposure to failing NNRTI regimens may compromise future treatment options, in particular the use of etravirine [28]. As this study shows, relying on CD4 cell count criteria to diagnose treatment failure will, at best, result in a diagnosis significantly later than by VL monitoring (P < .0001). Perhaps more worrisome, this method misses nearly half of failure cases, allowing for the selection of increasingly drug-resistant viruses. This raises the very real concern that subsequent second-line regimens may not perform as well because of compromise of nucleoside backbones as increasing class-wide resistance develops [22].
Patients Identified as Experiencing Failure by Both Immunologic and Virologic Criteria
Among patients identified as experiencing failure by both immunologic and virologic criteria, VL monitoring identified failure significantly earlier than did CD4 cell count criteria (P < .0001). Thus, even among patients correctly identified as experiencing failure by CD4 cell count criteria, the potential exists for accumulated drug resistance mutations, given the increased time to failure detection. If the more frequent VL monitoring schedules recommended in resource-rich countries had been applied to the present study cohort, virologic failure would likely have been identified even earlier.
Patients Misclassified as Experiencing Failure
Despite dramatic price reductions for ART, second-line ART currently remains almost 5 times more costly than first-line regimens. Along with negotiations to reduce drug prices, VL assays have also become more economical, with a mean cost of US $22 per VL test in the Harvard PEPFAR program. Discussions regarding effective monitoring strategies should therefore also take into consideration the potential impact on overall drug costs. In this study, the PPV was low, indicating that less than half of patients identified as experiencing failure by CD4 cell count criteria were actually experiencing failure. Therefore, if CD4 cell count failure criteria were used to identify patients for switch, 60.8% (1897 of 3122) of patients switched to second-line therapy would have been switched unnecessarily. Although further detailed cost analyses are underway, the significance of this should be noted, because 1897 patients unnecessarily switched to second-line therapy would incur in excess of US $1 million in increased treatment costs per year.
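The cost comparison above can be sketched as a back-of-envelope calculation. The total excess cost and the number of unnecessary switches are taken from the text; the per-patient figure is simply their quotient, and the VL testing frequency of 2 tests per year is an illustrative assumption, not a figure reported by the study.

```python
# Back-of-envelope cost sketch using figures reported in the text.
# The VL testing frequency (2 tests/year) is an illustrative
# assumption and does not come from the study itself.

unnecessary_switches = 1897      # patients switched without virologic failure
excess_cost_total = 1_000_000    # US$/year, stated lower bound
vl_test_cost = 22                # US$ per test (Harvard PEPFAR program)
vl_tests_per_year = 2            # assumption for illustration

# Implied incremental cost per unnecessarily switched patient (~US$527/yr)
excess_per_patient = excess_cost_total / unnecessary_switches
# Annual per-patient cost of VL monitoring under the assumed frequency
vl_cost_per_patient = vl_test_cost * vl_tests_per_year

print(f"Implied excess cost per unnecessary switch: >US${excess_per_patient:.0f}/yr")
print(f"Assumed annual VL monitoring cost/patient :  US${vl_cost_per_patient}/yr")
```

Under these assumptions, the annual cost of VL monitoring per patient is an order of magnitude smaller than the excess cost of a single unnecessary regimen switch.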
Immunologic criteria not only misclassified a significant number of patients as failures, but also identified a significantly larger number of treatment failures overall. In this cohort of 9690 patients, immunologic criteria identified 3122 failures (32.2%), and virologic criteria identified only 2097 (21.6%; P < .0001). The incremental increase in cost resulting from a greater number of identified failures should also be considered when assessing the overall value of CD4 cell count versus VL monitoring.
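The diagnostic performance figures discussed above can be recomputed directly from the counts reported in the text (9690 patients, 3122 immunologic failures, 2097 virologic failures, and 1897 immunologically flagged patients who were virologically suppressed). This is a verification sketch; derived values are rounded, and the paper's own figures may differ slightly because of rounding in the source.

```python
# Diagnostic performance of WHO CD4 (immunologic) failure criteria,
# recomputed from the counts reported in the text. True positives are
# derived as flagged patients minus those who were actually suppressed.

cohort = 9690
cd4_flagged = 3122            # met immunologic (CD4) failure criteria
vl_failures = 2097            # met virologic failure criteria
flagged_but_suppressed = 1897 # flagged by CD4 criteria, VL suppressed

true_positives = cd4_flagged - flagged_but_suppressed  # 1225
ppv = true_positives / cd4_flagged          # ~39%: <half truly failing
sensitivity = true_positives / vl_failures  # ~58%: ~42% of failures missed

print(f"PPV                      : {ppv:.1%}")
print(f"Sensitivity              : {sensitivity:.1%}")
print(f"Failures missed          : {1 - sensitivity:.1%}")
print(f"Immunologic failure rate : {cd4_flagged / cohort:.1%}")  # 32.2%
print(f"Virologic failure rate   : {vl_failures / cohort:.1%}")  # 21.6%
```

The derived PPV (~39%) and miss rate (~42%) are consistent with the text's statements that less than half of flagged patients were truly failing and that 42% of virologic failures went undetected.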
In conclusion, this large cohort study shows that immunologic criteria are poor predictors of virologic failure, missing nearly half of individuals who were failing ART and misidentifying as failures nearly half of patients meeting immunologic criteria who were actually virologically suppressed. The impact of accumulated drug resistance and unnecessary drug switches may ultimately eclipse the cost of VL monitoring and potentially erode the gains achieved through widespread ART use. Although the development of point-of-care HIV RNA quantification may alter the feasibility discussion, its suitability for high-volume urban clinics remains to be seen. As HIV treatment programs in RLS progress beyond the emergency phase, commitments to building the infrastructure for optimal patient monitoring may improve patient outcomes and long-term sustainability.