We thank Roberts and Low for their commentary on our paper and would like to take the opportunity to clarify one or two points. Roberts and Low rightly point out that the National Institute for Clinical Excellence (NICE) recommends using incremental cost‐effectiveness ratios (ICERs) when assessing health interventions.1 An incremental cost‐effectiveness analysis compares each intervention with the next best option. We evaluated a large number of potential interventions (“no screening”, screening men and women, or just women, targeting different age ranges, etc.) and, by comparing all of these with one another, we were able to suggest which of the strategies should be adopted.2 This is a much more comprehensive analysis than simply comparing the current National Chlamydia Screening Programme (NCSP) strategy of screening men and women aged under 25 years with “no screening” and working out whether it is worth doing. That is not to say that our analysis does not provide this information – it does. The NCSP strategy can be thought of as a bundle of sub‐strategies (targeting just women aged under 20, adding men aged under 20, adding women and men aged over 20), and the ICER of this strategy compared with “no screening” is the average of the ICERs of the sub‐strategies. Hence our use of the term average cost‐effectiveness ratio (CER) to describe it, which follows standard health economics terminology.3 As we are careful to explain this in the text and to present the results in both ways, we cannot see how they could be misleading.
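The distinction drawn above between incremental and average cost-effectiveness ratios can be made concrete with a small numerical sketch. All figures and strategy labels below are invented for illustration only; they are not the costs or QALY gains from our model.

```python
# Illustrative sketch of incremental vs average cost-effectiveness ratios.
# All costs and QALY gains are hypothetical, not figures from the paper.

# Cumulative (cost in £, QALYs gained) versus "no screening" for a nested
# bundle of sub-strategies, each expanding the previous one.
strategies = {
    "no screening":          (0, 0.0),
    "women <20":             (1_000_000, 100.0),
    "+ men <20":             (1_600_000, 130.0),
    "+ women and men 20-24": (2_800_000, 150.0),
}

def icer(a, b):
    """ICER of strategy b relative to the next best option a."""
    (cost_a, qaly_a), (cost_b, qaly_b) = a, b
    return (cost_b - cost_a) / (qaly_b - qaly_a)

names = list(strategies)
# Incremental analysis: compare each strategy with the next best option.
for prev, curr in zip(names, names[1:]):
    print(f"{curr}: ICER vs '{prev}' = "
          f"£{icer(strategies[prev], strategies[curr]):,.0f}/QALY")

# Average CER of the whole bundle compared with "no screening": a
# QALY-weighted average of the sub-strategy ICERs, so it can sit well
# below a £30,000/QALY threshold even when the final expansion step
# (here, adding those aged 20-24) is far above it.
cost, qalys = strategies["+ women and men 20-24"]
print(f"bundle average CER = £{cost / qalys:,.0f}/QALY")
```

With these invented numbers the last expansion step costs £60,000/QALY, well above a £30,000/QALY threshold, while the bundle's average CER is under £19,000/QALY, which is the pattern behind our argument that the bundle and its final increment must be judged separately.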
Our more detailed analysis suggests that the current NCSP strategy should be amended, not (as Roberts and Low claim) because the average CER for this strategy (compared with no screening) is close to the “threshold” value of £30000 per quality‐adjusted life year (QALY) gained, but because the incremental cost‐effectiveness of expanding screening from those aged under 20 to those aged under 25 is likely to be far in excess of £30000/QALY gained.
Roberts and Low correctly point out that there is considerable uncertainty in the QALY estimates used in our study. We accounted for this by assigning distributions to the QALY estimates in our probabilistic sensitivity analysis (see fig 3 in our paper). We acknowledge that this is not a perfect solution, but we still believe that it is worth using QALYs as an outcome measure rather than Major Outcomes Averted (MOAs), for two reasons. First, the QALY is the common currency by which decisions are made on health programmes in the UK and other countries. Second, the use of MOAs implies that all of the “major” outcomes are equally serious, so that a case of epididymitis carries the same weight as a case of ectopic pregnancy. We doubt that many people would agree with this. Indeed, the use of MOAs in the CLaSS study leads to the seemingly perverse conclusion that screening men is more cost‐effective than screening women, because the probability of men developing epididymitis is greater (in their study) than the probability of women developing complications.4 Since most previous cost‐effectiveness studies have used MOAs as their outcome measure, we also provide this information.
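The general approach of a probabilistic sensitivity analysis over uncertain QALY estimates can be sketched as follows. The distribution, cost figure, and decision threshold here are hypothetical placeholders chosen for illustration; they are not the inputs or outputs of our model.

```python
# Minimal sketch of a probabilistic sensitivity analysis in which the
# uncertain QALY gain is drawn from an assumed distribution. All numbers
# (cost, threshold, distribution parameters) are hypothetical.
import random

random.seed(1)

N = 10_000            # number of Monte Carlo draws
threshold = 30_000    # illustrative willingness-to-pay, £/QALY
cost = 120_000        # assumed cost per 1000 women screened (invented)

count_cost_effective = 0
for _ in range(N):
    # Draw the uncertain QALY gain per 1000 women screened from a
    # triangular distribution spanning an assumed plausible range.
    qaly_gain = random.triangular(2.0, 8.0, 4.0)  # (low, high, mode)
    if cost / qaly_gain <= threshold:
        count_cost_effective += 1

print(f"P(cost-effective at £{threshold:,}/QALY) ~ "
      f"{count_cost_effective / N:.2f}")
```

Rather than reporting a single cost per QALY, this yields the probability that screening is cost-effective at a given threshold, which is the kind of result summarised in fig 3 of our paper.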
We agree with Roberts and Low that more research is needed in the two key areas of progression to pelvic inflammatory disease and QALY values for chlamydia states. However, it is not paradoxical, as they suggest, to call for better understanding of the natural history of chlamydia infection, when many studies have subsequently cast doubt on the earliest empirical estimates of pelvic inflammatory disease progression (on which initial decisions to screen were based). The purpose of science is to move knowledge forward and inform best practice. We hope that our paper has helped achieve this.