Chance variability makes it impossible to assess reliably whether individual trusts are meeting annual targets for reduction in the risk of MRSA infection
One of the core standards set by the Department of Health is to achieve year on year reductions in rates of infection with methicillin resistant Staphylococcus aureus (MRSA).1 This was clarified in November 2004 by the (then) health secretary John Reid, who said that he expected “MRSA bloodstream infection rates to be halved in our hospitals by 2008,” that “NHS Acute Trusts will be tasked with achieving a year on year reduction,”2 and that such a target was “achievable, measurable, and not too burdensome.” Several problems can arise, however, when measuring change in rates, particularly when the observed number of events is fairly low. These include the effects of chance variability, regression to the mean, and low power to detect genuine underlying changes. These problems are accentuated with an infectious disease, since cases tend to cluster and hence rates are “over-dispersed” relative to chance variation.3 So how should we interpret government targets on MRSA infections?
The government target of a 50% reduction in cases over three years essentially corresponds to a 20% annual reduction. But the term target is ambiguous: does it refer to an observed change in rates or a true underlying reduction in risk? At a national level this distinction may be unnecessary because of the large numbers involved, but for individual trusts the play of chance can mean that an observed rate reduction that meets the target may not accurately reflect a corresponding change in underlying risk and, conversely, that a true risk reduction may not be reflected in the observed rates, which might even increase. This lack of clarity can give rise to a range of possible criteria to determine whether a target has been met, as illustrated below.
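The equivalence between the three-year and annual targets is simple compounding, and can be checked directly. A minimal sketch (pure arithmetic, no data from the article):

```python
# A 50% reduction over three years requires the same multiplicative risk
# factor in each year; solving factor**3 = 0.5 gives the annual target.
annual_factor = 0.5 ** (1 / 3)        # risk multiplier needed in each of 3 years
annual_reduction = 1 - annual_factor  # about 0.206, i.e. roughly 20% per year
print(f"required annual reduction: {annual_reduction:.1%}")
```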
The Department of Health publishes data on MRSA infection in individual trusts obtained through its mandatory reporting scheme.4 Using data for financial years 2001-2, 2002-3, and 2003-4, I estimated how close the variation was to that expected by chance from the Poisson regression residuals around an individual trend line for each trust.5 This showed a significant over-dispersion of 1.76 (Pearson χ2 = 304.3, degrees of freedom = 167; P < 0.001). For example, Aintree Hospitals NHS Trust had 34 cases in 2001-2, rising to 66 cases in 2002-3, and falling to 48 in 2003-4, far more variability than would be expected by chance alone. Such over-dispersion is expected with an infectious disease, and means that all interval estimates of MRSA rates should be 33% wider (1.33 = √1.76) than those that assume simple random variability.
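The widening of interval estimates implied by over-dispersion can be sketched as follows; phi = 1.76 is the estimate quoted above, and the count of 48 (Aintree's 2003-4 figure) is used purely for illustration:

```python
import math

phi = 1.76                                 # over-dispersion estimated in the article
count = 48                                 # e.g. Aintree's 2003-4 count, for illustration
se_poisson = math.sqrt(count)              # standard error under simple Poisson variation
se_adjusted = se_poisson * math.sqrt(phi)  # standard error allowing for over-dispersion
widening = math.sqrt(phi)                  # about 1.33: intervals 33% wider than Poisson
```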
Figure 1 shows the MRSA rates in 2003-4 per 1000 bed days plotted against the number of bed days. Such funnel plots can be interpreted as control charts6,7 and have been used in a slightly different form by the Health Protection Agency when publishing MRSA rates.3 The control limits, expanded to allow for over-dispersion, are set around the overall rate observed for each trust type: 0.24 per 1000 bed days for specialist trusts, 0.16 for general acute, and 0.10 for single specialty. Although most trusts lie within the control limits, there are some clear outliers with high rates.
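The construction of such over-dispersion-widened control limits can be sketched as below. The function name and the example exposure of 300 000 bed days are illustrative assumptions, not figures from the article; the target rate of 0.16 per 1000 bed days is the overall general acute rate quoted above.

```python
import math

def control_limits(target_rate, bed_days, phi=1.76, z=3.0):
    """Funnel-plot limits for a rate per 1000 bed days, widened for over-dispersion.

    A sketch of the control-chart construction described in the article;
    z = 3 gives 3 standard deviation limits, z = 2 approximately 95% limits.
    """
    n = bed_days / 1000.0                            # exposure in units of 1000 bed days
    se = math.sqrt(target_rate / n) * math.sqrt(phi)  # Poisson SE of the rate, widened
    return max(target_rate - z * se, 0.0), target_rate + z * se

lo, hi = control_limits(0.16, 300_000)   # general acute target rate, assumed exposure
```

As the funnel shape suggests, the limits tighten as bed days increase, so large trusts must lie closer to the target rate to remain "in control".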
Figure 2 (left) shows a funnel plot of the change in rates between 2002-3 and 2003-4, pooling all types of trusts and using control limits around a hypothesis of no change (ratio = 1). All the trusts lie within the 3 standard deviation limits (adjusted for over-dispersion), showing that any attempt at ranking trusts into a detailed league table of change would be entirely spurious. Figure 2 also emphasises the wide variability in change expected (and observed) in centres with low counts, which may occur through the trust either being small or having low MRSA rates, or both.
Figure 2 (right) shows the ratio of the rates in 2003-4 to 2002-3 against the baseline 2002-3 rates. The clear negative relation between these (correlation = -0.43) shows the phenomenon of regression to the mean.8 Essentially, since high or low rates in 2002-3 are largely due to runs of chance events that are unlikely to be repeated, rates in the subsequent year will tend to be closer to the overall average rate. This immediately explains the finding reported by the media that “some of the hospitals with the lowest rates last year had a rise in MRSA cases this year,” and examples such as “York Health Services NHS Trust slipped 42 places in the ranking from having the lowest rate of MRSA cases for a general hospital last year.”9
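Regression to the mean can be demonstrated with a simulation sketch under the deliberately simple assumption of many identical trusts, each with the same true Poisson mean of 32 cases a year (the article's median trust size); the sampler and the cut-off of 38 cases are illustrative choices:

```python
import math
import random

rng = random.Random(7)

def poisson_sample(mu, rng):
    """Knuth's Poisson sampler; adequate for the moderate means used here."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

# 1000 hypothetical trusts, all with the same true mean of 32 cases a year:
# any unusually high first-year count is pure chance, so second-year counts
# drift back towards the common mean even though nothing has truly changed.
year1 = [poisson_sample(32, rng) for _ in range(1000)]
year2 = [poisson_sample(32, rng) for _ in range(1000)]

high = [i for i, y in enumerate(year1) if y >= 38]        # "worst" trusts in year 1
mean_y1_high = sum(year1[i] for i in high) / len(high)
mean_y2_high = sum(year2[i] for i in high) / len(high)    # falls back towards 32
```

The same logic run in reverse explains why the best performers in one year (such as York Health Services NHS Trust) tend to slip in the next.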
Figure 3 (left) shows the chance limits for trusts with no true risk reduction: the 95% limits, for example, would then correspond to outcomes that show a “significant” (P < 0.05) change. Trusts with, say, fewer than 75 cases a year (comprising 82% of all trusts) would have a fair chance of observing a reduction in the shaded area—that is, achieving the target 20% reduction—by chance alone; for a median trust with 32 cases the probability of a false positive reduction is 25%. To prove a significant reduction, a median trust (marked M1) would need to achieve an observed risk ratio of 0.47 (53% observed risk reduction).
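The 25% false positive figure for a median trust can be approximated by simulation. The sketch below assumes both years' counts vary independently around an unchanged true mean of 32, with over-dispersion modelled as a gamma-Poisson (negative binomial) mixture giving variance phi × mean; this mixture is an illustrative modelling choice, not necessarily the calculation behind the article's figure.

```python
import math
import random

rng = random.Random(42)
PHI, MU = 1.76, 32          # over-dispersion and median trust size from the article

def overdispersed_count(mu, phi, rng):
    """Negative binomial count via a gamma-Poisson mixture (variance = phi * mu)."""
    shape = mu / (phi - 1.0)                     # gamma shape giving the target variance
    lam = rng.gammavariate(shape, mu / shape)    # gamma-distributed mean, E[lam] = mu
    L, k, p = math.exp(-lam), 0, 1.0             # Knuth's Poisson sampler
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

hits = trials = 0
for _ in range(20_000):
    y1 = overdispersed_count(MU, PHI, rng)       # baseline year, no true change
    y2 = overdispersed_count(MU, PHI, rng)       # following year, same true risk
    if y1 == 0:
        continue                                 # ratio undefined for a zero baseline
    trials += 1
    hits += (y2 / y1) <= 0.8                     # observed reduction of 20% or more
false_positive = hits / trials                   # close to the article's figure of ~25%
```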
Figure 3 (right) shows that a trust with a true risk reduction of 20% has only a 50% chance of actually observing a reduction in the shaded area and hence achieving the target. However, if the aim were merely to achieve a reduction compatible with the target (that is, not significantly different from it), then a median trust (M2) would need only to avoid an observed increase of more than 49% in order not to reject the hypothesis of a true 20% reduction. Detailed power calculations are provided on bmj.com.
Performance monitoring presents many challenges, and it may need a protocol10 similar to that required before a clinical trial, including careful definitions, power calculations, and so on. I have not dealt with the considerable difficulty in defining the numerator and denominator of the MRSA infection rate, but any more restrictive definition of MRSA will reduce the counts and hence only exaggerate the issues raised.
The analysis suggests that, although MRSA rates show some systematic differences between trusts, recent observed variability in annual changes has been entirely explainable by chance (fig 2). Year on year changes are therefore extremely difficult to measure since they are strongly influenced by chance variability and exhibit substantial regression to the mean. Even if an average trust is truly reducing risk according to the government target, it has little chance of showing a significant reduction in observed cases on a year on year basis. At the other extreme, most trusts will show results that are statistically compatible with the target risk reduction, even if they truly have not improved. The naive target of an observed annual reduction of 20% will be failed by half the trusts that have truly reduced their risk by that extent.
The Department of Health has set targets for reduction in MRSA rates for individual trusts
It is not clear whether the targets refer to an observed rate reduction or a true reduction in underlying risk
Recent annual changes in MRSA rates have been dominated by chance variability
Reliable annual monitoring of reductions in MRSA rates in individual trusts is not generally feasible
MRSA infection rates formed part of the 2002-3 star ratings for NHS trusts11 with annual change as a performance measure. They were excluded from the 2003-4 ratings12 but reintroduced in 2004-5 with a composite indicator based on current rate, change over a two year baseline, and presence of hand cleaning facilities.13 This seems appropriate, but if MRSA rates are to be used to assess future performance the analysis above suggests further changes should be included:
Finally, the government needs to be more precise about what it means by the term target. When it comes to assessing whether a target has been met, it is vital to distinguish between observed reduction in numbers of cases and reduction in true underlying risk. Underlying risk, though it cannot be precisely measured, is the appropriate interpretation when setting local targets.
Editorial by Duckworth and Charlett and Papers p 982
A table showing the chance of errors for different strategies in assessing targets is on bmj.com
I thank Adrian Cook, Martin Bardsley, and David Cromwell for useful discussions.
Contributors and sources: DJS works with the Healthcare Commission on performance assessment. The views expressed here are his alone. All data were obtained from publicly available sources.
Competing interests: None declared.