J R Soc Med. 2002 November; 95(11): 549–551.

Clinical performance measurement: part 2—avoiding the pitfalls

In part 1 of this paper we explored what aspects of performance should be measured and how [1]. The key to success is to maximize the beneficial consequences while minimizing the unwanted side-effects. In this second and concluding part we discuss strategies for avoiding the pitfalls and highlight examples of good practice. In particular, we argue that the success of policies to improve clinical performance will depend critically on the extent to which the top-down, externally imposed, approach is complemented by devolved systems of performance measurement that encourage learning and improvement amongst frontline clinicians.

POTENTIAL HAZARDS IN PERFORMANCE MEASUREMENT

Here are some of the hazards most relevant to the measurement of clinical performance.

Tunnel vision arises when the system forces concentration on areas covered by the performance measurement scheme to the exclusion of other important but unmeasured areas. In the past, for example, efforts to meet the old-style NHS efficiency targets were often directed at increasing measured activity within the hospital setting, at the expense of improvement in outcome that might have been achieved through increasing care outside the hospital. Similarly, efforts to meet waiting-list targets may have diverted attention to managing the lists (to improve ratings), rather than maximizing improvement in health by prioritizing according to clinical need. The NHS performance assessment framework seeks to redress this balance by covering six areas intended to give a balanced view of healthcare performance [2]. However, there is clearly some way to go: the Government's consultation document suggested 470 possible indicators, only 7% of which fall into the ‘health outcomes’ category [3].

The tendency towards tunnel vision might be offset by devising a large number of measures, none of which are to be neglected. However, the cost and data-collection burden would be prohibitive. Clinical staff would become overwhelmed and effort could be wasted trying to identify the indicators that really count [4].

The need for a clearer focus is the rationale behind the notion of ‘headline’ indicators, published and used to hold organizations to account. Other indicators will be used for internal ‘benchmarking’ purposes only [5]. Key indicators should focus on priority issues; however, priorities can differ between clinicians, managers, patients and politicians, so some sort of consensus is needed.

Suboptimization occurs when NHS staff pursue narrow targets at the expense of the objectives of the system as a whole. For instance, a target set in the hospital sector, such as an increase in day-case rate or a shorter length of stay, may take no account of the increased burden for primary care or social services. Suboptimization may also occur within an organization. Multispecialty teamworking may be compromised if team members pursue targets set in their own specialty without taking account of the impact on the overall effectiveness of the team and the outcome for the patient. One reason for the apparent success of the cancer collaboration pilots in reducing waiting times may be that the whole multidisciplinary team was made responsible for meeting targets.

Suboptimal behaviour can be addressed through mechanisms to encourage partnerships and joint working (e.g. joint health and social service targets set out in the NHS Plan) and strategies to encourage and reward teamworking (e.g. devolved responsibility and team bonuses).

Myopia is the tendency to focus on short-term issues at the expense of long-term considerations that may only show up in performance indicators in several years' time. The classic example is where indicators are focused on curative rather than preventive services. Setting targets that seek to measure improvement over time, rather than on an annual basis, may help to offset myopia. The UK Government has indeed set longer-term targets (e.g. improvements in mortality rates from major killer diseases by 2010) but whether this is sufficient to promote improvements via preventive rather than curative care remains to be seen. The use of process indicators to monitor progress in moving towards long-term goals may help [1], as would the fostering of long-term careers within an organization.

Complacency refers to a lack of motivation for improvement when comparative performance is deemed adequate. Staying out of the headlines by ensuring that performance is ‘good enough not to attract attention’ (even where the potential for better results exists) may be a strong incentive for many NHS staff faced with multiple demands on their time [6,7]. For example, if surgical performance is in the middle of the national distribution, the focus may be on outliers where performance is exceptionally poor. Complacency of this sort can be avoided by measuring performance continuously over time and linking rewards to improvement.

Measure fixation occurs when the pursuit of success as measured, rather than as intended, becomes the main focus. There has been debate about the benefits of the two-week maximum waiting time for cancer referrals. This could override clinical judgment and distort the flow of patients in favour of those who might have cancer, and in turn increase the length of time to treatment for cancer patients overall. Additionally, there is emerging evidence that in some specialties the standard is being met at the expense of an increase in the waiting times of routine referrals, whilst not necessarily identifying treatable causes of cancer [8]. The tendency towards measure fixation may be offset by greater involvement of front-line staff in defining targets and measures and by ensuring there is an evidence base to support the use of specific targets.

Misinterpretation in the form of incorrect inferences about performance can arise where it is difficult to allow for the full range of potential influences on a measure [1]. Failure to take fully into account case-mix and other environmental factors in interpreting ‘league tables’ is a classic case. Better methods for adjusting clinical data can help avoid misinterpretation; at a specialty level, some professional associations have been working with statisticians and others to produce more meaningful comparative audit data [9,10,11]. Recognition of the limitations of indicators, by relying on them as signposts rather than as methods of control, is also important, as is the use of local knowledge to make sense of apparent variations in performance [12,13].
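
The case-mix point is easier to see with a worked example. Below is a minimal sketch, in Python, of indirect standardization, one common approach to risk adjustment: each patient's expected probability of death comes from a reference risk model (simply assumed here; the five-patient series and all risk figures are invented for illustration), and the unit's observed deaths are compared with the expected count as a standardized mortality ratio.

    import math

    # Hypothetical patients for one surgical unit. Each record holds the
    # outcome (1 = death) and a predicted risk of death taken from an
    # assumed reference (e.g. national) risk model capturing case-mix.
    patients = [
        {"died": 0, "risk": 0.02},
        {"died": 1, "risk": 0.30},
        {"died": 0, "risk": 0.05},
        {"died": 1, "risk": 0.45},
        {"died": 0, "risk": 0.10},
    ]

    observed = sum(p["died"] for p in patients)
    expected = sum(p["risk"] for p in patients)

    # Standardized mortality ratio: above 1 means more deaths than the
    # case-mix predicts, below 1 fewer. A raw death rate ignores 'expected'.
    smr = observed / expected

    # Rough 95% interval assuming Poisson variation in the observed count
    # (valid here only because observed > 0).
    se = 1 / math.sqrt(observed)
    low, high = smr * math.exp(-1.96 * se), smr * math.exp(1.96 * se)

    print(f"observed {observed}, expected {expected:.2f}, "
          f"SMR {smr:.2f} (95% CI {low:.2f} to {high:.2f})")

With only five cases the interval is so wide that no league-table position could be defended, which is exactly why comparisons of small units mislead unless such uncertainty is shown alongside the point estimate.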

Gaming occurs when staff seek to influence measured performance by deliberately altering variables other than clinical quality. For instance, an inadequate risk adjustment methodology might allow selection of low-risk patients (and refusal of high-risk patients) so as to secure favourable outcome figures [14]. If social influences on outcome are not properly accommodated, clinicians may indeed become reluctant to practise in areas with disadvantaged social circumstances. Gaming can be offset by careful attention to risk adjustment methods, and by developing indicators that can be used to test for selection bias. There may even be a need to put in place incentives to encourage the treatment of high-risk patients.
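
To illustrate the kind of indicator that can test for selection bias, the sketch below (invented figures throughout; a real audit would apply a formal statistical test to the full risk distribution, and the one-half threshold is arbitrary) compares the predicted preoperative risk of the patients a unit actually treats with a national sample: favourable outcomes achieved on a conspicuously light case-mix warrant scrutiny rather than praise.

    from statistics import mean, stdev

    # Hypothetical predicted preoperative risks, from the same assumed
    # reference model, for one unit's patients and for a national sample.
    unit = [0.02, 0.03, 0.04, 0.02, 0.05, 0.03, 0.04, 0.02]
    national = [0.02, 0.08, 0.15, 0.04, 0.30, 0.06, 0.12, 0.20]

    print(f"unit mean risk     {mean(unit):.3f} (sd {stdev(unit):.3f})")
    print(f"national mean risk {mean(national):.3f} (sd {stdev(national):.3f})")

    # A case-mix far lighter than the national profile is one signal that
    # high-risk patients are being refused rather than outcomes improving.
    if mean(unit) < 0.5 * mean(national):
        print("case-mix much lighter than national: check for risk selection")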

Ossification refers to the organizational paralysis that can arise from an excessively rigid system of measurement. For example, use of day-case rates as an indicator of performance in gynaecology may inhibit the adoption of new techniques that can be applied in the outpatient department. To avoid such side-effects the performance measures must be reviewed regularly; however, a balance must be struck since constant changes will be disruptive and will make long-term change difficult to track.

MAXIMIZING THE POTENTIAL OF CLINICAL PERFORMANCE MEASUREMENT

We have mentioned several specific strategies that can be used to enhance the operation of clinical performance measurement systems. However, perhaps the single most powerful approach is to facilitate clinically led systems that are developed and used locally by those whose behaviour affects the measures of interest. This needs to go beyond initial consultation and become integrated into the entire process of performance measurement. The development of performance measurement systems can be guided and informed by the substantial progress already made: some specialties have developed clinical databases that are sound in terms of coverage and methods of standardization [15,16].

With a few notable exceptions (e.g. intensive care [17]), such developments have operated at regional rather than national level and offer limited opportunities for comparison because of difficulties with risk adjustment. However, several specialties—such as vascular surgery, coloproctology, radiology and upper gastrointestinal surgery [10,11]—are making progress with audited outcomes. These initiatives will enable hospitals and individual clinicians to track their results prospectively against national standards, with suitable adjustment for case-mix, severity of illness and comorbidity. The systems can be designed in ways that avoid many of the dysfunctional consequences identified above. Professional ownership, and freedom to choose measures that focus on clinically meaningful events, will reduce the tendencies towards tunnel vision, ossification and measure fixation. Misinterpretation and the consequent gaming will be limited by the knowledge of local factors that influence outcomes and by robust methods of risk adjustment.
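
As one concrete illustration of such prospective, case-mix-adjusted tracking, the sketch below computes a variable life-adjusted display (VLAD), a technique used in surgical audit of this era, though only one of several possibilities: a running total of predicted risk minus actual outcome, case by case, climbs while results beat expectations and drifts downward when they fall short. The case series is invented for illustration.

    # Minimal variable life-adjusted display (VLAD): accumulate predicted
    # risk minus actual outcome (1 = death, 0 = survival) over consecutive
    # cases, so risk-adjusted performance is followed in treatment order.
    cases = [  # (predicted_risk, died), hypothetical, in order of treatment
        (0.05, 0), (0.10, 0), (0.30, 1), (0.04, 0),
        (0.20, 0), (0.08, 1), (0.15, 0), (0.06, 0),
    ]

    total = 0.0
    for n, (risk, died) in enumerate(cases, start=1):
        total += risk - died
        print(f"case {n:2d}: risk {risk:.2f}, died {died}, "
              f"cumulative lives gained {total:+.2f}")

In practice such a chart would be read alongside formal control limits (for example, a risk-adjusted CUSUM) so that ordinary random variation is not over-interpreted as a real change in performance.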

Attention also needs to be paid to the context within which performance measurement systems operate. There is to be a concerted effort to devolve power to front-line staff throughout the NHS, involving them in decisions at all levels, reducing hierarchies and developing self-managed teams [18]. If performance improvements are to be achieved, the systems that measure and monitor clinical performance will need to reflect the new structure and environment. The imposition of top-down external systems will not fit well within the new NHS: performance measurement systems will be a helpful tool to improve clinical outcomes if they are owned by the professions and build upon the initiatives that have already begun. More generally, as the Bristol Inquiry emphasized, performance measurement is unlikely to be effective unless implemented within a favourable professional culture of enquiry and continuous improvement [19].

Acknowledgments

We thank Andrew Street and Hugh Gravelle for helpful comments.

References

1. Goddard M, Davies HTO, Dawson D, Mannion R, McInnes F. Clinical performance measurement: part 1—getting the best out of it. J R Soc Med 2002;95:508-10
2. NHS Executive. The NHS Performance Assessment Framework. Leeds: NHS Executive, 1999
3. Department of Health. NHS Performance Indicators: A Consultation, May 2001. London: DoH, 2001
4. Goddard M, Mannion R, Smith PC. Enhancing performance in health care: a theoretical perspective on agency and the role of information. Health Econ 2000;9:95-107
5. NHS Executive. The NHS Plan—Implementing the Performance Improvement Agenda. Leeds: NHS Executive, 2000
6. Goddard M, Mannion R, Smith PC. Assessing the performance of NHS hospital trusts: the role of hard and soft information. Health Policy 1999;48:119-34
7. Mannion R, Goddard M. Impact of clinical performance indicators: a case study of NHS hospital trusts. BMJ 2001;323:260-3
8. Jones R, Rubin G, Hungin P. Is the two week rule for cancer referrals working? BMJ 2001;322:1555-6
9. Keogh B, Kinsman R. National Adult Cardiac Surgical Database Report, 1998. London: Society of Cardiothoracic Surgeons of Great Britain and Ireland, 1999
10. Earnshaw J, Ridler B, Kinsman R. National Outcome Audit Report, 1999. London: Vascular Surgical Society of Great Britain and Ireland, 2000
11. Stamatakis J, Thompson M, Chave H, Kinsman R. National Audit of Bowel Obstruction due to Colorectal Cancer, March 1998 to March 1999. London: Association of Coloproctology of Great Britain and Ireland, 2000
12. Appleby J, Thomas A. Measuring performance in the NHS: what really matters? BMJ 2000;320:1464-7
13. Mulligan J, Appleby J, Harrison A. Measuring the performance of health systems. BMJ 2000;321:191-2
14. Dranove D, Kessler D, McClellan M, Satterthwaite M. Is more information better? The effects of report cards on health care providers. NBER Working Paper 8697. Cambridge, MA: National Bureau of Economic Research, 2002 [www.nber.org]
15. Black N. High quality clinical databases: breaking down barriers. Lancet 1999;353:1205-6
16. Black N. Developing high quality clinical databases: the key to a new research paradigm. BMJ 1997;315:381-2
17. Rowan K, Black N. A Bottom-up Approach to Performance Indicators through Clinician Networks. Health Care UK. London: King's Fund, 2000
18. Department of Health. Shifting the Balance of Power within the NHS—Securing Delivery. London: DoH, 2001
19. Davies HTO, Nutley SM, Mannion R. Organisational culture and quality of health care. Qual Health Care 2000;9:111-19
