Measuring outcomes is necessary but difficult to get right
In this week's BMJ, Westaby and colleagues compare the value of two sources of data for determining mortality 30 days after congenital cardiac surgery—hospital episode statistics (HES) and the central cardiac audit database.1 They find that the central cardiac audit database is more complete than HES, but that individual centres need investment to improve the completeness and accuracy of their data. Their investigation follows a study published in the BMJ in 2004 that used HES to compare mortality from congenital heart surgery in different UK centres.2 The study suggested that Oxford had significantly higher mortality than the national average, and the results were reported widely by the media. So have we learnt anything new about the relative value of routinely collected versus specifically collected sources of data?
Routinely collected patient data are regularly analysed to investigate outcomes. Equally regularly, the results are contested on the basis of a specifically collected dataset, often one designed to measure the very thing being looked for. So why use routinely collected data to draw clinical conclusions at all? The advantages include pragmatism, wide coverage, low cost, and easy access; the disadvantages include superficial or inaccurate coding and potentially damaging generalisations.
HES data from the National Health Service (NHS) are widely used to produce outcome information and more recently to publicise differences between hospitals. Data produced for administrative and financial purposes that are centred on the organisation not the patient may never be as complete as data derived from clinicians.
Huge datasets also invite misuse of statistical methods: statistical significance is a product of the number of data points, not necessarily of an association's size or clinical importance. Chance findings will occur when many tests are run on the same data, and association is not the same as causality. Nonetheless, the sheer scale of the HES database makes it attractive, and it has become a rich source of hypothesis generation and evidence on outcomes. The database has been used to investigate associations between case volume and outcome (for example, oesophagectomy3 and repair of aortic aneurysms4), to search for potentially useful predictors of outcome (for example, excess mortality associated with delay in operation after hip fracture5), to carry out quasilongitudinal studies to track changes in outcome related to changes in clinical practice (for example, acute urinary retention and prostatectomy,6 follow-up after emergency admission,7 and changes in mortality after paediatric cardiac surgery2), and increasingly to predict individual outcomes in patients at high risk.
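The two statistical hazards above can be made concrete with a short illustrative calculation. The helper function and the sample sizes below are hypothetical, chosen only to show the arithmetic; they are not drawn from HES data:

```python
import math
from statistics import NormalDist

def approx_p_value(r, n):
    """Approximate two-sided p-value for a correlation r observed in n
    records, using the large-sample normal approximation on Fisher's
    z transform (adequate for small r and large n)."""
    z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - 3)
    return 2 * (1 - NormalDist().cdf(abs(z)))

# The same negligible correlation (r = 0.01) is "non-significant" in a
# modest study but crosses p < 0.05 in a database-sized sample:
print(approx_p_value(0.01, 1_000))    # ≈ 0.75, not significant
print(approx_p_value(0.01, 100_000))  # ≈ 0.002, "significant" despite a tiny effect

# Multiple comparisons: running k independent tests at alpha = 0.05
# gives a 1 - 0.95**k chance of at least one false positive.
for k in (1, 20, 100):
    print(k, round(1 - 0.95**k, 3))   # rises from 0.05 to ≈ 0.64 to ≈ 0.99
```

The first figure shows why significance in a huge dataset says little about clinical importance; the second shows why chance findings are expected when many questions are asked of the same data.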
Results that conflict with HES data have often been reported—for example, the relation between hospital volume and outcome8 9 and the predictive factors for poorer outcome in certain patients.10 This is not just a contest of science and statistics but of politics and hearts and minds. As Westaby and colleagues note, the media rapidly picked up on the conclusion based on HES data that Oxford had significantly higher mortality after paediatric cardiac surgery than the national average.
Westaby and colleagues recognise that special datasets have problems too: timescales and numbers of episodes are often smaller, making it more likely that a rare event or a true difference will be missed, and collecting outcome data on one's own performance may bias the case mix of patients selected for intervention. Nonetheless, they conclude that HES data should not be used for comparisons within specialties.
Patients do not necessarily trust official data sources.11 We need to know if they will trust information collected by doctors who analyse their own data and claim that their own performance is sound. The NHS has recently appointed a new medical director, Professor Sir Bruce Keogh, who is famous for his leadership of British cardiothoracic surgeons in measuring outcomes and making them public. This appointment sends a clear signal to staff, the public, and the media about the importance of measuring outcomes.
It is unclear how much patients change their choice of provider on the basis of such knowledge, or how much employers manage their clinical staff with an eye on comparative performance, however intuitively important this seems. For all its potential problems, HES has more to offer than league tables of performance. Better knowledge will flow from a collaboration of sound analyses, based on complete data that are accurately coded by clinicians with an interest in the outcome. This can only lead to more complete and contextualised data being released into the public domain.
Competing interests: None declared.
Provenance and peer review: Commissioned; not externally peer reviewed.