BMJ. 2008 February 2; 336(7638): 249.
PMCID: PMC2223012
Body Politic

How do we get the measure of patient care?

Nigel Hawkes, health editor, the Times

The UK government’s latest plan is to try asking the patients rather than the doctors

Death is an end point we can all understand. When reading the latest clinical trial, I always find my eyes drawn to the column that shows mortality. This may be a little morbid, but no other measure has the crisp finality of death.

As a means of assessing the quality of care, however, it has many drawbacks. True, the heart surgeons have submitted to an audit of the number of their patients who die, enabling the public to weigh up the odds before going under the knife. But in most circumstances, mortality—even a run of bad results—tells us little about the quality of care.

In recent months both Papworth Hospital near Cambridge and Glasgow Royal Infirmary suspended their heart transplant programmes after a run of deaths: seven out of 20 at Papworth, and four out of 11 at Glasgow. Both resumed after close scrutiny found no common factor behind the deaths.

I would be the last to criticise hospitals for taking precautionary action. It’s a lot better than dealing with the fallout if a journalist hears of the deaths and starts to cry cover-up. In public relations terms, what Papworth and Glasgow did was exemplary. But it wasn’t rational; even the average punter at next month’s Cheltenham Festival could have understood the odds better than that.
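
To see what those odds look like, consider a rough binomial calculation, sketched below in Python. The 20% baseline mortality used here is an assumption for illustration only, not a figure reported by either hospital.

    from math import comb

    def tail_prob(n, k, p):
        """Chance of k or more deaths in n operations if each patient
        independently dies with probability p."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))

    # Assumed 20% baseline mortality, for illustration only
    print(f"7+ deaths in 20 (Papworth): {tail_prob(20, 7, 0.20):.0%}")  # ~9%
    print(f"4+ deaths in 11 (Glasgow):  {tail_prob(11, 4, 0.20):.0%}")  # ~16%

On that assumption neither run would be remarkable: clusters of this size turn up by chance alone often enough to say little about the underlying quality of care.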

Few operations are common enough for mortality statistics to be meaningful. An analysis published in 2004 (JAMA 2004;292:847-51) concluded that coronary artery bypass was the only operation for which any US hospital had a caseload large enough for a doubling of the mortality rate to be detectable. For other operations, such as hip replacements, repair of abdominal aortic aneurysms, or paediatric heart surgery, either the volume of operations or the chance of death was simply too small.

Even if you increased the sample size by adding five years’ results together, heart bypasses were still the only operation for which more than half of US hospitals had a sufficient caseload for statistics to be useful. The study did not include heart transplants, but the sample is obviously far too small for the deaths reported by Papworth or Glasgow to mean anything at all.
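
A rough power calculation makes the JAMA finding concrete. The sketch below uses a normal approximation to the binomial; the 4% baseline mortality and 80-case caseload are illustrative assumptions, not figures from the study.

    import math

    def power_to_detect_doubling(n, p0, alpha_z=1.645):
        """Approximate one-sided power to detect a doubling of a baseline
        mortality rate p0 with a caseload of n operations."""
        p1 = 2 * p0
        # Death count needed for significance if the true rate is p0
        crit = n * p0 + alpha_z * math.sqrt(n * p0 * (1 - p0))
        # Chance of exceeding that count if the rate really has doubled
        z = (crit - n * p1) / math.sqrt(n * p1 * (1 - p1))
        return 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))

    # Assumed 4% baseline mortality and 80 cases, for illustration only
    print(f"{power_to_detect_doubling(80, 0.04):.0%}")  # ~55%

On those assumptions the chance of spotting even a doubled death rate is only about 55%, well short of the conventional 80%: more often than not, a genuinely worse surgeon would pass unnoticed.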

If we cannot judge our surgeons by the number of patients who die, how can we judge them? Some clinicians argue that “outcome measures,” now so popular among makers of health policy, are a pipe dream. Professor Richard Lilford from the University of Birmingham, for example, argued in the BMJ that it is impossible to adjust the data so as to take into account all the possible variables that determine how well a patient does (BMJ 2007;335:648-50; doi: 10.1136/bmj.39317.641296.AD).

If outcomes are a poor measure even at the level of the hospital, they are worse still at the level of the individual clinician. Professor Lilford’s answer is not to measure outcomes at all but to measure process: if a hospital or a doctor follows proved tenets of clinical care, and can show it, outcomes will be better. Rather than trying to measure outcomes directly, we should measure adherence to the processes known to produce them.

Turning the issue around, how about asking the patients rather than the doctors? This is the plan hatched by the Department of Health and slipped out before Christmas with such a total absence of publicity that only the alert Nick Timmins of the Financial Times even spotted it.

A diligent search among the many documents outlining the 2008-9 operating framework will tell you that in April the NHS plans to introduce the routine collection of Patient Reported Outcome Measures (PROMs). To start with, these will cover hip and knee replacements, hernias, and varicose veins. The national introduction follows a pilot programme run by Professor Nick Black of the London School of Hygiene and Tropical Medicine with 2400 patients in 24 sites.

The system involves asking patients a series of questions on the day of admission, using forms they fill in themselves, and following up with a second form three months later for hernias and varicose veins and six months later for hip and knee replacements. The forms are called EQ-5D; they have been proved to be an accurate way of assessing whether patients feel any better, and the administrative cost works out at about £5 per patient.
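
For readers curious about what the forms capture: the three-level EQ-5D asks about five dimensions of health, each answered on a three-point scale. The sketch below, using invented patient data, shows a crude before-and-after comparison; real PROM analyses convert the five answers into a single index score using a country-specific value set, which is not attempted here.

    # The three-level EQ-5D covers five dimensions, each answered
    # 1 = no problems, 2 = some problems, 3 = extreme problems
    DIMENSIONS = ("mobility", "self_care", "usual_activities",
                  "pain_discomfort", "anxiety_depression")

    def dimensions_changed(before, after):
        """Count dimensions that improved or worsened between the
        admission form and the follow-up form."""
        improved = sum(after[d] < before[d] for d in DIMENSIONS)
        worsened = sum(after[d] > before[d] for d in DIMENSIONS)
        return improved, worsened

    # Invented hip replacement patient: pain and mobility improve
    pre  = {"mobility": 3, "self_care": 2, "usual_activities": 3,
            "pain_discomfort": 3, "anxiety_depression": 2}
    post = {"mobility": 2, "self_care": 1, "usual_activities": 2,
            "pain_discomfort": 1, "anxiety_depression": 2}
    print(dimensions_changed(pre, post))  # (4, 0)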

The NHS already gathers reams of information, not all of it of obvious relevance. Yet it still has some odd gaps. It has always puzzled me, for example, that, having pioneered hip implants, the NHS did nothing to monitor how well they worked. This has now been put right, and the National Joint Registry is beginning to produce useful results—better late than never.

PROMs are probably a good idea. But if the data are to be valid, at least 80% of patients will have to fill in the forms before they are treated, and 80% of those will have to complete the final forms; even meeting both targets exactly would yield complete before-and-after data for only 64% of patients. The guidance issued in December admits that some patients may be incapable of filling in their PROMs, and that it will be up to clerking staff to spot them.

And what about the follow-up forms? Statistics show high non-attendance rates for follow-up appointments (19% for men aged 15 to 44), so the 80% target looks pretty ambitious. Real patients—except those who feel terrible—may simply treat the follow-up forms as another bit of junk mail. The danger is that PROMs will add to the noise without contributing much to the signal.
