Sixty years ago, the new National Health Service promised that a doctor would be assured ‘freedom... to pursue his professional methods in his own individual way, and not to be subject to outside clinical interference’.1 But after thirty years, the Chief Medical Officer, Sir George Godber, set out to define a ‘Cogwheel’ structure for the accountability and self-regulation of hospital doctors,2 and soon a non-governmental inquiry reported ‘It is a necessary part of a doctor's professional responsibility to assess his work regularly in association with his colleagues.’3 In evidence to the Royal Commission on the NHS in 1977, the British Medical Association was ‘not convinced of the need for further supervision of a qualified doctor's standard of care’. In its final report, the Commission responded, ‘We are not convinced that the professions regard the introduction of medical audit and peer review with a proper sense of urgency.’4
Thus, thirty years ago, standards in the NHS referred not to clinical practice or services but to buildings, equipment, capacity and allocation of resources.5 Any defects in the system were blamed on shortage of staff, money or facilities—after all, the NHS was then one of the cheapest comprehensive health systems in the world. There was little effort to examine how those resources were used or whether they could yield better clinical results. There had been several public scandals about the treatment of patients, the behaviour of doctors and the management of institutions, particularly in long-term care. But few people were keen for improvement, or even recognized a need for it. Tradition and the stout defence of clinical freedom made the management of doctors as easy as herding cats.
The concept of ‘quality’ in the NHS was effectively launched in 1983 by the report of Roy Griffiths on the management of the NHS.6 He emphasized the importance of consumers (previously called patients) in defining expectations and judging performance. He also replaced hospital management committees (administrator, treasurer, nurse, doctor) with one general manager, and suggested that one senior assistant should be responsible for quality. In the ensuing scramble for jobs many senior nurses became directors of quality, and established committees, structures and systems for ‘quality assurance’.
So, in the late 1980s, quality was driven by nurses and supported with training and research by the Royal College of Nursing. Some medical Royal Colleges and Faculties—notably the anaesthetists and thoracic surgeons—had begun to identify and question variations in clinical outcomes. But most doctors were not systematically involved in the new movement until participation in medical audit became a formal requirement of the NHS.
Unlike some other propositions in the White Paper (especially the purchaser-provider split), the introduction of medical audit7 as an educational tool had a friendly reception from the profession, albeit with reservations. The political intent was clear; the Thatcher Government had already subdued the unions and the legal profession, and now was the time to make doctors more accountable. The implementation and implications of medical audit were not at all clear; civil servants were given six months to make practical arrangements.
Audit committees were set up at local, district and regional level and were mostly unconnected to the existing structures and methods of quality assurance. Many argued that there was not enough time, money, guidance, support or reliable information for systematic audit. But then audit budgets were established, audit assistants were invented and absurd sums were invested in useless stand-alone computers for consultants. General practitioners were not accorded the same largesse; the obligation of audit was considered to be inherent in their existing contracts.
After the teething problems, the underlying weaknesses of the original plan became clearer. First, doctors were trained to evaluate and treat patients one at a time; most had neither the knowledge nor the skills to compare clinical processes and outcomes systematically. Second, the professional bodies and academic institutions had not been involved from the start; there was no coordinated plan for research, training and professional development. Third, evidence for effective medical practice was largely unavailable or inaccessible; ‘good’ practice was based on tradition and personal preference. Fourth, nurses, allied professions and, especially, managers were excluded; doctors were able to change their own practice but were unable to change the system in which they worked. Finally, the medical audit committees were more advisory than executive; they were not accountable to the management or to the public.
Many doctors, particularly the older ones, were uncomfortable discussing their clinical results with other doctors; for them, the move in the mid-1990s to multidisciplinary clinical audit came too soon. The concept of working in clinical teams was better received in general practice and long-term care than in acute hospitals.
About the same time, ‘audit’ slipped off the list of NHS priorities, to be replaced by ‘clinical effectiveness’. Attention turned away from measuring the behaviours of clinicians and organizations and on to the more palatable issues of evidence-based medicine, research, technology and machines. This transition was celebrated by another renaming of the audit committees and retitling of audit staff, the launch of a national institute and, of course, another journal. Many of the lessons of the audit era, though still relevant, were not followed through at local level, particularly regarding training needs, clinical systems and professional accountability.
By the late 1990s the NHS was littered with the relics of earlier expeditions, and in need of a unifying concept. The solution came from the future Chief Medical Officer for England, Liam Donaldson, in the form of ‘clinical governance’—a term that has been progressively adopted in many other countries to fill the gap between government ‘stewardship’ of the health system and local ‘management’. And again the committees and staff were relabelled, and new journals appeared.
Latest in the series of quality priorities (patients' rights, clinical competence, effectiveness, service performance and so on) is the safety of patients and staff. Evidence from around the world, starting with the Harvard Medical Practice Study,8 consistently told us that healthcare is dangerous and that the UK is no exception;9 for example, one patient in ten is harmed during an inpatient stay. A series of reports from the Institute of Medicine (IoM) analysed the causes and effects of failures in the USA and made far-reaching recommendations that focused on the systems of training and healthcare rather than on the individuals who receive or provide care.10,11 Most of those messages could apply to all developed countries, including the UK.
The British equivalent of the IoM reports was triggered by an investigation into paediatric cardiac surgery in a Bristol teaching hospital.12 A dissection of performance management from one clinical department all the way to the Department of Health provided a meticulous case study not only for England but also for much of the rest of the world. Much of the evidence suggested that, in the early 1990s in one large hospital, several key national initiatives to promote quality had failed: external monitoring, performance management, market competition, consumer empowerment, clinical managers, service contracting, medical audit and data systems had all proved unable in this instance to define, measure and ensure compliance with acceptable standards of organization and practice. Many of the resulting recommendations were not new; indeed, most of the proposals on peer review had been formally issued to the NHS in the previous ten years but not followed through.
Bristol exposed numerous systematic weaknesses in the NHS but, despite these, the UK can fairly claim to have achieved many quality milestones in the past twenty years, and to have pioneered many quality systems in Europe. Here are some highlights and lowlights:
Irrespective of the outcome of any plebiscite, or of the rules of ‘subsidiarity’ which leave health services entirely (well, mostly) the business of Member States, the greatest formative pressures on the UK will come from the European Union. Freedom of trade, mobility of staff and patients, reciprocation of biomedical and health service research, professional training and regulation, and protection of public safety will increasingly define common standards for the provision, assessment and improvement of healthcare. Britain's non-governmental organizations have already contributed to this movement—for instance, with clinical pathology accreditation18 and guidelines for diagnostic radiology19—but public bodies also must be prepared to export, and to import. This will demand that national policies become more explicit and joined-up within and between countries, that national support agencies (such as for clinical guidelines in France, Scotland and England) share rather than duplicate work and resources, and that performance data are standardized and available across the borders of Europe.
Even though the UK shares fewer patients and services with the rest of the world, it should still watch and learn from the experience of others, such as the ‘new rules to redesign and improve care’ proposed by the IoM,11 standards for health service assessment20 and the public inquiries into health service scandals elsewhere. Someone needs to be actively scanning the horizon, and government and the NHS must listen and be able to respond. Whose business is that?