J R Soc Med. 2005 May; 98(5): 224–227.
PMCID: PMC1129046

Standards in the NHS

Sixty years ago, the new National Health Service promised that a doctor would be assured ‘freedom... to pursue his professional methods in his own individual way, and not to be subject to outside clinical interference’.1 But after thirty years, the Chief Medical Officer, Sir George Godber, set out to define a ‘Cogwheel’ structure for the accountability and self-regulation of hospital doctors,2 and soon a non-governmental inquiry reported ‘It is a necessary part of a doctor's professional responsibility to assess his work regularly in association with his colleagues.’3 In evidence to the Royal Commission on the NHS in 1977, the British Medical Association was ‘not convinced of the need for further supervision of a qualified doctor's standard of care’. In its final report, the Commission responded, ‘We are not convinced that the professions regard the introduction of medical audit and peer review with a proper sense of urgency.’4

Thus, thirty years ago, standards in the NHS referred not to clinical practice or services but to buildings, equipment, capacity and allocation of resources.5 Any defects in the system were blamed on shortage of staff, money or facilities—after all, the NHS was then one of the cheapest comprehensive health systems in the world. There was little effort to examine how those resources were used or whether they could yield better clinical results. There had been several public scandals about the treatment of patients, the behaviour of doctors and the management of institutions, particularly in long-term care. But few people were keen for improvement, or even recognized a need for it. Tradition and the stout defence of clinical freedom made the management of doctors as easy as herding cats.

Quality assurance

The concept of ‘quality’ in the NHS was effectively launched in 1983 by the report of Roy Griffiths on the management of the NHS.6 He emphasized the importance of consumers (previously called patients) in defining expectations and judging performance. He also replaced hospital management committees (administrator, treasurer, nurse, doctor) with one general manager, and suggested that one senior assistant should be responsible for quality. In the ensuing scramble for jobs many senior nurses became directors of quality, and established committees, structures and systems for ‘quality assurance’.

So, in the late 1980s, quality was driven by nurses and supported with training and research by the Royal College of Nursing. Some medical Royal Colleges and Faculties—notably the anaesthetists and thoracic surgeons—had begun to identify and question variations in clinical outcomes. But most doctors were not systematically involved in the new movement until participation in medical audit became a formal requirement of the NHS.

Figure 1
Lenin: ‘Freedom is precious, so precious it must be rationed’

Medical audit

Unlike some other propositions in the 1989 White Paper Working for Patients (especially the purchaser-provider split), the introduction of medical audit7 as an educational tool had a friendly reception from the profession, albeit with reservations. The political intent was clear; the Thatcher Government had already subdued the unions and the legal profession, and now was the time to make doctors more accountable. The implementation and implications of medical audit were not at all clear; civil servants were given six months to make practical arrangements.

Audit committees were set up at local, district and regional level and were mostly unconnected to the existing structures and methods of quality assurance. Many argued that there was not enough time, money, guidance, support or reliable information for systematic audit. But then audit budgets were established, audit assistants were invented and absurd sums were invested in useless stand-alone computers for consultants. General practitioners were not accorded the same largesse; the obligation of audit was considered to be inherent in their existing contracts.

After the teething problems, the underlying weaknesses of the original plan became clearer. First, doctors were trained to evaluate and treat patients one at a time; most had neither the knowledge nor the skills to compare clinical processes and outcomes systematically. Second, the professional bodies and academic institutions had not been involved from the start; there was no coordinated plan for research, training and professional development. Third, evidence for effective medical practice was largely unavailable or inaccessible; ‘good’ practice was based on tradition and personal preference. Fourth, nurses, allied professions and, especially, managers were excluded; doctors were able to change their own practice but were unable to change the system in which they worked. Finally, the medical audit committees were more advisory than executive; they were not accountable to the management or to the public.

From audit to effectiveness

Many doctors, particularly the older ones, were uncomfortable discussing their clinical results with other doctors; for them, the move in the mid-1990s to multidisciplinary clinical audit came too soon. The concept of working in clinical teams was better received in general practice and long-term care than in acute hospitals.

About the same time, ‘audit’ slipped off the NHS priorities, to be replaced by ‘clinical effectiveness’. Attention turned away from measuring the behaviours of clinicians and organizations and on to more palatable issues of evidence-based medicine, research, technology and machines. This transition was celebrated by another renaming for the audit committees and retitling of audit staff, the launch of a national institute and, of course, another journal. Many of the lessons of the audit era, though still relevant, were not followed through at local level, particularly regarding training needs, clinical systems and professional accountability.

And so to governance

By the late 1990s the NHS was littered with the relics of earlier expeditions, and in need of a unifying concept. The solution came from the future Chief Medical Officer for England, Liam Donaldson, in the form of ‘clinical governance’—a term that has been progressively adopted in many other countries to fill the gap between government ‘stewardship’ of the health system and local ‘management’. And again the committees and staff were relabelled, and new journals appeared.

...and safety

Latest in the series of quality priorities (patients' rights, clinical competence, effectiveness, service performance and so on) is the safety of patients and staff. Evidence from around the world, starting from the Harvard Medical Practice Study,8 consistently told us that healthcare is dangerous and that the UK is no exception;9 for example, one patient in ten is damaged during an inpatient stay. A series of reports from the Institute of Medicine (IoM) analysed the causes and effects of failures in the USA and made far-reaching recommendations that focused on the systems of training and healthcare rather than the individuals who are receivers or providers.10,11 Most of those messages could apply to all developed countries, including the UK.

Learning from the Bristol Inquiry

The British equivalent of the IoM reports was triggered by investigation into paediatric cardiac surgery in a Bristol teaching hospital.12 A dissection of performance management from one clinical department all the way to the Department of Health provided a meticulous case study not only for England but also for much of the rest of the world. Much of the evidence suggested that, in the early 1990s in one large hospital, several key national initiatives to promote quality had failed: external monitoring, performance management, market competition, consumer empowerment, clinical managers, service contracting, medical audit and data systems had all failed in this instance to define, measure and ensure compliance with acceptable standards of organization and practice. Many of the resulting recommendations were not new; indeed, most of the proposals on peer review had been formally issued to the NHS in the previous ten years but not followed through.

Twenty years on

Bristol exposed numerous systematic weaknesses in the NHS but, despite these, the UK can fairly claim to have achieved many quality milestones in the past twenty years, and to have pioneered many quality systems in Europe. Here are some highlights and lowlights:

  • Professional culture: doctors, in particular, have moved from unfettered clinical freedom towards guarded acceptance of accountability, transparency, regulation and evidence-based medicine
  • Government policy: has changed too much; successive ministers have started new, largely untested ‘quality’ initiatives without a coherent and comprehensive strategic plan for improvement which is owned by stakeholders and able to survive a change of government
  • Confusion of leadership: the UK does not clearly separate the government role of stewardship from the NHS Executive role of management, or set out to balance top-down control with bottom-up self-regulation. This is a common defect of predominantly publicly funded health systems
  • National agencies: organizations were set up including bodies for clinical practice, competence assessment, patient safety, controls assurance, clinical negligence, service inspection; many countries have followed suit, but their agencies are less fragmented, more participative and more stable
  • Public/private partnerships: Britain (especially England) has been slow to adopt public/private partnerships for quality improvement, which have been promoted by independent commissions in Australia13 and the USA.14 Examples of collaboration in the UK include the Scottish Intercollegiate Guidelines Network with the already reformed Clinical Standards Board (now QIS15), and the proposed concordat by which a statutory inspectorate (the Healthcare Commission) could reciprocate with voluntary and professional peer review programmes16
  • Professional development: postgraduate programmes, staff appraisal and revalidation have become the norm in the UK but are still rare in many other countries; in general, Europe lags behind North America in work-place systems of credentialling and privileging
  • Public information: consumers in the UK now have much more access to information about rights, choices, and services; systematic independent surveys of patient experience add to routine measures of service performance, but both sources are compromised by massage and selective publication, spawning more independent sources (e.g. Dr Foster [www.drfoster.com])
  • Clinical effectiveness: the NHS has been a pioneer in clinical audit, practice guidelines, indicators and confidential inquiries but their messages are not consistently heeded, implemented or followed up at local or national level17
  • Quality infrastructure: ‘audit assistants’ grew from a junior handful in 1990 into a career pathway of several thousand support professionals; there is no agreed national training curriculum for them, but technical skills and knowledge of quality improvement are now required of medical specialists.

And where next?

Irrespective of the outcome of any plebiscite, or of the rules of ‘subsidiarity’ which leave health services entirely (well, mostly) the business of Member States, the greatest formative pressures on the UK will come from the European Union. Freedom of trade, mobility of staff and patients, reciprocation of biomedical and health service research, professional training and regulation, and protection of public safety will increasingly define common standards for the provision, assessment and improvement of healthcare. Britain's non-governmental organizations have already contributed to this movement—for instance, with clinical pathology accreditation18 and guidelines for diagnostic radiology19—but public bodies also must be prepared to export, and to import. This will demand that national policies become more explicit and joined-up within and between countries, that national support agencies (such as for clinical guidelines in France, Scotland and England) share rather than duplicate work and resources, and that performance data are standardized and available across the borders of Europe.

Even though the UK shares fewer patients and services with the rest of the world than with Europe, it should still watch and learn from the experience of others, such as the ‘new rules to redesign and improve care’ proposed by the IoM,11 standards for health service assessment20 and the public inquiries into health service scandals elsewhere. Someone needs to be actively scanning the horizon, and government and the NHS must listen and be able to respond. Whose business is that?

References

1. Ministry of Health. A National Health Service. London: Stationery Office, 1944
2. Godber G (chairman). Organisation of Medical Work in Hospitals. London: HM Stationery Office, 1974
3. Alment EAJ (chairman). Competence to Practise. London: Committee of Enquiry, 1977
4. Royal Commission on the National Health Service. Report. London: HM Stationery Office, 1979
5. Shaw CD. Monitoring and standards in the NHS. BMJ 1982;284: 217,294
6. Griffiths R (chairman). NHS Management Enquiry. London: DHSS, 1983
7. Department of Health. Working for Patients: Medical Audit. Review working paper 6. London: HM Stationery Office, 1989
8. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients. N Engl J Med 1991;324: 370-6
9. Vincent C, Neale G, Woloshynowych M. Adverse events in British hospitals: preliminary retrospective record review. BMJ 2001;322: 517-19
10. Kohn LT, et al. To Err is Human: Building a Safer Health System. Washington: National Academy Press, 1999 [www.nap.edu/openbook/030906837/html]
11. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington: National Academy Press, 2001 [www.nap.edu/openbook/0309072808/html]
12. Learning from Bristol: The Report of the Public Inquiry into Children's Heart Surgery at the Bristol Royal Infirmary 1984-1995. Command Paper: CM 5207 2001 [www.bristol-inquiry.org.uk]
13. National Expert Advisory Group on Safety and Quality in Australian Health Care. Report. 1998 [www.health.gov.au/about/cmo/report.doc]
14. President's Advisory Commission on Consumer Protection and Quality in the Health Care Industry. Quality First: Better Health Care for All Americans. 1998 [www.hcqualitycommission.gov/final/]
15. NHS Quality Improvement Scotland [www.nhshealthquality.org/]
16. Healthcare Commission. Concordat between bodies inspecting, regulating and auditing health care, 2004 [www.healthcarecommission.org.uk/assetRoot/04/00/43/01/04004301.pdf]
17. Wilson B, Thornton JG, Hewison J, et al. The Leeds University Maternity Audit Project. Int J Qual Health Care 2002;14: 175-81
18. Clinical Pathology Accreditation UK [www.cpa-uk.co.uk]
19. European Commission. Referral Guidelines for Imaging, 2000 [http://europa.eu.int/comm/environment/radprot/118/rp-118-en.pdf]
20. International Society for Quality in Healthcare. Standards for External Assessment of External Evaluation Bodies, 2nd edn. 2004 [www.isqua.org/isquaPages/Accreditation/ISQuaSurvStandards2.pdf]