Professor Maynard and his assistant Yezenash Ayalew highlight many important points in their Essay on ‘Performance management and the Royal Colleges of Medicine and Surgery’ (JRSM 2007;100:306-308). They question the roles of the Royal Medical Colleges in relation to the NHS and the cost-effectiveness of these institutions. In particular they question the indirect ‘subsidy paid by the taxpayer to the Colleges through the NHS’ and appear to challenge the Colleges for having too much influence on government decisions. Oh, that it were so! These points require more detailed comment. Most would strongly support their view on the need for cost-effectiveness in all organizations, and would agree that the Government should certainly require appropriate evidence on all advice it receives, be it on matters of the clinical care of patients, health policy or finance.
The authors set out the income and expenditure of the Royal Colleges and challenge whether, in their present form, they warrant charitable status and are value for money. Surprisingly, in their broad-brush approach they largely fail to provide detailed evidence for their criticisms and conclusions, when there is much evidence in the public domain to the contrary.
They rightly challenge the cost implications of the recent proliferation in the number of Medical Colleges. A proper balance must be struck between an optimal number to undertake effectively the range of specialized work they do in the various distinct major branches of medicine, and the disadvantages of such differentiation, with the risk of fragmentation of the profession and duplication of core functions. There is certainly room for some rationalization, particularly where there is an overlap of expertise. Indeed, there is already increasing formal rationalization of College activities where this is better shared. The Federation of the Royal Colleges of Physicians of the UK and the Surgical Forum of all the UK Colleges of Surgeons are two major examples. There are also a number of joint specialist standing committees—those between the Royal College of Physicians of London (RCP) and the Royal College of General Practitioners; the RCP and the Royal College of Pathologists on haematology and microbiology; and the RCP and the Royal College of Radiologists on oncology, to name just a few. Other Colleges have their own examples. The authors suggest a merger of all the Colleges into a single huge organization, to save costs and strengthen the voice of professional input to government, but they provide no evidence that this centralization would deliver either benefit. Indeed, governments, even when presented with evidence, have often been reluctant to listen. There is no doubt at all about the urgent need for an improved mechanism for the Colleges to speak out clearly with a single voice on many issues. This requires a single representative organization with trusted, strong leadership. We currently have a special opportunity to strengthen the Academy of the Royal Medical Colleges in a way to do just this. It is imperative for the Colleges to pull together, and the Academy must be given greater authority if it is to succeed.
Taking up the broad thrust of the authors' arguments, a useful new project might be to review the various independent Departments of Health Policy and Health Economics around the country in terms of objective outcome measurements (e.g. acceptance by government of their policy advice, the outcome of this and its cost-effectiveness). Might not the authors be led to conclude that merging them all into a single Department was the right step, on the grounds that it would provide a more coherent and effective advisory body to Government and add value through economies of scale? I rather doubt it.
In their complaint that the Colleges give advice to Government on NHS policy when their expertise may be quite limited, the authors appear to confuse a number of quite distinct issues needed to improve the organization of health care systems. Among them are the advisory role of clinicians within the NHS, including their role in clinical governance and management; the role of advisors to the British Government on health policy issues in the NHS; and the much wider role of the Colleges in maintaining and improving the quality of professional care of patients in both health and disease in this country, including the NHS and private sectors, and worldwide—all irrespective of the different health care systems. Of course the Colleges have a responsibility to speak out when government policies threaten standards of patient care. They must certainly demonstrate that this is not a matter of self-interest. The major problem is to persuade government and others to listen. If necessary the Colleges must continue to challenge them to provide the evidence to validate new policies before they are widely implemented. More use of pilot studies would also be sensible.
The obvious way to improve the organization of health systems in general and the NHS in particular is to recognize the need for a tripartite trusting and mutually respecting partnership between Government, management and medical professionals, each contributing their complementary skills and evidence-based advice. This partnership should be augmented by additional advice and information from a variety of other sources, including health policy and economic experts. No-one, however, should believe that they have a monopoly in any one skill.
Without providing evidence for their statement, Maynard and Ayalew complain that the Colleges give advice without evidence. The track record of the Colleges' evidence-based reports and published evidence-based working party deliberations is summarized in their annual reports, which are in the public domain. These include: evidence-based guidelines of best practice; the prevention and dangers of a wide variety of health hazards; improving professional standards through training programmes, courses and examinations; guidance on individual appraisal and revalidation of doctors; and workforce numbers to provide quality care of patients in the NHS—to mention just a few. All the Colleges know that Ministers, quite rightly, are unlikely to take notice of their advice unless it is backed up by the best evidence available at the time.
Assessing their efforts to improve individual patient experience and outcomes is the raison d'être of every doctor. Health Policy Departments have also contributed much research in this area. These assessments require many different types of methodology. Qualitative data from patient-reported outcome measures (PROMs) and quality of life measures are certainly important, but there are many others, including quantitative, objective and often longer-term evaluation, audit of procedures as well as of individual doctors' performance, and clinical trials. All are needed. For more than ten years the Clinical Effectiveness and Evaluation Units of the Royal Colleges of Physicians and Surgeons and the equivalent units of other Colleges have demonstrated the importance the Colleges attach to outcomes. They work closely with the Department of Health and they support extensively the work of NICE.
Maynard and Ayalew question the justification for the indirect subsidy by the taxpayer of the work of the Colleges, but at the same time recognize that the majority of the Colleges' income is derived independently of government or public sources. When the evidence is considered—the Colleges' contribution in all manner of ways to improving standards of patient care in the NHS and their long-established role in postgraduate training and assessment of almost every doctor in the NHS—it is evident that the taxpayer gets a pretty good deal! In particular, they get a meticulously validated system of postgraduate examinations of internationally recognized quality to maintain professional standards.
The government policy of attempting to take over the role of training and assessment of doctors has been, on the basis of evidence, an acknowledged disaster. This is exemplified by the recent history of the Postgraduate Medical Education and Training Board (PMETB)—only six of 29 members of the Board are nominated from the Colleges—many aspects of Modernising Medical Careers (MMC), and the Medical Training Application Service (MTAS). Additionally, the attempt by Government to set up a single authority for the personal development of all NHS personnel, with the creation of the NHSU, failed within four years. To attempt to build on failed systems is neither wise nor cost-effective. Lessons need to be learnt from these flawed policy decisions to prevent a repetition of these damaging mistakes, and the huge costs to the public purse need to be calculated and published.
More generally, the evidence on the soundness of advice given from other expert bodies on the now numerous NHS ‘reforms’—on which the views of the Colleges have so often been largely ignored—is often elusive. There have been repeated U-turns by Government in the history of the NHS, as successive policies implemented at great cost are, after a short time, rejected or reversed. These have not, as far as I am aware, been subjected to an analysis of money wasted or lessons learned before the next one is introduced.
But I must not deal blow for blow. Rather than apportioning blame or seeking commercial solutions, a more constructive approach is surely for all professional bodies with experience and good evidence, be it the Royal Colleges, Health Policy Units or others, to be recognized for what they are good at by a receptive authority responsible for the prevailing health care system. It needs all the help it can get.
Competing interests: MTW is the Past President of the Royal College of Physicians and past Chairman of the Conference of Royal Medical Colleges and their Faculties (now renamed the Academy of the Royal Medical Colleges).