J R Soc Med. 2007 July; 100(7): 306–308.
PMCID: PMC1905868

Performance management and the Royal Colleges of medicine and surgery

What are the purposes of the Royal Colleges and why are there so many of them? Is there a case for their rationalization and improved transparency and performance management? Are the Colleges efficient in supplying necessary examining and accreditation for specialist training? How broad and evidence-based is their policy expertise, and could alternative arrangements better provide the services needed by practitioners over their careers?

The number of Medical Royal Colleges has grown rapidly in the last 35 years from 10 to 18. The rationale for this proliferation is unclear and brings to mind the Greek historian Herodotus, who wrote about a visit to Egypt in the fifth century BC as follows:

The practice of medicine they split into separate parts, each doctor being responsible for the treatment of one disease. There are, in consequence, innumerable doctors, some specialising in diseases of the eye, others of the head, others of the teeth, others of the stomach and so on; whilst others again deal with the sort of troubles which cannot be exactly localised.1

As medical historians have observed,2 change in medicine can be slow. Whether in fifth century BC Egypt or in 2006, rationalizing the medical division of labour remains challenging in terms of its evidence base of cost effectiveness. The cost effectiveness of the proliferation of Colleges can similarly be questioned. Is it not possible that their merger would produce economies of scale, both by reducing costs and by improving the management of their functions?

As can be seen from Table 1, the first Royal College was established in Scotland in 1505. Each College is a registered charity and as such benefits from tax breaks. Accessing the accounts of the Colleges is easier in England, where details are provided without difficulty, than in Scotland, where only summary income and expenditure details were made available. The Irish College refused all cooperation in accessing its accounts.

Table 1
Royal Colleges in the UK

Table 2 summarizes the income and expenditure for each College (as indicated, not all these data are for the same year). The total income of the 18 Colleges, ignoring the slight timing differences, was £114 million. Some of the Colleges are relatively rich—for instance, the Royal College of Surgeons of England carried forward funds of nearly £68 million—whilst others have very limited budgets, such as the Faculty of Occupational Medicine of the Royal College of Physicians of London, which carried forward funds of only £622 000. The main source of income was subscriptions for membership and charges for examinations and courses. Extracting this function from the Colleges and relocating it to a centralized agency which could exploit economies of scale would reduce total College funding.

Table 2
Funding of the Royal Colleges

A related issue is the often implicit subsidies given to the Colleges by the NHS. Losing eminent practitioners for a couple of weeks each year to be College examiners imposes a high opportunity cost on hospitals. The examiners are paid College fees but the hospitals are not compensated for the loss of health care services available to the local population. In an increasingly commercialized NHS where consultant activity rates are scrutinized by PCT Commissioners, charging the Colleges for these services may become necessary. Failure to do so is, in effect, a subsidy paid by the taxpayer to the Colleges through the NHS. Similarly, the work of Trust consultants for College working parties needs careful scrutiny in an NHS world now more clearly focused on practitioner productivity in terms of activity and outcomes.
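
The scale of this implicit subsidy can be sketched with simple arithmetic. The figures in the short illustration below (cost per clinical session, job plan, weeks of examining) are invented purely to show the form of the calculation; they are not actual NHS costings.

# Hypothetical estimate of the opportunity cost to a Trust of releasing one
# consultant for College examining. All figures are invented for illustration.
cost_per_session = 500.0   # assumed employer cost (GBP) of one half-day clinical session
sessions_per_week = 10     # assumed full-time job plan
weeks_examining = 2        # assumed clinical time released for examinations per year

lost_sessions = sessions_per_week * weeks_examining
opportunity_cost = lost_sessions * cost_per_session  # borne by the Trust, not the College

print(f"Clinical sessions lost per examiner per year: {lost_sessions}")
print(f"Implicit annual subsidy per examiner: about £{opportunity_cost:,.0f}")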

Consideration of the efficiency of College activity is relevant to public policy making for a variety of reasons. First, their charitable status and the related tax subsidies make College activity a matter of private and public interest. Second, the remarkable influence of the Colleges on public policy making is both obvious and not always clearly evidence based. The capacity of the Colleges to draw on the evidence base and translate this into sound advice to Ministers and civil servants is constrained by their funding and their expertise.

Sir Iain Chalmers has argued that:

Because professionals sometimes do more harm than good when they intervene in the lives of other people, their policies and practices should be informed by rigorous, transparent and up-to-date evaluation.3

Whilst College officials may be eminent practitioners in their clinical fields and should be able to give sound evidence-based advice to Whitehall about practice issues, their expertise in terms of NHS policy advice may be quite limited. Thus their advice on issues such as health care reform may lack pertinence and relevance, and may reinforce the tendency of Government to adopt policies with little regard to evidence. For instance, the Colleges' capacity to offer coherent advice on the radical reforms of the Blair Government is not evident, but could perhaps have been generated by collective action rather than private, segmented despair!

When collective action is institutionalized, the capacity of the consequent ‘medical club’ to achieve its remit is not obvious. For instance, the Postgraduate Medical Education and Training Board (PMETB) is responsible for ‘establishing and raising standards and quality in postgraduate medical education and training’. Whilst College and professional input into such work is essential, there appears to be little input from non-clinical professionals who might, for instance, press for costing of any proposals that are made so that their relative cost effectiveness can be appraised. In the absence of such evidence there is a risk that raising the standards of education will increase costs to trainees with little gain to patients.

In other areas, such as clinical governance, College officials may have some expertise (e.g. in identifying poor practice) but little knowledge of how best to design systems and policy interventions to redress deficiencies. This has led to essential but increasingly bureaucratic proposals that neither utilize the existing evidence base nor put in place evaluation of new interventions to improve governance and efficiently facilitate revalidation. For instance, a vital missing component of each policy is the measurement and management of patient reported outcome measures (PROM) of success in restoring or at least stabilizing the mental and physical functioning of patients.4,5 PROM have been available for decades (e.g. www.sf36.org and www.euroqol.org), translated into dozens of languages, used in thousands of clinical trials—and largely ignored in routine clinical practice, in part because of weak College leadership.
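
To make concrete what routine PROM capture involves, the sketch below scores a hypothetical five-dimension instrument structured in the style of the EQ-5D descriptive system. The dimension names, levels and weights are invented for illustration; they are not the published EuroQol or SF-36 scoring algorithms.

# Sketch of scoring a hypothetical PROM instrument with an EQ-5D-like structure.
# The weights below are invented for illustration, NOT the published EuroQol value set.
HYPOTHETICAL_WEIGHTS = {
    "mobility":           {1: 0.00, 2: 0.07, 3: 0.30},
    "self_care":          {1: 0.00, 2: 0.10, 3: 0.21},
    "usual_activities":   {1: 0.00, 2: 0.04, 3: 0.09},
    "pain_discomfort":    {1: 0.00, 2: 0.12, 3: 0.39},
    "anxiety_depression": {1: 0.00, 2: 0.07, 3: 0.24},
}

def utility_index(responses: dict) -> float:
    """Return a single utility-style index from level responses (1 = no problems)."""
    if all(level == 1 for level in responses.values()):
        return 1.0                      # full health by convention
    score = 1.0 - 0.08                  # invented constant decrement for any reported problem
    for dimension, level in responses.items():
        score -= HYPOTHETICAL_WEIGHTS[dimension][level]
    return round(score, 3)

# A patient before and after an intervention: the change in the index is the
# measure of success that routine clinical practice rarely records.
before = {"mobility": 2, "self_care": 1, "usual_activities": 2,
          "pain_discomfort": 3, "anxiety_depression": 2}
after = {"mobility": 1, "self_care": 1, "usual_activities": 1,
         "pain_discomfort": 2, "anxiety_depression": 1}

print(utility_index(before), utility_index(after))

The point of the sketch is not the particular weights but that a before-and-after index of this kind is cheap to compute and could be recorded routinely.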

There is always the risk that advice given in areas outside the expertise of Colleges may be interpreted as being based on professional self-interest rather than what is best for the patient, the taxpayer and the profession. To mitigate that risk, Presidents and their colleagues should perhaps confine their activity to areas where their expertise is soundly evidence-based and collaborate selectively with outside experts to generate advice elsewhere.

In the next decade, the biggest challenge to practitioners and the Colleges of which they are members is the development of their managerial skills. The everyday practice of clinicians involves management (i.e. control of the deployment of resources, both their own time and the inputs of the care team). Management requires the collection and analysis of real-time data about activity and patient outcomes. Despite the long availability of administrative data sets such as Hospital Episode Statistics (HES), their use by clinicians remains as minimal as that by non-clinical managers.
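
The kind of routine analysis at issue can be illustrated with a few lines of code run over HES-style episode records. The field names and data below are assumptions for the sketch, not the actual HES schema.

# Minimal sketch: summarising activity and a crude outcome from HES-style
# episode records. Field names and data are invented, not the real HES schema.
from collections import defaultdict

episodes = [
    {"consultant": "C001", "procedure": "hip_replacement", "emergency_readmission_28d": False},
    {"consultant": "C001", "procedure": "hip_replacement", "emergency_readmission_28d": True},
    {"consultant": "C002", "procedure": "hip_replacement", "emergency_readmission_28d": False},
    {"consultant": "C002", "procedure": "knee_replacement", "emergency_readmission_28d": False},
]

activity = defaultdict(lambda: {"episodes": 0, "readmissions": 0})
for ep in episodes:
    row = activity[ep["consultant"]]
    row["episodes"] += 1
    row["readmissions"] += ep["emergency_readmission_28d"]

for consultant, row in sorted(activity.items()):
    rate = row["readmissions"] / row["episodes"]
    print(f"{consultant}: {row['episodes']} episodes, "
          f"28-day emergency readmission rate {rate:.1%}")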

Some of the more efficient Colleges are slowly rousing their members to a realization of the existence and potential usefulness of these data (e.g. the Royal College of Physicians of London and its iLab [http://hiu.rcplondon.ac.uk/lab.asp]6). The interest in and capacity of other Colleges to emulate such investments, however, is uneven, calling into doubt their fitness for purpose in terms of providing relevant and timely education and training for members in a rapidly changing health care system.

The management of activity by clinicians, required for clinical governance, revalidation and job and institutional survival in a competitive NHS, is incomplete without the management of outcomes. There has been a gradual introduction of some measurement and management of mortality (e.g. the Healthcare Commission's publication of 30-day mortality rates by surgeon for cardiothoracic surgery [www.healthcarecommission.org.uk]). However, the profession and College leaders have to be persuaded that the use of outcome data involves more than measuring relative failure in terms of mortality, complication rates and readmission levels. The challenge for the Colleges is whether they are capable of leading their members into the measurement and management of success in terms of patient reported outcome measures. Whilst some Colleges acknowledge this challenge (e.g. the Royal College of Surgeons7), none is providing leadership in this essential element of consumer protection and the demonstration of professional competence.
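
As an illustration of why crude mortality comparisons can mislead, the sketch below computes 30-day mortality per surgeon with an approximate binomial confidence interval. The caseload figures are invented, and no risk adjustment for case mix is attempted, which any real comparison would require.

# Sketch: crude 30-day mortality rate per surgeon with an approximate 95%
# binomial confidence interval. Data are invented; real comparisons require
# risk adjustment for case mix, which this deliberately omits.
from math import sqrt

cases = {  # surgeon -> (operations, deaths within 30 days), invented figures
    "Surgeon A": (250, 5),
    "Surgeon B": (80, 3),
}

for surgeon, (n, deaths) in cases.items():
    p = deaths / n
    se = sqrt(p * (1 - p) / n)                # normal approximation to the binomial
    low, high = max(0.0, p - 1.96 * se), p + 1.96 * se
    print(f"{surgeon}: {p:.1%} (95% CI roughly {low:.1%} to {high:.1%}, n={n})")

Even this simple interval shows how small caseloads widen uncertainty, before any account is taken of differences in case mix.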

CONCLUSIONS

The Royal Colleges of medicine and surgery have grown with little regard to purpose and value for money. Their recent proliferation and increasing cost in terms of tax subsidies make it timely to evaluate their purpose and performance. Would a single organization with national standards of training and examination better produce and maintain practitioners for their long and demanding careers? Would clearer definition of their advisory roles, particularly with regard to Government, and greater transparency in their activities ensure that advice, when given, was derived from the clinical evidence base rather than from evidence-free views, particularly in relation to the formulation of national health care policy? Are the Royal Colleges fit for purpose in executing the role of revalidating their members? Everyone concerned—professionals, patients and taxpayers—has an interest in these roles being executed efficiently, but the available evidence that these expensive organizations give value for money is incomplete and questionable.

Notes

Competing interests None declared.

References

1. Herodotus. The Histories. London: Penguin Books, 1954; Book 2: 132
2. Wootton D. Bad Medicine: Doctors Doing Harm since Hippocrates. Oxford: Oxford University Press, 2006
3. Chalmers I. Trying to do more good than harm in policy and practice: the role of rigorous, transparent, up-to-date evaluations. Ann Am Acad Pol Soc Sci 2003;589: 22-40
4. Kind P, Williams A. Measuring success in Health care—the time has come to do it properly. Health Policy Matters 2004; 9. Available at www.york.ac.uk/healthsciences/pubs/hpmindex.htm
5. Appleby J, Devlin N. Measuring Success in the NHS. London: Dr Foster, Kings Fund and City University, 2004
6. Royal College of Physicians. Engaging Clinicians in Improving Data Quality in the NHS. Report of research conducted in the RCP iLab. London: RCP, 2006
7. Royal College of Surgeons. Delivering High Quality Surgical Services for the Future. Consultation Document. London: RCS, 2006
