Journal of Medical Ethics
J Med Ethics. Nov 2006; 32(11): 662–664.
PMCID: PMC2563290
Consistency in decision making by research ethics committees: a controlled comparison
E Angell, A J Sutton, K Windridge, and M Dixon‐Woods
E Angell, A J Sutton, M Dixon‐Woods, Department of Health Sciences, University of Leicester, Leicester, UK
K Windridge, Trent Research and Development Support Unit, Department of Health Sciences
Correspondence to: M Dixon‐Woods
Department of Health Sciences, University of Leicester, 22‐28 Princess Road West, Leicester LE1 6TP, UK; md11@le.ac.uk
Received August 30, 2005; Revised February 7, 2006; Accepted February 9, 2006.
There has been longstanding interest in the consistency of decisions made by research ethics committees (RECs) in the UK, but most of the evidence has come from single studies submitted to multiple committees. A systematic comparison was carried out of the decisions made on 18 purposively selected applications, each of which was reviewed independently by three different RECs in a single strategic health authority. Decisions on 11 applications were consistent, but disparities were found among RECs on decisions on seven applications. An analysis of the agreement between decisions of RECs yielded an overall measure of agreement of κ = 0.286 (95% confidence interval −0.06 to 0.73), indicating a level of agreement that, although probably better than chance, may be described as “slight”. The small sample size limits the robustness of these findings. Further research on reasons for inconsistencies in decision making between RECs, and on the importance of such inconsistencies for a range of arguments, is needed.
Research ethics committees (RECs) in the UK aim to make decisions on research proposals in line with published guidance.1 The importance of systematic and consistent decision making by RECs has been repeatedly emphasised, most recently in the national standard operating procedures for RECs,2 prompted partly by European Union legislation (Directive 2001/20/EC of the European Parliament and of the Council of 4 April 2001 on the approximation of the laws, regulations and administrative provisions of the member states relating to the implementation of good clinical practice in the conduct of clinical trials on medicinal products for human use3). Much of the evidence on consistency in decision making by RECs in the UK is based on studies of a single proposal being submitted to multiple committees and is authored by the investigators themselves, usually in response to frustrating experiences.4,5,6 Little evidence exists on the extent to which RECs agree about proposals. We compared decisions made by different RECs on the same proposals.
The three RECs in the Leicestershire, Northamptonshire and Rutland Strategic Health Authority were included in the project, which was conducted between February 2004 and February 2005. Members of the committees gave their oral consent to participation in autumn 2003. Purposive sampling was used to select applications for inclusion in the project. Sampling, which was conducted by EA, an REC administrator, aimed to represent different types of studies and different types of applicants. The different types of studies were as follows:
  • Intervention: a study that intervenes in the normal clinical care of a patient or health service user.
  • Non‐intervention: a study that does not include an intervention but seeks to measure outcomes or processes.
  • Qualitative: a study that uses distinctive qualitative research methods.
The types of applicants were:
  • Novices: applicants who had not previously applied to the REC.
  • Experienced: applicants who had submitted at least one previous application to the REC.
Over a 12‐month period, each application was reviewed by all three RECs and all of them prepared a decision letter. Each application was assigned a “lead” REC before any committee considered the application, and applicants received the decision letter of the lead REC only. The two non‐lead RECs were sent this application as a “dummy”, despatched as one of the many applications for each meeting, and not labelled as “the dummy” (although it may sometimes have been possible for members to guess). Over the course of the project, each committee reviewed 12 dummy applications and 6 applications as the lead committee.
Three decision letters for each application were generated for analysis. Under governance arrangements for RECs1 and guidance on standard operating procedures,2 there were four formal decisions available to RECs:
  • Favourable: the application is ethically acceptable.
  • Provisional: amendments to the application or further information are required before a final decision can be made.
  • Unfavourable: the application is ethically unacceptable.
  • Outside remit: the application is deemed to fall outside the remit of governance arrangements for RECs.
Patterns of agreement in decisions were assessed descriptively. Agreement was further assessed using the κ statistic, which expresses agreement beyond that expected by chance as a proportion of the maximum possible agreement beyond chance. The single “favourable” decision was grouped with “provisional” for this analysis.
Of the 18 applications, 11 received consistent decisions from all three committees (table 1). A provisional decision was the most frequent decision for all three RECs. REC3 had a higher incidence of unfavourable decisions (five, compared with two each for REC1 and REC2), but was also the only REC to give a favourable decision. REC2 was the only one to give an “outside remit” decision.
Table 1 Study types, applicants and decisions
Of the seven applications that received inconsistent decisions, six received consistent decisions from two of the three committees; one received a different decision from each committee. Of the six on which two committees agreed, four received consistent decisions from REC1 and REC2, one from REC2 and REC3, and one from REC1 and REC3. All three committees agreed on seven novice and four experienced applications, and disagreed on three novice and four experienced applications. They agreed on five non‐intervention, four qualitative and two intervention studies, but disagreed on one non‐intervention, three qualitative and three intervention studies.
An analysis of agreement of decisions between RECs yielded an overall measure of agreement of κ = 0.286 (95% confidence interval (CI) −0.06 to 0.73), indicating a level of agreement that, although probably better than chance, may be described as “slight”.
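With three committees rating each application, a multirater (Fleiss‐type) κ is the natural calculation, although the paper does not state which variant was used. A minimal sketch in Python illustrates the computation; the count matrix below is purely illustrative and is not the study's actual data:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a subjects-by-categories table of rating counts.

    counts[i][j] = number of raters placing subject i in category j;
    each row must sum to the same number of raters n (here, three RECs).
    """
    N = len(counts)                 # number of subjects (applications)
    n = sum(counts[0])              # raters per subject
    k = len(counts[0])              # number of decision categories
    # proportion of all assignments falling in each category
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    # per-subject agreement, averaged to give observed agreement P_bar
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts
    ) / N
    P_e = sum(pj * pj for pj in p)  # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 4 applications, 3 RECs, categories
# [provisional, unfavourable, outside remit]
ratings = [
    [3, 0, 0],   # all three committees: provisional
    [2, 1, 0],   # two provisional, one unfavourable
    [0, 3, 0],   # all three: unfavourable
    [1, 1, 1],   # complete disagreement
]
print(fleiss_kappa(ratings))
```

Perfect agreement on every subject yields κ = 1, and κ near 0 indicates agreement no better than chance, which is why the study's estimate of 0.286 is described as “slight”.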
Our study provides evidence on the consistency of outcomes of decision making by RECs. It was limited by its location in a single strategic health authority. Variations in the type of applications submitted over time constrained the extent to which it was possible to select equal proportions of different types of studies and applicants, although a reasonable balance was achieved. More importantly, the difficulties of conducting this type of research, which necessarily results in additional burdens on RECs, limited the size of the sample that could be obtained. The wide CI shows a considerable uncertainty about the point estimate, and polarised response categories such as those found in our dataset make the interpretation of κ statistics difficult. This inherently low‐powered analysis nonetheless deals with an important question in a subject in which it is difficult to obtain data.
Although 11 of the 18 applications considered by three RECs in our project received consistent decisions, seven (over a third of the sample) did not. This evidence raises the larger question of what consistency in decision making signifies for various arguments.7 Disparities in process and outcome are a source of frustration and delay for researchers,8 and, in a context where research governance is constructed as a regulatory and managerial enterprise,9 variations in outcome may be read as evidence of problems in performance. Further research on the reasons for variations in decision making, including further analysis of the content of the decision letters in our own study, is needed. Such research may help to clarify the kinds of issues RECs identify as “ethical”, how those issues are resolved, and the ideas and principles that appear to inform opinions about ethical issues. It could also explore whether disparities are due, as previous authors have suggested, to inconsistencies in moral judgement (which may be unavoidable, acceptable or even desirable), or to irrationality, carelessness or the operation of conflicting interests (which should be reduced or removed).10
Acknowledgements
The idea for this study was conceived after discussions between EA and two committee members (Dr Douglas Tincello and Dr Carl Edwards). Dr Jennifer Kurinczuk provided helpful comments on an earlier draft. Professor Paul Burton commented in detail on statistical issues and Dr Sheila Bonas offered useful advice on presentation of data. COREC gave permission for coordination of the project to be undertaken during working hours. The project was unfunded, but LNR SHA allowed 2 days for writing up.
Abbreviations
REC - research ethics committee
Footnotes
Competing interests: EA was employed as the Administrator to the Leicestershire, Northamptonshire and Rutland Research Ethics Committees during the period of the study. No other author has any conflict of interest.
Ethics approval: Support was sought and obtained from COREC in summer 2003. COREC confirmed that the project constituted service evaluation and development and therefore need not be reviewed by a Research Ethics Committee.
1. Central Office for Research Ethics Committees. Governance arrangements for research ethics committees (GAfREC). London: Department of Health, 2001. http://www.dh.gov.uk/assetRoot/04/05/86/09/04058609.pdf (accessed Jul 2005)
2. Central Office for Research Ethics Committees. Standard operating procedures for research ethics committees. http://www.corec.org.uk/recs/guidance/docs/SOPs.doc (accessed Jul 2005)
3. The European Parliament and the Council of the European Union. Directive 2001/20/EC. Official J Eur Communities 2001:L121/34. http://europa.eu.int/eur‐lex/pri/en/oj/dat/2001/l_121/l_12120010501en00340044.pdf (accessed Jul 2005)
4. Hotopf M, Wesseley S, Noah A. Are ethics committees reliable? J R Soc Med 1995;88:31–33.
5. Lux A L, Edwards S W, Osborne J P. Responses of local research ethics committees to a study with approval from a multicentre research ethics committee. BMJ 2000;320:1182–1183.
6. Maskell N A, Jones E L, Davies R J O. Variation in obtaining local ethical approval for participation in a multi‐centre study. Q J Med 2003;96:305–307.
7. Ashcroft R E. The ethics and governance of medical research: what does regulation have to do with morality? New Rev Bioethics 2003;1:41–58.
8. Nicholl J. The ethics of research ethics committees. BMJ 2000;320:1217.
9. Department of Health. Research governance for health and social care. 2nd edn. London: Department of Health, 2005.
10. Edwards S J L, Ashcroft R, Kirchin S. Research ethics committees: differences and moral judgement. Bioethics 2004;18:408–427.