Problem based learning, an educational intervention characterised by small group and self directed learning, is one of medical education's more recent success stories, at least in terms of its ubiquity. From its beginnings in McMaster University in the 1960s it has been adopted in undergraduate medical courses worldwide. It is also being used in postgraduate and continuing medical education.
Problem based learning has been the subject of at least four much quoted reviews, three published in the early 1990s and one more recently.1–4 Such attention is not surprising. What might be surprising is that the effects of such a popular educational approach are seemingly small, except in the area of student satisfaction. According to the reviews, the extent of knowledge gained, as measured by performance in licensing examinations, is at best unclear. Participants in problem based learning can, however, expect small gains in clinical reasoning.
The paper by Smits and colleagues in this issue provides a review of problem based learning in postgraduate and continuing education (p 153).5 It is, however, based on only six studies that met the authors' inclusion criteria for controlled study designs. The conclusions of the paper are similar to those of the major reviews. There is limited evidence that use of problem based learning in postgraduate and continuing medical education increases knowledge, doctor performance, and patient outcomes. There is moderate evidence for increased satisfaction of participants.
The debate on systematic reviews of problem based learning was taken to a new level with the publication of two articles in Medical Education in September 2000.6,7 They focused on the potential effects of research design on the findings of reviews. Albanese concentrated on effect size, while Norman and Schmidt argued for a theory based approach to the study of educational interventions. Taking the debate to this level is timely given the recent interest in the nature of evidence in medical education research, particularly through the work of the best evidence medical education movement. Smits and his colleagues claim that controlled evaluation studies provide the best evidence of educational effectiveness. Despite claims in the paper to the contrary, this is not necessarily supported by the advocates of best evidence medical education, who have moved away from grading studies according to the gold standard of randomised control to a scheme based on criteria such as quality, utility, and strength of evidence.8 Norman and Schmidt provide a critique of the randomised controlled trial approach to researching curriculum interventions, suggesting that such studies are doomed to fail. This is familiar to educational researchers outside medicine, who some time ago abandoned the supremacy of randomised designs to embrace a range of quasi-experimental and qualitative designs.
Three of the limitations of randomised controlled studies for studying educational interventions are highlighted by the paper. The first is randomisation. While randomisation is theoretically possible in educational research, it is often neither feasible nor justifiable. Is it justifiable to enrol medical professionals in postgraduate and continuing education programmes in which they are given no choice over the learning methods they will engage in? Furthermore, as Norman and Schmidt point out, randomisation relies on the maintenance of blind allocation.7 Maintaining blinding is rarely possible in research on educational interventions.
The second issue is control of variables. At the very least the intervention itself may be variable: there are many variants of problem based learning. The process of education depends on the context. A myriad of factors, including facilities and resources, teacher and student motivation, individual expectations, and institutional ethos, affect the process. Again, it is theoretically possible to control for such variables, but in doing so the key factors that determine the success or failure of the intervention may be removed.
The third issue concerns the choice of appropriate outcome measures. There is much interest in defining clear outcomes for medical education and hence for medical education research.9,10 But the outcomes must be appropriate for the intervention. For example, is improved patient health an appropriate measure of educational effectiveness in continuing medical education? After all, it is influenced by a whole range of factors within and outside a doctor's control.
Education is a discipline that is rich in theory. One of the functions of educational theory is to make predictions about outcomes and their relationships that can be tested through empirical work. Yet much research about medical education proceeds devoid of theory. More, not less, theory based research is needed7 so that researchers will focus on significant outcomes that are amenable to intervention.
There is a clear imperative to research the effects of educational interventions at all levels of medical education and training. The research, however, must be designed so that the findings can be truly ascribed to the intervention rather than being an artefact of the methods used.
Learning in practice p 153