Evaluation is an essential part of the educational process. The focus of evaluation is on local quality improvement and is analogous to clinical audit. Medical schools require evaluation as part of their quality assurance procedures, but the value of evaluation is much greater than the provision of simple audit information. It provides evidence of how well students' learning objectives are being achieved and whether teaching standards are being maintained. Importantly, it also enables the curriculum to evolve. A medical curriculum should constantly develop in response to the needs of students, institutions, and society. Evaluation can check that the curriculum is evolving in the desired way. It should be viewed positively as contributing to the academic development of an institution and its members.
Evaluation and educational research are similar activities but with important differences. Research is usually aimed at producing generalisable results that can be published in peer reviewed literature, and it requires ethical and other safeguards. Evaluation is generally carried out for local use and does not usually require ethics committee approval. Evaluation has to be carefully considered by curriculum committees, however, to ensure that it is being carried out ethically. Finally, evaluation is a continuous process, whereas research may cease once the question has been answered.
Evaluation may cover the process and/or outcome of any aspect of education, including the delivery and content of teaching. Questions about delivery may relate to organisation—for example, administrative arrangements, physical environment, and teaching methods. Information may also be sought about the aptitude of the teacher(s) involved. The content may be evaluated for its level (it should not be too easy or too difficult), its relevance to curriculum objectives, and integration with previous learning.
Outcome measures may show the impact of the curriculum on the knowledge, skills, attitudes, and behaviour of students. Kirkpatrick described four levels on which to focus evaluation; these have recently been adapted for use in health education evaluation by Barr and colleagues. Some indication of these attributes may be obtained by specific methods of inquiry—for example, by analysing data from student assessments.
The full impact of the curriculum may not be known until some time after the student has graduated
Evaluation should be designed at the start of developing a curriculum, not added as an afterthought. When an educational need has been identified, the first stage is to define the learning outcomes for the curriculum. The goals of the evaluation should be clearly articulated and linked to the outcomes.
Clarifying the goals of evaluation will help to specify the evidence needed to determine success or failure of the training. A protocol should then be prepared so that individual responsibilities are clearly outlined.
An ideal evaluation method would be reliable, valid, acceptable, and inexpensive. Unfortunately, ideal methods for evaluating teaching in medical schools are scarce.
To reduce possible bias in evaluation, collect views from more than one group of people—for example, students, teachers, other clinicians, and patients
Establishing the reliability and validity of instruments and methods of evaluation can take many years and be costly. Testing and retesting of instruments to establish their psychometric properties without any additional benefit for students or teachers is unlikely to be popular with them. There is a need for robust “off the shelf” instruments that can be used to evaluate curriculums reliably. The process of evaluation itself may produce a positive educational impact if it emphasises those elements that are considered valuable and important by medical schools.
Several issues should be considered before designing an evaluation that collects information from students.
Competence—Students can be a reliable and valid source of information. They have first-hand experience of the teaching they receive, and they observe it daily. They are also an inexpensive resource. Daily contact, however, does not mean that students are skilled in evaluation. Evaluation by students should be limited to areas in which they are competent to judge.
Ownership—Students who are not committed to an evaluation may provide poor information. They need to feel ownership for an evaluation by participating in its development. The importance of obtaining the information and the type of information needed must be explicit. Usually the results of an evaluation will affect only subsequent cohorts of students, so current students must be convinced of the value of providing data.
Sampling—Students need to feel that their time is respected. If they are asked to fill out endless forms they will resent the waste of their time. If they become bored by tedious repetition, the reliability of the data will deteriorate. One solution is to use different sampling strategies for evaluating different elements of a curriculum. If reliable information can be obtained from 100 students, why collect data from 300?
Anonymity—Anonymity is commonly advocated as a guard against bias when information is collected from students. However, those who support asking students to sign evaluation forms say that this helps to create a climate of responsible peer review. If students are identifiable from the information they provide, this must not affect their progress. Data should be collected centrally and students' names removed so that they cannot be identified by teachers whom they have criticised.
Feedback—Students need to know that their opinions are valued, so they should be told of the results of the evaluation and given details of the resulting action.
Evaluation may involve subjective and objective measures and qualitative and quantitative approaches. The resources devoted to evaluation should reflect its importance, but excessive data collection should be avoided. A good system should be easy to administer and use information that is readily available.
Interviews—Individual interviews with students are useful if the information is sensitive—for example, when a teacher has received poor ratings from students, and the reasons are not clear. A group interview can provide detailed views from students or teachers. A teaching session can end with reflection by the group.
Surveys—Questionnaires are useful for obtaining information from large numbers of students or teachers about the teaching process. Electronic methods for administering questionnaires may improve response rates. The quality of the data, however, is only as good as the questions asked, and the data may not provide the reasons for a poorly rated session.
Questionnaire surveys are the most common evaluation tool
Information from student assessment—Data from assessment are useful for finding out if students have achieved the learning outcomes of a curriculum. A downward trend in examination results over several cohorts of students may indicate a deficiency in the curriculum. Caution is needed when interpreting this source of information, as students' examination performance depends as much on their application, ability, and motivation as on the teaching.
The main purpose of evaluation is to inform curriculum development. No curriculum is perfect in design and delivery. If the results of an evaluation show that no further development is needed, doubt is cast on the methods of evaluation or the interpretation of the results.
This does not mean that curriculums should be in a constant state of change, but that the results of evaluation are acted on to correct deficiencies, that methods continue to improve, and that content is updated. Then the process starts all over again.
Jill Morrison is professor of general practice and deputy associate dean for education at Glasgow University.
The ABC of learning and teaching in medicine is edited by Peter Cantillon, senior lecturer in medical informatics and medical education, National University of Ireland, Galway, Republic of Ireland; Linda Hutchinson, director of education and workforce development and consultant paediatrician, University Hospital Lewisham; and Diana F Wood, deputy dean for education and consultant endocrinologist, Barts and the London, Queen Mary's School of Medicine and Dentistry, Queen Mary, University of London. The series will be published as a book in late spring.