Results 1-12 (12)
 

1.  Cumulative assessment: strategic choices to influence students’ study effort 
BMC Medical Education  2013;13:172.
Background
It has been asserted that assessment can and should be used to drive students’ learning. In the current study, we present a cumulative assessment program in which test planning, repeated testing and compensation are combined in order to influence study effort. The program is aimed at helping initially low-scoring students improve their performance during a module, without impairing initially high-scoring students’ performance. We used performance as a proxy for study effort and investigated whether the program worked as intended.
Methods
We analysed students’ test scores in two second-year (n = 494 and n = 436) and two third-year modules (n = 383 and n = 345) in which cumulative assessment was applied. We used t-tests to compare the change in test scores of initially low-scoring students with that of initially high-scoring students, first between the first and second subtest and again between the combined first and second subtests and the third subtest. When interpreting the outcomes, we took regression to the mean and test difficulty into account.
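As a rough illustration of the comparison described in this Methods paragraph, the sketch below contrasts the score change of initially low-scoring and initially high-scoring students with an independent-samples t-test. It is only a sketch: the file name, column names and the median split used to define the two groups are assumptions, not the authors' actual procedure.

```python
# Illustrative sketch (hypothetical data layout): compare the change in subtest
# scores of initially low-scoring vs. initially high-scoring students.
import pandas as pd
from scipy import stats

scores = pd.read_csv("module_scores.csv")            # one row per student (assumed)
scores["change_1_to_2"] = scores["subtest2"] - scores["subtest1"]

# Median split on the first subtest; the study's own cut-off may differ.
median1 = scores["subtest1"].median()
low = scores.loc[scores["subtest1"] < median1, "change_1_to_2"]
high = scores.loc[scores["subtest1"] >= median1, "change_1_to_2"]

t, p = stats.ttest_ind(low, high, equal_var=False)   # Welch's t-test
print(f"low scorers change {low.mean():.2f}, high scorers change {high.mean():.2f}, "
      f"t = {t:.2f}, p = {p:.4f}")
```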
Results
Between the first and the second subtest in all four modules, the scores of initially low-scoring students increased more than the scores of initially high-scoring students decreased. Between subtests two and three, we found a similar effect in one module, no significant effect in two modules and the opposite effect in another module.
Conclusion
The results between the first two subtests suggest that cumulative assessment may positively influence students’ study effort. The inconsistent outcomes between subtests two and three may be caused by differences in perceived imminence, impact and workload between the third subtest and the first two. Cumulative assessment may serve as an example of how several evidence-based assessment principles can be integrated into a program for the benefit of student learning.
doi:10.1186/1472-6920-13-172
PMCID: PMC3880587  PMID: 24370117
Summative assessment; Learning effects of assessment; Medical education; Higher education; Knowledge development; Knowledge retention; Test enhanced learning; Cumulative assessment; Repeated testing
2.  Medical students’ and teachers’ perceptions of sexual misconduct in the student–teacher relationship 
Perspectives on Medical Education  2013;2(5-6):276-289.
Teachers are important role models for the development of professional behaviour of young trainee doctors. Unfortunately, they sometimes show unprofessional behaviour. To address misconduct in teaching, it is important to determine where the thresholds lie when it comes to inappropriate behaviours in student–teacher encounters. We explored to what extent students and teachers perceive certain behaviours as misconduct or as sexual harassment. Together with a reference group, we designed five written vignettes describing inappropriate behaviours in the student–teacher relationship. Clinical students (n = 1,195) and faculty of eight different hospitals (n = 1,497) were invited to rate to what extent they perceived each vignette as misconduct or sexual harassment. Data were analyzed using t tests and Pearson’s correlations. In total, 643 students (54 %) and 551 teachers (37 %) responded. All vignettes were consistently considered more as misconduct than as actual sexual harassment. At an individual level, respondents differed considerably as to whether they perceived an incident as misconduct or sexual harassment. Comparison between groups showed that teachers’ and students’ perceptions of three vignettes differed significantly, although the direction of the difference was not consistent across vignettes. Male students were more lenient towards certain behaviours than female students. To conclude, perceptions of misconduct and sexual harassment are not univocal. We recommend making students and teachers aware that the boundaries of others may not be the same as their own.
doi:10.1007/s40037-013-0091-y
PMCID: PMC3824750  PMID: 24170538
Student–teacher relationship; Sexual harassment; Misconduct; Boundary issues; Unprofessional behaviour; Gender differences
3.  Which characteristics of written feedback are perceived as stimulating students’ reflective competence: an exploratory study 
BMC Medical Education  2013;13:94.
Background
Teacher feedback on student reflective writing is recommended to improve learners’ reflective competence. To be able to improve teacher feedback on reflective writing, it is essential to gain insight into which characteristics of written feedback stimulate students’ reflection processes. Therefore, we investigated (1) which characteristics can be distinguished in written feedback comments on reflective writing and (2) which of these characteristics are perceived to stimulate students’ reflection processes.
Methods
We investigated written feedback comments from forty-three teachers on their students’ reflective essays. In Study 1, twenty-three medical educators grouped the comments into distinct categories. We used Multiple Correspondence Analysis to determine dimensions in the set of comments. In Study 2, another group of twenty-one medical educators individually judged whether the comments stimulated reflection by rating them on a five-point scale. We used t-tests to investigate whether comments classified as stimulating and not stimulating reflection differed in their scores on the dimensions.
Results
Our results showed that the characteristics of written feedback comments can be described along three dimensions: format of the feedback (phrased as statement versus question), focus of the feedback (related to the levels of students’ reflections) and tone of the feedback (positive versus negative). Furthermore, comments phrased as a question and comments with a positive tone were judged as more stimulating of reflection than comments at the opposite end of those dimensions (t(14.5) = 6.48, p < .001, and t(15) = −1.80, p < .10, respectively). The effect sizes were large for format of the feedback comment (r = .86) and medium for tone of the feedback comment (r = .42).
Conclusions
This study suggests that written feedback comments on students’ reflective essays should be formulated as a question, positive in tone and tailored to the individual student’s reflective level in order to stimulate students to reflect on a slightly higher level. Further research is needed to examine whether incorporating these characteristics into teacher training helps to improve the quality of written feedback comments on reflective writing.
doi:10.1186/1472-6920-13-94
PMCID: PMC3750500  PMID: 23829790
Undergraduate medical education; Written feedback; Reflective writing; Professional development
4.  The effect of implementing undergraduate competency-based medical education on students’ knowledge acquisition, clinical performance and perceived preparedness for practice: a comparative study 
BMC Medical Education  2013;13:76.
Background
Little is known about the gains and losses associated with the implementation of undergraduate competency-based medical education. Therefore, we compared knowledge acquisition, clinical performance and perceived preparedness for practice of students from a competency-based active learning (CBAL) curriculum and a prior active learning (AL) curriculum.
Methods
We included two cohorts of both the AL curriculum (n = 453) and the CBAL curriculum (n = 372). Knowledge acquisition was determined by benchmarking each cohort on 24 interuniversity progress tests against parallel cohorts of two other medical schools. Differences in knowledge acquisition were determined by comparing the number of times the CBAL and AL cohorts scored significantly higher or lower on progress tests. Clinical performance was operationalized as students’ mean clerkship grade. Perceived preparedness for practice was assessed using a survey.
Results
The CBAL cohorts demonstrated relatively lower knowledge acquisition than the AL cohorts during the first study years, but not at the end of their studies. We found no significant differences in clinical performance. Concerning perceived preparedness for practice we found no significant differences except that students from the CBAL curriculum felt better prepared for ‘putting a patient problem in a broad context of political, sociological, cultural and economic factors’ than students from the AL curriculum.
Conclusions
Our data do not support the assumption that competency-based education results in graduates who are better prepared for medical practice. More research is needed before we can draw generalizable conclusions on the potential of undergraduate competency-based medical education.
doi:10.1186/1472-6920-13-76
PMCID: PMC3668236  PMID: 23711403
Medical education; Competency-based education; Undergraduate medical education; Competence; Curriculum development; Curriculum comparison; Active learning; Clinical performance; Self-efficacy; Progress test
5.  Longitudinal training and assessing consultation competence, a role for self reflection on performance 
Medical consultation (patient–doctor encounter), consisting of history taking, physical examination and treatment, is the starting point of any contact between doctor and patient. Learning to conduct a consultation is a complex skill. Both communicative and medical contents need to be applied and integrated. Conducting an adequate consultation is a skill which is gradually learned and perfected during training and career. This article discusses the background and implementation of a longitudinal integrated consultation training programme in clerkships. In the programme, the student’s reflection on the consultation plays an important role in education and assessment.
doi:10.1007/s40037-012-0028-x
PMCID: PMC3508275  PMID: 23205345
Consultation competence; Self reflection; Assessment; Longitudinal
6.  Does reflection have an effect upon case-solving abilities of undergraduate medical students? 
BMC Medical Education  2012;12:75.
Background
Reflection on professional experience is increasingly accepted as a critical attribute for health care practice; however, evidence that it has a positive impact on performance remains scarce. This study investigated whether, after allowing for the effects of knowledge and consultation skills, reflection had an independent effect on students’ ability to solve problem cases.
Methods
Data were collected from 362 undergraduate medical students at Ghent University, who solved video cases and reflected on the experience of doing so. Results on a progress test and a course teaching consultation skills were used as measures of knowledge and consultation skills, respectively. Stepwise multiple linear regression analysis was used to test the relationship between the quality of case-solving (dependent variable) and reflection skills, knowledge, and consultation skills (independent variables).
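A minimal sketch of the kind of regression described here is given below, assuming hypothetical column names and a plain (non-stepwise) multiple linear regression; the stepwise selection mentioned in the abstract would add a variable-selection loop on top of this.

```python
# Rough sketch (not the authors' code): regress case-solving quality on
# reflection, knowledge and consultation-skill scores.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("case_solving.csv").dropna()        # keep students with complete data
X = sm.add_constant(df[["reflection", "knowledge", "consultation_skills"]])
model = sm.OLS(df["case_solving_quality"], X).fit()

print(model.summary())                               # F-statistic, coefficients, p-values
print("adjusted R^2:", round(model.rsquared_adj, 2))
```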
Results
Only students with data on all variables available (n = 270) were included for analysis. The model was significant (ANOVA: F(3,269) = 11.00, p < 0.001, adjusted R² = 0.10), with all variables significantly contributing.
Conclusion
Medical students’ reflection had a small but significant effect on case-solving, which supports reflection as an attribute for performance. These findings suggest that it would be worthwhile testing the effect of reflection skills training on clinical competence.
doi:10.1186/1472-6920-12-75
PMCID: PMC3492041  PMID: 22889271
7.  Using video-cases to assess student reflection: Development and validation of an instrument 
BMC Medical Education  2012;12:22.
Background
Reflection is a meta-cognitive process, characterized by: 1. Awareness of self and the situation; 2. Critical analysis and understanding of both self and the situation; 3. Development of new perspectives to inform future actions. Assessors can only access reflections indirectly through learners’ verbal and/or written expressions. Being privy to the situation that triggered reflection could place reflective materials into context. Video-cases make that possible and, coupled with a scoring rubric, offer a reliable way of assessing reflection.
Methods
Fourth- and fifth-year undergraduate medical students were shown two interactive video-cases and asked to reflect on this experience, guided by six standard questions. The quality of students’ reflections was scored using a specially developed Student Assessment of Reflection Scoring rubric (StARS®). Reflection scores were analyzed for interrater reliability and the ability to discriminate between students. Further, the intra-rater reliability and case specificity were estimated by means of a generalizability study with rating and case scenario as facets.
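As a rough illustration of the interrater-reliability analysis mentioned here, the sketch below computes Krippendorff's alpha (the coefficient reported in the Results), assuming the third-party krippendorff Python package; the rating matrix and score values are invented for illustration.

```python
# Minimal sketch: interrater agreement of rubric scores via Krippendorff's alpha.
import numpy as np
import krippendorff

# rows = raters, columns = students; NaN marks a missing rating (hypothetical data)
ratings = np.array([
    [3, 4, 2, 5, np.nan, 3],
    [3, 4, 2, 4, 1, np.nan],
    [2, 4, 3, 5, 1, 3],
])

alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="interval")
print(f"Krippendorff's alpha = {alpha:.2f}")
```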
Results
Reflection scores of 270 students ranged widely and interrater reliability was acceptable (Krippendorff’s alpha = 0.88). The generalizability study suggested 3 or 4 cases were needed to obtain reliable ratings from 4th year students and ≥ 6 cases from 5th year students.
Conclusion
Use of StARS® to assess student reflections triggered by standardized video-cases had acceptable discriminative ability and reliability. We offer this as a practical method for assessing reflection summatively and for providing formative feedback in training situations.
doi:10.1186/1472-6920-12-22
PMCID: PMC3426495  PMID: 22520632
8.  Key elements in assessing the educational environment: where is the theory? 
The educational environment has been increasingly acknowledged as vital for high-quality medical education. As a result, several instruments have been developed to measure the quality of the medical educational environment. However, there appears to be no consensus about which concepts should be measured. The absence of a theoretical framework may explain this lack of consensus. Therefore, we aimed to (1) find a comprehensive theoretical framework defining the essential concepts, and (2) test its applicability. An initial review of the medical educational environment literature indicated that such frameworks are lacking. Therefore, we chose an alternative approach to lead us to relevant frameworks from outside the medical educational field; that is, we applied a snowballing technique to find the educational environment instruments used to build the contents of the medical ones and investigated their theoretical underpinnings (Study 1). We found two frameworks, one of which was described as incomplete and one of which defines three domains as the key elements of human environments (personal development/goal direction, relationships, and system maintenance and system change) and has been validated in different contexts. To test its applicability, we investigated whether the items of nine medical educational environment instruments could be mapped onto the framework (Study 2). Of 374 items, 94% could be mapped: 256 (68%) pertained to a single domain and 94 (25%) to more than one domain. In our context, these domains were found to concern goal orientation, relationships and organization/regulation. We conclude that this framework is applicable and comprehensive, and recommend using it as the theoretical underpinning for medical educational environment measures.
doi:10.1007/s10459-011-9346-8
PMCID: PMC3490064  PMID: 22307806
Educational environment; Instrument development; Learning environment; Medical education; Theoretical framework
9.  Factors confounding the assessment of reflection: a critical review 
BMC Medical Education  2011;11:104.
Background
Reflection on experience is an increasingly critical part of professional development and lifelong learning. There is, however, continuing uncertainty about how best to put principle into practice, particularly as regards assessment. This article explores those uncertainties in order to find practical ways of assessing reflection.
Discussion
We critically review four problems: 1. Inconsistent definitions of reflection; 2. Lack of standards to determine (in)adequate reflection; 3. Factors that complicate assessment; 4. Internal and external contextual factors affecting the assessment of reflection.
Summary
To address the problem of inconsistency, we identified processes that were common to a number of widely quoted theories and synthesised a model, which yielded six indicators that could be used in assessment instruments. We concluded that, until further progress has been made in defining standards, assessment must depend on developing and communicating local consensus between stakeholders (students, practitioners, teachers, supervisors, curriculum developers) about what is expected in exercises and formal tests. Major factors that complicate assessment are the subjective nature of reflection's content and its dependence on the assessed person's own account of their reflection process, without any objective means of verification. To counter these validity threats, we suggest that assessment should focus on generic process skills rather than the subjective content of reflection and, where possible, consider objective information about the triggering situation in order to verify the described reflections. Finally, internal and external contextual factors such as motivation, instruction, the character of the assessment (formative or summative) and the ability of individual learning environments to stimulate reflection should be taken into account.
doi:10.1186/1472-6920-11-104
PMCID: PMC3268719  PMID: 22204704
10.  The reliability of in-training assessment when performance improvement is taken into account 
During in-training assessment students are frequently assessed over a longer period of time and therefore it can be expected that their performance will improve. We studied whether there really is a measurable performance improvement when students are assessed over an extended period of time and how this improvement affects the reliability of the overall judgement. In-training assessment results were obtained from 104 students on rotation at our university hospital or at one of the six affiliated hospitals. Generalisability theory was used in combination with multilevel analysis to obtain reliability coefficients and to estimate the number of assessments needed for reliable overall judgement, both including and excluding performance improvement. Students’ clinical performance ratings improved significantly from a mean of 7.6 at the start to a mean of 7.8 at the end of their clerkship. When taking performance improvement into account, reliability coefficients were higher. The number of assessments needed to achieve a reliability of 0.80 or higher decreased from 17 to 11. Therefore, when studying reliability of in-training assessment, performance improvement should be considered.
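The sketch below illustrates, under assumed column names, one way such a variance-components analysis could be set up: a random-intercept model with an optional time covariate (so that performance improvement is not absorbed into the error term), followed by the number of assessments needed for a reliability of 0.80. It is not the authors' model, only an illustration of the general approach.

```python
# Hedged sketch: variance components and reliability of a mean of k assessments.
import math
import pandas as pd
import statsmodels.api as sm

ita = pd.read_csv("in_training_assessments.csv")     # columns: student, week, rating (assumed)

# "~ week" models linear performance improvement; drop it to ignore improvement.
result = sm.MixedLM.from_formula("rating ~ week", groups="student", data=ita).fit()

var_student = result.cov_re.iloc[0, 0]               # between-student variance
var_error = result.scale                             # residual variance

def reliability(k):                                  # reliability of a k-assessment mean
    return var_student / (var_student + var_error / k)

k_needed = math.ceil(var_error / var_student * 0.80 / (1 - 0.80))
print(f"reliability of 10 assessments: {reliability(10):.2f}; "
      f"assessments needed for 0.80: {k_needed}")
```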
doi:10.1007/s10459-010-9226-7
PMCID: PMC2995207  PMID: 20349272
In-training assessment; Longitudinal assessment; Mini-CEX; Reliability; Undergraduate students; Workplace learning
11.  The role of peer meetings for professional development in health science education: a qualitative analysis of reflective essays 
Introduction
The development of professional behaviour is an important objective for students in Health Sciences, with reflective skills being a basic condition for this development. The literature describes a variety of methods giving students opportunities and encouragement for reflection. Although the literature states that learning and working together in peer meetings fosters reflection, these findings are based on experienced professionals. We do not know whether participation in peer meetings also makes a positive contribution to the learning experiences of undergraduate students in terms of reflection.
Aim
The aim of this study is to gain an understanding of the role of peer meetings in students’ learning experiences regarding reflection.
Method
A phenomenographic qualitative study was undertaken. Students’ learning experiences in peer meetings were analyzed by investigating the learning reports in students’ portfolios. Data were coded using open coding.
Results
The results indicate that peer meetings created an interactive learning environment in which students learned about themselves, their skills and their abilities as novice professionals. Students also mentioned conditions for a well-functioning group.
Conclusion
The findings indicate that peer meetings foster the development of reflection skills as part of professional behaviour.
doi:10.1007/s10459-008-9133-3
PMCID: PMC2744783  PMID: 18766452
Reflection; Peer meetings; Professional behaviour; Teaching; Collaborative learning
12.  The Effect of Enhanced Experiential Learning on the Personal Reflection of Undergraduate Medical Students 
Objective:
This study's aim was to test the expectation that enhanced experiential learning is an effective educational method that encourages personal reflection in medical students.
Methods:
Using a pre-test/post-test follow-up design, we compared the level of personal reflection ability of an exposure group of first-year medical students participating in a new enhanced experiential learning program with that of a control group of second- and third-year medical students participating in a standard problem-based learning program. Personal reflection was assessed using the Groningen Reflection Ability Scale (GRAS). Students’ growth in reflection was analyzed with multilevel analysis.
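A possible shape for such a multilevel growth analysis is sketched below, with invented column names: repeated GRAS measurements nested within students, and a group-by-time interaction capturing whether reflection grows faster in the exposure group. This is an illustration of the general technique, not the study's actual model.

```python
# Sketch: multilevel growth model of GRAS scores (hypothetical data layout).
import pandas as pd
import statsmodels.api as sm

gras = pd.read_csv("gras_measurements.csv")          # student, group, time, gras_score
model = sm.MixedLM.from_formula(
    "gras_score ~ time * C(group)",                  # growth curves per group
    groups="student",                                # random intercept per student
    re_formula="~time",                              # random slope: individual growth rates
    data=gras,
)
fit = model.fit()
print(fit.summary())                                 # interaction term = difference in growth
```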
Results:
After one year, first-year medical students in the exposure group achieved a level of personal reflection comparable to that reached by students of the control group in their third year. This difference in growth of reflection was statistically significant (p < .001), with a small effect size (0.18). The reflection growth curve of the control group declined slightly in the third year as a function of study time.
Conclusion:
Enhanced experiential learning has a positive effect on the personal reflection ability of undergraduate medical students.
doi:10.3885/meo.2008.Res00279
PMCID: PMC2779594  PMID: 20165543
