Results 1-12 (12)

1.  Influence of the workplace on learning physical examination skills 
BMC Medical Education  2014;14:61.
Background
Hospital clerkships are considered crucial for acquiring competencies such as diagnostic reasoning and clinical skills. The actual learning process in the hospital remains poorly understood. This study investigates how students learn clinical skills in workplaces and factors affecting this.
Methods
Six focus group sessions with 32 students in Internal Medicine rotation (4–9 students per group; sessions 80–90 minutes). Verbatim transcripts were analysed by emerging themes and coded independently by three researchers followed by constant comparison and axial coding.
Results
Students report learning the systematics of the physical examination, gaining agility and becoming able to recognise pathological signs. The learning process combines working alongside others and working independently, with increasing responsibility for patient care. Helpful behaviours include making findings explicit through patient files or during observation, receiving feedback on abnormal findings and taking initiative. Factors affecting the process negatively include lack of supervision, uncertainty about tasks and expectations, and social factors such as the hierarchy of learners and the perceived learning environment.
Conclusion
Although individual student experiences vary greatly between different hospitals, it seems that proactivity and participation are central drivers for learning. These results can improve the quality of existing programmes and help design new ways to learn physical examination skills.
doi:10.1186/1472-6920-14-61
PMCID: PMC3976051  PMID: 24678562
2.  Qualitative study about the ways teachers react to feedback from resident evaluations 
BMC Medical Education  2013;13:98.
Background
Currently, one of the main interventions widely expected to contribute to teachers’ professional development is confronting teachers with feedback from resident evaluations of their teaching performance. Receiving feedback, however, is a double-edged sword: teachers are confronted with information about themselves and are, at the same time, expected to be role models in the way they respond to feedback. Knowledge about teachers’ responses could benefit not only their professional development but also their role modeling. Therefore, research about professional development should include the way teachers respond to feedback.
Method
We designed a qualitative study based on semi-structured individual conversations about feedback reports derived from resident evaluations. Two researchers carried out a systematic analysis using qualitative research software. The analysis focused on what happened in the conversations and structured the data in three main themes: conversation process, acceptance and coping strategies.
Results
The results describe the conversation patterns and atmosphere. Teachers accepted their results calmly, stating that, although they recognised some points of interest, they could not meet every standard. The most frequently used coping strategies were explaining the results from personal beliefs about good teaching and attributing poor results to external factors and good results to themselves. However, some teachers admitted that they had poor results because they were not “sharp enough” in their resident group, implying that they had not done their best.
Conclusions
Our study not only confirms that the effects of feedback depend first and foremost on the recipient but also sheds light on the meaning and role of acceptance and role modeling. The results justify the conclusion that the teachers responsible for the day-release programmes in the three departments tend to respond to evaluation results as any person would and, at the time of the conversation, are initially unaware that they are role models in the way they respond to feedback.
doi:10.1186/1472-6920-13-98
PMCID: PMC3751067  PMID: 23866849
Teachers; Professional development; Feedback; Role modeling
3.  Quality assurance in transnational higher education: a case study of the tropEd network 
BMC Medical Education  2013;13:43.
Introduction
Transnational or cross-border higher education has expanded rapidly since the 1980s. With that expansion, issues of quality assurance have come to the forefront. This article aims to identify key issues regarding quality assurance of transnational higher education and discusses the quality assurance of the tropEd Network for International Health in Higher Education in relation to these key issues.
Methods
Literature review and review of documents.
Results
From the literature, the following key issues regarding transnational quality assurance were identified and explored: comparability of quality assurance frameworks, true collaboration versus erosion of national educational sovereignty, accreditation agencies, and transparency. The tropEd network developed its own transnational quality assurance framework and accredits modules through a rigorous process that has been accepted by major stakeholders. This accreditation process was a participatory learning exercise and at the same time had a positive effect on relations between the institutions.
Discussion
The development of the quality assurance framework and the process provides a potential example for others.
doi:10.1186/1472-6920-13-43
PMCID: PMC3614883  PMID: 23537108
Quality assurance; Higher education; Cross-border; Transnational; Networks
4.  Teachers’ perceptions of aspects affecting seminar learning: a qualitative study 
BMC Medical Education  2013;13:22.
Background
Many medical schools have embraced small group learning methods in their undergraduate curricula. Given increasing financial constraints on universities, active learning groups such as seminars (with 25 students per group) are gaining popularity. To enhance the understanding of seminar learning and to determine how it can be optimised, it is important to investigate stakeholders’ views. In this study, we qualitatively explored teachers’ views on aspects affecting seminar learning.
Methods
Twenty-four teachers with experience in facilitating seminars in a three-year bachelor curriculum participated in semi-structured focus group interviews. Three focus groups, each led by one moderator, met twice with an interval of two weeks. Sessions were audio-taped, transcribed verbatim and coded independently by two researchers using thematic analysis. An iterative process of data reduction yielded the aspects that influence seminar learning.
Results
Teachers identified seven key aspects affecting seminar learning: the seminar teacher, students, preparation, group functioning, seminar goals and content, course coherence, and schedule and facilities. Important components of these aspects were the teachers’ role in developing seminars (‘ownership’), the amount and quality of preparation materials, a non-threatening learning climate, continuity of group composition, suitability of subjects for seminar teaching, the number and quality of seminar questions, and alignment of different course activities.
Conclusions
The results of this study contribute to unravelling the ‘black box’ of seminar learning. Suggestions for ways to optimise active learning in seminars are made regarding curriculum development, seminar content, quality assurance and faculty development.
doi:10.1186/1472-6920-13-22
PMCID: PMC3576232  PMID: 23399475
Seminar learning; Undergraduate (veterinary) medical education; Focus groups; Faculty development
5.  A systematic review of outcome and impact of Master’s in health and health care 
BMC Medical Education  2013;13:18.
Background
The ‘human resources for health’ crisis has highlighted the need for more health (care) professionals and led to an increased interest in health professional education, including master’s degree programmes. The number of these programmes in low- and middle-income countries (LMIC) is increasing, but questions have been raised regarding their relevance, outcome and impact. We conducted a systematic review to evaluate the outcomes and impact of health-related master’s degree programmes.
Methods
We searched the databases Scopus, PubMed, Embase, CINAHL, ERIC, PsycINFO and Cochrane (1999 - November 2011) and selected websites. All papers describing the outcomes and impact of health-related master’s degree programmes were included. Three reviewers, two for each article, extracted data independently. The articles were categorised by type of programme, country, defined outcomes and impact, study methods used and level of evidence, and classified according to outcomes: competencies used in practice, graduates’ career progression, and impact on graduates’ workplaces and the sector/society.
Results
Of the 33 articles included in the review, most originated from the US and the UK, and only one from a low-income country. The programmes studied were in public health (8), nursing (8), physiotherapy (5), family practice (4) and other topics (8). Outcomes were defined in fewer than one third of the articles, and impact was not defined at all. Outcomes and impact were measured by self-reported alumni surveys and qualitative methods. Most articles reported that competencies learned during the programme were applied in the workplace and that alumni reported career progression or specific job changes. Some articles reported difficulties in using newly gained competencies in the workplace. There was limited evidence of impact on the workplace. Only two articles reported impact on the sector. Most studies described learning approaches, but very few described a mechanism to ensure the outcome and impact of the programme.
Conclusions
Evidence suggests that graduates apply newly learned competencies in the field and that they progress in their career. There is a paucity of well-designed studies assessing the outcomes and impact of health-related master’s degree programmes in low- and middle-income countries. Studies of such programmes should consider the context and define outcomes and impact.
doi:10.1186/1472-6920-13-18
PMCID: PMC3620571  PMID: 23388181
Master’s degree programmes; Evaluation; Outcomes; Impact; Systematic review; Public health
6.  Teacher-made models: the answer for medical skills training in developing countries? 
BMC Medical Education  2012;12:98.
Background
The advantages of using simulators in skills training are generally recognized, but simulators are often too expensive for medical schools in developing countries. Cheaper locally-made models (or part-task trainers) could be the answer, especially when teachers are involved in design and production (teacher-made models, TM).
Methods
We evaluated the effectiveness of a TM in training and assessing intravenous injection skills in comparison to an available commercial model (CM) in a randomized, blind, pretest-posttest study with 144 undergraduate nursing students. All students were assessed on both the TM and the CM in the pre-test and post-test. After the post-test the students were also assessed while performing the skill on real patients.
Results
Differences between pre- and post-test mean scores were marked in all groups. Training with the TM or the CM improved student scores substantially, but there was no significant difference in mean scores between students who had practiced on the TM and those who had practiced on the CM (a comparison of this kind is sketched after this entry). Students who practiced on the TM communicated with the patient better than students who practiced on the CM. Decreasing the number of students per TM increased practice opportunities but did not improve students’ mean scores. The results of the assessments on both the TM and the CM correlated poorly with the results of the assessment on real patients.
Conclusions
The TM appears to be an effective alternative to CM for training students on basic IV skills, as students showed similar increases in performance scores after training on models that cost considerably less than commercially available models. These models could be produced using locally available materials in most countries, including those with limited resources to invest in medical education and skills laboratories.
doi:10.1186/1472-6920-12-98
PMCID: PMC3533861  PMID: 23082941
Clinical skills laboratory; Teacher made models; Commercial models; Vietnam
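As an illustration of the kind of pre/post comparison summarised above, the following Python sketch compares score gains between a teacher-made-model (TM) arm and a commercial-model (CM) arm. The abstract does not name the statistical test used, so the unpaired t-test, the split of 72 students per arm and all scores below are hypothetical assumptions, not the study's actual analysis.

```python
# Hedged sketch: an unpaired t-test on pre/post score gains (hypothetical data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical checklist scores (0-100) before and after training, 72 students per arm.
tm_pre, tm_post = rng.normal(45, 10, 72), rng.normal(70, 10, 72)  # teacher-made model arm
cm_pre, cm_post = rng.normal(45, 10, 72), rng.normal(71, 10, 72)  # commercial model arm

# Compare the training gains of the two arms.
tm_gain = tm_post - tm_pre
cm_gain = cm_post - cm_pre
t, p = stats.ttest_ind(tm_gain, cm_gain)

print(f"mean gain TM = {tm_gain.mean():.1f}, CM = {cm_gain.mean():.1f}, t = {t:.2f}, p = {p:.3f}")
```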
7.  Does reflection have an effect upon case-solving abilities of undergraduate medical students? 
BMC Medical Education  2012;12:75.
Background
Reflection on professional experience is increasingly accepted as a critical attribute for health care practice; however, evidence that it has a positive impact on performance remains scarce. This study investigated whether, after allowing for the effects of knowledge and consultation skills, reflection had an independent effect on students’ ability to solve problem cases.
Methods
Data were collected from 362 undergraduate medical students at Ghent University who solved video cases and reflected on the experience of doing so. Knowledge and consultation skills were measured by results on a progress test and on a course teaching consultation skills, respectively. Stepwise multiple linear regression analysis was used to test the relationship between the quality of case-solving (dependent variable) and reflection skills, knowledge, and consultation skills (independent variables); a minimal regression sketch follows this entry.
Results
Only students with data available on all variables (n = 270) were included in the analysis. The model was significant (ANOVA F(3,269) = 11.00, p < 0.001, adjusted R² = 0.10), with all variables contributing significantly.
Conclusion
Medical students’ reflection had a small but significant effect on case-solving, which supports reflection as an attribute for performance. These findings suggest that it would be worthwhile testing the effect of reflection skills training on clinical competence.
doi:10.1186/1472-6920-12-75
PMCID: PMC3492041  PMID: 22889271
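The regression reported above can be illustrated with a short Python sketch using statsmodels. The simulated data and variable names are hypothetical, and the sketch fits the full three-predictor model rather than reproducing the stepwise selection used in the study.

```python
# Hedged sketch: OLS regression of case-solving quality on three predictors (simulated data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 270
df = pd.DataFrame({
    "reflection": rng.normal(size=n),
    "knowledge": rng.normal(size=n),
    "consultation": rng.normal(size=n),
})
# Simulated outcome with small contributions from each predictor.
df["case_solving"] = (0.2 * df["reflection"] + 0.2 * df["knowledge"]
                      + 0.2 * df["consultation"] + rng.normal(size=n))

X = sm.add_constant(df[["reflection", "knowledge", "consultation"]])
model = sm.OLS(df["case_solving"], X).fit()

print(f"F({int(model.df_model)},{int(model.df_resid)}) = {model.fvalue:.2f}, "
      f"p = {model.f_pvalue:.4f}, adjusted R^2 = {model.rsquared_adj:.2f}")
print(model.params)   # coefficient of each predictor
print(model.pvalues)  # significance of each predictor
```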
8.  Preclinical students’ experiences in early clerkships after skills training partly offered in primary health care centers: a qualitative study from Indonesia 
BMC Medical Education  2012;12:35.
Background
Students may encounter difficulties in clerkships when they have to apply clinical skills trained during their pre-clinical studies. Early clinical exposure in the pre-clinical phase has been recommended to reduce these transition problems. The aim of this study is to explore differences in students' experiences during the first clerkships between students exclusively trained in a skills laboratory and peers for whom part of their skills training was substituted by early clinical experiences (ECE).
Methods
Thirty pre-clinical students trained in clinical skills exclusively in a skills laboratory; 30 peers received part of their skills training in primary health care (PHC) centers. Within half a year of commencing their clerkships, all 60 students shared their experiences in focus group discussions (FGDs). Verbatim transcripts of the FGDs were analyzed using Atlas-Ti software.
Results
Clerkship students who had participated in ECE in PHC centers felt better prepared to perform their clinical skills during the first clerkships than peers who had only practiced in a skills laboratory. ECE in PHC centers had a particularly positive impact on students’ confidence, clinical reasoning, and interpersonal communication.
Conclusion
In the Indonesian setting, ECE in PHC centers reduced difficulties commonly encountered by medical students in their first clerkships.
doi:10.1186/1472-6920-12-35
PMCID: PMC3527268  PMID: 22640419
Clinical skills training; Early clinical experiences; Clerkships
9.  Using video-cases to assess student reflection: Development and validation of an instrument 
BMC Medical Education  2012;12:22.
Background
Reflection is a meta-cognitive process, characterized by: 1. Awareness of self and the situation; 2. Critical analysis and understanding of both self and the situation; 3. Development of new perspectives to inform future actions. Assessors can only access reflections indirectly through learners’ verbal and/or written expressions. Being privy to the situation that triggered reflection could place reflective materials into context. Video-cases make that possible and, coupled with a scoring rubric, offer a reliable way of assessing reflection.
Methods
Fourth- and fifth-year undergraduate medical students were shown two interactive video-cases and asked to reflect on this experience, guided by six standard questions. The quality of students’ reflections was scored using a specially developed Student Assessment of Reflection Scoring rubric (StARS®). Reflection scores were analyzed for interrater reliability and the ability to discriminate between students (a minimal reliability computation is sketched after this entry). Further, intra-rater reliability and case specificity were estimated by means of a generalizability study with rating and case scenario as facets.
Results
Reflection scores of 270 students ranged widely and interrater reliability was acceptable (Krippendorff’s alpha = 0.88). The generalizability study suggested 3 or 4 cases were needed to obtain reliable ratings from 4th year students and ≥ 6 cases from 5th year students.
Conclusion
Use of StARS® to assess student reflections triggered by standardized video-cases had acceptable discriminative ability and reliability. We offer this as a practical method for assessing reflection summatively and for providing formative feedback in training situations.
doi:10.1186/1472-6920-12-22
PMCID: PMC3426495  PMID: 22520632
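Krippendorff's alpha, the interrater-reliability statistic reported above, can be computed as in the minimal Python sketch below. It assumes complete ratings and an interval difference metric, which may differ from the rubric's actual measurement level; the example scores are hypothetical and the generalizability analysis is not reproduced.

```python
# Hedged sketch: Krippendorff's alpha for complete ratings with an interval metric.
import numpy as np

def krippendorff_alpha_interval(ratings: np.ndarray) -> float:
    """ratings: array of shape (n_units, n_raters), no missing values."""
    n_units, n_raters = ratings.shape
    n_total = n_units * n_raters

    # Observed disagreement: squared differences between ratings within each unit.
    d_o = 0.0
    for unit in ratings:
        diffs = unit[:, None] - unit[None, :]  # all ordered pairs within the unit
        d_o += (diffs ** 2).sum() / (n_raters - 1)
    d_o /= n_total

    # Expected disagreement: squared differences between all pooled ratings.
    pooled = ratings.ravel().astype(float)
    diffs = pooled[:, None] - pooled[None, :]
    d_e = (diffs ** 2).sum() / (n_total * (n_total - 1))

    return 1.0 - d_o / d_e

# Hypothetical example: five reflections, each scored by two raters on a 1-5 rubric.
scores = np.array([[4, 4], [2, 3], [5, 5], [2, 2], [3, 3]])
print(f"alpha = {krippendorff_alpha_interval(scores):.2f}")
```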
10.  Factors confounding the assessment of reflection: a critical review 
BMC Medical Education  2011;11:104.
Background
Reflection on experience is an increasingly critical part of professional development and lifelong learning. There is, however, continuing uncertainty about how best to put principle into practice, particularly as regards assessment. This article explores those uncertainties in order to find practical ways of assessing reflection.
Discussion
We critically review four problems: 1. Inconsistent definitions of reflection; 2. Lack of standards to determine (in)adequate reflection; 3. Factors that complicate assessment; 4. Internal and external contextual factors affecting the assessment of reflection.
Summary
To address the problem of inconsistency, we identified processes that were common to a number of widely quoted theories and synthesised a model, which yielded six indicators that could be used in assessment instruments. We arrived at the conclusion that, until further progress has been made in defining standards, assessment must depend on developing and communicating local consensus between stakeholders (students, practitioners, teachers, supervisors, curriculum developers) about what is expected in exercises and formal tests. Major factors that complicate assessment are the subjective nature of reflection's content and the dependence on the assessed person's own description of the reflection process, without any objective means of verification. To counter these validity threats, we suggest that assessment focus on generic process skills rather than the subjective content of reflection and, where possible, consider objective information about the triggering situation to verify described reflections. Finally, internal and external contextual factors such as motivation, instruction, the character of the assessment (formative or summative) and the ability of individual learning environments to stimulate reflection should be considered.
doi:10.1186/1472-6920-11-104
PMCID: PMC3268719  PMID: 22204704
11.  Interactive seminars or small group tutorials in preclinical medical education: results of a randomized controlled trial 
BMC Medical Education  2010;10:79.
Background
Learning in small group tutorials is appreciated by students and effective in the acquisition of clinical problem-solving skills but poses financial and resource challenges. Interactive seminars, which accommodate large groups, might be an alternative. This study examines the educational effectiveness of small group tutorials and interactive seminars and students' preferences for and satisfaction with these formats.
Methods
Students in year three of the Leiden undergraduate medical curriculum who agreed to participate in a randomized controlled trial (RCT, n = 107) were randomly allocated to small group tutorials (n = 53) or interactive seminars (n = 54). Students who did not agree were free to choose either format (n = 105). Educational effectiveness was measured by comparing the participants' results on the end-of-block test. Data on students' reasons and satisfaction were collected by means of questionnaires. Data were analyzed using Student's unpaired t-test or the chi-square test, where appropriate (a minimal analysis sketch follows this entry).
Results
There were no significant differences between the two educational formats in students' test grades. Retention of knowledge through active participation was the most frequently cited reason for preferring small group tutorials, while a dislike of compulsory course components was mentioned more frequently by students preferring interactive seminars. Small group tutorials led to greater satisfaction.
Conclusions
We found that small group tutorials lead to greater satisfaction but not to better learning results. Interactive learning in large groups might be an effective alternative to small group tutorials in some cases and could be offered as an option.
doi:10.1186/1472-6920-10-79
PMCID: PMC3000405  PMID: 21073744
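The analyses named in the Methods above (an unpaired t-test on test grades and a chi-square test on questionnaire answers) might look roughly like the Python sketch below. The grades, the 2x2 table and the question it tabulates are hypothetical, chosen only to mirror the reported group sizes.

```python
# Hedged sketch: unpaired t-test and chi-square test on hypothetical RCT data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical end-of-block grades (1-10 scale) for the two randomized arms.
tutorial_grades = rng.normal(6.8, 1.0, 53)  # small group tutorials (n = 53)
seminar_grades = rng.normal(6.7, 1.0, 54)   # interactive seminars (n = 54)

t, p = stats.ttest_ind(tutorial_grades, seminar_grades)
print(f"grades: t = {t:.2f}, p = {p:.3f}")

# Hypothetical 2x2 table: preferred format vs. "dislikes compulsory components" (yes/no).
table = np.array([[12, 41],   # prefers small group tutorials
                  [27, 27]])  # prefers interactive seminars
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"preference reason: chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```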
12.  Combining a leadership course and multi-source feedback has no effect on leadership skills of leaders in postgraduate medical education. An intervention study with a control group 
BMC Medical Education  2009;9:72.
Background
Leadership courses and multi-source feedback are widely used developmental tools for leaders in health care. Against this background, we aimed to study the additional effect of a leadership course following a multi-source feedback procedure, compared with multi-source feedback alone, especially regarding the development of leadership skills over time.
Methods
Study participants were consultants responsible for postgraduate medical education at clinical departments. The study design was a pre-post measurement with an intervention group and a control group; the intervention was participation in a seven-day leadership course. Multi-source feedback scores from the consultants responsible for education and from respondents (heads of department, consultants and doctors in specialist training) were collected before and one year after the intervention and analysed using the Mann-Whitney U test and multivariate analysis of variance (a minimal sketch of the pre-post comparison follows this entry).
Results
There were no differences in multi-source feedback scores at one year follow up compared to baseline measurements, either in the intervention or in the control group (p = 0.149).
Conclusion
The study indicates that a leadership course following a multi-source feedback (MSF) procedure, compared with MSF alone, does not improve the leadership skills of consultants responsible for education in clinical departments. Developing leadership skills takes time, and the one-year time frame might have been too short to show improvement. Further studies are needed to investigate whether other combinations of initiatives to develop leadership might have more impact in the clinical setting.
doi:10.1186/1472-6920-9-72
PMCID: PMC2797774  PMID: 20003311
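The Mann-Whitney comparison described in the Methods above could be sketched as follows. The group sizes, the 1-5 score scale and the use of score change as the outcome are assumptions for illustration, and the multivariate analysis of variance is not reproduced.

```python
# Hedged sketch: Mann-Whitney U test on change in multi-source feedback (MSF) scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical mean MSF scores (1-5 scale) per consultant, before and one year after.
interv_pre, interv_post = rng.normal(3.8, 0.4, 21), rng.normal(3.9, 0.4, 21)    # course group
control_pre, control_post = rng.normal(3.8, 0.4, 22), rng.normal(3.8, 0.4, 22)  # MSF only

interv_change = interv_post - interv_pre
control_change = control_post - control_pre

u, p = stats.mannwhitneyu(interv_change, control_change, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.3f}")
```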
