Results 1-6 (6)

1.  Newly qualified doctors' views about whether their medical school had trained them well: questionnaire surveys 
A survey of newly qualified doctors in the UK in 2000/2001 found that 42% of them felt unprepared for their first year of employment in clinical posts. We report on how UK qualifiers' preparedness has changed since then, and on the impact of course changes upon preparedness.
Postal questionnaires were sent to all doctors who qualified from UK medical schools, in their first year of clinical work, in 2003 (n = 4257) and 2005 (n = 4784); and findings were compared with those in 2000/2001 (n = 5330). The response rates were 67% in 2000/2001, 65% in 2003, and 43% in 2005. The outcome measure was the percentage of doctors agreeing with the statement "My experience at medical school has prepared me well for the jobs I have undertaken so far".
In the 2000/2001 survey 36.3% strongly agreed or agreed with the statement, as did 50.3% in the 2003 survey and 58.5% in 2005 (chi-squared test for linear trend: χ2 = 259.5; df = 1; p < 0.001). Substantial variation in preparedness between doctors from different medical schools, reported in the first survey, was still present in 2003 and 2005. Between 1998 and 2006 all UK medical schools updated their courses. Within each cohort a significantly higher percentage of the respondents from schools with updated courses felt well prepared.
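As a rough illustration of the chi-squared test for linear trend reported above, the statistic can be computed Cochran–Armitage-style. The counts below are reconstructed approximately from the abstract's percentages and response rates; they are not the paper's raw data, so the result will not match χ2 = 259.5 exactly:

```python
from math import sqrt

# Cochran-Armitage test for linear trend in proportions (one common form of
# the "chi-squared test for linear trend", df = 1). Counts are rough
# reconstructions from the abstract's figures, NOT the paper's exact data.
scores = [0, 1, 2]             # cohort order: 2000/01, 2003, 2005
n = [3571, 2767, 2057]         # approx. respondents per cohort
s = [1296, 1392, 1203]         # approx. numbers agreeing "well prepared"

N = sum(n)
p_bar = sum(s) / N                                      # pooled proportion
t = sum(si * xi for si, xi in zip(s, scores))           # score-weighted successes
e = p_bar * sum(ni * xi for ni, xi in zip(n, scores))   # expected under no trend
v = p_bar * (1 - p_bar) * (
    sum(ni * xi * xi for ni, xi in zip(n, scores))
    - sum(ni * xi for ni, xi in zip(n, scores)) ** 2 / N
)
z = (t - e) / sqrt(v)
chi2_trend = z * z                                      # df = 1
```

With these approximate counts the statistic lands in the same order of magnitude as the reported value, with the positive z confirming an increasing trend in preparedness across cohorts.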
UK medical schools are now training doctors who feel better prepared for work than in the past. Some of the improvement may be attributable to curricular change.
PMCID: PMC2203980  PMID: 17945007
2.  Are the General Medical Council’s Tests of Competence fair to long standing doctors? A retrospective cohort study 
BMC Medical Education  2015;15:80.
The General Medical Council’s Fitness to Practise investigations may involve a test of competence for doctors with performance concerns. Concern has been raised about the suitability of the test format for doctors who qualified before the introduction of Single Best Answer and Objective Structured Clinical Examination assessments, both of which form the test of competence. This study explored whether the examination formats used in the tests of competence are fair to long standing doctors who have undergone fitness to practise investigation.
A retrospective cohort design was used to examine the association between year of primary medical qualification and doctors’ test of competence performance. The performance of 95 general practitioners under investigation was compared with that of a group of 376 volunteer doctors. We analysed performance on the knowledge test, the OSCE overall, and three individual OSCE stations using Spearman’s correlation and regression models.
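Spearman's correlation, used above, is the Pearson correlation of ranks; a minimal sketch with invented qualification-year/score pairs (no tie handling):

```python
from statistics import mean
from math import sqrt

# Spearman's rho = Pearson correlation of the ranks.
# Data are invented for illustration: later qualification years
# paired with (mostly) higher test scores.
def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1.0
    return r  # note: this sketch does not average ranks for ties

def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x)
                      * sum((b - my) ** 2 for b in y))

years  = [1975, 1982, 1990, 1996, 2001, 2008]
scores = [48.0, 55.0, 53.0, 60.0, 64.0, 70.0]
rho = pearson(ranks(years), ranks(scores))   # strong positive rank correlation
```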
Doctors under investigation performed worse on all test outcomes compared to the comparison group. Qualification year correlated positively with performance on all outcomes except for physical examination (e.g. knowledge test r = 0.48, p < 0.001 and OSCE r = 0.37, p < 0.001). Qualification year was associated with test performance in doctors under investigation even when controlling for sex, ethnicity and qualification region. Regression analyses showed that qualification year was associated with knowledge test, OSCE and communication skills performance of doctors under investigation when other variables were controlled for. Among volunteer doctors this was not the case and their performance was more strongly related to where they qualified and their ethnic background. Furthermore, volunteer doctors who qualified before the introduction of Single Best Answer and OSCE assessments, still outperformed their peers under investigation.
Earlier graduates under fitness to practise investigation performed less well on the test of competence than their more recently qualified peers under investigation. The performance of the comparator group tended to stay consistent irrespective of year qualified. Our results suggest that the test format itself does not disadvantage long standing doctors. We discuss findings in relation to the GMC’s fitness to practise procedures and suggest alternative explanations for the poorer performance of long standing doctors under investigation.
PMCID: PMC4453964  PMID: 25896823
General medical council; Fitness to practise; Tests of competence; Volunteer; Pilot; Assessment; Performance; Qualification year
3.  Cross-comparison of MRCGP & MRCP(UK) in a database linkage study of 2,284 candidates taking both examinations: assessment of validity and differential performance by ethnicity 
MRCGP and MRCP(UK) are the main entry qualifications for UK doctors entering general [family] practice or hospital [internal] medicine. The performance of MRCP(UK) candidates who subsequently take MRCGP allows validation of each assessment.
In the UK, underperformance of ethnic minority doctors taking MRCGP has had a high political profile, with a Judicial Review in the High Court in April 2014 for alleged racial discrimination. Although the legal challenge was dismissed, substantial performance differences between white and BME (Black and Minority Ethnic) doctors undoubtedly exist. Understanding ethnic differences can be helped by comparing the performance of doctors who take both MRCGP and MRCP(UK).
We identified 2,284 candidates who had taken one or more parts of both assessments, MRCP(UK) typically being taken 3.7 years before MRCGP. We analyzed performance on knowledge-based MCQs (MRCP(UK) Parts 1 and 2 and MRCGP Applied Knowledge Test (AKT)) and clinical examinations (MRCGP Clinical Skills Assessment (CSA) and MRCP(UK) Practical Assessment of Clinical Skills (PACES)).
Correlations between MRCGP and MRCP(UK) were high, disattenuated correlations for MRCGP AKT with MRCP(UK) Parts 1 and 2 being 0.748 and 0.698, and for CSA and PACES being 0.636.
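Disattenuation corrects an observed correlation for the unreliability of the two measures; a minimal sketch with hypothetical values (the reliabilities below are illustrative assumptions, not figures from the paper):

```python
from math import sqrt

# Classical disattenuation: the "true" correlation between two constructs is
# the observed correlation divided by the geometric mean of the two tests'
# reliabilities. All numbers here are hypothetical.
def disattenuate(r_obs, rel_x, rel_y):
    return r_obs / sqrt(rel_x * rel_y)

r_obs = 0.55                        # hypothetical observed correlation
rel_akt, rel_part1 = 0.90, 0.85     # hypothetical test reliabilities
r_true = disattenuate(r_obs, rel_akt, rel_part1)
```

Because reliabilities are below 1, the disattenuated value is always larger than the observed correlation, which is why the figures quoted above can approach the tests' reliability ceiling.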
BME candidates performed less well on all five assessments (P < .001). Correlations disaggregated by ethnicity were complex, MRCGP AKT showing similar correlations with Part 1/Part 2/PACES in White and BME candidates, but CSA showing stronger correlations with Part 1/Part 2/PACES in BME candidates than in White candidates.
CSA changed its scoring method during the study; multiple regression showed the newer CSA was better predicted by PACES than the previous CSA.
High correlations between MRCGP and MRCP(UK) support the validity of each, suggesting that the two examinations assess cognate knowledge and skills.
Detailed analyses by candidate ethnicity show that although White candidates out-perform BME candidates, the differences are largely mirrored across the two examinations. Whilst the reason for the differential performance is unclear, the similarity of the effects in independent knowledge and clinical examinations suggests the differences are unlikely to result from specific features of either assessment and most likely represent true differences in ability.
PMCID: PMC4302509  PMID: 25592199
MRCGP; MRCP(UK); Applied knowledge test; Clinical skills assessment; PACES; Correlation; Ethnicity; Black and minority ethnic
4.  Investigating possible ethnicity and sex bias in clinical examiners: an analysis of data from the MRCP(UK) PACES and nPACES examinations 
BMC Medical Education  2013;13:103.
Bias of clinical examiners against some types of candidate, based on characteristics such as sex or ethnicity, would represent a threat to the validity of an examination, since sex or ethnicity are ‘construct-irrelevant’ characteristics. In this paper we report a novel method for assessing sex and ethnic bias in over 2000 examiners who had taken part in the PACES and nPACES (new PACES) examinations of the MRCP(UK).
PACES and nPACES are clinical skills examinations that have two examiners at each station who mark candidates independently. Differences between examiners cannot be due to differences in performance of a candidate because that is the same for the two examiners, and hence may result from bias or unreliability on the part of the examiners. By comparing each examiner against a ‘basket’ of all of their co-examiners, it is possible to identify examiners whose behaviour is anomalous. The method assessed hawkishness-doveishness, sex bias, ethnic bias and, as a control condition to assess the statistical method, ‘even-number bias’ (i.e. treating candidates with odd and even exam numbers differently). Significance levels were Bonferroni corrected because of the large number of examiners being considered.
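The 'basket of co-examiners' idea can be sketched as follows: for each examiner, take the paired differences between their mark and a co-examiner's mark for the same candidate at the same station (candidate ability cancels in the difference), then test whether the mean difference departs from zero against a Bonferroni-corrected threshold. This is a simplified normal-approximation sketch with invented marks, not the paper's exact model:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

# Each pair is (examiner_mark, co_examiner_mark) for the same candidate at
# the same station, so the difference reflects the examiners, not the
# candidate. Large-sample normal approximation; data invented.
def flag_examiner(pairs, n_examiners, alpha=0.05):
    diffs = [a - b for a, b in pairs]
    se = stdev(diffs) / sqrt(len(diffs))
    z = mean(diffs) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value
    return p < alpha / n_examiners           # Bonferroni-corrected over examiners

# A markedly "hawkish" examiner: consistently 1-2 marks below co-examiners.
hawk = [(4, 5), (3, 5)] * 15
# An unremarkable examiner: small differences that average out to zero.
fair = [(5, 5), (6, 5), (4, 5), (5, 5)] * 8
```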
The results of 26 diets of PACES and six diets of nPACES were examined statistically to assess the extent of hawkishness, as well as sex bias and ethnicity bias in individual examiners. The control (even-number) condition suggested that about 5% of examiners were significant at an (uncorrected) 5% level, and that the method therefore worked as expected. As in a previous study (BMC Medical Education, 2006, 6:42), some examiners were hawkish or doveish relative to their peers. No examiners showed significant sex bias, and only a single examiner showed evidence consistent with ethnic bias. A re-analysis of the data considering only one examiner per station, as would be the case for many clinical examinations, showed that analysis with a single examiner runs a serious risk of false positive identifications, probably due to differences in case-mix and content-specificity.
In examinations where there are two independent examiners at a station, our method can assess the extent of bias against candidates with particular characteristics. The method would be far less sensitive in examinations with only a single examiner per station, as examiner variance would be confounded with candidate performance variance. The method works well, however, when there is more than one examiner at a station; in the current MRCP(UK) clinical examination, nPACES, it found possible sex bias in no examiners and possible ethnic bias in only one.
PMCID: PMC3737060  PMID: 23899223
Examiner bias; Hawks and doves; Sex; Ethnicity
5.  The effect of a brief social intervention on the examination results of UK medical students: a cluster randomised controlled trial 
Ethnic minority (EM) medical students and doctors underperform academically, but little evidence exists on how to ameliorate the problem. Psychologists Cohen et al. recently demonstrated that a written self-affirmation intervention substantially improved EM adolescents' school grades several months later. Cohen et al.'s methods were replicated in the different setting of UK undergraduate medical education.
All 348 Year 3 white (W) and EM students at one UK medical school were randomly allocated to an intervention condition (writing about one's own values) or a control condition (writing about another's values), via their tutor group. Students and assessors were blind to the existence of the study. Group comparisons on post-intervention written and OSCE (clinical) assessment scores adjusted for baseline written assessment scores were made using two-way analysis of covariance. All assessment scores were transformed to z-scores (mean = 0, standard deviation = 1) for ease of comparison. Comparisons between types of words used in essays were calculated using t-tests. The study was covered by University Ethics Committee guidelines.
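The z-score transformation described above is simply centring and scaling by the cohort mean and standard deviation; a minimal sketch with invented scores:

```python
from statistics import mean, pstdev

# z-scoring puts different assessments on a common scale:
# subtract the cohort mean, divide by the cohort SD.
# The raw scores below are invented for illustration.
def to_z(scores):
    m, s = mean(scores), pstdev(scores)
    return [(x - m) / s for x in scores]

z = to_z([52.0, 61.0, 58.0, 49.0, 70.0])   # now mean 0, SD 1
```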
Groups were statistically identical at baseline on demographic and psychological factors, and analysis was by intention to treat [intervention group EM n = 95, W n = 79; control group EM n = 77, W n = 84]. As predicted, there was a significant ethnicity by intervention interaction [F(4,334) = 5.74; p = 0.017] on the written assessment. Unexpectedly, this was due to decreased scores in the W intervention group [mean difference = 0.283 (95% CI = 0.093 to 0.474)], not improved EM intervention group scores [mean difference = -0.060 (95% CI = -0.268 to 0.148)]. On the OSCE, both W and EM intervention groups outperformed controls [mean difference = 0.261 (95% CI = 0.047 to 0.476); p = 0.013]. The intervention group used more optimistic words (p < 0.001) and more "I" and "self" pronouns in their essays (p < 0.001), whereas the control group used more "other" pronouns (p < 0.001) and more negations (p < 0.001).
Cohen et al.'s finding that a brief self-affirmation task narrowed the ethnic academic achievement gap was replicated on the written assessment but against expectations, this was due to reduced performance in the W group. On the OSCE, the intervention improved performance in both W and EM groups. In the intervention condition, participants tended to write about themselves and used more optimistic words than in the control group, indicating the task was completed as requested. The study shows that minimal interventions can have substantial educational outcomes several months later, which has implications for the multitude of seemingly trivial changes in teaching that are made on an everyday basis, whose consequences are never formally assessed.
PMCID: PMC2717066  PMID: 19552810
6.  The educational background and qualifications of UK medical students from ethnic minorities 
UK medical students and doctors from ethnic minorities underperform in undergraduate and postgraduate examinations. Although it is assumed that white (W) and non-white (NW) students enter medical school with similar qualifications, neither the qualifications of NW students, nor their educational background have been looked at in detail. This study uses two large-scale databases to examine the educational attainment of W and NW students.
Attainment at GCSE and A level, and selection for medical school in relation to ethnicity, were analysed in two separate databases. The 10th cohort of the Youth Cohort Study provided data on 13,698 students taking GCSEs in 1999 in England and Wales, and their subsequent progression to A level. UCAS provided data for 1,484,650 applicants applying for admission to UK universities and colleges in 2003, 2004 and 2005, of whom 52,557 applied to medical school, and 23,443 were accepted.
NW students achieve lower grades at GCSE overall, although achievement at the highest grades was similar to that of W students. NW students have higher educational aspirations, being more likely to go on to take A levels, especially in science and particularly chemistry, despite relatively lower achievement at GCSE. As a result, NW students perform less well at A level than W students, and hence NW students applying to university also have lower A-level grades than W students, both generally, and for medical school applicants. NW medical school entrants have lower A level grades than W entrants, with an effect size of about -0.10.
The effect size for the difference between white and non-white medical school entrants is about -0.10, which would mean that for a typical medical school examination there might be about 5 NW failures for each 4 W failures. However, this effect can only explain a portion of the overall effect size of about -0.32 found in undergraduate and postgraduate examinations.
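The arithmetic behind "5 NW failures for each 4 W failures" can be sketched under a normal model: shift one standard normal distribution down by d = 0.10 SD and compare the tail areas below a fixed pass mark. The 10% baseline failure rate below is an illustrative assumption, not a figure from the paper:

```python
from statistics import NormalDist

# Translate a small effect size into relative failure rates, assuming
# normally distributed exam scores. W scores ~ N(0, 1); NW scores shifted
# down by d = 0.10 SD; pass mark set so 10% of W candidates fail
# (the 10% baseline is an illustrative assumption).
d = 0.10
w_fail = 0.10
cut = NormalDist().inv_cdf(w_fail)     # pass mark on the W score scale
nw_fail = NormalDist().cdf(cut + d)    # tail area of the shifted distribution
ratio = nw_fail / w_fail               # roughly 1.2 with these assumptions
```

Under these assumptions the NW failure rate is around 20% higher than the W rate, broadly consistent with the 5:4 figure quoted above; the exact ratio depends on where the pass mark sits.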
PMCID: PMC2359745  PMID: 18416818
