Most UK medical schools use aptitude tests during student selection, but large-scale studies of predictive validity are rare. This study assesses the United Kingdom Clinical Aptitude Test (UKCAT), and its four sub-scales, along with measures of educational attainment, individual and contextual socio-economic background factors, as predictors of performance in the first year of medical school training.
A prospective study of 4,811 students in 12 UK medical schools who took the UKCAT from 2006 to 2008 as part of their medical school application, and for whom first-year medical school examination results were available from 2008 to 2010.
UKCAT scores and educational attainment measures (General Certificate of Education (GCE): A-levels, and so on; or Scottish Qualifications Authority (SQA): Scottish Highers, and so on) were significant predictors of outcome. UKCAT predicted outcome better in female students than in male students, and better in mature than in non-mature students. The incremental validity of UKCAT, taking educational attainment into account, was significant but small. Medical school performance was also affected by sex (male students performing less well), ethnicity (non-White students performing less well), and a contextual measure of secondary schooling: students from secondary schools with greater average attainment at A-level (irrespective of public or private sector) performed less well. Multilevel modeling showed no differences between medical schools in the predictive ability of the various measures. The UKCAT sub-scales predicted similarly, except that Verbal Reasoning correlated positively with performance on Theory examinations but negatively with Skills assessments.
This collaborative study in 12 medical schools shows the power of large-scale studies of medical education for answering previously unanswerable but important questions about medical student selection, education and training. UKCAT has predictive validity as a predictor of medical school outcome, particularly in mature applicants to medical school. UKCAT offers small but significant incremental validity which is operationally valuable where medical schools are making selection decisions based on incomplete measures of educational attainment. The study confirms the validity of using all the existing measures of educational attainment in full at the time of selection decision-making. Contextual measures provide little additional predictive value, except that students from high attaining secondary schools perform less well, an effect previously shown for UK universities in general.
Medical student selection; Educational attainment; Aptitude tests; UKCAT; Socio-economic factors; Contextual measures
Over two-thirds of UK medical schools are augmenting their selection procedures for medical students by using the United Kingdom Clinical Aptitude Test (UKCAT), which employs tests of cognitive and non-cognitive personal qualities, but clear evidence of the tests’ predictive validity is lacking. This study explores whether academic performance and professional behaviours that are important in a health professional context can be predicted by these measures, when taken before or very early in the medical course.
This prospective cohort study follows the progress of the entire student cohort who entered Hull York Medical School in September 2007, having taken the UKCAT cognitive tests in 2006 and the non-cognitive tests a year later. This paper reports on the students’ first and second academic years of study. The main outcome measures were regular, repeated tutor assessment of individual students’ interpersonal skills and professional behaviour, and annual examination performance in the three domains of recall and application of knowledge, evaluation of data, and communication and practical clinical skills. The relationships between non-cognitive test scores, cognitive test scores, tutor assessments and examination results were explored using the Pearson product–moment correlations for each group of data; the data for students obtaining the top and bottom 20% of the summative examination results were compared using Analysis of Variance.
Personal qualities measured by non-cognitive tests showed a number of statistically significant relationships with ratings of behaviour made by tutors, with performance in each year’s objective structured clinical examinations (OSCEs), and with themed written summative examination marks in each year. Cognitive ability scores were also significantly related to each year’s examination results, but seldom to professional behaviours. The top 20% of examination achievers could be differentiated from the bottom 20% on both non-cognitive and cognitive measures.
This study shows numerous significant relationships between both cognitive and non-cognitive test scores, academic examination scores and indicators of professional behaviours in medical students. This suggests that measurement of non-cognitive personal qualities in applicants to medical school could make a useful contribution to selection and admission decisions. Further research is required in larger representative groups, and with more refined predictor measures and behavioural assessment methods, to establish beyond doubt the incremental validity of such measures over conventional cognitive assessments.
Objectives To determine whether the UK Clinical Aptitude Test (UKCAT) adds value to the selection process for school leaver applicants to medical and dental school, and in particular whether UKCAT can reduce the socioeconomic bias known to affect A levels.
Design Cohort study.
Setting Applicants to 23 UK medical and dental schools in 2006.
Participants 9884 applicants who took the UKCAT in the UK and who achieved at least three passes at A level in their school leaving examinations (53% of all applicants).
Main outcome measures Independent predictors of obtaining at least AAB at A level and UKCAT scores at or above the 30th centile for the cohort, for the subsections and the entire test.
Results Independent predictors of obtaining at least AAB at A level were white ethnicity (odds ratio 1.58, 95% confidence interval 1.41 to 1.77), professional or managerial background (1.39, 1.22 to 1.59), and independent or grammar schooling (2.26, 2.02 to 2.52) (all P<0.001). Independent predictors of achieving UKCAT scores at or above the 30th centile for the whole test were male sex (odds ratio 1.48, 1.32 to 1.66), white ethnicity (2.17, 1.94 to 2.43), professional or managerial background (1.34, 1.17 to 1.54), and independent or grammar schooling (1.91, 1.70 to 2.14) (all P<0.001). One major limitation of the study was that socioeconomic status was not volunteered by approximately 30% of the applicants. Those who withheld socioeconomic status data were significantly different from those who provided that information, which may have caused bias in the analysis.
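The odds ratios above summarise 2×2 contrasts (for example, white v non-white applicants achieving at least AAB). As a hedged illustration of how such a figure and its Wald confidence interval are computed, the sketch below uses invented counts, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Wald 95% confidence interval.

    Table layout (counts):
        a = exposed with outcome,   b = exposed without outcome,
        c = unexposed with outcome, d = unexposed without outcome.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Illustrative counts only: white applicants at/above the AAB threshold
# versus other applicants (these numbers are invented for the example).
or_, lo, hi = odds_ratio_ci(a=1200, b=1800, c=900, d=2100)
```

The same machinery underlies each odds ratio reported in the Results; a multivariable model additionally adjusts each estimate for the other predictors.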
Conclusions UKCAT was introduced with a high expectation of increasing the diversity and fairness in selection for UK medical and dental schools. This study of a major subgroup of applicants in the first year of operation suggests that it has an inherent favourable bias to men and students from a higher socioeconomic class or independent or grammar schools. However, it does provide a reasonable proxy for A levels in the selection process.
Objective To determine whether the use of the UK clinical aptitude test (UKCAT) in the medical schools admissions process reduces the relative disadvantage encountered by certain sociodemographic groups.
Design Prospective cohort study.
Setting Applicants to 22 UK medical schools in 2009 that were members of the consortium of institutions utilising the UKCAT as a component of their admissions process.
Participants 8459 applicants (24 844 applications) to UKCAT consortium member medical schools where data were available on advanced qualifications and socioeconomic background.
Main outcome measures The probability of an application resulting in an offer of a place on a medicine course according to seven educational and sociodemographic variables depending on how the UKCAT was used by the medical school (in borderline cases, as a factor in admissions, or as a threshold).
Results On univariate analysis all educational and sociodemographic variables were significantly associated with the relative odds of an application being successful. The multilevel multiple logistic regression models, however, varied between medical schools according to the way that the UKCAT was used. For example, a candidate from a non-professional background was much less likely to receive a conditional offer of a place compared with an applicant from a higher social class when applying to an institution using the test only in borderline cases (odds ratio 0.51, 95% confidence interval 0.45 to 0.60). No such effect was observed for such candidates applying to medical schools using the threshold approach (1.27, 0.84 to 1.91). These differences were generally reflected in the interactions observed when the analysis was repeated, pooling the data. Notably, candidates from several under-represented groups applying to medical schools that used a threshold approach to the UKCAT were less disadvantaged than those applying to the other institutions in the consortium. These effects were partially reflected in significant differences in the absolute proportion of such candidates finally taking up places in the different types of medical schools; stronger use of the test score (as a factor or threshold) was associated with a significantly increased odds of entrants being male (1.74, 1.25 to 2.41) and from a low socioeconomic background (3.57, 1.03 to 12.39). There was a non-significant trend towards entrants being from a state (non-grammar) school (1.60, 0.97 to 2.62) where a stronger use of the test was employed. Use of the test only in borderline cases was associated with increased odds of entrants having relatively low academic attainment (5.19, 2.02 to 13.33) and English as a second language (2.15, 1.03 to 4.48).
Conclusions The use of the UKCAT may lead to more equitable provision of offers to those applying to medical school from under-represented sociodemographic groups. This may translate into higher numbers of some, but not all, relatively disadvantaged students entering the UK medical profession.
The UK Clinical Aptitude Test (UKCAT) was introduced in 2006 as an additional tool for the selection of medical students. It tests mental ability in four distinct domains (Quantitative Reasoning, Verbal Reasoning, Abstract Reasoning, and Decision Analysis), and the results are available to students and admissions panels in advance of the selection process. As yet the predictive validity of the test against course performance is largely unknown.
The study objective was to determine whether UKCAT scores predict performance during the first two years of the 5-year undergraduate medical course at Nottingham.
We studied a single cohort of students, who entered Nottingham Medical School in October 2007 and had taken the UKCAT. We used linear regression analysis to identify independent predictors of marks for different parts of the 2-year preclinical course.
Data were available for 204/260 (78%) of the entry cohort. The UKCAT total score had little predictive value. Quantitative Reasoning was a significant independent predictor of course marks in Theme A ('The Cell'), (p = 0.005), and Verbal Reasoning predicted Theme C ('The Community') (p < 0.001), but otherwise the effects were slight or non-existent.
This limited study from a single entry cohort at one medical school suggests that the predictive value of the UKCAT, particularly the total score, is low. Section scores may predict success in specific types of course assessment.
The ultimate test of validity will not be available for some years, when current cohorts of students graduate. However, if this test of mental ability does not predict preclinical performance, it is arguably less likely to predict the outcome in the clinical years. Further research from medical schools with different types of curriculum and assessment is needed, with longitudinal studies throughout the course.
The UK Clinical Aptitude Test (UKCAT) was introduced to facilitate widening participation in medical and dental education in the UK by providing universities with a continuous variable to aid selection; one that might be less sensitive to the sociodemographic background of candidates compared to traditional measures of educational attainment. Initial research suggested that males, candidates from more advantaged socioeconomic backgrounds and those who attended independent or grammar schools performed better on the test. The introduction of the A* grade at A level permits more detailed analysis of the relationship between UKCAT scores, secondary educational attainment and sociodemographic variables. Thus, our aim was to further assess whether the UKCAT is likely to add incremental value over A level (predicted or actual) attainment in the selection process.
Data relating to UKCAT and A level performance from 8,180 candidates applying to medicine in 2009 who had complete information relating to six key sociodemographic variables were analysed. A series of regression analyses were conducted in order to evaluate the ability of sociodemographic status to predict performance on two outcome measures: A level ‘best of three’ tariff score; and the UKCAT scores.
In this sample A level attainment was independently and positively predicted by four sociodemographic variables (independent/grammar schooling, White ethnicity, age and professional social class background). These variables also independently and positively predicted UKCAT scores. There was a suggestion that UKCAT scores were less sensitive to educational background compared to A level attainment. In contrast to A level attainment, UKCAT score was independently and positively predicted by having English as a first language and male sex.
Our findings are consistent with a previous report; most of the sociodemographic factors that predict A level attainment also predict UKCAT performance. However, compared to A levels, males and those speaking English as a first language perform better on UKCAT. Our findings suggest that UKCAT scores may be more influenced by sex and less sensitive to school type compared to A levels. These factors must be considered by institutions utilising the UKCAT as a component of the medical and dental school selection process.
Medical student selection; Educational attainment; Aptitude tests; UKCAT; Socio-economic factors
The UK Clinical Aptitude Test (UKCAT) was introduced in 2006 as an additional tool for the selection of medical students. It tests mental ability in four distinct domains (Verbal Reasoning, Quantitative Reasoning, Abstract Reasoning, and Decision Analysis), and the results are available to students and admission panels in advance of the selection process. Our first study showed little evidence of any predictive validity for performance in the first two years of the Nottingham undergraduate course.
The study objective was to determine whether the UKCAT scores had any predictive value for the later parts of the course, largely delivered via clinical placements.
Students entering the course in 2007 and who had taken the UKCAT were asked for permission to use their anonymised data in research. The UKCAT scores were incorporated into a database with routine pre-admission socio-demographics and subsequent course performance data. Correlation analysis was followed by hierarchical multivariate linear regression.
The original study group comprised 204/254 (80%) of the full entry cohort. With attrition over the five years of the course this fell to 185 (73%) by Year 5. The Verbal Reasoning score and the UKCAT Total score both showed some univariate correlations with clinical knowledge marks, and slightly weaker correlations with clinical skills. No part of the UKCAT proved to be an independent predictor of clinical course marks, whereas prior attainment was a highly significant predictor (p < 0.001).
This study of one cohort of Nottingham medical students showed that UKCAT scores at admission did not independently predict subsequent performance on the course. Whilst the test adds another dimension to the selection process, its fairness and validity in selecting promising students remains unproven, and requires wider investigation and debate by other schools.
Measures used for medical student selection should predict future performance during training. A problem for any selection study is that predictor-outcome correlations are known only in those who have been selected, whereas selectors need to know how measures would predict in the entire pool of applicants. That problem of interpretation can be solved by calculating construct-level predictive validity, an estimate of true predictor-outcome correlation across the range of applicant abilities.
Construct-level predictive validities were calculated in six cohort studies of medical student selection and training (student entry, 1972 to 2009) for a range of predictors, including A-levels, General Certificates of Secondary Education (GCSEs)/O-levels, and aptitude tests (AH5 and UK Clinical Aptitude Test (UKCAT)). Outcomes included undergraduate basic medical science and finals assessments, as well as postgraduate measures of Membership of the Royal Colleges of Physicians of the United Kingdom (MRCP(UK)) performance and entry in the Specialist Register. Construct-level predictive validity was calculated with the method of Hunter, Schmidt and Le (2006), adapted to correct for right-censorship of examination results due to grade inflation.
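The full Hunter, Schmidt and Le (2006) method, with the right-censorship adaptation, is beyond the scope of an abstract, but the simpler Thorndike Case II correction for direct range restriction conveys why construct-level validities exceed raw predictor-outcome correlations. This sketch assumes a direct-restriction model and an invented restriction ratio u; it is illustrative only, not the paper's calculation:

```python
import math

def correct_range_restriction(r_restricted, u):
    """Thorndike Case II correction for direct range restriction.

    r_restricted: predictor-outcome correlation observed among selected
                  (i.e. admitted) students.
    u: SD of the predictor in the whole applicant pool divided by its SD
       in the selected group (u > 1 when the intake is restricted).
    """
    return r_restricted * u / math.sqrt(1 + r_restricted**2 * (u**2 - 1))

# With the mean observed correlation of .171 and an invented u of 3,
# the corrected validity lands near the reported mean CLPV of .450.
rho = correct_range_restriction(0.171, 3.0)
```

Selection into medical school restricts the range of A-level and aptitude scores among entrants, so observed correlations understate the validity that would hold across the full applicant pool.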
Meta-regression analyzed 57 separate predictor-outcome correlations (POCs) and construct-level predictive validities (CLPVs). Mean CLPVs are substantially higher (.450) than mean POCs (.171). Mean CLPVs for first-year examinations were high for A-levels (.809; CI: .501 to .935), and lower for GCSEs/O-levels (.332; CI: .024 to .583) and UKCAT (.245; CI: .207 to .276). A-levels had higher CLPVs for all undergraduate and postgraduate assessments than did GCSEs/O-levels and intellectual aptitude tests. CLPVs of educational attainment measures decline somewhat during training, but continue to predict postgraduate performance. Intellectual aptitude tests have lower CLPVs than A-levels or GCSEs/O-levels.
Educational attainment has strong CLPVs for undergraduate and postgraduate performance, accounting for perhaps 65% of true variance in first-year performance. Such CLPVs justify the use of educational attainment measures in selection, but also raise a key theoretical question concerning the remaining 35% of variance (after measurement error, range restriction and right-censorship have been taken into account). Just as in astrophysics ‘dark matter’ and ‘dark energy’ are posited to balance various theoretical equations, so medical student selection must also have its ‘dark variance’, whose nature is not yet properly characterized but which explains a third of the variation in performance during training. Some of this variance probably relates to factors that are unpredictable at selection, such as illness or other life events, but some is probably also associated with factors such as personality, motivation or study skills.
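The 65%/35% split quoted above follows directly from squaring the first-year A-level CLPV; as a check on the arithmetic:

```python
# Variance in first-year performance explained by A-levels, using the
# construct-level predictive validity of .809 reported above.
clpv = 0.809
explained = clpv ** 2          # r-squared: about 0.65 of true variance
dark_variance = 1 - explained  # the 'dark variance': about 0.35
```

The same squaring applies to any CLPV in the paper: a UKCAT CLPV of .245, for instance, corresponds to about 6% of true variance explained.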
Medical student selection; Undergraduate performance; Postgraduate performance; Educational attainment; Aptitude tests; Criterion-related construct validity; Range restriction; Right censorship; Grade inflation; Markov Chain Monte Carlo algorithm
Medical student selection is an important but difficult task. Three recent papers by McManus et al. in BMC Medicine have re-examined the role of tests of attainment of learning (A levels, GCSEs, SQA) and of aptitude (AH5, UKCAT), but on a much larger scale than previously attempted. They conclude that A levels are still the best predictor of future success at medical school and beyond. However, A levels account for only 65% of the variance in performance that is found. Therefore, more work is needed to establish relevant assessment of the other 35%.
Please see related research articles http://www.biomedcentral.com/1741-7015/11/242, http://www.biomedcentral.com/1741-7015/11/243 and http://www.biomedcentral.com/1741-7015/11/244.
Medical School Admission; Predictors of performance; Aptitude testing
Internationally, tests of general mental ability are used in the selection of medical students. Examples include the Medical College Admission Test, Undergraduate Medicine and Health Sciences Admission Test and the UK Clinical Aptitude Test. The most widely used measure of their efficacy is predictive validity.
A new tool, the Health Professions Admission Test-Ireland (HPAT-Ireland), was introduced in 2009. Traditionally, selection to Irish undergraduate medical schools relied on academic achievement. Since 2009, Irish and EU applicants have been selected on a combination of their secondary school academic record (measured predominantly by the Leaving Certificate Examination) and HPAT-Ireland score. This is the first study to report on the predictive validity of the HPAT-Ireland for early undergraduate assessments of communication and clinical skills.
Students enrolled at two Irish medical schools in 2009 were followed up for two years. Data collected were gender; HPAT-Ireland total and subsection scores; Leaving Certificate Examination plus HPAT-Ireland combined score; Year 1 Objective Structured Clinical Examination (OSCE) scores (total score, communication and clinical subtest scores); Year 1 Multiple Choice Question marks; and Year 2 OSCE and subtest scores. We report descriptive statistics, Pearson correlation coefficients and multiple linear regression models.
Data were available for 312 students. In Year 1 none of the selection criteria were significantly related to student OSCE performance. The Leaving Certificate Examination and Leaving Certificate plus HPAT-Ireland combined scores correlated with MCQ marks.
In Year 2 a series of significant correlations emerged between the HPAT-Ireland, and subsections thereof, and OSCE Communication Z-scores, OSCE Clinical Z-scores, and Total OSCE Z-scores. However, on multiple regression only the relationship between Total OSCE score and Total HPAT-Ireland score remained significant, although the predictive power was modest.
We found that none of our selection criteria strongly predicted clinical and communication skills. The HPAT-Ireland appears to measure ability in domains different from those assessed by the Leaving Certificate Examination. While some significant associations did emerge in Year 2 between the HPAT-Ireland and total OSCE scores, further evaluation is required to establish whether this pattern continues during the senior years of the medical course.
Selection; Medical; Student; Validity; Predictive; HPAT-Ireland; Assessment; Cognitive; Ability
Admission to medical school is one of the most highly competitive entry points in higher education. Considerable investment is made by universities to develop selection processes that aim to identify the most appropriate candidates for their medical programs. This paper explores data from three undergraduate medical schools to offer a critical perspective of predictive validity in medical admissions.
This study examined 650 undergraduate medical students from three Australian universities as they progressed through the initial years of medical school (accounting for approximately 25 per cent of all commencing undergraduate medical students in Australia in 2006 and 2007). Admissions criteria (aptitude test score based on UMAT, school result and interview score) were correlated with GPA over four years of study. Standard regression of each of the three admissions variables on GPA, for each institution at each year level was also conducted.
Overall, the data showed positive correlations between performance in medical school, school achievement and UMAT, but not interview scores. However, there were substantial differences between schools, across year levels, and between sections of the UMAT. Despite this, each admission variable was shown to add towards explaining course performance, net of the other variables.
The findings suggest the strength of multiple admissions tools in predicting outcomes of medical students. However, they also highlight the large differences in outcomes achieved by different schools, thus emphasising the pitfalls of generalising results from predictive validity studies without recognising the diverse ways in which they are designed and the variation in the institutional contexts in which they are administered. The assumption that high-positive correlations are desirable (or even expected) in these studies is also problematised.
Selection; Predictive validity; Admissions policy
All applicants and those who subsequently enrolled for the 1964-65 session in the Western medical schools were studied with the hope that it would encourage a national registration of applicants. Seven hundred and sixty-four applicants completed 865 applications for 288 places in four schools. Although the principal factor in selecting medical students in all Western schools is pre-medical performance, 49 “good-quality” (academically of good standing and under 30 years of age) resident applicants were not accepted in their own provincial school, and 49 places were filled with “poor-quality” students.
The loss of good applicants to the Western medical schools and the 20% overlap of each school's applicant pool with that of other schools suggests that objective standards of quality must be developed, and that a regular annual national assessment of applicants should be conducted by the Association of Canadian Medical Colleges.
UK medical students and doctors from ethnic minorities underperform in undergraduate and postgraduate examinations. Although it is assumed that white (W) and non-white (NW) students enter medical school with similar qualifications, neither the qualifications of NW students, nor their educational background have been looked at in detail. This study uses two large-scale databases to examine the educational attainment of W and NW students.
Attainment at GCSE and A level, and selection for medical school in relation to ethnicity, were analysed in two separate databases. The 10th cohort of the Youth Cohort Study provided data on 13,698 students taking GCSEs in 1999 in England and Wales, and their subsequent progression to A level. UCAS provided data for 1,484,650 applicants applying for admission to UK universities and colleges in 2003, 2004 and 2005, of whom 52,557 applied to medical school, and 23,443 were accepted.
NW students achieve lower grades at GCSE overall, although achievement at the highest grades was similar to that of W students. NW students have higher educational aspirations, being more likely to go on to take A levels, especially in science and particularly chemistry, despite relatively lower achievement at GCSE. As a result, NW students perform less well at A level than W students, and hence NW students applying to university also have lower A-level grades than W students, both generally, and for medical school applicants. NW medical school entrants have lower A level grades than W entrants, with an effect size of about -0.10.
The effect size for the difference between white and non-white medical school entrants is about -0.10, which would mean that for a typical medical school examination there might be about five NW failures for every four W failures. However, this effect can explain only a portion of the overall effect size of about -0.32 found in undergraduate and postgraduate examinations.
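How a mean shift of 0.10 SD translates into a failure ratio depends on where the pass mark sits. The following sketch, which assumes normally distributed marks and an invented pass mark at which 10% of W students fail, shows that the result is of the same order as the quoted 5:4 ratio:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Illustrative only: place the pass mark where 10% of W students fail.
cutoff = -1.2816                      # 10th percentile of the W distribution
fail_w = normal_cdf(cutoff)           # about 0.10 by construction
fail_nw = normal_cdf(cutoff + 0.10)   # NW distribution shifted down 0.10 SD
ratio = fail_nw / fail_w              # modestly above 1, approaching 5:4
```

A stricter pass mark (a smaller baseline failure rate) pushes the ratio higher, which is why a small mean difference can produce a noticeable imbalance in failures.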
Baylor College of Medicine has conducted a summer enrichment program for minority/disadvantaged premedical students since 1969. Follow-up data on medical school application and acceptance for participants from 1980 through 1984 were analyzed in relation to selected preprogram variables: cumulative college grade point average, total Scholastic Aptitude Test score, competitiveness of undergraduate college, sex, and ethnicity. Results of univariate and multivariate analyses indicated that: 1) females were significantly less likely to apply to medical school than males; 2) females had significantly lower mean MCAT scores (5.9 vs 7.2) even though their preprogram academic performance was comparable to that of the males; and 3) after controlling for MCAT scores, none of the preprogram variables were significant in predicting medical school acceptance. These findings suggest the need for research to explain the discrepancy between male and female MCAT performance and frequency of medical school application among summer program participants. The findings also have implications for the type of counseling provided to female participants in summer enrichment programs.
Selection of medical students in the UK is still largely based on prior academic achievement, although doubts have been expressed as to whether performance in earlier life is predictive of outcomes later in medical school or post-graduate education. This study analyses data from five longitudinal studies of UK medical students and doctors from the early 1970s until the early 2000s. Two of the studies used the AH5, a group test of general intelligence (that is, intellectual aptitude). Sex and ethnic differences were also analyzed in light of the changing demographics of medical students over the past decades.
Data from five cohort studies were available: the Westminster Study (began clinical studies from 1975 to 1982), the 1980, 1985, and 1990 cohort studies (entered medical school in 1981, 1986, and 1991), and the University College London Medical School (UCLMS) Cohort Study (entered clinical studies in 2005 and 2006). Different studies had different outcome measures, but most had performance on basic medical sciences and clinical examinations at medical school, performance in Membership of the Royal Colleges of Physicians (MRCP(UK)) examinations, and being on the General Medical Council Specialist Register.
Correlation matrices and path analyses are presented. There were robust correlations across different years at medical school, and medical school performance also predicted MRCP(UK) performance and being on the GMC Specialist Register. A-levels correlated somewhat less with undergraduate and post-graduate performance, but there was restriction of range in entrants. General Certificate of Secondary Education (GCSE)/O-level results also predicted undergraduate and post-graduate outcomes, though less strongly than A-level results did, and may have incremental validity for clinical and post-graduate performance. The AH5 had some significant correlations with outcome, but they were inconsistent. Sex and ethnicity also had predictive effects on measures of educational attainment and on undergraduate and post-graduate performance. Women performed better in assessments but were less likely to be on the Specialist Register. Non-white participants generally underperformed in undergraduate and post-graduate assessments, but were equally likely to be on the Specialist Register. There was a suggestion of smaller ethnicity effects in earlier studies.
The existence of the Academic Backbone concept is strongly supported, with attainment at secondary school predicting performance in undergraduate and post-graduate medical assessments, and the effects spanning many years. The Academic Backbone is conceptualized in terms of the development of more sophisticated underlying structures of knowledge ('cognitive capital’ and 'medical capital’). The Academic Backbone provides strong support for using measures of educational attainment, particularly A-levels, in student selection.
Academic Backbone; Secondary school attainment; Undergraduate medical education; Post-graduate medical education; Longitudinal analyses; Continuities; Medical student selection; Cognitive capital; Medical capital; Aptitude tests
A national survey of medical school admissions administrators was used to assess the acceptability of applicants' qualifications that included degrees earned partly online, partly in a community college, or in a traditional program. A questionnaire was sent from The Florida State University in 2007 to admissions administrators in the 125 accredited allopathic medical schools in the United States. In each of three situations, respondents were asked to select one of two hypothetical applicants to invite for an interview. The applicant whose coursework was taken in a traditional residential setting was overwhelmingly preferred over the applicant holding a degree earned partly online. Further analysis indicated that online courses were perceived as not presenting sufficient opportunity for students to develop important social skills through interaction with other students and mentors.
Graduate school admissions; online degrees; acceptability
In 1998, a new selection process which utilised an aptitude test and an interview in addition to previous academic achievement was introduced into an Australian undergraduate medical course.
To test the outcomes of the selection criteria over an 11-year period.
1174 students who entered the course from secondary school and who enrolled in the MBBS from 1999 through 2009 were studied in relation to specific course outcomes. Regression analyses using entry scores, sex, and age as independent variables assessed their relative value in predicting subsequent academic performance in the 6-year course. The main outcome measures were the weighted average mark for each academic year level, together with results in specific units, defined as either 'knowledge'-based or 'clinically' based.
Previous academic performance and female sex were the major independent positive predictors of performance in the course. The interview score showed positive predictive power during the latter years of the course and in a range of 'clinically' based units. This relationship was mediated predominantly by the score for communication skills.
Results support combining prior academic achievement with the assessment of communication skills in a structured interview as selection criteria into this undergraduate medical course.
This paper describes the use of CASIP, an authoring language for CAI materials with unrestricted natural language input, to produce brief tests that assess in an objective and standardized manner a variety of aptitudes in medical students, residents or applicants to medical school. These aptitudes include decisiveness, methodical thinking, ethical values, crisis management ability, persistence, combativeness, and ease of frustration.
Knowledge in natural sciences generally predicts study performance in the first two years of the medical curriculum. In order to reduce delay and dropout in the preclinical years, Hamburg Medical School decided to develop a natural science test (HAM-Nat) for student selection. In the present study, two different approaches to scale construction are presented: a unidimensional scale and a scale composed of three subject specific dimensions. Their psychometric properties and relations to academic success are compared.
334 first year medical students of the 2006 cohort responded to 52 multiple choice items from biology, physics, and chemistry. For the construction of scales we generated two random subsamples, one for development and one for validation. In the development sample, unidimensional item sets were extracted from the item pool by means of weighted least squares (WLS) factor analysis, and subsequently fitted to the Rasch model. In the validation sample, the scales were subjected to confirmatory factor analysis and, again, Rasch modelling. The outcome measure was academic success after two years.
Although the correlational structure within the item set is weak, a unidimensional scale could be fitted to the Rasch model. However, psychometric properties of this scale deteriorated in the validation sample. A model with three highly correlated subject specific factors performed better. All summary scales predicted academic success with an odds ratio of about 2.0. Prediction was independent of high school grades and there was a slight tendency for prediction to be better in females than in males.
A model separating biology, physics, and chemistry into different Rasch scales seems to be more suitable for item bank development than a unidimensional model, even when these scales are highly correlated and enter into a global score. When such a combination scale is used to select the upper quartile of applicants, the proportion of successful completion of the curriculum after two years is expected to rise substantially.
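The reported effect size, an odds ratio of about 2.0 per scale unit, can be translated into pass probabilities with a little arithmetic. The sketch below does this conversion; the function name, the 50% baseline, and the score units are illustrative assumptions, not values from the study.

```python
import math

def pass_probability(base_logodds, odds_ratio, score_units):
    """Pass probability for a student `score_units` above the reference point,
    given a baseline log-odds and an odds ratio per unit of test score."""
    logodds = base_logodds + math.log(odds_ratio) * score_units
    return 1.0 / (1.0 + math.exp(-logodds))

# Illustrative baseline: a 50% success rate for the reference applicant.
p_ref = pass_probability(0.0, 2.0, 0.0)
p_plus1 = pass_probability(0.0, 2.0, 1.0)   # one scale unit higher
print(round(p_ref, 3), round(p_plus1, 3))   # 0.5 0.667
```

Under these assumptions, one extra unit on the combined scale moves a 50% pass probability to about 67%, which is the intuition behind selecting the upper quartile of applicants.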
Diversity improves all students’ academic experiences and their abilities to work with patients from differing backgrounds. Little is known about what makes minority students select one medical school over another.
To measure the impact of the existence of a health disparities course in the medical school curriculum on recruitment of underrepresented minority (URM) college students to the University of Chicago Pritzker School of Medicine.
All medical school applicants interviewed in academic years 2007 and 2008 at the University of Chicago Pritzker School of Medicine (PSOM) attended an orientation that detailed a required health care disparities curriculum introduced in 2006. Matriculants completed a precourse survey measuring the impact of the existence of the course on their decision to attend PSOM. URM was defined by the Association of American Medical Colleges as Black, American Indian/Alaskan Native, Native Hawaiian, Mexican American, and Mainland Puerto Rican.
Precourse survey response rates were 100% and 96% for the entering classes of 2007 and 2008, respectively. Among students reporting knowledge of the course (128/210, 61%), URM students (27/37, 73%) were more likely than non-URM students (38/91, 42%) to report that knowledge of the existence of the course influenced their decision to attend PSOM (p = 0.002). Analysis of qualitative responses revealed that students felt the curriculum gave the school a reputation for placing importance on health disparities and social justice issues. URM student enrollment at PSOM, which had remained stable in 2005 and 2006 at 12% and 11% of the incoming classes, respectively, increased to 22% of the total class size in 2007 (p = 0.03) and 19% in 2008.
The required health disparities course may have contributed to the increased enrollment of URM students at PSOM in 2007 and 2008.
health disparities; curriculum; education; medical students; underserved
Introduction: The present study examines the question whether the selection of dental students should be based solely on average school-leaving grades (GPA) or whether it could be improved by using a subject-specific aptitude test.
Methods: The HAM-Nat Natural Sciences Test was piloted with freshmen during their first study week in 2006 and 2007. In 2009 and 2010 it was used in the dental student selection process. The sample size in the regression models varies between 32 and 55 students.
Results: Used as a supplement to the German GPA, the HAM-Nat test explained up to 12% of the variance in preclinical examination performance. We confirmed the prognostic validity of GPA reported in earlier studies in some, but not all, of the individual preclinical examination results.
Conclusion: The HAM-Nat test is a reliable selection tool for dental students. Use of the HAM-Nat yielded a significant improvement in prediction of preclinical academic success in dentistry.
student selection dentistry; prediction of study success; admission test
Objective To study medical students' views about the quality of the teaching they receive during their undergraduate training, especially in terms of the hidden curriculum.
Design Semistructured interviews with individual students.
Setting One medical school in the United Kingdom.
Participants 36 undergraduate medical students, across all stages of their training, selected by random and quota sampling, stratified by sex and ethnicity, with the whole medical school population as a sampling frame.
Main outcome measures Medical students' experiences and perceptions of the quality of teaching received during their undergraduate training.
Results Students reported many examples of positive role models and effective, approachable teachers, with valued characteristics perceived according to traditional gendered stereotypes. They also described a hierarchical and competitive atmosphere in the medical school, in which haphazard instruction and teaching by humiliation occur, especially during the clinical training years.
Conclusions Following on from the recent reforms of the manifest curriculum, the hidden curriculum now needs attention to produce the necessary fundamental changes in the culture of undergraduate medical education.
More and more medical school applicants in England and Wales are gaining the maximum A level grades of AAA, and the UK Government has now agreed to pilot the introduction of a new A* grade. This study assessed the likely utility of additional grades of A* or A**.
Statistical analysis of university selection data collected by the Universities and Colleges Admissions Service (UCAS), consisting of data from 1,484,650 applicants to UCAS for the years 2003, 2004 and 2005, of whom 23,628 were medical school applicants, and of these 14,510 were medical school entrants from the UK, aged under 21, and with three or four A level results. The main outcome measure was the number of points scored by applicants in their best three A level subjects.
Censored normal distributions showed a good fit to the data using maximum likelihood modelling. If it were the case that A* grades had already been introduced, then at present about 11% of medical school applicants and 18% of entrants would achieve the maximum score of 3 A*s. Projections for the years 2010, 2015 and 2020 suggest that about 26%, 35% and 46% of medical school entrants would have 3 A* grades.
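The censored-normal approach can be sketched numerically: latent point scores are capped at the maximum possible grade, and the distribution parameters are recovered by maximum likelihood, with capped observations contributing the normal survival function rather than the density. All specific numbers below (the cap, the latent mean and SD, the grid) are invented for illustration.

```python
import math
import random

def norm_logpdf(x, mu, sigma):
    """Log density of N(mu, sigma) at x."""
    return -0.5 * math.log(2 * math.pi) - math.log(sigma) - 0.5 * ((x - mu) / sigma) ** 2

def norm_logsf(x, mu, sigma):
    """Log survival function log P(X >= x), via the complementary error function."""
    z = (x - mu) / (sigma * math.sqrt(2))
    return math.log(max(0.5 * math.erfc(z), 1e-300))

def censored_loglik(data, cap, mu, sigma):
    """Log-likelihood where observations at the cap are right-censored."""
    return sum(
        norm_logsf(cap, mu, sigma) if x >= cap else norm_logpdf(x, mu, sigma)
        for x in data
    )

random.seed(0)
CAP = 360.0                                  # hypothetical ceiling: three A grades
latent = [random.gauss(330, 30) for _ in range(400)]
observed = [min(x, CAP) for x in latent]     # scores pile up at the ceiling

# Recover (mu, sigma) by maximum likelihood over a coarse parameter grid.
best = max(
    ((mu, s) for mu in range(300, 361, 5) for s in range(10, 51, 5)),
    key=lambda p: censored_loglik(observed, CAP, p[0], p[1]),
)
print(best)  # estimates should land near the true (330, 30)
```

The key point mirrored from the study is that the latent ability distribution can be estimated even when a substantial fraction of candidates sit at the maximum grade, which is what makes projections of future A* proportions possible.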
Although A* grades at A level will help in medical student selection, within a decade, a third of medical students will gain maximum grades. While revising the A level system there is a strong argument, as proposed in the Tomlinson Report, for introducing an A** grade.
The General Medical Council (GMC) is holding consultations in order to decide on the proposed changes to the undergraduate medical assessment. In the last round of consultation only eight medical students formally responded nationally.
To determine the views of a larger proportion of final year medical students across the country on the proposed changes to the undergraduate medical assessment.
An online national survey of 10 medical schools, from which 401 responses from final year medical students were collected.
Results and discussion The results indicate medical students' views on the GMC's proposed changes to standardise the assessment system. The majority of students were in favour of having a say in any changes to their future assessment. They agreed with the principle that there should be consistency between assessments at different medical schools, and felt that their current results did not reflect preparedness to practise.
To evaluate the scientific reasoning in basic science among undergraduate medical students, we established the National Medical Science Olympiad in Iran. In this Olympiad, the drawing of a concept map was used to evaluate a student's knowledge framework; students' ability in hypothesis generation and testing were also evaluated in four different steps. All medical students were invited to participate in this program. Finally, 133 undergraduate medical students with average grades ≥ 16/20 from 45 different medical schools in Iran were selected. The program took the form of four exams: drawing a concept map (Exam I), hypothesis generation (Exam II), choosing variables based on the hypothesis (Exam III), measuring scientific thought (Exam IV). The examinees were asked to complete all examination items in their own time without using textbooks, websites, or personal consultations. Data were presented as mean ± SE of each parameter. The correlation coefficient between students' scores in each exam with the total final score and average grade was calculated using the Spearman test.
Out of a possible score of 200, the mean ± SE for each exam was as follows: 183.88 ± 5.590 for Exam I; 78.68 ± 9.168 for Exam II; 92.04 ± 2.503 for Exam III; 106.13 ± 2.345 for Exam IV. The correlation of each exam score with the total final score was significant (p < 0.001), and a scatter plot showed a linear relationship between each exam score and the total final score. This meant that students with higher final scores also performed better in each individual exam, including drawing a meaningful concept map.
The average grade was significantly correlated with the total final score (R = 0.770), (p < 0.001). There was also a significant correlation between each exam score and the average grade (p < 0.001). The highest correlation was observed between Exam I (R = 0.7708) and the average grade. This means students with higher average grades had better grades in each exam, especially in drawing the concept map.
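The Spearman correlations reported here are simply Pearson correlations computed on ranks (with ties assigned their average rank). A minimal sketch, with exam scores and average grades invented purely for illustration:

```python
def ranks(xs):
    """Average ranks (1-based), handling ties by assigning the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank for the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(a, b):
    """Plain Pearson product-moment correlation."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def spearman(a, b):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    return pearson(ranks(a), ranks(b))

exam_scores = [183, 170, 192, 160, 175, 188]        # hypothetical Exam I scores
avg_grades = [17.0, 16.8, 18.5, 16.0, 16.2, 18.0]   # hypothetical grades out of 20
print(round(spearman(exam_scores, avg_grades), 3))  # → 0.943
```

Because it operates on ranks, Spearman's rho is appropriate for the bounded, non-normal score distributions typical of examination data, which is presumably why the authors chose it over Pearson's r.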
We hope that this competition will encourage medical schools to integrate theory and practice, analyze data, and read research articles. Our findings relate to a selected population, and our data may not be applicable to all medical students. Therefore, further studies are required to validate our results.