We used 3 sets of scores as outcome variables in this article. The CBSE served as an indicator of academic performance during the first 5 semesters of the basic science (pre-clinical) stage. CPIE scores were used to explore the relationship between selection criteria and performance near the end of the medical program. Furthermore, medical students take examinations throughout their educational program, and we used the weighted average of grades for all courses taken before internship (msGPA) as an indicator of average performance throughout the medical education program.
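The credit-weighted averaging behind an msGPA-style summary can be sketched as follows; this is a minimal illustration assuming Iran's 0-to-20 grading scale, and the course grades and credit weights below are hypothetical, not values from the study:

```python
def weighted_gpa(grades, credits):
    """Credit-weighted average grade, as used for msGPA-style summaries."""
    if not grades or len(grades) != len(credits):
        raise ValueError("grades and credits must be non-empty and equal-length")
    return sum(g * c for g, c in zip(grades, credits)) / sum(credits)

# Hypothetical pre-internship courses, graded on the 0-20 scale:
grades = [16.0, 18.0, 14.0]   # course grades
credits = [2, 3, 4]           # course credit units (assumed weights)
msgpa = weighted_gpa(grades, credits)  # (32 + 54 + 56) / 9
```

Courses with more credit units pull the average more strongly, which is why msGPA differs from a simple mean of course grades.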
The results showed relatively weak correlations between the predictors and academic performance. Konkoor total score and sub-scores, as the sole admission criteria, were relatively poor predictors of medical students’ academic performance, especially of CPIE scores. The findings of this study indicate that the Konkoor is not a consistent predictor, and its predictive validity declines over the academic years in medical school. In addition, because there is considerable variation in the predictive validity of the different science and non-science subtests, revisions should be considered to the weighting system, or the use of some subtests should be limited. Notably, the outcome variables differed in how well the Konkoor subsections predicted them. While religious studies and foreign languages had the least predictive validity among the Konkoor subsections for the CBSE and CPIE, mathematics and physics showed the weakest relationship with msGPA among the predictor variables. Conversely, biology maintained a strong relationship with all 3 outcome variables. Evaluating the association between Konkoor scores and other outcome variables can pave the way for better interpretation of the predictive validity of the Konkoor’s subsections in the future. Although hsGPA was not included in the admission criteria for the cohort of students whose data were used in this study, it increased predictive value for the CPIE and for msGPA, in particular when added to regression models that already included Konkoor scores. However, this combination slightly reduced the explanatory power of the model for predicting CBSE scores. The discrepancy in the predictive validity of hsGPA across outcome variables could be attributed to different methods of examination. Both hsGPA and msGPA are average course grades of students over a relatively long period of time.
Both of these variables employ similar methods (multiple tests over a long period), whereas the Konkoor, CBSE, and CPIE are centralized exams that most students take only once in their lives. Perhaps different skills are needed to excel in these 2 different methods of evaluation. A similar rationale has been proposed to explain the difference in the predictive validity of the MCAT for medical school versus licensing exam performance [8].
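The incremental value of adding hsGPA to models that already contain Konkoor scores can be illustrated with a nested-model comparison of R-squared. The sketch below uses simulated data; the effect sizes, sample size, and variable relationships are all hypothetical assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Hypothetical simulated data: a Konkoor-like score, a correlated hsGPA,
# and a CPIE-like outcome influenced by both (all effect sizes assumed).
konkoor = rng.normal(0, 1, n)
hsgpa = 0.4 * konkoor + rng.normal(0, 1, n)
cpie = 0.3 * konkoor + 0.3 * hsgpa + rng.normal(0, 1, n)

def r_squared(X, y):
    """R^2 from an OLS fit with an intercept column."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_base = r_squared(konkoor[:, None], cpie)                    # Konkoor only
r2_full = r_squared(np.column_stack([konkoor, hsgpa]), cpie)   # Konkoor + hsGPA
delta_r2 = r2_full - r2_base  # incremental validity of hsGPA
```

The change in R-squared between the nested models is the usual summary of how much explanatory power the added predictor contributes beyond the baseline criteria.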
Selecting a limited number of students from a large pool of applicants, to produce “good doctors”, is the ultimate goal of the medical schools’ admission committees. This is of particular importance in an educational system in which less than 1% of the total number of applicants across the country are finally accepted into medical schools [7
]. Studies that examine the capacity of selection criteria to predict successful future performance are necessary to help admissions committees make sound, evidence-based decisions. Few studies have investigated the relationship between Konkoor scores and medical students’ performance in Iran, and all of them have been published in local journals. In one study, successful medical students (those without a history of dropping out and with a medical school GPA greater than 15 out of 20) performed better on the Konkoor science subsections than their unsuccessful counterparts (those with a history of dropping out) [17
]. In another study, there was no significant relationship between the rank on the Konkoor and the total score of critical thinking among 89 students at Isfahan University of Medical Sciences [18
]. The role of critical thinking has been highlighted in medical education, and it has been recommended that training in critical thinking be considered as a part of, or a pre-requisite to, the medical curriculum [19].
Various studies have investigated the validity of the North American Medical College Admission Test (MCAT) and the Graduate Australian Medical School Admissions Test (GAMSAT) for predicting medical school performance, and have assessed the extent to which these exams supplement the predictive power of other medical school admission criteria. Previous studies on the predictive validity of the MCAT have found that correlations between this exam and academic results mostly range from 0.3 to 0.6 [8
]. Moreover, a review of literature on the value of the GAMSAT in predicting medical school performance indicated that this exam alone is relatively poor at predicting academic performance [12
]. Nevertheless, it should be considered that these college entrance tests are used in conjunction with other admissions requirements, such as undergraduate GPA and interviews, which supplement the predictive power of these exams. As a result, these selection tools, in combination, are good predictors of students’ subsequent performance [10
]. Compared to the MCAT, and especially the GAMSAT, the Konkoor by itself is not a poor predictor of medical school performance. However, it is not supplemented by any other criteria, and this renders the overall admission criteria relatively poor predictors of performance at Iranian medical schools. A further difference between the Konkoor and these admission tests is that, as mentioned in the Methods section, negative marking is applied on the Konkoor, which can mask candidates’ true ability.
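To see how negative marking can mask true ability, consider the formula-scoring sketch below. The one-third penalty per wrong answer is an assumption for 4-option items, and the two candidate profiles are hypothetical:

```python
def formula_score(correct, wrong, penalty=1/3):
    """Formula (negative-marking) score: each wrong answer deducts a
    fraction of one correct answer's credit (1/3 assumed for 4-option items)."""
    return correct - penalty * wrong

# Two hypothetical candidates on a 100-item subtest:
cautious = formula_score(correct=60, wrong=0)   # omits the other 40 items
guesser = formula_score(correct=70, wrong=30)   # attempts every item
# Both end up with the same raw score despite different response patterns.
```

Because a test-taker who answers more items correctly but also guesses can tie with a more cautious test-taker of different true knowledge, the raw formula score alone does not distinguish the two profiles.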
Among the MCAT subtest scores, the biological sciences subtest has the largest predictive validity for measures of medical school performance [8
]. In predicting performance on medical board licensing examinations, the biological sciences and verbal reasoning subtests of the MCAT have better predictive validity than the other subtests [8
]. British studies have also documented that A-level chemistry and biology are good predictors of performance in basic medical science examinations [21
]. Among the sections of GAMSAT, section III (on reasoning in biological and physical sciences) is most strongly associated with year 1 academic performance [14
]. In the present study, the science sub-scores, especially biology, showed the largest predictive validity for all outcome variables. Although chemistry showed large coefficients in the correlation analyses, it was not retained in most of the final regression models.
Besides cognitive abilities, medical schools generally agree that non-cognitive abilities are important contributors to the ability of students to become competent physicians, and hence, it is not sufficient to admit students solely on the basis of academic achievement. In recent years, there have been many attempts, such as the Multiple Mini-Interview, to develop assessment tools that are capable of predicting the non-cognitive qualities of the candidates [22
]. Although evidence has emphasized the importance of non-cognitive characteristics to the admissions process, these characteristics have played no role in the admission of medical students in Iran.
Recently, there have been some reforms in an attempt to improve medical education in Iran, such as graduate entry to medical schools [6
]. Furthermore, in an attempt to improve the admissions procedure, hsGPAs have over the past 3 years been added to the Konkoor in the admissions process for all fields of study, including medicine [15]. Although these reforms are valuable, they need to be systematically assessed in the future.
An important limitation of the present study was that the validity coefficients were not corrected for range restriction or criterion unreliability; hence, the observed relationships may underestimate true validity. As mentioned above, applicants to all fields of study in the “experimental sciences” stream (including medicine) take one examination on one day (392,073 applicants for the 2003 Konkoor in experimental sciences). After the results are reported, eligible students are allowed to declare their top 100 field-department-university priorities in order of preference. They can select any field of study within the stream of the Konkoor in which they participated. Thus, the definition of the true
applicant pool for medicine is not clear for this exam, and therefore, correction for range restriction is not possible. At the same time, as described in previous studies [24], assuming that all the applicants in the “experimental sciences” stream constitute the applicant pool for this study would likely result in an overestimation, because the total testing sample across all fields of study in this stream is generally more variable (has greater standard deviations) than the true
applicants to medicine. In addition, data from MOHME show that the CBSE and CPIE have acceptable reliabilities, which have hovered around 0.95 in recent years. As a result, we assumed that criterion unreliability was not a substantial concern in this study, and we did not perform the corresponding corrections. Missing data, especially for students’ high school GPAs, were another limitation of this study. Most of the missing data are due to shortcomings of the MOHME database; we treated these as “missing at random” and thus unlikely to bias the results. Notably, with regard to Konkoor scores, there was no significant difference between the group of applicants with hsGPA data and the group for whom hsGPA data were missing (P value = 0.130). However, a small portion of the missing data is due to delays and attrition, which could confound the results; unfortunately, we have no precise information on this source of error. The outcome measurements in our study were knowledge-based examinations that test students’ recall of information. Consequently, we were not able to assess the clinical skills of medical students and their relationship with admission criteria as important measurements of medical students’ performance.
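For readers unfamiliar with the corrections discussed above, the sketch below shows the standard formulas: Thorndike's Case II correction for direct range restriction on the predictor, and the Spearman correction for criterion unreliability. The numerical inputs are illustrative assumptions, not values from this study:

```python
import math

def correct_range_restriction(r_obs, sd_restricted, sd_unrestricted):
    """Thorndike Case II correction for direct range restriction
    on the predictor (admitted group vs. full applicant pool)."""
    u = sd_unrestricted / sd_restricted
    return r_obs * u / math.sqrt(1 + r_obs**2 * (u**2 - 1))

def disattenuate(r_obs, rel_criterion):
    """Spearman correction of an observed validity coefficient
    for unreliability in the criterion measure."""
    return r_obs / math.sqrt(rel_criterion)

# Illustrative values: observed r = 0.30, admitted-group SD half that
# of all examinees, and criterion reliability of 0.95.
r_rr = correct_range_restriction(0.30, sd_restricted=1.0, sd_unrestricted=2.0)
r_dis = disattenuate(0.30, 0.95)
```

The range-restriction correction requires the standard deviation of the full applicant pool, which, as noted above, is undefined for the Konkoor; and with criterion reliabilities near 0.95, disattenuation changes the coefficient only marginally.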
This nationwide study followed one cohort of students from entrance to medical school through internship. We used different outcome variables to provide information about the relationship between Konkoor scores and academic performance in different learning phases. In contrast to many predictive validation studies [9
], the predictor and outcome variables used in this study were not school-dependent; therefore, we were able to conduct a single analysis for the entire national population. Nevertheless, the predictive validity of the Konkoor needs to be evaluated against other outcome variables. Work should be done on the validity of admission criteria for predicting performance on national residency exams and in other domains, such as practical skills, clinical performance, professionalism, and job satisfaction. These measurements are of particular importance because sound criterion measurement for non-cognitive domains is an issue that most studies have not addressed [25].