Am J Pharm Educ. 2010 March 10; 74(2): 34.

Malaysian Pharmacy Students' Assessment of an Objective Structured Clinical Examination (OSCE)

Ahmed Awaisu, PhD, MPharm; Norny Syafinaz Abd Rahman, MPharm, BPharm; Mohamad Haniki Nik Mohamed, PharmD, BPharm; Siti Halimah Bux Rahman Bux, BPharm; and Nor Ilyani Mohamed Nazar, MPharm, BPharm

Abstract

Objective

To implement and determine the effectiveness of an objective structured clinical examination (OSCE) to assess fourth-year pharmacy students' skills in a clinical pharmacy course.

Design

A 13-station OSCE was designed and implemented in the 2007-2008 academic year as part of the assessment methods for a clinical pharmacy course. The broad competencies tested in the OSCE included: patient counseling and communication, clinical pharmacokinetics (CPK), identification and resolution of drug-related problems (DRPs), and literature evaluation/drug information provision.

Assessment

Immediately after all students completed the OSCE, a questionnaire was administered containing items on the clarity of written instructions, difficulty of the tasks, perceived degree of learning gained and needed, and the suitability of the references or literature resources provided. More than 70% of the students felt that a higher degree of learning was needed to accomplish the tasks at the 2 DRP stations and 2 CPK stations, and the majority felt the written instructions provided at the phenytoin CPK station were difficult to understand. Although about 60% of the students rated the OSCE as a difficult form of assessment, 75% said it should be used more often and 81% perceived that they learned a lot from it.

Conclusion

Although most students felt that the OSCE accurately assessed their skills, a majority felt the tasks at some stations demanded a higher degree of learning than they had achieved. This may indicate deficiencies in the students' learning abilities, the course curriculum, or the OSCE station design. Future efforts should include providing clearer instructions at OSCE stations and balancing the complexity of the competencies assessed.

Keywords: clinical competencies, objective structured clinical examination, bachelor of pharmacy, Malaysia

INTRODUCTION

Colleges and schools of pharmacy traditionally have assessed students' performance using multiple-choice and essay questions. However, these methods may not adequately evaluate mastery of essential skills or measure cognitive learning in clinical settings.1,2 Furthermore, clinical faculty members often see a disparity between the performance of high achievers in the classroom and in clinical settings.3 This inconsistency may stem from the difference between testing for memorization of information and testing for clinical application of knowledge. Therefore, the use of performance-based assessment methods, such as the objective structured clinical examination (OSCE), in undergraduate pharmacy education is of fundamental importance.1,4-7 The OSCE has been used to evaluate clinical competence in health professions education around the world. Since the role of pharmacists has expanded beyond compounding and dispensing drugs, strategies for teaching and evaluation in pharmacy education must change as well.1,8-10 This shift is in tandem with the philosophy and practice of pharmaceutical care, which emphasizes experiential training over didactic learning; as more emphasis is placed on the experiential aspect of training, more must also be placed on effective and accurate evaluation of students' performance in practice settings.1,11 The OSCE has been an instrumental part of clinical competence assessment in the Faculty of Pharmacy at the International Islamic University Malaysia (IIUM) since 2006.4 The complexity of the competencies tested at different OSCE stations may vary significantly. Further, the clarity of the instructions given to examinees, as well as the perceived degree of learning needed to achieve the competency being tested, may also differ from one OSCE station to another. Such wide variations may influence the validity and reliability of the overall examination. Thus, the current study focused on examinees' perceptions of how effectively the OSCE stations evaluated competencies. We hypothesized that examinees would perceive that the clarity of instructions and the complexity of tasks varied from one OSCE station to the next and that this variation affected their performance. In this paper, we describe pharmacy students' perceptions of the assessment of different competencies at OSCE stations and of how the OSCE compares with other assessment methods.

DESIGN

A 13-station OSCE (7 active, 3 preparatory, and 3 rest stations) was designed and implemented as part of the assessment methods for the Clinical Pharmacy III course in the bachelor of pharmacy (BPharm) curriculum at IIUM during the 2007-2008 academic session. The course was offered in the second semester to fourth-year BPharm students who had fulfilled the prerequisite for the course (ie, Clinical Pharmacy I). Clinical Pharmacy III was a 3-credit-hour required course with 40 contact hours. Instructional strategies for the course included didactic lectures, hospital ward attachments (clerkships), and tutorials. The course was designed to provide students with an understanding of the various factors that determine the choice of drugs for individual patients. The course exposed students to the practical aspects of pharmacy with regard to patient care and drug therapy. Students were given the opportunity to put their knowledge of clinical pharmacy and therapeutics into practice. They also had the opportunity to observe and participate in ward rounds with other caregivers. Emphasis was placed on the role of pharmacists in patient care. Students were also expected to understand the clinical pharmacokinetics of drugs and their relationship to patients' treatment. The key learning outcomes of the course included integration of the concepts of pharmaceutical care, pharmacotherapeutics, and clinical pharmacokinetics in the identification and resolution of drug-related problems and the application of an evidence-based approach. The course assessment methods included clerkship (ie, clinical attachment) ratings, long and short essays (via examination), and the OSCE. Fifty-two students who registered for the Clinical Pharmacy III course during the second semester of the 2007-2008 academic session were examined via the OSCE in addition to the other assessment methods.

The OSCE was the “gold standard” used to evaluate the clinical competency of undergraduate pharmacy students at IIUM. The general objective of the OSCE was to assess the students' competency in various aspects of therapeutics, clinical pharmacokinetics, drug information, and pharmaceutical care in general. In addition, the method aimed to assess communication skills and skills in identifying and resolving drug therapy and other clinical problems. The specific objectives of the individual competencies are presented in Appendix 1. The language used for instruction and communication during the OSCE was either English or Bahasa Malay (Malaysia's national language). All the patients and actors recruited for participation in the OSCE were able to understand and fluently speak English and/or Bahasa Malay.

Prior to the OSCE, standardized patient and actor training was conducted for individual stations to minimize intra- and inter-actor bias and avoid inconsistency in the tasks given to examinees. Four patient actors (2 active and 2 reserves) were recruited and specially trained as standardized patients or clients to portray a scenario, such as a specific medical or drug-related problem, at each station. The training comprised role-playing of possible interactions with the student pharmacist during the OSCE. The simulated patients and actors followed scripted scenarios at all stations in order to standardize the examination and ensure parity.

The standardized simulated patients and actors played a relatively passive role in the OSCE, answering questions only if they were asked but not volunteering information. They were instructed to give only a negative response such as “No,” “I don't know,” or “Sorry, I am not sure” to any questions asked by the examinees that deviated from the case scenario. Most of the patients and actors were accustomed to the processes and nature of the OSCE since they had participated previously.

Fourth-year BPharm students were assessed through the OSCE, which contributed 25% to the course examination at the end of the second semester. The examinees were required to perform specific functions to complete the task or address the problem at each station. They were given 15 minutes at each station and were assessed using a structured, standardized checklist. Two OSCEs (ie, 2 identical but separate sets of stations) were run in parallel for 2 groups of 26 students each. The processes and mechanics of OSCE development and application were described in a previous study.4 A summary of the workstations is provided in Appendix 1. Detailed samples of how the workstations were developed are available from the authors on request.

EVALUATION AND ASSESSMENT

A 46-item questionnaire was administered to students immediately after the entire class had completed the examination. The questionnaire was developed based on a comprehensive literature review and modified from a previously validated instrument used to evaluate a similar group of students. To ensure content and construct validity, a senior faculty member with experience in pharmacy education, curriculum design, and evaluation, and another academic expert in pharmacy practice survey research methodology and psychometrics, reviewed the instrument. Modifications were made to better fit the study objectives and ambiguous items were clarified. The instrument then was reviewed again and further refined for use in the study.

The questionnaire comprised items to gather demographic data from the respondents and questions evaluating the OSCE stations in terms of ease of understanding the written instructions; difficulty of the tasks; perceived degree of learning gained and needed; and the appropriateness and usefulness of the references or literature resources provided. Examinees were also asked to rate the OSCE in relation to other assessment methods using a 3-point Likert scale. Three open-ended questions were also asked to generate additional qualitative data on the strengths and weaknesses of the examination and students' recommendations for improvement.

Examinees were asked to complete the questionnaire on a voluntary basis immediately after the OSCE. No disclosure of identity was required on the questionnaire and participants were assured of confidentiality. The data were analyzed using SPSS, version 12 (SPSS, Inc, Chicago, IL). Most of the data were analyzed using descriptive statistics. For the open-ended questions, the qualitative data generated were analyzed manually using thematic content analysis. Interpretable responses were summarized and categorized into themes.
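For readers who wish to reproduce this kind of descriptive tabulation, the following is a minimal sketch in Python (the study itself used SPSS 12; the item names and data values here are invented for illustration only):

import pandas as pd

# Hypothetical 3-point Likert responses (1 = disagree, 2 = neutral,
# 3 = agree); the study's actual data are not reproduced here.
responses = pd.DataFrame({
    "instructions_clear": [3, 2, 1, 3, 2, 2, 3, 1],
    "task_difficult":     [2, 3, 3, 2, 2, 3, 1, 2],
})

# Frequency and percentage of each response option per item,
# mirroring the kind of descriptive statistics reported in Table 2.
for item in responses.columns:
    counts = responses[item].value_counts().sort_index()
    pct = (counts / len(responses) * 100).round(1)
    print(item)
    print(pd.DataFrame({"n": counts, "%": pct}))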

Students' Rating of the OSCE Stations

The 52 students examined via the OSCE during the 2007-2008 academic session all completed the questionnaire (Table 1). The majority (76.9%) of the students were female. Students were asked to assess the comparative ease or difficulty associated with understanding instructions and performing tasks at the 7 OSCE stations evaluating competence in DRPs, drug information and literature evaluation, clinical pharmacokinetics, and patient education on the use of insulin delivery devices. More than 50% of the students agreed that it was easy to comprehend the written instructions provided at the station assessing their ability to counsel patients concerning insulin delivery devices and at the station assessing their ability to manage a paracetamol overdose, but less than 20% felt the instructions provided at the station assessing their ability to use CPK in dosing regimen design for phenytoin were clear (Table 2).

Table 1
Demographic Characteristics of Undergraduate Pharmacy Students Who Completed a Questionnaire Regarding an Objective Structured Clinical Examination
Table 2
Students' Rating of Competencies at Objective Structured Clinical Examination Stations (N = 52)

At least half of the students were neutral about the level of difficulty of the tasks at most of the stations (Table 2). A larger proportion of examinees (25%-48%) felt that the tasks performed at the 2 DRP stations (tuberculosis [TB] and epilepsy competencies) and the 2 CPK stations (digoxin and phenytoin competencies) were difficult. In addition, at least half of the students believed that a high degree of learning was gained from the following stations: paracetamol overdose management (53.8%), insulin delivery devices counseling (57.7%), drug information (55.8%), and phenytoin CPK (50%). Over 70% of the respondents felt that a high degree of learning was needed to accomplish the tasks at the 2 DRP stations and the 2 CPK stations (Table 2).

Almost 50% of the students reported that the 15 minutes allocated for completing each task was inadequate, especially for the 2 CPK stations (Table 2), whereas 46% of the students were satisfied with the amount of time given for completing the tasks at the paracetamol overdose management station and insulin delivery devices counseling station. Table 2 shows how the respondents rated the appropriateness of the reference materials and literature resources provided at the various stations. Nearly two-thirds of the students agreed that the references and literature resources provided at the drug information station, insulin delivery devices counseling station, and paracetamol overdose station were appropriate.

Students' Rating of Assessment Methods

Students were also asked to rate various assessment instruments in terms of difficulty, fairness, degree of learning, and how frequently they felt the instruments should be used for assessing competencies. The majority of the students did not take a stance on the difficulty and fairness of the assessment methods and remained neutral. The relative comparison of the OSCE with other assessment methods in terms of difficulty, fairness, degree of learning, and preferred frequency of use is presented in Table 3. Only 13.5% of the students felt that the OSCE was fair, and nearly 60% did not take a stance. Although about two-thirds of the students rated the OSCE as a difficult form of assessment, about 81% perceived that they had learned a lot from it.

Table 3
Bachelor of Pharmacy Students' Rating of OSCE in Relation to Other Assessment Methods Used in Clinical Pharmacy and Therapeutics

Multiple-choice questions were rated as the fairest (38.5%) among all assessment methods. Furthermore, students indicated that OSCEs and clerkship ratings (performance on clerkship assignments, presentations, and written reports) were the methods that imparted the highest degree of learning/knowledge. In addition, over 70% of the examinees agreed that the OSCE and clerkship ratings should be used much more (Table 3).

Examinees' Responses to Open-Ended Questions

Several themes were identified among students' responses to each of the 3 open-ended questions. Respondents commented that the OSCE exposed them to what seemed like “real-life” cases (16 comments) and accurately measured their knowledge and skills (13 comments). Other themes that emerged included: the OSCE highlighted areas of weakness in my skills and knowledge (8 comments); the OSCE enhanced my communication skills (7 comments); the OSCE was a true reflection of the skills learned from the curriculum (4 comments); and I obtained additional experience and learned a lot from the OSCE (3 comments).

Fifteen of the students stated that the 15-minute limit allocated for each station (especially the CPK station) was inadequate. Students indicated that the OSCE caused them to be nervous (12 comments). Some students also felt there was inter-evaluator variability in the OSCE (9 comments). Moreover, 6 students indicated that the OSCE was an anxiety-producing and stress-inducing examination. Some students also stated that instructions at some stations were ambiguous (5 comments).

Examinees recommended that the time allocated at each station be increased for future OSCEs, especially stations that involved calculations (14 comments), with 3 students stating that different tasks needed different lengths of time to complete. Other suggestions for improvement were that the OSCE be introduced earlier in the pharmacy curriculum rather than in the final year (6 comments); that students be familiarized with the OSCE system (5 comments); and that the clarity of instructions be ensured (5 comments). Examinees also suggested that competencies receive broader coverage in the pharmacy curriculum and that additional appropriate references be provided at each station (4 comments each); that much easier questions be used (3 comments); that more emphasis be placed on experiential training rather than theoretical teaching (2 comments); that staff members be more efficient and supportive (2 comments); that an activity be added to the resting station (2 comments); that other venues be used in the future; that equations be provided at each station that requires calculation; that an external evaluator be used; and that the examination be stress free.

DISCUSSION

We believed that the characteristics of the clinical competencies assessed and/or tasks required at OSCE stations would have profound effects on examinees' performance. Pharmacy students generally perceived that the tasks given at the stations assessing CPK skills and identification of DRPs were difficult. There were obvious variations in the students' perceptions of OSCE stations, depending on the types of tasks they were asked to perform. This finding is in agreement with other reports and observations. While the dependability of a well-constructed OSCE as an assessment tool is generally quite good, it varies significantly among stations, suggesting that the quality of students' performance is task specific.11,12

Newble and Swanson suggested that the low reliabilities of OSCEs are more likely attributable to variability among stations, due to the unique nature of the individual competency being assessed, than to poor interrater reliability.12 As a result, longer OSCE instruments, comprising more stations, may be required to obtain more acceptable (ie, higher) dependability coefficients and, consequently, lower standard errors of measurement.11 Similarly, the finding that the tasks performed at the DRP identification/resolution stations and the CPK stations were perceived as difficult was consistent with the overwhelming proportion of students (over 70%) who felt that a high degree of learning was needed to accomplish the tasks at these stations. The examinees felt that the DRP station on epilepsy was the most difficult station in terms of the tasks they were required to perform (48% believed it was difficult). Furthermore, they identified the same station as the most difficult in terms of comprehending the instructions (about 35% perceived that the instructions for this station were difficult to comprehend). These perceptions may be consistent with how the students performed at the individual OSCE stations.
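The relationship between station count and overall reliability can be illustrated with the Spearman-Brown prophecy formula (our illustration; the numbers below are invented and were not computed for this OSCE):

\rho_k = \frac{k\,\rho_1}{1 + (k - 1)\,\rho_1}

where \rho_1 is the reliability of a single station and k is the number of stations. For example, if each station alone had a reliability of 0.20, a 7-station OSCE would yield \rho_7 = (7 \times 0.20)/(1 + 6 \times 0.20) = 1.4/2.2 \approx 0.64, which is why adding stations is the usual remedy for station-to-station variability.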

Our observations are consistent with those reported in a similar setting by Corbo and colleagues from Brighton University, who found that final-year undergraduate pharmacy students performed poorly in activities that demanded an element of clinical problem identification and resolution or that involved a clinical calculation in an OSCE.5 This may be partly explained by students' nervousness whenever a task involved calculations or problem-solving skills. Clearly, different clinical competencies or tasks at OSCE stations may have different degrees of complexity, and this in turn may have profound effects on examinees' performance. However, other covariates such as clarity of instructions, validity of assessment tools, and time allocation should be carefully controlled. These findings point to possible deficiencies in students' learning, and/or clinical training deficits, and/or inadequacies in the design of some stations. In this study, about half of the students felt that the written instructions at the drug overdose station and the patient counseling station were easier to understand than the instructions at the other stations. Those particular stations were also pointed out by the students as having “easy tasks to be completed.”
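To illustrate the kind of clinical calculation such stations demand, consider a worked example of phenytoin maintenance dosing using Michaelis-Menten kinetics (all values invented; this is not the scenario used at the actual phenytoin station):

R = \frac{V_{max} \cdot C_{ss}}{K_m + C_{ss}} = \frac{500\ \text{mg/day} \times 15\ \text{mg/L}}{4\ \text{mg/L} + 15\ \text{mg/L}} \approx 395\ \text{mg/day}

where R is the daily maintenance dose, V_{max} the maximum metabolic capacity, K_m the Michaelis constant, and C_{ss} the target steady-state concentration. Under a 15-minute limit, even this relatively direct computation leaves little room for checking units and rounding, which may help explain examinees' requests for more time at calculation stations.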

Although we have taken steps to improve several aspects of the OSCE over the last 3 years, some issues remain unresolved. For instance, students' criticism that the OSCE is biased because different examiners grade at parallel stations remained virtually unchanged. This was due to a shortage of resources and the organizational difficulties associated with conducting the OSCE, and we are working on an acceptable solution to this problem.

Despite all the concerns raised, the OSCE has received considerable support from examinees. A substantial proportion of the students agreed that they gained a high degree of knowledge in all the stations of the OSCE, but felt that the OSCE format should be introduced earlier in the curriculum. In essence, OSCEs allow students to integrate pharmacotherapeutic knowledge, problem-solving skills, and communication and interpersonal skills into each exercise.13 The method permits participants to learn from potentially dangerous mistakes prior to an actual patient encounter. Students agreed that the time allocated for completing each task was inadequate, especially at the CPK stations. Substantial proportions of students reported difficulties with both time management and stress control. However, many examinees felt that the reference materials and resources provided at individual OSCE stations were suitable and relevant.

The use of the OSCE is thought to be more objective, more valid, and more reliable than most other assessment methods.14-16 To evaluate the validity of an OSCE as an effective battery for assessing clinical competence, evidence concerning construct validity should be gathered as described widely in the medical literature.17-19 Content evidence, clarity of instructions, station developer expertise, and the adequacy of OSCE content in relation to curriculum objectives should be critically examined by an appropriate panel of experts.17-19 The clarity of instructions can be verified further by surveying examinees.19,20 To lend additional support to the validity of an OSCE station, the majority of examinees should agree that the instructions provided are clear.

The pharmacy students in this study perceived that the tasks given at clinical problem identification and resolution stations, or those involving clinical pharmacokinetics calculations, were difficult and that a high degree of learning was needed to achieve the competencies at those stations. These findings highlight possible areas of deficit in students' knowledge and skills, deficiencies in clinical training, and shortcomings in the design of the OSCE. Future efforts should be geared toward providing clear instructions at OSCE stations and balancing the complexity of the competencies assessed. More emphasis should be placed on clinical problem-solving and identification skills, as well as clinical calculations, during experiential training. The findings from this study have important implications for students' preparedness for effective delivery of pharmaceutical care, especially as it relates to problem-solving skills and therapeutic drug-monitoring services.

Limitations

Our findings should be interpreted in light of major limitations. First, the majority of the respondents remained neutral on many questions, thereby limiting the generalizability of the results. Candidates' performance was assessed by trained examiners (faculty members) using standardized checklists at each of the OSCE stations in order to achieve high interrater reliability. Since 2 parallel OSCEs were conducted, the training emphasized consistency in grading between each pair of examiners to ensure that a student would achieve the same score regardless of which OSCE arm he or she completed. However, no data were generated to support the sufficiency of these measures in ensuring the validity and reliability of the examination or to determine whether high interrater reliability of scores was achieved. We believe that a certain degree of interrater variability may have undermined the validity and reliability of the OSCE. In the future, the validity and reliability of an OSCE in this type of setting should be tested using procedures such as internal structure evaluation (ie, interrater reliability assessment) and Rasch measurement.15,17,18,21
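As a minimal sketch of the kind of interrater agreement check proposed above (in Python; the ratings are invented, and no such analysis was performed in this study):

from sklearn.metrics import cohen_kappa_score

# Checklist outcomes (0 = criterion not met, 1 = met) assigned by two
# examiners to the same hypothetical set of students.
examiner_a = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
examiner_b = [1, 0, 1, 0, 0, 1, 1, 1, 1, 1]

kappa = cohen_kappa_score(examiner_a, examiner_b)
print(f"Cohen's kappa: {kappa:.2f}")  # about 0.52 for these invented ratings

A kappa near or above 0.6 to 0.8 would support the claim that the two OSCE arms graded consistently; for station-level checklists, an intraclass correlation or a generalizability analysis, as in reference 21, would be the more complete approach.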

ACKNOWLEDGEMENTS

We acknowledge with thanks the support of all faculty members who were involved in the OSCE and those who provided continued support for its success.

Appendix 1. OSCE Stations Summary

[Appendix 1 is available as an image file (ajpe34app1.jpg) summarizing the OSCE stations.]

REFERENCES

1. Monaghan MS, Vanderbush RE, McKay AB. Evaluation of clinical skills in pharmaceutical education: past, present and future. Am J Pharm Educ. 1995;59(4):354–358.
2. Stowe CD, Gardner SF. Real-time standardized participant grading of an objective structured clinical examination. Am J Pharm Educ. 2005;69(3):272–276.
3. Gardner SF, Stowe CD, Hopkins DD. Comparison of traditional testing methods and standardized patient examinations for therapeutics. Am J Pharm Educ. 2001;65(3):236–240.
4. Awaisu A, Nik Mohamed MH, Al-Efan QAM. Perception of pharmacy students in Malaysia on the use of objective structured clinical examinations to evaluate competence. Am J Pharm Educ. 2007;71(6):Article 118.
5. Corbo M, Patel JP, Abdel Tawab R, Davies JG. Evaluating clinical skills of undergraduate pharmacy students using objective structured clinical examinations (OSCEs). Pharm Educ. 2006;6(1):53–58.
6. Rutter PM. The introduction of observed structured clinical examinations (OSCEs) to the MPharm degree pathway. Pharm Educ. 2002;1(2):173–180.
7. Rees JA, Collett JH, Crowther I, Mylrea S. Assessment of competence using a structured objective examination approach. Pharm J Supp. 1991;247(Suppl Oct 12):R32.
8. Commission to Implement Change in Pharmaceutical Education. Background paper II: entry-level, curricular outcomes, curricular content and educational process. Am J Pharm Educ. 1993;57:377–385.
9. Brandt BF. Effective teaching and learning strategies. Pharmacotherapy. 2000;20(10 Pt 2):307S–316S.
10. Bruce SP, Bower A, Hak E, Schwartz AH. Utilization of the Center for the Advancement of Pharmaceutical Education Educational Outcomes, Revised Version 2004: report of the 2005 American College of Clinical Pharmacy Educational Affairs Committee. Pharmacotherapy. 2006;26(8):1193–1200.
11. Fielding DW, Page GG, Rogers WT, O'Byrne CC, Schulzer M, Moody KG, Dyer S. Application of objective structured clinical examinations in an assessment of pharmacists' continuing competency. Am J Pharm Educ. 1997;61(2):117–125.
12. Newble DI, Swanson DB. Psychometric characteristics of objective structured clinical examinations. Med Educ. 1988;22(4):325–334.
13. Cerveny JD, Knapp R, DelSignore M, Carson DS. Experience with objective structured clinical examinations as a participant evaluation instrument in disease management certificate programs. Am J Pharm Educ. 1999;63(4):377–381.
14. Harden RM, Gleeson FA. Assessment of clinical competence using an objective structured clinical examination (OSCE). Med Educ. 1979;13(1):41–54.
15. van der Vleuten CPM, Norman GR, de Graaff E. Pitfalls in the pursuit of objectivity: issues of reliability. Med Educ. 1991;25(2):110–118.
16. Cunnington JPW, Neville AJ, Norman GR. The risks of thoroughness: reliability and validity of global ratings and checklists in an OSCE. Adv Health Sci Educ. 1996;1(3):227–233.
17. Downing SM. Validity: on meaningful interpretation of assessment data. Med Educ. 2003;37(9):830–837.
18. Downing SM, Haladyna TM. Validity threats: overcoming interference with proposed interpretations of assessment data. Med Educ. 2004;38(3):327–333.
19. Varkey P, Natt N, Lesnick T, Downing S, Yudkowsky R. Validity evidence for an OSCE to assess competency in system-based practice and practice-based learning and improvement: a preliminary investigation. Acad Med. 2008;83(8):775–780.
20. Taghva A, Mir-Sepassi GR, Zarghami M. A brief report on the implementation of an objective structured clinical examination (OSCE) in the 2006 Iranian board of psychiatry examination. Iran J Psychiatry Behav Sci. 2007;1(1):39–40.
21. Iramaneerat C, Yudkowsky R, Myford CM, Downing SM. Quality control of an OSCE using generalizability theory and many-faceted Rasch measurement. Adv Health Sci Educ. 2008;13(4):479–493.
