Developing and testing the cognitive skills and abstract thinking of undergraduate medical students are the main objectives of problem-based learning. Modified Essay Questions (MEQs) and Multiple Choice Questions (MCQs) may both be designed to test these skills. The objectives of this study were to assess the effectiveness of both forms of questions in testing the different levels of the cognitive skills of undergraduate medical students and to detect any item writing flaws in the questions.
A total of 50 MEQs and 50 MCQs were evaluated. These questions were chosen randomly from various examinations given to different batches of undergraduate medical students taking course MED 411–412 at the Department of Medicine, Qassim University, from 2005 to 2009. The effectiveness of the questions was determined by two assessors and was defined by a question's ability to measure higher cognitive skills, as determined by modified Bloom's taxonomy, and by its quality, as determined by the presence of item writing flaws. SPSS 15 and MedCalc were used to tabulate and analyze the data.
The percentage of questions testing level III (problem-solving) cognitive skills was 40% for MEQs and 60% for MCQs; the remaining questions merely assessed recall and comprehension. No significant difference was found between MEQs and MCQs in relation to question type (recall, comprehension, or problem solving; χ2 = 5.3, p = 0.07). Agreement between the two assessors was substantial for MCQs (kappa = 0.609; SE 0.093; 95% CI 0.426–0.792) but lower for MEQs (kappa = 0.195; SE 0.073; 95% CI 0.052–0.338). Item writing flaws were present in 16% of the MEQs and 12% of the MCQs.
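Inter-rater agreement of the kind reported here is conventionally quantified with Cohen's kappa, which corrects the observed agreement between two raters for the agreement expected by chance. A minimal pure-Python sketch of the calculation (the category labels and ratings below are illustrative, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over categories of the product of each
    # rater's marginal label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Illustrative ratings: each question classified as recall (R),
# comprehension (C), or problem solving (P) by two assessors.
a = ["R", "R", "C", "P", "P", "R", "C", "C", "P", "R"]
b = ["R", "C", "C", "P", "P", "R", "C", "R", "P", "R"]
print(round(cohens_kappa(a, b), 3))  # → 0.697
```

A kappa near 0.6, as found for the MCQs, is usually read as substantial agreement; a value near 0.2, as found for the MEQs, as only slight agreement.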
A well-constructed MCQ is superior to an MEQ in testing the higher cognitive skills of undergraduate medical students in a problem-based learning setting. Constructing an MEQ that assesses a student's cognitive skills is not a simple task, and MEQs are more frequently associated with item writing flaws.
Modified essay question; Multiple-choice question; Bloom’s Taxonomy; cognition
Medical schools universally accept the idea that bioethics courses are essential components of education, but few studies that measure outcomes (i.e., knowledge or retention) have demonstrated their educational value in the literature. The goal of this study was to examine whether core concepts of a pre-clinical bioethics course were learned and retained. Over the course of 2 years, a pre-test comprising 25 multiple-choice questions was administered to two classes (2008–2010) of first-year medical students prior to the start of a 15-week ethics course, and an identical post-test was administered at the end of the course. A total of 189 students participated. Paired t tests showed a significant difference between pre-test and post-test scores. The pre-test average was 69.8% and the post-test average was 82.6%, an increase of 12.9% after the ethics course. The pre- and post-test results also suggested a shift in the difficulty level of the questions, with students finding identical questions easier after the intervention. Given the increase in post-test scores after the 15-week intervention, the study suggests that core concepts in medical ethics were learned and retained. These results demonstrate that an introductory bioethics course can improve short-term outcomes in knowledge and comprehension, and should provide impetus to educators to demonstrate improved educational outcomes in ethics at higher levels of B.S. Bloom's Taxonomy of Learning.
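The paired t test used in this design compares each student's pre- and post-test scores directly rather than comparing group means. A minimal sketch of the test statistic, t = mean(d) / (sd(d) / sqrt(n)), where d is the per-student score difference, in pure Python (the score lists are illustrative, not the study's data):

```python
import math
from statistics import mean, stdev

def paired_t_statistic(pre, post):
    """t statistic for paired samples: the mean per-subject difference
    divided by its standard error."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Illustrative pre/post percentage scores for six students.
pre  = [64, 70, 72, 68, 75, 66]
post = [78, 80, 85, 79, 88, 82]
print(round(paired_t_statistic(pre, post), 2))  # → 14.71
```

The statistic would then be compared against a t distribution with n − 1 degrees of freedom to obtain a p value; in practice a library routine such as `scipy.stats.ttest_rel` handles both steps.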
Bioethics; Undergraduate medical education; Bloom’s Taxonomy of Learning
Characterizing and comparing cognitive skills assessed by introductory biology and physics indicate that (a) both course sequences assess primarily lower-order cognitive skills, (b) the distribution of items across cognitive skill levels differs significantly, and (c) there is no strong relationship between student performance and cognitive skill level.
Assessments and student expectations can drive learning: students selectively study and learn the content and skills they believe critical to passing an exam in a given subject. Evaluating the nature of assessments in undergraduate science education can, therefore, provide substantial insight into student learning. We characterized and compared the cognitive skills routinely assessed by introductory biology and calculus-based physics sequences, using the cognitive domain of Bloom's taxonomy of educational objectives. Our results indicate that both introductory sequences overwhelmingly assess lower-order cognitive skills (e.g., knowledge recall, algorithmic problem solving), but the distribution of items across cognitive skill levels differs between introductory biology and physics, which reflects and may even reinforce student perceptions typical of those courses: biology is memorization, and physics is solving problems. We also probed the relationship between the difficulty of exam questions, as measured by student performance, and their cognitive skill level, as measured by Bloom's taxonomy. Our analyses of both disciplines do not indicate the presence of a strong relationship. Thus, regardless of discipline, more cognitively demanding tasks do not necessarily equate to increased difficulty. We recognize the limitations associated with this approach; however, we believe this research underscores the utility of evaluating the nature of our assessments.
The purposes of this study were (a) to determine whether or not undergraduate athletic training educators are writing learning objectives that foster critical thinking (CT) skills, and (b) to determine if their written assignments and written examinations are measuring the extent to which students have developed CT skills.
Design and Setting:
Thirty institutions seeking accreditation for their athletic training programs from the Commission on Accreditation of Allied Health Educational Programs in the 1994-95 academic year were asked to provide their curriculum materials (course syllabus, two to three examinations, or both from each athletic training-specific course).
Thirteen curriculum directors (43%) provided materials.
Each learning objective, examination question, and written assignment was classified as either CT or non-critical thinking (NCT) using Bloom's taxonomy.
From 64 usable syllabi, a total of 678 learning objectives were classified as either CT (52%) or NCT (48%). From 81 written examinations, 3215 questions were classified as either CT (14%) or NCT (86%). In addition, a total of 143 written assignments were all classified as CT.
The results of this study indicate that educators fostered more CT in their learning objectives and written assignments than in their written examinations. Valid educational instruments (e.g., Bloom's taxonomy) may help educators design learning objectives, assignments, and examinations.
Bloom's taxonomy; learning objectives; test questions
Students today have unprecedented access to technology, the Internet, and social media. Their nearly ubiquitous use of these platforms is well documented. Given that today’s students may be primed to learn using a different medium, incorporating various technological elements into the classroom in a manner compatible with traditional approaches to teaching becomes a challenge.
We recently designed and implemented a strategy that capitalized on this knowledge. Students in their first neuroscience course were required to create a 3–5 minute digital video using video-making freeware available on any Mac or PC. They used images, text, animation, as well as downloaded music to describe the fundamental process of neurotransmission as it applies to a topic of their choice. In comparison to students taught using other more traditional approaches to demonstrate the process of neurotransmission, we observed that students who took part in the video-making project exhibited better understanding of the neurological process at multiple levels, as defined by Bloom’s revised taxonomy. This was true even of students who had no aspirations of pursuing a Neuroscience career, thus suggesting that there was an overall increased level of student engagement regardless of personal career interests. The utility of our approach was validated by both direct and indirect assessments. Importantly, this particular strategy to teaching difficult concepts offers a high degree of flexibility allowing it to potentially be incorporated into any upper-level Neuroscience course.
neurotransmission; e-learning; digital video; neurological disease; neurodegeneration; Facebook
The tendency to use portfolios for evaluation has developed with the aim of optimizing the culture of assessment. The present study was carried out to determine the effect of using portfolios as an evaluation method on midwifery students' learning and satisfaction in prenatal practical training. In this prospective cohort study, all midwifery students in semester four (n=40) were randomly allocated to portfolio and routine evaluation groups. Based on their educational goals, the portfolio group prepared packages consisting of a complete report of the history, physical examinations, and methods of patient management (as evaluated by a checklist) for women who visited a prenatal clinic. During the last day of their course, a posttest, clinical exam, and student satisfaction form were completed. The two groups were similar in mean age, mean pretest scores, and the prerequisite course they should have taken in the previous semester. The mean difference between pre- and posttest scores at the knowledge and comprehension levels did not differ significantly between the groups (P>0.05). The portfolio group's average scores on questions at levels 2 and 3 of Bloom's taxonomy were significantly greater than those of the routine evaluation group (P=0.002 and P=0.03, respectively). The two groups' mean clinical exam scores differed significantly. The portfolio group's mean scores on generating diagnostic and therapeutic solutions and on the ability to apply theory in practice were higher than those of the routine group. Overall, students' satisfaction scores for the two evaluation methods were relatively similar. Portfolio evaluation provides the opportunity for more learning by increasing students' participation in the learning process and helping them to apply theory in practice.
Evaluation; Portfolio; Learning; Satisfaction
We developed the Blooming Biology Tool (BBT), an assessment tool based on Bloom's Taxonomy, to assist science faculty in better aligning their assessments with their teaching activities and to help students enhance their study skills and metacognition. The work presented here shows how assessment tools, such as the BBT, can be used to guide and enhance teaching and student learning in a discipline-specific manner in postsecondary education. The BBT was first designed and extensively tested for a study in which we ranked almost 600 science questions from college life science exams and standardized tests. The BBT was then implemented in three different collegiate settings. Implementation of the BBT helped us to adjust our teaching to better enhance our students' current mastery of the material, design questions at higher cognitive skills levels, and assist students in studying for college-level exams and in writing study questions at higher levels of Bloom's Taxonomy. From this work we also created a suite of complementary tools that can assist biology faculty in creating classroom materials and exams at the appropriate level of Bloom's Taxonomy and students to successfully develop and answer questions that require higher-order cognitive skills.
Reliable and valid written tests of higher cognitive function are difficult to produce, particularly for the assessment of clinical problem solving. Modified Essay Questions (MEQs) are often used to assess these higher order abilities in preference to other forms of assessment, including multiple-choice questions (MCQs). MEQs often form a vital component of end-of-course assessments in higher education. It is not clear how effectively these questions assess higher order cognitive skills. This study was designed to assess the effectiveness of the MEQ to measure higher-order cognitive skills in an undergraduate institution.
An analysis of multiple-choice questions (MCQs) and modified essay questions (MEQs) used for summative assessment in a clinical undergraduate curriculum was undertaken. A total of 50 MCQs and 139 stages of MEQs, drawn from three examinations run over two years, were examined. The effectiveness of the questions was determined by two assessors and was defined by a question's ability to measure higher cognitive skills, as determined by a modification of Bloom's taxonomy, and by its quality, as determined by the presence of item writing flaws.
Over 50% of the MEQs tested factual recall, a proportion similar to that of the MCQs. The modified essay questions failed to consistently assess higher cognitive skills, whereas the MCQs frequently tested more than mere recall of knowledge.
Constructing MEQs that assess higher-order cognitive skills cannot be assumed to be a simple task. Well-constructed MCQs should be considered a satisfactory replacement for MEQs if the MEQs cannot be designed to adequately test higher-order skills. Such MCQs are capable of withstanding the intellectual and statistical scrutiny imposed by a high-stakes exit examination.
Biologists' conceptions of higher-order questions include Bloom's, difficulty, time, and student experience. Biologists need more guidance to understand the difference between Bloom's and item difficulty. Biologists' conceptions about higher-order questioning can be used as a starting point for professional development to reform teaching.
We present an exploratory study of biologists’ ideas about higher-order cognition questions. We documented the conversations of biologists who were writing and reviewing a set of higher-order cognition questions. Using a qualitative approach, we identified the themes of these conversations. Biologists in our study used Bloom's Taxonomy to logically analyze questions. However, biologists were also concerned with question difficulty, the length of time required for students to address questions, and students’ experience with questions. Finally, some biologists demonstrated an assumption that questions should have one correct answer, not multiple reasonable solutions; this assumption undermined their comfort with some higher-order cognition questions. We generated a framework for further research that provides an interpretation of participants’ ideas about higher-order questions and a model of the relationships among these ideas. Two hypotheses emerge from this framework. First, we propose that biologists look for ways to measure difficulty when writing higher-order questions. Second, we propose that biologists’ assumptions about the role of questions in student learning strongly influence the types of higher-order questions they write.
A major influence on education since the 1950s has been Bloom's Taxonomy, a classification of learning objectives across multiple domains meant to educate the whole student (Anderson and Krathwohl, 2001). Although it has influenced pedagogy in primary education, higher education remains heavily lecture based, viewing the instructor as an expert who professes vast knowledge to students. When students serve as instructors, however, it is difficult to apply this traditional view to the college classroom. Here we discuss the development, pedagogical approach, and experience of a senior-level seminar course in which the students and instructor collaboratively explored embodied cognition, an emerging field combining research and theory from psychology and neuroscience among other disciplines, in which neither the students nor the instructor were experts. Students provided feedback and evaluations at three time points over the semester (before class started, at midterm, and at the end of the semester) to address the experience and effectiveness of a collaborative seminar in which the instructor assumed a role closer to that of an equal of the students. Student responses revealed both high levels of satisfaction and perceived learning at both the midterm and final evaluations. The approach of this seminar may be beneficial when applied to other seminars or course formats, as students in this course felt they were learning more and appreciated being a more equal partner in their own learning process.
embodied cognition; seminar; collaborative learning; student-centered teaching; engaged learning
To evaluate an instructional model for teaching clinically relevant medicinal chemistry.
An instructional model that uses Bloom's cognitive and Krathwohl's affective taxonomies, published and tested concepts in teaching medicinal chemistry, and active-learning strategies was introduced in the medicinal chemistry courses for second-professional-year (P2) doctor of pharmacy (PharmD) students (campus and distance) in the 2005-2006 academic year. Student learning and the overall effectiveness of the instructional model were assessed. Student performance after introducing the instructional model was compared to that in prior years.
Student performance on course examinations improved compared to previous years. Students expressed overall enthusiasm about the course and better understood the value of medicinal chemistry to clinical practice.
The explicit integration of cognitive and affective learning objectives improved student performance, students' ability to apply medicinal chemistry to clinical practice, and student attitudes towards the discipline. Testing this instructional model provided validation of its theoretical framework. The model is effective for both campus and distance students, and may also have broad applications to other science courses.
medicinal chemistry; distance education; instructional model
To incorporate structural biology, enzyme kinetics, and visualization of protein structures in a medicinal chemistry course to teach fundamental concepts of drug design and principles of drug action.
Pedagogy for active learning was incorporated via hands-on experience with visualization software for drug-receptor interactions and concurrent laboratory sessions. Learning methods included use of clicker technology, in-class assignments, and analogies.
Quizzes and tests that included multiple-choice and open-ended items based on Bloom's taxonomy were used to assess learning. Student feedback, classroom exercises, and tests were used to assess teaching methods and effectiveness in meeting learning outcomes.
The addition of active-learning activities increased students' understanding of fundamental medicinal chemistry concepts such as ionization state of molecules, enzyme kinetics, and the significance of protein structure in drug design.
drug-receptor interactions; enzyme kinetics; medicinal chemistry; active learning
Previously we showed that weekly, written, timed, and peer-graded practice exams help increase student performance on written exams and decrease failure rates in an introductory biology course. Here we analyze the accuracy of peer grading, based on a comparison of student scores to those assigned by a professional grader. When students graded practice exams by themselves, they were significantly easier graders than a professional; overall, students awarded ≈25% more points than the professional did. This difference represented ≈1.33 points on a 10-point exercise, or 0.27 points on each of the five 2-point questions posed. When students graded practice exams as a group of four, the same student-expert difference occurred. The student-professional gap was wider for questions that demanded higher-order versus lower-order cognitive skills. Thus, students not only have a harder time answering questions on the upper levels of Bloom's taxonomy, they have a harder time grading them. Our results suggest that peer grading may be accurate enough for low-risk assessments in introductory biology. Peer grading can help relieve the burden on instructional staff posed by grading written answers—making it possible to add practice opportunities that increase student performance on actual exams.
Objectives. To implement and assess the effectiveness of an assignment requiring doctor of pharmacy (PharmD) students to write examination questions for the medicinal chemistry sections of a pharmacotherapeutics course.
Design. Students were divided into groups of 5-6 and given detailed instructions and grading rubrics for writing multiple-choice examination questions on medicinal chemistry topics. The compiled student-written questions for each examination were provided to the entire class as a study aid. Approximately 5% of the student-written questions were used in course examinations.
Assessment. Student appreciation of and performance in the medicinal chemistry portion of the course were significantly better than those of the previous year's class. Also, students' responses on a qualitative survey instrument indicated that the assignment gave them guidance on which concepts to focus on, helped them retain knowledge better, and fostered personal exploration of the content, which led to better performance on examinations.
Conclusion. Adding an active-learning assignment in which students write examination questions for the medicinal chemistry portion of a pharmacotherapeutics course was an effective means of increasing students' engagement in the class and knowledge of the course material.
examination; active learning; medicinal chemistry
Multiple-choice question (MCQ) examinations are increasingly used to assess theoretical knowledge in large class-size modules in many life science degrees. MCQ tests can be used to objectively measure factual knowledge, ability, and high-level learning outcomes, but may also introduce gender bias in performance depending on topic, instruction, scoring, and difficulty. The 'Single Answer' (SA) format, in which students choose one correct answer, is often used, but it does not allow students to demonstrate partial knowledge. Negative marking eliminates the chance element of guessing but may be considered unfair. Elimination testing (ET) is an alternative MCQ format that discriminates between all levels of knowledge while rewarding the demonstration of partial knowledge. Comparisons of performance and gender bias in negatively marked SA and ET tests have not yet been performed in the life sciences. Our results show that, under negative marking conditions, life science students were significantly advantaged by answering the MCQ test in elimination format compared to single-answer format, because the elimination format rewards partial knowledge of topics. Importantly, we found no significant difference in performance between genders in either cohort for either MCQ test under negative marking conditions. Surveys showed that students generally preferred ET-style MCQ testing over SA-style testing. Students reported feeling more relaxed taking ET MCQs and more stressed when sitting SA tests, while disagreeing with being distracted by thinking about the best tactics for scoring high. Students agreed that ET testing improved their critical thinking skills. We conclude that appropriately designed MCQ tests do not systematically discriminate between genders. We recommend careful consideration in choosing the type of MCQ test, and propose applying negative scoring conditions to each test type to avoid the introduction of gender bias.
The student experience could be improved through the incorporation of the elimination answering methods in MCQ tests via rewarding partial and full knowledge.
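The scoring difference between the two formats can be made concrete. Under negative marking, a single-answer item typically awards +1 for the correct choice and a penalty for a wrong one, while an elimination item awards credit for each distractor correctly crossed out. The rules below are a generic illustration of these two schemes, not necessarily the exact rubric used in the study:

```python
def score_single_answer(chosen, correct, penalty=-1/3):
    """Negatively marked SA item: +1 if correct, a penalty if wrong, 0 if blank."""
    if chosen is None:
        return 0.0
    return 1.0 if chosen == correct else penalty

def score_elimination(eliminated, correct, n_options=4):
    """Elimination item: proportional credit per distractor eliminated,
    a heavy loss for eliminating the correct answer. This is what lets
    the format reward partial knowledge."""
    if correct in eliminated:
        return -1.0
    # Full credit when all n_options - 1 distractors are eliminated.
    return len(eliminated) / (n_options - 1)

# A student sure that options "B" and "D" are wrong but unsure between "A"/"C":
print(score_elimination({"B", "D"}, correct="A"))  # partial credit (2/3)
print(score_single_answer("C", correct="A"))       # forced guess, penalized
```

Under the SA rules the student with genuine partial knowledge must gamble on a single option; under the ET rules the same knowledge state earns a deterministic partial score, which is the advantage the study reports.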
Undergraduate students struggle to read the scientific literature and educators have suggested that this may reflect deficiencies in their science literacy skills. In this two-year study we develop and test a strategy for using the scientific literature to teach science literacy skills to novice life science majors. The first year of the project served as a preliminary investigation in which we evaluated student science literacy skills, created a set of science literacy learning objectives aligned with Bloom’s taxonomy, and developed a set of homework assignments that used peer-reviewed articles to teach science literacy. In the second year of the project the effectiveness of the assignments and the learning objectives were evaluated. Summative student learning was evaluated in the second year on a final exam. The mean score was 83.5% (±20.3%) and there were significant learning gains (p < 0.05) in seven of nine of science literacy skills. Project data indicated that even though students achieved course-targeted lower-order science literacy objectives, many were deficient in higher-order literacy skills. Results of this project suggest that building scientific literacy is a continuing process which begins in first-year science courses with a set of fundamental skills that can serve the progressive development of literacy skills throughout the undergraduate curriculum.
The postgraduate training program in psychiatry in Saudi Arabia, established in 1997, is a 4-year residency program. Written examinations comprising multiple-choice questions (MCQs) are used as a summative assessment of residents to determine their eligibility for promotion from one year to the next. Test blueprints are not used in preparing the examinations.
To develop test blueprints for the written examinations used in the psychiatry residency program.
Based on the guidelines of four professional bodies, documentary analysis was used to develop global and detailed test blueprints for each year of the residency program. An expert panel participated in piloting and final modification of the test blueprints. Their opinions about the content, the weightage for each content domain, and the proportion of test items to be sampled in each cognitive category, as defined by modified Bloom's taxonomy, were elicited.
Eight global and detailed test blueprints, two for each year of the psychiatry residency program, were developed. The global test blueprints were reviewed by experts and piloted. Six experts participated in the final modification of the test blueprints. Based on expert consensus, the content, the total weightage for each content domain, and the proportion of test items to be included in each cognitive category were determined for each global test blueprint. Experts also suggested progressively decreasing the weightage for recall test items and increasing that of problem-solving test items in examinations, from year 1 to year 4 of the psychiatry residency program.
A systematic approach using a documentary and content analysis technique was used to develop test blueprints with additional input from an expert panel as appropriate. Test blueprinting is an important step to ensure the test validity in all residency programs.
test blueprinting; psychiatry; residency program; summative assessment; documentary and content analysis; Kingdom of Saudi Arabia
In an era of easy access to information, university students who will soon enter health professions need to develop their information competencies. The Research Readiness Self-Assessment (RRSA) is based on the Information Literacy Competency Standards for Higher Education, and it measures proficiency in obtaining health information, evaluating the quality of health information, and understanding plagiarism.
This study aimed to measure the proficiency of college-age health information consumers in finding and evaluating electronic health information; to assess their ability to discriminate between peer-reviewed scholarly resources and opinion pieces or sales pitches; and to examine the extent to which they are aware of their level of health information competency.
An interactive 56-item online assessment, the Research Readiness Self-Assessment (RRSA), was used to measure the health information competencies of university students. We invited 400 students to take part in the study, and 308 participated, giving a response rate of 77%. The RRSA included multiple-choice questions and problem-based exercises. Declarative and procedural knowledge were assessed in three domains: finding health information, evaluating health information, and understanding plagiarism. Actual performance was contrasted with self-reported skill level. Upon answering all questions, students received a results page that summarized their numerical results and displayed individually tailored feedback composed by an experienced librarian.
Even though most students (89%) understood that a one-keyword search is likely to return too many documents, few students were able to narrow a search by using multiple search categories simultaneously or by employing Boolean operators. In addition, nearly half of the respondents had trouble discriminating between primary and secondary sources of information as well as between references to journal articles and other published documents. When presented with questionable websites on nonexistent nutritional supplements, only 50% of respondents were able to correctly identify the website with the most trustworthy features. Less than a quarter of study participants reached the correct conclusion that none of the websites made a good case for taking the nutritional supplements. Up to 45% of students were unsure if they needed to provide references for ideas expressed in paraphrased sentences or sentences whose structure they modified. Most respondents (84%) believed that their research skills were good, very good, or excellent. Students’ self-perceptions of skill tended to increase with increasing level of education. Self-reported skills were weakly correlated with actual skill level, operationalized as the overall RRSA score (Cronbach alpha = .78 for 56 RRSA items).
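The internal-consistency figure reported above (Cronbach alpha = .78) follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), where k is the number of items. A minimal sketch in pure Python (the item scores below are illustrative, not RRSA data):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one inner list per item, aligned across respondents.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(item_scores)
    respondents = list(zip(*item_scores))      # rows: one respondent's scores
    totals = [sum(r) for r in respondents]
    item_var = sum(pvariance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Illustrative 0/1 scores for four items answered by five respondents.
items = [
    [1, 1, 0, 1, 0],
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [0, 1, 0, 1, 0],
]
print(round(cronbach_alpha(items), 2))  # → 0.79
```

Values around .7 to .8, like the .78 reported for the 56 RRSA items, are generally taken as acceptable internal consistency for a research instrument.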
While the majority of students think that their research skills are good or excellent, many of them are unable to conduct advanced information searches, judge the trustworthiness of health-related websites and articles, and differentiate between various information sources. Students’ self-reports may not be an accurate predictor of their actual health information competencies.
Health information; electronic health information; evaluation of electronic resources; electronics; telecommunications; consumer health information; patient education; educational status; computer network
Confidence-based marking (CBM), developed by A. R. Gardner-Medwin et al., has been used for many years in the medical school setting as an assessment tool. Our study evaluates the use of CBM in the neuroanatomy laboratory setting, and its effectiveness as a tool for student self-assessment and learning.
The subjects were 224 students enrolled in Neuroscience I over a period of four trimesters. Regional neuroanatomy multiple-choice question (MCQ) quizzes were administered the week following topic presentation in the laboratory. A total of six quizzes were administered during the trimester; each MCQ was paired with a confidence question, and the paired questions were scored using a three-level CBM scoring scheme.
Spearman's rho correlation coefficients indicated that the number of correct answers was correlated highly with the CBM score (high, medium, low) for each topic. The χ2 analysis within each neuroscience topic detected that the distribution of students into low, medium, and high confidence levels was a function of number of correct answers on the quiz (p < .05). Pairwise comparisons of quiz performance with CBM score as the covariate detected that the student's level of understanding of course content was greatest for information related to spinal cord and medulla, and least for information related to midbrain and cerebrum.
CBM is a reliable strategy for challenging students to think discriminatively, based on their knowledge of the material. The three-level CBM scoring scheme was a valid tool to assess student learning of core neuroanatomic topics regarding structure and function.
Chiropractic; Educational Measurement; Learning
Health care professionals often lack adequate knowledge about health literacy and the skills needed to address low health literacy among patients and their caregivers. Many promising practices for mitigating the effects of low health literacy are not used consistently. Improving health literacy training for health care professionals has received increasing emphasis in recent years. The development and evaluation of curricula for health professionals has been limited by the lack of agreed-upon educational competencies in this area. This study aimed to identify a set of health literacy educational competencies and target behaviors, or practices, relevant to the training of all health care professionals. The authors conducted a thorough literature review to identify a comprehensive list of potential health literacy competencies and practices, which they categorized into 1 or more educational domains (i.e., knowledge, skills, attitudes) or a practice domain. The authors stated each item in operationalized language following Bloom's Taxonomy. The authors then used a modified Delphi method to identify consensus among a group of 23 health professions education experts representing 11 fields in the health professions. Participants rated their level of agreement as to whether a competency or practice was both appropriate and important for all health professions students. A predetermined threshold of 70% agreement was used to define consensus. After 4 rounds of ratings and modifications, consensus agreement was reached on 62 out of 64 potential educational competencies (24 knowledge items, 27 skill items, and 11 attitude items), and 32 out of 33 potential practices. This study is the first known attempt to develop consensus on a list of health literacy practices and to translate recommended health literacy practices into an agreed-upon set of measurable educational competencies for health professionals. 
Further work is needed to prioritize the competencies and practices in terms of relative importance.
Questions have long been used as a teaching tool by teachers and preceptors to assess students’ knowledge, promote comprehension, and stimulate critical thinking. Well-crafted questions lead to new insights, generate discussion, and promote the comprehensive exploration of subject matter. Poorly constructed questions can stifle learning by creating confusion, intimidating students, and limiting creative thinking. Teachers most often ask lower-order, convergent questions that rely on students’ factual recall of prior knowledge rather than asking higher-order, divergent questions that promote deep thinking, requiring students to analyze and evaluate concepts. This review summarizes the taxonomy of questions, provides strategies for formulating effective questions, and explores practical considerations to enhance student engagement and promote critical thinking. These concepts can be applied in the classroom and in experiential learning environments.
questioning; critical thinking; pedagogy; effective teaching; teaching tool
To implement an answer-until-correct examination format for a pharmacokinetics course and determine whether this format assessed pharmacy students' mastery of the desired learning outcomes as well as a mixed format examination (eg, one with a combination of open-ended and fill-in-the-blank questions).
Students in a core pharmacokinetics course were given 3 examinations in answer-until-correct format. The format allowed students multiple attempts at answering each question, with points allocated based on the number of attempts required to correctly answer the question. Examination scores were compared to those of students in the previous year as a control.
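The abstract states only that points were allocated by the number of attempts; the point schedule below (4/2/1/0 across the four options of an MCQ) is an illustrative assumption, not the course's actual allocation:

```python
# Sketch of answer-until-correct scoring: a student keeps answering
# each question until correct, and earns fewer points the more
# attempts were needed. The schedule below is assumed for a
# 4-option MCQ; the course's actual point values are not given.

POINTS_BY_ATTEMPT = {1: 4, 2: 2, 3: 1, 4: 0}

def exam_score(attempts_per_question):
    """attempts_per_question: the attempt number on which each
    question was finally answered correctly (1 = first try)."""
    return sum(POINTS_BY_ATTEMPT[a] for a in attempts_per_question)

# A 5-question exam: three first-try, one second-try, one third-try.
print(exam_score([1, 1, 2, 1, 3]))  # 4 + 4 + 2 + 4 + 1 = 15
```

Because every question is eventually answered correctly, the format delivers immediate feedback during the examination itself while still discriminating between students via the attempt count.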
The grades of students who were given the immediate feedback examination format were equivalent to those of students in the previous year. The students preferred the testing format because it allowed multiple attempts to answer questions and provided immediate feedback. Some students reported increased anxiety because of the new examination format.
The immediate feedback format assessed students' mastery of course outcomes, provided immediate feedback to encourage deep learning and critical-thinking skills, and was preferred by students over the traditional examination format.
critical thinking; assessment; survey; anxiety; answer-until-correct; pharmacokinetics; examination
Objective. To compare the effectiveness of team-based learning (TBL) to that of traditional lectures on learning outcomes in a therapeutics course sequence.
Design. A revised TBL curriculum was implemented in a therapeutics course sequence. Multiple choice and essay questions identical to those used to test third-year students (P3) taught using a traditional lecture format were administered to the second-year pharmacy students (P2) taught using the new TBL format.
Assessment. One hundred thirty-one multiple-choice questions were evaluated; 79 tested recall of knowledge and 52 tested higher-level application of knowledge. For the recall questions, students taught through traditional lectures scored significantly higher compared to the TBL students (88%±12% vs 82%±16%, p=0.01). For the questions assessing application of knowledge, no differences were seen between teaching pedagogies (81%±16% vs 77%±20%, p=0.24). Scores on essay questions and the number of students who achieved 100% were also similar between groups.
Conclusion. Transition to a TBL format from a traditional lecture-based pedagogy allowed P2 students to perform at a similar level as students with an additional year of pharmacy education on application of knowledge type questions. However, P3 students outperformed P2 students regarding recall type questions and overall. Further assessment of long-term learning outcomes is needed to determine if TBL produces more persistent learning and improved application in clinical settings.
active learning; team-based learning; pharmacotherapy; assessment
Evidence-based medicine (EBM) involves approaching a clinical problem using a four-step method: (1) formulate a clear clinical question from a patient’s problem, (2) search the literature for relevant clinical articles, (3) evaluate (critically appraise) the evidence for its validity and usefulness, (4) implement useful findings into clinical practice. EBM has now been incorporated as an integral part of the medical curriculum in many faculties of medicine around the world. The Faculty of Medicine, King Abdulaziz University, started its process of curriculum reform and introduction of the new curriculum 4 years ago. One of the most characteristic aspects of this curriculum is the introduction of special study modules and electives as a student-selected component in the fourth year of study; the Introduction to Evidence-Based Medicine course was included as one of these special study modules. The purpose of this article is to evaluate the EBM skills of medical students after completing the course and their perceptions of the faculty member delivering the course and organization of the course.
Materials and methods
The EBM course was held for the first time as a special study module for fourth-year medical students in the first semester of the academic year 2009–2010. Fifteen students were enrolled in this course. At the end of the course, students anonymously evaluated aspects of the course regarding their EBM skills and course organization using a five-point Likert scale in response to an online course evaluation questionnaire. In addition, students’ achievement was evaluated with regard to the skills and competencies taught in the course.
Medical students generally gave high scores to all aspects of the EBM course, including course organization, course delivery, methods of assessment, and overall. Scores were also high for students’ self-evaluation of skill level and EBM experience. A faculty member’s evaluation of the students’ achievement showed an average total score of 92.2% across all EBM steps.
The EBM course at the Faculty of Medicine, King Abdulaziz University, is useful for familiarizing medical students with the basic principles of EBM and to help them in answering routine questions of clinical interest in a systematic way. In light of the results obtained from implementing this course with a small number of students, and as a student-selected component, the author believes integrating EBM longitudinally throughout the curriculum would be beneficial for King Abdulaziz University medical students. It would provide a foundation of knowledge, offer easy access to resources, promote point-of-care and team learning, help students to develop applicable skills for lifelong learning, and help the faculty to achieve its goals of becoming more student-centered and encouraging students to employ more self-directed learning strategies.
student-selected component; evidence-based medicine; learning; curriculum
Many students in Biomedical Sciences have difficulty understanding biomechanics. In a second-year course, biomechanics is taught in the first week and examined at the end of the fourth week. Knowledge is retained longer if the subject material is repeated. However, how does one encourage students to repeat the subject matter? For this study, we developed ‘two opportunities to practice per day (TOPday)’, consisting of multiple-choice questions on biomechanics with immediate feedback, which were sent via e-mail. We investigated the effect of TOPday on self-confidence, enthusiasm, and test results for biomechanics. All second-year students (n = 95) received a TOPday of biomechanics on every regular course day, with increasing difficulty during the course. At the end of the course, a non-anonymous questionnaire was administered. The students were asked how many TOPday questions they completed (0–6 questions [group A]; 7–18 questions [group B]; 19–24 questions [group C]). Other questions included the appreciation for TOPday, and increase (no/yes) in self-confidence and enthusiasm for biomechanics. Seventy-eight students participated in the examination and completed the questionnaire. The appreciation for TOPday in group A (n = 14), B (n = 23) and C (n = 41) was 7.0 (95 % CI 6.5–7.5), 7.4 (95 % CI 7.0–7.8), and 7.9 (95 % CI 7.6–8.1), respectively (p < 0.01 between A and C). Of the students who actively participated (B and C), 91 and 80 % reported an increase in their self-confidence and enthusiasm, respectively, for biomechanics due to TOPday. In addition, they had a higher test result for biomechanics (p < 0.01) compared with those who did not actively participate (A). In conclusion, the teaching method ‘TOPday’ seems to be an effective way to encourage students to repeat the subject material, with the extra advantage that students are stimulated to keep practising for the examination.
The appreciation was high and there was a positive association between active participation, on the one hand, and self-confidence, enthusiasm, and test results for biomechanics on the other.
Daily quiz; Biomechanics; Confidence; Enthusiasm; Education; Test results