Results 1-25 (1548258)

1.  Replacing Lecture with Peer-led Workshops Improves Student Learning 
CBE Life Sciences Education  2009;8(3):182-192.
Peer-facilitated workshops enhanced interactivity in our introductory biology course, which led to increased student engagement and learning. A majority of students preferred attending two lectures and a workshop each week over attending three weekly lectures. In the workshops, students worked in small cooperative groups as they solved challenging problems, evaluated case studies, and participated in activities designed to improve their general learning skills. Students in the workshop version of the course scored higher on exam questions recycled from preworkshop semesters. Grades were higher over three workshop semesters in comparison with the seven preworkshop semesters. Although males and females benefited from workshops, there was a larger improvement of grades and increased retention by female students; although underrepresented minority (URM) and non-URM students benefited from workshops, there was a larger improvement of grades by URM students. As well as improving student performance and retention, the addition of interactive workshops also improved the quality of student learning: Student scores on exam questions that required higher-level thinking increased from preworkshop to workshop semesters.
doi:10.1187/cbe.09-01-0002
PMCID: PMC2736022  PMID: 19723813
2.  Increased Course Structure Improves Performance in Introductory Biology 
CBE Life Sciences Education  2011;10(2):175-186.
We tested the hypothesis that highly structured course designs, which implement reading quizzes and/or extensive in-class active-learning activities and weekly practice exams, can lower failure rates in an introductory biology course for majors, compared with low-structure course designs that are based on lecturing and a few high-risk assessments. We controlled for 1) instructor effects by analyzing data from quarters when the same instructor taught the course, 2) exam equivalence with new assessments called the Weighted Bloom's Index and Predicted Exam Score, and 3) student equivalence using a regression-based Predicted Grade. We also tested the hypothesis that points from reading quizzes, clicker questions, and other “practice” assessments in highly structured courses inflate grades and confound comparisons with low-structure course designs. We found no evidence that points from active-learning exercises inflate grades or reduce the impact of exams on final grades. When we controlled for variation in student ability, failure rates were lower in a moderately structured course design and were dramatically lower in a highly structured course design. This result supports the hypothesis that active-learning exercises can make students more skilled learners and help bridge the gap between poorly prepared students and their better-prepared peers.
doi:10.1187/cbe.10-08-0105
PMCID: PMC3105924  PMID: 21633066
3.  Prescribed Active Learning Increases Performance in Introductory Biology 
CBE Life Sciences Education  2007;6(2):132-139.
We tested five course designs that varied in the structure of daily and weekly active-learning exercises in an attempt to lower the traditionally high failure rate in a gateway course for biology majors. Students were given daily multiple-choice questions and answered with electronic response devices (clickers) or cards. Card responses were ungraded; clicker responses were graded for right/wrong answers or participation. Weekly practice exams were done as an individual or as part of a study group. Compared with previous versions of the same course taught by the same instructor, students in the new course designs performed better: There were significantly lower failure rates, higher total exam points, and higher scores on an identical midterm. Attendance was higher in the clicker versus cards section; attendance and course grade were positively correlated. Students did better on clicker questions if they were graded for right/wrong answers versus participation, although this improvement did not translate into increased scores on exams. In this course, achievement increases when students get regular practice via prescribed (graded) active-learning exercises.
doi:10.1187/cbe.06-09-0194
PMCID: PMC1885904  PMID: 17548875
4.  Writing to Learn: An Evaluation of the Calibrated Peer Review™ Program in Two Neuroscience Courses 
Although the majority of scientific information is communicated in written form, and peer review is the primary process by which it is validated, undergraduate students may receive little direct training in science writing or peer review. Here, I describe the use of Calibrated Peer Review™ (CPR), a free, web-based writing and peer review program designed to alleviate instructor workload, in two undergraduate neuroscience courses: an upper-level sensation and perception course (41 students, three assignments) and an introductory neuroscience course (50 students, two assignments). Using CPR online, students reviewed primary research articles on assigned ‘hot’ topics, wrote short essays in response to specific guiding questions, reviewed standard ‘calibration’ essays, and provided anonymous quantitative and qualitative peer reviews. An automated grading system calculated the final scores based on a student’s essay quality (as determined by the average of three peer reviews) and his or her accuracy in evaluating 1) three standard calibration essays, 2) three anonymous peer reviews, and 3) his or her self review. Thus, students were assessed not only on their skill at constructing logical, evidence-based arguments, but also on their ability to accurately evaluate their peers’ writing. According to both student self-reports and instructor observation, students’ writing and peer review skills improved over the course of the semester. Student evaluation of the CPR program was mixed; while some students felt that the peer review process enhanced their understanding of the material and improved their writing, others felt as though the process was biased and required too much time. Despite student critiques of the program, I still recommend the CPR program as an excellent and free resource for incorporating more writing, peer review, and critical thinking into an undergraduate neuroscience curriculum.
PMCID: PMC3592621  PMID: 23493247
peer review; writing to learn; web-based learning; learning technology; Calibrated Peer Review
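To make the composite scoring described in entry 4 concrete, here is a minimal Python sketch of a CPR-style grade. The 0-10 rating scale, the 60/40 weighting, and the function and parameter names are illustrative assumptions; the abstract does not publish the actual formula.

    # Hypothetical CPR-style composite score; weights and scale are assumptions.
    def cpr_score(peer_ratings, calib_accuracy, review_accuracy, self_accuracy,
                  w_essay=0.6, w_reviewing=0.4):
        """Combine essay quality with reviewing accuracy into one 0-10 grade.

        peer_ratings    : 0-10 ratings of the student's essay from three peers
        calib_accuracy  : accuracy (0-1) on the three standard calibration essays
        review_accuracy : accuracy (0-1) of the three anonymous peer reviews
        self_accuracy   : accuracy (0-1) of the self review
        """
        essay_quality = sum(peer_ratings) / len(peer_ratings)   # average of peer reviews
        reviewing = (calib_accuracy + review_accuracy + self_accuracy) / 3
        return w_essay * essay_quality + w_reviewing * 10 * reviewing

    print(cpr_score([8, 7, 9], calib_accuracy=0.9, review_accuracy=0.8, self_accuracy=1.0))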
5.  How Accurate Is Peer Grading? 
CBE Life Sciences Education  2010;9(4):482-488.
Previously we showed that weekly, written, timed, and peer-graded practice exams help increase student performance on written exams and decrease failure rates in an introductory biology course. Here we analyze the accuracy of peer grading, based on a comparison of student scores to those assigned by a professional grader. When students graded practice exams by themselves, they were significantly easier graders than a professional; overall, students awarded ≈25% more points than the professional did. This difference represented ≈1.33 points on a 10-point exercise, or 0.27 points on each of the five 2-point questions posed. When students graded practice exams as a group of four, the same student-expert difference occurred. The student-professional gap was wider for questions that demanded higher-order versus lower-order cognitive skills. Thus, students not only have a harder time answering questions on the upper levels of Bloom's taxonomy, they have a harder time grading them. Our results suggest that peer grading may be accurate enough for low-risk assessments in introductory biology. Peer grading can help relieve the burden on instructional staff posed by grading written answers—making it possible to add practice opportunities that increase student performance on actual exams.
doi:10.1187/cbe.10-03-0017
PMCID: PMC2995766  PMID: 21123695
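The point differences reported in entry 5 can be reproduced with a few lines of arithmetic; the professional grader's implied average below is inferred from the abstract's ≈25% figure and is not a number the authors report.

    # Values taken from the abstract above.
    student_excess_total = 1.33      # extra points students awarded on a 10-point exercise
    questions = 5                    # five 2-point questions
    print(student_excess_total / questions)   # ~0.27 extra points per question

    # If that excess is ~25% of the professional's total, the implied professional
    # average is roughly 1.33 / 0.25 (an inference, not a reported value).
    print(student_excess_total / 0.25)        # ~5.3 of 10 points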
6.  Relation between contemplative exercises and an enriched psychology students' experience in a neuroscience course 
Frontiers in Psychology  2014;5:1296.
This article examines the relation of contemplative exercises with enhancement of students' experience during neuroscience studies. Short contemplative exercises inspired by the Buddhist tradition of self-inquiry were introduced in an undergraduate neuroscience course for psychology students. At the start of the class, all students were asked to participate in short “personal brain investigations” relevant to the topic presented. These investigations were aimed at bringing stable awareness to a specific perceptual, emotional, attentional, or cognitive process and observing it in a non-judgmental, non-personal way. In addition, students could choose to participate, for bonus credit, in a longer exercise designed to expand upon the weekly class activity. In the exercise, students continued their “personal brain investigations” for 10 min a day, 4 days a week. They wrote “lab reports” on their daily observations, obtained feedback from the teacher, and at the end of the year reviewed their reports and reflected upon their experiences during the semester. Out of 265 students, 102 students completed the bonus track and their final reflections were analyzed using qualitative methodology. In addition, 91 of the students answered a survey at the end of the course, 43 students participated in a quiz 1 year after course graduation, and the final grades of all students were collected and analyzed. Overall, students reported satisfaction from the exercises and felt they contributed to their learning experience. In the 1-year follow-up, the bonus-track students were significantly more likely than their peers to remember class material. The qualitative analysis of bonus-track students' reports revealed that the bonus-track process elicited positive feelings, helped students connect with class material and provided them with personal insights. In addition, students acquired contemplative skills, such as increased awareness and attention, non-judgmental attitudes, and better stress-management abilities. We provide examples of “personal brain investigations” and discuss limitations of introducing a contemplative approach.
doi:10.3389/fpsyg.2014.01296
PMCID: PMC4235268  PMID: 25477833
contemplative pedagogy; contemplative neuroscience; pedagogical psychology; pedagogical neuroscience
7.  An electronic portfolio for quantitative assessment of surgical skills in undergraduate medical education 
BMC Medical Education  2013;13:65.
Background
We evaluated a newly designed electronic portfolio (e-Portfolio) that provided quantitative evaluation of surgical skills. Medical students at the University of Seville used the e-Portfolio on a voluntary basis for evaluation of their performance in undergraduate surgical subjects.
Methods
Our new web-based e-Portfolio was designed to evaluate surgical practical knowledge and skills targets. Students recorded each activity on a form, attached evidence, and added their reflections. Students self-assessed their practical knowledge using qualitative criteria (yes/no), and graded their skills according to complexity (basic/advanced) and participation (observer/assistant/independent). A numerical value was assigned to each activity, and the values of all activities were summated to obtain the total score. The application automatically displayed quantitative feedback. We performed qualitative evaluation of the perceived usefulness of the e-Portfolio and quantitative evaluation of the targets achieved.
Results
Thirty-seven of 112 students (33%) used the e-Portfolio, of whom 87% reported that they understood the methodology of the portfolio. All students reported an improved understanding of their learning objectives resulting from the numerical visualization of progress, all students reported that the quantitative feedback encouraged their learning, and 79% of students felt that their teachers were more available because they were using the e-Portfolio. Only 51.3% of students reported that the reflective aspects of learning were useful. Individual students achieved a maximum of 65% of the total targets and 87% of the skills targets. The mean total score was 345 ± 38 points. For basic skills, 92% of students achieved the maximum score for participation as an independent operator, and all achieved the maximum scores for participation as an observer and assistant. For complex skills, 62% of students achieved the maximum score for participation as an independent operator, and 98% achieved the maximum scores for participation as an observer or assistant.
Conclusions
Medical students reported that use of an electronic portfolio that provided quantitative feedback on their progress was useful when the number and complexity of targets were appropriate, but not when the portfolio offered only formative evaluations based on reflection. Students felt that use of the e-Portfolio guided their learning process by indicating knowledge gaps to themselves and teachers.
doi:10.1186/1472-6920-13-65
PMCID: PMC3651863  PMID: 23642100
Electronic portfolio; Surgical subjects; Self-guided learning; Self-assessment; Evaluative portfolio
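The quantitative scoring described in entry 7 (a numerical value per activity, summed to a total) can be sketched as follows; the point values for complexity and participation levels are invented for illustration, since the abstract does not publish the actual scoring table.

    # Illustrative e-portfolio scoring; the point values are assumptions.
    COMPLEXITY_POINTS = {"basic": 1, "advanced": 2}
    PARTICIPATION_POINTS = {"observer": 1, "assistant": 2, "independent": 3}

    def activity_score(complexity, participation):
        return COMPLEXITY_POINTS[complexity] * PARTICIPATION_POINTS[participation]

    def portfolio_total(activities):
        """activities: iterable of (complexity, participation) pairs."""
        return sum(activity_score(c, p) for c, p in activities)

    # One basic activity done independently, two advanced ones as assistant/observer.
    print(portfolio_total([("basic", "independent"),
                           ("advanced", "assistant"),
                           ("advanced", "observer")]))   # 3 + 4 + 2 = 9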
8.  Weekly Rotation of Facilitators to Improve Assessment of Group Participation in a Problem-based Learning Curriculum 
Objectives
To determine whether implementing a rotating facilitator structure provides a reliable method of assessing group participation and assigning grades to third-professional year pharmacy students in a problem-based learning curriculum.
Design
In the 2004-2005 school year, a “one block, one facilitator” structure was replaced by a “weekly rotating facilitator” structure. Each student received a grade from the assigned facilitator each week. The 8 weekly grades were then averaged for a final course grade. Student grades were reviewed weekly and at the end of each block. Facilitators and students completed survey instruments at the end of each of four 8-week blocks.
Assessment
Student grades were reviewed, and the class average was compared to the class averages from the 2 previous years. For example, in block I the class average was 86, compared with averages of 88 and 87 for 2002-03 and 2003-04, respectively. Survey data revealed 40% agreement among facilitators in block I that student performance had improved compared with performance prior to this change. This agreement increased to 71%, 72%, and 71%, respectively, for blocks II-IV. Student survey data at the end of the academic year supported weekly facilitator rotation and revealed that a majority of students agreed that exposure to a variety of facilitators enhanced their group participation.
Conclusion
As confirmed by student grades and student and faculty members’ feedback, the change to a rotating facilitator structure resulted in a reliable method of assigning student grades for group participation.
PMCID: PMC1803686  PMID: 17332853
problem-based learning; curriculum; facilitators; assessment
9.  Pleasure and Pain: Teaching Neuroscientific Principles of Hedonism in a Large General Education Undergraduate Course 
In a large (250 registrants) general education lecture course, neuroscience principles were taught by two professors as co-instructors, starting with simple brain anatomy, chemistry, and function, proceeding to basic brain circuits of pleasure and pain, and progressing with fellow expert professors covering relevant philosophical, artistic, marketing, and anthropological issues. With this as a base, the course wove between fields of high relevance to psychology and neuroscience, such as food addiction and preferences, drug seeking and craving, analgesic pain-inhibitory systems activated by opiates and stress, neuroeconomics, unconscious decision-making, empathy, and modern neuroscientific techniques (functional magnetic resonance imaging and event-related potentials) presented by the co-instructors and other Psychology professors. With no formal assigned textbook, all lectures were PowerPoint-based, containing links to supplemental public-domain material. PowerPoints were available on Blackboard several days before the lecture. All lectures were also video-recorded and posted that evening. The course had a Facebook page for after-class conversation and one of the co-instructors communicated directly with students on Twitter in real time during lecture to provide momentary clarification and comment. In addition to graduate student Teaching Assistants (TAs), to allow for small group discussion, ten undergraduate students who performed well in a previous class were selected to serve as discussion leaders. The Discussion Leaders met four times at strategic points over the semester with groups of 20–25 current students, and received one credit of Independent Study, thus creating a course within a course. The course grade was based on weighted scores from two multiple-choice exams and a five-page writing assignment in which each student reviewed three unique but brief original peer-reviewed research articles (one page each) combined with expository writing on the first and last pages. A draft of the first page, collected early in the term, was returned to each student by graduate TAs to provide individual feedback on scientific writing. Overall, the course has run three times at full or near-full enrollment capacity despite being held at an 8:00 AM time slot. Student-generated teaching evaluations place it well within the normal range, while this format contributes importantly to budget efficiency, permitting the teaching of more required small-format courses (e.g., freshman writing). The demographics of the course have changed such that the vast majority of the students are now outside the disciplines of neuroscience or psychology and are taking the course to fulfill a General Education requirement. This pattern allows the wide dissemination of basic neuroscientific knowledge to a general college audience.
PMCID: PMC3852868  PMID: 24319388
Reward; addiction; relapse; craving; body weight regulation; analgesia; empathy; behavioral economics; fMRI
10.  A Technology-Enhanced Patient Case Workshop 
Objectives
To assess the impact of technology-based changes on student learning, skill development, and satisfaction in a patient-case workshop.
Design
A new workshop format for a course was adopted over a 3-year period. Students received and completed patient cases and obtained immediate performance feedback in class instead of preparing the case prior to class and waiting for instructors to grade and return their cases. The cases were designed and accessed via an online course management system.
Assessment
Student satisfaction was measured using end-of-course surveys. The impact of the technology-based changes on student learning, problem-solving, and critical-thinking skills was measured and compared between the 2 different course formats by assessing changes in examination responses. Three advantages to the new format were reported: real-life format in terms of time constraint for responses, a team learning environment, and expedient grading and feedback. Students overwhelmingly agreed that the new format should be continued. Students' examination scores improved significantly under the new format.
Conclusion
The change in delivery of patient-case workshops to an online, real-time system was well accepted and resulted in enhanced learning, critical thinking, and problem-solving skills.
PMCID: PMC2739069  PMID: 19777101
course design; Internet access; online course management system; laptop computers; assessment
11.  Using Audience Response Technology to provide formative feedback on pharmacology performance for non-medical prescribing students - a preliminary evaluation 
BMC Medical Education  2012;12:113.
Background
The use of anonymous audience response technology (ART) to actively engage students in classroom learning has been evaluated positively across multiple settings. To date, however, there has been no empirical evaluation of the use of individualised ART handsets and formative feedback of ART scores. The present study investigates student perceptions of such a system and the relationship between formative feedback results and exam performance.
Methods
Four successive cohorts of Non-Medical Prescribing students (n=107) had access to the individualised ART system and three of these groups (n=72) completed a questionnaire about their perceptions of using ART. Semi-structured interviews were carried out with a purposive sample of seven students who achieved a range of scores on the formative feedback. Using data from all four cohorts of students, the relationship between mean ART scores and summative pharmacology exam score was examined using a non-parametric correlation.
Results
Questionnaire and interview data suggested that the use of ART enhanced the classroom environment, motivated students and promoted learning. Questionnaire data demonstrated that students found the formative feedback helpful for identifying their learning needs (95.6%), guiding their independent study (86.8%), and as a revision tool (88.3%). Interviewees particularly valued the objectivity of the individualised feedback which helped them to self-manage their learning. Interviewees’ initial anxiety about revealing their level of pharmacology knowledge to the lecturer and to themselves reduced over time as students focused on the learning benefits associated with the feedback.
A significant positive correlation was found between students’ formative feedback scores and their summative pharmacology exam scores (Spearman’s rho = 0.71, N=107, p<.01).
Conclusions
Despite initial anxiety about the use of individualised ART units, students rated the helpfulness of the individualised handsets and personalised formative feedback highly. The significant correlation between ART response scores and student exam scores suggests that formative feedback can provide students with a useful reference point in terms of their level of exam-readiness.
doi:10.1186/1472-6920-12-113
PMCID: PMC3515432  PMID: 23148762
Audience response technology; Teaching; Pharmacology; Non-medical prescribing
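Entry 11 reports a Spearman correlation (rho = 0.71, N = 107) between formative ART scores and summative exam scores. Below is a minimal sketch of that analysis run on synthetic data rather than the study's cohort; the variable names and simulated relationship are illustrative only.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    art_scores = rng.uniform(40, 100, size=107)               # mean formative ART score per student
    exam_scores = 0.6 * art_scores + rng.normal(0, 10, 107)   # summative pharmacology exam score

    rho, p_value = spearmanr(art_scores, exam_scores)
    print(f"Spearman's rho = {rho:.2f}, p = {p_value:.3g}")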
12.  Advanced Cardiac Resuscitation Evaluation (ACRE): A randomised single-blind controlled trial of peer-led vs. expert-led advanced resuscitation training 
Background
Advanced resuscitation skills training is an important and enjoyable part of medical training, but requires small group instruction to ensure active participation of all students. Increases in student numbers have made this increasingly difficult to achieve.
Methods
A single-blind randomised controlled trial of peer-led vs. expert-led resuscitation training was performed using a group of sixth-year medical students as peer instructors. The expert instructors were a senior and a middle grade doctor, and a nurse who is an Advanced Life Support (ALS) Instructor.
A power calculation showed that the trial would have a greater than 90% chance of rejecting the null hypothesis (that expert-led groups performed 20% better than peer-led groups) if that were the true situation. Secondary outcome measures were the proportion of High Pass grades in each group and safety incidents.
The peer instructors designed and delivered their own course material. To ensure safety, the peer-led groups used modified defibrillators that could deliver only low-energy shocks.
Blinded assessment was conducted using an Objective Structured Clinical Examination (OSCE). The checklist items were based on International Liaison Committee on Resuscitation (ILCOR) guidelines using Ebel standard-setting methods that emphasised patient and staff safety and clinical effectiveness.
The results were analysed using Exact methods, chi-squared and t-test.
Results
A total of 132 students were randomised: 58 into the expert-led group, 74 into the peer-led group. 57/58 (98%) of students from the expert-led group achieved a Pass compared to 72/74 (97%) from the peer-led group: Exact statistics confirmed that it was very unlikely (p = 0.0001) that the expert-led group was 20% better than the peer-led group.
There were no safety incidents, and High Pass grades were achieved by 64 (49%) of students: 33/58 (57%) from the expert-led group, 31/74 (42%) from the peer-led group. Exact statistics showed that the difference of 15% meant that it was possible that the expert-led teaching was 20% better at generating students with High Passes.
Conclusions
The key elements of advanced cardiac resuscitation can be safely and effectively taught to medical students in small groups by peer-instructors who have undergone basic medical education training.
doi:10.1186/1757-7241-18-3
PMCID: PMC2818633  PMID: 20074353
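The ACRE analysis in entry 12 asked whether expert-led teaching could plausibly be 20% better than peer-led teaching, given pass rates of 57/58 and 72/74. The paper used exact methods; the sketch below uses a simple normal approximation only to illustrate the logic of testing against that 20% margin, so it will not match the published p value exactly.

    from math import sqrt
    from scipy.stats import norm

    pass_expert, n_expert = 57, 58     # 98% pass, expert-led
    pass_peer, n_peer = 72, 74         # 97% pass, peer-led
    margin = 0.20                      # null hypothesis: expert-led is 20% better

    p_e, p_p = pass_expert / n_expert, pass_peer / n_peer
    se = sqrt(p_e * (1 - p_e) / n_expert + p_p * (1 - p_p) / n_peer)

    # The observed difference (~1%) is far below the 20% assumed under the null.
    z = (margin - (p_e - p_p)) / se
    print(f"z = {z:.1f}, one-sided p = {norm.sf(z):.1e}")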
13.  Temporal Structure of First-year Courses and Success at Course Exams: Comparison of Traditional Continual and Block Delivery of Anatomy and Chemistry Courses 
Croatian Medical Journal  2009;50(1):61-68.
Aim
To evaluate students’ academic success at the Anatomy and Chemistry courses delivered in a traditional continual course, spread over the two semesters, or in alternating course blocks, in which 1 group of students attended first the Anatomy and then the Chemistry course and vice versa.
Method
We analyzed the data on exam grades for Anatomy and Chemistry courses in the first year of the curriculum for academic year 2001/02, with the traditional continual delivery of the courses (n = 253 for chemistry and n = 243 for anatomy course), and academic year 2003/04, with block delivery of the courses (n = 255 for chemistry and n = 260 for anatomy course). The content of the courses and the teachers were similar in both academic years. Grades from the final examination were analyzed only for students who sat the exam at the first available exam term and passed the course. For the Anatomy block course, grades at 2 interim written tests and 2 parts of the final exam (practical stage exam and oral exam) in each block were analyzed for students who passed all interim tests and the final exam.
Results
There were no differences between the two types of course delivery in the number of students passing the final examination at first attempt. There was a decrease in passing percentage for the two Anatomy block course student groups in 2003/04 (56% passing students in block 1 vs 40% in block 2, P = 0.014). There were no differences in the final examination grade between the 2 blocks for both the Anatomy and Chemistry course, but there was an increase in the average grades from the 2001/02 to the 2003/04 academic year due to an increase in Chemistry grades (F(1,399) = 18.4, P < 0.001, 2 × 2 ANOVA). Grades in Chemistry were significantly lower than grades in Anatomy when courses were delivered in a continual but not in a block format (F(1,399) = 35.1, P < 0.001, 2 × 2 ANOVA). When both courses were delivered in a block format, there was no effect of the sequence of their delivery (F(1,206) = 1.8, P = 0.182, 2 × 2 ANOVA). There was also a significant difference in grades on interim assessments of Anatomy when it was delivered in the block format (F(3,85) = 28.8, P < 0.001, between-within subjects 2 × 4 ANOVA), with grades from the practical test and oral exam being significantly lower than grades on the 2 interim tests that came at the beginning of the block (P < 0.001 for all pair-wise comparisons).
Conclusions
The type of course delivery was not associated with significant differences in student academic success in Anatomy and Chemistry courses in the medical curriculum. Students can successfully pass these courses when they are delivered either in a continual, whole year format or in a condensed time format of a course block, regardless of the number and type of courses preceding the block course.
doi:10.3325/cmj.2009.50.61
PMCID: PMC2657569  PMID: 19260146
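Entry 13 analyzes grades with 2 × 2 ANOVAs (course × academic year). Below is a minimal sketch of that design using statsmodels on synthetic grades; the grade distribution and effect size are invented, so it will not reproduce the published F values.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "course": np.repeat(["Anatomy", "Chemistry"], 200),
        "year": np.tile(np.repeat(["2001/02", "2003/04"], 100), 2),
    })
    # Synthetic passing grades on a 2-5 scale, with a small Chemistry improvement in 2003/04.
    bump = 0.4 * ((df["course"] == "Chemistry") & (df["year"] == "2003/04"))
    df["grade"] = np.clip(rng.normal(3.5, 0.7, size=len(df)) + bump, 2, 5)

    model = smf.ols("grade ~ C(course) * C(year)", data=df).fit()
    print(anova_lm(model, typ=2))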
14.  A Randomized-Controlled Study of Encounter Cards to Improve Oral Case Presentation Skills of Medical Students 
Objective
To determine the feasibility of oral case presentation (OCP) encounter cards as a tool for formative evaluation, to estimate the reliability and validity of the ratings when used in a medicine clerkship, and to examine whether the use of OCP encounter cards improves students' OCP skills.
Design
Randomized controlled study.
Setting
Medicine core clerkship at a U.S. medical school.
Participants/Intervention
Students enrolled in the medicine core clerkship (n=164) from January to December of 2003 were randomly assigned to receive weekly feedback using OCP encounter cards rating nine presentation competencies or to receive usual feedback. Mean OCP ratings were correlated with multiple summative assessments. Performance on an end-of-clerkship OCP was compared between intervention and control groups.
Main Results
Eighty percent of cards were completed. The mean OCP rating averaged over 9 competencies was 7.7 (SD=0.8) on a 9-point scale. Standard error of ratings was 0.3. OCP ratings were correlated with inpatient evaluations (r=.58), inpatient ratings of presentation skills (r=.43), and final grades (r=.40). Final OCP performance was similar for the intervention and control groups (7.0 vs 7.2, P=.09).
Conclusion
OCP encounter cards are a novel and feasible tool to assess clerkship students' oral case presentation skills. OCP card ratings are reproducible, and validity is suggested by their correlation with multiple markers of performance. However, encounter cards did not improve performance on summative oral presentations.
doi:10.1111/j.1525-1497.2005.0140.x
PMCID: PMC1490196  PMID: 16050885
medical education; medical students oral case presentations; clinical competence
15.  Teaching Students How to Study: A Workshop on Information Processing and Self-Testing Helps Students Learn 
CBE Life Sciences Education  2011;10(2):187-198.
We implemented a “how to study” workshop for small groups of students (6–12) for N = 93 consenting students, randomly assigned from a large introductory biology class. The goal of this workshop was to teach students self-regulating techniques with visualization-based exercises as a foundation for learning and critical thinking in two areas: information processing and self-testing. During the workshop, students worked individually or in groups and received immediate feedback on their progress. Here, we describe two individual workshop exercises, report their immediate results, describe students’ reactions (based on the workshop instructors’ experience and student feedback), and report student performance on workshop-related questions on the final exam. Students rated the workshop activities highly and performed significantly better on workshop-related final exam questions than the control groups. This was the case for both lower- and higher-order thinking questions. Student achievement (i.e., grade point average) was significantly correlated with overall final exam performance but not with workshop outcomes. This long-term (10 wk) retention of a self-testing effect across question levels and student achievement is a promising endorsement for future large-scale implementation and further evaluation of this “how to study” workshop as a study support for introductory biology (and other science) students.
doi:10.1187/cbe.10-11-0142
PMCID: PMC3105925  PMID: 21633067
16.  A Process-Oriented Guided Inquiry Approach to Teaching Medicinal Chemistry 
Objective
To integrate process-oriented guided-inquiry learning (POGIL) team-based activities into a 1-semester medicinal chemistry course for doctor of pharmacy (PharmD) students and determine the outcomes.
Design
Students in the fall 2007 section of the Medicinal Chemistry course were taught in a traditional teacher-centered manner, with the majority of class time spent on lectures and a few practice question sets. Students in the fall 2008 and fall 2009 sections of Medicinal Chemistry spent approximately 40% of class time in structured self-selected teams where they worked through guided-inquiry exercises to supplement the lecture material.
Assessment
The mean examination score of students in the guided-inquiry sections (fall 2008 and fall 2009) was almost 3 percentage points higher than that of students in the fall 2007 class (P < 0.05). Furthermore, the grade distribution shifted from a B-C centered distribution (fall 2007 class) to an A-B centered distribution (fall 2008 and fall 2009 classes).
Conclusions
The inclusion of the POGIL style team-based learning exercises improved grade outcomes for the students, encouraged active engagement with the material during class time, provided immediate feedback to the instructor regarding student-knowledge deficiencies, and created a classroom environment that was well received by students.
PMCID: PMC2972515  PMID: 21088726
medicinal chemistry; guided inquiry; active learning; process-oriented guided-inquiry learning (POGIL); team-based learning
17.  Medical Students' Exposure to and Attitudes about the Pharmaceutical Industry: A Systematic Review 
PLoS Medicine  2011;8(5):e1001037.
A systematic review of published studies reveals that undergraduate medical students may experience substantial exposure to pharmaceutical marketing, and that this contact may be associated with positive attitudes about marketing.
Background
The relationship between health professionals and the pharmaceutical industry has become a source of controversy. Physicians' attitudes towards the industry can form early in their careers, but little is known about this key stage of development.
Methods and Findings
We performed a systematic review reported according to PRISMA guidelines to determine the frequency and nature of medical students' exposure to the drug industry, as well as students' attitudes concerning pharmaceutical policy issues. We searched MEDLINE, EMBASE, Web of Science, and ERIC from the earliest available dates through May 2010, as well as bibliographies of selected studies. We sought original studies that reported quantitative or qualitative data about medical students' exposure to pharmaceutical marketing, their attitudes about marketing practices, relationships with industry, and related pharmaceutical policy issues. Studies were separated, where possible, into those that addressed preclinical versus clinical training, and were quality-rated using a standard methodology. Thirty-two studies met inclusion criteria. We found that 40%–100% of medical students reported interacting with the pharmaceutical industry. A substantial proportion of students (13%–69%) were reported as believing that gifts from industry influence prescribing. Eight studies reported a correlation between frequency of contact and favorable attitudes toward industry interactions. Students were more approving of gifts to physicians or medical students than to government officials. Certain attitudes appeared to change during medical school, though a formal time-trend analysis was not performed; for example, clinical students (53%–71%) were more likely than preclinical students (29%–62%) to report that promotional information helps educate about new drugs.
Conclusions
Undergraduate medical education provides substantial contact with pharmaceutical marketing, and the extent of such contact is associated with positive attitudes about marketing and skepticism about negative implications of these interactions. These results support future research into the association between exposure and attitudes, as well as any modifiable factors that contribute to attitudinal changes during medical education.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
The complex relationship between health professionals and the pharmaceutical industry has long been a subject of discussion among physicians and policymakers. There is a growing body of evidence that suggests that physicians' interactions with pharmaceutical sales representatives may influence clinical decision making in a way that is not always in the best interests of individual patients, for example, encouraging the use of expensive treatments that have no therapeutic advantage over less costly alternatives. The pharmaceutical industry often uses physician education as a marketing tool, as in the case of Continuing Medical Education courses that are designed to drive prescribing practices.
One reason that physicians may be particularly susceptible to pharmaceutical industry marketing messages is that doctors' attitudes towards the pharmaceutical industry may form early in their careers. The socialization effect of professional schooling is strong, and plays a lasting role in shaping views and behaviors.
Why Was This Study Done?
Recently, particularly in the US, some medical schools have limited students' and faculties' contact with industry, but some have argued that these restrictions are detrimental to students' education. Given the controversy over the pharmaceutical industry's role in undergraduate medical training, consolidating current knowledge in this area may be useful for setting priorities for changes to educational practices. In this study, the researchers systematically examined studies of pharmaceutical industry interactions with medical students and whether such interactions influenced students' views on related topics.
What Did the Researchers Do and Find?
The researchers did a comprehensive literature search using appropriate search terms for all relevant quantitative and qualitative studies published before June 2010. Using strict inclusion criteria, the researchers then selected 48 articles (from 1,603 abstracts) for full review and identified 32 eligible for analysis—giving a total of approximately 9,850 medical students studying at 76 medical schools or hospitals.
Most students had some form of interaction with the pharmaceutical industry but contact increased in the clinical years, with up to 90% of all clinical students receiving some form of educational material. The highest level of exposure occurred in the US. In most studies, the majority of students in their clinical training years found it ethically permissible for medical students to accept gifts from drug manufacturers, while a smaller percentage of preclinical students reported such attitudes. Students justified their entitlement to gifts by citing financial hardship or by asserting that most other students accepted gifts. In addition, although most students believed that education from industry sources is biased, students variably reported that information obtained from industry sources was useful and a valuable part of their education.
Almost two-thirds of students reported that they were immune to bias induced by promotion, gifts, or interactions with sales representatives but also reported that fellow medical students or doctors are influenced by such encounters. Eight studies reported a relationship between exposure to the pharmaceutical industry and positive attitudes about industry interactions and marketing strategies (although not all included supportive statistical data). Finally, student opinions were split on whether physician–industry interactions should be regulated by medical schools or the government.
What Do These Findings Mean?
This analysis shows that students are frequently exposed to pharmaceutical marketing, even in the preclinical years, and that the extent of students' contact with industry is generally associated with positive attitudes about marketing and skepticism towards any negative implications of interactions with industry. Therefore, strategies to educate students about interactions with the pharmaceutical industry should directly address widely held misconceptions about the effects of marketing and other biases that can emerge from industry interactions. But education alone may be insufficient. Institutional policies, such as rules regulating industry interactions, can play an important role in shaping students' attitudes, and interventions that decrease students' contact with industry and eliminate gifts may have a positive effect on building the skills that evidence-based medical practice requires. These changes can help cultivate strong professional values and instill in students a respect for scientific principles and critical evidence review that will later inform clinical decision-making and prescribing practices.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001037.
Further information about the influence of the pharmaceutical industry on doctors and medical students can be found at the American Medical Students Association PharmFree campaign and PharmFree Scorecard, Medsin-UK's PharmAware campaign, the nonprofit organization Healthy Skepticism, and the Web site of No Free Lunch.
doi:10.1371/journal.pmed.1001037
PMCID: PMC3101205  PMID: 21629685
18.  Using cloud-based mobile technology for assessment of competencies among medical students 
PeerJ  2013;1:e164.
Valid, direct observation of medical student competency in clinical settings remains challenging and limits the opportunity to promote performance-based student advancement. The rationale for direct observation is to ascertain that students have acquired the core clinical competencies needed to care for patients. Too often student observation results in highly variable evaluations which are skewed by factors other than the student’s actual performance. Barriers to effective direct observation and assessment include the lack of effective tools and strategies for assuring that transparent standards are used for judging clinical competency in authentic clinical settings. We developed a web-based content management system under the name Just in Time Medicine (JIT), to address many of these issues. The goals of JIT were fourfold: First, to create a self-service interface allowing faculty with average computing skills to author customizable content and criterion-based assessment tools displayable on internet-enabled devices, including mobile devices; second, to create an assessment and feedback tool capable of capturing learner progress related to hundreds of clinical skills; third, to enable easy access and utilization of these tools by faculty for learner assessment in authentic clinical settings as a means of just in time faculty development; fourth, to create a permanent record of the trainees’ observed skills useful for both learner and program evaluation. From July 2010 through October 2012, we implemented a JIT-enabled clinical evaluation exercise (CEX) among 367 third-year internal medicine students. Observers (attending physicians and residents) performed CEX assessments using JIT to guide and document their observations, record the time they spent observing and providing feedback to the students, and rate their overall satisfaction. Inter-rater reliability and validity were assessed with 17 observers who viewed six videotaped student-patient encounters and by measuring the correlation between student CEX scores and their scores on subsequent standardized-patient OSCE exams. A total of 3567 CEXs were completed by 516 observers. The average number of evaluations per student was 9.7 (±1.8 SD) and the average number of CEXs completed per observer was 6.9 (±15.8 SD). Observers spent less than 10 min on 43–50% of the CEXs and 68.6% on feedback sessions. A majority of observers (92%) reported satisfaction with the CEX. Inter-rater reliability was measured at 0.69 among all observers viewing the videotapes and these ratings adequately discriminated competent from non-competent performance. The measured CEX grades correlated with subsequent student performance on an end-of-year OSCE. We conclude that the use of JIT is feasible in capturing discrete clinical performance data with a high degree of user satisfaction. Our embedded checklists had adequate inter-rater reliability and concurrent and predictive validity.
doi:10.7717/peerj.164
PMCID: PMC3792179  PMID: 24109549
Educational technology; Educational measurement; Medical students; Smart phones; Competency based assessment; Direct observation; Medical faculty; Clinical competence; iPhone; miniCEX
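Entry 18 reports an inter-rater reliability of 0.69 from observers scoring six videotaped encounters, without naming the statistic used. One common choice for this kind of design is the Shrout-Fleiss ICC(2,1); the sketch below computes it from a small, entirely hypothetical ratings matrix (three observers instead of 17, scores on a 1-9 scale) and is not a reconstruction of the study's analysis.

    import numpy as np

    def icc_2_1(ratings):
        """Shrout-Fleiss ICC(2,1): rows = encounters, columns = raters."""
        ratings = np.asarray(ratings, dtype=float)
        n, k = ratings.shape
        grand = ratings.mean()
        ms_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)
        ms_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)
        resid = (ratings - ratings.mean(axis=1, keepdims=True)
                         - ratings.mean(axis=0, keepdims=True) + grand)
        ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                     + k * (ms_cols - ms_err) / n)

    # Six encounters scored by three hypothetical observers.
    print(round(icc_2_1([[7, 8, 7], [4, 5, 5], [6, 6, 7],
                         [8, 8, 9], [3, 4, 3], [5, 6, 6]]), 2))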
19.  Use of Live Interactive Webcasting for an International Postgraduate Module in eHealth: Case Study Evaluation 
Background
Producing “traditional” e-learning can be time consuming, and in a topic such as eHealth, it may have a short shelf-life. Students sometimes report feeling isolated and lacking in motivation. Synchronous methods can play an important part in any blended approach to learning.
Objective
The aim was to develop, deliver, and evaluate an international postgraduate module in eHealth using live interactive webcasting.
Methods
We developed a hybrid solution for live interactive webcasting using a scan converter, mixer, and digitizer, and video server to embed a presenter-controlled talking head or copy of the presenter’s computer screen (normally a PowerPoint slide) in a student chat room. We recruited 16 students from six countries and ran weekly 2.5-hour live sessions for 10 weeks. The content included the use of computers by patients, patient access to records, different forms of e-learning for patients and professionals, research methods in eHealth, geographic information systems, and telehealth. All sessions were recorded—presentations as video files and the student interaction as text files. Students were sent an email questionnaire of mostly open questions seeking their views of this form of learning. Responses were collated and anonymized by a colleague who was not part of the teaching team.
Results
Sessions were generally very interactive, with most students participating actively in breakout or full-class discussions. In a typical 2.5-hour session, students posted about 50 messages each. Two students did not complete all sessions; one withdrew from the pressure of work after session 6, and one from illness after session 7. Fourteen of the 16 responded to the feedback questionnaire. Most students (12/14) found the module useful or very useful, and all would recommend the module to others. All liked the method of delivery, in particular the interactivity, the variety of students, and the “closeness” of the group. Most (11/14) felt “connected” with the other students on the course. Many students (11/14) had previous experience with asynchronous e-learning, two as teachers; 12/14 students suggested advantages of synchronous methods, mostly associated with the interaction and feedback from teachers and peers.
Conclusions
This model of synchronous e-learning based on interactive live webcasting was a successful method of delivering an international postgraduate module. Students found it engaging over a 10-week course. Although this is a small study, given that synchronous methods such as interactive webcasting are a much easier transition for lecturers used to face-to-face teaching than are asynchronous methods, they should be considered as part of the blend of e-learning methods. Further research and development is needed on interfaces and methods that are robust and accessible, on the most appropriate blend of synchronous and asynchronous work for different student groups, and on learning outcomes and effectiveness.
doi:10.2196/jmir.1225
PMCID: PMC2802565  PMID: 19914901
Webcasting; synchronous e-learning; eHealth
20.  Assessment of scientific thinking in basic science in the Iranian second national Olympiad 
BMC Research Notes  2012;5:61.
Background
To evaluate the scientific reasoning in basic science among undergraduate medical students, we established the National Medical Science Olympiad in Iran. In this Olympiad, the drawing of a concept map was used to evaluate a student's knowledge framework; students' ability in hypothesis generation and testing was also evaluated in four different steps. All medical students were invited to participate in this program. Finally, 133 undergraduate medical students with average grades ≥ 16/20 from 45 different medical schools in Iran were selected. The program took the form of four exams: drawing a concept map (Exam I), hypothesis generation (Exam II), choosing variables based on the hypothesis (Exam III), and measuring scientific thought (Exam IV). The examinees were asked to complete all examination items in their own time without using textbooks, websites, or personal consultations. Data were presented as mean ± SE of each parameter. The correlation of students' scores in each exam with the total final score and the average grade was calculated using Spearman's test.
Results
Out of a possible score of 200, the mean ± SE scores for each exam were as follows: 183.88 ± 5.590 for Exam I; 78.68 ± 9.168 for Exam II; 92.04 ± 2.503 for Exam III; 106.13 ± 2.345 for Exam IV. The correlation of each exam score with the total final score was calculated, and there was a significant correlation between them (p < 0.001). The scatter plot of the data showed a linear correlation between the score for each exam and the total final score. This meant that students with a higher final score were able to perform better in each exam through having drawn up a meaningful concept map.
The average grade was significantly correlated with the total final score (R = 0.770), (p < 0.001). There was also a significant correlation between each exam score and the average grade (p < 0.001). The highest correlation was observed between Exam I (R = 0.7708) and the average grade. This means students with higher average grades had better grades in each exam, especially in drawing the concept map.
Conclusions
We hope that this competition will encourage medical schools to integrate theory and practice, analyze data, and read research articles. Our findings relate to a selected population, and our data may not be applicable to all medical students. Therefore, further studies are required to validate our results.
doi:10.1186/1756-0500-5-61
PMCID: PMC3292993  PMID: 22270104
21.  Collaborative Testing Improves Performance but Not Content Retention in a Large-Enrollment Introductory Biology Class 
CBE Life Sciences Education  2012;11(4):392-401.
Collaborative testing has been shown to improve performance but not always content retention. In this study, we investigated whether collaborative testing could improve both performance and content retention in a large, introductory biology course. Students were semirandomly divided into two groups based on their performances on exam 1. Each group contained equal numbers of students scoring in each grade category (“A”–“F”) on exam 1. All students completed each of the four exams of the semester as individuals. For exam 2, one group took the exam a second time in small groups immediately following the individually administered test. The other group followed this same format for exam 3. Individual and group exam scores were compared to determine differences in performance. All but exam 1 contained a subset of cumulative questions from the previous exam. Performances on the cumulative questions for exams 3 and 4 were compared for the two groups to determine whether there were significant differences in content retention. Even though group test scores were significantly higher than individual test scores, students who participated in collaborative testing performed no differently on cumulative questions than students who took the previous exam as individuals.
doi:10.1187/cbe.12-04-0048
PMCID: PMC3516795  PMID: 23222835
22.  Writing Assignments with a Metacognitive Component Enhance Learning in a Large Introductory Biology Course 
CBE Life Sciences Education  2014;13(2):311-321.
Students score higher on postexam assessments of topics learned via peer-reviewed writing, or when they correct exam questions they initially answered incorrectly, than their nonparticipating peers do.
Writing assignments, including note taking and written recall, should enhance retention of knowledge, whereas analytical writing tasks with metacognitive aspects should enhance higher-order thinking. In this study, we assessed how certain writing-intensive “interventions,” such as written exam corrections and peer-reviewed writing assignments using Calibrated Peer Review and including a metacognitive component, improve student learning. We designed and tested the possible benefits of these approaches using control and experimental variables across and between our three-section introductory biology course. Based on assessment, students who corrected exam questions showed significant improvement on postexam assessment compared with their nonparticipating peers. Differences were also observed between students participating in written and discussion-based exercises. Students with low ACT scores benefited equally from written and discussion-based exam corrections, whereas students with midrange to high ACT scores benefited more from written than discussion-based exam corrections. Students scored higher on topics learned via peer-reviewed writing assignments relative to learning in an active classroom discussion or traditional lecture. However, students with low ACT scores (17–23) did not show the same benefit from peer-reviewed written essays as the other students. These changes offer significant student learning benefits with minimal additional effort by the instructors.
doi:10.1187/cbe.13-05-0097
PMCID: PMC4041507
23.  A Multiyear Analysis of Team-Based Learning in a Pharmacotherapeutics Course 
Objectives. To evaluate the impact of team-based learning (TBL) in a pharmacotherapeutics course on pharmacy students’ ratings of faculty instructors and the course, and to assess students’ performance after implementation of team-taught TBL.
Design. Teaching methodology in a pharmacotherapeutics course was changed from a lecture with recitation approach in 2 semesters of a 6 credit-hour course to a TBL framework in a 3-semester 3+4+5 credit hour course. The distribution of faculty of instruction was changed from 4 faculty members per week to 1 faculty per 1-credit-hour module. TBL consisted of preclass study preparation, readiness assurance (Individual Readiness Assessment Test and Group Readiness Assessment Test), and in-class application exercises requiring simultaneous team responses.
Assessment. Retrospective analysis of student ratings of faculty and instructional methods was conducted for the 2 years pre-TBL and 4 years during TBL. Final course grades were evaluated during the same time period. Student ratings showed progressive improvements over 4 years after the introduction of team-based learning. When aggregated, ratings in the “excellent teacher” category were unchanged with TBL compared to pre-TBL. Improvements in faculty instructor approaches to teaching were noted during TBL. Group grades were consistently higher than individual grades, and aggregate course grades were similar to those prior to TBL implementation.
Conclusion. Implementation of TBL in a pharmacotherapeutics course series demonstrated the value of team performance over individual performance, indicated positive student perceptions of teaching approaches by course faculty, and resulted in comparable student performance in final course grades compared to the previous teaching method.
doi:10.5688/ajpe787142
PMCID: PMC4174384  PMID: 25258447
team-based learning; pharmacotherapeutics; student evaluations; faculty performance; student performance
24.  Development and evaluation of a Health Record Online Submission Tool (HOST) 
Medical Education Online  2010;15:10.3402/meo.v15i0.5350.
Introduction
Health records (HRs) are crucial to quality patient care. The Michigan State University College of Human Medicine begins teaching health record (HR) writing during the second-year clinical skills courses. Prior to this project, we used a cumbersome paper system to allow graduate assistants to grade and give feedback on students' HRs. This study discusses the development and evaluates the effectiveness of the new Health Record Online Submission Tool (HOST).
Methods
We developed an electronic submission system with the goals of decreasing the logistical demands of the paper-based system; improving the effectiveness, consistency, and oversight of HR instruction and evaluation; expanding the number of students who could serve as written record graduate assistants (WRGAs); and beginning to prepare students for the use of electronic health records (EHRs). We developed the initial web-based system in 2003 and upgraded it to its present form, HOST, in 2007. We evaluated the system using course evaluations, surveys of WRGAs and clinical students, and queries of course faculty and staff.
Results
Course evaluation by 1,106 students during years 2001 through 2008 revealed that the students' self-assessment of ability to write HRs improved briefly with the introduction of HOST but then returned to baseline. The initial change to electronic submission was well received, though with continued use its rating dropped. A survey of 65 clinical students (response rate 61.3%) indicated that HOST did not completely prepare them for EHRs. The WRGAs (n=14; response rate 58%) found the system easy to use to give feedback to students. Faculty (n=3) and staff (n=2) found that it saved time and made the review of students' HRs and of WRGAs' grading simpler. Student perception of grading consistency did not improve.
Conclusions
HOST is the first published online method of in-depth HR training for preclinical students using information gathered in clinical encounters. With it we were able to maintain effective instruction, streamline course management, and significantly decrease staff time. HOST did not improve student perception of grading consistency and did not prepare students for specific EHR use. Within the context of our class size expansion and our community-based educational program, HOST bridges geography and can support future improvements in HR instruction and faculty development. Medical educators at other institutions could use a similar system to accomplish these goals.
doi:10.3402/meo.v15i0.5350
PMCID: PMC2958708  PMID: 20975928
medical education; competencies; health records; assessment; course management; evaluation; computer-aided instruction; electronic health record
25.  Daily Online Testing in Large Classes: Boosting College Performance while Reducing Achievement Gaps 
PLoS ONE  2013;8(11):e79774.
An in-class, computer-based system that included daily online testing was introduced to two large university classes. We examined subsequent improvements in academic performance and reductions in the achievement gap between lower- and upper-middle-class students. Students (N = 901) brought laptop computers to classes and took daily quizzes that provided immediate and personalized feedback. Student performance was compared with the same data for traditional classes taught previously by the same instructors (N = 935). Exam performance was approximately half a letter grade above previous semesters, based on comparisons of identical questions asked in earlier years. Students in the experimental classes performed better in other classes, both in the semester they took the course and in subsequent semesters. The new system resulted in a 50% reduction in the achievement gap as measured by grades among students of different social classes. These findings suggest that frequent consequential quizzing should be used routinely in large lecture courses to improve performance in class and in other concurrent and subsequent courses.
doi:10.1371/journal.pone.0079774
PMCID: PMC3835925  PMID: 24278176
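Entry 25 summarizes its equity finding as a "50% reduction in the achievement gap." A toy illustration of that metric, using invented mean grades rather than the study's data:

    import numpy as np

    def gap(upper_grades, lower_grades):
        return np.mean(upper_grades) - np.mean(lower_grades)

    # Hypothetical mean course grades (4-point scale) before and after daily quizzing.
    gap_before = gap([3.4, 3.2, 3.6, 3.0], [2.7, 2.5, 2.9, 2.7])   # 3.3 - 2.7 = 0.6
    gap_after = gap([3.5, 3.3, 3.6, 3.2], [3.1, 2.9, 3.3, 3.1])    # 3.4 - 3.1 = 0.3

    print(f"gap before = {gap_before:.2f}, after = {gap_after:.2f}, "
          f"reduction = {1 - gap_after / gap_before:.0%}")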
