
1.  Increased Course Structure Improves Performance in Introductory Biology 
CBE Life Sciences Education  2011;10(2):175-186.
We tested the hypothesis that highly structured course designs, which implement reading quizzes and/or extensive in-class active-learning activities and weekly practice exams, can lower failure rates in an introductory biology course for majors, compared with low-structure course designs that are based on lecturing and a few high-risk assessments. We controlled for 1) instructor effects by analyzing data from quarters when the same instructor taught the course, 2) exam equivalence with new assessments called the Weighted Bloom's Index and Predicted Exam Score, and 3) student equivalence using a regression-based Predicted Grade. We also tested the hypothesis that points from reading quizzes, clicker questions, and other “practice” assessments in highly structured courses inflate grades and confound comparisons with low-structure course designs. We found no evidence that points from active-learning exercises inflate grades or reduce the impact of exams on final grades. When we controlled for variation in student ability, failure rates were lower in a moderately structured course design and were dramatically lower in a highly structured course design. This result supports the hypothesis that active-learning exercises can make students more skilled learners and help bridge the gap between poorly prepared students and their better-prepared peers.
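The abstract names two of the exam-equivalence metrics (the Weighted Bloom's Index and a regression-based Predicted Grade) without defining them; the short Python sketch below illustrates one plausible reading of a points-weighted Bloom's index, purely as an assumption for orientation, not the authors' published formula.
    # Hypothetical points-weighted Bloom's index for one exam. The abstract names
    # the metric but does not define it; weighting each question's Bloom rank (1-6)
    # by its point value is an assumption made here for illustration only.
    def weighted_blooms_index(questions):
        """questions: list of (bloom_rank, points) tuples, bloom_rank in 1..6."""
        total_points = sum(points for _, points in questions)
        return sum(rank * points for rank, points in questions) / total_points

    midterm = [(1, 2), (2, 4), (3, 6), (4, 6), (5, 2)]  # invented exam blueprint
    print(f"Weighted Bloom's index: {weighted_blooms_index(midterm):.2f}")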
PMCID: PMC3105924  PMID: 21633066
2.  Prescribed Active Learning Increases Performance in Introductory Biology 
CBE Life Sciences Education  2007;6(2):132-139.
We tested five course designs that varied in the structure of daily and weekly active-learning exercises in an attempt to lower the traditionally high failure rate in a gateway course for biology majors. Students were given daily multiple-choice questions and answered with electronic response devices (clickers) or cards. Card responses were ungraded; clicker responses were graded for right/wrong answers or participation. Weekly practice exams were done as an individual or as part of a study group. Compared with previous versions of the same course taught by the same instructor, students in the new course designs performed better: There were significantly lower failure rates, higher total exam points, and higher scores on an identical midterm. Attendance was higher in the clicker versus cards section; attendance and course grade were positively correlated. Students did better on clicker questions if they were graded for right/wrong answers versus participation, although this improvement did not translate into increased scores on exams. In this course, achievement increases when students get regular practice via prescribed (graded) active-learning exercises.
PMCID: PMC1885904  PMID: 17548875
3.  Writing to Learn: An Evaluation of the Calibrated Peer Review™ Program in Two Neuroscience Courses 
Although the majority of scientific information is communicated in written form, and peer review is the primary process by which it is validated, undergraduate students may receive little direct training in science writing or peer review. Here, I describe the use of Calibrated Peer Review™ (CPR), a free, web-based writing and peer review program designed to alleviate instructor workload, in two undergraduate neuroscience courses: an upper-level sensation and perception course (41 students, three assignments) and an introductory neuroscience course (50 students, two assignments). Using CPR online, students reviewed primary research articles on assigned ‘hot’ topics, wrote short essays in response to specific guiding questions, reviewed standard ‘calibration’ essays, and provided anonymous quantitative and qualitative peer reviews. An automated grading system calculated the final scores based on a student’s essay quality (as determined by the average of three peer reviews) and his or her accuracy in evaluating 1) three standard calibration essays, 2) three anonymous peer reviews, and 3) his or her self-review. Thus, students were assessed not only on their skill at constructing logical, evidence-based arguments, but also on their ability to accurately evaluate their peers’ writing. According to both student self-reports and instructor observation, students’ writing and peer review skills improved over the course of the semester. Student evaluation of the CPR program was mixed; while some students felt that the peer review process enhanced their understanding of the material and improved their writing, others felt as though the process was biased and required too much time. Despite student critiques of the program, I still recommend the CPR program as an excellent and free resource for incorporating more writing, peer review, and critical thinking into an undergraduate neuroscience curriculum.
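The automated scoring described above combines essay quality (the mean of three peer ratings) with reviewing accuracy on calibration essays, peer reviews, and the self-review, but the abstract does not give CPR's actual weights; the sketch below is a hypothetical composition with invented weights, for illustration only.
    # Hypothetical CPR-style final score: essay quality is the mean of three peer
    # ratings; reviewing accuracy covers the calibration essays, the peer reviews,
    # and the self-review. The 70/30 split and equal sub-weights are invented; the
    # abstract does not state the program's actual formula.
    def cpr_score(peer_ratings, calibration_acc, peer_review_acc, self_review_acc):
        essay_quality = sum(peer_ratings) / len(peer_ratings)                      # 0-10 scale
        reviewing_acc = (calibration_acc + peer_review_acc + self_review_acc) / 3  # 0-1 scale
        return 0.7 * essay_quality + 0.3 * (10 * reviewing_acc)

    print(cpr_score(peer_ratings=[8, 7, 9],
                    calibration_acc=0.9, peer_review_acc=0.8, self_review_acc=1.0))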
PMCID: PMC3592621  PMID: 23493247
peer review; writing to learn; web-based learning; learning technology; Calibrated Peer Review
4.  Relation between contemplative exercises and an enriched psychology students' experience in a neuroscience course 
Frontiers in Psychology  2014;5:1296.
This article examines the relation of contemplative exercises with enhancement of students' experience during neuroscience studies. Short contemplative exercises inspired by the Buddhist tradition of self-inquiry were introduced in an undergraduate neuroscience course for psychology students. At the start of the class, all students were asked to participate in short “personal brain investigations” relevant to the topic presented. These investigations were aimed at bringing stable awareness to a specific perceptual, emotional, attentional, or cognitive process and observing it in a non-judgmental, non-personal way. In addition, students could choose to participate, for bonus credit, in a longer exercise designed to expand upon the weekly class activity. In the exercise, students continued their “personal brain investigations” for 10 min a day, 4 days a week. They wrote “lab reports” on their daily observations, obtained feedback from the teacher, and at the end of the year reviewed their reports and reflected upon their experiences during the semester. Out of 265 students, 102 students completed the bonus track and their final reflections were analyzed using qualitative methodology. In addition, 91 of the students answered a survey at the end of the course, 43 students participated in a quiz 1 year after course graduation, and the final grades of all students were collected and analyzed. Overall, students reported satisfaction from the exercises and felt they contributed to their learning experience. In the 1-year follow-up, the bonus-track students were significantly more likely than their peers to remember class material. The qualitative analysis of bonus-track students' reports revealed that the bonus-track process elicited positive feelings, helped students connect with class material and provided them with personal insights. In addition, students acquired contemplative skills, such as increased awareness and attention, non-judgmental attitudes, and better stress-management abilities. We provide examples of “personal brain investigations” and discuss limitations of introducing a contemplative approach.
PMCID: PMC4235268  PMID: 25477833
contemplative pedagogy; contemplative neuroscience; pedagogical psychology; pedagogical neuroscience
5.  How Accurate Is Peer Grading? 
CBE Life Sciences Education  2010;9(4):482-488.
Previously we showed that weekly, written, timed, and peer-graded practice exams help increase student performance on written exams and decrease failure rates in an introductory biology course. Here we analyze the accuracy of peer grading, based on a comparison of student scores to those assigned by a professional grader. When students graded practice exams by themselves, they were significantly easier graders than a professional; overall, students awarded ≈25% more points than the professional did. This difference represented ≈1.33 points on a 10-point exercise, or 0.27 points on each of the five 2-point questions posed. When students graded practice exams as a group of four, the same student-expert difference occurred. The student-professional gap was wider for questions that demanded higher-order versus lower-order cognitive skills. Thus, students not only have a harder time answering questions on the upper levels of Bloom's taxonomy, they have a harder time grading them. Our results suggest that peer grading may be accurate enough for low-risk assessments in introductory biology. Peer grading can help relieve the burden on instructional staff posed by grading written answers—making it possible to add practice opportunities that increase student performance on actual exams.
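The reported gap can be checked from the figures in the abstract; the back-of-the-envelope sketch below reproduces the per-question and percentage values (the professional's mean score is back-calculated from the ~25% figure and is therefore approximate).
    # Back-of-the-envelope check of the reported grading gap. The professional's
    # mean score is back-calculated from the ~25% figure, so it is approximate.
    questions_per_exam = 5            # five 2-point questions, 10 points total
    gap_per_exam = 1.33               # extra points awarded by peer graders
    professional_mean = gap_per_exam / 0.25              # ~5.3 of 10 points
    print(round(gap_per_exam / questions_per_exam, 2))   # ~0.27 points per question
    print(round(gap_per_exam / professional_mean, 2))    # ~0.25, i.e. ~25% more points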
PMCID: PMC2995766  PMID: 21123695
6.  What is more effective: a daily or a weekly formative test? 
Exams in anatomy courses are traditionally summative. Formative testing induces retrieval practice, provides feedback and enhances learning results. We investigated the optimal frequency for retrieval practice during an anatomy course.
During a first-year course, students were offered four daily online quizzes per week that assessed thoracic anatomy. Once a week, they received a quiz about abdominal anatomy. Students received immediate feedback afterwards. In the fourth course week, a survey about participation and satisfaction was administered. A total of 424 students participated in the final summative exam. Trunk wall questions were used as a control. The relationship between participation and test results was investigated with a one-way ANOVA.
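A minimal sketch of the one-way ANOVA described above, relating quiz participation to summative exam scores, might look like the following; the participation bands and scores are invented.
    # Minimal one-way ANOVA sketch: summative exam scores grouped by formative-quiz
    # participation band. The bands and scores below are invented.
    from scipy import stats

    low_participation = [55, 60, 58, 62, 57]
    medium_participation = [65, 70, 68, 66, 72]
    high_participation = [75, 78, 80, 74, 79]

    f_stat, p_value = stats.f_oneway(low_participation,
                                     medium_participation,
                                     high_participation)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")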
More frequent participation in formative tests was correlated with higher scores in the summative exam, with no difference between daily and weekly quizzes. This effect was found for thorax-abdomen and ‘control’ trunk wall questions. Participation in weekly quizzes was higher (p < 0.001). All survey responses showed a significant difference in favour of the weekly quiz (p < 0.001).
Discussion and conclusion
Participation in formative quizzes was correlated with summative exam scores. This correlation was not specific to the material tested, probably because of student diligence. Student participation and preference were much higher for weekly quizzes.
PMCID: PMC4404460  PMID: 25822124
Formative test; Student participation; Student satisfaction
7.  Replacing Lecture with Peer-led Workshops Improves Student Learning 
CBE Life Sciences Education  2009;8(3):182-192.
Peer-facilitated workshops enhanced interactivity in our introductory biology course, which led to increased student engagement and learning. A majority of students preferred attending two lectures and a workshop each week over attending three weekly lectures. In the workshops, students worked in small cooperative groups as they solved challenging problems, evaluated case studies, and participated in activities designed to improve their general learning skills. Students in the workshop version of the course scored higher on exam questions recycled from preworkshop semesters. Grades were higher over three workshop semesters in comparison with the seven preworkshop semesters. Although males and females benefited from workshops, there was a larger improvement of grades and increased retention by female students; although underrepresented minority (URM) and non-URM students benefited from workshops, there was a larger improvement of grades by URM students. As well as improving student performance and retention, the addition of interactive workshops also improved the quality of student learning: Student scores on exam questions that required higher-level thinking increased from preworkshop to workshop semesters.
PMCID: PMC2736022  PMID: 19723813
8.  Medical Students' Exposure to and Attitudes about the Pharmaceutical Industry: A Systematic Review 
PLoS Medicine  2011;8(5):e1001037.
A systematic review of published studies reveals that undergraduate medical students may experience substantial exposure to pharmaceutical marketing, and that this contact may be associated with positive attitudes about marketing.
The relationship between health professionals and the pharmaceutical industry has become a source of controversy. Physicians' attitudes towards the industry can form early in their careers, but little is known about this key stage of development.
Methods and Findings
We performed a systematic review reported according to PRISMA guidelines to determine the frequency and nature of medical students' exposure to the drug industry, as well as students' attitudes concerning pharmaceutical policy issues. We searched MEDLINE, EMBASE, Web of Science, and ERIC from the earliest available dates through May 2010, as well as bibliographies of selected studies. We sought original studies that reported quantitative or qualitative data about medical students' exposure to pharmaceutical marketing, their attitudes about marketing practices, relationships with industry, and related pharmaceutical policy issues. Studies were separated, where possible, into those that addressed preclinical versus clinical training, and were quality-rated using a standard methodology. Thirty-two studies met inclusion criteria. We found that 40%–100% of medical students reported interacting with the pharmaceutical industry. A substantial proportion of students (13%–69%) were reported as believing that gifts from industry influence prescribing. Eight studies reported a correlation between frequency of contact and favorable attitudes toward industry interactions. Students were more approving of gifts to physicians or medical students than to government officials. Certain attitudes appeared to change during medical school, though a formal time-trend analysis was not performed; for example, clinical students (53%–71%) were more likely than preclinical students (29%–62%) to report that promotional information helps educate about new drugs.
Undergraduate medical education provides substantial contact with pharmaceutical marketing, and the extent of such contact is associated with positive attitudes about marketing and skepticism about negative implications of these interactions. These results support future research into the association between exposure and attitudes, as well as any modifiable factors that contribute to attitudinal changes during medical education.
Editors' Summary
The complex relationship between health professionals and the pharmaceutical industry has long been a subject of discussion among physicians and policymakers. There is a growing body of evidence that suggests that physicians' interactions with pharmaceutical sales representatives may influence clinical decision making in a way that is not always in the best interests of individual patients, for example, encouraging the use of expensive treatments that have no therapeutic advantage over less costly alternatives. The pharmaceutical industry often uses physician education as a marketing tool, as in the case of Continuing Medical Education courses that are designed to drive prescribing practices.
One reason that physicians may be particularly susceptible to pharmaceutical industry marketing messages is that doctors' attitudes towards the pharmaceutical industry may form early in their careers. The socialization effect of professional schooling is strong, and plays a lasting role in shaping views and behaviors.
Why Was This Study Done?
Recently, particularly in the US, some medical schools have limited students' and faculties' contact with industry, but some have argued that these restrictions are detrimental to students' education. Given the controversy over the pharmaceutical industry's role in undergraduate medical training, consolidating current knowledge in this area may be useful for setting priorities for changes to educational practices. In this study, the researchers systematically examined studies of pharmaceutical industry interactions with medical students and whether such interactions influenced students' views on related topics.
What Did the Researchers Do and Find?
The researchers did a comprehensive literature search using appropriate search terms for all relevant quantitative and qualitative studies published before June 2010. Using strict inclusion criteria, the researchers then selected 48 articles (from 1,603 abstracts) for full review and identified 32 eligible for analysis—giving a total of approximately 9,850 medical students studying at 76 medical schools or hospitals.
Most students had some form of interaction with the pharmaceutical industry but contact increased in the clinical years, with up to 90% of all clinical students receiving some form of educational material. The highest level of exposure occurred in the US. In most studies, the majority of students in their clinical training years found it ethically permissible for medical students to accept gifts from drug manufacturers, while a smaller percentage of preclinical students reported such attitudes. Students justified their entitlement to gifts by citing financial hardship or by asserting that most other students accepted gifts. In addition, although most students believed that education from industry sources is biased, students variably reported that information obtained from industry sources was useful and a valuable part of their education.
Almost two-thirds of students reported that they were immune to bias induced by promotion, gifts, or interactions with sales representatives but also reported that fellow medical students or doctors are influenced by such encounters. Eight studies reported a relationship between exposure to the pharmaceutical industry and positive attitudes about industry interactions and marketing strategies (although not all included supportive statistical data). Finally, student opinions were split on whether physician–industry interactions should be regulated by medical schools or the government.
What Do These Findings Mean?
This analysis shows that students are frequently exposed to pharmaceutical marketing, even in the preclinical years, and that the extent of students' contact with industry is generally associated with positive attitudes about marketing and skepticism towards any negative implications of interactions with industry. Therefore, strategies to educate students about interactions with the pharmaceutical industry should directly address widely held misconceptions about the effects of marketing and other biases that can emerge from industry interactions. But education alone may be insufficient. Institutional policies, such as rules regulating industry interactions, can play an important role in shaping students' attitudes, and interventions that decrease students' contact with industry and eliminate gifts may have a positive effect on building the skills that evidence-based medical practice requires. These changes can help cultivate strong professional values and instill in students a respect for scientific principles and critical evidence review that will later inform clinical decision-making and prescribing practices.
Additional Information
Further information about the influence of the pharmaceutical industry on doctors and medical students can be found at the American Medical Students Association PharmFree campaign and PharmFree Scorecard, Medsin-UK's PharmAware campaign, the nonprofit organization Healthy Skepticism, and the Web site of No Free Lunch.
PMCID: PMC3101205  PMID: 21629685
9.  Pleasure and Pain: Teaching Neuroscientific Principles of Hedonism in a Large General Education Undergraduate Course 
In a large (250 registrants) general education lecture course, neuroscience principles were taught by two professors as co-instructors, starting with simple brain anatomy, chemistry, and function, proceeding to basic brain circuits of pleasure and pain, and progressing with fellow expert professors covering relevant philosophical, artistic, marketing, and anthropological issues. With this as a base, the course wove between fields of high relevance to psychology and neuroscience, such as food addiction and preferences, drug seeking and craving, analgesic pain-inhibitory systems activated by opiates and stress, neuroeconomics, unconscious decision-making, empathy, and modern neuroscientific techniques (functional magnetic resonance imaging and event-related potentials) presented by the co-instructors and other Psychology professors. With no formal assigned textbook, all lectures were PowerPoint-based, containing links to supplemental public-domain material. PowerPoints were available on Blackboard several days before the lecture. All lectures were also video-recorded and posted that evening. The course had a Facebook page for after-class conversation, and one of the co-instructors communicated directly with students on Twitter in real time during lecture to provide momentary clarification and comment. To allow for small-group discussion, in addition to graduate student Teaching Assistants (TAs), ten undergraduate students who performed well in a previous class were selected to serve as Discussion Leaders. The Discussion Leaders met four times at strategic points over the semester with groups of 20–25 current students, and received one credit of Independent Study, thus creating a course within a course. The course grade was based on weighted scores from two multiple-choice exams and a five-page writing assignment in which each student reviewed three unique but brief original peer-reviewed research articles (one page each) combined with expository writing on the first and last pages. A draft of the first page, collected early in the term, was returned to each student by graduate TAs to provide individual feedback on scientific writing. Overall, the course has run three times at full or near-full enrollment capacity despite being held in an 8:00 AM time slot. Student-generated teaching evaluations place it well within the normal range, while this format importantly contributes to budget efficiency, permitting the teaching of more required small-format courses (e.g., freshman writing). The demographics of the course have changed such that the vast majority of the students are now outside the disciplines of neuroscience or psychology and are taking the course to fulfill a General Education requirement. This pattern allows the wide dissemination of basic neuroscientific knowledge to a general college audience.
PMCID: PMC3852868  PMID: 24319388
Reward; addiction; relapse; craving; body weight regulation; analgesia; empathy; behavioral economics; fMRI
10.  Facilitating Improvements in Laboratory Report Writing Skills with Less Grading: A Laboratory Report Peer-Review Process† 
Incorporating peer-review steps in the laboratory report writing process provides benefits to students, but it also can create additional work for laboratory instructors. The laboratory report writing process described here allows the instructor to grade only one lab report for every two to four students, while giving the students the benefits of peer review and prompt feedback on their laboratory reports. Here we present the application of this process to a sophomore level genetics course and a freshman level cellular biology course, including information regarding class time spent on student preparation activities, instructor preparation, prerequisite student knowledge, suggested learning outcomes, procedure, materials, student instructions, faculty instructions, assessment tools, and sample data. T-tests comparing individual and group grading of the introductory cell biology lab reports yielded average scores that were not significantly different from each other (p = 0.13, n = 23 for individual grading, n = 6 for group grading). T-tests also demonstrated that average laboratory report grades of students using the peer-review process were not significantly different from those of students working alone (p = 0.98, n = 9 for individual grading, n = 6 for pair grading). While the grading process described here does not lead to statistically significant gains (or reductions) in student learning, it allows student learning to be maintained while decreasing instructor workload. This reduction in workload could allow the instructor time to pursue other high-impact practices that have been shown to increase student learning. Finally, we suggest possible modifications to the procedure for application in a variety of settings.
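A minimal sketch of the kind of independent-samples t-test reported above (individual versus group grading of lab reports) is shown below; the grades are invented and the comparison is illustrative only.
    # Sketch of the kind of independent-samples t-test reported above, comparing
    # lab-report grades under individual vs. group grading (grades are invented).
    from scipy import stats

    individual_grading = [82, 78, 85, 90, 76, 88, 81, 79, 84]
    group_grading = [80, 86, 83, 79, 85, 82]

    t_stat, p_value = stats.ttest_ind(individual_grading, group_grading)
    print(f"t = {t_stat:.2f}, p = {p_value:.2f}")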
PMCID: PMC4416507  PMID: 25949758
11.  Feedback on students' clinical reasoning skills during fieldwork education 
Feedback on clinical reasoning skills during fieldwork education is regarded as vital in occupational therapy students' professional development. The nature of supervisors' feedback, however, could be confirmative and/or corrective, and corrective feedback could be given with or without suggestions on how to improve. The aim of the study was to evaluate the impact of supervisors' feedback on final-year occupational therapy students' clinical reasoning skills by comparing the nature of the feedback with the students' subsequent clinical reasoning ability.
A mixed-methods approach with a convergent parallel design was used, combining the collection and analysis of qualitative and quantitative data. Data from focus groups and interviews with students were collected and analysed qualitatively to determine how the students experienced the feedback they received from their supervisors. By quantitatively comparing the final practical exam grades with the nature of the feedback, the fieldwork End-of-Term grades, and average academic performance, it became possible to merge the results for comparison and interpretation.
Students' clinical reasoning skills seem to be improved through corrective feedback if accompanied by suggestions on how to improve, irrespective of their average academic performance. Supervisors were inclined to underrate high performing students and overrate lower performing students.
Students who obtained higher grades in the final practical examinations received more corrective feedback with suggestions on how to improve from their supervisors. Confirmative feedback alone may not be sufficient for improving the clinical reasoning skills of students.
PMCID: PMC4584508  PMID: 26256854
feedback; fieldwork education; mixed methodology research; physical dysfunction
12.  A Technology-Enhanced Patient Case Workshop 
To assess the impact of technology-based changes on student learning, skill development, and satisfaction in a patient-case workshop.
A new workshop format for a course was adopted over a 3-year period. Students received and completed patient cases and obtained immediate performance feedback in class instead of preparing the case prior to class and waiting for instructors to grade and return their cases. The cases were designed and accessed via an online course management system.
Student satisfaction was measured using end-of-course surveys. The impact of the technology-based changes on student learning, problem-solving, and critical-thinking skills was measured and compared between the 2 different course formats by assessing changes in examination responses. Three advantages to the new format were reported: real-life format in terms of time constraint for responses, a team learning environment, and expedient grading and feedback. Students overwhelmingly agreed that the new format should be continued. Students' examination scores improved significantly under the new format.
The change in delivery of patient-case workshops to an online, real-time system was well accepted and resulted in enhanced learning, critical thinking, and problem-solving skills.
PMCID: PMC2739069  PMID: 19777101
course design; Internet access; online course management system; laptop computers; assessment
13.  Advanced Cardiac Resuscitation Evaluation (ACRE): A randomised single-blind controlled trial of peer-led vs. expert-led advanced resuscitation training 
Advanced resuscitation skills training is an important and enjoyable part of medical training, but requires small group instruction to ensure active participation of all students. Increases in student numbers have made this increasingly difficult to achieve.
A single-blind randomised controlled trial of peer-led vs. expert-led resuscitation training was performed using a group of sixth-year medical students as peer instructors. The expert instructors were a senior and a middle grade doctor, and a nurse who is an Advanced Life Support (ALS) Instructor.
A power calculation showed that the trial would have a greater than 90% chance of rejecting the null hypothesis (that expert-led groups performed 20% better than peer-led groups) if that were the true situation. Secondary outcome measures were the proportion of High Pass grades in each group and safety incidents.
The peer instructors designed and delivered their own course material. To ensure safety, the peer-led groups used modified defibrillators that could deliver only low-energy shocks.
Blinded assessment was conducted using an Objective Structured Clinical Examination (OSCE). The checklist items were based on International Liaison Committee on Resuscitation (ILCOR) guidelines using Ebel standard-setting methods that emphasised patient and staff safety and clinical effectiveness.
The results were analysed using Exact methods, chi-squared and t-test.
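The abstract describes a power calculation against the null hypothesis that expert-led groups perform 20% better, but not the method used; the sketch below shows one conventional way to run such a calculation on pass proportions with statsmodels, as an assumption rather than a reconstruction of the authors' analysis.
    # Hypothetical power check for a 20-percentage-point difference in pass rates
    # between the expert-led (n=58) and peer-led (n=74) groups. This is one
    # conventional approach; the abstract does not specify the authors' method,
    # and the 97% vs 77% proportions are illustrative.
    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    effect_size = proportion_effectsize(0.97, 0.77)
    power = NormalIndPower().solve_power(effect_size=effect_size,
                                         nobs1=58, ratio=74 / 58, alpha=0.05)
    print(f"Power to detect a 20-point gap: {power:.2f}")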
A total of 132 students were randomised: 58 into the expert-led group, 74 into the peer-led group. 57/58 (98%) of students from the expert-led group achieved a Pass compared to 72/74 (97%) from the peer-led group: Exact statistics confirmed that it was very unlikely (p = 0.0001) that the expert-led group was 20% better than the peer-led group.
There were no safety incidents, and High Pass grades were achieved by 64 (49%) of students: 33/58 (57%) from the expert-led group, 31/74 (42%) from the peer-led group. Exact statistics showed that the difference of 15% meant that it was possible that the expert-led teaching was 20% better at generating students with High Passes.
The key elements of advanced cardiac resuscitation can be safely and effectively taught to medical students in small groups by peer-instructors who have undergone basic medical education training.
PMCID: PMC2818633  PMID: 20074353
14.  An electronic portfolio for quantitative assessment of surgical skills in undergraduate medical education 
BMC Medical Education  2013;13:65.
We evaluated a newly designed electronic portfolio (e-Portfolio) that provided quantitative evaluation of surgical skills. Medical students at the University of Seville used the e-Portfolio on a voluntary basis for evaluation of their performance in undergraduate surgical subjects.
Our new web-based e-Portfolio was designed to evaluate surgical practical knowledge and skills targets. Students recorded each activity on a form, attached evidence, and added their reflections. Students self-assessed their practical knowledge using qualitative criteria (yes/no), and graded their skills according to complexity (basic/advanced) and participation (observer/assistant/independent). A numerical value was assigned to each activity, and the values of all activities were summated to obtain the total score. The application automatically displayed quantitative feedback. We performed qualitative evaluation of the perceived usefulness of the e-Portfolio and quantitative evaluation of the targets achieved.
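The scoring scheme described above assigns a numerical value to each logged activity and sums the values into a total score; a hedged sketch of that aggregation follows, with an invented point table, since the abstract does not give the actual values.
    # Sketch of the score aggregation described above: each logged activity gets a
    # numerical value and the values are summed. The point table is invented; the
    # abstract does not give the actual values assigned per activity.
    POINTS = {
        ("basic", "observer"): 1, ("basic", "assistant"): 2, ("basic", "independent"): 3,
        ("advanced", "observer"): 2, ("advanced", "assistant"): 4, ("advanced", "independent"): 6,
    }

    def portfolio_score(activities):
        """activities: list of (complexity, participation) tuples."""
        return sum(POINTS[(complexity, participation)]
                   for complexity, participation in activities)

    logged = [("basic", "independent"), ("advanced", "assistant"), ("basic", "observer")]
    print(portfolio_score(logged))  # 3 + 4 + 1 = 8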
Thirty-seven of 112 students (33%) used the e-Portfolio, of whom 87% reported that they understood the methodology of the portfolio. All students reported an improved understanding of their learning objectives resulting from the numerical visualization of progress, all students reported that the quantitative feedback encouraged their learning, and 79% of students felt that their teachers were more available because they were using the e-Portfolio. Only 51.3% of students reported that the reflective aspects of learning were useful. Individual students achieved a maximum of 65% of the total targets and 87% of the skills targets. The mean total score was 345 ± 38 points. For basic skills, 92% of students achieved the maximum score for participation as an independent operator, and all achieved the maximum scores for participation as an observer and assistant. For complex skills, 62% of students achieved the maximum score for participation as an independent operator, and 98% achieved the maximum scores for participation as an observer or assistant.
Medical students reported that use of an electronic portfolio that provided quantitative feedback on their progress was useful when the number and complexity of targets were appropriate, but not when the portfolio offered only formative evaluations based on reflection. Students felt that use of the e-Portfolio guided their learning process by indicating knowledge gaps to themselves and teachers.
PMCID: PMC3651863  PMID: 23642100
Electronic portfolio; Surgical subjects; Self-guided learning; Self-assessment; Evaluative portfolio
15.  Using Audience Response Technology to provide formative feedback on pharmacology performance for non-medical prescribing students - a preliminary evaluation 
BMC Medical Education  2012;12:113.
The use of anonymous audience response technology (ART) to actively engage students in classroom learning has been evaluated positively across multiple settings. To date, however, there has been no empirical evaluation of the use of individualised ART handsets and formative feedback of ART scores. The present study investigates student perceptions of such a system and the relationship between formative feedback results and exam performance.
Four successive cohorts of Non-Medical Prescribing students (n=107) had access to the individualised ART system and three of these groups (n=72) completed a questionnaire about their perceptions of using ART. Semi-structured interviews were carried out with a purposive sample of seven students who achieved a range of scores on the formative feedback. Using data from all four cohorts of students, the relationship between mean ART scores and summative pharmacology exam score was examined using a non-parametric correlation.
Questionnaire and interview data suggested that the use of ART enhanced the classroom environment, motivated students, and promoted learning. Questionnaire data demonstrated that students found the formative feedback helpful for identifying their learning needs (95.6%) and guiding their independent study (86.8%), and useful as a revision tool (88.3%). Interviewees particularly valued the objectivity of the individualised feedback, which helped them to self-manage their learning. Interviewees’ initial anxiety about revealing their level of pharmacology knowledge to the lecturer and to themselves decreased over time as students focused on the learning benefits associated with the feedback.
A significant positive correlation was found between students’ formative feedback scores and their summative pharmacology exam scores (Spearman’s rho = 0.71, N=107, p<.01).
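A minimal sketch of the non-parametric correlation reported above (mean formative ART score against summative pharmacology exam score) is shown below, using invented scores.
    # Sketch of the Spearman correlation between mean formative ART scores and
    # summative pharmacology exam scores (all scores below are invented).
    from scipy import stats

    mean_art_scores = [55, 62, 70, 48, 81, 66, 74, 59]
    exam_scores = [58, 65, 72, 50, 85, 63, 78, 61]

    rho, p_value = stats.spearmanr(mean_art_scores, exam_scores)
    print(f"Spearman's rho = {rho:.2f}, p = {p_value:.3f}")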
Despite initial anxiety about the use of individualised ART units, students rated the helpfulness of the individualised handsets and personalised formative feedback highly. The significant correlation between ART response scores and student exam scores suggests that formative feedback can provide students with a useful reference point in terms of their level of exam-readiness.
PMCID: PMC3515432  PMID: 23148762
Audience response technology; Teaching; Pharmacology; Non-medical prescribing
16.  Portfolio: A Comprehensive Method of Assessment for Postgraduates in Oral and Maxillofacial Surgery 
Postgraduate learning and assessment is an important responsibility of an academic oral and maxillofacial surgeon. The current methods of assessment for postgraduate training include formative evaluation in the form of seminars, case presentations, log books, and infrequently conducted end-of-year theory exams. The end-of-course theory and practical examination is a summative evaluation that awards the degree to the student based on the grades obtained. Oral and maxillofacial surgery is mainly a skill-based specialty, and deliberate practice enhances skill. But the traditional system of assessment of postgraduates emphasizes their performance on the summative exam, which fails to evaluate the integral picture of the student throughout the course. Emphasis on competency and holistic growth of the postgraduate student during training has, in recent years, led to research and evaluation of assessment methods to quantify students’ progress during training. The portfolio method of assessment has been proposed as a potentially functional method for postgraduate evaluation. A portfolio is defined as a collection of papers and other forms of evidence that learning has taken place. It allows the collation and integration of evidence on competence and performance from different sources to gain a comprehensive picture of everyday practice. The benefits of portfolio assessment in health professions education are twofold: its potential to assess performance and its potential to assess outcomes, such as attitudes and professionalism, that are difficult to assess using traditional instruments. This paper is an endeavor toward the development of a portfolio method of assessment for postgraduate students in oral and maxillofacial surgery.
Electronic supplementary material
The online version of this article (doi:10.1007/s12663-012-0381-7) contains supplementary material, which is available to authorized users.
PMCID: PMC3589504  PMID: 24431818
Oral and maxillofacial surgery; Post graduate; Assessment; Portfolio
17.  Teaching Students How to Study: A Workshop on Information Processing and Self-Testing Helps Students Learn 
CBE Life Sciences Education  2011;10(2):187-198.
We implemented a “how to study” workshop for small groups of students (6–12 per group), with N = 93 consenting students randomly assigned from a large introductory biology class. The goal of this workshop was to teach students self-regulating techniques with visualization-based exercises as a foundation for learning and critical thinking in two areas: information processing and self-testing. During the workshop, students worked individually or in groups and received immediate feedback on their progress. Here, we describe two individual workshop exercises, report their immediate results, describe students’ reactions (based on the workshop instructors’ experience and student feedback), and report student performance on workshop-related questions on the final exam. Students rated the workshop activities highly and performed significantly better on workshop-related final exam questions than the control groups. This was the case for both lower- and higher-order thinking questions. Student achievement (i.e., grade point average) was significantly correlated with overall final exam performance but not with workshop outcomes. This long-term (10 wk) retention of a self-testing effect across question levels and student achievement is a promising endorsement for future large-scale implementation and further evaluation of this “how to study” workshop as a study support for introductory biology (and other science) students.
PMCID: PMC3105925  PMID: 21633067
18.  Operant learning principles applied to teaching introductory statistics
A grade of A was given in an introductory statistics course for meeting a set of contingencies that included no work outside of class (except by request), near-perfect performance on exams following each unit of work in a programmed text, correction of all exam errors, self-pacing of work, and the chance to finish the course early. A grade of incomplete was given otherwise. Correlations among performance measures failed to show any meaningful relationships between time taken to finish the course, errors made on exams, and errors made in the programmed text. Responses to a five-part questionnaire were overwhelmingly favorable to the course, but did not vary as a function of grade point average, time taken to finish the course, or number of errors made on exams. The uniformly high level of performance, the students' lack of interest in social contact with the instructor during class, and the absence of drop-outs are all attributed to the contingencies employed, chief among which, according to the instructor's judgment and student rankings, were self-pacing, frequent non-punitive exams, and a guaranteed grade of A for near-perfect work at every stage.
PMCID: PMC1311116  PMID: 16795257
19.  Helping Struggling Students in Introductory Biology: A Peer-Tutoring Approach That Improves Performance, Perception, and Retention 
CBE Life Sciences Education  2015;14(2):ar16.
This study examines an optional peer-tutoring program offered to students who are struggling in a large-enrollment, introductory biology course. Students who regularly attended had increased exam performance, more expert-like perceptions of biology, and increased persistence relative to their struggling peers who were not attending.
The high attrition rate among science, technology, engineering, and mathematics (STEM) majors has long been an area of concern for institutions and educational researchers. The transition from introductory to advanced courses has been identified as a particularly “leaky” point along the STEM pipeline, and students who struggle early in an introductory STEM course are particularly at risk. Peer-tutoring programs offered to all students in a course have been widely found to help STEM students during this critical transition, but hiring a sufficient number of tutors may not be an option for some institutions. As an alternative, this study examines the viability of an optional peer-tutoring program offered to students who are struggling in a large-enrollment, introductory biology course. Struggling students who regularly attended peer tutoring showed increased exam performance, more expert-like perceptions of biology, and greater course persistence relative to their struggling peers who were not attending the peer-tutoring sessions. The results of this study provide information to instructors who want to design targeted academic assistance for students who are struggling in introductory courses.
PMCID: PMC4477732  PMID: 25976652
20.  Temporal Structure of First-year Courses and Success at Course Exams: Comparison of Traditional Continual and Block Delivery of Anatomy and Chemistry Courses 
Croatian Medical Journal  2009;50(1):61-68.
To evaluate students’ academic success in the Anatomy and Chemistry courses delivered either in a traditional continual format, spread over two semesters, or in alternating course blocks, in which one group of students attended first the Anatomy and then the Chemistry course and vice versa.
We analyzed the data on exam grades for Anatomy and Chemistry courses in the first year of the curriculum for academic year 2001/02, with the traditional continual delivery of the courses (n = 253 for chemistry and n = 243 for anatomy course), and academic year 2003/04, with block delivery of the courses (n = 255 for chemistry and n = 260 for anatomy course). The content of the courses and the teachers were similar in both academic years. Grades from the final examination were analyzed only for students who sat the exam at the first available exam term and passed the course. For the Anatomy block course, grades at 2 interim written tests and 2 parts of the final exam (practical stage exam and oral exam) in each block were analyzed for students who passed all interim tests and the final exam.
There were no differences between the two types of course delivery in the number of students passing the final examination at first attempt. There was a decrease in passing percentage for the two Anatomy block course student groups in 2003/04 (56% passing students in block 1 vs 40% in block 2, P = 0.014). There were no differences in the final examination grade between the 2 blocks for either the Anatomy or the Chemistry course, but there was an increase in the average grades from the 2001/02 to the 2003/04 academic year due to an increase in Chemistry grades (F(1,399) = 18.4, P < 0.001, 2 × 2 ANOVA). Grades in Chemistry were significantly lower than grades in Anatomy when courses were delivered in a continual but not in a block format (F(1,399) = 35.1, P < 0.001, 2 × 2 ANOVA). When both courses were delivered in a block format, there was no effect of the sequence of their delivery (F(1,206) = 1.8, P = 0.182, 2 × 2 ANOVA). There was also a significant difference in grades on interim assessments of Anatomy when it was delivered in the block format (F(3,85) = 28.8, P < 0.001, between-within subjects 2 × 4 ANOVA), with grades from the practical test and oral exam being significantly lower than grades on the 2 interim tests that came at the beginning of the block (P < 0.001 for all pair-wise comparisons).
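The 2 × 2 ANOVAs reported above (academic year by course) can be illustrated with an ordinary least squares model and an ANOVA table; the sketch below uses invented grades and the statsmodels formula interface, and is not the authors' analysis.
    # Sketch of a 2 x 2 ANOVA (academic year x course) on exam grades, in the
    # spirit of the analyses above; the data frame is invented, not the study data.
    import pandas as pd
    from statsmodels.formula.api import ols
    from statsmodels.stats.anova import anova_lm

    df = pd.DataFrame({
        "grade":  [3, 4, 2, 5, 4, 3, 5, 4, 3, 2, 4, 5],
        "year":   ["2001/02"] * 6 + ["2003/04"] * 6,
        "course": ["Anatomy", "Chemistry"] * 6,
    })
    model = ols("grade ~ C(year) * C(course)", data=df).fit()
    print(anova_lm(model, typ=2))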
The type of course delivery was not associated with significant differences in student academic success in Anatomy and Chemistry courses in the medical curriculum. Students can successfully pass these courses when they are delivered either in a continual, whole year format or in a condensed time format of a course block, regardless of the number and type of courses preceding the block course.
PMCID: PMC2657569  PMID: 19260146
21.  Weekly Rotation of Facilitators to Improve Assessment of Group Participation in a Problem-based Learning Curriculum 
To determine whether implementing a rotating facilitator structure provides a reliable method of assessing group participation and assigning grades to third-professional year pharmacy students in a problem-based learning curriculum.
In the 2004-2005 school year, a “one block, one facilitator” structure was replaced by a “weekly rotating facilitator” structure. Each student received a grade from the assigned facilitator each week. The 8 weekly grades were then averaged for a final course grade. Student grades were reviewed weekly and at the end of each block. Facilitators and students completed survey instruments at the end of each of four 8-week blocks.
Student grades were reviewed, and the class average was compared to the class averages from the 2 previous years. For example, in block I the class average was 86, compared with averages of 88 and 87 for 2002-03 and 2003-04, respectively. Survey data revealed that 40% of facilitators in block I agreed that student performance had improved compared to student performance prior to this change. This agreement increased to 71%, 72%, and 71%, respectively, for blocks II-IV. Student survey data at the end of the academic year supported weekly facilitator rotation and revealed that a majority of students agreed that exposure to a variety of facilitators enhanced their group participation.
As confirmed by student grades and student and faculty members’ feedback, the change to a rotating facilitator structure resulted in a reliable method of assigning student grades for group participation.
PMCID: PMC1803686  PMID: 17332853
problem-based learning; curriculum; facilitators; assessment
22.  Use of Live Interactive Webcasting for an International Postgraduate Module in eHealth: Case Study Evaluation 
Producing “traditional” e-learning can be time consuming, and in a topic such as eHealth, it may have a short shelf-life. Students sometimes report feeling isolated and lacking in motivation. Synchronous methods can play an important part in any blended approach to learning.
The aim was to develop, deliver, and evaluate an international postgraduate module in eHealth using live interactive webcasting.
We developed a hybrid solution for live interactive webcasting using a scan converter, mixer, digitizer, and video server to embed a presenter-controlled talking head or copy of the presenter’s computer screen (normally a PowerPoint slide) in a student chat room. We recruited 16 students from six countries and ran weekly 2.5-hour live sessions for 10 weeks. The content included the use of computers by patients, patient access to records, different forms of e-learning for patients and professionals, research methods in eHealth, geographic information systems, and telehealth. All sessions were recorded—presentations as video files and the student interaction as text files. Students were sent an email questionnaire of mostly open questions seeking their views of this form of learning. Responses were collated and anonymized by a colleague who was not part of the teaching team.
Sessions were generally very interactive, with most students participating actively in breakout or full-class discussions. In a typical 2.5-hour session, students posted about 50 messages each. Two students did not complete all sessions; one withdrew because of work pressure after session 6, and one because of illness after session 7. Fourteen of the 16 responded to the feedback questionnaire. Most students (12/14) found the module useful or very useful, and all would recommend the module to others. All liked the method of delivery, in particular the interactivity, the variety of students, and the “closeness” of the group. Most (11/14) felt “connected” with the other students on the course. Many students (11/14) had previous experience with asynchronous e-learning, two as teachers; 12/14 students suggested advantages of synchronous methods, mostly associated with the interaction and feedback from teachers and peers.
This model of synchronous e-learning based on interactive live webcasting was a successful method of delivering an international postgraduate module. Students found it engaging over a 10-week course. Although this is a small study, given that synchronous methods such as interactive webcasting are a much easier transition for lecturers used to face-to-face teaching than are asynchronous methods, they should be considered as part of the blend of e-learning methods. Further research and development is needed on interfaces and methods that are robust and accessible, on the most appropriate blend of synchronous and asynchronous work for different student groups, and on learning outcomes and effectiveness.
PMCID: PMC2802565  PMID: 19914901
Webcasting; synchronous e-learning; eHealth
23.  Small-Group Learning in an Upper-Level University Biology Class Enhances Academic Performance and Student Attitudes Toward Group Work 
PLoS ONE  2010;5(12):e15821.
To improve science learning, science educators' teaching tools need to address two major criteria: teaching practice should mirror our current understanding of the learning process; and science teaching should reflect scientific practice. We designed a small-group learning (SGL) model for a fourth-year university neurobiology course using these criteria and studied student achievement and attitude in five course sections encompassing the transition from individual work-based to SGL course design. All students completed daily quizzes/assignments involving analysis of scientific data and the development of scientific models. Students in individual work-based (Individualistic) sections usually worked independently on these assignments, whereas SGL students completed assignments in permanent groups of six. SGL students had significantly higher final exam grades than Individualistic students. The transition to the SGL model was marked by a notable increase in 10th percentile exam grade (Individualistic: 47.5%; Initial SGL: 60%; Refined SGL: 65%), suggesting SGL enhanced achievement among the least prepared students. We also studied student achievement on paired quizzes: quizzes were first completed individually and submitted, and then completed as a group and submitted. The group quiz grade was higher than the individual quiz grade of the highest achiever in each group over the term. All students, even term high achievers, could benefit from the SGL environment. Additionally, entrance and exit surveys demonstrated that student attitudes toward SGL were more positive at the end of the Refined SGL course. We assert that SGL is uniquely positioned to promote effective learning in the science classroom.
PMCID: PMC3012112  PMID: 21209910
24.  Identifying Key Features of Effective Active Learning: The Effects of Writing and Peer Discussion 
CBE Life Sciences Education  2014;13(3):469-477.
This study compared the effectiveness of three different methods of implementing active-learning exercises in an introductory biology course. The results suggest that individual writing should be implemented as part of active learning whenever possible and that instructors may need training and practice to become effective with active learning.
We investigated some of the key features of effective active learning by comparing the outcomes of three different methods of implementing active-learning exercises in a majors introductory biology course. Students completed activities in one of three treatments: discussion, writing, and discussion + writing. Treatments were rotated weekly between three sections taught by three different instructors in a full factorial design. The data set was analyzed by generalized linear mixed-effect models with three independent variables: student aptitude, treatment, and instructor, and three dependent (assessment) variables: change in score on pre- and postactivity clicker questions, and coding scores on in-class writing and exam essays. All independent variables had significant effects on student performance for at least one of the dependent variables. Students with higher aptitude scored higher on all assessments. Student scores were higher on exam essay questions when the activity was implemented with a writing component compared with peer discussion only. There was a significant effect of instructor, with instructors showing different degrees of effectiveness with active-learning techniques. We suggest that individual writing should be implemented as part of active learning whenever possible and that instructors may need training and practice to become effective with active learning.
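A hedged sketch of the kind of mixed-effects analysis described above, with aptitude, treatment, and instructor as fixed effects and a random intercept per student, is shown below; the data and the random-effects structure are assumptions, and the authors' actual model specification may differ.
    # Sketch of a linear mixed-effects model in the spirit of the analysis above:
    # fixed effects for aptitude, treatment, and instructor, with a random intercept
    # per student. The data and the random-effects structure are assumptions.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 120
    df = pd.DataFrame({
        "score": rng.normal(70, 10, n),
        "aptitude": rng.normal(0, 1, n),
        "treatment": rng.choice(["discussion", "writing", "both"], n),
        "instructor": rng.choice(["A", "B", "C"], n),
        "student": rng.choice([f"s{i}" for i in range(40)], n),
    })
    model = smf.mixedlm("score ~ aptitude + C(treatment) + C(instructor)",
                        data=df, groups=df["student"])
    print(model.fit().summary())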
PMCID: PMC4152208  PMID: 25185230
25.  Using cloud-based mobile technology for assessment of competencies among medical students 
PeerJ  2013;1:e164.
Valid, direct observation of medical student competency in clinical settings remains challenging and limits the opportunity to promote performance-based student advancement. The rationale for direct observation is to ascertain that students have acquired the core clinical competencies needed to care for patients. Too often, student observation results in highly variable evaluations that are skewed by factors other than the student’s actual performance. Barriers to effective direct observation and assessment include the lack of effective tools and strategies for assuring that transparent standards are used for judging clinical competency in authentic clinical settings. We developed a web-based content management system under the name Just in Time Medicine (JIT) to address many of these issues. The goals of JIT were fourfold: first, to create a self-service interface allowing faculty with average computing skills to author customizable content and criterion-based assessment tools displayable on internet-enabled devices, including mobile devices; second, to create an assessment and feedback tool capable of capturing learner progress related to hundreds of clinical skills; third, to enable easy access and utilization of these tools by faculty for learner assessment in authentic clinical settings as a means of just-in-time faculty development; fourth, to create a permanent record of the trainees’ observed skills useful for both learner and program evaluation. From July 2010 through October 2012, we implemented a JIT-enabled clinical evaluation exercise (CEX) among 367 third-year internal medicine students. Observers (attending physicians and residents) performed CEX assessments using JIT to guide and document their observations, record the time they spent observing and providing feedback to the students, and record their overall satisfaction. Inter-rater reliability and validity were assessed with 17 observers who viewed six videotaped student-patient encounters and by measuring the correlation between student CEX scores and their scores on subsequent standardized-patient OSCE exams. A total of 3567 CEXs were completed by 516 observers. The average number of evaluations per student was 9.7 (±1.8 SD) and the average number of CEXs completed per observer was 6.9 (±15.8 SD). Observers spent less than 10 min on 43–50% of the CEX observations and on 68.6% of the feedback sessions. A majority of observers (92%) reported satisfaction with the CEX. Inter-rater reliability was measured at 0.69 among all observers viewing the videotapes, and these ratings adequately discriminated competent from non-competent performance. The measured CEX grades correlated with subsequent student performance on an end-of-year OSCE. We conclude that the use of JIT is feasible in capturing discrete clinical performance data with a high degree of user satisfaction. Our embedded checklists had adequate inter-rater reliability and concurrent and predictive validity.
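The abstract reports inter-rater reliability of 0.69 among 17 observers rating six videotaped encounters but does not name the statistic used; the sketch below computes Fleiss' kappa as one common choice for multiple raters, on invented ratings.
    # Sketch of an inter-rater reliability check for multiple observers rating the
    # same encounters. The abstract does not name the statistic used; Fleiss' kappa
    # is shown here as one common choice. Ratings are random placeholders (rows =
    # six encounters, columns = 17 observers, values = competent yes/no), so the
    # resulting kappa will be near zero; real ratings would replace them.
    import numpy as np
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    rng = np.random.default_rng(1)
    ratings = rng.integers(0, 2, size=(6, 17))
    table, _ = aggregate_raters(ratings)   # per-encounter counts in each category
    print(f"Fleiss' kappa = {fleiss_kappa(table):.2f}")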
PMCID: PMC3792179  PMID: 24109549
Educational technology; Educational measurement; Medical students; Smart phones; Competency based assessment; Direct observation; Medical faculty; Clinical competence; iPhone; miniCEX
