We tested five course designs that varied the structure of daily and weekly active-learning exercises in an attempt to lower the traditionally high failure rate in a gateway course for biology majors. Students answered daily multiple-choice questions with electronic response devices (clickers) or cards. Card responses were ungraded; clicker responses were graded for right/wrong answers or for participation. Weekly practice exams were completed individually or as part of a study group. Compared with previous versions of the same course taught by the same instructor, students in the new course designs performed better: significantly lower failure rates, higher total exam points, and higher scores on an identical midterm. Attendance was higher in the clicker section than in the cards section, and attendance and course grade were positively correlated. Students did better on clicker questions when they were graded for right/wrong answers rather than for participation, although this improvement did not translate into increased exam scores. In this course, achievement increases when students get regular practice via prescribed (graded) active-learning exercises.
We tested the hypothesis that highly structured course designs, which implement reading quizzes and/or extensive in-class active-learning activities and weekly practice exams, can lower failure rates in an introductory biology course for majors, compared with low-structure course designs that are based on lecturing and a few high-risk assessments. We controlled for 1) instructor effects by analyzing data from quarters when the same instructor taught the course, 2) exam equivalence with new assessments called the Weighted Bloom's Index and Predicted Exam Score, and 3) student equivalence using a regression-based Predicted Grade. We also tested the hypothesis that points from reading quizzes, clicker questions, and other “practice” assessments in highly structured courses inflate grades and confound comparisons with low-structure course designs. We found no evidence that points from active-learning exercises inflate grades or reduce the impact of exams on final grades. When we controlled for variation in student ability, failure rates were lower in a moderately structured course design and were dramatically lower in a highly structured course design. This result supports the hypothesis that active-learning exercises can make students more skilled learners and help bridge the gap between poorly prepared students and their better-prepared peers.
To introduce a new approach to problem-based learning (PBL) for self-directed learning in renal therapeutics.
This 5-week course, designed for large student cohorts using minimal teaching resources, was based on a series of case studies and subsequent pharmaceutical care plans, followed by intensive and regular feedback from the instructor.
Assessment of achievement of the learning outcomes was based on weekly graded care plans, a peer-review assessment that allowed each student to judge each group member's contribution as well as their own, and a written case-study-based examination. The pharmaceutical care plan template, designed using a “tick-box” system, significantly reduced the staff time needed for feedback and scoring.
The proposed instructional model achieved the desired learning outcomes with appropriate student feedback, while promoting skills that are essential for the students' future careers as health care professionals.
renal therapeutics; problem-based learning; case study; pharmaceutical care plan
Peer-facilitated workshops enhanced interactivity in our introductory biology course, leading to increased student engagement and learning. A majority of students preferred attending two lectures and a workshop each week over attending three weekly lectures. In the workshops, students worked in small cooperative groups as they solved challenging problems, evaluated case studies, and participated in activities designed to improve their general learning skills. Students in the workshop version of the course scored higher on exam questions recycled from preworkshop semesters. Grades were higher over three workshop semesters in comparison with the seven preworkshop semesters. Although both males and females benefited from workshops, female students showed a larger improvement in grades and increased retention; although both underrepresented minority (URM) and non-URM students benefited from workshops, URM students showed a larger improvement in grades. Beyond improving student performance and retention, the addition of interactive workshops also improved the quality of student learning: Student scores on exam questions that required higher-level thinking increased from preworkshop to workshop semesters.
Kansas State University converted its introductory biology course, previously taught as an audio-tutorial (A-T), to a studio format in 1997. We share with others information about the process involved and present assessment data for the studio format course that address 1) student exam performance in A-T and studio; 2) student course grades in A-T and studio; 3) student and instructor perceptions and attitudes for A-T and studio; 4) student performance in subsequent biology courses for A-T and studio; and 5) gains in student learning for the studio course and other traditional lecture/lab courses. Collectively, these measures demonstrate that the studio format is as effective as or more effective (for some measures) than the A-T approach and traditional approaches in providing an effective learning environment. We discuss the issues involved in comparing course formats.
Previously we showed that weekly, written, timed, and peer-graded practice exams help increase student performance on written exams and decrease failure rates in an introductory biology course. Here we analyze the accuracy of peer grading, based on a comparison of student scores to those assigned by a professional grader. When students graded practice exams by themselves, they were significantly easier graders than a professional; overall, students awarded ≈25% more points than the professional did. This difference represented ≈1.33 points on a 10-point exercise, or 0.27 points on each of the five 2-point questions posed. When students graded practice exams as a group of four, the same student-expert difference occurred. The student-professional gap was wider for questions that demanded higher-order versus lower-order cognitive skills. Thus, students not only have a harder time answering questions on the upper levels of Bloom's taxonomy, they have a harder time grading them. Our results suggest that peer grading may be accurate enough for low-risk assessments in introductory biology. Peer grading can help relieve the burden on instructional staff posed by grading written answers—making it possible to add practice opportunities that increase student performance on actual exams.
This study assessed the importance of teacher preference of individual students, relative to peer rejection and student aggression, as an independent predictor of children's emotional adjustment and grades. First, a longitudinal, cross-lagged path analysis was conducted to determine the patterns of influence among teacher preference, peer rejection, and student aggression. Then, parallel growth analyses were examined to test whether lower initial and declining teacher preference, beyond the influence of initial-level and change in peer rejection and student aggression, predicted change in loneliness, depression, social anxiety, and grades. Social adjustment, emotional adjustment, and academic adjustment were assessed in the fall and spring of two consecutive school years with 1,193 third-grade students via peer-, teacher-, and self-report instruments as well as school records. In the cross-lagged path analysis, reciprocal influence over time between teacher preference and peer rejection was found, and student aggression predicted lower teacher preference and higher peer rejection. In the growth analyses, initial and declining teacher preference were independent predictors of increasing loneliness and declining grades. Discussion focuses on the relevance of the results within a transactional model of school adaptation.
Learning science requires higher-level (critical) thinking skills that need to be practiced in science classes. This study tested the effect of exam format on critical-thinking skills. Multiple-choice (MC) testing is common in introductory science courses, and students in these classes tend to associate memorization with MC questions; they may not see the need to modify their study strategies for critical thinking when the MC exam format has not changed. To test the effect of exam format, I used two sections of an introductory biology class. One section was assessed with exams in the traditional MC format; the other was assessed with both MC and constructed-response (CR) questions. The mixed exam format was correlated with significantly more cognitively active study behaviors and significantly better performance on the cumulative final exam (after accounting for grade point average and gender). There was also less gender bias in the CR answers. This suggests that the MC-only exam format indeed hinders critical thinking in introductory science classes. Introducing CR questions encouraged students to learn more and become better critical thinkers, and it reduced gender bias. However, student resistance increased as students adjusted their perceptions of their own critical-thinking abilities.
We studied gains in student learning over eight semesters in which an introductory biology course curriculum was changed to include optional verbal final exams (VFs). Students could opt to demonstrate their mastery of course material via structured oral exams with the professor. In a quantitative assessment of cell biology content knowledge, students who passed the VF outscored their peers on the medical assessment test (MAT), an exam built with 40 Medical College Admissions Test (MCAT) questions (66.4% [n = 160] and 62% [n = 285], respectively; p < 0.001). The higher-achieving students performed better on MCAT questions in all topic categories tested; the greatest gain occurred on the topic of cellular respiration. Because the VF focused on a conceptually parallel topic, photosynthesis, there may have been authentic knowledge transfer. In longitudinal tracking studies, passing the VF also correlated with higher performance in a range of upper-level science courses, with greatest significance in physiology, biochemistry, and organic chemistry. Participants represented a wide range of academic standing, gender, and ethnicity, though not in equal proportions. Yet students nearly unanimously (92%) valued the option. Our findings suggest oral exams at the introductory level may allow instructors to assess and aid students striving to achieve higher-level learning.
The aim of midwifery training is to increase trainees' scientific and practical abilities so that they can provide quality health care services. Tools for evaluating these abilities are therefore of great importance. This study evaluated the content validity, criterion validity, and base validity of academic exams.
This cross-sectional, evaluative study examined 18 specialized theoretical midwifery courses across 2 semesters in 2007-2008. Data were gathered using checklists. The questionnaire data and the results of exam-question analysis (final and midterm) were compiled by 2 medical-education experts and 2 subject experts for each course. Data were analyzed with SPSS software. Spearman's correlation test was used to determine base validity, and frequency distribution tables were used to present descriptive results.
Evaluation of 1013 questions across the 18 courses showed that in 61.18% of exams more than 90% of questions had content validity, and in 28.27% of exams more than 90% of questions met criterion validity. Overall, 92.38% of questions showed content validity and 80.45% showed criterion validity. Eleven of the 18 courses had base validity.
This survey showed that the content validity of exam questions in specialized theoretical midwifery courses was at a favorable level, but their criterion validity fell well short of the ideal. Targeted training in each session could therefore help teachers achieve their assessment goals.
Content validity; criterion validity; midwifery special exams; reliability
To redesign a patient assessment course using a structured instructional design process and evaluate student learning.
Course coordinators collaborated with an instructional design and development expert to incorporate new pedagogical approaches (eg, Web-based self-tests), create new learning activities (eg, peer collaboration on worksheets, SOAP note writing), and develop grading rubrics.
Formative and summative surveys were administered for student self-assessment and course evaluation. Seventy-six students (78%) completed the summative survey. The mean course grade was 91.8% ± 3.6%, with more than 75% of students reporting achievement of primary course learning objectives. All of the additional learning activities helped students meet the learning objectives with the exception of the written drug information response.
The use of a structured instructional design process to redesign a patient assessment course was successful in creating a curriculum that succeeded in teaching students the specified learning objectives. Other colleges and schools are encouraged to collaborate with an instructional design and development expert to improve the pharmacy curriculum.
curriculum design; patient assessment; physical assessment
To evaluate students’ academic success in Anatomy and Chemistry courses delivered either in a traditional continual format, spread over two semesters, or in alternating course blocks, in which one group of students attended the Anatomy course first and then the Chemistry course, and vice versa.
We analyzed the data on exam grades for Anatomy and Chemistry courses in the first year of the curriculum for academic year 2001/02, with the traditional continual delivery of the courses (n = 253 for chemistry and n = 243 for anatomy course), and academic year 2003/04, with block delivery of the courses (n = 255 for chemistry and n = 260 for anatomy course). The content of the courses and the teachers were similar in both academic years. Grades from the final examination were analyzed only for students who sat the exam at the first available exam term and passed the course. For the Anatomy block course, grades at 2 interim written tests and 2 parts of the final exam (practical stage exam and oral exam) in each block were analyzed for students who passed all interim tests and the final exam.
There were no differences between the two types of course delivery in the number of students passing the final examination at the first attempt. Within the 2003/04 academic year, however, the passing percentage declined between the two Anatomy block groups (56% passing students in block 1 vs 40% in block 2, P = 0.014). There were no differences in the final examination grade between the 2 blocks for either the Anatomy or the Chemistry course, but average grades increased from the 2001/02 to the 2003/04 academic year owing to an increase in Chemistry grades (F1,399 = 18.4, P < 0.001, 2 × 2 ANOVA). Grades in Chemistry were significantly lower than grades in Anatomy when the courses were delivered in a continual but not in a block format (F1,399 = 35.1, P < 0.001, 2 × 2 ANOVA). When both courses were delivered in a block format, there was no effect of the sequence of their delivery (F1,206 = 1.8, P = 0.182, 2 × 2 ANOVA). There was also a significant difference in grades on interim assessments of Anatomy when it was delivered in the block format (F3,85 = 28.8, P < 0.001, between-within subjects 2 × 4 ANOVA), with grades on the practical test and oral exam being significantly lower than grades on the 2 interim tests given at the beginning of the block (P < 0.001 for all pair-wise comparisons).
The type of course delivery was not associated with significant differences in student academic success in Anatomy and Chemistry courses in the medical curriculum. Students can successfully pass these courses when they are delivered either in a continual, whole year format or in a condensed time format of a course block, regardless of the number and type of courses preceding the block course.
Objective. To implement peer-led team learning in an online course on controversial issues surrounding medications and the US healthcare system.
Design. The course was delivered completely online using a learning management system. Students participated in weekly small-group discussions in online forums, completed 3 reflective writing assignments, and collaborated on a peer-reviewed grant proposal project.
Assessment. In a post-course survey, students reported that the course was challenging but meaningful. Final projects and peer-reviewed assignments demonstrated that primary learning goals for the course were achieved and students were empowered to engage in the healthcare debate.
Conclusions. Peer-led team learning is an effective strategy for an online course offered to a wide variety of student learners. By shifting some of the learning and grading responsibility to students, the instructor workload for the course became more manageable.
peer-led team learning; online learning; interprofessional education; healthcare system
To transform a pharmaceutical mathematics course to a self-paced instructional format using Web-accessed databases for student practice and examination preparation.
The existing pharmaceutical mathematics course was modified from a lecture style with midsemester and final examinations to a self-paced format in which students had multiple opportunities to complete online, nongraded self-assessments as well as in-class module examinations.
Grades and course evaluations were compared between students taking the class in the lecture format with midsemester and final examinations and students taking the class in the self-paced instructional format. The number of attempts students needed to pass examinations was also analyzed.
Based on instructor assessment and student feedback, the course succeeded in giving students who were proficient in pharmaceutical mathematics a chance to progress quickly and students who were less skillful the opportunity to receive instruction at their own pace and develop mathematical competence.
mathematics; Web-based instruction; self-paced; distance education; assessment
To compare medical students on a modern MBChB programme who did an optional intercalated degree with their peers who did not intercalate; in particular, to monitor performance in subsequent undergraduate degree exams.
This was a retrospective, observational study of anonymised databases of medical student assessment outcomes. Data were accessed for graduates, University of Aberdeen Medical School, Scotland, UK, from the years 2003 to 2007 (n = 861). The main outcome measure was marks for summative degree assessments taken after intercalating.
Of 861 medical students, 154 (17.9%) did an intercalated (IC) degree. After adjustment for cohort, maturity, gender and baseline (3rd year) performance in the matching exam type, having done an IC degree was significantly associated with attaining high (18–20) common assessment scale (CAS) marks in three of the six degree assessments taken after the IC students rejoined the course: the 4th year written exam (p < 0.001), the 4th year OSCE (p = 0.001) and the 5th year Elective project (p = 0.010).
Intercalating was associated with improved performance in Years 4 and 5 of the MBChB. This improved performance will further contribute to higher academic ranking for Foundation Year posts. Long-term follow-up is required to identify if doing an optional intercalated degree as part of a modern medical degree is associated with following a career in academic medicine.
Undergraduate students struggle to read the scientific literature, and educators have suggested that this may reflect deficiencies in their science literacy skills. In this two-year study we developed and tested a strategy for using the scientific literature to teach science literacy skills to novice life science majors. The first year of the project served as a preliminary investigation in which we evaluated students' science literacy skills, created a set of science literacy learning objectives aligned with Bloom’s taxonomy, and developed a set of homework assignments that used peer-reviewed articles to teach science literacy. In the second year of the project, the effectiveness of the assignments and the learning objectives was evaluated. Summative student learning was evaluated in the second year on a final exam; the mean score was 83.5% (±20.3%), and there were significant learning gains (p < 0.05) in seven of nine science literacy skills. Project data indicated that even though students achieved the course-targeted lower-order science literacy objectives, many were deficient in higher-order literacy skills. Results of this project suggest that building scientific literacy is a continuing process that begins in first-year science courses with a set of fundamental skills and can serve the progressive development of literacy skills throughout the undergraduate curriculum.
To convert a traditional graduate seminar course into a class that emphasizes written as well as oral communication skills.
Graduate pharmacology/toxicology students presented formal and informal seminars on their research progress and on recent peer-reviewed literature from the field. Students in the audience wrote critiques of the research project or article, as well as of the presentations themselves.
Students were evaluated based on oral presentations, class participation, and a scientific writing component. All faculty members provided constructive written comments and a grade. The course master provided the presenter with a formal written review and returned a “red pen” revision of each student critique.
This novel seminar/writing course introduces intensive focus on writing skills, which are especially essential today given the large number of graduate students for whom English is not a first language.
research; seminar; communication skills; graduate education
To develop, implement, and evaluate a process of intergroup peer assessment and feedback using problem-based learning (PBL) tutorials.
A peer-assessment process was used in a PBL tutorial setting for an integrated pharmacy practice course in which small groups of students graded each others’ PBL case presentations and provided feedback in conjunction with facilitator assessment.
Students' quantitative and qualitative perceptions of the peer assessment process were triangulated with facilitator feedback. Students became more engaged, confident, and motivated, and developed a range of self-directed, life-long learning skills. Students had mixed views regarding the fairness of the process and grade descriptors. Facilitators strongly supported the peer assessment process.
Peer assessment is an appropriate method to assess PBL skills and is endorsed by students as appropriate and useful.
pharmacy students; peer assessment; problem-based learning
To test the hypothesis that procedural knowledge can be assessed using Key Feature (KF) questions in written exams, the University of Veterinary Medicine Hannover Foundation (TiHo) pioneered this format in the summative assessment of veterinary medicine students. Exams in veterinary medicine are administered orally, practically, in written form, or digitally in written form. The only question formats previously used in written e-exams were Type A single-choice questions, image analysis, and short-answer questions. E-exams are held at the TiHo using the electronic exam system Q [kju:] by CODIPLAN GmbH.
In order to examine less factual and more procedural knowledge, and thus the decision-making skills of students, the TiHo integrated a new question format into its exam regulations, and some examiners used it for the first time in computer-based assessment. Following a successful pilot phase in formative e-exams, KF questions were also introduced into summative exams. A number of multiple-choice questions were replaced by KF questions in four computer-based assessments in veterinary medicine, covering internal medicine, surgery, reproductive medicine, and dairy science.
The integration and linking of KF questions into the computer-based assessment system Q [kju:] proceeded without complications. The new question format was well received both by the students and by the teaching staff who wrote the questions.
The results support the hypothesis that Key Feature questions are a practicable addition to the existing e-exam question formats for testing procedural knowledge. The number of KF questions in veterinary medicine examinations at the TiHo will therefore be increased further.
Key feature questions; Written examination; Reliability; Electronic exam
Although the majority of scientific information is communicated in written form, and peer review is the primary process by which it is validated, undergraduate students may receive little direct training in science writing or peer review. Here, I describe the use of Calibrated Peer Review™ (CPR), a free, web-based writing and peer review program designed to alleviate instructor workload, in two undergraduate neuroscience courses: an upper-level sensation and perception course (41 students, three assignments) and an introductory neuroscience course (50 students; two assignments). Using CPR online, students reviewed primary research articles on assigned ‘hot’ topics, wrote short essays in response to specific guiding questions, reviewed standard ‘calibration’ essays, and provided anonymous quantitative and qualitative peer reviews. An automated grading system calculated the final scores based on a student’s essay quality (as determined by the average of three peer reviews) and his or her accuracy in evaluating 1) three standard calibration essays, 2) three anonymous peer reviews, and 3) his or her self-review. Thus, students were assessed not only on their skill at constructing logical, evidence-based arguments, but also on their ability to accurately evaluate their peers’ writing. According to both student self-reports and instructor observation, students’ writing and peer review skills improved over the course of the semester. Student evaluation of the CPR program was mixed; while some students felt that the peer review process enhanced their understanding of the material and improved their writing, others felt that the process was biased and required too much time. Despite student critiques of the program, I still recommend the CPR program as an excellent and free resource for incorporating more writing, peer review, and critical thinking into an undergraduate neuroscience curriculum.
peer review; writing to learn; web-based learning; learning technology; Calibrated Peer Review
This study examined the effects of exam length on student performance and cognitive fatigue in an undergraduate biology classroom. The exams tested higher-order thinking skills. To test our hypothesis, we administered standard- and extended-length high-level exams to two populations of non-majors biology students. We gathered exam performance data between conditions as well as on the first and second halves of exams within conditions. Lengthier exams led to better performance on assessment items shared between conditions, possibly lending support to spreading activation theory. They also led to better performance on the final exam, lending support to the testing effect in creative problem solving. Lengthier exams did not result in lower performance due to fatigue, although students reported subjective fatigue. Implications of these findings for assessment practices are discussed.
We tested the effect of voluntary peer-facilitated study groups on student learning in large introductory biology lecture classes. The peer facilitators (preceptors) were trained as part of a Teaching Team (faculty, graduate assistants, and preceptors) by faculty and Learning Center staff. Each preceptor offered one weekly study group to all students in the class. All individual study groups were similar in that they applied active-learning strategies to the class material, but they differed in the actual topics or questions discussed, which were chosen by the individual study groups. Study group participation was correlated with reduced failing grades and course dropout rates in both semesters, and participants scored better on the final exam and earned higher course grades than nonparticipants. In the spring semester the higher scores were clearly due to a significant study group effect beyond ability (grade point average). In contrast, the fall study groups had a small but nonsignificant effect after accounting for student ability. We discuss the differences between the two semesters and offer suggestions on how to implement teaching teams to optimize learning outcomes, including student feedback on study groups.
To evaluate a rubric-based method of assessing pharmacy students' case presentations in the recitation component of a therapeutics course.
A rubric was developed to assess knowledge, skills, and professional behavior. The rubric was used for instructor, student peer, and student self-assessment of case presentations. Rubric-based composite scores were compared to the previous dichotomous checklist-based scores.
Rubric-based instructor scores were significantly lower and had a broader distribution than those resulting from the checklist method. Spring 2007 rubric-based composite scores from instructors and peers were significantly lower than the pilot study scores, but self-assessment composite scores were not significantly different.
Successful development and implementation of a grading rubric facilitated evaluation of knowledge, skills, and professional behavior from the viewpoints of instructor, peer, and self in a didactic course.
rubric; pharmacy; peer assessment; self-assessment; assessment
Objectives. To design and implement a cardiovascular pharmacotherapy elective course to enhance pharmacy students’ ability to evaluate medical literature and apply clinical evidence.
Design. In weekly class sessions, students were provided an overview of the important literature supporting therapeutic guidelines for the management of major cardiovascular diseases. Students worked in groups to complete outside-of-class assignments involving a patient case and then discussed the case in class. During the semester, each student also independently completed a literature search on an assigned topic, summarized the studies found in table format, and presented 1 of the studies to the class.
Assessment. Students’ grades on weekly patient case assignments steadily increased over the semester. Also, the average grade on the final examination was higher than the grade on the midterm take-home examination. On the course evaluation, students rated the course favorably in terms of improvement of confidence in evaluating the primary literature and applying it to practice.
Conclusion. Completion of the cardiovascular pharmacotherapy elective increased pharmacy students’ level of confidence in evaluating literature and applying clinical evidence in making patient care decisions.
cardiovascular disease; pharmacotherapy; evidence-based medicine; elective course
To assess the impact of technology-based changes on student learning, skill development, and satisfaction in a patient-case workshop.
A new workshop format for a course was adopted over a 3-year period. Students received and completed patient cases and obtained immediate performance feedback in class instead of preparing the case prior to class and waiting for instructors to grade and return their cases. The cases were designed and accessed via an online course management system.
Student satisfaction was measured using end-of-course surveys. The impact of the technology-based changes on student learning, problem-solving, and critical-thinking skills was measured and compared between the 2 course formats by assessing changes in examination responses. Students reported three advantages of the new format: a realistic time constraint for responses, a team learning environment, and expedient grading and feedback. Students overwhelmingly agreed that the new format should be continued, and their examination scores improved significantly under it.
The change in delivery of patient-case workshops to an online, real-time system was well accepted and resulted in enhanced learning, critical thinking, and problem-solving skills.
course design; Internet access; online course management system; laptop computers; assessment