J Appl Behav Anal. 2009 Winter; 42(4): 795–800.
PMCID: PMC2790933

THE EFFECTS OF RESPONSE CARDS ON STUDENT AND TEACHER BEHAVIOR DURING VOCABULARY INSTRUCTION

Henry Roane, Action Editor

Abstract

The use of response cards during whole-class English vocabulary instruction was evaluated. Five low-participating students were observed under hand-raising and response-card conditions to assess the effects of response cards on student responding, test scores, and teacher questions and feedback. Responding and test scores were higher for all target students in the response-card condition. The teacher asked a similar number of questions in both conditions; however, she provided more feedback in the response-card condition.

Keywords: active responding, classroom, feedback, hand raising, response cards, teaching, vocabulary

Students learn best when they are actively engaged in learning relevant instructional material (Bost & Riccomini, 2006). One promising strategy designed to increase active student responding via opportunities to respond is response cards. Students use these small whiteboards to print answers to teacher questions and hold up the cards to show their answers (Heward et al., 1996). The current literature shows that response cards increase the frequency and accuracy of student responding during whole-class instruction (Narayan, Heward, & Gardner, 1990), increase scores on immediate assessments (Maheady, Michielli-Pendl, Mallette, & Harper, 2002), increase scores on delayed assessments (Christle & Schuster, 2003), and reduce disruptive behavior (Armendariz & Umbreit, 1999). A meta-analysis (Randolph, 2007) found large effect sizes for achievement on tests, responsiveness in class, and decreases in problem behavior when students used response cards.

To date, research on response cards has focused on student, rather than teacher, behavior (Randolph, 2007). Response cards allow the teacher to assess the understanding of the class as a whole and provide feedback to the group or to individual students (Heward, 1994). Christle and Schuster (2003) reported anecdotally that a teacher provided more feedback and modified instruction to improve student understanding when the students used response cards. The current study investigated the relative effects of response cards and hand raising on student participation and academic achievement and on teacher behavior during whole-class vocabulary instruction.

METHOD

Participants, Setting, and Materials

Although the whole class used response cards, the impact on 5 students was studied. The teacher identified these 5 students as reluctant to respond during whole-class question-and-answer sessions. Academically, the students ranged from low to high achievers. Leo and Brenda (both 10 years old) were native English speakers who had a history of school-related anxiety and excessive absences. Alice, Sam, and Nicky were 11-year-old students who had emigrated from China, Pakistan, and Iran, respectively, 2 to 4 years prior to the study. The classroom teacher, who had 29 years of teaching experience, implemented the lessons, which she and the researcher planned together.

The study took place in an inner-city public school in British Columbia that enrolled 450 students from 33 language groups representing 42 countries, and 56% of the students spoke English as a second or additional language. All teaching sessions took place in a fifth-grade classroom with 15 male and 14 female students, with an age range of 10 to 11 years.

The response cards consisted of a set of laminated cards that were accompanied by dry-erase markers.

Procedure

Teaching session format and selection of target words

The teacher delivered 30-min teaching sessions two or three times weekly during the regular class English instruction using a chapter book entitled Midnight for Charlie Bone (Minno, 2002). Students had access to this book during vocabulary instruction and during three 20-min blocks of silent reading each week. The book had 21 chapters, and each teaching session corresponded to one chapter of the book. Each phase of the study consisted of five sessions, except for the first intervention condition, which had six sessions. Ten words from each chapter (210 words total; 50 to 60 words per phase) were targeted for instruction in each session (see Table 1 for examples). The teacher and the researcher chose target words that would be equally difficult for all students and had not been taught previously.

Table 1
Sample Vocabulary Words Targeted for Instruction

During teaching sessions, the teacher wrote the 10 target words on the board, briefly modeled their pronunciation, provided definitions and sample sentences for each of the target words, read aloud the definitions of the words in random order, and asked the students to respond with the word that matched the definition. Students responded by raising their hands or their response cards, depending on the condition in effect. Next, the teacher read a sentence from the book, which contained one of the target words, but she omitted the key word and asked the students to provide the missing word (either by raising their hands or using their response cards). Finally, the teacher reviewed the words again by presenting the definitions in a random order and having the students provide the words. The teacher was free to ask questions, give feedback, and control the amount of wait time afforded to students when they were responding to her questions. The teacher timed each session with a stopwatch, which was stopped and restarted if there were any interruptions, so that only instruction time was counted in the 30-min period.

At the end of every phase, students completed a test that covered material from the previous five or six teaching sessions (four tests in total). All of the tests contained 15 words, randomly chosen from the target words presented during the teaching sessions in that phase, with the exception of the final test, which consisted of only 10 words (at the teacher's request).

Response-card training

The day before the study began, the teacher trained students to use the response cards. She instructed them to hold up their cards when she said “cards up” and to put them down when she said “cards down” (Christle & Schuster, 2003); this routine was practiced several times over a 25-min period until students achieved proficiency in the use of response cards.

Response measurement and interobserver agreement

A reversal (ABAB) design was used to evaluate the effects of response cards on four dependent variables: (a) rate of teacher questions (a question about one specific vocabulary item that required a response from the students), (b) rate of teacher feedback statements (one statement containing information regarding the accuracy and understanding of target words provided immediately after student responding), (c) the percentage of student-initiated responses (student raised a hand during baseline or wrote down an answer on his or her response card) following teacher questions, and (d) test scores (based on the student matching each of the target words on the test with its appropriate definition). The teacher was not aware that her feedback was being monitored, nor was she given specific instructions regarding the delivery of feedback during the investigation. The frequencies of teacher questioning and teacher feedback were converted to a rate (responses per minute). Student-initiated responses were analyzed as percentage of opportunities by dividing the number of student responses by the number of teacher questions. The tests consisted of the key words listed at the top of the page and the definitions listed below. The students were to write the word beside its corresponding definition. Percentage correct on the tests was calculated by dividing the number of words correctly matched by the total number of words presented.
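The conversions described above (frequency to rate, responses to percentage of opportunities, and matched words to percentage correct) can be sketched as simple arithmetic. The following is an illustrative sketch; the session counts in the example are made-up numbers, not data from the study.

```python
SESSION_MINUTES = 30  # each timed teaching session lasted 30 min of instruction


def per_minute_rate(count, minutes=SESSION_MINUTES):
    """Convert a session frequency (e.g., teacher questions) to responses per minute."""
    return count / minutes


def percent_of_opportunities(student_responses, teacher_questions):
    """Student-initiated responses as a percentage of teacher questions asked."""
    return 100 * student_responses / teacher_questions


def percent_correct(words_matched, words_presented):
    """Test score: words correctly matched to definitions over total words on the test."""
    return 100 * words_matched / words_presented


# Hypothetical session: 30 questions in 30 min, 24 student responses, 12 of 15 test items correct
print(per_minute_rate(30))               # 1.0 response per minute
print(percent_of_opportunities(24, 30))  # 80.0
print(percent_correct(12, 15))           # 80.0
```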

The observers sat at the back of the room (to be as unobtrusive as possible) and recorded occurrences of teacher questions and feedback, as well as the responses of the 5 target students. The content of the feedback and the accuracy of student responding were not recorded.

A second observer collected interobserver agreement data during eight of the 21 sessions and independently marked copies of the tests. Interobserver agreement was calculated as the number of agreements on the occurrence and nonoccurrence of behavior divided by the total number of agreements plus disagreements, and this ratio was converted to a percentage. Mean interobserver agreement was 96% (range, 92% to 100%) for student-initiated responses, 98% (range, 94% to 100%) for teacher questioning, and 84% (range, 75% to 100%) for teacher feedback. Interobserver agreement for test scores was 100%.
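The agreement formula above (agreements on occurrence and nonoccurrence divided by agreements plus disagreements, converted to a percentage) can be sketched as follows; the two observer records are illustrative, not study data.

```python
def interobserver_agreement(primary, secondary):
    """Point-by-point agreement: agreements on the occurrence and
    nonoccurrence of behavior divided by agreements plus disagreements,
    expressed as a percentage. Both records must be the same length."""
    agreements = sum(a == b for a, b in zip(primary, secondary))
    return 100 * agreements / len(primary)


# 1 = behavior recorded for an opportunity, 0 = not recorded (hypothetical records)
primary = [1, 0, 1, 1, 0, 1, 0, 1]
secondary = [1, 0, 1, 0, 0, 1, 0, 1]
print(interobserver_agreement(primary, secondary))  # 87.5
```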

RESULTS AND DISCUSSION

Figure 1 (top) depicts the rates of teacher questioning and teacher feedback statements in each 30-min session. Similar rates of teacher questions occurred during the hand-raising (M = 1.01 responses per minute) and response-card (M = 1.06) conditions. By contrast, the teacher provided feedback more often in the response-card condition (M = 1.2 responses per minute) than in the hand-raising condition (M = 0.92). Anecdotally, the teacher usually gave feedback to individual students in the hand-raising condition, whereas she gave feedback to the whole group in the response-card condition.

Figure 1
Teacher questions and feedback to students, student-initiated response opportunities, and test scores for each student.

Figure 1 (middle) shows the results for student-initiated responses in the hand-raising and response-card conditions. Levels of hand raising were zero or low for all participants in the hand-raising condition (Ms = 0%, 22%, 16%, 26%, and 27% for Alice, Leo, Brenda, Sam, and Nicky, respectively) and increased in the response-card condition (Ms = 46%, 95%, 91%, 100%, and 100% for Alice, Leo, Brenda, Sam, and Nicky, respectively).

Figure 1 (bottom) shows the test scores for each of the students. Alice's test scores improved from the first hand-raising condition to the first response-card condition, but her scores showed no difference following the second exposure to these conditions. The remaining 4 students all received higher test scores following the response-card conditions than following the hand-raising conditions (Nicky was absent for the final test).

The teacher provided the students with a greater amount of feedback during the response-card condition than in the hand-raising condition. This finding extends the previous literature on response cards (Christle & Schuster, 2003). One potential reason for this outcome is that, in the response-card condition, the teacher had more information about errors across all students and may therefore have been better positioned to provide informed feedback.

The increase in student responding replicates the findings of several previous studies that have compared response cards with hand raising in a variety of environments with a wide range of learners (Christle & Schuster, 2003; Davis & O'Neill, 2004; Gardner, Heward, & Grossi, 1994; Godfrey, Grisham-Brown, Schuster, & Hemmeter, 2003; Kellum, Carr, & Dozier, 2001; Lambert, Cartledge, Heward, & Lo, 2006; Maheady et al., 2002; Marmolejo, Wilder, & Bradley, 2004; Narayan et al., 1990).

There are several limitations to the present study. First, a pretest was not administered, so the students may have already known some of the vocabulary taught in the lessons. Second, although test scores improved with response cards, the scores were quite low (i.e., rarely over 80%), suggesting that the students did not master the target vocabulary words. Because accuracy of student responding was not measured, it is not clear whether students were providing correct or incorrect responses to teacher questions. A more thorough investigation of teacher feedback is warranted, specifically, the characteristics of response-card instruction that increase the quality and quantity of teacher feedback and the effects of different types of feedback on student responding.

Footnotes

This research was carried out as part of a Master of Special Education course by the first author under the supervision of the second author.

REFERENCES

  • Armendariz F, Umbreit J. Using active responding to reduce disruptive behavior in a general education classroom. Journal of Positive Behavior Interventions. 1999;1:152–158.
  • Bost L.W, Riccomini P.J. Effective instruction: An inconspicuous strategy for dropout prevention. Remedial and Special Education. 2006;27:301–311.
  • Christle C.A, Schuster J.W. The effects of using response cards on student participation, academic achievement, and on-task behavior during whole-class math instruction. Journal of Behavioral Education. 2003;12:147–165.
  • Davis L.L, O'Neill R.E. Use of response cards with a group of students with learning disabilities including those for whom English is a second language. Journal of Applied Behavior Analysis. 2004;37:219–222.
  • Gardner R, III, Heward W.L, Grossi T.A. Effects of response cards on student participation and academic achievement: A systematic replication with inner-city students during whole-class science instruction. Journal of Applied Behavior Analysis. 1994;27:63–71.
  • Godfrey S.A, Grisham-Brown J, Schuster J.W, Hemmeter M.L. The effects of three techniques on student participation with preschool children with attending problems. Education and Treatment of Children. 2003;26:255–272.
  • Heward W.L. Three “low-tech” strategies for increasing the frequency of active student response during group instruction. In: Gardner R III, Sainato D.M, Cooper J.O, Heron T.E, editors. Behavior analysis in education: Focus on measurably superior instruction. Pacific Grove, CA: Brooks/Cole; 1994. pp. 283–320.
  • Heward W.L, Gardner R, III, Cavanaugh R.A, Courson F.H, Grossi T.A, Barbetta P.M. Everyone participates in this class: Using response cards to increase active student response. Teaching Exceptional Children. 1996;28:4–11.
  • Kellum K.K, Carr J.E, Dozier C.L. Response-card instruction and student learning in a college classroom. Teaching of Psychology. 2001;28:101–104.
  • Lambert M.C, Cartledge G, Heward W.L, Lo Y. Effects of response cards on disruptive behavior and academic responding during math lessons by fourth-grade urban students. Journal of Positive Behavior Interventions. 2006;8:88–99.
  • Maheady L, Michielli-Pendl J, Mallette B, Harper G.F. A collaborative research project to improve the performance of a diverse sixth grade science class. Teacher Education and Special Education. 2002;25:55–70.
  • Marmolejo E.K, Wilder D.A, Bradley L. A preliminary analysis of the effects of response cards on student performance and participation in an upper division university course. Journal of Applied Behavior Analysis. 2004;37:405–410.
  • Minno J. Midnight for Charlie Bone. London: Egmont; 2002.
  • Narayan J, Heward W.L, Gardner R., III. Using response cards to increase student participation in an elementary classroom. Journal of Applied Behavior Analysis. 1990;23:483–490.
  • Randolph J.J. Meta-analysis of the research on response cards: Effects on test achievement, quiz achievement, participation, and off-task behavior. Journal of Positive Behavior Interventions. 2007;9:113–128.
