School Psych Rev. Author manuscript; available in PMC 2011 January 1.
Published in final edited form as:
School Psych Rev. 2010; 39(1): 54–68.
PMCID: PMC2910920
NIHMSID: NIHMS154230

Teachers’ Perceptions of Word Callers and Related Literacy Concepts

Abstract

The purpose of the study was to investigate teachers’ perceptions of word callers as they relate to the concepts of reading fluency and reading comprehension. To this end, second-grade students (N = 408) completed a series of reading fluency and reading comprehension assessments, and their teachers (N = 31) completed word caller nominations and a questionnaire regarding their concepts surrounding these issues. Our findings suggested that teachers often over-nominated children as word callers. Further, questionnaire data indicated that a great deal of ambiguity and inconsistency exists regarding teachers’ understanding and use of the term word caller. By contrast, teachers seemed to possess a veridical understanding of the related terms reading fluency and reading comprehension.

Keywords: word caller, reading fluency, reading comprehension, teacher judgments, curriculum-based measurement

Despite research supporting the utility of curriculum-based measurement in reading (CBM; Deno, 1985), some teachers have expressed concern that the use of CBM oral reading probes may overlook children for whom comprehension is the primary concern (Hamilton & Shinn, 2003; Shapiro, 2004). CBM oral reading fluency probes are used for a wide range of educational decisions, including screening, progress monitoring, goal setting, instructional planning, predicting performance on high-stakes tests, and even remedial and special education eligibility (Deno, 2003; Hintze & Silberglitt, 2005). Reading fluency is thus often used as a general indicator of reading competence because of its close relation with reading comprehension in early elementary school (Fuchs, Fuchs, Hosp, & Jenkins, 2001; Jenkins, Fuchs, van den Broek, Espin, & Deno, 2003; Schwanenflugel, Meisinger, Wisenbaker, Kuhn, Strauss, & Morris, 2006; Shinn, Knutson, Good, Tilly, & Collins, 1992).

Word callers are children who efficiently decode words but do so without commensurate comprehension taking place (Stanovich, 1986); in other words, they “call out” the words in a text without understanding the meaning of the text as a whole. The term is commonly used by teachers and in educational publications (e.g., Walczyk & Griffin-Ross, 2007). However, some researchers have questioned the validity of the word caller construct (Hamilton & Shinn, 2003; Nathan & Stanovich, 1991; Stanovich, 1986, 2000) and have called it one of the “red herrings” (Nathan & Stanovich, 1991, p. 177) of the reading literature. Although the term is in common use, almost no research indicates how teachers conceptualize it or how it influences their instructional practices.

The Existing Word Caller Literature

Few studies to date have examined word callers, and findings from those studies lend little support to the validity of this construct (Hamilton & Shinn, 2003; Meisinger, Bradley, Schwanenflugel, Kuhn, & Morris, in press). For example, Hamilton and Shinn (2003) asked 75 third-grade teachers across 20 elementary schools whether they taught a student fitting the description of a word caller (i.e., a student who can read fluently but has difficulty with comprehension). Twenty-nine teachers (39%) indicated that they did and identified 33 students as word callers. These teachers were also asked to identify students whom they believed to have oral reading fluency similar to that of the word callers, but who read with comprehension. The teacher-identified word callers demonstrated weaker skills in both oral reading fluency and reading comprehension when compared with their peers: students nominated as word callers averaged 89.1 words correct per minute on CBM reading probes, whereas their peers averaged 116.2 words correct per minute. Moreover, despite several attempts using different criteria, Hamilton and Shinn were not able to identify any word callers in their sample.

Meisinger et al. (in press) found similar results in a two-part study examining the prevalence of word callers and the accuracy of teachers’ word caller nominations. The first study investigated the prevalence of word callers in a large sample (N = 868) of second- and third-grade children who were administered the Gray Oral Reading Test–Fourth Edition (GORT-4; Wiederholt & Bryant, 2001) and the Reading Comprehension subtest of the Wechsler Individual Achievement Test (WIAT-RC; Psychological Corporation, 1992). Using a criterion comparable to that of Hamilton and Shinn (2003), word callers were defined as children who could read fluently (as indicated by a minimum age-based standard score of 95 on the GORT-4 Fluency scale) but struggled to comprehend what they read (as indicated by an age-based standard score below 85 on the WIAT-RC). Based on this criterion, few students (1.4%) across the two grades were identified as word callers.

In a second study, the accuracy of teachers’ word caller nominations and the reading profiles of teacher-nominated word callers were examined in a sample of third- (n = 110) and fifth- (n = 92) grade students and their teachers (n = 21) (Meisinger et al., in press). These students were also administered the GORT-4 and the reading comprehension subtest of the WIAT. Teachers were provided with the same definition of a word caller used in the Hamilton and Shinn (2003) study, were asked whether any of their students fit that definition, and were also asked to define literacy concepts pertinent to identifying word callers (i.e., reading fluency and reading comprehension). Consistent with previous research, the teachers nominated 22.3% of the students as word callers, but only 1.8% of third-grade students and 9.78% of fifth-grade students were psychometrically identified as word callers, and teacher-nominated word callers demonstrated poor fluency and poor comprehension. Moreover, although 61.9% of the teachers included comprehension as a feature of reading fluency, no teacher included fluent reading as a component of reading comprehension, and these definitional differences did not account for which students teachers nominated as word callers (Meisinger et al., in press).

Teacher Judgments of Reading Skill

Teacher nominations of word callers are implicitly based on their perceptions of students’ reading fluency and reading comprehension. Although few studies have examined the accuracy with which teachers identify students as word callers, a larger literature has explored teachers’ judgments of academic skills. Typically, teachers are asked to make direct predictions of students’ performance on a specific academic task (e.g., how many comprehension questions a student will answer correctly on a particular test) or to provide comparative ratings (e.g., rating each student’s reading comprehension on a 5-point scale), which are later compared with students’ actual performance on objective measures (Begeny, Eckert, Montarello, & Storie, 2008; Demaray & Elliot, 1998; Hoge & Coladarci, 1989). Moderate to strong associations have been found between teacher judgments and actual student performance (Mdn r = .66; Hoge & Coladarci, 1989). However, correlation coefficients may mask teacher inaccuracies (Begeny et al., 2008). When percentage agreements were employed, results revealed that teachers often overestimated their students’ reading skills, especially reading fluency (Bates & Nettelbeck, 2001; Begeny et al., 2008; Feinberg & Shapiro, 2003; Hamilton & Shinn, 2003). Some evidence suggests that this overestimation is limited to students with average to low reading abilities and that teachers are quite accurate judges of strong, fluent readers (Begeny et al., 2008).

Despite the importance of teachers’ ability to judge their students’ reading skills, particularly in light of the instructional decisions they make for their students, teachers’ own conceptualization of the reading skills they are judging is typically overlooked. For example, teachers may be asked to predict how many words a student will read correctly in a 1-minute reading probe, but rarely, if ever, are they asked how they would define, assess, or provide instruction related to reading fluency. Further, because teachers may vary in their understanding of literacy concepts, there may also be differences in how teachers identify and provide instructional support to their students.

Although the variability in teachers’ understanding and use of the terms reading fluency and comprehension is somewhat concerning, it is not unreasonable given that variability exists among researchers’ definitions of the terms. For example, some researchers define reading fluency simply as rate and accuracy in oral reading (Shinn, 1989; Torgesen, Rashotte, & Alexander, 2001), whereas other definitions also include expressiveness or prosody (Allington, 1983; National Reading Panel, 2000). Further, some researchers incorporate comprehension processes in their definition of reading fluency (Fuchs et al., 2001; Samuels, 2006; Wolf & Katzir-Cohen, 2001). A common approach to assessing reading fluency is to have a child read a passage aloud while an examiner records the number and type of misread words and the time it takes the child to read the passage. Thus, most measurement instruments employ the simplest definition of text fluency as rate and accuracy in the oral reading of text. The present study also utilized the simple definition of reading fluency as reading rate and accuracy (words read correctly per minute), as it is the most readily observable and therefore most reliably measured (Torgesen et al., 2001).
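
Because fluency is operationalized here as words read correctly per minute (cwpm), a brief worked example of the standard computation may help; the word and error counts below are hypothetical. A child who reads 104 words in a 1-minute probe with 6 errors scores

$$\mathrm{cwpm} = \frac{\text{words read} - \text{errors}}{\text{minutes of reading}} = \frac{104 - 6}{1} = 98.$$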

Reading comprehension is often defined as “the process of simultaneously extracting and constructing meaning” from text (Sweet & Snow, 2003, p. 1), and it entails three elements that occur within a sociocultural context: the reader, the text, and the activity (Sweet & Snow, 2003). However, comprehension has been conceptualized in other ways, such as the interaction between a reader and a text to infer the author’s intended meaning (Johnston, 1981) or simply as a meaning-making process (Block, Gambrell, & Pressley, 2002). Comprehension is also sometimes viewed as a discrete skill, such as identifying the main idea of a text (van den Broek, Lynch, Naslund, Ievers-Landis, & Verduin, 2003; Warren & Fitzgerald, 1997). Reading comprehension is assessed through a variety of formal methods, including question answering, passage recall or retell, and cloze tasks (Fuchs et al., 2001). Comprehension may also be assessed informally through discussions, completion of a graphic organizer, or a writing activity (Blachowicz & Ogle, 2001).

Because many factors influence comprehension, many factors can also complicate its assessment: the text the student is asked to read, situational variables for the student, and the activity used to assess comprehension all might affect the results. For example, if a reader has difficulty with decoding or reading fluency, he or she may have difficulty comprehending the text (Vellutino, 2003). Comprehension might also be poor if a reader has limited background knowledge about a topic (Droop & Verhoeven, 1998) or limited motivation or purpose for reading about it (Miller & Faircloth, 2009). With respect to the text, issues such as genre (e.g., narrative versus expository) and the coherence, organization, and complexity of the text can influence the results (Graesser, McNamara, & Louwerse, 2003). Lastly, how a reader is expected to demonstrate his or her understanding is important. For example, a reader may be asked to respond to literal or inferential questions, respond orally or in writing, select a picture, complete a multiple-choice test, a maze task, or a retell, or apply information learned from a text to complete a new task (Keenan, Betjemann, & Olson, 2008).

Purpose

In sum, research suggests that word callers do not exist in appreciable numbers in the early elementary grades, yet teachers frequently identify their students as word callers (Hamilton & Shinn, 2003; Meisinger et al., in press). In addition, it is not clear why teachers’ nominations of word callers failed to align with students’ actual reading profiles. In previous studies, teachers were provided with the researchers’ definition of word caller, which they then applied when nominating word callers. Although Meisinger and colleagues (in press) asked teachers to define the terms reading fluency and reading comprehension, teachers’ conceptualization of the term word caller was not addressed, nor were additional questions asked regarding how teachers assess or intervene in each area. Given the ambiguity surrounding teachers’ rationale for their word caller nominations, a more in-depth examination of teachers’ definition and use of this term, and of how their conceptualization relates to classroom practices, seems warranted.

The aim of the present study is to understand how teachers conceptualize word callers, as well as the related concepts of reading fluency and comprehension, and how they assess and support children struggling in each of these areas. Four research questions guided the study. First, we asked how teachers defined the term word caller. Given the variability across researcher and teacher definitions of related literacy terms (i.e., reading fluency and comprehension; Meisinger et al., in press), similar variation was expected across teachers’ definitions of word caller.

The second research question asked to what extent teacher and researcher nominations of word callers overlap. In contrast to previous research, teachers in this study nominated word callers using their own understanding of the term rather than applying a definition supplied by the researcher. Given that teachers used their own definitions of word caller, and that variation may exist across those definitions, minimal overlap was expected.

Third, we examined the actual reading profiles, on standardized reading assessments, of students nominated by their teacher as word callers. It was hypothesized that a teacher’s definition of the term word caller would explain the observed reading profiles of teacher-nominated word callers.

Lastly, we asked how teachers’ conceptualizations of literacy terms related to aspects of their classroom practices. Although variability was expected across teachers’ understanding of literacy terms, considerable consistency was expected across definitions, assessment, and instructional support for each area for a particular teacher. Consistency between reported definition and application within a given term should denote a strong conceptual basis. To this end, teachers were asked questions designed to ascertain their definitions of a word caller, reading fluency, and reading comprehension. Teachers were asked to describe an assessment and a type of instruction they would provide to enhance reading fluency and reading comprehension, as well as how to remediate reading difficulties experienced by word callers.

To ensure that our previous findings were not instrument-specific (Meisinger et al., in press), we used different standardized assessments of fluency and comprehension, including a CBM of reading. In sum, the current study expands upon previous research by (a) exploring teachers’ definitions of word caller, (b) examining teacher nominations of word callers made on the basis of the teacher’s (versus the researcher’s) definition of the term, and (c) exploring the consistency of teachers’ conceptualizations of the terms word caller, reading fluency, and reading comprehension.

Method

Participants

Participants included 31 second-grade teachers from 8 elementary schools located in either the southeastern or midwestern portion of the United States. Three percent of the teachers were classified as African-American, 94% as Caucasian, and 3% as other; all were female. Of the participating teachers, 45.2% held bachelor’s degrees and 54.8% had also completed a master’s degree in education. Teachers had an average of 14.06 years of teaching experience (SD = 9.12) and had taught second grade for an average of 7.35 years (SD = 6.78). At the time of this study, none reported having been exposed to the Dynamic Indicators of Basic Early Literacy Skills (DIBELS; Good, Kaminski, Smith, Laimon, & Dill, 2001) or having used the Gates-MacGinitie Reading Test–Fourth Edition (GMRT-4; MacGinitie, MacGinitie, Maria, Dreyer, & Hughes, 2000) as part of their classroom assessments. An average of 13.16 students (SD = 2.61; range = 6–19) from each teacher’s classroom returned written parental consent forms and participated in the present study.

The participating children were 408 second-graders (M age = 8 yr., 5 mo.; range = 7 yr., 3 mo. to 9 yr., 8 mo.) from the participating teachers’ classrooms; 293 students attended schools in the Southeast and 115 students attended schools in the Midwest. Approximately 9.3% of the children were classified as African-American, 73.8% as Caucasian, 12.7% as Hispanic, 2.0% as Asian, and 2.2% as Other or unknown; 51% were female. Approximately 60% received free or reduced-cost lunch. The demographic information for the students for whom consent was obtained was similar to that of the school population. All attended general education classes and were not excluded on the basis of reading disability or other special education eligibility unless they received services in a self-contained classroom.

Participants in this study were part of a larger multi-site, multi-year project consisting of several components including theoretical models of reading, the normative development of reading fluency, the role of motivation in reading fluency, the impact of text characteristics in reading growth, remedial intervention for dysfluent readers, and classroom-level fluency pedagogical approaches. No data from participants in this study have appeared elsewhere. During the spring semester, all students were administered a battery of assessments, in a counterbalanced order, to determine their word reading ability, reading fluency, and reading comprehension skills. However, only reading fluency and reading comprehension were used in the current study. At the time of this study, none of the schools participated in Reading First.

Reading Assessments

Oral reading fluency

The DIBELS Oral Reading Fluency (DORF, 5th Edition; Good et al., 2001) is a standardized, criterion-referenced test of oral reading fluency. The DIBELS passages and procedures are based on curriculum-based measurement (CBM; Deno, 1985) in reading, which is a method of assessing oral reading fluency skills. Children are asked to read aloud from three grade-level reading passages of similar difficulty for 1 minute each. The following benchmark passages were used in this study: “If I had a Robot,” “My Grandpa Snores,” and “My Drift Bottle.” The number of words read correctly per minute (cwpm) was calculated for each passage and each child. The median cwpm of the three passages was then used to identify each child’s benchmark level as “low risk,” “some risk,” or “high risk” for failing end-of-year high-stakes testing, based on categories provided by the DIBELS (Good et al., 2001). Data from the DORF have been shown to be reliable (estimates between .89 and .97) and to correlate with other measures of reading at .52 to .92 (Shaw & Shaw, 2002). The median cwpm was used in the analyses.
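
As an illustration of this scoring procedure, the minimal Python sketch below computes the median cwpm across the three passages and maps it to a risk category. The 70-cwpm boundary follows the criterion reported later in this article; the 90-cwpm low-risk boundary is an assumed value for illustration only, not taken from the DIBELS manual.

```python
from statistics import median

def dorf_benchmark(cwpm_per_passage):
    """Median cwpm across three 1-minute passages, mapped to a
    DIBELS-style risk category. The 70-cwpm boundary follows this
    article's criterion; the 90-cwpm low-risk boundary is assumed
    for illustration only."""
    med = median(cwpm_per_passage)
    if med >= 90:
        return med, "low risk"
    if med >= 70:
        return med, "some risk"
    return med, "high risk"

# A child who reads 82, 91, and 78 words correctly per minute:
print(dorf_benchmark([82, 91, 78]))  # (82, 'some risk')
```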

Reading comprehension

The Comprehension subtest of the GMRT-4 is a group-administered, standardized, norm-referenced test of reading comprehension. The test comprises a series of 10 passages. Each passage is divided into four text segments, each accompanied by a question and a panel of three pictures, one of which represents the best answer. Children have 35 minutes to read the passages silently and answer the questions. The GMRT-4 produces normal curve equivalents (NCEs), based on the number of questions answered correctly, which were used in the analyses. The GMRT-4 technical manual (MacGinitie et al., 2000) reports reliabilities for second grade of .82–.83 and validity estimates with other tests of .60 to .62.

Teacher Assessments

Teacher questionnaire

Teachers completed a questionnaire designed to investigate their understanding of the terms “word caller,” “reading fluency,” and “reading comprehension.” Specifically, teachers were asked, “How do you define word caller?” “How do you decide whether or not a student is a word caller?” and “Please name or describe one instructional practice or strategy for supporting word callers.” The same basic frame was applied to the terms “reading fluency” and “reading comprehension.” Teacher responses to the reading fluency question were then classified as meeting either a basic or an expanded reading fluency definition. Responses were labeled a basic reading fluency definition when at least one of the three components of reading fluency (i.e., speed, accuracy, and appropriate expression; National Reading Panel, 2000) was described, and an expanded reading fluency definition if they also included reading comprehension processes (e.g., predicting, visualizing, making connections, and questioning; Pressley, 2000). Teachers’ definitions of reading comprehension were classified as a basic reading comprehension definition if only comprehension processes were mentioned or as an expanded reading comprehension definition if one or more aspects of reading fluency were mentioned in addition to reading comprehension. Finally, because we did not have a predetermined classification scheme for word caller definitions, categories emerged from the data. Specifically, teachers’ responses to the word caller question were classified as word caller (i.e., reads fluently but without comprehension), dysfluent reader (i.e., poor decoding or poor fluency, no mention of comprehension), or poor reader (i.e., reads without fluency or comprehension).
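
The coding itself was done by human raters; purely as an illustration, the sketch below formalizes the decision rules just described for fluency definitions. The function name and boolean flags are hypothetical stand-ins for a rater’s judgments about a single response, not part of the study’s materials.

```python
def classify_fluency_definition(mentions_rate, mentions_accuracy,
                                mentions_expression, mentions_comprehension):
    """Apply the study's decision rules for coding a teacher's
    definition of reading fluency (flags stand in for rater judgments)."""
    has_fluency_component = mentions_rate or mentions_accuracy or mentions_expression
    if has_fluency_component and mentions_comprehension:
        return "expanded fluency definition"
    if has_fluency_component:
        return "basic fluency definition"
    return "unclassifiable (no fluency component)"

# A response mentioning speed and accuracy plus "makes predictions":
print(classify_fluency_definition(True, True, False, True))
# -> expanded fluency definition
```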

The teacher response data were then coded as “consistent” if the assessment described was congruent with the teacher’s definition of fluency, comprehension, or word caller. For example, if a teacher endorsed a basic reading fluency definition (i.e., one that included rate, accuracy, or expression but did not include comprehension processes), then the response would be labeled “consistent” if the assessment described targeted reading fluency (e.g., listening to the student read aloud, use of an oral reading probe). Conversely, if that same teacher described an assessment that included comprehension (e.g., asking questions, retellings, making predictions), then that response would be labeled “inconsistent” with that teacher’s definition of reading fluency. The appropriateness of an assessment for fluency and comprehension was guided by recommendations found in reading assessment textbooks with which teachers would be familiar (see Lipson & Wixson, 2002; McKenna & Stahl, 2003). The congruence between teachers’ literacy definitions and their instructional strategies was coded in a similar manner. For example, if a teacher endorsed a basic reading fluency definition and provided a fluency-oriented instructional strategy (e.g., repeated reading, choral reading, echo reading, partner or paired reading, reading with a tape), the response would be labeled “consistent.” Conversely, should that same teacher describe an instructional strategy that targeted comprehension (e.g., questioning, summarizing, completing graphic organizers), the response would be labeled “inconsistent.”

The appropriateness of an instructional strategy for fluency and comprehension was guided by recommendations found in reading textbooks with which teachers might be familiar (see Blachowicz & Ogle, 2001; Harvey & Goudvis, 2007; Kuhn & Schwanenflugel, 2007; NRP, 2000; Tierney & Readence, 2005). A graduate student familiar with reading fluency assessment and practices and one author coded the teacher response data. Inter-rater reliability was determined using Cohen’s kappa (κ = .96) based on 20% of the questionnaires. Disagreements, although infrequent, were resolved through discussion and then recoded.
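
For readers unfamiliar with the statistic, Cohen’s kappa corrects the observed agreement rate for the agreement expected by chance from each rater’s marginal label frequencies: kappa = (p_o − p_e)/(1 − p_e). A minimal sketch with hypothetical double-coded responses (the labels and data are illustrative only, not the study’s):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' nominal codes:
    (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is chance agreement from the raters' marginal frequencies."""
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    freq1, freq2 = Counter(rater1), Counter(rater2)
    p_e = sum(freq1[label] * freq2[label]
              for label in set(freq1) | set(freq2)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

r1 = ["basic", "expanded", "basic", "basic", "expanded", "vague"]
r2 = ["basic", "expanded", "basic", "expanded", "expanded", "vague"]
print(round(cohens_kappa(r1, r2), 2))  # -> 0.74
```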

Teacher nominations

Teachers were provided a list of their students for whom written parental consent had been obtained and asked to indicate whether each student was a “word caller.” Teachers were not provided a definition of the term, but rather made nominations based on their own understanding of the term.

Procedure

The purpose of the study was explained to each child and, after assent was given, the reading fluency assessment was administered individually in a quiet area of the school. The reading comprehension assessment was group administered to the children within 2 weeks of the individualized assessment. A doctoral school psychology student with substantial experience administering reading fluency and comprehension assessments trained all examiners. A training session was held prior to data collection in which the DORF procedures were reviewed. Examiners then engaged in practice administrations using tape recordings of children’s readings until they reached at least 95% agreement with the trainer on cwpm. Further, the trainer observed the examiners during the first week of testing to ensure procedural adherence; no violations were observed. Each testing session was tape recorded, which allowed minor miscue errors to be corrected and feedback to be provided to the examiner. Lastly, tape recordings were periodically checked throughout the data collection process to ensure maintenance of procedural adherence. Teacher questionnaires were distributed and completed within 2 weeks of the child assessments. The researchers met with teachers individually and addressed any questions that arose. Children received a sticker or a pencil, and their teachers received books for their classroom or the school library as a small token of appreciation.

Word Caller Identification Criterion

Our word caller criterion was designed to be comparable to criteria used in previous studies (i.e., Hamilton & Shinn, 2003; Meisinger et al., in press) in that it required children to have fluency in the non-problematic range while showing distinct comprehension problems. The DORF provides risk categories based on the number of words read correctly per minute (cwpm), and the GMRT-4 provides normal curve equivalents (NCEs). Thus, under this criterion, students identified as word callers needed to achieve reading fluency scores in the “some risk” category or above (cwpm ≥ 70) and reading comprehension scores comparable to a standard score of 85 or below (NCE ≤ 29).
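
The NCE cutoff follows from the usual scale definitions: standard scores (SS) have a mean of 100 and a standard deviation of 15, while normal curve equivalents have a mean of 50 and a standard deviation of approximately 21.06. A standard score of 85 therefore converts as

$$z = \frac{SS - 100}{15} = \frac{85 - 100}{15} = -1, \qquad \mathrm{NCE} = 50 + 21.06\,z = 50 - 21.06 \approx 29.$$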

Results

On average, children possessed age-appropriate reading skills (GMRT-4 normal curve equivalent M = 51.95, SD = 17.68; DORF median cwpm M = 90.71, SD = 33.46). While the majority of children (51.5%) were identified as low risk according to the DIBELS, the remainder fell in the some risk (19.6%) or at risk (28.9%) categories. Further, 11.0% of children met our criterion for poor comprehension (NCE ≤ 29). In fact, when applying our word caller criterion, only 1.2% (n = 5) of the children were identified as word callers based on the standardized reading assessments (i.e., the DORF and GMRT-4). Yet, the majority of teachers (71.0%) nominated at least one student in their class as a word caller, with a total of 24.8% (n = 101) of all students being so identified by their teachers.

Teacher Definitions of Literacy Terms

Teachers’ definitions of the terms reading fluency, reading comprehension, and word caller were examined to better understand how teachers conceptualized these constructs. When teachers’ definitions of fluency were examined, we found that teachers provided a basic fluency definition (i.e., rate, accuracy, and/or appropriate expression) (45.2%) about as often as an expanded reading fluency definition (i.e., one that also included comprehension processes) (38.7%). The remaining 16.1% of teachers provided responses that did not include any components of fluency (e.g., “reads with comprehension”) or were too vague to classify (e.g., “read independently”). Looking more closely at the teachers who provided a basic fluency definition, 14.3% included only one element of fluency, 71.4% included two elements, and 14.3% included all three elements (i.e., rate, accuracy, and expression). Of the teachers who provided an expanded definition of fluency, 25.0% included only one element of fluency, 33.3% included two elements, and 41.7% included all three. All teachers provided at least a simple definition of reading comprehension (e.g., “Understand the text” or “Understands what he/she reads”), and 58.1% described comprehension processes (e.g., makes connections within the text, monitors comprehension, summarizes or retells a story). No teacher mentioned reading fluency when defining comprehension. These results suggest that many teachers may consider comprehension a key component of fluent reading, but not the reverse.

Unlike their definitions of fluency and comprehension, teachers’ concepts of “word caller” showed more variability. The most common definition of word caller used by teachers (41.9%) paralleled the standard definition (i.e., children who read fluently but without comprehension). However, another 25.8% of teachers conceptualized word callers as merely dysfluent readers, and an additional 22.6% described word callers as overall poor readers (i.e., dysfluent with poor comprehension). The remaining 9.7% of teachers (n = 3) either did not respond to the word caller questions or reported being unfamiliar with the term. Given that these three teachers were not familiar with the word caller concept, their data, as well as those of their students (n = 36), were not included in subsequent analyses. This conceptual variability may explain why teacher-nominated word callers largely did not meet the expected reading profile, a possibility investigated in the next set of analyses.

Teacher-Nominated Versus Researcher-Identified Word Callers

Our word caller criterion was compared to teacher word caller nominations in a contingency table (see Table 1). Overall, little agreement existed between teachers and researchers regarding word caller status (κ = .10). Teachers identified 73.9% of children correctly as either a word caller (n = 1) or a non-word caller (n = 274). However, examination of teacher and researcher discrepancies revealed some interesting trends. Specifically, teachers were most accurate in identifying non-word callers: 99.6% of students nominated as non-word callers were also identified by the researchers as non-word callers. By contrast, only 1.0% of teacher-nominated word callers were researcher identified; in other words, only one of the 97 students nominated by their teacher as a word caller also met our word caller criterion. In sum, teachers tended to over-nominate children as word callers.

Table 1
Contingency Table for Teacher-Nominated versus Researcher-Identified Word Callers

Word Caller Profiles

Children nominated by their teacher as word callers would be expected to possess fluent reading skills but struggle to comprehend text. To investigate the accuracy of teachers’ word caller nominations, children’s reading profiles were examined to determine whether their skills, as measured by standardized reading tests, met this expected profile. To this end, a series of one-way ANOVAs was conducted comparing reading comprehension (GMRT-4) and reading fluency (DORF median cwpm) as a function of teacher word caller nomination (i.e., word caller or non-word caller). An ANOVA approach assumes normally distributed variables and homogeneity of variance. Although F tests performed on a large sample, as in the present study, are generally robust to minor violations of these assumptions, the distributions and variances of the data were examined. Available data suggested relatively normal distributions for reading fluency (kurtosis = −0.13; skewness = 0.01) and comprehension (kurtosis = 0.34; skewness = −0.47). Further, Levene’s test of homogeneity of variance was conducted prior to each ANOVA, and none was significant. Lastly, a Bonferroni correction was used to adjust for multiple comparisons (alpha = .05/4 = .0125). Teacher-identified word callers had both lower reading comprehension, F(1, 370) = 54.98, p < .001, partial η2 = .13, and lower fluency scores, F(1, 370) = 43.25, p < .001, partial η2 = .11, than their peers (see Table 2).
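
The analytic steps just described can be sketched as follows. This is an illustrative outline only: the data are simulated, and the group means and standard deviations below are hypothetical stand-ins rather than the study’s values.

```python
import numpy as np
from scipy import stats

def compare_groups(word_callers, peers, alpha=.05 / 4):
    """Assumption checks plus a one-way ANOVA comparing two groups,
    with a Bonferroni-adjusted alpha (.05/4 = .0125) as in the text."""
    print("skewness:", stats.skew(word_callers), stats.skew(peers))
    print("kurtosis:", stats.kurtosis(word_callers), stats.kurtosis(peers))
    print("Levene:", stats.levene(word_callers, peers))
    f, p = stats.f_oneway(word_callers, peers)
    # For a one-way design, partial eta squared equals
    # SS_between / (SS_between + SS_within), recoverable from F and the dfs.
    df_b, df_w = 1, len(word_callers) + len(peers) - 2
    eta2 = f * df_b / (f * df_b + df_w)
    print(f"F({df_b}, {df_w}) = {f:.2f}, p = {p:.4f}, "
          f"partial eta^2 = {eta2:.2f}, significant: {p < alpha}")

rng = np.random.default_rng(42)
# Simulated GMRT-4 NCEs for 97 nominated word callers and 275 peers.
compare_groups(rng.normal(38, 16, 97), rng.normal(55, 17, 275))
```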

Table 2
Means and Standard Deviations for Teacher-Nominated Word Callers and Peers on Standardized Reading Assessments

If the standard definition of word calling applied to the children nominated as word callers, teacher-identified word callers would be expected to show a negative or zero correlation between fluency and comprehension. An examination of the correlation between the reading fluency and comprehension measures found the opposite: fluency and comprehension were somewhat more closely related for students nominated as word callers (r = .62, p < .001) than for their peers (r = .40, p < .001), z = 2.52, p < .05. Thus, teacher-identified word callers did not meet the expected reading profile.
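
The test of the difference between these two correlations uses Fisher’s r-to-z transformation. The sketch below reproduces the reported z = 2.52, with group sizes inferred from the degrees of freedom reported above (372 students, 97 of them teacher-nominated word callers); those inferred ns are an assumption on our part.

```python
from math import atanh, sqrt

def compare_correlations(r1, n1, r2, n2):
    """Fisher r-to-z test for a difference between two independent
    correlations: z = (z1 - z2) / sqrt(1/(n1-3) + 1/(n2-3))."""
    z1, z2 = atanh(r1), atanh(r2)
    return (z1 - z2) / sqrt(1 / (n1 - 3) + 1 / (n2 - 3))

print(round(compare_correlations(0.62, 97, 0.40, 275), 2))  # -> 2.52
```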

Because teachers did not share a uniform definition of word caller, it seemed reasonable to hypothesize that the reading profiles of teacher-nominated word callers might vary based on their teachers’ definition of the term. To explore this possibility, a series of one-way ANOVAs was conducted examining the relationship between teachers’ definitions of word callers (i.e., word caller, dysfluent reader, poor reader) and the reading comprehension and fluency of the students they identified as word callers. This analysis revealed that differences in the way teachers defined the term word caller were unrelated to children’s reading fluency, F(2, 94) = 1.15, p = .32, partial η2 = .02, or comprehension skills, F(2, 94) = 1.06, p = .35, partial η2 = .02 (see Table 3). Thus, teachers’ rationale for nominating some children as word callers remains unclear.

Table 3
Means and Standard Deviations of DIBELS and Gates-MacGinitie Reading Test (4th ed.; GMRT-4) for Teacher-Identified Word Callers across Teacher Word Caller Definitions

Consistency across Definitions, Assessment, and Instruction

We considered the consistency of teachers’ conceptualizations of fluency, comprehension, and word caller by examining their descriptions of how they would assess and provide instructional support in each area. Presumably, consistency across a teacher’s definition, manner of assessment, and instructional approach or strategy should denote a solid conceptual basis for these literacy constructs. The assessments that teachers described for reading fluency, for comprehension, and for identifying word callers were consistent with their definitions of these skills (see Table 4). Regarding instructional strategies, teachers also displayed a general consistency between their definitions of reading fluency and reading comprehension and the corresponding instructional strategies; however, slightly less consistency existed with regard to word calling. In particular, of the teachers (41.9%) who subscribed to the traditional definition of word caller (i.e., children who read fluently but do so without comprehension taking place), only two-thirds (66.7%) provided a consistent instructional strategy. Thus, this analysis converges with the others in suggesting conceptual ambiguity regarding how to address word calling instructionally.

Table 4
Consistency across Teachers’ Definitions, Assessment, and Instruction

Discussion

Despite the importance of teachers’ judgments in making appropriate instructional decisions for students, teachers’ own conceptualizations of the skills they judge are generally overlooked. The present study expanded our understanding of teachers’ conceptualizations of important literacy terms and of teachers’ ability to identify students who fit a common definition of word caller. Concern for word callers is sometimes raised by teachers regarding the use of CBM reading assessments (Hamilton & Shinn, 2003; Shapiro, 2004). The extant literature suggests that word callers are relatively rare in early elementary school, yet teachers frequently identify students as word callers (Hamilton & Shinn, 2003; Meisinger et al., in press). Virtually no research exists to indicate how teachers understand and use the term word caller. Our findings suggest that considerable variability and ambiguity exist across teacher definitions of the term.

Findings from this study confirmed previous research (Hamilton & Shinn, 2003; Meisinger et al., in press) that questioned the accuracy of teachers’ word caller nominations. Nearly one quarter of the students in the present study were nominated as word callers by their teachers, yet only 1.2% were identified as word callers based on the standardized reading assessments (i.e., the DORF and GMRT-4). The preponderance of teacher error was made in the direction of over-nominating children as word callers. Similar to previous research (Hamilton & Shinn, 2003; Meisinger et al., in press), children identified by their teachers as word callers, on average, read less fluently and comprehended less of what they read. Thus, our findings are consistent with the larger body of literature on teacher judgments, which suggests that teachers often overestimate their students’ reading skills, especially reading fluency (Bates & Nettelbeck, 2001; Begeny et al., 2008; Feinberg & Shapiro, 2003; Hamilton & Shinn, 2003). Previous research used norm-referenced tests of reading fluency (i.e., the GORT-4), which may have led to inconsistencies between researchers’ identification and teachers’ nominations of word callers (Meisinger et al., in press). Consequently, we used CBM oral reading probes in the present study, which may align more closely with how teachers conceptualize and assess oral reading fluency. Yet, even with this assessment, we identified only a few students as true word callers, which again confirmed that teachers over-nominated students as word callers. In sum, results from this work substantiate skepticism regarding the accuracy of teachers’ word caller nominations, but teachers’ rationale for making these nominations remains unclear.

Almost half the teachers provided a basic fluency definition (i.e., rate, accuracy, and/or appropriate expression), while slightly fewer teachers included comprehension in their definition. Most teachers then described an appropriate means for assessing fluency and providing instructional support, regardless of their definition. With respect to comprehension, all teachers provided a reasonable definition of comprehension, and nearly all described an appropriate means for assessing comprehension and providing instructional support. Thus, issues related to how teachers conceptualized fluency and comprehension should not have hindered their ability to identify students as word callers.

Finally, we considered how teachers’ conceptualizations of word callers might influence how they nominated students. Among the teachers in the present study, we identified three distinct definitions of a word caller. First, approximately half of the teachers provided the definition that is common in the literature; that is, a word caller is a student who reads fluently but with poor comprehension. A second definition, provided by approximately a quarter of the teachers, was that a word caller is a student who is primarily a dysfluent reader. The last definition, also provided by approximately a quarter of the teachers, was that a word caller is a student who is, overall, a poor reader. Unfortunately, many teachers nominated children as word callers who did not actually meet their own definition of a word caller. Moreover, despite having a solid conceptualization of reading comprehension and reading fluency, as denoted by consistency across definition, assessment strategy, and instructional approach, teachers showed less consistency for the concept of word caller. Taken together, our findings suggest that a great deal of ambiguity surrounds teachers’ understanding and use of the term word caller. This may not be especially surprising given the disagreement among researchers. For example, most researchers define word callers as children who read fluently but without comprehension, but some have described word callers as children who read accurately but not fluently (Marshall & Campbell, 2006). Additionally, because the literature does discuss word callers and the term was known to most teachers in this study, there may be an assumption that many young students are word callers when in fact few are.

Despite a strong conceptual basis for these skills, teachers’ judgments of reading fluency often overestimate students’ skills (Bates & Nettelbeck, 2001; Begeny et al., 2008; Feinberg & Shapiro, 2003; Hamilton & Shinn, 2003), as evidenced by teachers’ over-nomination of word callers in this study. There may also be a disconnect between teachers’ understanding of these literacy skills and the application of this knowledge when making judgments regarding their students’ skills. It may therefore be prudent for teachers to augment their judgments of students’ reading skills with objective assessments.

Limitations and Future Directions

Several limitations of this research warrant discussion. First, our analysis regarding the accuracy of teachers’ word caller nominations was based on a comparison between teacher-nominated and researcher-identified word callers. To identify word callers using the reading assessments, a decision rule was created regarding what constituted fluent reading and what constituted poor comprehension. The criterion used in this study and in our previous work (Meisinger et al., in press) was designed to replicate that of Hamilton and Shinn (2003). Some modification of the criterion used in Meisinger et al. (in press) was required for this study due to the use of criterion-referenced rather than norm-referenced assessments of reading fluency (i.e., risk categories based on cwpm versus standard scores). Therefore, it should be noted that results regarding the accuracy of teachers’ word caller nominations may vary based on the comparison criterion used. Further, it could be argued that the use of norm-referenced tests of reading comprehension limits this work, as such measures may represent non-typical reading contexts. That being said, the prevalence rate observed for children in the early elementary school years in our previous work (0.8–2.3%) was quite similar to what was observed in this study (1.2%), despite the use of different measures and slightly different criteria.

Second, word calling may be situation specific. Motivation may play an important role in whether students actively engage in constructing meaning from text (Quirk, 2008), such that word calling may occur more frequently in low-motivation situations. Alternatively, the presence of test anxiety may increase the likelihood of children reading text without processing its meaning. Thus, future research should examine situational variables that may affect whether a student engages in word calling. Further, only students for whom consent had been obtained were eligible to be nominated as word callers; it is possible that children without consent included a larger percentage of word callers. Also, information regarding the extent of teachers’ training in reading fluency, reading comprehension, and word callers is not available. The majority of teachers were experienced, and more than half held master’s degrees. If a different group of less experienced or less credentialed teachers were surveyed, such strong conceptualizations of reading fluency and comprehension might not be found. Therefore, our findings regarding teachers’ conceptualizations of student skills may only be generalizable to similar teacher samples. Although this study sampled children across multiple schools located in different regions of the country, our results may not generalize to older children. Our previous results suggest that word callers are more prevalent as children enter late elementary school (Meisinger et al., in press), and future investigations should continue to explore their prevalence in the late elementary and middle school years.

Finally, it is possible that the wording of the questions in the teacher questionnaire influenced the outcome of the study. For example, had teachers been asked specifically whether comprehension was part of fluency, or whether fluency was part of comprehension, we might have received somewhat different results. Moreover, by asking teachers to identify word callers, we may have made them feel unconsciously obligated to do so. Our results, therefore, are limited to the most salient features of fluency and comprehension that came to mind when teachers were asked directly to name them.

Practical Implications

Given that variation exists across teachers’ definitions of word callers, that teachers often over-nominate children as word callers, and that conceptual ambiguity surrounds the term, several recommendations for school psychologists seem warranted. First, when school psychologists encounter this term, it may be prudent to ask the teacher what he or she means by a word caller. School psychologists may assist teachers by consulting with them regarding how to assess students whom they suspect of being word callers. Alternatively, school psychologists may need to verify that suspected word callers meet the expected reading profile by conducting their own independent assessment of the child’s reading fluency and comprehension skills. Thus, to avoid over-identification of students as word callers, teachers should rely on reading assessments rather than their perceptions of students’ reading abilities. Lastly, school psychologists should not be overly concerned that the use of CBM assessments in reading will result in overlooking word callers in early elementary school. However, they should be aware that word callers may exist in greater numbers among older students (Meisinger et al., in press). Therefore, school psychologists may consider augmenting CBM reading probes with a comprehension task for older students (Schilling, Carlisle, Scott, & Zeng, 2007; Shapiro, Solari, & Petscher, 2008).

Several pedagogical implications of this work warrant discussion. Despite the conceptual ambiguity surrounding the term word caller, teachers seem to correctly identify appropriate assessment and instructional strategies for reading fluency and reading comprehension. This suggests that teachers would provide appropriate pedagogical support for children with deficits in these areas. Further, teachers’ concerns regarding word callers may reflect apprehension that instruction focusing on word reading or reading fluency can result in the creation of word callers (Hamilton & Shinn, 2003; Stanovich, 1986, 2000). Reading fluency is an important aspect of reading; however, fluency-oriented instruction should augment, rather than replace, comprehension instruction (Schwanenflugel & Ruston, 2008). Indeed, a balanced instructional approach that integrates decoding and fluency while emphasizing comprehension is recommended by the reading education literature (Pressley, Roehrig, Bogner, Raphael, & Dolezal, 2002).

Conclusions

Teachers generally are not accurate in their identification of students as word callers. However, it seems that this conceptual ambiguity is specific to the concept of word callers. Indeed, teachers appear to possess a veridical understanding of the terms reading fluency and reading comprehension, as evidenced by their reporting of appropriate assessment and pedagogical strategies for each area. As such, we may need to help teachers clarify their understanding of the term “word caller” in order to better assist learners as they develop into skilled readers.

Acknowledgments

This project was supported by the Interagency Education Research Initiative, a program of research managed jointly by the National Science Foundation, the Institute of Education Sciences in the U.S. Department of Education, and the National Institute of Child Health and Human Development in the National Institutes of Health (NICHD NIH). Funding for the project was provided by the NICHD NIH Grant 7 R01 HD040746-06.

Footnotes

Portions of this research were presented at the National Reading Conference in December, 2004 in San Antonio, TX and in December, 2006 in Los Angeles, CA.

Contributor Information

Elizabeth B. Meisinger, University of Memphis.

Barbara A. Bradley, University of Kansas.

Paula J. Schwanenflugel, University of Georgia.

Melanie R. Kuhn, Boston University.

References

  • Allington RL. Fluency: The neglected reading goal. The Reading Teacher. 1983;37:556–561.
  • Bates C, Nettelbeck T. Primary school teachers’ judgments of reading achievement. Educational Psychology. 2001;21(2):177–187.
  • Begeny JC, Eckert TL, Montarello SA, Storie MS. Teachers’ perceptions of students’ reading abilities: An examination of the relationship between teachers’ judgments and students’ performance across a continuum of rating methods. School Psychology Quarterly. 2008;23(1):43–55.
  • Blachowicz C, Ogle D. Reading comprehension: Strategies for independent learners. New York: Guilford Press; 2001.
  • Block CC, Gambrell LB, Pressley M. Improving comprehension instruction: Rethinking research, theory, and classroom practice. San Francisco, CA: Jossey-Bass; 2002.
  • Demaray MK, Elliot SN. Teachers’ judgments of students’ academic functioning: A comparison of actual and predicted performance. School Psychology Quarterly. 1998;13(1):8–24.
  • Deno SL. Curriculum-based measurement: The emerging alternative. Exceptional Children. 1985;52:219–232.
  • Deno SL. Developments in curriculum-based measurement. The Journal of Special Education. 2003;37:184–192.
  • Droop M, Verhoeven L. Background knowledge, linguistic complexity, and second-language reading comprehension. Journal of Literacy Research. 1998;30:253–271.
  • Feinberg AB, Shapiro ES. Accuracy of teacher judgments in predicting oral reading fluency. School Psychology Quarterly. 2003;18(1):52–65.
  • Fuchs LS, Fuchs D, Hosp MK, Jenkins JR. Text fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading. 2001;5:239–256.
  • Graesser AC, McNamara DS, Louwerse MM. What do readers need to learn in order to process coherence relations in narrative and expository text? In: Sweet AP, Snow CE, editors. Rethinking reading comprehension. New York: Guilford; 2003. pp. 82–98.
  • Good RH, Kaminski RA, Smith S, Laimon D, Dill S. Dynamic indicators of basic early literacy skills. 5. Eugene, OR: University of Oregon; 2001.
  • Hamilton C, Shinn MR. Characteristics of word callers: An investigation of the accuracy of teachers’ judgments of reading comprehension and oral reading skills. School Psychology Review. 2003;32:228–240.
  • Harvey S, Goudvis A. Strategies that work: Teaching comprehension for understanding and engagement. 2. Portland, ME: Stenhouse; 2007.
  • Hintze JM, Silberglitt B. A longitudinal examination of the diagnostic accuracy and predictive validity of R-CBM and high-stakes testing. School Psychology Review. 2005;34(3):372–386.
  • Hoge RD, Coladarci T. Teacher-based judgments of academic achievement: A review of literature. Review of Educational Research. 1989;59:297–313.
  • Jenkins JR, Fuchs LS, van den Broek P, Espin C, Deno SL. Sources of individual differences in reading comprehension and reading fluency. Journal of Educational Psychology. 2003;95:719–729.
  • Johnston P. Implications of basic research for the assessment of reading comprehension. Urbana-Champaign: Center for the Study of Reading, University of Illinois; 1981. Technical Report No. 206.
  • Keenan JM, Betjemann RB, Olson RK. Reading comprehension tests vary in the skills they assess: Differential dependence on decoding and oral comprehension. Scientific Studies of Reading. 2008;12:281–300.
  • Kuhn MR, Schwanenflugel PJ. Fluency in the classroom. New York: Guilford Press; 2007.
  • Lipson MY, Wixon KK. Assessment and instruction of reading and writing difficulty: An interactive approach. 3. Boston: Allyn & Bacon; 2002.
  • MacGinitie WH, MacGinitie RK, Maria K, Dreyer LG, Hughes KE. Gates-MacGinitie Reading Tests. 4. Rolling Meadows, IL: Riverside; 2000.
  • Marshall JC, Campbell YC. Practice makes permanent: Working toward fluency. New York: Guilford Press; 2006.
  • McKenna MC, Stahl SA. Assessment for reading instruction. New York: Guilford Press; 2003.
  • Meisinger EB, Bradley BA, Schwanenflugel PJ, Kuhn MR, Morris RD. Myth and reality of the word caller: The relation between teacher nominations and prevalence among elementary school children. School Psychology Quarterly, in press.
  • Miller SD, Faircloth BS. Motivation and reading comprehension. In: Israel SE, Duffy GG, editors. Handbook of research on reading comprehension. New York: Routledge; 2009. pp. 307–322.
  • Nathan RG, Stanovich KE. The causes and consequences of differences in reading fluency. Theory Into Practice. 1991;30:176–184.
  • National Reading Panel. Report of the National Reading Panel. Washington, DC: Author; 2000.
  • Pressley M. What should comprehension instruction be the instruction of? In: Kamil ML, Mosenthal P, Pearson PD, Barr R, editors. Handbook of reading research. Vol. 3. Mahwah, NJ: Erlbaum; 2000. pp. 545–562.
  • Pressley M, Roehrig A, Bogner K, Raphael LM, Dolezal S. Balanced literacy instruction. Focus on Exceptional Children. 2002;34:1–14.
  • Quirk M. Motivating the development of reading fluency. In: Kuhn MR, Schwanenflugel PJ, editors. Fluency in the classroom. New York: Guilford; 2008.
  • Samuels SJ. Reading fluency: Its past, present, and future. In: Rasinski T, Blachowicz C, Lems K, editors. Fluency instruction: Research-based best practices. New York: Guilford; 2006. pp. 7–20.
  • Schilling SG, Carlisle JF, Scott SE, Zeng J. Are fluency measures important predictors of reading achievement? The Elementary School Journal. 2007;107:429–448.
  • Schwanenflugel PJ, Ruston HP. Becoming a fluent reader: From theory to practice. In: Kuhn MR, Schwanenflugel PJ, editors. Fluency in the classroom. New York: Guilford Press; 2008.
  • Schwanenflugel PJ, Meisinger EB, Wisenbaker JM, Kuhn MR, Strauss GP, Morris RD. Becoming a fluent and automatic reader in the early elementary school years. Reading Research Quarterly. 2006;41:496–522.
  • Shapiro ES. Academic skills problems: Direct assessment and intervention. 3. New York: Guilford; 2004.
  • Shapiro ES, Solari E, Petscher Y. Use of a measure of reading comprehension to enhance prediction on the state high stakes assessment. Learning and Individual Differences. 2008;18:316–328.
  • Shaw R, Shaw D. DIBELS oral reading fluency-based indicators of third grade reading skills for Colorado State Assessment Program (CSAP) (Technical Report). Eugene, OR: University of Oregon; 2002.
  • Shinn MR. Identifying and defining academic problems: CBM screening and eligibility procedures. New York: Guilford; 1989.
  • Shinn MR, Knutson N, Good RH, Tilly WD, Collins VL. Curriculum-based measurement of oral reading fluency: A confirmatory analysis of its relation to reading. School Psychology Review. 1992;21:459–479.
  • Stanovich KE. Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly. 1986;21:360–406.
  • Stanovich KE. Progress in understanding reading: Scientific foundations and new frontiers. New York: Guilford; 2000.
  • Sweet AP, Snow CE. Rethinking reading comprehension: Solving problems in the teaching of literacy. New York: Guilford; 2003.
  • The Psychological Corporation. Wechsler individual achievement test. San Antonio, TX: The Psychological Corporation; 1992.
  • Tierney RJ, Readence JE. Reading strategies and practices: A compendium. 6. Boston: Allyn & Bacon; 2005.
  • Torgesen JK, Rashotte CA, Alexander AW. Principles of fluency instruction in reading: Relationships with established empirical outcomes. In: Wolf M, editor. Dyslexia, fluency, and the brain. Timonium, MD: York Press; 2001.
  • van den Broek P, Lynch JS, Naslund J, Ievers-Landis CE, Verduin K. The development of comprehension of main ideas in narratives: Evidence from the selection of title. Journal of Educational Psychology. 2003;95:707–718.
  • Vellutino FR. Individual differences as sources of variability in reading comprehension in elementary school children. In: Sweet AP, Snow CE, editors. Rethinking reading comprehension. New York: Guilford; 2003. pp. 51–81.
  • Walczyk JJ, Griffin-Ross DA. How important is reading skill fluency for comprehension? The Reading Teacher. 2007;60:560–569.
  • Warren L, Fitzgerald J. Helping parents to read expository literature to their children: Promoting main-idea and detail understanding. Reading Research and Instruction. 1997;36:341–360.
  • Wiederholt JL, Bryant BR. Gray Oral Reading Tests. 4. Austin, TX: PRO-ED; 2001.
  • Wolf M, Katzir-Cohen T. Reading fluency and its intervention. Scientific Studies of Reading. 2001;5:211–239.