Eye movements of Chinese readers were monitored as they read sentences containing a critical character that was either a one-character word or the initial character of a two-character word. By manipulating the verb prior to the target word, the one-character target word (or the first character of the two-character target word) was either plausible or implausible, as an independent word, at the point at which it appeared, whereas the two-character word was always plausible. The eye movement data showed that the plausibility manipulation did not exert an influence on the reading of the two-character word or its component characters. However, plausibility significantly influenced reading of the one-character target word. These results suggest that processes of semantic integration in reading Chinese are performed at a word level, instead of a character level, and that word segmentation must take place very early in the course of processing.
Recent evidence suggests that deaf people have enhanced visual attention to simple stimuli in the parafovea in comparison to hearing people. Although a large part of reading involves processing the fixated words in foveal vision, readers also utilize information in parafoveal vision to pre-process upcoming words and decide where to look next. We investigated whether auditory deprivation affects low-level visual processing during reading, and compared the perceptual span of deaf signers who were skilled and less skilled readers to that of skilled hearing readers. Compared to hearing readers, deaf readers had a larger perceptual span than would be expected based on their reading ability. These results provide the first evidence that deaf readers’ enhanced attentional allocation to the parafovea is used during a complex cognitive task such as reading.
Deaf readers; eye movements; reading skill; perceptual span; visual processing in the parafovea
In this brief rejoinder, we respond to Farmer, Monaghan, Misyak, and Christiansen (2011). We argue that the data still do not support the claim that reading time is affected by the phonological typicality of a word for its part of speech. We also question Farmer et al.’s claim that interleaving syntactic structures in an experiment modifies grammatically-based syntactic expectations.
The extent to which target words were predictable from prior context was varied: half of the target words were predictable and the other half were unpredictable. In addition, the length of the target word varied: the target words were short (4–6 letters), medium (7–9 letters), or long (10–12 letters). Length and predictability both yielded strong effects on the probability of skipping the target words and on the amount of time readers fixated the target words (when they were not skipped). However, there was no interaction in any of the measures examined for either skipping or fixation time. The results demonstrate that word predictability (due to contextual constraint) and word length have strong and independent influences on word skipping and fixation durations. Furthermore, since the long words extended beyond the word identification span, the data indicate that skipping can occur on the basis of partial information about word identity.
In this article, we extend our previous work (Reichle, Pollatsek, & Rayner, 2012) using the principles of the E-Z Reader model to examine the factors that determine when and where the eyes move in both reading and non-reading tasks, and in particular the role that word/stimulus familiarity plays in determining when the eyes move from one word/stimulus to the next. In doing this, we first provide a brief overview of E-Z Reader, including its assumption that word familiarity is the “engine” driving eye movements during reading. We then review the theoretical considerations that motivated this assumption, as well as recent empirical evidence supporting its validity. We also report the results of three new simulations that were intended to demonstrate the utility of the familiarity check in three tasks: (1) reading; (2) searching for a target word embedded in text; and (3) searching for the letter O in linear arrays of Landolt Cs. The results of these simulations suggest that the familiarity check always improves task efficiency by speeding its rate of performance. We provide several arguments as to why this conclusion is not likely to be true for the two non-reading tasks, and in the final section of the paper, we provide a fourth simulation to test the hypothesis that problems associated with the mis-identification of words may also curtail the overly liberal use of word familiarity.
Many words in the English language contain semantically and morphologically unrelated smaller words (e.g., room in groom). Recent findings indicate that a high frequency embedded word produces interference during visual word identification (e.g., Bowers, Davis, & Hanley, 2005; Davis, Perea, & Acha, 2009; Davis & Taft, 2005). In an eye movement experiment we examined whether lexical embeddings produce interference even when explicit judgments about lexicality or category membership are not solicited. Participants silently read sentences that each contained a target word with a lexical embedding. Fixation times were longer on target words that contained a higher frequency embedding compared to those that contained a lower frequency embedding. This finding indicates that a high frequency embedding interferes with word identification during silent reading and adds to a growing body of evidence that a word’s orthographic neighborhood includes embedded words.
Reading; word recognition; eye movements; orthographic neighbor; lexical embedding
When making a decision, people spend longer looking at the option they ultimately choose compared to other options—termed the gaze bias effect—even during their first encounter with the options (Glaholt & Reingold, 2009a, 2009b; Schotter, Berry, McKenzie & Rayner, 2010). Schotter et al. (2010) suggested that this is because people selectively encode decision-relevant information about the options, on-line during the first encounter with them. To extend their findings and test this claim, we recorded subjects’ eye movements as they made judgments about pairs of images (i.e., which one was taken more recently or which one was taken longer ago). We manipulated whether both images were presented in the same color content (e.g., both in color or both in black-and-white) or whether they differed in color content and the extent to which color content was a reliable cue to relative recentness of the images. We found that the magnitude of the gaze bias effect decreased when the color content cue was not reliable during the first encounter with the images, but the gaze bias effect was not modulated during the remaining time in the trial. These data suggest people do selectively encode decision-relevant information on-line.
Eye Movements; Decision Making; Gaze Bias; Selective Encoding; Heuristics
The processing of abbreviations in reading was examined with an eye movement experiment. Abbreviations were of two distinct types: Acronyms (abbreviations that can be read with the normal grapheme-phoneme correspondence rules, such as NASA) and initialisms (abbreviations in which the grapheme-phoneme correspondences are letter names, such as NCAA). Parafoveal and foveal processing of these abbreviations was assessed with the use of the boundary change paradigm (Rayner, 1975). Using this paradigm, previews of the abbreviations were either identical to the abbreviation (NASA or NCAA), orthographically legal (NUSO or NOBA), or illegal (NRSB or NRBA). The abbreviations were presented as capital letter strings within normal, predominantly lowercase sentences and also sentences in all capital letters such that the abbreviations would not be visually distinct. The results indicate that acronyms and initialisms undergo different processing during reading, and that readers can modulate their processing based on low-level visual cues (distinct capitalization) in parafoveal vision. In particular, readers may be biased to process capitalized letter strings as initialisms in parafoveal vision when the rest of the sentence is in normal, lowercase letters.
In this study, we examined eye movement guidance in Chinese reading. We embedded either a 2-character word or a 4-character word in the same sentence frame, and observed the eye movements of Chinese readers when they read these sentences. We found that when all saccades into the target words were considered, readers’ eyes tended to land near the beginning of the word. However, we also found that Chinese readers’ eyes landed at the center of words when they made only a single fixation on a word, and that they landed at the beginning of a word when they made more than one fixation on a word. However, simulations that we carried out suggest that these findings cannot be taken to unambiguously argue for word-based saccade targeting in Chinese reading. We discuss alternative accounts of eye guidance in Chinese reading and suggest that eye movement target planning for Chinese readers might involve a combination of character-based and word-based targeting contingent on word segmentation processes.
Eye movements; reading; Chinese reading
To contrast mechanisms of lexical access in production versus comprehension we compared the effects of word-frequency (high, low), context (none, low-constraining, high-constraining), and level of English proficiency (monolinguals, Spanish-English bilinguals, Dutch-English bilinguals), on picture naming, lexical decision, and eye fixation times. Semantic constraint effects were larger in production than in reading. Frequency effects were larger in production than in reading without constraining context, but larger in reading than in production with constraining context. Bilingual disadvantages were modulated by frequency in production but not in eye fixation times, were not smaller in low-constraining context, and were reduced by high-constraining context only in production and only at the lowest level of English proficiency. These results challenge existing accounts of bilingual disadvantages, and reveal fundamentally different processes during lexical access across modalities, entailing a primarily semantically driven search in production, but a frequency driven search in comprehension. The apparently more interactive process in production than comprehension could simply reflect a greater number of frequency-sensitive processing stages in production.
The present study employs a stereoscopic manipulation to present sentences in three dimensions to subjects as they read for comprehension. Subjects read sentences with (a) no depth cues, (b) a monocular depth cue that implied the sentence loomed out of the screen (i.e., increasing retinal size), (c) congruent monocular and binocular (retinal disparity) depth cues (i.e., both implied the sentence loomed out of the screen) and (d) incongruent monocular and binocular depth cues (i.e., the monocular cue implied the sentence loomed out of the screen and the binocular cue implied it receded behind the screen). Reading efficiency was mostly unaffected, suggesting that reading in three dimensions is similar to reading in two dimensions. Importantly, fixation disparity was driven by retinal disparity; fixations were significantly more crossed as readers progressed through the sentence in the congruent condition and significantly more uncrossed in the incongruent condition. We conclude that disparity depth cues are used on-line to drive binocular coordination during reading.
Reading is a complex skill involving the orchestration of a number of components. Researchers often talk about a “model of reading” when talking about only one aspect of the reading process (for example, models of word identification are often referred to as “models of reading”). Here, we review prominent models that are designed to account for (1) word identification, (2) syntactic parsing, (3) discourse representations, and (4) how certain aspects of language processing (e.g., word identification), in conjunction with other constraints (e.g., limited visual acuity, saccadic error, etc.), guide readers’ eyes. Unfortunately, it is the case that these various models addressing specific aspects of the reading process seldom make contact with models dealing with other aspects of reading. Thus, for example, the models of word identification seldom make contact with models of eye movement control, and vice versa. While this may be unfortunate in some ways, it is quite understandable in other ways because reading itself is a very complex process. We discuss prototypical models of aspects of the reading process in the order mentioned above. We do not review all possible models, but rather focus on those we view as being representative and most highly recognized.
Recent research using word recognition paradigms such as lexical decision and speeded pronunciation has investigated how a range of variables affect the location and shape of response time distributions, using both parametric and non-parametric techniques. In this article, we explore the distributional effects of a word frequency manipulation on fixation durations in normal reading, making use of data from two recent eye movement experiments (Drieghe, Rayner, & Pollatsek, 2008; White, 2008). The ex-Gaussian distribution provided a good fit to the shape of individual subjects’ distributions in both experiments. The frequency manipulation affected both the shift and skew of the distributions, in both experiments, and this conclusion was supported by the non-parametric vincentizing technique. Finally, a new experiment demonstrated that White’s (2008) frequency manipulation also affects both shift and skew in RT distributions in the lexical decision task. These results argue against models of eye movement control in reading that propose that word frequency influences only a subset of fixations, and support models in which there is a tight connection between eye movement control and the progress of lexical processing.
Minshew and Goldstein (1998) postulated that autism spectrum disorder (ASD) is a disorder of complex information processing. The current study was designed to investigate this hypothesis. Participants with and without ASD completed two scene perception tasks: a simple “spot the difference” task, where they had to say which one of a pair of pictures had a detail missing, and a complex “which one's weird” task, where they had to decide which one of a pair of pictures looked “weird”. Participants with ASD did not differ from TD participants in their ability to accurately identify the target picture in both tasks. However, analysis of the eye movement sequences showed that participants with ASD viewed scenes differently from normal controls exclusively for the complex task. This difference in eye movement patterns, and the method used to examine different patterns, adds to the knowledge base regarding eye movements and ASD. Our results are in accordance with Minshew and Goldstein's theory that complex, but not simple, information processing is impaired in ASD.
A boundary change manipulation was implemented within a monomorphemic word (e.g., fountaom as a preview for fountain), where parallel processing should occur given adequate visual acuity, and within an unspaced compound (bathroan as a preview for bathroom), where some serial processing of the constituents is likely. Consistent with that hypothesis, there was no effect of the preview manipulation on fixation time on the 1st constituent of the compound, whereas there was on the corresponding letters of the monomorphemic word. There was also a larger preview disruption on gaze duration on the whole monomorphemic word than on the compound, suggesting more parallel processing within monomorphemic words.
In order to understand how processing occurs within the effective field of vision (i.e. perceptual span) during visual target localization, a gaze-contingent moving mask procedure was used to disrupt parafoveal information pickup along the vertical and the horizontal visual fields. When the mask was present within the horizontal visual field, there was a relative increase in saccade probability along the nearby vertical field, but not along the opposite horizontal field. When the mask was present either above or below fixation, saccades downwards were reduced in magnitude. This pattern of data suggests that parafoveal information selection (indexed by probability of saccade direction) and the extent of spatial parafoveal processing in a given direction (indexed by saccade amplitude) may be controlled by somewhat different mechanisms.
Perceptual span; Visual search; Eye movements
The perceptual span or region of effective vision during eye fixations in reading was examined as a function of reading speed (fast readers were compared to slow readers), font characteristics (fixed width vs. proportional width), and intra-word spacing (normal or reduced). The main findings were that fast readers (reading at about 330 wpm) had a larger perceptual span than slow readers (reading about 200 wpm) and the span was not affected by whether the text was fixed width or proportional width. Additionally, there were interesting font and intra-word spacing effects that have important implications for the optimal use of space in a line of text.
Although most studies of reading English (and other alphabetic languages) have indicated that readers do not obtain preview benefit from word n + 2, Yang, Wang, Xu, and Rayner (2009) reported evidence that Chinese readers obtain preview benefit from word n + 2. However, this effect may not be common in Chinese because the character prior to the target word in Yang et al.’s experiment was always a very high frequency function word. In the current experiment, we utilized a relatively low frequency word n + 1 to examine whether an n + 2 preview benefit effect would still exist and failed to find any preview benefit from word n + 2. These results are consistent with a recent study which indicated that foveal load modulates the perceptual span during Chinese reading (Yan, Kliegl, Shu, Pan, & Zhou, 2010). Implications of these results for models of eye movement control are discussed.
Chinese reading; Eye movements; Preview benefit
The boundary paradigm (Rayner, 1975) was used to examine whether high level information affects preview benefit during Chinese reading. In two experiments, readers read sentences with a 1-character target word while their eye movements were monitored. In Experiment 1, the semantic relatedness between the target word and the preview word was manipulated so that there were semantically related and unrelated preview words, neither of which was plausible in the sentence context. No significant differences between these two preview conditions were found, indicating no effect of semantic preview. In Experiment 2, we further examined semantic preview effects with plausible preview words. There were four types of previews: identical, related & plausible, unrelated & plausible, and unrelated & implausible. The results revealed a significant effect of plausibility as single fixation and gaze duration on the target region were shorter in the two plausible conditions than in the implausible condition. Moreover, there was some evidence for a semantic preview benefit as single fixation duration on the target region was shorter in the related & plausible condition than the unrelated & plausible condition. Implications of these results for processing of high level information during Chinese reading are discussed.
Eye movements; Reading Chinese; Preview benefit; Semantic and plausibility effects
Kennedy and Pynte (2008) presented data that they suggested pose problems for models of eye movement control in reading in which words are encoded serially. They focus on situations in which pairs of words are fixated out of order (i.e., the first word is skipped and the second fixated prior to a regression back to the first word). We strongly disagree with their claims and contest their arguments. We argue that their data set was obtained selectively and the events they believe are problematic do not occur frequently during reading. Furthermore, we do not consider that Kennedy and Pynte’s arguments pose serious difficulties for serial models of reading such as E-Z Reader.
Models of eye movement control in reading and their impact on the field are discussed. Differences between the E-Z Reader model and the SWIFT model are reviewed, as are benchmark data that need to be accounted for by any model of eye movement control. Predictions made by the models and how models can sometimes account for counterintuitive findings are also discussed. Finally, the role of models and data in further understanding the reading process is considered.
Two experiments were undertaken to examine whether there is an age-related change in the speed with which readers can capture visual information during fixations in reading. Children’s and adults’ eye movements were recorded as they read sentences that were presented either normally or as “disappearing text”. The disappearing text manipulation had a surprisingly small effect on the children, inconsistent with the notion of an age-related change in the speed with which readers can capture visual information from the page. Instead, we suggest that differences between adults and children are related to the level of difficulty of the sentences for readers of different ages.
children; eye movements; reading
Using a word-by-word self-paced reading paradigm, Farmer, Christiansen, and Monaghan (2006) reported faster reading times for words that are phonologically typical for their syntactic category (i.e., noun or verb) than for words that are phonologically atypical. This result has been taken to suggest that language users are sensitive to subtle relationships between sound and syntactic function, and that they make rapid use of this information in comprehension. The present article reports attempts to replicate this result using both eyetracking during normal reading (Experiment 1) and word-by-word self-paced reading (Experiment 2). No hint of a phonological typicality effect emerged on any reading time measure in Experiment 1, nor did Experiment 2 replicate Farmer et al.’s finding from self-paced reading. Indeed, the differences between condition means were not consistently in the predicted direction, as phonologically atypical verbs were read more quickly than phonologically typical verbs on most measures. Implications for research on visual word recognition are discussed.
An overview of language processing during reading and listening is provided. Evidence is reviewed indicating that language processing in both domains is fast and incremental. We also discuss some aspects of normal reading and listening that are often obscured in event related potential (ERP) research. Finally, we discuss some apparent limitations of ERP techniques, as well as some recent indications that EEG measures can be used to probe how lexical knowledge and lexical or structural expectations can contribute to the incremental process of language comprehension.
The distribution of landing positions and durations of first fixations in a region containing a noun preceded by either an article (e.g. the soldiers) or a high-frequency three-letter word (e.g. all soldiers) were compared. Although there were fewer first fixations on the blank space between the high-frequency three-letter word and the noun than on the surrounding letters (and the fixations on the blank space were shorter), this pattern did not occur when the noun was preceded by an article. From a similar experiment that did not manipulate the type of short word, Radach (1996) inferred that two words can be processed as a perceptual unit during reading when the first word is a short word. As this different pattern of fixations is restricted to article-noun pairs, it indicates that word grouping does not occur purely on the basis of word length during reading; moreover, as we demonstrate, one can explain the observed patterns in both conditions more parsimoniously, without adopting a word grouping mechanism in eye movement control during reading.