Recent research has shown contextual diversity (i.e., the number of passages in which a given word appears) to be a reliable predictor of word processing difficulty. It has also been demonstrated that word frequency has little or no effect on word recognition speed when contextual diversity is accounted for in isolated word processing tasks. An eye-movement experiment was conducted in which the effects of word frequency and contextual diversity were directly contrasted in normal sentence reading. Subjects read sentences with embedded target words that varied in word frequency and contextual diversity. All first-pass and later reading times were significantly longer for words with lower contextual diversity than for words with higher contextual diversity when word frequency and other important lexical properties were controlled. Furthermore, there was no difference in reading times between higher frequency and lower frequency words when contextual diversity was controlled. The results confirm prior findings regarding contextual diversity and word frequency effects and demonstrate that contextual diversity is a more accurate predictor of word processing speed than word frequency in a normal reading task.
word frequency; contextual diversity; eye movements; reading
Previous studies have shown that a plausible preview word can facilitate processing of a target word relative to an implausible preview word (a plausibility preview benefit effect) when reading Chinese (Yang, Wang, Tong, & Rayner, 2012; Yang, 2013). Regarding the nature of this effect, it is possible that readers processed the meaning of the plausible preview word and did not actually encode the target word (given that the parafoveal preview word lies close to the fovea). The current experiment examined this possibility with three conditions in which readers received a preview of a target word that was either (1) identical to the target word (identical preview), (2) a plausible continuation of the pre-target text but incompatible with the post-target text (initially plausible preview), or (3) neither a plausible continuation of the pre-target text nor compatible with the post-target text (implausible preview). Gaze durations on target words were longer in the initially plausible condition than in the identical condition. Overall, the results showed a typical preview benefit but also implied that readers did not encode the initially plausible preview as the target. In addition, the plausibility preview benefit was replicated: gaze durations were longer with implausible previews than with initially plausible ones. Furthermore, late eye movement measures revealed no differences between the initially plausible and implausible preview conditions, which argues against the possibility that readers misread the plausible preview word as the target word. In sum, these results suggest that a plausible preview word benefits processing of the target word relative to an implausible preview word, and that this benefit is present only in early, not late, eye movement measures.
plausibility preview benefit; eye movements; Chinese
Many deaf individuals do not develop the high-level reading skills that would allow them to take part fully in society. To explain this widespread difficulty in the deaf population, much research has homed in on the use of phonological codes during reading. The hypothesis that the use of phonological codes is associated with good reading skills in deaf readers, though not well supported, still lingers in the literature. We investigated skilled and less-skilled adult deaf readers’ processing of orthographic and phonological codes in parafoveal vision during reading by monitoring their eye movements and using the boundary paradigm. Orthographic preview benefits were found in early measures of reading for skilled hearing, skilled deaf, and less-skilled deaf readers, but only skilled hearing readers processed phonological codes in parafoveal vision. Crucially, skilled and less-skilled deaf readers showed very similar patterns of preview benefits during reading. These results support the notion that reading difficulties in deaf adults are not linked to a failure to activate phonological codes during reading.
deaf readers; orthographic code; phonological code; eye movements; preview benefits; word processing; reading level
It is well established that fixation durations during reading vary with processing difficulty, but there are different views on how oculomotor control, visual perception, shifts of attention, and lexical (and higher cognitive) processing are coordinated. Evidence for a one-to-one translation of input delay into saccadic latency would provide a much needed constraint for current theoretical proposals. Here, we tested predictions of such a direct-control perspective using the stimulus-onset delay (SOD) paradigm. Words in sentences were initially masked and, upon fixation, were individually unmasked with a delay (0-ms, 33-ms, 66-ms, 99-ms SODs). In Experiment 1, SODs were constant for all words in a sentence; in Experiment 2, SODs were manipulated on target words, while non-targets were unmasked without delay. In accordance with predictions of direct control, non-zero SODs entailed equivalent increases in fixation durations in both experiments. Yet, a population of short fixations pointed to rapid saccades as a consequence of low-level information at non-optimal viewing positions rather than of lexical processing. Implications of these results for theoretical accounts of oculomotor control are discussed.
stimulus-onset delay (SOD); oculomotor control; fixation durations; sentence reading
Compared to skilled adult readers, children typically make more and longer fixations, shorter saccades, and more regressions, and thus read more slowly (Blythe & Joseph, 2011). Recent attempts to understand the reasons for these differences have discovered some similarities (e.g., children and adults target their saccades similarly; Joseph, Liversedge, Blythe, White, & Rayner, 2009) and some differences (e.g., children’s fixation durations are more affected by lexical variables; Blythe, Liversedge, Joseph, White, & Rayner, 2009) that have yet to be explained. In this article, the E-Z Reader model of eye-movement control in reading (Reichle, 2011; Reichle, Pollatsek, Fisher, & Rayner, 1998) is used to simulate various eye-movement phenomena in adults vs. children in order to evaluate hypotheses about the concurrent development of reading skill and eye-movement behavior. These simulations suggest that the primary difference between children and adults is their rate of lexical processing, and that different rates of (post-lexical) language processing may also contribute to some phenomena (e.g., children’s slower detection of semantic anomalies; Joseph et al., 2008). The theoretical implications of this hypothesis are discussed, including possible alternative accounts of these developmental changes, how reading skill and eye movements change across the entire lifespan (e.g., college-aged vs. older readers), and individual differences in reading ability.
Computer model; Eye movements; E-Z Reader; Lexical access; Reading; Reading skill
Previous research indicates that removing initial strokes from Chinese characters makes them harder to read than removing final or internal ones. In the present study, we examined the contribution of important components to character configuration via singular value decomposition. The results indicated that when the least important segments, which did not seriously alter the configuration (contour) of the character, were deleted, subjects read as fast as when no segments were deleted. When the most important segments, which are located on the left side of a character and written first, were deleted, reading speed was greatly slowed. These results suggest that singular value decomposition, which has no information about stroke writing order, can identify the strokes most important for Chinese character identification. Furthermore, they also suggest that contour may be correlated with stroke writing order.
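The abstract does not give the authors’ computation, but the general idea — using singular value decomposition to rank character segments by their contribution to overall configuration — can be sketched as follows. This is a minimal illustration under assumed details (characters as binary pixel grids; importance scored as the change in a low-rank “contour” reconstruction after deleting a segment); `segment_importance`, the toy strokes, and the rank cutoff are hypothetical, not the published method.

```python
import numpy as np

def segment_importance(char_img, segments, rank=3):
    """Score segments of a character image by how much their removal
    perturbs the character's low-rank (configural) structure."""
    def low_rank(img):
        # Keep only the leading singular components as a 'contour' proxy.
        u, s, vt = np.linalg.svd(img.astype(float), full_matrices=False)
        return (u[:, :rank] * s[:rank]) @ vt[:rank, :]

    base = low_rank(char_img)
    scores = []
    for mask in segments:
        deleted = char_img.copy()
        deleted[mask] = 0  # erase this segment's pixels
        # Importance = distortion of the low-rank contour after deletion.
        scores.append(np.linalg.norm(base - low_rank(deleted)))
    return scores

# Toy 'character': a vertical and a horizontal stroke on an 8x8 grid.
img = np.zeros((8, 8), dtype=int)
img[:, 2] = 1            # vertical stroke (left side, written first)
img[4, :] = 1            # horizontal stroke
vert = np.zeros_like(img, dtype=bool); vert[:, 2] = True
horiz = np.zeros_like(img, dtype=bool); horiz[4, :] = True
print(segment_importance(img, [vert, horiz]))
```

For real characters, one would compare such scores across all strokes: segments whose deletion most disrupts the low-rank contour would count as most important, with no stroke-order information entering the computation.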
Readers continuously receive parafoveal information about the upcoming word in addition to the foveal information about the currently fixated word. Previous research (Inhoff, Radach, Starr, & Greenberg, 2000) showed that the presence of a parafoveal word which was similar to the foveal word facilitated processing of the foveal word. In three experiments, we used the gaze-contingent boundary paradigm (Rayner, 1975) to manipulate the parafoveal information that subjects received before or while fixating a target word (e.g. news) within a sentence. Specifically, a reader’s parafovea could contain a repetition of the target (news), a correct preview of the post-target word (once), an unrelated word (warm), random letters (cxmr), a nonword neighbor of the target (niws), a semantically related word (tale), or a nonword neighbor of that word (tule). Target fixation times were significantly shorter in the parafoveal repetition condition than in all other conditions, suggesting that foveal processing can be facilitated by parafoveal repetition. We present a simple model framework that can account for these effects.
In this selective review, we examine key findings on eye movements when viewing advertisements. We begin with a brief, general introduction to the properties and neural underpinnings of saccadic eye movements. Next, we provide an overview of eye movement behavior during reading, scene perception, and visual search, since each of these activities is, at various times, involved in viewing ads. We then review the literature on eye movements when viewing print ads and warning labels (of the kind that appear on alcohol and tobacco ads), before turning to a consideration of advertisements in dynamic media (television and the Internet). Finally, we propose topics and methodological approaches that may prove to be useful in future research.
advertising; eye movements; visual attention; saccades; marketing
Do we access information from any object we can see, or do we only access information from objects that we intend to name? In three experiments using a modified multiple object naming paradigm, subjects were required to name several objects in succession while previews appeared briefly and simultaneously in the target’s location and at another location. In Experiment 1, a preview benefit—faster processing of the target when the preview was related (a mirror image of the target) rather than unrelated (semantically and phonologically)—was found for the preview in the target location but not in a location that was never to be named. In Experiment 2, a preview benefit was found if a related preview appeared in either the target location or the third-to-be-named location. Experiment 3 showed that the difference between the results of the first two experiments was not due to the number of objects on the screen. These data suggest that attention serves to gate visual input about objects based on the intention to name them, and that information from one intended-to-be-named object can facilitate processing of an object in another location.
Parallel Processing; Object Naming; Eye Movements; Attention
Whether readers always identify words in the order they are printed is subject to considerable debate. In the present study, we used the gaze-contingent boundary paradigm (Rayner, 1975) to manipulate the preview for a two-word target region (e.g. white walls in My neighbor painted the white walls black). Readers received an identical (white walls), transposed (walls white), or unrelated preview (vodka clubs). We found that there was a clear cost of having a transposed preview compared to an identical preview, indicating that readers cannot or do not identify words out of order. However, on some measures, the transposed preview condition did lead to faster processing than the unrelated preview condition, suggesting that readers may be able to obtain some useful information from a transposed preview. Implications of the results for models of eye movement control in reading are discussed.
Eye movements; reading; parafoveal processing; preview benefit; word order
The illiteracy rate in the deaf population has been alarmingly high for several decades, despite the fact that deaf children go through the standard stages of schooling. Much research addressing this issue has focused on word-level processes, but in recent years little research has focused on sentence-level processes. Previous research (Fischler, 1985) investigated word integration within context in college-level deaf and hearing readers in a lexical decision task following incomplete sentences with targets that were congruous or incongruous relative to the preceding context; it was found that deaf readers, as a group, were more dependent on contextual information than their hearing counterparts. The present experiment extended Fischler’s results and investigated the relationship between frequency, predictability, and reading skill in skilled hearing, skilled deaf, and less-skilled deaf readers. Results suggest that only less-skilled deaf readers, and not all deaf readers, rely more on contextual cues to boost word processing. Additionally, early effects of frequency and predictability were found for all three groups of readers, without any evidence for an interaction between frequency and predictability.
Deaf readers; eye movements; reading skill; predictability effects; frequency effects
Readers experience processing difficulties when reading biased homographs preceded by subordinate-biasing contexts. Attempts to overcome this processing deficit have often failed to reduce the subordinate bias effect (SBE). In the present studies, we examined the processing of biased homographs preceded by single-sentence, subordinate-biasing contexts, and varied whether this preceding context contained a prior instance of the homograph or a control word/phrase. Having previously encountered the homograph earlier in the sentence reduced the SBE for the subsequent encounter, while simply instantiating the subordinate meaning produced processing difficulty. We compared these reductions in reading times to differences in processing time between dominant-biased repeated and non-repeated conditions in order to verify that the reductions observed in the subordinate cases did not simply reflect a general repetition benefit. Our results indicate that a strong, subordinate-biasing context can interact during lexical access to overcome the activation from meaning frequency and reduce the SBE during reading.
lexical ambiguity; eye-movements; reading; subordinate-bias effect; context
Eye movements of Chinese readers were monitored as they read sentences containing a critical character that was either a one-character word or the initial character of a two-character word. By manipulating the verb prior to the target word, the one-character target word (or the first character of the two-character target word) was either plausible or implausible, as an independent word, at the point at which it appeared, whereas the two-character word was always plausible. The eye movement data showed that the plausibility manipulation did not exert an influence on the reading of the two-character word or its component characters. However, plausibility significantly influenced reading of the one-character target word. These results suggest that processes of semantic integration in reading Chinese are performed at a word level rather than a character level, and that word segmentation must take place very early in the course of processing.
Recent evidence suggests that deaf people have enhanced visual attention to simple stimuli in the parafovea in comparison to hearing people. Although a large part of reading involves processing the fixated words in foveal vision, readers also utilize information in parafoveal vision to pre-process upcoming words and decide where to look next. We investigated whether auditory deprivation affects low-level visual processing during reading, and compared the perceptual span of deaf signers who were skilled and less skilled readers to that of skilled hearing readers. Compared to hearing readers, deaf readers had a larger perceptual span than would be expected from their reading ability. These results provide the first evidence that deaf readers’ enhanced attentional allocation to the parafovea is used during a complex cognitive task such as reading.
Deaf readers; eye movements; reading skill; perceptual span; visual processing in the parafovea
In this brief rejoinder, we respond to Farmer, Monaghan, Misyak, and Christiansen (2011). We argue that the data still do not support the claim that reading time is affected by the phonological typicality of a word for its part of speech. We also question Farmer et al.’s claim that interleaving syntactic structures in an experiment modifies grammatically based syntactic expectations.
The extent to which target words were predictable from prior context was varied: half of the target words were predictable and the other half were unpredictable. In addition, the length of the target word varied: the target words were short (4–6 letters), medium (7–9 letters), or long (10–12 letters). Length and predictability both yielded strong effects on the probability of skipping the target words and on the amount of time readers fixated the target words (when they were not skipped). However, there was no interaction in any of the measures examined for either skipping or fixation time. The results demonstrate that word predictability (due to contextual constraint) and word length have strong and independent influences on word skipping and fixation durations. Furthermore, since the long words extended beyond the word identification span, the data indicate that skipping can occur on the basis of partial information about word identity.
In this article, we extend our previous work (Reichle, Pollatsek, & Rayner, 2012) using the principles of the E-Z Reader model to examine the factors that determine when and where the eyes move in both reading and non-reading tasks, and in particular the role that word/stimulus familiarity plays in determining when the eyes move from one word/stimulus to the next. In doing this, we first provide a brief overview of E-Z Reader, including its assumption that word familiarity is the “engine” driving eye movements during reading. We then review the theoretical considerations that motivated this assumption, as well as recent empirical evidence supporting its validity. We also report the results of three new simulations that were intended to demonstrate the utility of the familiarity check in three tasks: (1) reading; (2) searching for a target word embedded in text; and (3) searching for the letter O in linear arrays of Landolt Cs. The results of these simulations suggest that the familiarity check always improves task efficiency by speeding the rate of performance. We provide several arguments as to why this conclusion is not likely to be true for the two non-reading tasks, and in the final section of the paper, we provide a fourth simulation to test the hypothesis that problems associated with the misidentification of words may also curtail the overly liberal use of word familiarity.
Many words in the English language contain semantically and morphologically unrelated smaller words (e.g., room in groom). Recent findings indicate that a high frequency embedded word produces interference during visual word identification (e.g., Bowers, Davis, & Hanley, 2005; Davis, Perea, & Acha, 2009; Davis & Taft, 2005). In an eye movement experiment we examined whether lexical embeddings produce interference even when explicit judgments about lexicality or category membership are not solicited. Participants silently read sentences that each contained a target word with a lexical embedding. Fixation times were longer on target words that contained a higher frequency embedding compared to those that contained a lower frequency embedding. This finding indicates that a high frequency embedding interferes with word identification during silent reading and adds to a growing body of evidence that a word’s orthographic neighborhood includes embedded words.
Reading; word recognition; eye movements; orthographic neighbor; lexical embedding
When making a decision, people spend longer looking at the option they ultimately choose compared to other options—termed the gaze bias effect—even during their first encounter with the options (Glaholt & Reingold, 2009a, 2009b; Schotter, Berry, McKenzie & Rayner, 2010). Schotter et al. (2010) suggested that this is because people selectively encode decision-relevant information about the options, on-line during the first encounter with them. To extend their findings and test this claim, we recorded subjects’ eye movements as they made judgments about pairs of images (i.e., which one was taken more recently or which one was taken longer ago). We manipulated whether both images were presented with the same color content (e.g., both in color or both in black-and-white) or whether they differed in color content, and the extent to which color content was a reliable cue to the relative recentness of the images. We found that the magnitude of the gaze bias effect decreased when the color content cue was not reliable during the first encounter with the images, but there was no modulation of the gaze bias effect during the remaining time in the trial. These data suggest that people do selectively encode decision-relevant information on-line.
Eye Movements; Decision Making; Gaze Bias; Selective Encoding; Heuristics
The processing of abbreviations in reading was examined with an eye movement experiment. Abbreviations were of two distinct types: acronyms (abbreviations that can be read with the normal grapheme-phoneme correspondence rules, such as NASA) and initialisms (abbreviations in which the grapheme-phoneme correspondences are letter names, such as NCAA). Parafoveal and foveal processing of these abbreviations was assessed with the use of the boundary change paradigm (Rayner, 1975). Using this paradigm, previews of the abbreviations were either identical to the abbreviation (NASA or NCAA), orthographically legal (NUSO or NOBA), or illegal (NRSB or NRBA). The abbreviations were presented as capital letter strings within normal, predominantly lowercase sentences and also within sentences in all capital letters, such that the abbreviations would not be visually distinct. The results indicate that acronyms and initialisms undergo different processing during reading, and that readers can modulate their processing based on low-level visual cues (distinct capitalization) in parafoveal vision. In particular, readers may be biased to process capitalized letter strings as initialisms in parafoveal vision when the rest of the sentence is in normal, lowercase letters.
In this study, we examined eye movement guidance in Chinese reading. We embedded either a 2-character word or a 4-character word in the same sentence frame, and observed the eye movements of Chinese readers when they read these sentences. We found that, when all saccades into the target words were considered, readers’ eyes tended to land near the beginning of the word. However, we also found that Chinese readers’ eyes landed at the center of words when they made only a single fixation on a word, and that they landed at the beginning of a word when they made more than one fixation on a word. However, simulations that we carried out suggest that these findings cannot be taken to unambiguously argue for word-based saccade targeting in Chinese reading. We discuss alternative accounts of eye guidance in Chinese reading and suggest that eye movement target planning for Chinese readers might involve a combination of character-based and word-based targeting contingent on word segmentation processes.
Eye movements; reading; Chinese reading
To contrast mechanisms of lexical access in production versus comprehension we compared the effects of word frequency (high, low), context (none, low-constraining, high-constraining), and level of English proficiency (monolinguals, Spanish-English bilinguals, Dutch-English bilinguals) on picture naming, lexical decision, and eye fixation times. Semantic constraint effects were larger in production than in reading. Frequency effects were larger in production than in reading without constraining context, but larger in reading than in production with constraining context. Bilingual disadvantages were modulated by frequency in production but not in eye fixation times, were not smaller in low-constraining context, and were reduced by high-constraining context only in production and only at the lowest level of English proficiency. These results challenge existing accounts of bilingual disadvantages, and reveal fundamentally different processes during lexical access across modalities, entailing a primarily semantically driven search in production, but a frequency driven search in comprehension. The apparently more interactive process in production than comprehension could simply reflect a greater number of frequency-sensitive processing stages in production.
The present study employs a stereoscopic manipulation to present sentences in three dimensions to subjects as they read for comprehension. Subjects read sentences with (a) no depth cues, (b) a monocular depth cue that implied the sentence loomed out of the screen (i.e., increasing retinal size), (c) congruent monocular and binocular (retinal disparity) depth cues (i.e., both implied the sentence loomed out of the screen) and (d) incongruent monocular and binocular depth cues (i.e., the monocular cue implied the sentence loomed out of the screen and the binocular cue implied it receded behind the screen). Reading efficiency was mostly unaffected, suggesting that reading in three dimensions is similar to reading in two dimensions. Importantly, fixation disparity was driven by retinal disparity; fixations were significantly more crossed as readers progressed through the sentence in the congruent condition and significantly more uncrossed in the incongruent condition. We conclude that disparity depth cues are used on-line to drive binocular coordination during reading.
Reading is a complex skill involving the orchestration of a number of components. Researchers often talk about a “model of reading” when talking about only one aspect of the reading process (for example, models of word identification are often referred to as “models of reading”). Here, we review prominent models that are designed to account for (1) word identification, (2) syntactic parsing, (3) discourse representations, and (4) how certain aspects of language processing (e.g., word identification), in conjunction with other constraints (e.g., limited visual acuity, saccadic error, etc.), guide readers’ eyes. Unfortunately, these various models addressing specific aspects of the reading process seldom make contact with models dealing with other aspects of reading. Thus, for example, models of word identification seldom make contact with models of eye movement control, and vice versa. While this may be unfortunate in some ways, it is quite understandable in other ways because reading itself is a very complex process. We discuss prototypical models of aspects of the reading process in the order mentioned above. We do not review all possible models, but rather focus on those we view as being representative and most highly recognized.
Recent research using word recognition paradigms such as lexical decision and speeded pronunciation has investigated how a range of variables affect the location and shape of response time distributions, using both parametric and non-parametric techniques. In this article, we explore the distributional effects of a word frequency manipulation on fixation durations in normal reading, making use of data from two recent eye movement experiments (Drieghe, Rayner, & Pollatsek, 2008; White, 2008). The ex-Gaussian distribution provided a good fit to the shape of individual subjects’ distributions in both experiments. The frequency manipulation affected both the shift and skew of the distributions, in both experiments, and this conclusion was supported by the non-parametric vincentizing technique. Finally, a new experiment demonstrated that White’s (2008) frequency manipulation also affects both shift and skew in RT distributions in the lexical decision task. These results argue against models of eye movement control in reading that propose that word frequency influences only a subset of fixations, and support models in which there is a tight connection between eye movement control and the progress of lexical processing.
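As a hedged illustration of the analysis style described above (not the authors’ code or data), an ex-Gaussian — a Gaussian of mean μ and SD σ convolved with an exponential of mean τ, where μ and σ capture the distribution’s shift and τ its skew — can be fit with SciPy’s `exponnorm`, whose parameters map onto μ, σ, and τ as shown. The simulated fixation-duration values below are invented for the sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate_ex_gaussian(mu, sigma, tau, n=2000):
    # Ex-Gaussian sample: Gaussian component plus exponential tail (ms).
    return rng.normal(mu, sigma, n) + rng.exponential(tau, n)

# Hypothetical high- vs low-frequency fixation durations:
# the low-frequency words get both a shift (mu) and a skew (tau) increase.
high_freq = simulate_ex_gaussian(mu=210, sigma=35, tau=60)
low_freq  = simulate_ex_gaussian(mu=230, sigma=35, tau=90)

def fit_ex_gaussian(x):
    # SciPy parameterizes the ex-Gaussian as exponnorm(K, loc, scale),
    # with mu = loc, sigma = scale, and tau = K * scale.
    K, loc, scale = stats.exponnorm.fit(x)
    return {"mu": loc, "sigma": scale, "tau": K * scale}

for label, data in [("high frequency", high_freq), ("low frequency", low_freq)]:
    params = fit_ex_gaussian(data)
    print(label, {k: round(v, 1) for k, v in params.items()})
```

In this framework, a pure shift effect of frequency would surface only in the fitted μ, whereas an effect on skew would surface in τ; finding both, as reported in the experiments above, is the pattern that implicates frequency in most fixations rather than in a slow subset.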