1.  A goal-based perspective on eye movements in visual world studies 
Acta Psychologica  2010;137(2):172-180.
There is an emerging literature on visual search in natural tasks suggesting that task-relevant goals account for a remarkably high proportion of saccades, including anticipatory eye movements. Moreover, factors such as “visual saliency” that otherwise affect fixations become less important when they are bound to objects that are not relevant to the task at hand. We briefly review this literature and discuss the implications for task-based variants of the visual world paradigm. We argue that the results and their likely interpretation may profoundly affect the “linking hypothesis” between language processing and the location and timing of fixations in task-based visual world studies. We outline a goal-based linking hypothesis and discuss some of the implications for how we conduct visual world studies, including how we interpret and analyze the data. Finally, we outline some avenues of research, including examples of some classes of experiments that might prove fruitful for evaluating the effects of goals in visual world experiments and the viability of a goal-based linking hypothesis.
doi:10.1016/j.actpsy.2010.09.010
PMCID: PMC3109199  PMID: 21067708
2.  Attentional capture of objects referred to by spoken language 
Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In Experiments 1 and 2, the cue was an isoluminant color change and participants generated an eye movement to the target object. In Experiment 1, responses were slower when the spoken word referred to the distractor object than when it referred to the target object. In Experiment 2, responses were slower when the spoken word referred to a distractor object than when it referred to an object not in the display. In Experiment 3, the cue was a small shift in location of the target object and participants indicated the direction of the shift. Responses were slowest when the word referred to the distractor object, faster when the word did not have a referent, and fastest when the word referred to the target object. Taken together, the results demonstrate that referents of spoken words capture attention.
doi:10.1037/a0023101
PMCID: PMC3145002  PMID: 21517215
visual attention; attentional capture; eye movements; lexical processing; visual-world paradigm
3.  Effects of prosodically-modulated sub-phonetic variation on lexical competition 
Cognition  2007;105(2):466-476.
Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., Put the cap next to the square) or utterance-final position (e.g., Now click on the cap). Displays consisted of the target picture (e.g., a cap), a monosyllabic competitor picture (e.g., a cat), a polysyllabic competitor picture (e.g., a captain) and a distractor (e.g., a beaker). The relative proportion of fixations to the two types of competitor pictures changed as a function of the position of the target word in the utterance, demonstrating that lexical competition is modulated by prosodically-conditioned phonetic variation.
doi:10.1016/j.cognition.2006.10.008
PMCID: PMC2435387  PMID: 17141751
4.  Expectations from preceding prosody influence segmentation in online sentence processing 
Psychonomic Bulletin & Review  2011;18(6).
Previous work examining prosodic cues in online spoken-word recognition has focused primarily on local cues to word identity. However, recent studies have suggested that utterance-level prosodic patterns can also influence the interpretation of subsequent sequences of lexically ambiguous syllables (Dilley, Mattys, & Vinke, Journal of Memory and Language, 63:274–294, 2010; Dilley & McAuley, Journal of Memory and Language, 59:294–311, 2008). To test the hypothesis that these distal prosody effects are based on expectations about the organization of upcoming material, we conducted a visual-world experiment. We examined fixations to competing alternatives such as pan and panda upon hearing the target word panda in utterances in which the acoustic properties of the preceding sentence material had been manipulated. The proportions of fixations to the monosyllabic competitor were higher beginning 200 ms after target word onset when the preceding prosody supported a prosodic constituent boundary following pan-, rather than following panda. These findings support the hypothesis that expectations based on perceived prosodic patterns in the distal context influence lexical segmentation and word recognition.
doi:10.3758/s13423-011-0167-9
PMCID: PMC3811073  PMID: 21968925
Prosody; Expectations; Spoken-word recognition; Lexical competition; Perceptual organization; Visual-world paradigm
5.  Tracking the time course of orthographic information in spoken-word recognition 
Two experiments evaluated the time course and use of orthographic information in spoken-word recognition in a visual world eye-tracking experiment using printed words as referents. Participants saw four words on a computer screen and listened to spoken sentences instructing them to click on one of the words (e.g., Click on the word bead). The printed words appeared 200 ms before the onset of the spoken target word. In Experiment 1, the display included the target word and a competitor with either a lower degree of phonological overlap with the target (bear) or a higher degree of phonological overlap with the target (bean). Both competitors had the same degree of orthographic overlap with the target. There were more fixations to the competitors than to unrelated distracters. Crucially, the likelihood of fixating a competitor did not vary as a function of the amount of phonological overlap between target and competitor. In Experiment 2, the display included the target word and a competitor with either a lower degree of orthographic overlap with the target (bare) or a higher degree of orthographic overlap with the target (bear). Competitors were homophonous and thus had the same degree of phonological overlap with the target. There were more fixations to higher-overlap competitors than to lower-overlap competitors, beginning during the temporal interval where initial fixations driven by the vowel are expected to occur. The authors conclude that orthographic information is rapidly activated as a spoken word unfolds and is immediately used in mapping spoken words onto potential printed referents.
doi:10.1037/a0019901
PMCID: PMC2933075  PMID: 20804288
spoken-word recognition; orthography; phonology; visual-world paradigm; eye movements
