PMC

1.  The effect of object state-changes on event processing: Do objects compete with themselves? 
The Journal of Neuroscience  2012;32(17):5795-5803.
When an object is described as changing state during an event, do the representations of those states compete? The distinct states they represent cannot co-exist at any one moment in time, yet each representation must be retrievable at the cost of suppressing the other possible object states. We used functional magnetic resonance imaging of human participants to test whether such competition does occur, and whether this competition between object states recruits brain areas sensitive to other forms of conflict. In Experiment 1, the same object was changed either substantially or minimally by one of two actions. In Experiment 2, the same action either substantially or minimally changed one of two objects. On a subject-specific basis, we identified voxels most responsive to conflict in a Stroop color-word interference task. Voxels in left posterior ventrolateral prefrontal cortex most responsive to Stroop conflict were also responsive to our object state-change manipulation, and were not responsive to the imageability of the described action. In contrast, voxels in left middle frontal gyrus responsive to Stroop conflict were not responsive even to language, and voxels in left middle temporal gyrus that were responsive to language and imageability were not responsive to object state-change. Results suggest that, when representing object state-change, multiple incompatible representations of an object compete, and the greater the difference between the initial state and the end state of an object, the greater the conflict.
doi:10.1523/JNEUROSCI.6294-11.2012
PMCID: PMC3368505  PMID: 22539841
2.  Language can mediate eye movement control within 100 milliseconds, regardless of whether there is anything to move the eyes to 
Acta Psychologica  2011;137(2):190-200.
The delay between the signal to move the eyes and the execution of the corresponding eye movement is variable and skewed, with an early peak followed by a considerable tail. This skewed distribution renders the answer to the question “What is the delay between language input and saccade execution?” problematic; for a given task, there is no single number, only a distribution of numbers. Here, two previously published studies are reanalysed; their designs enable us to answer, instead, the question: How long does it take, as the language unfolds, for the oculomotor system to demonstrate sensitivity to the distinction between “signal” (eye movements due to the unfolding language) and “noise” (eye movements due to extraneous factors)? In both studies, participants heard either ‘the man…’ or ‘the girl…’, and the distribution of launch times towards the concurrently, or previously, depicted man in response to these two inputs was calculated. In both cases, the earliest discrimination between signal and noise occurred at around 100 ms. This rapid interplay between language and oculomotor control is most likely due to the cancellation of about-to-be-executed saccades towards objects (or their episodic trace) that mismatch the earliest phonological moments of the unfolding word.
doi:10.1016/j.actpsy.2010.09.009
PMCID: PMC3118831  PMID: 20965479
Oculomotor control; Saccades; Double-step paradigm; Language-mediated eye movements; Visual world paradigm
3.  Discourse-mediation of the mapping between language and the visual world: Eye movements and mental representation 
Cognition  2009;111(1):55-71.
Two experiments explored the mapping between language and mental representations of visual scenes. In both experiments, participants viewed, for example, a scene depicting a woman, a wine glass and bottle on the floor, an empty table, and various other objects. In Experiment 1, participants concurrently heard either ‘The woman will put the glass on the table’ or ‘The woman is too lazy to put the glass on the table’. Subsequently, with the scene unchanged, participants heard that the woman ‘will pick up the bottle, and pour the wine carefully into the glass.’ Experiment 2 was identical except that the scene was removed before the onset of the spoken language. In both cases, eye movements after ‘pour’ (anticipating the glass) and at ‘glass’ reflected the language-determined position of the glass, as either on the floor, or moved onto the table, even though the concurrent (Experiment 1) or prior (Experiment 2) scene showed the glass in its unmoved position on the floor. Language-mediated eye movements thus reflect the real-time mapping of language onto dynamically updateable event-based representations of concurrently or previously seen objects (and their locations).
doi:10.1016/j.cognition.2008.12.005
PMCID: PMC2669403  PMID: 19193366
Sentence comprehension; Eye movements; Visual scene interpretation; Situation models
4.  Attentional capture of objects referred to by spoken language 
Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In Experiments 1 and 2, the cue was an isoluminant color change and participants generated an eye movement to the target object. In Experiment 1, responses were slower when the spoken word referred to the distractor object than when it referred to the target object. In Experiment 2, responses were slower when the spoken word referred to a distractor object than when it referred to an object not in the display. In Experiment 3, the cue was a small shift in location of the target object and participants indicated the direction of the shift. Responses were slowest when the word referred to the distractor object, faster when the word did not have a referent, and fastest when the word referred to the target object. Taken together, the results demonstrate that referents of spoken words capture attention.
doi:10.1037/a0023101
PMCID: PMC3145002  PMID: 21517215
visual attention; attentional capture; eye movements; lexical processing; visual-world paradigm
5.  Incrementality and Prediction in Human Sentence Processing 
Cognitive Science  2009;33(4):583-609.
We identify a number of principles with respect to prediction that, we argue, underpin adult language comprehension: (a) comprehension consists in realizing a mapping between the unfolding sentence and the event representation corresponding to the real-world event being described; (b) the realization of this mapping manifests as the ability to predict both how the language will unfold, and how the real-world event would unfold if it were being experienced directly; (c) concurrent linguistic and nonlinguistic inputs, and the prior internal states of the system, each drive the predictive process; (d) the representation of prior internal states across a representational substrate common to the linguistic and nonlinguistic domains enables the predictive process to operate over variable time frames and variable levels of representational abstraction. We review empirical data exemplifying the operation of these principles and discuss the relationship between prediction, event structure, thematic role assignment, and incrementality.
doi:10.1111/j.1551-6709.2009.01022.x
PMCID: PMC2854821  PMID: 20396405
Sentence processing; Prediction; Simple recurrent network; Thematic roles; Incrementality