Multi-tasking can increase susceptibility to distraction, affecting whether irrelevant objects capture attention. Similarly, people with depression often struggle to concentrate when performing cognitively demanding tasks. This parallel suggests that depression is like multi-tasking. To test this idea, we examined relations between self-reported levels of anhedonic depression (a dimension that reflects the unique aspects of depression not shared with anxiety or other forms of distress) and attention capture by salient items in a visual search task. Furthermore, we compared these relations to the effects of performing a concurrent auditory task on attention capture. Strikingly, both multi-tasking and elevated levels of anhedonic depression were associated with increased capture by uniquely colored items, but decreased capture by abruptly appearing items. At least with respect to attention capture and distraction, depression seems to be functionally comparable to juggling a second, unrelated cognitive task.
depression; anhedonic depression; attention capture; multi-tasking; distractibility
In three N-Back experiments, we investigated components of the process of working memory (WM) updating, more specifically access to items stored outside the focus of attention and transfer from the focus to the region of WM outside the focus. We used stimulus complexity as a marker. We found that when WM transfer occurred under full attention, it was slow and highly sensitive to stimulus complexity, much more so than WM access. When transfer occurred in conjunction with access, however, it was fast and no longer sensitive to stimulus complexity. Thus the updating context altered the nature of WM processing: The dual-task situation (transfer in conjunction with access) drove memory transfer into a more efficient mode, indifferent to stimulus complexity. In contrast, access times consistently increased with complexity, unaffected by the processing context. This study reinforces recent reports that retrieval is a (perhaps the) key component of working memory functioning.
Separate cognitive processes govern the inhibitory control of manual and oculomotor movements. Despite this fundamental distinction, little is known about how these inhibitory control processes relate to more complex domains of behavioral functioning. This study sought to determine how these inhibitory control mechanisms relate to broadly defined domains of impulsive behavior. Thirty adults with attention-deficit/hyperactivity disorder (ADHD) and 28 comparison adults performed behavioral measures of inhibitory control and completed impulsivity inventories. Results suggest that oculomotor inhibitory control, but not manual inhibitory control, is related to specific domains of self-reported impulsivity. This finding was limited to the ADHD group; no significant relations between inhibitory control and impulsivity were found in comparison adults. These results highlight the heterogeneity of inhibitory control processes and their differential relations to different facets of impulsivity.
impulsivity; inhibitory control; manual; oculomotor; ADHD
Face recognition is a complex skill that requires the integration of facial features across the whole face, i.e., holistic processing. It is unclear whether, and to what extent, other species process faces in a manner that is similar to humans. Previous studies on the inversion effect, a marker of holistic processing, in nonhuman primates have revealed mixed results, in part because many studies have failed to include the alternative image categories necessary to determine whether the effects are truly face-specific. The present study re-examined the inversion effect in rhesus monkeys and chimpanzees using comparable testing methods and a variety of high-quality stimuli, including faces and nonfaces. The data support an inversion effect in chimpanzees only for conspecifics’ faces (an expert category), suggesting face-specific holistic processing similar to humans. Rhesus monkeys showed inversion effects for conspecifics, but also for heterospecifics’ faces (chimpanzees) and nonface images (houses), supporting important species differences in this simple test of holistic face processing.
face recognition; inversion effect; holistic processing; matching-to-sample; comparative
Theories of visual attention suggest that working memory representations automatically guide attention toward memory-matching objects. Some empirical tests of this prediction have produced results consistent with working memory automatically guiding attention. However, others have shown that individuals can strategically control whether working memory representations guide visual attention. Previous studies have not independently measured automatic and strategic contributions to the interactions between working memory and attention. In this study, we used a classic manipulation of the probability of valid, neutral, and invalid cues to tease apart the nature of such interactions. This framework utilizes measures of reaction time (RT) to quantify the costs and benefits of attending to memory-matching items and infer the relative magnitudes of automatic and strategic effects. We found both costs and benefits even when the memory-matching item was no more likely to be the target than other items, indicating an automatic component of attentional guidance. However, the costs and benefits essentially doubled as the probability of a trial with a valid cue increased from 20% to 80%, demonstrating a potent strategic effect. We also show that the instructions given to participants led to a significant change in guidance distinct from the actual probability of events during the experiment. Together, these findings demonstrate that the influence of working memory representations on attention is driven by both automatic and strategic interactions.
attention; working memory; cuing; automaticity; strategic control; PsycINFO classification: 2346
Auditory and visual processes demonstrably enhance each other based on spatial and temporal coincidence. Our recent results on visual search have shown that auditory signals also enhance visual salience of specific objects based on multimodal experience. For example, we tend to see an object (e.g., a cat) and simultaneously hear its characteristic sound (e.g., “meow”), to name an object when we see it, and to vocalize a word when we read it, but we do not tend to see a word (e.g., cat) and simultaneously hear the characteristic sound (e.g., “meow”) of the named object. If auditory-visual enhancements occur based on this pattern of experiential associations, playing a characteristic sound (e.g., “meow”) should facilitate visual search for the corresponding object (e.g., an image of a cat), hearing a name should facilitate visual search for both the corresponding object and corresponding word, but playing a characteristic sound should not facilitate visual search for the name of the corresponding object. Our present and prior results together confirmed these experiential-association predictions. We also recently showed that the underlying object-based auditory-visual interactions occur rapidly (within 220 ms) and guide initial saccades towards target objects. If object-based auditory-visual enhancements are automatic and persistent, an interesting application would be to use characteristic sounds to facilitate visual search when targets are rare, such as during baggage screening. Our participants searched for a gun among other objects when a gun was presented on only 10% of the trials. The search time was speeded when a gun sound was played on every trial (primarily on gun-absent trials); importantly, playing gun sounds facilitated both gun-present and gun-absent responses, suggesting that object-based auditory-visual enhancements persistently increase the detectability of guns rather than simply biasing gun-present responses. Thus, object-based auditory-visual interactions that derive from experiential associations rapidly and persistently increase the visual salience of corresponding objects.
There is an emerging literature on visual search in natural tasks suggesting that task-relevant goals account for a remarkably high proportion of saccades, including anticipatory eye-movements. Moreover, factors such as “visual saliency” that otherwise affect fixations become less important when they are bound to objects that are not relevant to the task at hand. We briefly review this literature and discuss the implications for task-based variants of the visual world paradigm. We argue that the results and their likely interpretation may profoundly affect the “linking hypothesis” between language processing and the location and timing of fixations in task-based visual world studies. We outline a goal-based linking hypothesis and discuss some of the implications for how we conduct visual world studies, including how we interpret and analyze the data. Finally, we outline some avenues of research, including examples of some classes of experiments that might prove fruitful for evaluating the effects of goals in visual world experiments and the viability of a goal-based linking hypothesis.
In prior work, women were found to outperform men on short-term verbal memory tasks. The goal of the present work was to examine whether gender differences on short-term memory tasks are tied to the involvement of long-term memory in the learning process. In Experiment 1, men and women were compared on their ability to remember phonologically-familiar novel words and phonologically-unfamiliar novel words. Learning of phonologically-familiar novel words (but not of phonologically-unfamiliar novel words) can be supported by long-term phonological knowledge. Results revealed that women outperformed men on phonologically-familiar novel words, but not on phonologically-unfamiliar novel words. In Experiment 2, we replicated Experiment 1 using a within-subjects design, and confirmed gender differences on phonologically-familiar, but not phonologically-unfamiliar stimuli. These findings are interpreted to suggest that women are more likely than men to recruit native-language phonological knowledge during novel word-learning.
gender differences; word learning; phonology; short-term memory
Currently, it is unclear what model of timing best describes temporal processing across millisecond and second timescales in tasks with different response requirements. In the present set of experiments, we assessed whether the popular dedicated scalar model of timing accounts for performance across a restricted timescale surrounding the 1 second duration for different tasks. The first two experiments evaluate whether temporal variability scales proportionally with the timed duration within temporal reproduction. The third experiment compares timing across millisecond and second timescales using temporal reproduction and discrimination tasks designed with parallel structures. The data exhibit violations of the assumptions of a single scalar timekeeper across millisecond and second timescales within temporal reproduction; these violations are less apparent for temporal discrimination. The finding of differences across tasks suggests that task demands influence the mechanisms that are engaged for keeping time.
Time; Time perception; Time estimation; Prospective timing; Scalar timing; PsycINFO classification: 2340
The target article represents a distillation of nearly 20 years of work dedicated to the analysis of visual selection. Throughout these years, Jan Theeuwes and his colleagues have been enormously productive in their development of a particular view of visual selection, one that emphasizes the role of bottom-up processes. This work has been very influential, as there is substantial merit to many aspects of this research. However, this endeavor has also been provocative—the reaction to this work has resulted in a large body of research that emphasizes the role of top-down processes. Here we highlight recent work not covered in Theeuwes’s review and discuss how this literature may not be compatible with Theeuwes’s theoretical perspective. In our view this ongoing debate has been one of the most interesting and productive in the field. One can only hope that in time the ultimate result will be a complete understanding of how visual selection actually works.
Studies in adults indicate that response preparation is crucial to inhibitory control, but it remains unclear whether preparation contributes to improvements in inhibitory control over the course of childhood and adolescence. In order to assess the role of response preparation in developmental improvements in inhibitory control, we parametrically manipulated the duration of the instruction period in an antisaccade (AS) task given to participants ages 8 to 31 years. Regressions showing a protracted development of AS performance were consistent with existing research, and two novel findings emerged. First, all participants showed improved performance with increased preparation time, indicating that response preparation is crucial to inhibitory control at all stages of development. Preparatory processes did not deteriorate even at the longest preparatory period, indicating that even the youngest participants were able to sustain preparation across the full interval. Second, developmental trajectories did not differ for different preparatory period lengths, indicating that the processes supporting response preparation mature in tandem with improvements in AS performance. Our findings suggest that developmental improvements are not simply due to an inhibitory system that is faster to engage but may also reflect qualitative changes in the processes engaged during the preparatory period.
response preparation; cognitive control; antisaccade; inhibitory control; development
A recent article in Acta Psychologica (“Picture-plane inversion leads to qualitative changes of face perception” by B. Rossion, 2008) criticized several aspects of an earlier paper of ours (Riesenhuber et al., “Face processing in humans is compatible with a simple shape-based model of vision”, Proc Biol Sci, 2004). We here address Rossion’s criticisms and correct some misunderstandings. To frame the discussion, we first review our previously presented computational model of face recognition in cortex (Jiang et al., “Evaluation of a shape-based model of human face discrimination using fMRI and behavioral techniques”, Neuron, 2006) that provides a concrete biologically plausible computational substrate for holistic coding, namely a neural representation learned for upright faces, in the spirit of the original simple-to-complex hierarchical model of vision by Hubel and Wiesel. We show that Rossion’s and others’ data support the model, and that there is actually a convergence of views on the mechanisms underlying face recognition, in particular regarding holistic processing.
The cumulative semantic cost describes a phenomenon in which picture naming latencies increase monotonically with each additional within-category item that is named in a sequence of pictures. Here we test whether the cumulative semantic cost requires the assumption of lexical selection by competition. In Experiment 1 participants named a sequence of pictures, while in Experiment 2 participants named words instead of pictures, preceded by a gender marked determiner. We replicate the basic cumulative semantic cost with pictures (Exp. 1) and show that there is no cumulative semantic cost for word targets (Exp. 2). This pattern was replicated in Experiment 3 in which pictures and words were named along with their gender marked definite determiner, and were intermingled within the same experimental design. In addition, Experiment 3 showed that while picture naming induces a cumulative semantic cost for subsequently named words, word naming does not induce a cumulative semantic cost for subsequently named pictures. These findings suggest that the cumulative semantic cost arises prior to lexical selection and that the effect arises due to incremental changes to the connection weights between semantic and lexical representations.
Cumulative semantic cost; Semantic interference; Lexical access; Picture naming; Semantic access
In two experiments, we compared level of activation and temporal overlap accounts of compatibility effects in the Simon task by reducing the discriminability of spatial and non-spatial features of a target location word. Participants made keypress responses to the non-spatial or spatial feature of centrally-presented location words. The discriminability of the spatial feature of the word (Experiment 1), or of both the spatial and non-spatial feature (Experiment 2), was manipulated. When the spatial feature of the word was task-irrelevant, lowering the discriminability of this feature reduced the compatibility effect. The compatibility effect was restored when the discriminability of both the task-relevant and task-irrelevant features were reduced together. Results provide further evidence for the temporal overlap account of compatibility effects. Furthermore, compatibility effects when the spatial information was task-relevant and those when the spatial information was task-irrelevant were moderately correlated with each other, suggesting a common underlying mechanism in both versions.
Simon effect; stimulus-response compatibility; temporal overlap; automatic activation
While an increasing number of behavioral studies examining spatial cognition use experimental paradigms involving disorientation, the process by which one becomes disoriented is not well explored. The current study examined this process using a paradigm in which participants were blindfolded and underwent a succession of 70° or 200° passive, whole-body rotations around a fixed vertical axis. After each rotation, participants used a pointer to indicate either their heading at the start of the most recent turn or their heading at the start of the current series of turns. Analyses showed that in both cases, mean pointing errors increased gradually over successive turns. In addition to the gradual loss of orientation indicated by this increase, analysis of the pointing errors also showed evidence of occasional, abrupt losses of orientation. Results indicate multiple routes from an oriented to a disoriented state, and shed light on the process of becoming disoriented.
spatial cognition; disorientation
Previous studies have demonstrated large errors (over 30°) in visually perceived exocentric directions (the direction between two objects that are both displaced from the observer’s location; e.g., Philbeck et al., in press). Here, we investigated whether a similar pattern occurs in auditory space. Blindfolded participants either attempted to aim a pointer at auditory targets (an exocentric task) or gave a verbal estimate of the egocentric target azimuth. Targets were located at 20° to 160° azimuth in the right hemispace. For comparison, we also collected pointing and verbal judgments for visual targets. We found that exocentric pointing responses exhibited sizeable undershooting errors, for both auditory and visual targets, that tended to become more strongly negative as azimuth increased (up to −19° for visual targets at 160°). Verbal estimates of the auditory and visual target azimuths, however, showed a dramatically different pattern, with relatively small overestimations of azimuths in the rear hemispace. At least some of the differences between verbal and pointing responses appear to be due to the frames of reference underlying the responses; when participants used the pointer to reproduce the egocentric target azimuth rather than the exocentric target direction relative to the pointer, the pattern of pointing errors more closely resembled that seen in verbal reports. These results show that there are similar distortions in perceiving exocentric directions in visual and auditory space.
manual pointing; auditory space perception; perception / action; perceived direction; spatial cognition
Two experiments examined effects of mixed stimulus-response mappings and tasks for older and younger adults. In Experiment 1, participants performed two-choice spatial reaction tasks with blocks of pure and mixed compatible and incompatible mappings. In Experiment 2, a compatible or incompatible mapping was mixed with a Simon task for which the mapping of stimulus color to location was relevant and stimulus location irrelevant. In both experiments older adults showed larger mixing costs than younger adults and larger compatibility effects, with the differences particularly pronounced in Experiment 1 when location mappings were mixed. In mixed conditions, when stimulus location was relevant, older adults benefited more than younger adults from complete repetition of the task and stimulus from the preceding trial. When stimulus location was irrelevant, the benefit of complete repetition did not differ reliably between age groups. The results suggest that the age-related deficit associated with mixing mappings and tasks is primarily due to older adults having more difficulty separating task sets that activate conflicting response codes.
Aging; Attention; Response Selection; 2860 Gerontology; 2346 Attention
Submovements that are frequently observed in the final portion of pointing movements have traditionally been viewed as pointing accuracy adjustments. Here we re-examine this long-standing interpretation by developing evidence that many submovements may be non-corrective fluctuations arising from various sources of motor output variability. In particular, non-corrective submovements may emerge during motion termination and during motion at low speed. The contribution of these factors and the factor of accuracy regulation to submovement production is investigated here by manipulating movement mode (discrete, reciprocal, and passing) and target size (small and large). The three modes provided different temporal combinations of accuracy regulation and motion termination, thus allowing us to disentangle submovements associated with each factor. The target size manipulations further emphasized the role of accuracy regulation and provided variations in movement speed. Gross and fine submovements were distinguished based on the degree of perturbation of smooth motion. It was found that gross submovements were predominantly related to motion termination and not to pointing accuracy regulation. Although fine submovements were more frequent during movements to small than to large targets, other results suggest that they, too, may not be corrective submovements but rather motion fluctuations attributable to the decreases in movement speed that accompany decreases in target size. Together, the findings challenge the traditional interpretation, suggesting that the majority of submovements are fluctuations emerging from mechanical and neural sources of motion variability. The implications of the findings for the mechanisms responsible for accurate target achievement are discussed.
arm kinematics; discrete; continuous; accuracy; variability
L2 syntactic processing has been primarily investigated in the context of syntactic anomaly detection, but only sparsely with syntactic ambiguity. In the field of event-related potentials (ERPs), syntactic anomaly detection and syntactic ambiguity resolution are linked to the P600. The current ERP experiment examined L2 syntactic processing in highly proficient L1 Spanish-L2 English readers who had acquired English informally around the age of 5 years. Temporary syntactic ambiguity (induced by verb subcategorization information) was tested as a language-specific phenomenon of L2, while syntactic anomaly resulted from phrase structure constraints that are similar in L1 and L2. Participants judged whether a sentence was syntactically acceptable or not. Native readers of English showed a P600 in the temporarily syntactically ambiguous and syntactically anomalous sentences. A comparable picture emerged in the non-native readers of English: both critical syntactic conditions elicited a P600; however, the distribution and latency of the P600 varied in the syntactic anomaly condition. The results clearly show that early acquisition of L2 syntactic knowledge leads to comparable online sensitivity to temporary syntactic ambiguity and syntactic anomaly in early and highly proficient non-native readers of English and in native readers of English.
ERPs; P600; L2 syntactic processing; Syntactic ambiguity; Syntactic anomaly
Although bilinguals rarely make random errors of language when they speak, research on spoken production provides compelling evidence to suggest that both languages are active when only one language is spoken (e.g., Poulisse, 1999). Moreover, the parallel activation of the two languages appears to characterize the planning of speech for highly proficient bilinguals as well as second language learners. In this paper we first review the evidence for cross-language activity during single word production and then consider the two major alternative models of how the intended language is eventually selected. According to language-specific selection models, both languages may be active but bilinguals develop the ability to selectively attend to candidates in the intended language. The alternative model, that candidates from both languages compete for selection, requires that cross-language activity be modulated to allow selection to occur. On the latter view, the selection mechanism may require that candidates in the non-target language be inhibited. We consider the evidence for such an inhibitory mechanism in a series of recent behavioral and neuroimaging studies.
The influence of integrated goal representations on multilevel coordination stability was investigated in a task that required finger tapping in antiphase with metronomic tone sequences (inter-agent coordination) while alternating between the two hands (intra-personal coordination). The maximum rate at which musicians could perform this task was measured when taps did or did not trigger feedback tones. Tones produced by the two hands (very low, low, medium, high, very high) could be the same as, or different from, one another and the (medium-pitched) metronome tones. The benefits of feedback tones were greatest when they were close in pitch to the metronome and the left hand triggered low tones while the right hand triggered high tones. Thus, multilevel coordination was facilitated by tones that were easy to integrate with, but perceptually distinct from, the metronome, and by compatibility of movement patterns and feedback pitches.
Motor Coordination; Auditory Feedback; Perceptual Motor Processes; Finger Tapping
Infant imitation demonstrates that the perception and production of human action are closely linked by a ‘supramodal’ representation of action. This action representation unites observation and execution into a common framework, and it has far-reaching implications for the development of social cognition. It allows infants to see the behaviors of others as commensurate with their own—as ‘like me.’ Based on the ‘like me’ perception of others, social encounters are interpretable and informative. Infants can use themselves as a framework for understanding others and can learn about the possibilities and consequences of their own potential acts by observing the behavior of others. Through social interaction with other intentional agents who are viewed as ‘like me,’ infants develop a richer social cognition. This paper explores the early manifestations and cascading developmental effects of the ‘like me’ conception.
Imitation; Action representation; Intention; Cross-modal; Body representation; Mirror neurons