Mental rotation and number representation have both been studied widely, but although mental rotation has been linked to higher-level mathematical skills, to date it has not been established whether mental rotation ability is linked to the most basic mental representation and processing of numbers. To investigate the possible connection between mental rotation abilities and numerical representation, 43 participants completed four tasks: 1) a standard pen-and-paper mental rotation task; 2) a multi-digit number magnitude comparison task assessing the compatibility effect, which indicates separate processing of decade and unit digits; 3) a number-line mapping task, which measures precision of number magnitude representation; and 4) a random number generation task, which yields measures both of executive control and of spatial number representations. Results show that mental rotation ability correlated significantly with both the size of the compatibility effect and number mapping accuracy, but not with any measures from the random number generation task. Together, these results suggest that higher mental rotation abilities are linked to more developed number representation, and also provide further evidence for the connection between spatial and numerical abilities.
• We show a link between mental rotation ability (MRA) and numerical representation.
• MRA correlated with a measure of holding multiple concurrent number representations.
• MRA correlated with a measure of number representation precision (number line task).
• MRA did not correlate with an executive control task (random number generation).
• These findings strengthen the link observed between spatial and numerical abilities.
Mental rotation; Numerical representation; Compatibility effect; Numerical cognition; Number line; Spatial abilities
Angular path integration refers to the ability to maintain an estimate of self-location after a rotational displacement by integrating internally-generated (idiothetic) self-motion signals over time. Previous work has found that non-sensory inputs, namely spatial memory, can play a powerful role in angular path integration (Arthur et al., 2007, 2009). Here we investigated the conditions under which spatial memory facilitates angular path integration. We hypothesized that the benefit of spatial memory is particularly likely in spatial updating tasks in which one’s self-location estimate is referenced to external space. To test this idea, we administered passive, nonvisual body rotations (ranging from 40° to 140°) about the yaw axis and asked participants to use verbal reports or open-loop manual pointing to indicate the magnitude of the rotation. Prior to some trials, previews of the surrounding environment were given. We found that when participants adopted an egocentric frame of reference, the previously-observed benefit of previews on within-subject response precision was not manifested, regardless of whether remembered spatial frameworks were derived from vision or spatial language. We conclude that the powerful effect of spatial memory is dependent on one’s frame of reference during self-motion updating.
spatial memory; path integration; vestibular navigation; manual pointing; perception and action
Multi-tasking can increase susceptibility to distraction, affecting whether irrelevant objects capture attention. Similarly, people with depression often struggle to concentrate when performing cognitively demanding tasks. This parallel suggests that depression may affect attention in much the same way as multi-tasking. To test this idea, we examined relations between self-reported levels of anhedonic depression (a dimension that reflects the unique aspects of depression not shared with anxiety or other forms of distress) and attention capture by salient items in a visual search task. Furthermore, we compared these relations to the effects of performing a concurrent auditory task on attention capture. Strikingly, both multi-tasking and elevated levels of anhedonic depression were associated with increased capture by uniquely colored items, but decreased capture by abruptly appearing items. At least with respect to attention capture and distraction, depression seems to be functionally comparable to juggling a second, unrelated cognitive task.
depression; anhedonic depression; attention capture; multi-tasking; distractibility
In three N-Back experiments, we investigated components of the process of working memory (WM) updating, more specifically access to items stored outside the focus of attention and transfer from the focus to the region of WM outside the focus. We used stimulus complexity as a marker. We found that when WM transfer occurred under full attention, it was slow and highly sensitive to stimulus complexity, much more so than WM access. When transfer occurred in conjunction with access, however, it was fast and no longer sensitive to stimulus complexity. Thus the updating context altered the nature of WM processing: The dual-task situation (transfer in conjunction with access) drove memory transfer into a more efficient mode, indifferent to stimulus complexity. In contrast, access times consistently increased with complexity, unaffected by the processing context. This study reinforces recent reports that retrieval is a (perhaps the) key component of working memory functioning.
The legibility of the letters in the Latin alphabet has been measured numerous times since the beginning of experimental psychology. To identify the theoretical mechanisms attributed to letter identification, we report a comprehensive review of the literature, spanning more than a century. This review revealed that identification accuracy has frequently been attributed to a subset of three common sources: perceivability, bias, and similarity. However, simultaneous estimates of these values have rarely (if ever) been performed. We present the results of two new experiments which allow for the simultaneous estimation of these factors, and examine how the shape of a visual mask impacts each of them, as inferred through a new statistical model. Results showed that the shape and identity of the mask impacted the inferred perceivability, bias, and similarity space of a letter set, but that there were aspects of similarity that were robust to the choice of mask. The results illustrate how the psychological concepts of perceivability, bias, and similarity can be estimated simultaneously, and how each makes powerful contributions to visual letter identification.
Letter similarity; Choice theory
Separate cognitive processes govern the inhibitory control of manual and oculomotor movements. Despite this fundamental distinction, little is known about how these inhibitory control processes relate to more complex domains of behavioral functioning. This study sought to determine how these inhibitory control mechanisms relate to broadly defined domains of impulsive behavior. Thirty adults with attention-deficit/hyperactivity disorder (ADHD) and 28 comparison adults performed behavioral measures of inhibitory control and completed impulsivity inventories. Results suggest that oculomotor inhibitory control, but not manual inhibitory control, is related to specific domains of self-reported impulsivity. This finding was limited to the ADHD group; no significant relations between inhibitory control and impulsivity were found in comparison adults. These results highlight the heterogeneity of inhibitory control processes and their differential relations to different facets of impulsivity.
impulsivity; inhibitory control; manual; oculomotor; ADHD
Face recognition is a complex skill that requires the integration of facial features across the whole face, i.e., holistic processing. It is unclear whether, and to what extent, other species process faces in a manner that is similar to humans. Previous studies on the inversion effect, a marker of holistic processing, in nonhuman primates have revealed mixed results, in part because many studies have failed to include alternative image categories necessary to understand whether the effects are truly face-specific. The present study re-examined the inversion effect in rhesus monkeys and chimpanzees using comparable testing methods and a variety of high quality stimuli including faces and nonfaces. The data support an inversion effect in chimpanzees only for conspecifics’ faces (an expert category), suggesting face-specific holistic processing similar to humans. Rhesus monkeys showed inversion effects for conspecifics’ faces, but also for heterospecifics’ faces (chimpanzees) and nonface images (houses), supporting important species differences in this simple test of holistic face processing.
face recognition; inversion effect; holistic processing; matching-to-sample; comparative
Causal counterfactuals, e.g., ‘if the ignition key had been turned then the car would have started’, and causal conditionals, e.g., ‘if the ignition key was turned then the car started’, are understood by thinking about multiple possibilities of different sorts, as shown in six experiments using converging evidence from three different types of measures. Experiments 1a and 1b showed that conditionals that comprise enabling causes, e.g., ‘if the ignition key was turned then the car started’, primed people to quickly read conjunctions referring to the possibility of the enabler occurring without the outcome, e.g., ‘the ignition key was turned and the car did not start’. Experiments 2a and 2b showed that people paraphrased causal conditionals by using causal or temporal connectives (because, when), whereas they paraphrased causal counterfactuals by using subjunctive constructions (had…would have). Experiments 3a and 3b showed that people made different inferences from counterfactuals presented with enabling conditions than from counterfactuals presented without them. The implications of the results for alternative theories of conditionals are discussed.
► Causal counterfactuals understood by envisaging different sorts of possibilities. ► Causal counterfactuals are paraphrased differently compared to causal conditionals. ► People make different inferences from counterfactuals in the context of a story.
Conditional reasoning; Counterfactuals; Causality; Enablers; Mental models
Theories of visual attention suggest that working memory representations automatically guide attention toward memory-matching objects. Some empirical tests of this prediction have produced results consistent with working memory automatically guiding attention. However, others have shown that individuals can strategically control whether working memory representations guide visual attention. Previous studies have not independently measured automatic and strategic contributions to the interactions between working memory and attention. In this study, we used a classic manipulation of the probability of valid, neutral, and invalid cues to tease apart the nature of such interactions. This framework utilizes measures of reaction time (RT) to quantify the costs and benefits of attending to memory-matching items and infer the relative magnitudes of automatic and strategic effects. We found both costs and benefits even when the memory-matching item was no more likely to be the target than other items, indicating an automatic component of attentional guidance. However, the costs and benefits essentially doubled as the probability of a trial with a valid cue increased from 20% to 80%, demonstrating a potent strategic effect. We also show that the instructions given to participants led to a significant change in guidance distinct from the actual probability of events during the experiment. Together, these findings demonstrate that the influence of working memory representations on attention is driven by both automatic and strategic interactions.
attention; working memory; cuing; automaticity; strategic control; PsycINFO classification 2346
Auditory and visual processes demonstrably enhance each other based on spatial and temporal coincidence. Our recent results on visual search have shown that auditory signals also enhance visual salience of specific objects based on multimodal experience. For example, we tend to see an object (e.g., a cat) and simultaneously hear its characteristic sound (e.g., “meow”), to name an object when we see it, and to vocalize a word when we read it, but we do not tend to see a word (e.g., cat) and simultaneously hear the characteristic sound (e.g., “meow”) of the named object. If auditory-visual enhancements occur based on this pattern of experiential associations, playing a characteristic sound (e.g., “meow”) should facilitate visual search for the corresponding object (e.g., an image of a cat), hearing a name should facilitate visual search for both the corresponding object and corresponding word, but playing a characteristic sound should not facilitate visual search for the name of the corresponding object. Our present and prior results together confirmed these experiential-association predictions. We also recently showed that the underlying object-based auditory-visual interactions occur rapidly (within 220 ms) and guide initial saccades towards target objects. If object-based auditory-visual enhancements are automatic and persistent, an interesting application would be to use characteristic sounds to facilitate visual search when targets are rare, such as during baggage screening. Our participants searched for a gun among other objects when a gun was presented on only 10% of the trials. The search time was speeded when a gun sound was played on every trial (primarily on gun-absent trials); importantly, playing gun sounds facilitated both gun-present and gun-absent responses, suggesting that object-based auditory-visual enhancements persistently increase the detectability of guns rather than simply biasing gun-present responses. 
Thus, object-based auditory-visual interactions that derive from experiential associations rapidly and persistently increase visual salience of corresponding objects.
There is an emerging literature on visual search in natural tasks suggesting that task-relevant goals account for a remarkably high proportion of saccades, including anticipatory eye-movements. Moreover, factors such as “visual saliency” that otherwise affect fixations become less important when they are bound to objects that are not relevant to the task at hand. We briefly review this literature and discuss the implications for task-based variants of the visual world paradigm. We argue that the results and their likely interpretation may profoundly affect the “linking hypothesis” between language processing and the location and timing of fixations in task-based visual world studies. We outline a goal-based linking hypothesis and discuss some of the implications for how we conduct visual world studies, including how we interpret and analyze the data. Finally, we outline some avenues of research, including examples of some classes of experiments that might prove fruitful for evaluating the effects of goals in visual world experiments and the viability of a goal-based linking hypothesis.
The present research examined whether 9.5-month-old infants can attribute to an agent a disposition to perform a particular action on objects, and can then use this disposition to predict which of two new objects—one that can be used to perform the action and one that cannot—the agent is likely to reach for next. The infants first received familiarization trials in which they watched an agent slide either three (Experiments 1 and 3) or six (Experiment 2) different objects forward and backward on an apparatus floor. During test, the infants saw two new identical objects placed side by side: one stood inside a short frame that left little room for sliding, and the other stood inside a longer frame that left ample room for sliding. The infants who saw the agent slide six different objects attributed to her a disposition to slide objects: they expected her to select the “slidable” as opposed to the “unslidable” test object, and they looked reliably longer when she did not. In contrast, the infants who saw the agent slide only three different objects looked about equally when she selected either test object. These results add to recent evidence that infants in the first year of life can attribute dispositions to agents, and can use these dispositions to help predict agents’ actions in new contexts.
Infant cognition; Disposition; Action comprehension; Psychological reasoning
Are 15-month-old infants able to detect a violation in the consistency of an event sequence that involves pretense? In Experiment 1, infants detected a violation when an actor pretended to pour liquid into one cup and then pretended to drink from another cup. In Experiment 2, infants no longer detected a violation when the cups were replaced with objects not typically used in the context of drinking actions, either shoes or tubes. Experiment 3 showed that infants’ difficulty in Experiment 2 was not due to the use of atypical objects per se, but arose from the novelty of seeing an actor appearing to drink from these objects. After receiving a single familiarization trial in which they observed the actor pretend to drink from either a shoe or a tube, infants now detected a violation when the actor pretended to pour into and to drink from different shoes or tubes. Thus, at an age (or just before the age) when infants are beginning to engage in pretend play, they are able to show comprehension of at least one aspect of pretense in a violation-of-expectation task: specifically, they are able to detect violations in the consistency of pretend action sequences.
Cognitive development; Infancy; Pretense comprehension; Theory of mind
In prior work, women were found to outperform men on short-term verbal memory tasks. The goal of the present work was to examine whether gender differences on short-term memory tasks are tied to the involvement of long-term memory in the learning process. In Experiment 1, men and women were compared on their ability to remember phonologically-familiar novel words and phonologically-unfamiliar novel words. Learning of phonologically-familiar novel words (but not of phonologically-unfamiliar novel words) can be supported by long-term phonological knowledge. Results revealed that women outperformed men on phonologically-familiar novel words, but not on phonologically-unfamiliar novel words. In Experiment 2, we replicated Experiment 1 using a within-subjects design, and confirmed gender differences on phonologically-familiar, but not phonologically-unfamiliar stimuli. These findings are interpreted to suggest that women are more likely than men to recruit native-language phonological knowledge during novel word-learning.
gender differences; word learning; phonology; short-term memory
Currently, it is unclear what model of timing best describes temporal processing across millisecond and second timescales in tasks with different response requirements. In the present set of experiments, we assessed whether the popular dedicated scalar model of timing accounts for performance across a restricted timescale surrounding the 1 second duration for different tasks. The first two experiments evaluate whether temporal variability scales proportionally with the timed duration within temporal reproduction. The third experiment compares timing across millisecond and second timescales using temporal reproduction and discrimination tasks designed with parallel structures. The data exhibit violations of the assumptions of a single scalar timekeeper across millisecond and second timescales within temporal reproduction; these violations are less apparent for temporal discrimination. The finding of differences across tasks suggests that task demands influence the mechanisms that are engaged for keeping time.
Time; Time perception; Time estimation; Prospective timing; Scalar timing; PsycINFO classification: 2340
The target article represents a distillation of nearly 20 years of work dedicated to the analysis of visual selection. Throughout these years, Jan Theeuwes and his colleagues have been enormously productive in their development of a particular view of visual selection, one that emphasizes the role of bottom-up processes. This work has been very influential, as there is substantial merit to many aspects of this research. However, this endeavor has also been provocative—the reaction to this work has resulted in a large body of research that emphasizes the role of top-down processes. Here we highlight recent work not covered in Theeuwes’s review and discuss how this literature may not be compatible with Theeuwes’s theoretical perspective. In our view this ongoing debate has been one of the most interesting and productive in the field. One can only hope that in time the ultimate result will be a complete understanding of how visual selection actually works.
Studies in adults indicate that response preparation is crucial to inhibitory control, but it remains unclear whether preparation contributes to improvements in inhibitory control over the course of childhood and adolescence. In order to assess the role of response preparation in developmental improvements in inhibitory control, we parametrically manipulated the duration of the instruction period in an antisaccade (AS) task given to participants ages 8 to 31 years. Regressions showing a protracted development of AS performance were consistent with existing research, and two novel findings emerged. First, all participants showed improved performance with increased preparation time, indicating that response preparation is crucial to inhibitory control at all stages of development. Preparatory processes did not deteriorate even at the longest preparatory period, indicating that even the youngest participants were able to sustain preparation across the full interval. Second, developmental trajectories did not differ for different preparatory period lengths, highlighting that the processes supporting response preparation continue to mature in tandem with improvements in AS performance. Our findings suggest that developmental improvements are not simply due to an inhibitory system that is faster to engage but may also reflect qualitative changes in the processes engaged during the preparatory period.
response preparation; cognitive control; antisaccade; inhibitory control; development
The delay between the signal to move the eyes and the execution of the corresponding eye movement is variable and skewed, with an early peak followed by a considerable tail. This skewed distribution renders the answer to the question “What is the delay between language input and saccade execution?” problematic; for a given task, there is no single number, only a distribution of numbers. Here, two previously published studies are reanalysed, whose designs enable us to answer, instead, the question: How long does it take, as the language unfolds, for the oculomotor system to demonstrate sensitivity to the distinction between “signal” (eye movements due to the unfolding language) and “noise” (eye movements due to extraneous factors)? In two studies, participants heard either ‘the man…’ or ‘the girl…’, and the distribution of launch times towards the concurrently, or previously, depicted man in response to these two inputs was calculated. In both cases, the earliest discrimination between signal and noise occurred at around 100 ms. This rapid interplay between language and oculomotor control is most likely due to cancellation of about-to-be-executed saccades towards objects (or their episodic trace) that mismatch the earliest phonological moments of the unfolding word.
Oculomotor control; Saccades; Double-step paradigm; Language-mediated eye movements; Visual world paradigm
Depression has been associated with impaired recollection of episodic details in tests of recognition memory that use verbal material. In two experiments, the remember/know procedure was employed to investigate the effects of dysphoric mood on recognition memory for pictorial materials that may not be subject to the same processing limitations found for verbal materials in depression. In Experiment 1, where the recognition test took place two weeks after encoding, subclinically depressed participants reported fewer know judgements, which were likely to be at least partly due to a remember-to-know shift. Although pictures were accompanied by negative or neutral captions at encoding, no effect of captions on recognition memory was observed. In Experiment 2, where the recognition test occurred soon after viewing the pictures, subclinically depressed participants reported fewer remember judgements. All participants reported more remember judgements for pictures of emotionally negative content than for pictures of neutral content. Together, these findings demonstrate that recognition memory for pictorial stimuli is compromised in dysphoric individuals in a way that is consistent with a recollection deficit for episodic detail and also reminiscent of that previously reported for verbal materials. These findings contribute to our developing understanding of how mood and memory interact.
Depression; Memory; Emotion; Recollection; Familiarity
The cumulative semantic cost describes a phenomenon in which picture naming latencies increase monotonically with each additional within-category item that is named in a sequence of pictures. Here we test whether the cumulative semantic cost requires the assumption of lexical selection by competition. In Experiment 1 participants named a sequence of pictures, while in Experiment 2 participants named words instead of pictures, preceded by a gender marked determiner. We replicate the basic cumulative semantic cost with pictures (Exp. 1) and show that there is no cumulative semantic cost for word targets (Exp. 2). This pattern was replicated in Experiment 3 in which pictures and words were named along with their gender marked definite determiner, and were intermingled within the same experimental design. In addition, Experiment 3 showed that while picture naming induces a cumulative semantic cost for subsequently named words, word naming does not induce a cumulative semantic cost for subsequently named pictures. These findings suggest that the cumulative semantic cost arises prior to lexical selection and that the effect arises due to incremental changes to the connection weights between semantic and lexical representations.
Cumulative semantic cost; Semantic interference; Lexical access; Picture naming; Semantic access
In two experiments, we compared level of activation and temporal overlap accounts of compatibility effects in the Simon task by reducing the discriminability of spatial and non-spatial features of a target location word. Participants made keypress responses to the non-spatial or spatial feature of centrally-presented location words. The discriminability of the spatial feature of the word (Experiment 1), or of both the spatial and non-spatial feature (Experiment 2), was manipulated. When the spatial feature of the word was task-irrelevant, lowering the discriminability of this feature reduced the compatibility effect. The compatibility effect was restored when the discriminability of both the task-relevant and task-irrelevant features were reduced together. Results provide further evidence for the temporal overlap account of compatibility effects. Furthermore, compatibility effects when the spatial information was task-relevant and those when the spatial information was task-irrelevant were moderately correlated with each other, suggesting a common underlying mechanism in both versions.
Simon effect; stimulus-response compatibility; temporal overlap; automatic activation
While an increasing number of behavioral studies examining spatial cognition use experimental paradigms involving disorientation, the process by which one becomes disoriented is not well explored. The current study examined this process using a paradigm in which participants were blindfolded and underwent a succession of 70° or 200° passive, whole-body rotations around a fixed vertical axis. After each rotation, participants used a pointer to indicate either their heading at the start of the most recent turn or their heading at the start of the current series of turns. Analyses showed that in both cases, mean pointing errors increased gradually over successive turns. In addition to the gradual loss of orientation indicated by this increase, analysis of the pointing errors also showed evidence of occasional, abrupt loss of orientation. Results indicate multiple routes from an oriented to a disoriented state, and shed light on the process of becoming disoriented.
spatial cognition; disorientation
Previous studies have demonstrated large errors (over 30°) in visually perceived exocentric directions (the direction between two objects that are both displaced from the observer’s location; e.g., Philbeck et al., in press). Here, we investigated whether a similar pattern occurs in auditory space. Blindfolded participants either attempted to aim a pointer at auditory targets (an exocentric task) or gave a verbal estimate of the egocentric target azimuth. Targets were located at 20° to 160° azimuth in the right hemispace. For comparison, we also collected pointing and verbal judgments for visual targets. We found that exocentric pointing responses exhibited sizeable undershooting errors, for both auditory and visual targets, that tended to become more strongly negative as azimuth increased (up to −19° for visual targets at 160°). Verbal estimates of the auditory and visual target azimuths, however, showed a dramatically different pattern, with relatively small overestimations of azimuths in the rear hemispace. At least some of the differences between verbal and pointing responses appear to be due to the frames of reference underlying the responses; when participants used the pointer to reproduce the egocentric target azimuth rather than the exocentric target direction relative to the pointer, the pattern of pointing errors more closely resembled that seen in verbal reports. These results show that there are similar distortions in perceiving exocentric directions in visual and auditory space.
manual pointing; auditory space perception; perception / action; perceived direction; spatial cognition
Two experiments examined effects of mixed stimulus-response mappings and tasks for older and younger adults. In Experiment 1, participants performed two-choice spatial reaction tasks with blocks of pure and mixed compatible and incompatible mappings. In Experiment 2, a compatible or incompatible mapping was mixed with a Simon task for which the mapping of stimulus color to location was relevant and stimulus location irrelevant. In both experiments older adults showed larger mixing costs than younger adults and larger compatibility effects, with the differences particularly pronounced in Experiment 1 when location mappings were mixed. In mixed conditions, when stimulus location was relevant, older adults benefited more than younger adults from complete repetition of the task and stimulus from the preceding trial. When stimulus location was irrelevant, the benefit of complete repetition did not differ reliably between age groups. The results suggest that the age-related deficit associated with mixing mappings and tasks is primarily due to older adults having more difficulty separating task sets that activate conflicting response codes.
Aging; Attention; Response Selection; 2860 Gerontology; 2346 Attention
Submovements that are frequently observed in the final portion of pointing movements have traditionally been viewed as pointing accuracy adjustments. Here we re-examine this long-standing interpretation by developing evidence that many submovements may be non-corrective fluctuations arising from various sources of motor output variability. In particular, non-corrective submovements may emerge during motion termination and during motion of low speed. The contribution of these factors and the factor of accuracy regulation to submovement production is investigated here by manipulating movement mode (discrete, reciprocal, and passing) and target size (small and large). The three modes provided different temporal combinations of accuracy regulation and motion termination, thus allowing us to disentangle submovements associated with each factor. The target size manipulations further emphasized the role of accuracy regulation and provided variations in movement speed. Gross and fine submovements were distinguished based on the degree of perturbation of smooth motion. It was found that gross submovements were predominantly related to motion termination and not to pointing accuracy regulation. Although fine submovements were more frequent during movements to small than to large targets, other results show that they, too, may be not corrective adjustments but rather motion fluctuations attributable to the decreases in movement speed that accompany decreases in target size. Together, the findings challenge the traditional interpretation, suggesting that the majority of submovements are fluctuations emerging from mechanical and neural sources of motion variability. The implications of the findings for the mechanisms responsible for accurate target achievement are discussed.
arm kinematics; discrete; continuous; accuracy; variability