Eye contact captures attention and receives prioritized visual processing. Here we asked whether eye contact might be processed outside conscious awareness. Faces with direct and averted gaze were rendered invisible using interocular suppression. In two experiments we found that faces with direct gaze overcame such suppression more rapidly than faces with averted gaze. Control experiments ruled out the influence of low-level stimulus differences and differential response criteria. These results indicate an enhanced unconscious representation of direct gaze, enabling the automatic and rapid detection of other individuals making eye contact with the observer.
Eye contact; gaze processing; binocular rivalry; interocular suppression; unconscious processing
• Grapheme–color synesthetes display superior color working memory compared with controls.
• This effect is independent of color familiarity and color discrimination abilities.
• Controls and synesthetes do not differ in grapheme working memory.
• These results support enhanced color processing in synesthesia.
• They also support research linking sensory processing and working memory.
There is emerging evidence that the encoding of visual information and the maintenance of this information in a temporarily accessible state in working memory rely on the same neural mechanisms. A consequence of this overlap is that atypical forms of perception should influence working memory. We examined this by investigating whether having grapheme–color synesthesia, a condition characterized by the involuntary experience of color photisms when reading or representing graphemes, would confer benefits on working memory. Two competing hypotheses propose that superior memory in synesthesia results from information being coded in two information channels (dual-coding) or from superior dimension-specific visual processing (enhanced processing). We discriminated between these hypotheses in three n-back experiments in which controls and synesthetes viewed inducer and non-inducer graphemes and maintained color or grapheme information in working memory. Synesthetes displayed superior color working memory compared with controls for both grapheme types, whereas the two groups did not differ in grapheme working memory. Further analyses excluded the possibilities of enhanced working memory among synesthetes being due to greater color discrimination, stimulus color familiarity, or bidirectionality. These results reveal enhanced dimension-specific visual working memory in this population and supply further evidence for a close relationship between sensory processing and the maintenance of sensory information in working memory.
Color-processing; n-Back; Grapheme-processing; Synesthesia; Visual; Working memory
Bilinguals have been shown to activate their two languages in parallel, and this process can often be attributed to overlap in input between the two languages. The present study examines whether two languages that do not overlap in input structure, and that have distinct phonological systems, such as American Sign Language (ASL) and English, are also activated in parallel. Hearing ASL-English bimodal bilinguals’ and English monolinguals’ eye movements were recorded during a visual world paradigm, in which participants were instructed, in English, to select objects from a display. In critical trials, the target item appeared with a competing item that overlapped with the target in ASL phonology. Bimodal bilinguals looked more at competing items than at phonologically unrelated items, and looked more at competing items relative to monolinguals, indicating activation of the sign language during spoken English comprehension. The findings suggest that language co-activation is not modality specific, and provide insight into the mechanisms that may underlie cross-modal language co-activation in bimodal bilinguals, including the role that top-down and lateral connections between levels of processing may play in language comprehension.
Bilingualism; Language Processing; American Sign Language
Previous studies have investigated how existing social attitudes towards other races affect the way we ‘share’ their bodily experiences, for example in empathy for pain, and sensorimotor mapping. Here, we ask whether it is possible to alter implicit racial attitudes by experimentally increasing self-other bodily overlap. Employing a bodily illusion known as the ‘Rubber Hand Illusion’, we delivered multisensory stimulation to light-skinned Caucasian participants to induce the feeling that a dark-skinned hand belonged to them. We then measured whether this could change their implicit racial biases against people with dark skin. Across two experiments, the more intense the participants’ illusion of ownership over the dark-skinned rubber hand, the more positive their implicit racial attitudes became. Importantly, it was not the pattern of multisensory stimulation per se, but rather, it was the change in the subjective experience of body ownership that altered implicit attitudes. These findings suggest that inducing an overlap between the bodies of self and other through illusory ownership is an effective way to change and reduce negative implicit attitudes towards outgroups.
Rubber Hand Illusion; body representation; implicit racial attitudes; social cognition; body ownership; multisensory
Much research has demonstrated a shape bias in categorizing and naming solid objects. This research has shown that when an entity is conceptualized as an individual object, adults and children attend to the object’s shape. Separate research in the domain of numerical cognition suggests that there are distinct processes for quantifying small and large sets of discrete items. This research shows that small set discrimination, comparison, and apprehension are often precise for 1–3 and sometimes 4 items; however, large numerosity representation is imprecise. Results from three experiments suggest a link between the processes for small and large number representation and the shape bias in a forced choice categorization task using naming and non-naming procedures. Experiment 1 showed that adults generalized a newly learned name for an object to new instances of the same shape only when those instances were presented in sets of fewer than 3 or 4. Experiment 2 showed that preschool children who were monolingual speakers of three different languages were also influenced by set size when categorizing objects in sets. Experiment 3 extended these results and showed the same effect in a non-naming task and when the novel noun was presented in a count-noun syntax frame. The results are discussed in terms of a relation between the precision of object representation and the precision of small and large number representation.
Cognitive development; Number; Object shape
While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes to perception of 3D space, objects and faces. Hearing a /woo/ sound increases the apparent vertical elongation of a shape, whereas hearing a /wee/ sound increases the apparent horizontal elongation. We further demonstrate that these sounds influence aspect ratio coding. Viewing and adapting to a tall (or flat) shape makes a subsequently presented symmetric shape appear flat (or tall). These aspect ratio aftereffects are enhanced when associated speech sounds are presented during the adaptation period, suggesting that the sounds influence visual population coding of aspect ratio. Taken together, these results extend previous demonstrations that visual information constrains auditory perception by showing the converse – speech sounds influence visual perception of a basic geometric feature.
Auditory–visual; Aspect ratio; Crossmodal; Shape perception; Speech perception
Infants’ abilities to discriminate native and non-native phonemes have been extensively investigated in monolingual learners, demonstrating a transition from language-general to language-specific sensitivities over the first year after birth. However, these studies have mostly been limited to the study of vowels and consonants in monolingual learners. There is relatively little research on other types of phonetic segments, such as lexical tone, even though tone languages are very well represented across languages of the world. The goal of the present study is to investigate how Mandarin Chinese-English bilingual learners contend with non-phonemic pitch variation in English spoken word recognition. This is contrasted with their treatment of phonemic changes in lexical tone in Mandarin spoken word recognition. The experimental design was cross-sectional and three age-groups were sampled (7.5 months, 9 months and 11 months). Results demonstrated limited generalization abilities at 7.5 months, where infants only recognized words in English when matched in pitch and words in Mandarin that were matched in tone. At 9 months, infants recognized words in Mandarin Chinese that matched in tone, but also falsely recognized words that contrasted in tone. At this age, infants also recognized English words whether they were matched or mismatched in pitch. By 11 months, infants correctly recognized pitch-matched and pitch-mismatched words in English but only recognized tonal matches in Mandarin Chinese.
Four experiments tested whether there are enduring spatial representations of objects’ locations in memory. Previous studies have shown that under certain conditions the internal consistency of pointing to objects using memory is disrupted by disorientation. This disorientation effect has been attributed to an absence of or to imprecise enduring spatial representations of objects’ locations. Experiment 1 replicated the standard disorientation effect. Participants learned locations of objects in an irregular layout and then pointed to objects after physically turning to face an object and after disorientation. The expected disorientation was observed. In Experiment 2, after disorientation, participants were asked to imagine they were facing the original learning direction and then physically turned to adopt the test orientation. In Experiment 3, after disorientation, participants turned to adopt the test orientation and then were informed of the original viewing direction by the experimenter. A disorientation effect was not observed in Experiment 2 or 3. In Experiment 4, after disorientation, participants turned to face the test orientation but were not told the original learning orientation. As in Experiment 1, a disorientation effect was observed. These results suggest that there are enduring spatial representations of objects’ locations specified in terms of a spatial reference direction parallel to the learning view, and that the disorientation effect is caused by uncertainty in recovering the spatial reference direction relative to the testing orientation following disorientation.
A central question in intertemporal decision making is why people reverse their own past choices. Someone who initially prefers a long-run outcome might fail to maintain that preference for long enough to see the outcome realized. Such behavior is usually understood as reflecting preference instability or self-control failure. However, if a decision maker is unsure exactly how long an awaited outcome will be delayed, a reversal can constitute the rational, utility-maximizing course of action. In the present behavioral experiments, we placed participants in timing environments where persistence toward delayed rewards was either productive or counterproductive. Our results show that human decision makers are responsive to statistical timing cues, modulating their level of persistence according to the distribution of delay durations they encounter. We conclude that temporal expectations act as a powerful and adaptive influence on people’s tendency to sustain patient decisions.
decision making; intertemporal choice; dynamic inconsistency; statistical learning; interval timing
Recent research indicates that infants first use form and then surface features as the basis for individuating objects. However, very little is known about the underlying basis for infants' differential sensitivity to form relative to surface features. The present research assessed infants' sensitivity to luminance differences. Like other surface properties, luminance information typically reveals little about an object. Unlike other surface properties (e.g. pattern, color), the visual system can detect luminance differences at birth. The outcome of two experiments indicated that 11.5-month-olds, but not 7.5-month-olds, used luminance differences to individuate objects. These results suggest that it is not the age at which infants can detect a feature, but the nature of the information carried by the feature, that determines infants' capacity to individuate objects.
Object individuation; Infancy; Luminance
There has been some debate about whether infants 10 months and younger can use featural information to individuate objects. The present research tested the hypothesis that negative results obtained with younger infants reflect limitations in information processing capacities rather than the inability to individuate objects based on featural differences. Infants aged 9.5 months saw one object (i.e. a ball) or two objects (i.e. a box and a ball) emerge successively to opposite sides of an opaque occluder. Infants then saw a single ball either behind a transparent occluder or without an occluder. Only the infants who saw the ball behind the transparent occluder correctly judged that the one-ball display was inconsistent with the box–ball sequence. These results (a) suggest that infants categorize events involving opaque and transparent occluders as the same kind of physical situation (i.e. occlusion), and (b) support the notion that infants are more likely to give evidence of object individuation when they need to reason about one kind of event (i.e. occlusion) than when they must retrieve and compare categorically distinct events (i.e. occlusion and no-occlusion).
Object individuation; Infant cognition; Cognitive development
Children use syntax to interpret sentences and learn verbs; this is syntactic bootstrapping. The structure-mapping account of early syntactic bootstrapping proposes that a partial representation of sentence structure, the set of nouns occurring with the verb, guides initial interpretation and provides an abstract format for new learning. This account predicts early successes, but also telltale errors: Toddlers should be unable to tell transitive sentences from other sentences containing two nouns. In testing this prediction, we capitalized on evidence that 21-month-olds use what they have learned about noun order in English sentences to understand new transitive verbs. In two experiments, 21-month-olds applied this noun-order knowledge to two-noun intransitive sentences, mistakenly assigning different interpretations to “The boy and the girl are gorping!” and “The girl and the boy are gorping!”. This suggests that toddlers exploit partial representations of sentence structure to guide sentence interpretation; these sparse representations are useful, but error-prone.
Language acquisition; Syntactic bootstrapping
Visual processing is highly sensitive to stimulus orientation; for example, face perception is drastically worse when faces are inverted rather than upright. However, stimulus orientation must be established in relation to a particular reference frame, and in most studies, several reference frames are conflated. Which reference frame(s) matter in the perception of faces? Here we describe a simple, novel method for dissociating effects of egocentric and environmental orientation on face processing. Participants performed one of two face-processing tasks (expression classification and recognition memory) as they lay horizontally, which served to dissociate the egocentric and environmental frames. We found large effects of egocentric orientation on performance and smaller but reliable effects of environmental orientation. In a follow-up control experiment, we ruled out the possibility that the latter could be explained by compensatory ocular counterroll. We argue that environmental orientation influences face processing, which is revealed when egocentric orientation is fixed.
inversion effect; reference frames; embodiment; face perception; memory
Contour interpolation is a perceptual process that fills-in missing edges on the basis of how surrounding edges (inducers) are spatiotemporally related. Cognitive encapsulation refers to the degree to which perceptual mechanisms act in isolation from beliefs, expectations, and utilities (Pylyshyn, 1999). Is interpolation encapsulated from belief? We addressed this question by having subjects discriminate briefly-presented, partially-visible fat and thin shapes, the edges of which either induced or did not induce illusory contours (relatable and non-relatable conditions, respectively). Half the trials in each condition incorporated task-irrelevant distractor lines, known to disrupt the filling-in of contours. Half of the observers were told that the visible parts of the shape belonged to a single thing (group strategy); the other half were told that the visible parts were disconnected (ungroup strategy). It was found that distractor lines strongly impaired performance in the relatable condition, but minimally in the non-relatable condition; that strategy did not alter the effects of the distractor lines for either the relatable or non-relatable stimuli; and that cognitively grouping relatable fragments improved performance whereas cognitively grouping non-relatable fragments did not. These results suggest that 1) filling-in effects during illusory contour formation cannot be easily removed via strategy; 2) filling-in effects cannot be easily manufactured from stimuli that fail to elicit interpolation; and 3) actively grouping fragments can readily improve discrimination performance, but only when those fragments form interpolated contours. Taken together, these findings indicate that discriminating filled-in shapes depends on strategy but filling-in itself may be encapsulated from belief.
Filling-in; Illusory contours; Contour interpolation; Perceptual completion; Strategy; Cognitive expectation; Cognitive impenetrability; Modularity; Perceptual organization
The manual gestures that hearing children produce when explaining their answers to math problems predict whether they will profit from instruction in those problems. We ask here whether gesture plays a similar role in deaf children, whose primary communication system is in the manual modality. Forty ASL-signing deaf children explained their solutions to math problems and were then given instruction in those problems. Children who produced many gestures conveying different information from their signs (gesture-sign mismatches) were more likely to succeed after instruction than children who produced few, suggesting that mismatch can occur within-modality, and paving the way for using gesture-based teaching strategies with deaf learners.
Gesture; Sign language; Mathematics; Learning; Mismatch
We examine the referential choices (pronouns/zeros vs. names/descriptions) made during a narrative by high-functioning children and adolescents with autism and a well-matched typically developing control group. The process of choosing appropriate referring expressions has been proposed to depend on two areas of cognitive functioning: a) judging the attention and knowledge of one’s interlocutor, and b) the use of memory and attention mechanisms to represent the discourse situation. We predicted possible group differences, since autism is often associated with deficits in a) mentalizing and b) memory and attention, as well as a more general tendency to have difficulty with the pragmatic aspects of language use. Results revealed that some of the participants with autism were significantly less likely to produce pronouns or zeros in some discourse contexts. However, the difference was only one of degree. Overall, all participants in our analysis exhibited fine-grained sensitivity to the discourse context. Furthermore, referential choices for all participants were modulated by factors related to the cognitive effort of language production.
Reference; language production; pronouns; autism; theory of mind; development
Deaf children whose hearing losses are so severe that they cannot acquire spoken language, and whose hearing parents have not exposed them to sign language, use gestures called homesigns to communicate. Homesigns have been shown to contain many of the properties of natural languages. Here we ask whether homesign has structure building devices for negation and questions. We identify two meanings (negation, question) that correspond semantically to propositional functions, that is, to functions that apply to a sentence (whose semantic value is a proposition, φ) and yield another proposition that is more complex (¬φ for negation; ?φ for question). Combining φ with ¬ or ? thus involves sentence modification. We propose that these negative and question functions are structure building operators, and we support this claim with data from an American homesigner. We show that: (a) each meaning is marked by a particular form in the child’s gesture system (side-to-side headshake for negation, manual flip for question); (b) the two markers occupy systematic, and different, positions at the periphery of the gesture sentences (headshake at the beginning, flip at the end); and (c) the flip is extended from questions to other uses associated with the wh-form (exclamatives, referential expressions of location) and thus functions like a category in natural languages. If what we see in homesign is a language creation process (Goldin-Meadow, 2003), and if negation and question formation involve sentential modification, then our analysis implies that homesign has at least this minimal sentential syntax. Our findings thus contribute to ongoing debates about properties that are fundamental to language and language learning.
Negation; Questions; Language Development; Sign Language; Home sign; Gesture; Language Creation; Sentence Modification; Free relatives; Wh-forms
We measured Event-Related Potentials (ERPs) and naming times to picture targets preceded by masked words (stimulus onset asynchrony: 80 ms) that shared one of three different types of relationship with the names of the pictures: (1) Identity related, in which the prime was the name of the picture (“socks” –
Semantic interference; Lexical selection; Response selection; Speech production; ERP; N400
Past research showing a bias towards the larger non-symbolic number by adults and children in line bisection tasks (de Hevia & Spelke, 2009) has been challenged by Gebuis and Gevers, who suggest that the area subtended by the stimulus, and not number, is responsible for the biases. I review evidence supporting the idea that although sensitivity to number might be relatively affected by visual cues, number is a major, salient property of our environment. The influence of non-numerical cues might be seen as the concurrent processing of dimensions that entail information of magnitude, without implying that number is constructed out of those dimensions.
number; non-numerical cues; line bisection; spatial biases
We extended the classic anorthoscopic viewing procedure to test a model of visualization of 3D structures from 2D cross-sections. Four experiments were conducted to examine two key processes described in the model: localization of cross-sections within a common frame of reference, and spatiotemporal integration of cross-sections into a hierarchical object representation. Participants used a hand-held device to reveal a hidden object as a sequence of cross-sectional images. The process of localization was manipulated by contrasting two displays, in-situ vs. ex-situ, which differed in whether cross-sections were presented at their source locations or displaced to a remote screen. The process of integration was manipulated by varying the structural complexity of target objects and their components. Experiments 1 and 2 demonstrated visualization of 2D and 3D line-segment objects and verified predictions about display and complexity effects. In Experiments 3 and 4, the visualized forms were familiar letters and numbers. Errors and orientation effects showed that displacing cross-sectional images to a remote display (ex-situ viewing) impeded the ability to determine spatial relationships among pattern components, a failure of integration at the object level.
visualization; integration; spatiotemporal; anorthoscopic; cross-section
A visual search experiment employed strings of Landolt Cs to examine how the gap size of distractor strings and the frequency of exposure to them affected eye movements. Increases in gap size were associated with shorter first-fixation durations, gaze durations, and total times, as well as fewer fixations. Importantly, both the number and duration of fixations decreased with repeated exposures. The findings provide evidence for the role of cognition in guiding eye movements, and a potential explanation for word-frequency effects observed in reading.
eye-movement control; word frequency effects; reading; visual search
Geometry is one of the highest achievements of our species, but its foundations are obscure. Consistent with longstanding suggestions that geometrical knowledge is rooted in processes guiding navigation, the present study examines potential sources of geometrical knowledge in the navigation processes by which young children establish their sense of orientation. Past research reveals that children reorient both by the shape of the surface layout and the shapes of distinctive landmarks, but it fails to clarify what shape properties children use. The present study explores two-year-old children's sensitivity to angle, length, distance and direction by testing disoriented children’s search in a variety of fragmented rhombic and rectangular environments. Children reoriented themselves in accord with surface distances and directions, but they failed to use surface lengths or corner angles either for directional reorientation or as local landmarks. Thus, children navigate by some but not all of the abstract properties captured by formal Euclidean geometry. While navigation systems may contribute to children's developing geometric understanding, they likely are not the sole source of abstract geometric intuitions.
Conceptual representations are at the heart of our mental lives, involved in every aspect of cognitive functioning. Despite their centrality, a long-standing debate persists as to how the meanings of concepts are represented and processed. Many accounts agree that the meanings of concrete concepts are represented by their individual features, but disagree about the importance of different feature-based variables: some views stress the importance of the information carried by distinctive features in conceptual processing, others the features which are shared over many concepts, and still others the extent to which features co-occur. We suggest that previously disparate theoretical positions and experimental findings can be unified by an account which claims that task demands determine how concepts are processed, in addition to the effects of feature distinctiveness and co-occurrence. We tested these predictions in a basic-level naming task which relies on distinctive feature information (Experiment 1) and a domain decision task which relies on shared feature information (Experiment 2). Both used large-scale regression designs with the same visual objects, and mixed-effects models incorporating participant, session, stimulus-related and feature statistic variables to model performance. We found that concepts with relatively more distinctive and more highly correlated distinctive relative to shared features facilitated basic-level naming latencies, while concepts with relatively more shared and more highly correlated shared relative to distinctive features speeded domain decisions. These findings demonstrate that the feature statistics of distinctiveness (shared vs. distinctive) and correlational strength, as well as the task demands, determine how concept meaning is processed in the conceptual system.
conceptual representation; semantics; object processing; picture naming; categorisation
Understanding the intentional relations in others' actions is critical to human social life. Origins of this knowledge exist in the first year and are a function of both acting as an intentional agent and observing movement cues in actions. We explore a new mechanism we believe plays an important role in infants' understanding of new actions: comparison. We examine how the opportunity to compare a familiar action with a novel, tool use action helps 7- and 10-month-old infants extract and imitate the goal of a tool use action. Infants given the chance to compare their own reach for a toy with an experimenter's reach using a claw later imitated the goal of an experimenter's tool use action. Infants who engaged with the claw, were familiarized with the claw's causal properties, or learned the associations between claw and toys (but did not align their reaches with the claw's) did not imitate. Further, active participation in the familiar action to be compared was more beneficial than observing a familiar and novel action aligned for 10-month-olds. Infants' ability to extract the goal-relation of a novel action through comparison with a familiar action could have a broad impact on the development of action knowledge and social learning more generally.
infancy; cognitive development; action understanding; analogical reasoning