Recent evidence suggests those with autism may be generally impaired in visual motion perception. To examine this, we investigated both coherent and biological motion processing in adolescents with autism employing both psychophysical and fMRI methods. Those with autism performed as well as matched controls during coherent motion perception but had significantly higher thresholds for biological motion perception. The autism group showed reduced posterior Superior Temporal Sulcus (pSTS), parietal and frontal activity during a biological motion task while showing similar levels of activity in MT+/V5 during both coherent and biological motion trials. Activity in MT+/V5 was predictive of individual coherent motion thresholds in both groups. Activity in dorsolateral prefrontal cortex (DLPFC) and pSTS was predictive of biological motion thresholds in control participants but not in those with autism. Notably, however, activity in DLPFC was negatively related to autism symptom severity. These results suggest that impairments in higher-order social or attentional networks may underlie visual motion deficits observed in autism.
Theories of language acquisition have highlighted the importance of adult speakers as active participants in children’s language learning. However, in many communities children are reported to be directly engaged by their caregivers only rarely (Lieven, 1994). This observation raises the possibility that these children learn language from observing, rather than participating in, communicative exchanges. In this paper, we quantify naturally occurring language input in one community where directed interaction with children has been reported to be rare (Yucatec Mayan). We compare this input to the input heard by children growing up in large families in the United States, and we consider how directed and overheard input relate to Mayan children’s later vocabulary. In Study 1, we demonstrate that 1-year-old Mayan children do indeed hear a smaller proportion of total input in directed speech than children from the US. In Study 2, we show that for Mayan (but not US) children, the proportion of directed input that children receive increases substantially between 13 and 35 months. In Study 3, we explore the validity of using videotaped data in a Mayan village. In Study 4, we demonstrate that word types directed to Mayan children from adults at 24 months (but not word types overheard by children or word types directed from other children) predict later vocabulary. These findings suggest that adult talk directed to children is important for early word learning, even in communities where much of children’s early language input comes from overheard speech.
As with all culturally relevant human behaviours, words are meaningful because they are shared by the members of a community. This research investigates whether 9-month-old infants understand this fundamental fact about language. Experiment 1 examined whether infants who are trained on, and subsequently habituated to, a new word-referent link expect the link to be consistent across a second speaker. Experiment 2 examined whether 9-month-old infants distinguish behaviours that are shared across individuals (i.e., words) from those that are not (i.e., object preferences). The present findings indicate that infants as young as 9 months of age expect new word-referent links, but not object preferences, to be consistent across individuals. Thus, by 9 months, infants have identified at least one of the aspects of human behaviour that is shared across individuals within a community. The implications for children’s acquisition of language and culture are discussed.
shared knowledge; conventionality; culture; pedagogy; infant cognition; action perception; language development
Theory of mind requires belief- and desire-understanding. Event-related brain potential (ERP) research on belief- and desire-reasoning in adults found mid-frontal activations for both desires and beliefs, and selective right-posterior activations only for beliefs. Developmentally, children understand desires before beliefs; thus, a critical question concerns whether neural specialization for belief-reasoning exists in childhood or develops later. Neural activity was recorded as 7- and 8-year-olds (N = 18) performed the same diverse-desires, diverse-beliefs, and physical control tasks used in a previous adult ERP study. As in adults, mid-frontal scalp activations were found for both belief- and desire-reasoning. Moreover, analyses using correct trials alone yielded selective right-posterior activations for belief-reasoning. Results suggest developmental links between increasingly accurate understanding of complex mental states and neural specialization supporting this understanding.
The current study presents a series of computational simulations that demonstrate how the neural coding of numerical magnitude may influence number cognition and development. This includes behavioral phenomena cataloged in the cognitive literature, such as the development of numerical estimation and operational momentum. Though neural research has begun to describe the neural coding of number, it is unclear how specific characteristics of this coding may relate to the expansive list of behavioral phenomena in the development of number cognition. The present study considers several possibilities.
The present study examined whether 6- and 9-month-old Caucasian infants could categorize faces according to race. In Experiment 1, infants were familiarized with different female faces from a common ethnic background (i.e. either Caucasian or Asian) and then tested with female faces from a novel race category. Nine-month-olds were able to form discrete categories of Caucasian and Asian faces. However, 6-month-olds did not form discrete categories of faces based on race. In Experiment 2, a second group of 6- and 9-month-olds was tested to determine whether they could discriminate between different faces from the same race category. Results showed that both age groups could discriminate between different faces only within the own-race category of Caucasian faces. The findings of the two experiments taken together suggest that 9-month-olds formed a category of Caucasian faces that is further differentiated at the individual level. In contrast, although they could form a category of Asian faces, they could not discriminate between such other-race faces. This asymmetry in category formation at 9 months (i.e. categorization of own-race faces vs. categorical perception of other-race faces) suggests that differential experience with own- and other-race faces plays an important role in infants' acquisition of face processing abilities.
Very little is known about the neural underpinnings of language learning across the lifespan and how these might be modified by maturational and experiential factors. Building on behavioral research highlighting the importance of early word segmentation (i.e. the detection of word boundaries in continuous speech) for subsequent language learning, here we characterize developmental changes in brain activity as this process occurs online, using data collected in a mixed cross-sectional and longitudinal design. One hundred and fifty-six participants, ranging from age 5 to adulthood, underwent functional magnetic resonance imaging (fMRI) while listening to three novel streams of continuous speech, which contained either strong statistical regularities, strong statistical regularities and speech cues, or weak statistical regularities providing minimal cues to word boundaries. All age groups displayed significant signal increases over time in temporal cortices for the streams with high statistical regularities; however, we observed a significant right-to-left shift in the laterality of these learning-related increases with age. Interestingly, only the 5- to 10-year-old children displayed significant signal increases for the stream with low statistical regularities, suggesting an age-related decrease in sensitivity to more subtle statistical cues. Further, in a sample of 78 10-year-olds, we examined the impact of proficiency in a second language and level of pubertal development on learning-related signal increases, showing that the brain regions involved in language learning are influenced by both experiential and maturational factors.
Infants begin to segment novel words from speech by 7.5 months, demonstrating an ability to track, encode and retrieve words in the context of larger units. Although it is presumed that word recognition at this stage is a prerequisite to constructing a vocabulary, the continuity between these stages of development has not yet been empirically demonstrated. The goal of the present study is to investigate whether infant word segmentation skills are indeed related to later lexical development. Two word segmentation tasks, varying in complexity, were administered in infancy and related to childhood outcome measures. Outcome measures consisted of age-normed productive vocabulary percentiles and a measure of cognitive development. Results demonstrated a strong degree of association between infant word segmentation abilities at 7 months and productive vocabulary size at 24 months. In addition, outcome groups, as defined by median vocabulary size and growth trajectories at 24 months, showed distinct word segmentation abilities as infants. These findings provide the first prospective evidence supporting the predictive validity of infant word segmentation tasks and suggest that they are indeed associated with mature word knowledge.
Across all languages studied to date, audiovisual speech exhibits a consistent rhythmic structure. This rhythm is critical to speech perception. Some have suggested that the speech rhythm evolved de novo in humans. An alternative account—the one we explored here—is that the rhythm of speech evolved through the modification of rhythmic facial expressions. We tested this idea by investigating the structure and development of macaque monkey lipsmacks and found that their developmental trajectory is strikingly similar to the one that leads from human infant babbling to adult speech. Specifically, we show that: 1) younger monkeys produce slower, more variable mouth movements and as they get older, these movements become faster and less variable; and 2) this developmental pattern does not occur for another cyclical mouth movement—chewing. These patterns parallel human developmental patterns for speech and chewing. They suggest that, in both species, the two types of rhythmic mouth movements use different underlying neural circuits that develop in different ways. Ultimately, both lipsmacking and speech converge on a ~5 Hz rhythm, the frequency that characterizes the speech rhythm of human adults. We conclude that monkey lipsmacking and human speech share a homologous developmental mechanism, lending strong empirical support to the idea that the human speech rhythm evolved from the rhythmic facial expressions of our primate ancestors.
evolution of speech; language evolution; facial expression; audiovisual speech; primate communication; insula; chewing; mother-infant
During the first year of life, infants’ face recognition abilities are subject to “perceptual narrowing,” the end result of which is that observers lose the ability to distinguish previously discriminable faces (e.g. other-race faces) from one another. Perceptual narrowing has been reported for faces of different species and different races, in developing humans and primates. Though the phenomenon is highly robust and replicable, there have been few efforts to model the emergence of perceptual narrowing as a function of the accumulation of experience with faces during infancy. The goal of the current study is to examine how perceptual narrowing might manifest as statistical estimation in “face space,” a geometric framework for describing face recognition that has been successfully applied to adult face perception. Here, I use a computer vision algorithm for Bayesian face recognition to study how the acquisition of experience in face space and the presence of race categories affect performance for own and other-race faces. Perceptual narrowing follows from the establishment of distinct race categories, suggesting that the acquisition of category boundaries for race is a key computational mechanism in developing face expertise.
Implicit skill learning underlies the acquisition of not only motor but also cognitive and social skills throughout an individual's life. Yet the ontogenetic changes in humans’ implicit learning abilities have not yet been characterized, and, thus, their role in acquiring new knowledge efficiently during development is unknown. We investigated such learning across the life span, between 4 and 85 years of age, with an implicit probabilistic sequence learning task, and we found that the difference in implicitly learning high- vs. low-probability events, measured by raw reaction time (RT), exhibited a rapid decrement around the age of 12. Accuracy and z-transformed data showed partially different developmental curves, suggesting a re-evaluation of analysis methods in developmental research. The decrement in raw RT differences supports an extension of the traditional two-stage lifespan skill acquisition model: in addition to the decline above the age of 60 reported in earlier studies, sensitivity to raw probabilities, and therefore the acquisition of new skills, is significantly more effective until early adolescence than later in life. These results suggest that, due to developmental changes in early adolescence, implicit skill learning processes undergo a marked shift in weighting raw probabilities vs. more complex interpretations of events, which, with appropriate timing, proves to be an optimal strategy for human skill learning.
skill learning; implicit sequence learning; automaticity; Alternating Serial Reaction Time Task (ASRT); development; aging; critical period
A critical challenge for visual perception is to represent objects as the same persisting individuals over time and motion. Across several areas of cognitive science, researchers have identified cohesion as among the most important theoretical principles of object persistence: An object must maintain a single bounded contour over time. Drawing inspiration from recent work in adult visual cognition, the present study tested the power of cohesion as a constraint as it operates early in development. In particular, we tested whether the most minimal cohesion violation – a single object splitting into two – would destroy infants’ ability to represent a quantity of objects over occlusion. In a forced-choice crawling paradigm, 10- and 12-month-old infants witnessed crackers being sequentially placed into containers, and typically crawled toward the container with the greater cracker quantity. When one of the crackers was visibly split in half, however, infants failed to represent the relative quantities, despite controls for the overall quantities and the motions involved. This result helps to characterize the fidelity and specificity of cohesion as a fundamental principle of object persistence, suggesting that even the simplest possible cohesion violation can dramatically impair infants’ object representations and influence their overt behavior.
Visual working memory (VWM) capacity has been studied extensively in adults, and methodological advances have enabled researchers to probe capacity limits in infancy using a preferential looking paradigm. Evidence suggests that capacity increases rapidly between 6 and 10 months of age. To understand how the VWM system develops, we must understand the relationship between the looking behavior used to study VWM and underlying cognitive processes. We present a dynamic neural field model that captures both real-time and developmental processes underlying performance. Three simulation experiments show how looking is linked to VWM processes during infancy and how developmental changes in performance could arise through increasing neural connectivity. These results provide insight into the sources of capacity limits and VWM development more generally.
In altricial species such as humans, the caregiver, very often the mother, is one of the most potent stimuli during development. The distinction between mothers and other adults is learned early in life and results in numerous behaviors in the child, most notably mother-approach and stranger-wariness. The current study examined the influence of the maternal stimulus on amygdala activity and related circuitry in twenty-five developing individuals, comprising children (n=13) and adolescents (n=12), and how this circuitry was associated with attachment-related behaviors. Results indicated that maternal stimuli were especially effective in recruiting activity in the left dorsal amygdala, and activity in this amygdala region showed increased functional connectivity with evaluative and motor regions during viewing of maternal stimuli. Increases in this left dorsal amygdala activity and related amygdala-cortical functional connectivity were associated with increased mother-approach behaviors as measured by in-scanner behavioral responding and out-of-scanner child-report. Moreover, age-related changes in amygdala activity to non-mothers statistically mediated the developmentally typical decline in stranger-wariness seen across this period. These results suggest that mother-induced behaviors are enacted by maternal influence on amygdala-cortical circuitry during childhood and adolescence.
Predicting the actions of others is critical to smooth social interactions. Prior work suggests that both understanding and anticipation of goal-directed actions appear early in development. In this study, on-line goal prediction was tested explicitly using an adaptation of Woodward’s (1998) paradigm for an eye-tracking task. Twenty 11-month-olds were familiarized to movie clips of a hand reaching to grasp 1 of 2 objects. Then object locations were swapped, and the hand made an incomplete reach between the objects. Here, infants reliably made their first look from the hand to the familiarized goal object, now in a new location. A separate control condition of 20 infants familiarized to the same movements of an unfamiliar claw revealed the opposite pattern: reliable prediction to the familiarized location, rather than the familiarized object. This study suggests that by 11 months, infants actively use goal analysis to generate on-line predictions of an agent’s next action.
The event-related potential (ERP) effect of mismatch negativity (MMN) was the first electrophysiological probe to evaluate cognitive processing (change detection) in newborn infants. Initial studies of MMN predicted clinical utility for this measure in the identification of infants at risk for developmental cognitive deficits. These predictions have not been realized. We hypothesized that in sleeping newborn infants, measures derived from wavelet assessment of power in the MMN paradigm would be more robust markers of the brain's response to stimulus change than the ERP-derived MMN. Consistent with this premise, we found increased power in response to unpredictable and infrequent tones compared to frequent tones. These increases were present at multiple locations on the scalp over a range of latencies and frequencies and occurred even in the absence of an ERP-derived MMN. There were two predominant effects. First, theta band power was elevated at middle and late latencies (200 to 600 ms), suggesting that the neocortical theta rhythms that subserve working memory in adults are present at birth. Second, at late latencies (500 ms), increased power to the unpredictable and infrequent tones was observed in the beta and gamma bands, suggesting that oscillations involved in adult cognition are also present in the neonate. These findings support the expectation that frequency-dependent measures, such as wavelet power, will improve the prospects for a clinically useful test of cortical function early in the postnatal period.
Recent research indicates that toddlers and infants succeed at various non-verbal spontaneous-response false-belief tasks; here we asked whether toddlers would also succeed at verbal spontaneous-response false-belief tasks that imposed significant linguistic demands. 2.5-year-olds were tested using two novel tasks: a preferential-looking task in which children listened to a false-belief story while looking at a picture book (with matching and non-matching pictures), and a violation-of-expectation task in which children watched an adult “Subject” answer (correctly or incorrectly) a standard false-belief question. Positive results were obtained with both tasks, despite their linguistic demands. These results (1) support the distinction between spontaneous- and elicited-response tasks by showing that toddlers succeed at verbal false-belief tasks that do not require them to answer direct questions about agents’ false beliefs, (2) reinforce claims of robust continuity in early false-belief understanding as assessed by spontaneous-response tasks, and (3) provide researchers with new experimental tasks for exploring early false-belief understanding in neurotypical and autistic populations.
Parenting is traditionally conceptualized as an exogenous environment that affects child development. However, children can also influence the quality of parenting that they receive. Using longitudinal data from 650 identical and fraternal twin pairs, we found that, controlling for cognitive ability at age 2 years, cognitive stimulation by parents (coded from video recorded behaviors during a dyadic task) at 2 years predicted subsequent reading ability at age 4 years. Moreover, controlling for cognitive stimulation at 2 years, children’s cognitive ability at 2 years predicted the quality of stimulation received from their parents at 4 years. Genetic and environmental factors differentially contributed to these effects. Parenting influenced subsequent cognitive development through a family-level environmental pathway, whereas children’s cognitive ability influenced subsequent parenting through a genetic pathway. These results suggest that genetic influences on cognitive development occur through a transactional process, in which genetic predispositions lead children to evoke cognitively stimulating experiences from their environments.
Cognitive development; Gene-environment correlation; Cognitive stimulation; Parenting; Behavior genetics
Speakers convey meaning not only through words, but also through gestures. Although children are exposed to co-speech gestures from birth, we do not know how the developing brain comes to connect meaning conveyed in gesture with speech. We used functional magnetic resonance imaging (fMRI) to address this question and scanned 8- to 11-year-old children and adults listening to stories accompanied by hand movements, either meaningful co-speech gestures or meaningless self-adaptors. When listening to stories accompanied by both types of hand movements, both children and adults recruited inferior frontal, inferior parietal, and posterior temporal brain regions known to be involved in processing language not accompanied by hand movements. There were, however, age-related differences in activity in posterior superior temporal sulcus (STSp), inferior frontal gyrus, pars triangularis (IFGTr), and posterior middle temporal gyrus (MTGp) regions previously implicated in processing gesture. Both children and adults showed sensitivity to the meaning of hand movements in IFGTr and MTGp, but in different ways. Finally, we found that hand movement meaning modulates interactions between STSp and other posterior temporal and inferior parietal regions for adults, but not for children. These results shed light on the developing neural substrate for understanding meaning contributed by co-speech gesture.
In light of cross-cultural and experimental research highlighting effects of childrearing practices on infant motor skill, we asked whether wearing diapers, a seemingly innocuous childrearing practice, affects infant walking. Diapers introduce bulk between the legs, potentially exacerbating infants’ poor balance and wide stance. We show that walking is adversely affected by old-fashioned cloth diapers, and that even modern disposable diapers—habitually worn by most infants in the sample—incur a cost relative to walking naked. Infants displayed less mature gait patterns and more missteps and falls while wearing diapers. Thus, infants’ own diapers constitute an on-going biomechanical perturbation while learning to walk. Furthermore, shifts in diapering practices may have contributed to historical and cross-cultural differences in infant walking.
Word-learning skills were tested in normal-hearing 12- to 40-month-olds and in deaf 22- to 40-month-olds 12 to 18 months after cochlear implantation. Using the Intermodal Preferential Looking Paradigm (IPLP), children were tested for their ability to learn two novel-word/novel-object pairings. Normal-hearing children demonstrated learning on this task at approximately 18 months of age and older. For deaf children, performance on this task was significantly correlated with early auditory experience: Children whose cochlear implants were switched on by 14 months of age or who had relatively more hearing before implantation demonstrated learning in this task, but later implanted profoundly deaf children did not. Performance on this task also correlated with later measures of vocabulary size. Taken together, these findings suggest that early auditory experience facilitates word learning and that the IPLP may be useful for identifying children who may be at high risk for poor vocabulary development.
Two experiments tested whether 4-year-old children extract and use geometric information in simple maps without task instruction or feedback. Children saw maps depicting an arrangement of three containers and were asked to place an object into a container designated on the map. In Experiment 1, one of the three locations in both the map and the array was distinct and therefore served as a landmark; in Experiment 2, only angle, distance and sense information specified the target container. Children in both experiments used information for distance and angle, but not sense, showing signature error patterns found in adults. Children thus show early, spontaneously developing abilities to detect geometric correspondences between three-dimensional layouts and two-dimensional maps, and they use these correspondences to guide navigation. These findings begin to chart the nature and limits of the use of core geometry in a uniquely human, symbolic task.