Results 1-25 (571683)

1.  Haptic information stabilizes and destabilizes coordination dynamics. 
Goal-directed, coordinated movements in humans emerge from a variety of constraints that range from 'high-level' cognitive strategies based on perception of the task to 'low-level' neuromuscular-skeletal factors such as differential contributions to coordination from flexor and extensor muscles. There has been a tendency in the literature to dichotomize these sources of constraint, favouring one or the other rather than recognizing and understanding their mutual interplay. In this experiment, subjects were required to coordinate rhythmic flexion and extension movements with an auditory metronome, the rate of which was systematically increased. When subjects started in extension on the beat of the metronome, there was a small tendency to switch to flexion at higher rates, but not vice versa. When subjects were asked to contact a physical stop, the location of which was either coincident with or counterphase to the auditory stimulus, two effects occurred. When haptic contact was coincident with sound, coordination was stabilized for both flexion and extension. When haptic contact was counterphase to the metronome, coordination was actually destabilized, with transitions occurring from both extension to flexion on the beat and from flexion to extension on the beat. These results reveal the complementary nature of strategic and neuromuscular factors in sensorimotor coordination. They also suggest the presence of a multimodal neural integration process - which is parametrizable by rate and context - in which intentional movement, touch and sound are bound into a single, coherent unit.
doi:10.1098/rspb.2001.1620
PMCID: PMC1088728  PMID: 11375110
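The rate-dependent switch from extension-on-the-beat to flexion-on-the-beat described above is the kind of behavior usually formalized, in this coordination-dynamics literature, with the Haken-Kelso-Bunz (HKB) relative-phase equation. The sketch below is my own illustration of that standard model, not code or parameter values taken from the paper: the anti-phase pattern (relative phase near pi) stays stable while the coupling ratio b/a is above 0.25 and collapses to in-phase once increasing movement rate drives b/a below that value.

    import numpy as np

    def hkb_rate_of_change(phi, a=1.0, b_over_a=0.5):
        # d(phi)/dt for relative phase phi; b/a is assumed to shrink as movement rate rises.
        b = b_over_a * a
        return -a * np.sin(phi) - 2.0 * b * np.sin(2.0 * phi)

    def settle(phi0, b_over_a, dt=0.01, steps=5000):
        # Crude Euler integration: return the relative phase the system settles into.
        phi = phi0
        for _ in range(steps):
            phi += dt * hkb_rate_of_change(phi, b_over_a=b_over_a)
        return float(np.mod(phi, 2.0 * np.pi))

    print(settle(np.pi - 0.1, b_over_a=0.5))  # slow rate: stays near pi (anti-phase)
    print(settle(np.pi - 0.1, b_over_a=0.1))  # fast rate: falls to ~0 (in-phase)
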
2.  Crossmodal Interactions during Affective Picture Processing 
PLoS ONE  2014;9(2):e89858.
"Natural" crossmodal correspondences, such as the spontaneous tendency to associate high pitches with high spatial locations, are often hypothesized to occur preattentively and independently of task instructions (top-down attention). Here, we investigate bottom-up attentional engagement by using emotional scenes that are known to naturally and reflexively engage attentional resources. We presented emotional (pleasant and unpleasant) or neutral pictures either below or above a fixation cross, while participants were required to discriminate between a high or a low pitch tone (experiment 1). Results showed that despite a robust crossmodal attentional capture of task-irrelevant emotional pictures, the general advantage in classifying the tones for congruent over incongruent visual-auditory stimuli was similar for emotional and neutral pictures. On the other hand, when picture position was task-relevant (experiment 2), task-irrelevant tones did not interact with pictures with regard to their combination of pitch and visual vertical spatial position, but instead they were effective in minimizing the interference effect of emotional picture processing on the ongoing task. These results provide constraints on our current understanding of natural crossmodal correspondences.
doi:10.1371/journal.pone.0089858
PMCID: PMC3937419  PMID: 24587078
3.  Behavioral Impact of Unisensory and Multisensory Audio-Tactile Events: Pros and Cons for Interlimb Coordination in Juggling 
PLoS ONE  2012;7(2):e32308.
Recent behavioral neuroscience research has revealed that elementary reactive behavior can be improved by cross-modal sensory interactions, thanks to underlying multisensory integration mechanisms. Can this benefit be generalized to the ongoing coordination of movements under severe physical constraints? We chose a juggling task to examine this question. A well-known central issue in juggling lies in establishing and maintaining a specific temporal coordination among balls, hands, eyes and posture. Here, we tested whether providing additional timing information about the ball and hand motions, using external sound and periodic tactile stimulation, the latter presented at the wrists, improved jugglers' behavior. One specific combination of auditory and tactile metronomes led to a decrease in the spatiotemporal variability of the jugglers' performance: a simple sound associated with left and right tactile cues presented in antiphase to each other, which corresponded to the temporal pattern of hand movements in the juggling task. By contrast, no improvements were obtained with other auditory and tactile combinations, and performance was even degraded when tactile events were presented alone. The nervous system thus appears able to efficiently integrate environmental information carried by different sensory modalities, but only if that information matches specific features of the coordination pattern. We discuss the possible implications of these results for understanding the neural integration process involved in audio-tactile interaction in the context of complex voluntary movement, also considering the well-known gating effect of movement on vibrotactile perception.
doi:10.1371/journal.pone.0032308
PMCID: PMC3288083  PMID: 22384211
4.  The Temporal Representation of In-Phase and Anti-Phase Movements 
Human movement science  2007;26(2):226-234.
We have proposed that the stability of bimanual coordination is influenced by the complexity of the representation of the task goals. Here we present two experiments to explore this hypothesis. First, we examined whether a temporal event structure is present in continuous movements by having participants vocalize while producing bimanual circling movements. Participants tended to vocalize once per movement cycle when moving in-phase. In contrast, vocalizations were not synchronized with anti-phase movements. While the in-phase result is unexpected, the latter would suggest anti-phase continuous movements lack an event structure. Second, we examined the event structure of movements marked by salient turn-around points. Participants made bimanual wrist flexion movements and were instructed to move ‘in synchrony’ with a metronome, without specifying how they should couple the movements to the metronome. During in-phase movements, participants synchronized one hand cycle with every metronome beat; during anti-phase movements, participants synchronized flexion of one hand with one metronome beat and extension of the other hand with the next beat. The results are consistent with the hypothesis that the instability of anti-phase movements is related to their more complex (or absent) event representation relative to that associated with in-phase movements.
doi:10.1016/j.humov.2007.01.002
PMCID: PMC1904833  PMID: 17343942
coordination; constraints; in-phase; anti-phase; stability
5.  Object representation in the human auditory system 
One important principle of object processing is exclusive allocation. Any part of the sensory input, including the border between two objects, can only belong to one object at a time. We tested whether tones forming a spectro-temporal border between two sound patterns can belong to both patterns at the same time. Sequences were composed of low-, intermediate- and high-pitched tones. Tones were delivered with short onset-to-onset intervals causing the high and low tones to automatically form separate low and high sound streams. The intermediate-pitch tones could be perceived as part of either one or the other stream, but not both streams at the same time. Thus these tones formed a pitch ’border’ between the two streams. The tones were presented in a fixed, cyclically repeating order. Linking the intermediate-pitch tones with the high or the low tones resulted in the perception of two different repeating tonal patterns. Participants were instructed to maintain perception of one of the two tone patterns throughout the stimulus sequences. Occasional changes violated either the selected or the alternative tone pattern, but not both at the same time. We found that only violations of the selected pattern elicited the mismatch negativity event-related potential, indicating that only this pattern was represented in the auditory system. This result suggests that individual sounds are processed as part of only one auditory pattern at a time. Thus tones forming a spectro-temporal border are exclusively assigned to one sound object at any given time, as are spatio-temporal borders in vision.
doi:10.1111/j.1460-9568.2006.04925.x
PMCID: PMC2855546  PMID: 16836636
auditory sensory memory; auditory stream segregation; event-related potentials; implicit memory; spectro-temporal processing
6.  The role of musical training in emergent and event-based timing 
Introduction: Musical performance is thought to rely predominantly on event-based timing involving a clock-like neural process and an explicit internal representation of the time interval. Some aspects of musical performance may rely on emergent timing, which is established through the optimization of movement kinematics, and can be maintained without reference to any explicit representation of the time interval. We predicted that musical training would have its largest effect on event-based timing, supporting the dissociability of these timing processes and the dominance of event-based timing in musical performance.
Materials and Methods: We compared 22 musicians and 17 non-musicians on the prototypical event-based timing task of finger tapping and on the typically emergently timed task of circle drawing. For each task, participants first responded in synchrony with a metronome (Paced) and then responded at the same rate without the metronome (Unpaced).
Results: Analyses of the Unpaced phase revealed that non-musicians were more variable in their inter-response intervals for finger tapping compared to circle drawing. Musicians did not differ between the two tasks. Between groups, non-musicians were more variable than musicians for tapping but not for drawing. We were able to show that the differences were due to less timer variability in musicians on the tapping task. Correlational analyses of movement jerk and inter-response interval variability revealed a negative association for tapping and a positive association for drawing in non-musicians only.
Discussion: These results suggest that musical training affects temporal variability in tapping but not drawing. Additionally, musicians and non-musicians may be employing different movement strategies to maintain accurate timing in the two tasks. These findings add to our understanding of how musical training affects timing and support the dissociability of event-based and emergent timing modes.
doi:10.3389/fnhum.2013.00191
PMCID: PMC3653057  PMID: 23717275
music; timing; finger tapping; circle drawing; emergent timing; event-based timing
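The contrast drawn above between "timer variability" and overall inter-response-interval variability is usually estimated, for unpaced tapping, with the Wing-Kristofferson two-level model. The abstract does not spell out the analysis, so the following is only a sketch of that standard decomposition with made-up parameter values, not the authors' code.

    import numpy as np

    def wing_kristofferson(intervals):
        # Model: I_n = C_n + M_(n+1) - M_n, so Var(I) = var_timer + 2*var_motor
        # and the lag-1 autocovariance of the intervals equals -var_motor.
        I = np.asarray(intervals, dtype=float) - np.mean(intervals)
        total_var = np.var(I, ddof=1)
        lag1_cov = np.mean(I[:-1] * I[1:])
        var_motor = max(-lag1_cov, 0.0)            # clamp: the estimate can go negative
        var_timer = max(total_var - 2.0 * var_motor, 0.0)
        return var_timer, var_motor

    rng = np.random.default_rng(0)
    C = 500 + rng.normal(0, 20, size=2000)         # simulated central timer intervals, ms
    M = rng.normal(0, 10, size=2001)               # simulated motor implementation delays, ms
    I = C + np.diff(M)                             # observed inter-tap intervals
    print(wing_kristofferson(I))                   # roughly (400, 100) = (20**2, 10**2)
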
7.  Dynamics of Distraction: Competition among Auditory Streams Modulates Gain and Disrupts Inter-Trial Phase Coherence in the Human Electroencephalogram 
PLoS ONE  2013;8(1):e53953.
Auditory distraction is a failure to maintain focus on a stream of sounds. We investigated the neural correlates of distraction in a selective-listening pitch-discrimination task with high (competing speech) or low (white noise) distraction. High-distraction impaired performance and reduced the N1 peak of the auditory Event-Related Potential evoked by probe tones. In a series of simulations, we explored two theories to account for this effect: disruption of sensory gain or a disruption of inter-trial phase consistency. When compared to these simulations, our data were consistent with both effects of distraction. Distraction reduced the gain of the auditory evoked potential and disrupted the inter-trial phase consistency with which the brain responds to stimulus events. Tones at a non-target, unattended frequency were more susceptible to the effects of distraction than tones within an attended frequency band.
doi:10.1371/journal.pone.0053953
PMCID: PMC3542320  PMID: 23326548
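"Inter-trial phase consistency" in entry 7 refers to inter-trial coherence: the length of the mean unit phase vector across trials at a given frequency, which is 1 when every trial has the same phase and shrinks toward zero when phases are random. The paper's exact pipeline is not given here, so this is only a minimal single-frequency sketch under those assumptions.

    import numpy as np

    def inter_trial_coherence(trials, fs, freq):
        # trials: (n_trials, n_samples) array; fs: sampling rate in Hz.
        # Phase per trial from projection onto one complex sinusoid -- a crude
        # stand-in for the usual wavelet- or Hilbert-based phase estimate.
        n_samples = trials.shape[1]
        t = np.arange(n_samples) / fs
        kernel = np.exp(-2j * np.pi * freq * t)
        phases = np.angle(trials @ kernel)
        return float(np.abs(np.mean(np.exp(1j * phases))))

    fs, freq = 500, 10.0
    t = np.arange(250) / fs
    locked = np.tile(np.sin(2 * np.pi * freq * t), (60, 1))
    rng = np.random.default_rng(1)
    jittered = np.array([np.sin(2 * np.pi * freq * t + p)
                         for p in rng.uniform(0, 2 * np.pi, 60)])
    print(inter_trial_coherence(locked, fs, freq))    # ~1: perfectly phase-locked trials
    print(inter_trial_coherence(jittered, fs, freq))  # small (~1/sqrt(60)): random phases
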
8.  Voice responses to changes in pitch of voice or tone auditory feedback
The present study was undertaken to examine if a subject’s voice F0 responded not only to perturbations in pitch of voice feedback but also to changes in pitch of a side tone presented congruent with voice feedback. Small magnitude brief duration perturbations in pitch of voice or tone auditory feedback were randomly introduced during sustained vowel phonations. Results demonstrated a higher rate and larger magnitude of voice F0 responses to changes in pitch of the voice compared with a triangular-shaped tone (experiment 1) or a pure tone (experiment 2). However, response latencies did not differ across voice or tone conditions. Data suggest that subjects responded to the change in F0 rather than harmonic frequencies of auditory feedback because voice F0 response prevalence, magnitude, or latency did not statistically differ across triangular-shaped tone or pure-tone feedback. Results indicate the audio–vocal system is sensitive to the change in pitch of a variety of sounds, which may represent a flexible system capable of adapting to changes in the subject’s voice. However, lower prevalence and smaller responses to tone pitch-shifted signals suggest that the audio–vocal system may resist changes to the pitch of other environmental sounds when voice feedback is present.
doi:10.1121/1.1849933
PMCID: PMC1351107  PMID: 15759705
9.  Role of auditory feedback in the control of successive keystrokes during piano playing 
The purpose of this study was to elucidate the role of auditory feedback derived from one keystroke in the control of the rhythmicity and velocity of successive keystrokes during piano playing. We examined the effects of transient auditory perturbations with respect to the pitch, loudness, and timing of one tone on subsequent keystrokes while six pianists played short excerpts from three simple musical pieces having different tempi (“event rates”). Immediately after a delay in tone production, the inter-keystroke interval became shorter. This compensatory action depended on the tempo, being most prominent at the medium tempo. This indicates that temporal information provided by auditory feedback is utilized to regulate the timing of movement elements produced in a sequence. We also found that the keystroke velocity changed after the timing, pitch, or loudness of a tone was altered, although the response differed depending on the type of perturbation. While delaying the timing or altering the pitch led to an increase in the velocity, altering the loudness changed the velocity in an inconsistent manner. Furthermore, perturbing a tone elicited by the right hand also affected the rhythmicity and velocity of keystrokes with the left hand, indicating that bimanual coordination of tone production was maintained. Finally, altering the pitch sometimes resulted in striking an incorrect key, mostly in the slow piece, emphasizing the importance of pitch information for accurate planning and execution of sequential piano keystrokes.
doi:10.1007/s00221-010-2307-2
PMCID: PMC3179864  PMID: 20521031
Feedback control; Auditory motor integration; Sequential movements; Bimanual control; Musicians; Pianists; Music
10.  Things are sounding up: Affective influences on auditory tone perception 
Psychonomic bulletin & review  2007;14(3):517-521.
Recent studies have documented robust and intriguing associations between affect and performance in cognitive tasks. The present two experiments sought to extend this line of work with reference to potential cross-modal effects. Specifically, the present studies examined whether word evaluations would bias subsequent judgments of low- and high-pitch tones. Because affective metaphors and related associations consistently indicate that positive is high and negative is low, we predicted and found that positive evaluations biased tone judgment in the direction of high-pitch tones, whereas the opposite was true of negative evaluations. Effects were found on accuracy rates, response biases, and reaction times. These effects occurred despite the irrelevance of prime evaluations to the tone judgment task. In addition to clarifying the nature of these cross-modal associations, the present results further the idea that affective evaluations exert large effects on perceptual judgments related to verticality.
PMCID: PMC2694503  PMID: 17874599
11.  Electromagnetic Correlates of Musical Expertise in Processing of Tone Patterns 
PLoS ONE  2012;7(1):e30171.
Using magnetoencephalography (MEG), we investigated the influence of long term musical training on the processing of partly imagined tone patterns (imagery condition) compared to the same perceived patterns (perceptual condition). The magnetic counterpart of the mismatch negativity (MMNm) was recorded and compared between musicians and non-musicians in order to assess the effect of musical training on the detection of deviants to tone patterns. The results indicated a clear MMNm in the perceptual condition as well as in a simple pitch oddball (control) condition in both groups. However, there was no significant mismatch response in either group in the imagery condition despite above chance behavioral performance in the task of detecting deviant tones. The latency and the laterality of the MMNm in the perceptual condition differed significantly between groups, with an earlier MMNm in musicians, especially in the left hemisphere. In contrast the MMNm amplitudes did not differ significantly between groups. The behavioral results revealed a clear effect of long-term musical training in both experimental conditions. The obtained results represent new evidence that the processing of tone patterns is faster and more strongly lateralized in musically trained subjects, which is consistent with other findings in different paradigms of enhanced auditory neural system functioning due to long-term musical training.
doi:10.1371/journal.pone.0030171
PMCID: PMC3261169  PMID: 22279568
12.  Musicians and tone-language speakers share enhanced brainstem encoding but not perceptual benefits for musical pitch 
Brain and cognition  2011;77(1):1-10.
Behavioral and neurophysiological transfer effects from music experience to language processing are well-established but it is currently unclear whether or not linguistic expertise (e.g., speaking a tone language) benefits music-related processing and its perception. Here, we compare brainstem responses of English-speaking musicians/non-musicians and native speakers of Mandarin Chinese elicited by tuned and detuned musical chords, to determine if enhancements in subcortical processing translate to improvements in the perceptual discrimination of musical pitch. Relative to non-musicians, both musicians and Chinese had stronger brainstem representation of the defining pitches of musical sequences. In contrast, two behavioral pitch discrimination tasks revealed that neither Chinese nor non-musicians were able to discriminate subtle changes in musical pitch with the same accuracy as musicians. Pooled across all listeners, brainstem magnitudes predicted behavioral pitch discrimination performance but considering each group individually, only musicians showed connections between neural and behavioral measures. No brain-behavior correlations were found for tone language speakers or non-musicians. These findings point to a dissociation between subcortical neurophysiological processing and behavioral measures of pitch perception in Chinese listeners. We infer that sensory-level enhancement of musical pitch information yields cognitive-level perceptual benefits only when that information is behaviorally relevant to the listener.
doi:10.1016/j.bandc.2011.07.006
PMCID: PMC3159732  PMID: 21835531
Pitch discrimination; music perception; tone language; auditory evoked potentials; fundamental frequency-following response (FFR); experience-dependent plasticity
13.  Looking for a pattern: An MEG study on the abstract mismatch negativity in musicians and nonmusicians 
BMC Neuroscience  2009;10:42.
Background
The mismatch negativity (MMN) is an early component of event-related potentials/fields, which can be observed in response to violations of regularities in sound sequences. The MMN can be elicited by simple feature (e.g. pitch) deviations in standard oddball paradigms as well as by violations of more complex sequential patterns. By means of magnetoencephalography (MEG) we investigated if a pattern MMN could be elicited based on global rather than local probabilities and if the underlying ability to integrate long sequences of tones is enhanced in musicians compared to nonmusicians.
Results
A pattern MMN was observed in response to violations of a predominant sequential pattern (AAAB) within a standard oddball tone sequence consisting of only two different tones. This pattern MMN was elicited even though the probability of pattern deviants in the sequence was as high as 0.5. Musicians showed more leftward-lateralized pattern MMN responses, which might be due to a stronger specialization of the ability to integrate information in a sequence of tones over a long time range.
Conclusion
The results indicate that auditory grouping and the probability distribution of possible patterns within a sequence influence the expectations about upcoming tones, and that the MMN might also be based on global statistical knowledge instead of a local memory trace. The results also show that auditory grouping based on sequential regularities can occur at a much slower presentation rate than previously presumed, and that probability distributions of possible patterns should be taken into account even for the construction of simple oddball sequences.
doi:10.1186/1471-2202-10-42
PMCID: PMC2683848  PMID: 19405970
14.  Affective attention under cognitive load: reduced emotional biases but emergent anxiety-related costs to inhibitory control 
Trait anxiety is associated with deficits in attentional control, particularly in the ability to inhibit prepotent responses. Here, we investigated this effect while varying the level of cognitive load in a modified antisaccade task that employed emotional facial expressions (neutral, happy, and angry) as targets. Load was manipulated using a secondary auditory task requiring recognition of tones (low load), or recognition of specific tone pitch (high load). Results showed that load increased antisaccade latencies on trials where gaze toward face stimuli should be inhibited. This effect was exacerbated for high anxious individuals. Emotional expression also modulated task performance on antisaccade trials for both high and low anxious participants under low cognitive load, but did not influence performance under high load. Collectively, results (1) suggest that individuals reporting high levels of anxiety are particularly vulnerable to the effects of cognitive load on inhibition, and (2) support recent evidence that loading cognitive processes can reduce emotional influences on attention and cognition.
doi:10.3389/fnhum.2013.00188
PMCID: PMC3652291  PMID: 23717273
cognitive load; trait anxiety; threat processing; visual attention; antisaccade task
15.  Signal-controlled responding 
Pigeons' key pecks were reinforced with grain, then extinguished. An 8-second tone preceded the availability of peck-dependent grain 1 second after tone offset. When a tone signalled grain and an 8-second clicking sound did not, three pigeons pecked during a high percentage of tone periods, but they pecked during a low percentage of click periods. When the roles of the tone and clicking sound were reversed, performance reversed. For other birds, when a key peck during the tone cancelled the availability of grain (omission procedure), the tendency to key peck during the tone decreased some, but still remained high. A third group of pigeons received the omission procedure with the addition that the tone could not end unless 2 seconds had elapsed without a key peck. The pigeons continued to respond in a high percentage of tone periods. The experiments favor an explanation based on the pairing of the tone with a reinforced response, such as Pavlovian conditioning.
doi:10.1901/jeab.1979.31-115
PMCID: PMC1332794  PMID: 16812116
stimulus control; automaintenance; Pavlovian conditioning; key pecking; pigeons
16.  Contextual control of emergent equivalence relations. 
Three college students in Experiment 1 and 1 student in Experiment 2 learned visual conditional discriminations under contextual control by tones; the visual comparison stimulus that was correct with a given sample stimulus depended on whether a high tone or a low tone was present. Two of the subjects in Experiment 1 then demonstrated the emergence of two sets of contextually controlled three-member classes of equivalent stimuli, and the subject in Experiment 2 showed the emergence of contextually controlled four-member classes; the class membership of each stimulus varied as a function of the tones. Class membership was demonstrated by the subjects' performance of new conditional discriminations that they had never been taught directly. In Experiment 2, the procedures were intended to ensure that the tones exerted second-order conditional control and did not simply form compounds with each of the visual stimuli, but the subject's verbal description of the tasks suggested that this intention might not have been successful. It could not be ascertained, therefore, whether the tones exerted contextual control as independent second-order conditional stimuli or simply as common elements of auditory-visual stimulus compounds.
doi:10.1901/jeab.1989.51-29
PMCID: PMC1338890  PMID: 2921586
17.  Adaptation Reveals Multiple Levels of Representation in Auditory Stream Segregation 
When presented with alternating low and high tones, listeners are more likely to perceive 2 separate streams of tones (“streaming”), rather than a single coherent stream, when the frequency separation (Δf) between tones is greater and the number of tone presentations is greater (“buildup”). However, the same large-Δf sequence reduces streaming for subsequent patterns presented after a gap of up to several seconds. Buildup occurs at a level of neural representation with sharp frequency tuning, supporting the theory that streaming is a peripheral phenomenon. Here, we used adaptation to demonstrate that the contextual effect of prior Δf arose from a representation with broad frequency tuning, unlike buildup. Separate adaptation did not occur in a representation of Δf independent of frequency range, suggesting that any frequency-shift detectors undergoing adaptation are also frequency specific. A separate effect of prior perception was observed, dissociating stimulus-related (i.e., Δf) and perception-related (i.e., 1 stream vs. 2 streams) adaptation. Viewing a visual analogue to auditory streaming had no effect on subsequent perception of streaming, suggesting adaptation in auditory-specific brain circuits. These results, along with previous findings on buildup, suggest that processing in at least three levels of auditory neural representation underlies segregation and formation of auditory streams.
doi:10.1037/a0012741
PMCID: PMC2726626  PMID: 19653761
auditory scene analysis; adaptation; buildup; frequency shift detector; cross-modal
18.  Task Difficulty and Performance Induce Diverse Adaptive Patterns in Gain and Shape of Primary Auditory Cortical Receptive Fields 
Neuron  2009;61(3).
Attention is essential for navigating complex acoustic scenes, where the listener seeks to extract a foreground source while suppressing background acoustic clutter. This study explored the neural correlates of this perceptual ability by measuring rapid changes of spectrotemporal receptive fields (STRFs) in primary auditory cortex during detection of a target tone embedded in noise. Compared to responses in the passive state, STRF gain decreased during task performance in most cells. By contrast, STRF shape changes were excitatory and specific, being strongest in cells with best frequencies near the target tone. The net effect of these adaptations was to accentuate the representation of the target tone relative to the noise, by enhancing responses of near-target cells to the tone during high-SNR tasks, while suppressing responses of far-from-target cells to the masking noise in low-SNR tasks. These adaptive STRF changes were largest in high-performance sessions, confirming a close correlation with behavior.
doi:10.1016/j.neuron.2008.12.027
PMCID: PMC3882691  PMID: 19217382
19.  Fine-Grained Pitch Accent and Boundary Tone Labeling with Parametric F0 Features
Motivated by linguistic theories of prosodic categoricity, symbolic representations of prosody have recently attracted the attention of speech technologists. Categorical representations such as ToBI not only bear linguistic relevance, but also have the advantage that they can be easily modeled and integrated within applications. Since manual labeling of these categories is time-consuming and expensive, there has been significant interest in automatic prosody labeling. This paper presents a fine-grained ToBI-style prosody labeling system that makes use of features derived from RFC and TILT parameterization of F0 together with an n-gram prosodic language model for 4-way pitch accent labeling and 2-way boundary tone labeling. For this task, our system achieves pitch accent labeling accuracy of 56.4% and boundary tone labeling accuracy of 67.7% on the Boston University Radio News Corpus.
doi:10.1109/ICASSP.2008.4518667
PMCID: PMC2630521  PMID: 19180228
prosody; pitch accent; boundary tone; ToBI; RFC; TILT
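The RFC and TILT features named above parameterize each F0 event by its rise and fall pieces; in Taylor's Tilt formulation these collapse into an amplitude, a duration, and a tilt value between +1 (pure rise) and -1 (pure fall). The sketch below follows that published formulation as an assumption about the front-end features only; the n-gram prosodic language model and the classifier are not shown, and none of this is the authors' code.

    def tilt_parameters(a_rise, d_rise, a_fall, d_fall):
        # a_*: F0 excursions of the rise/fall parts (absolute values are used);
        # d_*: their durations in seconds.
        a_rise, a_fall = abs(a_rise), abs(a_fall)
        amplitude = a_rise + a_fall                        # overall F0 excursion
        duration = d_rise + d_fall                         # overall event duration
        tilt_amp = (a_rise - a_fall) / amplitude if amplitude else 0.0
        tilt_dur = (d_rise - d_fall) / duration if duration else 0.0
        tilt = 0.5 * (tilt_amp + tilt_dur)                 # +1 = pure rise, -1 = pure fall
        return amplitude, duration, tilt

    # A rise-dominant pitch accent: 30 Hz up over 120 ms, then 10 Hz down over 60 ms.
    print(tilt_parameters(30.0, 0.120, 10.0, 0.060))       # tilt ~ +0.42
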
20.  An fMRI study comparing rhythmic finger tapping in children and adults 
Pediatric Neurology  2012;46(2):94-100.
This study compared brain activations during unpaced rhythmic finger tapping in 12-year-old children with those of adults. The subject pressed a button at a pace initially indicated by a metronome (12 consecutive tones) and then continued for 16 seconds of unpaced tapping to provide an assessment of his/her ability to maintain a steady rhythm. In particular, the analyses focused on the superior vermis of the cerebellum, which is known to play a key role in timing.
Twelve adults and 12 children performed this rhythmic finger tapping task in a 3T scanner. Whole-brain analyses were performed in Brain Voyager with a random effects analysis of variance using the general linear model. A dedicated cerebellar atlas was used to localise cerebellar activations.
As in adults, unpaced rhythmic finger tapping in children showed activations in the primary motor cortex, premotor cortex, and cerebellum. However, overall activation was different in that adults showed much more deactivation in response to the task, particularly in the occipital and frontal cortex. The other main differences were additional recruitment of motor and premotor areas in children compared to adults along with increased activity in the vermal region of the cerebellum.
These findings suggest that the timing component of the unpaced rhythmic finger tapping task is less efficient and automatic in children, who needed to recruit the superior vermis more intensively to maintain the rhythm, even though they performed somewhat more poorly than the adults.
doi:10.1016/j.pediatrneurol.2011.11.019
PMCID: PMC3266619  PMID: 22264703
21.  An Auditory Neural Correlate Suggests a Mechanism Underlying Holistic Pitch Perception 
PLoS ONE  2007;2(4):e369.
Current theories of auditory pitch perception propose that cochlear place (spectral) and activity timing pattern (temporal) information are somehow combined within the brain to produce holistic pitch percepts, yet the neural mechanisms for integrating these two kinds of information remain obscure. To examine this process in more detail, stimuli made up of three pure tones whose components are individually resolved by the peripheral auditory system, but that nonetheless elicit a holistic, “missing fundamental” pitch percept, were played to human listeners. A technique was used to separate neural timing activity related to individual components of the tone complexes from timing activity related to an emergent feature of the complex (the envelope), and the region of the tonotopic map where information could originate from was simultaneously restricted by masking noise. Pitch percepts were mirrored to a very high degree by a simple combination of component-related and envelope-related neural responses with similar timing that originate within higher-frequency regions of the tonotopic map where stimulus components interact. These results suggest a coding scheme for holistic pitches whereby limited regions of the tonotopic map (spectral places) carrying envelope- and component-related activity with similar timing patterns selectively provide a key source of neural pitch information. A similar mechanism of integration between local and emergent object properties may contribute to holistic percepts in a variety of sensory systems.
doi:10.1371/journal.pone.0000369
PMCID: PMC1838520  PMID: 17426817
22.  Pitch discrimination by ferrets for simple and complex sounds 
Although many studies have examined the performance of animals in detecting a frequency change in a sequence of tones, few have measured animals' discrimination of the fundamental frequency (F0) of complex, naturalistic stimuli. Additionally, it is not yet clear if animals perceive the pitch of complex sounds along a continuous, low-to-high scale. Here, four ferrets (Mustela putorius) were trained on a two-alternative forced choice task to discriminate sounds that were higher or lower in F0 than a reference sound, using pure tones and artificial vowels as stimuli. Average Weber fractions for ferrets on this task varied from ~20–80% across references (200–1200 Hz), and these fractions were similar for pure tones and vowels. These thresholds are approximately 10 times higher than those typically reported for other mammals on frequency change detection tasks that use go/no-go designs. Naive human listeners outperformed ferrets on the present task, but they showed similar effects of stimulus type and reference F0. These results suggest that while non-human animals can be trained to label complex sounds as high or low in pitch, this task may be much more difficult for animals than simply detecting a frequency change.
doi:10.1121/1.3179676
PMCID: PMC2784999  PMID: 19739746
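To unpack the units in entry 22: a Weber fraction W relates the just-discriminable change in fundamental frequency to the reference. Purely as an illustrative pairing of numbers drawn from the ranges quoted above, not a value reported for any specific condition:

    W = \frac{\Delta F_0}{F_{0,\mathrm{ref}}}, \qquad
    \Delta F_0 = W \times F_{0,\mathrm{ref}} = 0.20 \times 200~\mathrm{Hz} = 40~\mathrm{Hz}
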
23.  Thai lexical tone perception in native speakers of Thai, English and Mandarin Chinese: An event-related potentials training study 
BMC Neuroscience  2008;9:53.
Background
Tone languages such as Thai and Mandarin Chinese use differences in fundamental frequency (F0, pitch) to distinguish lexical meaning. Previous behavioral studies have shown that native speakers of a non-tone language have difficulty discriminating among tone contrasts and are sensitive to different F0 dimensions than speakers of a tone language. The aim of the present ERP study was to investigate the effect of language background and training on the non-attentive processing of lexical tones. EEG was recorded from 12 adult native speakers of Mandarin Chinese, 12 native speakers of American English, and 11 Thai speakers while they were watching a movie and were presented with multiple tokens of low-falling, mid-level and high-rising Thai lexical tones. High-rising or low-falling tokens were presented as deviants among mid-level standard tokens, and vice versa. EEG data and data from a behavioral discrimination task were collected before and after a two-day perceptual categorization training task.
Results
Behavioral discrimination improved after training in both the Chinese and the English groups. Low-falling tone deviants versus standards elicited a mismatch negativity (MMN) in all language groups. Before, but not after, training, the English speakers showed a larger MMN compared to the Chinese, even though English speakers performed worst in the behavioral tasks. The MMN was followed by a late negativity, which became smaller with improved discrimination. The high-rising deviants versus standards elicited a late negativity, which was left-lateralized only in the English and Chinese groups.
Conclusion
Results showed that native speakers of English, Chinese and Thai recruited largely similar mechanisms when non-attentively processing Thai lexical tones. However, native Thai speakers differed from the Chinese and English speakers with respect to the processing of late F0 contour differences (high-rising versus mid-level tones). In addition, native speakers of a non-tone language (English) were initially more sensitive to F0 onset differences (low-falling versus mid-level contrast), which was suppressed as a result of training. This result converges with results from previous behavioral studies and supports the view that attentive as well as non-attentive processing of F0 contrasts is affected by language background, but is malleable even in adult learners.
doi:10.1186/1471-2202-9-53
PMCID: PMC2483720  PMID: 18573210
24.  Motor Timing Deficits in Sequential Movements in Parkinson Disease Are Related to Action Planning: A Motor Imagery Study 
PLoS ONE  2013;8(9):e75454.
Timing of sequential movements is altered in Parkinson disease (PD). Whether timing deficits in internally generated sequential movements in PD also depend on difficulties in motor planning, rather than merely on a defective ability to physically perform the planned movement, remains unclear. To address this issue, we adopted a modified version of an established test for motor timing, the synchronization–continuation paradigm, by introducing a motor imagery task. Motor imagery is thought to involve mainly processes of movement preparation, with reduced involvement of end-stage, execution-related processes. Fourteen patients with PD and twelve matched healthy volunteers were asked to tap in synchrony with a metronome cue (SYNC) and then, when the tone stopped, to keep tapping while trying to maintain the same rhythm (CONT-EXE) or to imagine tapping at the same rhythm rather than actually performing it (CONT-MI). We tested both a sub-second and a supra-second inter-stimulus interval between the cues. Performance was recorded using a sensor-engineered glove and analyzed by measuring the temporal error and the interval reproduction accuracy index. PD patients were less accurate than healthy subjects in the supra-second time reproduction task when performing both continuation tasks (CONT-MI and CONT-EXE), whereas no difference was detected in the synchronization task or in any task involving a sub-second interval. Our findings suggest that PD patients exhibit a selective deficit in motor timing for sequential movements separated by a supra-second interval, and that this deficit may be explained by a defect of motor planning. Further, we propose that difficulties in motor planning in PD are severe enough to also affect motor performance in the supra-second time reproduction task.
doi:10.1371/journal.pone.0075454
PMCID: PMC3781049  PMID: 24086534
25.  Relative influence of musical and linguistic experience on early cortical processing of pitch contours 
Brain and language  2008;108(1):1-9.
To assess the domain specificity of experience-dependent pitch representation, we evaluated the mismatch negativity (MMN) and discrimination judgments of English musicians, English nonmusicians, and native Chinese speakers for pitch contours presented in a non-speech context using a passive oddball paradigm. Stimuli consisted of homologues of Mandarin high rising (T2) and high level (T1) tones, and a linear rising ramp (T2L). One condition involved a between-category contrast (T1/T2), the other a within-category contrast (T2L/T2). Irrespective of condition, musicians and Chinese listeners showed larger MMN responses than nonmusicians, and the Chinese group showed larger responses than the musicians. The Chinese listeners, however, were less accurate than nonnatives in overt discrimination of T2L and T2. Taken together, these findings suggest that experience-dependent effects on pitch contours are domain-general and not driven by linguistic categories. Yet specific differences in long-term experience in pitch processing between domains (music vs. language) may lead to gradations in cortical plasticity to pitch contours.
doi:10.1016/j.bandl.2008.02.001
PMCID: PMC2670545  PMID: 18343493
Experience-dependent plasticity; mismatch negativity (MMN); music; language; nonspeech stimuli; iterated rippled noise (IRN); pitch; lexical tone; Mandarin; speech perception
