Goal-directed, coordinated movements in humans emerge from a variety of constraints that range from 'high-level' cognitive strategies based on perception of the task to 'low-level' neuromuscular-skeletal factors such as differential contributions to coordination from flexor and extensor muscles. There has been a tendency in the literature to dichotomize these sources of constraint, favouring one or the other rather than recognizing and understanding their mutual interplay. In this experiment, subjects were required to coordinate rhythmic flexion and extension movements with an auditory metronome, the rate of which was systematically increased. When subjects started in extension on the beat of the metronome, there was a small tendency to switch to flexion at higher rates, but not vice versa. When subjects were asked to contact a physical stop, the location of which was either coincident with or counterphase to the auditory stimulus, two effects occurred. When haptic contact was coincident with sound, coordination was stabilized for both flexion and extension. When haptic contact was counterphase to the metronome, coordination was actually destabilized, with transitions occurring from both extension to flexion on the beat and from flexion to extension on the beat. These results reveal the complementary nature of strategic and neuromuscular factors in sensorimotor coordination. They also suggest the presence of a multimodal neural integration process - which is parametrizable by rate and context - in which intentional movement, touch and sound are bound into a single, coherent unit.
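The rate-induced transitions described above (switching toward flexion-on-the-beat as the metronome speeds up) are classically modeled by the Haken–Kelso–Bunz (HKB) equation for relative phase. The sketch below is an illustration of that standard model, not the authors' analysis; the coupling parameters `a` and `b` and the integration settings are assumptions.

```python
import numpy as np

def hkb_drift(phi, a, b):
    # HKB relative-phase dynamics: dphi/dt = -a*sin(phi) - 2b*sin(2*phi)
    return -a * np.sin(phi) - 2.0 * b * np.sin(2.0 * phi)

def settle(phi0, a, b, dt=0.01, steps=20000):
    # Euler-integrate the relative phase until it settles on an attractor
    phi = phi0
    for _ in range(steps):
        phi += dt * hkb_drift(phi, a, b)
    return float(np.mod(phi, 2.0 * np.pi))

# slow movement (b/a large): anti-phase (phi near pi) remains stable
print(settle(np.pi - 0.1, a=1.0, b=1.0))  # stays near pi
# fast movement (b/a small): anti-phase loses stability, phase drifts to in-phase
print(settle(np.pi - 0.1, a=1.0, b=0.1))  # drifts toward 0
```

Anti-phase is stable only while b/a > 1/4 (the sign of a - 4b at phi = pi); increasing movement rate is modeled as shrinking b/a, which removes the anti-phase attractor and leaves only in-phase.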
"Natural" crossmodal correspondences, such as the spontaneous tendency to associate high pitches with high spatial locations, are often hypothesized to occur preattentively and independently of task instructions (top-down attention). Here, we investigate bottom-up attentional engagement by using emotional scenes that are known to naturally and reflexively engage attentional resources. We presented emotional (pleasant and unpleasant) or neutral pictures either below or above a fixation cross, while participants were required to discriminate between a high or a low pitch tone (experiment 1). Results showed that despite a robust crossmodal attentional capture of task-irrelevant emotional pictures, the general advantage in classifying the tones for congruent over incongruent visual-auditory stimuli was similar for emotional and neutral pictures. On the other hand, when picture position was task-relevant (experiment 2), task-irrelevant tones did not interact with pictures with regard to their combination of pitch and visual vertical spatial position, but instead they were effective in minimizing the interference effect of emotional picture processing on the ongoing task. These results provide constraints on our current understanding of natural crossmodal correspondences.
Recent behavioral neuroscience research has revealed that elementary reactive behavior can be improved under cross-modal sensory interactions thanks to underlying multisensory integration mechanisms. Can this benefit be generalized to the ongoing coordination of movements under severe physical constraints? We chose a juggling task to examine this question. A central issue, well known in juggling, lies in establishing and maintaining a specific temporal coordination among balls, hands, eyes and posture. Here, we tested whether providing additional timing information about the ball and hand motions, via external periodic auditory and tactile stimulation (the latter presented at the wrists), improved the behavior of jugglers. One specific combination of auditory and tactile metronomes led to a decrease in the spatiotemporal variability of the jugglers' performance: a simple sound associated with left and right tactile cues presented in antiphase to each other, which corresponded to the temporal pattern of hand movements in the juggling task. By contrast, no improvement was obtained with the other auditory and tactile combinations. We even found degraded performance when tactile events were presented alone. The nervous system thus appears able to integrate efficiently environmental information carried by different sensory modalities, but only if the information matches specific features of the coordination pattern. We discuss the possible implications of these results for understanding the neural integration process involved in audio-tactile interaction in the context of complex voluntary movement, also considering the well-known gating effect of movement on vibrotactile perception.
We have proposed that the stability of bimanual coordination is influenced by the complexity of the representation of the task goals. Here we present two experiments exploring this hypothesis. First, we examined whether a temporal event structure is present in continuous movements by having participants vocalize while producing bimanual circling movements. Participants tended to vocalize once per movement cycle when moving in-phase. In contrast, vocalizations were not synchronized with anti-phase movements. While the in-phase result is expected, the latter suggests that anti-phase continuous movements lack an event structure. Second, we examined the event structure of movements marked by salient turn-around points. Participants made bimanual wrist flexion movements and were instructed to move ‘in synchrony’ with a metronome, without specifying how they should couple the movements to the metronome. During in-phase movements, participants synchronized one hand cycle with every metronome beat; during anti-phase movements, participants synchronized flexion of one hand with one metronome beat and extension of the other hand with the next beat. The results are consistent with the hypothesis that the instability of anti-phase movements is related to their more complex (or absent) event representation relative to that associated with in-phase movements.
coordination; constraints; in-phase; anti-phase; stability
One important principle of object processing is exclusive allocation. Any part of the sensory input, including the border between two objects, can only belong to one object at a time. We tested whether tones forming a spectro-temporal border between two sound patterns can belong to both patterns at the same time. Sequences were composed of low-, intermediate- and high-pitched tones. Tones were delivered with short onset-to-onset intervals causing the high and low tones to automatically form separate low and high sound streams. The intermediate-pitch tones could be perceived as part of either one or the other stream, but not both streams at the same time. Thus these tones formed a pitch 'border' between the two streams. The tones were presented in a fixed, cyclically repeating order. Linking the intermediate-pitch tones with the high or the low tones resulted in the perception of two different repeating tonal patterns. Participants were instructed to maintain perception of one of the two tone patterns throughout the stimulus sequences. Occasional changes violated either the selected or the alternative tone pattern, but not both at the same time. We found that only violations of the selected pattern elicited the mismatch negativity event-related potential, indicating that only this pattern was represented in the auditory system. This result suggests that individual sounds are processed as part of only one auditory pattern at a time. Thus tones forming a spectro-temporal border are exclusively assigned to one sound object at any given time, as are spatio-temporal borders in vision.
auditory sensory memory; auditory stream segregation; event-related potentials; implicit memory; spectro-temporal processing
Introduction: Musical performance is thought to rely predominantly on event-based timing involving a clock-like neural process and an explicit internal representation of the time interval. Some aspects of musical performance may rely on emergent timing, which is established through the optimization of movement kinematics, and can be maintained without reference to any explicit representation of the time interval. We predicted that musical training would have its largest effect on event-based timing, supporting the dissociability of these timing processes and the dominance of event-based timing in musical performance.
Materials and Methods: We compared 22 musicians and 17 non-musicians on the prototypical event-based timing task of finger tapping and on the typically emergently timed task of circle drawing. For each task, participants first responded in synchrony with a metronome (Paced) and then responded at the same rate without the metronome (Unpaced).
Results: Analyses of the Unpaced phase revealed that non-musicians were more variable in their inter-response intervals for finger tapping compared to circle drawing. Musicians did not differ between the two tasks. Between groups, non-musicians were more variable than musicians for tapping but not for drawing. We were able to show that the differences were due to less timer variability in musicians on the tapping task. Correlational analyses of movement jerk and inter-response interval variability revealed a negative association for tapping and a positive association for drawing in non-musicians only.
Discussion: These results suggest that musical training affects temporal variability in tapping but not drawing. Additionally, musicians and non-musicians may be employing different movement strategies to maintain accurate timing in the two tasks. These findings add to our understanding of how musical training affects timing and support the dissociability of event-based and emergent timing modes.
music; timing; finger tapping; circle drawing; emergent timing; event-based timing
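The "timer variability" decomposition reported in the Results of the tapping study above is classically obtained with the Wing–Kristofferson two-level model, which separates central-clock variance from motor-implementation variance using the lag-1 autocovariance of inter-response intervals. A minimal sketch, assuming that standard model; the simulated clock and motor parameters are illustrative, not the study's data.

```python
import numpy as np

def wing_kristofferson(itis):
    """Wing-Kristofferson decomposition of inter-response intervals (ITIs).
    Model: ITI_n = C_n + M_{n+1} - M_n, with independent clock (C) and
    motor-delay (M) noise, so var(ITI) = var_clock + 2*var_motor and
    the lag-1 autocovariance equals -var_motor."""
    d = np.asarray(itis, dtype=float)
    d = d - d.mean()
    acov1 = np.mean(d[:-1] * d[1:])           # lag-1 autocovariance
    var_motor = max(-acov1, 0.0)
    var_clock = max(d.var() - 2.0 * var_motor, 0.0)
    return var_clock, var_motor

# simulated tapper: 500-ms central clock (SD 20 ms), motor delays (SD 10 ms)
rng = np.random.default_rng(0)
n = 5000
clock = rng.normal(500.0, 20.0, n)
motor = rng.normal(0.0, 10.0, n + 1)
itis = clock + motor[1:] - motor[:-1]
var_clock, var_motor = wing_kristofferson(itis)
print(var_clock, var_motor)  # close to 400 (= 20^2) and 100 (= 10^2)
```

The negative lag-1 autocovariance arises because a late motor delay lengthens one interval and shortens the next; a "less variable timer" in this framework means a smaller recovered var_clock.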
Auditory distraction is a failure to maintain focus on a stream of sounds. We investigated the neural correlates of distraction in a selective-listening pitch-discrimination task with high (competing speech) or low (white noise) distraction. High distraction impaired performance and reduced the N1 peak of the auditory event-related potential evoked by probe tones. In a series of simulations, we explored two theories to account for this effect: a disruption of sensory gain or a disruption of inter-trial phase consistency. Compared against these simulations, our data were consistent with both effects of distraction: distraction reduced the gain of the auditory evoked potential and disrupted the inter-trial phase consistency with which the brain responds to stimulus events. Tones at a non-target, unattended frequency were more susceptible to the effects of distraction than tones within the attended frequency band.
The present study was undertaken to examine whether a subject's voice F0 responds not only to perturbations in the pitch of voice feedback but also to changes in the pitch of a side tone presented congruently with voice feedback. Small-magnitude, brief-duration perturbations in the pitch of voice or tone auditory feedback were randomly introduced during sustained vowel phonations. Results demonstrated a higher rate and larger magnitude of voice F0 responses to changes in the pitch of the voice compared with a triangular-shaped tone (experiment 1) or a pure tone (experiment 2). However, response latencies did not differ across voice and tone conditions. The data suggest that subjects responded to the change in F0 rather than to harmonic frequencies of the auditory feedback, because voice F0 response prevalence, magnitude, and latency did not differ statistically between triangular-shaped-tone and pure-tone feedback. Results indicate that the audio–vocal system is sensitive to changes in the pitch of a variety of sounds, which may represent a flexible system capable of adapting to changes in the subject's voice. However, the lower prevalence and smaller magnitude of responses to pitch-shifted tones suggest that the audio–vocal system may resist changes in the pitch of other environmental sounds when voice feedback is present.
The purpose of this study was to elucidate the role of auditory feedback derived from one keystroke in the control of the rhythmicity and velocity of successive keystrokes during piano playing. We examined the effects of transient auditory perturbations with respect to the pitch, loudness, and timing of one tone on subsequent keystrokes while six pianists played short excerpts from three simple musical pieces having different tempi (“event rates”). Immediately after a delay in tone production, the inter-keystroke interval became shorter. This compensatory action depended on the tempo, being most prominent at the medium tempo. This indicates that temporal information provided by auditory feedback is utilized to regulate the timing of movement elements produced in a sequence. We also found that the keystroke velocity changed after the timing, pitch, or loudness of a tone was altered, although the response differed depending on the type of perturbation. While delaying the timing or altering the pitch led to an increase in the velocity, altering the loudness changed the velocity in an inconsistent manner. Furthermore, perturbing a tone elicited by the right hand also affected the rhythmicity and velocity of keystrokes with the left hand, indicating that bimanual coordination of tone production was maintained. Finally, altering the pitch sometimes resulted in striking an incorrect key, mostly in the slow piece, emphasizing the importance of pitch information for accurate planning and execution of sequential piano keystrokes.
Feedback control; Auditory motor integration; Sequential movements; Bimanual control; Musicians; Pianists; Music
Recent studies have documented robust and intriguing associations between affect and performance in cognitive tasks. The present two experiments sought to extend this line of work with reference to potential cross-modal effects. Specifically, the present studies examined whether word evaluations would bias subsequent judgments of low- and high-pitch tones. Because affective metaphors and related associations consistently indicate that positive is high and negative is low, we predicted and found that positive evaluations biased tone judgment in the direction of high-pitch tones, whereas the opposite was true of negative evaluations. Effects were found on accuracy rates, response biases, and reaction times. These effects occurred despite the irrelevance of prime evaluations to the tone judgment task. In addition to clarifying the nature of these cross-modal associations, the present results further the idea that affective evaluations exert large effects on perceptual judgments related to verticality.
Using magnetoencephalography (MEG), we investigated the influence of long-term musical training on the processing of partly imagined tone patterns (imagery condition) compared to the same perceived patterns (perceptual condition). The magnetic counterpart of the mismatch negativity (MMNm) was recorded and compared between musicians and non-musicians in order to assess the effect of musical training on the detection of deviants to tone patterns. The results indicated a clear MMNm in the perceptual condition as well as in a simple pitch oddball (control) condition in both groups. However, there was no significant mismatch response in either group in the imagery condition, despite above-chance behavioral performance in the task of detecting deviant tones. The latency and the laterality of the MMNm in the perceptual condition differed significantly between groups, with an earlier MMNm in musicians, especially in the left hemisphere. In contrast, the MMNm amplitudes did not differ significantly between groups. The behavioral results revealed a clear effect of long-term musical training in both experimental conditions. The obtained results represent new evidence that the processing of tone patterns is faster and more strongly lateralized in musically trained subjects, which is consistent with other findings, across different paradigms, of enhanced auditory neural system functioning due to long-term musical training.
Behavioral and neurophysiological transfer effects from music experience to language processing are well established, but it is currently unclear whether linguistic expertise (e.g., speaking a tone language) benefits music-related processing and perception. Here, we compare brainstem responses of English-speaking musicians/non-musicians and native speakers of Mandarin Chinese elicited by tuned and detuned musical chords, to determine if enhancements in subcortical processing translate to improvements in the perceptual discrimination of musical pitch. Relative to non-musicians, both musicians and Chinese had stronger brainstem representation of the defining pitches of musical sequences. In contrast, two behavioral pitch discrimination tasks revealed that neither Chinese nor non-musicians were able to discriminate subtle changes in musical pitch with the same accuracy as musicians. Pooled across all listeners, brainstem magnitudes predicted behavioral pitch discrimination performance, but considering each group individually, only musicians showed connections between neural and behavioral measures. No brain-behavior correlations were found for tone language speakers or non-musicians. These findings point to a dissociation between subcortical neurophysiological processing and behavioral measures of pitch perception in Chinese listeners. We infer that sensory-level enhancement of musical pitch information yields cognitive-level perceptual benefits only when that information is behaviorally relevant to the listener.
Pitch discrimination; music perception; tone language; auditory evoked potentials; fundamental frequency-following response (FFR); experience-dependent plasticity
The mismatch negativity (MMN) is an early component of event-related potentials/fields, which can be observed in response to violations of regularities in sound sequences. The MMN can be elicited by simple feature (e.g. pitch) deviations in standard oddball paradigms as well as by violations of more complex sequential patterns. By means of magnetoencephalography (MEG), we investigated whether a pattern MMN could be elicited on the basis of global rather than local probabilities, and whether the underlying ability to integrate long sequences of tones is enhanced in musicians compared with nonmusicians.
A pattern MMN was observed in response to violations of a predominant sequential pattern (AAAB) within a standard oddball tone sequence consisting of only two different tones. This pattern MMN was elicited even though the probability of pattern deviants in the sequence was as high as 0.5. Musicians showed more leftward-lateralized pattern MMN responses, which might be due to a stronger specialization of the ability to integrate information in a sequence of tones over a long time range.
The results indicate that auditory grouping and the probability distribution of possible patterns within a sequence influence the expectations about upcoming tones, and that the MMN might also be based on global statistical knowledge instead of a local memory trace. The results also show that auditory grouping based on sequential regularities can occur at a much slower presentation rate than previously presumed, and that probability distributions of possible patterns should be taken into account even for the construction of simple oddball sequences.
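To make the oddball design above concrete, here is a minimal sketch of how a two-tone sequence with a predominant AAAB pattern and a chosen pattern-deviant probability could be generated. The deviant group ending in A, and the function name, are assumptions for illustration; the study's exact deviant structure may differ.

```python
import random

def make_pattern_sequence(n_groups, p_deviant, seed=0):
    """Build a two-tone sequence out of four-tone groups.
    The standard group is AAAB; a deviant group (assumed here) replaces
    the final B with A, so only tones A and B ever occur, yet the
    predominant AAAB pattern is violated with probability p_deviant."""
    rng = random.Random(seed)
    seq, labels = [], []
    for _ in range(n_groups):
        if rng.random() < p_deviant:
            seq += ["A", "A", "A", "A"]   # pattern deviant
            labels.append("deviant")
        else:
            seq += ["A", "A", "A", "B"]   # standard AAAB pattern
            labels.append("standard")
    return seq, labels

seq, labels = make_pattern_sequence(1000, 0.5)
# with p_deviant = 0.5 the deviant pattern is as frequent as the standard,
# yet tone B remains locally predictable only via the global pattern
print(labels.count("deviant") / len(labels))  # near 0.5
```

The point of such a construction is exactly the one the abstract makes: a pattern MMN at p = 0.5 cannot rest on a simple local memory trace of rare tones, since neither pattern is rare.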
Trait anxiety is associated with deficits in attentional control, particularly in the ability to inhibit prepotent responses. Here, we investigated this effect while varying the level of cognitive load in a modified antisaccade task that employed emotional facial expressions (neutral, happy, and angry) as targets. Load was manipulated using a secondary auditory task requiring recognition of tones (low load), or recognition of specific tone pitch (high load). Results showed that load increased antisaccade latencies on trials where gaze toward face stimuli should be inhibited. This effect was exacerbated for high anxious individuals. Emotional expression also modulated task performance on antisaccade trials for both high and low anxious participants under low cognitive load, but did not influence performance under high load. Collectively, results (1) suggest that individuals reporting high levels of anxiety are particularly vulnerable to the effects of cognitive load on inhibition, and (2) support recent evidence that loading cognitive processes can reduce emotional influences on attention and cognition.
cognitive load; trait anxiety; threat processing; visual attention; antisaccade task
Pigeons' key pecks were reinforced with grain, then extinguished. An 8-second tone preceded the availability of peck-dependent grain 1 second after tone offset. When a tone signalled grain and an 8-second clicking sound did not, three pigeons pecked during a high percentage of tone periods but during a low percentage of click periods. When the roles of the tone and clicking sound were reversed, performance reversed. For other birds, when a key peck during the tone cancelled the availability of grain (omission procedure), the tendency to key peck during the tone decreased somewhat but remained high. A third group of pigeons received the omission procedure with the addition that the tone could not end unless 2 seconds had elapsed without a key peck. These pigeons continued to respond in a high percentage of tone periods. The experiments favor an explanation based on the pairing of the tone with a reinforced response, such as Pavlovian conditioning.
stimulus control; automaintenance; Pavlovian conditioning; key pecking; pigeons
Three college students in Experiment 1 and 1 student in Experiment 2 learned visual conditional discriminations under contextual control by tones; the visual comparison stimulus that was correct with a given sample stimulus depended on whether a high tone or a low tone was present. Two of the subjects in Experiment 1 then demonstrated the emergence of two sets of contextually controlled three-member classes of equivalent stimuli, and the subject in Experiment 2 showed the emergence of contextually controlled four-member classes; the class membership of each stimulus varied as a function of the tones. Class membership was demonstrated by the subjects' performance of new conditional discriminations that they had never been taught directly. In Experiment 2, the procedures were intended to ensure that the tones exerted second-order conditional control and did not simply form compounds with each of the visual stimuli, but the subject's verbal description of the tasks suggested that this intention might not have been successful. It could not be ascertained, therefore, whether the tones exerted contextual control as independent second-order conditional stimuli or simply as common elements of auditory-visual stimulus compounds.
When presented with alternating low and high tones, listeners are more likely to perceive 2 separate streams of tones (“streaming”), rather than a single coherent stream, when the frequency separation (Δf) between tones is greater and the number of tone presentations is greater (“buildup”). However, the same large-Δf sequence reduces streaming for subsequent patterns presented after a gap of up to several seconds. Buildup occurs at a level of neural representation with sharp frequency tuning, supporting the theory that streaming is a peripheral phenomenon. Here, we used adaptation to demonstrate that the contextual effect of prior Δf arose from a representation with broad frequency tuning, unlike buildup. Separate adaptation did not occur in a representation of Δf independent of frequency range, suggesting that any frequency-shift detectors undergoing adaptation are also frequency specific. A separate effect of prior perception was observed, dissociating stimulus-related (i.e., Δf) and perception-related (i.e., 1 stream vs. 2 streams) adaptation. Viewing a visual analogue to auditory streaming had no effect on subsequent perception of streaming, suggesting adaptation in auditory-specific brain circuits. These results, along with previous findings on buildup, suggest that processing in at least three levels of auditory neural representation underlies segregation and formation of auditory streams.
auditory scene analysis; adaptation; buildup; frequency shift detector; cross-modal
Attention is essential for navigating complex acoustic scenes, in which the listener seeks to extract a foreground source while suppressing background acoustic clutter. This study explored the neural correlates of this perceptual ability by measuring rapid changes of spectrotemporal receptive fields (STRFs) in primary auditory cortex during detection of a target tone embedded in noise. Compared to responses in the passive state, STRF gain decreased during task performance in most cells. By contrast, STRF shape changes were excitatory and specific, being strongest in cells with best frequencies near the target tone. The net effect of these adaptations was to accentuate the representation of the target tone relative to the noise: responses of near-target cells to the tone were enhanced during high-SNR tasks, while responses of far-from-target cells to the masking noise were suppressed in low-SNR tasks. These adaptive STRF changes were largest in high-performance sessions, confirming a close correlation with behavior.
Motivated by linguistic theories of prosodic categoricity, symbolic representations of prosody have recently attracted the attention of speech technologists. Categorical representations such as ToBI not only bear linguistic relevance, but also have the advantage that they can be easily modeled and integrated within applications. Since manual labeling of these categories is time-consuming and expensive, there has been significant interest in automatic prosody labeling. This paper presents a fine-grained ToBI-style prosody labeling system that makes use of features derived from RFC and TILT parameterization of F0, together with an n-gram prosodic language model, for 4-way pitch accent labeling and 2-way boundary tone labeling. On this task, our system achieves a pitch accent labeling accuracy of 56.4% and a boundary tone labeling accuracy of 67.7% on the Boston University Radio News Corpus.
prosody; pitch accent; boundary tone; ToBI; RFC; TILT
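The TILT parameterization used as a feature source above compresses an F0 rise–fall event into an amplitude, a duration, and a single shape parameter (+1 for a pure rise, -1 for a pure fall, 0 for a symmetric rise–fall). A minimal sketch of the standard TILT formulas; the helper name and example values are illustrative.

```python
def tilt_params(a_rise, a_fall, d_rise, d_fall):
    """Taylor's TILT parameterization of an F0 event.
    a_rise/a_fall: F0 excursions (Hz) of the rise and fall parts;
    d_rise/d_fall: their durations (s). Returns (amplitude, duration, tilt),
    where tilt averages the amplitude- and duration-based asymmetries."""
    amp = abs(a_rise) + abs(a_fall)
    dur = d_rise + d_fall
    t_amp = (abs(a_rise) - abs(a_fall)) / amp
    t_dur = (d_rise - d_fall) / dur
    return amp, dur, (t_amp + t_dur) / 2.0

# symmetric rise-fall accent: 30 Hz up over 0.1 s, then 30 Hz down over 0.1 s
print(tilt_params(30.0, -30.0, 0.1, 0.1))  # (60.0, 0.2, 0.0)
# pure rise (boundary-tone-like): tilt goes to +1
print(tilt_params(30.0, 0.0, 0.1, 0.0))    # (30.0, 0.1, 1.0)
```

Because tilt collapses event shape into one bounded number, it is convenient as a continuous feature for classifiers of the kind the paper describes.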
This study compared brain activations during unpaced rhythmic finger tapping in 12-year-old children with those of adults. Subjects pressed a button at a pace initially indicated by a metronome (12 consecutive tones) and then continued for 16 seconds of unpaced tapping, providing an assessment of their ability to maintain a steady rhythm. In particular, the analyses focused on the superior vermis of the cerebellum, which is known to play a key role in timing.
Twelve adults and twelve children performed this rhythmic finger tapping task in a 3T scanner. Whole-brain analyses were performed in BrainVoyager with a random-effects analysis of variance using the general linear model. A dedicated cerebellar atlas was used to localise cerebellar activations.
As in adults, unpaced rhythmic finger tapping in children showed activations in the primary motor cortex, premotor cortex, and cerebellum. However, overall activation was different in that adults showed much more deactivation in response to the task, particularly in the occipital and frontal cortex. The other main differences were additional recruitment of motor and premotor areas in children compared to adults along with increased activity in the vermal region of the cerebellum.
These findings suggest that the timing component of the unpaced rhythmic finger tapping task is less efficient and automatic in children, who needed to recruit the superior vermis more intensively to maintain the rhythm, even though they performed somewhat more poorly than the adults.
Current theories of auditory pitch perception propose that cochlear place (spectral) and activity timing pattern (temporal) information are somehow combined within the brain to produce holistic pitch percepts, yet the neural mechanisms for integrating these two kinds of information remain obscure. To examine this process in more detail, stimuli made up of three pure tones whose components are individually resolved by the peripheral auditory system, but that nonetheless elicit a holistic, “missing fundamental” pitch percept, were played to human listeners. A technique was used to separate neural timing activity related to individual components of the tone complexes from timing activity related to an emergent feature of the complex (the envelope), and the region of the tonotopic map where information could originate from was simultaneously restricted by masking noise. Pitch percepts were mirrored to a very high degree by a simple combination of component-related and envelope-related neural responses with similar timing that originate within higher-frequency regions of the tonotopic map where stimulus components interact. These results suggest a coding scheme for holistic pitches whereby limited regions of the tonotopic map (spectral places) carrying envelope- and component-related activity with similar timing patterns selectively provide a key source of neural pitch information. A similar mechanism of integration between local and emergent object properties may contribute to holistic percepts in a variety of sensory systems.
Although many studies have examined the performance of animals in detecting a frequency change in a sequence of tones, few have measured animals' discrimination of the fundamental frequency (F0) of complex, naturalistic stimuli. Additionally, it is not yet clear whether animals perceive the pitch of complex sounds along a continuous, low-to-high scale. Here, four ferrets (Mustela putorius) were trained on a two-alternative forced choice task to discriminate sounds that were higher or lower in F0 than a reference sound, using pure tones and artificial vowels as stimuli. Average Weber fractions for ferrets on this task varied from ~20–80% across references (200–1200 Hz), and these fractions were similar for pure tones and vowels. These thresholds are approximately 10 times higher than those typically reported for other mammals on frequency change detection tasks that use go/no-go designs. Naive human listeners outperformed ferrets on the present task, but they showed similar effects of stimulus type and reference F0. These results suggest that while non-human animals can be trained to label complex sounds as high or low in pitch, this task may be much more difficult for animals than simply detecting a frequency change.
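A Weber fraction here is simply the just-noticeable change in F0 expressed as a proportion of the reference. A quick sketch; the function name and the example thresholds are illustrative, chosen to echo the ~20–80% ferret range and the far smaller human thresholds discussed above.

```python
def weber_fraction(f0_ref, f0_just_discriminable):
    """Weber fraction: smallest reliably discriminated F0 change,
    expressed relative to the reference F0 (both in Hz)."""
    return (f0_just_discriminable - f0_ref) / f0_ref

# a ferret-like threshold: telling 280 Hz from a 200-Hz reference -> 40%
print(weber_fraction(200.0, 280.0))  # 0.4
# a far finer, human-like threshold: 202 Hz vs 200 Hz -> 1%
print(weber_fraction(200.0, 202.0))  # 0.01
```

Reporting thresholds as Weber fractions lets performance be compared across reference F0s (200–1200 Hz here) on a common relative scale.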
Tone languages such as Thai and Mandarin Chinese use differences in fundamental frequency (F0, pitch) to distinguish lexical meaning. Previous behavioral studies have shown that native speakers of a non-tone language have difficulty discriminating among tone contrasts and are sensitive to different F0 dimensions than speakers of a tone language. The aim of the present ERP study was to investigate the effect of language background and training on the non-attentive processing of lexical tones. EEG was recorded from 12 adult native speakers of Mandarin Chinese, 12 native speakers of American English, and 11 Thai speakers while they were watching a movie and were presented with multiple tokens of low-falling, mid-level and high-rising Thai lexical tones. High-rising or low-falling tokens were presented as deviants among mid-level standard tokens, and vice versa. EEG data and data from a behavioral discrimination task were collected before and after a two-day perceptual categorization training task.
Behavioral discrimination improved after training in both the Chinese and the English groups. Low-falling tone deviants versus standards elicited a mismatch negativity (MMN) in all language groups. Before, but not after, training, the English speakers showed a larger MMN than the Chinese, even though the English speakers performed worst in the behavioral tasks. The MMN was followed by a late negativity, which became smaller with improved discrimination. The high-rising deviants versus standards elicited a late negativity, which was left-lateralized only in the English and Chinese groups.
Results showed that native speakers of English, Chinese and Thai recruited largely similar mechanisms when non-attentively processing Thai lexical tones. However, native Thai speakers differed from the Chinese and English speakers with respect to the processing of late F0 contour differences (high-rising versus mid-level tones). In addition, native speakers of a non-tone language (English) were initially more sensitive to F0 onset differences (low-falling versus mid-level contrast), which was suppressed as a result of training. This result converges with results from previous behavioral studies and supports the view that attentive as well as non-attentive processing of F0 contrasts is affected by language background, but is malleable even in adult learners.
Timing of sequential movements is altered in Parkinson disease (PD). Whether timing deficits in internally generated sequential movements in PD also depend on difficulties in motor planning, rather than merely on a defective ability to materially perform the planned movement, remains unclear. To address this issue, we adopted a modified version of an established test for motor timing, i.e. the synchronization–continuation paradigm, by introducing a motor imagery task. Motor imagery is thought to involve mainly processes of movement preparation, with reduced involvement of end-stage movement execution-related processes. Fourteen patients with PD and twelve matched healthy volunteers were asked to tap in synchrony with a metronome cue (SYNC) and then, when the tone stopped, to keep tapping, trying to maintain the same rhythm (CONT-EXE) or to imagine tapping at the same rhythm rather than actually performing it (CONT-MI). We tested both a sub-second and a supra-second inter-stimulus interval between the cues. Performance was recorded using a sensor-engineered glove and analyzed by measuring the temporal error and the interval reproduction accuracy index. PD patients were less accurate than healthy subjects in the supra-second time reproduction task when performing both continuation tasks (CONT-MI and CONT-EXE), whereas no difference was detected in the synchronization task or in any task involving a sub-second interval. Our findings suggest that PD patients exhibit a selective deficit in motor timing for sequential movements that are separated by a supra-second interval and that this deficit may be explained by a defect of motor planning. Further, we propose that difficulties in motor planning in PD are severe enough to also affect motor performance in the supra-second time reproduction task.
To assess the domain specificity of experience-dependent pitch representation, we evaluated the mismatch negativity (MMN) and discrimination judgments of English musicians, English nonmusicians, and native Chinese listeners for pitch contours presented in a non-speech context using a passive oddball paradigm. Stimuli consisted of homologues of the Mandarin high rising (T2) and high level (T1) tones, and a linear rising ramp (T2L). One condition involved a between-category contrast (T1/T2); the other, a within-category contrast (T2L/T2). Irrespective of condition, musicians and Chinese listeners showed larger MMN responses than nonmusicians, and Chinese listeners larger responses than musicians. Chinese listeners, however, were less accurate than nonnatives in overt discrimination of T2L and T2. Taken together, these findings suggest that experience-dependent effects on pitch contours are domain-general and not driven by linguistic categories. Yet specific differences in long-term experience in pitch processing between domains (music vs. language) may lead to gradations in cortical plasticity to pitch contours.
Experience-dependent plasticity; mismatch negativity (MMN); music; language; nonspeech stimuli; iterated rippled noise (IRN); pitch; lexical tone; Mandarin; speech perception