Task-specific focal dystonia is a movement disorder characterized by the loss of voluntary motor control in extensively trained movements. Musician's dystonia is a type of task-specific dystonia that is elicited in professional musicians during instrumental playing. The disorder has been associated with deficits in timing. In order to test the hypothesis that basic timing abilities are affected by musician's dystonia, we investigated a group of patients (N = 15) and a matched control group (N = 15) on a battery of sensory and sensorimotor synchronization tasks. Results did not show any deficits in auditory-motor processing for patients relative to controls. Both groups benefited from a pacing sequence that adapted to their timing (in a sensorimotor synchronization task at a stable tempo). In a purely perceptual task, both groups were better able to detect a misaligned metronome when it was late rather than early relative to a musical beat. Overall, the results suggest that basic timing abilities remain intact in patients with musician's dystonia. This supports the idea that musician's dystonia is a highly task-specific movement disorder in which patients are mostly impaired in tasks closely related to the demands of actually playing their instrument.
In our daily lives, auditory stream segregation allows us to differentiate concurrent sound sources and to make sense of the scene we are experiencing. However, a combination of segregation and the concurrent integration of auditory streams is necessary in order to analyze the relationship between streams and thus perceive a coherent auditory scene. The present functional magnetic resonance imaging study investigates the relative role and neural underpinnings of these listening strategies in multi-part musical stimuli. We compare a real human performance of a piano duet and a synthetic stimulus of the same duet in a prioritized integrative attention paradigm that required the simultaneous segregation and integration of auditory streams. In so doing, we manipulate the degree to which the attended part of the duet leads either structurally (attend melody vs. attend accompaniment) or temporally (asynchronies vs. no asynchronies between parts), and thus the relative contributions of integration and segregation used to assess the leader-follower relationship. We show that perceptually the relationship between parts is biased towards the conventional structural hierarchy in Western music, in which the melody generally dominates (leads) the accompaniment. Moreover, the assessment varies as a function of cognitive load, as shown through difficulty ratings, and of the interaction between the temporal and structural relationship factors. Neurally, we see that the temporal relationship between parts, as one important cue for stream segregation, elicited distinct neural activity in the planum temporale. By contrast, integration used when listening to both the temporally separated performance stimulus and the temporally fused synthetic stimulus resulted in activation of the intraparietal sulcus (IPS).
These results support the hypothesis that the planum temporale and IPS are key structures underlying the mechanisms of segregation and integration of auditory streams, respectively.
Experience with a sensorimotor task, such as practicing a piano piece, leads to strong coupling of sensory (visual or auditory) and motor cortices. Here we review behavioral and neurophysiological (M/EEG, TMS and fMRI) research exploring this topic using the brain of musicians as a model system. Our review focuses on a recent body of evidence suggesting that this form of coupling might have (at least) two cognitive functions. First, it leads to the generation of equivalent predictions (concerning both when and what event is more likely to occur) during both perception and production of music. Second, it underpins the common coding of perception and action that supports the integration of the motor output of multiple musicians in the context of joint musical tasks. Essentially, training-based coupling of perception and action might scaffold the human ability to represent complex (structured) actions and to entrain multiple agents—via reciprocal prediction and adaptation—in the pursuit of shared goals.
action-perception coupling; music; training; prediction; joint action
When people play music and dance together, they engage in forms of musical joint action that are often characterized by a shared sense of rhythmic timing and affective state (i.e., temporal and affective entrainment). In order to understand the origins of musical joint action, we propose a model in which entrainment is linked to dual mechanisms (motor resonance and action simulation), which in turn support musical behavior (imitation and complementary joint action). To illustrate this model, we consider two generic forms of joint musical behavior: chorusing and turn-taking. We explore how these common behaviors can be founded on entrainment capacities established early in human development, specifically during musical interactions between infants and their caregivers. If the roots of entrainment are found in early musical interactions that are practiced from childhood into adulthood, then we propose that the rehearsal of advanced musical ensemble skills can be considered a refined, mimetic form of temporal and affective entrainment whose evolution begins in infancy.
music; joint action; entrainment; ensemble skills; development; dance
Implicit learning (IL) occurs unconsciously and without intention. Perceptual fluency is the ease of processing elicited by previous exposure to a stimulus. It has been assumed that perceptual fluency is associated with IL. However, the role of perceptual fluency following IL has not been investigated in temporal pattern learning. Two experiments by Schultz, Stevens, Keller, and Tillmann demonstrated the IL of auditory temporal patterns using a serial reaction-time task and a generation task based on the process dissociation procedure. The generation task demonstrated that learning was implicit in both experiments via motor fluency, that is, the inability to suppress learned information. With the aim of disentangling conscious and unconscious processes, we analyze unreported recognition data associated with the Schultz et al. experiments using the sequence identification measurement model. The model assumes that perceptual fluency reflects unconscious processes and IL. For Experiment 1, the model indicated that conscious and unconscious processes contributed to recognition of temporal patterns, but that unconscious processes had a greater influence on recognition than conscious processes. In the model implementation of Experiment 2, there was equal contribution of conscious and unconscious processes in the recognition of temporal patterns. As Schultz et al. demonstrated IL in both experiments using a generation task, and the conditions reported here in Experiments 1 and 2 were identical, two explanations are offered for the discrepancy between the results based on the two tasks: (1) perceptual fluency may not be necessary to infer IL, or (2) conscious control over implicitly learned information may vary as a function of perceptual fluency and motor fluency.
The ability to evaluate spontaneity in human behavior is called upon in the esthetic appreciation of dramatic arts and music. The current study addresses the behavioral and brain mechanisms that mediate the perception of spontaneity in music performance. In a functional magnetic resonance imaging experiment, 22 jazz musicians listened to piano melodies and judged whether they were improvised or imitated. Judgment accuracy (mean 55%; range 44–65%), which was low but above chance, was positively correlated with musical experience and empathy. Analysis of listeners’ hemodynamic responses revealed that amygdala activation was stronger for improvisations than imitations. This activation correlated with the variability of performance timing and intensity (loudness) in the melodies, suggesting that the amygdala is involved in the detection of behavioral uncertainty. An analysis based on the subjective classification of melodies according to listeners’ judgments revealed that a network including the pre-supplementary motor area, frontal operculum, and anterior insula was most strongly activated for melodies judged to be improvised. This may reflect the increased engagement of an action simulation network when melodic predictions are rendered challenging due to perceived instability in the performer's actions. Taken together, our results suggest that, while certain brain regions in skilled individuals may be generally sensitive to objective cues to spontaneity in human behavior, the ability to evaluate spontaneity accurately depends upon whether an individual's action-related experience and perspective taking skills enable faithful internal simulation of the given behavior.
music; improvisation; spontaneity; uncertainty; amygdala; action simulation; human fMRI
Expert ensemble musicians produce exquisitely coordinated sounds, but rehearsal is typically required to do so. Ensemble coordination may thus be influenced by the degree to which individuals are familiar with each other's parts. Such familiarity may affect the ability to predict and synchronize with co-performers' actions. Internal models related to action simulation and anticipatory musical imagery may be affected by knowledge of (1) the musical structure of a co-performer's part (e.g., in terms of its rhythm and phrase structure) and/or (2) the co-performer's idiosyncratic playing style (e.g., expressive micro-timing variations). The current study investigated the effects of familiarity on interpersonal coordination in piano duos. Skilled pianists were required to play several duets with different partners. One condition included duets for which co-performers had previously practiced both parts, while another condition included duets for which each performer had practiced only their own part. Each piece was recorded six times without joint rehearsal or visual contact to examine the effects of increasing familiarity. Interpersonal coordination was quantified by measuring asynchronies between pianists' keystroke timing and the correlation of their body (head and torso) movements, which were recorded with a motion capture system. The results suggest that familiarity with a co-performer's part, in the absence of familiarity with their playing style, engenders predictions about micro-timing variations that are based instead upon one's own playing style, leading to a mismatch between predictions and actual events at short timescales. Predictions at longer timescales—that is, those related to musical measures and phrases, and reflected in head movements and body sway—are, however, facilitated by familiarity with the structure of a co-performer's part. These findings point to a dissociation between interpersonal coordination at the level of keystrokes and body movements.
interpersonal coordination; body movement; music; ensembles; sensorimotor synchronization
A constantly changing environment requires precise yet flexible timing of movements. Sensorimotor synchronization (SMS)—the temporal coordination of an action with events in a predictable external rhythm—is a fundamental human skill that contributes to optimal sensory-motor control in daily life. A large body of research related to SMS has focused on adaptive error correction mechanisms that support the synchronization of periodic movements (e.g., finger taps) with events in regular pacing sequences. The results of recent studies additionally highlight the importance of anticipatory mechanisms that support temporal prediction in the context of SMS with sequences that contain tempo changes. To investigate the role of adaptive and anticipatory mechanisms in SMS, we introduce ADAM: an ADaptation and Anticipation Model. ADAM combines reactive error correction processes (adaptation) with predictive temporal extrapolation processes (anticipation) inspired by the computational neuroscience concept of internal models. The combination of simulations and experimental manipulations based on ADAM creates a novel and promising approach for exploring adaptation and anticipation in SMS. The current paper describes the conceptual basis and architecture of ADAM.
sensorimotor synchronization; computational model; temporal adaptation; error correction; temporal anticipation; predictive internal models
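The interplay of reactive error correction (adaptation) and predictive extrapolation (anticipation) described in the abstract can be illustrated with a minimal simulation. This is an illustrative sketch, not the published ADAM specification: the parameter names (`alpha` for the phase-correction gain, `m` for the anticipation weight), the linear tempo extrapolation, and the Gaussian motor-noise term are all assumptions made for this example.

```python
import random

def simulate_tapping(intervals, alpha=0.5, m=0.5, noise_sd=0.0, seed=1):
    """Simulate tap times synchronizing with a pacing sequence.

    intervals : inter-onset intervals (ms) of the pacing sequence
    alpha     : phase-correction gain (adaptation: reactive error correction)
    m         : anticipation weight (0 = pure tracking of the last interval,
                1 = full linear extrapolation of the tempo trajectory)
    """
    rng = random.Random(seed)
    onsets = [0.0]
    for ioi in intervals:
        onsets.append(onsets[-1] + ioi)

    taps = [0.0]                  # assume the first tap falls on the first onset
    asynchronies = []
    for n in range(1, len(onsets)):
        asyn = taps[-1] - onsets[n - 1]        # tap time minus pacing onset
        asynchronies.append(asyn)
        known = intervals[:n - 1]              # intervals already heard
        if len(known) >= 2:
            extrapolated = 2 * known[-1] - known[-2]   # linear tempo prediction
            planned = m * extrapolated + (1 - m) * known[-1]
        elif known:
            planned = known[-1]
        else:
            planned = intervals[0]             # assume initial tempo is known
        # Adaptation: subtract a fraction of the last asynchrony.
        taps.append(taps[-1] + planned - alpha * asyn + rng.gauss(0, noise_sd))
    return taps, asynchronies
```

With a constant tempo, phase correction alone keeps asynchronies near zero; with a gradual tempo change, increasing `m` reduces the systematic lag that a purely reactive tapper shows.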
Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. Results of a parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network of cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.
temporal prediction; sensorimotor synchronization; medial prefrontal cortex; motor timing; dual-task interference
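The cross-correlational index of prediction mentioned in the abstract can be sketched as follows. One common operationalization compares the correlation of inter-tap intervals with pacing intervals at lag 0 (tap intervals follow the current pacing interval, consistent with prediction of tempo changes) against lag 1 (tap intervals echo the previous pacing interval, consistent with reactive tracking). The ratio form of the index and the toy data below are assumptions for this sketch, not necessarily the exact measure used in the study.

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def prediction_index(itis, iois):
    """Lag-0 / lag-1 cross-correlation ratio.

    itis : inter-tap intervals produced by the tapper
    iois : inter-onset intervals of the pacing sequence
    Values > 1 suggest prediction; values < 1 suggest tracking.
    """
    r0 = pearson(itis[1:], iois[1:])    # lag 0: current pacing interval
    r1 = pearson(itis[1:], iois[:-1])   # lag 1: previous pacing interval
    return r0 / r1
```

As a toy check, a "predictor" who reproduces each pacing interval as it occurs scores above 1, while a "tracker" who echoes the previous interval scores below 1 on a smoothly modulated tempo sequence.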
This study investigated cognitive and emotional effects of syncopation, a feature of musical rhythm that produces expectancy violations in the listener by emphasizing weak temporal locations and de-emphasizing strong locations in metric structure. Stimuli consisting of pairs of unsyncopated and syncopated musical phrases were rated by 35 musicians for perceived complexity, enjoyment, happiness, arousal, and tension. Overall, syncopated patterns were more enjoyed, and rated as happier, than unsyncopated patterns, while differences in perceived tension were unreliable. Complexity and arousal ratings were asymmetric by serial order, increasing when patterns moved from unsyncopated to syncopated, but not significantly changing when order was reversed. These results suggest that syncopation influences emotional valence (positively), and that while syncopated rhythms are objectively more complex than unsyncopated rhythms, this difference is more salient when complexity increases than when it decreases. It is proposed that composers and improvisers may exploit this asymmetry in perceived complexity by favoring formal structures that progress from rhythmically simple to complex, as can be observed in the initial sections of musical forms such as theme and variations.
syncopation; serial asymmetry; affective response; cognition; rhythm; emotion; musical form
The influence of integrated goal representations on multilevel coordination stability was investigated in a task that required finger tapping in antiphase with metronomic tone sequences (inter-agent coordination) while alternating between the two hands (intra-personal coordination). The maximum rate at which musicians could perform this task was measured when taps did or did not trigger feedback tones. Tones produced by the two hands (very low, low, medium, high, very high) could be the same as, or different from, one another and the (medium-pitched) metronome tones. The benefits of feedback tones were greatest when they were close in pitch to the metronome and the left hand triggered low tones while the right hand triggered high tones. Thus, multilevel coordination was facilitated by tones that were easy to integrate with, but perceptually distinct from, the metronome, and by compatibility of movement patterns and feedback pitches.
motor coordination; auditory feedback; perceptual motor processes; finger tapping